Matter uses IPv6 for its operational communications, leveraging both IPv6 Unicast and Multicast addressing to reach its Nodes and Groups, respectively.

 

A fundamental aspect of Matter is that it works not only on high-throughput network mediums such as Wi-Fi and Ethernet, but also on low-power, low-bandwidth ones such as Thread. If all Multicast packets from Wi-Fi were bridged into Thread, we would burden the network and potentially flood it. Thread's goal is to enable IPv6 in low-power, low-latency mesh networking, not high-bandwidth data transfer. While Thread's ICMPv6 pings in a local network typically stay under a few tens of milliseconds RTT, its total bandwidth is limited to 250 kbps at the IEEE 802.15.4 PHY. With packet retransmissions and protocol overhead, the typical maximum usable bandwidth is around 125 kbps. In other words, orders of magnitude less than Wi-Fi.

 

Frames on the IEEE 802.15.4 PHY are at most 127 bytes, but the largest (and typical) maximum transmission unit (MTU) of IPv6 packets in Thread is 1280 bytes. Thus IPv6 packets often need to be split across several PHY frames. This fragmentation process is defined by RFC 4944.
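As a rough sketch of the arithmetic involved, the snippet below estimates how many 802.15.4 frames a single IPv6 datagram occupies. The RFC 4944 fragment header sizes (4 bytes for the first fragment, 5 bytes for subsequent ones) come from the RFC itself; the 23-byte MAC overhead is an illustrative assumption, as the real figure varies with addressing modes and link-layer security, and 6LoWPAN header compression is ignored here:

    PHY_FRAME = 127     # maximum IEEE 802.15.4 PHY payload, in bytes
    MAC_OVERHEAD = 23   # assumed MAC header + FCS; varies in practice
    FRAG1_HDR = 4       # RFC 4944 header on the first fragment
    FRAGN_HDR = 5       # RFC 4944 header on subsequent fragments

    def fragment_count(datagram_size: int) -> int:
        """Estimate the number of PHY frames one IPv6 datagram needs."""
        room = PHY_FRAME - MAC_OVERHEAD
        if datagram_size <= room:
            return 1  # fits in a single frame, no fragmentation needed
        # Fragment payloads are carried in multiples of 8 octets (RFC 4944).
        first = (room - FRAG1_HDR) & ~7
        rest = (room - FRAGN_HDR) & ~7
        return 1 + -(-(datagram_size - first) // rest)  # ceiling division

    print(fragment_count(1280))  # a full-MTU datagram: roughly 14 frames

Under these assumptions, a full 1280-byte datagram costs around 14 radio transmissions, which is one more reason unnecessary Multicast traffic is expensive on Thread.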

 

Group messages are also important, as they allow simultaneous control of several Matter Nodes through Multicast. In order to route this traffic into the Thread network, both Matter and Thread implement the Unicast-Prefix-based IPv6 Multicast Addressing scheme defined by RFC 3306.

 

This method selects the destination Nodes of a Multicast packet based on their shared IPv6 Unicast prefix, which is embedded in the Multicast address itself.

 

For example, a Matter Multicast address might look like this:

 

FF35:0040:FD<Fabric ID>00:<Group ID>

Table 1 details how this address is constructed:

Bits     Value       Description
12 bits  0xFF3       IPv6 Multicast prefix (0xFF) plus flags (P = 1, T = 1)
4 bits   0x5         Scope: site-local
8 bits   0x00        Reserved
8 bits   0x40        Indicates a 64-bit prefix length
8 bits   0xFD        Designates a ULA prefix
64 bits  Fabric ID   The Fabric's 64-bit identifier
8 bits   0x00        Fixed zero byte
16 bits  Group ID    The Group's 16-bit identifier
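
To make the layout concrete, here is a minimal Python sketch that assembles such an address from a Fabric ID and a Group ID, and then recovers the embedded Unicast prefix as RFC 3306 intends. The Fabric ID and Group ID values are made up purely for illustration:

    import ipaddress

    def matter_group_address(fabric_id: int, group_id: int) -> ipaddress.IPv6Address:
        """Assemble FF35:0040:FD<Fabric ID>00:<Group ID> as one 128-bit value."""
        bits = (
            (0xFF35 << 112)      # 0xFF3 (Multicast prefix + flags) and scope 0x5
            | (0x0040 << 96)     # reserved byte, then prefix length 0x40 (64 bits)
            | (0xFD << 88)       # ULA prefix designator
            | (fabric_id << 24)  # 64-bit Fabric ID
            | group_id           # fixed zero byte, then the 16-bit Group ID
        )
        return ipaddress.IPv6Address(bits)

    def embedded_unicast_prefix(addr: ipaddress.IPv6Address) -> ipaddress.IPv6Network:
        """Recover the Unicast prefix embedded per RFC 3306."""
        raw = int(addr)
        plen = (raw >> 96) & 0xFF               # the 0x40 byte: a 64-bit prefix
        prefix = (raw >> 32) & ((1 << 64) - 1)  # the 64-bit prefix field
        return ipaddress.IPv6Network((ipaddress.IPv6Address(prefix << 64), plen))

    # Hypothetical Fabric ID and Group ID, for illustration only:
    addr = matter_group_address(0x2906C908D115D362, 0x0002)
    print(addr)                           # ff35:40:fd29:6c9:8d1:15d3:6200:2
    print(embedded_unicast_prefix(addr))  # fd29:6c9:8d1:15d3::/64

Note that the recovered /64 consists of 0xFD followed by the upper 56 bits of the Fabric ID, since the final Fabric ID byte spills into RFC 3306's 32-bit group-identifier field. That shared Unicast prefix is what ties the Multicast address back to the Fabric, enabling this traffic to be routed into the Thread network as described above.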

 
