

IT 17 077

Degree project 30 credits, October 2017

OpenThread vs. Contiki IPv6:

An Experimental Evaluation

Christoph Ellmer

Department of Information Technology


Faculty of Science and Technology, UTH unit

Visiting address:
Ångströmlaboratoriet, Lägerhyddsvägen 1, Hus 4, Plan 0

Postal address:
Box 536, 751 21 Uppsala

Telephone:
018 – 471 30 03

Fax:
018 – 471 30 00

Website:
http://www.teknat.uu.se/student

Abstract

OpenThread vs. Contiki IPv6: An Experimental Evaluation

Christoph Ellmer

Low-power wireless networks based on the IEEE 802.15.4 standard are widely used for emerging Internet of Things applications. So far, however, there is no standard network stack for such networks. Different standardization bodies and industry alliances are driving their own standards, which hinders interoperability of IoT devices and hence slows down growth in this sector.

Furthermore, it is little known how the different stacks compare in performance. This master’s thesis contributes by conducting a comparison between two network stacks for such low-power networks: the IPv6 network stack of the Contiki operating system and OpenThread. Contiki is a well known operating system tailored for low-power networks and its IPv6 network stack implements various IETF RFCs.

OpenThread is an open source implementation of Thread. Thread focuses on low-power networks in the home environment and was recently published by the Thread Group, an alliance of companies active in the field.

In this master's thesis we port OpenThread to Contiki and compare both stacks with respect to latency, packet loss, and implementation complexity. For this purpose we conduct experiments focusing on point-to-point traffic between devices in a low-power wireless network. The experiments are performed in a testbed of IoT devices, which is installed in an office environment. We find that neither stack outperforms the other.

IT 17 077

Examiner: Arnold Neville Pears
Subject reviewer: Thiemo Voigt

Supervisors: Nicolas Tsiftes and Joakim Eriksson


Acknowledgments

I would like to show my gratitude to my supervisors at RISE SICS, Nicolas Tsiftes and Joakim Eriksson, for their guidance and feedback throughout the last months.

Furthermore, I would like to thank Thiemo Voigt and the Networked Embedded Systems Group at RISE SICS for the friendly working environment.

Special thanks go to Derrick Alabi for the good times and discussions throughout the thesis and the master's program.


Contents

1 Introduction
  1.1 Problem Statement
  1.2 Limitations
  1.3 Contributions
  1.4 Thesis Structure

2 Background
  2.1 The Contiki Operating System
  2.2 Low-Power Wireless Networks
  2.3 IEEE 802.15.4
  2.4 6LoWPAN
    2.4.1 Header Compression
    2.4.2 Packet Fragmentation
    2.4.3 Link Layer Forwarding
  2.5 Contiki IPv6 Stack
    2.5.1 RPL
    2.5.2 Application Layer
  2.6 Thread
    2.6.1 Addressing
    2.6.2 Routing Algorithm
    2.6.3 Encryption, Commissioning And Joining
  2.7 Related Work
  2.8 Alternatives For Low Power Networks

3 Porting OpenThread to Contiki
  3.1 Platforms
  3.2 Contiki's and OpenThread's Platform Abstraction
    3.2.1 Radio module
    3.2.2 Random Module
    3.2.3 Alarm
    3.2.4 Universal Asynchronous Receiver and Transmitter (UART)
    3.2.5 Logging and Misc
    3.2.6 General
    3.2.7 Unimplemented Modules

4 Evaluation
  4.1 Experimental Setup
  4.2 Measuring Latencies And Packet Loss
  4.3 Round-Trip Times
  4.4 Packet loss
  4.5 Lines of Code
  4.6 Firmware Sizes

5 Conclusions and Future Work


Chapter 1

Introduction

The initial research of wireless sensor networks was motivated by military considerations; most notably the development of surveillance systems was of interest [33]. Nowadays wireless sensor networks find use in a wide variety of applications like environmental or structural monitoring, industrial processes, or health related systems. The number of devices, also called nodes, in such applications ranges from a few to several thousands. The nodes typically perform only a limited set of tasks like sensing and controlling the surrounding environment. Therefore the nodes are often low-cost embedded devices with constrained computational resources, equipped with a radio transceiver. They are optimized for low energy consumption and can run on batteries for years, which makes them suitable to be placed at remote or non-fixed positions.

Due to the nodes' low price tag and low energy consumption, low-power wireless networks play an important role in the emerging Internet of Things (IoT). Examples of IoT applications include smart solutions like power monitoring, heating, and lighting control for cities and buildings. It is expected that the IoT grows from approximately 6 billion connected devices in 2016 to 18 billion devices in 2022 [2]. In this huge market the development of industry standards for communication in low-power wireless networks is essential; a common, well-tested, and established standard speeds up application development and enables interoperability between nodes running different operating systems. Different standardization bodies and industry alliances, however, try to establish their own standards.

With the advent and spread of IPv6 in recent years there is a trend to directly implement this protocol for low-power wireless networks. Contiki is a well-known operating system tailored for such networks, which runs on a variety of platforms [16][11]. Following this trend, its current communication stack implements a variety of open standards¹ for low-power wireless IPv6 networking. The performance of Contiki's low-power IPv6 stack has been thoroughly tested and it has been awarded the IPv6 Ready Silver logo [22][24][26].

In recent years an industry alliance, the Thread Group, developed a new IPv6-based network stack called Thread [28]. It is specifically designed for smart IoT applications in home environments, and tries to overcome the fragmentation in this market. It is partially based on open standards, but until recently only members of the Thread Group had access to the complete Thread specification. An open source implementation, however, is available: OpenThread [42].

¹ IETF RFCs: Internet Engineering Task Force Requests For Comments


1.1 Problem Statement

In general there is little knowledge on how different communication standards for low-power wireless networks compare in performance. The focus of this thesis is to conduct such a comparison between OpenThread and the Contiki operating system’s current low-power IPv6 stack.

In order to establish similar prerequisites for the performance evaluation, I port OpenThread to the Contiki operating system. I assess the latency and the packet loss rate of each stack, which indicate the capability and reliability of the network, respectively. Since I only plan to evaluate the network stacks in isolation, I focus on point-to-point traffic between devices within the network. The experiments are conducted in a testbed of IoT devices which is installed in an office environment. Additionally, I study the firmware sizes to assess whether the stacks are suitable for the same type of physical devices.

1.2 Limitations

The goal of the thesis is to enable a comparison of OpenThread and Contiki's IPv6 stack based on similar assumptions. Therefore, OpenThread is ported to run on top of the Contiki operating system. It is, however, not the primary goal of this thesis to perfectly integrate OpenThread in Contiki; i.e., not all OpenThread features are incorporated in this port and it is not intended to include the port in the official Contiki repository. This introduces potential inaccuracies into the performance comparison, since Contiki's low-power stack is more tightly integrated within the operating system than the OpenThread port.

Furthermore, it is not intended to include other stacks for low-power wireless networks, like OpenWSN [55], in the comparison.

1.3 Contributions

This master's thesis contributes to current research by conducting the first performance comparison between OpenThread and Contiki's low-power IPv6 stack. To the best of our knowledge, this is also the first comparison of a Thread-based network stack and a network stack that is solely based on IETF standards. The thesis shows that neither of the stacks outperforms the other.

1.4 Thesis Structure

The remaining part of the thesis is organized as follows: Chapter 2 introduces the Contiki operating system and describes both Thread and the standards used in Contiki's low-power IPv6 network stack. Chapter 3 describes the porting of OpenThread to Contiki. Chapter 4 presents the experimental setup and discusses the results of the conducted latency and packet loss measurements. It also compares the implementation complexity of both stacks. Finally, Chapter 5 concludes with a summary and outlines future work.


Chapter 2

Background

This chapter describes the characteristics and the evolution of low-power wireless networks.

It then focuses on the two network stacks of interest for this thesis, Thread and the Contiki operating system's low-power IPv6 stack, and discusses the similarities and differences between them. The first section briefly presents the Contiki operating system.

2.1 The Contiki Operating System

Contiki is a portable operating system designed for resource-constrained IoT devices [11]. It includes a small kernel, reusable application modules and hardware drivers for a variety of supported platforms.

Applications running on top of Contiki are implemented using processes which are scheduled and executed by Contiki's kernel. The kernel uses a cooperative scheduling strategy, i.e., the processes either run to completion or return control to the scheduler on a voluntary basis. A process can be preempted by interrupts; the preempted process, however, resumes directly when the interrupt handler returns. In order to obtain a responsive system the processes need to return control frequently.

The processes are implemented as Protothreads [17]. They share the stack with Contiki’s kernel, which leads to a very small memory overhead and fast context switches. They were specifically designed to facilitate an event-driven execution model and provide a variety of tools to communicate with other processes via events. Synchronous events are immediately delivered to the target process, and the sending process continues when the receiving process returns control. Asynchronous events are inserted in an event queue and dispatched by the kernel.

Processes often return control to the scheduler by blocking while waiting for an event.
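As a concrete illustration, the following is a minimal sketch of a Contiki process built on Protothreads. It uses the standard process and event-timer macros and yields to the scheduler while waiting for its timer event.

    /* Minimal sketch of a Contiki process built on Protothreads. */
    #include "contiki.h"
    #include "sys/etimer.h"
    #include <stdio.h>

    PROCESS(hello_process, "Hello process");
    AUTOSTART_PROCESSES(&hello_process);

    PROCESS_THREAD(hello_process, ev, data)
    {
      /* Local state must be static: Protothreads do not preserve the stack. */
      static struct etimer timer;

      PROCESS_BEGIN();

      etimer_set(&timer, CLOCK_SECOND);
      while(1) {
        /* Return control to the scheduler until the timer event arrives. */
        PROCESS_WAIT_EVENT_UNTIL(etimer_expired(&timer));
        printf("hello\n");
        etimer_reset(&timer);
      }

      PROCESS_END();
    }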

There are three wireless network stacks included in Contiki: Rime [20], an IPv4 stack, and an IPv6 stack. The IPv4 stack, uIP, was the first IP-based network stack for low-power wireless networks [15]. It uses on-demand routing similar to the Ad-hoc On-demand Distance Vector protocol [13]: a node broadcasts the desired destination address and neighboring nodes then advertise if they have a route to the destination available. The Rime network stack uses a similar routing strategy and is more lightweight compared to the IPv4 stack. Another on-demand routing protocol is LOADng¹, which is still being actively developed [9].

¹ The Lightweight On-demand Ad-hoc Distance-vector Routing Protocol – Next Generation


For many platforms, however, the default network stack is the IPv6 stack, which is used for the comparison with OpenThread. It uses a different routing strategy than the other two stacks and is described in more detail in the next sections.

2.2 Low-Power Wireless Networks

Before the advent of the IoT, most applications based on low-power wireless networks were deployed at remote locations, which might be difficult to reach. In order to minimize the maintenance cost and to maximize the expected lifetime of the network, energy efficiency has been a main objective of research. The energy consumption of the nodes is dominated by the radio idle-listening for incoming packets. This led to the development of duty-cycled link layer protocols which strive to minimize idle listening. Examples of such protocols are B-MAC [46], X-MAC [8] and ContikiMAC [21].

Even though Contiki supported IPv4 from the beginning, the Internet Protocol was often considered too complex and not appropriate for the characteristics of low-power wireless networks, which are primarily given by constrained access to energy and constrained computational resources. The active research in the field produced many lightweight ad-hoc network protocols like Contiki's Rime stack. Even industry-driven standardization attempts like ZigBee [4] and Z-Wave [5] were not based on the Internet Protocol. The lack of a common network layer with the Internet, however, introduces several issues like the need for an application-layer gateway that translates between the two protocols. This is often difficult and packets might be dropped if no mapping exists. Furthermore such mappings are often stateful, which implies that only one gateway can be used to route the traffic between the sensor network and adjacent networks.

This leads to a single point of failure in the application.

In 2008, Hui and Culler demonstrated the feasibility of the Internet Protocol version 6 for low-power wireless networks [31]. IPv6 has several advantages over IPv4, like Stateless Address Autoconfiguration (SLAAC) [47] and a huge address space. This way every node in a low-power network is directly addressable using a widespread technology, which simplifies development and deployment of IoT applications. Since then several working groups within the Internet Engineering Task Force (IETF) have worked on IPv6-related standards for low-power wireless networks, amongst others the IPv6 over Low power Wireless Personal Area Networks (6LoWPAN) group, the Routing Over Low power and Lossy networks (ROLL) group, and the Constrained RESTful Environments (CoRE) group.

Low-power wireless networks often use the widespread IEEE 802.15.4 wireless communication technology. The 6LoWPAN group standardizes the transmission of IPv6 packets using IEEE 802.15.4 radios by means of the 6LoWPAN adaptation layer [41]. This layer compresses the comparatively large IPv6 and UDP headers and handles packet fragmentation and reassembly.

It is part of Contiki’s IPv6 stack as well as of Thread, and is explained in more detail in section 2.4.

The ROLL group focuses on routing solutions for lossy, low-power networks, and released the RPL routing protocol in RFC6550 [51]. RPL is a major component in Contiki’s IPv6 stack and is discussed in section 2.5.1. The group also standardized the Trickle algorithm, a well-known mechanism to disseminate information in a network in an energy efficient manner [38][37].

Changes or inconsistencies in the network topology cause the nodes to communicate at a high rate, which then decreases exponentially until only a few routing protocol packets per hour are sent.


The Trickle algorithm is used in Thread and Contiki’s IPv6 stack to spread link quality and routing advertisements throughout the network.

The CoRE group focuses on the application layer of the network stack and specified the Constrained Application Protocol (CoAP) in RFC7252 [49]. CoAP provides functionality similar to the HTTP application layer. It is based on UDP instead of TCP to account for the characteristics of lossy low-power networks. Both stacks, Contiki (see the er-coap application) and OpenThread, provide a CoAP implementation.

As is apparent from the previous paragraphs, Thread and Contiki’s stack partially utilize the same technologies and protocols. The following sections focus first on the common parts, before discussing each stack in more detail.

2.3 IEEE 802.15.4

The IEEE 802.15.4 standard defines the physical layer and the medium access control layer for low-power and low-data-rate wireless personal area networks (WPANs). It either operates in a region-dependent sub-GHz frequency band or in the worldwide unlicensed 2400 MHz to 2483.5 MHz frequency range. The latter uses offset quadrature phase-shift keying (OQPSK) and offers a bit rate of 250 kbit/s. The standard offers two different addressing modes: either the unique extended 64-bit address of the low-power network interface is used or a 16-bit short address, which is obtained after association to the personal area network (PAN). The 16-bit address is unique within the PAN and its usage is preferred according to the standard.

Several revisions of the standard exist; Thread is based on the version published in 2006 [1] and uses the 2.4 GHz band exclusively. Contiki's IPv6 stack is designed in a modular way and does not commit itself to a specific version of the IEEE 802.15.4 standard. Furthermore, Contiki's IPv6 stack runs on top of other physical technologies like wired power-line communication (PLC) and Ethernet.

IEEE 802.15.4 compliant devices have an energy detection mechanism and provide both the RSSI (received signal strength indication) of received frames and an LQI (link quality indication) to the upper layers. This information may be used to establish reliable routes in the network. Transmitters perform clear channel assessments (CCA) and implement a CSMA-CA (carrier sense multiple access with collision avoidance) algorithm to minimize packet losses due to collisions.

The 802.15.4 standard defines a slotted and an unslotted mode of operation. In the unslotted mode devices may transmit anytime, while in the slotted mode the channel access is organized by means of a superframe. A coordinator defines the superframe, which is partitioned into timeslots within which the transmission of a frame has to be completed. Besides normal timeslots there are also contentionless guaranteed timeslots (GTS). Devices with special bandwidth needs can exclusively use a GTS without utilizing the CSMA-CA algorithm. Thread operates solely in the unslotted mode, whereas Contiki's IPv6 stack can be configured to either of the two modes.

Contiki’s slotted mode implements the IEEE 802.15.4e amendment of the standard which also uses a channel hopping strategy to increase robustness against multi-path fading [23][32].

The devices in an IEEE 802.15.4-based network form either a peer-to-peer topology or a star topology. In the star topology the end-devices communicate only via a full function device (FFD), which acts as the coordinator. End-devices may either be FFDs or so-called reduced function devices (RFDs). The coordinator buffers messages for the end-devices and does not send them directly; an end-device needs to request the data explicitly before it is transmitted by the coordinator. In the peer-to-peer topology all devices are FFDs, and peers within range can exchange messages directly.

Figure 2.1: 802.15.4 frame structure.
(a) 802.15.4 MAC frame format (MAC header, MAC payload, MAC footer):
    Frame Control (2 B) | Sequence Number (1 B) | Addressing Fields (0-20 B) |
    Auxiliary Security Header (0-14 B) | Frame Payload (variable) | FCS (2 B)
(b) 802.15.4 Frame Control field (bit positions):
    Frame Type (0:2) | Security Enabled (3) | Frame Pending (4) | ACK Request (5) |
    PAN ID Compression (6) | Reserved (7:9) | Dest. Addressing Mode (10:11) |
    Frame Version (12:13) | Source Addressing Mode (14:15)

There is an upper limit of 127 bytes on the frame size. Figure 2.1 shows the general MAC layer frame format. Depending on the addressing and security options in the header, the actual payload carried in the frames may be as low as 88 bytes. The first three bits in the frame control field determine the frame type. The four different frame types are beacon, data, acknowledgment, and MAC command frames. Beacon frames are required for network discovery and are used to define the superframes as discussed above. Data frames carry the application-layer payload. The transmitting device usually requests an acknowledgment by setting bit five in the frame control field. This enables link layer retransmissions, which contribute to a reliable connection between adjacent devices. In the star topology RFDs poll for data by issuing a MAC command; Thread utilizes this mechanism as explained in more detail in section 2.6.

2.4 6LoWPAN

In contrast to IPv4, IPv6 routers do not fragment and reassemble packets on the network layer in case they are too big for the underlying layers. Instead an ICMPv6² type 2 message, Packet Too Big, is returned to the sender and the packet is dropped. IPv6, however, guarantees that packets with a size of up to 1280 B are always deliverable, which imposes constraints on the underlying link- and physical-layer technologies. But, as discussed in Section 2.3, the IEEE 802.15.4 standard only allows frames with a size of up to 127 B. To overcome this limitation the 6LoWPAN group designed the 6LoWPAN adaptation layer between the IPv6 network layer and the IEEE 802.15.4 link and physical layers.

² Internet Control Message Protocol version 6

Figure 2.2: The general format of a 6LoWPAN packet; it consists of a header stack (header type 1 through header type N) followed by the encapsulated payload.

Figure 2.3: 6LoWPAN fragmentation headers. The first fragment (FRAG1) carries the dispatch pattern 11000, the datagram size, and the datagram tag; subsequent fragments (FRAGN) carry the dispatch pattern 11100, the datagram size, the datagram tag, and the datagram offset.

6LoWPAN encapsulates an IPv6 packet in one or more 6LoWPAN packets; Figure 2.2 shows the general format of such a packet. 6LoWPAN provides several services, the most important being the fragmentation and reassembly of large IPv6 packets, the compression of IPv6 and UDP headers, and the link layer forwarding of 6LoWPAN packets. Each applied service has its own header in the header stack. Like in IPv6, the order within the header stack is fixed.

2.4.1 Header Compression

6LoWPAN specifies two header compression algorithms, HC1 for the IPv6 header and HC2 for the UDP, TCP, or ICMP header. RFC6282, an extension to 6LoWPAN, additionally defines the improved header compression algorithm (IPHC) and the next header compression (NHC) [52]. An important aspect in the header compression is the possibility to omit the source and/or destination IPv6 address. They might be reconstructable from either the 64-bit extended address or the 16-bit short address by using context information shared within the whole 6LoWPAN. In the best case 6LoWPAN reduces the size of an IPv6 and UDP header from 48 bytes to 6 bytes.

2.4.2 Packet Fragmentation

If, despite header compression, the IPv6 packet cannot be transmitted in one IEEE 802.15.4 frame, it is fragmented at the sending node. The fragmentation header identifies how the packet is reassembled at the receiving node, see Figure 2.3. The datagram tag identifies all fragments belonging to the same IPv6 packet and the offset determines the order of the fragments. The offset is omitted in the first fragment to save two bytes.

2.4.3 Link Layer Forwarding

In case the receiving and the sending node are multiple hops apart 6LoWPAN specifies a forwarding mechanism by means of the so-called mesh header. When using this mechanism the fragments of a larger IPv6 packet are only reassembled at the final destination node and not at every hop. This means that the fragments may take different routes within the network.


For this to work, forwarding nodes need to maintain a link layer routing table to determine the appropriate next hop. This link layer forwarding is often referred to as a mesh under design. Another approach is the route over design, which uses no mesh header. Here the network layer routing protocol needs to ensure that the destination of the 6LoWPAN frame is reachable within one hop.

Figure 2.4: Contiki's NETSTACK and its default IPv6 configuration for the zoul platform⁵:
NETWORK: adaptation layer (sicslowpan_driver)
LLSEC: link layer security (nullsec_driver)
MAC/FRAMER: MAC layer (csma_driver, framer_802154)
RDC: radio duty cycling (contikimac_driver)
RADIO: radio driver (cc2538_rf_driver)

The following sections describe Contiki's IPv6 stack and Thread and how they utilize IEEE 802.15.4 and 6LoWPAN.

2.5 Contiki IPv6 Stack

Contiki's uIPv6 module is an IPv6 implementation and also provides the transport layer protocols UDP and TCP. Contiki encapsulates the lower layers of the network stack in the so-called NETSTACK, which allows a flexible configuration of the physical, link, and adaptation layers. Figure 2.4 illustrates such a possible configuration.

By default Contiki’s IPv6 stack uses the CSMA-CA algorithm as specified by IEEE 802.15.4 on the MAC layer, i.e. the node senses if the radio channel is idle before transmitting a radio frame. An additional layer between the MAC and the physical layer allows the utilization of a radio duty cycling protocol like ContikiMAC or X-MAC. This reduces the idle-listening of the radio and thus reduces the energy consumption of the node. In this matter Contiki goes beyond the IEEE 802.15.4 specification, which does not specify these duty cycling protocols.

Newer versions of the IEEE 802.15.4 standard, however, incorporate duty cycling by means of the coordinated sampled listening (CSL) and the time synchronized channel hopping (TSCH) protocols. The latter is also supported in Contiki³ [23].

The NETSTACK also supports IEEE 802.15.4 link layer security⁴, which uses a network-wide pre-shared key. Contiki's 6LoWPAN module constitutes the top layer of the NETSTACK.

It supports header compression as well as packet fragmentation and reassembly. It does not, however, implement link layer forwarding, since it chooses a route over design and the nodes act as layer 3 routers for the low-power network.

³ Since it is based on the IEEE 802.15.4 slotted mode it does not use the radio duty cycling layer of the NETSTACK. See core/net/mac/tsch/README.md for more information.

⁴ As described in core/net/llsec/noncoresec/README.md.

⁵ The platform used for the measurements, see chapter 3.
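To make the configuration in Figure 2.4 concrete, the following sketch shows how such a stack could be selected at compile time with Contiki's NETSTACK_CONF_* macros in a project configuration header. The driver names match Figure 2.4, but which drivers are available depends on the platform and Contiki version.

    /* Sketch of a project-conf.h selecting the NETSTACK layers of Figure 2.4.
     * The macro names follow Contiki's NETSTACK configuration convention; the
     * available drivers depend on the platform and Contiki version. */
    #ifndef PROJECT_CONF_H_
    #define PROJECT_CONF_H_

    #define NETSTACK_CONF_NETWORK  sicslowpan_driver   /* 6LoWPAN adaptation layer */
    #define NETSTACK_CONF_LLSEC    nullsec_driver      /* no link layer security */
    #define NETSTACK_CONF_MAC      csma_driver         /* CSMA-CA MAC layer */
    #define NETSTACK_CONF_RDC      contikimac_driver   /* radio duty cycling */
    #define NETSTACK_CONF_FRAMER   framer_802154       /* IEEE 802.15.4 framing */
    #define NETSTACK_CONF_RADIO    cc2538_rf_driver    /* zoul/CC2538 radio driver */

    #endif /* PROJECT_CONF_H_ */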


2.5.1 RPL

Contiki implements the IPv6 Routing Protocol for Low-Power and Lossy Networks (RPL), which is specified by the ROLL group [51]. In RPL a destination-oriented directed acyclic graph (DODAG) is the basis for all routing decisions. There is one root, the destination, in the DODAG, which can assume the role of a border router and provide connectivity to the Internet or another IPv6-based network. In the tree-like structure the nodes select one so-called preferred parent and may act as such a parent for multiple children. Apart from the memory of the devices there is no constraint on the number of nodes in a DODAG.

The topology of the DODAG naturally supports multipoint-to-point (MP2P) traffic from the nodes towards the root. This kind of traffic typically occurs in applications that collect sensor readings in the deployment area and report the data to some server. But RPL also supports point-to-multipoint (P2MP, querying a node from outside the network) and point-to-point (P2P, communication between two nodes in the low power network) traffic. An extension to RPL allows the discovery of additional point-to-point routes by means of a subsidiary, temporary DODAG, which is rooted at the sending node [27]. Contiki, however, does not support this extension.

The nodes in a DODAG maintain a property called rank, which indicates the node's position with respect to the root. The rank strictly increases in the direction away from the root, i.e., children always have a higher rank than their parents. The exact calculation of the rank is determined by an objective function, which has to obey the specifications in a separate RFC [7]. By default, Contiki's objective function uses the expected transmission count (ETX) [12].
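To illustrate how the rank relates parent and child, the following is a simplified, purely illustrative sketch of an ETX-based rank computation in the spirit of Contiki's default objective function. The real implementation additionally handles metric containers, hysteresis, and overflow, and the scaling constants below are assumptions based on common Contiki defaults rather than values taken from this thesis.

    /* Illustrative sketch only: ETX-based rank increase, roughly in the spirit
     * of Contiki's default objective function. RPL_MIN_HOPRANKINC = 256 and an
     * ETX fixed-point divisor of 128 are assumed defaults, not thesis values. */
    #include <stdint.h>

    #define RPL_MIN_HOPRANKINC 256   /* rank units corresponding to one ideal hop */
    #define ETX_DIVISOR        128   /* ETX is stored as a fixed-point value */

    /* A child's rank is its parent's rank plus an increment that grows with the
     * expected transmission count (ETX) of the link to that parent. */
    static uint16_t rank_via_parent(uint16_t parent_rank, uint16_t link_etx_fp)
    {
      uint32_t increment = ((uint32_t)link_etx_fp * RPL_MIN_HOPRANKINC) / ETX_DIVISOR;
      return (uint16_t)(parent_rank + increment);
    }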

According to the standard there may be several instances of RPL in the same low power network and different instances may use different objective functions to provide optimized routes for different application goals. Each instance may furthermore consist of several DODAGs, which all need to use the same objective function.

The nodes construct and maintain a DODAG by means of ICMPv6 control messages: the standard defines DODAG Information Solicitation (DIS) messages, DODAG Information Object (DIO) messages, Destination Advertisement Object (DAO) messages, and DAO acknowledgments. As in IPv6 neighbor discovery, DIS messages are used to trigger DIOs from nearby nodes.

A DIO includes information about the RPL instance, the DODAG id, the so-called DODAG version number, the rank, and optionally routing information. DIOs are broadcast periodically by all nodes in a DODAG using the Trickle algorithm. Other nodes use them to maintain a set of parents and to select their preferred parent, which acts as the default route towards the root of the DODAG.

Typically the root node acts as the border router for the low-power wireless network, and it is always the root node that initiates a DODAG. Devices striving to attach to the network listen for DIO messages and choose an appropriate preferred parent. If they do not support the DODAG's objective function, they may only attach as leaf nodes. This eases interoperability since devices running different RPL implementations do not necessarily support the same objective functions. Contiki's RPL implementation has successfully passed interoperability tests of the IPSO Alliance's interoperability program. Additionally, interoperability with TinyRPL, the RPL implementation of TinyOS, has been demonstrated in [35].

RPL defines two modes of operation: storing and non-storing mode. Figure 2.5 illustrates their different behavior in P2P communication. In storing mode all nodes maintain a downward routing table covering their sub-DODAG, whereas in non-storing mode only the root maintains such a table. Depending on the mode of operation each node sends Destination Advertisement Objects (DAOs) to either the preferred parent (storing mode) or the root (non-storing mode). These are then used to update the routing table accordingly.

Figure 2.5: A sample DODAG: (a) P2P communication in storing mode; (b) P2P communication in non-storing mode.

Time-varying external conditions such as non-fixed obstacles or temporarily high ambient radiation can render certain links in the DODAG unusable. RPL has local and global DODAG repair mechanisms to deal with such cases. A global repair is initiated by the DODAG root, which increments the DODAG version number. This causes the construction of a completely new DODAG. Local repairs are triggered by nodes when an inconsistency is detected. Poisoning the sub-DODAG, i.e., advertising an infinite rank in the DIO, leads to a reconstruction of the node's sub-DODAG since the children need to select a new preferred parent with a lower rank.

2.5.2 Application Layer

Contiki includes several application layer protocols which run on top of its IPv6 stack. Besides the already mentioned CoAP module it provides an implementation of the Lightweight Machine-to-Machine (LWM2M) protocol from the Open Mobile Alliance [3], which is especially well suited for IoT applications. Another option, Sparrow, is developed by Yanzi Networks. It is available under an open source license and provides functionality similar to LWM2M. It also supports remote reprogramming of IoT devices out of the box.

2.6 Thread

Thread defines two groups of devices, routers and hosts. Routers provide connectivity within the low-power network by forwarding messages for each other and participate in the joining process of new nodes. Border routers additionally supply connectivity to the internet or other IPv6-based networks. In contrast to RPL there may be multiple border routers in a Thread network, which avoids a single point of failure. A Thread network can contain up to 32 routers, where one of them assumes the role of the network leader and is responsible for decisions within the network.

The leader may decide to downgrade the functionality of a router device to that of a host.

A so-called router-eligible end-device (REED) does not provide any router service but, if required, it can assume the router role without user interaction. It is worth noting that not all hosts need to be REEDs. End-devices generally attach to a parent router and communicate exclusively via this parent.

End-devices may be sleepy, i.e. they are not required to idle-listen for incoming packets.

Thread uses the previously explained star topology of the underlying IEEE 802.15.4 protocol for the communication between routers and hosts. Parent routers act as coordinators and buffer messages for their children in case they are sleepy. Upon wake-up the sleepy child either sends data via its parent or issues an IEEE 802.15.4 data request MAC command. The parent acknowledges the request with the frame pending bit set if there are buffered messages for the child; see also Figure 2.1. If the frame pending bit is set the end-device keeps its radio on awaiting further messages; otherwise it may enter its sleep cycle.

In Thread sleepy end-devices can potentially run on batteries for long periods. Routers, however, have a significantly higher power consumption. Since Thread is designed for home environments routers may directly attach to mains power.

2.6.1 Addressing

The network shares a locally assigned global 64-bit IPv6 prefix as specified in RFC4193 [29]. Together with their 64-bit extended MAC addresses, the IEEE 802.15.4 devices generate an IPv6 address by using stateless address autoconfiguration. Similarly the nodes configure a link-local address with the FE80::/64 prefix and further addresses using prefixes provided by the border routers.

All devices joining the network are assigned a 16-bit short IEEE 802.15.4 address. Router addresses utilize only the six higher bits with the lower ten bits set to zero. This allows for 64 router addresses but only 32 of them are used simultaneously to allow aging and recycling of router addresses. End-devices use the same high bits as their parent but the lower bits are non-zero. This way the routing of messages towards end-devices can be inferred by only considering the higher bits of the target address.
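The split between router and child bits can be made concrete with a couple of helper functions. The following sketch simply encodes the 6-bit/10-bit layout described above; it is an illustration of the text, not code taken from the Thread specification or OpenThread.

    /* Illustration of the short-address layout described above: the upper six
     * bits identify the router, the lower ten bits identify a child (zero for
     * the router itself). This encodes the description in the text only. */
    #include <stdbool.h>
    #include <stdint.h>

    static uint8_t router_id(uint16_t short_addr)
    {
      return (uint8_t)(short_addr >> 10);            /* upper 6 bits */
    }

    static bool is_router_address(uint16_t short_addr)
    {
      return (short_addr & 0x03FF) == 0;             /* lower 10 bits are zero */
    }

    static bool same_parent_router(uint16_t addr_a, uint16_t addr_b)
    {
      /* Messages towards an end-device can be routed by looking only at the
       * router part of its short address. */
      return router_id(addr_a) == router_id(addr_b);
    }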

2.6.2 Routing Algorithm

Thread supports full peer-to-peer connectivity between all routers in the network. It uses the algorithms from the distance-vector routing information protocol (RIP [30][40]), but in conjunction with a different message format. The message format is based upon and extends the Internet draft for mesh link establishment (MLE), which was designed for “1) dynamically configuring and securing radio links, 2) enabling network-wide changes to radio parameters, and 3) detecting neighboring devices” [34].

The routers maintain a next-hop routing table using the short addresses. They populate the table by exchanging MLE messages which contain path costs to all other routers in the network.

The path cost to a router is the minimum sum of link costs to reach that router, and the link cost in turn is an asymmetric measure based on the RSSI of incoming messages. Because of the addressing scheme the routing tables contain at most 32 entries, even for large networks with a lot of end-devices.

Thread incorporates an address translation mechanism that relates IPv6 addresses, 64-bit interface addresses, and 16-bit short addresses. When possible Thread favors the short address to achieve higher compression on the 6LoWPAN adaptation layer. Besides header compression and fragmentation, Thread makes heavy use of 6LoWPAN's mesh header as described in Section 2.4. This means fragmented IPv6 packets are only reassembled at the destination node; i.e., Thread uses a mesh under design.

2.6.3 Encryption, Commissioning And Joining

Beyond IPv6 connectivity Thread also specifies security-relevant aspects. All communication is encrypted using the mechanisms described in IEEE 802.15.4. This implies that no device can join the network unless it has a network-wide pre-shared key or it undergoes a commissioning and joining process, which requires human interaction. Because Thread includes this commissioning and joining process in its specification, standalone IoT devices running Thread can attach to existing Thread networks. Not all devices in the network need to be developed together; instead they might run completely different applications. Contiki's IPv6 stack differs from Thread in this regard, as there is no agreed-on commissioning and joining process integrated in the stack itself.

Thread also specifies network discovery; a new device scans all IEEE 802.15.4 channels and issues beacon requests. A beacon response contains network information and indicates if the network accepts new members. If the node is not pre-commissioned, a router establishes a session between the device and an application on a smartphone or similar. After receiving the necessary credentials the device attaches to a parent from whom it obtains a short address.

After a reboot (for example due to a battery change) the device reattaches directly to the network. For this to work Thread stores relevant parameters like the network credentials in non-volatile memory.

2.7 Related Work

In this section we summarize previous work related to IP-based networking in low-power wireless networks.

uIP was the first TCP/IP network stack for embedded systems, which "implemented all RFC requirements that affect host-to-host communication" [15]. It demonstrated the feasibility of running IP-based network stacks on resource-constrained embedded systems without sacrificing standard compliance. In the original paper the performance of uIP was tested using a point-to-point connection over an Ethernet link. The stack, however, also proved suitable for wireless sensor networks [18][19]. uIP implements version 4 of the Internet Protocol, which is still widely used today but not suitable for the huge number of devices connected to the IoT. Another shortcoming of IPv4, the missing mechanism for address auto-configuration, was already mentioned in the original paper.

Hui et al. "claim that IPv6 is better suited to the needs of WSNs than IPv4 in every dimension" [31]. They argue that a lot of useful mechanisms from wireless sensor network research, like "Sampled-listening, Trickle-based dissemination, hop-by-hop feedback, and collection routing", can be integrated into an IPv6-based network stack. In their paper they outline a layered architecture that incorporates all these mechanisms and clearly separates application layer logic from the networking logic. They show that their stack outperforms prior deployments using other architectures. Furthermore, they contribute by improving the 6LoWPAN compression schemes available at the time and by adopting a tree-like routing topology.

Subsequently, the Contiki operating system and TinyOS also included an IPv6 network stack [26][14]. Ko et al. demonstrated interoperability between the two stacks by deploying both stacks in the same network [36]. This highlights the benefits of the layered IP architecture, which provides application-agnostic connectivity. Ko et al. also pointed out that the performance of the network is sensitive to the configuration of queue sizes, retransmission timers, etc.

Another IPv6 stack for low-power networks is OpenWSN [55]. Like Contiki's uIPv6 it is based on standards and uses the RPL routing protocol. It was the first open source stack to incorporate the then-new Time Synchronized Channel Hopping (TSCH) standard, which allows extremely low duty cycles. OpenWSN is designed in a portable way and not tied to a specific operating system or platform.

Like OpenWSN, OpenThread is designed to be portable. Due to its open source license, OpenThread is also integrated in other systems. RIOT, another operating system for embedded IoT devices, recently added a port for OpenThread [10]. Also Nordic Semiconductor, a company that actively contributes to OpenThread, includes it in its nRF5 software development kit, which is available for their platforms [48].

Unlike the mentioned research, we do not aim to design and implement a network stack, or to demonstrate interoperability of two stacks. Instead, we focus on two existing implementations with different routing strategies, and compare their performance. We conduct experiments using the same operating system and the same application-layer test suite for both stacks.

Furthermore, all measurements are carried out in the same testbed of IoT devices at similar times. This approach increases the overall comparability of the results. In this respect we extend previous work, which evaluated the performance of a single stack in isolation.

2.8 Alternatives For Low Power Networks

Both Thread and Contiki's IPv6 stack implement IPv6 on top of IEEE 802.15.4 radios. There are, however, attractive alternatives depending on the use case. Thread is designed for home environments with relatively confined spatial coverage. For a lot of such smart devices Bluetooth Low Energy, a widely supported wireless communication protocol, is a possible alternative.

The successor, Bluetooth 5, adds new features that are tailored for the IoT.

Powered devices in homes can also connect to existing but more power hungry IEEE 802.11 WiFi networks. The recently published IEEE 802.11ah protocol focuses, like Bluetooth 5, on the IoT and provides low-power communication for distances of up to one kilometer [50].

The LoRa Alliance specifies another long range communication network, which is feasible for distances up to some kilometers, depending on environmental conditions like line-of-sight between the nodes [6]. This high transmission range requires radios with a higher power consumption compared to IEEE 802.15.4 radios. The nodes and the gateway, however, are only one hop apart and there is no need to maintain a network topology.

Another option is backscatter radios, i.e., radios that do not actively transmit but modulate ambient electromagnetic waves. Recent research shows that such devices can communicate over distances of up to one kilometer while consuming only microwatts instead of milliwatts like traditional radios in the field [54].


Chapter 3

Porting OpenThread to Contiki

This chapter describes the porting of OpenThread to Contiki. Although the port aims to be as platform independent as possible, it is only tested on the boards installed in the testbed.

These boards are described in the first section.

Since OpenThread is under active development, minor parts of the APIs were subject to change during this thesis. Therefore both OpenThread and Contiki were fixed to specific commits during the thesis project¹.

3.1 Platforms

For each platform Contiki needs to be configured and initialized correctly. To this end all platforms have their own configuration and initialization files. They are typically found in the platform folder, which contains the matching contiki-main.c and contiki-conf.h files; these were adapted for the zoul and the native platform.

The native platform allows running simulations in conjunction with the posix examples provided by OpenThread. These examples route the network traffic through UDP sockets, which required the implementation of a new, compatible radio driver for Contiki. Simulating a Thread network similar to the setup used in the measurements was helpful to test the setup for the experiments; see chapter 4 for more details.

The measurements were conducted on Zolertia Firefly devices, which are based on Zolertia's zoul module. It incorporates two chips manufactured by Texas Instruments: the CC2538 and the CC1200. The CC1200 is a sub-1 GHz radio interface and is not used in this thesis since OpenThread operates only in the 2.4 GHz range. The Firefly also has an on-board USB-to-Serial converter (CP2104), which allows the board to be programmed over its USB port.

Furthermore it has built-in user and reset buttons, an RGB LED, and a core temperature and battery sensor.

The CC2538 contains a 2.4 GHz IEEE 802.15.4-compliant radio transceiver and an ARM Cortex-M3 microcontroller. The microcontroller runs with frequencies up to 32 MHz, has 512 kB programmable flash storage, and 32 kB RAM.

¹ Commit 02bd8e1dddd6a5b3a14255fc96657e75f00b0b75 for OpenThread and commit 99402348eb36a2032f98cfa3c4f63947bfc33133 for Contiki.

Figure 3.1: A Zolertia Firefly. 25 of these are installed in the testbed.

3.2 Contiki’s and OpenThread’s Platform Abstraction

Contiki is a highly portable operating system and its modules and libraries are implemented on top of hardware drivers, which, in turn, are implemented for a variety of platforms.

OpenThread is also designed with portability in mind: it is “OS and platform agnostic, with a narrow platform abstraction layer and a small memory footprint, making it highly portable” [42]. OpenThread compiles to a static library, which is then linked to the final application.

The porting thus mostly consists of mapping OpenThread's platform abstraction to the corresponding modules in Contiki, as illustrated in Figure 3.2. However, not the whole platform abstraction of OpenThread needs to be implemented in order to get a functional network stack. As an example, consider the memory module, which provides heap memory functionality similar to POSIX malloc and free. OpenThread is designed such that multiple instances of the network stack can run on the same device, which requires the allocation of additional memory.

A single instance of OpenThread, however, runs just fine without implementing the memory module. A guide describing the general porting process in more detail was recently added to the OpenThread wiki [45].

The following subsections briefly describe the porting of each module.

Figure 3.2: OpenThread's platform abstraction is mapped onto the corresponding Contiki modules (only the implemented modules are shown): the Alarm, Random, Radio, and UART abstractions of the OpenThread platform abstraction layer map to Contiki's ctimer, random, NETSTACK radio, and serial-line modules, which in turn build on Contiki's drivers.


3.2.1 Radio module

As discussed in chapter 2 Contiki’s NETSTACK consists of several layers that provide different functionality. All layers operate on a global buffer called packetbuf, which holds exactly one packet.

On the lowest layer, the radio driver implements Contiki's extended radio API defined in core/dev/radio.h. It is accessible through the NETSTACK_RADIO macro, and most of OpenThread's radio abstraction is implemented using this driver. One difference between the two abstractions is that Contiki transmits packets synchronously, whereas OpenThread has a non-blocking transmit function together with a callback. Since the callback in OpenThread should not be issued before the transmit function returns, it is issued from a Contiki process, which, in turn, is polled after the packet has been transmitted.
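The following is a minimal sketch of this transmit-and-callback glue, not the actual port: the Contiki side (NETSTACK_RADIO, process_poll) is the standard NETSTACK radio driver API, while the OpenThread names (otPlatRadioTransmit, otPlatRadioTxDone, otRadioFrame) follow the current OpenThread platform API and may differ in the commit used for the port; ACK handling and error mapping are simplified.

    /* Sketch of the transmit glue described above (not the actual port). */
    #include "contiki.h"
    #include "dev/radio.h"
    #include "net/netstack.h"
    #include <openthread/platform/radio.h>

    PROCESS(ot_radio_process, "OpenThread radio glue");   /* started during platform init */

    static otInstance *ot_instance;
    static otRadioFrame *pending_frame;
    static otError tx_result;

    otError otPlatRadioTransmit(otInstance *aInstance, otRadioFrame *aFrame)
    {
      ot_instance = aInstance;
      pending_frame = aFrame;

      /* Contiki transmits synchronously; remember the outcome and defer the
       * OpenThread callback to a process so it is not issued before this
       * function returns. */
      if(NETSTACK_RADIO.send(aFrame->mPsdu, aFrame->mLength) == RADIO_TX_OK) {
        tx_result = OT_ERROR_NONE;
      } else {
        tx_result = OT_ERROR_CHANNEL_ACCESS_FAILURE;
      }

      process_poll(&ot_radio_process);
      return OT_ERROR_NONE;
    }

    PROCESS_THREAD(ot_radio_process, ev, data)
    {
      PROCESS_BEGIN();
      while(1) {
        PROCESS_YIELD_UNTIL(ev == PROCESS_EVENT_POLL);
        /* Report the transmission outcome back to the OpenThread core
         * (ACK frame handling omitted in this sketch). */
        otPlatRadioTxDone(ot_instance, pending_frame, NULL, tx_result);
      }
      PROCESS_END();
    }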

In Contiki, when operating in interrupt mode, the incoming packets enter the radio-duty-cycling (RDC) layer of the NETSTACK directly. In the current port the RDC layer processes the received packets and forwards them to OpenThread, and the remaining upper layers have no functionality. Another, maybe more flexible, approach would be to interact with OpenThread on the highest layer (NETWORK). OpenThread's platform abstraction supports hardware that handles CSMA timeouts and retransmissions autonomously. Thus the intermediate layers in the NETSTACK could implement this logic, which potentially avoids redundant copy operations of the radio frame.

The energy scan functionality and the source address matching are not implemented since they are not provided by the Contiki radio driver and they are not needed for a functional port. Implementing the latter, however, would improve the quality of the port considerably since it leads to energy savings for sleepy end devices. The source address match table controls how the frame pending bit in acknowledgments is set; currently this bit is set in every acknowledgment. Sleepy end devices are thus active longer, since they expect incoming packets, and the efficiency of the mailbox system decreases. The zoul platform has hardware support for source address matching and it might be worth the effort to add this functionality to the radio driver in Contiki.

3.2.2 Random Module

The random module is used for security-relevant purposes and to generate the extended address of the device. It provides a true random number generator (TRNG) to the upper layers. The module is implemented using Contiki's random library, which is based on a hardware number generator for the zoul platform.

3.2.3 Alarm

OpenThread requires a free running timer with millisecond resolution. In the default configuration of the zoul platform, Contiki's ctimer has a resolution of roughly 10 ms. This is adjusted to the required millisecond resolution by changing CLOCK_CONF_SECOND from 128 to 1000 in contiki-conf.h.
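A sketch of this mapping is shown below; the otPlatAlarmMilli* names follow OpenThread's current millisecond alarm platform API and may differ in the version used for the port, while ctimer_set, ctimer_stop, and clock_time are standard Contiki calls.

    /* Sketch of the alarm mapping (not the actual port). */
    #include "contiki.h"
    #include "sys/ctimer.h"
    #include <openthread/platform/alarm-milli.h>
    #include <stdint.h>

    static struct ctimer alarm_timer;
    static otInstance *alarm_instance;

    static void alarm_fired(void *ptr)
    {
      /* Signal the OpenThread core that the alarm expired. */
      otPlatAlarmMilliFired(alarm_instance);
    }

    uint32_t otPlatAlarmMilliGetNow(void)
    {
      /* With CLOCK_CONF_SECOND set to 1000, one clock tick is one millisecond. */
      return (uint32_t)clock_time();
    }

    void otPlatAlarmMilliStartAt(otInstance *aInstance, uint32_t aT0, uint32_t aDt)
    {
      uint32_t expires = aT0 + aDt;
      uint32_t now = otPlatAlarmMilliGetNow();
      uint32_t delay = (int32_t)(expires - now) > 0 ? expires - now : 0;

      alarm_instance = aInstance;
      ctimer_set(&alarm_timer, (clock_time_t)delay, alarm_fired, NULL);
    }

    void otPlatAlarmMilliStop(otInstance *aInstance)
    {
      (void)aInstance;
      ctimer_stop(&alarm_timer);
    }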

During the course of this thesis OpenThread added an API for a microsecond resolution timer.

Such a timer is needed to fully comply with [1] with respect to backoff periods in clear channel assessments. This high-resolution timer is not yet added to the port, but might be implemented using Contiki's rtimer. For zoul, the rtimer provides a sufficient resolution of 1/32768 s ≈ 31 µs.

In its current implementation the rtimer can only be used for one task in a Contiki application, which is typically the network stack. As the network stack is replaced by OpenThread, the rtimer is an option for the high resolution timer. Another option is, as already mentioned, to implement the CSMA and retransmission logic on the MAC layer of Contiki's NETSTACK.

3.2.4 Universal Asynchronous Receiver and Transmitter (UART)

OpenThread comes with a convenient command line interface (CLI) application. It was used throughout the thesis for testing and for learning purposes. The CLI application is based on a UART or an SPI backend. For this port the UART backend is chosen because it is already well integrated within Contiki by means of the printf function and the serial-line module. The printf function transmits characters over the UART while the serial-line module buffers incoming characters.

Upon the reception of the newline character ’\n’ an event is broadcasted to all subscribed processes.

The port uses a process which subscribes to the serial-line event. There is, however, a small conflict: OpenThread's command line interface also buffers incoming data until a newline character is received, but Contiki replaces the newline character with the string termination character '\0'. This substitution needs to be reverted before passing the data to OpenThread.
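A sketch of this glue, with the newline restored before the data is handed over, is shown below. The serial-line event and process macros are standard Contiki; otPlatUartReceived is OpenThread's UART platform callback, whose header location and exact name depend on the OpenThread version.

    /* Sketch of the serial-line glue (not the actual port). */
    #include "contiki.h"
    #include "dev/serial-line.h"
    #include <openthread/platform/uart.h>
    #include <stdint.h>
    #include <string.h>

    PROCESS(ot_uart_process, "OpenThread UART glue");   /* started during platform init */

    PROCESS_THREAD(ot_uart_process, ev, data)
    {
      static char line[128];
      static size_t len;

      PROCESS_BEGIN();
      while(1) {
        PROCESS_WAIT_EVENT_UNTIL(ev == serial_line_event_message && data != NULL);

        /* Contiki delivers a '\0'-terminated line; put back the newline that
         * was replaced before handing the data to OpenThread's CLI. */
        len = strlen((const char *)data);
        if(len > sizeof(line) - 1) {
          len = sizeof(line) - 1;
        }
        memcpy(line, data, len);
        line[len] = '\n';

        otPlatUartReceived((const uint8_t *)line, (uint16_t)(len + 1));
      }
      PROCESS_END();
    }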

This approach is problematic, however, since it does not work with another example application of OpenThread, the Network Co-Processor (NCP). Like Contiki's SLIP module (Serial Line Internet Protocol), the NCP routes radio frames over the UART. It thus provides a way to connect the low-power wireless network to a computer and to other networks like the Internet.

In order for the NCP to function properly, it needs exclusive access to the UART and incoming data may not be buffered. For zoul, this can be accomplished by adapting uart_set_input(..) in contiki-main.c for reception and puts() in cpu/cc2538/dbg.c for transmission. Debug messages may then be sent using otNcpStreamWrite.

3.2.5 Logging and Misc

The implementation of the logging module is optional, but was valuable for debugging purposes.

Currently the printf function is used to transmit the logs over the UART. Failed asserts within OpenThread also print an error message, and the hardware reset functions are not implemented yet. An application should change these modules to comply with the chosen logging and fault handling strategy.

3.2.6 General

OpenThread signals when it has non-urgent tasks to run. Upon reception of such a signal a Contiki process is scheduled, which then runs all queued tasks.
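This mechanism can be expressed in a few lines. The sketch below assumes OpenThread's tasklet API (otTaskletsSignalPending as the callback issued by the core and otTaskletsProcess to run the queued work), whose exact header and names may differ between OpenThread versions.

    /* Sketch of the tasklet scheduling described above (not the actual port). */
    #include "contiki.h"
    #include <openthread/tasklet.h>

    PROCESS(ot_tasklet_process, "OpenThread tasklets");   /* started during platform init */

    static otInstance *ot_instance;

    /* Called by the OpenThread core whenever non-urgent work is pending. */
    void otTaskletsSignalPending(otInstance *aInstance)
    {
      ot_instance = aInstance;
      process_poll(&ot_tasklet_process);
    }

    PROCESS_THREAD(ot_tasklet_process, ev, data)
    {
      PROCESS_BEGIN();
      while(1) {
        PROCESS_YIELD_UNTIL(ev == PROCESS_EVENT_POLL);
        /* Run all queued OpenThread tasklets from Contiki's cooperative scheduler. */
        otTaskletsProcess(ot_instance);
      }
      PROCESS_END();
    }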

3.2.7 Unimplemented Modules

As already stated above, the memory module and some functionality of the radio module are not implemented yet. The non-volatile storage module is also still missing; it is needed to save credentials like the network name, the network channel, the PAN ID, the private key, etc.

Without this data, a node cannot rejoin the network after a power loss or a reboot. This functionality might be added using Contiki's file system Coffee [53]. OpenThread uses mbed's cryptographic library to implement link-layer security [39]. Further performance improvements can be achieved by enabling hardware acceleration for this library.


Chapter 4

Evaluation

Both Contiki's IPv6 stack and OpenThread provide the same basic functionality: IPv6 networking for low-power wireless networks. This makes a comparison of both stacks meaningful.

This chapter first presents results of experiments measuring latency and packet loss in a testbed of IoT devices and then compares the respective implementation complexity.

Generally a low packet loss rate is essential for any IoT application and one of the main goals of the underlying network stack. It is thus interesting and meaningful to compare the loss rate of both stacks under similar conditions. The latency is an indication for the capability of the network. It is measured as a round-trip time between two nodes to avoid synchronization issues regarding the clocks of the nodes.

Counting the lines of code serves as one measure of the implementation complexity. It is studied in order to assess if one of the implementations is overly complex, which might increase the probability of bugs and might make the code harder to understand. The other measure studied is the size of the respective firmwares. This is of interest because smaller firmwares may result in savings due to the use of cheaper IoT devices.

4.1 Experimental Setup

The experiments are carried out in a testbed of IoT devices installed at the office of RISE SICS at Kista, Stockholm. The testbed consists of 25 Zolertia Fireflys as described in the previous chapter. One further Firefly is directly connected to a laptop and used to collect data from and issue commands to the nodes in the testbed. Figure 4.1 shows the physical distribution of the devices.

A serial connection between the additional node and the laptop is used to connect to the low-power wireless network. In case of Contiki this node runs the rpl-border-router application found in Contiki's examples. It acts as the DODAG root and routes outbound traffic over the UART using the Serial Line IP (SLIP) protocol. On the host the messages are processed by tunslip6, a tool included in Contiki. It creates a network interface in the host operating system and tunnels the traffic between the serial connection and this interface.

The OpenThread team provides a similar tool called wpantund, which uses the Spinel protocol to communicate over the UART [43][44]. The node needs to run a so-called Network Co-Processor (NCP) application. OpenThread includes an NCP implementation and also a ready-to-run example, which was used during the measurements.

Figure 4.1: The testbed at SICS. The node connected to the laptop is placed in the office right of Node C12, close to the door.

4.2 Measuring Latencies And Packet Loss

The nodes in the testbed run a small application on top of UDP to measure latencies and packet loss rates within the low-power wireless network. A thin wrapper unifies the stack-specific UDP APIs and provides a common interface to the application layer. The network stacks use their respective standard configurations; only for Contiki's IPv6 stack the size of the routing table is increased to accommodate more than 16 routers.

The laptop itself runs a Python script which opens a socket with a fixed IPv6 address. On startup the nodes send their IP address to the Python script, which, upon reception of all addresses, issues commands to the nodes and collects the responses.

In order to assess the latencies in the network, the round-trip times for packets are measured as explained in Figure 4.2. Both timestamps are taken at the same node to avoid inaccuracies due to unsynchronized clocks. Packet losses are registered alongside the round-trip time measurements.

Adopting the names in Figure 4.2: Node D starts a 2 s timer in step (2) and reports a round-trip time of zero if no response is registered before the timeout. Losses along the paths (1) and (4) are not taken into account when analyzing the data.


Figure 4.2: Capturing round-trip times: (1) The Python script sends a request to Node D to measure the round-trip time to Node B. (2) Node D requests a packet of a certain size from Node B and stores the timestamp. (3) Node B responds by sending an accordingly sized packet to Node D; Node D then calculates the round-trip time. Steps 2 and 3 are repeated ten times. (4) Node D sends the round-trip times to the Python script.
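Schematically, the measurement logic on Node D can be pictured as follows. This is only an illustrative sketch: the helper functions (send_rtt_request, report_results), the event rtt_response_event, and the globals peer_addr and requested_size are assumptions, and the actual implementation may use a higher-resolution timer than clock_time() for the timestamps.

/* Illustrative sketch of the round-trip measurement on Node D (hypothetical
 * helper names, not the actual thesis code). Each request/response exchange
 * is guarded by a 2 s timeout; a timeout is recorded as an RTT of zero. */
#include "contiki.h"
#include "contiki-net.h"

#define REPETITIONS 10
#define RTT_TIMEOUT (2 * CLOCK_SECOND)

extern process_event_t rtt_response_event;  /* posted by the UDP receive callback */
extern uip_ipaddr_t peer_addr;              /* address of Node B */
extern uint16_t requested_size;             /* response size requested from Node B */

void send_rtt_request(const uip_ipaddr_t *to, uint16_t size);   /* hypothetical */
void report_results(const clock_time_t *rtt, int count);        /* hypothetical */

PROCESS(rtt_measurement_process, "RTT measurement");

PROCESS_THREAD(rtt_measurement_process, ev, data)
{
  static struct etimer timeout;
  static clock_time_t start, rtt[REPETITIONS];
  static int i;

  PROCESS_BEGIN();

  for(i = 0; i < REPETITIONS; i++) {
    send_rtt_request(&peer_addr, requested_size);   /* step (2) */
    start = clock_time();
    etimer_set(&timeout, RTT_TIMEOUT);

    /* Wait for the response from Node B (step (3)) or for the timeout. */
    PROCESS_WAIT_EVENT_UNTIL(ev == rtt_response_event ||
                             etimer_expired(&timeout));
    rtt[i] = (ev == rtt_response_event) ? clock_time() - start : 0;
  }

  report_results(rtt, REPETITIONS);                 /* step (4) */

  PROCESS_END();
}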

The application implements another command to take a snapshot of the current routing topology in the network. When using Contiki’s IPv6 stack, the nodes report the IP address of their respective preferred parent, which makes it possible to reconstruct the DODAG. One of the captured snapshots is shown in Figure 4.3. In the case of OpenThread, the routing table is inspected: each router reports the appropriate next-hop router for messages to every other router in the network, and nodes that currently act as children report the router ID of their parent. The resulting mesh topology is depicted in Figure 4.4.
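As an illustration, the OpenThread side of this command can be sketched roughly as follows; this assumes the otThreadGetMaxRouterId()/otThreadGetRouterInfo() API and the header layout of a recent OpenThread release, so header paths, names, and error constants may differ in the version used for the port.

/* Sketch: dump the next-hop table of a router for the topology snapshot.
 * mNextHop holds the router ID of the next hop towards router 'id'. */
#include <stdio.h>
#include <openthread/thread_ftd.h>

static void
print_next_hops(otInstance *instance)
{
  otRouterInfo info;
  uint8_t id;

  for(id = 0; id <= otThreadGetMaxRouterId(instance); id++) {
    if(otThreadGetRouterInfo(instance, id, &info) == OT_ERROR_NONE) {
      printf("router %u: next hop %u\n", info.mRouterId, info.mNextHop);
    }
  }
}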

4.3 Round-Trip Times

The round-trip times were measured between five different pairs of nodes. The devices in the pairs (1|5) and (9|12) are relatively close to each other. The remaining pairs cover larger spatial areas with different characteristics: the pair (24|18) lies within a mostly linear region of the network, whereas the pair (14|3) represents the mesh part of the network. Finally, (23|8) covers both of these regions.

The sizes of the responses, i.e., the packets sent in the third step in Figure 4.2, range from 20 bytes to 440 bytes, which covers the relevant range for IoT applications and ensures that 6LoWPAN-fragmented packets are included in the measurements. Figure 4.5 shows the number of data points captured for each pair of nodes and each packet size. The experiments were done during non-office hours to minimize radio interference from, for example, people in the office building accessing the Wi-Fi network. Furthermore, the order of the measurements was randomized to spread the influence of external factors evenly.

Figure 4.6 reveals that the round-trip times behave similarly for both network stacks. They are almost proportional to the packet size, with small discontinuities when the number of required 6LoWPAN fragments increases. It is worthwhile to note that the different forwarding strategies, route-over used by Contiki’s stack and mesh-under used by OpenThread, do not have a visible effect on the round-trip times. Even though OpenThread splits and reassembles 6LoWPAN fragments only once, at the destination node, instead of at every hop, there is no sign that OpenThread scales better with respect to packet size.
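The positions of these discontinuities can be estimated with a simple rule of thumb; the usable payload per fragment, F, is an assumption here, since it depends on header compression, addressing mode, and link-layer security:

n_{\mathrm{frag}} \approx \left\lceil \frac{L_{\mathrm{UDP}}}{F} \right\rceil, \qquad F \approx 80\text{--}100\,\mathrm{B}

Under this assumption, an additional fragment, and hence a small jump in the round-trip time, is expected roughly every 80–100 B of additional payload.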


Figure 4.3: Snapshot of the DODAG when using Contiki’s IPv6 stack. The root was connected to the host running the Python script.

Figure 4.4: Snapshot of the mesh topology when using OpenThread. The children are shown in the light rectangular boxes; their arrows point to their parents. All other nodes are routers, and Node 20 currently has the leader role. The arrows point to all routers that are used as the next hop on some route.


Figure 4.5: The number of successful round-trip measurements.

Figure 4.6: The mean values of the round-trip times between the nodes. 80% of the values are bigger than the lower end of the interval and 80% are smaller than the upper end of the interval.


(a) The time until the UDP send command returns.

(b) The time until the fragments of a UDP packet are transmitted.

Figure 4.7: Time measurements for sending a UDP packet. 80% of the values are bigger than the lower end of the interval and 80% are smaller than the upper end of the interval, respectively.

It is rather the other way around: all node pairs except the first show lower round-trip times for Contiki’s IPv6 stack than for OpenThread. Even though the topologies shown in Figures 4.3 and 4.4 are only snapshots and will change over time, they might contain a possible explanation for this. The nodes (9|12) are only one hop apart in the DODAG, compared to (at most) three hops in OpenThread’s mesh. Similar considerations apply for the pair (1|5), and they also explain OpenThread’s good performance between (14|3): here, two three-hop routes are possible in the mesh, compared to five hops in the DODAG. However, it needs to be stressed that these arguments do not hold for the pairs (23|8) and (24|18).

Also the fact that Contiki’s IPv6 stack is more tightly integrated into the operating system than OpenThread might play a crucial role. This could explain why the time differences for (14|3), i.e., the case where OpenThread shows lower round-trip times, are smaller than for the other pairs. In order to assess this more quantitatively, we measure how long it takes to transmit whole UDP packets, i.e., the time from issuing the send command until the whole packet has left the device.

Figure 4.7a shows how long it takes for the UDP send command to return. It reveals longer times for Contiki’s IPv6 stack. To understand this, we look at the implementations of both stacks. OpenThread just enqueues the UDP packet in its network-layer buffers, informs the operating system that it has tasks to do, and then returns control. Thus, OpenThread exhibits low and very predictable times in Figure 4.7a. Contiki’s IPv6 stack, however, directly begins to process the packet: it issues a route lookup, fragments the packet if necessary, and passes all fragments to the MAC layer of the NETSTACK, where they are put in a queue1. This explains the higher times for Contiki’s IPv6 stack in Figure 4.7a. The larger spread is due to variations in the route lookup time for different destination nodes.
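The following sketch illustrates this event-driven pattern as it can be wired up in a Contiki port. It is only an illustration of the general mechanism, based on OpenThread’s tasklet API (otTaskletsSignalPending(), otTaskletsArePending(), otTaskletsProcess()), and not necessarily the exact code of the port described in Chapter 3; the header path reflects a recent OpenThread release.

/* Sketch of the event-driven glue between OpenThread and Contiki: a send
 * call only enqueues the packet, OpenThread signals pending work via
 * otTaskletsSignalPending(), and a Contiki process later runs the tasklets
 * that drive the actual transmission. */
#include "contiki.h"
#include <openthread/tasklet.h>

PROCESS(openthread_process, "OpenThread driver");

static otInstance *instance;   /* set during stack initialization */

/* Platform callback implemented by the port: wake up the Contiki process. */
void
otTaskletsSignalPending(otInstance *aInstance)
{
  process_poll(&openthread_process);
}

PROCESS_THREAD(openthread_process, ev, data)
{
  PROCESS_BEGIN();

  while(1) {
    PROCESS_YIELD();                      /* wait until polled */
    while(otTaskletsArePending(instance)) {
      otTaskletsProcess(instance);        /* run queued OpenThread work */
    }
  }

  PROCESS_END();
}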

The time to actually transmit the packet is considerably higher, though; see Figure 4.7b.

Contiki’s stack starts to actually transmit the fragments after a backoff period, which is initially always zero2. The very predictable timings are due to several reasons. First, the measurement was done in a two-node network and hence there are no varying route lookup times. Second, after a fragment is successfully transmitted, the next fragment is sent directly without the use of additional backoff timers. Since Contiki’s stack sends link-layer packets synchronously, and because of the low link error rate in this simple network, all fragments are, most of the time, sent directly after each other.

1 Note that this is specific to the CSMA MAC layer used in the measurements.

2 See the macro CSMA_MIN_BE.

Figure 4.8: Encryption and decryption times of radio frames measured for OpenThread. The plots show the mean and standard deviation.

OpenThread transmits link-layer packets asynchronously and therefore returns control to the operating system after each fragment. This leads to generally higher transmission times for the whole UDP packet. Furthermore, the overhead compared to Contiki’s IPv6 stack increases with the number of fragments. The variations are generally higher since other processes may be scheduled in between the transmissions of two fragments. As Figure 4.7b reveals, it takes approximately 8 ms longer to transmit a 300 B UDP packet with OpenThread than with Contiki’s IPv6 stack.

Another relevant factor for the latency measurements is security. Contiki’s IPv6 stack does not use link-layer encryption by default, and hence it is not used in the measurements. Link-layer encryption, however, is mandatory in the Thread specification. Furthermore, the port of OpenThread to Contiki currently does not support hardware acceleration for encryption and decryption, which leads to notable additional latency at both the sending and the receiving node. Figure 4.8 reveals that this process takes about 1 ms, depending on the size of the radio frame. Hence, for large UDP packets, which are fragmented into multiple frames, the encryption and decryption penalty on the latency can add up to the order of 10 ms, without taking into account the encryption and decryption at intermediate, forwarding nodes.
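A rough estimate illustrates this order of magnitude; the fragment count is an assumption (roughly five to six fragments for a 440 B UDP packet, following the rule of thumb above), while the 1 ms per software AES operation is taken from Figure 4.8:

t_{\mathrm{crypto}} \approx n_{\mathrm{frag}} \cdot (t_{\mathrm{enc}} + t_{\mathrm{dec}}) \approx 5 \cdot (1\,\mathrm{ms} + 1\,\mathrm{ms}) = 10\,\mathrm{ms}

per sender–receiver pair, before adding the decryption and re-encryption at forwarding nodes.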

Summarizing the above arguments, we can state that the additional latency due to encryption and asynchronous transmission accounts for a fair amount of the differences observed in the round-trip time measurements. Hence, both routing topologies perform equally well for the chosen pairs of nodes in the given testbed.

4.4 Packet Loss

Figure 4.9 shows the packet loss rates. The pairs (1|5), (9|12), and (14|3) show similar behavior for both stacks; the loss rate varies only slightly with the packet size. OpenThread, however, exhibits a slightly higher overall loss rate for (1|5) and (9|12), and a clearly higher loss rate for (14|3). High ambient noise close to Node 14 might be an explanation for this, but that remains speculative.

Figure 4.9: Dependency between the packet loss rate and the packet size.

More interesting, however, are the pairs (23|8) and (24|18): here OpenThread exhibits an almost linear dependency of the packet loss rate on the packet size. In contrast, the loss rate of Contiki’s IPv6 stack shows no clear dependency on the packet size, but it is also generally higher for these pairs than for the others. Overall, the high average loss rate indicates that, for both stacks, one or more links along the path were not very reliable.

The almost linear dependency of OpenThread’s loss rate can be explained by considering the different forwarding strategies for 6LoWPAN fragments. Since Contiki’s IPv6 stack reassembles all fragments at every hop, either all fragments or none are forwarded at intermediate nodes.

In Thread, however, every intermediate node forwards all received fragments. This leads to the possibility that many fragments, but only a few complete UDP packets, arrive at the destination node. If the buffers are not flushed fast enough, fragments of potentially complete UDP packets may be dropped, causing the reception rate to decrease. Since the buffers fill more quickly for larger packet sizes, this explains the size dependency. In OpenThread, the default timeout for reassembling 6LoWPAN fragments is 5 s, and 40 buffers are available for such fragments3. The experiment described in Figure 4.2 repeats the same measurement ten times, and it is not unlikely that two or more of these blocks are scheduled within 5 s. For bigger UDP packets more frames need to be sent, and thus the likelihood of a buffer overflow increases with the packet size.
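A back-of-the-envelope calculation makes this plausible; the fragment count per packet is again an estimate based on the assumed 80–100 B of payload per fragment:

10 \cdot \left\lceil \frac{440\,\mathrm{B}}{80\,\mathrm{B}} \right\rceil = 10 \cdot 6 = 60 > 40

so ten back-to-back measurements with the largest packet size can generate more fragments within the 5 s window than there are buffers, if the reassembly of some packets stalls.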

The overall packet loss of OpenThread is almost two times as high as the loss of Contiki’s IPv6 stack.

3 See OPENTHREAD_CONFIG_6LOWPAN_REASSEMBLY_TIMEOUT and OPENTHREAD_CONFIG_NUM_MESSAGE_BUFFERS.
