
THE EVALUATION OF TINYOS WITH WIRELESS SENSOR NODE OPERATING SYSTEMS

Master's thesis in Computer Systems Engineering
Famoriyo Olusola

Supervised by: Prof. Tony Larsson, Halmstad University

School of Information Science, Computer and Electrical Engineering (IDE), Halmstad University


The Evaluation of TinyOS with Wireless Sensor Node Operating Systems

School of Information Science, Computer and Electrical Engineering (IDE), Halmstad University

Box 823, S-301 18 Halmstad, Sweden

January 2007


This report is the result of a master's project carried out at the School of Information Science, Computer and Electrical Engineering (IDE section) at Halmstad University. The project concludes the Master of Science in Computer Systems Engineering. It was initiated in March 2006 and concluded in January 2007.

The work would not have been possible without the help and interest of a number of people who contributed immensely to the success of the thesis, financially, morally and educationally:

The supervisor of the thesis at Halmstad University, Prof. Tony Larsson, for his huge support and advice throughout the duration of the project.

Markus Adolfsson, Anders Ahlander and Veronica Gaspes of Halmstad University, for their courses and support, which greatly helped me in completing the thesis.

I would also like to say a big thank you to my Dad and Mum, Mr and Mrs J.G. Famoriyo, for the finances, love and support given to me throughout my studies in Sweden, and to all my family members who have contributed in one way or another. Finally, I would like to acknowledge the support of all my good friends in Sweden, Canada, the USA, the UK and Nigeria.


List of Figures

2.1 Task Execution States
2.2 Tasks Scheduling Operation
2.3 Telosb Sensor Mote
2.4 Memory: Compile and Run Time
2.5 SurgeC [1]
2.6 Contiki Partitioning into core and loaded programs
2.7 MantisOS Architecture
3.1 Compilation Process of TinyOS
3.2 Compilation process of TestSerial Application on all OS
3.3 Directory Structure of TinyOS-1.x
3.4 Directory Structure of TinyOS-2.x
3.5 Application Services in TinyOS-1.x
3.6 Application Services in TinyOS-2.x
3.7 The Cygwin Interface of Blink Application


List of Tables

2.1 Core Interfaces provided by TinyOS-2.x [1]
3.1 Tools in TinyOS 1.x and 2.x
4.1 Comparative overview of features of sensor node operating systems
4.2 Program memory size of Applications in TinyOS-1.x in Test Environment (a)
4.3 Program memory size of Applications in TinyOS-1.x in Test Environment (b)
4.4 Program memory size of Applications in TinyOS-1.x on Simulation
4.5 Program memory size of Applications in TinyOS-2.x on Test-Bed
4.6 Program memory size of Arbiter Applications in TinyOS-2.x on Test-Bed
4.7 Program memory size of Storage applications in TinyOS-2.x on Test-Bed
4.8 Program memory size of TestSerial Application in Operating Systems on Test-Bed
A.1 Library files of TestSerial application on Sensor node operating systems


Contents

Preface and Acknowledgment
Abstract
1 INTRODUCTION
2 BACKGROUND
2.1 Sensor Node Operating System Concepts
2.1.1 Process management
2.1.2 Storage management
2.1.3 Power management
2.1.4 Input and Output System
2.1.5 Memory Management
2.1.6 Security Management
2.2 SENSOR NODE OPERATING SYSTEMS
2.2.1 TINYOS
2.2.2 CONTIKI
2.2.3 MANTISOS (MOS)
2.2.4 SOS
2.3 COMPARATIVE OVERVIEW
3 EVALUATION ENVIRONMENT
3.1 EXPERIMENTAL SETUP
3.2 STRUCTURE
3.2.1 TINYOS 1.X and 2.X
3.3 TOOLS
3.3.1 Java
3.3.2 Java COMM 2.0
3.3.3 Graphviz
3.3.4 nesC Convention
3.3.5 AVR / MSP GCC Compiler
3.3.6 Cygwin
3.3.7 Binutils
3.3.8 LIBC
3.3.9 Python-tools
3.3.10 Base
3.3.11 Avarice
3.3.12 Insight
3.4 PLATFORM
3.4.1 TEST-BED: Telosb
3.4.2 SIMULATION: TOSSIM
4 RESULTS
4.1 THEORETICAL RESULT
4.2 EXPERIMENTAL RESULT
5 REFLECTIONS and CONCLUSION
5.1 Reflections on the Evaluation
5.2 Tasks and Schedulers in TinyOS
5.2.1 Tasks in TinyOS-1.x
5.3 CONCLUSION
References
A APPENDIX
A.1 FOREWORD
A.2 TEST PROGRAMS
A.2.1 TinyOS
A.2.2 CONTIKI
A.2.3 MantisOS
A.2.4 SOS


Abstract

Wireless sensor nodes fall somewhere in between the single-application devices that do not need an operating system and the more capable, general-purpose devices with the resources to run a traditional embedded operating system. Sensor node operating systems such as TinyOS, Contiki, MantisOS and SOS, which are discussed in this paper, exhibit characteristics of both traditional embedded systems and general-purpose operating systems, providing a limited number of common services for application developers that link software and hardware.

These common services typically include platform support, hardware management of sensors, radios and I/O buses, and application construction. They also provide services needed by applications, including task coordination, power management, adaptation to resource constraints, and networking. The evaluation concentrates on TinyOS, including an analysis of the resource management and flexibility of versions 1.x and 2.x, and on its operation alongside the other wireless sensor node operating systems.

Keywords: TinyOS, Sensor node Operating system, TelosB, nesC, Task, Applications, Evaluation


1 INTRODUCTION

Wireless sensor nodes, also known as motes, are single-application devices that combine in ones, tens or thousands to form a wireless sensor network, in which all devices communicate among themselves and pass information from node to node towards a central application that coordinates the gathered information at a terminal for human use [2]. In order for the information received to be useful, a general-purpose application that coordinates all the information has to be put in place; this can be regarded as an operating system, with the ability to offer both traditional embedded and general-purpose operating system features.

A sensor node operating system must exhibit characteristics of both traditional embedded and general-purpose operating systems, providing a number of common services for application developers that link software and hardware implementations [3].

The most commonly provided services typically include process management, storage management, power management, input and output systems, memory management, and protection and security. Other internal services of importance include task coordination, time management, adaptation to resource constraints and networking capabilities. In reality, many embedded devices do not use or need an operating system, but for the operations carried out in a sensor network, an operating system simplifies most of the constraints that cannot be handled in the hardware implementation due to factors such as size, cost and complexity. Maximizing performance within the constraints of limited hardware resources, by integrating a software approach to cater for all the remaining deficiencies, is therefore of paramount importance [4, 5].

In performance optimization, the process emphasizes efficient design of interfaces while searching for the optimum mapping of modules to hardware and software, focusing on those parts of the design that could alternatively be implemented in software or hardware, and on their corresponding interfaces. The prototyping environment for the sensor node operating system gathers characteristic information about hardware and software modules [3, 6, 4, 7] and stores it in a library of reusable modules. Parameters for implementation between the hardware and software interfaces include execution time, size, cost, resources, interoperability, reliability, scalability, power management and memory management, among others.

The introduction of a sensor node operating system has the advantage of giving priority to computational tasks, which can be catered for by event handlers within the various capabilities of the sensor mote, depending on what services are rendered by the remote sensor. Networks of sensors are employed to offer services such as environmental monitoring, acoustic detection, habitat monitoring, medical monitoring, military surveillance and process monitoring [4, 8, 5, 9]. A sensor network can be used to perform one of these services, or a combination of them, through the detection of light, temperature, sound and so on.

Sensor node operating systems do not usually offer as many features as a general-purpose desktop operating system such as Linux or Windows, but they must be able to stand alone in coordinating the services needed to provide all the basic necessary features, which include small size, low energy consumption, diversity in design, usage and operation, limited hardware parallelism and a good controller hierarchy.

To be useful for a wireless sensor network, the sensor node operating system configuration must also include standard networking protocols, data acquisition tools, distributed services and drivers to read sensor information.

In this report we evaluate sensor node operating systems that are commonly employed in practice. The most commonly used systems are described in the background section, with TinyOS used as a benchmark, since it is currently the most widely employed sensor node operating system; the other commonly used operating systems considered are Contiki, MantisOS and SOS. The evaluation is carried out on these four wireless sensor node operating systems, analyzing their effectiveness with a comparison of memory usage, mode of execution and energy consumption. We run TinyOS on a wireless sensor node and compare its observed behavior with the expected features, both in conformance with TinyOS's stated goals and as an embedded operating system in general. The priority goals set out by the four wireless sensor node operating systems are stated below.

The design of TinyOS [7, 1, 10] is motivated by four main goals, namely the capability to cater for:

• Limited resources

• Flexibility

• Concurrency

• Power management.

The construction of TinyOS is built around these four main goals, and it encompasses all the general features expected of a sensor node operating system if it is to perform effectively and maximize performance.

Contiki [11, 12, 13] has its design motivated by:

• Lightweight

• Dynamic loading / re-programming in a resource constrained environment

• Event-driven kernel


• Protothreads / preemptive multithreading (optional).

SOS [3, 14] is another sensor node operating system, mainly motivated by:

• Dynamic re-configurability through fault tolerance, heterogeneous deployment and new programming paradigms.

MantisOS [15, 16] is a sensor node operating system designed around time-sliced multithreading, offering all the basic features rendered by a good sensor node operating system.


2 BACKGROUND

2.1 Sensor Node Operating System Concepts

Sensor node operating systems are built on certain fundamental principles that guide their structure [17, 18, 19]. All of these principles must be kept in mind when creating an operating system for sensor nodes.

2.1.1 Process management

Process management controls how a program's execution takes place, what goes on at various instances, and what has to be catered for to keep all processes running smoothly; how this is achieved is one of the fundamental issues that must be taken into account when creating a sensor node operating system. Inter-process communication must be well administered, and a good coordination pattern formulated, when building the operating system. Switching between processes and contexts determines which services must be running, which are to be blocked, and which should be made ready for use while the other services continue running in the background [18, 20]. The three states a task can take are shown in Fig 2.1.

Figure 2.1: Task Execution States

Ready state indicates a task that is about to run but cannot, because a higher priority task is still running.

Blocked indicates a task that has requested a resource that is not available and is waiting for an event handler or for a resource to be freed, or that is delaying itself by waiting for a timing delay to end. The blocked state is very important: without it, lower priority tasks would never have the opportunity to run, and starvation would occur if higher priority tasks never blocked. A blocked task stays blocked until its blocking condition is met, for example by the release of a semaphore token, by the expiry of its time delay, or by the arrival of the message it is waiting for in a message queue.

Running is the state of the highest priority task currently executing; it is implemented by loading the processor registers with the task's context, which is then executed. A task may move back to the ready state from the running state when it is preempted by a higher priority task, as in step 4 of Fig 2.2. A running task can move to the blocked state by making a call that requests an unavailable resource, a call that delays the task for a period of time, or a request to wait for an event to occur.

The steps in Fig 2.2 show what happens to the ready list using four tasks with equal or different priority levels: task 1 has the highest priority (30), tasks 2 and 3 have the next-highest priority (50), and task 4 has the lowest priority (60).

Step 1: Tasks 1, 2, 3 and 4 are in the ready list, waiting to be run.

Step 2: Since task 1 has the highest priority (30), it is the first to run; the kernel moves it from the ready list to the running state. While task 1 is executing it makes a blocking call, so the kernel moves it to the blocked state, takes task 2 (50), now the highest priority task in the ready list, and moves it to the running state.

Step 3: When task 2 makes its own blocking call, task 3 is moved to the running state.

Step 4: As task 3 runs, it frees the resource requested by task 2. The kernel then returns task 2 to the ready list and inserts it before task 4, because task 2 still has a higher priority than task 4, while task 3 continues as the currently running task.

Step 5: If task 1 becomes unblocked at any point during this, the kernel moves task 1 to the ready list, and task 3 is moved back to the ready list so that task 1 can run, since task 1 has a higher priority level than all the remaining tasks.
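To make the transitions concrete, the following minimal C sketch models the three task states and the priority-ordered ready list walked through in the steps above. It is illustrative only, with all names invented for the sketch, and no real kernel API is implied; as in the example, a lower number means a higher priority.

/* Minimal, illustrative model of the ready/blocked/running states. */
#include <stdio.h>

typedef enum { READY, RUNNING, BLOCKED } task_state;

typedef struct task {
    const char *name;
    int priority;            /* lower value = higher priority */
    task_state state;
    struct task *next;       /* link in the ready list */
} task;

static task *ready_list = NULL;

/* Insert a task into the ready list, ordered by priority, so the
 * kernel can always run the head of the list (steps 1 and 4). */
static void make_ready(task *t) {
    task **p = &ready_list;
    while (*p && (*p)->priority <= t->priority)
        p = &(*p)->next;
    t->next = *p;
    *p = t;
    t->state = READY;
}

/* Move the highest priority ready task to the running state (step 2). */
static task *schedule(void) {
    task *t = ready_list;
    if (t) { ready_list = t->next; t->state = RUNNING; }
    return t;
}

int main(void) {
    task t1 = {"task1", 30, BLOCKED, NULL}, t2 = {"task2", 50, BLOCKED, NULL};
    task t3 = {"task3", 50, BLOCKED, NULL}, t4 = {"task4", 60, BLOCKED, NULL};
    make_ready(&t1); make_ready(&t2); make_ready(&t3); make_ready(&t4);
    task *running = schedule();          /* task 1 runs first */
    printf("running: %s\n", running->name);
    running->state = BLOCKED;            /* task 1 makes a blocking call */
    running = schedule();                /* task 2 takes over */
    printf("running: %s\n", running->name);
    return 0;
}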

Ready, blocked and running are represented in some embedded systems by different notations; in VxWorks kernels [21] they are represented as suspended, pended and delayed, where pended and delayed are sub-states of the blocked state. These notations are not visible to the user, but the operating system designer must take all of this into consideration at the initial building stages. A good scheduling structure that gives priority to tasks also helps the operating system know when to schedule operations that do not require many resources yet are still useful.


Figure 2.2: Tasks Scheduling Operation

When tasks are well handled, they provide room for adequate interrupt mapping, queueing and synchronization.

The interruption of tasks in the operating system must also be flexible, or triggered by certain event conditions. Data race conditions [7, 10] must be avoided, and a standard such as the IEEE 802.15.4 networking protocol [22] should be employed to achieve an efficient wireless sensor network.

2.1.2 Storage management

Storage management deals with how sensor motes are built, either for one specific operating system or for interoperability with all other platforms.

Sensor operating systems should not be designed for just one particular type of hardware; they must be able to run on several platforms, either alone or simultaneously across different motes. Examples of commonly known sensor motes are the Mica, MicaZ, Telos, ESB and Tmote Sky [23, 24, 25, 22], which all run on either the Atmel AVR ATmega128 microprocessor or the Texas Instruments MSP430 microprocessor with minimal memory capacity, features the operating system builds on to reduce size and cost.

The TelosB used in this report has an IEEE 802.15.4 / ZigBee compliant RF transceiver [25] operating between 2.4 and 2.4835 GHz, a globally compatible ISM band, and transmits data at a rate of 250 kbps. It has an integrated onboard antenna and low power consumption. The goal of catering for extremely constrained hardware resources is built into the software services carried out by the operating system.


Figure 2.3: Telosb Sensor Mote

Most sensor node operating systems are written in the C language, and some in extensions of C, such as nesC in the case of TinyOS.

2.1.3 Power management

Power management can also be regarded as one of the key factors in storage management: since motes depend on batteries to run, the operating system must provide additional services, such as putting motes to sleep and waking them up, to save and extend battery life and thus provide adequate power to the mote for longer.

2.1.4 Input and Output System

The input and output system deals with what triggers the sensors in performing their duties of data acquisition, activation, communication and so on. This also includes the design principles of the various sensor motes and how porting to them is provided for in the sensor node operating system.

2.1.5 Memory Management

Memory management is a way of allocating portions of memory to programs at their request, and releasing them for reuse when no longer needed. It deals with the program memory and the flash memory; flash memory is currently used for the storage of application code, the text segment and data. The two approaches used in the management of memory for sensor operating systems are:

• Allocation of physical memory into software-usable memory objects (allocation).

• Availability of memory objects and its management by software (management).


To increase performance, memory usage in an operating system must be shared; memory is a large array of words or bytes, each with its own address. The operating system fetches instructions from memory and processes them, and after an instruction has been executed the results are stored back in memory. The binding of instructions and data to memory addresses is carried out in various ways, based on the principle employed by the sensor node operating system.

In a compile-time scheme, memory usage is known ahead of time: a process resides at a particular location, and compiled code is generated from that location onward, extending up to the completion of the process. If the location changes at any point, the code has to be recompiled to reflect the change in memory address.

In the load- and execution-time scheme, instruction binding is delayed until the program is ready to be loaded and executed; when an address location changes, the binding is simply redone at the next load or run. The kernel controls the memory management unit, and its files and attributes must therefore be protected, which is one of the reasons why security should be considered in a sensor node operating system.

Fig 2.4 shows the basic operation of memory under the compile-time and run-time implementations.

The compile-time and run-time schemes essentially describe static and dynamic memory allocation, the two types of memory allocation in sensor node operating systems. Many of the problems that arise in operating systems come from the way their memory allocators have been implemented.
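The contrast can be sketched in a few lines of C. The buffer below is allocated statically, with its size and address fixed at compile/link time, while the pool allocator hands out blocks at run time; a simple fixed-block pool of this kind is what resource-constrained systems typically use instead of a general-purpose malloc. All names are illustrative.

/* Illustrative sketch: static allocation vs. a run-time block pool. */
#include <stddef.h>
#include <stdint.h>

/* Static allocation: size and address fixed when the image is linked. */
static uint8_t packet_buffer[64];

/* Dynamic allocation from a fixed-block pool: blocks are handed out and
 * reclaimed at run time, but the pool itself is a statically sized array. */
#define BLOCK_SIZE 32
#define NUM_BLOCKS 8

static uint8_t pool[NUM_BLOCKS][BLOCK_SIZE];
static uint8_t in_use[NUM_BLOCKS];

static void *block_alloc(void) {
    for (int i = 0; i < NUM_BLOCKS; i++)
        if (!in_use[i]) { in_use[i] = 1; return pool[i]; }
    return NULL;                    /* pool exhausted */
}

static void block_free(void *p) {
    for (int i = 0; i < NUM_BLOCKS; i++)
        if (p == pool[i]) { in_use[i] = 0; return; }
}

int main(void) {
    void *b = block_alloc();        /* run-time request (dynamic) */
    packet_buffer[0] = 1;           /* compile-time-placed buffer (static) */
    block_free(b);
    return 0;
}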

A well structured memory manager reclaims garbage memory when the system is in its idle state, and the time taken to perform a garbage collection cycle must at worst be proportional to the maximum amount of concurrently live memory. SRAM, a volatile random access memory, is generally fast for reads and writes, but consumes more energy than flash and is more expensive; for this reason it is used in small quantities as data memory.

Many sensor node operating systems are constrained in terms of energy, SRAM and flash memory; sensor network nodes, or motes, typically support up to 10 KB of RAM and up to 1 MB of flash memory. The MSP430 has a single (von Neumann) address space, with data RAM and program ROM all accessed through a single 16-bit pointer; it supports a variety of addressing modes and has dedicated stack instructions and a stack pointer register [26]. EEPROM (electrically erasable programmable read-only memory) acts in a similar way to flash memory: it is nonvolatile, although slower to write, and can only sustain a limited number of writes.

2.1.6 Security Management

Processes handled by a sensor operating system's file system must be protected from one another's activities, using various mechanisms to ensure that only processes with authority can gain access to the resources of the operating system.


Figure 2.4: Memory: Compile and Run Time

The kernel must be created in a way that protects itself and other processes from being accessed by irregular processes, in an address space with a collection of virtual memory locations where access rights apply. An address space is a unit for the management of a process's virtual memory.

The execution of application code is done in a distinct user-level address space for each application. A user process, or user-level process, executes in user mode in a user-level address space with restricted memory access, while the kernel's execution takes place in the kernel's address space; both the application processes and the kernel processes thus need to be protected. Control can be transferred from the user-level address space to the kernel's address space via an exception such as an interrupt or a system call trap, the invocation mechanism for resources managed by the kernel. The kernel shares key data structures, such as the queue of runnable threads, yet still keeps some of its working data private.

The main reason for including protection is to prevent mischievous and intentional violation of access restrictions. Protection can also improve reliability by detecting errors at the interfaces of component subsystems [18]. File systems in sensor node operating systems follow certain principles in administering permissions, granting access rights to specific users and for specific uses of their processes.


The permissions are managed in three different address spaces, which carry READ, WRITE and EXECUTE options on shared files; a component of an operating system can operate with a single option or with a combination of them.

Within these three domains, a dynamic association of processes can switch permissions between themselves, and each domain may be represented either by a process or by a system/procedure call to other processes, depending on the call to be made from the loader.

2.2 SENSOR NODE OPERATING SYSTEMS

2.2.1 TINYOS

TinyOS is a component-based operating system that utilizes a unique software architecture designed specifically around a static resource allocation model for resource-constrained sensor network nodes [27]; only a very small part is based on dynamic allocation, which can introduce difficult failure modes into applications. TinyOS is intended for wireless sensor networks that execute concurrent, reactive programs operating under severe memory and power constraints [1, 10] in an event-driven way, with the scheduler operating within a single thread of execution. System execution is typically triggered when an event posts one or more tasks to the queue and quickly leaves its event context [28].

The components in TinyOS are written in nesC ("network embedded systems C") [7], a dialect of C that adds new features to support TinyOS's structure and execution model.

A TinyOS application gives a warning when any global variable that can be touched by an interrupt handler is accessed outside of an atomic section. An application is a graph of components, each with its interfaces; it typically conserves energy by going to sleep mode most of the time when not in use, running at a low duty cycle in an interrupt-driven manner. Commands, tasks and events are the mechanisms for inter-component communication in TinyOS [10, 29]. A command is a request sent to a component to perform a service by executing a task, while an event signals the completion of the service; commands and events cannot block, because they are decoupled from the task through a split-phase mechanism.

TinyOS maintains a two-level concurrency scheduling structure, so that a small amount of processing associated with hardware events can be performed immediately, while long-running tasks are interrupted. The execution model is similar to finite state machine models, but considerably more programmable. The task scheduler uses a non-preemptive FIFO scheduling policy; interrupts may preempt tasks (and each other), but not during atomic sections, which are implemented by disabling interrupts. Table 2.1 below shows some of the core interfaces provided by TinyOS. We discuss the execution model of TinyOS, its components and the nesC language in the following sections.


Interface            Description
ADC                  Sensor hardware interface
Clock                Hardware clock
EEPROM Read/Write    EEPROM read and write
Hardware ID          Hardware ID access
I2C                  Interface to I2C
Leds                 Red/yellow/green LEDs
MAC                  Radio MAC layer
Mic                  Microphone interface
Pot                  Hardware potentiometer for transmit
Random               Random number generator
ReceiveMsg           Receive Active Message
SendMsg              Send Active Message
StdControl           Init, start, and stop components
Time                 Get current time
TinySec              Lightweight encryption/decryption
WatchDog             Watchdog timer control

Table 2.1: Core interfaces provided by TinyOS-2.x [1]

2.2.1.1 Execution Model

TinyOS uses an event-based execution model to provide the levels of operating efficiency required in wireless sensor networks [30]; because many events are time critical, a handler is executed immediately when its event occurs. In a typical operating system, stack memory is allocated for storing activation records and local variables during the execution of a task, which allows a separate stack to be allocated for each running task. TinyOS, however, was built around the memory limitations of most low-power microcontrollers: its applications consist of multiple tasks that all share a single stack, reducing the amount of memory used during execution. Because of this design, a task must run to completion before giving up the processor and stack memory to another task. Tasks can be preempted by hardware event handlers, which also run to completion before giving up the shared stack, and a task must store any required state in global memory. The TinyOS scheduler follows a FIFO policy, but other policies, such as earliest-deadline-first, round-robin and priority scheduling, have also been implemented [30, 29].
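The following self-contained C sketch models this run-to-completion discipline: posting a task appends a function pointer to a FIFO queue, and the scheduler loop runs each task to completion before taking the next. It is an illustrative model, not the actual TinyOS scheduler code, which lives inside the OS itself.

/* Illustrative model of a FIFO, run-to-completion task queue. */
#include <stdio.h>

#define QUEUE_LEN 8
typedef void (*task_fn)(void);

static task_fn queue[QUEUE_LEN];
static int head = 0, count = 0;

/* post() is what an event handler would call to defer work. */
static int post(task_fn t) {
    if (count == QUEUE_LEN) return 0;        /* queue full: post fails */
    queue[(head + count) % QUEUE_LEN] = t;
    count++;
    return 1;
}

static void sample_task(void) { printf("task ran to completion\n"); }

int main(void) {
    post(sample_task);
    post(sample_task);
    /* Scheduler loop: run tasks FIFO, each to completion. When the
     * queue is empty, a real mote would put the CPU to sleep. */
    while (count > 0) {
        task_fn t = queue[head];
        head = (head + 1) % QUEUE_LEN;
        count--;
        t();                                 /* no preemption by other tasks */
    }
    return 0;
}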

• Event Based Programming

In TinyOS, a single execution context is shared between unrelated processing tasks, with each system module designed to operate by continually responding to incoming events. An event arrives with its required execution context, and when the event processing is completed, the context is returned to the system. It has been shown [31, 32] that event-based programming can achieve high performance in concurrency-intensive applications.

• Task

A task is a function that a component tells TinyOS to run later. There are times when a component needs to do something that can be done a little later; this gives TinyOS the ability to defer the computation until it has dealt with everything else that is waiting first.

An event-based program is limited by long-running calculations that affect the execution of other time-critical sub-systems: if an event does not complete in due time, all other system functions are halted to allow the long-running computation to be fully executed. Tasks can be scheduled at any time but will not execute until all currently pending events are completed. They can also be interrupted by low-level system events, allowing long computations to run in the background while system event processing continues. Priority scheduling for tasks could be implemented on top of the FIFO queuing, but it is unusual to have multiple outstanding tasks.

• Atomicity

The TinyOS task primitive also provides a mechanism for creating mutually exclusive sections of code around long-running computations. Although non-preemption eliminates races among tasks, there are still potential races between code that is reachable from tasks (synchronous code) and code that is reachable from at least one interrupt handler (asynchronous code); in interrupt-based programming, such data race conditions create bugs that are difficult to detect. An application uses tasks to guarantee that data modification occurs atomically with respect to other tasks: since tasks run to completion without being interrupted by other tasks, all tasks are atomic with respect to each other, eliminating the possibility of data races between tasks. To reinstate atomicity in the rest of the system, the programmer can either convert all of the conflicting code to tasks (synchronous code only) or use atomic sections to update the shared state.
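A sketch of the atomic-section idea in plain C is shown below: state shared with an interrupt handler is only updated with interrupts disabled. The enable/disable functions are hypothetical stand-ins for the MCU-specific instructions; in nesC the compiler generates the equivalent code for atomic blocks.

/* Illustrative atomic section: the enable/disable functions are
 * hypothetical stand-ins for instructions such as cli/sei or dint/eint. */
#include <stdint.h>

static volatile uint8_t irq_enabled = 1;                    /* simulated flag */
static void disable_interrupts(void) { irq_enabled = 0; }   /* stand-in */
static void enable_interrupts(void)  { irq_enabled = 1; }   /* stand-in */

static volatile uint16_t shared_counter;    /* also updated by an ISR */

/* Synchronous code updating state reachable from asynchronous code:
 * the read-modify-write must not be interleaved with the ISR. */
void increment_counter(void) {
    disable_interrupts();                   /* begin atomic section */
    shared_counter++;                       /* safe read-modify-write */
    enable_interrupts();                    /* end atomic section */
}

int main(void) {
    increment_counter();
    return 0;
}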

2.2.1.2 TinyOS Components

The component model provided by nesC allows an application programmer to easily combine independent components, connected through interfaces, into an application-specific configuration, rather than developing libraries of functions to be called by user programs. Components are separate blocks of code defined by their interfaces for both input and output. To provide an interface, a component must implement the set of commands defined for it, and to use an interface a component must implement a different set of functions, called events [1]; a component that wants to utilize the commands of a specific interface must also implement the events of that same interface. Components that are not used are not included in the application library. Once a component structure has been formed, the components must be organized in an application-specific way to implement the desired functionality using the various component configurations.

A component has two kinds of interfaces: those it provides and those it uses. A component must be able to split a large computation into smaller chunks that execute one after the other. Configurations wire the functional components together; a component may implement an interface, provide multiple interfaces, or provide multiple instances of a single interface, and components are reusable as long as each instance is given a separate name. A TinyOS component has four interrelated parts: a set of command handlers, a set of event handlers, an encapsulated private data frame, and a bundle of simple tasks. A typical example of how TinyOS components are interrelated is


the StdControl interface of Main in the Surge application (Fig 2.5 [10]), which is represented as a directed graph in which the wiring of commands and events between components defines the edges.

StdControl is wired to Photo, TimerC and Multihop; each component has its own namespace, which it uses to refer to its commands and events, and the interfaces are bidirectional.

The programming structure is represented below, with each interface defined in its own block.

interface StdControl {
  command result_t init();
  command result_t start();
  command result_t stop();
}

interface Timer {
  command result_t start(char type, uint32_t interval);
  command result_t stop();
  event result_t fired();
}

interface Clock {
  command result_t setRate(char interval, char scale);
  event result_t fire();
}

interface ADC {
  command result_t getData();
  event result_t dataReady(uint16_t data);
}

interface SendMsg {
  command result_t send(TOS_Msg *msg, uint16_t address, uint8_t length);
  event result_t sendDone(TOS_Msg *msg, result_t success);
}

Typical TinyOS interface definitions [1].

The SurgeC application described above is represented by the diagram in Fig 2.5, showing its interfaces, events and commands.


Figure 2.5: SurgeC [1]

2.2.1.3 The nesC Language

TinyOS is implemented in a dialect of C [7] known as nesC, whose main goals are to allow strict checking at compile time and to ease the development of TinyOS components. It was developed to replace the large macros in the original C implementation used to express the component graph and the command/event interfaces between components. nesC offers a programming model in which components interact through interfaces, and a concurrency model based on run-to-completion tasks; event handlers may interrupt tasks, and the compiler detects data race conditions at compile time. The concurrency model of nesC also prevents interface calls from blocking [7]. Since most of the program analysis is done at compile time, cross-component optimization is possible, providing function inlining and eliminating unreachable code. nesC is a static language that uses static memory allocation and has no function pointers; behavior is implemented in C-like modules, and configurations wire components together using a select-and-wire mechanism. Despite the lack of memory protection, variables still cannot be directly accessed from outside a component. These limitations help the programmer know the resource requirements of a given application and force requirements to be determined in advance, avoiding the error-prone behavior that run-time characteristics may impose.
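The split-phase, non-blocking interaction that these interfaces express can be modeled in plain C, as in the sketch below: the command returns immediately, and completion is signalled later through a callback, so neither side ever blocks. All names here are illustrative, not part of TinyOS.

/* Illustrative split-phase pattern: request now, completion event later. */
#include <stdio.h>

typedef void (*send_done_fn)(int success);

static send_done_fn pending_done;    /* callback registered by the caller */

/* Phase 1: the command starts the operation and returns at once. */
static int send(const char *msg, send_done_fn done) {
    pending_done = done;
    printf("radio started sending: %s\n", msg);
    return 1;                        /* accepted, not yet complete */
}

/* Phase 2: later, the radio's interrupt/task path signals completion. */
static void radio_interrupt(void) {
    if (pending_done) pending_done(1);
}

static void on_send_done(int success) {
    printf("sendDone, success=%d\n", success);
}

int main(void) {
    send("hello", on_send_done);     /* returns immediately, no blocking */
    radio_interrupt();               /* completion event fires afterwards */
    return 0;
}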

2.2.2 CONTIKI

Contiki is a hybrid-model operating system based on a very lightweight event-driven kernel with protothreads and optional per-process preemptive multithreading. It is designed in a flexible manner, allowing individual programs and services to be dynamically loaded and unloaded in a running system across a large number of sensor devices [11, 13, 12].

Contiki is designed for a variety of constrained systems, ranging from modern 8-bit microcontrollers for embedded systems to old 8-bit home computers [33, 12]. A protothread is driven by repeated procedure calls to the function in which the protothread runs; whenever the function is called, the protothread runs until it blocks or exits, and the scheduling of protothreads is done by the application that uses them.
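The mechanism can be illustrated with a minimal protothread in plain C, using the stored-line-number switch trick on which protothreads are built: the "thread" is an ordinary function that records a resume point and is driven by repeated calls. This is a reduced sketch of the published protothreads idea, not Contiki's actual pt.h header.

/* Minimal protothread sketch: resume points via switch on a stored line. */
#include <stdio.h>

struct pt { unsigned lc; };                     /* local continuation */

#define PT_WAITING 0
#define PT_ENDED   1
#define PT_INIT(pt)            ((pt)->lc = 0)
#define PT_BEGIN(pt)           switch ((pt)->lc) { case 0:
#define PT_WAIT_UNTIL(pt, c)   do { (pt)->lc = __LINE__; case __LINE__: \
                                    if (!(c)) return PT_WAITING; } while (0)
#define PT_END(pt)             } return PT_ENDED

static int data_ready;

static int reader(struct pt *pt) {
    PT_BEGIN(pt);
    PT_WAIT_UNTIL(pt, data_ready);              /* "blocks" without a stack */
    printf("data consumed\n");
    PT_END(pt);
}

int main(void) {
    struct pt pt;
    PT_INIT(&pt);
    while (reader(&pt) != PT_ENDED)             /* repeated procedure calls */
        data_ready = 1;                         /* condition becomes true */
    return 0;
}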

Preemptive multithreading is included as an optional library, linked only with programs that explicitly need it. Computations can be performed in a separate thread, allowing events to be handled while the computation runs in the background [34].

The main abstraction provided by Contiki is event-based CPU multiplexing, together with memory management to support loadable programs; other abstractions are implemented as libraries or application programs [35, 11].

Because the kernel does not preempt event handlers, Contiki lacks real-time guarantees; these can instead be provided by interrupts from an underlying real-time executive or by hardware timers, which is the main reason why interrupts are never disabled by default in Contiki. The number of abstractions provided by the Contiki kernel is kept to a minimum to reduce size and complexity while keeping the system flexible. Since the ESB hardware for which Contiki was designed does not support memory protection, Contiki has been designed without any protection mechanism in mind: all processes on the ESB [24] hardware share the same address space, in the same protection domain.

2.2.2.1 Design

A running Contiki system is made up of the kernel, libraries, the program loader and processes that can be dynamically loaded or unloaded at run-time. A process is either an application program or a service, where a service implements functionality used by more than one application process. The kernel acts as the communication channel that lets device drivers and applications communicate directly with the hardware; it only keeps a pointer to the process state, which lives in the process's private memory. Inter-process communication takes place by posting events.

Figure 2.6: Contiki Partitioning into core and loaded programs


The Contiki system is divided into two parts at compile time: the core and the loaded program modules (Fig 2.6). The core is made up of four internal modules: the kernel, the program loader, the language run-time libraries and the communication services module. A single binary image compiled from the core is stored in the devices before deployment; the compiled core cannot be modified once deployed, unless a special boot loader is used to overwrite it. The program loader obtains binary images either from the communication stack or from EEPROM and loads the programs into the system; programs to be loaded are first stored in EEPROM before being loaded into code memory.

2.2.2.2 Kernel

The Contiki kernel is designed to be very small in terms of code size and memory requirements. It consists of a lightweight event scheduler that dispatches events to running processes and uses a polling mechanism to call processes. Polling is used by processes to check status updates of the hardware devices, and it can be seen as high priority events scheduled in between the asynchronous events. Event handlers run to completion once scheduled, because the kernel does not preempt them; however, event handlers may use internal mechanisms to achieve preemption. CPU multiplexing and message passing are the only basic features provided by the kernel abstractions; the others come as built-in libraries.

Programs can be linked with libraries in three different ways:

• Statically, with libraries that are part of the core

• Statically, with optional libraries that are part of the loadable program

• Through dynamic linking, calling libraries that are replaceable at run-time

Contiki uses a single shared stack for the execution of its processes. The kernel supports two kinds of events: asynchronous events, which are a form of deferred procedure call, and synchronous events, mainly used for inter-process communication. The use of asynchronous events reduces stack space requirements, as the stack is shared between event handlers; they are enqueued by the kernel and dispatched to the target process at a later time. A synchronous event, on the other hand, immediately causes the target process to be scheduled. The process of looking up the new process ID in the list of processes, storing the ID in an internal variable and calling the event handler of the new process is known as context switching, and its duration is critical to the performance of library calls; it occurs when asynchronous events are dispatched and poll events are scheduled, or when synchronous events are passed between processes.

The Contiki kernel does not contain any explicit power-save abstractions [33, 36], but leaves it to the application-specific parts of the system and the networking protocols to take care of idle periods and so reduce power consumption.

Preemptive multithreading is implemented as an optional library in Contiki, for applications that explicitly require a multithreaded model of operation. Preemption is implemented using a timer interrupt that saves the processor registers onto the stack and switches back to the kernel stack. The library provides the necessary stack management functions, and threads execute on their own stacks until they either explicitly yield or are preempted. An operation that invokes the kernel must first switch to the system stack and turn off preemption in order to avoid race conditions in the running system, which is why the multithreading library provides its own event-posting functions to the kernel.

2.2.3 MANTISOS (MOS)

The MANTIS (Multimodal Networks of In-situ Sensors) operating system (MOS) [15, 16] is a sensor operating system written in standard C and executed as threads, with an integrated hardware and software platform and built-in functionality. Its multi-layered design behaves similarly to a UNIX runtime environment, with the small difference that MOS uses zero-copy mechanisms at all levels of the network stack, including the COMM layer [37]. It is known as multimodal because of its applicability to various deployment scenarios such as weather surveys, biomedical research, embedded interfaces, wireless networking research and artistic works. MOS seeks to provide services such as preemptive multithreading, using an interface similar to the standard POSIX threads API [19], and support for multiple hardware platforms, including the commonly known MICA2, MICA2DOT and MICAZ motes, the TelosB, the Mantis Nymph and x86 Linux.

MOS also provides a hardware driver system that incorporates support for a resource-constrained environment, power management, dynamic reprogramming, fast context switching, a round-robin scheduling policy and operation within a finite amount of memory, which can be as low as 500 bytes including kernel, scheduler and network stack.

In a thread-driven system, apart from handling shared resources, an application programmer does not have to worry about tasks blocking, briefly or indefinitely, during execution, because the scheduling policy preemptively time-slices between threads, allowing some tasks to continue executing even though others may be blocked. Concurrency through multithreading helps prevent one long-lived task from blocking the execution of a subsequent time-sensitive task.

The user-level threads T3, T4 and T5 in Fig 2.7 represent the multithreading present in MantisOS.

Traditional multithreaded embedded operating systems such as QNX [38] and VxWorks [21] occupy too large a chunk of memory to execute on micro sensor nodes, which is one of the motives behind the creation of MOS. Other key factors include its flexibility in the form of cross-platform support [22], with testing across PCs, PDAs and different micro sensor platforms, and its support for remote management of in-situ sensors through dynamic reprogramming and remote login. The architecture of the MOS system APIs can be classified into the kernel/scheduler, COMM, DEV and NET layers, plus other devices.


Figure 2.7: MantisOS Architecture

2.2.3.1 MOS Kernel

The MOS kernel includes scheduling and synchronization mechanisms and uses POSIX-like semantics based on a multithreaded kernel. It provides counting semaphores and mutual-exclusion semaphores, with a round-robin scheduling mechanism [15, 16] for multiple threads at the same priority level; the maximum stack space is specified for each thread in the same address space, allowing a block of data memory to be allocated for the thread's stack [26].
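Since MOS's interface resembles the standard POSIX threads API, a plain pthreads program illustrates the model it offers: two preemptively scheduled threads sharing a counter behind a mutual-exclusion lock. MOS's own calls (such as its thread creation primitive) differ in their exact signatures, so the sketch below sticks to the portable API.

/* POSIX threads sketch of the preemptive model MOS's API resembles. */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static long shared_count;

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);     /* cf. MOS mutual-exclusion semaphore */
        shared_count++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, worker, NULL);   /* cf. MOS thread creation */
    pthread_create(&b, NULL, worker, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("count = %ld\n", shared_count);    /* always 200000 with the lock */
    return 0;
}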

As part of its idle thread, the MOS kernel executes a simple algorithm to determine the CPU power mode: if a thread is running, the CPU is left in active mode, and if none is running it is put into power-save mode. For efficient power programming, all devices in MOS are initially set to the off state, in which they consume the minimum amount of power possible.

Memory management in MOS uses static allocation of a known memory size at start-up, beginning with the low addresses. At node start-up, the stack pointer is at the high end of memory, and the initialization thread's stack is located in the top block of memory; after starting up, the initialization thread becomes the idle thread, keeping the same stack space, and the remaining memory is managed as a heap. Stack space is allocated out of the available heap space when a thread is created, and the space is reclaimed when the thread exits. This makes it easy to detect overruns in a thread's stack and leaves room for dynamic reprogramming by the application developer.


2.2.3.2 COMM and DEV Layers

MOS hardware devices are divided into two main categories: the unbuffered (synchronous) devices, which are associated with the DEV layer, and the buffered (asynchronous) devices, which receive data in the COMM layer. Sensors, file systems and the random-number generator are typical examples of synchronous devices, whose operations return only after they have completed. Several may exist in a single system, and all are accessible through the same set of read, write, mode (on and off) and ioctl functions, as with UNIX stream functions.

COMM devices are handled separately from DEV-layer devices because they must be able to receive data in the background, at times when no application thread is currently blocked on a receive call; radios and serial ports are examples of COMM-layer devices. The COMM layer and the DEV layer have similar interfaces: sending is synchronous, and receiving blocks until a packet is present. COMM-layer devices will not receive packets until they are turned on; once turned on, reception runs in the background and received packets are buffered. The layer also provides the ability to perform a select, with either a non-blocking or a time-out option, over multiple devices, returning a packet from the selected device.

Some of the interfaces used in programming MOS are listed below [15].

Networking: com_send, com_recv, com_ioctl, com_mode
On-board sensors (ADC): dev_write, dev_read
Visual feedback (LEDs): mos_led_toggle
Scheduler: thread_new

2.2.4 SOS

SOS is a sensor operating system whose kernel implements messaging, dynamic memory, module loading and unloading, and other services. SOS modules are not processes [3, 16, 39, 28]: they are scheduled cooperatively with no memory protection, but SOS still protects against common bugs using a memory integrity check incorporated in the operating system. Dynamic reconfigurability of the operating system libraries on the sensor node after deployment is one primary motivation and goal of SOS; another is to ease programming complexity by providing commonly needed services and by increasing memory reuse.

Modules are written in the standard C programming language, with each module implementing a message handler, in place of the normal main function, as a single switch/case block that directs messages to module-specific code. SOS supports compiling unmodified TinyOS modules directly into SOS application code.
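The pattern can be shown in a self-contained C sketch: the module is entered through a single message handler that switches on the message type. The Message type and MSG_* constants below are local stand-ins for illustration, not SOS's actual definitions.

/* Illustrative model of an SOS-style module message handler. */
#include <stdio.h>

enum { MSG_INIT, MSG_TIMER, MSG_FINAL };       /* illustrative message types */

typedef struct { int type; } Message;

typedef struct { int blink_state; } module_state;

/* The module's single entry point, in place of a main function. */
static int module_handler(module_state *s, Message *msg) {
    switch (msg->type) {
    case MSG_INIT:                             /* sent when module is loaded */
        s->blink_state = 0;
        printf("module loaded\n");
        break;
    case MSG_TIMER:                            /* module-specific work */
        s->blink_state = !s->blink_state;
        printf("led %s\n", s->blink_state ? "on" : "off");
        break;
    case MSG_FINAL:                            /* sent when module is removed */
        printf("module unloaded\n");
        break;
    }
    return 0;
}

int main(void) {
    module_state s;
    Message init = {MSG_INIT}, tick = {MSG_TIMER}, fin = {MSG_FINAL};
    module_handler(&s, &init);
    module_handler(&s, &tick);
    module_handler(&s, &fin);
    return 0;
}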

Reconfigurability allows concentration on three neglected areas of wireless sensor networks: fault tolerance, heterogeneous deployments and new programming paradigms. Time-critical tasks are also improved by moving processing out of interrupt context through priority scheduling.

Fault tolerance is addressed in SOS by the ability to incrementally deploy or remove software modules, replacing them with newer and more stable versions, without physical presence and with minimal interruption of node services.

Heterogeneous deployment of applications on top of homogeneous sensor nodes is achieved by building and configuring specialized applications, without fear of overhead from or interactions with other applications, through the direct loading of modules onto nodes even after deployment.

New programming methodologies: starting from the observation that most applications were built as a single monolithic kernel [14], new programming methodologies divided this monolithic kernel into easier-to-understand components combined during compilation. SOS expands on this by keeping the components separate through the compilation phase, enabling the construction of true software agents and active networking, and also exhibiting advances in macro programming, since users no longer focus on programming at the node level (monolithic kernel) [26] but at the application level.

Two applications developed for SOS are Visa, which uses distributed Voronoi spatial analysis [14] to calculate the area covered by an individual node, and an application used by motes for augmented recording. Memory integrity checks, RPC mechanisms and the discovery of misbehaving modules are also among the goals set out for SOS.

SOS expects and supports multiple modules executing on its kernel at the same time. It uses modular updates rather than full binary system images, does not force a node to reboot after updates, and installs updates directly into program memory without expensive external flash access. It includes a publish-subscribe scheme for distributing modules within a deployed sensor network, which is not limited to running the same set of modules and kernels on all nodes. SOS dynamically links modules and uses a priority scheduling scheme and a simple dynamic memory subsystem; its kernel services support changes after deployment and provide a high-level API, reducing the amount of abstraction the programmer must implement.

2.2.4.1 Modules

Modules are independent binaries that implement a task or function; modification of the kernel is only needed when low-level hardware or resource management must be changed. Good coupling of SOS modules helps reduce the overhead that may otherwise be incurred. The flow of execution enters a module through two mechanisms: message delivery from the scheduler, implemented through a module-specific handler function (which also handles the init and final messages sent by the kernel during module coupling), and calls to functions registered by the module for external use, for operations that need to run synchronously. The function calls are made available through a function registration and subscription scheme, bypassing the scheduler to provide low-latency communication between modules [3, 40].


2.3 COMPARATIVE OVERVIEW

In this section we present a short overview of related work comparing some of the most commonly employed sensor node operating systems in wireless sensor networks, against the background of the four sensor node operating systems described above. Since their development is an ongoing process, improvements are continually made to the source code and to performance.

An event-driven run-to-completion operating system is well suited to highly memory-constrained devices, while it is almost impossible for a multithreaded system to fit in such limited memory and still support multiple threads of execution in practice. Event-driven execution is also suited to achieving good energy efficiency: when no events need to be handled, the system does not execute and can go to sleep mode. Some researchers have argued that events are a bad idea for high concurrency and that threads can achieve all of the strengths of events, showing that improper implementation is the reason for the assumption that threading is a bad idea. Some of the corresponding notions in the two models are given below, in an event vs. thread pattern [41].

• Event handlers vs. monitors.

• Events accepted by a handler vs. functions exported by a module.

• SendMessage / AwaitReply vs. procedure call or join.

• SendReply vs. return from procedure.

• Waiting for messages vs. waiting on condition variables.

In [1], Levis et al. present the design and motivations of TinyOS. TinyOS is a state-of-the-art operating system for sensor nodes and has been ported to many sensor mote platforms, and several comparisons have been made between TinyOS and various other sensor node operating systems.

TinyOS uses a special description language for an event-driven operating environment, composing a system out of smaller components [10, 42] which are statically linked with the kernel into a complete image of the system. [11] claims that after this linking, modifying the system is not possible, and proposes instead a dynamic structure which allows programs and drivers to be replaced at run-time without relinking. This is also one of the principles behind the design of SOS, and it was in turn criticized in [43]: although this facility makes it possible to update some of the software modules on individual nodes, and to add new modules to nodes after deployment, modules in SOS are not processes; they are scheduled cooperatively and are independent of each other. SOS therefore has no global real-time scheduling and thus cannot guarantee a real-time schedule of modules. Levis and Culler have developed Mate [44], a virtual machine for TinyOS devices, an approach similarly used in MagnetOS [45]. In order to provide run-time reprogramming, code for the virtual machine, which is specifically designed for the needs of typical sensor network applications, can be downloaded into the system at run-time.

In [13], Dunkels et al. propose the Contiki operating system and argue that the advantage of using a virtual machine instead of native machine code is that the virtual machine code can be made smaller, thus reducing the energy consumed in transporting the code over the network. They note a drawback in this approach: the increased energy spent interpreting the code of long-running programs, where the energy saved during transport of the binary code is instead spent in the overhead of executing it. They claim that Contiki programs use native code and can therefore be used for all types of programs, including low-level device drivers, without loss of execution efficiency.

Related work has also been done in which TinyMOS was proposed [28], combining the features of TinyOS and MantisOS in a single system, a combination that reflects an approach similar to Contiki's. The authors offer a solution providing an evolutionary pathway that ultimately allows nesC applications to execute in parallel with other system threads, with TinyOS run as a single scheduled thread on top of a multithreaded scheduler.

In [16], Shah Bhatti et al. propose MANTIS OS, claiming that its multithreading has benefits over TinyOS, which does not support multimodal tasking well and lacks real-time scheduling, since all program execution is performed in tasks that run to completion; this makes TinyOS less suitable for real-time sensor network systems. MantisOS uses a traditional preemptive multithreaded model of operation that also enables reprogramming of both the entire operating system and parts of the program memory, by downloading a program image onto EEPROM and then burning it into flash ROM. Due to the multithreaded semantics of MantisOS, every program must have stack space allocated from the system heap, and locking mechanisms must be used to achieve mutual exclusion on shared variables. In contrast, Contiki, whose multithreading capability is provided by a library, uses an event-based scheduler without preemption, thus avoiding the allocation of multiple stacks and the locking mechanisms.

MANTIS [3] implements a lightweight subset of the POSIX threads API targeted at embedded sensor nodes, which introduces context-switching overhead with its concurrency. This is also a problem in Contiki, another event-driven operating system, which limits the number of its concurrent threads to two; SOS addresses it by adopting an event-driven architecture that, with its module architecture, is able to support a comparable amount of concurrency without the context-switching overhead.


References
