Academic year: 2021
Mälardalen University, Västerås, Sweden

Thesis for the Degree of Master of Science in Computer Science with Specialization in Embedded Systems, 30.0 credits

ONLINE ADMISSION CONTROL FOR MULTI-SWITCH ETHERNET NETWORKS

Yong Du
ydu13001@student.mdh.se

Examiner: Thomas Nolte, Mälardalen University, Västerås, Sweden

Supervisor: Mohammad Ashjaei

Abstract

The use of switched Ethernet protocols in real-time domains, where timing requirements exist, is increasing. This is mainly due to the features of switched Ethernet, such as its high throughput and availability; compared to other network technologies, switched Ethernet supports higher data rates. Besides the timing requirements that must be fulfilled in real-time applications, another requirement is commonly demanded in real-time systems: the ability to change, add or remove the messages crossing the network during run-time. This ability is known as on-line reconfiguration, and it should be carried out in a way that does not violate the real-time behavior of the network. This means that the guarantee of meeting the timing requirements of the messages should not be affected by the changes in the network. In this thesis, we focus on on-line reconfiguration for the multi-hop HaRTES architecture, which is a real-time switched Ethernet network. The HaRTES switch is a modified Ethernet switch that provides real-time guarantees as well as an admission control to be used for on-line reconfiguration. We study the existing reconfiguration methods, including centralized and distributed approaches. Then, we propose a solution to provide on-line reconfiguration for the multi-hop HaRTES architecture, based on the studied methods. For this purpose, we use a hybrid method to achieve the advantages of both traditional centralized and distributed approaches. Moreover, we perform two different experiments. In the first experiment we focus on the decision-making part of the method, which decides whether a requested reconfiguration is feasible. We measure the time required to make the decision in different network settings. In the second experiment, we focus on the entire reconfiguration process, of which decision making is a part. Again, we show the time needed to perform the reconfiguration in several network settings.
Finally, we conclude the thesis by presenting possible future works.

Keywords: Online reconfiguration; Switched Ethernet; Real-time switched Ethernet; The HaRTES architecture; Response time analysis


Table of Contents

1 Introduction 5

1.1 Motivation . . . 6

1.2 Thesis Contributions . . . 6

1.3 Outline of the Thesis . . . 7

2 Background 8

2.1 Ethernet Switch . . . 8

2.2 Real-Time Ethernet Protocols . . . 9

2.2.1 Ethernet Powerlink . . . 9

2.2.2 PROFINET IRT . . . 9

2.2.3 TTEthernet . . . 10

2.2.4 Ethernet AVB . . . 11

2.2.5 The FTT-SE protocol . . . 12

3 The HaRTES Architecture 15

3.1 The HaRTES Switch Structure . . . 15

3.2 The Single-Switch HaRTES Architecture . . . 15

3.3 The Multi-hop HaRTES Architecture . . . 17

3.3.1 Multi-hop HaRTES Topology . . . 17

3.3.2 Scheduling Methods . . . 18

3.4 Response Time Analysis . . . 19

3.4.1 System Model . . . 19

3.4.2 Response Time Analysis for Synchronous Message . . . 19

3.4.3 Response Time Analysis for Asynchronous Message . . . 22

4 Problem Formulation 23

5 Solution Method 24

6 On-line Reconfiguration Design 25

6.1 Cluster-Tree Topology . . . 25

6.2 Dynamic Reconfiguration Method . . . 26

6.2.1 Request . . . 26

6.2.2 Feasibility Check . . . 26

6.2.3 QoS Management . . . 27

6.2.4 Mode-change . . . 27

6.3 Discussion . . . 30

7 Evaluation 31

7.1 Implementation of the Response Time Analysis . . . 31

7.2 Decision Making of On-line Reconfiguration . . . 34

7.3 The Reconfiguration Process . . . 36

8 Related work 39

9 Conclusion 42

9.1 Summary . . . 42


Appendix B Response Time Functions 47

B.1 Idle Time Calculation Function . . . 47

B.2 Inflation Time Calculation Function . . . 47

B.3 Blocking Time Calculation Function . . . 48

B.4 Interference Time Calculation Function . . . 48

B.5 Switch Delay Calculation Function . . . 50

B.6 Response Time Calculation Function . . . 51

B.7 Total Response Time Calculation Function . . . 52

Appendix C Experimental Data 53

C.1 The raw data for decision making time, EC=1ms LSW=70%EC . . . 53

C.2 The raw data for decision making time, EC=2ms LSW=70%EC . . . 53

C.3 The raw data for decision making time, EC=1ms Message Number=20 . . . 53

C.4 The raw data for reconfiguration process time, Period=50EC, EC=1ms, LSW=70%EC . . . 53

C.5 The raw data for reconfiguration process time, Period=100EC, EC=1ms, LSW=70%EC . . . 54

C.6 The raw data for reconfiguration process time, Period=50EC, EC=1ms, Message Number=20 . . . 54


List of Figures

1 The Ethernet switch internal structure [1] . . . 8

2 Ethernet Powerlink Transmission Cycle [1] . . . 9

3 PROFINET IRT communication cycle [2] . . . 10

4 TTEthernet communication cycles [3] . . . 11

5 Ethernet AVB Protocol Stack [4] . . . 11

6 The FTT-SE Overview [5] . . . 13

7 Elementary Cycle in the FTT-SE [1] . . . 14

8 Internal HaRTES Switch Structure [6] . . . 15

9 Elementary Cycle in the HaRTES architecture . . . 15

10 The HaRTES Architecture Overview . . . 16

11 An example of single-switch HaRTES architecture . . . 17

12 Multi-Hop HaRTES Topology . . . 17

13 Operation of the RBS method . . . 18

14 The path links between source and destination nodes in the multi-hop HaRTES architecture . . . 19

15 Switch delay of message m2 . . . 21

16 Solution Method used in this thesis . . . 24

17 Clusters in Multi-hop HaRTES architecture . . . 26

18 Reconfiguration process in Multi-hop HaRTES architecture . . . 28

19 Reconfiguration process example . . . 29

20 The Network architecture under evaluation . . . 31

21 Message structure . . . 31

22 Decision making for EC=1ms LSW=70%EC . . . 35

23 Decision making for EC=2ms LSW=70%EC . . . 35

24 Decision making time for changed LSW . . . 36

25 Reconfiguration process time, when Period= 50EC EC=1ms LSW=70%EC . . . . 37

26 Reconfiguration process time, when Period= 100EC EC=1ms LSW=70%EC . . . 37

27 Reconfiguration process time for changed LSW, when Period= 100EC EC=1ms . 38

28 Time Plan of thesis . . . 46

List of Tables

1 EC=1ms LSW=70%EC . . . 53

2 EC=2ms LSW=70%EC . . . 53

3 EC=1ms Message Number=20 . . . 53

4 Period= 50EC EC=1ms LSW=70%EC . . . 53

5 Period= 100EC EC=1ms LSW=70%EC . . . 54


1 Introduction

Over the last decades, Ethernet has been widely used in numerous industrial applications. Ethernet is a computer networking technology based on the IEEE 802.3 standard. It still plays an essential part in the construction of enterprise information systems, intelligent buildings and the information superhighway. Ethernet has the following advantages:

1) High data transmission rate

Ethernet provides several data rates: 10 Mbps, 100 Mbps, 1 Gbps and 10 Gbps;

2) Support for various physical media and topologies

Ethernet supports a variety of transmission media, including coaxial cable, twisted-pair cable, antenna, etc., so users can choose according to bandwidth, range, price and other factors. Moreover, Ethernet supports bus and star topologies, which makes it scalable. It also allows a variety of redundant connection modes, which can improve the performance of the network;

3) Good open specification

Ethernet, typically used with the TCP/IP protocol suite, is an open network standard, and it is easy for different manufacturers to interconnect their equipment. This feature is well suited to solving the compatibility and interoperability problems between equipment from different manufacturers in a control system. Ethernet is the most widely used LAN technology, follows the international standard ISO/IEC 8802-3, and has a wide range of technical support. Almost all programming languages support the development of Ethernet applications, such as Java, VC++ and Visual Basic;

4) Low cost

In the engineering and application field, Ethernet has been used for many years, so there is a large pool of experts familiar with Ethernet applications and a lot of technical experience that can be reused. The large number of existing resources greatly reduces the cost of developing, training for and maintaining Ethernet systems, which effectively reduces the overall system cost. It also accelerates system development;

These advantages make Ethernet a very suitable technology for industrial control systems. However, there are still many problems with Ethernet when it is used in industrial applications. For example, Ethernet does not provide power, so an additional power-supply cable is needed. Ethernet in control systems requires additional security controls, and can otherwise introduce security vulnerabilities. Moreover, the main problem with Ethernet is its use of the Carrier Sense Multiple Access / Collision Detection (CSMA/CD) arbitration mechanism. CSMA/CD leads to non-deterministic behavior of data transmission. Each node in the network sends information in messages on the channel. When a node starts transmitting, it also checks whether its message collides with other messages on the channel. If there is a collision, the node immediately stops sending, waits for an unpredictable time, and then retransmits. The node may therefore retransmit many times. This behavior makes Ethernet non-deterministic, which is undesirable for safety-critical control systems and industrial applications.
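The unpredictable waiting time mentioned above follows the truncated binary exponential backoff of IEEE 802.3. A minimal sketch (the function name is hypothetical; the slot time shown is the one for 10 Mbps Ethernet):

```python
import random

def csma_cd_backoff(collisions, slot_time=51.2e-6):
    """Truncated binary exponential backoff: after the n-th collision the node
    waits a random number of slot times drawn from [0, 2^min(n, 10) - 1]."""
    k = min(collisions, 10)
    return random.randrange(2 ** k) * slot_time
```

Because the waiting time is random and retransmissions may repeat, the delivery time of a frame cannot be bounded, which is exactly the non-determinism described above.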

In safety-critical control systems and industrial communications, messages are transmitted with specific requirements. Basically, the messages should be delivered within a specific time. Messages with such timing constraints are called real-time messages. Standard Ethernet cannot support timing constraints due to the CSMA/CD arbitration. However, switched Ethernet is seen as a promising way to overcome that limitation, as it eliminates collisions and can support timing constraints.

The use of switched Ethernet technologies in real-time domains, such as trains, aircraft and industrial applications, has become very common. The main reasons are its good features, i.e., cost-effectiveness, scalability, higher data rates and expandability. Switched Ethernet can support throughput up to 100 Mbps in embedded systems, which is higher than other network technologies. Moreover, switched Ethernet can flexibly connect various elements in different topologies like mesh, star and tree, which makes it convenient for switches to form a multi-hop architecture.

1.1 Motivation

However, many new requirements present new challenges to the real-time network field. Those requirements include growing network complexity due to highly interactive functions [7], incorporating diverse traffic patterns (event-triggered, time-triggered) and on-line reconfiguration [8]. Therefore, many Real-Time Ethernet (RTE) protocols, such as TTEthernet [9] and Profinet [10], have been developed to meet some of these requirements. Despite the timeliness guarantee provided by the RTE protocols, they still have limitations when applied in dynamic systems, where messages in the network may need to be added or removed during run-time. In order to handle dynamic reconfiguration in real-time networked embedded systems, the Hard Real-Time Ethernet Switching (HaRTES) architecture [11] has been developed. The HaRTES switch is an enhanced Flexible Time-Triggered (FTT) enabled switch, based on the master-slave technique. It supports both synchronous traffic, i.e., real-time periodic traffic, and asynchronous traffic, which comprises real-time sporadic traffic and other traffic. Moreover, a middleware [12] including on-line admission control and Quality of Service (QoS) management was proposed to support dynamic reconfiguration. However, the proposed QoS management and on-line reconfiguration target a small network with a single switch. As most network architectures have several switches, a multi-hop architecture is required. Therefore, admission control for the multi-hop HaRTES architecture is needed, which is the focus of this thesis.

1.2 Thesis Contributions

In this thesis, we work on on-line reconfiguration for the multi-hop HaRTES architecture. Adaptivity and on-line reconfiguration for the single-switch case have already been investigated [12]. Currently, there is no protocol for dynamic reconfiguration in the multi-hop HaRTES architecture. Thus, we begin by investigating different reconfiguration methods that have already been proposed in the literature. Then, we present a protocol to handle dynamic reconfiguration requests in the context of the multi-hop HaRTES architecture without disrupting the real-time behavior of the system. This means that the proposed reconfiguration protocol does not cause a deadline miss for the messages. Based on the introduction and motivation above, we formulate the goal of the thesis as follows:

The goal of the thesis is to provide an on-line reconfiguration protocol for the multi-hop HaRTES architecture, such that it does not affect the timeliness guarantee of the network.

We achieve the main goal by presenting the following contributions:

• We study the state of the art regarding the reconfiguration methods as well as resource reservation mechanisms in the network;

• We define a method to carry out the dynamic reconfiguration in the context of the multi-hop HaRTES architecture. For this purpose, we use a hybrid method to achieve the advantages of both traditional centralized and distributed approaches;


1.3 Outline of the Thesis

Our thesis consists of 9 chapters, and the rest of the thesis is organized as follows.

• Chapter 2 presents background and basic concepts related to the thesis, including the standard switched Ethernet protocol and the switch structure. In addition, the chapter discusses some limitations of using a standard Ethernet switch in real-time applications, and describes solutions from the literature to overcome these limitations;

• Chapter 3 presents the HaRTES architecture, including the HaRTES switch internal structure, the single-switch forwarding method, and the multi-hop HaRTES forwarding method;

• Chapter 4 formulates the problem of performing reconfiguration in the multi-hop HaRTES architecture;

• Chapter 5 describes the research method during the thesis;

• Chapter 6 presents how clusters are built in the architecture, and proposes a dynamic reconfiguration protocol for the multi-hop HaRTES architecture;

• Chapter 7 depicts two experiments to evaluate the proposed protocol in terms of decision making time and reconfiguration time;

• Chapter 8 presents the state of the art regarding the reconfiguration mechanisms. This chapter also provides the advantages of our mechanism compared with the state of the art solutions;

• Chapter 9 concludes the thesis and presents some directions for future work on the HaRTES architecture.


2 Background

In this chapter, we first introduce some basic concepts of standard switched Ethernet technology. Then, we describe some of the solutions that tune switched Ethernet for real-time applications.

2.1 Ethernet Switch

Figure 1 shows a typical Ethernet switch structure. The switch contains receive ports, input buffers, a packet handling module, transmit queues and output ports. A message crosses a switch in the following steps. First, the switch receives the message and buffers it in the input buffers. Then, the packet handling module analyzes the message and finds its destination address. Finally, the message is inserted into the destination queue. The output queue follows a First-In-First-Out (FIFO) model, i.e., the first message to arrive is transmitted first; messages are buffered based on their arrival time. Most switches are based on the IEEE 802.1D standard, which defines up to 8 FIFO queues for each output port.
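The forwarding steps above can be sketched as a minimal model; the class and field names below are hypothetical illustrations, not taken from any real switch implementation:

```python
from collections import deque

class EthernetSwitch:
    """Minimal model of the switch in Figure 1: classify by destination,
    then enqueue FIFO on the corresponding output port (illustrative only)."""
    def __init__(self, num_ports, num_queues=8):
        # IEEE 802.1D allows up to 8 FIFO queues per output port.
        self.out_queues = {p: [deque() for _ in range(num_queues)]
                           for p in range(num_ports)}
        self.fib = {}  # forwarding table: destination address -> output port

    def receive(self, frame):
        # Packet handling: look up the destination, then buffer in FIFO order.
        port = self.fib[frame["dst"]]
        prio = frame.get("priority", 0)
        self.out_queues[port][prio].append(frame)

    def transmit(self, port, prio=0):
        # First-in, first-out: the oldest buffered frame leaves first.
        return self.out_queues[port][prio].popleft()
```

Because each queue is strictly FIFO, two frames to the same destination leave the switch in their arrival order, which is the behavior described above.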

Figure 1: The Ethernet switch internal structure [1]

Switching techniques are used to transmit packets among switches through their channels. In general, Ethernet switches employ two common techniques: Store-and-Forward switching and Cut-Through switching.

Store-and-Forward switching

Store-and-Forward switching is the simplest switching technique. In Store-and-Forward switching, the switch stores the entire packet in its internal buffer memory before forwarding the packet to the next switch. Moreover, the switch computes the Cyclic Redundancy Check (CRC) of each packet to verify whether the packet is intact. If an error is found by the CRC, the corresponding packet is discarded. The packet cannot be transferred between switches until the entire packet has been received and stored in the switch's buffer. The switch latency is large in this technique because the switch buffers the whole packet and checks its CRC.
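As a sketch of the idea, the helper below buffers a whole frame and verifies a CRC-32 before forwarding. This is a simplification: real Ethernet carries the frame check sequence inside the frame itself, and the function name is hypothetical.

```python
import zlib

def store_and_forward(frame_bytes: bytes, crc_received: int):
    """Accept a frame only after the entire frame is buffered and its
    CRC-32 matches; return None (discard) on a CRC mismatch."""
    buffered = bytes(frame_bytes)            # store the entire frame first
    if zlib.crc32(buffered) != crc_received:
        return None                          # error found: discard the packet
    return buffered                          # forward only after the full check
```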

Cut-Through switching

In Cut-Through switching, the switch starts forwarding a packet as soon as the destination address has been read, which results in a lower switch latency than Store-and-Forward switching. However, Cut-Through switching may propagate errors, since it does not fully check each packet for errors.

2.2 Real-Time Ethernet Protocols

The standard switched Ethernet still shows some limitations when used in real-time systems: the number of priority levels in the output queues is limited, and arriving messages are dropped when the output buffer is full, which is a very undesirable situation in real-time systems. These limitations prevent the Ethernet switch from achieving real-time communication. In order to guarantee timely behavior in switched Ethernet, several solutions have been proposed. In the following sections, we present some of them.

2.2.1 Ethernet Powerlink

Ethernet Powerlink (EPL) [13] is a master/slave protocol maintained by the Ethernet Powerlink Standardization Group. It provides deterministic real-time communication and supports periodic (isochronous) traffic and asynchronous traffic. In EPL, communication is organized in fixed time slots called cycles. As shown in Figure 2, each EPL cycle is composed of four periods:

1) Start period, where the master sends an SoC (start of cycle) message to notify the slave nodes of the beginning of the EPL cycle;

2) Isochronous period, where the master sends a Poll Request to each slave node in a polling manner according to a predefined sequence. Once a slave node receives a Poll Request, it responds by transmitting the corresponding data message (Poll Response). At the same time, all the other slave nodes (including those that should receive the frame) can receive and supervise the Poll Response frames. After all slave nodes have sent their Poll Responses, the master sends an End of Cycle message, which is transmitted by the end of the isochronous period;

3) Asynchronous period, in which only one asynchronous message can be transferred. The master grants access to a node by sending a Start of Asynchronous (SoA) message to the selected node, and that node replies to the SoA message;

4) Idle period, which enforces a precise cycle start with low jitter.
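The four periods can be illustrated as an ordered trace of one EPL cycle. The frame names follow the text (SoC, Poll Request/Response, SoA); the helper function itself and the ASnd reply label are illustrative:

```python
def epl_cycle(slaves):
    """One Ethernet Powerlink cycle as an ordered trace of frames (sketch)."""
    trace = ["SoC"]                          # 1) start period
    for s in slaves:                         # 2) isochronous period: poll each slave
        trace += [f"PReq->{s}", f"PRes<-{s}"]
    trace += ["SoA", "ASnd"]                 # 3) async period: one granted message
    trace += ["idle"]                        # 4) idle period pads out the cycle
    return trace
```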

Figure 2: Ethernet Powerlink Transmission Cycle [1]

2.2.2 PROFINET IRT

PROFINET IRT [14] is a modified standard Ethernet protocol developed by PROFIBUS and PROFINET International for industrial automation. PROFINET IRT provides fast real-time data exchange in distributed architectures, with cycle times below 1 millisecond. It is therefore well suited to applications where motion control is critical to production cycles, and to high-performance industrial automation applications that require cycle times of a few hundred microseconds. Figure 3 illustrates the PROFINET IRT communication cycle. Each cycle is composed of two channels:

1) Isochronous Communication Channel, which carries the statically scheduled real-time communication;

2) TCP/IP Communication Channel, which is used to transmit the remaining synchronous and asynchronous messages.

Figure 3: PROFINET IRT communication cycle [2]

Compared with PROFINET RT, PROFINET IRT takes advantage of bandwidth reservation and an advanced scheduler. Industrial applications that require extra bandwidth and fast delivery of time-critical data benefit from the reservation: when a priority message is transmitted, the dedicated reserved bandwidth offers seamless transfer so that the message can be delivered as soon as possible.

Another feature of PROFINET IRT is its scheduler, which places each message at its intended position in the production data-transfer cycle. The scheduler guarantees that a data message is sent at the time required by the device, and ensures that the device receives messages as the manufacturing process expects.

2.2.3 TTEthernet

TTEthernet [9] is a scalable switched Ethernet technology, optimized for time-triggered transmission and based on the IEEE 802.3 standard. The TTEthernet protocol defines how to implement high-precision time synchronization on standard Ethernet. Moreover, TTEthernet provides several traffic classes; Figure 4 shows the three types of traffic:

1) Time-Triggered (TT) traffic, whose transmission follows a predefined communication schedule;

2) Rate-Constrained (RC) traffic, which enforces a minimum duration between two frames of the same stream;

3) Best-Effort (BE) traffic, i.e., standard Ethernet traffic without timing guarantees.


Figure 4: TTEthernet communication cycles [3]

In the TTEthernet protocol, TT traffic has the highest priority. Therefore, TT messages are associated with hard real-time communication with low latency, and their transmission follows a TDMA policy. RC traffic is event-triggered and is used by applications with less stringent real-time requirements than time-triggered systems. RC frame delivery is guaranteed, but potentially with high latency and jitter. BE traffic is transmitted in the free bandwidth left by the other two traffic classes; best-effort frame delivery (standard Ethernet traffic) is not guaranteed. However, high-priority TT traffic may be blocked by the lower-priority RC and BE traffic. Thus, TTEthernet proposes three methods to solve this problem: preemption, timely block and shuffling.
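The strict TT > RC > BE precedence described above can be sketched as a simple transmission-selection rule; this is an illustration of the priority ordering only, not of the TTEthernet implementation or its blocking-avoidance methods:

```python
# Lower value = higher priority: TT before RC before BE.
PRIORITY = {"TT": 0, "RC": 1, "BE": 2}

def select_next(ready_frames):
    """Pick the next frame to transmit from the ready set by traffic class."""
    if not ready_frames:
        return None
    return min(ready_frames, key=lambda f: PRIORITY[f["class"]])
```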

2.2.4 Ethernet AVB

Ethernet Audio Video Bridging (Ethernet AVB) is a promising technology for real-time audio and video transmission networks, mostly used in the automotive industry. It is compatible with standard Ethernet data transmission. In addition, Ethernet AVB guarantees the transmission of real-time streams, providing high reliability, low latency and a low cost of implementation for real-time audio and video streaming. As shown in Figure 5, the Ethernet AVB protocol stack includes five protocols: PTP (Precision Time Protocol), SRP (Stream Reservation Protocol), QFP (Queuing and Forwarding Protocol), AVBTP (Audio/Video Bridging Transport Protocol) and RTP (Real-time Transport Protocol) [15, 16].


PTP: Its prototype is IEEE 1588 v2, applied to a local Ethernet network with a two-layer structure. PTP mainly covers two aspects: the choice of the master clock, and a synchronization mechanism that includes time compensation and clock-frequency matching. PTP selects a master clock in the PTP domain through the best master clock algorithm; the master is the root of the spanning tree used for synchronization, and each time-sensitive device node must synchronize with it. PTP also designates a number of potential master clocks in case of node failure: when a node loses access to the main clock, PTP automatically switches to one of the potential main clocks and establishes the appropriate spanning tree, ensuring network clock synchronization. After the master clock is determined, PTP sends synchronization messages using a time-stamp mechanism, transferring the time stamps in conventional Ethernet packets. When such a message reaches a port that needs clock synchronization, the received time stamp is compared with the local clock, and a path-delay compensation algorithm is used to match the local clock. After its clock is matched, the slave node sends a message containing a time stamp, to synchronize the clocks of the next slave nodes.
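The time-compensation step can be illustrated with the classic two-way exchange used by IEEE 1588-style synchronization. The function below assumes symmetric path delays; the function name is illustrative:

```python
def ptp_offset(t1, t2, t3, t4):
    """Two-way time transfer: t1 = master send, t2 = slave receive,
    t3 = slave send, t4 = master receive. Returns (slave clock offset,
    one-way path delay), assuming the two directions have equal delay."""
    offset = ((t2 - t1) - (t4 - t3)) / 2
    delay = ((t2 - t1) + (t4 - t3)) / 2
    return offset, delay
```

For example, if the slave clock runs 5 units ahead and the one-way delay is 2 units, the exchange recovers exactly those two values.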

SRP: In order to guarantee the QoS of data transmission and forwarding, reducing latency and jitter, SRP locks the transmission path in advance, based on the bandwidth of the network topology, and sets aside a portion of the bandwidth to guarantee end-to-end bandwidth availability for streaming audio and video equipment. SRP utilizes a signaling protocol and the multi-function extension of IEEE 802.1 MRP (Multiple Registration Protocol) to exchange the description messages of audio and video streams and to reserve bandwidth resources. In general, 75 percent of the whole bandwidth is reserved for time-sensitive audio and video data streams, and the remaining 25 percent is used to transmit conventional Ethernet data.
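The 75/25 split translates into a simple admission rule: a new stream is only registered if the total reserved bandwidth stays within the AV share of the link. The sketch below assumes a 100 Mbps link and hypothetical parameter names:

```python
def srp_admit(reserved_mbps, new_stream_mbps, link_mbps=100, av_fraction=0.75):
    """SRP-style check (sketch): admit a new AV stream only if the total
    reservation stays within the AV share of the link (75% by default)."""
    return reserved_mbps + new_stream_mbps <= av_fraction * link_mbps
```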

QFP: It is an accompanying protocol in the AVB protocol stack, mostly implemented in the switch. The main responsibilities of QFP are processing and forwarding data, ensuring that traditional Ethernet traffic cannot interfere with real-time audio and video streaming. QFP mainly includes three parts: traffic shaping, prioritization and queue management. In order to avoid competition for bandwidth between time-sensitive audio-video streaming data and general data, an Ethernet AVB switch has several input and output queues, so audio-video streaming data and general data can be placed in different queues. All switches and bridges use a priority-based transmission selection algorithm and give the highest priority to audio and video streaming data.

In addition, AVBTP is mainly responsible for packetizing real-time streaming data in Ethernet AVB; it is also responsible for establishing, controlling and tearing down streams. RTP, a layer-three protocol on top of IP, takes advantage of the capabilities of Ethernet AVB, which provides time synchronization in the LAN, and latency and bandwidth reservation services through bridging and routing.

2.2.5 The FTT-SE protocol

The Flexible Time-Triggered Switched Ethernet (FTT-SE) protocol is based on the Flexible Time-Triggered (FTT) scheme, which provides a real-time communication service. The FTT-SE architecture uses a master/slave technique, in which the master node controls the transmissions of the slave nodes. The master node in the FTT-SE architecture is connected to one of the switch ports, as depicted in Figure 6.


Figure 6: The FTT-SE Overview [5]

In the FTT-SE architecture, Elementary Cycles (ECs) are fixed-duration time slots for data transmission. The master node schedules the messages for transmission within the ECs and puts the scheduling decision in a special message called the Trigger Message (TM). The master node then broadcasts the TM to all nodes. Each EC is composed of two windows, the synchronous window and the asynchronous window, which handle synchronous and asynchronous traffic respectively. Figure 7 depicts an EC in the FTT-SE protocol.
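A master-side sketch of filling the synchronous window of the next EC and encoding the result in the TM. The names and the greedy fill order are illustrative; the real master may apply any scheduling policy:

```python
def build_trigger_message(ready_sync, window_len, tx_time):
    """Fill the synchronous window with ready messages (in the order the
    scheduler chose) and return the IDs to broadcast in the Trigger Message."""
    scheduled, used = [], 0.0
    for msg_id in ready_sync:
        if used + tx_time[msg_id] <= window_len:   # message still fits in the window
            scheduled.append(msg_id)
            used += tx_time[msg_id]
    return {"TM": scheduled}                       # broadcast to all slave nodes
```

A message that does not fit simply stays out of this EC's TM and is considered again for the next EC.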


Figure 7: Elementary Cycle in the FTT-SE [1]

The master node can not only employ any scheduling policy, but also perform reconfiguration to meet communication requirements. The latter feature involves on-line admission control, removal of messages while keeping timeliness guarantees, and dynamic bandwidth reservation. The FTT-SE protocol has a distinctive feature in handling asynchronous traffic. The slave nodes follow an event-driven model, which triggers asynchronous traffic. The FTT-SE protocol uses a specific message to notify the master node of pending asynchronous messages, following a signaling mechanism. Once the master node has received all asynchronous requests, it schedules them and sends the new traffic schedule for the upcoming EC. The FTT-SE protocol makes use of full-duplex links, so the master node can receive the slave nodes' asynchronous requests while sending the TM to the slave nodes at the same time [17].


3 The HaRTES Architecture

Although the previously presented protocols have advantages in supporting real-time communication, they still have some limitations. For example, some of the presented real-time protocols only fit a static configuration of the real-time communication; among them, on-line reconfiguration of real-time traffic is only available in the SRP protocol. Therefore, we focus on the FTT-enabled switch, namely the HaRTES architecture, which provides on-line admission control, dynamic QoS management and arbitrary traffic scheduling policies.

3.1 The HaRTES Switch Structure

The internal HaRTES structure is depicted in Figure 8. The packet classification module classifies the arriving packets at the input ports; it distinguishes the traffic types and sends the packets to a memory pool. The master module consists of admission control, a scheduler, Quality of Service (QoS) management and a repository that stores traffic attributes such as message length, deadline, period, etc. When a message stream is added or removed, or its properties are changed, the traffic needs on-line reconfiguration. The admission control unit handles requests for adding new messages, executes an adequate analysis and decides whether the message can be accepted. If the message can be accepted, the necessary changes in the system must be made to accommodate the new message. The packet forwarding module checks the packet type and puts it into the corresponding output queue. Three FIFO queues are used to handle the three different packet types (synchronous, asynchronous, non-real-time) at each output port. The dispatcher handles packet transmission within the reserved bandwidths and enforces temporal isolation.
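As an illustration of the admission-control step, the sketch below accepts a new synchronous message only if the per-EC synchronous load still fits in the synchronous window. This is a simple utilization bound with hypothetical names, not the response-time analysis developed later in the thesis:

```python
def admission_control(current_msgs, new_msg, sync_window, ec_length):
    """Feasibility sketch: with the new message added, the synchronous
    transmission demand per EC must not exceed the synchronous window."""
    msgs = current_msgs + [new_msg]
    # Each message contributes tx_time once per period, i.e. a fraction
    # (tx_time / period) of every EC of length ec_length.
    load_per_ec = sum(m["tx_time"] * ec_length / m["period"] for m in msgs)
    return load_per_ec <= sync_window
```

If the check fails, the request is rejected and the current message set, which is already guaranteed, stays untouched.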

Figure 8: Internal HaRTES Switch Structure [6]

3.2 The Single-Switch HaRTES Architecture

In the single-switch HaRTES architecture, message communication is scheduled by the master module in fixed-duration time slots called Elementary Cycles (ECs). As shown in Figure 9, each EC contains two windows: the synchronous window and the asynchronous window. The synchronous window is used to transmit synchronous traffic, while the asynchronous window transmits asynchronous traffic and non-real-time traffic. A particular message, sent by the master to all slave nodes, is called the Trigger Message (TM). The TM contains the IDs of the messages scheduled for the following EC [8].


The HaRTES architecture supports all traffic types, i.e., periodic traffic, sporadic traffic and non-real-time traffic. Based on the traffic patterns, traffic can be divided into synchronous traffic, which is time-triggered, asynchronous traffic, which is event-triggered, and non-real-time traffic. HaRTES integrates the FTT master in the switch, as shown in Figure 10. Besides, it adds some important features compared with the FTT-SE protocol, for example improved performance in handling asynchronous traffic, which is transmitted autonomously without being triggered by the master. Moreover, it maintains temporal isolation [18].

Figure 10: The HaRTES Architecture Overview

A single-switch HaRTES architecture example is depicted in Figure 11. A single HaRTES switch connects several nodes. The switch acts as the master node and schedules the transmissions of all slave nodes.


Figure 11: An example of single-switch HaRTES architecture

3.3 The Multi-hop HaRTES Architecture

With the development of industry, networks in industrial applications may require hundreds of nodes, which a single switch cannot support. The multi-hop architecture is developed to overcome this limitation. In this part, we describe the multi-hop HaRTES topology and a scheduling method to handle traffic forwarding in this architecture.

3.3.1 Multi-hop HaRTES Topology

Multiple HaRTES switches are connected with each other to form a tree topology. An example of the multi-hop HaRTES architecture is depicted in Figure 12.


3.3.2 Scheduling Methods

Two scheduling methods are proposed in [8, 11], namely Distributed Global Scheduling (DGS) and the Reduced Buffering Scheme (RBS), to handle traffic forwarding through multiple switches. In this section, we describe only the RBS method in detail, as it provides better performance than the DGS method. Furthermore, we explain the on-line reconfiguration process in the context of the RBS method.

In the RBS method, a message sent from a source slave node to a destination slave node is not buffered in every switch it crosses. First, the switch connected to the source node schedules and buffers the message. Each switch forwards the message as long as there is enough time left in the associated window of the current EC; when there is not, the next switch buffers the message and continues the same procedure in the following EC.

Figure 13 illustrates the operation of message forwarding in the RBS method. We assume that source node S1 sends a message M1 to destination node S2 in the network shown in Figure 12. The message M1 is activated in ECk; switch H2 schedules it, buffers it in its own memory and inserts it into its output link. Since there is still enough time in the synchronous window of the current EC, switch H2 forwards the message to switch H1. Switch H1 performs the same operation as switch H2, provided there is still enough time in the synchronous window. However, since there is not enough time to complete the forwarding of message M1 from switch H1 to switch H3 in the current synchronous window, the transmission is suspended. In ECk+1, switch H3 receives the message from switch H1, buffers it, schedules it and forwards it to destination node S2.


• In the HaRTES architecture, asynchronous messages are transmitted in the asynchronous window, while synchronous messages are transmitted in the synchronous window.

3.4 Response Time Analysis

In this section, we describe in detail the response time analysis presented in [8]. First, we define a system model that represents the messages in the HaRTES architecture network. Then, we explain the response time analysis for the two types of messages, i.e., synchronous and asynchronous.

3.4.1 System Model

The message model for both synchronous and asynchronous messages is presented in Expression 1:

Γ = mi(Ci, Di, Ti, Pi, RTi, Li, ni, Pki), i = 1 · · · N   (1)

In the system model, each message consists of several parameters. Ci is the transmission time of message mi. Di indicates the deadline of mi, while Ti is the period of message mi. Pi is the priority of message mi; note that the priority must be an integer number. RTi is used to store the response time of message mi. Pki indicates the size of the packets that compose message mi. Li represents the transmission path of mi, which consists of the links between the source node and the destination node, and ni is the number of links in Li. For example, assume node A sends a message m1 to node B as shown in Figure 14. The transmission path of m1 includes link 2, link 1 and link 3, thus the number of links in L1 is 3:

L1 = [2, 1, 3], n1 = 3

Figure 14: The path links between source and destination nodes in the multi-hop HaRTES architecture

In this model, the total response time of mi is the time duration from the instant the source node initiates the transmission of the message until the destination node receives it. Moreover, a response time between two links (la and lb) is defined, which is the time duration a message takes to cross between the mentioned links. This response time is denoted by RTi,a,b.
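The message model above can be sketched as a small data structure. The class and field names below are our own, mirroring the parameters Ci, Di, Ti, Pi, RTi, Li, ni and Pki of Expression 1; the concrete values are illustrative.

```python
from dataclasses import dataclass
from typing import List

# A minimal sketch of the message model in Expression 1. Units and field
# names are assumptions for illustration, not the thesis implementation.
@dataclass
class Message:
    C: float          # transmission time (Ci)
    D: int            # deadline (Di)
    T: int            # period (Ti)
    P: int            # priority, an integer (Pi)
    L: List[int]      # transmission path as a list of link ids (Li)
    Pk: float         # packet size (Pki)
    RT: float = 0.0   # stored response time (RTi)

    @property
    def n(self) -> int:
        """Number of links in the transmission path (ni)."""
        return len(self.L)

# The example of Figure 14: m1 crosses link 2, link 1 and link 3.
m1 = Message(C=123, D=10, T=10, P=1, L=[2, 1, 3], Pk=123)
print(m1.n)  # -> 3
```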

3.4.2 Response Time Analysis for Synchronous Message

We calculate the total response time of mi by Algorithm 1. In the first three lines of the algorithm, we initialize the total response time of message mi (RTi) and the start and next links, a and b. As long as b does not exceed the number of links ni, the loop carries on. Line 5 shows the response time calculation between link la and link lb. The response time of message mi is expressed as an integer number of ECs by using the ceiling function in line 6. As described for the RBS method, the message is buffered in a switch if the response time of the current link is not equal to the response time of the previous link. In this situation, the response time of the previous link is added to the total response time, and the start link is set to the current link. Otherwise, the algorithm continues with the response time of the next link. This process is shown in lines 7 to 12. In the last lines, after the loop has processed the last link of mi, the response time of the last segment is added to the total response time.

Algorithm 1 Total Response Time Calculation for mi

Initialization:
1: The total response time, RTi = 0;
2: The start link, a = 1;
3: The next link, b = 1;
Iteration:
4: while b ≤ ni do
5:   rti,a,b = responseTimeCalc(i, a, b)
6:   RTi,a,b = ⌈rti,a,b / EC⌉
7:   if (a != b) and (RTi,a,b != RTi,a,(b−1)) then
8:     RTi = RTi + RTi,a,(b−1)
9:     a = b
10:  else
11:    b = b + 1
12:  end if
13: end while
14: RTi = RTi + RTi,a,(b−1)
15: Return RTi

To illustrate the algorithm, assume node A sends a message m1 to node B in Figure 14. To calculate the total response time of m1, the following steps are needed. First, we calculate the response time of the start link, i.e., link 2, and the response time of the first two links, i.e., link 2 and link 1, and we compare the two values. If they are equal, we continue with the response time of the first three links. If they are not equal, the response time of the first link is added to the total response time, and the start link is set to the second link, i.e., link 1. We follow the same steps until the last link, whose response time is finally added to the total response time.

In Algorithm 1, the response time calculation between link la and link lb is invoked in line 5. This response time is calculated in Equation 2. As the message is transmitted only in the specified synchronous window, and not in other parts of the EC, an inflation factor is defined.

rt(x)i,a,b = Ci/αi,a,b + Ii,a,b + Bi,a,b + SDi,a,b   (2)

The response time rti,a,b of message mi includes four parts: (i) the inflated transmission time Ci/αi,a,b, (ii) the interference Ii,a,b from higher priority messages that share links with mi, (iii) the blocking time Bi,a,b from lower priority messages, and (iv) the switching delay SDi,a,b.

The idle time in the EC for mi is denoted by Idi,l, and αi,a,b represents the inflation factor between link la and link lb for mi. Idi,l and αi,a,b are computed in Equation 4 and Equation 5, respectively. In Equation 4, Pk represents the packet size of a message, LWl is the length of the transmission window in link ll, and hep(mi) denotes the messages that have higher or the same priority as mi.

Idi,l = max (Pkr, Pki), over ∀r ∈ [1, N] ∧ mr ∈ hep(mi) ∧ l ∈ Lr   (4)

αi,a,b = min l=a···b (LWl − Idi,l) / EC   (5)
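Equations 4 and 5 can be sketched as follows. Message objects are assumed to expose Pk (packet size), P (priority, with a smaller value meaning higher priority, which is our assumption) and L (link list); lw maps each link id to its window length LWl. These names are ours, not the thesis implementation.

```python
# Sketch of Equations 4 and 5. hep(mi) is modelled as "priority value <= mi's".

def idle_time(mi, msgs, link):
    """Id_{i,l}: largest packet among mi and the hep messages using this link."""
    pks = [m.Pk for m in msgs if m.P <= mi.P and link in m.L]
    return max([mi.Pk] + pks)

def inflation(mi, msgs, a, b, lw, ec):
    """alpha_{i,a,b}: worst case (minimum) over links a..b of mi's path."""
    return min((lw[l] - idle_time(mi, msgs, l)) / ec for l in mi.L[a - 1:b])
```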

The interference time is caused by messages that have higher or the same priority as mi. This interference is calculated in Equation 6.

Ii,a,b = Σ (over ∀j ∈ [1, N], j ≠ i ∧ mj ∈ hep(mi) ∧ Lj ∩ Li,a,b ≠ ∅)  ⌈rt(x−1)i,a,b / Tj⌉ · Cj / αi,a,b   (6)
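A sketch of this interference term, using the same assumed message fields as before (C, T, P, L, with a smaller P meaning higher priority). rt_prev stands for the previous iterate rt(x−1)i,a,b and alpha for the inflation factor of Equation 5.

```python
import math

# Sketch of Equation 6: interference on mi between links a..b from messages
# with higher or the same priority that share at least one of those links.

def interference(mi, msgs, a, b, rt_prev, alpha):
    links = set(mi.L[a - 1:b])                   # the links of L_{i,a,b}
    total = 0.0
    for mj in msgs:
        if mj is not mi and mj.P <= mi.P and links & set(mj.L):
            total += math.ceil(rt_prev / mj.T) * mj.C / alpha
    return total
```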

The blocking time Bi,a,b is caused by the messages with lower priority that share a link between la and lb in the transmission path of message mi. For example, when a message mi is inserted into an output queue while a low priority message mj is being transmitted from the same queue, mi is blocked by mj. The blocking time of message mi is calculated in Equation 7, where lp(mi) denotes the messages that have lower priority than mi.

Bi,a,b = Σ (t = a+1 · · · b, a ≠ b)  max (over ∀p ∈ [1, N] ∧ mp ∈ lp(mi) ∧ lt ∈ Lp ∨ ∀y, a+1 ≤ y < t, ly ∉ Lp)  Pkp / αi,a,b   (7)
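A sketch of the blocking term, with the same assumed message fields as before. The path-entry condition of Equation 7 is simplified here to "the lower priority message uses link t", which is an assumption for illustration only.

```python
# Sketch of Equation 7: on each link t in l_{a+1}..l_b, at most one lower
# priority packet blocks mi; its inflated size is the largest Pk among the
# lower priority messages using that link.

def blocking(mi, msgs, a, b, alpha):
    total = 0.0
    for t in mi.L[a:b]:                          # links l_{a+1} .. l_b
        pks = [m.Pk for m in msgs if m.P > mi.P and t in m.L]
        if pks:
            total += max(pks) / alpha
    return total
```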

The switching delay SDi,a,b consists of the hardware fabric latency, which is a constant value, and the store-and-forward delay, which is the time needed to buffer message mi in the switch before it is transmitted. Whenever message mi crosses a switch, the switching delay of mi exists, regardless of whether the message is preempted or blocked by other messages. However, when calculating the switching delay of mi, we also need to consider the effect of other messages. In the worst case, according to [8], the maximum transmission time of all messages crossing the same input and output ports should be considered for the switching delay. As shown in Figure 15, when m1 is transmitted into its output link, m2 waits to be transmitted until m1 is done. In this scenario, when calculating the switching delay for m2, we also need to take the switching delay of m1 into account.

In general, the switching delay of message mi is calculated in Equation 8. All messages that have either higher or lower priority than message mi are taken into account.

SDi,a,b = Σ (t = a+1 · · · b, a ≠ b)  max (over ∀q ∈ [1, N] ∧ lt ∈ Lq ∨ l(t−1) ∈ Lq)  (SWDi, SWDq) / αi,a,b   (8)

3.4.3 Response Time Analysis for Asynchronous Message

The response time analysis of an asynchronous message is similar to the one done for synchronous messages. The main difference is that asynchronous messages are transmitted in the asynchronous window; therefore, an asynchronous message cannot interfere with the transmission of synchronous messages. Besides, the inflation factor is also defined with respect to the asynchronous window.


4 Problem Formulation

The on-line reconfiguration for dynamic systems follows four steps in general. The first step is a negotiation between a slave node and a master node: the slave node issues a change request and sends it to the master switch. The request is usually caused by a slave node removing or adding a message stream, or changing some parameters of a message stream. The second step is the admission control in the master node, which handles the request and verifies the requested changes; response time analysis or a utilization bound can be used to check feasibility in this step. The third step is resource reservation for the message streams, where QoS management distributes the reserved resources. At last, a mode-change is needed, in which all nodes update their databases. In order to guarantee the timeliness of the system, this transition should be done consistently and within a bounded time.

The on-line reconfiguration for the single-switch HaRTES architecture is similar to the one proposed for the FTT-SE protocol, as they use the same concepts for message transmission. The on-line reconfiguration for FTT-SE is presented in [12]. The proposed protocol, however, cannot be used in the multi-hop HaRTES architecture, as there are several master nodes in the network. Therefore, all master nodes should agree on the requested changes, and they should apply the new changes at the same time to achieve consistency. Moreover, the data transmission follows a different method, as it uses the RBS method. This affects the negotiation phase of the reconfiguration, where the nodes should send their requests using the RBS method. Considering the mentioned differences, we need a new protocol for the multi-hop HaRTES architecture to achieve on-line reconfiguration.

In this thesis, we define an on-line reconfiguration protocol for the multi-hop HaRTES architecture. The main problems are the admission control process, the negotiation between master and slave nodes, and updating the network based on the changes. Based on the introduction throughout this thesis, we address the following research questions:

1. What are the steps toward achieving the on-line reconfiguration in the multi-hop HaRTES architecture?

2. How many admission control units are required in the network to perform the reconfiguration?

3. Where should the admission control units be located in the architecture to achieve fast and efficient request and update handling?

4. What are the criteria to measure the performance of the proposed on-line reconfiguration method?


5 Solution Method

In order to perform the thesis in a structured way, we follow several steps. In the first step, we reviewed the state of the art in two different directions: (i) switched Ethernet protocols in real-time systems, and (ii) reconfiguration mechanisms in the same context. Then, we performed a detailed study of the HaRTES architecture, as the main part of the thesis focuses on this protocol. In the third step, we proposed a reconfiguration protocol according to the identified requirements. In the next step, we compared the solution with other existing solutions and tuned it to achieve better performance. Finally, we evaluated the proposed solution in terms of the overall performance of the protocol. The flow of the solution method is shown in Figure 16.


6 On-line Reconfiguration Design

In this chapter, we propose a protocol to achieve on-line reconfiguration for the multi-hop HaRTES architecture. In order to define our reconfiguration protocol, we divide the network into several clusters. In this section, we first explain how to split the network into clusters, then we present the protocol itself.

6.1 Cluster-Tree Topology

The cluster-based technique is a well-known method widely used in the Wireless Sensor Network (WSN) field [19]. Its basic principle is that the nodes in an architecture are classified into several groups called clusters. In each cluster, one node is selected as the cluster head (CH). The functions of the CH are to collect data from the other cluster members, aggregate them, and forward the compact information to a base station. This principle reduces the amount of data transferred within the network and enables highly energy-efficient operation of WSNs. Moreover, the cluster-based technique has advantages related to scalability as well as efficient communication: it decreases the communication overhead, thereby reducing interference and energy consumption among network nodes.

We adopt the cluster-based technique in the context of the multi-hop HaRTES architecture, to take advantage of its benefits. Before we propose an algorithm to set up clusters in the multi-hop HaRTES network, some parameters of the tree topology need to be explained.

• Root switch – the switch on top of the network hierarchy

• Parent switch – a switch to which several nodes and switches are connected, and which is responsible for collecting requests in the cluster-tree topology

• Cluster – a group of nodes and switches with one single parent switch

In order to divide the multi-hop HaRTES network into clusters, we follow a bottom-up algorithm. This means that we start from the switches at the lowest hierarchy level. We select the switches with one common parent switch, together with their nodes, and group them as one cluster. Then, we move to the upper level, and we select the parent switch, together with the switches and nodes that do not yet belong to any cluster, as another cluster.


Figure 17: Clusters in Multi-hop HaRTES architecture

For example, Figure 17 shows a hybrid HaRTES tree topology with 9 switches. We start to assign clusters from the bottom; therefore, switch 1, switch 2 and their parent switch (switch 3) are grouped as cluster 1. Switch 3 is the cluster head of cluster 1, as it is the parent of the others. Then, for the remaining switches located next to the bottom, we follow the same method: cluster 2 consists of switch 4 and switch 5, with switch 4 as its cluster head, and cluster 3 includes switch 8, switch 9 and their parent switch (switch 7), with switch 7 as its cluster head. After that, only switch 6 remains; therefore, the root switch (switch 6) forms cluster 4 and is also the cluster head of that cluster.
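The bottom-up assignment can be sketched as follows, assuming the switch tree is given as child-to-parent links. The switch names from Figure 17 are reused for illustration; attaching the end nodes of each switch to its cluster is omitted, and the function is our own sketch, not the thesis implementation.

```python
# Sketch of the bottom-up cluster assignment: deepest switches are grouped
# with their parent (the cluster head) first; the root forms its own cluster.

def assign_clusters(parent):
    switches = set(parent) | {p for p in parent.values() if p}
    depth = {}
    def d(s):
        if s not in depth:
            depth[s] = 0 if parent.get(s) is None else d(parent[s]) + 1
        return depth[s]
    clusters, assigned = [], set()
    for s in sorted(switches, key=d, reverse=True):   # bottom-up order
        if s in assigned:
            continue
        head = parent.get(s)
        if head is None:                 # the root forms its own cluster
            clusters.append((s, {s}))
            assigned.add(s)
            continue
        members = {c for c in switches if parent.get(c) == head} | {head}
        clusters.append((head, members))
        assigned |= members
    return clusters                      # list of (cluster head, members)
```

Applied to the tree of Figure 17, this yields four clusters headed by switches 3, 4, 7 and 6, matching the assignment described above.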

6.2 Dynamic Reconfiguration Method

As described before, the process of on-line reconfiguration consists of four steps. Here, we explain each step in detail.

6.2.1 Request

In this step, a node that requests a change, i.e., adding or removing a message, sends this request to its cluster head. The request is sent as a sporadic message through the asynchronous window of the EC. Moreover, the request message has the highest priority among the data messages, so that it is delivered as fast as possible. As the request message is a real-time message, its response time is bounded.


If all messages meet their deadlines in the new settings, the reconfiguration is accepted. Otherwise, if at least one message does not meet its deadline, the reconfiguration request must be rejected. In case of rejection, the cluster head sends an update message to the node that requested the reconfiguration, informing it about the rejection. In case of acceptance, however, the cluster head must inform not only the nodes inside its cluster about the change, but also the nodes in the other clusters. An important issue is that, in case of a mode-change, all nodes and switches must change their mode at the same time to keep the consistency of the system. Therefore, the time at which the system must perform the mode-change should be sent to the other nodes as well. In order to do that, the cluster head computes the time it takes for the update message to reach all the nodes in the network, and sets that time for the mode-change. This time is encoded in the update message. More details on the update message are described in Section 6.2.4.

6.2.3 QoS Management

The QoS management is done inside the cluster head. After the feasibility check, the cluster head checks how much bandwidth is available and how much is used by the new change. This information is saved in the cluster head for future bandwidth distribution. However, the focus of this thesis is not bandwidth redistribution, thus this part remains future work.

6.2.4 Mode-change

In the last step, the new mode should be sent to the other nodes in the network. This information is sent by a sporadic message through the asynchronous window of the EC. The message is sent first to the other cluster heads; those cluster heads are then responsible for informing their cluster nodes. As the update message is a real-time asynchronous message, its response time is bounded.


Figure 18: Reconfiguration process in the Multi-hop HaRTES architecture

The entire process of the on-line reconfiguration is shown in Figure 19. To show the process in detail, we explain an example, assuming the hybrid architecture shown in Figure 17. One slave node S1 in cluster 1 issues a request and sends it to its cluster head SW3 through a slv-request message. Cluster head SW3 checks the feasibility of the requested change. We assume that the request is accepted. At time t1, cluster head SW3 informs the other cluster heads about the result through a ch-update message. Then, all cluster heads update their master switches and nodes through slv-update messages. The process of the reconfiguration for this example is shown in Figure 19.

It may occur that several nodes initiate reconfiguration requests to their cluster heads at the same time. Assume that, in the worst case, several cluster heads accept the requested changes. In this case, several conflicting update messages are propagated in the network, which causes an inconsistency. In order to solve this problem, if a node or switch receives two consecutive update messages, it applies the one that has the latest mode-change time encoded in the message. As all the nodes and switches follow this rule, they all apply the same update message.
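This conflict rule can be sketched in a few lines. The dictionary fields below are our own assumption for illustration; the protocol only specifies that the latest encoded mode-change time wins.

```python
# Sketch of the conflict rule: when several update messages arrive, keep the
# one whose encoded mode-change time is latest, so every node and switch
# converges on the same update.

def resolve(updates):
    return max(updates, key=lambda u: u["mode_change_time"])

pending = [{"change": "add stream m7", "mode_change_time": 120},
           {"change": "remove stream m3", "mode_change_time": 150}]
print(resolve(pending)["change"])  # -> remove stream m3
```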


6.3 Discussion

The proposed on-line reconfiguration protocol is a combination of centralized and distributed approaches. In the centralized approach, there is one specific management node in the network responsible for deciding on the acceptance or rejection of requests. This means all nodes must send their reconfiguration requests to the management node for the decision. Therefore, in a large network, the request and update messages are sent through several links. Moreover, as one node performs the whole process, that node should be powerful enough to handle all requests in the network. On the other hand, in the distributed reconfiguration, all nodes cooperate on the decision. This way, there is no need for a high performance node; however, many request and update messages may be transmitted through the network.

In our protocol, we use a hybrid approach in which several nodes are responsible for the reconfiguration, similar to the distributed approach; however, the number of such nodes is smaller. We divide the network into a number of clusters, and each cluster has a specific head that performs the reconfiguration, similar to the centralized approach.


7 Evaluation

In this chapter, we evaluate the proposed protocol with two different experiments. In the first experiment, we evaluate the decision making part of the protocol, i.e., the response time analysis performed by the cluster head. In the second experiment, we evaluate the entire reconfiguration process, from sending the request to updating the system. In order to evaluate the protocol, we build a small network that includes three switches along with nine nodes, as shown in Figure 20. In order to measure the decision making and reconfiguration times, we implement the response time analysis on a computer with the following configuration: a 4th Generation Intel Core i5-4210U processor (1.70 GHz base, 2.40 GHz turbo, 1600 MHz memory, 3 MB cache) with 8 GB RAM, running 64-bit Windows 8.1.

Figure 20: The Network architecture under evaluation

7.1 Implementation of the Response Time Analysis

In the implementation of the message model, we define a structure for each message, shown in Figure 21. Each element in the structure represents a parameter of the system model in Expression 1.

Figure 21: Message structure

In the implementation of the idle time calculation, we define a function idleTimeCalc(mi, linki) that follows Algorithm 2. First, we initialize the idle time as the packet size of message mi. Then, we search for the messages with higher or the same priority as mi that cross link linki. Finally, we choose the maximum packet size among those messages as the idle time.

Algorithm 2 Idle Time Calculation for mi

Initialization:
1: The idle time, I = m[i].Pk;
Iteration:
2: for j = 0 to N do
3:   if mj has higher or the same priority as mi then
4:     for k = 0 to linkmax do
5:       if linklist[k] == l then
6:         if m[j].Pk > I then
7:           I = m[j].Pk
8:         end if
9:       end if
10:    end for
11:  end if
12: end for
13: Return I

We define a function inflationCalc(mi, linka, linkb) to calculate the inflation factor of message mi between link la and link lb in Algorithm 3. When calculating the inflation factor, we take the worst case into consideration; therefore, we choose the minimum inflation factor between link la and link lb.

Algorithm 3 Inflation Factor Calculation for mi

Initialization:
1: The idle time, I = 0;
2: The current inflation factor, X = 0;
3: The minimum inflation factor, prevX = LW/EC;
Iteration:
4: for l = linki to linkj do
5:   I = idleTimeCalc(m, l)
6:   X = (LW − I)/EC
7:   if X is less than prevX then
8:     prevX = X
9:   end if
10: end for
11: Return prevX

We define a function interTimeCalc(mi, a, b) to implement the calculation of the interference time in Algorithm 4. First, we calculate the interference term of each message that has higher priority than message mi. Then, we add all the interference terms together and store the value in the total interference time TTerm.

Algorithm 4 Interference Calculation for mi

Initialization:
1: The inflation factor, αi,a,b;
2: The total interference time, TTerm = 0;
3: The previous response time, rt0;
Iteration:
4: for l = linki to linkj do
5:   Term = 0;
6:   for j = 0 to message-number do
7:     if m[j].prio is higher than or the same as m[i].prio then
8:       for k = 0 to numlink do
9:         if mess[j].linklist[k] == l then
10:          Term = ⌈rt0 / mess[j].period⌉ · mess[j].trans / αi,a,b
11:        end if
12:      end for
13:    end if
14:  end for
15:  TTerm = TTerm + Term
16: end for
17: Return TTerm

We define a function blockTimeCalc(mi, linka, linkb) to calculate the blocking time of message mi in Algorithm 5. First, we calculate the blocking time for each link in the transmission path of message mi. Then, we add all the blocking times together and store the value in the total blocking time Tblock.

Algorithm 5 Blocking Term Calculation for mi

Initialization:
1: The total blocking time, Tblock = 0;
Iteration:
2: for l = linki to linkj do
3:   Prevmax = 0;
4:   for j = 0 to message-number do
5:     if m[i].prio is higher than m[j].prio then
6:       for k = 0 to numlink do
7:         if mess[j].linklist[k] == l then
8:           Block = mess[j].pk/αi,a,b
9:           if Block > Prevmax then
10:            Prevmax = Block
11:          end if
12:        end if
13:      end for
14:    end if
15:  end for
16:  Tblock = Tblock + Prevmax
17: end for
18: Return Tblock

We implement a function delayTimeCalc(mi, linka, linkb) to calculate the switching delay of message mi following Algorithm 6. For each link, we compare the switching delay of each message in the message set mess[] with the switching delay of message mi, and we choose the larger switching delay.

Algorithm 6 Switching Delay Calculation for mi

Initialization:
1: The total delay time, Tdelay = 0;
Iteration:
2: for l = linki to linkj do
3:   Prevmax = 0;
4:   Swdi = (SLD + msgi.pk)/αi,a,b;
5:   Prevmax = Swdi;
6:   for j = 0 to message-number do
7:     if m[j] belongs to mess[] then
8:       for k = 0 to numlink do
9:         if mess[j].linklist[k] == l then
10:          Swdj = (SLD + mess[j].pk)/αi,a,b;
11:          if Swdj > Prevmax then
12:            Prevmax = Swdj
13:          end if
14:        end if
15:      end for
16:    end if
17:  end for
18:  Tdelay = Tdelay + Prevmax
19: end for
20: Return Tdelay

7.2 Decision Making of On-line Reconfiguration

In order to evaluate the decision making time, we randomly generate 10000 sets of messages. The parameters of each message are selected within a given range. We set the message periods within [2, 22] EC, while the deadline of each message is equal to its period. The priorities of the messages are assigned based on the Rate Monotonic algorithm, i.e., a smaller period yields a higher priority. In our setup, the network capacity is 100 Mbps. The maximum packet size in Ethernet is 1524 bytes; therefore, the maximum time needed to transmit a maximum-size packet is around 123 µs. Thus, the transmission time and packet size of each message are selected within [80, 123] µs. In addition, the hardware fabric latency is set to 5 µs. We performed the experiment for two settings. In the first setting the EC size is set to 1 ms, while in the second setting it is set to 2 ms. The synchronous window is set to 70% of the EC for both settings. The source and destination nodes of each message are chosen randomly.

In our evaluation, we change the value of the EC and compare the decision making times. In the first case, the EC is set to 1 ms and the length of the synchronous window is set to 70% of the EC. We generate 10000 message sets and measure the time it takes to calculate the response times of the messages in each set. Figure 22 shows the minimum, average and maximum times it took to compute the response times, where 10, 20 and 30 messages are generated in each set. As can be seen from the figure, by increasing the number of messages, the time for performing the response time analysis increases. For example, for 10 messages in the set, the maximum time to calculate the response times is less than 0.5 ms.


Figure 22: Decision making for EC=1ms LSW=70%EC

In the second case, the EC is set to 2 ms. We follow the same process as in the first case to calculate the decision making time for each message set. The result is shown in Figure 23. Similar to the previous case, the analysis time increases with the number of messages in the set. Comparing both cases, the average time for performing the response time analysis did not change by increasing the EC size.

Figure 23: Decision making for EC=2ms LSW=70%EC

In the third case, we fix the value of the EC and compute the decision time when changing the synchronous window duration LSW for a given set of messages. We assume EC = 1 ms and message sets of 20 messages. The size of LSW is selected as 60%, 70%, 80% and 90% of the EC. Figure 24 illustrates the minimum, average and maximum decision making times. As can be seen, the time slightly decreases as the window size in the EC increases. This is due to the fact that fewer messages fit in the window when the window size decreases; thus, the response times of the messages become larger. For LSW = 60%EC, the average decision making time is 0.36 ms, while for LSW = 90%EC it decreases to 0.25 ms.


Figure 24: Decision making time for changed LSW

To sum up, the decision making time is affected by both the EC and the transmission window sizes. However, this experiment was done in a fixed network architecture; checking the effect of different architectures and network sizes remains future work.

7.3 The Reconfiguration Process

In order to measure the reconfiguration time, we measure each step of the process and sum them up. The first step is sending a request. We assume that the request is sent from a node in a cluster such that it has to pass at least two switches, e.g., node D sends a request to node G; that is, we assume the biggest possible cluster and the longest possible route for the request message. The response time of the request message is the time it takes to send the request. In the second step, the response time analysis is performed by the cluster head; this measurement was already done in the evaluation in Section 7.2. In the last step, the cluster head sends the update message to the other cluster heads, and the cluster heads send the update to the switches and nodes inside their clusters. Again, the time it takes to send the update message is the worst-case response time of that message. However, there are several update messages with different routes and interference; the maximum response time among them is the time to inform all the switches and nodes. In this evaluation, we assume that the update message passes the longest route in the network. Moreover, we assume that all requests are accepted by the cluster head, since acceptance generates the update messages; in case of rejection, there is only a short update for the requesting node. The request time, the decision time and the update time are denoted by RTrequest, RTdecision and RTupdate, respectively. Thus, the whole reconfiguration process time is calculated in Equation 9.

RTreconfiguration = RTrequest + RTupdate + RTdecision   (9)

In this evaluation part, we randomly generate 10000 sets of messages that are schedulable. Then, we generate the request message and the update message; the parameters of both messages are selected within a given range. We perform the evaluation for two different settings. In the first setting, the reconfiguration period is set to 50 EC; this means that every 50 EC there is a request message. In the second setting, we increase the reconfiguration period to 100 EC. The priorities of the messages are set based on the Rate Monotonic algorithm. The transmission time of each message is set to 123 µs, corresponding to the maximum packet size. In addition, the hardware fabric latency is set to 5 µs.


On average, for 30 messages in the set, it takes around 8.5 ms to completely reconfigure the system. This time decreases slightly to 8.1 ms when the number of messages is 20.

Figure 25: Reconfiguration process time, when Period = 50EC, EC = 1ms, LSW = 70%EC

In the second setting, where the reconfiguration period is 100EC, the reconfiguration time is shown in Figure 26 for different numbers of messages in the set. As can be seen, compared to the previous case, shown in Figure 25, the time did not change.

Figure 26: Reconfiguration process time, when Period = 100EC, EC = 1ms, LSW = 70%EC

In addition, we evaluate the reconfiguration process time when the size of the synchronous window (LSW) is changed for a given message set. For this case, we assume that the reconfiguration period is 50EC, the message set includes 20 messages and the EC is equal to 1ms. We change the size of the synchronous window to 60%, 70%, 80% and 90% of the EC. The simulation result is shown in Figure 27. As can be seen from the figure, the average reconfiguration time for a larger LSW is less than for a smaller LSW. When LSW is set to 60%EC, the average reconfiguration time for the message set is 8.35ms, which means the reconfiguration takes 9 ECs. When we set LSW to 90%EC, the average reconfiguration time decreases to 6.33ms, which means the reconfiguration can be done in 7 ECs.
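The reported times translate into elementary cycles by rounding the average up to the next EC boundary; a small sanity check in Python (the 8.35 ms and 6.33 ms values are the measured averages quoted above):

```python
import math

def reconfiguration_ecs(avg_ms, ec_ms=1.0):
    # Number of elementary cycles spanned by the reconfiguration,
    # rounded up to the next full EC.
    return math.ceil(avg_ms / ec_ms)

print(reconfiguration_ecs(8.35))  # 9 ECs for LSW = 60%EC
print(reconfiguration_ecs(6.33))  # 7 ECs for LSW = 90%EC
```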


Figure 27: Reconfiguration process time for varying LSW, when Period = 50EC, EC = 1ms

The experiments show that the reconfiguration time is affected by the number of messages, the size of the EC and the size of the synchronous window. The larger the number of messages in the set, the longer it takes to perform the reconfiguration. For a given message set, a larger synchronous window leads to a shorter reconfiguration process time.


8 Related work

In this chapter, we review the state of the art regarding on-line reconfiguration mechanisms. The chapter also presents the advantages of our mechanism compared with the state-of-the-art solutions.

Marau et al. [12] proposed a middleware, containing QoS management and admission control, for the Flexible Time Triggered Switched Ethernet (FTT-SE) protocol, in order to perform dynamic reconfiguration and adaptation of real-time communication. The FTT-SE protocol uses a master-slave technique. The paper gives a brief overview of the FTT-SE protocol, covering its basic structure and application interfaces. It also identifies the requirements of the middleware and proposes a middleware structure. The on-line reconfiguration and adaptation in the FTT-SE protocol comprises four steps. First, a slave node negotiates with the master node to request a change; the requested change can be adding a message, removing a message or changing the parameters of a message. Then, admission control handles the requested changes from the slave nodes. The third step is QoS management, which allocates resources for each message stream. The last step is the mode change, which synchronizes the change across all slave nodes. The FTT-SE master node implements the admission control, QoS management and mode changes in the FTT-SE architecture. Through a case study of a camera surveillance system, the proposed middleware shows its merits in relieving the application on the slave side, providing an easier interface to the application and processing the overall reconfiguration in bounded time. However, the proposed protocol works in the context of a single switch. As there are many master nodes in the multi-hop HaRTES architecture, the proposed FTT-SE solution is not sufficient. In the multi-hop HaRTES architecture, all the master nodes should agree on the requested changes, and they should apply the new changes at the same time to achieve consistency. Considering this difference, in this thesis we propose a new protocol for the multi-hop HaRTES architecture to achieve on-line reconfiguration.

Ashjaei et al. [20] proposed two protocols, one centralized and one distributed, to perform on-line reconfiguration for multi-hop FTT-SE networks, and made a qualitative comparison between them. The centralized approach selects one root master node to implement the admission control and QoS management. All slave nodes send their change requests to the root node. The root node verifies all the requested changes and informs the other master nodes and slave nodes of its decision. Therefore, the root master node requires higher processing power than the other master nodes. In the distributed approach, every master node can act in the role of the root master node of the centralized approach. Each master node is capable of verifying the feasibility of a requested change and computing the allocated resources in parallel. Thus, the master nodes in the distributed approach require higher processing power. Ashjaei et al. [21] used the computational time and the reconfiguration signaling time to evaluate the reconfiguration time of the two protocols. The computational time is the time used to calculate the response times of a set of messages. The reconfiguration signaling time is the time used for the negotiation and update in the reconfiguration process. The centralized approach, which is easy to implement, is more efficient for large-scale networks, while the distributed approach has advantages in terms of bandwidth and fault tolerance. In this thesis, we propose a hybrid method to gain the advantages of both the centralized and the distributed approach.

Garner et al. [16] described the Stream Reservation Protocol (SRP) of IEEE 802.1 AVB in detail, together with a case study of its application in home networks. SRP comprises two parts, registration and reservation. In the stream reservation service, the provider of a stream is called the Talker and the recipient of the stream is called the Listener. The Talker reserves the bandwidth that the audio and video streams require, while the Listener registers for and receives the audio and video streams from the Talker. The Talker initially broadcasts an offering declaration, which announces the streams that can be transmitted and describes their features, so that the Listeners notice the existence of the Talker and can subscribe to a stream. During the transmission of the offering declaration, quality of service information is collected along the path, and the collected information is classified into two types, positive and negative offering declarations. When the path is ready, the feedback provides a positive offering declaration, indicating that the communication path is ready and the stream can be sent. When the bandwidth along the path is insufficient, a negative offering declaration


