
Institutionen för datavetenskap

Department of Computer and Information Science

Final thesis

A Server for ARINC 615A Loading

by

Markus Gustafsson

LIU-IDA/LITH-EX-A--13/057--SE

2013-11-05


Supervisor: Nicolas Melot

Examiner: Christoph Kessler



The publishers will keep this document online on the Internet — or its possible replacement — for a considerable time from the date of publication barring exceptional circumstances.

The online availability of the document implies a permanent permission for anyone to read, to download, to print out single copies for your own use and to use it unchanged for any non-commercial research and educational purpose. Subsequent transfers of copyright cannot revoke this permission. All other uses of the document are conditional on the consent of the copyright owner. The publisher has taken technical and administrative measures to assure authenticity, security and accessibility.

According to intellectual property law the author has the right to be mentioned when his/her work is accessed as described above and to be protected against infringement.

For additional information about the Linköping University Electronic Press and its procedures for publication and for assurance of document integrity, please refer to its WWW home page: http://www.ep.liu.se/


Abstract

The development of the next generation of Saab's multirole combat aircraft, the JAS 39 Gripen, includes developing a lot of new software for the aircraft's on-board computers. The new software has to be loaded into these computers routinely in order to carry out testing on it. This is currently a slow and tedious process.

In order to load the computers a protocol defined in the ARINC 615A standard is used. Today Saab uses commercial software applications implementing this protocol for the loading process. These applications have significant disadvantages, such as not being able to load several computers in parallel or only being able to load computers from a specific manufacturer. In this thesis we introduce a system using a client-server architecture that allows users to load the computers in parallel, regardless of the manufacturer. In Section 3.2.2 we show that our system reduces the time required to load the aircraft's on-board computers significantly. We also indicate some improvements that can be made in order to speed up the process even further. These improvements mainly involve improving the on-board computers themselves through the use of faster persistent storage and support for later revisions of the protocol involved.


Contents

1 Introduction
  1.1 Software loading at Saab
    1.1.1 ARINC 615A
    1.1.2 Trivial File Transfer Protocol
  1.2 Existing system
    1.2.1 Drawbacks
2 Contribution
  2.1 Design
  2.2 Implementation
    2.2.1 Client-Server Application
    2.2.2 Networking
    2.2.3 Data Load Application
    2.2.4 Target Hardware Allocator
    2.2.5 Task Queue
    2.2.6 Client Handler
    2.2.7 File Handler
    2.2.8 Client-Server Protocol
3 Experimental Evaluation
  3.1 Experimental Setup
  3.2 Performance Results
    3.2.1 Blocksize Option
    3.2.2 Comparison with the existing system
4 Discussion
  4.1 Overall performance
  4.2 Related Work
5 Conclusion
  5.1 Future Work
  5.2 Final Words
Bibliography
Glossary
Appendix A File Formats Used by the Client-Server Protocol
  A.1 HELLO.CSP
  A.2 GOODBYE.CSP
  A.3 SCHEDULE.CSP
  A.4 ALLOCATION.CSP
  A.5 DEALLOCATION.CSP
  A.6 UPLOAD.CSP
  A.7 MESSAGE.CSP

Chapter 1

Introduction

Saab AB is a Swedish military defence and civil security company, with its main business areas being aeronautics, weapon systems and electronic defence systems. Products range from weapon systems such as the multirole combat aircraft JAS 39 Gripen, the Carl Gustav recoilless rifle and the RBS 70 man-portable air-defence system to radar systems such as the GIRAFFE Radar family of land and naval radars, the field artillery radar system ARTHUR and the Airborne Early Warning and Control System (AEW&C) Erieye.

The Aeronautics department at Saab's headquarters in Linköping is currently developing the next generation of the JAS 39 Gripen. This process involves developing a lot of new applications for the aircraft's embedded real-time system. For critical applications such as these, testing becomes extremely important, which often requires substantial time and effort [8].

In order to test the applications, either on board the plane or in simulators, the software must first be loaded into the computers making up the system. Since a lot of new software is being developed, the computers need to be reloaded often, especially in the simulators. Today a setup of the Gripen's real-time system used for testing purposes is located in a dedicated computer lab, and loading is only possible through a computer dedicated for this purpose in the lab. This computer lab is a shared resource between all the different teams within the Aeronautics department responsible for developing new software.

In order to integrate any avionics system it is important to understand the behavior of both the individual components in the system and the system as a whole [5], and thus Saab engineers need to be able to load both individual components and the whole system. Since there is only one computer through which loading can be carried out, one engineer in the process of loading one of the computers in the system will effectively block others wishing to load any of the other computers. This makes the testing of individual components an unnecessarily inefficient process. Furthermore, with the software used for loading today it is impossible to load the whole system simultaneously, making for instance integration testing both time consuming and unnecessarily complex. Therefore Saab needs a system designed to make the overall loading process easier and less time consuming.

1.1 Software loading at Saab

The Gripen's embedded real-time system essentially consists of a collection of computers connected through a computer network. The computers all have different roles and responsibilities, such as flight mission management and controlling the displays found in the cockpit, which they fulfill by running different applications. But from the perspective of this work, they appear as black boxes to be loaded with software; their actual functions and purposes are irrelevant. Throughout this text, the Gripen's real-time system is referred to as the Avionics Core.

To load a computer in the Avionics Core with new software a loading application is used. The loading application loads the computer according to a protocol defined in a standard called ARINC 615A, which is provided by Aeronautical Radio, Inc. (ARINC). The ARINC 615A standard designates the computers to be loaded as Target Hardwares, and the loading application as the Data Load Application (DLA). The person running the DLA is referred to as the operator and the device it is running on is referred to as the Data Loader.

Relevant for this work is also another ARINC standard used by Saab called ARINC 665. This standard specifies files that describe the software load and that are used as input to the DLA.

The Target Hardwares at Saab come from two different manufacturers, designated Manufacturer A and Manufacturer B in this text. There are currently 6 Target Hardwares from Manufacturer A and 4 from Manufacturer B in use in the Avionics Core. Throughout this text, a shorthand of the form Mn_THx is occasionally used, where for example MA_TH1 would be Target Hardware 1 from Manufacturer A and MB_TH2 would be Target Hardware 2 from Manufacturer B.

1.1.1 ARINC 615A

As explained above, the process of loading a Target Hardware with software is carried out through a protocol defined in a standard called ARINC 615A. This protocol is referred to as the Data Load Protocol (DLP) by ARINC 615A.

The DLP provides several different operations and is implemented using the Trivial File Transfer Protocol (TFTP) as an underlying protocol, which in turn uses UDP as its underlying protocol. The protocol stack is illustrated in Figure 1.1.


Figure 1.1: The protocol stack for the DLP

There are several revisions of ARINC 615A and as of today the Avionics Core implements two of them: ARINC 615A-1 [1], used by the Target Hardwares from Manufacturer B, and ARINC 615A-2 [2], used by the Target Hardwares from Manufacturer A. These two revisions do not differ in the way the protocol is performed or in which operations they provide, but there are slight differences in the protocol files used. The overall structure of the files is the same, but ARINC 615A-2 [2] files might have had fields added at points labeled as expansion points in ARINC 615A-1 [1].

The latest revision of ARINC 615A is called ARINC 615A-3 [4]. This revision is currently not used in the Avionics Core, but Saab expects the Target Hardwares from Manufacturer A to support it in the near future. Even though it is not currently supported, it introduces some features useful for this work.

The ARINC 615A standard specifies three different operations that can be performed on a Target Hardware:

• Information Operation: allows the operator to query some information on how a Target Hardware is configured.

• Uploading Operation: allows the operator to upload files to a Target Hardware. This operation is the main focus of this work.

• Download Operation: allows the operator to download files from a Target Hardware. It comes in two variants:

  – Operator Defined Mode: the Target Hardware presents to the operator a list of files that can be downloaded, from which the operator chooses the ones to be downloaded.

  – Media Defined Mode: the Target Hardware is presented with a list of files that the operator wants to download, and the Target Hardware then sends them.

All operations are performed by exchanging protocol files over TFTP. The filenames for these files are of the form <THW_ID_POS>.<extension>, where THW_ID_POS is a string resulting from concatenating a Target Hardware's ID and its position. At Saab this string is not unique for each Target Hardware, and aside from using the right string for the right Target Hardware, its actual content is irrelevant in this context.

The Uploading Operation

After first deciding which Target Hardware to load, the operator has to decide what loadset is to be loaded. A loadset is defined by a file called LOADS.LUM and its content is specified in ARINC 665 [3]. The file is essentially just a list of what ARINC 665 designates as Header Files and their corresponding CRCs. The Header Files are also specified in ARINC 665 and they are in turn lists of Data Files (and their CRCs). It is these Data Files that make up the actual software that the operator wishes to load the Target Hardware with.

The ARINC 615A standard divides the Uploading Operation into the following three steps:

Initialization Step: During the Initialization Step the Data Loader requests a file called <THW_ID_POS>.LUI from the Target Hardware. This file tells the operator whether the Target Hardware supports the operation at all, whether it is ready for it, and what version of ARINC 615A it runs. If this step is successful, the List Transfer Step is performed.

List Transfer Step: The Data Loader starts off the List Transfer Step by waiting to receive a file called <THW_ID_POS>.LUS from the Target Hardware. This file is a status file and it is described in more detail later. Here it is mainly used to ensure that both the Data Loader and the Target Hardware are in a synchronized state.

Once the Data Loader has received the status file it sends the Target Hardware the <THW_ID_POS>.LUR file. This file is a list of filenames for Header Files and has been generated from the LOADS.LUM file described earlier.

Transfer Step: In the Transfer Step the Target Hardware asks the Data Loader for the files to be uploaded. It first asks for the Header Files listed in the <THW_ID_POS>.LUR file and then asks for the Data Files listed in each one of the Header Files. Once all files have been successfully transferred the Uploading Operation is complete.

A sequence diagram of the files exchanged during the Uploading Operation is shown in Figure 1.2.

As illustrated in Figure 1.2, the Target Hardware sends the status file <THW_ID_POS>.LUS in parallel with receiving Data Files from the Data Loader throughout the File Transfer Step. This happens periodically, with the delay between status files specified to be at most 3 seconds.
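As a rough sketch, the three steps can be expressed as the following control flow on the Data Loader's side. The helpers `tftp_read`, `tftp_write` and `recv_status`, and the dict-based file contents, are hypothetical stand-ins for a real TFTP client and ARINC 615A file parser, not part of the standard:

```python
def upload_operation(thw_id_pos, header_files, tftp_read, tftp_write, recv_status):
    # Initialization Step: fetch the .LUI file to check that the Target
    # Hardware supports the operation and is ready for it.
    lui = tftp_read(f"{thw_id_pos}.LUI")
    if not lui.get("ready"):
        raise RuntimeError("Target Hardware not ready for upload")

    # List Transfer Step: wait for the first .LUS status file to make sure
    # both sides are synchronized, then send the .LUR file listing the
    # Header Files (generated from LOADS.LUM).
    status = recv_status()
    tftp_write(f"{thw_id_pos}.LUR", "\n".join(header_files).encode())

    # Transfer Step: the Target Hardware now requests the Header Files and
    # the Data Files they list, while writing .LUS status files at most
    # 3 seconds apart; wait until it reports completion.
    while status.get("state") != "complete":
        status = recv_status()
    return status
```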


Figure 1.2: Sequence diagram of the Uploading Operation

The status file gives the Data Loader continuous updates on how the operation is progressing from the Target Hardware's perspective. It also serves as a heartbeat that allows the Target Hardware to indicate that it is working even though it might not currently be requesting files.

Limitations

Figure 1.2 shows that both the Data Load Application and the Target Hardware request to write and read files to and from each other during the Upload Operation. This means that the Data Loader and the Target Hardware both have to switch between acting as TFTP server and TFTP client.

For instance, during the List Transfer Step the Target Hardware acts as TFTP client by sending <THW_ID_POS>.LUS to the Data Loader acting as TFTP server. Upon finishing this file transfer, the Data Loader acts as a TFTP client and sends <THW_ID_POS>.LUR to the Target Hardware, now acting as a TFTP server.

Assuming the operator wishes to be able to perform operations on several Target Hardwares in parallel, this means that our implementation of the DLA needs to be able to run a TFTP server for each Target Hardware. This is made complicated by the fact that the ARINC 615A versions used in the Avionics Core define that the TFTP server in the Data Load Application must listen on port 59.

Using a single TFTP server to handle all the requests from the Target Hardwares would not work, since the filenames for the protocol files are not unique for each Target Hardware. In a scenario where the operator performs the Uploading Operation on two Target Hardwares simultaneously, both Target Hardwares would continuously send their status files, and the filenames would be the same in both cases. So one Target Hardware could very well end up overwriting the other one's files.

This problem is recognized by ARINC 615A-3 [4], which also provides a solution for it, described in Section 1.1.2. But since ARINC 615A-3 is not currently supported in the Avionics Core, this is a problem that has to be solved in some other way.

1.1.2 Trivial File Transfer Protocol

This section serves as a brief introduction to the Trivial File Transfer Protocol (TFTP) as it is defined in RFC1350 [18]. The TFTP Options extension is also detailed as defined in RFC2347 [11], and the Blocksize Option as defined in RFC2348 [12], as this option is utilized by the Target Hardwares supplied by Manufacturer A. Finally, some of the extensions made to TFTP in ARINC 615A are described.

TFTP is designed to be a very simple protocol for transferring files between machines on a network. It is designed to be implemented on top of the Internet User Datagram Protocol (UDP). The protocol only allows a client to read and write files from or to a remote server. It does not allow clients to list directories, or have any of the other features found in, for example, the regular File Transfer Protocol (FTP) [16].

There are three modes of transfer defined: netascii, octet and mail. In the octet mode the data is treated as raw 8-bit bytes; this is the mode used by ARINC 615A.

TFTP supports five types of packets: Read Request (RRQ), Write Request (WRQ), Data (DATA), Acknowledgement (ACK) and Error (ERROR). How these packets interact to allow a client to download and upload files to a TFTP server is depicted in Figure 1.3 and Figure 1.4.

Read Request and Write Request: To establish a transfer the client sends either a Read Request (RRQ) or a Write Request (WRQ) to the server. This request contains the filename for the file the client wishes to read or write, and the transfer mode the client wishes to use. The server responds by sending the first Data (DATA) packet if the request was a RRQ, or an Acknowledgement (ACK) packet if the request was a WRQ.

Data: The Data packet contains the actual data of the file to be transferred, along with a block number to identify it. After having sent a Data packet, the sender waits for the data to be acknowledged before sending the next segment of data. This results in the protocol working in a lock-step fashion.

Figure 1.3: TFTP download. Figure 1.4: TFTP upload.

The size of the data in a Data packet is defined to be 512 bytes. This number is called the blocksize. If the size is less than 512 bytes, this indicates the last Data packet of the transfer.

Acknowledgement: Upon receiving a Data packet, the receiver sends an Acknowledgement (ACK) packet. The ACK packet contains the same block number as that of the Data packet it acknowledges. The ACK packet is also used by the server to grant a WRQ, in that case with block number 0.

Error: If a request cannot be granted by the server, or any other error occurs during the transfer, then an Error (ERROR) packet is sent. This Error packet indicates what went wrong, such as the requested file not being found, the disk being full, etc.
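The five packet types share a simple wire format: a two-byte big-endian opcode followed by packet-specific fields, with strings NUL-terminated. The following encoders sketch this format as defined in RFC 1350; they are illustrative, not a complete TFTP implementation:

```python
import struct

# TFTP opcodes as defined in RFC 1350.
RRQ, WRQ, DATA, ACK, ERROR = 1, 2, 3, 4, 5

def encode_request(opcode, filename, mode="octet"):
    # RRQ/WRQ: opcode | filename | 0 | mode | 0
    return struct.pack("!H", opcode) + filename.encode() + b"\0" + mode.encode() + b"\0"

def encode_data(block, payload):
    # DATA: opcode 3 | 16-bit block number | data; a payload shorter than
    # the blocksize marks the final packet of the transfer.
    return struct.pack("!HH", DATA, block) + payload

def encode_ack(block):
    # ACK: opcode 4 | block number of the DATA packet being acknowledged
    # (block number 0 is used to grant a WRQ).
    return struct.pack("!HH", ACK, block)

def encode_error(code, message):
    # ERROR: opcode 5 | error code | human-readable message | 0
    return struct.pack("!HH", ERROR, code) + message.encode() + b"\0"
```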

TFTP Options:

TFTP Options is an extension to TFTP defined in RFC2347 [11].

Each option consists of two strings: an option name and an option value. These strings are appended to a WRQ or RRQ packet to be sent to a TFTP server. If the server supports and accepts one or more of the options suggested by the client, it responds with a new type of packet, the Option Acknowledgement (OACK), listing which options it accepts. If the server does not accept any of the options, or does not support RFC2347 [11] in general, it ignores the appended options and responds according to RFC1350 [18].

Blocksize Option

The Blocksize Option is defined in RFC2348 [12] and allows a client to suggest a larger blocksize for a file transfer than the default blocksize of 512 bytes defined by RFC1350 [18].
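The option mechanism can be sketched as follows, with the blksize option as the example. The encoding (name/value pairs of NUL-terminated strings appended after the mode field, answered by an OACK with opcode 6) follows RFC 2347 and RFC 2348; the code is a sketch, not a full implementation:

```python
import struct

OACK = 6  # Option Acknowledgement opcode from RFC 2347

def encode_rrq_with_options(filename, mode="octet", options=None):
    # A plain RRQ (opcode 1) with each option appended as a name/value
    # pair of NUL-terminated strings after the mode field.
    pkt = struct.pack("!H", 1) + filename.encode() + b"\0" + mode.encode() + b"\0"
    for name, value in (options or {}).items():
        pkt += name.encode() + b"\0" + str(value).encode() + b"\0"
    return pkt

def decode_oack(pkt):
    # OACK: opcode 6 followed by the accepted name/value pairs.
    assert struct.unpack("!H", pkt[:2])[0] == OACK
    parts = pkt[2:].split(b"\0")[:-1]
    return {parts[i].decode(): parts[i + 1].decode() for i in range(0, len(parts), 2)}
```

A client suggesting the 1200-byte blocksize used by the Manufacturer A Target Hardwares would, for example, send `encode_rrq_with_options("THW1.LUI", options={"blksize": 1200})`.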

Since TFTP works in a lock-step fashion, an increased blocksize results in a reduction of the number of packets sent. For example, when using a blocksize of 1024 instead of the default 512, both the number of Data packets sent and the number of ACK packets sent are halved. According to RFC2348 [12], increasing the blocksize to 1024 led to a 32% reduction in transfer times over the default blocksize.

From our experience with the Avionics Core, we observe that the Target Hardwares from Manufacturer A support the Blocksize Option and request a blocksize of 1200 during the Upload Operation.
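Because each Data packet is acknowledged before the next one is sent, the total number of packets follows directly from the file size and the blocksize. The following calculation sketches the saving; the 10 MiB file size is an arbitrary illustration, not a measured load from the Avionics Core:

```python
def tftp_packet_count(file_size, blocksize):
    # One DATA packet per full block plus one final short (possibly empty)
    # DATA packet to end the transfer, each matched by an ACK in lock-step.
    data_packets = file_size // blocksize + 1
    return 2 * data_packets

size = 10 * 1024 * 1024
print(tftp_packet_count(size, 512))   # 40962 packets with the default blocksize
print(tftp_packet_count(size, 1200))  # 17478 packets with the 1200-byte blocksize
```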

ARINC 615A additions to TFTP

This section goes through the additions to TFTP made by ARINC 615A.

Error Messages

ARINC 615A defines two types of errors of its own in addition to those defined in RFC1350: the WAIT Message and the ABORT Message.

The WAIT Message is only sent in response to an initial request (WRQ or RRQ) and is used to tell the client to back off for the period of time supplied with the message before trying again. This is to let the Target Hardware switch from one mode of operation to another, such as preparing itself to receive files.

The ABORT Message can be utilized by the Data Loader to tell the Target Hardware that it wishes to abort an operation currently in progress.

Port Option

The Port Option is exclusive to ARINC 615A-3 [4], and provides a solution to the problem of running multiple servers concurrently described earlier in Section 1.1.1.

Upon initiating an operation, the Data Loader appends the port it wishes the Target Hardware to communicate back on to its initial TFTP packet (the RRQ for <THW_ID_POS>.LUI in the case of the Upload Operation). This enables the loading application to run a TFTP server on a different port for every Target Hardware it wishes to perform operations on. Unfortunately, the Target Hardwares in the Avionics Core do not support ARINC 615A-3 [4], and thus do not support the Port Option. Therefore, the problem of several Target Hardwares using the same port has to be solved in a slightly different manner. The solution is described in Section 2.2.2. The Port Option has however still been implemented for use with the Client-Server Protocol described in Section 2.2.8.

1.2 Existing system

There are currently two different commercial loading applications in use to load the Target Hardwares that make up the Avionics Core. These loading applications run on a computer dedicated for this purpose, which is located in the same computer lab as the Avionics Core. Using ARINC 615A terminology, this computer is referred to as the Data Loader. The Data Loader can access the loadset to be loaded from a remote file server located outside of the computer lab.

Which loading application the operator must use depends on the manufacturer of the Target Hardware to be loaded. The two different applications are called Application A and Application B throughout this text, where Application A is used to load Target Hardwares from Manufacturer A and Application B is used to load the ones from Manufacturer B.

Aside from one application only being able to load the Target Hardwares from one manufacturer, there are some other limitations to both applications worth mentioning. Most importantly, only Application B supports the loading of multiple Target Hardwares in parallel. Application A does support parallel loading, but only when loading over a special type of computer network not used at Saab. Furthermore, Application B does not even fully implement ARINC 615A. For example, the Uploading Operation is supported, but neither of the two variants of the Download Operation is.

Since there is only one Data Loader, it is a shared resource between the many engineers developing new software for the Avionics Core. The engineers are divided into several teams, and all teams regularly need to load one or several of the Target Hardwares with their latest work to carry out testing. For this they must use the Data Loader. Other teams that wish to use the Data Loader will have to wait for their turn.

To control access to the Avionics Core, Saab uses a shared electronic calendar that is accessible by all engineers from their regular workplace. When an engineer or a team wishes to test their latest software, they place a booking in this calendar indicating which Target Hardwares they need and for how long. Then when the time arrives, they move over to the computer lab where the Avionics Core is located and, hopefully, find the Data Loader free to use. If it is not free, they have to wait for their turn.

This situation is illustrated in Figure 1.5.

1.2.1 Drawbacks

Since Application A is unable to load multiple Target Hardwares in parallel, there are scenarios where two teams are in the computer lab at the same time, wishing to load different Target Hardwares from Manufacturer A, but still have to take turns using the Data Loader.

Only being able to load these Target Hardwares sequentially also makes the process of loading very time consuming for a team that needs to load multiple Target Hardwares. This task is made even more complex if a team needs to load Target Hardwares from both manufacturers, and has to use the appropriate loading application depending on the manufacturer.

There are also some peculiarities in Application B that cause Application A to not work properly if Application B was not shut down properly prior to launching Application A, making it impossible to load Target Hardwares from Manufacturer A and B simultaneously.


Figure 1.5: The situation today

The Gantt chart in Figure 1.6 illustrates a theoretical situation where an operator wishes to perform the Uploading Operation on 4 Target Hardwares, two from each manufacturer. In this scenario, the operator first loads the Target Hardwares from Manufacturer B using Application B. A setup time of 10 seconds before the Uploading Operation can be started is assumed. The setup time represents the time it might take for the operator to select the Target Hardware to be loaded from a list and pick a loadset for it. Setup time is added for both Target Hardwares because, despite the fact that Application B allows the operator to load the Target Hardwares in parallel, the Uploading Operation still has to be started sequentially by the operator. The total time that the Uploading Operation takes is assumed to be 3 minutes for both Target Hardwares.

After the Loading Operation for MB_TH2 has finished, the operator switches to Application A, to be able to load the Target Hardwares from Manufacturer A. For this imaginary scenario, the switch is assumed to take 30 seconds. Once again a setup time of 10 seconds is assumed, but since Application A does not support parallel loading, the operator has to wait for the Uploading Operation being performed on MA_TH1 to finish before starting the one for MA_TH2. For both Target Hardwares from Manufacturer A, the time it takes to complete the Uploading Operation is assumed to be 8 minutes.


Chapter 2

Contribution

In this chapter a new system designed to eliminate the drawbacks of the existing system is introduced. Section 1.2 identified the following three main drawbacks of the existing system:

• Only one person can operate the Data Loader at a time.

• The operator has to use a different loading application depending on the manufacturer of the Target Hardware.

• The process of loading multiple Target Hardwares is slow, due to the current software not being able to load them in parallel.

To eliminate these drawbacks our system follows the client-server model, with a server that supports parallel loading regardless of the manufacturer. The users are presented with a client and no longer have to care about the manufacturer of the Target Hardwares they wish to load, nor wait for their turn to use the Data Loader. By enabling users to load all the Target Hardwares in parallel, the time required to load multiple Target Hardwares is considerably reduced.

2.1 Design

As noted in the introduction to this chapter, the new system makes use of the client-server model. The engineers run a client application through which they send instructions to a server. The server, which is located in the same computer lab as the Avionics Core, then carries them out on the Avionics Core. By using this model the engineers no longer have to make their way out to the computer lab in order to load a Target Hardware, but can do it from their regular workplace. The situation is illustrated in Figure 2.1.

The server is able to load Target Hardwares from both manufacturers, so the engineers do not have to care about the manufacturer of the Target Hardwares they wish to load.


Figure 2.1: Proposed system

Access to the Avionics Core is managed through a schedule held by the server. If engineers wish to load a Target Hardware, they first need to allocate it for some period of time in this schedule using their client. Assuming no other engineer has allocated the same Target Hardware for an overlapping period of time, the server is now willing to carry out the engineer's client instructions on it, such as loading it with a loadset pointed to by the engineer.

If engineers are done with a Target Hardware earlier than they allocated it for, they can deallocate it. This makes it available for other engineers to allocate.
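A minimal sketch of the overlap check such a schedule might perform. The data structures and function names are illustrative assumptions; the thesis does not prescribe an implementation at this point:

```python
def overlaps(start_a, end_a, start_b, end_b):
    # Two half-open intervals [start, end) overlap iff each starts
    # before the other ends.
    return start_a < end_b and start_b < end_a

def try_allocate(schedule, thw, start, end, engineer):
    # schedule maps a Target Hardware name (e.g. "MA_TH1") to its bookings.
    bookings = schedule.setdefault(thw, [])
    if any(overlaps(start, end, s, e) for s, e, _ in bookings):
        return False  # already allocated for an overlapping period
    bookings.append((start, end, engineer))
    return True

def deallocate(schedule, thw, engineer):
    # Drop the engineer's bookings, making the Target Hardware available.
    schedule[thw] = [b for b in schedule.get(thw, []) if b[2] != engineer]
```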

As explained in Section 1.2.1, it is not possible to load more than one Target Hardware from Manufacturer A at a time with the loading application used today. By making this possible our system allows multiple engineers to load Target Hardwares simultaneously without having to go through a shared resource such as the Data Loader in the system in use today.

The largest benefit from parallel loading, however, is seen when one engineer or a team wants to carry out for instance integration testing that requires the whole Avionics Core to be loaded. Instead of, as with the existing system, having to load all the Target Hardwares sequentially, they can with our system load them all in parallel.

2.2 Implementation

In this section the implementation of the system outlined in Section 2.1 is described. The main focus is how the server in this client-server architecture is implemented, as the client itself represents a low priority in this work.

Figure 2.2 shows a component diagram of the server in the suggested system. One can think of the application as consisting of a frontend and a backend, where the frontend is responsible for communicating with the clients and the backend for communicating with the Avionics Core.

Figure 2.2: Component diagram

2.2.1 Client-Server Application

The Client-Server Application (CSA) is the component responsible for communicating with the clients. Communication is done over the Client-Server Protocol (CSP), which defines the commands sent between client and server. It is the CSA that is responsible for carrying out the commands coming from the clients. The CSP is explained in more detail in Section 2.2.8.

2.2.2 Networking

The networking component implements TFTP and handles the underlying UDP communication. This component is used both by the CSA and the DLA, resulting in the protocol stacks for the server being as illustrated in Figure 2.3.


Figure 2.3: Protocol view

TFTP is implemented according to RFC1350 [18], as well as TFTP Options as described in RFC2347 [11] and the Blocksize Option as described in RFC2348 [12]. This implementation also supports the Port Option introduced with ARINC 615A-3 [4].

Furthermore, it provides a solution to the problem, described in Section 1.1.1, that arises when communicating with several Target Hardwares. To solve this, it introduces virtual TFTP servers. Instead of creating a single TFTP server that just listens on port 69, a virtual TFTP server is created for each Target Hardware by giving each of them a virtual UDP socket. A virtual socket is merely a queue used to hold UDP packets associated with the IP address of the Target Hardware. These queues are populated by a thread that has a regular socket listening on port 69. Upon receiving a datagram, this thread enqueues it in the right virtual socket based on the IP address of the sender. The concept is shown for three Target Hardwares in Figure 2.4.

Figure 2.4: Virtual UDP

The virtual UDP sockets are only used for incoming packets on port 69, i.e. TFTP WRQs and RRQs coming from the Target Hardwares. For the actual transfer of data, normal UDP sockets are used. Figure 2.5 shows the internals of the networking component and depicts how the TFTP layer utilizes both virtual UDP and OS-provided UDP.
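The virtual-socket idea can be sketched as follows. This is a minimal illustration, not the thesis code: the class and method names are invented, and only the demultiplexing of incoming port-69 requests into per-Target-Hardware queues is shown.

```python
import queue
import socket

TFTP_PORT = 69  # well-known TFTP port (RFC 1350)

class VirtualSocket:
    """A per-Target-Hardware queue standing in for a UDP socket."""
    def __init__(self):
        self._packets = queue.Queue()

    def put(self, datagram, addr):
        self._packets.put((datagram, addr))

    def recvfrom(self, timeout=None):
        """Blocks like socket.recvfrom(); returns (datagram, addr)."""
        return self._packets.get(timeout=timeout)

class Dispatcher:
    """Owns the one real socket bound to the TFTP port and routes
    incoming WRQs/RRQs to the virtual socket of the sending TH."""
    def __init__(self, bind_addr="0.0.0.0", port=TFTP_PORT):
        self._sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        self._sock.bind((bind_addr, port))
        self._virtual = {}  # sender IP -> VirtualSocket

    def register(self, th_ip):
        """Create the virtual TFTP server for one Target Hardware."""
        self._virtual[th_ip] = VirtualSocket()
        return self._virtual[th_ip]

    def _route(self, datagram, addr):
        vsock = self._virtual.get(addr[0])
        if vsock is not None:          # known Target Hardware
            vsock.put(datagram, addr)  # unknown senders are dropped

    def serve_forever(self):
        while True:
            self._route(*self._sock.recvfrom(65535))
```

Each DLA thread can then block on its own virtual socket exactly as it would on a real one, while a single OS socket receives all port-69 traffic.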


Figure 2.5: Networking component internals

2.2.3 Data Load Application

The Data Load Application (DLA) implements the Data Load Protocol (DLP) described in the ARINC 615A standards and is responsible for performing the actual operations on the Target Hardwares. It is initialized by the CSA, and for each Target Hardware in the Avionics Core a DLA thread is spawned, as illustrated in Figure 2.6. Each of the threads has a task queue associated with it in the Task Queue component.

Figure 2.6: DLA initialization

Each thread fetches tasks from its task queue and carries them out on its corresponding Target Hardware. If the queue is not empty, the Upload Operation step that the task corresponds to is carried out on the Target Hardware. If no task is found, the thread goes to sleep. The thread is awoken once a new operation (set of tasks) is enqueued for it. The general idea is illustrated in Figure 2.7.

Figure 2.7: The DLA for Target Hardware 1 fetching and carrying out tasks

The DLA currently only implements the Upload Operation of the DLP, but it could easily be extended to support additional operations in the future.
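The fetch/sleep/wake loop described above can be sketched with a blocking queue, where `Queue.get()` provides the "sleep until a task arrives" behaviour. The names and the representation of a task as a callable are assumptions made for illustration only.

```python
import queue
import threading

class DlaWorker(threading.Thread):
    """One worker per Target Hardware: fetches tasks from its queue
    and carries them out; Queue.get() blocks (the 'sleep') until a
    new task, i.e. a step of an operation, is enqueued."""
    def __init__(self, th_name, task_queue):
        super().__init__(daemon=True)
        self.th_name = th_name
        self.tasks = task_queue
        self.log = []  # record of executed steps, for illustration

    def run(self):
        while True:
            task = self.tasks.get()   # sleeps while the queue is empty
            if task is None:          # shutdown sentinel
                break
            task(self)                # carry out the step on the TH
            self.tasks.task_done()
```

Enqueuing the steps of an Upload Operation then automatically wakes the worker, with no explicit signalling needed.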

2.2.4 Target Hardware Allocator

The Target Hardware Allocator (THA) manages access control over the Avionics Core. It holds a schedule of which Target Hardwares are currently owned by which clients. If a client requests to have an operation carried out on a Target Hardware, the Client-Server Application (CSA) uses the THA to determine whether that request can be granted. For a request to be granted, the client must first have allocated the Target Hardware it wants to have operations carried out on.

A client asks for the allocation of a Target Hardware by issuing the Allocation Command to the CSA. The Allocation Command states which Target Hardware the client wishes to allocate and for what period of time. The CSA then asks the THA whether said Target Hardware is available for allocation during the time the client asked for. If the THA grants the allocation, i.e. no other client has allocated that Target Hardware for that period of time, the THA returns an Allocation ID to the CSA. The CSA then passes this back to the client to let it know the Allocation Command succeeded.

A client can also ask the CSA to deallocate an allocation by specifying the Target Hardware and Allocation ID via the Deallocation Command. The CSA then attempts to deallocate the allocation with the supplied Allocation ID in the THA. The attempt succeeds provided the Allocation ID exists in the schedule corresponding to the Target Hardware and is actually owned by that client.
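The allocation and deallocation checks can be sketched as an interval-overlap test over a per-Target-Hardware schedule. This is a simplified model: the names, the tuple layout, and the use of half-open time intervals are assumptions, and persistence to the schedule file is omitted.

```python
import itertools

class TargetHardwareAllocator:
    """Keeps, per Target Hardware, a schedule of (start, end, owner,
    id) allocations; grants a new allocation only if it overlaps
    none of the existing ones."""
    def __init__(self, target_hardwares):
        self._schedule = {th: [] for th in target_hardwares}
        self._next_id = itertools.count(1)

    def allocate(self, th, owner, start, end):
        """Returns an Allocation ID, or None if the slot is taken."""
        for (s, e, _, _) in self._schedule[th]:
            if start < e and s < end:   # half-open interval overlap
                return None
        alloc_id = next(self._next_id)
        self._schedule[th].append((start, end, owner, alloc_id))
        return alloc_id

    def deallocate(self, th, owner, alloc_id):
        """Succeeds only if the ID exists and is owned by `owner`."""
        for entry in self._schedule[th]:
            if entry[3] == alloc_id and entry[2] == owner:
                self._schedule[th].remove(entry)
                return True
        return False
```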

Furthermore, a client can request the schedule held by the THA by sending the CSA the Schedule command. The CSA asks the THA to write its schedule file and then passes the file on to the client who requested it. The schedule can then be used by the client to make an informed decision on when a Target Hardware is available for it to allocate.

The file format used for this file is displayed in Table A.3 under Section A.3. This file could possibly also be used to recover the schedule after a server crash. The file format is designed to allow for this, but the feature is yet to be implemented.

2.2.5 Task Queue

It is through the Task Queue that the frontend passes on instructions received from clients to the backend.

The Task Queue contains a queue of tasks for each Target Hardware in the Avionics Core, where a task corresponds to a step in an ARINC 615A operation. These tasks are created and enqueued by the CSA. For instance, when the CSA receives the instruction to load a Target Hardware from a client, it enqueues the steps the Upload Operation consists of in the right queue. The tasks are then fetched by the DLA and carried out on the Target Hardware.

The purpose of breaking the Uploading Operations into tasks and enqueuing them in queues is mainly to provide support for improvements that might be needed in the future. The circumstances under which these improvements might need to be made, and their nature, are discussed in more detail in Section 4.2.
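A minimal model of the Task Queue component might look like the following. The step names and the loadset path are placeholders invented for this sketch; the actual steps of the ARINC 615A Upload Operation are those described in Section 1.1.1.

```python
import queue

# Hypothetical step names; ARINC 615A defines the actual steps of
# the Upload Operation.
UPLOAD_STEPS = ("initialization", "list_transfer", "file_transfer")

class TaskQueues:
    """One FIFO of tasks per Target Hardware in the Avionics Core.
    The CSA enqueues; the DLA worker threads fetch."""
    def __init__(self, target_hardwares):
        self._queues = {th: queue.Queue() for th in target_hardwares}

    def enqueue_upload(self, th, loadset_path):
        """Break an Upload Operation into its steps and enqueue them
        on the queue belonging to the right Target Hardware."""
        for step in UPLOAD_STEPS:
            self._queues[th].put((step, loadset_path))

    def fetch(self, th):
        return self._queues[th].get()
```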

2.2.6 Client Handler

The Client Handler (CH) is responsible for keeping track of which clients are currently in session. It also keeps a queue for each client (in session or not) containing Client Messages.

Client Message

For instance, when the Data Load Application (DLA) performs an Uploading Operation, the Target Hardware sends the progress of the operation back in the form of the status file, as described in Section 1.1.1. In turn, the DLA passes this back to the Client-Server Application (CSA) through the Client Handler in the form of a Client Message. The CSA then passes the Client Message back to the client.

If a client ends its session in the middle of an operation, Client Messages may still be enqueued in the client's message queue for the client to receive upon starting a new session. This allows an engineer to request an operation to be carried out, close the client, and start it again at a later time to learn the result of the operation.
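The session-tracking and message-buffering behaviour can be sketched as below. The class and method names are invented for illustration, and the real component also feeds the per-client sender threads described in Section 2.2.8.

```python
from collections import defaultdict, deque

class ClientHandler:
    """Tracks which clients are in session and buffers Client
    Messages for every client, whether in session or not."""
    def __init__(self):
        self._active = set()
        self._messages = defaultdict(deque)

    def hello(self, username):
        self._active.add(username)

    def goodbye(self, username):
        # Ending the session keeps any queued messages for later.
        self._active.discard(username)

    def push_message(self, username, message):
        self._messages[username].append(message)

    def pop_messages(self, username):
        """Drain the queue; called by the per-client sender thread."""
        out = []
        while self._messages[username]:
            out.append(self._messages[username].popleft())
        return out
```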


2.2.7 File Handler

This component is responsible for everything file-system related, such as setting up and managing the folder structures used, extracting relevant data from, for instance, status files, and creating protocol files used both for the Data Load Protocol and the Client-Server Protocol.

The File Handler wraps some platform-specific system calls, such as calls for directory creation. It also implements CRC checking for both the CRC32 and CRC16 checksums defined by ARINC 665-3 [3], which are used to check the integrity of a loadset.

2.2.8 Client-Server Protocol

The Client-Server Protocol is a simple protocol implemented on top of TFTP. It consists of 7 commands: Hello, Goodbye, Schedule, Allocation, Deallocation, Upload and Message. Each command has a file associated with it, and to issue a command this file is transferred either from client to server or vice versa. The Message command is the only command the server carries out on the client; the others are only carried out on the server by the client. In order to allow both client and server to carry out commands on each other, both need to run a TFTP client as well as a TFTP server. The sequence diagram in Figure 2.8 shows how these commands interact from a client-server perspective when a client wishes to perform the Uploading Operation on a Target Hardware.

In this case the client first sends the Hello command to register itself with the server. It then requests the schedule so the person running the client can determine when the Target Hardware he or she wants to use is available. The client then allocates the Target Hardware for a suitable period of time and issues the Upload command. The server starts the Uploading Operation on the Target Hardware when the time arrives. The server passes the status of the ongoing Uploading Operation back to the client throughout the whole operation through the Message command. The Message command is also used to indicate whether or not an allocation request or upload request was granted. Once the operation is complete, the client might issue the Goodbye command, or ask the server to perform some other operation. Note that only one Upload Operation is being performed in Figure 2.8 and that it is possible for the client to start, for instance, the Uploading Operation on some other Target Hardware simultaneously. The following sections describe the commands in more detail.
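Since every CSP command is identified by the name of the transferred file, the server side can dispatch on that name. The sketch below is illustrative only: the handler bodies are stubs, and in the real server they interact with the THA, Task Queue and Client Handler components.

```python
class CspServer:
    """Dispatches incoming CSP command files to handlers by name.
    MESSAGE.CSP is deliberately absent: it is the one command the
    server carries out on the client, not the other way around."""
    COMMANDS = {"HELLO.CSP", "GOODBYE.CSP", "SCHEDULE.CSP",
                "ALLOCATION.CSP", "DEALLOCATION.CSP", "UPLOAD.CSP"}

    def dispatch(self, filename, payload, client_addr):
        """Route an incoming file to the matching command handler."""
        if filename not in self.COMMANDS:
            raise ValueError("unknown CSP command file: " + filename)
        handler = getattr(self, "on_" + filename.split(".")[0].lower())
        return handler(payload, client_addr)

    # Stub handlers; the real ones parse the file formats of
    # Appendix A and talk to the other server components.
    def on_hello(self, payload, addr): return "session started"
    def on_goodbye(self, payload, addr): return "session ended"
    def on_schedule(self, payload, addr): return "schedule sent"
    def on_allocation(self, payload, addr): return "allocation handled"
    def on_deallocation(self, payload, addr): return "deallocation handled"
    def on_upload(self, payload, addr): return "upload queued"
```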

Hello Command

The Hello command is used by a client to introduce itself to the server. Clients issue this command by sending the server the file HELLO.CSP. This file contains the username of the user running the client; the full file format used for HELLO.CSP is found in Section A.1.


Figure 2.8: Client-server communication during an Uploading Operation

The file is sent with the TFTP Port Option described in Section 1.1.2. The port set in the Port Option is the port on which the client's TFTP server is listening. This TFTP server in the client allows the CSP server to pass messages back to it through the Message command. By utilizing the Port Option, the port on which a client listens does not need to be fixed. The client can ask its operating system for whatever port is available and then tell the server that it wants Message commands back on that port. This also allows multiple clients to be run from the same computer if desired.

Upon having received HELLO.CSP, the client is said to be in session and is marked as active in the Client Handler. The CSA launches a thread for that client which is responsible for passing Client Messages back to it. As long as the client is marked as active, the thread fetches Client Messages from the client's queue in the Client Handler and passes these back to the client through the Message command.
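A TFTP request carrying an option such as the Port Option follows the RFC 2347 packet layout: a 2-byte opcode, then NUL-terminated filename, mode, and option name/value pairs. The sketch below builds such a WRQ; the option name "port" and its exact encoding under ARINC 615A-3 are assumptions made for illustration.

```python
import struct

OP_WRQ = 2  # TFTP write request opcode (RFC 1350)

def build_wrq(filename, port, mode="octet"):
    """Build a TFTP WRQ carrying a 'port' option in the RFC 2347
    option format: opcode | filename 0 | mode 0 | opt 0 | value 0.
    A blksize option (RFC 2348) could be appended the same way."""
    parts = [struct.pack("!H", OP_WRQ),
             filename.encode("ascii"), b"\x00",
             mode.encode("ascii"), b"\x00",
             b"port", b"\x00",
             str(port).encode("ascii"), b"\x00"]
    return b"".join(parts)
```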

Goodbye Command

By sending the file GOODBYE.CSP, the client instructs the CSP server that it wishes to end the session. The file format used for this file is found in Section A.2. Upon receiving this file, the CSA no longer attempts to pass Client Messages back to the client.

The client is marked as inactive in the Client Handler, and the thread responsible for passing Client Messages back to the client is terminated. If the client sends this file while an Upload Operation it has requested is being carried out on the Avionics Core, Client Messages are still produced but are stored in the client's queue in the Client Handler. These are then sent back to the client the next time it starts a new session. This allows a user to start an Uploading Operation from one computer, close the client, move to some other computer, relaunch the client, and from that computer learn how the operation turned out.

Schedule Command

A client sends a request for the file SCHEDULE.CSP. Upon receiving the request for this file, the server asks the THA to write the schedule for each Target Hardware to a file and then transfers it to the client. The engineer running the client can then use these schedules to learn when a Target Hardware he or she needs is available for use, and use this information for the Allocation command.

See Section A.3 for information regarding the file format used to represent the schedules held by the THA.

Allocation Command

When an engineer has decided on the time period for which he or she wishes to allocate a Target Hardware, he or she sends the Allocation command to the server. This is done by sending the file ALLOCATION.CSP to the CSA. This file tells the server which Target Hardware the client wishes to allocate and for what time span. The file also contains the username of the user running the client, so the server is able to tell who owns the allocation. For details regarding the file format used for ALLOCATION.CSP, see Section A.4.

The server allocates the Target Hardware for the client, assuming it is available for said time span. The CSA responds by creating a Client Message indicating whether or not the request was accepted. This Client Message is then transferred back to the client by the responsible thread through the Message command.

Deallocation Command

If an engineer realizes he or she will not need a Target Hardware for all the time it has been allocated, the engineer sends the Deallocation command to the CSA by sending it the file DEALLOCATION.CSP. This file tells the server which allocation the engineer wishes to deallocate, for which Target Hardware, and the username of the engineer issuing the request. Further details of the file format used for DEALLOCATION.CSP can be found in Section A.5.

Upon receiving the DEALLOCATION.CSP file, the CSA asks the THA to deallocate the allocation. This operation succeeds provided that such an allocation exists and that it is owned by the engineer who issued the request.

The CSA responds by creating a Client Message indicating whether or not the request was accepted, which eventually results in the Message command being carried out on the client.

Upload Command

The client asks the server to perform an ARINC 615A Uploading Operation on a Target Hardware by sending the file UPLOAD.CSP to the CSA. This file tells the server which Target Hardware is to be loaded, at what time, and the path to the software to be loaded. The file also contains the username of the user making the request. See Section A.6 for more information regarding the file format used for UPLOAD.CSP.

If the server finds that the client does indeed own the Target Hardware it wishes to load at the specified time, the CSA creates the tasks associated with the Upload Operation and enqueues them in the Task Queue.

The CSA responds by creating a Client Message indicating whether or not the request was accepted, which eventually results in the Message command being carried out.

Message Command

To indicate failures, the status of an ongoing ARINC 615A operation, or whether or not a request made by the client was accepted, the CSA issues the Message command on the client. It does this by sending the file MESSAGE.CSP, which is basically just the Client Message described in Section 2.2.6 written to file.

The file format for this file is detailed in Section A.7.

Security Concerns

Allocation in the THA is determined only by the username of the person running the client making the allocation. The username is supplied by the client, and the server trusts the information it receives from the client. Thus, it is entirely possible for an attacker to send the server a DEALLOCATION.CSP file containing the username of some other user and have that user's allocation deallocated. This scenario has, however, been deemed so unlikely that the current approach is sufficient for the time being.


Chapter 3

Experimental Evaluation

This chapter presents how well our new system performs compared to the system in use today. Section 3.1 describes the experimental setup, and the actual comparison is made in Section 3.2.

3.1 Experimental Setup

To test how well our new system fares compared to the one in use by Saab today, as much of the Avionics Core as possible is loaded with a recent loadset using both approaches. This scenario corresponds to situations where engineers need to test an integration build, requiring them to load many of the Target Hardwares.

When testing the existing software the computer referred to as the Data Loader in Section 1.2 is used. The same computer is also used to run the server in our new system, with a client connecting from another computer found in the computer lab.

The statistics presented in this chapter come from Wireshark, a network analysis tool that allows us to capture all the packets sent and received, as well as produce statistics over them.

When loading one or more Target Hardwares, a Wireshark capture of all packets sent and received during the loading process is started on the aforementioned computer. Statistics over the total elapsed time, total number of bytes sent or received, etc. can then be produced from the capture file. It should be noted that the total number of bytes sent and received during loading of a Target Hardware will always be slightly larger than the total number of bytes in the loadset used. This is because Wireshark includes all the packet headers surrounding the data being transferred in its statistics, and because there is some overhead from the ARINC 615A protocol files being transferred. In our case that includes headers for TFTP, UDP and Ethernet for each packet, as well as the list of Header Files transferred to the Target Hardware in every Upload Operation.

It is also possible to filter packets based on, for instance, the IP address of the receiver and sender. This makes it possible to single out the packets belonging to the Upload Operation performed on a specific Target Hardware and produce statistics over those packets alone. Such a filter is used to obtain statistics for each Target Hardware when loading in parallel.

Unfortunately, it has been difficult to ensure that the full Avionics Core is operational when carrying out tests, as it is itself still under development. The results presented reflect what was available at the time of these tests. This means our new system was only tested against 7 out of the 10 currently available Target Hardwares (4 from Manufacturer A and 3 from Manufacturer B) in the Avionics Core. Furthermore, the Avionics Core is a heavily trafficked shared resource central to Gripen development. In combination with how long it takes to load the slowest Target Hardwares, this has made it difficult for us to carry out multiple runs of our tests. Consequently, this is reflected in the data presented in the following sections.

3.2 Performance Results

This section presents the results obtained when loading the Avionics Core. Section 3.2.1 presents the results obtained when using the Blocksize Option and compares them to not using it. Section 3.2.2 presents the results obtained when loading using the existing software and when using our new system, and compares the two.

3.2.1 Blocksize Option

This section presents the results obtained when testing the impact of using the TFTP Blocksize Option discussed in Section 1.1.2. Recall from Section 1.1.2 that it is the TFTP client that proposes the options that are to be used for a file transfer. Since the Target Hardware acts as the client during the File Transfer Step described in Section 1.1.1 (the Target Hardware requests files from the Data Loader), it is up to the Target Hardware to propose a blocksize to the Data Loader. Wireshark logs taken while loading with the existing software show that the Target Hardwares from Manufacturer A propose a blocksize of 1200 bytes, while the Target Hardwares from Manufacturer B do not seem to implement the Blocksize Option at all.

In order to test the impact of using the Blocksize Option, Target Hardware 2 from Manufacturer A is loaded. First, a run is performed where the Target Hardware is loaded while ignoring the blocksize it proposes, resulting in the default blocksize of 512 bytes being used for the file transfers. Then a run where the suggested blocksize of 1200 bytes is accepted is performed, and the results are compared to those from the previous run.

The loadset used for this test is described in Table 3.1.

Target Hardware    Number of Header Files    Number of Data Files    Total loadset size (bytes)
MA_TH2             8                         52                      16 165 896

Table 3.1: Loadset used when testing the impact of the Blocksize Option

Target Hardware    Data Transferred (bytes)    Time Elapsed (s)    Avg. bit rate (Mbit/s)
MA_TH2             19 672 191                  453.119             0.347

Table 3.2: Loading MA_TH2 using the default blocksize of 512 bytes

Target Hardware    Data Transferred (bytes)    Time Elapsed (s)    Avg. bit rate (Mbit/s)
MA_TH2             17 758 832                  447.905             0.317

Table 3.3: Loading MA_TH2 using a blocksize of 1200 bytes

Tables 3.2 and 3.3 show the results from the two runs. When comparing the total data transferred in the two runs, less data is transferred when using the proposed blocksize of 1200 bytes: approximately 1.9 MB (or 9%) less than with the default blocksize of 512 bytes. This is expected, as was touched upon in Section 1.1.2. A larger blocksize reduces not only the number of TFTP Data packets that have to be sent, but also the number of ACKs the Target Hardware has to send back to the Data Loader.
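A back-of-the-envelope model reproduces this difference. Assuming every DATA packet carries Ethernet, IP, UDP and TFTP headers and is answered by a minimum-size (60-byte) ACK frame, and ignoring protocol files, per-file final short packets and retransmissions, the model predicts roughly 19.5 MB on the wire at a blocksize of 512 bytes and 17.6 MB at 1200 bytes for the 16 165 896-byte loadset, close to the measured 19.67 MB and 17.76 MB.

```python
import math

ETH, IP, UDP, TFTP_HDR = 14, 20, 8, 4  # per-packet header bytes
MIN_FRAME = 60                         # minimum Ethernet frame (no FCS)

def bytes_on_wire(loadset_bytes, blocksize):
    """Rough estimate of total captured bytes for a TFTP transfer:
    each DATA packet carries `blocksize` payload plus headers, and
    each DATA packet is answered by a padded 60-byte ACK frame."""
    data_packets = math.ceil(loadset_bytes / blocksize)
    data_frames = loadset_bytes + data_packets * (ETH + IP + UDP + TFTP_HDR)
    ack_frames = data_packets * MIN_FRAME
    return data_frames + ack_frames
```

The predicted saving, `bytes_on_wire(16165896, 512) - bytes_on_wire(16165896, 1200)`, comes out at about 1.9 MB, in line with the difference between Tables 3.2 and 3.3.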

What is perhaps more surprising is that a similar reduction is not seen in the total time it takes to carry out the runs. The loading is only approximately 5 seconds (or 1%) faster when using the larger blocksize, and the average bit rate is actually lower with the larger blocksize. This is unexpected and not at all in line with what RFC 2348 [12] suggests. In Section 4.1 this is explained by showing that most of the time spent on loading is not spent on the actual file transfers.

To give an idea of how an increased blocksize impacts the actual file transfers during the Uploading Operation, the speeds of transferring the 5 largest files in the loadset are compared between the two blocksizes. This data was obtained through Wireshark and has been tabulated in Tables 3.4 and 3.5.

Tables 3.4 and 3.5 have been compiled into the diagram shown in Figure 3.1. As the figure suggests, increasing the blocksize from 512 bytes to 1200 bytes has a huge impact: the bit rate is nearly doubled. Increasing the blocksize thus clearly makes a difference for the actual file transfers.


File Number    Data Transferred (bytes)    Time Elapsed (s)    Average bit rate (Mbit/s)
1              2 346 648                   1.6167              11.6
2              971 742                     0.6145              12.7
3              971 734                     0.6345              12.3
4              971 734                     0.7015              11.1
5              971 718                     0.6099              12.7
Total          6 233 576                   4.1771              11.9

Table 3.4: Data over the file transfers for the 5 largest files using a blocksize of 512 bytes

File Number    Data Transferred (bytes)    Time Elapsed (s)    Average bit rate (Mbit/s)
1              2 235 799                   0.8924              20.0
2              925 891                     0.3427              21.6
3              925 883                     0.3341              22.2
4              925 883                     0.3349              22.1
5              925 821                     0.3102              23.9
Total          5 939 277                   2.2143              21.5

Table 3.5: Data over the file transfers for the 5 largest files using a blocksize of 1200 bytes


3.2.2 Comparison with the existing system

In order to compare our new system with the one already in use by Saab today, 7 of the Target Hardwares in the Avionics Core are loaded using both systems. The loadsets used for these test runs are tabulated in Table 3.6.

Target Hardware    Number of Header Files    Number of Data Files    Total loadset size (bytes)
MA_TH1             8                         49                      16 292 015
MA_TH2             8                         55                      19 807 617
MA_TH3             4                         29                      19 761 064
MA_TH5             4                         22                      10 831 895
MB_TH1             1                         4                       22 204 446
MB_TH2             1                         4                       22 205 390
MB_TH3             1                         4                       22 205 054
Total              31                        196                     153 068 545

Table 3.6: Loadsets

In general, the loadsets for Manufacturer A's Target Hardwares are smaller in size but contain more files than the loadsets belonging to the Target Hardwares from Manufacturer B. All Data Files in the loadsets for Manufacturer B's Target Hardwares belong to a single Header File, while the Data Files for Manufacturer A's Target Hardwares are split among several Header Files.

Existing System

The data obtained when loading with the existing applications is tabulated in Tables 3.7 and 3.8. Application A was used for loading the Target Hardwares from Manufacturer A and Application B for the ones from Manufacturer B. The Target Hardwares from Manufacturer B were loaded in parallel, since Application B supports this.

The total elapsed time in Table 3.7 was obtained by adding the time it took to load each individual Target Hardware.

The total elapsed time in Table 3.8 is the time it takes to load the 3 Target Hardwares in parallel. This time is slightly larger than the time it takes for the slowest Target Hardware to complete. This is because Application B requires the user to start each Upload Operation sequentially, i.e. the user first selects MB_TH1, picks the loadset for it, starts the Upload Operation, and then moves on to do the same for the other Target Hardwares that are to be loaded.

In Table 3.9 the total elapsed times from Tables 3.7 and 3.8 have been added together.


Target Hardware    Data Transferred (bytes)    Time Elapsed (s)    Avg. bit rate (Mbit/s)
MA_TH1             17 887 539                  438.418             0.326
MA_TH2             21 730 081                  482.825             0.360
MA_TH3             21 580 546                  297.384             0.580
MA_TH5             11 842 704                  209.702             0.452
Total              73 040 870                  1 428.329           0.409

Table 3.7: Loading 4 Target Hardwares from Manufacturer A sequentially using Application A

Target Hardware    Data Transferred (bytes)    Time Elapsed (s)    Avg. bit rate (Mbit/s)
MB_TH1             26 811 842                  106.772             2.01
MB_TH2             26 814 243                  127.527             1.68
MB_TH3             26 814 299                  129.159             1.66
Total              80 440 632                  142.108             4.53

Table 3.8: Loading 3 Target Hardwares from Manufacturer B in parallel using Application B

Manufacturer      Data Transferred (bytes)    Time Elapsed (s)    Avg. bit rate (Mbit/s)
Manufacturer A    73 040 870                  1 428.329           0.409
Manufacturer B    80 440 632                  142.108             4.53
Total             153 481 502                 1 570.437           0.782

Table 3.9: Loading 7 Target Hardwares using the software in use by Saab today

The most interesting statistic in Table 3.9 is the total time elapsed, which is what is used when comparing with our new approach. It should be noted that this figure does not take into account the time it takes to set up Application A for loading each individual Target Hardware, the time it takes to switch between Application A and Application B, etc. So in reality, the overall process of loading 7 Target Hardwares takes slightly longer.

New System

The statistics obtained when loading using our new system are shown in Table 3.10. This data was obtained by running a Wireshark capture while loading all 7 Target Hardwares in parallel. The data for each individual Target Hardware was then filtered out in Wireshark.


Target Hardware    Data Transferred (bytes)    Time Elapsed (s)    Avg. bit rate (Mbit/s)
MA_TH1             17 895 286                  440.621             0.325
MA_TH2             21 738 686                  483.748             0.360
MA_TH3             21 578 182                  298.323             0.579
MA_TH5             11 840 297                  211.161             0.449
MB_TH1             26 813 321                  128.543             1.669
MB_TH2             26 814 726                  127.927             1.677
MB_TH3             26 814 782                  138.266             1.551
Total              153 495 791                 483.804             2.538

Table 3.10: Loading 7 Target Hardwares in parallel using the new system

The total elapsed time was obtained by measuring the duration from the first packet captured by Wireshark to the last packet captured. As Table 3.10 indicates, this time is determined by how long it takes to load MA_TH2.

Looking at the data for individual Target Hardwares, MA_TH3 stands out. It is considerably faster than MA_TH2, despite being loaded with a loadset of similar structure and size and coming from the same manufacturer. This is because this Target Hardware runs a newer version of the software on its end, which is discussed in more detail in Section 4.1.

Comparison

When comparing our new system to the old one, a comparison is first made on a per-Target-Hardware basis. The bar chart in Figure 3.2 uses the data from Tables 3.7, 3.8 and 3.10 to compare the time it takes to load each individual Target Hardware with the old and new systems.

As Figure 3.2 illustrates, the new system is able to match the old one quite well in this regard, despite loading all 7 Target Hardwares in parallel. What stands out in Figure 3.2 is MB_TH1. Here the old system is considerably faster than the new system; the new approach is approximately 20% slower than the old one. To investigate why, the individual file transfers performed while loading MB_TH1 are examined. The loadset for MB_TH1 contains 4 Data Files, and when using the new system the transfers of 3 of these are slower compared to the old system. Knowing this, the response times for transferring the largest of these Data Files can be compared between the two approaches. Figure 3.3 shows the response times for this file transfer with both the old and new systems, as plotted by Wireshark.

Figure 3.2: Per Target Hardware comparison between old system and new system

Figure 3.3: Response times for the old system (left) and the new system (right). The higher lines are the average delay between DATA and ACK, and the lower lines the average delay between ACK and DATA

Recall that TFTP works in a lock-step fashion: each data packet sent has to be acknowledged before a new one can be sent. The blue lines (higher) in Figure 3.3 show the average time it takes before an ACK packet is received after a data packet has been sent during the file transfer, for both systems. The red lines (lower) show the average time it takes to send a new data packet after an ACK packet has been received. As illustrated, the average response time from the Target Hardware is higher when using our new system than when using the old system, while the average response time from our new system is actually lower than that of the old system. From the graphs, the average response time from the Target Hardware can be estimated to be approximately 0.75 ms higher when using the new system. The file being transferred is quite large, resulting in approximately 25,000 data packets being sent. With an extra 0.75 ms for each data packet, the file transfer takes approximately 18.75 seconds longer. These longer response times explain why loading with the new system is slower. It is, however, impossible to explain why the Target Hardware suddenly responds more slowly: the Target Hardware is a black box, and there is nothing that can be done to improve its response times. But all this means very little when looking at the bigger picture.

The most interesting metric when comparing the old and new systems is still how long it takes to load all 7 Target Hardwares. Figure 3.4 compares the total elapsed times for the two approaches using the data presented in Tables 3.9 and 3.10.

Figure 3.4: Total comparison between old system and new system

As Figure 3.4 shows, the new system outperforms the old one by a wide margin. By not having to load the Target Hardwares from Manufacturer A sequentially, a tremendous amount of time is gained.


Chapter 4

Discussion

One thing not touched upon when presenting the performance data in Section 3.2 is how slow loading is in general. This matter is discussed in Section 4.1, along with a scenario in which our solution would become inefficient. In Section 4.2 we discuss how one could improve performance under such circumstances, and also look into how the Avionics Core booking system employed today could be improved in the future.

4.1 Overall performance

When loading with the old system, an average bit rate higher than 2.01 Mbit/s is never reached for Target Hardwares from Manufacturer B. Target Hardwares from Manufacturer A are even slower, never reaching average bit rates higher than 0.580 Mbit/s. When loading 7 Target Hardwares in parallel with the new system, the average bit rate is a mere 0.754 Mbit/s. Analyzing the IO graphs produced with Wireshark helps to understand why only such relatively low bit rates are reached.

The IO graphs in Figures 4.1 and 4.2 are for MA_TH1 and MB_TH1, taken when the 7 Target Hardwares were loaded with the new system. As it is the Target Hardware that requests files from the Data Loader during the Upload Operation, the IO graphs look similar when loading with the old system.

The IO graph in Figure 4.1 shows that the Uploading Operation consists of 6 IO bursts with a lot of idle time in between them. Recall from Section 1.1.1 that the Target Hardware first requests a list of Header Files and then requests the listed Header Files. It then requests the Data Files associated with each Header File. The list of Header Files and the actual Header Files are too small to make an appearance in the IO graph, but each IO burst corresponds to the transfer of the Data Files associated with one Header File. The loadset used had 8 Header Files in it, but 2 of these have only one small Data File associated with them, which is why only 6 IO bursts appear in the graph.

Figure 4.1: IO graph over the Uploading Operation performed on MA_TH1
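The request order described above can be sketched as follows (the file names and the helper function are purely illustrative, not taken from the ARINC 615A standard):

```python
# Sketch of the order in which the Target Hardware requests files during
# the Uploading Operation, as described in Section 1.1.1.
def request_order(loadset):
    """loadset maps each Header File name to its list of Data File names."""
    requests = ["header_file_list"]     # 1. the list of Header Files
    requests += list(loadset)           # 2. each listed Header File
    for data_files in loadset.values():
        requests += data_files          # 3. the Data Files per Header File
    return requests

# A toy loadset with two Header Files (the real loadsets here had 8):
loadset = {"part1.LUH": ["part1_a.bin", "part1_b.bin"],
           "part2.LUH": ["part2_a.bin"]}
print(request_order(loadset))
```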

The most significant feature of Figure 4.1, however, is how much time is spent doing seemingly nothing. Status files are still sent by the Target Hardware periodically (every 3 seconds) during these gaps of inactivity, but these are too small to be visible in the graph. Manufacturer A has explained that this has to do with slow writing to flash memory in the Target Hardware; i.e. when the Target Hardware has received the Data Files associated with a Header File, it does not request new files until it has finished writing the newly received files to flash memory.

As Figure 4.1 illustrates, some relatively high speeds are reached, up to 10 Mbit/s, so the actual file transfers are not that slow compared to the average bit rate for the entire Uploading Operation. Because of the slow flash writes, however, the average bit rate for this Uploading Operation ends up at only 0.325 Mbit/s.
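A rough way to quantify how dominant the idle time is: comparing the burst speed against the overall average gives the fraction of time actually spent transferring (a crude estimate that ignores protocol overhead):

```python
burst_rate_mbit = 10.0      # approximate peak rate during file transfers
average_rate_mbit = 0.325   # average rate over the whole Uploading Operation

active_fraction = average_rate_mbit / burst_rate_mbit
print(round(active_fraction * 100, 2))  # → 3.25 (% of time spent transferring)
```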

Figure 4.2: IO graph over the Uploading Operation performed on MB_TH1

Figure 4.2 shows the IO graph for MB_TH1. This graph has very different characteristics from that in Figure 4.1, which comes from the loadset having a different structure. The loadset for MB_TH1 consists of only 1 Header File and only 4 Data Files, but these Data Files are significantly larger than those of MA_TH1. It has not been possible to test the Target Hardwares from Manufacturer B with loadsets similar to those for Manufacturer A, or vice versa, so it is impossible to say whether these Target Hardwares have the same problem with slow flash writes in between bursts of Data Files. They do, however, spend considerable time idling after having received the last Data File before indicating back to the Data Loader that the operation is complete.

In any case, the file transfers for Manufacturer B's Target Hardwares fail to reach speeds as high as those for Manufacturer A. And as was touched upon when comparing the old and new systems, the response times can vary greatly, which can have a significant impact on the total time required to perform the Uploading Operation.

Furthermore, the Target Hardwares from Manufacturer A have a delay of 2 seconds between finishing a file transfer and requesting a new file. This is a significant amount of time spent idle, especially considering that the loadsets for these Target Hardwares often consist of 50 or more files. A 2-second delay between each of them means that 100 seconds throughout the Uploading Operation are unused from the perspective of the Data Loader. It is unknown what purpose this delay serves for the Target Hardware, but removing it could hypothetically decrease load times significantly. The Target Hardwares from Manufacturer B show a similar delay in between files, but since their loadsets contain fewer files it has less of an impact on the overall load times.
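The arithmetic behind the 100-second figure is trivial but worth making explicit:

```python
# Idle time caused by the fixed 2-second pause between file transfers
# observed for Manufacturer A's Target Hardwares.
delay_per_file_s = 2   # pause before requesting each new file
num_files = 50         # typical loadset size for these Target Hardwares

idle_total_s = delay_per_file_s * num_files
print(idle_total_s)  # → 100 seconds of idle time over the Uploading Operation
```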

Estimating exactly how much time is spent idle is problematic. Wireshark can be used to calculate the time difference between captured packets, but summing these up would also include the delays between the Data Loader sending a Data packet and the Target Hardware responding with an ACK. Therefore, in order to capture only the delays coming from the 2-second pauses between Data File transfers, and the 3-second intervals between the status files sent between IO bursts, only the delays larger than 100 ms are added to the sum. Table 4.1 shows these sums for the loading of MA_TH1 and MB_TH1.

Target Hardware   Time Wasted (s)   Total Elapsed Time (s)   Ratio
MA_TH1            429.193           440.621                  97%
MB_TH1            50.983            128.543                  40%

Table 4.1: Time wasted when loading MA_TH1 and MB_TH1

As Table 4.1 shows, during the vast majority of the Uploading Operation the Target Hardware from Manufacturer A does something other than receiving files from the Data Loader. The file transfers are relatively fast compared to those for Manufacturer B, but in the end this makes little difference because of the delays in between files and the slow flash writes. It is because of all this wasted time that so little gain is seen from using a larger blocksize: the file transfers get faster, but the vast majority of time is still spent waiting for the Target Hardware to finish writing to flash and request new files. The Target Hardwares from Manufacturer B are slightly more efficient, but still 40% of the Uploading Operation is spent not actually transferring files.
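The thresholded summation used to produce Table 4.1 could be scripted along the following lines (a sketch only; the timestamp list stands in for packet capture times exported from Wireshark):

```python
# Sum only the inter-packet gaps larger than 100 ms, so that ordinary
# Data/ACK turnaround delays are excluded from the "wasted time" total.
THRESHOLD_S = 0.100

def wasted_time(timestamps):
    """Sum of gaps between consecutive packet timestamps above the threshold."""
    return sum(
        later - earlier
        for earlier, later in zip(timestamps, timestamps[1:])
        if later - earlier > THRESHOLD_S
    )

# Toy capture: a 2 s file-request delay and a 3 s status-file gap count
# towards the total, while millisecond-scale ACK turnarounds do not.
ts = [0.000, 0.004, 0.008, 2.008, 2.012, 5.012]
print(round(wasted_time(ts), 3))  # → 5.0
```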

As was mentioned in Section 3.2.2, Target Hardware 3 from Manufacturer A runs a newer version of its loading software. An IO graph for this Target Hardware is shown in Figure 4.3. It is unknown exactly how this Target Hardware's loading software differs from that used by the other Target Hardwares from Manufacturer A, but judging from the IO graph it seems able to receive more files before having to write them to flash memory. The Uploading Operation is, however, still very slow overall.

Figure 4.3: IO graph for MA_TH3

Furthermore, from the graph in Figure 4.1 it is possible to imagine situations where our solution becomes inefficient. If the files being transferred were to increase in size, and if the Target Hardwares were able to download them from the Data Loader at a higher bit rate, we could have a situation where the Target Hardwares would compete for the bandwidth of the Data Loader during the file transfers, while also having their moments of inactivity overlapping. The bandwidth becomes a shared resource to which access needs to be scheduled, in order to minimize the number of overlapping file transfer bursts and moments of idleness.

4.2 Related Work

In this section we look into how the performance of the system could be improved under the circumstances imagined in Section 4.1. We then discuss the wider problem of the Avionics Core being a heavily trafficked resource, and how the Avionics Core booking system employed today could be improved.
