
Linköping University

Department of Computer and Information Science

Final Thesis

Usage of databases in ARINC 653-compatible

real-time systems

by

Martin Fri

Jon Börjesson

LIU-IDA/LITH-EX-A--10/010--SE

2010-05-10

Supervisor:

Magnus Vesterlund

Saab Aerosystems

Examiner:

Professor Simin Nadjm-Tehrani

Department of Computer and Information Science


Abstract

The Integrated Modular Avionics architecture, IMA, provides means for running multiple safety-critical applications on the same hardware. ARINC 653 is a specification for this kind of architecture: a specification for space and time partitioning in safety-critical real-time operating systems, intended to ensure each application's integrity. This Master's thesis describes how databases can be implemented and used in an ARINC 653 system. The addressed issues are interpartition communication, deadlocks and database storage. Two alternative embedded databases are integrated in an IMA system to be accessed from multiple clients in different partitions. Performance benchmarking was used to study the differences in terms of throughput, number of simultaneous clients, and scheduling. The databases implemented and benchmarked are SQLite and Raima. The studies indicated a clear speed advantage in favor of SQLite, when Raima was integrated using the ODBC interface. Both databases perform quite well and seem to be good enough for usage in embedded systems. However, since neither SQLite nor Raima has any real-time support, their usage in safety-critical systems is limited. The testing was performed in a simulated environment, which makes the results somewhat unreliable. To validate the benchmark results, further studies must be performed, preferably in a real target environment.

Keywords: ARINC 653, INTEGRATED MODULAR AVIONICS, EMBEDDED DATABASES, SAFETY-CRITICAL, REAL-TIME OPERATING SYSTEM, VXWORKS


Contents

1 Introduction
  1.1 Background
  1.2 Purpose
  1.3 Problem description
    1.3.1 Objectives
    1.3.2 Method
    1.3.3 Limitations
  1.4 Document structure

2 Background
  2.1 Safety-critical airplane systems
    2.1.1 DO-178B
  2.2 Avionics architecture
    2.2.1 Federated architecture
    2.2.2 IMA architecture
  2.3 ARINC 653
    2.3.1 Part 1 - Required Services
    2.3.2 Part 2 and 3 - Extended Services and Test Compliance
  2.4 VxWorks
    2.4.1 Configuration and building
    2.4.2 Configuration record
    2.4.3 System image
    2.4.4 Memory
    2.4.5 Partitions and partition OSes
    2.4.6 Port protocol
    2.4.7 Simulator
  2.5 Databases
    2.5.1 ODBC
    2.5.2 MySQL
    2.5.3 SQLite
    2.5.4 Mimer SQL
    2.5.5 Raima

3 System design and implementation
  3.1 Architecture overview
    3.1.1 Interpartition communication design
  3.2 Database modules
    3.2.1 Client version
    3.2.2 Server version
    3.2.3 Server and Client interaction - Protocol
  3.3 Transmission module
    3.3.1 Ports
    3.3.2 Transmission protocol
  3.4 Databases
    3.4.1 Filesystem
    3.4.2 SQLite specific
    3.4.3 RAIMA specific
  3.5 Database adapters
    3.5.1 Interface
    3.5.2 SQLite adapter
    3.5.3 Raima adapter
    3.5.4 Query result

4 Benchmarking
  4.1 Environment
    4.1.1 Simulator
    4.1.2 Variables
    4.1.3 Measurement
  4.2 Benchmark graphs
    4.2.1 SQLite Insert
    4.2.2 SQLite task scheduling
    4.2.3 SQLite select
    4.2.4 Raima select
  4.3 Benchmark graphs analysis
    4.3.1 Deviation
    4.3.2 Average calculation issues
    4.3.3 Five clients top
    4.3.4 Scaling

5 Comparisons between SQLite and Raima
  5.1 Insert performance
  5.2 Update performance
  5.3 Select performance
  5.4 No primary key
  5.5 Task scheduling
  5.6 Response sizes
  5.7 Summary

6 Discussion and Conclusion
  6.1 Performance
    6.1.1 Database comparison
    6.1.2 Scalability
    6.1.3 Measurement issues
  6.2 Design and implementation
  6.3 Usage and Certification
  6.4 Future work

A Benchmark graphs
  A.1 Variables
  A.2 SQLite
    A.2.1 Insert
    A.2.2 Update
    A.2.3 Select
    A.2.4 Alternate task scheduling
    A.2.5 No primary key
    A.2.6 Large response sizes
  A.3 Raima
    A.3.1 Insert
    A.3.2 Update
    A.3.3 Select
    A.3.4 Alternate task scheduling
    A.3.5 No primary key

List of Figures

1.1 An ARINC 653 system.

2.1 A federated architecture and an Integrated Modular Avionics architecture. [6]
2.2 One cycle using the round robin partition scheduling. [6]
2.3 VxWorks memory allocation example. [11]

3.1 System design.
3.2 Channel design used in this system.
3.3 Statement, connection and environment relations.
3.4 Flowchart describing the SQLFetch routine.
3.5 Flowchart of a task in the database server module.
3.6 A connection deadlock caused by Sender Block port protocol.
3.7 Filesystem structure.

4.1 Average inserts processed during one timeslot for different number of client partitions.
4.2 Average number of inserts processed during one timeslot of various length.
4.3 Average selects processed during one timeslot for different numbers of client partitions and timeslot lengths. Task scheduling used is Yield only.
4.4 Average selects processed during one timeslot for different numbers of client partitions.
4.5 Average selects processed during one timeslot for different number of client partitions. The lines represent the average processed queries using different timeslot lengths.
4.6 With one client, the server manages to process all 1024 queries in one time frame.
4.7 With two clients, the server cannot process 2*1024 queries in two timeslots due to context switches. An extra time frame is required.
4.8 The average processing speed is faster than 1024 queries per timeslot, but it is not fast enough to earn an entire timeslot.

5.1 Comparison between average insert values in SQLite and Raima. Timeslots used in the graphs are 50 ms and 100 ms.
5.2 Comparison between average update values in SQLite and Raima. Timeslot lengths are 50 ms and 100 ms.
5.3 Comparison between average select values in SQLite and Raima. Timeslot lengths are 50 ms and 100 ms.
5.4 Comparison between average select values in SQLite and Raima with and without primary key. Timeslot length is 50 ms.
5.5 Comparison between average select queries using different task schedules in SQLite and Raima. Timeslot length is 50 ms.
5.6 Comparison between average selects with and without sorting in SQLite and Raima. The resulting rowsets are of size 128 rows and timeslot length is 50 ms.
5.7 Comparison between single row select and 128 rows select queries in SQLite and Raima. Average values are shown in the graph with a timeslot length of 50 ms.

A.1 Average inserts processed during one timeslot for different number of client partitions.
A.2 Average no. inserts processed during one timeslot of various length.
A.3 Maximum inserts processed during one timeslot for different number of client partitions.
A.4 Maximum inserts processed for various timeslot lengths.
A.5 Average updates processed during one timeslot for different number of client partitions.
A.6 Average no. updates processed during one timeslot of various length.
A.7 Maximum updates processed during one timeslot for different number of client partitions.
A.8 Maximum updates processed for various timeslot lengths.
A.9 Average selects processed during one timeslot for different numbers of client partitions.
A.10 Average no. selects processed during one timeslot of various length.
A.11 Maximum selects processed during one timeslot for different numbers of client partitions.
A.12 Maximum no. selects processed during one timeslot of various length.
A.13 Average selects processed during one timeslot for different numbers of client partitions and timeslot lengths.
A.14 Maximum selects processed during one timeslot for different numbers of client partitions and timeslot lengths.
A.15 Average selects processed during one timeslot for different numbers of client partitions and timeslot lengths.
A.16 Average no. selects processed during one timeslot of various length.
A.17 Maximum selects processed during one timeslot for different numbers of client partitions and timeslot lengths.
A.18 Maximum no. selects processed during one timeslot of various length.
A.19 Average and maximum processed select queries. These selects ...
A.20 Average and maximum processed selects are displayed. Each query asks for a 128 rows response. Results are sorted in an ascending order by an unindexed column.
A.21 Average inserts processed during one timeslot for different number of client partitions.
A.22 Average inserts processed for various timeslot lengths.
A.23 Maximum inserts processed during one timeslot for different number of client partitions.
A.24 Maximum inserts processed for various timeslot lengths.
A.25 Average number of updates processed during one timeslot for different number of client partitions.
A.26 Average number of updates processed for various timeslot lengths.
A.27 Maximum updates processed during one timeslot for different number of client partitions.
A.28 Maximum updates processed for various timeslot lengths.
A.29 Average selects processed during one timeslot for different number of client partitions.
A.30 Average selects for various timeslot lengths.
A.31 Maximum selects processed during one timeslot for different number of client partitions.
A.32 Maximum selects processed for various timeslot lengths.
A.33 Average selects processed during one timeslot for different number of client partitions.
A.34 Maximum selects processed during one timeslot for different number of client partitions. The lines represent the maximum processed queries using different timeslot lengths.
A.35 Average selects processed during one timeslot for different number of client partitions. No primary key is used.
A.36 Average selects processed for various timeslot lengths. No primary key is used.
A.37 Maximum selects processed during one timeslot for different number of client partitions. No primary key is used.
A.38 Maximum selects processed for various timeslot lengths. No primary key is used.
A.39 Average and maximum processed selects are displayed. Each query asks for 128 rows.
A.40 Average and maximum processed selects are displayed. Each query asks for 128 rows. Results are sorted in an ascending order by a non-indexed column.


Chapter 1

Introduction

This document is a Master's thesis report conducted by two final year Computer Science and Engineering students. It corresponds to 60 ECTS, 30 ECTS each. The work has been carried out at Saab Aerosystems in Linköping and examined at the Department of Computer and Information Science at Linköping University.

The report starts with a description of the thesis and some background information. Following this, the system design and work results are described. Finally, the document ends with an analysis and a discussion and conclusion section.

1.1 Background

Safety-critical aircraft systems often run on a single computer to prevent memory inconsistency and to ensure that real-time deadlines are met. If multiple applications are used on the same hardware they may affect each other's memory or time constraints. However, a need for multiple applications to share hardware has arisen. This is mostly due to the fact that modern aircraft are full of electronics; no space is left for new systems. One solution to this problem is to use an Integrated Modular Avionics, IMA, architecture. The IMA architecture provides means for running multiple safety-critical applications on the same hardware. The usage of Integrated Modular Avionics is an increasing trend among aircraft manufacturers and Saab Aerosystems is no exception.

ARINC 653 is a software specification for space and time partitioning in the IMA architecture. Each application is run inside its own partition which isolates it from other applications. Real-time operating systems implementing this standard will be able to cope with the memory and real-time constraints. This increases the flexibility but also introduces problems such as how to communicate between partitions and how the partition scheduling will influence the design. A brief overview of an ARINC 653 system can be seen in figure 1.1.

1.2 Purpose

As described in the previous section, the trend in avionics software has shifted towards Integrated Modular Avionics systems. ARINC 653 is a specification for this kind of system and is used in many modern aircraft [8]. Saab is interested in knowing how databases can be used for sharing data among partitions of an ARINC 653 system. Therefore the purpose of this Master's thesis is to implement different databases in an ARINC 653-compliant system and study their behavior.

Figure 1.1: An ARINC 653 system.

1.3 Problem description

This thesis studies how databases can be embedded in an ARINC 653 environment. Saab Aerosystems has not previously used databases together with ARINC 653 and now wants to know what such a system is capable of. There will be multiple applications of various safety-criticality levels that may want to use the database services. The database itself would not contain safety-critical data. The problem to solve is how to implement databases in an ARINC 653-compliant system and how to make them communicate with applications.

Important aspects to consider are design and configuration for efficient database usage, performance implications, and the usefulness of this kind of ARINC 653 system.

1.3.1 Objectives

Goals and objectives for this Master's thesis:

• Port and integrate alternative databases within an ARINC 653 partition.

• Implement a database API for communication between the applications and the database that hides the ARINC layer from the application and the database.

• Find the system's performance thresholds in terms of throughput, number of simultaneous clients, and scheduling. Then evaluate the use of databases in an ARINC 653 system.

1.3.2 Method

The workload will be divided into two parts since there are two participating students in this thesis.


Martin's part is focused on ARINC 653 and the interpartition communication. The goal of this part is to abstract the underlying ARINC 653 layer and provide a database API to an application developer. The application developer shall not notice that the system is partitioned; he/she should be able to use the database as usual. This part is responsible for the communication between the application and the database.

Jon's part is to study databases and how to port them to the ARINC 653 system. This includes choosing two databases and motivating the choices. One of the databases shall be an open source database. The goal of Jon's part is to port the chosen databases and make them work in an ARINC 653-compatible real-time operating system. This requires, for instance, an implementation of a file system and one database adapter per database.

Testing and benchmarking of the implemented system has been done by both Martin and Jon.

1.3.3 Limitations

Limitations for this project are:

• VxWorks shall be the ARINC 653-compatible real-time operating system. Saab already uses this operating system in some of their projects.

• The implementation will primarily be tested in a simulated environment. Tests in a target environment are subject to additional time.

1.4 Document structure

A short description of the document structure is provided below.

Chapter 1 Contains an introduction to the Master's thesis.

Chapter 2 Essential background information needed to understand the thesis.

Chapter 3 The design of the implemented system is described here.

Chapter 4 Contains benchmarking graphs and their analyses.

Chapter 5 Contains a comparison between the chosen databases.


Chapter 2

Background

This section will provide the reader with essential background information.

2.1 Safety-critical airplane systems

Systems embedded in airplanes are safety-critical software. A software failure in such a system can cause events ranging from minor to catastrophic.

When developing safety-critical software for aircraft, some special issues arise. Since aircraft software is a type of embedded system, there are limitations on the memory available. This forces the developer to use special techniques to ensure that memory boundaries are not overflowed.

Aircraft software often also consists of safety-critical real-time systems; in these systems timing and reliability are large concerns. First, deadlines for important tasks should never be missed. This forces the system to have some kind of scheduling that ensures that deadlines are not overrun. Secondly, safety-critical real-time systems must be reliable. It would not do if, for example, an aircraft or a respirator suddenly stopped working. This issue leads to strict requirements on not only the software but also the hardware. Hardware for this kind of system must be closely monitored and tested before it is allowed to be used. These requirements lead to hardware that is many years old and has poor performance compared to modern, high-performance hardware. [3] [5] [7]

2.1.1 DO-178B

DO-178B, Software Considerations in Airborne Systems and Equipment Certification, is an industry-accepted guidance on how to satisfy airworthiness requirements for aircraft. It focuses only on the software engineering methods and processes used when developing aircraft software and not on the actual result. This means that if an airplane is certified with DO-178B, you know that the process for developing an airworthy aircraft has been followed, but you do not really know if the airplane can actually fly. A system with a poor requirement specification can go through the entire product development life cycle and fulfill all of the DO-178B requirements. However, the only thing you know about the result is that it fulfills this poor requirement specification. In other words, bad input gives bad, but certified, output. [3] [5]

Failure categories

Failures in an airborne system can be categorized into five different types according to the DO-178B document:

Catastrophic A failure that will prevent continued safe flight and landing. Results of a catastrophic failure are multiple fatalities among the occupants and probably the loss of the aircraft.

Hazardous / Severe-Major A failure that would reduce the capabilities of the aircraft or the ability of the crew to handle more difficult operating conditions to the extent of:

• Large reduction of safety margins or functional capabilities.

• Physical distress or a higher workload, such that the crew could not be relied upon to perform their tasks accurately or completely.

• Increased effects on occupants, including serious or potentially fatal injuries to a small number of the occupants.

Major A failure that would reduce the capabilities of the aircraft and the ability of the crew to do their work, to the extent of:

• Reduction of safety margins or functional capabilities.

• Significantly increased workload for the crew.

• Possible discomfort of the occupants including injuries.

Minor A failure that would not significantly reduce aircraft safety, and the crew's workload remains within their capabilities. Minor failures may include a slight reduction of safety margins and functional capabilities, a slight increase in workload or some inconvenience for the occupants.

No Effect A failure of this type would not affect the operational capabilities of the aircraft or increase crew workload. [3] [4]

Software safety levels

Software in airplanes has different safety levels depending on what kind of failures it can cause. These are, according to the DO-178B document:

Level A Software of level A that doesn’t work as intended may cause a failure of catastrophic type.

Level B Software of level B that doesn’t work as intended may cause a failure of Hazardous/Severe-Major type.


Level C Software of level C that doesn’t work as intended may cause a failure of Major type.

Level D Software of level D that doesn’t work as intended may cause a failure of Minor type.

Level E Software of level E that doesn't work as intended may cause a failure of No Effect type. [3] [4]

2.2 Avionics architecture

There are two main types of architectures used in aircraft avionics: the traditionally used federated architecture, and the newer Integrated Modular Avionics architecture. See figure 2.1.


Figure 2.1: A federated architecture and an Integrated Modular Avionics architecture. [6]

2.2.1 Federated architecture

Federated architecture is the traditional approach to implementing avionics systems. This approach uses distributed standalone systems which run on their own separate hardware. It is a well-known approach that has been used for a long time.

One advantage of a federated architecture is that many experienced developers are available. These people have worked with this kind of architecture for a long time and they know where the pitfalls are. Another advantage is that since each system is separate, both hardware- and software-wise, the verification and validation processes for these systems are easy. This saves the company a lot of time and money, since aircraft certification is extremely expensive.

The disadvantage of this type of architecture is that each standalone system has a specific function. This requires that each system be specifically developed in terms of hardware and software. Reuse of software and hardware components is very rare. Federated architectures also require more power control mechanisms, because each separate system must have its own power system. This means more electronics, which leads to higher power consumption and increased weight.

In today's airplanes weight and space are extremely limited. They are so full of electronics that it is hard to upgrade with new functionality. This, together with the other disadvantages of the federated approach, forced the developers to find a new solution: Integrated Modular Avionics.

Advantages of a federated architecture are:

• Traditionally used

• "Easy" to certify

Drawbacks of a federated architecture are:

• Hard to reuse software and hardware

• More hardware needed, which increases weight and space required

• High power consumption

[5] [7] [8] [9]

2.2.2 IMA architecture

The Integrated Modular Avionics architecture allows multiple applications to run on a single hardware platform. This works almost like a regular desktop computer, in which different applications get access to resources like the CPU. The largest concern with this kind of system is to enforce the memory and real-time constraints. In today's aircraft only a few systems use IMA approaches, but the trend is going towards more systems designed using this architecture.

One big advantage of the IMA architecture is that it is relatively cheap to make changes to the system. This is because developers are able to design modular programs and reuse software and hardware components. Compared to the federated architecture, the IMA approach uses hardware a lot more efficiently: multiple software systems share the same hardware. This decreases the amount of hardware needed, which also leads to less heat generation and lower power consumption. Another advantage of the IMA architecture is that specially developed hardware is not needed. Instead it is possible to use Commercial Off The Shelf, COTS, products. Since COTS products are a lot cheaper than specifically developed hardware, this possibility makes companies more interested in IMA.

However, there are some drawbacks with the IMA architecture. One major drawback is that the design of an IMA system must be quite complex. Complex systems are tougher to develop and they require more time and experienced personnel. The personnel issue is another drawback: the IMA architecture is a quite new way to develop systems and therefore there is a lack of experienced people within this area. However, this is already changing fast because more systems are switching to IMA architectures. Certification is both a drawback and an advantage. The first certification of a system is expensive due to the complex nature of IMA systems. But when this has been done and the platform, which is the real-time operating system, RTOS, and the functions provided by the module OS, is certified, new modules are easy to certify.

Advantages of an IMA architecture are:

• Effective use of hardware

• Reuse of software and hardware

• COTS, Commercial Off The Shelf, products can be used

Drawbacks of an IMA architecture are:

• Complex design needed

• Complex platform certification

[5] [7] [8] [9]

2.3 ARINC 653

ARINC 653 is a software specification for space and time partitioning in the IMA architecture. Real-time operating systems implementing this standard will be able to cope with the memory and real-time constraints. ARINC is an abbreviation of Aeronautical Radio, Inc., the company that developed the standard.

ARINC 653's general goal is to provide an interface, as well as to specify the API behaviour, between the operating system and the application software.

In ARINC 653, applications run in independent partitions which are scheduled by the RTOS. Since each application runs independently of other applications, it is possible for the applications to have different safety levels; e.g. ARINC 653 supports DO-178B level A-E systems on the same processor. For more information see section 2.1.1. [1] [7]

Specification details

The ARINC 653 specification consists of three parts:

• Part 1: Required Services

• Part 2: Extended Services

• Part 3: Test Compliance

The required services part defines the minimum functionality provided by the RTOS to the applications. This is considered to be the industry standard interface. The extended services define optional functionality while the test compliance part defines how to establish compliance to part 1. [1]

Currently, there is no RTOS that supports all three parts. In fact, none even fully supports part 2 or 3. The existing RTOSs supporting ARINC 653 only support part 1. Some RTOSs have implemented some services defined in part 2, extended services, but these solutions are vendor-specific and do not fully comply with the specification. [5]

2.3.1 Part 1 - Required Services

ARINC 653 specification part 1, required services, describes how the most important core functionality of real-time operating systems should work to comply with the specification. The system functionality is divided into six categories:

• Partition management

• Process management

• Time management

• Memory allocation

• Interpartition communication

• Intrapartition communication

There is also a section about the fault handler called Health Monitor. [1]

Partition management

Partitions are an important part of the ARINC 653 specification. They are used to separate the memory spaces of applications so each application has its own "private" memory space.

Scheduling Partitions are scheduled in a strictly deterministic way: they have a fixed cyclic schedule maintained by the core OS. A cyclic schedule works as a loop; the scheduled parts are repeated in the same way forever, as shown in figure 2.2. The repeating cycle is called a major cycle and it is defined as a multiple of the least common multiple of all partition periods in the module. Inside the major cycle each partition is assigned to one or more specified timeslots, called minor frames or minor cycles. No priorities are set on the partitions, as the specified scheduling order cannot change during runtime. It is the system integrator's job to set the order, frequency and length of each partition's frame. As frequency implies, a partition can be assigned to more than one frame in each major cycle. The configuration is loaded during system initialization and cannot change during runtime.


Figure 2.2: One cycle using the round robin partition scheduling. [6]
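To make the major cycle rule concrete, the following C sketch (illustrative only; the partition periods are hypothetical, not taken from the thesis) computes the least common multiple of a set of partition periods, i.e. the smallest value the major frame can be a multiple of:

    #include <stdio.h>

    /* Greatest common divisor via Euclid's algorithm. */
    static unsigned gcd(unsigned a, unsigned b)
    {
        while (b != 0) {
            unsigned t = b;
            b = a % b;
            a = t;
        }
        return a;
    }

    int main(void)
    {
        /* Hypothetical partition periods in milliseconds. */
        unsigned periods[] = { 200, 150, 250, 200 };
        unsigned lcm = periods[0];
        int i;

        for (i = 1; i < 4; i++)
            lcm = lcm / gcd(lcm, periods[i]) * periods[i];

        printf("The major frame must be a multiple of %u ms\n", lcm);
        return 0;
    }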

Modes A partition can be in four different modes: IDLE, NORMAL, COLDSTART and WARMSTART.

IDLE When in this mode, the partition is not initialized and it is not executing any processes. However, the partition’s assigned timeslots are unchanged.


NORMAL All processes have been created and are ready to run.

COLDSTART In this mode, the partition is in the middle of the initialization process.

WARMSTART In this mode, the partition is in the initialization phase. The difference between WARMSTART and COLDSTART is that WARMSTART is used when some things in the initialization phase do not need to be done. E.g. if the RAM memory is still intact after some kind of failure, the partition will start in WARMSTART mode. [1]

Process management

Inside a partition resides an application, which consists of one or more processes. A process has its own stack, priority, and deadline. The processes in a partition run concurrently and can be scheduled in both a periodic and an aperiodic way. It is the partition OS that is responsible for controlling the processes inside its partition.

States A process can be in four different states: Dormant, Ready, Running and Waiting.

Dormant Cannot receive resources. Processes are in this state before they are started and after they have been terminated.

Ready Ready to be executed.

Running The process is assigned to a processor. Only one process can be in running state at the same time.

Waiting Not allowed to use resources because the process is waiting for an event. E.g. waiting on a delay.

Scheduling Process scheduling is controlled by the partition operating system and uses a priority preemptive scheduling method. Priority preemptive scheduling means that the controlling OS can at any time stop the execution of the current process in favor of a higher-priority process which is in the ready state. If two processes have the same priority during a rescheduling event, the scheduler chooses the oldest process in the ready state to be executed.

Time management

Time management is extremely important in ARINC 653 systems. One of the main points of ARINC 653 is that systems can be constructed so applications will be able to run before their deadlines. This is possible since partition scheduling is, as already mentioned, time deterministic. Time deterministic scheduling means that the time each partition will be assigned to a CPU is already known. This knowledge can be used to predict the system's behavior and thereby create systems that will fulfill their deadline requirements. [1]

Memory allocation

An application can only use memory in its own partition. This memory allocation is defined during configuration and initialized during start-up. There are no memory routines specified in the core OS that can be called during runtime. [1]

Interpartition communication

The interpartition communication category contains definitions for how to communicate between different partitions in the same core module. Communication between different core modules and to external devices is also covered.

Interpartition communication is performed by message passing: a message of finite length is sent from a single source to one or more destinations. This is done through ports and channels.

Ports and Channels A channel is a logical link between a source and one or more destinations. The channel also defines the transfer mode of the messages. To access a channel, partitions need to use ports, which work as access points. Ports can be of source or destination type and they allow partitions to send or receive messages to/from another partition through the channel. A source port can be connected to one or more destination ports. Each port has its own buffer and a message queue, both of predefined sizes. Data which is larger than this buffer size must be fragmented before sending and then merged when receiving.

All channels and all ports must be configured by the system integrator before execution. It is not possible to change these during runtime.
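As an illustration of how a partition uses a statically configured port, here is a minimal sketch based on the APEX services from ARINC 653 Part 1. The port name and sizes are hypothetical and would have to match the channel configuration; the header name and exact type spellings vary between RTOS vendors:

    #include "apexLib.h"   /* APEX services; the actual header is vendor-specific */

    void send_example(void)
    {
        QUEUING_PORT_ID_TYPE port;
        RETURN_CODE_TYPE     ret;
        char                 msg[] = "query";

        /* Name, message size, queue depth and direction must agree with
           the channel configuration loaded at system initialization. */
        CREATE_QUEUING_PORT("CLIENT_TX",   /* hypothetical port name    */
                            64,            /* max message size in bytes */
                            16,            /* max number of messages    */
                            SOURCE,        /* this end sends            */
                            FIFO,          /* queuing discipline        */
                            &port, &ret);
        if (ret != NO_ERROR)
            return;

        /* Send one message, blocking at most one second
           (APEX time-outs are expressed in nanoseconds). */
        SEND_QUEUING_MESSAGE(port, (MESSAGE_ADDR_TYPE)msg, sizeof(msg),
                             1000000000LL, &ret);
    }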

Transfer modes There are two transfer modes available to choose from when configuring a channel: sampling mode and queuing mode.

Sampling mode In sampling mode, messages are overwritten at the port buffer which means that only the latest, most up to date, value remains in the buffer. No queuing is performed in this mode.

The send routine for a sampling port will overwrite the sending port's buffer with the new value and then send the value directly. At the receiving end the message is copied into the receiving port's buffer, overwriting the previous value.

Queuing mode In queuing mode, messages are stored in a queue at the port. The size of this queue and its element sizes are configured before execution. A message sent from the source partition is buffered in the port's message queue while waiting to be transmitted by the channel. At the receiving end, incoming messages are stored in the destination port's message queue until its partition receives the message.

Message queues in the queuing mode follow the First In First Out, FIFO, protocol. This allows ports to indicate overflow occurrences. However, it is the application's job to manage overflow situations and make sure no messages are lost. [1]

Intrapartition communication

Intrapartition communication is about how to communicate within a partition. This can also be called interprocess communication because it is about how processes communicate with each other. Mechanisms defined here are buffers, blackboards, semaphores and events.

Buffers and Blackboards Buffers and blackboards work like a protected data area that only one process can access at a given time. Buffers can store multiple messages in FIFO queues, which are configured before execution, while blackboards have only one message slot but have the advantage of immediate updates. Processes waiting for the buffer or blackboard are queued in FIFO or priority order. Overall, buffers and blackboards have quite a lot in common with the queuing and sampling modes, respectively, in the interpartition communication section.

Semaphores and Events Semaphores and events are used for process synchronization. Semaphores control access to shared resources while events coordinate the control flow between processes.

Semaphores in ARINC 653 are counting semaphores and they are used to control partition resources. The count represents the number of resources available. A process has to wait on a semaphore before accessing a resource, and when finished the semaphore must be released. If multiple processes wait for the same semaphore, they will be queued in FIFO or priority order depending on the configuration.

Events are used to notify other processes of special occurrences and they consist of a bi-valued state variable and a set of waiting processes. The state variable can be either "up" or "down". A process calling the event "up" will put all processes in the waiting set into the ready state. A process calling the event "down" will be put into the waiting set. [1]
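As a sketch of how the counting semaphore services look to a process, again using the APEX names from Part 1 (the semaphore name and values are hypothetical):

    #include "apexLib.h"   /* vendor-specific APEX header */

    void semaphore_example(void)
    {
        SEMAPHORE_ID_TYPE sem;
        RETURN_CODE_TYPE  ret;

        /* Current value 1 and maximum value 1 gives a binary semaphore;
           waiting processes are queued in FIFO order. */
        CREATE_SEMAPHORE("DB_LOCK", 1, 1, FIFO, &sem, &ret);
        if (ret != NO_ERROR)
            return;

        WAIT_SEMAPHORE(sem, INFINITE_TIME_VALUE, &ret);   /* acquire */
        /* ... access the shared partition resource ... */
        SIGNAL_SEMAPHORE(sem, &ret);                      /* release */
    }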

Health Monitor

Fault handling in ARINC 653 is performed by an OS function called the Health Monitor (HM). The health monitor is responsible for finding and reporting faults and failures.

Errors that are found have different levels depending on where they occurred. The levels are process level, partition level and OS level. Responses and actions taken are different depending on which error level the failure has and what has been set in the configuration. [1] [10]

2.3.2 Part 2 and 3 - Extended Services and Test Compliance

Part 2, extended services, contains specifications for:

• File Systems

• Sampling Port Data Structures

• Multiple Module Schedules

• Logbook System

• Sampling Port Extensions

• Service Access Points

The only relevant topic among these is file systems. However, this file system specification will not be used in this Master's thesis; see section 3.4.1 for more information. [1] [2]

Part 3 of the ARINC 653 specification defines how to test part 1, required services. This is out of the scope of this Master's thesis.

2.4 VxWorks

VxWorks 653 is an ARINC 653-compatible real-time operating system. This section mostly consists of information regarding VxWorks' configuration. Details below are described in the VxWorks 653 Configuration and Build Guide 2.2 [11].

2.4.1 Configuration and building

VxWorks 653's configuration system is designed to minimize dependencies between applications, partition OSes and other components. This makes it possible to certify components separately. The configuration system is also designed to support incremental building of a module, meaning that the number of files that need to be rebuilt when a change is made is reduced. The configuration system also facilitates development of different components of the module by different developers.

2.4.2 Configuration record

A module's configuration record resides as a record in the module's payload. It is read by the core OS at boot time and at run-time to configure the module.

The configuration record is built as a separate binary file, independent from other parts of the module. This means that memory and resource information must be given in the configuration file for the module and that resources are pre-allocated for each application, shared library and shared data region. Since the configuration record is a separate component of the module, it can be certified separately.

2.4.3 System image

When the module parts are linked, the applications and shared libraries must conform to the allocated resources in the configuration record. After linking the configuration record binary with all other components, a bootable system image can be generated.

The build system is made up of makefiles and is designed to support building of separate components. To be able to link all components together into a system module file there are two requirements that must be met:


• The virtual addresses of shared libraries must be specified. The address depends on the memory configuration of the core OS, and the size and address of other shared libraries in the module. To calculate a reasonable virtual address for each shared library a map of the virtual memory needs to be created.

• To build applications, stub files for shared libraries used by the applications must be created. This is done as a part of the shared library build.

2.4.4 Memory

The memory configuration is made up of two parts: the physical memory configuration and the virtual memory configuration. An example of the memory organization can be seen in figure 2.3.


Figure 2.3: VxWorks memory allocation example. [11]

Physical memory

The physical memory is made up of the read-only memory, ROM, and the random-access memory, RAM.


As figure 2.3 illustrates, applications and the core OS consume more memory in RAM than in ROM. This is because each application requires additional memory for the stack and the heap. This also applies to the core OS. Each application has its own stack and heap, since no memory sharing is allowed between applications. If an application is using any shared libraries, it also needs to set aside memory for the libraries' stacks and heaps. How much memory each application gets allocated is specified in the configuration record. Partition OSes, POS, and shared libraries, SL, require no additional space in RAM. This is because the application that is using the POS/SL is responsible for the stack and the heap space.

SDR-Blackboard is a shared data region (SDR). It is a memory area set aside for two or more applications as a place to exchange data.

App-1 and App-2, seen in figure 2.3, are loaded into separate RAM areas. They share no memory except for the SDR areas.

Virtual memory

Every component, except for the applications, has a fixed, unique address in the virtual memory. All applications have the same address. This makes it possible to configure an application as if it were the only application in the system. Each application exists in a partition, which is a virtual container. The partition configuration controls which resources are available to the application.

2.4.5 Partitions and partition OSes

A module consists of an RTOS and partitions. Each partition contains an application and a partition OS, POS, in which the application runs. When the application accesses a shared library or any I/O region, it is done via the POS, which regulates the access. The partition provides the stack and heap space that is required by the application. It also supplies any shared library code that the application uses.

There can be multiple POS instances in a module; one instance per partition. The instances are completely unaware of each other.

Partition OSes are stored in system shared libraries. This is to conserve resources. If two partitions are running the same POS, they reference the same system shared library, but must provide their own copy of the read/write parts of the library to maintain the isolation between partitions. They must also provide the required stack and heap size for the library to run. A partition can only access one system shared library. A system shared library can only contain one POS.

When the POS is built, stub files are created. The stub files are needed when building applications since they contain routine stubs that the applications link to. To map the stubs to the POS routines, an entry-point table is created as a part of the POS. VxWorks resolves the stub calls to the POS routines when the application is initializing.

vThreads

VxWorks 653 comes with a partition OS called vThreads. vThreads is based on VxWorks 5.5, and is a multithreading technology. It consists of a kernel and a subset of the libraries supported in VxWorks 5.5. vThreads runs under the core OS in user mode.

The threads in vThreads are scheduled by the partition scheduler. The core OS is not involved in this scheduling. To communicate with other vThreads domains, the threads make system calls to the core OS.

vThreads gets its memory heap from the core OS during startup. The heap is used by vThreads to manage memory allocations for its objects. This heap memory is the only memory available to vThreads; it's unable to access any other memory. All attempts to access memory outside the given range will be trapped by the core OS.

2.4.6 Port protocol

The ARINC 653 standard leaves it up to the implementation to specify port protocols. VxWorks implements two: sender block and receiver discard. The sender block protocol means that a message is sent to all destination ports. When a destination port is full, messages get queued in the source port. If the source port gets full, the application will be blocked at the send command until the message has been sent or queued. As soon as a full destination port is emptied, a retransmission will occur. Whether the retransmission succeeds or not depends on the state of all the other destination ports. If any of these ports are full, the retransmission will fail.

The advantage of using sender block is that it ensures that a message arrives at the destination. No messages will be lost, as ARINC 653 requires. A drawback is that partitions get coupled to each other, meaning that one application may block all the others.

The other protocol, receiver discard, will drop messages when a destination port is full. This prevents one application with a full destination buffer from blocking all the others. If all destination ports are full, an overflow flag will be set to notify the application that the message was lost.

2.4.7 Simulator

VxWorks comes with a simulator. It makes it possible to run VxWorks 653 applications on a host computer. Since it's a simulator, not an emulator, there are some limitations compared to the target environment. The simulator's performance is affected by the host hardware and other software running on it.

The only clock available in the simulator is the host system clock. The simulator's internal tick counter is updated at either 60 Hz or 100 Hz, which gives very low-resolution measurements. 60 Hz implies that partitions can't be scheduled with a timeslot shorter than 16 ms. With 100 Hz it's possible to schedule timeslots as short as 10 ms.

One feature that isn’t available in the simulator is the performance monitor, which is used for monitoring CPU usage.

2.5 Databases

Today Saab uses its own custom storage solutions in its avionics software. They are often specialized for a particular application and not that general. A more general solution would save both time and money, since it would be easier to reuse.

This chapter contains information about the databases that have been evaluated for implementation in the system. It also contains some general information that may be good to know for better understanding of the system implementation.

2.5.1 ODBC

Open Database Connectivity, ODBC, is an interface specification for accessing data in databases. It was created by Microsoft to make it easier for companies to develop Database Management System, DBMS, independent applications. Applications call functions in the ODBC interface, which are implemented in DBMS-specific modules called drivers.

ODBC is designed to expose the database capabilities, not supplement them. This means that accessing a simple database through an ODBC interface does not transform the database into a fully featured relational database engine. If the driver is made for a DBMS that does not use SQL, the developer of the driver must implement at least some minimal SQL functionality. [17]

2.5.2 MySQL

MySQL is one of the world's most popular open source databases. It is a high-performance, reliable relational database with powerful transactional support. It includes complete ACID (atomicity, consistency, isolation, durability) transaction support, unlimited row-level locking, and multi-version transaction support. The latter means that readers never block writers and vice versa.

The embedded version of MySQL has a small footprint with preserved functionality. It supports stored procedures, triggers, functions, ANSI-standard SQL and more. Even though MySQL is open source, the embedded version is released under a commercial license. [16]

2.5.3 SQLite

SQLite is an open source embedded database that is reliable, small and fast. These three factors come as a result of the main goal of SQLite: to be a simple database. One could look at SQLite as a replacement for fopen() and other filesystem functions, almost like an abstraction of the file system. There is no need for any configuration or any server to start or stop.

SQLite is serverless. This means that it does not need any communication between processes. Every process that needs to read or write to the database opens the database file and reads/writes directly from/to it. One disadvantage of a serverless database is that it does not allow more complicated and finer-grained locking methods.

SQLite supports in-memory databases. However, it’s not possible to open a memory database more than once since a new database is created at every opening. This means that it’s not possible to have two separate sessions to one memory database.
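To illustrate how little ceremony SQLite needs, here is a minimal sketch using its public C API. The file name and table are hypothetical; ":memory:" could be passed instead of a file name to get an in-memory database:

    #include <stdio.h>
    #include <sqlite3.h>

    int main(void)
    {
        sqlite3 *db;
        char *err = NULL;

        /* Opening the database is just opening a file; no server involved. */
        if (sqlite3_open("example.db", &db) != SQLITE_OK) {
            fprintf(stderr, "open failed: %s\n", sqlite3_errmsg(db));
            return 1;
        }

        /* Execute a statement directly against the file. */
        if (sqlite3_exec(db,
                         "CREATE TABLE IF NOT EXISTS t (id INTEGER PRIMARY KEY, v TEXT)",
                         NULL, NULL, &err) != SQLITE_OK) {
            fprintf(stderr, "exec failed: %s\n", err);
            sqlite3_free(err);
        }

        sqlite3_close(db);
        return 0;
    }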


The database is written entirely in ANSI C, and makes use of a very small subset of the standard C library. It's very well tested, with tests that have 100% branch coverage. The source has over 65,000 lines of code, while the test code and test scripts have over 45,000,000 lines of code.

It's possible to get the source code as a single C file. This makes it very easy to compile and link into an application. When compiled, SQLite only takes about 300 kB of memory space. To make it even smaller, it's possible to compile it without some features, bringing it down to as little as 190 kB.

Data storage

All data is stored in a single database file. As long as the file is readable for the process, SQLite can perform lookups in the database. If the file is writable for the process, SQLite can store or change things in the database. The database file format is cross-platform, which means that the database can be moved around among different hardware and software systems. Many other DBMS require that the database is dumped and restored when moved to another system.

SQLite uses manifest typing, not static typing as most other DBMSs do. This means that any type of data can be stored in any column, except for an integer primary key column. The data type is a property of the data value itself. Records have variable lengths, which means that only the amount of disk space needed to store the data in a record is used. Many other DBMSs have fixed-length records, which means that a text column that can store up to 100 bytes always takes 100 bytes of disk space.

One thing that is still experimental but could be useful is that it is possible to specify which memory allocation routines SQLite should use. This is probably necessary if one wants to be able to certify a system that is using SQLite.

Locking technique

A single database file can be in five different locking states. To keep track of the locks, SQLite uses a page of the database file. It uses standard file locks to lock different bytes in this page, one byte for each locking state. The lock page is never read or written by the database. In a POSIX system, setting locks is done via fcntl() calls. [13]
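For reference, this is roughly what such a byte-range lock looks like at the POSIX level; the offset is illustrative and does not reflect SQLite's actual lock-byte layout:

    #include <fcntl.h>
    #include <unistd.h>

    /* Take a non-blocking exclusive lock on a single byte of the file. */
    int lock_byte(int fd, off_t offset)
    {
        struct flock fl;
        fl.l_type   = F_WRLCK;   /* exclusive (write) lock */
        fl.l_whence = SEEK_SET;
        fl.l_start  = offset;    /* first byte to lock     */
        fl.l_len    = 1;         /* lock exactly one byte  */
        return fcntl(fd, F_SETLK, &fl);
    }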

2.5.4 Mimer SQL

Mimer SQL is a relational database management system from Mimer Information Technology AB.

Mimer SQL is a SQL database with full SQL support, transactions, triggers, etc. There is an embedded version of Mimer with native VxWorks support.

The highly performance-optimized storage mechanism ensures that no manual reorganization of the database is ever needed.

In Mimer, a transaction is an atomic operation. Since Mimer uses optimistic concurrency control, OCC, deadlocks never occur in the database. Change requests are stored in a write-set during the build-up phase of the transaction. The build-up phase is when all database operations are requested. During this phase the changes in the write-set are hidden from other users of the system. The changes will be visible to the other users after a successful commit. Besides the write-set, there is a read-set. It records the state of the database with intended changes. If there is a conflict between the database state and the read-set, a rollback will be performed and the conflict is reported. This could happen e.g. if one user deletes a row that is to be updated by another user. It's up to the application how to handle this. [14]

2.5.5 Raima

Raima is designed to be an embedded database. It has support for both in-memory and on-disk storage. It also has native support for VxWorks, and should therefore need less effort to get up and running in VxWorks 653. It is widely used, among others by the aircraft manufacturer Boeing.

Raima has both an SQL API and a native C API. The SQL API conforms to a subset of ODBC 3.5.

Raima uses a data definition language, DDL, to specify the database design. Each database needs its own DDL file. The DDL file is parsed with a command line tool that generates a database file and a C header file. The header file contains C struct definitions for the database records defined in the DDL and constants for the field names. These structs and constants are used with Raima’s C API. The DDL parsing tool also generates index files for keys. Each index file must be specified in the DDL.

There are two types of DDL in Raima: one with a C-like syntax, DDL, and one with an SQL-like syntax, SDDL. Both types must be parsed by a special tool. It's not possible to create databases or tables during runtime with SQL queries. Fortunately, it's possible to link Raima's DDL/SDDL parsers into applications. This allows creation of DDL/SDDL files during runtime that can be parsed by the linked code.
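To give a feel for the C-like DDL, here is an illustrative sketch of a database definition. It is a guess at the general shape only; the exact keywords and file naming conventions depend on the Raima version:

    /* demo.ddl -- hypothetical Raima DDL sketch */
    database demo {
        data file "demo.d01" contains person;
        key  file "demo.k01" contains id;

        record person {
            unique key long id;    /* indexed field, stored in demo.k01 */
            char name[32];         /* fixed-length field                */
        }
    }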

Raima stores data in fixed-length records. This is not on a per-table basis, but per file. If there are multiple tables in the same file, the record size will be the size of the biggest record needed by any of the tables. This means that some space may be wasted, but it should result in better performance. [15]


Chapter 3

System design and implementation

In this chapter an experimental system is described, whose purpose is to abstract ARINC 653 and make a database available to applications. The database can easily be replaced by another database without modifications to the rest of the system. This system is later benchmarked to find different thresholds, see chapter 4.

3.1 Architecture overview

The system is based on a three-layered architecture. Layer one consists of the transmission module and is responsible for transmitting data between ARINC 653 partitions. Layer two includes two database modules; the purpose of these modules is to transfer queries and results back and forth between each other. Layer three consists of the actual database, a database adapter and client applications. See figure 3.1 for a system overview.

Below these three layers, the ARINC 653-compatible real-time operating system resides. This can be seen as a separate layer.

• Layer 3 - Applications executing routines, database adapter providing functionality to the lower layer.

• Layer 2 - Database API, Query/Result transport protocol.

• Layer 1 - Interpartition communication.

• "Layer 0" - RTOS, handled by VxWorks.

Clients and Servers A partition containing a transmission module, database client module and an application is called a client. A partition containing a transmission module, database server module, database adapter and a database is called a server.

This implementation is able to handle multiple clients which use one or more servers. However, the main configuration is for one server partition and multiple client partitions.


Partition scheduling Both client and server partitions are scheduled with a cyclic schedule that resembles round robin, i.e. all timeslots are of the same length and each partition has only one timeslot per major cycle. See Partition Management in section 2.3.1 for more information about partition scheduling in ARINC 653.

[Figure 3.1: System overview. The database partition contains the database, the adapter, the database module (server) and a transmission module; the application partition contains the application, the database module (client) and a transmission module. Both partitions run on top of the RTOS (ARINC 653).]


3.1.1 Interpartition communication design

Partitions communicate with each other through ports and channels. This section describes the channel design for this implementation.

In this design, each client partition has one outgoing and one ingoing channel connected to the database partition. This means that between a database and a client application, the ingoing/outgoing pair of channels is exclusively used for just this connection. See figure 3.2.

Figure 3.2: Channel design used in this system.

The benefit of this design is application independence. For example, if one port buffer overflows, only that port’s connection is affected; the other connections can continue to function. Another benefit is that the server knows which client sent a message, since all channels are configured before the system is run.

The drawback with this approach is that many ports and channels must be configured, which requires a lot of memory.
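To make the port usage concrete, the sketch below shows how a client partition could create and use its outgoing queuing port with the APEX services from ARINC 653 Part 1. The port name, message size, queue length and timeout are illustrative assumptions; in this system the transmission module wraps calls like these rather than exposing them to applications.

    #include "apex.h" /* APEX types and services; header name varies */

    QUEUING_PORT_ID_TYPE outPort;
    RETURN_CODE_TYPE     ret;

    /* Created during partition initialization (cold start). The port
       name must match a channel endpoint in the system configuration. */
    CREATE_QUEUING_PORT("client1ToServer",
                        1024,     /* max message size in bytes     */
                        16,       /* max number of queued messages */
                        SOURCE,   /* this partition sends on it    */
                        FIFO,     /* queuing discipline            */
                        &outPort, &ret);

    /* Send a query; a zero timeout returns at once if the queue is full. */
    const char query[] = "SELECT * FROM t1";
    SEND_QUEUING_MESSAGE(outPort, (MESSAGE_ADDR_TYPE)query,
                         sizeof(query), 0, &ret);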


3.2 Database modules

There are two different database modules. One is for client partitions and provides a database API to applications. The other is for server partitions and handles the database adapter connections.

3.2.1 Client version

The client version of the database module is responsible for providing a database API to the application developer. Its routines are used to interact with the database, e.g. to send queries and receive results.

Database API

The database API follows the ODBC standard interface. However, not all routines specified in the ODBC interface are implemented, only a subset of the ODBC routines that provides enough functionality to get the system to work.

The names and short descriptions of the implemented routines are listed in table 3.1, and a usage sketch follows the table.

SQLAllocHandle    Allocate a handle of type environment, connection or statement.
SQLFreeHandle     Free the specified handle.
SQLConnect        Connect to the specified database.
SQLDisconnect     Disconnect from the database.
SQLExecDirect     Execute a query in the connected database.
SQLFetch          Move results from queries into bound columns.
SQLBindCol        Bind buffers to database columns.
SQLNumResultCols  Get the number of columns in the result data set.
SQLRowCount       Get the number of rows in the result data set.
SQLEndTran        End a transaction. A new transaction begins automatically.

Table 3.1: Database API
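To illustrate, a minimal client using this subset could look as follows, assuming the standard ODBC C types from sql.h. The data source name and table are illustrative assumptions, and error checking is omitted for brevity.

    #include <sql.h>
    #include <sqlext.h>

    SQLHENV    env;
    SQLHDBC    dbc;
    SQLHSTMT   stmt;
    SQLINTEGER id;
    SQLLEN     idInd;

    SQLAllocHandle(SQL_HANDLE_ENV, SQL_NULL_HANDLE, &env);
    SQLAllocHandle(SQL_HANDLE_DBC, env, &dbc);
    SQLConnect(dbc, (SQLCHAR *)"testdb", SQL_NTS, /* data source name */
               NULL, 0, NULL, 0);                 /* no user/password */

    SQLAllocHandle(SQL_HANDLE_STMT, dbc, &stmt);
    SQLExecDirect(stmt, (SQLCHAR *)"SELECT id FROM t1", SQL_NTS);

    SQLBindCol(stmt, 1, SQL_C_SLONG, &id, 0, &idInd);
    while (SQLFetch(stmt) == SQL_SUCCESS) {
        /* one row is now available in the bound variable 'id' */
    }

    SQLFreeHandle(SQL_HANDLE_STMT, stmt);
    SQLDisconnect(dbc);
    SQLFreeHandle(SQL_HANDLE_DBC, dbc);
    SQLFreeHandle(SQL_HANDLE_ENV, env);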

Design

ODBC specifies three different datatypes. They are as follows:

Statements    Contains a query and, when returned, the result of the query.

Connections   Every connection contains handles to the ports associated with this connection. It also holds all statement handles used by this connection.

Environments  Contains port names and all connections operating on these ports.

See figure 3.3 for information about how these data types relate to each other.

Figure 3.3: Statement, connection and environment relations.

Before a query can be sent, an environment must be created. Inside this environment, multiple connections can be created, and inside every connection, multiple statements can be created. A statement will contain the query and, when the response comes from the server, a rowset.
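These relations could be represented in C roughly as in the sketch below. All field names are hypothetical, since the thesis does not specify the internal layout.

    /* Illustrative sketch only; all names are assumptions. */
    typedef struct Statement {
        long   id;              /* sequence number used for matching    */
        char  *query;           /* SQL text to send                     */
        void  *rowset;          /* filled in when the response arrives  */
        struct Statement *next; /* sibling statements of the connection */
    } Statement;

    typedef struct Connection {
        int        inPort;      /* ports associated with this connection */
        int        outPort;
        Statement *statements;  /* all statement handles it has created  */
        struct Connection *next;
    } Connection;

    typedef struct Environment {
        const char *inPortName;  /* configured port names                */
        const char *outPortName;
        Connection *connections; /* connections operating on these ports */
    } Environment;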

Send queries When a statement has been created, it can be loaded with a query and then sent by calling SQLExecDirect with the specific statement as a parameter.

Efficiency would be greatly diminished if the execute/send routine were of a ”blocking” type, i.e. if execution halted while waiting for a response to return. In that case, every client would only be able to execute one query per timeslot. This is due to the fact that a query answer cannot be returned to the sender application before the query has been executed in the database. The database partition must therefore be assigned the CPU and process the query before a response can be sent back. Since the partitions are scheduled round robin with only one timeslot per partition, the sender application will get the response at the earliest in its next scheduled timeslot.

Our design does not use a ”blocking” execute. Instead it works as ODBC describes: a handle is passed to the execute routine, which sends the query and then continues execution. The handle is later needed by the fetch/receive routine to know which answer should be fetched. This approach supports multiple queries being sent during one timeslot. However, the fetch routine becomes a bit more complicated.
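With the non-blocking execute, a client can thus pipeline several queries within one timeslot and fetch the answers later, in any order. A hedged sketch, reusing handles allocated as in the earlier example:

    /* Both sends return immediately; neither blocks on the response. */
    SQLExecDirect(stmt1, (SQLCHAR *)"SELECT * FROM t1", SQL_NTS);
    SQLExecDirect(stmt2, (SQLCHAR *)"SELECT * FROM t2", SQL_NTS);

    /* One or more cycles later: the statement handle tells SQLFetch
       which response to deliver, regardless of arrival order.       */
    SQLFetch(stmt2);
    SQLFetch(stmt1);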

Receive results To receive results from an executed query, SQLFetch must be called; see figure 3.4 for an SQLFetch flowchart. It is the SQLFetch routine that moves responses into statements, where they are available to the user. More precisely, when SQLFetch is called with a specific statement handle, the client receives data from the in port belonging to the current connection. The data is then merged and converted into a result which is matched against the specified statement. If there is no match, the routine moves the result’s data into the correct statement. This is possible since the result knows which statement it belongs to. In this way, a later SQLFetch might not need to receive any data, since its result is already stored in the right statement. The above operations are repeated until a matching result is found. When this happens, the result is moved into the specified statement. The data is then moved into the bound buffers before the routine exits.


Figure 3.4: Flowchart describing the SQLFetch routine.

A short summary: all queries whose results are stored earlier in the inport queue, in front of the matching query, will get their results moved into their statements for faster access in the future. The result belonging to the specified query is moved into the statement’s bound columns and made available to layer three applications. All queries whose results are stored behind the specified query in the inport queue will not get their results moved into their statements. These results have to stay in the port queue until another SQLFetch call repeats these operations.
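The matching logic can be summarized in C roughly as below. The helper functions are hypothetical placeholders, not the routines of the actual implementation.

    /* Sketch of the SQLFetch matching loop; helper names are assumptions. */
    SQLRETURN sqlFetchSketch(Statement *wanted, Connection *conn)
    {
        while (!hasStoredResult(wanted)) {
            /* Receive and merge messages from the inport into one result. */
            Result *r = receiveResult(conn->inPort);
            /* Park the result in the statement it belongs to, so a later
               SQLFetch for that statement needs no port access at all.    */
            storeResult(ownerStatement(r), r);
        }
        copyResultToBoundColumns(wanted); /* expose the rowset to layer 3 */
        return SQL_SUCCESS;
    }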

Analysis of design choices As mentioned in the previous paragraph, all data located in front of the fetch routine’s specified statement in the port is moved at once into the statements to which it belongs. The SQLFetch routine wants the result of a specific query, and to get that result the system must go through all data located in front of this result. The unwanted data cannot simply be destroyed, since another statement is waiting for exactly this data. Therefore the unwanted data that has been read is moved into its statement before the routine continues. One drawback with results stored locally in statements is the possibility of memory issues, which can occur when there are many statements and each of them contains a large rowset as a result.

An alternative solution is to discard unwanted data and retransmit it later. This approach requires both acknowledgements and retransmissions and would also cause the statements to be received out of order. Performance would decrease dramatically with this method, due to the unfortunate combination of retransmissions and the partition scheduling algorithm.


A client’s processing performance is an important issue. One idea to speed it up is to send multiple partial rowsets instead of sending the entire rowset at once. The benefit would be that clients could start their result processing earlier, i.e. start processing the first part directly instead of having to wait for the entire rowset to be transmitted. This would be a good idea if the partitions were running concurrently. However, since they run under round-robin scheduling, the idea loses its advantage. A client which receives a partial rowset can start processing directly, but when the processing has finished it has to wait a whole cycle for the next partial rowset before it can continue. This leads to inefficient behavior where the client application can only process one partial rowset per timeslot. It is still possible to achieve good performance with this method, but then the client’s performance must be known in advance. In that case, the server can send rowsets of precisely the size that the client is just able to cope with.

3.2.2 Server version

The server version of the database module is the link between the database adapter and the transmission layer. It is responsible for receiving queries, synchronizing access to the database and sending the results back to the client applications.

Multitasking

The server module must be able to connect to multiple client applications. The server process therefore spawns additional tasks to handle these applications. Each server-client connection runs inside its own task. This prevents connections from occupying too much time and prevents the database from being locked down in case of a connection deadlock.

A task is similar to a thread. Tasks are stored in the partition memory and can access most of the resources inside the partition.

Each task handles one and only one of the connections. Its job is to manage this connection, i.e. receive queries, process them and return answers until the task is terminated. See figure 3.5 for a task job flowchart.

The advantage of a threaded environment is that if a deadlock occurs in one connection, it will not affect any other connections. Also, tasks can be scheduled to make the server process queries in different ways depending on the system’s purpose. One disadvantage of threading is that the system becomes more complex; added issues include synchronization and multitasking design.

Synchronization Access to the database is synchronized to prevent multithreading issues within the database. This means that only one task at a time can execute a query in the database. The processing of this query must be completed before the lock is lifted and another task can access the database.
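A sketch of this structure using native VxWorks task and semaphore primitives is shown below. The Query/Result helpers are hypothetical placeholders, and the priority and stack size are illustrative assumptions.

    #include <vxWorks.h>
    #include <taskLib.h>
    #include <semLib.h>

    typedef struct Query  Query;   /* opaque; defined by the adapter */
    typedef struct Result Result;
    extern Query  *receiveQuery(int connId);   /* hypothetical helpers */
    extern Result *executeQuery(Query *q);
    extern void    sendResult(int connId, Result *r);

    static SEM_ID dbMutex; /* serializes all access to the database */

    /* One task per client connection. */
    static void connectionTask(int connId)
    {
        for (;;) {
            Query *q = receiveQuery(connId);
            semTake(dbMutex, WAIT_FOREVER); /* one query at a time */
            Result *r = executeQuery(q);
            semGive(dbMutex);
            sendResult(connId, r);
        }
    }

    void serverInit(int nClients)
    {
        int i;
        dbMutex = semBCreate(SEM_Q_FIFO, SEM_FULL);
        for (i = 0; i < nClients; i++)
            taskSpawn("tConn", 100, 0, 16384, (FUNCPTR)connectionTask,
                      i, 0, 0, 0, 0, 0, 0, 0, 0, 0);
    }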

Scheduling

In ARINC 653, scheduling of tasks is done in a preemptive, priority-based way. In the implemented system the priority is set to the same value for all tasks. This implies that the tasks will be scheduled in first in, first out, FIFO, order. There are two schedules of interest here: Round robin with yield and Yield only.

Round robin with yield works like round-robin scheduling, but a task may release the CPU earlier at its own command. This scheduling does not force a task to occupy its entire timeslot; instead, the task automatically relinquishes the processor at send and receive timeouts. Timeslot lengths are determined by the number of ticks set by the KernelTimeSlice routine. Since all tasks have the same priority, all tasks will enjoy about the same execution time. If the partition timeslot is large enough and the KernelTimeSlice is small enough, all clients will get some responses every cycle.

The other available scheduling is Yield only. This scheduling lets a task run until it relinquishes the CPU itself; tasks do not have a maximum continuous execution time. When used in this system, the CPU is released only at send and receive timeouts.
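In VxWorks the two policies map onto the kernelTimeSlice routine (written KernelTimeSlice above). A rough sketch, with the tick count as an illustrative assumption:

    #include <kernelLib.h>
    #include <taskLib.h>

    void configureRoundRobinWithYield(void)
    {
        kernelTimeSlice(1); /* enable slicing: one tick per timeslice */
    }

    void configureYieldOnly(void)
    {
        kernelTimeSlice(0); /* disable slicing: a task runs until it   */
                            /* blocks or yields, e.g. via taskDelay(0) */
    }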

3.2.3 Server and Client interaction - Protocol

The server and client interact with each other using a protocol. A package transmitted using this protocol always starts with a fixed-size header, followed by the actual data. The data can be of any size, since query and result sizes vary. This variable size is unknown to the receiver, so the data cannot be received directly: the receive routine needs to know how many bytes to read from the inport, and the receiver also needs the data size to be able to allocate a buffer of an appropriate size.

A fixed maximum data size is the alternative, but it would generate huge overhead because of the large size difference between the smallest and the largest result set. However, this approach is the only choice if the system should be certified. See section 6.4 for more information.

Since this system uses a variable data size, the receive routine only knows the header size. Therefore only the header is read from the inport at first, leaving the data behind. The receiver then checks the header for the data size value, see table 3.2, and allocates a buffer of the correct size. As the data size is now known, the receiver can read the data from the port and put it in the newly allocated buffer.

Header

The header contains an id, a statement pointer, a data size and a command type, as shown in table 3.2.

The statement pointer is used to distinguish which statement a query belongs to. This makes sure that database responses are moved into the correct statements. An id is also required, because there are some special occurrences where the statement pointer is not enough to determine which statement is the correct one. The id is a sequence number that is incremented and added to the header on every send issued by SQLExecDirect in the client application.
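A sketch of the header layout and the two-step receive could look as follows. The field widths and the readFromPort helper are assumptions; the exact layout is given by table 3.2.

    #include <stdint.h>
    #include <stdlib.h>

    /* Assumed header layout, mirroring the fields of table 3.2. */
    typedef struct {
        uint32_t id;       /* sequence number from SQLExecDirect  */
        void    *stmt;     /* statement pointer used for matching */
        uint32_t dataSize; /* number of payload bytes that follow */
        uint32_t command;  /* command type                        */
    } MsgHeader;

    extern void readFromPort(int port, void *buf, size_t n); /* hypothetical */

    void receivePackage(int inPort)
    {
        MsgHeader hdr;
        uint8_t  *payload;

        /* Step 1: read only the fixed-size header. */
        readFromPort(inPort, &hdr, sizeof hdr);

        /* Step 2: the header gives the payload size, so a buffer of
           the right size can be allocated before reading the data.  */
        payload = malloc(hdr.dataSize);
        readFromPort(inPort, payload, hdr.dataSize);
    }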
