
Adaptive QoS Management in Dynamically Reconfigurable Real-Time Databases

by Daniel Nilsson and Henrik Norin

LITH-IDA-EX–05/014–SE

2005-02-16


Master's Thesis

Adaptive QoS Management in Dynamically Reconfigurable Real-Time Databases

by Daniel Nilsson and Henrik Norin

LiTH-IDA-EX–05/014–SE

Supervisors: Aleksandra Tešanović and Mehdi Amirijoo
Department of Computer and Information Science at Linköping University

Examiner: Dr. Jörgen Hansson
Department of Computer and Information Science


Abstract

During the last years the need for real-time database services has increased due to the growing number of data-intensive applications needing to enforce real-time constraints. The COMponent-based Embedded real-Time database (COMET) is a real-time database developed to meet these demands. COMET is developed using the AspeCtual COmponent-based Real-time system Development (ACCORD) design method, and consists of a number of components and aspects, which can be composed into a number of different configurations depending on system demands, e.g., Quality of Service (QoS) management can be used in unpredictable environments.

In embedded systems with requirements on high up-time it may not be possible to temporarily shut down the system for reconfiguration. Instead, it is desirable to enable dynamic reconfiguration of the system, exchanging components during run-time. This in turn sets demands on the feedback control of the system to adjust to these new conditions, since a new time-variant system has been created.

This thesis project implements improvements in COMET to create a more stable database suitable for further development. A mechanism for dynamic reconfiguration of COMET is implemented, thus enabling components and aspects to be swapped during run-time. Adaptive feedback control algorithms are also implemented in order to better adjust to workload variations and database reconfiguration.

Keywords: real-time databases, dynamic reconfiguration, quality of service, component-based software development, aspect-oriented software development, adaptive feedback control


Sammanfattning (Swedish Summary)

In recent years the need for real-time database services has increased due to the growing number of data-intensive applications that must satisfy time-critical requirements. The COMponent-based Embedded real-Time database (COMET) is a real-time database developed to meet these demands. COMET is developed according to the AspeCtual COmponent-based Real-time system Development (ACCORD) design method and consists of a number of components and aspects, which can be composed into a number of configurations depending on the system requirements; for example, QoS guarantees can be used in unpredictable environments. In embedded systems with requirements on high availability, it may be impossible to temporarily shut down the system for reconfiguration. Instead, it is desirable to enable dynamic reconfiguration of the system, i.e., exchanging components during execution. This in turn places demands on the feedback control of the system, since a new time-variant system has been created.

This thesis project implements improvements of COMET in order to create a more stable database better suited for further development. A mechanism for dynamic reconfiguration of COMET is implemented, thereby enabling components and aspects to be exchanged during execution. Algorithms for adaptive feedback control are also implemented in order to better adapt the feedback control to changes in workload and to reconfiguration of the database.


Acknowledgements

The authors would first of all like to thank each other for good cooperation during this master's thesis project. Although we have spent an unhealthy amount of time together, we are still alive and friends. We would also like to thank our supervisors Aleksandra Tešanović and Mehdi Amirijoo. It has been a great experience working together with you. Your input and encouragement kept our motivation up and our anger level down. We could not have wished for better supervisors. Thank you very much for making our master's thesis project into a productive and fun six months. We wish you all the best in the future. We would also like to send a thank you to our examiner Dr. Jörgen Hansson and hope that your workload will decrease in the future. Calin, sorry for being the loudest students you have ever met. Unfortunately for you, we got the room next to you. Thank you also to the rest of the staff at RTSLAB. Sorry we did not have time to attend the lab meetings.


Contents

1 Introduction
   1.1 Motivation
   1.2 Target Reader
   1.3 Thesis Outline

2 Background
   2.1 Real-Time Systems
   2.2 Real-Time Databases
       2.2.1 Data Model
       2.2.2 Transaction Model
   2.3 Quality of Service
   2.4 Programming Methods
       2.4.1 Component-Based Software Development
       2.4.2 Aspect-Oriented Software Development
   2.5 ACCORD
       2.5.1 RTCOM
   2.6 COMET
       2.6.1 COMET components
       2.6.2 COMET aspects
   2.7 Control Theory
       2.7.1 PID-control
       2.7.2 Performance Metrics
   2.8 Feedback Control Algorithms
       2.8.1 FC-U
       2.8.2 FC-M
       2.8.3 FC-UM
       2.8.4 QMF
   2.9 Adaptive Control Theory
       2.9.1 Self-Tuning Regulator Control
       2.9.2 Least Squares And Regression Models

3 Problem Description
   3.1 COMET Improvements
       3.1.1 COMET Merging
       3.1.2 COMET Deficiencies
   3.2 Dynamic Reconfiguration
   3.3 Adaptive Feedback Control
       3.3.1 Adaptive Feedback Control Algorithms
       3.3.2 Metrics
   3.4 Aim and Objectives

4 Advanced Preliminaries
   4.1 COMET Components
       4.1.1 User Interface Component (UIC)
       4.1.2 Scheduler Manager Component (SMC)
       4.1.3 Transaction Manager Component (TMC)
       4.1.4 Locking Manager Component (LMC)
       4.1.5 Indexing Manager Component (IMC)
       4.1.6 Memory Management Component (MMC)
   4.2 Transaction Flow
   4.3 Aspect Packages
       4.3.1 Concurrency Aspects
       4.3.2 QoS Aspects

5 COMET Improvements
   5.1 Prerequisites
   5.2 Approach
   5.3 Merging Into COMET v3.0
   5.4 Testing COMET v3.0
       5.5.1 COMET Without Concurrency and QoS
       5.5.2 COMET With Concurrency and QoS
   5.6 Debugging
   5.7 Coding and Naming Standard
       5.7.1 Components
       5.7.2 Aspects
       5.7.3 Headers
   5.8 Tuple Updates and Deletions
   5.9 ACCORD and RTCOM Conformance
   5.10 Data Passing
   5.11 Building COMET and Generating Documentation

6 Dynamic Reconfiguration
   6.1 Requirements for Dynamic Reconfiguration
       6.1.1 Existing Models
   6.2 Feasibility
   6.3 ACCORD and RTCOM Revisited
       6.3.1 Requirement Conformance
   6.4 Dynamic Reconfiguration of COMET
       6.4.1 COMET Component Framework
       6.4.2 Component Changes
       6.4.3 Aspect Changes
       6.4.4 COMET Safe State
       6.4.5 Saving Component States

7 Adaptive Feedback Control
   7.1 Adaptive QoS Configurations
   7.2 Adaptive Feedback Control Design
       7.2.1 Self Tuning Regulator Control
       7.2.2 Least Squares and Regression Models

8 Performance Evaluation
   8.1 Experiment Setup
   8.2 Execution Time Measurements
   8.3 Feedback Control Performance
       8.3.2 Load Experiment
       8.3.3 Transient Performance Experiment
   8.4 Sampling Periods
   8.5 Self Tuning Regulator
   8.6 Dynamic Reconfiguration
       8.6.1 Increasing Transaction Execution Time
       8.6.2 Decreasing Transaction Execution Time
   8.7 Changing Controller Parameter Value
   8.8 Conclusions

9 Summary
   9.1 Conclusions
   9.2 Future Work

A Abbreviations

B Terminology

C Variables

D Test Runs
   D.1 Test Runs in COMET v3.0
   D.2 Test Runs with QoS Management Aspects

List of Tables

5.1 COMET v2.0 Changes
5.2 Component changes during COMET improvements
5.3 Cont. Component changes during COMET improvements
5.4 Aspect changes during COMET improvements
5.5 Cont. Aspect changes during COMET improvements
5.6 Variables to set in the DBTrans and UIC_SystemParameters structs when using aspect packages
5.7 Debug functions available in COMET
5.8 Debug levels available in COMET
6.1 Basic Requirements for Dynamic Reconfiguration
8.1 Execution times for transactions running in COMET configured with HP2PL and MRFC
8.2 Execution times of swapping components during dynamic reconfiguration of COMET with single transactions running
8.3 Execution times of swapping components during dynamic reconfiguration of COMET with a 300% load
8.4 The linear and quadratic difference in deadline miss ratio for the closed-loop system using different sampling periods
8.5 The linear and quadratic difference in deadline miss ratio for the open-loop system using different sampling periods
8.6 The linear and quadratic difference in deadline miss ratio when delays are introduced
8.7 The linear and quadratic difference in deadline miss ratio when delays are introduced
8.8 The linear and quadratic difference in deadline miss ratio using MRFC and varying controller parameter
E.1 Description of project phases
E.2 Tasks performed by Daniel Nilsson

List of Figures

2.1 An example of an aspect written in AspectC++
2.2 A schematic view of an open-loop system
2.3 A schematic view of a closed-loop system
2.4 Definition of settling time (T_s) and overshoot (M_p)
2.5 Estimation of u(k)
4.1 COMET components and their relations
4.2 The execution steps of a transaction in COMET
6.1 Extended ACCORD with support for dynamic reconfiguration
6.2 Architecture of COMET Component Swapping
7.1 Software composition of QoS aspects
7.2 Architecture of adaptive feedback control using a self tuning regulator
7.3 Architecture of adaptive feedback control using a regression model
8.1 The average deadline miss ratio as a function of load for the open-loop system
8.2 The average deadline miss ratio as a function of load for the closed-loop system
8.3 Transient behavior of COMET with MRFC applied
8.4 Transient behavior of COMET with MRFC applied using different sampling periods
8.5 Performance of STR G_a estimation
8.6 Deadline miss ratio when delays are introduced in the open-loop system
8.7 Deadline miss ratio when delays are introduced and MRFC is applied
8.8 Deadline miss ratio when delays are introduced and STRC is applied
8.9 Deadline miss ratio when delays are removed in the open-loop system
8.10 Deadline miss ratio when delays are removed and MRFC is applied
8.11 Deadline miss ratio when delays are removed and STRC is applied

Chapter 1

Introduction

This chapter provides the motivation for this thesis (section 1.1). Furthermore, it presents the target reader (section 1.2) and an outline of the thesis (section 1.3).

1.1 Motivation

During the last years the need for real-time database services has increased due to the growing number of data-intensive applications needing to enforce real-time constraints. Many of these applications also run in environments with unpredictable workloads. By monitoring system performance and applying control theory it is possible to adapt system behavior, thus conforming to the actual workload and guaranteeing a certain quality of service [5].

Another trend within software development, and especially within real-time system development, is the need for more configurable systems, depending on, e.g., application type or scarce resources. Component-based software development [30] combined with aspect-oriented software development [13], using a design method called ACCORD [33], can fulfill this need. The COMponent-based Embedded real-Time database system (COMET) has been developed using ACCORD, enabling the possibility of reconfiguration of real-time databases [33].

An additional desire in systems with high up-time demands is to be able to swap the components constituting the system during run-time, thus not having to shut down the system for recompilation. Due to component swapping and differences in performance between component versions, this in turn affects the system's composition and performance, possibly leading to critical errors when real-time constraints are no longer enforced. Using adaptive control techniques, it is possible to change how the system is controlled, i.e., conforming the control to the conditions of the changed system. Thus, handling of component swapping and other time invariances common to computer systems is made possible.

1.2 Target Reader

This thesis is intended for people with basic knowledge of computer science, control theory and mathematics. Areas where a deeper understanding is required are explained in the theoretical background chapter (Chapter 2).

1.3 Thesis Outline

The outline of the thesis is as follows. Chapter 2 provides the background knowledge needed to understand the rest of the thesis. The problem description is given in chapter 3, which also includes possible COMET improvements, adaptive control algorithms, and a proposal for making COMET components swappable during run-time. Chapter 4 contains advanced preliminaries on the COMET components and the aspect packages available. Descriptions of the merging of two COMET versions and the improvements implemented in COMET are found in chapter 5. Chapter 6 provides requirements for dynamic reconfiguration, extensions to ACCORD and RTCOM, and specifications of the implementation of dynamic reconfiguration in COMET. The design and implementation of the adaptive feedback control algorithms are described in chapter 7. In chapter 8 the performance evaluation of the adaptive feedback control algorithms is presented. Finally, a summary of the thesis is given in chapter 9.


Chapter 2

Background

This chapter introduces theory and defines concepts needed for a good understanding of the rest of the thesis. The first sections describe real-time systems and real-time databases. In the following sections, component-based and aspect-oriented software development are presented, along with the ACCORD design method, the RTCOM component model and the COMET database platform. Finally, control theory and feedback control algorithms conclude this chapter.

2.1 Real-Time Systems

There are many definitions of what exactly a real-time system (RTS) is. One quite general definition is [18]:

Any system where a timely response by the computer to external stimuli is vital is a real-time system.

In other words, an RTS is a system where not only the content of the response, but also the time at which it is provided, is of importance. An example of an RTS is a system controlling the cooling at a nuclear power plant, where actions are time-critical; a late response could cause a core meltdown.

Tasks in a real-time system have deadlines stating when a task must be completed. Depending on the type of system and its application, there exist three categories of real-time systems [18]. Hard real-time systems, e.g., systems controlling an aircraft, do not allow deadlines to be violated, as this can have catastrophic consequences. Soft real-time systems, e.g., multimedia streaming, allow deadline violations and degraded performance, without anything catastrophic happening. In firm real-time systems deadlines can be missed, but no benefit is gained from continuing execution of a task after its deadline is missed.

Multiple tasks are often running simultaneously in an RTS. A problem that arises when trying to enforce the task deadlines is scheduling, i.e., deciding in which order the tasks should execute. Tasks can either be scheduled off-line (static scheduling) or during run-time (dynamic scheduling). Several algorithms for dynamic scheduling exist [7], e.g., Earliest Deadline First (EDF), where the task with the deadline closest in time has precedence, and Rate Monotonic Scheduling (RMS), where tasks are assigned priorities based on their periods; a shorter period means higher priority and, hence, higher precedence. EDF is sensitive to overload problems. As long as the system is not overloaded, it works optimally; as soon as the system becomes overloaded, a domino effect may cause many transactions to miss their deadlines [7]. Often the worst-case execution time (WCET), i.e., the maximum time it takes to complete a task, is used for estimating how long a task will take to execute.
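To make the difference between the two policies concrete, the C++ sketch below orders a ready queue by absolute deadline (EDF) or by period (RMS). It is an illustration only; the Task fields and function names are assumptions for the example and are not taken from COMET.

#include <algorithm>
#include <vector>

// Illustrative task descriptor; a real RTS would keep richer task control blocks.
struct Task {
    double absoluteDeadline;  // time by which the task must complete
    double period;            // invocation period (for periodic tasks)
};

// EDF: the task whose deadline is closest in time executes first.
bool edfBefore(const Task& a, const Task& b) {
    return a.absoluteDeadline < b.absoluteDeadline;
}

// RMS: a shorter period gives a higher (static) priority.
bool rmsBefore(const Task& a, const Task& b) {
    return a.period < b.period;
}

// Order the ready queue according to the chosen policy.
void sortReadyQueue(std::vector<Task>& readyQueue, bool useEdf) {
    if (useEdf) {
        std::sort(readyQueue.begin(), readyQueue.end(), edfBefore);
    } else {
        std::sort(readyQueue.begin(), readyQueue.end(), rmsBefore);
    }
}

Note that EDF re-evaluates the ordering as deadlines approach, whereas the RMS ordering is fixed as long as the task periods do not change.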

Many real-time systems, called embedded systems, are used to control physical devices like airplanes, nuclear reactors etc. These systems often have limited CPU and memory, thus placing demands on the RTS to use resources, e.g., memory and CPU, efficiently [7].

2.2 Real-Time Databases

Real-time databases (RTDB) [18] are integrated in many data-intensive applications, e.g., banking and stock market systems. Although similar to traditional database systems, there are some additional requirements on an RTDB [18]. Firstly, queries made to the RTDB have soft or hard deadlines that must be enforced, i.e., the system response times must be predictable. Secondly, there are additional consistency demands on the RTDB. Absolute consistency is a measurement of accuracy, meaning that data in the database should reflect the environment, e.g., when data is read from a sensor. An RTDB thus deals with temporal data, i.e., data that is not valid after a certain amount of time. Data objects can be divided into base data objects, i.e., data objects containing a view of the real-world environment, and derived data objects, obtained from base data objects [25]. Relative consistency means that the data objects used to produce a certain derived data object must be updated relatively close to each other in time [18].

2.2.1 Data Model

In this thesis we consider only base data objects. The following attributes are attached to a data object d_i:

• TS_i, the time stamp of the latest update.

• AVI_i, the absolute validity interval of d_i.

A data object d_i is considered to be fresh, as opposed to stale, if

    CurrentTime \le TS_i + AVI_i    (2.1)
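As a small illustration of the freshness test in equation (2.1), the C++ sketch below checks whether a base data object is still valid. The struct and function names are assumptions for the example, not part of the COMET interfaces.

// Hypothetical representation of a base data object with the attributes above.
struct BaseDataObject {
    double timeStamp;  // TS_i, time stamp of the latest update
    double avi;        // AVI_i, absolute validity interval
};

// A data object is fresh iff CurrentTime <= TS_i + AVI_i (equation 2.1).
bool isFresh(const BaseDataObject& d, double currentTime) {
    return currentTime <= d.timeStamp + d.avi;
}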

2.2.2 Transaction Model

Tasks querying and writing to the RTDB are called transactions, which are atomic units of work, i.e., they are either performed completely or not at all [18]. In COMET, a transaction, τ_i, is classified as one of two types:

• User transactions arrive from the application programs and can read base data objects and write to derived data objects. These transactions arrive aperiodically.

• Update transactions are write transactions that update the base data objects. These transactions arrive periodically.

By lowering the update rate of the data reflecting the environment, we get a system with lower absolute consistency but more resources for user transactions, and vice versa. The following attributes are defined for a transaction τ_i:

• EE_i, the estimated execution time of τ_i.

• AE_i, the actual execution time of τ_i.

For periodic tasks, i.e., update transactions, we additionally define:

• P_i, the invocation period of τ_i.

• D_i, the relative deadline of τ_i. We set D_i = P_i, ∀i.

• EU_i, the estimated CPU utilization of τ_i, EU_i = EE_i/P_i.

• AU_i, the (actual) CPU utilization of τ_i, AU_i = AE_i/P_i.

For aperiodic tasks, i.e., user transactions, we additionally define:

• AI_i, the average inter-arrival time of τ_i.

• D_i, the relative deadline of τ_i. We set D_i = AI_i, ∀i.

• EU_i, the estimated CPU utilization of τ_i, EU_i = EE_i/EI_i.

• AU_i, the (actual) CPU utilization of τ_i, AU_i = AE_i/AI_i.

2.3 Quality of Service

Quality of Service (QoS) refers to the idea that utilization, miss ratio, and other characteristics of an RTDB can be measured and improved. There are several ways to deal with QoS demands, e.g., through resource reservation or admission control [31], i.e., rejecting user transactions in an RTDB [5], to enforce the QoS requirements. It is also possible for a system to have several QoS levels [20] and deliver a more imprecise result at a lower QoS level, meaning acceptable results that require fewer resources, e.g., computation time, bandwidth or memory. An example of this is a multimedia streaming application that chooses to stream lower quality media when bandwidth is low [9].


2.4 Programming Methods

This section covers two programming methods, component-based software development (CBSD) and aspect-oriented software development (AOSD), which have been developed in recent years to lower costs and promote reusability and tailorability of software.

2.4.1 Component-Based Software Development

Component-based software development has been developed to lower costs and achieve a higher level of reuse and reliability in software development [30]. CBSD revolves around the concept of composing systems out of pre-defined components. A component is a unit of composition with specified interfaces and explicit context dependencies only [30]. Interfaces can be considered as the component's access points. These points allow clients of the component, e.g., other components or applications, to access the services of the component. There are three types of interfaces [6]: provided, required and configuration interfaces. The configuration interfaces are intended for configuration of the component by the user composing the system. The provided and required interfaces are used when components are combined with other components.
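The C++ sketch below illustrates how the three interface types could appear on a single component. The class and its members are hypothetical and only meant as an illustration; they are not COMET code.

// A hypothetical logging component, shown only to illustrate provided,
// required and configuration interfaces.
class LoggingComponent {
public:
    // Required interface: a service the component needs from its context,
    // made explicit so the dependency is visible at composition time.
    struct Storage {
        virtual void write(const char* data) = 0;
        virtual ~Storage() {}
    };

    LoggingComponent() : storage_(0), bufferSize_(0) {}

    // Configuration interface: used by the system composer to tailor
    // the component before the system is assembled.
    void configure(int bufferSize) { bufferSize_ = bufferSize; }

    // Provided interface: the service offered to clients of the component.
    void log(const char* message) { if (storage_ != 0) storage_->write(message); }

    // Binding of the required interface to a concrete implementation.
    void bindStorage(Storage* storage) { storage_ = storage; }

private:
    Storage* storage_;
    int bufferSize_;
};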

2.4.2 Aspect-Oriented Software Development

Aspect-oriented software development is a programming methodology that makes it possible to capture and modularize concerns that crosscut a software system in so-called aspects [13]. Examples of such cross-cutting concerns are memory optimization, concurrency control and error handling. An AOSD implementation of a software system consists of [33] (i) components, written in a component language, e.g., C++ or Java, (ii) aspects, written in an aspect language, e.g., AspectC++ [28], and (iii) an aspect weaver, which is a special compiler that combines components and aspects in a process called aspect weaving.

Aspect languages use the concept of pointcuts, which are sets of join points described by a pointcut expression. Join points are defined as points in the component code where aspects can be woven in. The code that should run at the join points specified by the pointcut expression is given in an advice declaration. Advice can be declared as before, after, or around advice, specifying whether the advice code should be executed before, after, or in place of the join points. An example of an aspect is shown in figure 2.1. Firstly, the pointcut mainExecution() is defined, which corresponds to the join point int main() in the component code, i.e., the place where the function main() with return type int is executed. Secondly, two advice are declared. The first one prints Before main() before the join point corresponding to the pointcut mainExecution() is executed, i.e., before int main() is executed in the component code. The second one prints After main() after the join point corresponding to the pointcut mainExecution() is executed. If this example aspect were woven into a simple program containing one main function, the text Before main() would be printed when starting the program, and After main() would be printed just before the program terminates.

#include <iostream>
using namespace std;

aspect BeforeAfterPrinter {
  // Join point: the execution of int main() in the component code.
  pointcut mainExecution() = execution("int main()");

  advice mainExecution() : before() {
    cout << "Before main()" << endl;
  }

  advice mainExecution() : after() {
    cout << "After main()" << endl;
  }
};

Figure 2.1: An example of an aspect written in AspectC++

2.5 ACCORD

By combining CBSD and AOSD in real-time system development, development of more configurable and tailorable software is possible. One design method providing this is AspeCtual COmponent-based Real-time system Development (ACCORD) [33]. The design method consists of the following:

• A decomposition process for decomposition of the real-time system into a set of components and into a set of aspects.

• Components, with well-defined functions and interfaces.

• Aspects, as properties cross-cutting the functionality of the system.

• A real-time component model (RTCOM) that describes a real-time component, supporting aspects while also enforcing information hiding.

2.5.1 RTCOM

The RTCOM component model consists of three parts [33]:

• The functional part consists of the actual code implementing the component's functionality. Each component contains fine-granular methods or function calls, called mechanisms, and more coarse-granular operations available to other components or the system. The operations are implemented using the underlying mechanisms of the component, and are flexible in the sense that their implementation can be changed by applying different application aspects.

• The run-time system dependent part handles the temporal behavior of the functional part of the component, e.g., WCET specifications.

• The composition part of RTCOM contains information about component compatibility with respect to both application aspects and other components.

In ACCORD, aspects are classified into three types:

• Application aspects tailor the components based on the underlying application requirements.

• Run-time aspects contain the information describing the component behavior with respect to the target run-time environment.

• Composition aspects describe with which other components a component can be combined and handle versioning. Composition aspects can also adapt components to work with other components.

2.6 COMET

To enable development of different database configurations for different embedded and real-time applications, COMET (COMponent-based Embedded real-Time database) has been developed [33]. COMET has been developed using the ACCORD design concept and the RTCOM component model.

2.6.1 COMET components

COMET has been divided into seven basic components with well-defined functions and interfaces:

• The User Interface Component (UIC) enables users to access data in the database.

• The Scheduler Manager Component (SMC) provides mechanisms for performing scheduling of transactions arriving to the system.

• The Locking Manager Component (LMC) provides locking of data, used to maintain concurrency and consistency.

• The Transaction Manager Component (TMC) coordinates the activities of all components in the system with respect to transaction execution.

• The Indexing Manager Component (IMC) deals with indexing of data.


• The Memory Management Component (MMC) manages access to data in the physical storage.

• The Buffer Manager Component (BMC) manages buffers used when running transactions.

A more detailed description of how COMET components are implemented is given in chapter 4.

2.6.2 COMET aspects

The decomposition of COMET into aspects, corresponding to the ACCORD decomposition, results in three major classes of aspects, and several types of aspects belonging to those classes.

1. Run-time aspects
   • Resource demand
   • Temporal constraints
   • Portability

2. Composition aspects
   • Compatibility
   • Versioning
   • Flexibility

3. Application aspects
   • Transaction
   • Real-time scheduling
   • Concurrency control
   • Memory optimization
   • Synchronization
   • Security

Since COMET is currently under development, the current implementation lacks many of the aspects COMET has been decomposed into. A more detailed description of the implemented COMET aspects is given in chapter 4.

2.7 Control Theory

Control theory is used to give appropriate stimulus to a process or system to make it behave in a desirable manner. Typically, the system to control has a set of measurable output signals, y_i, and a set of input signals, v_i, that can be used to influence the system [12]. The control problem consists of making the system produce an output signal that follows a reference signal (r). If the output signals are not available when deciding the input signals, which is defined as an open-loop system [12], precise knowledge of the effect of the input signals on the output signals is required. Driving a car blindfolded requires, e.g., exact knowledge of which direction the car takes when turning the steering wheel [12]. A schematic view of an open-loop system is shown in figure 2.2. On the other hand, if the output signal is used when deciding the input signal, called a closed-loop or feedback control system [12], only a very approximate understanding of the system dynamics is required. In the car driving example it would be sufficient to know that, e.g., turning the steering wheel clockwise results in a right turn. Fine adjustments could then be made to compensate for the difference between the output and reference signal, i.e., the error e = y − r [12]. The main reason to use feedback control is to reduce uncertainty, which can be, e.g., in the form of a modeling error in the system description [24]. A schematic view of a closed-loop system is shown in figure 2.3.

Figure 2.2: A schematic view of an open-loop system

Figure 2.3: A schematic view of a closed-loop system

2.7.1 PID-control

The PID-controller is the most commonly used controller in a feedback system, and its proportional (P), integral (I), and derivative (D) constituents are basic to all controllers [24]. Each of these actions has a control parameter associated with it. By varying these parameters the PID-controller performs differently. In the feedback control scheduling approach used in this thesis, the integral part lies within the actuator. The derivative term is not used at all, because derivative control may amplify the noise in miss ratio and utilization due to frequent workload variations in the unpredictable environment of a real-time system [20]. When the controller consists of only a proportional part, the manipulated variable v produced by the controller is computed as

    v = K_p \cdot e    (2.2)

where K_p is the proportional controller parameter, and e is the error.
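A minimal sketch of the proportional law (2.2) is given below; the error is taken as the reference value minus the measured value, matching the convention used in the FC-U and FC-M pseudo code later in this chapter. The function is illustrative only.

// Proportional control: v = Kp * e, with e the difference between the
// reference and the measured output.
double proportionalControl(double reference, double measured, double Kp) {
    double e = reference - measured;
    return Kp * e;  // manipulated variable v
}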

2.7.2 Performance Metrics

To analyze the performance of a feedback control system, some metrics must be defined. The system designer specifies the desired behavior of the system with a performance specification based on transient and steady state response [12].

Steady state appears when the controlled variable achieves a stable value. The transient response is studied in the time domain, typically when a unit step is applied as input signal to the system [12]. The step response given by the controlled variable can then be analyzed. Figure 2.4 shows a typical step response. A number of performance metrics exist:

Figure 2.4: Definition of settling time (T_s) and overshoot (M_p)

• m(k), the miss ratio at the kth sampling instant, is defined as

    m(k) = 100 \times \frac{\#DeadlineMiss(k)}{\#Terminated(k)} \ (\%)    (2.3)

  where #DeadlineMiss(k) is the number of transactions that have missed their deadline and #Terminated(k) is the number of terminated transactions over a sampling window ((k−1)T, kT), where T is the sampling period. A desired target or reference level, M_S, is set for this metric.

• u(k), the CPU utilization at the kth sampling instant, is the percentage of CPU busy time over a sampling window ((k−1)T, kT). A desired target or reference level, U_S, is set for this metric.

• T_s, the settling time, is the time it takes for the system's transients to decay, i.e., for the system to reach its steady state.

• M_o and U_o, the overshoot, is the maximum percentage by which the system overshoots its miss ratio or utilization reference, i.e., M_o = (M_max − M_S)/M_S and U_o = (U_max − U_S)/U_S. A short numerical example is given after this list.
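As a brief numerical illustration of the overshoot metric (the numbers are chosen only for the example): if the miss ratio reference is M_S = 10% and the miss ratio peaks at M_max = 13% during the transient, then

    M_o = \frac{M_{max} - M_S}{M_S} = \frac{13 - 10}{10} = 0.3,

i.e., the system overshoots its miss ratio reference by 30%.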

2.8 Feedback Control Algorithms

Several feedback control algorithms exist for providing QoS management in real-time databases [20]. While the FC-M (Feedback Control-Miss ratio) algorithm is currently implemented in COMET [5], FC-U (Feedback Control-Utilization) and FC-UM (Feedback Control-Utilization and Miss ratio) are not.

2.8.1 FC-U

FC-U is a utilization control loop for controlling u(k) [20]. We define teu(k) as the total estimated CPU utilization, i.e., the estimated utilization of all tasks in the system at the kth sampling instant.

The control loop samples the utilization periodically, computes a change in the total estimated utilization, denoted dU(k), and adds it to teu(k), which is then used by an actuator that admits a transaction T_j iff

    EU_j + \sum_{\forall i} EU_i \le teu(k)    (2.4)

where i is the index of the transactions admitted into the system. The pseudo code for FC-U is as follows:

FC-U(U_S, K_PU) {
    dU(k) = K_PU × (U_S − u(k));
    teu(k+1) = teu(k) + dU(k);
}

where K_PU is the control parameter for the utilization controller. Since u(k) is naturally bounded in the range [0, 100%], FC-U cannot detect how severely the system is overloaded when u(k) remains at 100%.
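The C++ sketch below shows one FC-U sampling step together with the admission test in equation (2.4). The names and the surrounding structure are assumptions made for the example; they do not mirror the actual COMET QoS aspect code.

// State kept by the utilization controller between sampling instants.
struct FcuState {
    double teu;  // total estimated utilization the actuator may admit
};

// Executed once per sampling period: move teu towards the utilization target.
void fcuStep(FcuState& s, double Us, double Kpu, double u) {
    double dU = Kpu * (Us - u);  // dU(k) = K_PU * (U_S - u(k))
    s.teu += dU;                 // teu(k+1) = teu(k) + dU(k)
}

// Admission test (2.4): admit transaction j iff its estimated utilization,
// together with that of the already admitted transactions, fits within teu.
bool admit(const FcuState& s, double EUj, double sumAdmittedEU) {
    return EUj + sumAdmittedEU <= s.teu;
}

The FC-M step described next is structured the same way, with the measured miss ratio m(k) and the reference M_S taking the place of u(k) and U_S.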

2.8.2 FC-M

FC-M is an algorithm similar to FC-U presented above, but it controls the miss ratio [20]. The control loop samples the deadline miss ratio periodically, computes a change in the total estimated utilization, denoted dM(k), and adds it to teu(k). New transactions are admitted if they comply with equation (2.4). The pseudo code is as follows:

FC-M(M_S, K_PM) {
    dM(k) = K_PM × (M_S − m(k));
    teu(k+1) = teu(k) + dM(k);
}

where K_PM is the control parameter for the deadline miss ratio controller. This algorithm calculates the change in total estimated utilization without knowledge of the utilization bound, using instead the difference between the deadline miss ratio in the last sampling window and the miss ratio reference level. Since m(k) is naturally bounded in the range [0, 100%], FC-M cannot detect how severely the system is underutilized when m(k) remains at 0%.

2.8.3 FC-UM

FC-UM is an algorithm for integrated utilization and miss ratio control [20]. The algorithm solves the problems of handling overloaded and underutilized systems that can occur in FC-U and FC-M, respectively. Both miss ratio and utilization are monitored and fed back to their controllers, and the control signals are calculated separately. The control input of the utilization controller, dU(k), and that of the miss ratio controller, dM(k), are compared and the smaller one is sent to the QoS actuator. New transactions are admitted if they comply with equation (2.4). The pseudo code is as follows:

FC-UM(M_S, U_S, K_PM, K_PU) {
    dM(k) = K_PM × (M_S − m(k));
    dU(k) = K_PU × (U_S − u(k));
    teu(k+1) = teu(k) + min(dM(k), dU(k));
}


2.8.4 QMF

The QMF (a QoS-sensitive approach for Miss ratio and Freshness guarantees) algorithm [29] deals with the issue of handling updates of base data objects to reflect the real-world environment, i.e., update transactions, while still providing resources for user transactions. QMF switches the updating of base data from an immediate to an on-demand policy, providing more resources to user transactions while still ensuring that base data that is read is fresh, i.e., within its absolute validity interval (see section 2.2.1). A prior study implementing this algorithm on the COMET platform [5] has shown it to require too many resources to be of any value for system performance. By enhancing the performance of the COMET platform, the QMF algorithm could provide higher utilization for user transactions.

2.9 Adaptive Control Theory

The word "adapt" means changing to fit new circumstances. Adaptive feedback control can be thought of as a special type of nonlinear feedback control in which the states of the process can be separated into two categories, which change at different rates [4]. The slowly changing states are called controller tuning variables. This separation introduces the idea of two time scales: a fast time scale for updating the ordinary feedback, and a slower one for updating the controller tuning variables. While a regular feedback controller is tuned for one particular environment setting, the adaptive controller tunes the feedback controller to adapt to a time-varying environment, thus automatically adjusting itself for better performance. Below we present two adaptive feedback control techniques: the self-tuning regulator (STR) and least squares with regression models.

2.9.1 Self-Tuning Regulator Control

When estimating the total utilization teu(k), it may differ from the total actual utilization u(k), which can be written as [35]

    u(k) = G_a(k) \cdot teu(k)    (2.5)

where G_a is called the execution time factor. When the total actual utilization is below the utilization threshold u_{th}(k), the miss ratio m(k) = 0, since no transactions miss their deadlines. The utilization threshold is chosen according to

    u_{th}(k) =
    \begin{cases}
      1, & \text{EDF scheduler} \\
      u(k) \text{ where } 0 < m(k) < 0.005, & \text{other scheduler.}
    \end{cases}    (2.6)

The miss ratio linearized at the utilization threshold gives [35]

    m(k) =
    \begin{cases}
      0, & u(k) \le u_{th}(k) \\
      m(k-1) + G_M (u(k) - u(k-1)), & u(k) > u_{th}(k)
    \end{cases}    (2.7)

where G_M is the slope of the linearization. Using (2.5) and (2.7), the corresponding closed-loop transfer function is [35]

    G_c(z) = \frac{K_{PM} G_a(k) G_M}{z - (1 - K_{PM} G_a(k) G_M)}, \quad m(k) \ge 0.    (2.8)

Since G_a(k) may change from sampling instant to sampling instant, a new K_{PM}(k) must be calculated to keep the pole location of the closed-loop transfer function constant. To obtain a stable system, the pole must be located within the unit circle [35]. The new K_{PM}(k) can be calculated with

    K_{PM}(k) = \frac{1 - p_0}{G_a(k) G_M}    (2.9)

where p_0 is the pole location. In this thesis p_0 = 0.25 is chosen. By using (2.5) and estimating u(k) with m(k)/G_M + u_{th}(k) (see figure 2.5), a new G_a can be estimated with [35]

    \hat{G}_a(k+1) = (1 - \alpha)\hat{G}_a(k) + \alpha \frac{m(k)/G_M + u_{th}(k)}{teu(k)}    (2.10)

where 0 < α < 2. A larger value of α provides better tracking ability, but is also more sensitive to rapid changes in miss ratio between samples [35]. These two aspects should be considered when α is chosen. In this thesis α = 0.8 is chosen.

where 0 < α < 2. A larger value of α provides better tracking ability, but is also more sensitive to rapid changes in miss ratio between samples [35]. These two aspects should be considered when α is chosen. In this thesis α = 0.8 is chosen.

(37)

th u m(k) GM

G

M m(k) u(k)

m

u

(k)

Figure 2.5: Estimation of u(k)
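The following C++ sketch summarizes the self-tuning updates in equations (2.9) and (2.10): G_a is re-estimated from the measured miss ratio, and K_PM is re-tuned so that the closed-loop pole stays at p_0. The struct and function are assumptions for the example, not COMET code.

// Estimator state for the execution time factor G_a.
struct StrState {
    double GaHat;  // current estimate of G_a, e.g., initialised to 1.0
};

// One adaptation step: update the G_a estimate (2.10) and return the
// re-tuned controller parameter K_PM(k) from (2.9).
double strTuneKpm(StrState& s, double m, double uth, double teu,
                  double GM, double alpha, double p0) {
    double uEstimate = m / GM + uth;        // estimate of u(k), cf. figure 2.5
    s.GaHat = (1.0 - alpha) * s.GaHat
            + alpha * (uEstimate / teu);    // equation (2.10)
    return (1.0 - p0) / (s.GaHat * GM);     // equation (2.9)
}

With the values used in this thesis (alpha = 0.8, p0 = 0.25), the estimate reacts quickly to changes in execution time while the pole stays well inside the unit circle.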

2.9.2 Least Squares And Regression Models

Regression model algorithms are a family of algorithms with a black-box approach, where few assumptions about the system are made [4]. Karl Friedrich Gauss formulated the principle of least squares, stating that the unknown parameters in a mathematical model should be chosen in such a way that [4]:

The sum of the squares of the difference between the actually observed and the computed values, multiplied by numbers that measure the degree of precision, is a minimum.


The least-squares method is particularly simple for a mathematical model, called a regression model, written in the form

    y(k) = \varphi_1(k)\theta_1(k) + \varphi_2(k)\theta_2(k) + \ldots + \varphi_n(k)\theta_n(k) = \varphi^T(k)\theta(k)    (2.11)

where y is the observed controlled variable, \theta(k) = [\theta_1(k)\ \theta_2(k)\ \ldots\ \theta_n(k)]^T are the parameters of the model to be calculated, and \varphi^T(k) = [\varphi_1(k)\ \varphi_2(k)\ \ldots\ \varphi_n(k)] are known functions that may depend on other known variables; they are called the regression variables or the regressors [4].

The model is indexed by the variable k, which denotes a sample taken in a sampling window ((k−1)T, kT), where T is the sampling period. Using an estimate of the controlled variable, \hat{y}, the parameter \theta should be chosen to minimize the least-squares loss function [4]

    V(\theta, k) = \frac{1}{2} \sum_{i=1}^{k} (y(i) - \hat{y}(i))^2.    (2.12)

In adaptive controllers the observations are obtained sequentially in real-time. Using the recursive least-squares (RLS) algorithm below, the results obtained at sampling instant k−1 can be used to get the estimates at the kth sample [19]. Let \hat{\theta}(k-1) be an estimate of \theta(k-1). The estimate of y(k), given by

    \hat{y}(k) = \varphi^T(k)\hat{\theta}(k-1)    (2.13)

and the prediction error

    \epsilon(k) = y(k) - \hat{y}(k)    (2.14)

are calculated. The RLS algorithm calculates a new estimate of \theta at the kth sampling instant according to

    \epsilon(k) = y(k) - \varphi^T(k)\hat{\theta}(k-1)
    P(k) = \frac{1}{\lambda}\left( P(k-1) - \frac{P(k-1)\varphi(k)\varphi^T(k)P(k-1)}{\lambda + \varphi^T(k)P(k-1)\varphi(k)} \right)
    K(k) = P(k)\varphi(k)
    \hat{\theta}(k) = \hat{\theta}(k-1) + K(k)\epsilon(k).    (2.15)

The P(k) matrix is defined as

    P(k) = \left( \sum_{i=1}^{k} \varphi(i)\varphi^T(i) \right)^{-1}.    (2.16)

The RLS algorithm calculates P(k) according to (2.15) in order to perform the calculation recursively. The K(k) matrix is introduced as a temporary matrix to simplify the computation. For each sampling instant, the algorithm calculates the prediction error \epsilon(k), updates the P(k) and K(k) matrices, and finally adds the term K(k)\epsilon(k) to the estimate \hat{\theta}(k-1) in order to generate a new parameter estimate, \hat{\theta}(k).

In RLS a forgetting factor, 0 < \lambda < 1, is used to decide the importance of old measurements when the algorithm minimizes the loss function V(\theta, k) to find \hat{\theta}(k). A smaller value of \lambda provides better tracking ability, but is also more sensitive to rapid changes in miss ratio between samples [37]. In this thesis \lambda = 0.5 is used.

After an estimate of the model parameters has been calculated, future values of y can be predicted by calculating

    \hat{y}(k+1) = \varphi^T(k+1)\hat{\theta}(k).    (2.17)
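As a concrete illustration of the recursive update (2.15), the C++ sketch below implements RLS for the scalar case (one regressor and one parameter). The struct and initial values are assumptions made to keep the example self-contained.

// Scalar recursive least squares with a forgetting factor.
struct ScalarRls {
    double theta;   // parameter estimate, theta_hat(k)
    double P;       // "covariance" P(k); start with a large value, e.g., 1000
    double lambda;  // forgetting factor, 0 < lambda < 1 (0.5 in this thesis)
};

// One RLS step: given regressor phi(k) and observation y(k), update the
// parameter estimate and return the prediction error epsilon(k).
double rlsUpdate(ScalarRls& r, double phi, double y) {
    double yHat    = phi * r.theta;                              // (2.13)
    double epsilon = y - yHat;                                   // (2.14)
    double denom   = r.lambda + phi * r.P * phi;
    r.P = (r.P - (r.P * phi) * (phi * r.P) / denom) / r.lambda;  // P(k) update, (2.15)
    double K = r.P * phi;                                        // K(k) = P(k) phi(k)
    r.theta += K * epsilon;                                      // theta_hat(k)
    return epsilon;
}

A smaller lambda weights recent samples more heavily, matching the tracking/noise trade-off discussed above; once the estimate has converged, the prediction (2.17) reduces to phi(k+1) * theta.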

Chapter 3

Problem Description

In this chapter a description of the problem is given. Section 3.1 describes current COMET deficiencies. In section 3.2 a proposal for dynamic reconfiguration of COMET during run-time is outlined. The adaptive feedback control architecture, algorithms and metrics are explained in section 3.3. In section 3.4 our aim and objectives are specified.

3.1 COMET Improvements

This section specifies the improvements made to the COMET baseline during this thesis project. The baseline represents the basic standard setup of COMET components. The section covers the merging of COMET v2.0 and v1.5 into COMET v3.0, and the correction of other COMET deficiencies.

3.1.1 COMET Merging

Two different parallel COMET baselines exist, both originally built on the initial COMET v1.0 [10], with different functionality, interfaces, etc. COMET v1.5, developed in two former thesis projects [11, 5], supports concurrency control and QoS management. COMET v2.0 is an enhanced version of COMET v1.0 that has been redesigned, bug-fixed, and optimized for better performance and functionality. The first task when improving COMET should be to merge COMET v1.5 and COMET v2.0 into COMET v3.0, suitable for all future development by all COMET project members.

3.1.2 COMET Deficiencies

When the COMET platform was extended with concurrency control and QoS [11, 5], a number of design and implementation deficiencies were detected. To achieve successful further development of COMET, implementation of component swapping, and successful implementation of the adaptive algorithms, the deficiencies stated in the bullet list below must be addressed, either by being implicitly fixed when merging the COMET v1.5 and COMET v2.0 versions, or afterwards:

• The implementation does not have distinct interfaces, operations and mechanisms, thus not conforming to ACCORD and RTCOM. Restructuring of the code into more distinct interfaces, operations and mechanisms should be done.

• Mechanisms in some components call operations in other components. This should be corrected so that components can only communicate with other components via operations.

• Memory leaks exist and uninitialized memory is read at certain points, e.g., when performing index searches in the IMC and when writing meta-data to memory. All memory leaks in COMET should be found and fixed to ensure a more stable system.

• Tuples are not updated correctly on attribute updates, and deleting tuples corrupts the meta-data. Correction of this basic database functionality should be done.

• When a relation is loaded, an index search is performed for each key in the relation. Since relations are loaded upon every database operation, this behavior causes performance to degrade heavily. A more efficient way of handling indexing should be implemented.


• Problems exist when passing data in aspects. When passing data, arguments should be replaced by data structures.

• Operations are too coarse-granular, with large amounts of code. The operations should be divided into more fine-granular logical pieces of code, i.e., mechanisms.

• Transactions in COMET have very long execution times. Profiling of COMET to identify time-consuming functional parts and optimization of code should be done to lower execution times.

• Naming and coding standards do not exist, and therefore need to be outlined in order to enhance code readability and future work on COMET.

• Error handling in COMET is inferior, and a more extensive and efficient way of handling errors should be developed.

• Other performance issues exist, e.g., a number of inefficient sequential searches which search through complete arrays even though they could be empty or contain only a few elements; these should be optimized.

3.2 Dynamic Reconfiguration

Since COMET is composed of different components, one benefit is the possibility to swap one component for another. This can indeed be done at compile time, but a monolithic system is then created. In embedded systems with requirements on high up-time it may not be possible to temporarily shut down the system for reconfiguration; instead, it should be possible to dynamically reconfigure the system during run-time. This in turn sets demands on the feedback control of the system to adjust to these new conditions, since a new system has been created (see section 3.3).

The components in COMET differ in how difficult they are to make swappable during run-time, due to the complexity of their interfaces and the need to store and transfer their internal state to the new version of the component; therefore, not all COMET components may be suitable for swapping. The issue of whether a component should be allowed to be swapped or not can be addressed by outlining a set of rules for what needs to be fulfilled, i.e., a conformance rule set.

Another problem that exists when swapping COMET components, is the fact that the concurrency control and QoS management aspects crosscut several of the components. When swapping a component, the consistency of aspect internal states and functionality must be guaranteed.

3.3 Adaptive Feedback Control

Feedback control provides COMET with functionality for enforcing QoS in a real-time system during workload variations and other time invariances [5]. However, the controller behavior is not updated according to workload variations and changes to the database configuration; it is instead tuned based on the fixed database configuration and an estimated workload. Delays in execution time due to, e.g., CPU resource scheduling, and the introduction of component swapping into COMET, make it difficult for traditional feedback control to achieve satisfactory or optimal results. By using adaptive control it is possible to achieve better QoS management by constantly monitoring the system and updating the controllers according to workload and execution time variations, and also when the database configuration is changed.

3.3.1 Adaptive Feedback Control Algorithms

Two different adaptive feedback control algorithms are compared in this thesis with regard to the metrics identified in section 2.7.2. Both algorithms monitor the miss ratio, i.e., use the FC-M algorithm. Since FC-M is a proportional controller, adjustments to it are made to fit the adaptive algorithms specified below. Both algorithms use the same basic architecture, but they differ in the way the controller tuning variables are calculated:

• AFC-M-STR (Adaptive Feedback Control-Miss ratio-Self-Tuning Regulator) uses a self-tuning regulator, described in section 7.2.1.

• AFC-M-RM (Adaptive Feedback Control-Miss ratio-Regression Model) uses the regression model algorithm described in section 7.2.2.


3.3.2 Metrics

The different algorithms described in section 2.9 differ in regard to usage of resources. Two metrics relevant for choosing these algorithms are:

• CPU usage

• Memory usage

Since many embedded real-time systems have scarce CPU and memory resources, these metrics are of great importance.

3.4 Aim and Objectives

The aim of this project is to stabilize the COMET platform and make it possible to swap components during run-time. COMET extensions providing adaptive QoS management should also be implemented. Achieving this aim includes the following activities:

1. Study the theory behind adaptive control, component swapping and the COMET system.

2. COMET Improvements

• Merge the COMET v1.5 and COMET v2.0 baseline versions into COMET v3.0.

• Improve COMET platform by addressing deficiencies stated in section 3.1.

• Rigorous testing of the improved version of the COMET platform.

3. Dynamic Reconfiguration

• Determine if dynamic reconfiguration can be performed on

  (a) components

  (b) components with woven aspects

• Outline a conformance rule set for what needs to be fulfilled for dynamic reconfiguration to be allowed.


4. Adaptive Control

• Choose adaptive feedback control algorithms according to metrics.

• Implement the AFC-M-RM and AFC-M-STR adaptive algorithms in COMET using components and aspects.

• Develop a test bench for performance evaluation of the adaptive algorithms.

• Performance evaluation of the AFC-M-RM and AFC-M-STR algorithms.

• Performance evaluation of Adaptive QoS COMET in comparison to regular QoS COMET.

• Performance evaluation of Adaptive QoS COMET under the impact of dynamic reconfiguration.


Chapter 4

Advanced Preliminaries

This chapter describes the COMET components (section 4.1) and the execution flow of a transaction in COMET (section 4.2). Section 4.3 describes the different aspect packages available, and briefly how they work.

4.1 COMET Components

COMET consists of the components described in this section. Figure 4.1 illustrates how the components relate to each other. An arrow from one component to another means that the first component uses the second component. Since the BMC is a subcomponent of the TMC, it is drawn with a dashed line. The UIC uses the SMC in the concurrent case, which is why the arrows connecting those components are dashed. When running COMET non-concurrently, the TMC is used directly by the UIC. The LMC is not used in the non-concurrent configuration of COMET, which is why no arrows are connected to it. Aspects should weave in code to use the LMC when dealing with locks.

Figure 4.1: COMET components and their relations

4.1.1 User Interface Component (UIC)

The UIC is the connection point for applications using the COMET database. The applications use the UIC interfaces to create and execute new transactions. The UIC stores every transaction present in the system in an array of DBTrans structs, which contain transaction-specific information. After a transaction is initialized with a call to RUIC_Op_beginTransaction, strings containing queries can be sent to the transaction. The queries, written in an SQL-like syntax [10], are parsed by the UIC, and the corresponding execution trees are built and stored in the corresponding DBTrans struct. When all queries belonging to a transaction have been submitted, the application sends a commit command to the UIC and the transaction starts to execute. Finally, the UIC provides the result of the transaction to the application.

4.1.2 Scheduler Manager Component (SMC)

The SMC manages a set of threads, called a thread pool. Every transaction must be assigned a thread by the SMC to execute within. The number of threads available in the pool is set with a system parameter, and can be altered by a database user to meet certain requirements. A ready queue and an active queue are also maintained by the SMC. Transactions that are currently executing are stored in the active queue, and transactions waiting to execute are kept in the ready queue.

When a transaction τ is committed, the UIC sends a submit request to the SMC, which puts the transaction in the ready queue and then tries to execute the transaction with a scheduling request. Three scenarios are then possible.

1. The thread pool contains at least one thread, which can immediately be assigned to τ . τ is removed from the ready queue and put in the active queue and starts to execute.

2. The thread pool is empty, and the transactions currently executing have at least the same priority as τ . τ then has to wait in the ready queue to be scheduled by the SMC. The next transaction to execute is chosen with a scheduling algorithm, e.g., EDF.

3. The thread pool is empty, and at least one of the currently executing transactions has a lower priority than τ. The transaction with lower priority is rolled back before completion to return its thread to the thread pool. τ is assigned the available thread and starts to execute.

The threads from the thread pool are themselves scheduled by the operating system. When a transaction is completed, it is removed from the active queue and its thread is returned to the thread pool. If a transaction is rolled back before completion, the transaction is removed from the active queue and then put back in the ready queue, and its thread is reassigned by the scheduler.

4.1.3 Transaction Manager Component (TMC)

The TMC executes a transaction by sequentially traversing the execution trees belonging to the transaction, using the recursive RTMC_Mech_Result function, which is the core function of the TMC. For each execution tree the affected relations are loaded into buffers, using the IMC and the MMC. The tuples not needed in the query are deleted. The operations of the query are performed on the tuples in the buffers. If a transaction contains an update query, all the affected tuples must be written back to memory, using the IMC and MMC. When finished with a parse tree, the TMC starts with the next parse tree. When finalizing the last tree, the TMC informs the SMC that the transaction is completed. The Buffer Manager Component (BMC), which handles the buffers, is a subcomponent of the TMC.

4.1.4 Locking Manager Component (LMC)

The LMC manages a list of locks and provides functionality for manipulation of locks, which can be used by, e.g., concurrency aspects.

4.1.5 Indexing Manager Component (IMC)

The IMC is used to find tuples in relations by storing their addresses in searchable trees. These addresses are used when reading or writing tuples with the MMC. The IMC provides functionality for managing relations and tuples, e.g., insertion of new tuples. Two separate IMC components exist: a default one, used for all COMET configurations except GUARD-Link, which uses a B-Link IMC version.

4.1.6 Memory Management Component (MMC)

The MMC provides memory operations, used by the TMC to manage relations and by the IMC to manage the index. The MMC operations are for allocating, deallocating, reading, and writing to memory. An additional set of identical operations exists, intended to be used when weaving in aspect advice.

4.2 Transaction Flow

Transactions in COMET perform a certain set of steps when executing, which are shown in figure 4.2. For each step in the figure, every component involved in this step is presented. The steps are as follows.

1. A user sends an SQL query, as a string, to the UIC. The UIC then initiates a new transaction for the user query.

2. The query string is parsed, and a corresponding execution tree is produced.

3. If concurrency control is applied, the transaction waits to be scheduled by the SMC, and is then assigned an available thread to start executing in. How conflicts are detected and resolved depends on which concurrency aspect is used, and can be studied in more depth in [11]. Without concurrency control this step is omitted and the TMC is called directly from the UIC.

4. The TMC loads the relations needed by the query into buffers. Each relation is loaded in the following manner. First, the IMC is used to get the address of the metadata for the relation. The MMC is then used to read the metadata from memory. The TMC stores the metadata in a buffer. The metadata is used to describe the properties of the relation and its attributes, such as the names and types of the attributes. Thereafter the TMC locates every tuple in the relation by using the IMC, and reads the tuples into the buffer, using the MMC.

5. The tuples not needed by the query are deleted from the buffer.

6. The operations of the queries are performed on the tuples in the buffer, e.g., to change the value of an attribute. If the transaction is of read-only type, the transaction leaves the TMC at this point, and the dashed route to step 8 is followed.


7. If the transaction contains an update query, the tuples affected in the previous step are written back to memory by first using the IMC to receive the correct address of the tuple, and then by using the MMC to write the tuple data to memory.

8. The result is presented to the user using the UIC, which uses the TMC.

9. Finally, if a concurrent configuration is used, the transaction thread is released and given back to the SMC thread pool.


Figure 4.2: The execution steps of a transaction in COMET

The execution steps when performing transactions with deletion or insertion of tuples are similar. Differences worth mentioning are that metadata is updated as well in step 7, and that step 5 is omitted on inserts.
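The nine steps above can be summarized in a compact C sketch. Every function name below is hypothetical and merely labels the component responsible for the corresponding step; error handling, conflict resolution, and the actual COMET interfaces are omitted.

#include <stdbool.h>

/* Hypothetical stand-ins for the components involved in each step. */
typedef struct Transaction Transaction;

extern Transaction *uic_begin_transaction(const char *sql);      /* step 1 */
extern void         uic_parse_query(Transaction *t);             /* step 2 */
extern void         smc_schedule(Transaction *t);                /* step 3, concurrent configs */
extern void         tmc_load_relations(Transaction *t);          /* step 4: IMC + MMC */
extern void         bmc_remove_unneeded_tuples(Transaction *t);  /* step 5 */
extern bool         tmc_perform_operations(Transaction *t);      /* step 6: true if update */
extern void         tmc_write_back_tuples(Transaction *t);       /* step 7: IMC + MMC */
extern void         uic_present_result(Transaction *t);          /* step 8 */
extern void         smc_return_thread(Transaction *t);           /* step 9, concurrent configs */

/* One pass through the transaction flow, assuming a concurrent
 * configuration (steps 3 and 9 are omitted otherwise).           */
void run_query(const char *sql) {
    Transaction *t = uic_begin_transaction(sql);
    uic_parse_query(t);
    smc_schedule(t);
    tmc_load_relations(t);
    bmc_remove_unneeded_tuples(t);
    if (tmc_perform_operations(t))   /* update query? */
        tmc_write_back_tuples(t);    /* read-only transactions take the dashed route */
    uic_present_result(t);
    smc_return_thread(t);
}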


4.3 Aspect Packages

COMET consists of several different configurations, both for concurrency control (section 4.3.1) and QoS management (section 4.3.2). An aspect package is a set of aspects which, combined with the COMET components, constitutes a COMET configuration. All concurrency control packages can be used with any of the QoS packages. Instructions on how to build the different COMET configurations can be found in the COMET User Manual [21]. Since the join points have changed since the original implementation of the concurrency control and QoS management aspects, documentation of the updated aspects can be generated by running a build script, also documented in the COMET User Manual [21].

4.3.1 Concurrency Aspects

Three different concurrency control configurations, described briefly below, can be used with COMET. For a more detailed description of the concurrency control algorithms, see [11].

HP2PL With Similarity

High-Priority 2-Phase Locking (HP2PL) [1] is a locking scheme based on regular 2-Phase Locking (2PL). Unlike 2PL, HP2PL takes transaction priorities into account. Different locks are acquired on read and write operations on data items. Variants of HP2PL exist, using different conflict resolution methods. HP2PL suffers from an unbounded number of transaction restarts and unbounded waiting times, and is therefore suitable for soft, but not hard, RTSs.
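A minimal sketch of the high-priority rule in HP2PL is shown below. It only captures the core conflict resolution idea (a higher-priority requester aborts a lower-priority lock holder, otherwise it waits) and ignores similarity, lock modes, and all bookkeeping; the names are chosen for this example only.

#include <stdbool.h>

typedef enum { GRANT, WAIT, ABORT_HOLDER } LockDecision;

/* Core HP2PL rule on a lock conflict: the requester only waits for holders
 * of equal or higher priority; a lower-priority holder is aborted
 * (restarted) so the requester can take the lock.                          */
LockDecision hp2pl_resolve(int holder_priority, int requester_priority,
                           bool conflicting_modes) {
    if (!conflicting_modes)
        return GRANT;                    /* e.g., two read locks coexist   */
    if (holder_priority < requester_priority)
        return ABORT_HOLDER;             /* requester has higher priority  */
    return WAIT;                         /* holder keeps the lock          */
}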

ODC

Optimistic Divergence Control (ODC) [36] is based on a concurrency control method which uses weak and strong locks. As a transaction executes it asynchronously acquires weak read and write locks on the accessed data items, and updates data in local space. If a strong lock is held on any of the items, the requesting transaction is marked for abortion. Transactions that are marked for abortion are aborted when they enter the validation phase. Aborted transactions wait a certain amount of time before restarting. If a transaction has not been marked for abortion during its execution, it is allowed to commit when it enters the validation phase. During commit, all of the weak write locks a transaction holds are temporarily converted to strong locks. Any transaction that holds a weak lock on a data item that gets locked by a strong lock in this phase is marked for abortion.
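The commit-time behaviour can be sketched as follows. The data structures and helper functions are hypothetical, and the sketch ignores local-space updates, restart delays, and lock release; it only shows the validation logic described above.

#include <stdbool.h>
#include <stddef.h>

/* Hypothetical per-transaction state for the ODC sketch. */
typedef struct Txn {
    bool marked_for_abort;      /* set if a strong lock was held on access */
    int *write_set;             /* data items weakly write-locked          */
    size_t write_set_len;
    struct Txn *next;           /* all live transactions                   */
} Txn;

extern bool holds_weak_lock(const Txn *t, int item);
extern void convert_to_strong_lock(Txn *t, int item);
extern void abort_txn(Txn *t);
extern void install_updates(Txn *t);

/* Validation phase: abort if marked, otherwise convert weak write locks to
 * strong locks and mark every other transaction weakly locking those items. */
bool odc_validate_and_commit(Txn *t, Txn *all_txns) {
    if (t->marked_for_abort) {
        abort_txn(t);                         /* restarts after a delay      */
        return false;
    }
    for (size_t i = 0; i < t->write_set_len; i++) {
        int item = t->write_set[i];
        convert_to_strong_lock(t, item);
        for (Txn *o = all_txns; o != NULL; o = o->next)
            if (o != t && holds_weak_lock(o, item))
                o->marked_for_abort = true;   /* conflicting transaction     */
    }
    install_updates(t);                       /* make local updates visible  */
    return true;
}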

GUARD-Link

GUARD-Link [14] uses the Gate-keeping Using Adaptive eaRliest Deadline (GUARD) admission control combined with a concurrent B-link index algorithm. The GUARD admission control decides which transactions are allowed to execute at all, based on feedback control and a random factor. Index operations in the B-link tree lock tree nodes and subtrees, typically on descent; the locks are kept if the tree needs to be modified, and released otherwise.
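As a rough, hedged illustration of the gate-keeping idea (not the actual GUARD algorithm), an admission decision combining a feedback-derived admit fraction with a random factor could look like this:

#include <stdbool.h>
#include <stdlib.h>

/* Hypothetical gate-keeping decision. admit_fraction (0..1) is assumed to
 * be maintained by a feedback controller based on observed deadline
 * misses; a random draw decides whether this transaction passes the gate. */
bool guard_like_admit(double admit_fraction) {
    double r = (double)rand() / (double)RAND_MAX;   /* random factor in [0,1] */
    return r <= admit_fraction;
}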

4.3.2 QoS Aspects

Two different QoS management configurations can be used with COMET. The configurations use a number of aspects to implement their functionality.

ACC

The Admission Control Configuration (ACC) requires the use of:

• The Quality of service Actuator Component (QAC) intercepts requests to create new transactions from the UIC to the SMC. Based on an admission policy, the actuator decides whether to allow the requests to reach the SMC, or to reject the transactions.

• The Transactional Real-Time Aspect (TRTA) adds an estimated utilization to the COMET transaction model. TRTA also transfers the utilization and deadline set in the application for all transactions to the SMC.


• The Quality of service Actuator Composition Aspect (QACA) facilitates the insertion of the QAC between the UIC and the SMC.

When using ACC, the QACA inserts the QAC between the UIC and the SMC by allowing the QAC to intercept all requests to create new transactions. Using the estimated utilization introduced by the TRTA and equation 2.4, a new transaction is either admitted or rejected.
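Equation 2.4 is not reproduced in this section; assuming the common form of utilization-based admission, where a transaction is admitted only if its estimated utilization still fits under a threshold, the test can be sketched as follows. All names are hypothetical.

#include <stdbool.h>

/* Hedged sketch of a utilization-based admission test. The exact condition
 * used by COMET is given by equation 2.4; here the common form "admit if
 * the estimated utilization still fits under a set threshold" is assumed.  */
typedef struct {
    double admitted_utilization;   /* estimated utilization of admitted, not
                                      yet completed transactions             */
    double utilization_threshold;  /* set point of the admission policy      */
} AdmissionState;

bool admit_transaction(AdmissionState *s, double estimated_utilization) {
    if (s->admitted_utilization + estimated_utilization <= s->utilization_threshold) {
        s->admitted_utilization += estimated_utilization;   /* book the load      */
        return true;                                         /* pass on to the SMC */
    }
    return false;                                            /* reject              */
}

In the ACC the threshold is effectively static, whereas in the MRFC configuration described below it is adjusted at run-time by the feedback controller.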

MRFC

The Miss Ratio Feedback Configuration (MRFC) requires the use of all ACC components and aspects, and additionally:

• The Feedback Controller Component (FCC) computes input to the admission policy of the QAC at regular intervals. By default, an input of zero is generated, but aspects can be woven into the FCC to accommodate various QoS algorithms. The FCC runs in its own thread of execution.

• The Feedback Controller Composition Aspect (FCCA) initializes the FCC.

• The Missed Deadline Monitor Aspect (MDMA) modifies the SMC to keep track of the deadline miss ratio, using equation 2.3.

• The Utilization from Missed deadline Control Aspect (UMCA) modifies the FCC to base its calculation of input to the admission policy of the QAC on the deadline miss ratio of the system.

The MRFC uses the same admission control as the ACC, and in addition implements the FC-M algorithm (section 2.8.2). The FCCA initializes the FCC, which regularly computes input to the admission policy of the QAC based on the deadline miss ratio of the system; the miss ratio is monitored by the MDMA and the controller input is calculated by the UMCA.
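The details of FC-M are given in section 2.8.2; as an illustration only, a simple proportional miss-ratio controller executed once per sampling period could look like the sketch below. The gain, the clamping of the threshold, and the structure names are assumptions made for this example.

/* Hedged sketch of a miss-ratio feedback controller in the style of FC-M.
 * Each sampling period the deviation of the measured miss ratio from its
 * reference is turned into an adjustment of the utilization threshold used
 * by the admission policy. Gain and clamping are assumptions.              */
typedef struct {
    double miss_ratio_reference;   /* desired deadline miss ratio, e.g. 0.05 */
    double proportional_gain;      /* controller tuning parameter            */
} MissRatioController;

/* Returns the new utilization threshold for the admission policy. */
double fc_m_step(const MissRatioController *c,
                 double measured_miss_ratio,
                 double current_threshold) {
    double error = c->miss_ratio_reference - measured_miss_ratio;
    double new_threshold = current_threshold + c->proportional_gain * error;

    if (new_threshold < 0.0) new_threshold = 0.0;   /* keep the threshold sane */
    if (new_threshold > 1.0) new_threshold = 1.0;
    return new_threshold;
}

In MRFC terms, the measured miss ratio corresponds to the value maintained through the MDMA-instrumented SMC, and the returned threshold corresponds to the input that the UMCA-modified FCC delivers to the admission policy of the QAC.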


Chapter 5

COMET Improvements

This chapter contains a specification of COMET v3.0, merged from COMET v1.5 and COMET v2.0. The chapter also describes the improvements implemented in COMET v3.0. Furthermore, the chapter contains a section on testing of COMET v3.0, a description of the COMET API and debug tool, and a section specifying a COMET coding and naming standard. The last sections of the chapter present the changes made to COMET v3.0 that map directly to the thesis problem description (chapter 3).

5.1 Prerequisites

Two parallel COMET baselines exist, both originally built on the initial COMET v1.0 [10], having different functionality, interfaces, etc. COMET v1.5, developed in two former thesis projects ([11] and [5]), supports concurrency control and QoS management. COMET v2.0, developed at Mälardalens Högskola, is an enhanced version of v1.0; it has been redesigned, bug-fixed, and optimized for better performance and functionality. The major changes made in COMET v2.0 can be found in table 5.1.


Comp.   Change done

ALL     Dereferenced the Btable ** buffer throughout the project.
        General minor bug fixing.

UIC     New function created, getParseTree, used for creation of execution parse trees.
        Added struct DBTrans, containing all transaction-specific information, to accommodate a new transaction model.
        Changed the behavior/interface of the following functions to use the DBTrans struct:
          • RUIC commitTransaction
          • RUIC rollbackTransaction
          • RUIC beginTransaction
          • RUIC Query
        Created a destructor, RUIC freeTransaction, for the DBTrans struct.
        Created a function, RUIC printTransaction, to print the result of a transaction.
        Removed functions no longer in use, since all transaction-specific information is now in the DBTrans struct:
          • PutHandleinTree
          • PutInHandleTree
          • SearchHandle
          • makeHandleTree
          • findHandleTree
        Changed the newNode function to use Dmalloc, and thereby found some memory leakage.
        Bug-fixed a memory leak in UIC moreAttrs.

TMC     Cleaned the Result function with regard to unused variables etc.
        Broke out Update, Create and Drop into separate functions.
        Removed all old references to handle trees, transaction lists etc., which are no longer in use.
        Removed all traces of database pointers in the Result function. This functionality is now contained in DBPTMC instead.
        Removed attributes in the Result function, also affecting all calls to it.

Table 5.1: Major changes made in COMET v2.0
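To make the role of the DBTrans struct in the table concrete, a hedged sketch is given below. The field names and types are assumptions chosen purely for illustration; the table only states that all transaction-specific information, previously spread over handle trees and transaction lists, is gathered in one struct.

/* Hedged sketch of a transaction struct in the spirit of DBTrans. All
 * fields are assumptions for the example, not the actual definition.    */
typedef struct DBTransSketch {
    int     transaction_id;          /* identifies the transaction            */
    void   *parse_tree;              /* execution tree built from the query   */
    void   *result_buffer;           /* result to be printed and freed later  */
    long    deadline;                /* real-time attributes, if used         */
    double  estimated_utilization;   /* used by the QoS configurations        */
    int     state;                   /* e.g., ready, active, committed        */
} DBTransSketch;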


5.2 Approach

The first step in improving COMET according to section 3.1.2 should be to merge the two COMET versions, v1.5 and v2.0, into a new version called COMET v3.0. The benefits of doing this are:

• Creating a common baseline to be used for all future development by all COMET project members.

• Providing the possibility for all COMET project resources at Mälardalens Högskola and Linköpings University to push the overall COMET project forward in a mutually beneficial way.

• Setting standards and guidelines for future development of COMET.

• Incorporating improvements already made in the COMET v2.0 version, to solve problems stated in section 3.1.2.

Two different options exist when merging the COMET v1.5 and COMET v2.0 versions. The first option is to start with the COMET v1.5 version and reimplement or copy some or all of the changes done in COMET v2.0. The other option is to start with COMET v2.0, and add the changes made in the COMET v1.5 version to provide scheduling, locking, thread handling, concurrency control, and QoS management functionality.

The benefit of the first option would be that changes could be chosen in such a way that the concurrency control and QoS management aspects would not be affected; therefore less time would be needed to study and alter these aspects. A major drawback of the first option is that the merged version could not serve as a common base for all future development, since it would not fulfill all COMET project members' needs. This is the reason why the second option is chosen.

After merging the two versions into COMET v3.0, all remaining COMET improvements that are not already solved implicitly by using COMET v2.0 as the merge base, e.g., fixing errors on tuple updates and enforcing a naming and coding standard, are implemented.
