
March 2018

Evaluation of Test Vector Quality for Hybrid Systems

Yu Xiao


Evaluation of Test Vector Quality for Hybrid Systems

Yu Xiao

When testing is performed, a large number of test vectors, representing series of signal values, are input to the applications under development, and the results of the testing are then analyzed by checking for violations of defined safety requirements. However, time, personnel and other resources are usually limited. Selecting a subset of test vectors with better qualities is one possible way to bring down the costs of testing-related activities.

This thesis work aims at evaluating and quantifying the qualities of test vectors for a hybrid system. Two possible criteria for good quality are proposed, evaluated and quantified for test vectors that are input to a hybrid application at Volvo Cars. For validation and analysis purposes, the semantics of Signal Temporal Logic (STL) and its robustness satisfaction are introduced and monitored with the Matlab/Simulink toolbox Breach.

The conclusion is that the method we propose to evaluate and quantify the qualities of test vectors satisfies the needs of Volvo Cars so far. With the experience we have at present, however, it is not sufficiently proven that test vectors assessed as having higher quality by our method behave better when the robustness satisfaction of STL formulas is monitored. Nevertheless, it is a good beginning to relate the robustness satisfaction of STL semantics to test vector quality evaluation, for people with similar goals to consider in the future.


Many thanks go to Johan Eddeland, the supervisor of this thesis project, who has without question demonstrated great engineering skills and a nice personality throughout this project. Thank you for your continuous efforts in helping me complete my degree.

The advice from the reviewer of this thesis project, Professor Bengt Jonsson, is also much appreciated for improving the report. His meticulous and humble attitude will always lead me to reflect on myself throughout my life.

I would also like to thank Dr Justin Pearsson for supporting and hosting this project. His kind work plays an important role in completing the thesis.

Special greetings go to a dear friend from high school, Xihan Lei, who passed away from stomach cancer on 24th August, 2017. May you rest in peace and stay young forever.


Abstract

Acknowledgments

Contents

List of Figures

List of Tables

1 Introduction
1.1 Hybrid Systems
1.1.1 Example of a Hybrid System
1.1.2 Current Research Interests about Hybrid Systems
1.2 Verifying the Behavior of Hybrid Systems
1.2.1 Formal Methods and Their Limitations
1.2.2 Testing the Hybrid Systems
1.3 Model Based Development (MBD) in the Automotive Industry
1.3.1 Hybrid Systems in the Automotive Industry
1.3.2 MBD at Volvo Car Corporation
1.3.3 Testing Activities in MBD
1.4 Specifications of the Thesis Work
1.4.1 Motivation
1.4.2 Problem Statement
1.4.3 Method
1.4.4 Overview of the Report

2 Test Vector Generation in the Simulink Environment
2.1 Test Vectors for Automotive Applications
2.2.1 Formal Specifications
2.2.2 Recursive Definition of STL
2.2.3 Robustness Value of STL Formulas
2.2.4 Examples of STL Formulas
2.2.5 An Example of an STL Formula and Its Robustness Values
2.2.6 Generating Test Vectors Using the Breach Toolbox
2.3 Generating Test Vectors with Testweaver
2.4 Manually Generated Test Vectors
2.5 Summary

3 Use Case
3.1 The System Under Test
3.1.1 Structure of the SUT
3.1.2 General Introduction to the SUT
3.1.3 An Example of a System Requirement
3.2 Set-Up for Evaluation
3.3 Criterion 1
3.3.1 Description of Criterion 1
3.3.2 First Step to Implement Criterion 1
3.3.3 Second Step to Implement Criterion 1
3.3.4 Quantification Based on Criterion 1
3.3.5 Evaluation Result for Criterion 1
3.4 Extension to Criterion 2
3.4.1 The Need for a Second Criterion
3.4.2 The Idea of a Second Criterion
3.4.3 Implementation of Criterion 2
3.4.4 Quantification Result of Criterion 2
3.5 Summary

4.2 Purpose of Using the Breach Toolbox
4.3 Validation and Analyses Experiment Using the Breach Toolbox
4.3.1 Selecting Test Vectors to Re-Run with Breach
4.3.2 Translating System Requirements
4.3.3 Implementation of the Breach Toolbox
4.4 Outputs of the Breach Toolbox
4.4.1 Original Data Obtained from the Breach Toolbox
4.4.2 Standardization of the Original Robustness Values
4.5 Validation and Correlation Analyses
4.5.1 Validating the Quantification Result of Criterion 1
4.5.2 Discussions about Validating Criterion 1
4.5.3 Correlation Analyses
4.6 Conclusions of the Experiment with Breach

5 Conclusions and Future Work
5.1 Contributions
5.2 Limitations and Discussions


List of Figures

1.1 The circuit for the DC-DC boost converter
1.2 The hybrid automaton for the DC-DC boost converter
2.1 Velocity signal 1
2.2 Velocity signal 2
2.3 Velocity signal 3
2.4 A flowchart describing the main falsification procedure [40]
2.5 Two different kinds of parametrized input signals. a) shows a constant signal where the parameter is the value of the signal. b) shows a sine wave where the parameters are the amplitude and the angular frequency [40]
3.1 The Simulink model under test
3.2 An example of system requirements expressed in Simulink models
3.3 The histogram of Table 3.2
3.4 Quantification result for criterion 1 and criterion 2
4.1 A flowchart describing the procedure using Breach to get robustness values for system requirements
4.2 A comparison between the quantification result of criterion 1 and the sum of negative robustness values

1 Introduction

The first chapter, Introduction, has two purposes. The first purpose is to get the reader more familiar with the background of this thesis project: Chapter 1.1 introduces the notion of hybrid systems and the current research interests; Chapter 1.2 explains some terminologies regarding testing activities in the automation industry; Chapter 1.3 provides information about the industrial environment at Volvo Car Corporation, where the main part of this thesis project was performed.


1.1 Hybrid Systems

The literature on dynamical systems focuses mainly on two areas: continuous-time dynamical systems, which can be modeled and analyzed based on differential equations, and discrete-event systems, which can be modeled as (finite) automata [1].

Hybrid systems refer to systems exhibiting both continuous and discrete dynamics. Such systems are regarded as useful tools for describing a wide range of physical phenomena and systems [2].

1.1.1 Example of a Hybrid System

An example from the field of electrical engineering, the DC-DC boost converter, is used to illustrate the basic characteristics of a hybrid system.

Figure 1.1: The circuit for the DC-DC boost converter

The continuous properties of the system are modeled by the electric charge q of the capacitor and the magnetic flux φ of the inductor [3].

Figure 1.2: The hybrid automaton for the DC-DC boost converter (four modes with continuous dynamics over q and φ, and transitions guarded by conditions on the switch S, with resets q := 0 and φ := 0)


1.1.2 Current Research Interests about Hybrid Systems

Modeling system performance and verifying system behavior [4] are two of the major research interests in hybrid systems. Although there already exists well-established research for purely continuous systems and purely discrete systems, their hybrid combination requires novel adaptations and extensions of the former results [5].

For describing system behavior, besides hybrid Petri nets [6] and hybrid programs [7], hybrid automata [8] are also a powerful formalism.

Verification is the formal process of analyzing whether a system satisfies a desired specification using a computer algorithm: given a desired property, called a specification, we would like to guarantee that all of the hybrid system behaviors satisfy the specification. This is especially important for safety-critical applications [9].

Roughly, two reasons can explain the popularity of studies of hybrid systems:

• Hybrid systems provide a convenient framework for modeling a wider range of engineering processes, especially processes with multiple time scales. Such systems are usually reactive real-time systems that evolve over time [10].

• There already exist many practical applications in industry: for the design and modeling of hybrid automata, Simulink/Stateflow [11] is a popular tool, and there is also a built-in library called HyAuLib implemented in Modelica [12]. Meanwhile, there are verification tools such as HyTech for checking linear hybrid automata [13] and UPPAAL for timed automata [14].

1.2 Verifying the Behavior of Hybrid Systems

As mentioned in Chapter 1.1.2, verifying the behavior of hybrid systems is one of the major research interests in hybrid systems. There exist two methods for verifying system behavior: formal methods and testing.

1.2.1 Formal Methods and Their Limitations

Formal methods are mathematically based languages, techniques and tools for specifying and verifying software or hardware systems [17]. Formal methods are important verification techniques which can verify system behavior by calculating the reachable space of a hybrid automaton representing the system under study. However, the lack of methods for computing reachable sets of continuous dynamics has been the main obstacle towards an algorithmic verification methodology for hybrid systems [18]. The verification problem using formal methods for general hybrid systems has been studied and shown to be intrinsically difficult even under severe restrictions [8]. Even though recent developments have brought interesting theoretical results and practical tools for reachability analysis to check safety requirements of hybrid systems, there are still unsolved problems, especially when it comes to large-scale applications in an industrial context [5].

Following the current trends in the automotive industry, the usage of advanced control system technology will continue to increase. The driving reason for this is the ever-growing need for more safety, fewer emissions, lower energy consumption, more driving information and driver assistance [20]. For example, an essential part of automotive software, the ECU (Electronic Control Unit), will constantly be implemented with more advanced functions to satisfy those needs. This trend causes the system complexity to increase continually [21].

1.2.2 Testing the Hybrid Systems

When verification is insufficient to guarantee the correctness of system behavior, testing becomes an important alternative.

Testing denotes a set of activities that aim at showing that a system's intended and actual behaviors do not conform, or at increasing confidence that they do [22]. There are some important terminologies in the testing field that the reader needs to know in order to better understand this report.

System Under Test (SUT)

System under test (SUT) refers to the system that is being validated by the test engineers when performing testing activities [23].

Test Vector

Test vectors, also called test cases, are series of input signal values over time that are fed to the SUT when performing testing activities. In this report, the terminology test vectors will be used for consistency. Test vectors can simulate the SUT for shorter or longer periods.

Testing Environments

A testing environment means the software and hardware setup which provides the test engineers with the required environmental resources. The setup can contain the hardware devices, the operating system(s) and other software necessary to run the SUT.

For a SUT, input signals for different parts of the system are required for testing in different testing environments. Common signals for a hybrid car module include voltages, temperatures, control signals, etc.

1.3 Model Based Development (MBD) in the Automotive Industry

This thesis project is hosted by the Group of Software Integration and Testing at Volvo Car Corporation in Gothenburg. This section therefore provides the reader with some information about the development process and testing activities in the automotive industry.

1.3.1 Hybrid Systems in the Automotive Industry


1.3.2 MBD at Volvo Car Corporation

Rather than being an explicit notion in engineers' daily work, hybrid systems lie more implicitly in the development process at Volvo Car Corporation. At Volvo Cars, as in many other places in the automotive industry, there is a trend towards MBD. In MBD, software components are no longer handwritten in C or Assembler code but modeled with MATLAB/Simulink, Statemate, or similar tools [25]. In this thesis, the SUT is a hybrid system modeled in Simulink, representing an electric vehicle. The SUT will be introduced in more detail in Chapter 3.1.

There are many advantages of MBD. Firstly, graphical models and simulations of these models allow engineers to find a common understanding early in the design phase, thus improving the communication efficiency within development teams. Moreover, MBD also reduces time to market through component reuse, and reduces costs by simulating and testing systems before implementation [26].

1.3.3 Testing Activities in MBD

In the automotive industry, many applications under development are safety-critical. Thus testing of the systems developed at Volvo Cars is very important, since for many of these applications formal verification is insufficient because of the size of the systems and the existence of continuous dynamics.

Testing is performed at different integration levels. The following integration levels are distinguished [26]:

Model-in-the-Loop (MiL)

Model-in-the-loop testing, used in the automation industry, considers the problem of testing systems where all parts of the system are simulated by mathematical models [27].

MiL is the first integration level and is based on the model of the system itself. Testing on the MiL level means that the model and its environment are simulated without any hardware component. This allows testing at early stages of the development cycle.

Software-in-the-Loop (SiL)

Testing an embedded system on the SiL level means that the embedded software is tested within a simulated environment model but without any hardware.

Hardware-in-the-Loop (HiL)

Hardware-in-the-loop (HiL) simulation, or HWIL [28], is the next process in the testing cycle. HiL is a technique that is used in the development and test of complex real-time embedded systems. Running tests is expensive since tests are run in a real-time scenario, and it is therefore interesting to reduce the set of tests run to just the tests with the highest quality. HiL testing requires real-time behavior of the environment model to ensure that the communication with the ECU is the same as in the real application. However, the environment around the ECU is still a simulated one.

Test Rig

Testing in a test rig means that the embedded software runs on the ECU and the environment consists of physical components.

1.4 Specifications of the Thesis Work

1.4.1 Motivation

Meanwhile, a classical estimate relates up to 50% of the overall development cost to testing [30]. In the automotive industry, Original Equipment Manufacturers (OEMs), including Volvo Cars, have long realized this and spend up to 40% of their development budgets on testing-related activities [31].

It is of great interest to decrease the costs of testing activities. One possible way to achieve this is to execute test vectors with better qualities. The following explanations clarify why test vector qualities are interesting and important to evaluate:

• The later a bug is detected in the software development process, the more expensive it is to fix. It is therefore important to run good test vectors as early as possible.

• A large number of test vectors are generated using various tools for different purposes. It is inefficient to execute them all.

• An iterative process is needed, with a considerable number of interim releases of the integrated system. This means that the same test vectors need to be run repeatedly on different integration levels (including the SiL and MiL levels mentioned in Chapter 1.3.3).

• Between releases, there is usually limited time, personnel and other resources for test engineers. Thus, there is a need for a method to select a subset of the test vectors. A possible use of the subset would be to run it with higher priority when testing is performed.

1.4.2 Problem Statement

To achieve the goal of picking out better-quality test vectors, it is necessary to investigate the following questions:

• What does a test vector look like? How are test vectors generated in the automotive industry?

• What should the criteria be for selecting a good test vector?

• How can an efficient method be implemented to select test vectors with desirable properties?

• How can the quality of test vectors be quantified based on the previously defined criteria?

• How can it be validated and analyzed whether the evaluation result is reasonable or not?

1.4.3 Method

The thesis attempts to answer the questions proposed above. This is done in the following way:

1. Two criteria for good properties are proposed for a given application. The first criterion evaluates the test vectors' ability to reveal violations of system requirements. The second criterion takes into consideration whether a test vector is easy to analyze or not.

2. These criteria are implemented in Matlab.

3. Scores are generated as a means of quantifying test vectors based on how much they satisfy the implemented criteria.

4. To relate the robustness satisfaction of STL formulas (introduced in Chapter 2.2 with examples) to the violations of system requirements, the Simulink toolbox Breach is used when re-running test vectors to get the robustness values.

5. The robustness data acquired with the help of the Breach toolbox is analyzed and compared with the quantification result for criterion 1.

1.4.4 Overview of the Report

The rest of this report is structured as follows:

• In Chapter 2, three ways of generating test vectors in the Simulink environment are introduced, including the Breach toolbox. Though the Breach toolbox is not used to generate test vectors for the evaluation in Chapter 3, the definitions of STL and the robustness semantics introduced in Chapter 2 are essential for understanding the analyses and conclusions in Chapter 4 and Chapter 5.

• In Chapter 3, the reader gets an overview of the SUT and of how the test vectors are evaluated based on the proposed criteria.

• In Chapter 4, the use of the Breach toolbox is motivated. The method to acquire the robustness data for the analyses is also described.

• Chapter 5 contains the conclusions of the analysis results and summarizes the contributions and limitations of this project. In the end, future work that extends from this project is discussed.


2 Test Vector Generation in the Simulink Environment


2.1 Test Vectors for Automotive Applications

Test vectors provide signal values at each time stamp. The values of each signal are assumed to remain the same between one time stamp and the next. Test vectors can be stored in different formats to perform testing activities. Common formats include comma-separated values (.csv files) and Microsoft Excel workbooks (.xlsx files).
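As an illustration, the following Matlab sketch loads a stored test vector and plots one of its signals; the file and column names are hypothetical, and the staircase plot reflects the assumption that values are held between time stamps.

% A minimal sketch of loading a stored test vector (hypothetical names).
tv = readtable('test_vector_001.csv');   % columns: time plus one per signal
stairs(tv.time, tv.signal1);             % staircase plot reflects the hold
xlabel('time stamp'); ylabel('signal1');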

Test vectors can be generated manually, or automatically using tools that generate test vectors based on desired attributes and knowledge of the SUT.

2.2 System Requirements Guided Test Vector Generation

For safety-critical systems, system requirements (also called safety requirements or safety specifications) exist for checking whether the systems behave as expected. It is recorded whether these requirements are violated or not when executing test vectors.

2.2.1 Formal Specifications

System requirements can be expressed in different ways. Natural language, though convenient to use and straightforward to understand, is ambiguous and unreliable. It has been shown that utilizing formal specifications can lead to improved testing and verification quality [32].

digital circuits [35]. Temporal logic comes in linear-time and branching-time variants [36]. Here, only linear temporal logic is introduced.

2.2.2 Recursive Definition of STL

Signal temporal logic (STL) is an extension of linear temporal logic with real-time and real-valued constraints. STL allows the specification of temporal properties of real-valued signals, and has been applied to the analysis of hybrid dynamical systems. STL has the advantage of naturally admitting a quantitative semantics which, in addition to the binary answer to the question of satisfaction, provides a real number indicating a notion of distance to satisfaction or violation. The recursive definition of STL is as follows [25]:

ϕ ::= µ | ¬µ | ϕ ∧ ψ | ϕ ∨ ψ | □[a,b]ψ | ♦[a,b]ψ | ϕ U[a,b] ψ    (2.1)

Here, µ is a predicate whose value is determined by the sign of a function of a signal x, i.e., µ ≡ µ(x) > 0; ϕ and ψ are STL formulas; □[a,b] is the globally operator over the interval [a,b]; ♦[a,b] is the finally operator over the interval [a,b]; and U[a,b] is the until operator over the interval [a,b]. The boolean semantics of STL is defined recursively:

(x, t) |= µ ⇔ µ(x(t)) > 0    (2.2)
(x, t) |= ¬µ ⇔ ¬((x, t) |= µ)    (2.3)
(x, t) |= ϕ ∧ ψ ⇔ (x, t) |= ϕ ∧ (x, t) |= ψ    (2.4)
(x, t) |= ϕ ∨ ψ ⇔ (x, t) |= ϕ ∨ (x, t) |= ψ    (2.5)
(x, t) |= □[a,b]ϕ ⇔ ∀t′ ∈ [t+a, t+b], (x, t′) |= ϕ    (2.6)
(x, t) |= ♦[a,b]ϕ ⇔ ∃t′ ∈ [t+a, t+b], (x, t′) |= ϕ    (2.7)
(x, t) |= ϕ U[a,b] ψ ⇔ ∃t′ ∈ [t+a, t+b] s.t. (x, t′) |= ψ ∧ ∀t′′ ∈ [t, t′], (x, t′′) |= ϕ    (2.8)

2.2.3 Robustness Value of STL Formulas

Instead of checking the boolean satisfaction of an STL formula, the notion of a robust (or quantitative) semantics is defined to measure how far away a specification is from being satisfied. The robust satisfaction is a real-valued function ρ. The sign of ρ indicates whether ϕ is satisfied or not: positive means satisfied, negative means violated. Meanwhile, the magnitude of ρ indicates the margin by which ϕ is satisfied or violated. The robust semantics of STL is defined as follows [37]:

ρ(µ, x, t) = µ(x(t))    (2.9)
ρ(¬µ, x, t) = −µ(x(t))    (2.10)
ρ(ϕ ∧ ψ, x, t) = min(ρ(ϕ, x, t), ρ(ψ, x, t))    (2.11)
ρ(ϕ ∨ ψ, x, t) = max(ρ(ϕ, x, t), ρ(ψ, x, t))    (2.12)
ρ(□[a,b]ϕ, x, t) = min_{t′∈[t+a,t+b]} ρ(ϕ, x, t′)    (2.13)
ρ(♦[a,b]ϕ, x, t) = max_{t′∈[t+a,t+b]} ρ(ϕ, x, t′)    (2.14)
ρ(ϕ U[a,b] ψ, x, t) = max_{t′∈[t+a,t+b]} min(ρ(ψ, x, t′), min_{t′′∈[t,t′]} ρ(ϕ, x, t′′))    (2.15)

2.2.4 Examples of STL Formulas

Here, two examples from [38] are included to demonstrate that system requirements expressed in natural language can be translated into STL formulas.

Example 1: the first example comes from an automatic transmission system. In natural language, we have the specification: if the engine speed ω is always less than ω̄, then the vehicle speed v cannot exceed v̄ in less than T seconds. The STL expression of the same specification can be formed as:

φexp1 = ¬(♦[0,T](v > v̄) ∧ □(ω < ω̄))

Example 2: the second example is from a fuel control system. In natural language, we have the specification: the fuel flow rate should not be 0 for more than 1 second within the next 100 seconds. The corresponding STL formula is:

φexp2 = ¬♦[0,100]□[0,1](fuelFlowRate = 0)

2.2.5 An Example of an STL Formula and Its Robustness Values

In order to understand what "the distance to failure or robustness satisfaction of a requirement" means, it is essential to have a straightforward understanding of the robustness values of STL formulas. Here is an example: in natural language, we might have the following specification: the velocity v should always be below 100 during the simulation time [0,10] s.

This specification can be written in STL as:

ϕv = □[0,10](v < 100)    (2.16)

Figure 2.1: Velocity signal 1

In Figure 2.1, Figure 2.2 and Figure 2.3, three signals representing velocity over the time interval [0,10] s are plotted. By intuition, it is the velocity signal in Figure 2.3 that is closest to violating this requirement. This can be confirmed by calculating the robustness satisfaction values of this requirement for the three signals.

Firstly, the STL formula is obtained from the specification:

µ = (100 − x)    (2.17)

ϕv = □[0,10]µ = □[0,10](100 − x)    (2.18)

Figure 2.2: Velocity signal 2

Figure 2.3: Velocity signal 3

Then, by Equation (2.13), we have:

ρ(ϕv) = ρ(□[0,10]µ, v, t)
      = min_{t′∈[0,10]} ρ(µ, v, t′)
      = min_{t′∈[0,10]} µ(v(t′))    (2.19)

The robustness satisfaction for each velocity signal is:

ρ(ϕv1) = 50
ρ(ϕv2) = 20
ρ(ϕv3) = 8

Positive values mean the specification is not violated, while the magnitude of the value indicates how far it is from being violated. Here it is verified that signal v3 is the closest to violating ϕv.
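For a sampled signal, this robustness value can be computed directly, without any toolbox. The following Matlab sketch mirrors Equation (2.19) for ϕv; the velocity signal used here is hypothetical.

% Robustness of phi_v = globally_[0,10] (v < 100) on a sampled signal:
% rho = min over all samples of mu(v(t)) = 100 - v(t), as in Equation (2.19).
t   = 0:0.1:10;                      % time stamps over [0,10] s
v   = 80 + 12*sin(2*pi*0.2*t);       % hypothetical velocity signal
mu  = 100 - v;                       % predicate function mu(x) = 100 - x
rho = min(mu);                       % robustness of the globally operator
fprintf('rho(phi_v) = %.2f\n', rho); % positive: satisfied with margin rho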

2.2.6 Generating Test Vectors Using the Breach Toolbox

Breach [39] is a Matlab/C++ toolbox. A major novel feature of Breach is the robust monitoring of STL formulas.

This feature is very helpful and can be utilized to guide the generation of test vectors in the hope of falsifying system requirements via a falsification procedure. This is done by defining the requirements using STL, which then gives an objective function for an optimization problem using the robustness semantics introduced in Chapter 2.2.3.

Figure 2.4 describes the main falsification procedure of Breach [40].

Input generators

Figure 2.4: A flowchart describing the main falsification procedure [40].

It is desirable to parametrize the input signals with the fewest feasible number of parameters to keep the optimization problem as simple as possible.

Figure 2.5: Two different kinds of parametrized input signals. a) shows a constant signal where the parameter is the value of the signal. b) shows a sine wave where the parameters are the amplitude and the angular frequency [40].

For the sinusoidal signal, the parameters are the amplitude and the angular frequency of the sinusoidal wave. In this example, for the first signal, we have k = 1, k = 3 and k = 6 for the amplitude. For the second signal, we have k1 = [1, 2, 0.7] for the amplitude A and k2 = [5, 1, 3] for the angular frequency ω.
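The two parametrizations can be sketched in Matlab as follows; the function handles are illustrative, not part of Breach.

% Sketch of the two parametrized input signals in Figure 2.5.
t = linspace(0, 10, 1001);
u_const = @(k)    k * ones(size(t));  % constant signal, one parameter k
u_sine  = @(A, w) A * sin(w * t);     % sinusoid, parameters A and omega
u1 = u_const(3);                      % k = 3
u2 = u_sine(0.7, 3);                  % A = 0.7, omega = 3
plot(t, u1, t, u2);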

When generating input signals to simulate the system for the first time, the Generator takes values for the parameters from a parameter set in a random way.

Also, these two examples clarify the fact that, in order to keep the number of parameters as small as possible, it is necessary to have some insight into how to characterize the signals.

Simulation

With a SUT S(t) available in Simulink and an input u(t) generated by the Generator, the Simulator is able to generate a simulation trace. The simulation trace is used together with the requirement ϕ to evaluate the robustness function ρ for ϕ.

After calculating the robustness function ρ, it is evaluated whether ρ is negative (which means falsified) or not.

Since the goal of generating test vectors here is to falsify the system requirement, the falsification procedure stops if ρ is already negative: the current input (test vector) is able to falsify the requirement ϕ.

Parameter optimization

When the objective function ρ is positive, the information is fed to the Parameter optimizer.


For the first constant signal in Figure 2.5, it is a one-dimensional optimization problem while the sinusoidal signal will result in a two-dimensional optimization problem.

In this way, Breach can generate new inputs u(t) to simulate the SUT, driving the robustness function ρ towards falsification.
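The loop in Figure 2.4 can be summarized by the following Matlab sketch; generator, simulate_sut, stl_robustness and optimizer_step are hypothetical stand-ins for the Generator, the Simulator, the robustness monitor and the Parameter optimizer, not Breach functions.

% A minimal sketch of the falsification loop in Figure 2.4.
k = k0;                             % initial parameter guess
for iter = 1:maxIter
    u   = generator(k, t);          % build input signal from parameters
    y   = simulate_sut(u, t);       % simulate the SUT
    rho = stl_robustness(y, t);     % evaluate the objective function
    if rho < 0                      % negative robustness: falsified
        fprintf('Requirement falsified at iteration %d\n', iter);
        break;
    end
    k = optimizer_step(k, rho);     % propose new parameter values
end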

Similar tools include S-TaLiRo. S-TaLiRo and Breach both support SUTs implemented as Simulink/Stateflow models. One main difference is that S-TaLiRo uses another formalism called Metric Temporal Logic (MTL) [41] instead of STL.

2.3 Generating Test Vectors with Testweaver

Testweaver [42] can generate test vectors in a reactive, informed way, which means that Testweaver learns the behavior of the SUT from the results of past simulations for the purpose of increasing state coverage.

Here is a simple example to better understand state coverage: consider a SUT with two signals, x1 and x2. Let x1 be a boolean signal and x2 be an enumeration with 5 possible values. Full state coverage of these two signals is the cross product of the user-defined signal ranges, which means that there are 2 × 5 = 10 states that Testweaver will try to cover by generating different test vectors for the SUT.
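The full state space in this example can be enumerated with a few lines of Matlab:

% The state space is the cross product of the user-defined signal ranges.
x1_values = [0 1];                 % boolean signal
x2_values = 1:5;                   % enumeration with 5 possible values
[X1, X2]  = ndgrid(x1_values, x2_values);
states    = [X1(:) X2(:)];         % one row per state
disp(size(states, 1));             % prints 10, the 2 x 5 states to cover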


Testweaver supports development environments including MATLAB/Simulink. To use Testweaver, the user can provide Python scripts containing integration information between Testweaver and the SUT in MATLAB/Simulink. For example, the user needs to define parameters for Testweaver to generate test vectors. These parameters include but are not limited to the signal names, minimum and maximum values, change rate, and occurrence rate (how often a signal occurs in the test vectors that are generated).

2.4 Manually Generated Test Vectors

Testers generate test vectors manually to test specific situations of interest. These situations can be simulated with a single test vector or a small set of test vectors.

For example, a tester might ask himself: "If I push down on the gas pedal, does the automatic gearbox shift up to the maximum gear?"

Then he can produce a test vector where the throttle is at MAX for 10 seconds to simulate the situation "the automatic gearbox shifts up to the maximum gear" on the SUT. This is an example of how a manual test vector is produced.
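Such a test vector could be written out as follows; the signal name, resolution and MAX value are hypothetical.

% Sketch of storing the manual test vector from the example above.
t        = (0:0.01:10)';           % 10 s of time stamps at 10 ms resolution
throttle = 100 * ones(size(t));    % throttle held at MAX (here, 100 %)
writetable(table(t, throttle), 'manual_max_gear.csv');  % store as .csv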


2.5 Summary

To sum up, Breach can generate test vectors in the hope of leading to violations of system specifications. However, it is difficult to know how to parametrize signals for a large-scale SUT. Also, the Breach toolbox is a relatively new toolbox that at present is mostly used in academia for research purposes. Given more time, it is very likely that the Breach toolbox is going to play a more important role in the MBD process in the automotive industry.

Testweaver can produce a large number of test vectors with a light workload for the tester. However, it is not clear how to compare the qualities of this large number of test vectors.


3 Use Case

The third chapter, Use Case, includes the implementation work for a use case at Volvo Car Corporation.

In Chapter 3.1.1, the reader can find an overview of the Simulink model of the SUT and what application it represents. Chapter 3.1.2 introduces the subsystems in more detail and explains how they interact with each other. Chapter 3.1.3 includes a system requirement modeled in Simulink as part of the whole model of the SUT.


3.1 The System Under Test

3.1.1 The Structure of the SUT

The SUT is a Simulink model of a hybrid vehicle that can utilize both electricity and fuel to achieve propulsion. As Figure 3.1 shows, the model includes the following parts:

Figure 3.1: The Simulink model under test

• The Plant subsystem contains:

– an electric motor.

– a gear box.

– a high voltage DC battery.

• The Controller subsystem contains the software part controlling the plant. A high voltage direct current is transformed into a torque output.

• The System specification evaluation subsystem contains system requirements modeled as Simulink subsystems.

3.1.2 General Introduction to the SUT

The inputs to the SUT are test vectors generated in different ways. The outputs of the SUT are the satisfaction signals from the System specification evaluation subsystem. The system requirements take the output signals from other subsystems of the SUT and are evaluated during the simulation.

In general, the subsystems in the SUT interact with each other as follows: the test vectors provide reference values for the Controller subsystem. Along with the output feedback signals from the Plant subsystem, the Controller subsystem is able to calculate control signals as its output. The Plant subsystem receives the control signals and is controlled by them. The System specification evaluation subsystem receives signals from the rest of the SUT to evaluate the system requirements.


3.1.3 An Example of a System Requirement

Figure 3.2 is an example of these system requirements. This requirement monitors the values of x and xref after simulating the SUT with a test vector in the Simulink environment. The SUT should behave in such a way that the absolute value of their difference is always less than or equal to a constant tol. When this specification is violated, the satisfaction signal for this requirement, req_sat, becomes 0, indicating that the requirement has been violated when this test vector is simulated.

Figure 3.2: An example of system requirements expressed in Simulink models
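On logged simulation data, the check in Figure 3.2 can be sketched in Matlab as follows, with x, xref and tol as in the figure:

% Sketch of the requirement check on logged signals.
err       = abs(x - xref);         % absolute difference of the two signals
req_sat_t = (err <= tol);          % satisfaction signal over time, 0 = violated
violated  = any(~req_sat_t);       % overall verdict for this test vector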

3.2 Set-Up for Evaluation

3.3 Criterion 1

3.3.1 Description of Criterion 1

Since the goal of testing here is to increase design confidence by revealing system faults, the first criterion for a good test vector is taken to be its ability to cause violations of system safety requirements.

Criterion 1 is implemented in two steps.

3.3.2 First Step to Implement Criterion 1

The first step is to interview a tester who has sufficient experience working with this model. The goal of the interview is to get some insight into what characteristics a test vector should have so that it tends to cause more violations of system safety requirements.

Table 3.1 shows the result of the interview. From the tester's point of view, there are in total three characteristics that make a test vector likely to violate some of the 68 system safety requirements.

Although it is hard to get any straightforward insight into why these characteristics are important, learning from an experienced tester of this SUT is the most reasonable starting point at present.

Index             Desired Signal Behavior
characteristic 1  signal1 == value1
characteristic 2  signal2 != value2 for 100 consecutive ms
characteristic 3  signal3 == value3


As is briefly mentioned in Chapter 1.2.2, test vectors can be regarded as series of signal values over time. Thus test vectors can conveniently be stored as matrices and processed in Matlab. The following 4 × n submatrix (n is the number of time stamps) is the part of the original test vector matrix that holds the signal values relevant to the three characteristics above (the values of signal1, signal2 and signal3) together with the time series. The first three rows correspond to the values of the three signals, while the 4th row corresponds to the time stamps.

signal1value_1   signal1value_2   ···   signal1value_n
signal2value_1   signal2value_2   ···   signal2value_n
signal3value_1   signal3value_2   ···   signal3value_n
t_1              t_2              ···   t_n

Algorithm 1 below clarifies how to evaluate characteristic 2.

Algorithm 1: How to check characteristic 2 for a test vector

Result: For each time stamp, whether this test vector holds characteristic 2 from Table 3.1 or not.

i is the index for addressing the array of time stamps, initialized to 1;
t_i is the ith element of the array of time stamps;
consecTime_i is the length of the consecutive period, ending at t_i, during which signal2 has not been equal to value2;
r_i is the result of checking whether the test vector holds characteristic 2 at time stamp t_i;

while i ≤ the number of time stamps do
    calculate consecTime_i;
    % the calculation needs the value of t_i and other relevant data from previous time stamps
    if consecTime_i ≥ 100 then
        r_i = 1;
        % 1 means that this test vector holds characteristic 2 at the ith time stamp
    else
        r_i = 0;
        % 0 means that this test vector does not hold characteristic 2 at the ith time stamp
    end
    i++;
end
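A Matlab version of Algorithm 1 could look as follows, assuming the 4 × n matrix above (with time stamps in ms) and a given value2:

% Matlab sketch of Algorithm 1.
t       = tv(4, :);                % time stamps in ms
signal2 = tv(2, :);
n = numel(t);
r = zeros(1, n);                   % r(i) = 1 if characteristic 2 holds at t(i)
consecTime = 0;                    % length of the current streak in ms
for i = 2:n
    if signal2(i) ~= value2
        consecTime = consecTime + (t(i) - t(i-1));  % extend the streak
    else
        consecTime = 0;                             % streak broken
    end
    r(i) = (consecTime >= 100);    % 100 consecutive ms threshold
end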


3.3.3 Second Step to Implement Criterion 1

Secondly, as mentioned in Chapter 1.4.1, the development of the SUT is an iterative process. Between releases there are minor modifications to the SUT, so it is reasonable to assume that if a test vector caused violations on the former release of the SUT, it is likely to cause the same violations on the current one.

A report is generated as the result of the testing activities. It records information about each test vector in a human-readable format: the test result (with one of the verdicts passed/success, failed, or unknown), curves of relevant signals, data tables, as well as customizable comments that illustrate the evaluated results.

By analyzing these reports for the former release, it is clear for every test vector which system requirements were violated during its execution.

3.3.4 Quantification Based on Criterion 1

Scores are assigned as follows for each test vector based on criterion 1.

1. For the characteristics in Table 3.1:

• If the test vector holds all three characteristics: 20 scores.

• If the test vector holds only characteristic 1 (regardless of characteristic 2 or characteristic 3): 10 scores.

• If the test vector holds characteristic 2 and characteristic 3 at the same time but not characteristic 1: 5 scores.

• Otherwise: 0 scores.

The way the scores are assigned here also comes from the advice of the tester interviewed about those three characteristics.

2. For the violations of system requirements on a former release:

• Every time a test vector caused one violation of a system requirement, this test vector gets 17 scores.

After these two steps, the evaluation result of a test vector for criterion 1 is obtained by summing up the scores it gets from the two parts. For example, a test vector that holds all three characteristics and has violated two system requirements ends up with a score of 20 + 17 × 2 = 54 based on criterion 1.
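The scoring can be summarized by the following Matlab sketch; the input flags and violation count are assumed to have been computed beforehand.

% Criterion-1 score: c1..c3 say whether the test vector holds each
% characteristic; nViol is the number of requirement violations recorded
% on the former release.
function score = criterion1Score(c1, c2, c3, nViol)
    if c1 && c2 && c3
        score = 20;                % all three characteristics
    elseif c1
        score = 10;                % characteristic 1, regardless of 2 and 3
    elseif c2 && c3
        score = 5;                 % characteristics 2 and 3 without 1
    else
        score = 0;
    end
    score = score + 17 * nViol;    % 17 scores per violated requirement
end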

3.3.5 The Evaluation Result for Criterion 1

After calculating scores in the way described in Chapter 3.3.4, a score is obtained for each test vector.

Table 3.2 is a summary of the quantification result. The table shows the distribution of the resulting scores over all test vectors: each possible score and the number of test vectors with that score.

Score                    0     5     10    20    27    37    44    54
Number of test vectors   417   5     585   249   84    15    3     2

Table 3.2: Summary of the quantification result for criterion 1


Figure 3.3: The histogram of Table 3.2

It can be seen that there are 8 bins in the histogram, corresponding to the 8 evaluation results (scores) for criterion 1 in Table 3.2. The 1361 test vectors that have been evaluated and quantified can thus be classified into 8 groups based on their quantification results.


3.4 Extension to Criterion 2

3.4.1 The Need for a Second Criterion

Still, it can be difficult when a tester's time is limited and he wants to test, for instance, the 200 best test vectors (in this case, the 200 test vectors with the highest scores). In Figure 3.3, it is clear that many test vectors end up with the same score due to the way the score is generated in Chapter 3.3.4.

As a result, with criterion 1 only, it is not possible to tell which test vector is better when several of them have the same score.

For example, in total 249 test vectors have the score 20. This means that they hold all three characteristics at some time stamp while they did not cause violations of system requirements when executed on the former release of the SUT. As a result, it is still not clear how to compare their qualities when only criterion 1 is available. To help with this problem, more criteria are needed to better differentiate the qualities of the test vectors.

3.4.2 The Idea of a Second Criterion

Looking at the larger picture of the iterative development process for the SUT in MBD, many engineers, developers and testers are involved and cooperate with each other.


is violated. In this case, a constant signal is easier to analyze than a signal that varies very often. This is the idea of the second criterion.

Taking the role of other engineers into consideration, it is reasonable to propose a second criterion: supposing there exists a signal of interest that is traced by engineers when analyzing the SUT, the less this signal varies, the better the quality of the test vector.

3.4.3 Implementation of Criterion 2

To implement criterion 2, there needs to be concrete information regarding which signal is of interest to analyze. Unfortunately, in this use case, this kind of information is not available in practice at present.

For implementation purposes, a signal that appears in every test vector (most other signals are less common and appear only in some of the test vectors) is taken as an example. A second score can then be generated based on this assumption.

To demonstrate the implementation of criterion 1 and criterion 2 together, every time the example signal for criterion 2 varies, a score of 0.1 is subtracted from the score the test vector obtained for criterion 1 in Chapter 3.3.4.
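In Matlab, the penalty can be sketched as follows; sigOfInterest stands for the assumed example signal.

% Criterion-2 penalty on top of the criterion-1 score.
nChanges = nnz(diff(sigOfInterest) ~= 0);  % how often the signal varies
score2   = score1 - 0.1 * nChanges;        % subtract 0.1 per change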

3.4.4 Quantification Result of Criterion 2


Figure 3.4: Quantification result for criterion 1 and criterion 2


3.5 Summary

In this chapter, a large number of test vectors are evaluated and quantified based on the two criteria proposed in this chapter. The result of the evaluation is a score based on how well the test vectors satisfy these criteria. The higher the score, the better the quality of the test vector in this case.


4 Analyzing Experiment Using the Breach Toolbox

After proposing and implementing the two criteria in Chapter 3, there needs to be some way to help analyze whether the criteria and the quantification results are of interest or not.


4.1 The Reason Why the Breach Toolbox is an Appropriate Tool

As is addressed in Chapter 3.3, a higher score for criterion 1 means it is likely that more system requirements are violated when a test vector with that score is simulated on the SUT.

A robustness value for an STL formula translated from a system requirement indicates how far this requirement is from being violated (see Chapter 2.2.1 to Chapter 2.2.5 for semantics and examples). By summing up the robustness values of all STL formulas representing the system requirements when simulating a test vector, the result can also reveal, from some point of view, the test vector's ability to violate system requirements.

Since the Breach toolbox is capable of robust monitoring of STL formulas for hybrid systems in the Simulink environment, it is a suitable tool here.

4.2 Experiment Set-Up

4.2.1 Purpose of Using the Breach Toolbox

After evaluating the test vectors with criterion 1, every test vector has a score as its quantification result, as shown in Table 3.2 and Figure 3.3.

From Table 3.2 we can see that for the score 27 there are in total 84 test vectors. With only criterion 1 they cannot be differentiated, so we randomly pick 40 of the 84 test vectors with score 27. This does not affect the experiment using Breach.

4.2.2 Preparations

System requirements can be expressed in different ways. In the Simulink environment, the system requirements are modeled as sub-blocks in the System specification evaluation subsystem shown in Figure 3.1.

In order to use the Breach toolbox, it is necessary to translate the system requirements modeled in the Simulink environment into STL formulas so that Breach is able to calculate robustness values for them. In practice, this is also done in Matlab: scripts read each system requirement in the System specification evaluation subsystem of the SUT and translate it into an STL formula based on the semantics.

For instance, the system requirement taken as an example in Chapter 3.1.3 can be translated into the following STL formula:

ϕ := □[ti,tf](|x[t] − xref[t]| ≤ tol)

Here, ti and tf denote the initial and the final time stamp; together, [ti, tf] represents the time period over which Breach will monitor the robustness value of the formula.
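As an illustration, such a formula might be declared in Breach roughly as follows; this assumes Breach's STL_Formula constructor and its alw_[a,b] syntax, and the property string (signal names and interval bounds) is illustrative only.

% Sketch of declaring the translated requirement in Breach.
phi = STL_Formula('phi', 'alw_[ti,tf] (abs(x[t] - xref[t]) <= tol)');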

4.2.3 Implementation of the Breach Toolbox

Chapter 2.2.6 introduced how the Breach toolbox can generate test vectors towards the goal of violating system requirements through the falsification procedure presented in Figure 2.4.

Here, it should be noted that the process of using Breach to get robustness values is simpler than the falsification procedure mentioned above. This is because we already have the test vectors to run as inputs to the SUT; the only purpose here is to get the robustness values of the STL formulas when simulating those test vectors.

Figure 4.1 describes the implementation of the Breach toolbox: the test vectors are directly used as inputs to the SUT, and Breach calculates the robustness values for all STL formulas expressing system requirements. These STL formulas are translated from the 68 system requirements (mentioned in Chapter 3.1.2). As outputs, the robustness values of each STL formula are obtained after re-running every test vector.

Figure 4.1: A flowchart describing the procedure using Breach to get robustness values for system requirements

4.2.4 Outputs of the Breach Toolbox

A few of the selected test vectors could not be re-run with the Breach toolbox. The reason has not been analyzed further, and it does not affect the processing of the data for the other test vectors.

The data is stored in a matrix of size 68 × 56. In this matrix, element ρ_{i,j} represents the robustness satisfaction value of the ith system requirement when re-running the jth test vector.

ρ_{1,1}    ρ_{1,2}    ···   ρ_{1,56}
ρ_{2,1}    ρ_{2,2}    ···   ρ_{2,56}
⋮          ⋮          ⋱     ⋮
ρ_{68,1}   ρ_{68,2}   ···   ρ_{68,56}

The data in this matrix will be used for validation and analyses later.

4.3 Standardization of the Original Robustness Values

With the example of robustness values of STL formulas in Chapter 2.2.5, the reader can get a straightforward understanding that the original robustness satisfaction values depend heavily on the exact ranges and numeric values of the signals related to an STL formula.

Therefore, the original robustness values need to be standardized before summing them up.

Algorithm 2: How to standardize the original robustness satisfaction values

Result: A standardized robustness value ρ′, within the range [−1, 1], for each original value ρ.

i is the index addressing the 68 different STL formulas;
j is the index addressing the 56 test vectors;
ρ_posMax is the maximum value among all positive robustness values for a given STL formula;
ρ_negMin is the minimum value among all negative robustness values for a given STL formula;
ρ_i refers to the ith row of the original matrix;

for ( i = 1; i <= 68; i = i + 1 ) {
    ρ_posMax = MaxPositiveValue(ρ_i);
    % find the maximum positive robustness value among the 56 test vectors for the ith system requirement
    ρ_negMin = MinNegativeValue(ρ_i);
    % find the minimum negative robustness value among the 56 test vectors for the ith system requirement
    for ( j = 1; j <= 56; j = j + 1 ) {
        if ρ_{i,j} > 0 then
            ρ′_{i,j} = ρ_{i,j} / ρ_posMax
        else
            ρ′_{i,j} = ρ_{i,j} / |ρ_negMin|
        end
    }
}
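In Matlab, Algorithm 2 can be sketched as follows; rho is the 68 × 56 matrix of original values, and each row is assumed to contain both positive and negative entries (guards for empty cases are omitted).

% Matlab sketch of Algorithm 2.
rhoStd = zeros(size(rho));
for i = 1:size(rho, 1)
    rowPosMax = max(rho(i, rho(i,:) > 0));       % largest positive value
    rowNegMin = min(rho(i, rho(i,:) < 0));       % most negative value
    for j = 1:size(rho, 2)
        if rho(i,j) > 0
            rhoStd(i,j) = rho(i,j) / rowPosMax;
        else
            rhoStd(i,j) = rho(i,j) / abs(rowNegMin);  % keeps the sign
        end
    end
end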


ρ′_{1,1}    ρ′_{1,2}    ···   ρ′_{1,56}
ρ′_{2,1}    ρ′_{2,2}    ···   ρ′_{2,56}
⋮           ⋮           ⋱     ⋮
ρ′_{68,1}   ρ′_{68,2}   ···   ρ′_{68,56}

4.4 Validation and Analyses

With the data available, it can be put to use from two perspectives, discussed in Chapter 4.4.1 and Chapter 4.4.3 respectively. For comparison, only the results for criterion 1 are considered here, since criterion 2 has no connection to the STL semantics.

4.4.1 Validating the Quantification Result for Criterion 1

In Chapter 3, criterion 1 evaluates a test vector's ability to cause violations of system requirements. Since positive robustness values mean a requirement is not violated and only negative values mean it is violated, the positive values are replaced with 0 in this matrix, because only violations of requirements should be considered.

After replacing the positive values with 0, the sum of the robustness values for each test vector is calculated by summing up the elements of each column:

sumOfRobustnessValues_j = Σ_{i=1}^{68} ρ′_{i,j}
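With the standardized matrix from Chapter 4.3 in Matlab, this amounts to two lines:

% Keep only violations (negative values), then sum per test vector.
rhoNeg = min(rhoStd, 0);           % replace positive entries with 0
sumRob = sum(rhoNeg, 1);           % 1 x 56 row: one sum per test vector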

The sums are then compared with the quantification result for criterion 1.

As Figure 4.2 shows, the horizontal axis is the rank of the test vectors, from 1st to 56th, based on their scores for criterion 1. The blue circles above represent the scores for criterion 1; the green circles below are the sums of robustness values for each test vector.

Figure 4.2: A comparison between the quantification result of criterion 1 and the sum of negative robustness values.

4.4.2 Discussions about Validating Criterion 1

From Figure 4.2, it does not seem that test vectors with higher scores for criterion 1 tend to have a lower sum of robustness values over all system requirements (since negative robustness values mean violations, a lower sum indicates more violations). There seems to be no clear relation between the scores and the sums.

the satisfaction signal: 1 means the requirement is violated, 0 means not violated. The implementation does not contain information regarding how far a requirement is from being violated. But, as mentioned in Chapter 4.1, it is reasonable to experiment in this way and make a comparison.

Moreover, the SUT is of considerable size, so the tester's experiences with the SUT are not likely to have an obvious influence. Also, the test vectors were executed twice, on two different releases of the SUT, and it is uncertain what kinds of modifications and updates the SUT has undergone between the releases.

4.4.3 Correlation Analyses

In Chapter 4.4.1, in order to remove the influence of system requirements that are not violated, all the positive robustness satisfaction values were replaced with 0.


5 Conclusions and Future Work


5.1 Contributions

The contributions of the thesis work are listed as follows:

• Two possible criteria are proposed for evaluating the qualities of test vectors.

• An implementation has been done on a given model as a use case. With the implementation, the goal of picking out a subset from a large number of test vectors can be accomplished.

• The semantics of robustness satisfaction of STL formulas is utilized to help analyze the quantification result. This is a new method that can be further investigated: Volvo Cars is now planning to apply the Breach toolbox in daily work.

5.2 Limitations and Discussions

One obvious limitation of this thesis project is that the scores, as the quantification result for evaluating test vectors, are generated in a rather subjective way. However, one thing to pay attention to is that, within this report, it is not the goal to accurately define and quantify the qualities, nor is that practical at present.

Nevertheless, this thesis project can be regarded as an attempt at proposing possible criteria for projects with similar goals to consider.


5.3 Future Work

Here are several interesting questions derived from this project. Some of them might lead to future project topics.

• What can be other possible criteria for good test vector quality?

• Is it possible to define the criteria so that there is no need for input from a tester?

• What conclusions can be drawn from comparing the qualities of automatically generated test vectors and manually generated test vectors?

• Is it possible to find a way or a tool to generate test vectors (including but not limited to the methods mentioned in Chapter 2) so that there is no need to use test vectors generated in other ways?

References

[1] Gino Labinaz, Mohamed M Bayoumi, and Karen Rudie. Modeling and control of hybrid systems: A survey. IFAC Proceedings Volumes, 29(1):4718–4729, 1996.

[2] Karl Henrik Johansson, John Lygeros, and Shankar Sastry. Modeling of hybrid systems. 2004.

[3] Arend Aerts, Mohammad Reza Mousavi, and Michel Reniers. A tool prototype for model-based testing of cyber-physical systems. In International Colloquium on Theoretical Aspects of Computing, pages 563–572. Springer, 2015.

[4] Claire J Tomlin, Ian Mitchell, Alexandre M Bayen, and Meeko Oishi. Computational techniques for the verification of hybrid systems. Proceedings of the IEEE, 91(7):986–1001, 2003.

[5] Stefan Schupp, Erika Ábrahám, Xin Chen, Ibtissem Ben Makhlouf, Goran Frehse, Sriram Sankaranarayanan, and Stefan Kowalewski. Current challenges in the verification of hybrid systems. In International Workshop on Design, Modeling, and Evaluation of Cyber Physical Systems, pages 8–24. Springer, 2015.

[6] Latéfa Ghomri and Hassane Alla. Modeling and analysis of hybrid dynamic systems using hybrid petri nets.

[7] DS Manolakos, GS Papadakis, DS Papantonis, and S Kyritsis. A simulation-optimisation programme for designing hybrid energy systems for supplying electricity and fresh water through desalination to remote areas: case study: the Merssini village, Donoussa island, Aegean Sea, Greece. Energy, 26(7):679–704, 2001.

[8] Rajeev Alur, Costas Courcoubetis, Thomas A Henzinger, and Pei-Hsin Ho. Hybrid automata: An algorithmic approach to the specification and verification of hybrid systems. In Hybrid Systems, pages 209–229. Springer, 1993.

[9] Alongkrit Chutinan and Bruce H Krogh. Computational techniques for hybrid system verification. IEEE Transactions on Automatic Control, 48(1):64–75, 2003.

[10] Xavier Nicollin, Alfredo Olivero, Joseph Sifakis, and Sergio Yovine. An approach to the description and analysis of hybrid systems. Hybrid Systems, pages 149–178, 1993.

[11] MATLAB Simulink. The MathWorks, Natick, MA, 1993.

[12] Peter Fritzson and Peter Bunus. Modelica: a general object-oriented language for continuous and discrete-event system modeling and simulation. In Simulation Symposium, 2002. Proceedings. 35th Annual, pages 365–380. IEEE, 2002.

[13] Thomas A Henzinger, Pei-Hsin Ho, and Howard Wong-Toi. HyTech: A model checker for hybrid systems. In International Conference on Computer Aided Verification, pages 460–463. Springer, 1997.

[14] Kim G Larsen, Paul Pettersson, and Wang Yi. Uppaal in a nutshell. International Journal on Software Tools for Technology Transfer (STTT), 1(1):134–152, 1997.

[16] Goran Frehse, Colas Le Guernic, Alexandre Donzé, Scott Cotton, Rajarshi Ray, Olivier Lebeltel, Rodolfo Ripado, Antoine Girard, Thao Dang, and Oded Maler. SpaceEx: Scalable verification of hybrid systems. In Computer Aided Verification, pages 379–395. Springer, 2011.

[17] Edmund M Clarke and Jeannette M Wing. Formal methods: State of the art and future directions. ACM Computing Surveys (CSUR), 28(4):626–643, 1996.

[18] Thi Xuan Thao Dang. Verification and synthesis of hybrid systems. PhD thesis, Institut National Polytechnique de Grenoble-INPG, 2000.

[19] Panos J Antsaklis. A brief introduction to the theory and applications of hybrid systems. In Proc. IEEE, Special Issue on Hybrid Systems: Theory and Applications, pages 563–572. Citeseer, 2000.

[20] Herman Casier. Trends in automotive electronics. In Thermal and Mechanical Simulation and Experiments in Microelectronics and Microsystems, 2004. EuroSimE 2004. Proceedings of the 5th International Conference on, page 7. IEEE, 2004.

[21] Klaus Lamberg, Michael Beine, Mario Eschmann, Rainer Otterbach, Mirko Conrad, and Ines Fey. Model-based testing of embedded automotive software using MTest. Technical report, SAE Technical Paper, 2004.

[22] Mark Utting, Alexander Pretschner, and Bruno Legeard. A taxonomy of model-based testing. 2006.

[23] A Agung Julius, Georgios E Fainekos, Madhukar Anand, Insup Lee, and George J Pappas. Robust test generation and coverage for hybrid systems. In HSCC, volume 4416, pages 329–342. Springer, 2007.

[25] Vasumathi Raman, Alexandre Donzé, Mehdi Maasoumy, Richard M Murray, Alberto Sangiovanni-Vincentelli, and Sanjit A Seshia. Model predictive control with signal temporal logic specifications. In Decision and Control (CDC), 2014 IEEE 53rd Annual Conference on, pages 81–87. IEEE, 2014.

[26] Eckard Bringmann and Andreas Krämer. Model-based testing of automotive systems. In Software Testing, Verification, and Validation, 2008 1st International Conference on, pages 485–493. IEEE, 2008.

[27] Andrew R Plummer. Model-in-the-loop testing. Proceedings of the Institution of Mechanical Engineers, Part I: Journal of Systems and Control Engineering, 220(3):183–199, 2006.

[28] R Isermann, J Schaffnit, and S Sinsel. Hardware-in-the-loop simulation for the design and testing of engine-control systems. Control Engineering Practice, 7(5):643–653, 1999.

[29] Manfred Broy and Ernst Denert. Software pioneers: contributions to software engineering. Springer Science & Business Media, 2012.

[30] Yibo Chen, Dimin Niu, Yuan Xie, and Krishnendu Chakrabarty. Cost-effective integration of three-dimensional (3D) ICs emphasizing testing cost analysis. In Proceedings of the International Conference on Computer-Aided Design, pages 471–476. IEEE Press, 2010.

[31] David Haugh, Annabelle Mourougane, and Olivier Chatal. The automobile industry in and beyond the crisis. 2010.

[32] Georgios E Fainekos, Sriram Sankaranarayanan, Koichi Ueda, and Hakan Yazarel. Verification of automotive control applications using S-TaLiRo. In American Control Conference (ACC), 2012, pages 3567–3572. IEEE, 2012.

[34] Amir Pnueli. The temporal logic of programs. In Foundations of Computer Science, 1977., 18th Annual Symposium on, pages 46–57. IEEE, 1977.

[35] Oded Maler and Dejan Nickovic. Monitoring temporal properties of continuous signals. In FORMATS/FTRTFT, volume 3253, pages 152–166. Springer, 2004.

[36] E Allen Emerson and Joseph Y Halpern. "Sometimes" and "not never" revisited: on branching versus linear time temporal logic. Journal of the ACM (JACM), 33(1):151–178, 1986.

[37] Robert B Cleveland, William S Cleveland, and Irma Terpenning. STL: A seasonal-trend decomposition procedure based on loess. Journal of Official Statistics, 6(1):3, 1990.

[38] Bardh Hoxha, Houssam Abbas, and Georgios Fainekos. Benchmarks for temporal logic requirements for automotive systems. Proc. of Applied Verification for Continuous and Hybrid Systems, 2014.

[39] Alexandre Donzé. Breach, a toolbox for verification and parameter synthesis of hybrid systems. In CAV, volume 10, pages 167–170. Springer, 2010.

[40] Johan Eddeland, Sajed Miremadi, Martin Fabian, and Knut Åkesson. Objective functions for falsification of signal temporal logic properties in cyber-physical systems. In Conference on Automation Science and Engineering, Xi'an, China, 2017.

[41] Ron Koymans. Specifying real-time properties with metric temporal logic. Real-Time Systems, 2(4):255–299, 1990.
