Model Driven Software Verification and Validation


Lars Strandén SP, Josef Nilsson SP, Henrik Lönn Volvo Technology

SP Technical Research Institute of Sweden
Electronics
SP Report 2007:15


Abstract

Model Driven Software Verification and Validation

This report describes and classifies modelling from a top-down perspective. The intention is to provide a framework that could be further instantiated when considering specific domains and applications. The focus is on requirement specification, verification and validation for model driven (model based) development of software. An important part of the report is the classification of different types of models.

In this report an evaluation is made of modelling with respect to general development processes such as the V-model, iterative and incremental development, extreme programming etc., and with respect to established standards such as IEC 61508, EN 50128 etc. An overview of modelling support is given concerning notations, e.g. SDL, UML, SysML, MOF, and tools for application design and verification. In order to show the relevance of the framework, an evaluation of the architecture language EAST ADL, developed in the EAST-EEA project, is included as a case study. There is also a second case study which contains experiences from using models for a specific application (ABS brake). The case studies are evaluated using the general results of the report.

Key words: automotive electronics, dependable systems, safety, reliability, validation

SP Sveriges Tekniska Forskningsinstitut
SP Technical Research Institute of Sweden

SP Report 2007:15
ISBN 978-91-85533-81-7
ISSN 0284-5172

Contents

Abstract
Contents
Preface
Summary
1 Introduction
2 Terminology
3 Model infrastructure
3.1 Introduction
3.2 Metamodel
3.3 Model transformation
3.4 Representation
3.5 Classification
3.5.1 General
3.5.2 Characterisation of single model
3.5.3 Characterisation of group of models
4 Requirement specification
4.1 Introduction
4.2 Requirement criteria
4.3 The role of models
4.4 Deriving requirements
4.5 Traceability
4.6 Process aspects
5 Verification means
5.1 General
5.2 Model transformation
5.3 General verification methods
5.4 A representative set of general verification methods
5.4.1 Introduction
5.4.2 Identification and analysis of safety-related parts of the model
6 FMEA
6.1.1 FTA - fault tree analysis
6.1.2 External behaviour handling
6.1.3 Sampling rate check
6.1.4 Interactions between different sampling rates check
6.1.5 Data type verification
6.1.6 Fixed-point arithmetic check
6.1.7 Signal range verification
6.1.8 Fault injection
6.1.9 Model coverage check
6.1.10 Settings check
6.2 Simulation data generation
6.3 Test data generation
6.4 Verification environment
6.4.2 Reviews
6.4.3 Analysis
6.4.4 Dynamic methods
6.4.5 Formal methods
7 Verification
7.1 Introduction
7.2 Verification vs. validation
7.3 Verification means
7.3.1 Model transformation
7.3.2 General verification methods
7.3.3 Simulation data generation
7.3.4 Test data generation
7.3.5 Verification environment
7.4 Single application model
7.5 Group of application models
7.6 Software Interfaces
7.7 Other interfaces
8 Support for modelling
8.1 Introduction
8.2 Notations
8.2.1 SDL overview
8.2.2 MOF overview
8.2.3 UML overview
8.2.4 SYSML overview
8.2.5 Modelica overview
8.2.6 EAST ADL overview
8.2.7 AADL (SAE) overview
8.2.8 AP233 overview
8.2.9 XMI overview
8.3 Tools
8.3.1 Application design
8.3.2 Verification
8.3.2.1 Test generation
8.3.2.2 Formal methods
8.3.3 Requirements management
9 Development process for model
9.1 General
9.2 Requirement capture
9.3 Design
9.3.1 General
9.3.2 Transformation
9.3.3 Structure of models
9.3.4 Executable model development
9.3.5 Non-executable model development
9.3.6 Relating different types of models
9.3.7 Round-trip engineering
9.3.8 Separate software
9.3.9 Modelling guidelines
9.4 Verification and validation
9.5 Maintenance
9.6 Decommission
9.7.1 Introduction
9.7.2 Waterfall model
9.7.3 V-model
9.7.4 Iterative and incremental development
9.7.5 Rapid prototyping
9.7.6 Extreme Programming
9.7.7 Model-Driven Software Development
9.7.8 Aspect-Oriented development
9.7.9 Software Product Line
9.8 Domain considerations
9.9 Summary
10 Models vs. standards
10.1 General
10.2 IEC 61508
10.3 MISRA
10.4 ISO 26262
10.5 DO-178B
10.6 EN 50128
10.7 Conclusions
11 Case study: Project EAST-EEA
11.1 Introduction
11.2 Background
11.3 EAST ADL overview
11.3.1 Model organization
11.3.2 Behaviour
11.3.3 Requirements
11.3.4 Validation and Verification
11.3.5 Variant Handling
11.4 Evaluation
12 Case study: ABS Model
12.1 Scope
12.2 The model
12.3 Identification of safety-related parts of the model
12.3.1 Control flow/Finite state
12.3.2 Identified signals
12.3.3 Data flow diagram
12.3.4 Identified signals
12.3.5 Identification and analysis of safety-related parts of the model
12.4 Application of verification methods
12.4.1 Introduction
12.4.2 FMEA
12.4.2.1 ABS model without added safety function
12.4.2.2 ABS model with added safety function
12.4.3 FTA
12.4.4 External behaviour handling
12.4.5 Sampling rate check
12.4.6 Interactions between different sampling rates check
12.4.7 Data type verification
12.4.8 Fixed-point arithmetic check
12.4.9 Signal range verification
12.4.10 Fault injection
12.4.12 Settings check
12.5 Evaluation
13 Conclusions
14 References
15 Appendix 1: Model verification methods and checks supported by Simulink
15.1 Identification and analysis of safety-related parts of the model
15.2 External behaviour handling
15.3 Sampling rate check
15.4 Interactions between different sampling rates check
15.5 Data types verification
15.6 Fixed-point arithmetic check
15.7 Signal range verification
15.8 Fault injection
15.9 Model coverage check
15.10 Settings check

Preface

New safety functions and the increased complexity of vehicle electronics increase the need to demonstrate dependability. Vehicle manufacturers and suppliers must be able to present a safety argument for the dependability of the product, correct safety requirements and a suitable development methodology.

The objective of the AutoVal project is to develop a methodology for safety validation of a safety-related function (or safety-related subsystem) of a vehicle. The validation shall produce results which can be used either as a basis for a whole vehicle type approval driven by legislation, or for supporting dependability claims according to the guidelines of the manufacturer.

The AutoVal project is a part of the IVSS (Intelligent Vehicle Safety Systems) research programme. IVSS develops systems and smart technologies to reduce fatalities and severe injuries through crash avoidance, injury prevention, mitigation, and upgrading of the handling, stability and crash-worthiness of cars and commercial vehicles, enabled by modern IT. Both infrastructure-dependent and vehicle-autonomous systems are included, as are systems for improved safety for unprotected road users. The core technologies of IVSS are:

• Driver support & human–machine interface (HMI) systems
• Communication platforms – external / internal to the vehicles
• Sensor-rich embedded systems
• Intelligent road infrastructure & telematics
• Crashworthiness, bio-mechanics and design of vehicles for crash-avoidance and injury prevention
• Dependable systems
• Vehicle dynamic safety systems

Partners of the AutoVal project are Haldex, QRtech, Saab Automobile, SP Technical Research Institute of Sweden and Volvo AB. The following researchers and engineers have participated in AutoVal:

Mr Henrik Aidnell, Saab Automobile
Mrs Sabine Alexandersson, Haldex Brake Products
Mr Joacim Bergman, QRtech
Mr Per-Olof Brandt, Volvo
Mr Robert Hammarström, SP
Mr Jan Jacobson, SP (project manager)
Dr Lars-Åke Johansson, QRtech
Dr Henrik Lönn, Volvo
Mr Carl Nalin, Volvo
Mr Anders Nilsson, Haldex Brake Products
Dr Magnus Gäfvert, Haldex Brake Products
Mr Josef Nilsson, SP
Mr Lars Strandén, SP
Mr Jan-Inge Svensson, Volvo
Mr Andreas Söderberg, SP


Summary

This report describes and classifies modelling from a top-down perspective. The intention is to provide a framework that could be further instantiated when considering specific domains and applications. The focus is on requirement specification, verification and validation for model driven (model based) development of software.

It is necessary to clarify what is meant by model driven, since models were used long before model driven development became an established notion. In this report, model driven development implies that a model is used for functional and/or non-functional requirements. Thus a model could reflect behaviour, properties, structure etc. of the system and include both application specific and application independent aspects (e.g. support needed for executing the application). A model could be executable or non-executable.

The main idea behind using models is to be able to look at a system from one specific perspective at a time, i.e. for each perspective to hide irrelevant aspects and to focus only on the relevant ones. In this way it becomes possible to handle more complex systems, to shorten development time and to improve requirements, development process, project management and quality. Note that normally there will be requirements expressed as text as well as requirements expressed by models. In these cases requirement traceability between the two types is a special concern.

In this report an evaluation is also made of modelling with respect to general development processes such as the V-model, iterative and incremental development, extreme programming etc., and with respect to established standards such as IEC 61508, EN 50128 etc. An overview of modelling support is given concerning notations, e.g. SDL, UML, SysML, MOF, and tools for application design and verification. In order to show the relevance of the framework, an evaluation of the architecture language EAST ADL, developed in the EAST-EEA project, is included as a case study. EAST ADL is a metamodel based on UML2 and at the same level as UML2 (and SysML). EAST ADL is a specification language that defines a number of models and levels of abstraction. These are compared and evaluated using the results of this report. There is also a second case study which contains experiences from using models for a specific application (ABS brake). The case studies are evaluated using the general results of the report.


1

Introduction

In this report the focus is on requirement specification, verification and validation for model driven (model based) development of software. By model driven development it is meant that a model is a significant artefact used for the development of software. However, model driven development as such does not automatically address the scope, extent or quality of models; only that one or more models exist. The focus in this report is on safety critical, dependable software that is used in real-time, embedded and possibly distributed systems. For the work of this report the following information sources have been used:

• generally established experiences (from available literature)
• established development processes
• development standards (e.g. IEC 61508)
• standards for handling models (e.g. UML, MDA)
• case studies

The purpose of the work is to produce a general overview, or framework, that could be used as guidance and input for a model driven development process. The results of this work will answer the following questions:

• What are the software constituents and how are interfaces between them handled?
• What are the principal phases in a model driven development process?

• How are functional and non-functional requirements specified?

• Which verification and validation methods shall be used when and where?

• How do established development processes and standards cope with model driven development?

• What support is available for model driven development?
• How are models handled in real life (i.e. case studies)?

It is necessary to clarify what is meant by model driven since models have been used long before model driven development became an established notion. In this report the following description applies:

• Use of model for model driven development implies that a model is used for functional and/or non-functional application related requirements. Thus a model could reflect behaviour, properties, structure etc. of the system and include both application specific and application independent aspects (for the support needed for executing the application). The main idea behind using models in this case is to be able to look at a system from one specific perspective at a time, i.e. for each perspective to hide irrelevant aspects and to focus only on the relevant ones. In this way it will be possible to handle more complex systems, to shorten development time and to improve quality.

• Use of model as support implies that a model is used as support for aspects not directly related to requirements of the current system. The main idea behind using models in this case is to focus on a specific task in order to support verification and validation. Some examples are:

o Simulation of the environment, e.g. sensors, actuators, the external environment of the system.
o Simulation of hardware, e.g. when the final version of the hardware is not yet available.
o Simulation of human behaviour, e.g. an operator.

• Use of model as prerequisite implies that a model is used as a specification of conditions that are valid before development takes place. The prime example is a fault model, which has to be defined in order to specify what kinds of faults and errors the system is capable of handling and also how faults are manifested.

In this report the focus is on the first use of model, i.e. as a means for actually developing application related software, and this use of model will be assumed below unless otherwise noted. The overall strong support for models used in this sense is manifested e.g. by the fact that OMG supports both UML and MDA and that many tool manufacturers support the handling of models.


Using models implies improved properties:

• Improved understanding – involves all roles related to an application, i.e. customer, user, developer, project manager, maintainer etc. The most important reasons for improved understanding are:

o Models can hide details not needed at the current level
o Models are visible
o Models are intuitive
o Models can be executed / animated, e.g. for HMI
o Models can generate consistent documentation

Improved understanding results in improved:

o requirements
o verification
o validation
o reuse
o version handling, e.g. a PIM (Platform Independent Model) can be used for generating several PSMs (Platform Specific Models)

• Improved project management – the most important reasons for improved project management are:

o All involved have the same picture (view) of the system.
o Shorter time to market.
o The customer can be directly involved, leading to more stable requirements early.
o Short update loops (iterations), especially in the beginning of a project when requirements are not settled.
o Improved quality, since more verification and validation effort can be spent early in the project, where it is inexpensive to change.
o Modelling opens up the possibility to use more development process alternatives.
o Transformation of models makes it possible to develop new models maintaining the same quality. One example is to generate another type of source code.

• Improved source code quality – the most important reasons for improved source code quality are:

o Requirement quality is improved (as a consequence of the aspects above), e.g. concerning completeness.
o It is possible to generate code automatically from models, which reduces manual mistakes and makes it possible to use language subsets consistently.
o It is possible to produce test data automatically, which improves verification and validation quality.

In the same way as for programs, slicing can be applied to models, see [23] and [24]. The purpose of slicing is similar to that of modelling, since the idea is to lower the amount of information that needs to be considered at a time. One example is when a class diagram contains hundreds or thousands of classes. If it is possible to take out slices, i.e. more or less independent partial structures of classes, then analysis, design and verification will be simplified. However, from a modelling perspective as defined in this report there is:

• no new level of abstraction
• no new perspective
• dependence on the actual contents, i.e. the identification of slices could change, e.g. if a new class is added in a class structure, creating new dependences and removing others

In this report model slicing will therefore not be considered significant for modelling and will not be commented further.


There exist specific and established types of models for corresponding perspectives; some examples are (see [2]):

• For perspective Behaviour

o Finite state – A finite state machine captures behaviour by defining states and state transitions, the inputs governing these transitions and the resulting outputs.

o Concurrent processes – Concurrent process modelling is a formal approach where the interaction between concurrently executing processes is studied. Behavioural semantics is typically quite primitive, supporting send and receive of events and basic computations.

• For perspective Properties

o Reliability block modelling – A way to show reliability-related dependencies between system components. For example, the effects of redundancy and lack of redundancy for system components can be studied.

o Markov model – Markov models capture states and the rate of state changes in a system. Typically, the states represent different failure states. The state changes are exponentially distributed, which allows an analytical computation of the steady-state probability of being in a certain state.

o Queuing model – Queuing models are based on the statistical arrival and serving of events in a system. Both sources of events and servers that handle events are defined, as well as the arrival/service characteristics. Steady state performance and dynamic scenarios can be studied analytically or by simulation.
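A finite state machine of the kind described above can be captured as a transition table. The following is a minimal sketch (Python); the states, inputs and outputs of the toy brake-controller mode machine are invented for illustration and do not come from the report:

```python
# Minimal finite-state-machine sketch: states, state transitions, the inputs
# governing the transitions and the resulting outputs.
class FSM:
    def __init__(self, initial, transitions):
        # transitions maps (state, input) -> (next_state, output)
        self.state = initial
        self.transitions = transitions

    def step(self, event):
        """Consume one input event, move to the next state, return the output."""
        self.state, output = self.transitions[(self.state, event)]
        return output

# A hypothetical brake-controller mode machine (illustrative only).
fsm = FSM("idle", {
    ("idle", "brake_pressed"): ("braking", "apply_pressure"),
    ("braking", "wheel_lock"): ("abs_active", "modulate_pressure"),
    ("abs_active", "wheel_free"): ("braking", "apply_pressure"),
    ("braking", "brake_released"): ("idle", "release_pressure"),
})
```

Each row of the table is one transition, so the behaviour is fully defined by enumerable data, which is what makes this perspective amenable to formal analysis.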

For including models in application development the following steps are defined:

1. Selection of modelling support and structure
2. Creation of models
3. Use of models (in applications)

As a summary, modelling is a very useful idea and, even if this view is balanced later on, an initial approach could be that “everything is model”.


2

Terminology

ADL – Architecture Description Language
Application infrastructure – All application independent aspects needed for making it possible to execute an application
AOM – Aspect-Oriented Modelling
CCM – Corba Component Model
CMOF – Complete MOF
COTS – Commercial Off The Shelf
CWM – Common Warehouse Metamodel
DSML – Domain Specific Modelling Language
ECSL – Embedded Control Systems Language
ECU – Electronic Control Unit
EMOF – Essential MOF
FSM – Finite State Machine
GME – Generic Modelling Environment
GUI – Graphical User Interface
HMI – Human Machine Interface
MDA – Model Driven Architecture, see http://www.omg.org/mda/
MDD – Model Driven Development
MDE – Model Driven Engineering
MIC – Model Integrated Computing, see http://mic.omg.org/
Metamodel – A model for how to express models
Model – A model is used for specific, not all, aspects of a system in order to highlight specific behaviour and/or properties. Could be executable or not.
Model based/driven – Model is the main artefact
Model infrastructure – Support needed for making it possible to use models in an application project
MOF – Meta Object Facility
OMG – Object Management Group, see http://www.omg.org/
PICML – Platform Independent Component Modelling Language
PIM – Platform Independent Model
Platform – Relates to layers of abstraction. A platform hides details at lower layers and makes information available to higher layers
PSM – Platform Specific Model
RBD – Reliability Block Diagram
RUP – Rational Unified Process
RTOS – Real Time Operating System
SPEM – Software Process Engineering Metamodel, see http://www.omg.org/technology/documents/formal/spem.htm
UML – Unified Modelling Language, see http://www.uml.org/
Validation – Activity answering: is the right item built?
Verification – Activity answering: does the item fulfil its specification?
WCET – Worst Case Execution Time
XMI – XML Metadata Interchange, i.e. an XML application
XML – eXtensible Markup Language


3

Model infrastructure

3.1

Introduction

By model infrastructure is here meant all those aspects needed in order to make use of models in applications. Example aspects are model development, model representation, model execution etc. In this document, the relation between model infrastructure and application infrastructure is that

• application infrastructure is always needed

• model infrastructure is only needed if modelling is included and in that case can be considered a subset of application infrastructure

Note again that, for both application and model, infrastructure is not limited to what is actually executed. The model infrastructure is application independent and there could be one or more types of models used for an application.

The overall purpose of this section is to present a framework that helps to understand modelling fundamentals.

3.2

Metamodel

By metamodelling is meant the description of models by using models. The two most important tasks related to metamodelling are to specify what it is possible to express, and to enable the transformation of one model into another using the corresponding metamodel. There are two basic types of metamodels:

• for modelling application aspects

• for modelling application development aspects

A related concern for both is the availability of tool support and other supporting means. This, in turn, has implications on the processes involved. OMG is of profound importance for object oriented software development and is therefore the only organisation considered in this section. OMG has defined a four layer hierarchy of models and metamodels for application aspects, as shown in the figure below.

Figure 1: OMG application model layers (M3 – Meta Meta Model, M2 – Meta Model, M1 – Model, M0 – User objects)

The levels are described by:

• At M0 the objects are defined that constitute the actual application (one specific application within an application domain).

• At M1 the UML model is given that defines e.g. application classes and relations between them. At this level the application domain is specified.


• At M3 is the specification of what a modelling language (like e.g. UML, CWM) shall cope with. MOF is used at M3.

MOF could be used for transformations between different metamodels (i.e. for M2 level).

In the corresponding way OMG has also defined a four layer hierarchy of models and metamodels for application development aspects as shown in the figure below.

Figure 2: OMG process model levels (M3 – Meta Meta Model, M2 – Meta Model, M1 – Model, M0 – User process)

The levels are described by:

• At M0 the project specific development process is defined.

• At M1 the model is given that defines a type of development process, e.g. RUP.
• At M2 is the specification of what is possible to express for processes.

• At M3 is the specification of what a process modelling language shall cope with. MOF is used at M3.

A natural candidate for M2 is SPEM (Software Process Engineering Metamodel) see [12].
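The four-layer hierarchy can be loosely illustrated in Python, where the built-in `type` plays a role analogous to a meta-metamodel. This is only an analogy sketched for this text, not part of the report or of the OMG specifications; the class names are invented:

```python
# Loose Python analogy of the M0-M3 layers (illustrative, not OMG-normative):
#   type          ~ M3 (meta-metamodel): defines what metaclasses are
#   ModelElement  ~ M2 (metamodel): defines what model-level classes may express
#   BrakeSignal   ~ M1 (model): one application-domain class
#   front_left    ~ M0 (user object): one concrete object of the application

class ModelElement(type):
    """M2: a metaclass, itself an instance of type (M3)."""
    def __new__(mcls, name, bases, ns):
        # Every class defined with this metaclass gets a 'stereotype' property,
        # just as a metamodel prescribes what every model element must carry.
        ns.setdefault("stereotype", "element")
        return super().__new__(mcls, name, bases, ns)

class BrakeSignal(metaclass=ModelElement):   # M1: model-level class
    def __init__(self, value):
        self.value = value

front_left = BrakeSignal(0.7)                # M0: user object

# Each layer conforms to the one above it:
assert isinstance(BrakeSignal, ModelElement)   # M1 conforms to M2
assert isinstance(ModelElement, type)          # M2 conforms to M3
```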

3.3

Model transformation

A model can be transformed into another for the following reasons:

• A subset of perspectives shall be considered (same abstraction level), i.e. filtering out specific information.
• To create a more suitable input for another model (same abstraction level).
• For lowering the abstraction level.

Normally, transformation of models requires tool support, i.e. a manual approach is not suitable. The figure below shows a possible structure of a transformation (expressed in UML and including MOF from OMG), where there is a transfer from the source model to the target model using a transformation engine (tool).

Figure 3: Model transformation – the source model (conforming to a source metamodel) is the input to a transformation engine which, using transformation rules expressed in a transformation language, produces the target model (conforming to a target metamodel); all metamodels conform to the meta-metamodel (MOF).

In principle, source and target can be the same, i.e. the transformation rules can be used for updating the existing model. A generalisation is to allow more than one source model and more than one target model. An example of the former is when a merge of models is needed, and an example of the latter is when a PIM shall allow the generation of several PSMs adapted to different hardware. In [14] a number of important functional properties are listed for a transformation language and tool:

• Ability to create/read/update/delete transformations
• Ability to suggest when to apply transformations
• Ability to customise or reuse transformations
• Ability to guarantee correctness of the transformations
• Ability to deal with incomplete or inconsistent models
• Ability to group, compose and decompose transformations
• Ability to test, validate and verify transformations
• Ability to specify generic and higher-order transformations
• Ability to specify bidirectional transformations
• Support for traceability and change propagation

A separation can be made using vertical abstraction levels, defining vertical and horizontal transformations. The general motivation for a vertical transformation is to successively refine a model, possibly into executable code. However, the opposite direction is also possible, e.g. when creating models bottom-up or when performing reverse or round-trip engineering of models. As shown in the figure below, going from a higher abstraction level to a lower one requires that extra information is added.

This may concern application design and implementation aspects but also other types of information, e.g. coding guidelines, design principles etc.

Figure 4: Reducing abstraction level (abstraction level n → transformation → abstraction level n+1, with additional information as extra input)

A horizontal transformation is defined as creating a new model without changing the level of abstraction. The general motivation for a horizontal transformation is to refine a general aspect into something more specific. One example could be a source model whose primary purpose is to describe the system, where there is a need for a target model describing the class structure. In practice it may be difficult to separate the two types, i.e. mixtures of both will generally occur. In any case, the main issue when using more than one model is to handle consistency between the models. Some possible reasons for inconsistencies are that the used:

• transformation language does not preserve syntax and/or semantics
• tool does not preserve syntax and/or semantics
• rules do not preserve syntax and/or semantics
• definition and/or used source and target models are in conflict

Sequences of transformations are allowed and inconsistencies could then occur between models that do not have a direct source-target relation. For example, a transformation from model M1, via M2 to M3 could result in a conflict between M1 and M3.

Design patterns could be used for simplifying transformations. Originally, design patterns (see [13]) concerned only class structure but they have been generalised successively. A design pattern can be seen as a template for a specific piece of structure.

The following source-target relations exist:

• one source – one target, e.g. when source code is generated from a single model.

• one source – more than one target, e.g. when the source model contains both structure and behaviour models which are separated into two target models.

• more than one source – one target, e.g. when source code is generated from several models and merged into one.

• more than one source – more than one target, probably not often used but included here for completeness. A possible example could be to merge structure and behaviour models of one representation and to create and separate them again using a new representation.
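A rule-driven transformation of the kind discussed above can be sketched as follows (Python). The dict-based model representation and the example rules are invented for illustration; real transformation engines work on metamodel-conformant structures:

```python
# Sketch of a rule-driven model transformation (one source - one target).
# A "model" is a list of elements; each rule is a (match, rewrite) pair.

def transform(source_model, rules):
    """Apply the first matching rule to every element of the source model."""
    target = []
    for element in source_model:
        for matches, rewrite in rules:
            if matches(element):
                target.append(rewrite(element))
                break                      # first matching rule wins
        else:
            target.append(element)         # unmatched elements pass through
    return target

# Example: a horizontal transformation filtering out one perspective,
# keeping only class-structure elements and normalising their names.
source = [
    {"kind": "class", "name": "Sensor"},
    {"kind": "state", "name": "Idle"},
    {"kind": "class", "name": "Actuator"},
]
rules = [
    (lambda e: e["kind"] == "state",               # drop behaviour elements
     lambda e: None),
    (lambda e: e["kind"] == "class",               # keep and rename classes
     lambda e: {"kind": "class", "name": e["name"].upper()}),
]
target = [e for e in transform(source, rules) if e is not None]
```

The sketch also shows why rule quality matters for consistency: a rule that does not preserve the intended semantics (here, one that dropped the wrong element kind) would silently produce a conflicting target model.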

3.4

Representation

In order to be acceptable as a model, a description must be specified unambiguously. This is also a prerequisite for formal verification, which in many cases is a desirable feature (e.g. when verifying functional behaviour). Generally, the following requirements apply:

• The syntax (symbols or plain text) for expressing models shall be unambiguous.
• The semantics of the model shall be unambiguous.
• The model shall be possible to visualise (even though it also has a non-visual representation such as ordinary text).
• If needed, the model shall have a representation that makes it possible to transform significant parts of the model to be used in another model.
• If needed, the model shall be possible to verify formally.
• Hierarchy and structure shall be possible to represent.

3.5

Classification

3.5.1

General

A software based system may, in some cases, be described by a single model or, in other cases, by several models, each representing different aspects of the system. A model classification is thus a useful means for creating a common language that can be understood by all involved. In this way misunderstandings can be minimized or eliminated. Examples from the literature show that there is often an implicit assumption that a model is always executable, that there is a single model etc. By using a classification it becomes possible to make implicit assumptions explicit.

Below a number of criteria are given that can be used for general classification. Only criteria that have few and clear alternatives have been included. Note that some criteria are dependent but others are independent.

Note that classification of models for development processes is not considered, since it is in principle outside the scope of this report.

3.5.2

Characterisation of single model

Creation of model – the origin of the model

• As a result from model transformation – This corresponds normally to an automatically generated model. Two examples are code generated from models (i.e. a model for execution) and an allocation model generated based on models of hardware, software and various requirement/constraints.

• Developed separately – This corresponds to original information in a model defined by an engineer.

Number of perspectives – which views the model represent

• Single perspective – A perspective could e.g. be class diagram, state diagram, error model, hardware model etc.

• Multiple perspectives – Multiple perspectives means that several aspects are represented by the same model e.g. both behaviour and structure.

Representation – how the model is specified

• Formal i.e. formal proofs and/or model checking are possible • Semi-formal i.e. defined syntax and semantics but not formal • Informal e.g. plain English text

Executable – note that a model describing behaviour is not required to be executable even though this might be the normal case.

• Yes
• No

Perspective – for what purpose the model is used. For a multiple perspective model more than one perspective is applicable.

• Domain – where modelling is applied
  o Application
  o Application infrastructure
  o Environment (systems and/or humans)
• Implementation oriented – if the model is motivated by implementation aspects
  o Implementation oriented
    ƒ Software
    ƒ Hardware
    ƒ Other (e.g. mechanical)
  o Not implementation oriented
• Area
  o Structure
  o Behaviour – this includes functional behaviour in different modes:
    ƒ Fully functioning
    ƒ Degraded
    ƒ At error
  o Properties – structure and behaviour aspects may be needed
    ƒ Real-time modelling e.g. performance, determinism/indeterminism, scheduling, parallel processes (mutual exclusion, deadlock, livelock etc.)
    ƒ Dependability modelling e.g. use of redundancy
    ƒ Configurability modelling e.g. use of parameter settings

For example, a model can be developed separately, have a single perspective, use a formal representation, be executable, concern the application, not be implementation oriented, concern degraded behaviour and be used for real-time analysis.

3.5.3 Characterisation of group of models

The purpose of this section is to characterise the properties of a group of models. Characterisation of each individual model is given above in section Characterisation of single model.

Two models can have the following types of relations:

• transformation i.e. a model is transformed into another. Transformation is here used for describing a relation and not for describing the creation principle of a model.

• data dependency i.e. data generated from one model is used by the other. The data type could be abstract and the models do not necessarily have to be executable.

• no explicit relation e.g. a structural model and a behaviour model are only weakly related

The figure below gives an example where a bold arc denotes transformation and a normal arc denotes data dependency.

[Figure: Group of models – Models A to K connected by arcs; Models E and F are software.]


The figure shows that:

• Model A is used standalone e.g. a structural model that is reviewed manually.

• Model B is transformed into Model E which is the generated software (i.e. a model for execution)

• Model C and Model G are first transformed into Model D which is then transformed into Model F which is the generated software (i.e. a model for execution)

• Model E and Model F have a data dependency

• Model H is transformed into Model I and is used standalone.
• Model J and Model K have a data dependency

Also, see [18] for an example of connected models.

A graph of models can be created where models are connected using arcs where an arc shows the type of relation as given above. Abstraction levels are added with directional relations in the following way:

• Vertical relation where models describe the same perspective(s) but at different levels of abstraction

• Horizontal relation where models describe different perspective(s) but at the same level of abstraction.

The classification items of a single model can be applied to groups of models according to the following.

• Creation of model – significant and this can be seen in the graph

• Number of perspectives – significant and this can be seen to a certain extent in the graph when a model is split into others at the same horizontal level

• Representation – not significant for the graph
• Executable – not significant for the graph
• Perspective – significant for the graph according to below
  o Domain – significant for the graph
  o Implementation oriented – not significant for the graph
  o Area – significant for the graph

In the graph, input will be used for a relation to a model and output will be used for a relation from a model. Then, each model in the graph can be specified by the following:

• Abstraction level

• Number of inputs from models of higher abstraction level
• Number of outputs to models of lower abstraction level
• Number of inputs from models of the same abstraction level
• Number of outputs to models of the same abstraction level

A number of criteria for graph characterisation can now be defined.

Strict refinement characterisation of graph can be defined as:

• There is a single model at the highest level of abstraction and it has one output to lower level of abstraction

• There is a single model at the lowest level of abstraction and it has one input from higher level of abstraction

• Each intermediate model (0..n) has one input from higher level of abstraction
• Each intermediate model (0..n) has one output to lower level of abstraction

Merge characterisation of graph can be defined as:

• At least one model has more inputs than outputs when the number of outputs is > 0

Split characterisation of graph can be defined as:

• At least one model has less inputs than outputs when the number of inputs is > 0

Top tree characterisation of graph can be defined as:

• There is a single model at the highest level of abstraction

Bottom tree characterisation of graph can be defined as:

• There is a single model at the lowest level of abstraction

Vertical characterisation of graph can be defined as:

• If a model has an input it comes from a model of higher level of abstraction

Horizontal characterisation of graph can be defined as:

• If a model has an input it comes from a model of the same level of abstraction
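The characterisations above reduce to simple predicates over a model graph. The sketch below (Python; the helper names are illustrative, not from the report) applies the Merge and Split definitions to a group of models represented as directed arcs:

```python
# Sketch: applying the Merge and Split characterisations to a group of
# models represented as directed arcs (source, target). Helper names
# are illustrative, not taken from the report.

def degree(arcs, model):
    """Return (inputs, outputs) of a model in the graph."""
    ins = sum(1 for (_, t) in arcs if t == model)
    outs = sum(1 for (s, _) in arcs if s == model)
    return ins, outs

def is_merge(arcs):
    """At least one model has more inputs than outputs (outputs > 0)."""
    models = {m for arc in arcs for m in arc}
    return any(o > 0 and i > o for i, o in (degree(arcs, m) for m in models))

def is_split(arcs):
    """At least one model has fewer inputs than outputs (inputs > 0)."""
    models = {m for arc in arcs for m in arc}
    return any(i > 0 and i < o for i, o in (degree(arcs, m) for m in models))

# Example mirroring the figure: C and G are transformed into D,
# which is transformed into F.
arcs = [("C", "D"), ("G", "D"), ("D", "F")]
print(is_merge(arcs))  # True: D has two inputs and one output
print(is_split(arcs))  # False: no model fans out more than it fans in
```

The other characterisations (Strict refinement, Top/Bottom tree, Vertical, Horizontal) can be written in the same style once abstraction levels are attached to the models.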

For example, a graph characterised as (Top tree, Bottom tree) is the same as a graph with (Strict refinement), except that Split and Merge are not allowed for the latter. Since the whole structure may consist of more than one connected graph, each of these has to be characterised as above. UML can be used for describing the structure of groups of models. Naturally, complex structures should be avoided, and some means for simplifying such structures are:

• a walkthrough to determine which models are really needed

• to merge models into master models that are used for generating more than one model from each

• to separate model structures that are used separately e.g. separating structural and behaviour model structures

• to focus on vertical relations (if possible) since the corresponding levels are easier to comprehend


4 Requirement specification

4.1 Introduction

Since the borders between initial requirements, design and implementation are somewhat fluid, it is necessary to identify where requirement specification occurs. As a starting point, it occurs at the beginning of the project, where requirements are put on the system considered as a black box, i.e. when the interior is not visible. However, leaving requirement specification at this abstraction level is too coarse since:

• requirements assigned to subsystems are also of interest i.e. a grey box view is needed
• requirements may concern design and implementation issues such as portability, scalability etc.

Thus it is not realistic to assign requirement specification to just one phase of the development process. In this report, requirement specification is therefore considered as an activity in its own right, and the mapping to development phases is not included. Note that requirements may concern both behaviour and properties.

Requirements can be separated into two types: functional and non-functional requirements (the latter also named quality requirements). The former concern functions, i.e. given a number of inputs and a function, a number of outputs will be produced. Generally, functional requirements are easier to handle than non-functional ones. For example, how shall scalability be specified and tested? Thus there is a tendency to perform verification and validation activities more informally for non-functional requirements, since they are often more vague and ambiguous. Non-functional requirements address a number of different aspects, where the most important ones are:

• Real-time – e.g. deadlines, response time, run-time performance etc.
• Dependability – i.e. safety, security, reliability, availability, maintainability
• Portability – i.e. implementation for different platforms
• Scalability – i.e. if the system can be increased in a smooth way
• Architecture (or design) – e.g. use of design patterns, layers etc.
• Implementation – e.g. memory usage, dynamic/not dynamic creation, configurability, modifiability, reusability, composability (cooperation between items)

The standard ISO/IEC 9126-1 defines a quality model with more details (both functional and non-functional).

Only requirements for software are considered here and, further, software process requirements are not considered either.

4.2 Requirement criteria

Requirement engineering denotes the task of creating good requirements according to specific criteria.

Requirement criteria can be separated into two groups; those concerning an individual requirement and those concerning a group of requirements (normally the whole system or subsystem). The following classification can be made (some criteria overlap):

For individual requirement:

• Unique – only one requirement exists addressing a specific aspect

• Atomic – the requirement addresses one aspect. This also improves the possibility of modifications (less dependences with other requirements)

• Complete – within the scope of the individual requirement
• Unambiguous – no room for different interpretations
• Identifiable – can be uniquely referenced
• Correct – shall address what is intended
• Concise – a focussed formulation


• Traceable – both to upper and lower level requirements

• Understandable – i.e. anybody can understand the requirement. This might be somewhat in conflict with Concise.

• Rationale – a motivation for the requirement. This is necessary since this will improve the understanding of the individual requirement as well as groups of requirements.

For a group of requirements:

• Complete – covers the domain of interest

• Consistent – no requirements are in conflict with each other

• Validatable – the group addresses the intended purpose (the right product). Important is then to have Rationale specified for each individual requirement.

• Abstraction level – the group shall contain requirements given at the same abstraction level.
• Traceable – both to upper and lower level groups (not absolutely necessary and might be replaced with individual requirement traceability instead).

Metrics can be defined for textual requirements in order to evaluate the quality of requirements. In [17] analysis of the following is proposed:

• Imperatives – the strictness of requirement e.g. shall, must, should occurs.
• Continuance – the relation between requirements e.g. below, listed
• Directives – illustrate requirement e.g. figure, table
• Options – risk for loose requirements e.g. can, may, optionally
• Weak phrases – e.g. as a minimum, easy, be able to
• Size – e.g. lines of text, number of paragraphs
• Text structure – e.g. hierarchies
• Specification depth – number of imperatives at different levels
• Readability – e.g. number of syllables per word

However, as always with metrics, it is important not to read too much into the figures but instead use them as indicators for further analysis.
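As a rough illustration of such metrics, keyword counting over requirement text is straightforward. The keyword sets below are small illustrative samples, not the calibrated lists used in [17], and phrase counting by substring is an approximation:

```python
import re

# Illustrative keyword sets; [17] defines larger, calibrated lists.
IMPERATIVES = {"shall", "must", "should"}
OPTIONS = {"can", "may", "optionally"}
WEAK_PHRASES = ("as a minimum", "easy", "be able to")

def requirement_metrics(text):
    """Count metric indicators in one requirement text."""
    lowered = text.lower()
    words = re.findall(r"[a-z]+", lowered)
    return {
        "imperatives": sum(w in IMPERATIVES for w in words),
        "options": sum(w in OPTIONS for w in words),
        # substring count is an approximation for multi-word phrases
        "weak_phrases": sum(lowered.count(p) for p in WEAK_PHRASES),
        "size_words": len(words),
    }

m = requirement_metrics(
    "The system shall log errors and should be able to restart. "
    "Restart may be delayed.")
print(m)  # imperatives: 2, options: 1, weak_phrases: 1, size_words: 15
```

A high option or weak-phrase count relative to imperatives would then flag a requirement for review rather than condemn it outright, in line with the caveat above.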

4.3 The role of models

Generally there will be requirements expressed as text as well as requirements expressed by models. The consistency between the two groups must probably be checked manually (otherwise the links between them would be so clear that the requirements could be incorporated into the models). For a model to be used for requirement specification, the model itself must be expressed using well defined meanings; e.g. a relation “possibly” between two items in the model is not acceptable. Thus the model representation sets, to a large extent, the quality of the requirements. The main advantages of using models for requirement specification are:

• A visual impression supports human ingenuity.
• A visual impression supports communication between humans.
• Structures and behaviour models improve analysis quality.

• Reuse is improved since parts of models can be reused and, further, previous models can be used as starting points for developing new ones.

In a model a requirement is manifested directly as new item(s) to add/delete/modify and/or new relation(s) to add/delete/modify. Thus no interpretation is needed, as compared to ordinary text requirements. Since there is a specific place in the model matching the requirement, the following criteria for an individual textual requirement as given above are automatically fulfilled (or not really applicable): Unique, Atomic, Complete (within the scope of the individual requirement), Unambiguous, Identifiable, Correct, Concise and Understandable. Traceable is more complex since it may concern the model itself but also other models (see further discussion below). Verifiable is related to how strictly items and their relations can be expressed in the model, and Rationale is something that is outside the model. A group of requirements can address a specific, complete model or all models of the application. In both cases all criteria for groups of requirements as given above are applicable, i.e. Complete, Consistent, Validatable, Abstraction level, and Traceable. The improvement when using models is that they give support by encouraging human overall understanding. However, metrics for group criteria will be more difficult to define and will be highly dependent on the choice of models.

Naturally, functional requirements and executable models are strongly related. A non-executable model for functional requirements will remove most of the advantages of using models together with functionality. The major exception is that understanding is still improved; one example is modelling using UML use cases. Generally, non-executable models can originate from:

• modelling something that is not executable e.g. class structure
• modelling something that is executable but where execution is not supported e.g. due to limitations in tools or in models

Non-functional requirements could be used together with executable models, even if the normal case will probably be to use non-executable models. The reasons for nevertheless using executable models could be:

• to keep functionality and associated qualities together e.g. specifying response time for a function
• to limit the number of models

Some advantages of using non-executable models for non-functional requirements are:

• to clearly separate concerns
• to make model execution for functional requirements more efficient

The table below shows a summary of the use of models for the two types of requirements.

                              Executable model     Non-executable model
Functional requirements       Normal               Normally not used
Non-functional requirements   Exceptional cases    Normal

Table 1: Use of models for requirements specification

4.4 Deriving requirements

New or modified requirements can be results from verification and validation phases. For a summary of methods see [2]. However, here it is discussed how requirements can be developed from a top-down perspective using models, without explicitly including verification and validation techniques; e.g. testing is not considered. The border between verification/validation and specification is not always clear, especially not when using iterative development. New requirements can result in:

• new or modified text
• new or modified models. This can also include general principles e.g. going from non-executable to executable.
• new or modified model transformations. This can in turn create new or modified models.
• new or modified relations for the three aspects above

The following general methods are applicable when working with models for requirements specification:

• Review
• Analysis, i.e. a well defined methodology for producing results in the area of interest
• Simulation

These methods could be applied to

• one level of abstraction – the result could be that requirements are not correct, requirements address different abstraction levels or there are requirements missing

• several abstraction levels – the result could be that the transfer from one level to the next is incorrect even if each level in isolation is judged correct

In any case, the result is an updated set of requirements.

Surveys of models can, in more detail, be separated into:

• Walkthrough – the designer guides through the model and gets requirement suggestions from participating persons. For an executable model this can be described as a “simulated” simulation, i.e. no real execution of the model takes place but is imagined. The walkthrough process is well defined.

• Inspection – the survey is guided by a person not being the designer. The inspection process is well defined.

• Informal review – The informal review process is not well defined; it is not possible to repeat an informal review with the same quality. However, it is a quick method with a minimum of administration.

To support surveys, checklists, scenarios etc. can be used.

Generally, analysis for defining and refining requirements is applicable in the following areas:

• for identifying the scope, functionality and properties of the application. Some analysis examples are HAZOP, “what if” etc.
• for identifying the needs that models shall fulfil. This includes individual models, model transformations and structures of models. This also includes model infrastructure and its properties.
• for verification and validation. The analysis may target both models and non-models and both individual models and structures of models. Some analysis examples are FMEA, Fault Tree Analysis, Data Flow Analysis etc.

There seems to be no specifically defined analysis method for identifying which models to use, probably because there is a loose connection between requirements and models. For example, for a functional requirement, one or more models could be defined for the same functionality of the application. Instead, other criteria have to be defined to guide the development of requirements for identifying models. Possible criteria are related to development, e.g.:

• quality aspects e.g. improved separation of concerns
• development support e.g. tool support, reuse, design for testability
• complexity of application

However, design aspects concerning identification of models are outside the scope of this report. In summary, and in this report, analysis is not applicable for deriving requirements for identifying which models to use.

Simulation can be used for executable models; it improves understanding and makes it possible to visualize interfaces early, e.g. a GUI. Simulation is seen as a more reliable way of describing functional behaviour. For example, the generation of message sequences according to specific use cases can be seen as new requirements. Note that there could be functionality and properties that are implicit when executing the model in its environment, i.e. related to simulation model infrastructure aspects. Some possible examples are: that something shall precede something else, an indeterministic choice being decided, etc. This implies that the model as such will, to some extent, act as a compound implicit requirement only visible at model execution. Note that symbolic execution is not included here since it is closely related to source code representation.

4.5 Traceability

For the requirement criteria involving models defined above, Traceable requires some extra considerations. Generally, traceability concerns the possibility of relating individual requirements at higher and lower abstraction levels with the level under consideration. However, traceability at the same level of abstraction is also possible, e.g. when different models are used for different properties. Traceability involving models is relevant for a relation:

• between text requirement and model
• between models directly
• between models indirectly via model transformation
• within a model

A relation for traceability could be:

• one-to-one
• one-to-many
• many-to-many
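Such relations can be recorded as a simple mapping and inverted for impact analysis. The following sketch is illustrative; the requirement IDs and model element names are hypothetical:

```python
# Sketch of a traceability relation between requirement IDs and model
# elements (all identifiers are hypothetical). One-to-many: one
# upper-level item maps to several lower-level items.
trace = {
    "REQ-12": ["ModelB.BrakeController", "ModelB.WheelSensorIF"],  # one-to-many
    "REQ-13": ["ModelB.BrakeController"],                          # one-to-one
}

def reverse_trace(trace):
    """Invert the relation: which requirements touch each model element?"""
    rev = {}
    for req, elements in trace.items():
        for elem in elements:
            rev.setdefault(elem, []).append(req)
    return rev

rev = reverse_trace(trace)
print(rev["ModelB.BrakeController"])  # ['REQ-12', 'REQ-13']
print(rev["ModelB.WheelSensorIF"])   # ['REQ-12']
```

The inverted relation answers the maintenance question "if this model element changes, which requirements are affected?", which ties directly into the version handling discussed below.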

Version handling is needed for traceability and concerns administration, verification, validation and maintenance. Normally, each part of the model is not version handled, instead blocks of the model or the whole model is version handled. This means that a new version may be defined even for parts that are not changed. Further, a new version of model transformation automatically implies new versions of generated models (and possibly also new versions of source models).

4.6 Process aspects

Using models for requirements makes it possible to transfer requirements without explicitly specifying them, i.e. the model or models are the specification. The most important aspect is then whether the model(s) are enough for specifying requirements or if additional information is necessary, e.g. rationales, textual requirements etc. This could be a critical issue in an orderer–subcontractor relation. A complicating matter could be if the model is not visible to the subcontractor but only a representation or a transformation of it. One such example is if only source code generated from the model is available. In that case the subcontractor has to consider the set of requirements as a black box, which results in decreased understanding and therefore an increased risk of lowered quality.


5 Verification means

5.1 General

Verification of models can to a large extent be compared with verification of software. The main difference is that a model is not necessarily executable. Since the purpose here is to improve the quality of verification, there are also preparatory actions to take. The most important aspects concern:

• requirement quality
• project management (administration)
• development process

However, the focus of this section is on means for verification, which are surveyed in isolation, i.e. without considering where and how they are applied.

5.2 Model transformation

The motivation for model transformation being a means for verification is that:

• verification can be simplified and performed with higher quality if e.g. a complex model is transformed into less complex model(s)
• by using model transformation it is possible to generate test stimuli
• by lowering the level of abstraction, verification can be performed with higher quality

However, note that increasing the number of models may move complexity from individual models to structures of models instead. Also note that a model transformation will affect associated input and output data.

Another use of model transformation is to create test models which possibly have additional features in order to support verification, e.g. openings for fault injection.

5.3 General verification methods

General methods are described in [2] and listed here together with comments. Not all methods are applicable and not all are applicable for all models.

• Reviews
  o Walkthrough – guided by designer
  o Inspection – guided by person other than designer
  o Informal review – difficult to repeat with same quality
• Analysis
  o Field experience – how experience could be used for increasing verification quality
  o Independence analysis – check that no unintended dependencies exist between parts of the system
  o Proven in use – check if quality is sufficient due to extensive use
  o Queuing analysis – requires a specific model
  o Requirement analysis – may include both model and related text requirements
  o Performance requirements – applicable only to very specific models with low level of abstraction
  o Performance analysis – applicable only to very specific models with low level of abstraction
  o Failure analysis – applicable but some more guidance is probably needed
  o Failure Mode and Effects Analysis (FMEA) – applicable for behavioural models
  o Common cause failure analysis – applicable for behavioural models
  o Event tree analysis – applicable for behavioural models
  o Fault Tree Analysis – applicable for behavioural models


  o Cause – Consequence Analysis – applicable for behavioural models
  o Reliability block analysis – requires a specific model
  o Markov analysis – requires a specific model
  o State transition diagram – requires a specific model
  o Data flow analysis – applicable for behavioural models
  o Control flow analysis – applicable for behavioural models
  o Sneak circuit analysis – applicable for behavioural models
  o Consistency analysis – applicable but some more guidance is probably needed
  o Impact analysis – applicable but some more guidance is probably needed
  o Protocol analysis – applicable for behavioural models
  o Complexity metrics – generally applicable but conclusions must be handled with care
  o Worst case analysis – applicable only to very specific models with low level of abstraction
  o Worst Case Execution Time analysis – applicable only to very specific models with low level of abstraction
• Dynamic methods
  o Functional testing
  o Regression test
  o Black-box testing
  o White-box testing
  o Interface testing
  o Boundary value analysis
  o Avalanche/stress testing – applicable only to very specific models with low level of abstraction
  o Worst-case testing – applicable only to very specific models with low level of abstraction
  o Equivalence classes and input partition testing
  o Structure-based testing
  o Statistical testing
  o Prototyping/animation
  o Standard test access port and boundary-scan architecture – concerns hardware and is outside the scope of this report
  o Behavioural simulation
  o Symbolic execution
  o Monte-Carlo simulation
  o Fault insertion testing
  o Error seeding
• Methods regarding formality
  o Logical proofs – requires model expressed formally
  o Model checking – requires model expressed formally
  o Rigorous argument – requires model expressed strictly, however not necessarily formally


5.4 A representative set of general verification methods

5.4.1 Introduction

Below, a number of general verification methods, partly taken from [2], are described in more detail. The selection should be seen as one possible choice; depending on the application and other factors, other general verification methods can be considered as well.

5.4.2 Identification and analysis of safety-related parts of the model

This method is applicable for both non-executable and executable models.

Aim: The purpose of this method is

- to identify the safety-related parts of the model.
- to identify the safety-related signals in the model.
- to analyse how safety-related signals are used.

Description: In order to get an understanding of the functionality and safety issues of a model, it can be analysed using different methods. Suitable methods are described in [2]. Three of these methods, control flow, finite state and data flow analysis, are described below in a modelling context.

Control flow analysis:

Control flow analysis is a well known method in source code analysis. When practicing this method on models the execution order of operations is as essential as in source code analysis. Unfortunately, this order is not always contained in models. The control flow of models can have parallel paths without mutual order of execution.

Finite state analysis:

An alternative approach to control flow analysis is to identify the operating modes of the system and determine the conditions for transitions between them, i.e. to create a finite state machine model. Signals in the transition conditions decide the operating mode and are therefore critical for the behaviour of the system. Classifying these signals as safety-related is one method to fulfil the aim.

Data flow analysis:

The purpose is to verify that signals are passed through the model correctly. A model illustrates the data flow between operations graphically. By studying the connections between blocks, relations and dependences can be determined. These characteristics can help in locating critical data paths and processes in the model. Included here is also to verify that all signals requiring initialisation are initialised.
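The initialisation part of data flow analysis can be mechanised. The sketch below is a hypothetical illustration (the tuple representation of blocks and the signal names are assumptions): it walks the blocks in execution order and flags signals that are read before any block has written them.

```python
# Sketch of the initialisation part of data flow analysis: flag signals
# that are read before any block writes them. The block representation
# (ordered (name, reads, writes) tuples) and signal names are assumptions.

def uninitialised_signals(blocks):
    """Return (block, signal) pairs where a signal is read unwritten."""
    written = set()
    issues = []
    for name, reads, writes in blocks:  # blocks in execution order
        for sig in reads:
            if sig not in written:
                issues.append((name, sig))
        written.update(writes)
    return issues

model = [
    ("Init",       [],                   ["v_ref"]),
    ("SlipCalc",   ["v_ref", "v_wheel"], ["slip"]),   # v_wheel never written
    ("Controller", ["slip"],             ["p_brake"]),
]
print(uninitialised_signals(model))  # [('SlipCalc', 'v_wheel')]
```

Note that this assumes a known execution order; as observed under control flow analysis above, that order is not always contained in the model itself.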


6 FMEA

This method is applicable for both non-executable and executable models.

Aim: The purpose of this method is

- to predict possible failures and evaluate the consequences for the safety of the system.
- to evaluate safety-functions and analyse their ability to prevent threat-vulnerabilities.

Description: FMEA is a classical way to analyse safety critical functionality and is used in many areas such as automotive and aerospace. An FMEA basically contains different component failure modes and their corresponding effects on the system as a whole. This concept can be used on hardware, software and model-based design. When creating an FMEA, the safety-related parts of the model are evaluated. Consideration needs to be given to the level at which the FMEA will be used; if the level is too deep the FMEA will become too extensive. To increase the reliability and the number of faults detected by the system, redundancy and safety-functions can be added. When creating an FMEA on a model-based design, different fault modes of the safety-related parts need to be set. When hardware is analysed, the fault modes can be easier to set due to the more specific fault modes of a component. For example, the fault modes of a resistor are short circuit, open circuit and drifting current; no other modes exist. When models are used, many different modes can exist, so to prevent the FMEA from becoming too extensive the fault modes need to be prioritised. Different fault modes can be:

- Output from block is too high relative to the correct value.
- Output from block is too low relative to the correct value.
- Output from block is too high (out of bounds).
- Output from block is too low (out of bounds).
- Output from block is zero (no signal).
- Output from block is stuck on previous value.

More specific fault modes can be defined if needed. For example, the effect of the fault mode “Sensor value delivered late” depends on when the application expects the new value and “Sensor short to ground” can instead be used to make a more specific classification of the failure.
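To keep the fault modes uniform across blocks, they can be captured as an enumeration and applied worksheet-row by worksheet-row. The sketch below is illustrative only; the block name, effects and detection measures are hypothetical, not taken from the report's case study:

```python
from dataclasses import dataclass
from enum import Enum, auto

# The fault modes listed above, as a reusable enumeration.
class FaultMode(Enum):
    TOO_HIGH_RELATIVE = auto()  # too high relative to the correct value
    TOO_LOW_RELATIVE = auto()   # too low relative to the correct value
    TOO_HIGH_BOUNDS = auto()    # out of bounds (high)
    TOO_LOW_BOUNDS = auto()     # out of bounds (low)
    NO_SIGNAL = auto()          # output is zero
    STUCK = auto()              # stuck on previous value

@dataclass
class FmeaEntry:
    block: str       # model block under analysis
    mode: FaultMode  # one of the fault modes above
    effect: str      # effect on the system as a whole
    detection: str   # how (or whether) the failure is detected

# Hypothetical worksheet rows for a wheel-speed input block:
worksheet = [
    FmeaEntry("WheelSpeedIn", FaultMode.NO_SIGNAL,
              "speed treated as zero; control may act on false slip",
              "plausibility check against the other wheel speeds"),
    FmeaEntry("WheelSpeedIn", FaultMode.STUCK,
              "slip estimate frozen; late or missing intervention",
              "signal-freshness monitoring"),
]
for row in worksheet:
    print(f"{row.block}: {row.mode.name} -> {row.effect}")
```

Enumerating the modes up front is one way of applying the prioritisation advised above: each block is checked against the same bounded list instead of an open-ended set of failures.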

An FMEA usually includes a failure rate for the components being analysed. This failure rate can be hard to decide in model-based design. If it is known which part of the hardware runs a given part of the software, the failure rate of the hardware can be used.

Example: An example of an FMEA for model-based design is performed in the case study ABS Model, see section 11.

Reference: [2]

6.1.1 FTA - fault tree analysis

This method is applicable for both non-executable and executable models.

Aim: The purpose of this method is

- to find events or combinations of events that will cause a hazardous or serious consequence.

Description: FTA is used when searching for the cause of an event defined in advance. Analysis is carried out along a tree path, and combinations of causes are described with logical operators to perform the FTA. The top event shall be well defined and may be a specific problem or safety issue. Safety-related subsystems are analysed and operations that cause an error in the subsystem are examined. The analysis stops when basic events are reached. To limit the FTA, consideration needs to be given to the level at which it will be used; if the level is too deep the FTA will become too extensive. Knowledge about the underlying subsystems is still required if a higher level is used. If the model contains many conditions in different subsystems it can be hard to avoid an extensive FTA. The result of the fault tree will be a graphic structure of paths that can lead to a foreseeable, undesirable event in the subsystem.

Example: An example of an FTA is performed in the case study ABS Model, see section 11.

Reference: [2]
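A small fault tree can be evaluated bottom-up by composing the logical operators. The sketch below is a hypothetical illustration: the gate helpers are the standard AND/OR operators, but the events and tree structure are invented, not taken from the ABS case study:

```python
# Sketch: evaluating a small fault tree bottom-up. The gate helpers are
# the standard logical operators; the events and tree structure are
# hypothetical, not taken from the ABS case study.

def AND(*branches):
    return lambda basic: all(b(basic) for b in branches)

def OR(*branches):
    return lambda basic: any(b(basic) for b in branches)

def event(name):
    return lambda basic: basic.get(name, False)

# Top event: unintended loss of a safety function. It occurs on power
# loss, or when a sensor fault and its monitor fault coincide
# (i.e. the redundancy is defeated).
top = OR(
    event("power_loss"),
    AND(event("sensor_fault"), event("monitor_fault")),
)

print(top({"sensor_fault": True}))                         # False
print(top({"sensor_fault": True, "monitor_fault": True}))  # True
```

Evaluating the tree over sets of basic events in this way directly exposes the combinations of causes that reach the top event.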

6.1.2 External behaviour handling

This method is applicable for both non-executable and executable models.

Aim: The purpose of this method is

- to verify that failures in externally described behaviour do not generate an undetected dangerous failure of the system.

Description: The term external behaviour in this text denotes a part of the model whose behaviour is described using a different semantics. Examples are COTS components or source code inserted as model blocks. A common feature of these blocks is that when the model is used they appear as black boxes, i.e. they behave as units and cannot be divided into smaller components.

Methods to reach the aim with a black box approach to external behaviour:

1. Analyse and test the consequences of a failure in the external behaviour. Add functionality to prevent these failures from causing a failure of the system.

2. Add redundancy to known safety related parts in the external behaviour.

3. Build a wrapper around the external behaviour to prevent failures from generating undetected failures of the system.
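Method 3 can be sketched as follows; the interface, signal and limits are hypothetical, not taken from the report. The wrapper checks the black-box output for range and rate-of-change plausibility and falls back to the last known-good value, so a failure inside the black box is detected instead of propagating silently:

```python
# Sketch of a wrapper around a black-box component (hypothetical
# interface and limits).

class WheelSpeedWrapper:
    MAX_SPEED = 300.0   # km/h, assumed physical limit
    MAX_DELTA = 50.0    # km/h per sample, assumed plausible change

    def __init__(self, black_box):
        self.black_box = black_box
        self.last_good = 0.0
        self.fault_detected = False

    def read(self):
        value = self.black_box.read()
        out_of_range = not (0.0 <= value <= self.MAX_SPEED)
        implausible_jump = abs(value - self.last_good) > self.MAX_DELTA
        if out_of_range or implausible_jump:
            self.fault_detected = True   # flag the fault rather than hide it
            return self.last_good        # degrade to last known-good value
        self.last_good = value
        return value

class StubSensor:
    """Stand-in for the black box, replaying canned readings."""
    def __init__(self, values):
        self._values = iter(values)
    def read(self):
        return next(self._values)

wrapper = WheelSpeedWrapper(StubSensor([10.0, 20.0, 999.0]))
readings = [wrapper.read() for _ in range(3)]
print(readings, wrapper.fault_detected)
```

Note that the wrapper both contains the failure (safe fallback value) and makes it visible (the fault flag), which is what prevents an undetected failure of the system.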

Reference: [2]

6.1.3 Sampling rate check

This method is applicable for both non-executable and executable models. Models can describe software at different levels of abstraction, and the methods used to verify the sampling rates depend on this level.

Aim: The purpose of this method is

- to verify that sampling rates are appropriate for the application.

- to verify that the sampling rates of different parts of the model are chosen with respect to each other.

Description: Study the process under control and establish the required sampling rates. Verify that the model meets the response time required for the application.
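A minimal sketch of such a check follows; the function name, margin and example frequencies are hypothetical. It applies the Nyquist criterion with the common control-design rule of thumb of a larger oversampling margin:

```python
# Sketch of a sampling-rate check (illustrative numbers).

def check_sample_rate(f_sample_hz, f_signal_hz, margin=10):
    """True if the sample rate exceeds the signal bandwidth by `margin`.

    Nyquist requires margin > 2; control design often uses 10-20.
    """
    return f_sample_hz > margin * f_signal_hz

# Process dynamics up to ~50 Hz sampled at 1 kHz (20x oversampling):
print(check_sample_rate(1000.0, 50.0))   # adequate
print(check_sample_rate(80.0, 50.0))     # below even the Nyquist rate
```

The required signal bandwidth comes from studying the process under control, as described above; the response-time requirement of the application may impose a still higher rate.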

Reference: Simulink Help, http://www.mathworks.com/

6.1.4 Interactions between different sampling rates check

This method is applicable for both non-executable and executable models. Models can describe software at different levels of abstraction, and the methods used to verify the sampling rates depend on this level.
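One interaction check that can be sketched for the aim named in the heading: when different parts of a model run at different rates, a common modelling rule (used e.g. for deterministic Simulink rate transitions) is that the slower period should be an integer multiple of the faster one, so that transfers between the parts happen at well-defined instants. The function name and tolerance below are hypothetical:

```python
# Sketch of a rate-compatibility check between two model parts.

def rates_compatible(fast_period_s, slow_period_s, tol=1e-9):
    """True if the slow period is an integer multiple of the fast period."""
    ratio = slow_period_s / fast_period_s
    return abs(ratio - round(ratio)) < tol

print(rates_compatible(0.001, 0.010))    # 1 ms and 10 ms: compatible
print(rates_compatible(0.001, 0.0015))   # 1 ms and 1.5 ms: not compatible
```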
