Mälardalen University, Västerås, Sweden

Thesis for the Degree of Master of Science (120 credits) in Computer Science with Specialization in Software Engineering 30.0 credits

IMPROVING TESTING

REUSABILITY AND AUTOMATION

FOR SOFTWARE PRODUCT LINES

Fabio Di Silvestro

fdo19001@student.mdh.se

Examiner: Federico Ciccozzi

Mälardalen University, Västerås, Sweden

Supervisors: Alessio Bucaioni

Mälardalen University, Västerås, Sweden

Henry Muccini

University of L’Aquila,

L’Aquila, Italy

Company Supervisor: Inderjeet Singh

Bombardier Transportation, Västerås, Sweden


Abstract

Software product lines are widely used in industrial environments for developing complex software systems. One of the main advantages deriving from the adoption of this software engineering development methodology is increased reusability. In fact, software product lines offer convenient means for representing different products belonging to the same family, as well as different families of products, by grouping shared functionalities and highlighting differences. Although software product lines inherently improve the design and development of complex software systems, they usually require ad-hoc strategies for testing such systems. To this end, testing strategies for software product lines need to account for the extensive amount of artefact reuse and for possible differences among the reused artefacts. In fact, even though the products of a software product line might share several functionalities, the interfaces of such functionalities might differ due to the specificities imposed by the products or their designs.

In this thesis, we propose a testing approach for software product lines, which allows testing functionalities that are shared among the products while accounting for the potential heterogeneity of the exposed interfaces. The main contribution of this approach is the definition of generic test-cases from which product-specific test-scripts are automatically generated, so as to enhance the reusability of the handcrafted artefacts and overcome the issue of different interfaces. What is more, the proposed approach discloses the opportunity to reduce the time required to develop testing artefacts.


Acknowledgements

This work would not have been possible without the help and the support of several people. I am deeply indebted to Alessio Bucaioni and Inderjeet Singh for giving me the possibility to carry out this thesis and for their remarkable guidance. I wish to thank Thorvaldur Jochumsson and Henry Muccini for their contributions and participation. I am also grateful to my examiner, Federico Ciccozzi, for his essential advice.

Special thanks to my wonderful family for believing in me, to my invaluable friends for never leaving me alone, and thank you.


Table of Contents

1. Introduction 1

2. Background 3

2.1. Software Product Lines . . . 3

2.2. Software testing . . . 3

2.2.1. Testing techniques . . . 3

2.2.2. Testing levels and phases . . . 4

2.3. Model-Driven Engineering . . . 5

2.3.1. MDE Architecture . . . 5

2.3.2. Model transformation . . . 5

2.3.3. Domain-Specific (Modelling) Languages . . . 6

3. Research Formulation 7

3.1. Research Goal and Questions . . . 7

3.2. Research Contributions and Outcomes . . . 7

3.3. Research Methodology . . . 8

4. A Model-based Testing Approach for SPLs 10

4.1. Generic metamodel . . . 11

4.2. ProductSpecific metamodel . . . 12

4.3. Weaving metamodel . . . 13

4.4. TestSuite DSL . . . 14

4.5. Model-to-text transformation . . . 15

5. The Aventra SPL: A Use Case from the Railway Domain 17

6. Validation 22

7. Discussion 24

8. Related Work 26

9. Conclusion and Future Work 28


List of Figures

1 Representation of smoke testing in the development process. . . 4

2 Representation of the layered architecture in MDE. . . 5

3 Selected research methodology . . . 8

4 Flowchart of the testing process with the proposed methodology . . . 10

5 Class diagram representation of the Generic metamodel . . . 11

6 Class diagram representation of the ProductSpecific metamodel . . . 12

7 Class diagram representation of the Weaving metamodel . . . 13

8 Common functionalities representation with the Generic metamodel . . . 17

9 Product-specific details representation for EAA product with the ProductSpecific metamodel . . . 17

10 Product-specific details representation for SWR product with the ProductSpecific metamodel . . . 18

11 Product-specific details representation for LOT product with the ProductSpecific metamodel . . . 18

12 Objects typing with the Weaving metamodel . . . 19

13 Trend estimation of effort required to create testing artefacts with respect to the system complexity. . . 24

List of Tables

1 Example of a generic test-case template. . . 3

2 Relations between RQs and RCOs . . . 7

Listings

1 Excerpt of the DSL grammar . . . 14

2 Test-cases definition for the OpenDoor example function with TestSuite DSL . . . 15

3 Acceleo template excerpt of the model-to-text transformation . . . 15

4 Excerpt of the test-cases definition for the TBC Response function using the TestSuite DSL . . . 20

5 Excerpt of the test-script produced by the model-to-text transformation for the TBC Response function (EAA product) . . . 20

6 Excerpt of the test-script produced by the model-to-text transformation for the TBC Response function (SWR product) . . . 21

7 Excerpt of the test-script produced by the model-to-text transformation for the TBC Response function (LOT product) . . . 21

1. Introduction

Nowadays, an increasing number of companies use the Software Product Line (SPL) development methodology¹ for sharing functionalities and assets among different software products, hence for developing complex software systems more efficiently [1]. SPLs represent a convenient means for increasing reusability in those contexts where software systems are built by adapting already existing software functionalities rather than from scratch [2]. For instance, in the railway transportation domain, trains belonging to the same product family share so-called core functionalities, e.g. the Traction/Brake Controller (TBC) responsible for actuating the propulsion or brake systems based on the controller system inputs.

Even though SPLs can inherently ease the design of complex software systems, they require ad-hoc strategies for the testing of such systems. On the one hand, strategies for SPL testing need to account for the considerable amount of artefact reuse while, on the other hand, they need to account for a degree of heterogeneity among shared artefacts [3]. In fact, even though software systems might share several functionalities, the interfaces of such functionalities might differ due to the specificities imposed by the products or their designs. For instance, in the case of Bombardier Transportation², the above-mentioned TBC function is deployed in all the products belonging to the Aventra train family. However, due to a different hardware architecture, two products from the Aventra family expose a set of signals which differs from the set of signals exposed by a third product of the family. Even though the heterogeneity of interfaces does not represent an issue during unit testing (where each functionality is tested in isolation), it might represent an obstacle from integration testing (where different interfaces might lead to different interactions) onwards. Currently, there exist a number of testing strategies for maximising the reuse of testing artefacts [4, 5, 6, 7]. These strategies stretch from systematic to opportunistic reuse management [4]. The literature also presents strategies to design testing artefacts purposely for reuse [5] and reuse-oriented model-based techniques [6, 7]. However, there is limited to no support for reuse in the presence of heterogeneous interfaces, and reuse-driven approaches are often not suitable for safety-critical systems [8]. In our experience, such a lack of support forces testers to write different test-cases accounting for the differences within the interfaces. This negatively affects reuse and implies a low degree of maintainability due to the high number of handcrafted artefacts. In this context, it would be beneficial to have ad-hoc SPL testing strategies, which could support a high degree of artefact reuse while still accounting for the potential heterogeneity of interfaces exposed by common functionalities.

In this thesis, we provide an approach for SPL testing, which contributes to solving the issue of the heterogeneous interfaces exposed by the functions that are common to all the products of a family, while still allowing artefact reuse. In particular, the proposed approach allows the definition of generic test-cases specifying interfaces with a common representation valid for all the SPL products. Starting from these generic test-cases, the approach allows for the automatic generation of product-specific test-scripts, obtaining an equivalent degree of testing power with a reduced demand on test engineers. For example, in the case of the above TBC function, the proposed approach would allow for the specification of a generic test-case from which three (one for each train) product-specific test-scripts are automatically generated. The approach uses Model-driven Engineering (MDE) techniques, namely models, metamodels and model transformations. Models and metamodels are used to formally represent and store SPL products and functionalities, establish a common representation for the function interfaces and specify generic test-cases. Model transformations are used to automatically generate test-scripts tailored to each product from the information contained in the generic test-cases. The main contribution of the approach is a solution to the challenge of heterogeneous interfaces in SPL common functions, together with a systematic way to enhance the reuse of testing artefacts. Moreover, since the product-specific test-scripts are automatically generated, a by-product contribution is that the testing process might be anticipated to earlier stages as compared to manual testing. Evidence shows that early testing has the potential to reveal possible defects beforehand, bringing economic benefits [9]. We validate the applicability of the proposed approach, the reusability of the handcrafted testing artefacts and the reduced development effort using a real industrial use case from Bombardier Transportation. In detail, the context is a product family composed of three products. Our solution is used to specify a subset of real test-cases executed against the product family.

¹ In the remainder of this thesis, we refer to the Software Product Line development methodology simply as Software Product Lines, or Software Product Families.

The remainder of this thesis is organized as follows. Section 2. describes the background for this thesis. Section 3. explains the research plan, including the research goal, research questions and contributions. Section 4. presents the proposed approach, its core components and the workflow for adopting it. Section 5. describes the industrial case study where we applied the proposed approach. Section 6. is dedicated to the validation of the proposed approach and the discussion of its strengths and limitations. In Section 8., we analyse and compare our solution with the existing research documented in the literature. Eventually, Section 9. closes the thesis with future work and final remarks.

2. Background

In this section, we introduce the basic concepts used in the remainder of this thesis.

2.1. Software Product Lines

SPL is a software engineering development method used to develop closely related products; a SPL is composed of mandatory as well as variation elements. The immutable skeleton of a SPL is made up of all the mandatory elements, whereas variation elements strictly define the diversities between products and their specific architectures [10]. In industrial environments, companies use SPLs for systematically managing variabilities within a collection of products (often referred to as a family of products) [11]. Two key aspects, domain engineering and application (product) engineering, form the complete SPL development [11]. The product engineering phase focuses on the specific requirements of single products. The domain engineering phase focuses on building core assets. These assets represent functionalities implementing requirements which are common to all products belonging to the family. Sharing and reusing core assets among several products improves the productivity and the quality of software development [12]. As a consequence, software reuse is crucial within SPLs. The advantages of adopting a SPL development are not limited to software quality, but they extend to reusability and costs. In this thesis, we provide testing strategies for managing the variabilities of SPLs and the reusability of software functions and test-cases.

2.2. Software testing

Software testing represents one of the fundamental aspects of software development [13]. In general, testing activities are costly and demanding; estimates place their cost at between 20% and 80% of the total project cost [13]. Among others, one of the main purposes of software testing is to check the artefact that has been developed for conformity against the elicited requirements. The building blocks of testing are called test-cases. Each test-case describes a particular check or validation step that is going to be executed, or has already been executed, on the software. Test-cases can additionally be considered as documentation, and the results of the executed checks are recorded in formal reports. Table 1 shows a generic template that can be used to specify a test-case report and its information.

Field | Description

Test-Case ID | Unique identifier for the test-case

Test-Case Description | A brief description of the test-case and its objectives

Test Steps | The formal steps that must be followed to perform the test

Expected Result | The correct output according to the requirements

Pre-requisites | Possible conditions that have to be satisfied in order to perform the test

Pass/Fail | Whether the test passed or not

Notes | Additional notes on the test-case or its execution

Table 1: Example of a generic test-case template.
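As a purely illustrative sketch (not part of the thesis tooling), the fields of Table 1 could be captured as a simple C# record when test-case reports are handled programmatically; the type and member names below are hypothetical.

// Hypothetical, illustrative representation of the test-case template in Table 1.
public record TestCaseReport(
    string TestCaseId,        // Unique identifier for the test-case
    string Description,       // Brief description of the test-case and its objectives
    string[] TestSteps,       // Formal steps that must be followed to perform the test
    string ExpectedResult,    // Correct output according to the requirements
    string[] Prerequisites,   // Conditions that have to be satisfied before the test
    bool? Passed,             // Whether the test passed (null if not yet executed)
    string Notes);            // Additional notes on the test-case or its execution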

In this thesis, we propose strategies to improve functional testing for SPLs. Even though our strategies could be applied at various levels, the dynamic validation phase and the PoC have been based on regression and smoke tests. Thus, this subsection is dedicated to exploring those concepts.

2.2.1. Testing techniques

A number of software testing techniques exist, ranging from functional to non-functional testing, through unit and acceptance testing. In functional (or specification-based) testing, test-cases are constructed starting from the system requirements. The objective of functional testing is to assure that the system behaves according to the requirements, properly fulfilling the specifications. In general, this kind of testing includes four phases: the identification of a data set used to feed the system, the evaluation of the requirements to determine the expected output, the actual execution of the test-case (either manually or automatically), and the comparison between the system's output and the expected output. Functional testing is often categorized as a "black-box" testing technique, meaning that the tester does not need to know the actual implementation of the system under test to be able to perform the test [14].

2.2.2. Testing levels and phases

Software testing can be performed at various levels of detail in the development, e.g. unit, integration, and system [15]. Each kind of test has a specific purpose and should be executed by different people or teams [16]. The lowest level at which software testing can be performed is unit testing, where the goal is to ensure that the smallest testable piece of software meets its functional specifications.

The purpose of integration testing is to detect potential defects that can arise combining two or more units. Even though each unit is independently tested, their combination could entail an incorrect behaviour with respect to the requirements. Common problems found with integration test techniques are related to communication or sequence issues. There are two main strategies to perform integration testing: incremental and non-incremental.

Adopting a non-incremental technique means that units are all combined and tested in one single step, e.g. the "big-bang" strategy. With incremental integration, units are gradually linked together with a top-down or a bottom-up strategy. In contrast to the "big-bang" technique, these incremental approaches ease the discovery of potential defects, but they usually require "scaffolding", in other words, the development of fake parts of the system, as sketched below.
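As a minimal illustration of such scaffolding (a sketch with hypothetical names, not an artefact from this thesis), a stub can stand in for a unit that is not yet integrated so that the unit under test can be exercised and its interactions recorded; C# is used here because it is the target language of the test-scripts discussed later in this thesis.

using System.Collections.Generic;

// Hypothetical interface of a unit that is not yet available for integration.
public interface IBrakeActuator
{
    void Apply(int effort);
}

// "Scaffolding": a fake implementation that records the calls it receives,
// so that a controller unit can be integration-tested against it.
public class BrakeActuatorStub : IBrakeActuator
{
    public List<int> AppliedEfforts { get; } = new List<int>();

    public void Apply(int effort) => AppliedEfforts.Add(effort);
}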

During system testing, the system is checked as a whole. This testing phase may also require the actual hardware where the system is going to be deployed. The purpose of system testing is to verify that the developed system delivers all the requirements demanded by the customer and to detect potential defects that can arise when using the complete system, or that were not noticed during unit and integration testing.

Figure 1: Representation of smoke testing in the development process.

Another important testing technique is regression testing. It is executed to verify modified or maintained pieces of software. The main purpose of regression testing is to detect whether new defects are introduced in previously tested code [17]. In [18], the authors mention several ways to select the test-cases needed for performing regression testing (e.g. complete retest, focus on functional test-cases, and smoke test). In detail, the specific purpose of smoke testing (or build verification testing) is to verify only crucial functionalities that can cause severe failures [19]. It is used as a preliminary step before running all the detailed test-cases, as shown in Figure 1. We propose a testing approach that can be categorized as functional, since we focus on testing the functionalities stated in the requirements. Furthermore, the industrial validation used smoke tests as a case study.

2.3. Model-Driven Engineering

Model-Driven Engineering (MDE) is a software engineering approach that shifts the focus of software development from code to models [20]. Analogously to the well-known object-oriented principle of "everything is an object", in MDE this statement becomes "everything is a model".

A model is a formal artefact which provides for abstraction and separation of concerns [21]. In fact, models serve to focus on certain aspects of software systems while hiding less relevant ones. MDE is built on three components: models, metamodels, and model transformations. Figure 2 puts these three entities in relation, shaping the so-called four-level architecture.

2.3.3. MDE Architecture

At the lowest level (M0), we have the system we are building. A "described by" relation links the system at the M0 level with its describing model, or set of models, at the M1 level. Metamodels, at the M2 level, define the collection of syntactic and semantic rules and constraints that models must adhere to. Similarly to object-oriented programming, where the software engineer creates an instance of a class to be able to use its capabilities, the modeller instantiates "meta elements" to use them, formally respecting the limitations and constraints defined in the metamodel. Just as models conform to metamodels, metamodels conform to metametamodels (M3 level). Finally, metametamodels can be self-defined, thus they represent the highest level of the architecture.

Figure 2: Representation of the layered architecture in MDE.
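The object-oriented analogy mentioned above can be made concrete with a small sketch: a class plays the role of a metamodel element that constrains its instances, while an object plays the role of a conforming model element. This is an illustrative analogy only, with hypothetical names, and not an excerpt of the thesis artefacts.

// Illustrative analogy: a class constrains its instances much as a metamodel
// element (M2) constrains model elements (M1). Names are hypothetical.
public class DoorFunction            // plays the role of a "meta element"
{
    public string Name { get; set; }
    public bool Locked { get; set; }
}

public static class Analogy
{
    // Plays the role of a "model element" conforming to the definition above.
    public static readonly DoorFunction CabDoor =
        new DoorFunction { Name = "CabDoor", Locked = true };
}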

2.3.2. Model transformation

Within MDE, model transformations are a way to automatically manipulate models. Formally, a model transformation is the process of manipulating a source model Ms into a target model Mt, following a well-defined set of rules. By embracing the MDE paradigm, the software development process can be regarded as the process of transitioning from general models towards more concrete ones (until the code is generated) using model transformations. According to the principle "everything is a model", even model transformations are models, which conform to their metamodels, i.e., model transformation languages. Model weaving is a model transformation technique that uses a weaving model for tracing the relations between Ms and Mt that are followed in the execution of the transformation [22]. In other words, the weaving model is a syntactic mapping between the elements belonging to at least two distinct models. From a semantic point of view, the weaving model can assume different meanings, depending on the context where it is used. In our work, we see the weaving as a way to "type" product-specific elements to generic ones, so as to simulate an intermediate layer that contains the knowledge to translate a generic element into a specific, executable one.
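To give a rough intuition of this typing (a simplified sketch with hypothetical names, not one of the thesis artefacts), a weaving model can be thought of as a mapping from each generic element to the product-specific element that realises it in every product:

using System;
using System.Collections.Generic;

public static class WeavingSketch
{
    public static void Main()
    {
        // Illustrative view of a weaving model: generic element -> (product -> specific element).
        var weaving = new Dictionary<string, Dictionary<string, string>>
        {
            ["GenericSpeedSignal"] = new Dictionary<string, string>
            {
                ["ProductX"] = "SPD_REF_X",
                ["ProductY"] = "SPEED_Y"
            }
        };

        // A transformation executing a generic step looks up the concrete element per product.
        Console.WriteLine(weaving["GenericSpeedSignal"]["ProductX"]); // prints SPD_REF_X
    }
}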


2.3.3. Domain-Specific (Modelling) Languages

Domain-specific languages and domain-specific modelling languages (DSLs) are also fundamental concepts related to MDE. A DSL is a "programming language or executable specification language that offers, through appropriate notations and abstractions, expressive power focused on, and usually restricted to, a particular problem domain" [23]. DSLs are also defined through metamodels that describe the abstract syntax of such languages. The abstract syntax determines the static semantics, the possible constructs and their relationships (rules and constraints) of a language. The concrete syntax expresses the interface used by the modeller to construct model instances. Typically, there are two main kinds of concrete syntax: textual and visual (or graphical). A textual syntax allows the creation of model instances using a textual editor, while a graphical syntax replaces the textual representation with a visual editor. We provide a DSL with a textual concrete syntax.

3. Research Formulation

In this section, we describe the research in terms of Research Goal (RG), Research Questions (RQs), Research Contributions (RCOs), and Research Methodology.

3.1. Research Goal and Questions

Efficient SPL testing requires ad-hoc strategies for verifying common and product-specific functionalities. The RG of this thesis is to improve the testing of SPLs by defining strategies for increasing test-case reusability and automation. In particular, we break down the above RG into the following RQs:

RQ1: Within SPLs, can interfaces of shared functionalities be represented in a common format?

In SPLs, it is common that functionalities are shared among different products of the family. However, given the specificity of each product, the interfaces of such functions might differ. In this context, it might be beneficial for testing efficiency to categorize the differences of such interfaces so as to represent them with a common format.

RQ2: Within SPLs, can we define a single test-case for verifying all the instances of the shared functionalities at once?

Regardless of their interfaces, shared functionalities expose the same behaviour. In this context, it might be beneficial for testing efficiency to use the same test-case for testing all the instances of the shared functionalities.

3.2. Research Contributions and Outcomes

The main outcomes of this thesis are as follows. We provide a comprehensive overview of the current state of the art and practice on testing strategies for SPLs. We present such an overview in Section 8., where we analyse different approaches proposed in the literature and highlight their advantages and disadvantages. Moreover, for each identified strategy, we offer a comparison against our approach emphasizing the dissimilarities. We provide a model-based approach for SPL testing, which allows testers to define generic test-cases (i.e., avoiding product-specific details) from which product-specific test-scripts are automatically generated. The definition of generic test-cases is enabled by a DSL, while the test-script generation is entrusted to a model-to-text transformation. Together with Bombardier Transportation³, we provide a Proof of Concept (PoC) where the proposed approach is applied within the railway domain. Section 5. provides more details on the PoC. The proposed model-based approach for SPL testing consists of the following RCOs. Table 2 maps the RCOs to the RQs.

Research Contribution | RQ1 | RQ2
RCO1 | X | X
RCO2 |   | X
RCO3 |   | X

Table 2: Relations between RQs and RCOs

RCO1: Mechanism for representing interfaces of shared functionalities. This contribution is two-fold. On the one hand, it allows engineers to define generic interface specifications, thus a common format to represent the functionalities. On the other hand, it allows engineers to establish relationships between generic and multiple product-specific interfaces. These relationships will be used by the model transformation to translate generic interfaces into product-specific ones, so as to enable the creation of executable test-scripts. This contribution consists of a set of three metamodels: Generic metamodel, ProductSpecific metamodel, and Weaving metamodel, realised as Ecore models within the Eclipse Modeling Framework⁴ (EMF). The Generic metamodel provides for the representation of generic interfaces and steps. These generic elements are common to the whole SPL and are used by the test engineer for the definition of generic test-cases.

³ Bombardier Transportation, https://www.bombardier.com/en/transportation.html
⁴ Eclipse Modeling Framework (EMF), https://www.eclipse.org/modeling/emf/

The ProductSpecific metamodel provides for the representation of the products and their specific elements. These data are crucial to keep the connection with the real systems, as they capture the actual way to interact with the functions. The Weaving metamodel provides for the typing of product-specific elements to generic ones. The concept of typing serves as a link between Generic and ProductSpecific models. The information in the Weaving model is used by the model transformation to generate test-scripts from generic test-cases.

RCO1 provides answers to both RQ1 and RQ2, as shown in Table 2.

RCO2: Mechanism for writing generic test-cases. This contribution allows test engineers to write generic test-cases, which are independent from the product-specific function interfaces. This contribution consists of a DSL, called TestSuite, realised using Xtext⁵ within EMF. The TestSuite DSL makes use of the generic elements defined in the Generic model, allowing test engineers to create generic test-cases valid for all the products in the SPL. RCO2 provides answers to RQ2, as shown in Table 2.

RCO3: Mechanism for test-script generation. Starting from the generic test-cases and interfaces, this contribution provides for the automatic generation of product-specific test-scripts. The automatic generation is entrusted to a model-to-text transformation, called TestSuite Transformation. This contribution is realised using Acceleo⁶. TestSuite Transformation follows a template to generate the code, where it is possible to specify fixed elements (imposed by the target programming language) and dynamic elements that are evaluated depending on the input models. RCO3 provides answers to RQ2, as shown in Table 2.

3.3. Research Methodology

In the literature, there exist a number of research methodologies for conducting research in software engineering. As this thesis is done in collaboration with Bombardier Transportation, we build on the methodology described in [24], which focuses on collaborative industry-academia research projects where technology transfer is a crucial aspect. To this end, the methodology in [24] aims at preparing and maximising the technology transfer using a multi-step validation process. In particular, we lightened the industrial validation process by performing it by means of an industrial use case.

Figure 3: Selected research methodology

We start by eliciting the business needs of Bombardier Transportation. We use these needs for driving a preliminary investigation of the state of the art and practice with the aim of identifying a RG. We break down the RG into smaller RQs. For each RQ, we inspect the state of the art and practice for identifying candidate solutions. If no solutions are available, then we proceed with the definition of a candidate solution. Each candidate solution is validated using the above-mentioned two-step process. The first step is the so-called validation in academia. In this step, each candidate solution is validated within the academic environment. The second step is the so-called use case validation. In this step, each candidate solution is first presented to practitioners from Bombardier Transportation with the aim of gathering feedback about its correctness and feasibility. Then, each candidate solution is validated with an industrial application, too. In our case, we use a set of smoke tests actually executed by the company during the release process. The industrial case study is presented in Section 5. It is important to note that the selected research methodology is iterative, which means that the findings acquired during the definition of a candidate solution might be used for proposing new or refining existing challenges and solutions. More details on the validation steps are given in Section 6.

⁵ Xtext, https://www.eclipse.org/Xtext
⁶ Acceleo, https://www.eclipse.org/acceleo

4. A Model-based Testing Approach for SPLs

In this section, we describe the proposed approach in terms of its steps and enabling artefacts. The approach aims at allowing testers to specify generic test-cases (i.e., test-cases valid for the software functions common to all the SPL products) from which product-specific test-scripts, accounting for software function interface differences, are generated. Figure 4 provides a graphical representation of the approach. It consists of two main stages, which are preparation and execution.

Figure 4: Flowchart of the testing process with the proposed methodology

In the preparation phase, engineers are required to describe the software functionalities of the product family, as well as those of the specific products belonging to the family. We call the former generic functions and the latter product-specific functions. Additionally, they are required to establish links between generic and product-specific functions. The tasks of the preparation phase are enabled by three metamodels, namely the Generic metamodel, the ProductSpecific metamodel and the Weaving metamodel, where the Generic metamodel allows engineers to specify generic functions while the ProductSpecific metamodel and the Weaving metamodel allow engineers to specify product-specific functions and the relationships among generic and product-specific functions, respectively. More details on the Generic metamodel, ProductSpecific metamodel and Weaving metamodel are given in Sections 4.1., 4.2., and 4.3. In the execution phase, testers are required to describe generic test-cases for the elements defined in the Generic model(s). A model transformation then transforms the defined generic test-cases into product-specific test-scripts using the information contained in the Weaving model. The preparation phase is pivotal for the execution phase, where the generic test-cases and the product-specific test-scripts are created using the information specified in the preparation phase. What is more, the models created in the preparation phase have to be modified according to system updates, to meet potential changes and/or extensions to the functionalities.

In the following sections, we provide more details on the enabling artefacts of the proposed approach.

4.1. Generic metamodel

The Generic metamodel allows for representing software functionalities which are common to all the SPL products (identified with 1 in Figure 4). The benefit of such a representation is two-fold. On the one hand, it is a crucial step towards the achievement of a common representation of shared software functions. On the other hand, it allows testers to define test-cases on the generic functions instead of the product-specific ones. The Generic metamodel is part of RCO1 and RCO2.

Figure 5: Class diagram representation of the Generic metamodel

Figure 5 provides for a class diagram representation of the Generic metamodel. The root metaclass is Family, which acts as a container for software functions and has two attributes: name and description. It contains one or more GenericFunction metaclasses, which specify the software functions shared among the product family. GenericFunction has two attributes namely name and description and contains one or more GenericStep metaclasses. GenericStep metaclasses are used to provide an abstract representation of function steps and signals and have one attribute, name. GenericStep is an abstract metaclass and can be either a GenericInput or a GenericOutput.

For instance, let us suppose we would like to model the functionality in charge of controlling the opening of a train door for a simple SPL composed of two products: product A and product B. The function is a core one, meaning that it is shared among both products of the family. The requirements state that the behaviour of the functionality is common, but that the function interfaces for product A and product B are different due to the different product designs. In particular, their signals have different names, e.g., Door and State for product A and Locked and State for product B. In this context, the engineer can use the Generic metamodel for creating a standardized version of the function common to both products, i.e., OpenDoor. OpenDoor would have two generic steps (modelling the signals), DoorLocked and DoorState, being a generic input and a generic output, respectively. The interaction with the functionality occurs by operating on the value of the input step, while the correct execution is verified by checking the value of the output step.
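To make the structure described above more tangible, the following sketch mirrors the Generic metamodel as plain C# classes and instantiates the OpenDoor example. This is only an illustrative analogy of the class diagram in Figure 5: the thesis realises the metamodel as an Ecore model within EMF, and the family name used here is hypothetical.

using System.Collections.Generic;

// Illustrative C# analogy of the Generic metamodel (realised as Ecore/EMF in the thesis).
public abstract class GenericStep { public string Name; }
public class GenericInput : GenericStep { }
public class GenericOutput : GenericStep { }

public class GenericFunction
{
    public string Name;
    public string Description;
    public List<GenericStep> Steps = new List<GenericStep>();
}

public class Family
{
    public string Name;
    public string Description;
    public List<GenericFunction> Functions = new List<GenericFunction>();
}

public static class OpenDoorGenericModel
{
    // The OpenDoor example: one generic input (DoorLocked) and one generic output (DoorState).
    public static Family Build() => new Family
    {
        Name = "ExampleFamily", // hypothetical family name
        Functions =
        {
            new GenericFunction
            {
                Name = "OpenDoor",
                Description = "Controls the opening of a train door",
                Steps =
                {
                    new GenericInput { Name = "DoorLocked" },
                    new GenericOutput { Name = "DoorState" }
                }
            }
        }
    };
}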

4.2. ProductSpecific metamodel

The ProductSpecific metamodel allows for representing single products within a SPL, their specific interfaces and steps (2 in Figure 4). Representing specific products within a SPL is fundamental for establishing links between the generic functions and the actual ones, which will be used by the model-to-text transformation to convert generic test-cases into executable test-scripts containing the actual product-specific signals. The ProductSpecific metamodel is part of RCO1 and RCO2.

Figure 6: Class diagram representation of the ProductSpecific metamodel

Figure 6 provides a class diagram representation of the ProductSpecific metamodel. It has some commonalities with the Generic metamodel, though it allows the specification of more information for each product of the SPL.

The root metaclass is Family, which acts as a container for the products and all the product-specific elements. It has two attributes, name and description, and contains one or more Product metaclasses, which specify the products belonging to the family. Product has one attribute, name, and refers to one or more ProductSpecificFunction metaclasses. ProductSpecificFunction metaclasses are used to represent product-specific functions; they have one attribute, name, and refer to one or more ProductSpecificStep metaclasses and to one or more Product metaclasses. ProductSpecificStep metaclasses are used to specify function steps as signal-system pairs with a product-specific level of detail and have one attribute, name. Moreover, ProductSpecificStep might refer to Signal and System metaclasses. Signal metaclasses have one attribute, name, whose function is to store the actual name of the signal involved in the step. System metaclasses have an attribute, name, and allow the definition of the location where the signal is deployed. A ProductSpecificStep is an abstract metaclass and can be either a ProductSpecificInput or a ProductSpecificOutput.

In the context of the above-mentioned example, the engineer can use the ProductSpecific metamodel to model the two products of the family, i.e., product A and product B. The model would contain two Product elements, each of which refers to a product-specific function: OpenDoor A and OpenDoor B, respectively. The former refers to an input step, in turn referring to the signal Door and the system 123.123.123.123, and an output step referring to the signal State and the same system 123.123.123.123. Similarly, OpenDoor B refers to an input step referring to the signal Locked and the system 123.123.123.123, and an output step referring to the signal State and the system 123.123.123.123. This example demonstrates how the metamodel eases the definition of the product-specific elements and improves maintainability (as common objects are contained in the root metaclass and they do not need to be specified again, but only referenced). In case of updates, the engineers have to modify one object only and the update is propagated to all the referencing elements. There might be the case that all the products referencing a product-specific function disappear and the function remains unused. To cope with this potential issue, we reference the product from the function as well. This reference has its lower bound set to 1; therefore, if it becomes empty, the model is no longer compliant with the metamodel and the engineer must intervene by removing the function.
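Continuing the illustrative C# analogy used for the Generic metamodel (again, only a sketch of the Ecore-based artefacts, with hypothetical type and step names), the product-specific side of the OpenDoor example could be populated as follows.

using System.Collections.Generic;

// Illustrative C# analogy of the ProductSpecific metamodel (realised as Ecore/EMF in the thesis).
public class Signal { public string Name; }
public class SystemElement { public string Name; }   // location where the signal is deployed

public abstract class ProductSpecificStep
{
    public string Name;
    public Signal Signal;
    public SystemElement System;
}
public class ProductSpecificInput : ProductSpecificStep { }
public class ProductSpecificOutput : ProductSpecificStep { }

public class ProductSpecificFunction
{
    public string Name;
    public List<ProductSpecificStep> Steps = new List<ProductSpecificStep>();
}

public static class OpenDoorProductSpecificModel
{
    // Product A exposes the signals Door (input) and State (output) on system 123.123.123.123.
    public static ProductSpecificFunction BuildOpenDoorA()
    {
        var system = new SystemElement { Name = "123.123.123.123" };
        return new ProductSpecificFunction
        {
            Name = "OpenDoor_A",
            Steps =
            {
                new ProductSpecificInput  { Name = "DoorLocked_A", Signal = new Signal { Name = "Door" },  System = system },
                new ProductSpecificOutput { Name = "DoorState_A",  Signal = new Signal { Name = "State" }, System = system }
            }
        };
    }
}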

4.3. Weaving metamodel

The Weaving metamodel allows for specifying links between elements of the ProductSpecific and Generic models, hence for typing product-specific elements to generic ones (3 in Figure 4). The information stored using the Weaving metamodel is used by the model-to-text transformation to translate the generic information defined in the test-cases into product-specific elements for the test-scripts. The Weaving metamodel is part of RCO1 and RCO2.

Figure 7: Class diagram representation of the Weaving metamodel

Figure 7 provides a class diagram representation of the Weaving metamodel. The root metaclass is Weaving, which has two attributes, name and description, and contains one or more FunctionLink metaclasses. FunctionLink metaclasses type one or more ProductSpecificFunction metaclasses to one GenericFunction and contain a list of InputLink and OutputLink metaclasses. The former are used to type one or more ProductSpecificInput metaclasses to one GenericInput metaclass, while the latter type one or more ProductSpecificOutput metaclasses to one GenericOutput metaclass.

In the context of the above example, the engineer creates a FunctionLink tagging the two product-specific functions OpenDoor A and OpenDoor B to the generic OpenDoor function. The FunctionLink instance would contain an InputLink tagging 123.123.123.123-Door and 123.123.123.123-Locked to the generic input DoorLocked, and an OutputLink tagging the corresponding product-specific State steps to the generic output DoorState.
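The sketch below renders these links in the same illustrative C# style used in the previous sections; it only approximates the Weaving metamodel (Weaving, FunctionLink, InputLink and OutputLink are realised as Ecore elements in the thesis), and the product-specific steps are identified here by simple system-signal strings.

using System.Collections.Generic;

// Illustrative sketch of the weaving links for the OpenDoor example.
public class InputLink  { public string Generic; public List<string> ProductSpecific = new List<string>(); }
public class OutputLink { public string Generic; public List<string> ProductSpecific = new List<string>(); }

public class FunctionLink
{
    public string GenericFunction;
    public List<string> ProductSpecificFunctions = new List<string>();
    public List<InputLink> InputLinks = new List<InputLink>();
    public List<OutputLink> OutputLinks = new List<OutputLink>();
}

public static class OpenDoorWeavingModel
{
    public static FunctionLink Build() => new FunctionLink
    {
        GenericFunction = "OpenDoor",
        ProductSpecificFunctions = { "OpenDoor_A", "OpenDoor_B" },
        InputLinks =
        {
            // Both product-specific door signals are typed to the generic input DoorLocked.
            new InputLink
            {
                Generic = "DoorLocked",
                ProductSpecific = { "123.123.123.123-Door", "123.123.123.123-Locked" }
            }
        },
        OutputLinks =
        {
            // The product-specific State steps are typed to the generic output DoorState.
            new OutputLink
            {
                Generic = "DoorState",
                ProductSpecific = { "123.123.123.123-State (product A)", "123.123.123.123-State (product B)" }
            }
        }
    };
}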

4.4. TestSuite DSL

The TestSuite DSL allows for the definition of test-cases on the SPL (4 in Figure 4), avoiding the definition of single test-scripts for each product of the family. The benefits of using the TestSuite DSL are several, including the following. It allows the creation of test-scripts without requiring testers to know the details (e.g., names of the signals, interfaces, etc.) of all the products of the family. It discloses the opportunity of saving development resources. Eventually, as the TestSuite DSL grammar has been defined to be close to natural language, even practitioners without a software engineering background can understand the test-case specifications. The TestSuite DSL contributes to RCO2. Listing 1 provides an excerpt of the TestSuite DSL grammar.

...

TestSuite: (testCases += TestCase)* (productTestCases += ProductTestCase)*;

TestCase: 'TestCase' name=ID 'checks' genericFunction=ID productException += ProductException* '{'
    (steps += Step)*
'}';

ProductException: 'except' 'Product' productName=ID;

Step: Set | Check | Force | Unforce;

Set: 'Set' 'Signal' genericSignal=Signal 'to' value=Value productValueExceptions += ProductValueException*;
...

Check: 'Check' 'Signal' genericSignal=Signal 'to' value=Value productValueExceptions += ProductValueException* 'timeout' timeout=Timeout;

Signal: name=ID;

Value: name=ValueType;

Timeout: name=INT;
...

Listing 1: Excerpt of the DSL grammar

With the TestSuite DSL, it is possible to define generic test-cases specifying their name and the generic function to test, as shown in line 1 of Listing 2. Generic test-cases refer to the generic functions as modelled in the Generic model. The body of a test-case is composed of a list of operations to interact with the signals. These operations are Set, Force, Unforce and Check. Lines 2 and 4 of Listing 2 show the use of the Set and Check operations. The former sets the value of the DoorLocked signal to False, while the latter checks whether the signal DoorState reaches the value OPEN within a time threshold of 5000 milliseconds. It may happen that a test-case does not apply to a specific product. In this case, the tester can add an exception in the header of the test-case. Line 7 of Listing 2 shows an exception where the product A is excluded from the Check OpenDoor Exception test-case. Additionally, it is possible to define a product-specific value on an operation, using the exception syntax illustrated in line 8 of Listing 2, where the signal DoorLocked for product A requires to be set to a non-standard value (with respect to the SPL).

Besides allowing for the definition of generic test-cases, the TestSuite DSL allows for the definition of product-specific test-cases, too. We decided to equip the TestSuite DSL with such a feature so as to keep the approach flexible. However, we discourage the use of such a feature, as it could undermine the reusability and maintainability of the testing artefacts. The definition of product-specific test-cases is similar to the definition of generic ones. The only difference is that these test-cases refer to product-specific elements. Lines 12 to 16 of Listing 2 show an example of a product-specific test-case definition for the product A.


Listing 2 shows an example of a test-case for the above-mentioned OpenDoor functionality. It can be noted that the test-case does not contain any mention of product-specific details and exploits the common representation modelled with the Generic metamodel. In particular, the test-case Check OpenDoor verifies the behaviour of the OpenDoor functionality by setting the DoorLocked signal to False and verifying that, as a response, the signal DoorState reaches the value OPEN within 5000 milliseconds.

1  TestCase Check_OpenDoor checks OpenDoor {
2      Set Signal DoorLocked to False
3
4      Check Signal DoorState to OPEN timeout 5000
5  }
6
7  TestCase Check_OpenDoor_Exception checks OpenDoor except Product product_A {
8      Set Signal DoorLocked to False (Exception Product product_A to 1)
9      ...
10 }
11
12 SpecificTestCase Check_OpenDoor_product_A for product product_A {
13     Set ProductSignal Door on System 123.123.123.123 to True
14
15     Check ProductSignal State on System 123.123.123.123 to OPEN timeout 5000
16 }

Listing 2: Test-cases definition for the OpenDoor example function with TestSuite DSL

4.5. Model-to-text transformation

The model-to-text transformation is the automation mechanism which allows for the automatic generation of test-scripts and embodies RCO3 (5 in Figure 4). The benefits of the transformation are manifold. The transformation allows saving development resources; it reduces the product-specific boilerplate code needed to interact with the system under test, and it allows testing to be anticipated at earlier stages of the process with respect to manual approaches.

The transformation takes as input the generic test-cases specified using the TestSuite DSL, the Generic model, the ProductSpecific model and the Weaving model. It generates specific test-scripts in any given programming language. The transformation uses a template-based mechanism [25], meaning that its structure is a mix of static and dynamic elements. Static elements are expressed in the syntax of the target programming language and will not be changed at transformation time. Dynamic elements represent placeholders, which will be replaced with elements from the source or target models at transformation time. As we are targeting C#, the static elements in our model-to-text transformation are compliant with the C# grammar. Lines 7 to 9 of Listing 3 depict an example of static elements as C# comments and a method declaration.

1  ...
2  [file('TestSuite'.concat(aProduct.name.concat('.cs')), false, 'UTF-8')]
3
4  [for (aTestCase : TestCase | aTestSuite.testCases)]
5  [if (isProductException(aProduct, aTestCase))][comment if this product is contained in the exceptions, skip this test case /]
6  [let aGenericFunction : GenericFunction = getGenericFunction(aTestCase.genericFunction, aGenericFamily)]
7  // Generic function [aGenericFunction.name/]
8  //[aGenericFunction.description/]
9  public void [aTestCase.name/]() {
10 [for (aStep : Step | aTestCase.steps)]
11 ...

Listing 3: Acceleo template excerpt of the model-to-text transformation

The model-to-text transformation is composed of the following main mapping rules:

(21)

• SPL product to test-script file: each SPL product is translated into a test-script file. For instance, in the context of the above example, the transformation would generate two files, "TestSuiteproduct_A.cs" and "TestSuiteproduct_B.cs".

• Generic test-case to method: each generic test-case is translated into a method in the test-script file. An example would be the Check OpenDoor test-case, which would be translated into the Check OpenDoor method (of each file).

• Test-case operation to statement: each operation in the test-case is translated into a statement in the method.

• Signal to parameter: each signal in a test-case operation is translated into a product-specific system-signal pair and passed as a parameter in the operation statement. For instance, the Set operation at line 2 of Listing 2 would be translated into the statement: 123.123.123.123["DoorLocked"] = False.

The first three mapping rules involve the generic test-cases and the ProductSpecific model only, while the last mapping rule involves the Weaving model, too. In fact, for each generic signal, the transformation navigates the Weaving model to find all the corresponding product-specific ones. The model-to-text transformation also accounts for the handling of the exceptions specified using the TestSuite DSL. In particular, the transformation will ignore the products specified through the exceptions. A rough sketch of the output these rules produce for the OpenDoor example is given below.
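The following excerpt is an illustrative sketch (not output of the actual TestSuite Transformation) of how the file for product A could look after applying the four mapping rules to the Check OpenDoor test-case; the identifier System123 is a hypothetical stand-in for the system 123.123.123.123, and the signal-access style mimics the generated excerpts shown later in Listings 5, 6 and 7.

// Illustrative sketch of "TestSuiteproduct_A.cs": one method per generic
// test-case, one statement per operation, product-specific signals as parameters.

// Generic function OpenDoor
public void Check_OpenDoor()
{
    // Set DoorLocked False -> product-specific signal "Door" on system 123.123.123.123
    System123["Door"] = false;
    // Check DoorState OPEN within 5000 ms -> product-specific signal "State"
    System123["State"].WaitForSignal("OPEN", 5000);
}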

5. The Aventra SPL: A Use Case from the Railway Domain

In this section, we demonstrate the applicability of the proposed approach, the reusability of the test-cases and the reduced development effort to build product-specific test-scripts by running our solution on an industrial use case scenario from Bombardier Transportation. The use case revolves around the Aventra product family of electric multiple unit trains for passenger transportation. The Aventra family is specifically designed for the British market and is composed of five trains (i.e., products), being the London Overground (LOT), the East Anglia (EAA), the South Western (SWR), the West Midlands and the Center2Coast. All the Aventra products share a massive amount of functionality, e.g. the cab activation function for starting up the driver cabin, the traction/brake control (TBC) function for forwarding the input of the traction and brake controller to brakes and propulsion, the doors control function for opening and closing the doors, the battery box control function for switching on and off the batteries, and the pantograph control function to raise and lower the pantograph, just to mention a few. For the sake of brevity, the proposed use case focuses on a subset of the Aventra products, being LOT, EAA and SWR, and on the TBC function. Such a function has a high degree of variability, thus it is best suited to explore the power of the approach under every aspect. What is more, we use EMF for representing the models and for executing the use case.

Figure 8: Common functionalities representation with the Generic metamodel

According to the proposed approach, the first step is to represent the common functionalities by means of the Generic metamodel. Figure 8 provides a form-based representation of a fragment of the model for the use case. In particular, it shows that the model contains two generic functions, ActiveCab and TBC Response. The right side of the figure shows further details of TBC Response. In turn, TBC Response consists of four generic inputs and two generic outputs.

Figure 9: Product-specific details representation for EAA product with the ProductSpecific metamodel

The second step of the proposed approach is the modelling of each product of the product family, their functions and signals. Figures 9, 10 and 11 show several fragments of the product-specific model⁷. The model contains the three products the use case focuses on, hence the LOT, EAA and SWR products. For each of these, the model shows two functions: the ActivateCab LOT and TBC Response LOT functions for the LOT product, the ActivateCab EAA and TBC Response EAA functions for the EAA product, and the ActivateCab SWR and TBC Response SWR functions for the SWR product. The model contains the specific inputs and outputs of each function, too. In Figure 9 (1) we see a portion of the product-specific model containing information regarding the product EAA and its specific functions. The right part of Figure 9 (2) depicts the list of inputs and outputs for the TBC Response EAA function of product EAA. It involves four inputs and two outputs. Similarly, Figures 10 and 11 (1) show the product-specific information for the products SWR and LOT, respectively, together with their specific functions. The right part of Figures 10 and 11 (2) depicts the list of input and output steps involved in the referring function. For the product SWR, only three steps (two inputs and one output) compose the TBC Response SWR function. The LOT product involves six steps for the TBC Response LOT function, but the input steps contain different signals with respect to the other products.

Figure 10: Product-specific details representation for SWR product with the ProductSpecific metamodel

Figure 11: Product-specific details representation for LOT product with the ProductSpecific metamodel


Figure 12: Objects typing with the Weaving metamodel

The last step of the preparation phase is to type product-specific functions and steps to the generic ones. Figure 12 shows such a step. In particular, in Figure 12 (1) we type TBC Response LOT, TBC Response EAA, and TBC Response SWR to TBC Response; in Figure 12 (2), we type all the product-specific inputs to the generic ones (for the sake of clarity, we show the linking of two product-specific inputs to TBC Demand Level Validity 1 only); in Figure 12 (3), we type the product-specific output to the generic signal Master Tractive Braking Effort. We have to note that product-specific steps might be shared among products. To this end, the generic input TBC Demand Level Validity 1 is not linked to three (the number of products) product-specific inputs, but only two. The reason is that EAA and SWR have the same product-specific input, while LOT has a different one. Similarly, for the generic output Master Tractive Braking Effort, all three products have the same product-specific output step, thus we have a one-to-one correspondence between generic and product-specific.

The first step of the execution phase is the definition of test-cases for the Aventra family. In the context of this use case, the test-cases represent a subset of the smoke tests used by the integration team to verify new releases. Listing 4 shows an excerpt of the test-cases for the TBC Response functionality. Signal values in Listing 4 have been changed from the real ones due to confidentiality. The test-cases demonstrate the use of the exceptions, too. For instance, the test-case Check TBCResponse3 is not required for the product SWR, hence it is excluded with the command except Project. The last step of the approach involves running the model-to-text transformation. In the context of this case study, the transformation produces three C# files, i.e. TestSuiteEAA.cs, TestSuiteLOT.cs and TestSuiteSWR.cs. Listing 5 contains an excerpt of the TestSuiteEAA.cs file representing the test-script generated from the generic Check TBCResponse1 and Check TBCResponse3. In Listing 6 we can see an excerpt of the TestSuiteSWR.cs file representing the test-script generated from the generic Check TBCResponse1. Note that the method Check TBCResponse3 has not been generated, according to the exception defined in that particular test-case. Listing 7 contains an excerpt of the TestSuiteLOT.cs file representing the test-script generated from the generic Check TBCResponse1 and Check TBCResponse3. Comparing the excerpts of the test-scripts, we can point out the differences in the signals and in the structure of the three products. Thanks to the model-to-text transformation, we can also assure that the output is syntactically and semantically correct, according to the models and the generic test-cases. The generated test-scripts are integrated, compiled and executed within the Bombardier Transportation testing infrastructure. However, these steps are omitted from this report due to confidentiality.

...

TestCase Check_TBCResponse1 checks TBC_Response {
    Force Signal TBC_Demand_Level_Validity_1 to true
    Force Signal TBC_Demand_Level_1 to 100

    Check Signal Master_Tractive_Braking_Effort to -10 timeout 1000
    Check Signal Slave_Tractive_Braking_Effort to -10 timeout 1000
}

...

TestCase Check_TBCResponse3 checks TBC_Response except Project SWR {
    Force Signal TBC_Demand_Level_Validity_3 to true
    Force Signal TBC_Demand_Level_3 to 100

    Check Signal Master_Tractive_Braking_Effort to -10 timeout 1000
    Check Signal Slave_Tractive_Braking_Effort to -10 timeout 1000
}

...

Listing 4: Excerpt of the test-cases definition for the TBC Response function using the TestSuite DSL   1 ... 2 3 // G e n e r i c f u n c t i o n T B C _ R e s p o n s e 4 // V e r i f y t h a t i n p u t r e f e r e n c e f r o m TBC is f o r w a r d e d to b r a k e and p r o p u l s i o n d u r i n g n o r m a l c o n d i t i o n s 5 p u b l i c v o i d C h e c k _ T B C R e s p o n s e 1 () { 6 // F o r c e T B C _ D e m a n d _ L e v e l _ V a l i d i t y _ 1 t r u e 7 S y s t e m 1 [" T B C _ D e m a n d _ L e v e l _ V a l i d i t y _ 1 _ E A A "]. F o r c e (t r u e) ; 8 // F o r c e T B C _ D e m a n d _ L e v e l _ 1 100 9 S y s t e m 1 [" T B C _ D e m a n d _ L e v e l _ 1 _ E A A "]. F o r c e ( 1 0 0 ) ; 10 // C h e c k M a s t e r _ T r a c t i v e _ B r a k i n g _ E f f o r t -10 11 S y s t e m 1 [" M a s t e r _ T r a c t i v e _ B r a k i n g _ E f f o r t _ E A A "]. W a i t F o r S i g n a l ( -10 , 1 0 0 0 ) ; 12 // C h e c k S l a v e _ T r a c t i v e _ B r a k i n g _ E f f o r t -10 13 S y s t e m 2 [" S l a v e _ T r a c t i v e _ B r a k i n g _ E f f o r t _ E A A "]. W a i t F o r S i g n a l ( -10 , 1 0 0 0 ) ; 14 } 15 16 ... 17 18 // G e n e r i c f u n c t i o n T B C _ R e s p o n s e 19 // F o r w a r d i n p u t r e f e r e n c e f r o m TBC to b r a k e and p r o p u l s i o n d u r i n g n o r m a l c o n d i t i o n s 20 p u b l i c v o i d C h e c k _ T B C R e s p o n s e 3 () { 21 // F o r c e T B C _ D e m a n d _ L e v e l _ V a l i d i t y _ 3 t r u e 22 S Y S 2 [" EAA - D E M _ L E V _ V A L I D _ 3 - EAA "]. F o r c e (t r u e) ; 23 // F o r c e T B C _ D e m a n d _ L e v e l _ 3 100 24 S Y S 2 [" EAA - D E M _ L E V _ 3 - EAA "]. F o r c e ( 1 0 0 ) ; 25 // C h e c k M a s t e r _ T r a c t i v e _ B r a k i n g _ E f f o r t -10 26 S Y S 1 [" T B _ E F F O R T "]. W a i t F o r S i g n a l ( -10 , 1 0 0 0 ) ; 27 // C h e c k S l a v e _ T r a c t i v e _ B r a k i n g _ E f f o r t -10 28 S Y S 2 [" T B _ E F F O R T "]. W a i t F o r S i g n a l ( -10 , 1 0 0 0 ) ; 29 } 30 31 ...


Listing 5: Excerpt of the test-script produced by the model-to-text transformation for the TBC Response function (EAA product)

...

// Generic function TBC_Response
// Forward input reference from TBC to brake and propulsion during normal conditions
public void Check_TBCResponse1() {
    // Force TBC_Demand_Level_Validity_1 true
    SYS1["EAA-SWR-DEM_LEV_VALID_1-EAA-SWR"].Force(true);
    // Force TBC_Demand_Level_1 100
    SYS1["EAA-SWR-DEM_LEV_1-EAA-SWR"].Force(100);
    // Check Master_Tractive_Braking_Effort -10
    SYS1["TB_EFFORT"].WaitForSignal(-10, 1000);
}

...

Listing 6: Excerpt of the test-script produced by the model-to-text transformation for the TBC Response function (SWR product)

...

// Generic function TBC_Response
// Forward input reference from TBC to brake and propulsion during normal conditions
public void Check_TBCResponse1() {
    // Force TBC_Demand_Level_Validity_1 true
    SYS1["LOT-DEM_LEV_VALID_1-LOT"].Force(true);
    // Force TBC_Demand_Level_1 100
    SYS1["LOT-DEM_LEV_1-LOT"].Force(100);
    // Check Master_Tractive_Braking_Effort -10
    SYS1["TB_EFFORT"].WaitForSignal(-10, 1000);
    // Check Slave_Tractive_Braking_Effort -10
    SYS2["TB_EFFORT"].WaitForSignal(-10, 1000);
}

...

// Generic function TBC_Response
// Forward input reference from TBC to brake and propulsion during normal conditions
public void Check_TBCResponse3() {
    // Force TBC_Demand_Level_Validity_3 true
    SYS2["LOT-DEM_LEV_VALID_3-LOT"].Force(true);
    // Force TBC_Demand_Level_3 100
    SYS2["LOT-DEM_LEV_3-LOT"].Force(100);
    // Check Master_Tractive_Braking_Effort -10
    SYS1["TB_EFFORT"].WaitForSignal(-10, 1000);
    // Check Slave_Tractive_Braking_Effort -10
    SYS2["TB_EFFORT"].WaitForSignal(-10, 1000);
}

...

Listing 7: Excerpt of the test-script produced by the model-to-text transformation for the TBC Response function (LOT product)
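
The transformation itself is not reported here. Purely as an illustrative sketch, not the actual Bombardier Transportation artefact, the following C# fragment shows the generation logic that the listings above reflect: one test-suite file is produced per product, and test-cases carrying an exception for that product are skipped, which is why Check_TBCResponse3 appears in TestSuiteEAA.cs and TestSuiteLOT.cs but not in TestSuiteSWR.cs. All type and member names in the sketch are hypothetical.

// Illustrative sketch only: all types and members below are hypothetical and simplified.
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Text;

// A generic test-case step, e.g. forcing or checking a signal.
public record Step(string GenericSignal, string Operation);
// A generic test-case, possibly excluded for some products (the "except" mechanism).
public record TestCase(string Name, string Function, string Description,
                       IReadOnlyList<string> ExcludedProducts, IReadOnlyList<Step> Steps);
// The product-specific resolution of a generic signal (system handle plus concrete name).
public record ResolvedSignal(string SystemId, string SignalName);

public static class TestScriptGenerator
{
    // Produce one TestSuite<Product>.cs file per product, skipping excepted test-cases.
    public static void Generate(
        IEnumerable<TestCase> testCases,
        IEnumerable<string> products,
        Func<string, string, ResolvedSignal> resolve,   // (genericSignal, product) -> resolved
        string outputDir)
    {
        foreach (var product in products)
        {
            var sb = new StringBuilder();
            foreach (var tc in testCases.Where(t => !t.ExcludedProducts.Contains(product)))
            {
                sb.AppendLine($"// Generic function {tc.Function}");
                sb.AppendLine($"// {tc.Description}");
                sb.AppendLine($"public void {tc.Name}() {{");
                foreach (var step in tc.Steps)
                {
                    // Each generic step is resolved to the product-specific system and signal.
                    var r = resolve(step.GenericSignal, product);
                    sb.AppendLine($"    // {step.Operation} on {step.GenericSignal}");
                    sb.AppendLine($"    {r.SystemId}[\"{r.SignalName}\"].{step.Operation};");
                }
                sb.AppendLine("}");
                sb.AppendLine();
            }
            File.WriteAllText(Path.Combine(outputDir, $"TestSuite{product}.cs"), sb.ToString());
        }
    }
}

In the proposed approach, the resolution of generic signals to product-specific systems and signal names comes from the models and their weaving relationships rather than from a hand-written function, which is what guarantees the consistency of the generated scripts with the models.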


6. Validation

In this section, we discuss how we have assessed the applicability and efficiency of the proposed approach. As described in Section 3.3., we build on a validation process composed of two steps, which aim at maximizing technology transfer. The first step of the validation process, the validation in academia, focused on assessing the validity of the proposed solution and its enabling artefacts within an academic context. To this end, we held weekly meetings from January to May 2020 with a team involving:

• a master student in software engineering,

• an Assistant Professor in Computer Science with background in software architecture and DSLs and more than 5 years of professional experience in the embedded software domain,

• an Associate Professor in Computer Science with background in MDE,

• a Software Integration Manager from Bombardier Transportation, and

• a senior Software Engineer from Bombardier Transportation.

The meetings were held at the Mälardalen University premises in Västerås. Note that not all the meetings involved all the above-mentioned profiles. The aim of these meetings was to strengthen the proposed solution before testing it in an industrial context. The main outcome of these meetings was the adoption of MDE techniques for improving test-case maintainability and reusability and for the automatic generation of test-scripts. For instance, the first version of the proposed approach leveraged a data structure linking product-specific signals to a generic alias in order to provide a common representation of such elements. Such a solution would have been hard to apply and maintain due to the need for manual interventions and the lack of a formal structure. To address this challenge, we decided to use metamodelling techniques. At first, the proposed solution used a single metamodel only; we realised that this would have hampered the understandability and scalability of our solution. Eventually, we adopted a solution using several metamodels and automation in the form of a model-to-text transformation.
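
For illustration only, a minimal C# sketch of what such an alias-based data structure could have looked like is shown below; the mapping entries mirror the signal names in Listings 5-7, but the class and its members are hypothetical. Maintaining such a table by hand for every signal and every product is precisely the kind of manual intervention the metamodel-based solution removes.

// Minimal sketch (not the actual Bombardier Transportation artefact): a hand-maintained
// alias table mapping a generic signal alias to its product-specific name for each product.
using System.Collections.Generic;

public static class SignalAliases
{
    public static readonly Dictionary<string, Dictionary<string, string>> Table =
        new Dictionary<string, Dictionary<string, string>>
        {
            ["TBC_Demand_Level_1"] = new Dictionary<string, string>
            {
                ["EAA"] = "TBC_Demand_Level_1_EAA",
                ["SWR"] = "EAA-SWR-DEM_LEV_1-EAA-SWR",
                ["LOT"] = "LOT-DEM_LEV_1-LOT"
            }
            // ... one entry per generic signal, maintained manually for every product
        };

    // Resolve a generic alias for a given product, or null if it is not mapped.
    public static string Resolve(string alias, string product)
    {
        if (Table.TryGetValue(alias, out var perProduct) &&
            perProduct.TryGetValue(product, out var signal))
        {
            return signal;
        }
        return null;
    }
}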

The second validation step of the process is the use-case validation in the industrial context. On the one hand, we focused on gathering practitioners' feedback on the proposed solution. We carried out such a discussion through regular meetings within the Aventra Integration Testing group at the Bombardier Transportation premises in Västerås. The main outcomes were as follows. The product-specific metamodel was redesigned to allow the definition of systems in addition to the definition of product-specific signals. Practitioners also highlighted the need for a metamodel refinement allowing products to be defined in terms of functions and signals.

On the other hand, we focused on assessing the applicability of the proposed approach, the reusability and the reduced development effort with respect to the test-cases, using the industrial use case. In Section 5., we have described the industrial use case we have leveraged for such a validation step. The main findings acquired during the execution of the use-case validation regarded the model-to-text transformation. In particular, the initial version of the transformation was generating test-script suites which could only be compiled, but not executed, by the Bombardier Transportation testing infrastructure. This was due to some dependencies which could not be resolved by the infrastructure. We solved this issue by modifying the transformation so that the generated test-scripts could be integrated within the testing infrastructure. Besides refinements to the transformation, we have improved the DSL, too. In particular, we have equipped it with an exception mechanism for handling all non-standard test-case definitions. This need emerged after a first execution of the approach failed in defining crucial test-cases for the Aventra family.

The Aventra case study helped us gather evidence about the industrial applicability of the proposed approach. Focusing on the execution phase of the approach, we significantly decreased the number of handcrafted artefacts, since product-specific test-scripts are automatically generated from generic test-cases. As a consequence, the execution phase of the approach reduces the effort required to build testing artefacts and the possibility of errors inherent in manual work. In addition, the approach has proven to be more efficient with respect to development time, as test-scripts were automatically generated starting from a single test-case specified for the whole family of products. This has positively affected reusability, too: when a new product was added to the family or an existing one was removed, there was no need to re-define the test-cases, but only to update the models.


7. Discussion

SPLs require ad-hoc strategies to perform testing efficiently. In this thesis, we propose a model-based approach, which tackles the challenges of increasing abstraction and automation in SPL testing. This is achieved using several artefacts, namely models, metamodels, a DSL and a model transformation.

Models and metamodels have helped us in increasing abstraction, while providing for separation of concerns. Separation of concerns was achieved by decoupling concepts pertaining to the family of products from those pertaining to a given product. Such a decoupling has increased abstraction, too, and helped us focus only on relevant concepts and properties. The adoption of the approach requires initial work to build the models and to establish relationships among them using the weaving technique. However, this phase is required only at the beginning; during the rest of the software lifecycle, the models only need to be updated according to the evolution of the SPL. The graph in Figure 13 shows how the development effort varies with respect to the size of the project. The graph plots the effort against the system complexity for our approach (green line) and for traditional approaches (red line). The curves are presumed and are not based on actual estimations. However, they allow us to reason about when the proposed approach is convenient to use. In particular, the proposed approach demands a greater initial effort for its set-up, while the effort tends to stay lower as the size of the system grows. Hence, the proposed approach seems more suited for medium or large SPLs, since the initial set-up effort is balanced by the effort reduction achieved with this kind of system. The effort for setting up the approach might be reduced depending on the amount of artefact reuse, as well as by employing mechanisms for the automatic creation of models from requirements specifications or from existing feature models. It should be noted that the increased initial effort does not affect the benefits introduced by the proposed approach (e.g., test-case reusability, increased abstraction, and separation of concerns).

Figure 13: Trend estimation of effort required to create testing artefacts with respect to the system complexity.

Moreover, the proposed approach increases automation using the DSL and the model transformation. However, the benefits of the DSL are not only related to this aspect. In fact, the DSL hides the inherent complexity of programming languages using a grammar tailored to the testing domain and closer to natural language. In turn, this makes test-cases easy to understand even for engineers with little to no knowledge of testing and software engineering. Finally, using the TestSuite DSL introduces practical benefits, too, such as syntax highlighting and code completion. The transformation allows for the generation of test-scripts in a given target language, in our case C#. The automatic generation assures that the output is always syntactically correct, and semantically correct as long as the information in the models is consistent. One might argue that writing such a transformation might be demanding, especially if different target languages are considered. This is a valid concern, although in industrial environments changes to the technological stack are typically not so frequent. What is more, such a limitation could be overcome using Higher-Order Transformations (HOTs) [26], which take a transformation as input and produce a mutated version of the same transformation. Hence, a HOT could be used for producing different transformations targeting different programming languages.

Even if our approach is general and applicable to any industrial domain, its artefacts may contain domain-specific and even company-specific details. Besides adapting the transformation, some modifications to the metamodels and the DSL might be required, too. For instance, in certain contexts, the allowed operations on the signals might differ from the four we have specified.


References
