
Comparing Test Design Techniques

for Open Source Systems

Guido Di Campli

Mälardalen University, Sweden

+393209016130

guido.dicampli@gmail.com

Savino Ordine

Mälardalen University, Sweden

+393283895057

savino.ordine@gmail.com

Supervisor: Sigrid Eldh

Ericsson AB, Sweden

+46107152374

sigrid.eldh@ericsson.com

Examiner: Sasikumar Punnekkat

Mälardalen University, Sweden

+4621107324

sasikumar.punnekkat@mdh.se


Abstract

In this thesis we describe how to test systematically, with open source systems as our target. We applied a series of common and overlapping test design techniques at defined levels, using seven different functional and structural test approaches. Our conclusion is that open source systems often lack fundamental testing: on average it takes only 6 test cases to reveal the first failure, the first time to failure is 1 hour on average, and the mean time between failures is approximately 2 hours with our systematic approach. Our systematic approach covers not only the testing itself; we also describe the process of discovering a system's requirements. We have also found that some test design techniques seem to be more effective than others at finding failures. We investigated fifteen different open source systems, attempting to classify them in a methodical way. Our process consists in measuring the time spent to identify the unique parts of the system where the test cases should be applied. We consider both the system and the test design technique as measures to evaluate effectiveness and construct test cases.


Acknowledgements

My first thanks go to my family, always present in my life and ready to give me the right support during difficulties. I could not ask for a better family. Today I am here because of you… Thank you from the deepest part of my heart.

Special thanks go to Sigrid Eldh. In her we found a mentor, a friend and a good adviser. I cannot fail to mention my best friends ever: Francesco D'Onofrio and Giulia Trivilini. I always look at you as my own small family and I am really lucky to have you close to me.

A special thought goes to my late grandmothers: Iolanda Corrispresto and Speranza Liberatore. You are close to my heart and your teachings will be with me until the end of time.

I would also like to take this opportunity to thank Iolanda Di Campli (my lovely little sister), Savino Ordine, all my friends from GSEEM, Henry Muccini and all my lifelong friends. My last thank you is for Joanne (my sweet Giulia) for letting me understand that sometimes dreams become reality.

Guido.

In Italian (translated):

My first thanks go to my family, who have always been present in my life and always ready to support me in difficult moments. I could never have better parents than you. If I am here today, I owe it to you… Thank you from the bottom of my heart.

A special thanks goes to Sigrid Eldh. In her we found a point of reference, a friend and a good adviser.

I cannot fail to mention my best friends ever: Francesco D'Onofrio and Giulia Trivilini. I look at you as if you were my own small family, and I am really lucky to have you by my side.

A special thought goes to my grandmothers, who passed away during this Master's: Iolanda Corripresto and Speranza Liberatore. You will always remain in my heart, and your teachings will stay with me until the end of time.

I take this opportunity to thank Iolanda Di Campli (my adorable little sister), Savino Ordine, the GSEEM guys, Henry Muccini and all my lifelong friends.

My last thanks go to Joanne (my sweet Giulia) for making me understand that sometimes dreams can become reality.

Heartfelt thanks to everyone,


My first thanks go to my family, for having shared with me both bad and beautiful moments and for supporting me in everything. Thanks so much mom, dad and my little sister.

I would also like to thank Sigrid Eldh for guiding me through this Master's thesis and for all the meetings and presentations she reviewed.

Last but not least, thanks to my friend Guido Di Campli for the shared ideas and time, thanks to Henry Muccini for giving us all the support we needed, and thanks to Wen Chang for always being close to me and making my life wonderful.

Thank you Västerås!

Savino.

In Italian (translated):

First of all I would like to thank my family for having shared with me both bad and beautiful moments, for giving me all the support I needed, and for believing in me on this journey abroad. I will always remember all the efforts you made for me. Thanks so much Mom, Dad and my little sister.

I would also like to thank my supervisor Sigrid Eldh for helping and guiding me through this thesis.

A final thanks goes to my friend Guido Di Campli for the shared ideas and time, to Henry Muccini for giving us all the support we needed, and to Wen Chang for always being close to me and for making my life wonderful.

Thank you!


SUMMARY

1. Introduction
2. Background
2.1 Motivation
2.2 Terminology
2.3 Method section
3. Test Design Techniques
3.1 Positive Testing
3.2 Negative Testing
3.3 Boundary Value Analysis
3.4 Equivalence Partitioning
3.5 Random Input
3.6 Fault Injection
3.7 Exploratory Testing
4. Open Source System
4.1 Open Source Selection Criterions
4.2 Open source used
5. Systematic Use of Test Design Techniques
5.1 Motivation about systematic routine
5.2 Systematic test case creation
5.3 Test techniques used in experiments
5.4 Level of test used in experiment
5.5 Selection of input values
5.6 Test cases creation
5.7 Test case table
5.8 Comparison between systems
6. Case Study
6.1 Study of the system
6.2 Practical usage of Test Design Techniques
6.3 Code Coverage Results
6.4 Case Study Results
7. Results
7.1 Test techniques
7.2 Open source systems' coverage
7.3 Open Source Installing Problems
7.5 Test techniques classification
7.6 Statistic results
8. Future work and Conclusion
A. Appendix "The Hat"
B. Appendix "Case Study"
References

IMAGES INDEX

Figure 1. BVA partitioning
Figure 2. Negative testing is not obvious without specification
Figure 3. BVA is aiming to check the borderline between equivalence partitions for this set
Figure 4. The Hat interface
Figure 5. Our first approach to testing
Figure 6. GUI example
Figure 7. V-model
Figure 8. Generic comparison table
Figure 9. Bank System screenshot
Figure 10. Selected source code in BKSFIJC21
Figure 11. Affected source code in BKSFIJC21
Figure 12. Result from BKSFIJC21 at system level
Figure 13. Code coverage BKSFIJC21
Figure 14. Complete code coverage of Bank System
Figure 15. Bejeweled screenshot
Figure 16. How to read the systems results table
Figure 17. Test techniques graph
Figure 18. The Hat's GUI
Figure 19. Shortcut keys for menu
Figure 20. Shortcut keys for option
Figure 21. Selected source code in BKSFIJC20
Figure 22. Affected source code in BKSFIJC20
Figure 23. Code coverage BKSFIJC20
Figure 24. Selected source code in BKSFIJC21
Figure 25. Affected source code in BKSFIJC21
Figure 26. Result from BKSFIJC21 at system level

TABLE INDEX

Table 1. ASCII table
Table 2. Systems used in this experiment
Table 3. Testing / level
Table 4. ASCII table
Table 5. ASCII range values
Table 6. Normal and negative range values
Table 7. BVA range values
Table 8. Fault injection table
Table 9. Test design techniques
Table 10. Bank System's requirements
Table 11. Bank System - normal TCs
Table 12. Bank System - negative TCs
Table 13. Bank System - random input TCs
Table 14. Bank System - BVA TCs
Table 15. Bank System - eq. partitioning TCs
Table 16. Bank System - error guessing TCs
Table 17. Search TCs - Bank System
Table 18. Bank System - TCs summary
Table 19. Bank System results
Table 20. Systems results
Table 21. System's classification
Table 22. Test techniques classification
Table 23. Failure average
Table 24. TCs and TCs failed for each TDT
Table 25. System faults description

1. INTRODUCTION

Before writing this document, we attended a course called Software Verification and Validation, where we learned about test techniques, the levels of testing in a development process, what a test case is, and how to be systematic in testing. These notions gave us the basic knowledge for this work. We spent almost three months improving our knowledge of test techniques and learning how to be systematic, and we participated in an experiment at Ericsson on testing an open source system. In that experiment we used basic and common test techniques such as Normal, Negative, Random Input, Boundary Value Analysis, Equivalence Partitioning, Code Coverage, Fault Injection and Error Guessing. We use the same test techniques in our work. These test design techniques are used to demonstrate that open source systems often lack fundamental testing: in a short time we can find faults or strange system behavior. When a new system is developed, different software development processes are used, such as the Waterfall model, the Spiral model or the V-model.

We can create test cases not only when a system is completed but also at different steps of the development process, such as during the requirements phase, and execute them once the system is completed.

Why have we chosen open source systems as our target? According to Andress (section 1, page 14) [18], open source systems have several advantages, but they may also have disadvantages.

Firstly, open source systems generally do not have a single entity supporting the product. Many open source systems are hobbies of developers, and for this reason they often lack fundamental testing. Secondly, many users find open source systems difficult to use, because the system has been developed for the Linux platform while nowadays most people use Windows; moreover, open source systems are often not well documented, frequently offering nothing more than a README file or a manual page, so it is hard to understand how to use them. Thirdly, with an open source system the code can be inspected and tested by anyone [44]. This last point matters for our work because it lets us test open source systems using the fault injection technique (for more detail on this technique, see section 3), and in this way we can create test cases at code level as well (see Table 8). We use statement coverage to understand what percentage of an open source system is covered after the execution of a test case, and to create new test cases that cover different code lines.

We have noticed that few open source systems have requirements documentation, and without requirements we cannot do testing. From Graham [57]: "...Because requirements form the basis for tests, obviously we can't do any testing until we have decent requirements… Some would say that without a specification, you are not testing but merely exploring the system…". For this reason we also describe the process of re-engineering a system's requirements (see section 5). By partitioning an ASCII table (see Table 4), we can systematically create different ranges of input values for the different domains of a system. This helps us create test cases and, in the end, test any part of the system that takes ASCII as input.

In our work, we apply several common and overlapping test design techniques to a series of open source systems at defined levels (see section 5.3). This thesis provides a thorough description of these test design techniques. Each technique is measured in different ways (see sections 3 and 5), including statement coverage.


2. BACKGROUND

This section gives a short background and overview of what is covered in this thesis. The main focus is to motivate this work, presented in 2.1; terminology about test design techniques is presented in 2.2.

2.1 MOTIVATION

Our motivation is to provide a better understanding of what is commonly tested, and not tested, in open source systems, and at the same time to contribute to a better understanding of the applicability of these test design techniques (TDT). We also show a way to classify systems based on the time spent (in minutes) to identify the unique parts of the system where a Test Case (TC) should be applied.

2.2 TERMINOLOGY

What do we mean by testing a system? What is testing? Testing is the process of evaluating a system against its specified requirements. The IEEE Standard Glossary of Software Engineering Terminology [29] defines testing as: the process of operating a system or component under specified conditions, observing or recording the results, and making an evaluation of some aspect of the system or component.

In our approach, we also use code coverage (by which we specifically mean statement coverage, according to [1]) to determine which parts of the software have been covered by the existing test case suite and which parts have not yet been executed. Code coverage is measured with the tool EclEMMA [3]. With the term effective we mean the ability of a test design technique to expose failures in the system.

According to Beizer [53] there are perhaps as many as 57 different varieties of test case design techniques, and Copeland [2] has produced an excellent practical description of some of the most popular and useful ones. A Test Design Technique (also called TDT or test technique) is, according to Broekman (section 11, pages 113-116) [16] and Graham (section 4, pages 77-80) [17], a procedure or method to derive and/or select test cases from reference information. A TDT has to be:

• Applicable
• Efficient
• Effective

Applicable is a rather new aspect of a test design technique, and can be defined as the ability of the technique to be automated. Another aspect of applicability defines a test design technique as being valid only in its specific context, and not necessarily applicable to other software and domains [43].

Efficient is defined by Rothermel and Harrold [44] as a measure of the computational cost, which also determines the practicality of a technique. According to Eldh [33], it also includes the time required to comprehend and implement the test technique. Effective can be defined as the number of faults the technique will find [33]. According to Weyuker [43], the effectiveness of a test design technique can only be measured by comparing two techniques on the same set (i.e. the same software), and the result is not general.

Test cases can be defined in many ways; one definition is from Schmidt [46], who defines a test case to consist of:

1. ID is the unique identifier of the test case.

2. SUT is the system under test or in another abstraction level is a part or a service of the system under test.

3. Pre Condition is a set of assertions to ensure the prerequisites of exercising the test case are satisfied.

4. Post Condition is a set of assertions to evaluate the correctness of execution results.

5. IN is composed of the names and values of the input parameters to the operation.

6. OUT is composed of the names and values of the outputs of the operation.

We have also found other descriptions of how a test case can be defined, from Copeland [2] and Myers [47]. According to Kamde, Nandavadekar and Pawar [56], a test case must also have quality attributes such as:

• Correctness
• Accurate and appropriate
• Economical
• Repeatable
• Traceable
• Measurable

Correctness means that a test case is appropriate for the tester and the environment. Accurate and appropriate means that it tests what its description says it will test. Economical means that it has only the steps needed for its purpose. Repeatable means it gives the same results no matter who runs it. Traceable means it can be traced to a requirement. Measurable means it returns the tested environment to a clean state.
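To make the six-part definition above concrete, the sketch below models a test case as a small Java class. The field names follow Schmidt's six items [46]; the use of generic maps and predicates for conditions and parameters is our own illustrative assumption, not part of the cited definition.

```java
import java.util.Map;
import java.util.function.Predicate;

// A minimal sketch of Schmidt's six-part test case definition [46].
// The type parameter S stands for the system (or part) under test.
public class TestCase<S> {
    final String id;                     // 1. unique identifier of the test case
    final S sut;                         // 2. system under test (or a part/service of it)
    final Predicate<S> preCondition;     // 3. assertions on the prerequisites
    final Predicate<S> postCondition;    // 4. assertions on the correctness of the results
    final Map<String, Object> in;        // 5. names and values of input parameters
    final Map<String, Object> out;       // 6. names and values of expected outputs

    TestCase(String id, S sut,
             Predicate<S> preCondition, Predicate<S> postCondition,
             Map<String, Object> in, Map<String, Object> out) {
        this.id = id;
        this.sut = sut;
        this.preCondition = preCondition;
        this.postCondition = postCondition;
        this.in = in;
        this.out = out;
    }
}
```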

2.3 METHOD SECTION

Four of us were simultaneously involved in the work, but each of us independently completed our own research on the test design techniques. Each person had the goal of learning more about testing, because their Master's thesis is in this area.


We started with a personal literature study that extended and deepened the content of the Verification and Validation course at Mälardalen University.

We had several meetings, with and without our supervisor, to discuss each technique and compare our thoughts, and we have documented the techniques below. Meetings with the supervisor included a presentation about a tested system or about what we had done during the previous week.

In addition, we redid the literature study to consolidate our new understanding. Two students focused on open source system testing and the other two on closed source system testing, but everybody was interested in functional testing, and we all worked under the same supervisor. We got our testing background from L'Aquila University (Italy) and Mälardalen University (Sweden) and used it as a starting point for approaching the literature. We met with our supervisor every week to show our work and progress and to discuss TDT and the thesis. Each meeting consisted of a presentation of the week's progress from each group, a joint discussion, and time for questions and for clarifying doubts or mistakes. These meetings were really important because through them we discovered the meaning of a "systematic approach". At the beginning we used a personal model-driven approach, but we soon understood that it is weak compared to systematic methods, because with it we did not have a complete view of all possible inputs to a system. At each meeting we generally had a presentation about a system tested during the previous week, and we used it as a starting point to discuss testing, mistakes and possible improvements.

We also had meetings without our supervisor, where we discussed TDT in depth. These produced good results and were helpful for understanding TDT and comparing our thoughts.

In these meetings we came to understand that positive testing and negative testing are not real techniques under particular conditions.

We clarified the main differences between BVA (Boundary Value Analysis) and EP (Equivalence Partitioning): BVA works on extreme values in a numeric domain, while EP needs partition classes as its starting point. In a system without documentation it is not always clear what is correct and what is not, and for this reason we need preconditions and suppositions to perform positive and negative testing. Discussing levels of testing, we noted that some TDT, such as fault injection, work at code level but show their effects at system level. All these statements are explained in the relevant paragraphs of this section.

Those meetings were really useful because they enabled us to start thinking about testing with different eyes. Meeting with the supervisor once a week, and with the other students, we were constantly comparing points of view and discovering new possible scenarios. After the meetings we usually went back to the literature and took notes to rearrange and grow our knowledge.


3. TEST DESIGN TECHNIQUES

In this section we explain which TDT we used in our approach to testing open source systems. We selected a series of test design techniques with the aim of identifying which would be beneficial for evaluating the quality of an open source system. At this point a question arises: are some techniques [33] more effective than others? In our work we chose seven simple and common test techniques, namely Normal, Negative, Random Input, Boundary Value Analysis, Equivalence Partitioning, Fault Injection and Error Guessing, supported by exploratory testing and code (statement) coverage.

We have chosen exploratory testing [58] because not all systems have a requirements document, and before starting to test we play with the system, trying to execute it and give it values. It is defined in several ways: by Graham [57] as "...exploratory testing, designed for situations with inadequate requirements...", and by James Bach as "...simultaneous learning, test design, and test execution" [45]. Tinkham and Kaner give a slightly different definition: "Any testing to the extent that the tester actively controls the design of the tests as those tests are performed and uses information gained while testing to design new and better tests" [47]. According to Kaner, Bach, and Pettichord [46], exploring means "…purposeful wandering: navigating through a space with a general mission, but without a pre-scripted route. Exploration involves continuous learning and experimenting." When we use exploratory testing, we use it to explore the system's functionality. Exploratory here means that we are always using another test design technique, which could have been stated as the main technique, except that we have no validity source in the form of any type of specification to strengthen the argument for using it. This is the weakest point in our thesis: because of the lack of requirements or specifications in the chosen open source systems, all approaches could be called exploratory. Instead, we make the assumption that we have an understanding of the system, and thus know what the correct outcome of a test case is, and hence can use a specific test design technique.

We have chosen "normal" execution, which means that the main intention of the usage of a function is executed, using what appear to be the "default" normal values expected. This test design technique has many names, such as positive, requirement or functional test, from Watkins (section 3, pages 19-22) [11] and [45]. The "normal case" technique is valuable since it not only demonstrates actual program use, but shows that a particular feature, a functionality, or the system as a whole is working properly.

Negative testing, from Watkins (section 2, page 9) [11] and [14], is a test design technique with the aim of inserting unexpected values into the system, meaning values that would either trigger fault handling or not normally be allowed as input. It is intended to demonstrate which aspects of the system are working. It is often used to test aspects that have not been documented, or have been poorly or incompletely documented in the specification, according to Watkins (section 3, page 16) [11]. The goal is to evaluate whether these "negative input values" are handled or not. Normal and negative test design techniques are commonly used in industry [11][14].

Random input testing, from Craig and Jaskiel (section 5, page 175) [1] and Watkins (section 3, page 21) [11], is a technique that uses an automated tool to insert random input values from a given or generated set of inputs. It covers all kinds of values (and formats) of input.

Boundary Value Analysis [54][55], from Burnstein (section 4, pages 72-73) [12] and Copeland (section 4, pages 39-44) [2], is a test design technique that, when applied, normally yields three test cases for each boundary: one on each side of the boundary and one on the actual boundary. In Figure 1 these are x-, x, x+ (lower bound, extreme and upper bound) and z-, z, z+ (lower bound, extreme and upper bound). According to Copeland (section 4, pages 42-43) [2], this technique is most appropriate where the input is a continuous range of values.

Figure 1. BVA partitioning

It is not possible to include all values of the domain attributes in test cases, but the domain can be split into equivalence partitions; in this case, from Pol, Teunissen and Van Veenendaal (section 15, pages 201-202) [19], we use Equivalence Partitioning [54][55], also called Equivalence Class testing by Copeland (section 3, pages 28-33) [2]. It is a testing technique used to reduce the number of test cases to a manageable level while still maintaining high coverage of the system. With these functional techniques we cover the major test techniques that handle input.

For structural approaches we have chosen statement coverage, from Burnstein (section 5, pages 101-108) [12] and [2]. Code statement coverage, from Craig and Jaskiel (section 5, pages 181-182) [1] and [8][48][49][50], is used to understand which code has not been addressed after a TC (Test Case) execution, and to improve existing test cases or create new ones that cover the unaddressed code. In our work we used a simple and free tool, EclEMMA [3], to measure code coverage in the Java environment. This tool was selected because it is fast to develop and test with, coverage results are immediately summarized and highlighted in the Java source code editors, and it does not require modifying the projects or performing any other setup.

In addition, we approach the code level with a test design technique called Fault Injection [51][52]. According to Hsueh, Tsai and Iyer [59], "...In this technique, instructions are added to the target program that allow fault injection to occur before particular instructions, much like the code-modification method. Unlike code modification, code insertion performs fault injection during runtime and adds instructions rather than changing original instructions"; and according to Voas and McGraw it is "a useful tool in developing high quality, reliable code. Its ability to reveal how software systems behave under experimentally controlled anomalous circumstances makes it an ideal crystal ball for predicting how badly good software can behave." [13]. The aim of this technique is to test the code, and the database (if present), of a system by introducing faults and checking whether each fault is propagated in the code or caught.

We also use Error Guessing [41][42], an ad hoc approach based on intuition and experience, with the goal of identifying critical test values that will most likely reveal errors. Some people, according to Myers (section 4, pages 88-89) [41], seem to be naturally adept at program testing; they seem to have a knack for 'smelling out' errors by intuition and experience. Our goal is to use negative, random input, boundary value analysis, equivalence partitioning, fault injection and error guessing to get a better overview of the interaction between components, and to see how the system responds when faults are propagated in the code, rather than the usual goal of evaluating that a test suite is complete.

Below we present some more definitions found in scientific papers and books, and at the end we give our conclusions on each TDT. We also show an example of how to apply each TDT.

The structure of each TDT description is as follows:

• a citation of a definition selected from scientific papers and books;
• problems and misunderstandings during the project;
• conclusions on the literature;
• a simple example of how to make TCs with the TDT.

3.1 POSITIVE TESTING

“The process of Positive Testing is intended to verify that a system conforms to its stated requirements. Typically, Test Cases will be designed by analysis of the Requirements Specification document for the Application Under Test (AUT).”[61 page 16]

While studying, we asked ourselves whether positive testing is a real technique or not. The answer is: it depends on the context. This conclusion comes from several reflections. When we test a system and have documents, it becomes easier to understand what the functional requirements are and what the system has to do. In that case we have a very strong basis for where to apply positive testing and what the expected results are. In a scenario without documentation it becomes harder to understand the functional requirements, because we have to suppose that a feature is part of the system's routine and use our intuitions as preconditions for testing. Positive testing is not a math-based technique.

In conclusion, positive testing (or normal testing) means confirming the main requirements of the system when they are available. We provide as input only data that the system's domain expects to receive as valid input. Positive testing without specifications is not an exact science, because without documentation we guess the intentions of the system, and it is possible to miss, e.g., computation results.

We want to show an example of what a TC for positive testing might look like. Imagine a bank scenario where a system has to handle customer accounts and we are missing the system documentation. We have to suppose that during customer registration a field with the customer name accepts only characters, a field with the address accepts only alphanumeric values, and so on. To make a TC we must suppose that something in the system is right, use it as a precondition and then build a TC on it, or use a systematic method to understand the system. This can be considered "re-engineering" the requirement with the aid of the test design technique.
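As a sketch of such a supposed requirement, the JUnit-style test below encodes the precondition "the name field accepts only characters" for a hypothetical bank system; the isValidName helper is invented for illustration and does not belong to any of the systems we tested.

```java
import static org.junit.Assert.assertTrue;
import org.junit.Test;

public class CustomerRegistrationPositiveTest {

    // Hypothetical validator standing in for the real registration logic:
    // we assume, as a precondition, that a name made only of letters is valid.
    static boolean isValidName(String name) {
        return name.matches("[A-Za-z]+");
    }

    @Test
    public void normalNameIsAccepted() {
        // Positive (normal) test: input drawn only from the expected domain.
        assertTrue(isValidName("Smith"));
    }
}
```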

3.2 NEGATIVE TESTING

"The process of Negative Testing is intended to demonstrate 'that a system does not do what it is not supposed to do'. That is, Test Cases are designed to investigate the behavior of the AUT (Application Under Test) outside the strict scope of the Requirements Specification. It is often used to test aspects of the AUT that have not been documented or have been poorly or incompletely documented in the specification." [61], page 16.

The discussion about negative testing has similar properties to that about positive testing.

Negative testing is obvious and easier to apply when documentation or specifications exist. With documents it is easy to understand which input is wrong and which input we can use, but for systems that lack documents it can be really difficult. This scenario is common in systems without documentation, because we have to decide what is right and what is wrong. Figure 2 shows an example of how difficult it can be to understand what is a normal execution path and what is not. In this example there is no manual or "help" guidance to read, and no specified rules to refer to. Therefore we generally base test case construction only on preconditions, specifying what we assume to be the normal intention and what is not.

Figure 2. Negative Testing is not obvious without specification

Our conclusion is therefore that negative testing creates test cases whose inputs give the system wrong data compared to what the system expects. The intention is to observe how the system reacts to these inputs. We expect the negative test design technique to reflect how well the system manages to handle wrong input (for example with an alert message or some other routine to handle wrong data). The negative test design technique also works outside the accepted norm for the system; in other words, we do something with the system that was not intended.

Imagine a text field where it is only possible to insert numbers, and we want to use negative testing. We have to define which values are accepted and which are not. One way is to define the allowed and not-allowed input domains. We use an ASCII table (Table 1) to show all possible one-character inputs.

TABLE 1. ASCII TABLE

The positive values domain is from 48 to 57, and the negative values domain is from 0 to 47 and from 58 to 127 (in both cases considering the characters corresponding to the Dec column of Table 1).

This test design technique results in a series of test cases, where one value, or a combination of values, from the negative values domain is used as input for a negative testing TC. Again, a combination of the positive and negative values domains can produce more TCs.
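The negative domain above can be enumerated mechanically. A minimal sketch, assuming a digits-only field and the Dec ranges of Table 1:

```java
import java.util.ArrayList;
import java.util.List;

public class NegativeDomain {
    public static void main(String[] args) {
        List<Character> negative = new ArrayList<>();
        // Negative values domain for a digits-only field:
        // Dec 0..47 and 58..127 from the ASCII table (Table 1).
        for (int dec = 0; dec <= 127; dec++) {
            if (dec < 48 || dec > 57) {
                negative.add((char) dec);
            }
        }
        // Each value, or a combination of values, is a candidate input
        // for one negative test case.
        System.out.println("Negative domain size: " + negative.size());
    }
}
```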

3.3 BOUNDARY VALUE ANALYSIS

“It requires that the tester select elements close to the edges, so that both the upper and lower edges of an equivalence class are covered by test cases.

1. If an input condition for the software-under-test is specified as a range of values, develop valid test cases for the ends of the range, and invalid test case for possibilities just above and below the ends of the range.

2. If an input condition for the software-under-test is specified as a number of values, develop valid test case for the minimum and maximum number as well as invalid test cases that include one lesser and one greater than the maximum and minimum.

3. If the input or output of the software-under-test is an ordered set, such as a table or a linear list, develop tests that focus on the first and last element of the set.”[62]

BVA is a rule-based TDT that can be automated if the range can be clearly defined as an ordinal, countable set.

During our study we were a little confused between Equivalence Partitioning and Boundary Value Analysis, because in some cases they are very similar and some TCs are the same. Now we can say that BVA sits on the borders between equivalence partitions, as we explain below. We refer only to numeric EP.

We would like to show an example to better understand this statement. Consider the following domain: 5<=X<=20.

In Equivalence Partitioning we have:

Class A: 5<=X<=20
Class B: values > 20 (we expect all these data to be managed in the same manner, different from class C)
Class C: values < 5 (we expect all these data to be managed in the same manner, different from class B)

In BVA we have the following boundaries, using 5 and 20 as extreme values:

Possible test case on the value 5: use 4 as input
Possible test case on the value 5: use 5 as input
Possible test case on the value 5: use 6 as input
Possible test case on the value 20: use 19 as input
Possible test case on the value 20: use 20 as input
Possible test case on the value 20: use 21 as input

The following figure (Figure 3) shows exactly what we mean.


As we can see, if we represent the equivalence partitions, mark the classes in blue and mark the BVA values in red, the BVA values lie on the borders between the equivalence partitions. Generally, in BVA we have three possible test cases for each extreme value.

Referring to Figure 3, we can see that BVA is composed of three values: the extreme value, the value just below it, and the value just above it.

In conclusion, the number of BVA test cases in a system equals the number of extreme values multiplied by three. This is not a rule, but it is the routine used in our project. In Boundary Value Analysis we focus testing on the extreme values and on the following or previous value, according to the current domain. Each value is a TC.
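The six input values above can be derived mechanically from the extreme values. A minimal sketch of the "three test cases per extreme value" rule used in our project:

```java
import java.util.ArrayList;
import java.util.List;

public class BoundaryValues {
    // For each extreme value, BVA yields three test inputs:
    // the value itself, its predecessor and its successor.
    static List<Integer> bvaInputs(int... extremes) {
        List<Integer> inputs = new ArrayList<>();
        for (int x : extremes) {
            inputs.add(x - 1);
            inputs.add(x);
            inputs.add(x + 1);
        }
        return inputs;
    }

    public static void main(String[] args) {
        // Domain 5 <= X <= 20: the extremes are 5 and 20.
        System.out.println(bvaInputs(5, 20)); // [4, 5, 6, 19, 20, 21]
    }
}
```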

3.4 EQUIVALENCE PARTITIONING

"Partitioning testing techniques must produce distinct partitions of the input domain, and none may overlap, as any intersections could not be considered homogeneous: values within these intersections would not behave similarly to both partitions." [59]

In Equivalence Partitioning testing, the domain is divided into different sub-domains, with the assumption that all data (values, characters and so on) within a sub-domain (class or partition) are treated the same by the system. It is really useful for covering parts of the system untouched by other test techniques.

In Equivalence Partitioning testing we should divide the domain into sub-domains and also add external sub-domains. An example of different partitions, using the domain shown for boundary values (Figure 3), would be:

Partition A: values inside the domain (5, 6, 7, …, 18, 19, 20)
Partition B: upper-bound values (> 20)
Partition C: lower-bound values (< 5)

We make "Partition B" and "Partition C" because we expect values greater than 20 to be handled differently from values lower than 5 (for example with a different error message or a different routine).

After this study we are able to make TCs more easily, because a value from each class can be an input for a test case.
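As a sketch, the classifier below assigns any integer to one of the three partitions of the 5<=X<=20 example; one representative value per partition then becomes a test case input:

```java
public class EquivalencePartitions {
    enum Partition { A_INSIDE, B_ABOVE, C_BELOW }

    // Partitioning for the domain 5 <= X <= 20 used in this example.
    static Partition classify(int x) {
        if (x > 20) return Partition.B_ABOVE;
        if (x < 5)  return Partition.C_BELOW;
        return Partition.A_INSIDE;
    }

    public static void main(String[] args) {
        // One representative per partition is enough for EP:
        System.out.println(classify(12)); // A_INSIDE
        System.out.println(classify(42)); // B_ABOVE
        System.out.println(classify(1));  // C_BELOW
    }
}
```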

3.5 RANDOM INPUT

"It is creating tests where the data is in the format of real data but all fields are generated randomly, often using a tool.

1. The tests are often not very realistic.
2. Many of the tests become redundant.
3. The tests cannot be recreated unless the input data is stored (especially when automated tools are used).
4. There is no gauge of actual coverage of the random tests." [63], page 175.

“Random input is one of the very few techniques for automatically generating Test Cases. Test tools provide a harness for the AUT (Application Under Test), and a generator produces random combinations of valid input values, which are input to the AUT. This technique can be highly effective in identifying obscure defects.”[61 p18]

Random input is not always applicable to every domain. During our study we faced an application that manages the UML domain, and we did not find a method to give random values in the UML domain to the system.

In our view, Random Input means using automated tools to create input data and checking how the system responds; most of the time it can be used for "crash-proofing", or to see if the system will "hang together" under adverse impact.

Copying and pasting random values into systems, we encountered many unexpected results. Java applications, for example (such as the system presented in section 6), have some sort of input checking when we type characters on the keyboard, but they are not capable of detecting values copied and pasted from "The Hat" (see Figure 4).

Figure 4: The Hat interface

The Hat is a free tool from "Harmony Hollow Software" for generating possible inputs for test cases. The tool has two different ways to store input. The first is manual insertion of values: we type many different inputs and then press "shuffle" to receive a value.

The second option is to load a .txt file that contains the values; when we click "shuffle", the tool extracts the value that we then use as input for the TC. Each line of the .txt file represents one possible value.
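We did not script The Hat itself, but its .txt mode can be approximated as in the sketch below; the file name inputs.txt is an assumption for illustration.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.List;
import java.util.Random;

public class RandomInputDraw {
    public static void main(String[] args) throws IOException {
        // Each line of the file represents one possible input value,
        // mirroring The Hat's .txt mode (the file name is illustrative).
        List<String> values = Files.readAllLines(Paths.get("inputs.txt"));
        String drawn = values.get(new Random().nextInt(values.size()));
        // The drawn value is then pasted into the field under test.
        System.out.println("Random input for this TC: " + drawn);
    }
}
```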

3.6 FAULT INJECTION

"Fault injection, i.e., the deliberate introduction of faults into a system (the target system), is applicable every time fault and/or error notions are concerned in the development process. When fault injection is to be considered on a target system, the input domain corresponds to a set of faults F and a set of activations A that specifies the domain used to functionally exercise the system, and the output domain corresponds to a set of readouts R and a set of derived measures M. Together, the FARM sets constitute the major attributes that can be used to fully characterize fault injection." [60]

In our view, Fault Injection is the most important code-level testing technique among the TDT we studied. We essentially modify the system at source-code level and look for changes at system level.

Fault injection testing gives good results in a familiar environment, or when we can reconstruct the software architecture. Constructing good fault injection test cases has some prerequisites, such as a good knowledge of the development language; otherwise we cannot understand what we are touching and why. Likewise, a good knowledge of the system makes fault injection much easier to apply.

If we can understand how components are connected and how they communicate with each other, we can observe error propagation in the system and watch how the affected components react to the code change.

Combining fault injection with coverage testing can give great results, because we want to use fault injection to test areas of the system untouched by coverage testing and increase the average test coverage.

Generally we select fault injection test cases according to the covered source code, trying to select unexplored areas of the source code.
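As a minimal illustration of the kind of code-level change we mean, the hypothetical withdraw method below has one injected fault (a flipped comparison operator); at system level we would then observe whether the wrong behavior propagates or is caught. The Account class is invented for this example and is not taken from any of the systems we tested.

```java
public class Account {
    private double balance = 100.0;

    // Original guard (before injection): withdrawals larger than the
    // balance are rejected:
    //     if (amount > balance) throw new IllegalArgumentException(...);
    // Injected fault: the comparison operator is flipped, so withdrawals
    // SMALLER than the balance are rejected instead. After injection we
    // run the system-level TCs and check whether the fault is caught.
    public void withdraw(double amount) {
        if (amount < balance) {                 // injected fault
            throw new IllegalArgumentException("Insufficient funds");
        }
        balance -= amount;
    }

    public double getBalance() {
        return balance;
    }
}
```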

3.7 EXPLORATORY TESTING

"Exploratory testing is a test design technique where the tester actively controls the design of the tests as those tests are performed and uses information gained while testing to design new and better tests." [64]

We decided to discard exploratory testing as a measured technique, because it is not possible to repeat its test cases and they do not contain an expected outcome or verdict. We use exploratory testing just to explore and use the system.

4. OPEN SOURCE SYSTEM

Open source doesn't just mean access to the source code. The distribution terms of open source software must comply with the following criteria, according to Bruce Perens (pages 177-180) [35]:

“TO BE OPEN SOURCE, ALL OF THE TERMS BELOW MUST BE APPLIED TOGETHER, AND IN ALL CASES. FOR EXAMPLE, THEY MUST BE APPLIED TO DERIVED VERSIONS OF A PROGRAM AS WELL AS THE ORIGINAL PROGRAM. IT'S NOT SUFFICIENT TO APPLY SOME AND NOT OTHERS, AND IT'S NOT SUFFICIENT FOR THE TERMS TO ONLY APPLY SOME OF THE TIME.”

• Free Redistribution
• Source Code
• Derived Works
• Integrity of The Author's Source Code
• No Discrimination Against Persons or Groups
• No Discrimination Against Fields of Endeavor
• Distribution of License
• License Must Not Be Specific to a Product
• License Must Not Restrict Other Software
• License Must Be Technology-Neutral

Free Redistribution means the license may not require a royalty or other fee for such a sale; it is possible to make any number of copies of the software and sell or give them away without paying anyone. Source Code means the program must include source code; the source code must be the preferred form in which a programmer would modify the program, deliberately obfuscated source code is not allowed, and intermediate forms such as the output of a preprocessor or translator are not allowed. Derived Works means the license must allow modifications and derived works. Integrity of The Author's Source Code means the license may require derived works to carry a different name or version number from the original software. No Discrimination Against Persons or Groups means the license must not discriminate against any person or group of persons. No Discrimination Against Fields of Endeavor means the license must not restrict anyone from making use of the program in a specific field of endeavor. Distribution of License means there are no restrictions on its use: it can be used in a business or for genetic research. License Must Not Be Specific to a Product means the rights attached to the program must not depend on the program being part of a particular software distribution; it must remain free if it is separated from that distribution. License Must Not Restrict Other Software means the license must not place restrictions on other software that is distributed along with the licensed software. License Must Be Technology-Neutral means no provision of the license may be predicated on any individual technology or style of interface.

4.1 OPEN SOURCE SELECTION CRITERIONS

For this experiment, we chose to look at open-source-based software systems. We used only open source systems of small size (up to 300,000 lines of code, where the count excludes jar and dll files), since we preferred to look at many systems instead of focusing on only a few in the limited time available. We also chose open source systems of different types, to have a wider range of systems. A list of the different types of open source systems is shown below:

1. Bank / Insurance / Economic
2. Management / Calculate
3. Modeling / Development
4. Graphic / Paint
5. Game / Wits
6. Communication / Government
7. Military
8. Medicine / Diagnostics
9. Tools
10. Embedded
11. Robot (industrial)
12. Electronic
13. Web / Internet
14. Mobile application

For our experiment, we are only using open source systems from the first five types.

4.2 OPEN SOURCE USED

The table below lists the characteristics of the systems used in our experiment. It is divided into seven columns: the first gives the name of the system, and the second its type, according to the types defined in the previous section.

| System | Type | Number of versions | Size of software | First release | Number of releases | Number of downloads |
|---|---|---|---|---|---|---|
| IRC Client [32] | Communication | 1 | 85,4 Kb | 12/17/2002 8:20:42 AM | 1 | 31066 |
| Age Calculator [6] | Management / Calculate | 1 | 16,2 Kb | 04/17/2002 6:32:32 AM | 1 | 7208 |
| DraW [26] | Graphic / Paint | 2 | 341,6 Kb | 20 March 2006 | 1 | 1404 |
| Image Processing [5] | Graphic / Paint | 1 | 45,8 Kb | 08/02/2007 | 1 | 5769 |
| CleanSheet [4] | Management / Calculate | 4 | 1,1 Mb | 12 May 2005 | 4 | 6631 |
| UMLet [20] | Modeling / Development | 9 | 5,99 Mb | not available | 19 | not available |
| Bejeweled [31] | Game / Wits | 1 | 110 Kb | 11/09/2006 16.19 | 1 | 6499 |
| Bomberman [27] | Game / Wits | 2 | 2 Mb | August 2001 | up to 2.4, no further information | not available |
| Euro Budget [25] | Bank / Insurance / Economic | 2 | 1,1 Mb | 16 August 2002 | 2 | 7575 |
| Student Helper [39] | Management / Calculate | 1 | 39 Kb | 9/17/2003 | 1 | 30934 |
| JavaJUSP [38] | Management / Calculate | 1 | 2 Mb | not available | 2 | not available |
| Bank System [40] | Bank / Insurance / Economic | 1 | 169 Kb | 12/10/2003 | 1 | 33677 |
| Image J [28] | Graphic / Paint | 1 | 631 Kb | not available | 2 | not available |
| Latex Draw [30] | Graphic / Paint | 2 | 4,2 Mb | 28 January 2006 | 16 | not available |
| Jmoney [24] | Bank / Insurance / Economic | 4 | 1 Mb | 10 March 2001 | 16 | 3723 |

Table 2. Systems used in this experiment


5. SYSTEMATIC USE OF TEST DESIGN TECHNIQUES

In this section we explain step by step how systematic testing is done on our different systems. We also explain both our test design techniques and our test cases for the systems under test. We describe the method used initially in this experiment, and how we came to understand that we needed a more systematic routine. We found that our initial approach was useful for code-level testing and helped us select appropriate test cases, as we explain in this section.

5.1 MOTIVATION ABOUT SYSTEMATIC ROUTINE

Planning was an important task of this project, because it is the result of all the research on the data input domains, and it was refined many times during the development of the project.

We show how the approach used at the beginning of the project can be used to support the validity of a systematic routine.

At the beginning we tried to develop our routine based on the system's size and our familiarity with its language. Documentation is a problem in open source systems, because it may be poor or affected by erosion and drift, and for these reasons we need a two-way approach.

The approach is divided (Figure 5) into two different paths, depending on what we have at hand.

The documentation approach (blue path in Figure 5) is the simplest one, because we have documentation and can use it as a guide to understand the system and plan testing. We have to be careful about drift and erosion, and validate the documentation against the requirements, the software architecture and the running system to evaluate what we have at hand (Figure 5, point 1.a). For these systems there is no problem with size, and we can test them at each level and with any data input technique, when applicable, using the documentation as a guide to test the open source system (Figure 5, points 1.b and the Testing phase).

Figure 5 (red path) also shows the approach used when no documentation is available.

Figure 5. Our first approach to testing

To better understand each level of testing, we need to draw the software architecture of the system and use it as a guide for testing. Using combinations of testing techniques, their results and software engineering notions, it is possible to improve the software architecture and continue with testing.

Here is the approach step by step:

a) Determine the system's dimension.
b) Search for available documentation (Figure 5, point 2.a).
c) Exploratory testing and a first draft of the software architecture (Figure 5, point 2.b).
d) System-level testing and, when possible, improvement of the software architecture (Figure 5, points 2.b, 2.c).
e) Integration-level testing and, when possible, improvement of the software architecture (Figure 5, back from point 2.c to 2.b).
f) Code-level testing and, when possible, improvement of the software architecture (Figure 5, points 2.b and 2.c).
g) One more iteration from point a) to f) to check the robustness of the data and conclude testing (Figure 5, Testing point).

We start by looking at the source size to understand the system's dimension and learn more about the code. If we have a small or normal-size system, this approach gives good results; otherwise we will spend more time in architectural recovery than in testing. Small, normal and big size systems are classified in the following manner:

For an open source community, where a system has at most 3 developers, we say:

• Small size system: from 3 to 5,000 lines of code
• Normal size system: from 5,001 to 15,000 lines of code
• Big size system: over 15,000 lines of code

In terms of big companies, we say:

• Small size system: from 3 to 280,000 lines of code
• Normal size system: from 280,001 to 2,500,000 lines of code
• Big size system: over 2,500,000 lines of code

If we do not have a path coverage testing tool for the current environment, we go to step b) of the stepwise description.

We read about the main requirements on the application's web page, looking for sections such as "about us", "about" or "references". Forums are a good source of information, since open source systems are uploaded to free communities where the software is discussed.

Step 2: We run the application and become confident with it. The time this takes depends strictly on the system and on the tester's skills. In this phase exploratory testing is a good ally. We start drawing a draft of the software architecture using the elements at hand.

Step 3: We commence with system testing. We improve the software architecture, trying to split big components (discovered before) into smaller ones, or to create a component composed of subcomponents (depending on the current domain).

In Step 4 we can start with integration-level testing, assuming we have a clear architectural view of the system, and we continue to refine the software architecture with the obtained results.

Finally, we use the architecture as a guide to start code-level testing and (optionally, since it is not necessary for testing) complete the architecture.

We then restart the approach, looking for inconsistencies and ambiguities. Here we use basic modeling techniques to clarify ambiguities.

This approach works on small systems, but it has some problems on medium systems and more problems with big systems; indeed, it becomes hard to derive the architecture through testing, because understandability is much more difficult in a big system. We understood that we needed a different way to test the systems. Unfortunately, it is not possible to create a universal routine to test every kind of system.

A tester has to open the system, study it, and be able to associate fields or functionalities with testing techniques, and he has to be capable of combining these techniques according to their current domains. A tester must be systematic, and the results must be valid. In the next paragraph we show our view. The necessity of a systematic approach, the creation of test cases without principles, and the invalid results are the reasons why we moved away from this approach.

This approach was used at the beginning, and we understood its weakness. The explanation is simple: it is not possible to define precisely how many test cases we could create and which parameters we need as input.

5.2 SYSTEMATIC TEST CASE CREATION

As claimed earlier, exploratory testing is used to understand how the system works and to identify the main requirements; otherwise, we read the system's requirements documents (if they exist).

We can summarize our approach in the following tasks:

1. Exploratory testing.
2. Study of the domains.
3. Domain fusion.
4. TC creation.

Task 1: We check the whole system for all data input fields. We collect those inputs to work on them in tasks 2 and 3.

Task 2: We create groups for the different input domains and system functionalities. We collect data inputs from different system areas; in this way we have a good view of all data inputs, grouped by location. For example, as shown in Figure 6, we would collect data in the following manner:

Board: Name(character, size length 100), Width (numeric, size length 6), Height (numeric, size length 6).

Default Value: Level (numeric, size length 6), Point to next level (numeric, size length 6), Time decrease (numeric, size length 6), interval (numeric, size length 6), bonus (numeric, size length 6).

Task 3: All homogeneous values (same type and same size length) within the same functionality are collected as a unique domain, as shown in Figure 6 (dots, dashes and line).

For the considered system example (Figure 6), we can put text fields with the same values together in the same group. This means that when we test one input field of a group, we test all the fields in that group. In this example we create 3 different groups:

• Dots (field Name)
• Dashes (fields Requirement factor and Decrease factor)
• Line (all the remaining fields)

where the dots circle contains text fields, the line circle contains integer number fields, and the dashes circle contains float number fields.

We can also create different groups for other domains, such as symbol fields, date fields or images.

Figure 6. GUI example

Task 4: See section 5.6.
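As a recap of Task 3, a minimal sketch of the grouping rule: fields are keyed by (functionality, type, size length), so that testing one field of a group stands for testing all of them. The field names follow the Figure 6 example; the representation itself is our own assumption for illustration.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class DomainFusion {

    // One GUI input field: where it lives, its value type and its size length.
    record Field(String functionality, String type, int length, String name) {}

    public static void main(String[] args) {
        // Fields collected in Task 2 (names follow the Figure 6 example).
        List<Field> fields = List.of(
                new Field("Board", "character", 100, "Name"),
                new Field("Board", "numeric", 6, "Width"),
                new Field("Board", "numeric", 6, "Height"),
                new Field("Default Value", "numeric", 6, "Level"),
                new Field("Default Value", "numeric", 6, "Bonus"));

        // Task 3: fuse homogeneous fields (same functionality, type and
        // size length) into a single group; one TC per group then covers
        // every field in that group.
        Map<String, List<String>> groups = new HashMap<>();
        for (Field f : fields) {
            String key = f.functionality() + " / " + f.type() + " / " + f.length();
            groups.computeIfAbsent(key, k -> new ArrayList<>()).add(f.name());
        }
        groups.forEach((group, names) -> System.out.println(group + " -> " + names));
    }
}
```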

5.3 TEST TECHNIQUES USED IN EXPERIMENTS

For each system we create and execute at least 20 test cases, but the number can differ between systems. This depends on our ability to perform error guessing: sometimes it is possible to have two error guessing test cases, and sometimes it is impossible to have even one. We select test cases using the following test design techniques: four Normal test cases, four Negative, three Random, three Boundary Value, three Equivalence Partitioning, two Fault Injection, and one code statement coverage measurement to improve the fault injection test cases; in addition, we base our Error Guessing on our initial exploratory play with the system.

5.4 LEVEL OF TEST USED IN EXPERIMENT

In this section we give a short introduction to the different levels of a software development process.

Figure 7. V-model

Typically, it is possible to test at different levels. In our work we create test cases at different levels, and we use a simplified test process model (the V-model, see Figure 7) to identify them:

• Code or Unit
• Integration
• System
• Acceptance

• Code or Unit level [2]: A unit is the "smallest" piece of software that a developer creates; it is typically the work of one programmer.

• Integration level [9]: A test that explores the interaction and consistency of successfully tested components; that is, components A and B have both passed their component tests, and they are aggregated to create a new component C = (A, B). Integration testing is done to explore the self-consistency of the aggregate. The aggregate behavior of interest in integration testing is usually observed at the interface between the components.

• System level: Typically, system testing [2] includes many types of testing: functional, usability, performance and so on. This kind of testing is useful to confirm that all code modules work as specified, and that the system as a whole performs adequately on the platform on which it will be deployed.

• Acceptance level [2]: Defined as the testing which, when completed successfully, results in the customer accepting the software and giving us their money.

The levels at which we applied the different test design techniques are: system, integration and code. A table with the combinations of test technique and level selected in this experiment is shown below.

| Test Technique | System | Integration | Code |
|---|---|---|---|
| Normal | X | - | - |
| Negative | X | - | - |
| Random | X | - | - |
| Boundary values | X | - | - |
| Eq. Partitioning | X | - | - |
| Fault injection | - | - | X |
| Error Guessing | X | - | - |
| Statement Coverage | X | X | X |

Table 3. Testing technique / level

An "X" in the Normal/System position means we used the Normal technique at system level; where we did not test, the cell is marked with the "-" symbol.

5.5 SELECTION OF INPUT VALUES

After the different groups are created, we define a range of values from which it is possible to select specific values for each domain and for each test technique. To do this we use an ASCII table (see Table 4).

TABLE 4. ASCII TABLE

We create the range of values for each group using the column "Dec" of the ASCII table. For example, if we want to test an integer field, we choose values from the column Dec using all the combinations of values from 48 to 57. In this way, we are sure to use only integer values when an integer field is tested. All the possible combinations have an ID code and are shown in the following table; a short sketch after the table shows how such values can be generated.

ID code | Type                  | Values Range
a.1     | Characters, uppercase | [65..90]
a.2     | Characters, lowercase | [97..122]
a.3     | Characters, both      | [65..90] and [97..122]
b       | Integer number        | [48..57]
c       | Float number          | [48..57].[48..57] or [48..57],[48..57]
d       | Symbol                | [32..46] and [58..64] and [91..96] and [123..126]
e       | Date                  | [48..57]/[48..57]/[48..57]
f.1     | File                  | random files written using ASCII table values and loaded into the system
f.2     | Image                 | to load a random image we use www.google.com, type "Image" into the search field and select the first result

TABLE 5. ASCII RANGE VALUES
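As an illustration of how values can be drawn from these Dec ranges, here is a minimal Java sketch of our own, under the assumption that any string built from a single range is a valid representative of that domain:

    import java.util.Random;

    public class AsciiRangeValues {
        static final Random RND = new Random();

        // One random character whose ASCII "Dec" code lies in [lo..hi].
        static char fromRange(int lo, int hi) {
            return (char) (lo + RND.nextInt(hi - lo + 1));
        }

        // A string of the given length drawn from a single ASCII range, so an
        // integer field, for instance, receives only digits (range b, [48..57]).
        static String value(int lo, int hi, int length) {
            StringBuilder sb = new StringBuilder();
            for (int i = 0; i < length; i++) sb.append(fromRange(lo, hi));
            return sb.toString();
        }

        public static void main(String[] args) {
            System.out.println("integer (b):     " + value(48, 57, 5));
            System.out.println("uppercase (a.1): " + value(65, 90, 5));
            System.out.println("lowercase (a.2): " + value(97, 122, 5));
        }
    }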

5.6 TEST CASE CREATION

Normal test cases are created to test the "normal" execution of the system, which means that the main intended usage of a function is executed with what appear to be the "default" normal values expected, testing against the system's documentation. This is not always possible, because most open source systems lack documentation. According to Graham [57], "...Because requirements form the basis for tests, obviously we can't do any testing until we have decent requirements…", so when we have no access to documentation, we first use exploratory testing to "play" with the system and try to identify the main requirements. We aim at describing the main features and, from them, defining the input domain. This means that when we are testing the character fields of a system, we pick values from a specific range. According to table 6, we use range (a) for character values.

Negative test cases are based on the field that has to be tested: we deliberately put wrong data into the system. Inserting wrong values means inserting values outside the appropriate range, so if we are testing a character field of a system, we pick values from ranges that do not belong to it. According to table 6, we use values from ranges (b), (c), (d), (e) and, if possible, also (f). All the possible ranges of values are listed in table 5 above; a small sketch after table 6 illustrates the idea.


Domain Type          | Normal range | Negative range
Characters           | (a)          | (b), (c), (d), (e) and if possible also (f)
Characters lowercase | (a)          | (a.1) and (a.3)
Characters uppercase | (a)          | (a.2) and (a.3)
Text                 | (a)          | (b), (c), (d), (e) and if possible also (f)
Integer numbers      | (b)          | (a), (c), (d), (e) and if possible also (f)
Float numbers        | (c)          | (a), (b), (d), (e) and if possible also (f)
Symbols              | (d)          | (a), (b), (c), (e) and if possible also (f)
Date                 | (e)          | (a), (b), (c), (d) and if possible also (f)
Files/images         | (f.1)/(f.2)  | (a), (b), (c), (d), (e) and empty file or image

TABLE 6. NORMAL AND NEGATIVE RANGE VALUES
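A minimal sketch of the idea for an integer field follows; the validator acceptsIntegerField is hypothetical and merely stands in for the system under test:

    import java.util.List;

    public class NormalNegativeExample {
        // Hypothetical validator standing in for an integer-only input field.
        static boolean acceptsIntegerField(String input) {
            return input.matches("[0-9]+");   // only values from range (b)
        }

        public static void main(String[] args) {
            // Normal test inputs: values inside the valid range (b).
            List<String> normal = List.of("0", "42", "1000");
            // Negative test inputs: values from ranges (a), (c), (d), (e).
            List<String> negative = List.of("abc", "3.14", "@#!", "12/05/2010");

            normal.forEach(v -> System.out.println(v + " accepted: " + acceptsIntegerField(v)));   // expect true
            negative.forEach(v -> System.out.println(v + " accepted: " + acceptsIntegerField(v))); // expect false
        }
    }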

Random input test cases are created by inserting random input values, defining the input domain (when possible) with an automated tool called "The Hat" (see the appendix section "The Hat"), and verifying the response of the system.
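We do not reproduce "The Hat" itself here; the following minimal stand-in of our own only illustrates the principle of drawing random inputs from a predefined domain:

    import java.util.*;

    public class RandomDraw {
        public static void main(String[] args) {
            // Input domain defined beforehand; entries are drawn at random,
            // like pulling names out of a hat.
            List<String> domain = new ArrayList<>(
                    List.of("0", "9999", "abc", "!?%", "24/12/2010"));
            Collections.shuffle(domain, new Random());
            // The first three shuffled entries become the random test inputs.
            System.out.println(domain.subList(0, 3));
        }
    }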

Boundary value analysis test cases are based on the field that has to be tested. We try to concentrate the software testing effort on the boundary values of a system, also called the limits of the valid values (see table 7), although sometimes it is not easy to recognize them, or they do not exist. Equivalence Partitioning test cases are also based on the field that has to be tested. We divide a domain into different equivalent partitions (when possible), which is useful because in this way we can reduce the number of test cases: we create one test case for each partition. For example, if we are testing an integer field whose allowed range is from 10 to 100, we create test cases using values from range (b) (see table 7); a small sketch after the table illustrates both techniques on this example.


Domain Type        | Boundary values range                 | Equivalence Partitioning range
Characters or text | not applicable                        | (a) and (b), (c), (d), (e) and if possible also (f)
Integer numbers    | depends on the domain of the system   | (a) and (b), (c), (d), (e) and if possible also (f)
Float numbers      | depends on the domain of the system   | values in (b) according to the field under test
Symbols            | not applicable                        | values in (c) according to the field under test
Date               | boundary values for the day depend on the values assumed by month and year; for more detail see Karan and Wenying, page 72 [15] | not always applicable
Files or images    | depends on the domain of the system   | not always applicable

TABLE 7. BVA RANGE VALUES
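For the integer example with allowed range 10 to 100 mentioned above, a minimal sketch of the two techniques could look as follows (the chosen representatives are our own illustration):

    public class BvaEpExample {
        public static void main(String[] args) {
            // Field under test: integer with allowed range [10..100].
            int min = 10, max = 100;

            // Boundary value analysis: values on and just around the limits.
            int[] boundaries = { min - 1, min, min + 1, max - 1, max, max + 1 };

            // Equivalence partitioning: one representative per partition
            // (below the range, inside it, above it) is enough.
            int[] representatives = { 5, 55, 150 };

            for (int v : boundaries)      System.out.println("BVA input: " + v);
            for (int v : representatives) System.out.println("EP  input: " + v);
        }
    }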

Test cases for fault injection are created by inserting faults into the core of the system. As Hsueh, Tsai and Iyer [59] put it, "...the program instruction must be modified before the program image is loaded and executed." We change the core of the program (initial value) by introducing faults (mutated value) and checking whether the fault propagates in the code or is caught (sometimes the system crashes, or unusual system behavior can be observed). Examples of faults are shown in Table 8, and a small code sketch follows the table.

Code  | Type                 | Initial Value              | Mutated Value
FIJCO | Comparison operators | == != < > <= >=            | != == > < >= <=
FIJFP | Function parameters  | modify the parameter(s) of a function when it is called
FIJDL | Skip condition¹      | if (..) {...}              | /* if (..) {...} */
FIJCV | Change value         | e.g. while (i < 10) {...}  | e.g. while (i < 20) {...}

TABLE 8. FAULT INJECTION TABLE

¹ The "if" condition is commented out; in this way some controls are skipped and it is possible to analyze unusual system behavior. It is also possible to comment out a whole function.
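As an illustration of the FIJCO mutation from Table 8, the following minimal Java sketch (the originalCheck guard is hypothetical) contrasts the initial and the mutated comparison operator:

    public class FaultInjectionExample {
        // Hypothetical guard taken as the "initial value".
        static boolean originalCheck(int age) {
            return age >= 18;
        }

        // FIJCO mutation (Table 8): ">=" replaced by "<=" before the program
        // is built; a good test suite should notice the changed behaviour.
        static boolean mutatedCheck(int age) {
            return age <= 18;
        }

        public static void main(String[] args) {
            for (int age : new int[] { 17, 18, 19 }) {
                System.out.println(age + ": original=" + originalCheck(age)
                        + " mutated=" + mutatedCheck(age));
            }
            // If no test fails on the mutated build, the fault was not caught.
        }
    }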


Error guessing depends on the tester and on how well the tester knows the system.

5.7 TEST CASE TABLE

In this section we show how to write a test case using a table. In this table we record all the information needed to understand a test case; according to Kamde, Nandavadekar and Pawar [56], a test case table has to have a name and number, a stated purpose that includes which requirement is being tested, a description of the method of testing, actions and expected results, must not exceed 15 steps, and must be saved in specified formats and file types. We use a detailed test case table with the following fields:

 Test case ID
 Test Technique and Level
 Input
 Comment
 Expected value
 Actual value
 Time
 Verdict

Test case ID is an ID used to identify the whole test case; it is composed in the following way: XXXYYYZ00,

where XXX is a short name of the system, YYY is the name of the test (see Table 9), Z is the test level (S = System, I = Integration, C = Code) and 00 is an identification number. A small sketch after Table 9 shows how such an ID can be assembled.

Code | Name of test
NRL  | Normal
NEG  | Negative
RND  | Random
BVA  | Boundary Value Analysis
FIJ  | Fault Injection
CCV  | Code statement coverage
ERG  | Error Guessing

TABLE 9. TEST DESIGN TECHNIQUES
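As referenced above, here is a minimal sketch of how such an ID can be assembled (the system short name "SUD" is hypothetical):

    public class TestCaseId {
        // Assemble an ID of the form XXXYYYZ00: system short name, technique
        // code from Table 9, level letter and a two-digit case number.
        static String buildId(String system, String technique, char level, int number) {
            return String.format("%s%s%c%02d", system, technique, level, number);
        }

        public static void main(String[] args) {
            // "SUD" is a hypothetical system short name.
            System.out.println(buildId("SUD", "BVA", 'S', 3));   // prints SUDBVAS03
        }
    }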

Test Technique and Level is used to record the name of the test design technique (see section 3) and the level of test (see section 5.4).

Input is the description of the input. Describing an input is not as simple as it looks. First, we need to know the input domain, what an input is, and whether we are putting it into the right box or not. Second, we need to specify the input very clearly at integration level, because we must explain which part of the system (or group of
