
What Makes Agile Test Artifacts Useful? An Activity-Based Quality Model from a Practitioners’ Perspective

Jannik Fischbach, Henning Femmer
Qualicen GmbH
firstname.lastname@qualicen.de

Daniel Mendez, Davide Fucci
Blekinge Institute of Technology
firstname.lastname@bth.se

Andreas Vogelsang
University of Cologne
vogelsang@cs.uni-koeln.de

ABSTRACT

Background: The artifacts used in Agile software testing and the reasons why these artifacts are used are fairly well-understood.

However, empirical research on how Agile test artifacts are eventually designed in practice and which quality factors make them useful for software testing remains sparse. Aims: Our objective is two-fold. First, we identify current challenges in using test artifacts to understand why certain quality factors are considered good or bad. Second, we build an Activity-Based Artifact Quality Model that describes what Agile test artifacts should look like.

Method: We conduct an industrial survey with 18 practitioners from 12 companies operating in seven different domains. Results: Our analysis reveals nine challenges and 16 factors describing the quality of six test artifacts from the perspective of Agile testers. Interestingly, we observed mostly challenges regarding language and traceability, which are well-known to occur in non-Agile projects.

Conclusions: Although Agile software testing is becoming the norm, we still have little confidence about general do’s and don’ts going beyond conventional wisdom. This study is the first to distill a list of quality factors that practitioners deem important for useful test artifacts.

CCS CONCEPTS

• Software and its engineering → Agile software development; • General and reference → Empirical studies.

KEYWORDS

agile testing, artifact quality, industrial survey

ACM Reference Format:

Jannik Fischbach, Henning Femmer, Daniel Mendez, Davide Fucci, and Andreas Vogelsang. 2020. What Makes Agile Test Artifacts Useful? An Activity-Based Quality Model from a Practitioners’ Perspective. In ESEM ’20: ACM / IEEE International Symposium on Empirical Software Engineering and Measurement (ESEM) (ESEM ’20), October 8–9, 2020, Bari, Italy. ACM, New York, NY, USA, 10 pages. https://doi.org/10.1145/3382494.3421462

Also with fortiss GmbH.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.

ESEM ’20, October 8–9, 2020, Bari, Italy

© 2020 Association for Computing Machinery.

ACM ISBN 978-1-4503-7580-1/20/10. . . $15.00 https://doi.org/10.1145/3382494.3421462

1 INTRODUCTION

The Agile Software Development (ASD) principle “working software over comprehensive documentation” promotes that documentation should be kept to what is necessary or useful [3]. Hence, common ASD frameworks, such as Scrum [21], mention only a few artifacts (Epics, User story, etc.) that should be created, used, and maintained for documentation purposes. Instead, face-to-face communication should be encouraged in order to convey information.

Nevertheless, Agile practitioners have increasingly changed their attitude towards documentation [22] and are producing a variety of artifacts that are not inherent to ASD [2, 16, 23]. According to Wagenaar et al. [24], practitioners need additional artifacts for four reasons: i) they provide team governance, ii) they are useful for internal communication, iii) they are needed by external parties, and iv) they are useful for quality assurance. For the latter reason, a range of additional artifacts (e.g., Acceptance tests) are commonly created to perform comprehensive software testing.

Currently, we understand which test artifacts Agile teams introduce (or should introduce) on their own initiative [2] and why they are needed [24]. However, empirical research on how Agile test artifacts are designed in practice and, more specifically, which properties make them useful for quality assurance remains sparse.

Existing normative standards such as the ISTQB Acceptance Testing Syllabus [14] or ISO 29119:2013 [15] occasionally mention some properties that test artifacts should possess. However, there are issues with these normative standards. Firstly, the list of properties is not complete: most of the properties are defined for the artifacts introduced by the ASD frameworks but not for the additionally required artifacts introduced by the team. Secondly, normative standards describe quality through abstract properties; e.g., Acceptance criteria should be both “precise and concise” [14]. The standard does not provide any further description of what is meant by these vague properties. Thirdly, the empirical basis and reasoning for these criteria remains unclear. This implies that the criteria are difficult, if not impossible, to falsify. We argue that, for a combination of all of these reasons, we observe in practice that Agile teams fail to satisfy these normative criteria and struggle in maintaining their documentation artifacts [13].

In contrast to these existing ways to define quality criteria, we argue that the quality of test artifacts should be defined from a quality-in-use perspective. Following the idea of Activity-Based Artifact Quality Models [10], we postulate that the quality of a test artifact depends on the stakeholder using it and the activities for which it is used. Accordingly, we explore properties (so-called “quality factors”) of test artifacts that have a positive or negative impact on the activities of the stakeholders. To understand why a certain quality factor is considered good or bad by the practitioners, we first study current challenges in using test artifacts.

Consequently, we extend the normative qualities with a list of concrete factors describing what Agile test artifacts should look like.

For this purpose, we conduct an industrial survey based on one-on-one interviews with 18 practitioners from 12 companies operating in seven domains, and make the following contributions (C):

C 1: A list of nine challenges that practitioners face when using the test artifacts Acceptance criterion, Acceptance test, Feature, Test documentation, Test data, and Unit tests.

C 2: An Activity-Based Quality Model of 16 quality factors for these artifacts, serving as a foundation for systematic quality control in practice.

2 FUNDAMENTALS

In this section, we briefly define the theory that we will use as the foundation of this work. Femmer and Vogelsang [10] argue that it is not sufficient to speak of good and bad quality in general since the quality of an artifact depends on the context in which it is used.

More specifically, quality is determined by the stakeholders and the activities that they conduct with the artifact. The quality of an artifact is considered good if its properties allow stakeholders to effectively and efficiently carry out their activities. Following this line of thought, they propose Activity-based Artifact Quality Models (ABAQM) and apply them to study the quality of requirements engineering artifacts [9]. In this paper, we create an ABAQM for all test artifacts involved in ASD, which enables us to understand their quality in the Agile context. We use the following concepts to describe an ABAQM (see a meta model in Fig. 1):

Artifact: Following the quality-in-use paradigm, an artifact is a collection of coherent documented information which assists a stakeholder in reaching the project goals. Examples of artifacts are Use case documents and Test data. Artifacts that share similar properties can be combined into a generalized super-class. For example, Unit tests, Integration tests and System tests address different test levels but can be bundled into a super-class Test. In addition, artifacts can contain other artifacts, such as a User story which contains multiple Acceptance criteria.

Stakeholder: A stakeholder is interested in an artifact and uses it during a certain activity. An example of a stakeholder is a Test designer, who uses User stories to derive Acceptance tests.

Activity: An activity is an invested effort which involves one or more of the mentioned artifacts. An activity can be divided into sub-activities. For example, Acceptance test design can be decomposed into Acceptance test creation and Acceptance test updating. During an activity, stakeholders do not only use artifacts but also create new ones. Hence, artifacts can be both input and output of activities.

Quality Factor: A quality factor is a property that is or is not present in an artifact. Femmer and Vogelsang stress that this property “must be objectively assessable through a measure to be used for quality control” [10]. For example, a Test should only contain the minimum number of required Test cases to avoid excessive testing. This quality factor Minimal can be evaluated objectively.

Impact: An impact is a relation between a quality factor and an activity. The relation can be either positive (i.e., the presence of the quality factor supports the stakeholder in the execution of an activity) or negative (i.e., the quality factor hinders the stakeholder). The aforementioned quality factor Minimal, for example, has a positive impact on the activity Testing.

Fig. 1: ABAQM meta model and mapped RQs, based on [9]
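To make these concepts more tangible, the following minimal Python sketch (ours, not part of the original model) encodes the ABAQM meta model concepts from Fig. 1; all class names, fields, and the example instance are illustrative:

```python
# Minimal, illustrative sketch of the ABAQM meta model concepts (Fig. 1).
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class Artifact:
    name: str
    super_artifact: Optional["Artifact"] = None                # "generalized by", e.g. Unit test -> Test
    contains: List["Artifact"] = field(default_factory=list)   # e.g. User story contains AC


@dataclass
class Activity:
    name: str
    sub_activities: List["Activity"] = field(default_factory=list)
    inputs: List[Artifact] = field(default_factory=list)       # artifacts used during the activity
    outputs: List[Artifact] = field(default_factory=list)      # artifacts created by the activity


@dataclass
class Stakeholder:
    name: str
    performs: List[Activity] = field(default_factory=list)


@dataclass
class QualityFactor:
    name: str
    present_in: Artifact


@dataclass
class Impact:
    factor: QualityFactor
    activity: Activity
    positive: bool  # True = supports the stakeholder, False = hinders the stakeholder


# Example instance: the quality factor "Minimal" of an Acceptance test
# positively impacts the activity "Acceptance testing".
acceptance_test = Artifact("Acceptance test")
acceptance_testing = Activity("Acceptance testing", inputs=[acceptance_test])
product_owner = Stakeholder("Product owner", performs=[acceptance_testing])
minimal = QualityFactor("Minimal", present_in=acceptance_test)
impact = Impact(factor=minimal, activity=acceptance_testing, positive=True)
```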

3 METHODOLOGY

In order to identify and understand the quality factors of Agile test artifacts, we chose a (qualitative) survey as our research method. For our study, we followed the guidelines by Ciolkowski et al. [7] for conducting empirical studies based on surveys. These guidelines include six steps that are performed in an iterative fashion: definition, design, implementation, execution, analysis, and packaging.

3.1 Survey Definition

3.1.1 Goal of this study. Following the Goal-Question-Metric [1] technique, we define the goal of our survey as follows:

(1) Object. Test artifacts
(2) Purpose. Identify, understand, and define
(3) Focus. Quality factors
(4) Viewpoint. Agile practitioners
(5) Context. Agile Software Development Projects

The expected outcome of our survey is a better understanding of the quality factors of test artifacts. In our activity-based quality understanding, these are properties that positively or negatively affect the stakeholders and their follow-up activities. These quality factors should provide guidance for practitioners on how test artifacts should be designed. They should further establish the foundation for a systematic quality control of test artifacts. Based on the classification of Robson [20], our research goal is exploratory as we are seeking new insights into the quality of Agile test artifacts.

3.1.2 Research Questions. Based on the idea that artifact quality is determined by the context in which it is used, we derived five research questions (RQ) from our survey goal. Each RQ addresses a specific component of the ABAQM meta model (see Fig. 1).

RQ 1: Which stakeholders are involved in Agile testing?

RQ 2: Which activities are performed by the stakeholders?

RQ 3: Which artifacts are used by the stakeholders in the context of these activities?

RQ 4: Which quality factors positively influence the execution of these activities?

RQ 5: Which quality factors negatively influence the execution of these activities?


Tab. 1: List of Participants

Company  No.  Role                Size (employees)  Domain
C1       P1   Product Owner       150k              Insurance
         P2   Test Designer
C2       P3   Test Designer       1k                Retail
         P4   Test Lead
C3       P5   Test Lead           20                Software
C4       P6   Agile Team Lead     50                E-Mobility
C5       P7   Partner             500               IT Consulting
         P8   Software Developer
         P15  Software Developer
C6       P9   Product Owner       10                PropTech
C7       P10  Agile Coach         50                IT Consulting
C8       P11  Agile Team Lead     1k                Software
C9       P12  Software Developer  2k                Software
C10      P13  Software Architect  200               Software
C11      P14  Software Architect  200               Software
         P16  Software Architect
C12      P17  Business Analyst    40k               Reinsurance
         P18  Business Analyst

3.2 Survey Design

3.2.1 Population and survey sample. The selection of the survey participants was driven by a purposeful sampling strategy [19].

Specifically, we defined criteria that the participants need to meet to be suitable for our survey: a) they work for a company that develops software following a defined Agile software paradigm (e.g., SCRUM, SAFe), b) they have been involved in the testing process for at least one year, and c) they create, use and/or maintain at least one test artifact. Each researcher involved in the survey prepared a list of potential interview partners using their industrial contacts (convenience sampling). From this list, the research team jointly selected suitable partners based on their adequacy for the study.

To further increase the sample size, we asked each interviewee for relevant contacts after the interview (snowball sampling). We stopped conducting interviews after we reached saturation, i.e., once we could no longer identify new quality factors. Tab. 1 presents an overview of the participants, their roles, and information about their companies. In total, 18 practitioners from 12 different companies operating in seven different domains participated in our survey. We did not restrict our population with regard to company size or application domain. Rather, we involved practitioners from companies of different domains and sizes to obtain a holistic understanding of test artifact quality.

3.2.2 Data Collection. We chose interviews over other data collection instruments for two reasons. First, ambiguities in the questions can be resolved directly, ensuring that all questions are understood correctly and that they are not skipped. Second, the interviewer can observe the behavior of the participants and ask them to elaborate on their responses (e.g., to better understand the reasoning of the participant, or to go deeper into details). This is particularly important to understand why the participant considers the quality of a certain test artifact good or bad.

3.2.3 Questionnaire Design. Prior to conducting the interviews, we developed an interview guideline to gather the data for answering our RQs. We designed the interview questions to systematically identify the elements of the ABAQM and thus shed light on the quality of test artifacts (see Tab. 2).

Tab. 2: Questionnaire Structure

Intro
  How many employees work at your company?
  In which domain does the company operate?
  Since when does your project follow the Agile software paradigm?
  Which framework (Crystal, SCRUM etc.) do you follow?

Core
  What is your role in the testing process?
  Which activities do you perform?
  What is the purpose of your activities?
  Which artifacts do you create as part of your work?
  Which artifacts do you use as part of your work?
  Which artifacts do you maintain as part of your work?
  What do you need the artifacts for?
  Do problems or challenges arise during your activities?

Probing
  What exactly about the artifact bothers you?
  How should the artifact be designed instead?
  How is the quality of the test artifacts currently checked?

For this purpose, we followed the guidelines of Dillman et al. [8] to reduce common mistakes when setting up a questionnaire (e.g., avoiding double-barreled questions). Since our research goal and RQs are of an exploratory nature, most questions are open-ended. Our questionnaire consists of 15 questions, including 13 open-ended questions and two closed questions (see Tab. 2). In each interview, we asked introductory questions to gather information about the participant’s background (e.g., company, experience in ASD), followed by questions about the activities of the participants and the artifacts they use in the context of these activities. To avoid misinterpretations, we gave a short briefing on the concepts central to this study at the beginning of each interview.

The greatest challenge in compiling the questionnaire was to develop questions for determining the quality factors. For this purpose, we discussed two different questioning strategies. First, ask the participants directly which quality factors are important for the artifact to be useful (e.g., “Which properties should the artifact possess from your perspective?”). In this case, the participant is explicitly asked to state the quality factors. Second, initially ask the participants which problems occur during their activities and the usage of their artifacts. Subsequently, use probing questions to ask what exactly bothers the stakeholder about the artifact and how the artifact should have been designed instead. Using this “problem-oriented questioning approach,” we first determined the problems related to artifact usage and then derived the respective quality factors from these problems.

3.2.4 Pilot. To decide which questioning strategy is best suited for the creation of an ABAQM, we designed a questionnaire for both strategies and evaluated them in a pilot (see step 1 in Fig. 2). We conducted the pilot phase iteratively; it consisted of two parts, the internal pilot and the real case pilot. In the internal pilot, the questions were continuously refined by the research team with regard to suitability, understandability, and correctness. The real case pilot involved two interviews with participants from the targeted population, and revealed that the “problem oriented” approach is more suitable for collecting the quality factors. Practitioners struggle to abstract and define independently which property of an artifact leads to good or bad quality. It proved more effective to gather the quality factors together by first discussing current challenges and then successively deriving the quality factors of the artifacts. Hence, the probing questions are an integral part of our questionnaire as they encourage the participants to expand on a particular anecdote and to define precisely what they like or dislike about the artifact.

3.3 Survey Implementation

During this step, we compiled all the material needed to conduct the survey. We prepared an invitation letter to ask potential participants for an interview. In addition, we provided our questionnaire to the participants in advance to allow them to get a first impression of the content of the interview and prepare accordingly.

3.4 Survey Execution

All interviews were conducted by the first author. They lasted 41 minutes on average, with a minimum of 31 and a maximum of 67 minutes, and took place from March to May 2020. All interviews were conducted remotely via GoToMeeting and Google Hangouts as face-to-face interviews were infeasible due to COVID-19. We interviewed all participants individually to prevent their statements from being influenced by others. The participants were informed, before starting the interview, that the data would be treated anonymously. Additionally, all interviews were conducted in the native language of the participant (German), with the exception of one conducted in English. The audio of all interviews was recorded with the permission of the participants for subsequent analysis (see step 2 in Fig. 2). Due to confidentiality agreements with the respective individuals, the recordings cannot be published.

3.5 Survey Analysis

Since most of our interview questions are open-ended, we decided to use qualitative content analysis in order to analyze the interview data (see step 3 in Fig. 2). Following the guidelines of Mayring [18], we conducted a content analysis inductively as our research goal is explorative and we needed to derive the quality factors of the test artifacts from the interview data. The first author analyzed the interview recordings and performed two steps for each interview.

First, for each artifact discussed in the interview, the mentioned problems were determined. Second, we derived factors from these problems that influence the quality of the artifacts positively or negatively. In particular, we studied the answers to the probing questions, which give a precise insight into which quality factors are considered good or bad. To validate our results, we performed an internal review process (see step 4 in Fig. 2). We involved three students and provided them with the interview recordings as well as the hypotheses and derived quality factors. They performed the same steps as the first author and compared the results in order to agree on the information to be extracted. In case of deviations, the respective passages in the interview were analyzed together until reaching a consensus. After the validation process, we used frequency analysis to find out which problems were mentioned most often. This provides a first indication of where systematic quality control may be most needed.

3.6 Survey Packaging

We report our results in two ways. Firstly, in the form of a research paper to share our findings regarding quality control of Agile test artifacts with the research community. Secondly, as an executive summary to share the results with the interviewed practitioners.

4 RESULTS

This section presents the results of our survey structured according to the research questions. Based on our activity-based quality understanding, we describe for each test artifact (see RQ 3) the stakeholders using it (see RQ 1) and the context of the activities (see RQ 2). As described in Section 3.2.3, we applied a “problem oriented” questioning approach to determine the quality factors. We report the identified challenges arising when the stakeholders use each artifact during their activities. From these challenges, we derived the factors that positively (see RQ 4) or negatively (see RQ 5) influence the quality of the artifacts. A positive impact of a quality factor is indicated by ⊕, while a negative impact is indicated by ⊖. The artifacts that were discussed by our participants were acceptance criterion, acceptance test, feature, test documentation, test data, unit test and, finally, all test artifacts for factors independent of the concrete artifact.

Our final ABAQM (see Fig. 3) includes 16 quality factors. Most of the quality factors support the stakeholders in carrying out their activities (13 out of 16 quality factors). However, three quality factors hinder the stakeholders in performing certain activities.

4.1 Artifact 1: Acceptance Criterion

4.1.1 Stakeholder and Activities. Acceptance criteria (AC) are conditions that a system must meet in order to fulfill a User story and be ultimately accepted by the user. AC are used by Test designers during Acceptance test creation. This activity involves two steps. Firstly, the Test designer analyzes all AC assigned to a particular User story to understand the expected system behaviour. Secondly, the Test designer derives Test cases for each AC and merges them into an Acceptance test, which is later used in the quality assurance process to check the compliance of the system with the User story. All Test designers interviewed stated that both steps are performed manually.

4.1.2 Challenges and Quality Factors. We found two major challenges with the usage of AC during Acceptance test creation. From these challenges, we derived six quality factors.

Challenge 1: Acceptance criteria are ambiguously formulated. The interviewed participants complained about the poor linguistic quality of the AC. In many cases, it is not clear “what exactly the system is supposed to do, which makes it difficult to derive test cases” (P3). Hence, Test designers need to contact the Business analyst who defined the criterion and clarify its meaning before they can start with the actual test case creation. This leads to delays in the test design process. According to P2 and P3, it would be helpful if the formulation of an AC is checked prior to the test process to ensure that only testable AC are submitted to the Test designer. For this purpose, the Test designer should be involved in the formulation of the AC. However, this would require additional resources which are very limited in practice as stressed by P2 and P17:

“We lack the time to discuss every acceptance criterion with each other. We are dependent on the formulation skills of our business analysts.” (P2)

“The quality of acceptance criteria varies from project to project. Some analysts specify them precisely, while some do not. However, checking the formulations manually is not possible due to tight time constraints.” (P17)

Fig. 2: Overview of the method followed in our industrial survey: (1) preparing and validating the instrument, (2) conducting interviews, (3) qualitative content analysis, (4) review process, and (5) creating the Activity-Based Artifact Quality Model.

Hence, a quality assurance check of ACs should be performed automatically to be suitable for practical use. In this context, the AC should be reviewed with respect to the following quality factors:

QF 1: Coreferences ⊖ A user story usually contains multiple AC specifying the expected system behaviour. In practice, these AC often contain coreferences (i.e., expressions that refer to the same entities). As the number of AC increases, it becomes difficult to resolve these coreferences correctly, which hinders the test design. Hence, AC should not contain coreferences to ensure testability.

QF 2: Vague phrases ⊖ The interviews show that AC are usually defined using unrestricted natural language. The use of natural language is intuitive for Business analysts, but bears the risk of vagueness and ambiguity. As already stated by Berry and Kamsties, this can lead to “diverging expectations and inadequate or undesirably diverging implementations” [4]. We found that vague phrases often occur in AC and hinder the Acceptance test design.

“You often see criteria like “the system should be able to upload the data quickly.” What exactly is meant by quickly? You do not know then what to test.” (P11)

“A typical example you often see: “if possible, the system should do xy.” It is unclear what possible means.” (P2)

We refrain from listing all vague phrases in the ABAQM since a number of studies (e.g. [9, 12]) have already dealt with this quality factor in requirements. Instead, we want to explicitly point out that this quality issue is also relevant for Acceptance criteria.

QF 3: References to emails, calls, and documents ⊖ Instead of fully documenting the desired system functionality, expressions like “as discussed by phone” are often found within the AC. This leads to a series of problems. First, the AC can only be understood by the stakeholders involved in the call and cannot be converted into test cases by another Test designer. Therefore, the testability of the AC is limited due to the undocumented, implicit knowledge. Second, information about the system functionality is lost with changes in the project team and it is no longer known which functionality the created test case was initially supposed to test. As a result, there is no traceability between the created Test case and the AC. In C1, AC also often contain references to other Acceptance criteria or documents (e.g. “as described in document x”). Due to the high change dynamics in Agile projects, these references quickly become outdated, which results in gaps in the requirement specification.
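As an illustration of the automated checks the participants ask for, the following minimal Python sketch flags vague phrases (QF 2) and undocumented references (QF 3) in an AC using simple keyword matching; the phrase lists are illustrative examples drawn from the quotes above, not a validated catalogue:

```python
# Minimal, illustrative keyword-based check for QF 2 (vague phrases) and
# QF 3 (references to mails, calls, and documents). Phrase lists are examples only.
import re

VAGUE_PHRASES = ["quickly", "if possible", "as soon as possible", "user-friendly"]
REFERENCE_PHRASES = ["as discussed by phone", "as discussed via email", "as described in document"]


def check_acceptance_criterion(ac: str) -> list[str]:
    """Return a list of findings for a single acceptance criterion."""
    findings = []
    for phrase in VAGUE_PHRASES:
        # Match the phrase as whole words, case-insensitively.
        if re.search(r"\b" + re.escape(phrase) + r"\b", ac, re.IGNORECASE):
            findings.append(f"vague phrase: '{phrase}'")
    for phrase in REFERENCE_PHRASES:
        if phrase.lower() in ac.lower():
            findings.append(f"undocumented reference: '{phrase}'")
    return findings


if __name__ == "__main__":
    ac = "If possible, the system should upload the data quickly (as discussed by phone)."
    for finding in check_acceptance_criterion(ac):
        print(finding)
```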

Challenge 2: Lack of an overview of dependencies between acceptance criteria. Acceptance test design involves not only the creation of Acceptance tests for new system functionalities but also the adaptation of existing Acceptance tests to changing customer requirements. The latter is essential in order to keep requirements and tests aligned. For this purpose, Test designers need to understand the relationship between already implemented User stories and new User stories, and adapt the test suite accordingly. If the new User story introduces a new functionality, a new Acceptance test must be created and added to the test suite. If the User story changes an already implemented functionality, the existing Acceptance tests must be adapted. However, it is increasingly difficult to identify these relationships due to the high number of User stories. The interviews showed that for every requested change in the software a new User story is created, rather than the existing User story being changed. This observation coincides with the results of the study by Hotomski et al. [13]. Consequently, the number of User stories and AC is growing steadily, making it more and more difficult to keep track of them:

“If someone adds a new user story to the backlog that changes or overwrites another user story, we don’t notice it. So we don’t know which tests we need to adjust.” (P2)

Instead, separate Acceptance tests are created for each User story, resulting in a test suite constantly increasing in scope and complexity. When the test suite is executed and some tests fail, “we don’t know if these tests reveal a real bug in the system or if they are checking old functionality and should have been updated” (P2). This leads to additional effort and therefore high testing costs. A similar situation is found in company C2:

“We do not know whether some acceptance criteria overlap or even contra- dict each other. Hence, it sometimes happens that we create contradictory test cases.” (P3).

QF 4: Conflict-free ⊕ A Test designer can only maintain a consistent test suite if the underlying AC are not contradictory. Consequently, practitioners require a method that automatically compares AC with each other and reveals inconsistencies. This will have a positive impact on both Acceptance test creation and updating as it indicates which User stories the Test designer needs to check, as stressed by P2:

“As a test designer I have to understand where and how the functionality is supposed to change and how I need to adapt my tests. If you could somehow automatically display overlaps between acceptance criteria, that would help me a lot.” (P2).

In the case of major changes introduced by the new User story, the existing Acceptance test should be archived and a new Acceptance test created. Otherwise, the existing Acceptance test should be adapted. This helps to avoid false negatives (i.e., invalid failing tests) during test execution. According to P2, P10 and P11, the old User story should eventually be assigned the status “old” and linked to the new User story. This is essential to enable version control of the test assets as described in Section 4.7.

QF 5: Unique ⊕ In order to prevent the creation of unnecessary tests, it is essential that AC do not describe redundant functionalities. If two AC describe the same functionality, the Business analyst needs to be informed and both AC need to be merged.
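As a rough illustration of the automated comparison called for in QF 4 and QF 5, the following Python sketch flags potentially overlapping or redundant AC pairs via lexical similarity; this is a naive heuristic (the threshold and the example AC are illustrative), not the semantic comparison the participants ultimately ask for:

```python
# Naive, illustrative heuristic: flag AC pairs with high lexical overlap as
# candidates for redundancy (QF 5) or conflict review (QF 4).
from itertools import combinations

ACCEPTANCE_CRITERIA = {
    "AC-1": "If the user uploads a file, the system stores the file in the archive.",
    "AC-2": "If the user uploads a document, the system stores the document in the archive.",
    "AC-3": "The system sends a weekly report to the administrator.",
}

THRESHOLD = 0.6  # illustrative cut-off for flagging a pair


def jaccard(a: str, b: str) -> float:
    """Jaccard similarity over lower-cased word sets."""
    tokens_a, tokens_b = set(a.lower().split()), set(b.lower().split())
    return len(tokens_a & tokens_b) / len(tokens_a | tokens_b)


for (id_a, text_a), (id_b, text_b) in combinations(ACCEPTANCE_CRITERIA.items(), 2):
    similarity = jaccard(text_a, text_b)
    if similarity >= THRESHOLD:
        print(f"Potential overlap between {id_a} and {id_b} (similarity {similarity:.2f})")
```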

QF 6: Link to related Acceptance Criteria ⊕ Acceptance tests sometimes cover more than one AC, sometimes from more than one User story. Therefore, knowing the relations between AC makes it possible to minimize the overall testing effort since related functionalities can be tested simultaneously.

“We often noticed afterwards that test cases could have been bundled together, e.g. for acceptance criteria that describe the same UI view.” (P17)

“We’ve been trying to optimize our test suite for some time now. But we fail frequently to create combined test cases for related requirements, because we do not know which acceptance criteria belong together.” (P4)

In order to create such joint tests, the Test designer needs to understand the relationships between the AC and consider them during test case creation. Accordingly, the quality of AC is considered good if they are linked to related AC. An example is two AC that handle the same input parameters, such as AC 1: “If input A then function B” and AC 2: “If input A and input B then function C.” Both AC can be checked with a joint Acceptance test.
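The following pytest sketch illustrates such a joint test for the two example AC above; the function system_under_test and its behaviour are hypothetical placeholders, not taken from the study:

```python
# Illustrative joint acceptance test covering two related AC:
#   AC 1: "If input A then function B"
#   AC 2: "If input A and input B then function C"
# `system_under_test` is a hypothetical entry point so the sketch is runnable.
import pytest


def system_under_test(input_a: bool, input_b: bool) -> str:
    if input_a and input_b:
        return "function C"
    if input_a:
        return "function B"
    return "no action"


@pytest.mark.parametrize(
    "input_a, input_b, expected",
    [
        (True, False, "function B"),   # covers AC 1
        (True, True, "function C"),    # covers AC 2
        (False, False, "no action"),   # negative scenario shared by both AC
    ],
)
def test_related_acceptance_criteria(input_a, input_b, expected):
    assert system_under_test(input_a, input_b) == expected
```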

4.2 Artifact 2: Acceptance Test

4.2.1 Stakeholder and Activities. Acceptance tests are instruments used to verify the conformity between user expectations and actual system behavior (Acceptance testing). Each Acceptance test contains a set of Test cases. In practice, there are two ways of Acceptance testing: internal Acceptance testing (alpha testing), which is performed by members of the organization that developed the software, and external Acceptance testing (beta testing), which is performed by the customer. As we were not able to talk to customers during the study, we examine the quality of Acceptance tests from the perspective of an internal Product owner in the context of alpha testing.

4.2.2 Challenges and Quality Factors. Our interviews reveal two challenges with the usage of Acceptance tests during Acceptance testing. We derived three quality factors from the identified challenges.

Challenge 3: Acceptance tests contain too many or too few test cases. We found that Acceptance tests are often not systematically created, resulting in incomplete or excessive Test cases.

“We do not follow any particular procedure in the preparation of acceptance tests. Every test designer does this based on his experience. Of course, such a manual process is prone to errors, because you can overlook some cases. In fact, I have also been in the situation where I forgot test cases.” (P3)

In the case of missing Test cases, system defects are not (or only partially) detected. As a result, faulty software is ultimately delivered to the customer, leading to errors in production and lower customer satisfaction. According to an internal analysis conducted by company C1, 83% of the system defects at company C1 could have been detected by more complete Test cases. A similar observation was made in company C4:

“We often experience that our live system does not completely fulfill all user stories. This could have been avoided by the right acceptance tests” (P6).

Instead of systematically determining which Test cases are required to cover an AC, they are usually created based on past experience of the Test designer. This makes the test case derivation error-prone and increases the risk of missing test cases, as Test designers “tend to only test the positive cases and not the negative ones” (P1, P9). A major challenge is the complexity of the AC:

“We often have to implement highly complex business rules that include a range of parameters. It’s hard to decide which combinations of parameters should be tested” (P1).

This increases the risk of missing Test cases. However, not only might Test cases be missing, but superfluous Test cases might also be created, leading to an increase in the testing effort. According to P1, many Test designers lack the required qualification and, more importantly, the time for a systematic test case derivation.

Consequently, there is a great demand for an automated test case derivation from AC to maintain the high development speed.

QF 7: Positive and negative scenarios ⊕ Acceptance tests are only suitable for detecting system defects if they are complete (i.e., covering all positive and negative Test cases).

QF 8: Minimal ⊕ Achieving QF 7 is crucial to the quality of an Acceptance test; however, it is also necessary to strike a balance between full test coverage and the required number of Test cases. More specifically, an Acceptance test should contain only the minimum number of Test cases needed to fully cover the AC in order to minimize the required testing effort.

Challenge 4: Lack of automation of acceptance tests. The interviews revealed that the degree of test automation is still insufficient in practice. At the lower levels of the test pyramid, such as Unit tests and Integration tests, the execution has mostly been automated. However, Acceptance tests are usually performed manually, resulting in large testing efforts.

"Our acceptance tests are always carried out manually. Therefore, the testing process takes a rather long time and we are highly dependent on how the product owner performs the test." (P18)

This represents a major challenge, especially as the development project proceeds and the system’s functionality increases:

"Before each new release, we have to run acceptance tests that check already implemented user stories to avoid regression. The number of tests quickly increases during a project and then you ask yourself who is going to execute these tests? We have to run the new acceptance tests as well." (P7) The reason for the low automation of Acceptance tests stems not from limited tool support, but rather from the fact that many companies still neglect to use them: We found that some smaller companies like C6 have already automated the majority of their Acceptance tests. The problem of insufficient automation occurs mainly in large companies like C1 and C2. According to P9, P1 and P6, this might be due to the culture of these companies, who allegedly refuse to implement new automation tools initially and therefore introduce them with a considerable delay.

QF 9: Automated ⊕ In order to cope with the high development speed, Acceptance testing needs to be automated, e.g. through tools such as Selenium, Cypress and Robot Framework.
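As an illustration of QF 9, the following minimal Selenium sketch shows what an automated acceptance test might look like; the URL, element IDs, and expected message are placeholders, not taken from the study:

```python
# Illustrative automated acceptance test with Selenium WebDriver (QF 9).
# URL, element IDs, and the expected message are placeholders.
from selenium import webdriver
from selenium.webdriver.common.by import By


def test_upload_acceptance():
    driver = webdriver.Chrome()
    try:
        driver.get("https://example.org/upload")                         # placeholder URL
        driver.find_element(By.ID, "file-input").send_keys("/tmp/testdata.csv")
        driver.find_element(By.ID, "upload-button").click()
        message = driver.find_element(By.ID, "status").text
        assert message == "Upload successful"                            # expected behaviour from the AC
    finally:
        driver.quit()
```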


4.3 Artifact 3: Feature

4.3.1 Stakeholder and Activities. A Feature is a specific piece of functionality that is desired by the customer. In case of new or changed Features in the current iteration, the Test lead must verify that no regression on already implemented Features is introduced (Regression testing). With a growing system, the scope of Regression testing also increases, so that running an entire regression test suite is time consuming:

"In our project, some manual regression tests take four days." (P4) A similar picture emerges with automated regression tests per- formed by continuous integration tools:

"We run our automated tests via Travis. The Travis build for our entire application takes an entire day." (P5)

Such long test suite runs pose a major problem, especially considering the short sprint cycles that are often only two weeks. Selecting the right regression tests is therefore essential in order to minimize the testing effort. Specifically, the Test lead needs to run the regression test for the changed Feature and for all dependent Features to identify potential regressions (Regression test selection). For this purpose, knowledge about Feature dependencies is required.

4.3.2 Challenges and Quality Factors. We found one major challenge related to the usage of Features during Regression test selection. From this challenge, we derived two quality factors.

Challenge 5: Lack of an overview of dependencies between features. In practice, there is no overview of the relationships between Features. As a result, there is a negative impact on the Regression test selection as it is not transparent which regression test runs are necessary. Consequently, practitioners need to test on a risk-based basis:

"I execute the regression tests of all those features which I know from expe- rience to be related to the changed feature." (P5)

This is prone to errors, resulting in a growing desire of the Test leads for a "functional structure" in their project, which indicates the relationship between the Features. This makes it possible to understand which features might be impacted by a change of a certain feature and need to be tested.

QF 10: Link to dependent feature ⊕ According to P4 and P5, there are too many Features in the projects to manually track the dependencies between them. Furthermore, existing tools such as Jira do not provide the option of illustrating relationships between Features via links. Hence, there is a need for a method that automatically reveals dependent Features in order to establish the "functional structure".

QF 11: Link to regression test ⊕ For the selection of regression tests, the Test lead requires a clear traceability between Features and corresponding regression tests. Hence, each Feature must have a link to a corresponding regression test.
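A minimal Python sketch of how such links could drive Regression test selection, assuming feature dependencies (QF 10) and test links (QF 11) are recorded as simple mappings; the example data is illustrative:

```python
# Illustrative regression test selection based on feature links (QF 10, QF 11).
# The feature names, dependencies, and test names are made up for this sketch.

FEATURE_DEPENDENCIES = {          # feature -> features that depend on it
    "login": ["checkout", "profile"],
    "checkout": ["invoicing"],
    "profile": [],
    "invoicing": [],
}

REGRESSION_TESTS = {              # feature -> linked regression test (QF 11)
    "login": "test_login_regression",
    "checkout": "test_checkout_regression",
    "profile": "test_profile_regression",
    "invoicing": "test_invoicing_regression",
}


def select_regression_tests(changed_feature: str) -> set[str]:
    """Collect the tests of the changed feature and all transitively dependent features."""
    to_visit, affected = [changed_feature], set()
    while to_visit:
        feature = to_visit.pop()
        if feature not in affected:
            affected.add(feature)
            to_visit.extend(FEATURE_DEPENDENCIES.get(feature, []))
    return {REGRESSION_TESTS[feature] for feature in affected}


print(select_regression_tests("login"))
# -> the regression tests of login, checkout, profile, and invoicing
```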

4.4 Artifact 4: Test Documentation

4.4.1 Stakeholder and Activities. In addition to Regression test selection, the Test lead is also responsible for Test reporting and Estimation planning. As part of Test reporting, the Test lead has to provide an overview of successful and failed tests after each iteration to track the progress of the development team. For this purpose, a comprehensive Test documentation is required.

4.4.2 Challenges and Quality Factors. We found one major challenge related to the usage of Test documentation during both Test reporting and Estimation planning. From this challenge, we derived two quality factors.

Challenge 6: Test results and effort are not properly documented. The interviews revealed a common problem: test results are not properly documented at all test levels. Especially at the intermediate test levels, such as integration testing, there is a lack of an overview of the results. Therefore, reporting is mainly done at the unit and acceptance levels.

"I experienced a number of projects that do not separate between the test levels and consider every technical test as a unit test and simply document all results at unit level." (P10)

Thus, it is difficult to identify the bugs at the correct test levels and change the software accordingly.

QF 12: Contains passed/failed rates at each test level ⊕ Each test type needs to be linked to its respective test result. Specifically, the Test documentation needs to contain the corresponding test result for each Unit test, Integration test, System test and Acceptance test to provide a comprehensive overview of all testing levels at any time during the life cycle of the software.

In addition, we found that not only the test results, but also the test effort is not properly documented, which leads to issues in Estimation planning. This activity aims at planning the resources needed to perform the testing in the next iteration:

"It is very difficult to estimate the required number of testers when I join a new project as there is no documentation of the required test efforts from previous iterations". (P4)

Thus, there is no indication of how many working days the testers involved in the project needed to validate former User stories, as stressed by P4 and P13:

"We need some key performance indicators especially for Agile testing that are tracked and documented during the test execution. Based on these, I can plan future test activities." (P4)

"For example, it would be great to know how many story points were implemented and tested in past sprints." (P13)

QF 13: Contains testing effort per Story Point ⊕ A Test lead needs an overview of the number of User stories implemented in past sprints, their story points and the required test effort. Specifically, it is necessary to document how many working days the testers needed per story point to get an overview of their testing capabilities and estimate future testing efforts accordingly.
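The following minimal Python sketch illustrates what a test documentation record covering QF 12 and QF 13 could look like; the field names and sprint data are illustrative, not prescribed by the participants:

```python
# Illustrative test documentation record: passed/failed rates per test level (QF 12)
# and testing effort per story point (QF 13). All names and numbers are examples.
from dataclasses import dataclass


@dataclass
class SprintTestReport:
    sprint: str
    results_per_level: dict          # test level -> (passed, failed)
    story_points_done: int
    tester_working_days: float

    def effort_per_story_point(self) -> float:
        return self.tester_working_days / self.story_points_done


report = SprintTestReport(
    sprint="Sprint 12",
    results_per_level={
        "unit": (412, 3),
        "integration": (58, 1),
        "system": (21, 0),
        "acceptance": (14, 2),
    },
    story_points_done=34,
    tester_working_days=17.0,
)

print(f"{report.effort_per_story_point():.2f} tester working days per story point")  # 0.50
```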

4.5 Artifact 5: Test Data

4.5.1 Stakeholder and Activities. In addition to suitable Test cases, Test data is also needed to systematically test the behavior of the system. Test data is therefore required at all test levels. Our interviews indicated that there are two main approaches to decide which stakeholder is responsible for which testing level. In small companies, the Software engineer usually conducts end-to-end tests, i.e. he is responsible for all test levels:

"We do not have dedicated tester roles. Our software engineers perform the entire process from unit testing to system testing." (P9, P6)


In large companies, the test levels are allocated to different roles:

"Unit tests are written and executed by our software engineers, but we have different testers who perform integration tests or other test types." (P18) However, the interviews showed that all roles have an interest in high quality Test data.

4.5.2 Challenges and Quality Factors. We found one challenge related to the usage of Test data and derived one quality factor.

Challenge 7: Lack of test data to properly test the software. In practice, the generation of Test data is a great challenge, so that "we often do not have enough test data or the quality of our test data is poor" (P1). In this context, poor quality denotes the deviations from real production data. The participants complained that their Test data often does not cover all possible boundary cases that might occur in production, so that the system is not tested under all potential conditions. We found this issue in large companies like C1 as well as in small companies like C6. This is mainly caused by the fact that Test data is not systematically derived from production data, but rather that testers use random test values as Test data.

"I often see the problem in projects that poor test data is used. Poor means that the developer enters arbitrary values in his unit test, but omits constellations that might occur in practice. Obviously, this leads to errors." (P7)

Hence, the testing of all possible boundary values depends on the experience of the testers.

QF 14: Boundary values ⊕ To ensure that all potential exceptional conditions are covered during testing, the Test data must contain the same boundary cases as the production data.

"We need a method that learns to generate appropriate test data from production data." (P1)

In this context, it is crucial to keep the Test data anonymized, especially when dealing with sensitive data.
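As an illustration of QF 14, the following minimal Python sketch derives boundary test values from an (anonymized) sample of production data; the derivation strategy and the data are illustrative:

```python
# Illustrative derivation of boundary test values from production data (QF 14)
# instead of using arbitrary values. Data and the chosen boundaries are examples.

production_order_amounts = [0.0, 12.5, 99.99, 100.0, 2499.0, 2500.0]  # anonymized sample


def boundary_test_values(values: list[float]) -> list[float]:
    """Return the minimum, maximum, and just-outside-the-range values as test data."""
    low, high = min(values), max(values)
    epsilon = 0.01
    return [low - epsilon, low, high, high + epsilon]


print(boundary_test_values(production_order_amounts))
# [-0.01, 0.0, 2500.0, 2500.01]
```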

4.6 Artifact 6: Unit Tests

4.6.1 Stakeholder and Activities. Unit tests are usually implemented and used by Software engineers to test individual units of the source code or sets of modules.

4.6.2 Challenges and Quality Factors. We found one challenge related to the usage of Unit tests and derived one quality factor.

Challenge 8: Inadequate Code Coverage of Unit Tests. The interviews demonstrated that there are not only problems with functional testing (i.e. poor Acceptance testing) but also problems with testing at lower test levels:

"There are always bugs in production that could have been detected at lower levels." (P1)

The core problem mentioned by the participants is that Unit tests are not created following a certain pattern. Rather, it depends on the developer and the reviewer which Unit tests are created. This leads to a strongly fluctuating quality of the Unit tests. Similar to the automation of Acceptance tests, we found differences between small and large companies. The smaller companies were able to give us an overview of the code coverage of their Unit tests, while larger companies were not aware of the quality of their Unit tests.

QF 15: Code coverage ⊕ To control the quality of Unit tests, code coverage metrics should be applied as they make it possible to determine how much of the developed code is tested. The participants mentioned arbitrary thresholds (e.g. 80%) which they considered useful.

"This does not mean that no errors can occur. But it provides a good first overview of the quality of my unit tests." (P7)

4.7 All Test Artifacts

In the following, we present a challenge that applies to all test artifacts equally. Hence, the quality factor derived from this challenge is relevant for all presented test artifacts.

Challenge 9: Missing version control of all test assets. Configuration management is an integral activity to monitor and control the status of software during its life cycle. Version control of source code and automated tests is already anchored in today’s business practice and is supported by control systems such as git, allowing teams to track code changes over time. However, version control of all Agile test artifacts is only partially implemented:

"We often don’t know what the software is capable of doing or has done at a certain point in time and what exactly we have tested." (P1)

This poses a problem especially in regulatory environments. For example, companies in the insurance industry need to document the functionality of the different software versions and prove which tests have been performed to verify that functionality. Hence, practitioners "need the historic information of all test assets." (P1, P2).

QF 16: Documented Status ⊕ All test artifacts need to be maintained with an appropriate status. This can be illustrated by a User story and its corresponding Acceptance test. After the creation of a User story, its status is set to "New". It will be set to "Committed" by the developer once its implementation has started. After the implementation, the User story’s status is set to "Resolved" and is finally set to "Done" by the Product owner if the Acceptance test was successful. To monitor the status of the software during its life cycle, it is indispensable that artifacts are archived and not discarded. For example, if a new User story overwrites another one, the old User story, including its Acceptance tests’ status, should be set to "old", and the old User story should reference the new User story. This makes it possible to review the tested functionality and the test results for any given build at any given point in time.
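The following minimal Python sketch formalizes the status lifecycle described above; the status names follow the example in the text, while the transition rules are our illustrative interpretation:

```python
# Illustrative status lifecycle for a user story and its acceptance test (QF 16).
# Status names follow the example in the text; the transition rules are a sketch.
from enum import Enum


class Status(Enum):
    NEW = "New"
    COMMITTED = "Committed"
    RESOLVED = "Resolved"
    DONE = "Done"
    OLD = "old"          # archived, superseded by a newer user story


ALLOWED_TRANSITIONS = {
    Status.NEW: {Status.COMMITTED, Status.OLD},
    Status.COMMITTED: {Status.RESOLVED, Status.OLD},
    Status.RESOLVED: {Status.DONE, Status.OLD},
    Status.DONE: {Status.OLD},
    Status.OLD: set(),
}


def transition(current: Status, target: Status) -> Status:
    if target not in ALLOWED_TRANSITIONS[current]:
        raise ValueError(f"Illegal status transition: {current.value} -> {target.value}")
    return target


status = Status.NEW
for nxt in (Status.COMMITTED, Status.RESOLVED, Status.DONE):
    status = transition(status, nxt)
print(status.value)  # Done
```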

5 DISCUSSION AND RELATED WORK

In this section, we discuss our findings and put them in context of related work.

Finding 1: Identified challenges are partly similar to known problems from non-Agile projects. Poor formulation of requirements and lack of traceability between artifacts are well-known problems from traditional projects [17]. Interestingly, we have also identified these two problems in our survey. The lack of traceability was observed especially for the artifacts AC and Feature. According to our frequency analysis, the lack of traceability between AC was mentioned by eight of the 18 respondents, whereas the missing traceability between Features was mentioned by only four respondents (see Fig. 4). The challenge of inadequately formulated Acceptance criteria was mentioned by more than 80% of the respondents and therefore has the highest frequency in our sample.


Fig. 3: Activity-Based Quality Model for Agile test artifacts.

Our results indicate that already known problems from traditional software projects could not be solved by the shift to Agile software development. Regardless of the development paradigm applied, practitioners seem to face the same quality issues in some of the test artifacts they use.

Implications for Practice & Academia If teams switch to an Agile paradigm, common requirements and test engineering problems still need explicit attention and won’t go away by themselves.

Academia will continue evaluating whether solutions applied in traditional software engineering also work in ASD.

Finding 2: Quality model contains only currently relevant quality factors. There are already a number of studies on quality factors of artifacts used in traditional projects. For example, there exists a large body of work on quality dimensions of data [5, 25, 26]. An integrated view is provided by Catarci and Scannapieco [6], who define the quality of data by the criteria accuracy, completeness, consistency, and timeliness. We were surprised that the interviewed practitioners mentioned only a few of the already known quality factors and instead emphasized specific factors. For example, in the context of test data, the practitioners only mentioned boundary values that the Test data must cover. Completeness of Test data, i.e. "the degree to which a given data collection includes data describing the corresponding set of real-world objects" [6], seems to be a critical problem in Agile testing. The other quality factors were not discussed. We assume that the practitioners have less pressing problems in maintaining these factors and therefore do not mention them explicitly. Our problem-oriented questioning approach focuses on current and critical problems. As a result, our quality model contains only those quality factors that are difficult for the practitioners to achieve. This can be seen as a strength, but also as a weakness, since it produces an incomplete model, yet provides a quality model of what is most relevant.

Fig. 4: Frequency Analysis of Mentioned Challenges.

Since project pressure is such a dominant topic in our interviews, we would argue that this makes the model more useful for practitioners.

Implications for Practice & Academia If practitioners do not have enough time for full-blown QA of test artifacts, we suggest starting with the quality factors mentioned by fellow practitioners.

Academia, however, needs to validate the quality model, in particular regarding the relative relevance of the quality factors.

Finding 3: Most identified quality factors cannot be controlled manually. Multiple studies have shown that the high change dynamics and development speed in Agile projects require increasing automation of the test process. For example, Fischbach et al. stress the need for an automatic Test case derivation from Acceptance criteria [11]. Our survey indicates that quality control of test artifacts should also be automated as far as possible. We identified a number of quality factors which should be controlled since they have a significant impact on testing activities. However, they cannot be managed manually due to time constraints. The interviewed practitioners are aware of many of the identified quality factors, but cannot meet them without automated tool support. The need for tool support primarily concerns all quality factors related to traceability, such as QF 4, 5, 6, and 10.

Implications for Practice & Academia Academia and practice need to collaborate on creating effective and efficient tool support for automatic quality control of test artifacts.

Finding 4: The quality of test artifacts influences the quality of other artifacts indirectly. We reported which quality factors of a test artifact have a positive or negative impact on certain activities.

Our interviews revealed, however, that the presence or absence of the identified quality factors not only affects the activity itself but also its output and thus another test artifact which is used in subsequent activities. This can be illustrated by the two artifacts AC and Acceptance test. We identified seven quality factors of an AC which have an impact on Acceptance test design. The output of this activity is a set of Acceptance tests (see Fig. 3). Consequently, the quality of Acceptance tests is indirectly impacted by the quality of AC as they influence the activity in which Acceptance tests are created. This reflects the common claim that quality defects in early artifacts (e.g. requirements-like artifacts) have consequences across multiple layers of indirection.

Implications for Practice & Academia Practitioners should carefully analyze which artifacts are at the beginning of their processes and focus their limited QA resources on these artifacts, in particular on AC. Academia should try to understand this effect in more depth and further qualify and quantify the impact.


6 THREATS TO VALIDITY

Internal Validity. The interviewees may have misunderstood the questions, resulting in poor-quality or invalid answers. To minimize this threat, we followed the guidelines by Ciolkowski et al. [7] in the creation of the questionnaire. In addition, we conducted a pilot phase to validate the questionnaire internally through discussions in the research team and externally through pilot interviews. Another threat is that the interviewed practitioners may not have had the necessary knowledge to provide suitable input to our study. We minimized this threat by selecting practitioners based on previously defined criteria to ensure sufficient experience. As in every interview-based survey, practitioners' statements may be incorrect due to fear, pride, or other subjective biases, despite us stressing the anonymity of the study. As such, our resulting quality model reflects subjective views on quality and needs to be validated with experiments. Selection bias is another threat to internal validity. Although we started with personal contacts to find participants, the sampling process was extended via indirect contacts (snowball sampling), which reduces this threat. Our study is also subject to potential researcher bias, because all interviews and the data analysis were conducted by the first author alone. To minimize this threat, all interviews were audio-recorded to document their results and to provide a basis for further analysis. In addition, the hypotheses and quality factors derived by the first author were validated in an internal review process to mitigate confirmation bias. Furthermore, we assured credibility by sending the identified quality factors to the participants for validation (member checking).

Construct Validity. The questionnaire might not sufficiently cover our research questions, limiting the availability of data that provides suitable answers to them. To minimize this threat, we performed two mitigation actions. First, we designed the questionnaire to successively identify the individual elements of the ABAQM. In addition, we mapped the questionnaire questions to the research questions and discussed in the research group whether the questions were adequate or whether further questions were required to answer the RQs in a targeted way.

Reliability. As in every interview-based survey, the limited sample size and the sampling strategy do not provide the statistical basis to generalize the results of the study beyond the studied companies and stakeholders. However, we tried to interview practitioners in different roles from different domains and companies of different sizes to obtain a comprehensive picture of the quality of the test artifacts. Nevertheless, the results of our frequency analysis are not statistically representative and do not allow a general conclusion about challenges in using test artifacts. To achieve reasonable generalizability, future studies should investigate our derived hypotheses in a broader survey and assess their relevance, e.g., by using a Likert scale.

7 CONCLUSION

Quality of test artifacts matters. In this paper, we conducted an industrial survey to create an Activity-Based Artifact Quality Model to define what this means from a stakeholder's viewpoint. Specifically, we explored quality factors of test artifacts that have a positive or negative impact on the activities of Agile testers. Our quality model contains 16 quality factors for six test artifacts that are reportedly relevant to at least five stakeholders in the process. Further studies should validate the findings, extend the quality model, and research the objective relevance of the mentioned quality factors. We encourage Agile testers to use our quality model as the foundation for systematic quality control in practice.

REFERENCES

[1] V. R. Basili, G. Caldiera, and D. H. Rombach. 1994. The Goal Question Metric Approach. Encyclopedia of Software Engineering 1 (1994).

[2] J. Bass. 2016. Artefacts and agile method tailoring in large-scale offshore software development programmes. Information and Software Technology 75 (2016).

[3] K. Beck, M. Beedle, A. van Bennekum, A. Cockburn, W. Cunningham, M. Fowler, J. Grenning, J. Highsmith, A. Hunt, R. Jeffries, J. Kern, B. Marick, R. C. Martin, S. Mellor, K. Schwaber, J. Sutherland, and D. Thomas. 2001. Manifesto for Agile Software Development.

[4] D. M. Berry and E. Kamsties. 2004. Perspectives on Software Requirements. Chapter Ambiguity in Requirements Specification.

[5] M. Bovee, R. P. Srivastava, and B. Mak. 2001. A conceptual framework and belief-function approach to assessing overall information quality. International Journal of Intelligent Systems 18 (2001).

[6] T. Catarci and M. Scannapieco. 2003. Data quality under the computer science perspective. Archivi & Computer 2 (2003).

[7] M. Ciolkowski, O. Laitenberger, S. Vegas, and S. Biffl. 2003. Practical Experiences in the Design and Conduct of Surveys in Empirical Software Engineering.

[8] D. A. Dillman, J. D. Smyth, and L. M. Christian. 2014. Internet, Phone, Mail, and Mixed-Mode Surveys: The Tailored Design Method.

[9] H. Femmer, D. M. Fernández, S. Wagner, and S. Eder. 2017. Rapid quality assurance with requirements smells. Journal of Systems and Software 123 (2017).

[10] H. Femmer and A. Vogelsang. 2019. Requirements Quality Is Quality in Use. IEEE Software 36 (2019).

[11] J. Fischbach, A. Vogelsang, D. Spies, A. Wehrle, M. Junker, and D. Freudenstein. 2020. SPECMATE: Automated Creation of Test Cases from Acceptance Criteria. In ICST.

[12] V. Gervasi, A. Ferrari, D. Zowghi, and P. Spoletini. 2019. From Software Engineering to Formal Methods and Tools, and Back: Essays Dedicated to Stefania Gnesi on the Occasion of Her 65th Birthday. Chapter Ambiguity in Requirements Engineering: Towards a Unifying Framework.

[13] S. Hotomski, E. B. Charrada, and M. Glinz. 2016. An Exploratory Study on Handling Requirements and Acceptance Test Documentation in Industry. In RE.

[14] International Software Testing Qualifications Board. 2019. Certified Tester Specialist Syllabus. https://www.istqb.org/downloads/send/62-acceptance-testing/257-acceptance-testing-specialist-syllabus.html

[15] ISO 29119 2013. Software and systems engineering – Software testing. Standard. International Organization for Standardization.

[16] O. Liskin. 2015. How Artifacts Support and Impede Requirements Communication. In REFSQ, S. A. Fricker and K. Schneider (Eds.).

[17] D. M. Fernández and S. Wagner. 2015. Naming the pain in requirements engineering: A design for a global family of surveys and first results from Germany. Information and Software Technology 57 (2015).

[18] P. Mayring. 2014. Qualitative content analysis: theoretical foundation, basic proce- dures and software solution.

[19] M. Q. Patton. 1990. Qualitative evaluation and research methods.

[20] C. Robson. 2002. Real World Research - A Resource for Social Scientists and Practitioner-Researchers.

[21] K. Schwaber. 1995. SCRUM Development Process. In OOPSLA.

[22] C. J. Stettina and W. Heijstek. 2011. Necessary and Neglected?: An Empirical Study of Internal Documentation in Agile Software Development Teams. In SIGDOC.

[23] G. Wagenaar, R. Helms, D. Damian, and S. Brinkkemper. 2015. Artefacts in Agile Software Development. In PROFES.

[24] G. Wagenaar, S. Overbeek, G. Lucassen, S. Brinkkemper, and K. Schneider. 2018. Working software over comprehensive documentation – Rationales of agile teams for artefacts usage. Journal of Software Engineering Research and Development 6 (2018).

[25] Y. Wand and R. Y. Wang. 1996. Anchoring Data Quality Dimensions in Ontological Foundations. Commun. ACM 39 (1996).

[26] R. Y. Wang and D. M. Strong. 1996. Beyond Accuracy: What Data Quality Means to Data Consumers. J. Manage. Inf. Syst. 12 (1996).
