

Bachelor’s degree Project

Verification and validation of knowledge-based clinical decision support systems: a practical approach

A descriptive case study at Cambio CDS

Author: José Duarte de Sousa Barroca
Supervisors: Hemant Ghayvat, Daniel Toll
External Supervisor: Rong Chen

Semester: VT 2021

Subject: Computer Science


Abstract

The use of clinical decision support (CDS) systems has grown progressively during the past decades. CDS systems are associated with improved patient safety and outcomes, better prescribing and diagnostic practices by clinicians, and lower healthcare costs.

Quality assurance of these systems is critical, given the potentially severe consequences of any errors. Yet, after several decades of research, there is still no consensual or standardized approach to their verification and validation (V&V). This project is a descriptive and exploratory case study aiming to provide a practical description of how Cambio CDS, a market-leading developer of CDS services, conducts its V&V process.

Qualitative methods, including semi-structured interviews and coding-based textual data analysis, were used to elicit the description of the V&V approaches used by the company.

The results showed that the company’s V&V methodology is strongly influenced by its model-driven development approach, a strong focus on and leveraging of domain knowledge, and good testing practices centred on automation and test-driven development. A few suggestions for future directions were discussed.

Keywords: verification and validation, software testing, clinical decision support systems, computer-interpretable guidelines.


Preface

In late 2017, when I decided to interrupt my clinical practice as a medical doctor to study software development, my goal was to someday be able to use technology to improve the quality of care. As time progressed, I discovered the fields of health informatics and CDS systems, which captured my interest as the perfect synergy of my clinical experience and my newly acquired technical knowledge. This project allowed me to further explore these topics and provided me with the great opportunity of collaborating with Cambio CDS.

I wish to thank Rong Chen for his friendly and enthusiastic reception to my initial contact and project ideas, his constant support in finding the best way to structure this project and, of course, for providing me with the opportunity to be a part of the Cambio CDS team. I extend my thanks to the entire Cambio CDS team for their constant availability, support, and encouragement.

I also wish to thank Daniel Toll for stepping forward and being there to assist me in this journey through the marshy terrains of qualitative studies in computer science.

Finally, to Rita, who made this journey possible, to Pedro, who has followed us since its beginning, and to Clara, who joined the three of us at its end – thank you for your smiles, love, and support.


List of acronyms and abbreviations

AI	Artificial intelligence
CDS	Clinical decision support
CI/CD	Continuous integration/continuous delivery
CIG	Computer-interpretable guideline
CPG	Clinical practice guideline
DDSS	Diagnostic decision support system
EHR	Electronic health record
GDL	Guideline Definition Language
GUI	Graphical user interface
HIT	Health information technology
MDD	Model-driven development
MDR	Medical Devices Regulation
QA	Quality assurance
SRS	Software requirement specifications
V&V	Verification and validation


Contents

Abstract
Preface
List of acronyms and abbreviations
Contents
1. Introduction
   1.1 Background
   1.2 Related work
   1.3 Problem formulation
   1.4 Motivation
   1.5 Scope/Limitation
   1.6 Outline
2. Method
   2.1 Research Project
   2.2 Research Methods
      2.2.1 Literature review
      2.2.2 Case study and qualitative methods
      2.2.3 Coding and data analysis
   2.3 Reliability and Validity
      2.3.1 Reliability
      2.3.2 Internal validity
      2.3.3 External validity
   2.4 Ethical Considerations
      2.4.1 Non-disclosure
      2.4.2 Participation and consent
      2.4.3 Confidentiality
3. Theoretical Background
   3.1 Clinical decision support
      3.1.1 Definition
      3.1.2 Applications and effects of CDS systems
      3.1.3 Classification of CDS systems
      3.1.4 Knowledge-based CDS systems
      3.1.5 Knowledge representation in rules-based systems
      3.1.6 Clinical practice guidelines and computer-interpretable guidelines
      3.1.7 Guideline Definition Language
   3.2 Verification and validation
      3.2.1 Verification of knowledge-based systems and CIGs
      3.2.2 Validation of knowledge-based systems and CIGs
   3.3 Model-driven development
4. Results
   4.1 Approaches to verification
      4.1.1 Focus on testable requirement specifications
      4.1.2 Static techniques
      4.1.3 Model-driven development practices
      4.1.4 Testing practices
   4.2 Approaches to validation
      4.2.1 Requirement validation
      4.2.2 Knowledge validation
      4.2.3 Exploratory tests
      4.2.4 Acceptance and usability evaluation
5. Discussion
   5.1 Discussion of key strategies
      5.1.1 Domain knowledge
      5.1.2 Model-driven development
      5.1.3 Peer reviews and good testing practices
      5.1.4 Narrow scope of each CDS service
   5.2 Research questions
      5.2.1 Research question 1
      5.2.2 Research question 2
      5.2.3 Research question 3
      5.2.4 Research question 4
6. Conclusions and Future Work
References
Appendix A. Overview of CDS Services and development tools
Appendix B. Interview guide
Appendix C. Coded text data
   C.1 Initial coding pass
   C.2 Final coding of approaches to verification
   C.3 Final coding of approaches to validation


1. Introduction

This document constitutes a report on the final thesis project for a Bachelor of Science degree in Computer Science at Linnaeus University. This project is the culmination of the three-year program Software Development and Operations, taught at the Faculty of Technology, and amounts to a total of 15 HEC.

The subject of this project is the verification and validation (V&V) of knowledge-based clinical decision support (CDS) systems. The process of V&V is part of the quality assurance (QA) process, an integral part of the software development lifecycle, and aims to determine if the software under development is fit for its intended use [1]. The V&V of CDS systems, however, offers a particular set of challenges and still lacks a consensual approach. This project will be conducted as a case study of the V&V processes used by the company Cambio CDS¹, which develops cloud-native, standards-based, vendor-neutral CDS services. Cambio CDS is a part of Cambio Healthcare Systems², a provider of a market-leading electronic health record (EHR) solution and other e-health applications.

The goals of this project are to provide a current, practical example of how knowledge- based CDS systems are verified and validated, to discuss the company’s V&V processes in relation to existing research in this field and, if possible, to suggest improvements to the company’s V&V process.

This study is aimed at software developers, testers, health informatics professionals and managers who deal with knowledge-based CDS systems and are interested in gaining a real-world perspective of their V&V process.

1.1 Background

Computerized CDS systems are software systems designed to help healthcare professionals make decisions about individual patients at the point-of-care (i.e., at the moment the decisions are made) [2], [3]. The use of CDS systems has grown during the last decades, and their implementation and integration into EHRs has been endorsed by governmental health information technology (HIT) policies. CDS systems are currently used in various healthcare processes and have shown a positive impact on diagnostics, patient safety, adherence to clinical guidelines, the cost-effectiveness of health systems and administrative tasks [2]. A review of CDS is provided in section 3.1 of this report.

Along their development lifecycle, CDS systems are subject to a V&V process. A didactic definition of the processes of V&V in the context of software engineering was previously provided by Boehm [4]. Software verification answers the question “are we building the product right?” and entails checking that the software under construction meets its requirements. Software validation answers the question “are we building the right product?” and aims at checking that the software meets the customer’s expectations (see section 3.2 for a more detailed review of V&V). This process is particularly important for CDS systems considering their increasing adoption and the potentially severe consequences of any errors for both users and patients [5], [6]. However, different technical, organizational and human factors have made HIT in general, and CDS systems in particular, more challenging to verify and validate than typical software systems [7]. The V&V of knowledge-based CDS systems is especially challenging due to factors such as the large variety of knowledge representation formalisms, the extent and evolving nature of knowledge bases, the difficulty of engaging domain experts in the V&V process and the large testing effort usually required [8]. Despite decades of research, no consensual or standardized approaches to the V&V of CDS systems currently exist [9]. Recent studies suggest that CDS system malfunctions continue to be commonplace, can remain undetected for long periods, and that detection systems are often insufficient for detecting these malfunctions [10].

¹ https://www.cambiogroup.com/our-solutions/cambio-cds-clinical-decision-support
² https://www.cambiogroup.com/about-us/

1.2 Related work

The literature seems to lack descriptive studies of how a company developing CDS applications organizes its V&V process in practice. This was recognized back in 1994, when Lee and O’Keefe published a general strategy for V&V of expert systems as a response to a lack of mapping of V&V methods to development lifecycles and system characteristics in the literature [11]. Although this paper did not provide a description of a company’s V&V process, it had a practical focus, aiming to assist developers in deciding which methods should be used for V&V of specific systems and how they should be planned along the development lifecycle. Other papers would continue to develop this strategy, albeit with a narrower scope. Batarseh and Gonzalez [12], for example, focused only on validation and on a specific lifecycle model.

Several descriptions of the V&V of individual implementations of CDS systems can be found in the literature [5], [6], [13]–[17]. These implementations are often made in the context of academic projects, and their V&V approaches have varying degrees of generalizability depending on the knowledge representation techniques used. Such papers do not usually contextualise the V&V process in a company’s development process or relate it to other strategies.

Lastly, the literature contains several reviews of different approaches to the V&V of knowledge-based systems, knowledge-based CDS systems or computer-interpretable guidelines (CIGs) [8], [9], [18]–[22]. These papers focus on reviewing and comparing different methods and approaches but do not describe how they are used in practice in the context of a company’s development process.

1.3 Problem formulation

As described in section 1.1, a consensual, standardized approach to the V&V of knowledge-based CDS systems is still lacking, and there is still room for improvement given the prevalence of errors in these applications. As described in section 1.2, the literature contains several approaches to this process but lacks practical, contextualized examples of how the V&V process is implemented at companies developing commercial CDS applications. This is the knowledge gap this study set out to address: to provide an example of a current approach to the V&V of CDS systems and to contextualize this approach in light of current and previous research in this field.

Table 1.1 shows the study’s research questions, which guided the data collection and analysis processes:

Table 1.1 - Research questions

RQ1 What approaches are used to verify CDS applications at Cambio CDS?

RQ2 What approaches are used to validate CDS applications at Cambio CDS?

RQ3 How do the verification and validation approaches used at Cambio CDS compare to existing literature?

RQ4 How could the verification and validation approaches used at Cambio CDS be improved in the future?


1.4 Motivation

By providing a practical, contextualized perspective on current V&V practices at a market-leading CDS systems provider, this project hoped to contribute to a better understanding of the current state of the art in the V&V of knowledge-based CDS systems. Given that the adoption of these systems is increasing and their impact on healthcare delivery will continue to grow, it is important for society to have as clear an understanding as possible about the steps taken to verify and validate CDS systems.

This project could also be useful for Cambio CDS because it aimed to discuss their current V&V approach considering existing research, which might suggest directions for improvement. This, in turn, might be useful for other companies developing similar systems, providing a further societal benefit.

1.5 Scope/Limitation

Given the project’s limited time frame, the scope of its theoretical review and discussion were limited to the specific type of CDS systems developed at Cambio CDS (knowledge- based CDS systems using rules-based formalisms to encode computer-interpretable guidelines). The use of qualitative research methods might pose some limitations regarding the study’s reliability. The ability of this study to provide suggestions for improving the company’s V&V process might also be conditioned by its limited time frame and the technical complexity of the research topic.

1.6 Outline

The present report is structured as follows: Section 2 describes the methods used for data collection and analysis and the mitigation strategies used to overcome validity and reliability issues. Section 3 provides a review of theoretical concepts and previous research relevant for understanding the study’s context, results, and discussion. Section 4 presents the results. Section 5 discusses the results in the context of the theoretical review and presents some suggestions for eventual improvements according to the literature. Section 6 presents the study’s conclusions, limitations, and suggestions for future work. Lastly, the appendices contain overviews of the structure of the CDS services developed at Cambio CDS and of the tools used in that development, as well as some of the artifacts used for the qualitative research and data analysis (interview guide and coding structures).


2. Method

2.1 Research Project

This project was planned in conjunction with Rong Chen MD, PhD, CEO of Cambio CDS, as a means of analysing the company’s V&V process in light of the existing literature in order to find opportunities for improvement and to suggest changes and future directions. The project was structured as a case study of the company, in which qualitative research methods were used to describe and analyse its V&V process. In parallel with the case study, a literature review served both as the theoretical foundation for the qualitative methods and as the context for the discussion. Collected textual data was continuously analysed through coding.

2.2 Research Methods

2.2.1 Literature review

Theory is important for guiding a study, not only because it allows the researcher to better understand the processes and observations they are studying, but also because it guides their interpretation of the study’s results [23]. The method used to gather this theoretical foundation was a literature review, performed by searching for articles focusing on knowledge-based CDS systems and their V&V process in databases such as IEEE Xplore³, the ACM Digital Library⁴, ScienceDirect⁵ and PubMed⁶. This review was not performed systematically (i.e., using pre-defined search expressions and selection criteria), but rather ad hoc. This approach was taken because the literature review would be performed continuously throughout the study and would also provide the theoretical support for the study’s discussion. As such, it needed to be able to adjust to the information elicited from the interviews, which could not be fully anticipated at the beginning of the study. This approach sacrifices some reliability in exchange for greater validity, as discussed later (see section 2.3).

2.2.2 Case study and qualitative methods

This project was structured as a case study, whose object was the company Cambio CDS. Runeson and Höst define a case study as “an empirical method aimed at investigating contemporary phenomena in their context” [24, p. 134], which is in accordance with the study’s purpose of investigating the current approach to V&V of CDS services at Cambio CDS. The subjects of the study were members of the Cambio CDS team involved in different areas of the development process. This case study had both a descriptive purpose and an exploratory purpose. The first and second research questions (see section 1.3) aimed not to develop or test a new solution, but instead to describe an already existing set of practices, reflecting the study’s descriptive purpose. The third and fourth research questions aimed at analysing the study’s findings in light of existing literature and suggesting improvements and future directions, reflecting an exploratory purpose [24].

This case study used qualitative research methods for data collection and analysis. Qualitative methods aim to describe, interpret, and explain a particular situation or process. This is achieved using textual data (in contrast with quantitative studies, which resort mainly to numerical data) gathered from three main sources: observations, interviews and documents [23].

³ https://ieeexplore-ieee-org.proxy.lnu.se/Xplore/home.jsp
⁴ https://dl-acm-org.proxy.lnu.se/
⁵ https://www.sciencedirect.com/
⁶ https://pubmed.ncbi.nlm.nih.gov/


Figure 2.1, below, represents how the different research methods were used in this study. These methods will be described in more detail in the following paragraphs.

[Figure 2.1 - Overview of the qualitative methods used in this study]

Cambio CDS had adopted a fully remote work policy at the beginning of the SARS-CoV-2 pandemic⁷ and continued to follow it during this study’s implementation period. This made the use of observations impossible. Besides allowing researchers to document ongoing activities, observational techniques also allow engaging participants in informal discussions where clarifications and personal perspectives can be obtained to better understand those processes [23]. To obtain this information despite the impossibility of observation, a series of preliminary informal conversations was conducted with different members of the team. These conversations were used to gain information not directly related to the research objectives, but nevertheless necessary for planning the interviews used for data collection. The analysis of documents such as test plans, test reports, software requirement specifications and source code was also used to understand the company’s V&V process and to plan the interviews.

Interviews were used for collecting data directly associated with the research questions. Interviews can be classified as unstructured, semi-structured or structured according to how pre-determined the questions are and how much variation is allowed during their course [24], [25]. Semi-structured interviews were used, providing a balance between including pre-formed questions essential for answering the study’s research questions and maintaining the freedom to diverge and explore other issues brought up by the participants’ answers. The previously discussed literature review, informal discussions and document analysis were the basis for the interview guide, which in turn provided guidance for the semi-structured interviews. Each interview was recorded, and its content transcribed for later analysis.

2.2.3 Coding and data analysis

Textual data collected in the interviews was analysed using coding, a qualitative research technique used to sort textual data into categories of meaning as a way to facilitate its comprehension and discussion [23]. Usually, the codes assigned to the textual data are defined in a coding structure. These coding structures may be defined in advance of data collection (based on prior literature reviews and theoretical concepts) and immutably applied to the collected data. Alternatively, coding structures may be defined simultaneously with data collection and progressively refined according to the data collected during the interviews [23]. The latter approach was used. The qualitative research software application Quirkos⁸ was used to assist with coding.

⁷ https://www.who.int/emergencies/diseases/novel-coronavirus-2019
⁸ https://www.quirkos.com/index.html


2.3 Reliability and Validity

Qualitative methods generally pose a greater threat to reliability than to validity. There is always some degree of subjectivity and individual judgement inherent to both data collection and data analysis methods, which are influenced by the researcher’s knowledge, biases, and interpretations. Validity, on the other hand, is usually stronger because of the finer-grained understanding of the study object allowed by qualitative methods. Methods such as observations with informal interactions and semi-structured interviews allow for an understanding of the processes surrounding the study object and of the participants’ personal perspectives that is often not possible with quantitative methods [23].

2.3.1 Reliability

The study’s ad-hoc literature review might negatively impact its reliability because it did not obey rigorously pre-defined search expressions or selection criteria, limiting its reproducibility. On the other hand, this approach allowed the literature review to also provide the theoretical background for the information being discussed during the interviews. This is another example of the balance mentioned above regarding qualitative studies, in which validity is increased at the expense of reliability.

Regarding the interviews, several steps were taken to ensure their reliability was as high as possible. The interviews were recorded to ensure that the exchanged information was reliably collected. No recordings were made of the initial informal discussions because their purpose was to provide information and context about aspects of the company’s activity, organization, and work processes not directly connected to the research questions. Therefore, the additional effort and time required to record and transcribe these conversations did not seem justified given their low impact on the study’s overall reliability, especially considering the study’s limited timeframe. Besides recording the interviews, each participant was to receive the written transcript of their respective interview, if time allowed, so that any mistakes in the raw data could be corrected, as suggested by Runeson and Höst [24]. Finally, the duration of the interviews was planned to be no longer than 30 minutes. Excessively long interviews could make the participants less cooperative due to fatigue or concern about other pressing tasks, which would have decreased the reliability of the collected data [24].

2.3.2 Internal validity

During the process of planning the external collaboration for this thesis, I was hired by Cambio CDS and had started working part-time by the time the thesis project began. This might have significantly affected validity in two ways: first, it is known that companies in general may be sensitive about negative results when they are the object of a case study [24]. Therefore, as an employee of the company under study, I could have been pressured in some way to hide faults in the V&V process and to somehow embellish the study’s results. Secondly, being employed by the company changed my relationship with the study participants radically, which could have a marked effect on participant behaviour during data collection and on the information they provided [23], [24].

Some mitigating strategies were used to address these threats to validity. First, the study’s goal was openly discussed with the project’s external supervisor (and company CEO) from the start of the project. The company saw this study as an opportunity to identify limitations in their V&V process and was in favour of discussing these limitations as a catalyst for future improvement. Secondly, regarding the relationship with the participants, anonymity and confidentiality (discussed below) were assured as part of the interview process to minimize the impact on the validity of the discussions. Extra attention was given to the use of techniques to establish trust and rapport with the participants at the start of the interviews [25] to make them comfortable and minimize, as much as possible, the impact of my professional relationship with them.

To maximize the validity of the collected data, data source triangulation was used. This meant interviewing as many participants as possible within the project’s constraints and including participants with different skills and positions. This process of cross-validation is used to reduce bias and increase the precision of the collected data and the robustness of the results [23], [24].

2.3.3 External validity

The generalizability of the study’s findings is limited to CDS systems that share at least some fundamental characteristics with those produced by Cambio CDS. This includes knowledge-based CDS systems that use some form of computer-interpretable guideline (CIG) as their knowledge base and are integrated with an EHR system for data input. The theoretical review (section 3) and discussion (section 5) attempt to provide enough insight into existing theory and the company’s CDS services to allow the reader to understand which types of systems this study can be generalized to.

2.4 Ethical Considerations

2.4.1 Non-disclosure

A non-disclosure agreement was signed between me and Cambio CDS before the start of this thesis project. The contents of this report needed to comply with that agreement, and my external supervisor was responsible for verifying that no content of this report disclosed any protected intellectual property.

2.4.2 Participation and consent

Participants in the study’s initial informal discussions were informed that the goal of these conversations was to obtain preliminary information for the case study. No formal record of these conversations was made, nor were their participants identified in the project report.

Regarding the interviews, participants were informed in advance about the goals of the interview, the fact that the interview would be recorded for subsequent transcription, and how the data would be treated. After being informed, each participant had to explicitly agree to participate in the interview for it to be conducted. Participants remained anonymous in the present report.

2.4.3 Confidentiality

Participants were not identified in the report in order to maintain their anonymity. Participants at Cambio CDS would be very easily identifiable given that Cambio CDS is a company with a small number of workers for each position. Given that the project did not require analysing any data regarding the participants themselves, no information about them was included in the report.


3. Theoretical Background

This section will review certain theoretical concepts that are important for understanding the study’s context, its results, and its discussion. The information used for writing this section was obtained from the literature review conducted throughout the study’s duration as detailed in section 2.2.1.

3.1 Clinical decision support

This section will start by introducing key concepts required for understanding CDS systems, such as their definition, applications, and classification. This is followed by a more in-depth overview of knowledge-based CDS systems. Lastly, the concept of CIGs, which Cambio CDS uses to formalize expert knowledge in its knowledge-based CDS systems, is introduced.

3.1.1 Definition

The term “clinical decision support” encompasses a large variety of tools designed to assist decision making in healthcare. Wasylewicz and Scheepers-Hoeks [26] establish the following basic classification of CDS systems, in increasing degree of complexity:

- Non-computerized CDS systems are sources of information such as clinical practice guidelines (see section 3.1.6) or digital clinical resources containing up-to-date, peer-reviewed, evidence-based medical knowledge. Healthcare professionals can access these knowledge sources to gain information that will help them make decisions.
- Computerized basic CDS systems are “tools to help focus attention” [26], such as alerts highlighting abnormal values for a laboratory test.
- Computerized advanced CDS systems can provide recommendations that are specific to the individual patient currently being treated. These systems have been the focus of research concerning CDS systems and will also be the main topic of this report.

Computerized CDS systems can therefore be defined as computer systems that assist in making healthcare-related decisions about individual patients at the moment in time in which these decisions are made [2], [3], [26]. Users of CDS systems include not only clinicians, but also patients and everyone in any way involved in the delivery of healthcare [27].

3.1.2 Applications and effects of CDS systems

CDS systems have been applied to a wide variety of areas in clinical activity. Common examples are:

- CDS systems offering support in medication orders, which take an input such as the patient’s latest laboratory values or current medication and generate a recommendation or alert regarding possible interactions or contraindications to the medication being prescribed [3].
- CDS systems designed to assist with diagnosis, sometimes referred to as diagnostic decision support systems (DDSS). A DDSS will typically receive a list of signs and symptoms as input and generate a list of possible diagnoses. The clinician is then expected to apply their expertise when considering the DDSS’s recommendations. DDSS have not been as successful as CDS systems applied to other areas due to factors such as poor acceptance by physicians or poor system integration [2], [3].
- CDS systems applied to the reasoning behind the ordering of different diagnostic exams, namely imaging studies (assisting radiologists in selecting the most appropriate examination and modality) and laboratory tests (assisting in test interpretation) [2].

Several studies have shown CDS systems to have positive effects in these different areas of application. When applied to medication orders, CDS systems decrease prescribing and dosing errors, thereby increasing patient safety [2], [3]. CDS systems have also been shown to contribute to lower healthcare-related costs by reducing patient length of stay, suggesting more affordable medication alternatives or preventing clinicians from requesting unnecessary tests [2]. Additionally, CDS systems can assist with administrative tasks and improve the quality of clinical documentation [2].

3.1.3 Classification of CDS systems

CDS systems can be classified according to several different criteria or dimensions [3], [26], such as:

- the timing at which the decision support is provided relative to the clinical act;
- the model for giving advice (i.e., whether the CDS system’s recommendations are shown passively, requiring input or actions from the user, or actively);
- how easily accessible they are to the user;
- the style of communication (consulting systems, which advise on a decision while it is being taken, and critiquing systems, which emit alerts, if necessary, after the user takes a decision on their own);
- the form of human-computer interaction;
- the underlying decision-making process or model (problem-specific flowcharts, Bayesian models, artificial neural networks, support vector machines, artificial intelligence (AI)).

For this report, the most relevant classification distinguishes two types of CDS systems: knowledge-based systems and non-knowledge-based systems [6]. Knowledge-based CDS systems require expert medical knowledge to be directly encoded into the system in some computer-readable form, and depend on this encoded knowledge to be able to make decisions [3], [28]. In contrast, non-knowledge-based CDS systems rely on some form of AI-based supervised or unsupervised learning techniques, or on statistical pattern recognition techniques. These systems are able to “learn” how to make decisions without requiring explicitly encoded knowledge or rules, and can detect patterns and extrapolate findings from large sets of data much better than knowledge-based systems or humans can [2], [3], [28]. However, the lack of explicitly encoded knowledge and rules makes these systems operate as “black boxes” (i.e., in a non-transparent way). This creates serious ethical problems regarding explainability and accountability, which in turn has contributed to low acceptance of non-knowledge-based CDS systems among clinicians [3], [26].

The focus of this report will be knowledge-based CDS systems.

3.1.4 Knowledge-based CDS systems

Knowledge-based CDS systems can be traced back to the 1970s [3]. In general, a knowledge-based CDS system is composed of the following parts: a knowledge base, an inference engine, and an interface for communication with the user. The system receives input (clinical data) and generates output (recommendations) [3], [28], as represented in figure 3.1:

[Figure 3.1 - General structure of knowledge-based CDS systems (adapted from [2], [29])]


The knowledge base contains the expert medical knowledge that the system needs for generating its recommendations. This knowledge is encoded (or represented) in a specific computer-readable format. There are several ways of representing expert knowledge in the knowledge base. The most common representation is as a set of “if-then” production rules [28], [30], discussed further below. Other knowledge representation formats include logical conditions, graph- or network-based representations such as Bayesian networks and decision trees, and structural representations such as frames [28], [29].

The inference engine contains the algorithms that will apply the knowledge/rules stored in the knowledge base to the input clinical data (including both input from the user and clinical data extracted from an EHR). This will generate the decisions and recommendations that will form the output of the CDS system [28].

The communication interface will present the decisions generated by the inference engine to the user and, when needed, receive input from the user.

3.1.5 Knowledge representation in rules-based systems

Decision rules are one of the main forms of representing knowledge [31]. A decision rule is a form of algorithm that encapsulates the logic employed in making a single decision, usually represented as an IF-THEN statement [32], [33]. This modularity of rules-based formalisms improves knowledge maintainability and facilitates reasoning explanation (important to clinical experts and users), contributing to verifiability (see section 3.2.1) [32], [34]. Decision rules can be represented either as procedural rules or as production rules. Procedural rules include both the decision logic and the control logic and are executed in a pre-specified order. Although this tight coupling of domain knowledge and control logic makes it easier to predict and control the system’s output, it also makes the knowledge base harder to acquire, maintain and update with new knowledge. Production rules, on the other hand, evaluate if the input data satisfies a Boolean condition and, if so, execute a certain action (also called “condition-action rules”). In contrast to procedural rules, each production rule is independent, and the order of execution of the (potentially many) different rules in a production rules-based knowledge base is determined by the system’s inferencing engine. This separation between domain knowledge and control logic makes it harder to predict the system’s output when it contains many rules, but makes the knowledge base easier to acquire and maintain [32].
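To make the contrast concrete, the following minimal Python sketch shows production rules as independent condition-action pairs whose firing order is left to a trivial inference engine rather than encoded in the rules themselves. All rule names, input fields and thresholds are invented for illustration and are not taken from any real CDS service:

from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    name: str
    condition: Callable[[dict], bool]  # the IF part, evaluated on input data
    action: str                        # the THEN part: a recommendation

# Independent condition-action rules; thresholds are invented examples.
rules = [
    Rule("high_creatinine_alert",
         lambda d: d.get("creatinine_umol_l", 0) > 120,
         "Review renal function before prescribing"),
    Rule("hypertension_flag",
         lambda d: d.get("systolic_bp", 0) >= 140,
         "Blood pressure above guideline threshold"),
]

def infer(data: dict) -> list[str]:
    # A trivial inference engine: it, not the rule author, decides which
    # rules fire; here it simply fires every rule whose condition holds.
    return [r.action for r in rules if r.condition(data)]

print(infer({"creatinine_umol_l": 150, "systolic_bp": 150}))

Because each rule is self-contained, a new rule can be added without touching the control logic, which is exactly the maintainability benefit (and the output-predictability cost) described above.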

In the 1990s, a growing number of institutions were implementing their own rules-based systems. These were developed in isolation from one another, as in-house systems, often as academic projects [2], [27]. Building the knowledge base of each unique CDS system required performing the same complex knowledge engineering process, including knowledge elicitation and acquisition (gathering the required knowledge from several sources, namely from human experts), knowledge validation (checking the quality of the elicited knowledge) and knowledge representation (encoding the expert knowledge in the desired computer-readable format) [33], [35]. Building and maintaining these knowledge bases was often too costly for such isolated, academic projects [28]. It then became clear that a standardized representation for encoding clinical logic was needed in order to decrease this redundant implementation work and to enable the transfer of computable knowledge between systems [32]. This drove the creation of the Arden Syntax for Medical Logic Systems⁹, later sponsored and developed by the standards development organization Health Level 7 (HL7) International¹⁰. In this formalism, single decisions are represented in units named medical logic modules, which can work both as procedural rules and as production rules, affording the advantages of both types of representation. The Arden syntax had important limitations, as it was not expressive enough to represent complex processes and lacked standardized data mappings. Obtaining the clinical data needed for decision processing required queries to be manually inserted in a syntax specific to the local data mappings. This made it difficult to transfer knowledge between different systems, requiring a lot of manual (and error-prone) work, and demanded that the non-technical experts validating the system be familiar with the syntax used for the local database mapping [32].

⁹ https://www.hl7.org/implement/standards/product_brief.cfm?product_id=2
¹⁰ https://www.hl7.org/

3.1.6 Clinical practice guidelines and computer-interpretable guidelines

Clinical practice guidelines (CPGs) are “statements that include recommendations intended to optimize patient care that are informed by a systematic review of evidence and an assessment of the benefits and harms of alternative care options” [36, p. 15]. CPGs were introduced over four decades ago to address inappropriate care and unjustified variations in patient care, to improve quality of care and to help reduce healthcare costs [37], [38]. CPGs are usually written by expert panels assembled by scientific organizations who review previous studies to compile evidence-based recommendations [39]. By providing an overview of current evidence and recommendations about best practices, CPGs can bridge the gap between clinical research and healthcare practitioners, leading to improved consistency and quality of care [37], [38].

Despite their potential benefits, CPGs have been consistently under-utilized by clinicians due to various barriers [2], [37]. One key barrier to the adoption of CPGs is their presentation in non-computable formats such as narrative texts in medical journals or websites. These formats hinder clinicians from accessing a guideline’s content at point-of-care and from adapting its recommendations to the individual patient in front of them [37], limiting the guidelines’ ability to effectively improve the quality of care [40].

In the 1990s, as CPGs gained importance in the clinical decision-making process, it became necessary to develop computable representations of CPGs that could overcome these barriers and improve CPG adoption by clinicians. At that time, the Arden syntax was suitable only for simple rules such as alerts and reminders and not for encoding complex guidelines, which demanded greater expressivity and standardized data mappings to extract data from clinical repositories [32], [41]. A number of different CIG formalisms were developed in the following years, many of which have been reviewed and compared in the literature [38], [39], [41], [42]. Although they all aim to encode CPGs, these formalisms diverge on a number of dimensions, such as the language used to specify decision criteria, how medical concepts are represented and which patient information model they use [41], [42]. No single formalism has yet emerged as a consensual standard.

The development of CIG formalisms has enabled the development of CIG-based CDS systems that use the formalized CIGs as their knowledge base. When properly integrated with an EHR system, these systems can automatically receive the required patient data as input and have the inference engine apply the rules in the CIG to the data in order to produce the system’s recommendations. The result is a set of recommendations provided at point-of-care that follow the evidence-based recommendations contained in CPGs, but that are also individualized for each specific patient [37], [38]. This has made CIG-based CDS systems more effective than traditional CPGs in changing clinicians’ behaviour and achieving the intended benefits [39].

3.1.7 Guideline Definition Language

Guideline Definition Language (GDL) is a rules-based formal language for expressing decision support logic, developed by Cambio Healthcare Systems. Its first specification was released in 2013 in collaboration with the openEHR Foundation¹¹, which endorsed GDL as a standard. The second version of the specification (GDL2) was released in 2019¹².

The structure of a GDL guideline consists of a definition part and a terminology part. The definition part includes the definition of the guideline’s production rules and their pre-conditions, as well as the data bindings. GDL rules can be used for single decision-making or chained together to create complex decision-making processes. Data bindings in GDL guidelines make use of standards-based clinical models such as openEHR Archetypes¹³ and, since the release of GDL2, HL7 FHIR resources¹⁴. By relying on standard clinical models, GDL rules can be shared between systems independently of their technical implementation while maintaining the ability to bind the clinical data needed for rule execution. The terminology part of the GDL guideline contains all the natural language terms and external terminology codes assigned to locally defined terms. This independence between the guideline’s definition and its terminology enables the translation of a single guideline into multiple natural languages, and the use of any number of external terminology systems.
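As a rough illustration of this two-part structure, the Python sketch below pictures a guideline as a definition part holding data bindings and rules, and a terminology part holding per-language term texts. This is an informal sketch only, not valid GDL2 syntax (see the GDL2 specification linked below for the real format); all ids, paths and rule contents are invented:

# Informal sketch of the definition/terminology split described above.
# NOT valid GDL2 syntax; ids, paths and rules are invented examples.
guideline = {
    "definition": {
        "data_bindings": {
            # local element id -> path into a standards-based clinical model
            "gt0001": "openEHR-EHR-OBSERVATION.body_weight.v2/weight",
        },
        "rules": [
            {
                "id": "gt0002",
                "when": ["gt0001 > 100"],             # pre-condition on bound data
                "then": ["set gt0003 = local::at0001"],
            },
        ],
    },
    "terminology": {
        # translating the guideline only means adding a language map;
        # the definition part is untouched
        "en": {"at0001": "Weight above threshold"},
        "sv": {"at0001": "Vikt över gränsvärdet"},
    },
}

print(guideline["terminology"]["en"]["at0001"])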

GDL has been studied and evaluated in a variety of use cases, including encoding complex CPGs [43]–[45], screening applications [46] and integrating data with quality registries [47]–[49], encompassing a wide variety of clinical areas. To date, Cambio CDS has developed and published 591 open-source GDL2 guidelines, which are available online for anyone to use¹⁵.

3.2 Verification and validation

This section will review the concepts of software verification and validation, focusing on the specificities of these concepts in the context of knowledge-based systems and CIGs. Pertinent techniques used for the V&V of these systems will be briefly reviewed.

V&V can be performed using a variety of techniques. Selecting the most appropriate techniques and determining the required level of confidence to successfully verify and validate a particular system depends on different factors involving the system, its users and its market [1]. In the case of CDS systems, QA is particularly important because, when poorly implemented, these systems can have a negative impact on the quality of care through disrupted workflows, alert fatigue or improperly maintained knowledge bases, and any wrongful recommendations could have potentially serious effects on patients’ lives [2].

Since computerized CDS systems are, first and foremost, software systems, the definitions of V&V from a software engineering perspective (mentioned in section 1.1) remain applicable. However, knowledge-based systems require specific approaches to their V&V process that verify and validate the knowledge they contain.

When researchers started focusing significantly on the V&V of knowledge-based systems in the late 1980s and 1990s [9], [11], the definitions of verification and validation used in this context were highly inconsistent, sometimes even contradictory [50]. In 2000, Gonzalez and Barr [50] reviewed the literature about V&V of knowledge-based systems to gather these inconsistent definitions and attempt to provide a consensual definition of both concepts in the context of knowledge-based systems. Those definitions are presented in the following sections.

¹¹ https://www.openehr.org/
¹² https://specifications.openehr.org/releases/CDS/latest/GDL2.html
¹³ https://specifications.openehr.org/releases/AM/latest/Overview.html
¹⁴ https://www.hl7.org/fhir/
¹⁵ https://github.com/gdl-lang/common-clinical-models


3.2.1 Verification of knowledge-based systems and CIGs

After reviewing previous definitions and identifying common trends, Gonzalez and Barr proposed the following definition for verification of knowledge-based systems:

“Verification is the process of ensuring that the intelligent system (1) conforms to specifications, and (2) that its knowledge base is consistent and complete within itself” [50, p. 412].

The “specifications” mentioned in this definition refer to the expert knowledge that was acquired and represented in the knowledge-based system. Therefore, conformance to specifications in this context means ensuring that the knowledge base, where that knowledge is represented, is free of internal errors. Verification does not imply checking the correctness of the knowledge in comparison to the actual domain knowledge (that is the realm of validation), but rather checking that the knowledge contained in the knowledge base is consistent and complete within itself (with no external benchmark).

This definition of verification therefore focuses on what Lee and O’Keefe [11] had previously called “logical verification” (checking the knowledge base for logic anomalies), as opposed to the “verification of engineering attributes”, which referred to the more general definition of conformance to requirement specifications. Several taxonomies have been proposed to categorise the different types of logic anomalies found in knowledge bases [21], [22], [51]. Most taxonomies classify these anomalies as either errors of inconsistency or errors of incompleteness. Errors of inconsistency result from any kind of ambiguity in the rules (conflicting/contradictory rules, redundant rules, subsumed rules, circularity). Errors of incompleteness, such as unreachable conclusions or unreferenced attributes, are the result of missing knowledge. Both types of errors can be found without exercising the system, by means of static code analysis. Because this process is suitable for automation, numerous automated verification programs have been developed and used with different knowledge-based CDS systems [8].
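The following Python sketch illustrates, on an invented toy rule set, the kind of checks such automated verification programs perform: detecting subsumed rules (an inconsistency anomaly) and rules that can never fire given the declared inputs (an incompleteness-related anomaly). It is a simplified demonstration of the idea, not any published tool:

# Toy rule set: (set of required conditions, conclusion). Contents invented.
rules = [
    ({"fever", "cough"}, "suspect_infection"),
    ({"fever"}, "suspect_infection"),   # subsumes the rule above
    ({"rash"}, "dermatology_referral"),
]
declared_inputs = {"fever", "cough"}    # "rash" is never provided as input

def find_subsumed(rules):
    # Rule A is subsumed by rule B if B reaches the same conclusion
    # from a strict subset of A's conditions (an inconsistency anomaly).
    return [(conds_a, concl_a)
            for conds_a, concl_a in rules
            for conds_b, concl_b in rules
            if concl_a == concl_b and conds_b < conds_a]

def find_unfireable(rules, inputs):
    # Rules referencing attributes never supplied can never fire,
    # pointing at missing knowledge or dead rules.
    return [(conds, concl) for conds, concl in rules if not conds <= inputs]

print("Subsumed rules:", find_subsumed(rules))
print("Unfireable rules:", find_unfireable(rules, declared_inputs))

Note that neither check needs to execute the system on patient data, which is what makes this class of verification static and automatable.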

Regarding CIG-based systems, the aim of verification is similar, i.e., to demonstrate that the CIG’s specifications are internally consistent and have no structural anomalies. The verification of CIGs must cope with certain challenges resulting from the use of CPGs as a source of knowledge. The first of these challenges is the narrative format of CPGs. CPGs are written in language that is not formal enough to be directly translated into a CIG formalism. Anomalies such as vagueness, ambiguity, incompleteness, inconsistency and redundancy are often detected when a guideline is formalized into a specific CIG language [39], [52], [53]. Another challenge is the fact that medical knowledge frequently needs to be updated, and each update carries the risk of introducing new logic anomalies [15], [34], [54].

Many different approaches to the verification of CIGs have been published and reviewed in the literature [38], [39], [55]. An approach of particular interest to rules-based CIG formalisms (such as GDL) is the use of decision tables. In the 1990s, decision tables were first used to verify medical knowledge contained in CPGs and encoded as decision rules [34], [54]. Decision tables are a tabular representation of production rules, in which a matrix of rows and columns represents the association between the possible sets of conditions (such as values of physical findings) and different actions (such as starting a treatment). After a decision table is built from all the possible condition values and actions described in the CPG, it can be used to verify the rule set by checking for completeness and consistency. Besides increasing the rule set’s verifiability, the use of decision tables also increases the rules’ clarity and makes them more understandable to clinical experts and end users. This is particularly important because the input of these stakeholders will be needed to correct any anomalies that are detected. Finally, the use of decision tables is also beneficial when the decision rules need to be updated, allowing this to be done without introducing new errors [34], [54].
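A minimal Python sketch of these two checks on an invented decision table follows; the conditions and actions are illustrative and not drawn from any real CPG:

from itertools import product

# Invented decision table rows: (fever, neck_stiffness) -> action.
rows = [
    ((True, True), "urgent_referral"),
    ((True, True), "symptomatic_treatment"),  # deliberate conflicting row
    ((True, False), "symptomatic_treatment"),
    ((False, True), "urgent_referral"),
    # (False, False) is deliberately missing -> incompleteness
]

def check_consistency(rows):
    # Flag condition combinations mapped to more than one action.
    seen, conflicts = {}, []
    for combo, action in rows:
        if combo in seen and seen[combo] != action:
            conflicts.append(combo)
        seen.setdefault(combo, action)
    return conflicts

def check_completeness(rows, n_conditions):
    # Flag condition combinations for which no row exists at all.
    covered = {combo for combo, _ in rows}
    return [c for c in product([True, False], repeat=n_conditions) if c not in covered]

print("Conflicting rows:", check_consistency(rows))
print("Missing rows:", check_completeness(rows, 2))

The tabular form makes both anomaly types mechanical to detect, which is why decision tables lend themselves so well to automated verification and to review by non-technical clinical experts.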

3.2.2 Validation of knowledge-based systems and CIGs

After reviewing previous definitions and identifying common trends, Gonzalez and Barr proposed the following definition for validation of knowledge-based systems:

“Validation is the process of ensuring that the output of the intelligent system is equivalent to those of human experts when given the same inputs.” [50, p. 412]

Lee and O’Keefe [11] had previously defined this as a “result-oriented validation” process, focusing on the correctness of the system’s decisions and on whether their quality satisfies the stakeholders’ needs. Unlike verification, validation implies comparing the output of the system against an external benchmark (the knowledge of the human experts). This requires a dynamic approach in which the system is executed to produce the output, i.e., testing [50]. This definition of validation states that the system’s output should be “equivalent” to that of a human expert but leaves the requirements for “equivalency” to be defined for each specific project. This concept of result-oriented validation also applies to CDS systems based on CIGs. When validating these systems, the external benchmark used for comparison is the output of a human expert (or, preferably, several human experts) using the original CPG [5].

The most common validation method used for such systems is case testing, in which a set of test cases is executed by the system and their output is compared to the expected results [9], [50], [56]. There are two major concerns regarding the set of test cases to be used: its completeness and the coverage it offers. The completeness of a set of test cases is the “degree to which the data represents all types of cases which could be presented to the system under intended conditions of use.” [56, p. 3]. The coverage offered by that set of test cases is the degree to which all the possible decision paths of the knowledge base are tested by that set of test cases [56]. Other important requirements for test cases used for validation are that they should be based on real-life patient data, should cover a wide range of problems and cover both common cases and extreme values [5].
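The following Python sketch illustrates the core of case testing: each test case pairs input data with the output an expert derived from the original CPG, and the system’s actual output is compared against that benchmark. The scoring rule, field names and thresholds are invented placeholders for a real CDS service:

# Each test case pairs input data with the expert's expected output,
# derived from the original CPG. `cds_recommend` is an invented stand-in
# for the system under validation.

def cds_recommend(case: dict) -> str:
    return "anticoagulate" if case.get("chads_vasc", 0) >= 2 else "no_treatment"

test_cases = [
    ({"chads_vasc": 4}, "anticoagulate"),
    ({"chads_vasc": 1}, "no_treatment"),
    ({"chads_vasc": 2}, "anticoagulate"),  # boundary value
]

failures = [(case, expected, cds_recommend(case))
            for case, expected in test_cases
            if cds_recommend(case) != expected]

print(f"{len(test_cases) - len(failures)}/{len(test_cases)} cases match the expert benchmark")
for case, expected, actual in failures:
    print("MISMATCH:", case, "expected", expected, "got", actual)

In a real validation effort the expected outputs would come from clinical experts applying the CPG, and the case set would be assessed for the completeness and coverage properties described above.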

Exhaustive testing of a knowledge-based system (i.e., using a set of test cases that covers every possible combination of input values) is impractical, even for simple systems [57]. The challenge has been to develop methods for creating reduced test case sets that still offer enough coverage for adequate validation. Instead of a “naively exhaustive” approach, it is possible to generate a “functionally exhaustive” test case set by removing functionally equivalent input values and input values that subsume other values. However, even such a functionally exhaustive test case set would still be too large [58]. Knauf et al. described an approach for generating reduced test case sets that can be generalized to rules-based systems [58]. The authors describe a series of computations for reducing an exhaustive test case set into a “quasi-exhaustive” set of test cases (QuEST), to which different sets of criteria are applied to reduce it further into a “reasonable” set of test cases (ReST). The authors describe how such an approach would allow reducing an estimated exhaustive set of 3.5 × 10¹⁰ cases into a ReST of 345 cases or fewer.
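A simplified Python sketch of the underlying idea (not a reproduction of Knauf et al.’s actual computations): collapsing functionally equivalent input values into one representative per equivalence class dramatically shrinks the cross product of inputs. The dimensions and class boundaries are invented:

from itertools import product

def age_class(age):
    # All ages within a class exercise the same rule branch (assumed).
    return "child" if age < 18 else "adult" if age < 65 else "elderly"

def weight_class(weight):
    return "low" if weight < 50 else "normal" if weight < 100 else "high"

ages = range(0, 121)        # exhaustive dimension: 121 possible values
weights = range(30, 201)    # exhaustive dimension: 171 possible values

# Keep one representative value per equivalence class.
age_reps = {age_class(a): a for a in ages}
weight_reps = {weight_class(w): w for w in weights}

reduced = list(product(age_reps.values(), weight_reps.values()))
print(f"Exhaustive: {len(ages) * len(weights)} cases; reduced: {len(reduced)} cases")

Even this toy example shrinks more than twenty thousand combinations to nine; the published approach adds further criteria (such as boundary values) before arriving at a reasonable test set.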

More recently, Usman et al. [5] described an approach for automating the extraction of real-life test cases from an EHR according to CIG specifications. This approach is generalizable to any knowledge-based system expressible as production rules and with access to real patient data via an EHR. The system’s knowledge base is first expressed in a human-readable specification called a “rules document”. A set of filters and paths is then generated from the rules document, with which it is possible to extract patient records from the EHR matching each of these paths.
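The following Python sketch conveys the gist of this approach under strong simplifications: the EHR is modelled as a plain list of records, and each decision path of an invented rule becomes a filter used to select matching records as test cases. Usman et al.’s actual rules-document format and extraction machinery are not reproduced here:

# The EHR is modelled as a plain list of records; contents are invented.
ehr_records = [
    {"id": 1, "hba1c": 9.1, "on_metformin": True},
    {"id": 2, "hba1c": 6.2, "on_metformin": False},
    {"id": 3, "hba1c": 8.4, "on_metformin": False},
]

# Toy rule: IF hba1c > 8 AND on_metformin THEN intensify_treatment.
# Each decision path of the rule becomes a filter over record fields.
paths = {
    "rule_fires": lambda r: r["hba1c"] > 8 and r["on_metformin"],
    "rule_does_not_fire": lambda r: not (r["hba1c"] > 8 and r["on_metformin"]),
}

# Select real records matching each path as candidate test cases.
test_cases = {name: [r for r in ehr_records if match(r)]
              for name, match in paths.items()}

for name, records in test_cases.items():
    print(name, "->", [r["id"] for r in records])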

3.3 Model-driven development

Reviewing the model-driven development (MDD) paradigm in this study is pertinent because it became apparent during data collection that this approach strongly influences the development process at Cambio CDS, with implications for the company’s V&V process.

MDD is “a development paradigm that uses models as the primary artifact of the development process” [59, p. 9] in which the implementation is generated from those models. MDD has emerged due to factors such as the increased complexity of software artifacts demanding development at higher levels of abstraction, the need for developers to discuss technical aspects with non-technical stakeholders, a shortage of technically skilled developers, and the continuously increasing pervasiveness of software [59]. MDD can provide several advantages [59]–[62], some of which are enumerated in the following paragraphs.

MDD allows for a better separation of concerns. Team members with domain knowledge do not need to focus on the technical implementation. Instead, they can remain focused on the domain and contribute to developing the system by using expressive high-level models to model the application. Technical experts, on the other hand, can focus on the technical aspects, such as developing the MDD tools, without needing to become familiar with complex domain knowledge. By developing adequate tools and notations for modelling, the technical experts can empower the domain experts even further, sometimes to the point where domain experts become able to develop the systems by themselves.

The use of high-level formal models allows domain experts to express the implementation using concepts much closer to the problem domain. This increases the understandability and maintainability of these systems.

MDD facilitates reusability and portability, because reusing all existing models on a different platform only requires adapting the translator to that platform’s technical implementation. Furthermore, because the product is less dependent on a specific computer technology, it is less affected by changes to that technology.

MDD speeds up the design and development process, decreasing development costs and time to delivery.

Finally, MDD can also simplify the V&V process. The implementation code of the system under development is created by a generator that executes the created models. Therefore, once this generator tool has been thoroughly tested and verified, every other project will mainly require functional testing and validation, and will not be encumbered by the verification of the low-level technical implementation.

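The following minimal sketch illustrates this division of verification effort (an illustration of the general MDD idea, not Cambio CDS's actual generator or models): the generic executor is verified once, while each new project only needs functional checks of its own model:

# Minimal illustration of the MDD idea (not Cambio CDS's actual tools):
# the model is data, and a single well-verified executor runs any model,
# so each new project mainly needs functional tests of its own model.

fever_model = {  # a declarative rule model, e.g. authored by a domain expert
    "rules": [
        {"when": lambda t: t >= 38.0, "then": "fever"},
        {"when": lambda t: t < 38.0, "then": "no_fever"},
    ],
}

def execute(model: dict, value: float) -> str:
    # Generic executor: the only part needing low-level verification.
    for rule in model["rules"]:
        if rule["when"](value):
            return rule["then"]
    raise ValueError("incomplete rule set")

# Per-project V&V reduces to functional checks against the model:
assert execute(fever_model, 38.5) == "fever"
assert execute(fever_model, 36.8) == "no_fever"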

4. Results

The main results of this study are the categories of meaning obtained from the coding analysis of the textual data collected through the interviews (as described in section 2.2.3). These results are presented and described below, in sections 4.1 (approaches to verification) and 4.2 (approaches to validation).

Besides these main results, other artifacts were produced by the study’s methods which, although not directly related to the research questions, were essential for the main data collection and analysis processes, as described in section 2.2. These artifacts are gathered in appendices A and B as follows: Appendix A contains a high-level overview of the structure of CDS services and the modelling and authoring tools used for their development. This information stems from the notes gathered from informal conversations and document analysis done prior to data collection. Appendix B shows the interview guide, which was created based on the initial literature review, document analysis and preliminary discussions and was used to conduct each interview.

Six interviews were performed. Each interview lasted between 30 and 45 minutes and was conducted as an online meeting on the company's Microsoft Teams platform (https://www.microsoft.com/sv-se/microsoft-teams/group-chat-software). Participants were involved in different activities in the company, with roles including product owner, clinical modeller, QA engineer and software developer.

An initial coding pass of the transcribed textual data categorised 207 text fragments into a total of 41 different codes, mainly related to the stages of the development process under discussion (these initial codes can be found in appendix C.1). Further coding was performed using these initial categories as a foundation, with the goal of categorising the discussed approaches as related to verification or validation. The end results of this coding analysis are presented in sections 4.1 and 4.2.

4.1 Approaches to verification

The categories of approaches to the verification of CDS services discussed during the interviews are listed in table 4.1 (see appendix C.2 for the corresponding mind map). Sub-sections 4.1.1–4.1.4 will describe each of these categories:

Table 4.1 - Categories of approaches used for verification.

Main categories                                Sub-categories

Focus on testable requirement specifications   - Requirement analysis
                                               - Requirement-based testing

Static techniques                              - Peer reviews
                                               - Knowledge verification

Model-driven development practices             - Use of high-level modelling tools
                                               - Generation of decision rules from decision tables

Testing practices                              - Test-driven development
                                               - Custom test libraries for automated tests
                                               - Exploratory tests
                                               - Acceptance tests

4.1.1 Focus on testable requirement specifications

During the requirement gathering stage, a strong focus is placed on ensuring requirement verifiability, i.e., writing testable requirements. During requirement analysis, the scope of each elicited requirement is narrowed down until it becomes as granular as possible. This facilitates the creation of tests and simplifies the traceability of any failures encountered during verification.

The requirements specified in the software requirements specification (SRS) document are the basis for developing most of Cambio CDS's automated functional tests, following the practice of requirements-based testing. Defining and organizing test cases according to the requirements they verify not only provides a systematic approach to verification, but also makes it easier to adapt the test suite to any changes in the application's SRS.
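As a generic illustration of this practice (not Cambio CDS's actual test code; the requirement IDs and the custom pytest marker are assumptions), test cases can be tagged with the SRS requirement they verify so that a failure traces directly back to the specification:

# Generic sketch of requirements-based testing with pytest; the
# requirement IDs and the custom "req" marker are assumptions (the marker
# would be registered in pytest.ini to avoid warnings).
import pytest

def bmi(weight_kg: float, height_m: float) -> float:
    return weight_kg / height_m ** 2

@pytest.mark.req("SRS-FR-012")  # traceable to one granular SRS requirement
def test_bmi_is_computed_from_weight_and_height():
    assert round(bmi(80.0, 1.80), 1) == 24.7

@pytest.mark.req("SRS-FR-013")
def test_bmi_rejects_zero_height():
    with pytest.raises(ZeroDivisionError):
        bmi(80.0, 0.0)

Tagged this way, failures reported by the test runner point directly at the affected SRS requirement, and when a requirement changes, the tests to update can be located by their tag.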

4.1.2 Static techniques

Peer reviews are performed throughout the entire development process at Cambio CDS. Documents are reviewed whenever changes are made (the SRS, for example, is reviewed not only by other product owners but also by medical customers, to ensure that the specified requirements are complete, consistent, and verifiable). Clinical modellers also review the decision rules and test fixtures created by other modellers to ensure they are correct and as optimized as possible, focusing on frequent sources of rule anomalies such as incorrect rule boundaries and conflicting rules.

In the specific context of knowledge verification (see section 3.2), several static approaches were discussed. Even though no specific instrument is used to formally assess the implementability of CPGs, the medical background of the Cambio CDS team has enabled them to analyse the contents of the CPGs to be encoded and has facilitated the early detection of logic anomalies in the knowledge they contain. In specific projects, decision tables have been created to verify the completeness and consistency of more complex rule sets, allowing incompleteness and inconsistency anomalies to be detected quickly in the early stages of knowledge elicitation and formalization. Similar checks have also been done during the clinical modelling process.
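A minimal sketch of such completeness and consistency checks over a decision table (the clinical conditions and actions are invented for illustration) could look like this:

# Minimal sketch of static decision-table checks; the conditions and
# actions are invented for illustration.
from itertools import product

# Each rule maps a combination of condition outcomes to one action.
rules = {
    ("fever", "rash"): "refer",
    ("fever", "no_rash"): "monitor",
    ("no_fever", "rash"): "monitor",
    # ("no_fever", "no_rash") deliberately missing -> incompleteness
}
conditions = [("fever", "no_fever"), ("rash", "no_rash")]

# Completeness: every combination of condition outcomes needs a rule.
missing = [c for c in product(*conditions) if c not in rules]
print("uncovered combinations:", missing)

# Consistency: no combination may map to conflicting actions.
rule_list = list(rules.items()) + [(("fever", "rash"), "discharge")]
seen = {}
for combo, action in rule_list:
    if combo in seen and seen[combo] != action:
        print("conflicting rules for", combo, ":", seen[combo], "vs", action)
    seen[combo] = action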

4.1.3 Model-driven development practices

The development process at Cambio CDS is influenced by the MDD paradigm. The development of a CDS service does not involve writing much low-level code. Instead, clinical modellers rely on authoring and modelling tools such as the Archetype Editor and GDL2 Editor (see appendix A for a description of the structure and development of Cambio CDS's services). These tools allow the use of simple drag-and-drop graphical user interfaces (GUI) to edit the openEHR archetypes and GDL guidelines while the corresponding low-level code is automatically generated in the background. The CDS Studio plugin later abstracts much of the complexity of creating a new CDS service by using configuration files to bring together the necessary backend and UI components and the pertinent GDL guidelines. This MDD approach has also been applied to test cases, which can be created by clinical modellers within the GDL2 Editor (guideline test cases) and within the CDS Studio (backend test cases) without requiring low-level knowledge of the testing libraries or writing low-level code.

Previous projects have used decision tables in an MDD approach, attempting to automatically generate decision rules from the decision tables. In such projects, decision tables function as models from which the rules code is automatically generated.
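A minimal sketch of this idea, with an invented table (not an actual Cambio CDS model), treats the decision table as the model and derives an executable rule engine from it:

# Minimal sketch of generating an executable rule engine from a decision
# table; the table contents are invented, not an actual Cambio CDS model.

decision_table = [
    ({"age_over_65": True,  "fever": True},  "refer"),
    ({"age_over_65": True,  "fever": False}, "monitor"),
    ({"age_over_65": False, "fever": True},  "monitor"),
    ({"age_over_65": False, "fever": False}, "discharge"),
]

def generate_rule_engine(table):
    # "Generator" step: turn the table (the model) into executable rules.
    def engine(inputs: dict) -> str:
        for condition, action in table:
            if all(inputs.get(k) == v for k, v in condition.items()):
                return action
        raise ValueError("no matching rule")  # would reveal incompleteness
    return engine

engine = generate_rule_engine(decision_table)
assert engine({"age_over_65": True, "fever": True}) == "refer"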

4.1.4 Testing practices

When implementing the decision rules and creating the GDL guidelines, clinical modellers at Cambio CDS can use the "test" section in the GDL2 Editor to specify input values and execute the modelled rules. The GDL2 Editor can automatically generate a YAML test case file containing the used input values, and several test cases may be grouped and saved as an external YAML file. This functionality, complemented by the test coverage information provided by the GDL2 Editor when test cases are executed, enables the adoption of a test-driven approach to clinical modelling. Each individual rule is tested as it is being modelled so that defects are easily traceable and can be caught as early as possible. This test-driven approach is strongly instilled in newly hired modellers and in health informatics students who contribute with part-time modelling of community clinical models.
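The structure of these YAML test case files is not detailed in the interviews, but a hypothetical sketch of loading and executing grouped test cases against a stand-in rule engine (file structure and field names are assumptions, not the actual GDL2 Editor format) might look as follows:

# Hypothetical sketch of executing grouped YAML test cases against a
# stand-in rule engine; the file structure and field names are assumptions,
# not the actual GDL2 Editor format. Requires PyYAML (pip install pyyaml).
import yaml

test_cases_yaml = """
test_cases:
  - name: fever_detected
    input: {temperature: 38.5}
    expected_output: {alert: fever}
  - name: no_fever
    input: {temperature: 36.8}
    expected_output: {alert: none}
"""

def rule_engine(inputs: dict) -> dict:
    # Stand-in for the modelled guideline rules.
    return {"alert": "fever" if inputs["temperature"] >= 38.0 else "none"}

for case in yaml.safe_load(test_cases_yaml)["test_cases"]:
    actual = rule_engine(case["input"])
    status = "PASS" if actual == case["expected_output"] else "FAIL"
    print(case["name"], status)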

A test plan is defined for each CDS service, specifying three testing levels: guideline (unit) tests validate the guidelines' decision rules and are presented as an approach to knowledge validation (see section 4.2.2), while functional tests and endurance tests verify conformance to functional and non-functional requirements.

Functional tests verify each functional requirement specified in the SRS except external system functional requirements (which are tested by the external system provider). Functional tests include automated backend tests and UI tests, as well as exploratory tests.

Backend tests are automated tests designed to test the roundtrip from the backend to the frontend. Test cases for backend tests are first created by clinical modellers as JSON files in which the input and the expected output data are specified. These test fixtures can be imported into the IDE with CDS Studio and executed, which sends the specified input values as network requests to the mocked backend services. The payload of the backend's response is then subject to a full comparison with the expected output and the result is displayed. The user can then simply choose to save that test case as a backend test, and the backend test file will automatically be saved in the location required by the automated testing framework for automated execution (a generic sketch of such a fixture-driven test is shown below).

UI tests are integration tests that use a running instance of the application and the Selenium test library (https://www.selenium.dev/) for automated GUI interaction and testing. A few manual UI tests may be required to verify UI requirements that cannot be evaluated by automated UI tests. Exploratory tests are typically performed by the responsible product owner, who confirms that the application fulfils its specifications. These tests are performed using the CDS Portal, an online environment where the applications can be deployed and tested using the browser.
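As a generic illustration of such a fixture-driven backend test (the fixture fields, the endpoint and the use of the requests HTTP client are assumptions for this sketch, not Cambio CDS's actual JSON format or test framework), the full-payload comparison could look like this:

# Generic sketch of a fixture-driven backend roundtrip test; the fixture
# fields, the endpoint and the requests client are assumptions, not
# Cambio CDS's actual format or framework (pip install requests).
import json
import requests

fixture = json.loads("""
{
  "input": {"temperature": 38.5, "age": 70},
  "expected_output": {"alert": "fever", "recommendation": "refer"}
}
""")

def run_backend_test(base_url: str, fixture: dict) -> bool:
    # Send the specified input values as a request to the (mocked) backend.
    response = requests.post(base_url + "/evaluate", json=fixture["input"])
    response.raise_for_status()
    # Full comparison of the response payload with the expected output.
    return response.json() == fixture["expected_output"]

# e.g. run_backend_test("http://localhost:8080", fixture)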

An endurance test is performed to verify non-functional requirements such as availability and performance. Automated functional tests and the endurance test are executed as part of a CI pipeline whenever a change is committed to the CDS service's repository in GitLab (https://about.gitlab.com/), thus effectively providing regression testing after each change.

When the development is finalized and any defects found during exploratory tests have been resolved, acceptance tests are planned with the customer. Acceptance test specifications contain detailed test cases designed to verify all the functional requirements in the SRS. Acceptance tests have been performed in different circumstances throughout the years. In some projects, the customer's environment could be replicated, and the application was installed and executed there for acceptance testing. In other projects, where this was too complex, acceptance tests were conducted on site.

4.2 Approaches to validation

The categories of approaches to the validation of CDS services discussed during the interviews are listed in table 4.2 (see appendix C.3 for the corresponding mind map). Sub-sections 4.2.1–4.2.4 will describe each of these categories.

