The curious case of artificial intelligence


Department of Law Spring Term 2021

Master’s Thesis in Legal Informatics 30 ECTS

The curious case of artificial intelligence

An analysis of the relationship between the EU medical device regulations and algorithmic decision systems used within the medical domain

Author: Pernilla Björklund

Supervisor: Katja De Vries


There’s no time limit, stop whenever you want.

You can change or stay the same, there are no rules to this thing.

We can make the best or the worst of it. I hope you make the best of it.

- Eric Roth, The Curious Case of Benjamin Button, 2008


Abstract

The healthcare sector has become a key area for the development and application of new technology and, not least, Artificial Intelligence (AI). New reports are constantly being published about how this algorithm-based technology supports or performs various medical tasks. These reports illustrate the rapid development of AI that is taking place within healthcare and how algorithms are increasingly involved in systems and medical devices designed to support medical decision-making.

The digital revolution and the advancement of AI technologies represent a step change in the way healthcare may be delivered, medical services coordinated and well-being supported. This could allow for easier and faster communication, earlier and more accurate diagnosing and better healthcare at lower costs. However, systems and devices relying on AI differ significantly from other, traditional, medical devices. AI algorithms are – by nature – complex and partly unpredictable. Additionally, varying levels of opacity have made it hard, sometimes impossible, to interpret and explain recommendations or decisions made by or with support from algorithmic decision systems. These characteristics of AI technology raise important technological, practical, ethical and regulatory issues.

The objective of this thesis is to analyse the relationship between the EU regulation on medical devices (MDR) and algorithmic decision systems (ADS) used within the medical domain. The principal question is whether the MDR is enough to guarantee safe and robust ADS within the European healthcare sector or if complementary (or completely different) regulation is necessary. In essence, it will be argued that (i) while ADS are heavily reliant on the quality and representativeness of underlying datasets, there are no requirements with regard to the quality or composition of these datasets in the MDR, (ii) while it is believed that ADS will lead to historically unprecedented changes in healthcare, the regulation lacks guidance on how to manage novel risks and hazards unique to ADS, and that (iii) as increasingly autonomous systems continue to challenge the existing perceptions of how safety and performance are best maintained, new mechanisms (for transparency, human control and accountability) must be incorporated in the systems. It will also be found that the ability of ADS to change after market certification will eventually necessitate radical changes to the current regulation, and that a new regulatory paradigm might be needed.


Table of Contents

Abstract

Abbreviations

1. Introduction

1.1. Objective

1.2. Scope and limitations

1.3. Methodology and Materials

1.4. Thesis structure

2. The Medical Device Regulation

2.1. The objective of the MDR

2.2. ADS as medical devices

2.3. Requirements for manufacturers of SaMD and ADS

2.3.1. A risk-based approach

2.3.2. Pre-market and post-market procedures

2.3.3. Unique device identification

2.4. MDR and the development of new medical technology

3. Artificial Intelligence

3.1. Machine Learning

3.2. Supervised and Unsupervised Machine Learning

3.3. Deep Learning and Artificial Neural Networks

3.4. ‘Locked’ and ‘Adaptive’ Algorithms

4. Artificial Intelligence in medical decision-making

5. The particular features of ADS and the regulatory challenges

5.1. Trust and acceptability as the ultimate goal

5.2. Data management: access, quality and representativeness

5.2.1. The relationship between accurate and reliable AI systems and data accessibility

5.2.2. Limitations in data accessibility due to data protection and competition

5.2.3. Lack of good quality labelled data

5.2.4. Biased data endangers diversity, fairness and the right to non-discrimination

5.2.5. Regulating access, data quality and representativeness

5.3. The system: robustness, transparency and autonomy

5.3.1. Technical robustness and the infrastructure of the AI model

5.3.2. The disputed notion of transparency

5.3.3. Autonomy, human control and accountability

5.3.4. Adaptiveness: the ability to change after market certification

5.4. With great power comes great (regulatory) responsibility

6. Medical ADS & the MDR

6.1. Pre-market procedures for accuracy, reliability & avoidance of unfair bias

6.1.1. Specification of the intended purpose

6.1.2. Envelopment to avoid unfair bias?

6.1.3. Verification, validation and testing environment

6.2. Lifecycle management for continuous safety and performance

6.2.1. Systems for quality and risk management

6.2.2. Reinforced post-market procedures

6.3. The rationale of transparency in medical ADS

6.3.1. Traceability to facilitate accuracy, reliability and accountability

6.3.2. Explainability to enhance trust and confidence in medical decisions

6.3.3. Regulating transparency in the MDR

6.4. Autonomy

6.4.1. A prohibition of automatic decisions?

6.4.2. Human oversight and control

6.4.3. Accountability mechanisms in medical ADS

6.4.4. Adaptiveness: the ability to change after market certification

7. Concluding remarks

Bibliography


Abbreviations

AI Artificial intelligence

ADS Algorithmic Decision Systems

SaMD Software as a medical device

MDR Regulation (EU) 2017/745 of the European Parliament and of the Council of 5 April 2017 on medical devices, amending Directive 2001/83/EC, Regulation (EC) No 178/2002 and Regulation (EC) No 1223/2009 and repealing Council Directives 90/385/EEC and 93/42/EEC (MDR)

GDPR Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation)

MDD Council Directive 93/42/EEC of 14 June 1993 concerning medical devices.

AIMD Council Directive 90/385/EEC of 20 June 1990 on the approximation of the laws of the Member States relating to active implantable medical devices

UDI Unique device identifier

UDI-DI A unique numeric or alphanumeric code specific to a model of device and that is also used as the ‘access key’ to information stored in a UDI database

UDI-PI A numeric or alphanumeric code that identifies the unit of device production

EUDAMED The European database on medical devices

ICU Intensive Care Unit

IMDRF International Medical Device Regulators Forum

FDA Food and Drug Administration

ITU International Telecommunication Union


1. Introduction

Attaining good and affordable healthcare for all has long been a top priority for governments and organisations worldwide.1 However, major challenges have been, and still are, posed by growing populations, demographic changes and shortages of healthcare practitioners. In striving to overcome these challenges and reach global health goals, the healthcare sector has become a key area for the development and application of new technology and, not least, Artificial Intelligence (AI).2 New reports are constantly being published about how this algorithm-based technology supports or performs various medical tasks. These reports illustrate the rapid development of AI that is taking place within healthcare and how algorithms are increasingly involved in systems designed to support medical decision-making.3 Furthermore, many working in the field of medical technology predict a near future with AI systems that are able to not only support, but also autonomously make, medical decisions. There is a widely shared belief that the use of these algorithmic decision systems (ADS) will generate tremendous improvements for patients as well as healthcare practitioners and, ultimately, society as a whole.4 It is, however, also widely recognised that medical decisions often have a significant impact on patients’ health and lives.5 Hence, the introduction and use of ADS within the medical domain is not without challenges.

Systems and devices relying on AI differ significantly from other, traditional, medical devices. AI algorithms are – by nature – complex and partly unpredictable. Additionally, varying levels of opacity have made it hard, sometimes impossible, to interpret and explain recommendations or decisions made by or with support from ADS.6 These characteristics of AI technology raise important technological, practical, ethical and

1 Wiegand, T. et al. (ITU), Whitepaper for the ITU/WHO Focus Group on Artificial Intelligence for Health, 2018, p 2; See e.g. Constitution of the World Health Organisation, 2006; UN General Assembly, Transforming our world: the 2030 Agenda for Sustainable Development, 2015; Charter of Fundamental Rights of the European Union Art. 35; European Commission, Together for Health: A Strategic Approach for the EU 2008-2013, 2007 (Hereinafter cited as ‘Together for Health’); European Commission, Europe 2020 – for a healthier EU, [online] Available at: <https://ec.europa.eu/health/europe_2020_en>.

2 See e.g. Wiegand, T. et al. (ITU), p 3; Europe 2020. A strategy for smart, sustainable and inclusive growth. Communication from the Commission, COM (2010) 2020 final, 2010, p 10.

3 Ibid. See also e.g. Castelluccia, C. and Le Métayer, D., Understanding algorithmic decision-making: Opportunities and challenges, 2019, p 15.

4 Esmaeilzadeh, P., Use of AI-based tools for healthcare purposes: a survey study from consumers’ perspectives, 2020, p 2; Horgan, D. et al., Artificial Intelligence: Power for Civilisation – and for Better Healthcare, 2019, p 146; European Commission, White Paper on Artificial Intelligence - A European approach to excellence and trust, 2020 (Hereinafter cited as ‘White Paper on AI’), p 1.

5 Ibid.

6 See e.g. Wiegand, T. et al. (ITU), p 3; Robbins, S., AI and the path to envelopment: knowledge as a first step towards the responsible regulation and use of AI-powered machines, 2020, p 392 f.


regulatory issues. Unforeseeable safety deficiencies, discriminatory bias and lack of oversight and control may result in unequal distribution of healthcare and catastrophic consequences for individuals. Furthermore, there is a significant risk that uncertainty about the quality and safety of the systems leads to an unwillingness among stakeholders to use them. In striving towards a world-leading position in the development of innovative and technological solutions to promote health, the European Union (EU) must thus find a way to balance the assurance of safe and robust ADS in healthcare on the one hand and the promotion of innovation on the other.7 Apart from technological and financial measures, adequate regulation and guidance are essential, particularly in ensuring patient safety and enhancing market acceptance.

Hitherto, there is no regulation explicitly governing medical ADS. However, these systems must still comply with both general regulatory requirements for medical devices and other relevant national and international legislation.8 The most central regulation is the new and updated framework for medical devices, Medical Device Regulation 2017/745 (MDR)9, which will become applicable and fully replace the preceding legislation on the 26th of May 2021.10 The MDR imposes strict requirements on safe and well-performing medical devices on the European market. Medical ADS are no exception. Nevertheless, as the regulation aspires to be technologically neutral, requirements tailored to AI systems are notably absent.

In light of this, this thesis will analyse and discuss the relationship between the MDR and medical ADS. A number of gaps will be identified. In essence, it will be found that (i) while ADS are heavily reliant on the quality and representativeness of underlying datasets, there are no requirements with regard to the quality or composition of these datasets in the MDR, (ii) while it is believed that ADS will lead to historically unprecedented changes in healthcare, the regulation lacks guidance on how to manage novel risks and hazards unique to ADS, and that (iii) as increasingly autonomous systems continue to challenge the existing

7 High-Level Expert Group on Artificial Intelligence, Sectoral considerations on the policy and investment recommendations for trustworthy artificial intelligence, 2020, p 11 f. (Hereinafter cited as ‘HLEG (2020)’); Europe 2020 – for a healthier EU [online] Available at: <https://ec.europa.eu/health/europe_2020_en>; White Paper on AI, p 2.

8 Cf. e.g. Becker, K. et al., Digital health – Software as a medical device in focus of the medical device regulation (MDR), 2019, p 217; Minssen, T. et al., Regulatory responses to medical machine learning, 2020, p 7; Buyers, J., Artificial Intelligence: The Practical Legal Issues, p 96 f.

9 Regulation (EU) 2017/745 of the European Parliament and of the Council of 5 April 2017 on medical devices, amending Directive 2001/83/EC, Regulation (EC) No 178/2002 and Regulation (EC) No 1223/2009 and repealing Council Directives 90/385/EEC and 93/42/EEC (MDR).

10 MDR Art. 2(1) and preamble (19). The date of application was originally set for May 2020 but, due to the outbreak of the coronavirus, was postponed to 2021.


perceptions of how safety and performance are best maintained, new mechanisms (including transparency, human control and accountability) must be incorporated in the systems.

1.1. Objective

The objective of this thesis is to analyse the relationship between the existing regulation for medical devices in the EU, the MDR, and ADS used within the medical domain. The principal question is whether the MDR is enough to guarantee a safe and responsible development of ADS within the European healthcare sector or if complementary (or completely different) regulation is necessary. To answer this, the following sub-questions will be addressed:

i) What are the key features of ADS that distinguish them from other SaMD, and what novel regulatory challenges do they involve?

ii) In relation to these features, how does the MDR apply to ADS?

iii) Are the existing pre-market and post-market procedures established in the MDR enough to guarantee safe and well-performing ADS within the European healthcare sector?

1.2. Scope and limitations

This thesis focuses on algorithmic decision systems, ADS. In essence, ADS are a specific type of system that uses AI algorithms to support decision-making.11 The algorithms may be trained in a supervised or unsupervised environment, through ‘traditional’ machine learning or through deep learning techniques (with neural networks).12 They may exist on a scale from fully autonomous to merely supportive, depending on the level of human intervention. Additionally, they may be locked, not allowing changes after market placement, or adaptive and continuously evolving throughout their lifetime.13 These systems are increasingly involved in medical decision-making and are predicted to generate radical improvements (and challenges) for the healthcare sector.

The scope of the thesis’ analysis is further limited to the MDR.14 In particular, the analysis will focus on, but not be limited to, the requirements on the pre-market and post-market procedures that manufacturers of medical devices must meet to demonstrate safety

11 Castelluccia, C. and Le Métayer, D., p 3.

12 See ch. 3.1, 3.2 and 3.3.

13 See ch. 3.4

14 Note that the MDR refers to a number of harmonised standards in which more detailed guidance about the production of medical devices may be found. These standards are not available to the general public and will therefore not be included in this thesis.


and performance of their devices. The analysis is a legal-systematic one. The aim is not to provide any definite or exhaustive answers to how future regulation should be phrased, but to identify where ADS pose new regulatory challenges within the medical domain and discuss whether or not complementary regulation is needed.

At the time of writing, practically all medical ADS operate on datasets including personal data. In 2016, the General Data Protection Regulation 2016/679 (GDPR)15 was adopted to safeguard individuals’ right to personal data protection.16 Thus, the GDPR will be addressed in relevant parts, when needed to make a comprehensive analysis of the existing regulation. However, the relationship between ADS and personal data protection will not be considered at length. The focus on the GDPR will be limited to (i) how the GDPR limits access to data in the development of ADS and (ii) how the GDPR addresses automated decision-making and how that affects the use of ADS within healthcare.

1.3. Methodology and Materials

When studying the legal sources for the purpose of determining the meaning and applicability of a legislative act, the method known as the ‘legal dogmatic method’ is commonly used.17 The legal dogmatic method primarily uses the traditional legal sources, including legal acts and legal literature.18 In this thesis, the method is employed to make a de lege lata analysis of the MDR and its applicability to ADS; that is, an analysis of how the current state of the regulation applies to such systems in practice. To this end, the MDR has been used as the primary legal source. A selection of guiding documents from the EU and doctrinal material have been used as support for interpretation and analysis. Moreover, the GDPR will be addressed in relevant parts to highlight certain concerns related to the development of ADS.19

The thesis further encompasses elements of legal reasoning de lege ferenda; beyond merely defining and applying the current regulation, it discusses the sufficiency of the MDR and whether additional regulation is needed.20 To what extent a traditional application of the legal dogmatic method allows for a de lege ferenda perspective is rather ambiguous. The method is

15 Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation)

16 GDPR, Art. 1.

17 Lehrberg, B., Praktisk juridisk metod, 2020, p 203.

18 Kleineman, J., in Juridisk Metodlära, 2018, p 21.

19 See ch. 1.2.

20 Lehrberg, B., p 281.


criticised by some for excluding such a perspective, while others insist that it may be included.21 An alternative method is the ‘legal analytical method’.22 This method focuses on a broader analysis of the law and opens up for criticism and argumentation as well as discussions about appropriate legal solutions for the future.23 Irrespective of where the line between these two methods is drawn, the method of this thesis undoubtedly goes beyond the more stringent application of the legal dogmatic method. Due to the nature of the topic and the absence of explicit regulation on medical ADS, a more critical perspective on the law and a discussion of potential regulatory solutions are considered necessary for an adequate analysis. Thus, the thesis claims to employ a dogmatic method for the reasoning de lege lata and an analytical method for the reasoning de lege ferenda.

In the parts reasoning de lege ferenda, a broader selection of material has been made.

In addition to research and literature about ADS specifically referring to the MDR, articles and papers discussing the features of AI in relation to regulation more broadly have been used. Two of the primary sources in this regard have been the reports on trustworthy AI published by the EU’s independent High-Level Expert Group on Artificial Intelligence (HLEG), set up to advise the European Commission in its development of a harmonised AI strategy for the EU.24 Furthermore, without any attempt to make an exhaustive comparison of the regulations, the analysis includes some references to the corresponding regulation in the US and associated doctrinal literature. The United States (the US) and the EU are both market leaders in the development of AI in the medical domain. Additionally, the US is the first (and, so far, only) country to suggest a regulatory framework specifically designed for this purpose.25 Thus, such references are suitable to further highlight certain regulatory concerns and potential solutions.

In addition to the dogmatic and the analytical methods, a methodology of legal informatics has been employed. Legal informatics is an interdisciplinary method, which adds perspectives from the field of informatics to the traditional legal ones.26 Although the thesis has a regulatory perspective, it is, because of the nature of the topic, necessary to look beyond the traditional legal sources. To fully understand the regulatory concerns, fundamental

21 See e.g. Kleineman, J., p 33 f.; Jareborg, N., Rättsdogmatik som vetenskap, 2004, p 8; Sandgren, C., Rättsvetenskap för uppsatsförfattare – Ämne, material, metod och argumentation, 2018, p 48 f.

22 Sandgren, C., p 48 f.

23 Ibid.

24 High-Level Expert Group on Artificial Intelligence (HLEG), Ethics guidelines for trustworthy AI, 2019, p 16 f. (Hereinafter cited as ‘HLEG (2019)’); HLEG (2020).

25 Osadchuk, M.A. et al., Legal Regulation in Digital Medicine, 2020, p 150.

26 See Magnusson Sjöberg, C., Rättsinformatik: Juridiken i det digitala informationssamhället, 2016, p 23; Seipel, P., IT Law in the Framework of Legal Informatics, p 32 f.


knowledge about the features of AI, as well as how it is used within healthcare in practice, is essential. To this end, the method of legal informatics allows material from other fields, such as computer science and medical technology, to be used.27 While facilitating an analysis of the interaction between information technologies and law, the broader scope of materials may contribute to a partly incoherent method. Thus, certain considerations have been made in the selection of material. When referring to medical research and studies of the practical use of AI in medical devices, the selection of sources has been limited to studies cited by credible actors such as the World Health Organisation (WHO) and the EU. When defining and discussing AI, regulatory studies of AI have been the primary sources. These sources have been found to provide coherent and, for the purpose of this thesis, relevant definitions.

Where a more comprehensive definition and deeper understanding of certain features was needed, a number of other, more technical, sources have been used as support.

In short, the thesis encompasses legal reasoning de lege lata as well as de lege ferenda, employing a ‘legal dogmatic method’ for the former and a ‘legal analytical method’ for the latter. Furthermore, a method of legal informatics and material from medical technology and computer science have been employed to provide sufficient background information on the topic and facilitate a comprehensive analysis.

1.4. Thesis structure

There is evidently a lot of promise in medical ADS, but appropriate regulation will be needed to ensure that such systems are developed, deployed and used so as to maximise benefits and minimise risks. In this thesis, the new regulation for medical devices (MDR) and its applicability to medical ADS will be discussed. In the following chapter, the MDR and its key provisions in respect of medical ADS will be introduced (chapter 2). Before these are returned to in chapter 6, it will be necessary to zoom in on AI (chapter 3), how ADS are applied in medical decision-making (chapter 4) and which regulatory issues the particularities of AI and ADS raise (chapter 5). Subsequently, the provisions in the MDR introduced in chapter 2 will be more thoroughly analysed in relation to medical ADS (chapter 6). The analysis will be centred on the requirements for manufacturers of medical devices and whether or not they are sufficient to ensure a safe introduction of increasingly complex and autonomous ADS in medical decision-making. Where gaps in the existing regulation are

27 Ibid.


identified, appropriate amendments are discussed. Finally, a short summary and a concluding remark will be provided (chapter 7).

2. The Medical Device Regulation

In the EU, medical devices are subject to extensive regulation. An updated regulatory framework, the MDR, adopted in 2017, will become applicable and fully replace the preceding directives on the 26th of May this year, 2021. The regulation format makes the MDR, unlike the former directives, directly enforceable in all member states, with the aim of further harmonising and clarifying the rules surrounding medical devices.28 Consequently, all stakeholders on the EU market must comply with these rules. The most central stakeholders in ensuring qualitative and safe medical devices are the manufacturers.29 They must, both prior to the market placement of their device and throughout the device’s lifetime, undertake a number of procedures to ensure and demonstrate its safety and performance.

The aim of this chapter is to give an introduction to the MDR, its applicability to medical ADS and its requirements on medical device manufacturers, manufacturers of software-based medical devices (SaMD) in particular. After a short presentation of the objective of the regulation, its relevance for and applicability to medical ADS will be accounted for. Subsequently, the key requirements that a manufacturer must meet, and their relation to the features of the device, will be identified and presented. These requirements will be returned to in chapter 6, where they will be analysed more thoroughly in relation to ADS.

2.1. The objective of the MDR

Before certain products are distributed to the market within the EU, the manufacturer must assure conformity with all relevant EU legislation. To demonstrate such conformity, a number of EU regulations establish a mandatory quality assurance scheme, obligating manufacturers to draw up a declaration of conformity and affix a ‘CE marking’ to the device before releasing it to the market.30 For medical devices, these requirements have previously

28 MDR preamble (2); Contardi, M., Changes in the Medical Device's Regulatory Framework and Its Impact on the Medical Device's Industry: From the Medical Device Directives to the Medical Device Regulations, 2019, p 173.

29 See e.g. MDR preamble (32).

30 Regulation (EC) No 765/2008 of the European Parliament and of the Council of 9 July 2008 setting out the requirements for accreditation and market surveillance relating to the marketing of products and repealing Regulation (EEC) No 339/93, see especially Art. 30.


been codified in a number of legislative acts, primarily the MDD31 and the AIMD32, and are now established in the MDR.33 In large part, the requirements in the MDR equal the requirements in the former legislation. Nevertheless, the MDR implements some significant reinforcements and clarifications of already existing regulatory elements and introduces a number of important novelties, especially regarding SaMD.34

The aim of the regulation and the CE-marking system is twofold. Primarily, the aim is to create a solid, sustainable and predictable regulatory framework that ensures high-quality healthcare, patient safety and a functioning internal market.35 At the same time, the regulation strives to support innovation and take into special account the small- and medium-sized companies active in the sector.36 This balance, between comprehensive regulation and promotion of innovation, is an inevitable concern when new regulation is introduced in the field of medical technology.37 While the often critical nature of medical decisions justifies the calls for regulation, increased robustness and additional safety barriers often come at the cost of expensive and time-consuming procedures. In the medical technology industry, the companies that are driving the development of medical AI are, furthermore, often smaller companies or start-ups with limited resources.38 Too far-reaching regulation could thus create significant obstacles for innovative companies to comply with the regulation and receive market certification. In turn, the healthcare sector risks losing the potential benefits of the systems and devices that never reach the market. Thus, proposals for additional regulation must at the same time be designed to allow for innovation and continued technological development.

2.2. ADS as medical devices

In article 2 of the MDR, ‘medical device’ is defined as “any instrument, apparatus, appliance, software, implant, reagent, material or other article intended by the manufacturer to be used, alone or in combination, for human beings”39. It must further be intended for one or more of the medical purposes listed in the definition, including diagnosis, monitoring, prevention,

31 Council Directive 93/42/EEC of 14 June 1993 concerning medical devices.

32 Council Directive 90/385/EEC of 20 June 1990 on the approximation of the laws of the Member States relating to active implantable medical devices.

33 MDR Art. 2 (43), 10(6), 19, and 20.

34 Becker, K. et al., p 211; Contardi, M., 2019; See further ch. 2.3.

35 MDR preamble (1) and (2); Becker, K. et al., p 211.

36 MDR preamble (1) and (2).

37 Thienpont, E., et al., Guest Editorial: New Medical Device Regulation in Europe: A Collaborative Effort of Stakeholders to Improve Patient Safety, 2020, p 930.

38 Contardi, M., p 167.

39 MDR Art. 2(1).


treatment or prognosis of diseases and other medical conditions.40 The definition remains similar to the previous legislation (MDD) but has been extended to include more devices and software applications than before.41 Worth noting is that the new regulation explicitly establishes that stand-alone software, when meeting the requirement for intended medical purpose, qualifies as a medical device (generally referred to as ‘software as a medical device’

or SaMD). Hence, independent software as well as software controlling or driving (hardware) medical devices may be subject to the MDR.42 Software intended for general purposes, including ‘simple search’, storage, staff planning and invoicing systems, is however not, even when used in medical settings, to be defined as a medical device.43 This revision of the definition has made the boundaries between medical and non-medical software devices clearer and has thus diminished the debate about when the MDR is applicable to software-based devices and when it is not.44

For the purpose of the MDR, “software” may further be defined as “a set of instructions that processes input data and creates output data”45 and AI, accordingly, as a specific type of software.46 Hence, all AI-based systems (including ADS) used for a medical purpose (such as medical decision-making) fall under the scope of the regulation as SaMD.

Whether the ADS operates alone as software or is incorporated in a hardware medical device makes no difference. Ultimately, this means that manufacturers of medical ADS must comply with not only the requirements targeting traditional medical devices, but also the requirements specifically designed for SaMD.

2.3. Requirements for manufacturers of SaMD and ADS

The MDR includes a number of generic requirements that apply to ‘traditional’ medical devices as well as SaMD and, thus, also medical ADS. These establish obligations for the different operators47 involved in the development and deployment of medical devices in relation to technical documentation, clinical performance evaluation, risk and quality

40 Ibid.

41 Cf. MDD Art. 1(2)(a) and MDR Art. 2(1); Becker, K. et al., p 211.

42 Medical Device Coordination Group (MDCG), Guidance on Qualification and Classification of Software in Regulation (EU) 2017/745 – MDR and Regulation (EU) 2017/746 – IVDR guidance, 2019, p 7; See also Mitchell, C. and Ploem, C., Legal challenges for the implementation of advanced clinical digital decision support systems in Europe, 2018, p 426.

43 MDR preamble (19); MDCG, p 6 f.

44 Becker, K. et al., p 211; See also Mitchell, C. and Ploem, C., p 426.

45 MDCG, p 5.

46 See the definition of AI and Machine learning in ch. 3.1 below.

47 Manufacturers, authorised representatives, importers and exporters.


management, post-market activities and surveillance. Additionally, they include requirements targeting design, functionality, environment interaction and connectivity, as well as labelling and instructions for use.48 A number of these provisions take software into particular account, including in aspects regarding IT security, design and manufacture, technical documentation and clinical investigations.49 As a result of these partly new, partly reinforced requirements, the MDR clearly addresses software and new technology to a significantly larger extent than the previous legislation.

The requirements for manufacturers of medical devices may be approached in three stages: (i) the risk categorisation of the device, (ii) the pre-market procedures to demonstrate compliance and receive market certification (CE marking), and (iii) the post-market procedures to maintain safety and performance throughout the lifetime of the device. Below, the key provisions of these stages will be considered in more detail. Additionally, the newly introduced system for unique device identification (UDI), offering enhanced possibilities for surveillance, will be presented. The aim, however, is not to address these provisions exhaustively, but to provide an introduction to the regulation before zooming in on medical ADS and their particularities. Only thereafter will these requirements, and their relationship to safe and well-performing ADS, be considered in more detail.50

2.3.1. A risk-based approach

The regulation relies on a risk-based approach, where medical devices are classified according to risk-level and the requirements for the operators subsequently depend on the classification. The MDR introduces specific rules for software. The new rules impose a general upgrade in the classification of software devices, placing the vast majority of such devices in a higher risk group. Consequently, the requirements for pre-market approval of these devices have become significantly higher.51 In brief, the classification system divides medical devices into four risk groups – I, IIa, IIb and III – based on certain criteria.52 Software embedded in a hardware device should be classified together with that device.53 The classification is then broadly based upon the intended use and inherent risk of the hardware

48 Turpin, R, et al. (AAMI and BSI) Machine learning AI in medical devices: Adapting regulatory frameworks and standards to ensure safety and performance, 2020, p 12.

49 See e.g. MDR Annex I s 7; MDR Annex II s 6.1(b); MDR Annex XV s 2.3.

50 See ch. 6.

51 Cf. Becker, K. et al., p 213.

52 MDR Art. 51(1); MDR Annex VIII ch. 2 and 3.

53 MDR Annex VIII rule 3.3; MDCG, p 8, 12.


device and its interaction with the patient’s physical body.54 Independent software is classified according to the risk related to the information provided, rather than the interaction between the device and the human body.55 The MDR introduces a new rule, in which the classification criteria refer to (i) the impact of the information provided by the software on a healthcare situation and (ii) the severity of the healthcare situation or patient condition.56 Explicitly, the rule establishes that “software intended to provide information which is used to take decisions with diagnosis or therapeutic purposes”57 falls under class IIa. In certain, more critical, cases the software is classified as class IIb or III.58 All other software falls under class I.59 ADS are by definition intended to support decision-making and therefore fall under class IIa, IIb or III. It is, however, noteworthy that software devices which are not intended to provide information used to take decisions with diagnostic or therapeutic purposes are unlikely to even qualify as medical devices. Hence, in practice, they fall outside the scope of the MDR. Consequently, almost all software medical devices will fall under the three higher risk groups and be subject to considerably higher requirements on pre-market and post-market assessments and activities than such devices were under the previous legislation.60

2.3.2. Pre-market and post-market procedures

Before a device is placed on the EU market, the manufacturer is required to conduct a number of assessments and evaluations to ensure its functionality and quality (hereinafter referred to as “pre-market procedures”).61 The pre-market procedures vary according to the risk classification of the device. However, all manufacturers are obliged to establish, implement, document and maintain a risk management system and present certain technical documentation on the device.62 They must also undertake product testing and conduct a clinical evaluation to verify the performance and safety of the device.63 Furthermore, all devices in class IIa, IIb and III (that is, all ADS) require assessment and certification by a Notified Body – a national body designated to assess conformity with the MDR.64 In

54 Ibid. rule 9,10 and 12.

55 Ibid. rule 11; MDCG, p 12.

56 MDCG, p 26.

57 MDR Annex VIII rule 11.

58 Ibid.

59 Ibid.

60 Cf. Becker, K. et al., p 211.

61 See MDR Art. 52; Annex IX, X and XI.

62 See MDR Art. 10(2) and Annex I s 3 on risk management system; See MDR 10(4); Annex II; Annex III on technical documentation. Note that manufacturers of custom-made products are excluded in 10(4).

63 MDR Art. 10(3) and Art. 61; MDR Annex II s 6; MDR Annex XIV part A.

64 MDR Preamble (60); MDR Art. 52, see especially 52(3), (4), (6) and (7); For definition of ‘notified body’, see MDR Art. 2(42); See also Minssen, T. et al, p 14.


addition, a quality management system, covering a number of organisational aspects, must be established and assessed by the notified body.65 Manufacturers of higher-class devices are in certain cases also obliged to provide the notified body with further technical documentation and be subject to periodical audits.66 Lastly, for some class III and IIb devices, additional procedures for clinical evaluation consultation by independent expert panels apply.67

Once the device has been released to the market, there are two major post-market procedures in which the manufacturer plays an active role: post-market clinical follow-ups (PMCF) and post-market surveillance.68 The PMCF includes an obligation for the manufacturer to continuously collect clinical data from post-market experience in order to update the clinical evaluation. The aim is to ensure high performance and safety of the device throughout its lifetime, identify and monitor risks and ensure continued benefit and acceptability.69 Additionally, a more comprehensive post-market surveillance system must be established to systematically gather and assess post-market experience for the purpose of updating the technical documentation and detecting any urgent need for preventive or corrective actions.70 The level of activity and the frequency of the actions are determined with regard to the risk classification and the type of device.71 In contrast to the previous legislation, in which post-market activities were hardly addressed, the MDR introduces significant reinforcements and strengthens the post-market surveillance system.72

Despite few explicit references to software in the provisions on pre-market and post-market procedures, the updates in the regulation are highly relevant for manufacturers of SaMD and ADS. As noted, a majority of all SaMD were previously classified as class I devices. Thus, they were not subject to any intervention by notified bodies prior to distribution. Under the MDR, these devices, particularly those classified as higher risk devices, are subject to a significantly increased number of requirements – both prior to and after placement on the market. As a result of the upgrade in risk classification, practically all future SaMD will be subject to approval by a notified body. Hence, substantial efforts will be required by manufacturers of these devices to adapt to the extensive reinforcements of

65 MDR Art. 10(9); MDR Annex IX, ch. 1, 1 and 2.1; ch. 3.

66 MDR Annex IX ch. 1, 3; ch. 2.

67 MDR Art. 54; Annex IX 5.1.

68 See MDR Art. 10(3) and Annex XIV part B for PMCF; See MDR ch. VII section I for Post-market Surveillance.

69 MDR Annex XIV part B s 6.1.

70 MDR Art. 2(60); MDR preamble (74); MDR Art. 83.

71 See MDR Art. 83(1), 85 and 86.

72 Cf. Thienpont, E., et al., p 329.


the pre-market and post-market procedures. While this might hinder some manufacturers from accessing the market, the requirements (as will be shown in chapter 6) include important elements to ensure the safety and performance of medical devices and medical ADS.

2.3.3. Unique device identification

To enhance transparency and traceability, the MDR introduces a new system for unique device identification (UDI), requiring all devices to be marked with a unique identification code.73 In terms of labelling, the UDI comprises (i) a device identifier (UDI-DI) and (ii) a production identifier (UDI-PI). The former is specific to the manufacturer and the device; the latter identifies the unit of device production.74 All information gathered in relation to a device’s UDI is to be published in the newly created European database EUDAMED.75 The primary intention is to make information regarding medical devices, certifications, clinical investigations and economic operators available.76 In addition, by enabling unique identification and increased traceability, the system facilitates extended reporting of incidents and appropriate responses in case of safety deficiencies.

The database is still under construction, but when fully implemented, the UDI system has great potential to increase the general safety of medical devices.77 Firstly, it offers an efficient system for surveillance, tracing and withdrawal of devices. Furthermore, it enhances the opportunities to discover and manage errors at an early stage – if such errors have passed unnoticed in the pre-market procedures. Despite these unambiguous benefits, its impact on medical ADS is dual. On the one hand, the safety of medical ADS is as likely to improve as that of other medical devices. As will be further considered in chapter 6, the UDI system might even (together with the reinforced post-market procedures) constitute one of the more critical novelties in the MDR with regard to the advancement of AI and ADS within medicine. On the other hand, it introduces new thresholds for post-market changes, which may further impede the development of systems incorporating adaptive AI.78

73 Cf. Becker, K. et al., p 214; Contardi, M., p 175.

74 MDR Art. 27(1)(a); See also Contardi, M., p 175.

75 MDR Art. 28 and 33.

76 MDR Art. 33.

77 European Commission, Medical Devices – EUDAMED, [online] Available at: <https://ec.europa.eu/health/md_eudamed/overview_en>.

78 See definition of adaptive AI in ch. 3.4; See further discussion in ch. 6.4.4.


2.4. MDR and the development of new medical technology

It is evident that the updated regulation, the MDR, is better adapted to today’s rapid technological development than its predecessor. The regulation has, in almost all aspects, become more comprehensive. Additionally, the explicit amendments with regard to software will presumably facilitate further technological development for, at least, the near future. An adaptation to technology and software in general does, however, not guarantee suitability for the advancement of medical AI or ADS in particular. AI technologies include, as noted above, functionalities which have not previously existed in the field of medical technology. They also involve the processing of large amounts of data. This, in turn, requires new policy and regulatory considerations to sufficiently cover possible future medical technology, considerations that were not needed in the software field before.

3. Artificial Intelligence

The definition of Artificial Intelligence is notoriously ambiguous and this thesis does not purport to provide any definite or exhaustive definition. Instead, an attempt will be made to outline the most essential features of AI, necessary to understand the potential of, the concerns about and the regulatory challenges surrounding the use of AI in medical decision-making.

The foundations of Artificial Intelligence were laid in the 1950s by, among others, Alan Turing79, and the term ‘Artificial Intelligence’ itself was coined in 1956. AI is thus not in itself a new term. However, substantial advancements have been made during the last decades and today, there are numerous suggestions of and answers to what AI actually is. Roughly, most of the proposed definitions align around computerised systems being able to solve problems and make decisions in ways traditionally requiring what we regard as human intelligence.80 Such systems may be embedded in different hardware devices (e.g. robots or autonomous cars) or operate purely virtually as software (e.g. search engines and image recognition systems that are sold separately from the hardware).81 When talking about ADS used in medical settings, the most central concept to be familiar with is machine learning – an application of AI used to program and train algorithms in performing certain tasks.82 One more recent approach to machine learning in particular, called deep learning, has

79 Turing is often considered to be the first scientist in the field of AI. Two other pioneers were Warren McCulloch and Walter Pitts, see e.g. Buyers, J., p 4; Topol E.J. (2019), p 45.

80 See e.g. Wiegand, T. et al. (ITU), p 2; Esmaeilzadeh, P., p 2; Boucher, P. (European Parliament), How artificial intelligence works, 2019, p 1.

81 Ranschaert, E. R. et al., Artificial Intelligence in Medical Imaging: Opportunities, Applications and Risks, 2019, p 349.

82 Tschider, C.A., 2018, 183 f; Boucher, P. (European Parliament), p 3.


shown great potential and generated major breakthroughs for medical ADS.83 Depending on whether, when and how the algorithms adapt to new data after being distributed to the market, they can furthermore be placed on a scale from locked to adaptive.84 In this chapter, the fundamentals of machine learning, deep learning and the difference between locked and adaptive algorithms will be further explained.

3.1. Machine Learning

A majority of the AI systems used for medical decision-making today may be classified as machine learning systems. There are multiple branches within machine learning as well, but fundamentally, algorithms are trained to independently interpret and analyse data (input data) in order to create an output, for example a prediction or decision.85 For a traditional computer to recognise an object, every angle of the object, every variable and every eventuality has to be specifically programmed for. In contrast, machine learning involves systems programmed with algorithms capable of evolving and adjusting to new and unfamiliar data.86 Imagine taking a picture of a flower with your mobile phone and saving it in a photo application. When the picture is uploaded, you can use the search field to search for the word “flower”. Your picture, together with other pictures of flowers that you have uploaded previously, will then turn up on your screen. This is possible because an algorithm has been taught what “flower” is as a concept. Consequently, it is capable of recognising and categorising assorted variations of flowers as flowers, regardless of angle, shape or colour.87 With this ability to conceptualise and learn, machine learning (especially deep learning) algorithms are predicted to both improve and automate future decision-making.88

3.2. Supervised and Unsupervised Machine Learning

Machine learning algorithms can further be broadly categorised into supervised and unsupervised learning, referring to the level of human interaction involved in the training.89 Supervised machine learning involves training datasets, consisting of input data that are labelled by humans, and series of examples of desired outputs.90 An illustrative analogy to

83 Hinton, G., 2018, Deep Learning—A Technology with the Potential to Transform Health Care, p 1101.

84 FDA, p 5; Turpin, R, et al. (AAMI and BSI), p 6.

85 Tschider, C.A., p 184.

86 Buyers, J., p 3 and 10.

87 See Buyers, J., p 3.

88 See e.g. Horgan, D. et al., p 146.

89 Litjens, G. et al., A Survey on Deep Learning in Medical Image Analysis, 2017, p 3; Sidey-Gibbons, J. and Sidey-Gibbons, C., Machine learning in medicine: a practical introduction, 2019, p 3; Tschider, C.A., p 184.

90 Buyers, J., p 11.


the method is that of a student studying a number of questions with predefined answers in order to later be able to answer other questions in the same field.91 The aim is for the machine to learn to accurately create a certain output – usually a prediction, analysis or classification – for certain input data.92 This is widely used for image recognition systems. As in the example above, a machine learns to identify and categorise objects using algorithms trained with a large number of images of the object (training dataset) and corresponding explanations (desired outputs).93 To verify the functionality of the algorithm, another set of data (testing dataset) is subsequently used for testing the device.94
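The training/testing workflow described above can be sketched in a few lines of code. The following is a deliberately minimal illustration, not drawn from the thesis or from any real medical device: a nearest-neighbour rule stands in for the far more complex algorithms used in practice, and the feature values and labels are invented for the example.

```python
# Minimal supervised-learning sketch: a 1-nearest-neighbour classifier.
# Each sample is a pair of numeric features; the labels are provided by humans.

def predict(training_data, sample):
    """Return the label of the training example closest to `sample`."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    closest_features, closest_label = min(
        training_data, key=lambda pair: distance(pair[0], sample)
    )
    return closest_label

# Labelled training dataset: (features, label) pairs (illustrative values).
training_data = [
    ((1.0, 1.0), "benign"),
    ((1.2, 0.9), "benign"),
    ((6.0, 6.5), "malignant"),
    ((5.8, 6.1), "malignant"),
]

# Separate testing dataset used to verify the learned behaviour.
testing_data = [
    ((1.1, 1.1), "benign"),
    ((6.1, 6.0), "malignant"),
]

correct = sum(predict(training_data, x) == y for x, y in testing_data)
print(f"test accuracy: {correct}/{len(testing_data)}")
```

Note how the testing dataset is kept separate from the training dataset, mirroring the verification step described in the text.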

Unsupervised machine learning refers to algorithms processing datasets with unlabelled data to find patterns, correlations or outliers.95 The absence of labels and desired outputs makes the technique exploratory. At present, it is primarily used for classification and cluster analysis of raw data in large datasets.96 The process involves little or no human intervention or supervision. Therefore, unsupervised methods effectively enable processing of larger datasets than supervised methods do.97 However, while potentially offering radical advances in finding previously undiscovered patterns and relationships in data (including health data98), the lack of human insight and control in these techniques also poses significant trust and safety challenges.99
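The exploratory, label-free character of unsupervised learning can be illustrated with a hypothetical sketch. The data values and the choice of two clusters below are invented for the example; a minimal k-means procedure is used as a stand-in for the cluster-analysis techniques mentioned in the text.

```python
# Minimal unsupervised-learning sketch: k-means clustering (k = 2) on
# unlabelled one-dimensional measurements. No labels or desired outputs
# are given; the algorithm discovers the grouping on its own.

def kmeans_1d(values, iterations=10):
    # Initialise the two cluster centres at the extremes of the data.
    centres = [min(values), max(values)]
    for _ in range(iterations):
        # Assignment step: attach each value to its nearest centre.
        clusters = [[], []]
        for v in values:
            nearest = min((0, 1), key=lambda i: abs(v - centres[i]))
            clusters[nearest].append(v)
        # Update step: move each centre to the mean of its cluster.
        centres = [sum(c) / len(c) for c in clusters]
    return centres, clusters

# Unlabelled dataset with two natural groups (illustrative values).
values = [1.0, 1.2, 0.8, 9.0, 9.5, 8.7]
centres, clusters = kmeans_1d(values)
print(sorted(round(c, 2) for c in centres))  # the two discovered cluster centres
```

No human ever tells the algorithm which group a value belongs to; the structure emerges from the data itself, which is precisely why such methods scale to large datasets but offer limited human insight into the result.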

3.3. Deep Learning and Artificial Neural Networks

Currently, a subfield of machine learning called deep learning is driving the advancement of AI.100 By using artificial neural networks, the technology aims to replicate the functioning of the human brain and its ability to analyse complex and unfamiliar environments.101 Input data is processed through multiple hidden layers of small processing units – ‘neurons’ – to ultimately provide an output.102 This technology enables deeper analyses of considerably

91 Ranschaert, E. R. et al., p 361.

92 Tschider, C.A., 2018; Wiegand, T. et al. (ITU) p 3; Ching, T. et al., Opportunities and obstacles for deep learning in biology and medicine, 2018 p 2; Buyers, J., p 11.

93 Cf. the example in ch. 3.1.

94 Cofone, I. N., Algorithmic Discrimination Is an Information Problem, 2019, p 1426.

95 Tschider, C.A., 2018, p 184; Ching, T. et al., p 2; Litjens, G. et al., p 3.

96 Sidey-Gibbons, J. and Sidey-Gibbons, C., p 4; Ranschaert, E. R. et al., p 362.

97 Litjens, G. et al., p 27; Sidey-Gibbons, J. and Sidey-Gibbons, C., p 4; Tschider, C.A., p 184.

98 Such as patient records, medical information, clinical studies and diagnostic results, see e.g. Osadchuk, M.A. et al., p 150.

99 Tschider, C.A., p 184 f.

100Wiegand, T. et al. (ITU), p 2; Esteva A. et al., p 24.

101 Buyers, J., p 5 and 12 f; Hinton, G., p 1101; Boucher, P. (European Parliament), p 4; Ranschaert, E. R. et al., p 353.

102 Topol E.J. (2019), p 45; Litjens, G. et al., p 27.


larger datasets than is possible in ‘traditional’ machine learning. That being said, these models also need significantly more data and greater computing power than previous machine learning techniques.103 The mathematical processes behind deep learning algorithms are typically extremely complex, non-linear and multi-layered, which has led to great difficulties in interpreting and explaining how and why a specific decision has been made by the system.104 This is also the reason why AI systems involving deep learning are often referred to as “Black Boxes”.105
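The layered structure described above can be illustrated with a toy sketch. Everything below is invented for illustration: a single hidden layer of two ‘neurons’ feeds one output neuron, and the weights are arbitrary numbers rather than learned values, whereas real deep learning models contain many layers and millions of parameters learned from data.

```python
import math

# Minimal feed-forward sketch: input data passes through a hidden layer
# of "neurons" before producing an output. The weights are illustrative
# placeholders; in practice they are learned from training data.

def neuron(inputs, weights, bias):
    # Weighted sum of inputs followed by a non-linear activation (tanh).
    return math.tanh(sum(i * w for i, w in zip(inputs, weights)) + bias)

def forward(inputs):
    # One hidden layer with two neurons, feeding a single output neuron.
    hidden = [
        neuron(inputs, [0.5, -0.4], 0.1),
        neuron(inputs, [-0.3, 0.8], -0.2),
    ]
    return neuron(hidden, [1.0, 1.0], 0.0)

print(round(forward([0.7, 0.2]), 3))
```

Even in this tiny network, tracing why a particular output was produced requires unwinding several non-linear steps; multiplied across many layers and millions of weights, this is the opacity that gives rise to the “Black Box” label.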

3.4. ‘Locked’ and ‘Adaptive’ Algorithms

A final essential feature of AI systems (for the purpose of this thesis) is the ability to continuously and autonomously adapt to new data after being placed on the market. Algorithms capable of such adaptation are naturally referred to as ‘adaptive algorithms’. Systems employing adaptive algorithms are able to utilise novel input data and real-world experience to identify potential improvements and autonomously modify their internal algorithms.106 Consequently, the output for the same set of input data changes over time as the algorithms are updated.107 In medical settings, adaptive models may serve a number of different purposes. For example, they may optimise their performance for different environments and usage (such as local patient populations or a specific physician’s preferences) or improve their performance as more data is collected.108 In time, they may also be capable of generalising their knowledge to data beyond the training data and subsequently extending their field of application (for example, a system trained on male X-rays that is able to perform well on female X-rays).109

At the other end of the spectrum, ‘locked algorithms’ refers to algorithms that do not ‘learn’ or change when exposed to new data after being distributed for use. Hence, the algorithm provides the same output each time the same input data is applied to it.110 Changes subsequent to distribution may for some models be made manually, but autonomous modifications and updates are not possible. While locked algorithms still dominate the field

103 Buyers, J., p 15.

104 Ibid.; Ranschaert, E. R. et al., p 336.

105 Buyers, J., p 15; see below ch. 5.3.2.

106 Turpin, R, et al. (AAMI AND BSI), p 7 and note 8; Benjamens, S. et al., The state of artificial intelligence-based FDA-approved medical devices and algorithms: an online database, 2020 p 2.

107 FDA, p 5.

108 Ibid.

109 Turpin, R, et al. (AAMI AND BSI), p 10.

110 FDA, p 5; see also Benjamens, S. et al., p 2. Statistic lookup tables and decision trees are examples of locked algorithms.


of medical ADS, adaptive models appear increasingly often in recent research and modern applications.111
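The distinction between locked and adaptive algorithms can be sketched with a deliberately simple example. The class names, values and the averaging rule below are invented for illustration and bear no relation to any real medical device: both models are ‘trained’ on the same data, but only the adaptive one changes its behaviour as it observes post-market data.

```python
# Contrasting a 'locked' and an 'adaptive' algorithm. Both predict a value
# from the mean of observed data, but only the adaptive model keeps
# updating itself after deployment. All numbers are illustrative.

class LockedModel:
    def __init__(self, training_values):
        self._mean = sum(training_values) / len(training_values)

    def predict(self):
        # Same output every time: the model is frozen at certification.
        return self._mean

class AdaptiveModel(LockedModel):
    def observe(self, new_value):
        # Autonomously incorporate post-market data into the model,
        # so the output for the same query drifts over time.
        self._mean = (self._mean + new_value) / 2

training = [10.0, 12.0, 14.0]
locked, adaptive = LockedModel(training), AdaptiveModel(training)

adaptive.observe(20.0)  # real-world data arriving after market release
print(locked.predict(), adaptive.predict())  # locked stays at 12.0; adaptive has drifted to 16.0
```

It is exactly this post-certification drift, harmless in the sketch, that challenges a regulatory regime built around assessing a device once, before it is placed on the market.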

4. Artificial Intelligence in medical decision-making

With an extensive shortage of healthcare practitioners and a continuously growing population, digitalization and the further development of AI are believed to constitute a valuable, at times even necessary, contribution to global healthcare.112 Thus, it is natural that medicine is one of the domains where AI algorithms are increasingly involved in decision-making.113 ADS may, in various ways, impact both healthcare in general and the wellbeing of individuals. Advancements in medical imaging, pathology detection and diagnosing will ultimately improve decisions taken by healthcare practitioners and specialists.114 Furthermore, ADS are predicted to eventually be capable of not only supporting, but also taking medical decisions autonomously (that is, without any human intervention).115 By employing these systems, healthcare sectors around the world are likely to see major improvements in the quality, availability and affordability of healthcare.116

The great majority of currently deployed ADS are trained in supervised learning environments and applied to limited sets of data.117 Supervised machine learning systems (similar to the one categorising flowers118, but trained with medical images) support medical decision-making by facilitating medical diagnosis and pathology detection. A number of these systems have been remarkably successful in tasks like detecting tumours and skin lesions; making predictions and analyses based on data such as laboratory results, magnetic resonance imaging, demographics and diagnostic codes; and identifying and diagnosing urgent or easily missed medical conditions.119 In addition, ADS may assist in or recommend treatments based on a specific patient’s health data, offer support in adherence to treatment or therapeutics and enhance communication between practitioners and patients.120

111 FDA, p 2.

112 Wiegand, T. et al. (ITU), p 3.

113 See ch. 1; see also Castelluccia, C. and Le Métayer, D., p 1.

114 Castelluccia, C. and Le Métayer, D., p 15 f.

115 See e.g. Tschider, C.A., p 189.

116 Wiegand, T. et al. (ITU), p 3; Bell, D., p 9.

117 Esteva A. et al., A guide to deep learning in healthcare, 2019, p 25; Wiegand, T. et al. (ITU), p 4; Litjens, p 4; Topol, E.J., High-performance medicine: the convergence of human and artificial intelligence, 2019 (Hereinafter cited as ‘Topol E.J. (2019)’), p 45.

118 See ch. 3.1.

119 See e.g. Esteva A. et al., p 24 f; Horgan, D. et al., p 146; see also Litjens, G. et al.; Attia, Z.I. et al., An artificial intelligence-enabled ECG algorithm for the identification of patients with atrial fibrillation during sinus rhythm: a retrospective analysis of outcome prediction, 2019; Ching, T. et al.

120 See e.g. Horgan, D. et al., p 146 f.; Mitchell, C. and Ploem, C., p 424 ff.
