
UPPSALA UNIVERSITY
DEPARTMENT OF INFORMATICS AND MEDIA

BACHELOR THESIS

Robotic process automation -
An evaluative model for comparing RPA-tools

Authors:
Lucas Bornegrim and Gustav Holmquist

Supervisor:
Prof. Dr. Andreas Hamfelt

14 June 2020


Sammanfattning

This research studies the three market-leading RPA-tools, Automation Anywhere, Blue Prism and UiPath, in order to fill the lack of literature on methods for evaluating and comparing RPA-tools. Design science research was carried out by designing and creating artefacts in the form of process implementations and an evaluative model.

A typical process representing a common area of use was implemented using each of the three RPA-tools in order to create an evaluative model.

Official documentation, along with the three implementations, was studied. Evaluative questions specific to RPA-tool evaluation were created based on a quality model for product quality found in the ISO/IEC 25010 standard. Characteristics dependent on organisational context were not included in the evaluation, in order to create an evaluative model that does not depend on any specific business environment. The results of the research provide knowledge of (1) how RPA-tools can be implemented and (2) the differences that exist between the three market-leading RPA-tools. The research also contributes a method for investigating and evaluating the RPA-tools. When creating the evaluative model, some of the criteria in the ISO/IEC 25010 quality model were concluded to be of low relevance and are therefore not included in the resulting model. By analysing and evaluating the created evaluative model, using a theoretical concept of digital resources and their evaluation, the validity of the evaluative model was reinforced. From an evaluative perspective, this research emphasises the need to adapt and change existing evaluative methods in order to successfully evaluate the most relevant characteristics of RPA-tools.

Abstract

This research studies the three market-leading RPA-tools, Automation Anywhere, Blue Prism and UiPath, in order to fill the lack of literature regarding methods for evaluating and comparing RPA-tools. Design science research was performed by designing and creating artefacts in the form of process implementations and an evaluative model. A typical process representing a common area of use was implemented using each of the three RPA-tools, in order to create an evaluative model. Official documentation, along with the three implementations, was studied. Evaluative questions specific to RPA-tool evaluation were created based on a quality model for product quality found in the ISO/IEC 25010 standard.

Characteristics dependent on organisational context were not included in the evaluation, in order to create an evaluative model which is not dependent on any specific business environment. The results of the research provide knowledge of (1) how RPA-tools can be implemented and (2) the differences that exist between the three market-leading RPA-tools. The research also contributes in the form of a method for investigating and evaluating the RPA-tools. When creating the evaluative model, some of the criteria found in the ISO/IEC 25010 quality model were concluded to be of low relevance and, therefore, not included in the model. By analysing and evaluating the created evaluative model, using a theoretical concept of digital resources and their evaluation, the validity of the evaluative model was reinforced. From an evaluative perspective, this research emphasises the need to appropriate and change existing evaluative methods in order to successfully evaluate the most relevant characteristics of RPA-tools.


Contents

List of Figures
List of Tables

1 Introduction
  1.1 Background
  1.2 Literature review
  1.3 Problem Description/Area
  1.4 Purpose
  1.5 Research question
  1.6 Delimitations

2 Research Approach and Methodology
  2.1 Design Science Research
  2.2 Research approach
  2.3 Research method
    2.3.1 Data collection method
    2.3.2 Analysis method
  2.4 A critical examination of the method

3 Theory
  3.1 ISO/IEC 25000 series
  3.2 ISO/IEC 25010
  3.3 Digital resources
  3.4 Evaluating digital resources

4 Process implementation
  4.1 The typical process
  4.2 Instantiation artefacts
    4.2.1 UiPath
    4.2.2 Blue Prism
    4.2.3 Automation Anywhere

5 Evaluative model
  5.1 Creating the evaluative model
    5.1.1 Compatibility
    5.1.2 Usability
    5.1.3 Reliability
    5.1.4 Security
    5.1.5 Maintainability
    5.1.6 Portability
  5.2 Evaluative questions summarised

6 Evaluation
  6.1 Evaluation of the RPA-instantiations
    6.1.1 UiPath
    6.1.2 Blue Prism
    6.1.3 Automation Anywhere
    6.1.4 Comparative tables of the evaluation
  6.2 Evaluation of the evaluative model
    6.2.1 Analysis and evaluation of RPA-tools as digital resources
    6.2.2 Design science research evaluation

7 Discussion
  7.1 Summary of what was learned
  7.2 Limitations
  7.3 Areas requiring further work

8 Conclusion


List of Figures

1  ISO/IEC 25010: Quality models (Estdale and Georgiadou 2018)
2  The typical process being implemented
3  UiPath: Main process
4  UiPath: Send email to each address
5  UiPath: Read Responses and Mark off based on response
6  UiPath: Check who did not respond: first for each-loop
7  UiPath: Check who did not respond: second for each-loop
8  UiPath: Send reminders
9  Blue Prism: Extract and Store Excel sheet
10 Blue Prism: Send Email action
11 Blue Prism: Read and Store Emails action
12 Blue Prism: Loop to Remove certain rows
13 Blue Prism: Properties of the Decision Node
14 Blue Prism: Remove Columns from Collection
15 Blue Prism: Open Workbook and Write from Collection
16 Blue Prism: Action: Write Collection to Workbook
17 Blue Prism: Remove rows with common e-mail address
18 Blue Prism: Compare E-mail address column of two Collections
19 Automation Anywhere: Send e-mails and read e-mails
20 Automation Anywhere: Setup for 'Email Connect'
21 Automation Anywhere: Read emails and mark off
22 Automation Anywhere: Check whom to remind: outer loop
23 Automation Anywhere: Check whom to remind: inner loop

List of Tables

1 Nine dimensions of digital resources (Goldkuhl and Röstlinger 2019)
2 Quality ideals for digital resources divided into nine dimensions (Goldkuhl and Röstlinger 2019)
3 Evaluation of RPA-tools compatibility
4 Evaluation of RPA-tools usability, part 1
5 Evaluation of RPA-tools usability, part 2
6 Evaluation of RPA-tools reliability
7 Evaluation of RPA-tools security
8 Evaluation of RPA-tools maintainability
9 Evaluation of RPA-tools portability


1 Introduction

In this section, a background of Robotic Process Automation (RPA) as a subject is presented.

Following the background is a literature review presenting studies of RPA-tool implementations. The researched problem area and the research purpose, including the research contributions and research questions, are then described, as is the lack of a theoretical framework specific to RPA-tools. Finally, the delimitations of the research are presented.

1.1 Background

Automation is a solution where a machine performs a task. When a work task or workflow is automated, this means that a machine or technology performs something previously performed by a person. A crucial question posed is what should be automated and what should be done by a person. Repetitive, time-consuming work processes that require neither intuition nor creativity can be performed by a machine that mimics the input a person would make, and with greater efficiency. Automation eliminates human errors, something that is otherwise a risk when humans perform processes. One way to implement automation is Robotic Process Automation or RPA. (Aguirre and Rodriguez 2017)

RPA-tools are a way to perform if-, then- and else-statements on structured data; RPA-tools mainly achieve this by interacting with applications in a similar way to how a person would, i.e. through the user interface. An Application Programming Interface (API) can also be used together with RPA; in practice, it is usually a combination of these two approaches. RPA-tools work by (1) mapping the processes to be implemented and how the processes are carried out and then (2) performing the process independently. The purpose of RPA-tools is to automate simple and repetitive processes. (Aalst, Bichler, and Heinzl 2018; Radke, Dang, and Tan 2020) It is important to note that RPA is not a physical robot but a form of computer-based program (Aguirre and Rodriguez 2017).

To exemplify the effectiveness of what RPA-tools can achieve, there is a previous case study at Telefónica O2 that used an RPA-solution to streamline its business operations. The company started with 20 robots and later expanded to 75 robots. This implementation was carried out by three RPA-specialised employees who used RPA to automate fifteen main processes. These fifteen processes represented 35 per cent of all back-office operations. Some of the processes that were automated were: SIM swaps, credit checks, order processing, customer assignment, unlatching, porting, ID generation, customer conflict resolution and customer data updates.

(Willcocks, Lacity, and Craig 2015a) Examples of other uses for RPA are to

• monitor specific events (received e-mails or documents stored in a folder on your computer);

• read and extract data from files (electronic spreadsheets, PDFs or e-mails);

• perform checks on data according to certain specifications (VAT, price, et cetera);

• securely log into one or more programs;

• create documents in the organisation’s system;

• make decisions based on predefined conditions (e.g. if e-mail attachments are not in an accepted format, reply to the sender with a request for new files in the correct format);

• send confirmations (e-mails, messages, logs, et cetera). (Willcocks, Lacity, and Craig 2015b; Moffitt, Rozario, and Vasarhelyi 2018; Radke, Dang, and Tan 2020)

A significant advantage of RPA is that the tool can be quickly configured to perform the tasks of users without incurring any high costs. In general, RPA does not require technical know-how and can be configured by anyone without programming knowledge. However, according to Willcocks, Lacity, and Craig (2015a), it is advantageous for the IT department to be involved in the design and implementation. The IT department must participate to ensure consistency with IT governance, security, IT architecture and IT infrastructure. RPA-tools are implemented on top of existing systems, and thus there is no need to develop a whole new platform when implementing RPA-solutions. (Anagnoste 2017)

RPA-tools can lead to lower costs for companies by automating repetitive processes. RPA is a cheaper way to link legacy systems with one another than changing them or developing a new information infrastructure where the interaction is seamless. RPA is also a faster way to achieve a higher return on investment than the traditional approach of improving the information systems themselves. (Aalst, Bichler, and Heinzl 2018; Willcocks, Lacity, and Craig 2015b)

1.2 Literature review

RPA is a new subject in the scientific domain; some of the first research about RPA appeared in 2015. Much of the previous research regarding RPA consists of case studies of organisations that have implemented RPA-tools. The primary focus areas of the previous studies are (1) what has been automated and (2) what advantages or disadvantages the organisation faced after the implementation. As previously mentioned, Willcocks, Lacity, and Craig (2015a) conducted a case study at Telefónica O2 where they studied the implementation of RPA-robots. The study focuses on evaluating whether a process should or should not be automated and provides five guidelines for implementers.

Another example is a case study at a business process outsourcing (BPO) company located in Bogota, Colombia, conducted by Aguirre and Rodriguez (2017). The case study describes a task that has been automated using an RPA-tool, and the researchers wanted to compare the RPA-tool with human workers. The comparison was conducted by dividing the workers into two groups. The first group used RPA-solutions, and the second group used solely human workers. The results of the study showed that RPA-tools boosted productivity by 21 per cent but were only 2 per cent faster than humans. The limited speed increase of the RPA implementation could be because some workers were very skilled and faster than the RPA-tools; however, the RPA-tools could do several tasks at once.

A case study conducted by Fernandez and Aman (2018) researched the organisational and individual impact of implementing an RPA-solution in Global Accounting Services at one of the largest global business services firms. The case study focuses on the human experience of implementing RPA-solutions and highlights the importance of collaboration between RPA and human operators. Implementing the RPA-solution boosted work quality and accuracy, which led to saving accountants' time, which they could then spend on more challenging tasks. The article describes that implementing RPA-solutions did not necessarily lead to increasing unemployment; instead, RPA-solutions managed to create new job roles and change existing ones. The study further describes the need for accountants to have IT knowledge as technology advances and changes every year, which is highlighted by the implementation of RPA.

In a study conducted by Moffitt, Rozario, and Vasarhelyi (2018), the authors apply RPA as a concept to the auditing business. The authors claim that there is little to no research in the area of RPA and that leading firms tend to focus on the field of Artificial Intelligence (AI) rather than RPA. The authors make a case for implementing RPA solutions in the auditing field of business based on RPA being successfully implemented in other fields of business. Repetitive processes within auditing that could be automated, as well as requirements for making automation possible, are presented. The conclusions reached are that RPA solutions could benefit accounting businesses mostly by reducing time spent within highly repetitive processes and by being able to perform auditing tasks error-free. The authors suggest RPA-related research areas and future research issues for further investigation of RPA-tools, and pose the following questions for future research: (1) 'which of the RPA-tools are most promising?' and (2) 'how should RPA-tools be evaluated?' (ibid.).

1.3 Problem Description/Area

Automation, and particularly RPA, is currently at the forefront of the IT industry, and many are looking for modern RPA-solutions to manage their problems and streamline their operations.

IT consulting companies are increasingly focused on solving their customers' problems with the help of RPA-tools (Willcocks, Lacity, and Craig 2015b).

None of the studies in the literature review evaluated the RPA-tools that were used for the process implementation. Previous studies put the focus on comparing the effectiveness and efficiency of business processes and their result before and after implementing RPA solutions, without evaluating the tools used to implement the processes. The focus of previous studies lies in the impact of RPA in an organisational context; this points to a gap of knowledge and research on the subject of evaluating the RPA-tools themselves.

For such a highly topical subject, research should exist regarding the implementation of the RPA-tools. There is a lack of scientifically based comparative studies of RPA-tools to support choosing which RPA-tool suits the intended automation. The literature review shows that there is an evident lack of literature regarding the implementation and comparison of RPA-tools, which makes further research attractive and desirable not only for a small group of researchers.

As mentioned previously, Moffitt, Rozario, and Vasarhelyi (2018) suggest RPA-related research areas and future research issues for further investigation of RPA-tools. In these research areas, the authors address specific questions about which RPA-tools are the most promising and how RPA-tools should be evaluated.

What initiated this project is a request from an IT consulting company. The company has not previously used and does not currently use RPA-solutions, but has an interest in further exploring the area to help its customers. The company's interest in the subject also reaffirms the lack of information and knowledge regarding the evaluation and comparison of RPA-tools in the business sector.


While previous research has been conducted regarding the organisational impact of implementing RPA solutions, there is a clear gap in researching the tools which are used for the implementation. The lack of research could point either to the choice of RPA-tool being of low priority due to the tools' potential similarity, or to a need for such research. A theoretical framework, based on scientific research, for the specific evaluation of RPA-tools has not yet been created, and the basis for creating such a framework must be methods for evaluating other software or systems.

This research considers the ISO/IEC (International Organisation for Standardisation/International Electrotechnical Commission) 25000 series of standards to be the most applicable when evaluating software without a clear evaluative framework. The ISO/IEC 25000 series of standards creates a framework for software and system evaluation focusing on quality in use and product quality (ISO Central Secretary 2014).

1.4 Purpose

By studying the market-leading tools within RPA, the purpose is to fill the lack of literature by providing an understanding of what differentiates the RPA-tools from each other and how acquirers can evaluate RPA-tools. This research creates an evaluative model, working as a proof of concept for comparing RPA-tools, in order to achieve the purpose of evaluating and comparing RPA-tools. The evaluative model is based on the quality models of the ISO/IEC 25010 standard, which will be studied in the implementation of a typical process that represents a common application area. Evaluative questions are created based on the ISO/IEC 25010 standard while researching two areas: (1) official documentation of the RPA-tools and (2) implementation of each RPA-tool.

In order to analyse and validate the evaluative model, Goldkuhl and Röstlinger's (2019) conceptual framework of digital resources is used by applying and comparing the evaluation of digital resources with the evaluative model. Because RPA-tools can be seen as digital resources, the evaluative model must be shown to cover the different facets of digital resource evaluation.

The results of the research will provide knowledge of (1) how RPA-tools can be implemented and (2) the differences that exist between the three market-leading RPA-tools. The research contributes in the form of a method for evaluating the RPA-tools. The knowledge contributions that this study will produce are (1) prescriptive information on implementation with the three market-leading RPA-tools, and (2) an evaluative model for evaluating and comparing the most relevant characteristics of different RPA-tools.


1.5 Research question

• How can RPA-tools be evaluated and compared, by appropriating traditional characteristics for evaluating software and systems?

In order to answer the research question, the following questions are researched when implementing the RPA-tools:

– How can the market-leading RPA-tools be implemented in a typical situation that represents a common area of use?

– Which of the market-leading RPA-tools is preferred in a typical situation that represents a common area of use?

1.6 Delimitations

When examining and comparing RPA-tools, the research will not consider any tools other than UiPath, Automation Anywhere and Blue Prism because these tools are market-leading RPA-tools (Anagnoste 2017), and are the most relevant RPA-tools to the client. The research will not examine how RPA-implementations affect an organisation or its individuals, but only how the three RPA-tools are suitable for a typical situation that describes a common area of use.

This research will draw general conclusions, based on the characteristics of the ISO/IEC 25010 standard, regarding the tools from the researched implementations. Attributes which depend on a specific organisational context or evaluate a specific process have not been selected for evaluation by this research.

When implementing the process, this research uses solutions that require the least amount of programming experience; as such, the method for analysis does not go in-depth regarding the extended possibilities when writing code within the RPA-tools.

2 Research Approach and Methodology

In this section, the research approach is presented, describing the design science research strategy and how design science research is performed in this study. The research method is presented, followed by a critical examination of the method used.

2.1 Design Science Research

The research strategy is design science research, where a method for the implementation of the same process is designed and used to evaluate the different RPA-tools. Based on the ISO/IEC 25010 standard, the criteria for evaluating the tools are developed inductively during the implementation of the IT artefacts and by studying the official documentation of the RPA-tools.

The purpose of design science research in information systems is to create a purposeful IT artefact to address crucial organisational problems. The artefact must describe the implementation and the application so the implementer can use it in the correct domain. (Hevner et al. 2004)


Design science research is differentiated from regular design and creation by focusing on uncertain areas. By performing novel and risk-taking design, researchers can claim that they are performing actual research rather than regular design and creation as practised in the industry. (Oates 2006)

Hevner et al. (2004) describe design science research as a problem-solving process and have derived seven guidelines for design science in information systems research.

1. Design as an artefact: in design science research, artefacts are rarely full-grown information systems that will be used by organisations in practice. Potential implementers should interpret the artefacts as a proof of concept of the ideas, practices, technical capabilities and products that the information system should become. The proof of concept provides an understanding of how to effectively and efficiently analyse, design, implement and use information systems.

2. Problem relevance: information systems research aims to acquire knowledge and understanding of how the development and implementation of technology-based solutions can solve essential business problems. A design science research project provides relevance by constructing innovative artefacts which provide an understanding of how a specific problem can be solved.

3. Design evaluation: the evaluation of a designed artefact is based on validity, utility, quality and efficacy. The evaluation is based on the requirements of the business environment for which the artefact is designed. The evaluation is also based on how the artefact can be integrated within the business environment's technical infrastructure. Functionality, completeness, consistency, accuracy, performance, reliability, usability and fit with the organisation are attributes that can be used by implementers to evaluate IT artefacts.

4. Research contributions: for design science research to be useful, it must contribute in at least one of three ways:

• The design artefact itself can be the contribution by being the solution to a problem. The artefact contributes by extending the knowledge base or applying existing knowledge in new ways.

• Extending and improving existing foundations in the design science knowledge base. For example, adding to an already existing model.

• Through methodology, by developing and using evaluation methods (e.g. experimental, analytical, observational, testing and descriptive) and new evaluation metrics. Measures and evaluation metrics are a crucial part of design science research.

The designed artefacts need to accurately represent the environments (e.g. business environment or technological environment) for which they are designed in order to allow evaluation of the contribution. The artefacts must have the ability to be implemented, pointing to the importance of instantiations. The instantiations prove the artefacts' ability to be implemented within the intended environment.

5. Research rigour: the designed artefact must be constructed and evaluated using rigorous methods. The level of rigour is measured by the effective use of the knowledge base.

A successful design science research project includes selecting appropriate development techniques, constructing a theory or artefact and selecting appropriate ways of evaluating the artefact.


6. Design as a search process: the design process can be described as a search process to find a solution to a problem. In design science research, it is common to simplify a problem by only representing it with suitable means, goals and rules. The researcher can also simplify the problem by dividing it into smaller sub-problems. Sometimes the simplifications can be unrealistic, but they provide a starting point.

7. Communication of research: the research needs to be communicated with the targeted audience in mind. A technology-oriented audience requires a presentation detailing how the researcher constructs the artefact and how implementers can use the artefact within the intended organisational context. A management-oriented audience requires a presentation providing the detail needed for determining whether the artefact is worth implementing within an organisational context. (Hevner et al. 2004)

The developed artefact must be evaluated in order to perform design science research correctly.

Evaluating design science research is not only a question of whether the developed artefact works, but also a question of how and why it works (Pries-Heje, Baskerville, and Venable 2008). The evaluation criteria depend on the purpose of the artefact. The artefact can be evaluated as a proof of concept instead of being evaluated in a real-life context. By creating a proof of concept, the design solution can be shown to have specific properties that behave in a particular way under certain conditions. Conclusions can be drawn from the proof of concept without evaluating the artefact in a real-life context. (Oates 2006)

2.2 Research approach

This research develops two different types of artefacts. (1) The first type of artefact is instantiation. Three instantiations, which automate the same process, are designed by this research; one instantiation is created using each RPA-tool. The process has been developed in collaboration with the client to find a typical process from which general conclusions can be drawn. (2) The second type of artefact is an evaluative model for evaluation of the RPA-tools, according to the criteria corresponding to the questions created by researching the RPA-tools based on the ISO/IEC 25010 standard. The evaluative model will be derived from the implementation of the RPA-tools. The research implements the following process:

1. Spreadsheet with e-mail addresses is retrieved.

2. E-mails with attached spreadsheet are sent to those responsible.

3. Waits for e-mail responses and marks off based on the response.

4. Sends an e-mail reminder to those who did not respond.

5. Compiles a list of those who need manual application.

Most businesses make use of electronic spreadsheets, through software such as Microsoft Excel and Google Sheets. E-mail is a form of communication used by most businesses and users. RPA-solutions are typically not used for very complex processes; a simple process implementing spreadsheet and e-mail functionality is thereby concluded to be a typical process.

The purpose of the evaluative model is to be used by acquirers as a method for evaluating RPA-tools; the main research contribution of this research is thereby through a methodology. The evaluative model is a proof of concept, the utility of which is proven through applying the model when evaluating the three researched RPA-tools. While the use of RPA-tools is increasing, there is still a lack of such research. The tools Automation Anywhere, Blue Prism and UiPath have been selected as they are the three market-leading RPA-tools (Anagnoste 2017).

The attributes of relevance for this research are based on the ability to be applied within any organisational context. Accuracy and fit with the organisation are not evaluated in this research because these attributes are dependent on the organisational context, which is not the primary focus of this research. Performance is not evaluated because measuring performance in the development environment is not representative of the performance within every operational environment, according to Estdale and Georgiadou (2018). Consistency is not measured because only one implementation, performed by one implementer, within each tool is studied, and thereby, no measurement of consistency between multiple implementations can be made.

Including attributes which depend on an organisational context would limit the application areas of the evaluative model. When using the evaluative model, the acquirer can evaluate and compare RPA-tools in a way that allows the acquirer to decide which tool is appropriate for the intended business environment.

The three instantiation artefacts provide validity and utility through their ability to be evaluated based on the characteristics of the ISO/IEC 25010 standard, and thereby work as a basis for developing the evaluative model. The instantiation artefacts provide quality and efficacy by implementing a simple process that is easy to conclude from, while still being relevant in representing a common area of use. The evaluative model's validity is proven through the application of the model when evaluating the three RPA-tools. The evaluative model's validity is further proven by connecting the model to a theoretical concept of digital resources and their evaluation. The evaluative model provides utility by filling the gap of being able to evaluate and compare RPA-tools. The model provides quality and efficacy through being based on official documentation and data from the implementation of the instantiation artefacts.

Rigorous methods are used through:

• Basing the evaluative model on the established quality model in the ISO/IEC 25010 stan- dard.

• Appropriating the quality model to fit RPA-tool evaluation, with the purpose of fitting any organisational context.

• Performing the appropriation of the quality model based on knowledge gained from the official documentation of each RPA-tool and through process implementation within each RPA-tool.

• Validating the evaluative model by applying Goldkuhl and Röstlinger's (2019) concept of digital resources to the model's evaluation of RPA-tools.

Research rigour is, thereby, reached through effective use of the knowledge gained from the ISO/IEC 25010 standard, the RPA-tool implementations, the RPA-tool documentation and model validation through the application of the theoretical concept of digital resources.

The problem of evaluating RPA-tools is simplified in this research by only implementing a simple typical process representing a common area of use. Rigorous methods are used by basing the evaluation on the ISO/IEC 25010 standard, and applying the standard when selecting relevant characteristics for evaluation of RPA-tools. The relevance of the characteristics is determined by studying the implemented artefacts and by studying official documentation of the RPA-tools.


This research describes the instantiations in detail in order to explain how the evaluative model is derived. The instantiations are also described to understand how the model can be applied when choosing the appropriate tool in an organisational context. The instantiation artefacts are communicated through a technology-oriented presentation because it provides details on how to configure the implementation. The evaluative model is management-oriented because it provides details on how to perform the comparison when deciding which RPA-tool to implement within an organisational context.

2.3 Research method

In this section, the methods for data collection and data analysis are presented. The data collection is performed through observations and document analysis, and a qualitative method is used to analyse the collected data.

2.3.1 Data collection method

The research project collects data through observations and document analysis. Observations of the tools are made during the implementation of the process, regarding the characteristics based on the ISO/IEC 25010 standard. Document analysis is performed by studying the official documentation of the RPA-tools. The implementations are performed by the observer, which means that the observations are of a participatory form. When conducting participant observations, the observer must pay close attention to and document relevant findings. Participating observers should also reflect on how their participation affects the observed process. (Bowen et al. 2009; S. L. Schensul, J. J. Schensul, and LeCompte 1999)

The choice of method for data collection is justified by the need for detailed and comparable data on the entire process of the three implementations. The focus areas of the observations are based on the quality model for product quality in the ISO/IEC 25010 standard. The entire process of the three implementations is documented before the data analysis.

Two data collection methods are used in this research to study RPA-tools. Using multiple methods provides a way to confirm the data from multiple sources. Both document analysis and observations generate data that can be compared and confirmed by the other. Document analysis is used to collect data which is not provided by the observations of the implementations and to confirm the results of the observations further. (Carter et al. 2014)

2.3.2 Analysis method

The data analysis has an inductive approach, as evaluative questions are created based on applying the ISO/IEC 25010 standard to the evaluation of RPA-tools. The evaluative model is created through open/goal-free evaluation (Goldkuhl and Röstlinger 2019) of the RPA-tools, as the criteria are established by studying the tools and their official documentation. The data analysis method is qualitative and is used to analyse qualitative data collected from studying the official documentation of each RPA-tool and from observations made during the implementation of each RPA-tool. By applying the standard to the evaluation of RPA-tools, some characteristics may be found less relevant than others in RPA-tool evaluation compared to evaluating other software or systems. The characteristics of the ISO/IEC 25010 quality model for product quality are appropriated or removed based on the relevance for evaluating RPA-tools outside of specific organisational contexts.


Differences between the RPA-tools are illustrated in tables, which are divided into categories.

The categories and the evaluative questions of each category are decided based on applying the ISO/IEC 25010 standard to RPA-tools during the creation of the evaluative model. The tables contain all tools and evaluative questions and describe how each tool corresponds to each evaluative question.

After creating the evaluative model, the validity of the evaluative criteria is reaffirmed by analysing the model based on Goldkuhl and Röstlinger's (2019) concept of digital resources.

Seeing RPA-tools as digital resources, the analysis based on the concept of digital resources aims to prove the evaluative model valid on a theoretical basis.

2.4 A critical examination of the method

Even though the implemented process represents a common area of use, basing the evaluative model on the implementation of only one process is a limitation that cannot be overlooked.

This research focuses on evaluating characteristics outside of an organisational context, which means that the evaluative model will not be fully applicable when evaluating RPA-tools for every specific organisational context.

3 Theory

In this section, the ISO/IEC 25000 series of standards and the ISO/IEC 25010 standard for evaluating software and systems are presented. This section describes the quality model for product quality, found in the ISO/IEC 25010 standard, in detail. Goldkuhl and Röstlinger's (ibid.) concept of digital resources is presented, followed by a description of digital resource evaluation.

3.1 ISO/IEC 25000 series

The ISO/IEC 25000 series of international standards is entitled 'Systems and software engineering – Systems and software Quality Requirements and Evaluation' and is used to evaluate systems or software quality based on two models, quality in use and product quality, defined in ISO/IEC 25010. Quality in use applies to evaluating the interaction of software in use within a specific context by specific users to achieve specific goals. Product quality applies to evaluating static properties of software and dynamic properties of the computer system, independent of context. (ISO Central Secretary 2014)

The goal when creating the ISO/IEC 25000 series of standards was to assist the development and acquisition of systems and software products with the specification and evaluation of quality requirements. Two main processes are covered to reach these goals: (1) software quality requirements specification and (2) software quality evaluation, which is supported by a system and software quality measurement process. (ibid.)


3.2 ISO/IEC 25010

The quality models found in ISO/IEC 25010 are composed of various characteristics; five characteristics define quality in use, and eight characteristics define product quality. The quality models are presented in Figure 1. (ISO Central Secretary 2011)

Figure 1: ISO/IEC 25010: Quality models (Estdale and Georgiadou 2018)

Product quality is of most interest to potential buyers or acquirers who want to get technically involved with the software or system. Product quality is divided into eight characteristics.

• Functional suitability, unlike the following seven characteristics, deals with suitability within an organisational context without specifying which specific context. The characteristic is divided into the sub-characteristics functional completeness, functional correctness and functional appropriateness. The characteristics are viewed by potential acquirers or buyers to assess whether the product fits their particular organisational needs.

• Performance efficiency measures the technical performance of the product, including time behaviour, resource utilisation and capacity. The relevance of such measures might, however, be questioned, as measures performed during development might not be taken in the same context (platform or environment) as will be used for potential acquirers.

• Compatibility describes how software or systems affect and are affected by other software or systems. Compatibility is divided into two sub-categories:

– Co-existence measures the degree to which a product can perform its required functions while sharing an environment and resources with other products, without a detrimental impact on any other products.


– Interoperability is measured by the degree of support for information exchange and the use of the exchanged information with other products or applications.

• Usability evaluates the human interaction with the product, and is divided into six sub-categories:

– Appropriate recognisability measures the degree to which users can recognise if the product is suited for their needs.

– Learnability is the capability of a software product to enable users to learn how to use the product.

– Operability describes how easily a software product or system is operated.

– User error protection describes what happens when a user error occurs and how the errors are handled within the software product.

– User interface aesthetics measures the degree to which a user interface is pleasing and satisfying to interact with.

– Accessibility measures the degree to which a product or system can be used by users with the broadest range of characteristics and capabilities.

• Reliability measures the degree to which a system or product performs specified functions under specified conditions. Reliability is divided into four sub-categories:

– Maturity measures the extent to which the product can be trusted to perform and work as it is supposed to.

– Availability measures the degree to which the software product is available, for example whether the service is reachable at all times.

– Fault tolerance measures the degree of a system or product operating as intended despite errors in hardware or software.

– Recoverability specifies how the product handles data recovery and re-establishing of the system state in case of an interruption or failure.

• Security measures the degree to which a product or system protects data so that users, other products or other systems have the appropriate degree of data access according to authorisation levels. The security characteristic is divided into five sub-categories:

– Confidentiality measures how the product ensures that data is only accessible to those authorised.

– Integrity measures the extent to which a product or system prevents unauthorised access or modification of data.

– Non-repudiation measures the degree to which actions and events can be proven to have occurred.

– Accountability measures the degree to which actions of a unique user, product or system can be traced back to the same unique user, product or system.

– Authenticity measures the degree to which the identity of a subject or resource can be proved to be the one claimed.


• Maintainability measures how well a product or system can be modified in order to improve it, correct it or adapt it to changes. Maintainability is divided into five sub-categories:

– Modularity is the degree to which a system is composed of discrete components such that a change to one component has minimal impact on other components.

– Reusability is measured in the possibilities of reusing an asset in more than one system or reusing an asset in building other assets.

– Analysability measures the ability to assess the impact of intended changes in a product or system, or the ability to diagnose a product for errors or flaws.

– Modifiability is the degree to which a product or system can be effectively and efficiently modified without degrading existing product quality.

– Testability is the degree of effectiveness and efficiency with which test criteria can be established for a system, product or component, and how effectively and efficiently tests can be performed to determine whether the test criteria are met.

• Portability measures how effectively and efficiently a system, product or component can be transferred from one hardware, software or other operational or usage environment to another. Portability is divided into three sub-categories:

– Adaptability measures how well a product or system can be adapted for different or evolving hardware, software or other operational or usage environments.

– Installability measures how well a product or system can be successfully installed and uninstalled within a specific environment.

– Replaceability measures how well a product can replace another specific product for the same purpose in the same operational environment.

(ISO Central Secretary 2011)
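Because the evaluative model in this research is derived by working through these characteristics one by one, it can be convenient to hold the taxonomy above in a small checklist data structure. The following Python sketch is purely illustrative and is not part of the thesis or the standard; the characteristic and sub-characteristic names are the ISO/IEC 25010 terms listed above, while the function and variable names are hypothetical.

# Illustrative checklist structure for the ISO/IEC 25010 product-quality model
# described above. The names mirror the characteristics and sub-characteristics
# listed in the text; everything else (function names, usage) is hypothetical.
PRODUCT_QUALITY_MODEL = {
    "functional suitability": ["functional completeness", "functional correctness",
                               "functional appropriateness"],
    "performance efficiency": ["time behaviour", "resource utilisation", "capacity"],
    "compatibility": ["co-existence", "interoperability"],
    "usability": ["appropriate recognisability", "learnability", "operability",
                  "user error protection", "user interface aesthetics", "accessibility"],
    "reliability": ["maturity", "availability", "fault tolerance", "recoverability"],
    "security": ["confidentiality", "integrity", "non-repudiation",
                 "accountability", "authenticity"],
    "maintainability": ["modularity", "reusability", "analysability",
                        "modifiability", "testability"],
    "portability": ["adaptability", "installability", "replaceability"],
}

def empty_evaluation_sheet(tools):
    """Return a blank sheet with one slot per tool, characteristic and
    sub-characteristic, to be filled in with observations during implementation."""
    return {tool: {characteristic: {sub: None for sub in subs}
                   for characteristic, subs in PRODUCT_QUALITY_MODEL.items()}
            for tool in tools}

sheet = empty_evaluation_sheet(["UiPath", "Blue Prism", "Automation Anywhere"])

Such a structure simply makes the scope of the evaluation explicit; the thesis itself later narrows it by removing characteristics that depend on a specific organisational context.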

3.3 Digital resources

Goldkuhl and Röstlinger (2019) use the term digital resource to describe all the different aspects of modern information technology. A digital resource is an integration of software resources and information resources, based on the necessary hardware resources. For a business, a digital resource works as an asset and aids in reaching business goals. A digital resource is the result of planned investments and, as an asset, it is economically generative. (ibid.)

Digital resources are used for communication and other handling and exchange of information. Digital resources are realised through technological resources, software and hardware, and can have relations to other digital resources through digital information exchange or other digitalised sharing. (ibid.)

'Digital resource' is a flexible term and can be used to describe many different aspects of modern information technology. Goldkuhl and Röstlinger (ibid.) structure digital resources into nine dimensions to cover the different facets of digital resources. The dimensions are illustrated in Table 1.


Dimension      Aspect

Relational     Actors in the digitised activity or business; organisers, users (information providers, information consumers)
Semantic       Digitised communication/information; business terms and concepts, information resources
Functional     Digital functionality; digitised business activities, digital services
Interactive    Digital meetings for users; interaction via an interface between digital resource and user
Normative      Goals and values operating the digitised activity or business
Regulative     Regulations operating the digitised activity or business
Economic       Investments/costs, benefits and assets in the digitised activity or business
Architectural  Digital landscapes; relations between digital resources, e.g. exchange, linking, sharing
Technical      Technologies (for software and hardware) and technical mechanisms used for the digital resource (e.g. transfer, storage, security)

Table 1: Nine dimensions of digital resources (Goldkuhl and Röstlinger 2019)

• The relational dimension describes the stakeholders which can be related to the digital resource, such as users interacting with the resource. Users can provide a digital resource with information or use the information provided by a digital resource.

• The semantic dimension describes the parlance and phraseology of the information related to the digital resource. Digitised information resources need to have a perceivable meaning for users of the resource. Semantics, in this case, refers to the concepts and terminology used by the digital resource.

• The functional dimension refers to the business processes and business activities which make use of and interact with the digital resource. Digital resources are frequently based on existing business processes and are introduced to assist in the execution of these processes. Regarding the functional dimension, digital resources perform communicative and other types of information handling functions.

• The interactive dimension refers to the way concepts and terms are organised and displayed, in useful ways, to the users of the digital resource. The way the users interact with the digital resource is through user interfaces and, thereby, the interactive dimension deals with how the users can interact with the digital resource through a user interface.

• The normative dimension refers to the goals and values which guide the digital resource.

The digitising of businesses needs to be based on the goals and values of the business in question. The digital resources are normatively managed and characterised in order to follow these goals and values.

• The regulative dimension refers to the regulations which guide the digital resource. Digital resources of businesses need to comply with regulations, such as standards and business agreements. The creation of IT-systems, and thereby digital resources, has to comply with rules and laws. The regulative dimension overlaps with the normative dimension, in the fact that rules can guide and stand as a ground for the goals and values of businesses.

• The economic dimension: digital resources constitute economic assets in businesses, which means that they contribute to the economic efficiency of the business incorporating the resource. Developing, acquiring and managing digital resources lead to costs, and digital resources lead to benefits for the business.

• The architectural dimension: digital resources are most often part of a system, interacting with other digital resources. Digital resources are parts of digital landscapes, which refers to being part of a network of multiple digital resources. The digital landscape refers to the digital resources within the network, and which relations exist between these digital resources.

• The technical dimension refers to the technologies utilised by digital resources. In order to manage, store, present and transfer information, software and hardware technologies need to be utilised. (Goldkuhl and Röstlinger 2019)

3.4 Evaluating digital resources

In order to perform a successful evaluation of a digital resource, the resource needs to be described in detail. After describing the digital resource, evaluative conclusions can be drawn.

The description of the digital resource can be used as a part of the documented evaluation.

There are multiple different strategies for evaluating digital resources. Three common evaluation strategies are open/goal-free evaluation, goal-based evaluation and theory-based evaluation. (ibid.)

Open/goal-free evaluation is performed without using pre-set evaluative criteria. Positive and negative aspects of the digital resource are studied, including potential problems and strengths.

These aspects can be identified by (1) studying the actual implementation of the digital resource, (2) communicating with relevant business actors or (3) studying documents which describe the digital resource. (ibid.)

Goal-based evaluation uses business goals as pre-set criteria for evaluation. These business goals need to be identified before the evaluation of the digital resource. After setting up the criteria, the digital resource is described and compared with the criteria. (ibid.)

Theory-based evaluation means that the digital resource is evaluated based on a theoretical framework. An example of such a theoretical framework is the concept of digital resources, according to Goldkuhl and Röstlinger (ibid.). In the case of digital resources, the evaluation can be performed by evaluating the resource's qualities based on the nine dimensions of digital resources. The connection of these qualities to each dimension is illustrated in Table 2.


Dimension      Digital resource quality

Relational     Organiser clarity/responsibility; target clarity (user properties), clarity in information source, availability/security for users
Semantic       Information quality, business-language compliance and clarity
Functional     Functional repertoire and service quality, business activity contributions, digital process integration
Interactive    Usability, availability
Normative      Normative compliance and clarity
Regulative     Regulative compliance and clarity
Economic       Cost/benefit effectiveness
Architectural  Interoperability
Technical      Robustness, technical efficiency and security

Table 2: Quality ideals for digital resources divided into nine dimensions (Goldkuhl and Röstlinger 2019)

4 Process implementation

This section begins by describing the typical process. Following the process description, the implementations of the process within the RPA-tools are described.

4.1 The typical process

The process that was implemented begins by reading a spreadsheet file containing e-mail addresses. An e-mail with an attached spreadsheet file is then sent to each e-mail address. After waiting for responses, unread e-mails are read and stored in two spreadsheet files based on response; the first file stores all responses, and the second file stores responses which need manual application. In this case, the manual application list represents instances when the respondent needs additional support which cannot be supplied by the RPA-solution. After storing the responses, a reminder is sent via e-mail to those who have not yet responded. After waiting for additional responses, unread e-mails are read and stored again. The process was first implemented using UiPath, followed by Blue Prism, and lastly Automation Anywhere. The typical process being implemented is illustrated in Figure 2.

Figure 2: The typical process being implemented
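As a tool-agnostic summary of the workflow above, the following Python sketch outlines the same steps in plain code. It is an illustration only and not taken from the thesis: the 'send' and 'fetch_unread' callables stand in for the e-mail activities of the RPA-tools, and the file names and the one-hour waits are invented placeholders.

import csv
import time

def read_addresses(path):
    """Step 1: read the spreadsheet (here a CSV-file) of e-mail addresses."""
    with open(path, newline="") as f:
        return [row[0] for row in csv.reader(f) if row]

def sort_responses(responses):
    """Step 3: split (address, body) pairs; replies containing 'Yes' are marked off,
    everything else goes to the manual-application list."""
    marked_off = [addr for addr, body in responses if "Yes" in body]
    manual = [(addr, body) for addr, body in responses if "Yes" not in body]
    return marked_off, manual

def run_typical_process(address_file, send, fetch_unread, wait_seconds=3600):
    """Outline of the typical process; 'send' and 'fetch_unread' are hypothetical
    callables standing in for the sending and reading of e-mails."""
    addresses = read_addresses(address_file)
    for address in addresses:
        send(address, reminder=False)               # Step 2: e-mail with attached sheet

    time.sleep(wait_seconds)                        # wait for responses
    marked_off, manual = sort_responses(fetch_unread())

    for address in addresses:                       # Step 4: remind non-responders
        if address not in marked_off:
            send(address, reminder=True)

    time.sleep(wait_seconds)                        # wait for the remaining responses
    late_marked_off, late_manual = sort_responses(fetch_unread())

    with open("responses.csv", "w", newline="") as f:
        csv.writer(f).writerows([[a] for a in marked_off + late_marked_off])
    with open("manual_application.csv", "w", newline="") as f:   # Step 5
        csv.writer(f).writerows(manual + late_manual)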

4.2 Instantiation artefacts

This section describes the implementation of the typical process within each RPA-tool. The section is divided into three parts, each of which describes the implementation within one of the RPA-tools.

4.2.1 UiPath

UiPath supports both local and cloud-based implementation. For this research, the process was implemented and run locally. No additional set-up other than installing the software itself was needed before implementing the process.

The implementation process began by creating the main process, in which all parts of the process were to be implemented. The main process, containing all subprocesses, is illustrated in Figure 3.


Figure 3: UiPath: Main process


A built-in function for reading a Comma-separated values file (CSV-file) was used to collect and save all e-mail addresses in order to allow the automation of sending the e-mails to multiple recipients. The e-mail addresses were extracted from a CSV-file and saved as a DataTable-variable. Within the main sequence, a for each-loop was used to loop through the DataTable-variable and send an e-mail, containing a CSV-file, to each e-mail address. The e-mail was sent by using the built-in function for sending e-mails through Simple Mail Transfer Protocol (SMTP). Other options for sending e-mails are Exchange, IBM Notes, Outlook and POP3. The sending of e-mails is illustrated in Figure 4.

Figure 4: UiPath: Send email to each address
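For comparison, the same step written in plain Python rather than with UiPath activities could look roughly like the sketch below. The SMTP host, port, credentials, file names and message text are placeholder assumptions, not values used in the thesis.

# Rough Python equivalent of the step above: read the addresses from a CSV-file
# and send each address an e-mail with the file attached over SMTP.
import csv
import smtplib
from email.message import EmailMessage

def send_to_all(csv_path="addresses.csv", host="smtp.example.com", port=587,
                user="robot@example.com", password="secret"):
    with open(csv_path, newline="") as f:
        addresses = [row[0] for row in csv.reader(f) if row]   # DataTable equivalent

    with open(csv_path, "rb") as f:
        attachment = f.read()

    with smtplib.SMTP(host, port) as smtp:
        smtp.starttls()
        smtp.login(user, password)
        for address in addresses:                              # 'for each' over the rows
            msg = EmailMessage()
            msg["From"], msg["To"], msg["Subject"] = user, address, "Please respond"
            msg.set_content("Please fill in the attached sheet and reply 'Yes' when done.")
            msg.add_attachment(attachment, maintype="text", subtype="csv",
                               filename="sheet.csv")
            smtp.send_message(msg)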

In order to read e-mails, the built-in function 'Get Internet Message Access Protocol (IMAP) Mail Messages' was used. Settings were selected for reading only unread messages and marking messages as read. The e-mails are saved in a universal variable that can later be used by other processes within the main process. A sequence was implemented to encapsulate the process of checking the contents of the e-mail responses. Within the sequence, a for each-loop was used to loop through all the received e-mails. An if-statement was used to check if the content of the e-mail contained the word 'Yes', which is used to mark off that no manual application is needed. The e-mail contents were retrieved from the universal variable containing all e-mail responses and the contents were saved by adding them as a DataRow in a DataTable-variable.

The e-mails with the response ’Yes’ were saved in a CSV-file containing the senders’ e-mail address. An else-statement was used to save all other e-mails, which need manual application.

The e-mails with a response other than 'Yes' were saved in a CSV-file containing the senders' e-mail addresses and the message contents. The sequence results in two CSV-files: (1) the first containing the e-mail addresses of the responders who do not need manual application, and (2) the second containing the e-mail addresses and e-mail contents of the responders who need manual application. The process of reading responses and marking off based on response is illustrated in Figure 5.


Figure 5: UiPath: Read Responses and Mark off based on response
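The equivalent logic in plain Python, rather than the UiPath IMAP activity, might look as follows; the server, credentials and output file names are placeholder assumptions, and plain-text, single-part replies are assumed for simplicity.

# Fetch unread messages over IMAP, check whether the body contains 'Yes', and
# write two CSV-files: all responses that are marked off, and those that need
# manual application. Assumes plain-text, single-part replies.
import csv
import imaplib
from email import message_from_bytes
from email.utils import parseaddr

def read_and_mark_off(host="imap.example.com", user="robot@example.com", password="secret"):
    responded, manual = [], []
    with imaplib.IMAP4_SSL(host) as imap:
        imap.login(user, password)
        imap.select("INBOX")
        _, data = imap.search(None, "UNSEEN")            # only unread messages
        for num in data[0].split():
            _, msg_data = imap.fetch(num, "(RFC822)")    # fetching also marks as read
            msg = message_from_bytes(msg_data[0][1])
            sender = parseaddr(msg.get("From", ""))[1]
            body = msg.get_payload(decode=True) or b""
            if b"Yes" in body:
                responded.append([sender])
            else:
                manual.append([sender, body.decode(errors="replace")])

    with open("responses.csv", "w", newline="") as f:
        csv.writer(f).writerows(responded)
    with open("manual_application.csv", "w", newline="") as f:
        csv.writer(f).writerows(manual)
    return responded, manual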

In order to send reminders, a DataTable-variable containing the e-mail addresses of those who had not responded was created. The e-mails were stored in the DataTable by looping through each e-mail in the CSV-file containing all e-mail addresses and all received e-mails (Figure 6). Within the for each-loop, a second for each-loop was used to check if the e-mail address existed within the DataTable of responses (Figure 7). A boolean-variable, by default set to 'False', was created to address whether or not the e-mail was found. If the e-mail was found in the DataTable of responses, the boolean was set to 'True'. When the inner for each-loop had concluded, a new DataTable-variable was created to store those who had not responded. If the boolean-variable was set to 'False', the e-mail address was stored in the new DataTable. After looping through all e-mails, the DataTable containing all e-mails that need reminders was saved in a CSV-file.


Figure 6: UiPath: Check who did not respond: first for each-loop

Figure 7: UiPath: Check who did not respond: second for each-loop
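The non-responder check in Figures 6 and 7 follows a nested-loop pattern with a boolean flag; the following sketch reproduces that pattern in plain Python. The function and variable names are illustrative assumptions rather than UiPath constructs.

def find_non_responders(all_addresses, responder_addresses):
    # Outer loop over every address in the full list (Figure 6).
    needs_reminder = []
    for address in all_addresses:
        found = False                           # the boolean-variable, default 'False'
        for responder in responder_addresses:   # inner loop over received responses (Figure 7)
            if address == responder:
                found = True
        if not found:                           # never matched: add to the reminder list
            needs_reminder.append(address)
    return needs_reminder

# Example usage:
print(find_non_responders(
    ["a@example.com", "b@example.com", "c@example.com"],
    ["b@example.com"],
))  # -> ['a@example.com', 'c@example.com']

A set difference (set(all_addresses) - set(responder_addresses)) would be the idiomatic shortcut in plain code, but the nested loops mirror the structure of the workflow as implemented.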


A new sequence was created to handle the sending of reminders (Figure 8). The CSV-file containing the e-mail addresses of those who had not responded was stored in a DataTable-variable. A for each-loop was used to loop through all e-mail addresses and send e-mail reminders through the built-in ’Send SMTP Mail Message’ function.

Figure 8: UiPath: Send reminders

Finally, time delays were added between the sending and reading of e-mails. The implemented time delays can be seen in Figure 3.

4.2.2 Blue Prism

When testing Blue Prism, the software was installed locally. The software is accompanied by an instance of an SQL-database, which Blue Prism requires in order to store data; a local SQL-database must therefore be available when accessing the RPA-tool itself. The user interface is based on placing nodes and arrows for all sub-processes and the connections between them. Even though the interface with its built-in functions is easy to understand, further functions installed through packages called Visual Business Objects (VBOs) are required for functionality such as interacting with Excel sheets and sending or reading e-mails. These packages are easy to install and are found within the locally installed folder structure.

The implementation process began by creating a new ’Object’ and editing the object through the Object Studio interface. Initially, there are ’Start’- and ’End’-nodes to which the process was connected. First, a VBO for handling Excel sheets was installed, which included the functions needed to interact with Excel sheets. Reading the Excel sheet is implemented by creating an action node for creating an instance with a ’handler’. The handler is then used to implement the action ’Open Workbook’, which reads and stores the Excel sheet in a data variable. An action called ’Get Worksheet’ is then implemented; this action extracts the data from the stored Excel sheet and saves it in a ’Collection’. Collections can be seen as generic lists, and are the variables used to store any data within Blue Prism. The data extracted from the Excel sheet is a collection of all e-mail addresses. The process of reading and storing the data from an Excel sheet can be seen in Figure 9. To the left are the actions performed in the process; to the right are the data items that will hold the data when the process is run.

Figure 9: Blue Prism: Extract and Store Excel sheet
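As an illustration of what these actions accomplish, the following sketch performs the same steps with the third-party Python package openpyxl (an assumption made purely for illustration; Blue Prism uses its Excel VBO, not Python). The file and sheet names are also assumptions. A workbook is opened, the worksheet is read, and the rows are stored in a list of dictionaries roughly corresponding to a Collection.

from openpyxl import load_workbook   # third-party package, assumed installed

def read_addresses(path, sheet="Sheet1"):
    # 'Open Workbook': load the Excel file.
    workbook = load_workbook(path, read_only=True)
    # 'Get Worksheet': take the rows of the chosen sheet.
    worksheet = workbook[sheet]
    rows = worksheet.iter_rows(values_only=True)
    header = next(rows)
    # Store the rows as a list of dictionaries, analogous to a Blue Prism Collection.
    return [dict(zip(header, row)) for row in rows]

addresses = read_addresses("addresses.xlsx")
for row in addresses:
    print(row)   # e.g. {'Email': 'a@example.com'}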

After storing all e-mails, a loop was implemented. When implementing a loop in Blue Prism, a collection which will be looped through is selected. For sending e-mails, a VBO for handling e-mails through Outlook was installed. An action for sending e-mails was implemented within the loop. The action extracts the e-mail address from the current row of the collection being looped and sends an e-mail to each address. The structure used in the ’Send Email’ action is illustrated in Figure 10.


Figure 10: Blue Prism: Send Email action

After the e-mails have been sent, an action for reading e-mails was implemented. The action uses the inbox of a locally logged-in Outlook account and stores the e-mails in a collection. The action can be set to read e-mails that are read, unread or both. The collection of e-mails is split into columns containing, among others, ’SenderEmailAddress’ and ’Body’. The properties of the action can be seen in Figure 11.


Figure 11: Blue Prism: Read and Store Emails action

After creating a collection of all received e-mails, the collection was copied to another collection. The new collection was looped through in order to create a collection of e-mail addresses which need manual application. Within the loop, a decision node was implemented with a condition checking the contents of the e-mail body. When the body contained the string ’Yes’, the e-mail was removed from the collection; this resulted in a collection containing only the responders who need manual application. The creation of a new collection and the loop removing specific rows is illustrated in Figure 12. The properties of the decision node with the condition are illustrated in Figure 13.

Figure 12: Blue Prism: Loop to Remove certain rows


Figure 13: Blue Prism: Properties of the Decision Node

The collection of responders who need manual application is then stored in a separate Excel sheet. Before writing the contents of the collection to an Excel sheet, actions were implemented to remove unnecessary columns. Blue Prism does not contain any built-in functions for writing only specific columns of a collection; therefore, actions to remove columns were implemented. Separate actions for removing each unnecessary column were implemented, as seen in Figure 14. The columns left in the collection were the e-mail address, the sender name and the body of the e-mail.


Figure 14: Blue Prism: Remove Columns from Collection

Actions for creating a new instance, opening a new Workbook (representing a new Excel sheet within Blue Prism) and writing the contents of the collection to the Workbook were implemented through the VBO for handling Excel sheets. An overview of the actions can be seen in Figure 15, and the properties for writing the collection to the Workbook can be seen in Figure 16.


Figure 15: Blue Prism: Open Workbook and Write from Collection

Figure 16: Blue Prism: Action: Write Collection to Workbook
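Taken together, the steps in Figures 12 to 16 amount to filtering a collection, dropping columns and writing the result to a new workbook. A condensed Python sketch of that chain is shown below; the column names, the file name and the use of openpyxl are assumptions for illustration and do not come from Blue Prism.

from openpyxl import Workbook   # third-party package, assumed installed

# A stand-in for the copied collection of received e-mails.
received = [
    {"SenderEmailAddress": "a@example.com", "SenderName": "A", "Body": "Yes", "Date": "2020-05-01"},
    {"SenderEmailAddress": "b@example.com", "SenderName": "B", "Body": "Need help", "Date": "2020-05-02"},
]

# Loop and decision node (Figures 12 and 13): keep only rows needing manual application.
manual = [row for row in received if "Yes" not in row["Body"]]

# Remove unnecessary columns (Figure 14), keeping address, sender name and body.
keep = ("SenderEmailAddress", "SenderName", "Body")
manual = [{key: row[key] for key in keep} for row in manual]

# Open a new workbook and write the collection to it (Figures 15 and 16).
workbook = Workbook()
worksheet = workbook.active
worksheet.append(list(keep))                       # header row
for row in manual:
    worksheet.append([row[key] for key in keep])
workbook.save("manual_application.xlsx")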


After compiling the Excel sheet of responders who need manual application, actions for sending the reminders were implemented. First, a loop going through the collection of all e-mail addresses was created; within this loop, a second loop was created, going through all received e-mails.

In the inner loop, a decision node was implemented, comparing the e-mail addresses of the two collections. When the e-mail address of the outer loop matched an e-mail address of the inner loop, the row was removed from the collection of all e-mail addresses; this resulted in a collection of only the e-mail addresses which had not responded. The two loops are illustrated in Figure 17. The properties of the decision node are illustrated in Figure 18.

Figure 17: Blue Prism: Remove rows with common e-mail address


Figure 18: Blue Prism: Compare E-mail address column of two Collections
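The removal-based approach in Figures 17 and 18 differs slightly from building a new list: rows are deleted from the collection of all addresses whenever a matching response is found. A small Python sketch of that approach, with assumed field names, is shown below.

# Stand-ins for the two collections; field names are assumptions.
all_addresses = [
    {"Email": "a@example.com"},
    {"Email": "b@example.com"},
    {"Email": "c@example.com"},
]
received = [{"SenderEmailAddress": "b@example.com"}]

remaining = list(all_addresses)          # work on a copy of the full address collection
for row in all_addresses:                # outer loop over all addresses (Figure 17)
    for response in received:            # inner loop over received e-mails
        if row["Email"] == response["SenderEmailAddress"]:   # decision node (Figure 18)
            remaining.remove(row)        # drop the row: this address has responded
            break

print([row["Email"] for row in remaining])   # -> ['a@example.com', 'c@example.com']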

Finally, loops for sending the reminders were implemented. The collection containing the e-mail addresses of those who had not yet responded was looped through, and within the loop, an action for sending e-mails was implemented in the same way as the initial sending of e-mails (Figure 9 and Figure 10). After the reminders have been sent, the responses are read as previously shown in Figure 11, through additional actions for reading and storing the received e-mails. Lastly, time delays were implemented between the sending and reading of e-mails.

4.2.3 Automation Anywhere

Automation Anywhere is a cloud-based platform, and the process was implemented through a web interface. Before implementing the process, certain files had to be downloaded in order to perform specific tasks locally.

The implementation process began by using a built-in function for opening a CSV-file through an ’Action’. The data from the CSV-file was iterated through by a loop. Within the loop, an action for sending e-mails via Outlook was implemented; e-mails were sent to each e-mail address in the CSV-file. After the e-mails are sent, an ’Action’ for starting an ’E-mail session’ was implemented, obtaining e-mails via IMAP. When specifying the e-mail account, built-in functionality for securely storing the login credentials was used. The process of sending and reading the e-mails is illustrated in Figure 19. The properties of the e-mail connection action are illustrated in Figure 20.


Figure 19: Automation Anywhere: Send e-mails and read e-mails


Figure 20: Automation Anywhere: Setup for ’Email Connect’

After implementing the sending and reading of e-mails, a loop was implemented going through all e-mails from the e-mail session, which was specified to read only unread messages. The e-mails are stored as a dictionary of strings. The built-in ’dictionary GET’-function was used to assign the e-mail message to a generic variable called ’emailBody’, through the key ’emailMessage’.

A second dictionary was created and assigned a generic variable with the default value ’Yes’. In an if-block, ’emailBody’ is then compared with the generic variable containing ’Yes’, as the software does not explicitly support functionality for comparing strings. If the body of the e-mail equals ’Yes’, it is stored in a CSV-file used for storing all responses. If the e-mail body does not equal ’Yes’, the e-mail is stored in a separate CSV-file containing the responses which need manual application, as well as in the CSV-file with all responses. The e-mails are stored using the built-in ’Log to file’-function, which is a generic function for adding data to files. The ’Log to file’-function is used because there is no built-in functionality for appending data to a CSV-file. The function has some drawbacks; because it is generic, the contents need to be manually formatted to conform to the format of the CSV-file. The loop is illustrated in Figure 21.


Figure 21: Automation Anywhere: Read emails and mark off
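The combination of the ’dictionary GET’, the comparison against a stored ’Yes’ and the generic ’Log to file’ action in the loop of Figure 21 can be illustrated with the following Python sketch. The dictionary keys and file names are assumptions, and the manual string formatting mimics the fact that ’Log to file’ does not format CSV rows automatically.

# Stand-ins for the e-mails of the e-mail session; keys are assumptions.
emails = [
    {"emailMessage": "Yes", "emailFrom": "a@example.com"},
    {"emailMessage": "I need assistance", "emailFrom": "b@example.com"},
]

expected = {"answer": "Yes"}   # the second dictionary holding the value 'Yes'

with open("all_responses.csv", "a", encoding="utf-8") as all_file, \
     open("manual_application.csv", "a", encoding="utf-8") as manual_file:
    for mail in emails:
        email_body = mail.get("emailMessage", "")   # 'dictionary GET' via the key 'emailMessage'
        sender = mail.get("emailFrom", "")
        # 'Log to file' is generic, so the CSV row has to be formatted by hand.
        line = f"{sender},{email_body}\n"
        all_file.write(line)                        # every response goes to the file of all responses
        if email_body != expected["answer"]:        # body does not equal 'Yes'
            manual_file.write(line)                 # also store in the manual application file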

The CSV-file of the received e-mails is then opened in order to loop through all e-mail addresses which have responded. The loop goes through all rows of this CSV-file and assigns each e-mail address to a ’Record’-variable. Within the loop, a second loop is implemented which goes through all rows of the CSV-file containing all e-mail addresses and stores them in a ’Record’-variable. In the inner loop, an if-statement is implemented which compares the e-mail address of the received e-mail with all e-mail addresses. If the e-mail address is found in both records, a boolean-variable called ’IfFound’ is created and set to the value ’True’, and a ’Break’ is implemented to exit the loop. In the outer loop, if the ’IfFound’-variable is set to ’False’ (meaning that no response was received from that e-mail address), the e-mail address is stored in a CSV-file using the built-in ’Log to file’-function. Finally, the ’IfFound’-variable is set to ’False’ before iterating through the loop again. The two for each-loops for storing all e-mail addresses which have not responded are illustrated in Figure 22 and Figure 23.


Figure 22: Automation Anywhere: Check whom to remind: outer loop


Figure 23: Automation Anywhere: Check whom to remind: inner loop


After storing the e-mail addresses which have not responded, the CSV-file is opened and looped through to send reminders via Outlook. The contents of the e-mail inbox are then obtained in the same way as initially, via IMAP. The e-mail addresses of the responses are once again compared with the contents of the CSV-file containing all e-mail addresses in order to store new responses. Lastly, time delays were added between the sending and reading of e-mails.

5 Evaluative model

In this section, the creation of the evaluative model is described based on the ISO/IEC 25010 standard, the official documentation of the RPA-tools and the implementations within each RPA-tool. The creation of the evaluative model is connected to the concept of digital resources. Finally, the evaluative questions created for the evaluative model are summarised.

5.1 Creating the evaluative model

In order to arrive at an evaluative model, the process implementations within the three tools, along with the official documentation, were analysed to connect the properties of RPA-tools with the ISO/IEC 25010 quality model for product quality. Based on the experience of implementing the process within the RPA-tools, and on connecting the implementations and documentation with the quality model, the evaluative questions making up the evaluative model were created.

The first aspect of the ISO/IEC 25010 quality model for product quality is functional suitability. Functional suitability is not deemed relevant in this research because it deals with suitability within an organisational context.

The second aspect of the ISO/IEC 25010 quality model for product quality is performance efficiency. As mentioned previously in section 3.2, performance efficiency measurements are rarely fully applicable according to Estdale and Georgiadou (2018). Because performance efficiency measurements are platform- or environment-dependent, they do not fit within a general evaluative model; therefore, performance efficiency is left out of the evaluation in this research.

5.1.1 Compatibility

RPA-solutions act on top of existing systems, in the same way as a user would. Because RPA-solutions run in the background and act as a user, co-existence with other software and systems will inherently work in most cases. The most relevant aspect of co-existence for RPA-tools is therefore whether other sources can also manipulate the files or software that are manipulated by the implemented processes, which could lead to inaccurate data. In order to make sure that the files or software are accurate for the process, the RPA-solution can be run in a contained environment, such as a computer or web server set up for the sole purpose of running the automated processes. Measuring the co-existence characteristic of RPA-tools is thus dependent on how the process is implemented and in which environment the process runs. Because co-existence is dependent on context, and because RPA-solutions can be run within a contained environment to avoid co-existence issues, it is not of high priority when evaluating RPA-tools.
