Real-time auto-test monitoring system

Fanny Blixt

Computer Science and Engineering, master's level 2021

Luleå University of Technology

Department of Computer Science, Electrical and Space Engineering


Abstract

At Marginalen Bank, there are several microservices containing endpoints that are covered by test automation. The documentation of which microservices and endpoints are covered by automated tests is currently maintained manually and has proven to contain mistakes. The documentation presents the test coverage for all microservices together and for every individual microservice. Marginalen Bank needs a way to automate this process with a system that can take care of the test coverage documentation and present the calculated data. Therefore, the purpose of this research is to find a way to create a real-time auto-test monitoring system that automatically detects and monitors microservices, endpoints, and test automation to document and present test automation coverage on a website. The system is required to run detections daily and update the documentation so that it stays accurate and regularly reflects any changes.

The implemented system that detects and documents the test automation coverage is called Test Autobahn. For the system to detect all microservices, a custom hosted service was implemented that registers microservices. Every microservice that has the custom hosted service installed and enabled registers itself with Test Autobahn when it is deployed on a server. For the system to detect all endpoints of each microservice, a custom middleware was implemented that exposes all endpoints of any microservice it is installed in. To let the microservices install these components and get registered, a NuGet package containing the custom hosted service and the custom middleware was created. To detect test automations, custom attribute models were created that are meant to be inserted into each test automation project. The custom attributes are placed on every test class and method within a project to mark which microservice and endpoint each automated test covers. The attributes of a project can be read through its assembly. To read the custom attributes within every test automation project, a console application called Test Autobahn Automation Detector (TAAD) was implemented. TAAD reads the assembly to detect the test automations and sends them to Test Autobahn, which couples the found test automations to the corresponding microservices and endpoints.

TAAD is installed and run in the build pipeline in Azure DevOps for each test automation project to register the test automations.

To detect changes and update the test coverage documentation daily, a Quartz.NET hosted service is used. With Quartz.NET implemented, Test Autobahn can execute a specified job on a schedule. Within the job, Test Autobahn detects microservices and endpoints and calculates the test automation coverage for the detection. The coverage calculated from the latest detection is presented on the webpage, both for all microservices together and for each individual microservice. According to the evaluations, the system seems to function as anticipated, and the documentation displays the expected data.



Acknowledgments

First, I would like to thank my supervisor Michael Almgren at Marginalen for giving me the opportunity to work on this project. I would also like to thank him for his support and guidance during this thesis.

I would also like to thank the team leader for the test team at Marginalen, Indranil Sinha, for forming this assignment and supporting me during the project.

Thanks to all the developers and system architects at Marginalen for being so supportive and patient during my thesis. Thanks for always answering my questions and being open to discussing my proposed ideas.

I would also like to thank my supervisor at Luleå University of Technology, Josef Hallberg, for the valuable support and feedback during the thesis.

Finally, I would like to thank my family for their support.



Contents

Chapter 1 – Introduction
1.1 Background
1.2 Motivation
1.3 Problem definition
1.4 Delimitations
1.5 Thesis structure

Chapter 2 – Related work
2.1 Swagger
2.1.1 OpenAPI Specification
2.1.2 Implementation of Swagger to web application
2.1.3 Related to detection of endpoints problem

Chapter 3 – Theory
3.1 Methods for detection of microservices
3.1.1 Detection of microservices from folder structure on server
3.1.1.1 Approaches for Test Autobahn Server Extension
3.1.1.2 Advantages and disadvantages with Test Autobahn Server Extension
3.1.2 Detection of microservices from URL structure
3.1.3 Implementation of background task with IHostedService
3.1.4 Motivation for selection of microservice detection method
3.2 Detection of endpoints
3.2.1 Detection with Swagger documentation
3.2.2 Custom middleware for endpoint detection
3.2.3 Motivation for selection of detection method
3.3 Detection of test automation
3.3.1 Executable console application
3.3.1.1 Creating a NuGet package in Azure DevOps pipeline
3.3.1.2 Including an EXE file in a NuGet package
3.3.1.3 Installing and executing the NuGet package from a test automation pipeline
3.3.1.4 Reading assembly in console application
3.3.2 Reading assembly with PowerShell
3.4 Connect and send the test automations to Test Autobahn
3.5 Scheduling runs on a daily basis with Quartz.NET

Chapter 4 – Implementation
4.1 System architecture
4.1.1 GUI
4.2 Test Autobahn
4.2.1 Code architecture
4.2.2 Database structure
4.2.3 NuGet package containing contract models
4.2.4 API
4.2.5 Scheduling detection run with Quartz.NET
4.3 Test Autobahn Microservice Detector
4.3.1 Installation and deployment of TAMD in a microservice
4.4 Test automation detection
4.4.1 Custom attributes for test automations
4.4.2 Creation of NuGet package containing EXE file
4.4.3 Pipeline task group
4.4.4 Implementation of test automation detection

Chapter 5 – Evaluation
5.1 Microservice detection
5.2 Endpoint detection
5.3 Test automation detection
5.4 Connection between endpoints and test automations
5.5 Scheduled detection job

Chapter 6 – Discussion
6.1 Detection of microservices in Test and UAT
6.2 Detection of the endpoints of the microservices
6.3 Detection and coupling of test automations
6.3.1 Custom attribute set up for coupling to microservices and endpoints
6.3.2 Reading the custom attributes from the assembly
6.4 Scheduled runs for detection
6.5 GUI
6.6 Extension possibilities for future processes
6.7 General remark of the implementation

Chapter 7 – Conclusions and future work

Chapter 8 – References


Chapter 1 – Introduction

1.1 Background

Marginalen Bank has one test team, whose primary responsibility is to ensure software quality for the microservices built by eight development teams. The test team has developed an efficient test automation process over the past years, mostly for the microservices that support backend functionality. The test automation tests the different microservices' endpoints, to verify that they are working as expected. Currently, there are over 60 microservices and the test team is working steadily towards full test coverage for all the microservices and their corresponding endpoints. The endpoints that currently exist are API endpoints and endpoints on service buses. There are currently two ways of presenting test automation coverage. The first is to present test automation coverage across all existing microservices: if a microservice has any test automation, it counts as covered. The second is to present test automation coverage for each individual microservice: for example, if a microservice has 40 endpoints and 30 of them have test automation, the test coverage for that microservice is 75%. The test coverage is currently calculated manually and shown in an Excel file, which is not updated regularly and is prone to mistakes. This situation could prove problematic since the documented test coverage could differ from the actual test coverage.

What Marginalen Bank lacks today is an automated process which:

• Keeps up with an increasing number of microservices and their numerous endpoints.

• Compares them with existing microservice test automation.

• Presents real-time data visually on a web page.

The vision is to have a real-time auto-test monitoring system that can take care of the calculations and present the documentation and statistics in a web application. The solution needs to be implemented in such a way that it can be extended to or used by new processes to come. The real-time auto-test monitoring system will not only present microservice test automation coverage but also cover any other Marginalen Bank daily process that is brought under test automation in the future.

1.2 Motivation

Currently, the test automation coverage is documented manually in an Excel file, which can be problematic in several ways. First and foremost, the documentation is not updated regularly, which makes it unreliable. The documented microservices, endpoints, and test automation may differ from what is actually deployed, which can lead to the documented test coverage being inaccurate. Miscommunication between the development teams and the test team can also easily lead to gaps in the documentation, for instance when microservices are added or updated. By automating the daily update of the test coverage documentation, the documentation will be more accurate. These documentation gaps would be prevented by automatically detecting all existing microservices and their corresponding endpoints.

The current way of manually producing the documentation is also time-consuming and involves a lot of repetitive work for the test team, which should be avoided. Therefore, a system that automatically produces documentation of the test coverage would be more time-efficient. With an associated web application, developers would also get an improved overview of all microservices and their endpoints, making it easy to see whether they are covered by test automation. This would help the test developers to easily identify which microservices and endpoints need additional test automation to obtain full test coverage.

1.3 Problem definition

All microservices deployed in the Test and UAT (User Acceptance Test) environments and their corresponding REST API endpoints should be found and recorded for the documentation. The automated tests that cover the microservices and their endpoints also need to be found to display the test coverage. Considering that the microservices and the test automations are developed and run separately, each test automation needs to be mapped to the corresponding microservice and endpoint. For the documentation to always be accurate, the system needs to automatically discover additional microservices, detect changes in already existing microservices, and discover when test automations are implemented, changed, or removed. A web interface should be designed and developed to display the documentation and statistics. Additionally, the system is required to be implemented so that it can be extended to and used by additional processes at Marginalen Bank in the future.

Different models and approaches for implementing this real-time auto-test monitoring system are identified and examined in this project. The final system is implemented in such a way that the developer effort required for it to work is minimized.

To find a suitable approach to implement the system, several sub-questions have been stipulated and investigated:

• Which method is most suitable to find all microservices in Test and UAT as automatically as possible?

• Which method is most suitable to find all REST endpoints for each found microservice as automatically as possible?

• Which method is most suitable to find all the test automations as automatically as possible?

• Which method is most suitable to connect the test automations to the corresponding endpoint?

• How will this system be set up to run every weekday?

Each sub-question has been examined and answered before the implementation was added to the real-time auto-test monitoring system. Since all the existing microservices at Marginalen are built in .NET, this real-time auto-test monitoring system is developed in .NET using Visual Studio 2019 and Visual Studio Code.

1.4 Delimitations

Some delimitations are made since there is not enough time to cover every aspect of the real-time auto-test monitoring system. Currently, Marginalen Bank's microservices have four different types of endpoints that need to be detected. The microservices use two types of API endpoints, REST and SOAP, and two types of service buses containing endpoints, Azure Service Bus and an on-premise service bus. The solution only covers the REST API endpoints, since that endpoint type is the most common and most of the implemented test automation covers it. Another wish for the implementation was to show the test automation pass rate for each microservice, but this is not included in the scope because of the time limit.

1.5 Thesis structure

In section 2 of this thesis, some related work is presented. It is followed by section 3, which covers the theoretical part of the project, containing comparisons and selection of the methods used for the implementation. In section 4, the implementation of the real-time auto-test monitoring system is presented. In section 5, evaluations of the project are presented, followed by the discussion in section 6. Lastly, the conclusions and suggestions for future work are presented in section 7.


Chapter 2 – Related work

At this moment, there is not much work related to this project. There are multiple API documentation services for REST APIs that display the endpoints of a microservice, but no solution has been found that automatically detects microservices, their corresponding endpoints, and the test automations, and connects them. There are some services, for example Splunk, that monitor microservices and analyze real-time data [1], but those services are unnecessarily complex and do not offer a method for connecting the microservices and endpoints to their corresponding test automations. Therefore, one example of a REST API documentation service, called Swagger, is presented in this section to give inspiration on how to detect the endpoints within a microservice.

2.1 Swagger

Swagger is a set of open-source tools built around the OpenAPI Specification that can be helpful to build, document, and consume REST APIs [2]. With Swagger, a developer can describe the structure of their API so that machines can read it. By reading the API's structure, Swagger can automatically build interactive API documentation [3]. Figure 2.1 shows an example of how Swagger API documentation can be presented.



Figure 2.1: An example of a Swagger API documentation for REST APIs [4].

2.1.1 OpenAPI Specification

The OpenAPI Specification, previously named the Swagger Specification, is an API description format for REST APIs. An OpenAPI file can describe a web application's entire API, including:

• All available endpoints

• Operation of each endpoint (GET, POST, DELETE, etc.)

• Operation input and output parameters for each endpoint

• Authentication methods

API specifications can be written in either JSON or YAML. The format can easily be read by both humans and machines [5].

2.1.2 Implementation of Swagger to web application

To implement Swagger in a .NET application, some NuGet packages need to be installed.

NuGet is the package manager that is used in .NET, which provides the ability to produce and consume packages [6]. The following NuGet packages need to be installed [7]:

• Swashbuckle.AspNetCore.Swagger: A Swagger object model and middleware to expose SwaggerDocument objects as JSON endpoints.


• Swashbuckle.AspNetCore.SwaggerGen: A Swagger generator that builds SwaggerDocument objects from models, routes, and controllers. It is combined with Swagger endpoint middleware to expose Swagger JSON automatically.

• Swashbuckle.AspNetCore.SwaggerUI: A version of the Swagger UI tool.

After installation, the Swagger generator needs to be added and the Swagger middleware needs to be enabled in Startup.cs.
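A minimal sketch of this setup, assuming the Swashbuckle packages listed above; the document title and version strings are example values:

using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.OpenApi.Models;

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddControllers();

        // Register the Swagger generator and describe the document it should produce.
        services.AddSwaggerGen(options =>
        {
            options.SwaggerDoc("v1", new OpenApiInfo { Title = "Example API", Version = "v1" });
        });
    }

    public void Configure(IApplicationBuilder app)
    {
        // Serve the generated swagger.json and the interactive Swagger UI.
        app.UseSwagger();
        app.UseSwaggerUI(options =>
        {
            options.SwaggerEndpoint("/swagger/v1/swagger.json", "Example API v1");
        });

        app.UseRouting();
        app.UseEndpoints(endpoints => endpoints.MapControllers());
    }
}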

2.1.3 Related to detection of endpoints problem

Swagger is related to the endpoint detection problem stated in 1.3, since Swagger uses a middleware to detect and expose all available REST endpoints that exist in a web application. With Swagger in mind, two different approaches can be used to solve the endpoint detection problem. At Marginalen Bank, most of the microservices use Swagger for their API documentation. One approach to detect all endpoints for each microservice is to read the Swagger documentation of each microservice. Another possible approach is to create a custom-made middleware, similar to Swagger, that exposes all the endpoints in a microservice.


Chapter 3 – Theory

In this section, different techniques are presented and evaluated for building this real-time auto-test monitoring system. The system that will be implemented to fulfill the requirements in 1.3 will be called Test Autobahn.

3.1 Methods for detection of microservices

The microservices are hosted on several Marginalen Bank web servers in different environments, i.e. Test, User Acceptance Test (UAT), and Production. The real-time auto-test monitoring system that will be developed will only concern itself with the microservices in Test and UAT. The microservices the system will detect are web-based applications, which means that each microservice has a defined URL path to its web application.

In this subsection, different methods to detect microservices will be presented. The different methods are investigated, and the most suitable method is selected for the final implementation.

3.1.1 Detection of microservices from folder structure on server

As mentioned in section 3.1, all microservices at Marginalen Bank are hosted on several different web servers. All the servers have the same folder structure, with the microservice web applications placed in a specific folder. In this folder, there are several subfolders.

Citrix ADC is used to decide the base URL for the microservices. Citrix ADC is a comprehensive application delivery and load balancing solution for microservice-based applications [8]. Through Citrix ADC, all the sites are published and load balanced. The base URL for each microservice is created from the specific folder structure on the server; in other words, the base URL for each microservice is decided by the folder structure on each server. With knowledge of the folder structure, the base URLs of the microservices can be detected.



3.1.1.1 Approaches for Test Autobahn Server Extension

To use this method, a service must be created that searches through the folder structure on each server of interest to find all the available microservices. This service needs to be placed on every server to have access to the microservices on each server. The service needs either to report to, or be called from, Test Autobahn. The service for this proposed solution is called Test Autobahn Server Extension (TASE) and could work in two different proposed ways. In both approaches, Test Autobahn collects data daily for the documentation.

Approach 1 (A1):

In the first approach, TASE has an endpoint that returns all the microservices on that server. TASE will, as described above, be placed on every server of interest. The TASE on each server registers once to Test Autobahn through a REST POST endpoint. Test Autobahn saves the name of the server and the URL to the REST endpoint that returns all the microservices for that server. Daily, Test Autobahn calls all the registered endpoints to get all available microservices. All found microservices are then added to the database if new and updated if already existing. Before Test Autobahn calls all the TASE endpoints, all microservices already existing in the database are marked as inactive. Then, all the found microservices are marked as active. In this way, all the microservices detected in the latest search are marked and stated in the documentation. This detection process is explained through a flow diagram in Figure 3.1.

Figure 3.1: Flow diagram of the microservice search process for Test Autobahn with Approach 1.


Approach 2 (A2):

In the second approach, TASE reports daily to Test Autobahn. TASE will, as in A1, be placed on every server of interest. In this approach, Test Autobahn has a REST endpoint that takes a server and a list of microservices as arguments. Each TASE will daily, through a scheduled timer, call this endpoint and send information about itself and all the microservices on the server. In this approach, no server would have to be registered manually; the registration would happen automatically when TASE calls the REST endpoint in Test Autobahn. All sent microservices would be added to the database if new and updated if already existing. When a TASE calls the Test Autobahn endpoint, all the microservices in the database for that server are marked inactive. Then all sent-in microservices are marked as active, to keep track of which microservices were found.

In this way, all the microservices detected in the latest call for that server will be marked and stated in the documentation. This registration process is explained through a flow diagram in Figure 3.2.

Figure 3.2: Flow diagram of the TASE microservice registration process for Approach 2.


Comparison of approaches:

A big difference between the approaches is that the registration of TASE is manual in A1 and automatic in A2. Since the goal is a solution that is as automatic as possible, A2 is the better choice in that aspect. Another important aspect to consider is how accurate the documentation is. In A1, the search for all the microservices is guaranteed to happen at the same time, and all found microservices that are marked active are truly active at that timestamp. In A2, the TASE itself sends all the microservices to Test Autobahn, and there is no guarantee that the reported information is accurate when Test Autobahn collects the data. In A1, it is guaranteed that the microservices were found at a specific timestamp, but in A2 it is not. For example, if a server is down and A2 is used, the TASE for that server will not report anything, but in the database it would seem like the server and its microservices are still active. However, this problem could be avoided by, for example, checking timestamps for the most recently reported data.

3.1.1.2 Advantages and disadvantages with Test Autobahn Server Extension

The advantage of this approach is that the developers do not have to register or include anything in the code for the microservices to be found. All the microservices would be found directly when added to the server. But since this approach is based on a certain folder structure, it would not remain accurate if the folder structure were to change in the future.

This could lead to a lot of maintenance being required for the service to function correctly or, in the worst-case scenario, to the service no longer functioning at all. Another disadvantage of this approach is that it is a lot of work to set up TASE on all the servers.

A consequence of having the service on multiple servers is that problems with load balancing can occur. Load balancing is the distribution of network and application traffic across multiple servers in a server farm [9]. When the same service exists on multiple servers in the same server farm, it is not possible to separate the services, and the same service on one server will always be found by default. To prevent this, the TASE services would need different names on the different servers to keep them separated.

3.1.2 Detection of microservices from URL structure

All microservices that the system will detect are, as mentioned in section 3.1, web-based applications with a defined URL path to the microservice's web application. As explained in 3.1.1, some of the microservices are on shared sites with the same base URL and some microservices have their own base URL. By collecting all the base URLs of interest, all the existing microservices can be detected. All microservices on the shared sites would be found by detecting sub-paths in the URL.


In Figure 3.3, it is shown how Test Autobahn would use this detection technique to find all available microservices. The process starts with Test Autobahn setting all microservices in the database to inactive. Then all the base URLs are fetched from the database. For each URL, all sub-paths to microservices are found. All found microservices are marked as active and added to or updated in the database.

Figure 3.3: Flow diagram for Test Autobahn to get all active microservices with detection through URL structure.

The advantage of this approach is that the developers do not need to add anything to their code for the microservices to be found. But for this approach to function, all the base URLs always need to be registered, and there is no guarantee that all the relevant base URLs are registered. The number of base URLs that currently exist is also quite large, which makes this approach less efficient.

3.1.3 Implementation of background task with IHostedService

Another approach to detect microservices is to implement a custom background task using IHostedService. In .NET, these types of tasks are called hosted services since they are services or logic hosted within the application or microservice [10]. In short, a hosted service is a class with background task logic. The purpose of implementing a hosted service here is to register each microservice from its own application. All the microservices would add this hosted service to their application and would thereby register to Test Autobahn through a REST POST request.

For this to function, all the microservices need to share the same hosted service. Therefore, a common hosted service would be created and extended by the microservices. Since all the microservices need to access the same hosted service, a NuGet package needs to be published. This solution implies that each service needs to install this NuGet package containing the custom hosted service and add the extension for it to its Startup class. A microservice with this hosted service installed and set up would automatically register to Test Autobahn when deployed on a server. The process from installation of the NuGet package containing the hosted service to registration to Test Autobahn is shown in the flow diagram in Figure 3.4.

Figure 3.4: Flow diagram of installation of Test Autobahn hosted service and endpoint registration.
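A minimal sketch of what such a registration hosted service and its Startup extension could look like; the class name, registration URL, and payload shape are illustrative assumptions rather than Test Autobahn's actual contract:

using System.Net.Http;
using System.Net.Http.Json;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;

public class TestAutobahnRegistrationService : IHostedService
{
    private readonly IHttpClientFactory _httpClientFactory;

    public TestAutobahnRegistrationService(IHttpClientFactory httpClientFactory)
    {
        _httpClientFactory = httpClientFactory;
    }

    public async Task StartAsync(CancellationToken cancellationToken)
    {
        // When the microservice starts, POST its name and base URL to Test Autobahn.
        var client = _httpClientFactory.CreateClient();
        var registration = new { Name = "Microservice1", BaseUrl = "https://test.example.local/microservice1" };
        await client.PostAsJsonAsync("https://testautobahn.example.local/api/microservices", registration, cancellationToken);
    }

    public Task StopAsync(CancellationToken cancellationToken) => Task.CompletedTask;
}

public static class TestAutobahnExtensions
{
    // The extension method a microservice would call from Startup.ConfigureServices.
    public static IServiceCollection AddTestAutobahnRegistration(this IServiceCollection services)
    {
        services.AddHttpClient();
        return services.AddHostedService<TestAutobahnRegistrationService>();
    }
}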

An advantage of this method is that it updates directly with each new deployment of a microservice on a server. It is also a stable method since it does not depend on a specific structure to function. The NuGet package can be made from a project file in Test Autobahn and does not require an additional service to function, unlike the solution in 3.1.1. This makes this solution smaller and more time-efficient to create and maintain. The disadvantage of this solution is that only the microservices that have this NuGet package installed and enabled will be detected, which can lead to misses in the documentation.

3.1.4 Motivation for selection of microservice detection method

The method with the IHostedService described in 3.1.3 will be the method of choice for the final implementation, since it is the most stable and needs the least maintenance. Even though the method requires that all microservices install the service, once installed the microservice will be monitored and nothing additional needs to be added. The method of reading the folder structure described in 3.1.1 is the most automatic, but it depends on a specific structure that may change in the future, which can easily lead to errors requiring large modifications of the service.

With the method using detection from the URL structure described in 3.1.2, large gaps in the documentation can occur since many URLs need to be registered manually, which can easily be forgotten. There is no existing collection of all the base URLs for Marginalen Bank's web-based microservices, and it would take much time to collect them and manually keep them up to date with any additions or changes.

3.2 Detection of endpoints

For each microservice, all endpoints need to be found. In this case, only the REST endpoints are taken into consideration because of the delimitations. In this subsection, different methods to detect REST endpoints will be presented. The different methods will be investigated, and the most suitable method will be selected for the final implementation.

3.2.1 Detection with Swagger documentation

Most of the microservices have Swagger documentation showing the available REST endpoints. One way to get all the endpoints of interest would be through the swagger.json file produced by Swagger. In the Swagger file, all endpoint paths for a microservice can be found. The body of the swagger.json file has a list called "paths" that holds all the REST endpoint paths. This list can be read to get all the necessary information about each endpoint.
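As an illustration, a small sketch of reading the "paths" list from a microservice's swagger.json using HttpClient and System.Text.Json; the swagger URL passed in is an example value:

using System;
using System.Net.Http;
using System.Text.Json;
using System.Threading.Tasks;

public static class SwaggerEndpointReader
{
    public static async Task PrintEndpointsAsync(string swaggerUrl)
    {
        using var http = new HttpClient();
        var json = await http.GetStringAsync(swaggerUrl);

        using var document = JsonDocument.Parse(json);
        // "paths" maps each endpoint path to its HTTP operations (get, post, ...).
        foreach (var path in document.RootElement.GetProperty("paths").EnumerateObject())
        {
            foreach (var operation in path.Value.EnumerateObject())
            {
                Console.WriteLine($"{operation.Name.ToUpperInvariant()} {path.Name}");
            }
        }
    }
}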

In Figure 3.5, the flow diagram for Test Autobahn to get all active endpoints is shown.

First, all the endpoints in the database are set to inactive before the detection. Then all the available microservices are fetched from the database. For each microservice found, Test Autobahn retrieves the Swagger documentation from its URL and reads the information about all endpoints. Each found endpoint is marked as active and added to or updated in the database, depending on whether the endpoint already existed.


Figure 3.5: Flow diagram for Test Autobahn to get all active endpoints with Swagger documentation.

An advantage of this approach is that the microservices do not need to add anything new to their code for it to function. But this approach requires that all the microservices use Swagger documentation, which is not always the case. This would mean that all the microservices currently missing Swagger would need to add it for this to function.

Another disadvantage of this approach is that it only solves the problem for REST and cannot be extended to other types of endpoints in the future.

3.2.2 Custom middleware for endpoint detection

A middleware is software that is assembled into an app pipeline to handle requests and responses [11]. Middleware is commonly used in .NET and there exist many built-in middleware components, but in some cases it is necessary to write a custom middleware.

As an approach to detect all REST endpoints of a microservice, a custom middleware can be implemented. This middleware can act as an endpoint, returning all the available endpoints of the microservice.

When a microservice is up and running, it would be preferable to be able to call the microservice and get all the endpoints at any time. Therefore, it would be advantageous to make the middleware callable via a URL. When the specific URL for the custom middleware is called, all the endpoints are sent as a response to the calling service. To set up the middleware and create a callable URL, the "IApplicationBuilder.Map" method can be used. The method branches the request pipeline based on matches of the requested path. If the requested path matches the specified path, the branch is executed [12]. In this case, if the requested path matches the path that the custom middleware is bound to, the custom middleware is executed and the microservice's endpoints are sent as a response. As in the solution with IHostedService in 3.1.3, this custom middleware would be created as a NuGet package that needs to be installed on every microservice and enabled in the Startup.cs file. In Figure 3.6, a flow diagram is shown of the process from the installation of the custom middleware to endpoints available for Test Autobahn to call.

Figure 3.6: Flow diagram from installation of the custom middleware to available endpoints to call for Test Autobahn.

As mentioned, the custom middleware has a specified URL for the endpoint that returns all the endpoints of a microservice. With all the microservices' base URLs registered in the database, Test Autobahn can call this specified URL for each of them to get and register all the active endpoints.
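A minimal sketch of such a middleware, assuming an ASP.NET Core 3.1 microservice; the route "/testautobahn/endpoints" and the use of EndpointDataSource are illustrative choices, not necessarily how the actual implementation works:

using System.Linq;
using System.Text.Json;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Routing;
using Microsoft.Extensions.DependencyInjection;

public static class EndpointExposureExtensions
{
    public static IApplicationBuilder UseTestAutobahnEndpointExposure(this IApplicationBuilder app)
    {
        // Branch the pipeline: requests to this path are answered with the endpoint list.
        return app.Map("/testautobahn/endpoints", branch =>
        {
            branch.Run(async context =>
            {
                // EndpointDataSource holds every endpoint registered through routing.
                var dataSource = context.RequestServices.GetRequiredService<EndpointDataSource>();
                var endpoints = dataSource.Endpoints
                    .OfType<RouteEndpoint>()
                    .Select(e => new
                    {
                        Path = "/" + e.RoutePattern.RawText,
                        Methods = e.Metadata.GetMetadata<HttpMethodMetadata>()?.HttpMethods
                    });

                context.Response.ContentType = "application/json";
                await context.Response.WriteAsync(JsonSerializer.Serialize(endpoints));
            });
        });
    }
}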

The process of the detection of endpoints for Test Autobahn is described in Figure 3.7.

First, all endpoints in the database are set to inactive. Test Autobahn then gets all microservices registered in the database and calls the custom middleware endpoint for each microservice. All found endpoints are marked as active and added to or updated in the database, depending on whether the endpoint already existed.

Figure 3.7: Flow diagram for Test Autobahn to get all active endpoints with custom middleware endpoint detection.


A disadvantage of this approach is that the custom middleware needs to be installed and registered manually in every microservice. Manual installation of the custom middleware means that the endpoints of a microservice will not be detected if it is not installed. However, installing the custom middleware requires minimal implementation effort from a developer.

Once installed, the endpoints are monitored, and all changes will be detected without further additions. An advantage of this approach is that nothing other than this custom middleware is required for it to work. Another advantage is that it does not need any maintenance to function after installation.

3.2.3 Motivation for selection of detection method

Since not all microservices have Swagger, the solution with the custom middleware is a better approach since it does not depend on any other technique. Swagger also only covers REST endpoints, which would not make it a suitable solution if additional types of endpoints are added to Test Autobahn in a future extension. The custom middleware will have more expansion opportunities since it is custom-made, which makes it more appropriate. Therefore, the solution with the custom middleware described in 3.2.2 will be the method of choice for the final implementation.

3.3 Detection of test automation

All microservices that are tested with test automation are up and running in the Test and UAT environments. The test automations are developed in different repositories and projects than the microservices they test and have no direct connection to them.

In a test automation project, there are several different test automations that test different endpoints of one or more microservices. In the tests that cover REST endpoints, the tests are based on requests to one or more REST endpoints. The URLs for the requests are manually typed into the test cases by the test developers, and the URLs used are found in the API documentation for each microservice.

Since the test automations and the microservices do not have a direct connection to each other, coupling them for the documentation in Test Autobahn can be complicated. After studying how the test automation is built, attributes appear to be the most suitable solution for marking which automated test belongs to which microservice and endpoint. Attributes are used in .NET to associate information with code in a declarative way and provide a reusable element that can be applied to different targets.

An attribute can be applied to classes, methods, constructors, structs, and more [13]. In the test automation projects, with .NET Core 3.1 as the target framework, test classes with multiple test methods are used. Attributes can be used to mark which microservice a test class and test method are testing. There does not exist a predefined attribute that can be used to achieve this, therefore custom attributes need to be created. A custom attribute is essentially a traditional class that derives from .NET's "System.Attribute".

Like ordinary classes, custom attributes receive and store data [14].

The custom attributes for marking which microservice and endpoint are being tested will be implemented in a project in Test Autobahn. For all test automation projects to be able to use the custom attributes in their code, a NuGet package must be created from the project with the custom attributes. The NuGet package will then be installed and used in every test automation project.
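An illustrative sketch of what such custom attributes and their usage could look like; the attribute names and properties are assumptions, not the exact ones in the Test Autobahn Attributes package:

using System;

// Marks which microservice a test class covers.
[AttributeUsage(AttributeTargets.Class)]
public class MicroserviceUnderTestAttribute : Attribute
{
    public string MicroserviceName { get; }

    public MicroserviceUnderTestAttribute(string microserviceName)
    {
        MicroserviceName = microserviceName;
    }
}

// Marks which endpoint a test method covers.
[AttributeUsage(AttributeTargets.Method)]
public class EndpointUnderTestAttribute : Attribute
{
    public string Path { get; }
    public string HttpMethod { get; }

    public EndpointUnderTestAttribute(string path, string httpMethod)
    {
        Path = path;
        HttpMethod = httpMethod;
    }
}

// Usage in a test automation project:
[MicroserviceUnderTest("Microservice1")]
public class CustomerTests
{
    [EndpointUnderTest("/api/customers", "GET")]
    public void GetCustomers_ReturnsOk()
    {
        // test logic ...
    }
}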

In .NET, the attributes in a project can be read through the assembly. Assemblies can be implemented as DLL or EXE files, but in the test automation projects, the assemblies are implemented as DLL files. An assembly is a collection of types and resources that are built to work together and form a logical unit of functionality [15]. Every test automation project is run and built through a dedicated pipeline in Azure DevOps, and in the pipeline, the DLL file for the project is created. The DLL file needs to be read to get the custom attributes.

Two different methods of reading the assembly to find the custom attributes containing information about the tested microservices and endpoints will be investigated. In the first method, the assembly is read through an executable console application in each test automation project pipeline. In the second method, the assembly is read within a PowerShell task in each test automation project pipeline. The most suitable method will be selected for the implementation.

3.3.1 Executable console application

One possible method to read the DLL file created in the pipeline is through an executable console application. The console application needs to take the DLL file path as an argument in order to read the assembly. In this method, all attributes in the DLL file are read by the console application and the information about the found test automations is sent to Test Autobahn through a REST POST request. The console application must be executed in each test automation project pipeline with the built DLL file's path as an argument. For this, all test automation project pipelines need access to the console application, which means that the console application must be reusable and available for all pipelines.

A way to make the console application available and runnable for all test automation pipelines is to create a NuGet package from the console application project. A NuGet package is a ZIP file that has been renamed with ".nupkg", and standard NuGet packages often only contain assemblies in the form of DLL files [16]. In this case, however, an EXE file assembly needs to be included for the pipelines to be able to run the console application. A way to create a NuGet package containing an EXE file must therefore be found to be able to use this method.

3.3.1.1 Creating a NuGet package in Azure DevOps pipeline

In the Azure DevOps pipeline, the NuGet package for the console application needs to be built and published so that other pipelines can access it. To create a NuGet package in Azure DevOps, the CLI tool nuget.exe needs to be installed and used. In the Azure DevOps pipeline, this installation task is called "NuGetToolInstaller" and is run first. Then two additional tasks, "NuGet pack" and "NuGet push", need to be added to create the NuGet package. In the "NuGet pack" task, nuget.exe packages a project file into a NuGet package. The last task, "NuGet push", publishes the NuGet package as an artifact that other projects can install and use [17].

3.3.1.2 Including an EXE file in a NuGet package

For this method to function, an EXE file must be included in the NuGet package. When trying to create a NuGet package from the console application as described in 3.3.1.1, there was only a DLL file and no EXE file in the package. Therefore, something additional is needed to include the console application's EXE file in the NuGet package.

In the article written by Karamfilov [18], instructions on how to create a single EXE app that is self-contained are given. With this method, a single EXE file can be built containing the entire console application. As the instructions in the article say, the first step is to right-click on the project file in Visual Studio 2019 and choose "Publish". In the "Publish" window, press "Edit". The following settings need to be set:

• Deployment Mode : Self-Contained

• Target Runtime : win-x64

• Produce single file: true (checked)

Using this method, an EXE file is created when the console application project is built in the Azure DevOps pipeline. When trying to create the NuGet package with these settings, still only a DLL file is included and no EXE file, even though the EXE file is created in the build step of the project. Since the method was still not working, the NUSPEC file for the created NuGet package needed to be investigated. All NuGet packages contain a NUSPEC file, which is the manifest that describes the package contents [19]. In the created NUSPEC file, information about the project was missing, which could explain why the EXE file was absent. Instead of creating a NuGet package directly from the console application's project file, a custom NUSPEC file could be created and used. When the missing information was added, and the NUSPEC file was packed instead of the project file in the "NuGet pack" task in the Azure pipeline, the EXE file was included in the NuGet package.

The conclusion of this is that the console application project needs to be a single-file, self-contained EXE app, and that a custom NUSPEC file needs to be created and packed in the Azure DevOps pipeline in order to create a NuGet package containing an EXE file for the application.

The NUSPEC file needs the correct information about the project's settings for the package to contain the EXE file.

3.3.1.3 Installing and executing the NuGet package from a test automation pipeline

To use the executable console application in a test automation pipeline, the NuGet package needs to be installed to be able to run the EXE file. After installation, the EXE file is available and can be run from a command-line task. In other words, two tasks need to be added to each test automation project pipeline in Azure DevOps. Instead of adding these two tasks to every test automation pipeline, a task group can be created and added to each pipeline. In Azure DevOps, a task group encapsulates a sequence of tasks that are already defined in a build pipeline into a single reusable task. If any unique values are needed in the task group, parameters can be extracted from the encapsulated tasks as configuration variables [20]. The EXE file needs the DLL file path as an argument when executed in the command-line task inside the task group. The DLL file path will be a configuration variable for the created task group.

3.3.1.4 Reading assembly in console application

In the console application, the NuGet package with the custom attributes needs to be installed to be able to read the custom attributes from the assembly. To read the assembly, some code needs to be added to the console application. In the article written by Tansey [21], it is described how custom attributes can be read from the assembly in C#.

The conclusion made after reading the article is that the code below needs to be implemented to read the assembly and get the custom attributes. The methods use the "System.Reflection" library, which must be referenced in the console application.

To load the assembly from the DLL file:

var assembly = Assembly.LoadFrom(filePath);

, where “filePath” is the path to the DLL file.

To get the project classes from the assembly:

var types = assembly.GetTypes();

, where “types” is an array with classes.

To get the custom attributes from a class:


var attributesC = type.GetCustomAttributes(typeof(customAttribute), false);

, where “customAttribute” is the name of the made custom attribute, which has been installed from the NuGet package.

To get the methods in a class:

var methods = type.GetMethods();

, where “methods” is an array with methods.

To get the custom attributes from a method:

var attributesM = method.GetCustomAttributes(typeof(customAttribute), false);

, where “customAttribute” is the name of the made custom attribute, which has been installed from the NuGet package.
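Putting these pieces together, a condensed sketch of a console application that reads the custom attributes from a test automation DLL; the attribute type names follow the illustrative example in section 3.3 and are not the actual Test Autobahn types:

using System;
using System.Reflection;

class Program
{
    static void Main(string[] args)
    {
        var filePath = args[0];                      // path to the built test automation DLL
        var assembly = Assembly.LoadFrom(filePath);  // load the assembly from the DLL file

        foreach (var type in assembly.GetTypes())
        {
            var classAttributes = type.GetCustomAttributes(typeof(MicroserviceUnderTestAttribute), false);
            foreach (MicroserviceUnderTestAttribute microservice in classAttributes)
            {
                foreach (var method in type.GetMethods())
                {
                    var methodAttributes = method.GetCustomAttributes(typeof(EndpointUnderTestAttribute), false);
                    foreach (EndpointUnderTestAttribute endpoint in methodAttributes)
                    {
                        // Each match is one detected test automation for a microservice endpoint.
                        Console.WriteLine($"{microservice.MicroserviceName}: {endpoint.HttpMethod} {endpoint.Path}");
                    }
                }
            }
        }
    }
}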

3.3.2 Reading assembly with PowerShell

Another possible method to read the DLL file created in the Azure DevOps pipeline is by using a PowerShell task. With the DLL file path as an argument to the PowerShell task, the assembly can be read from the DLL file. In the PowerShell file, code is implemented to read the attributes from the assembly. To be able to read the custom attributes in the assembly, the NuGet package with the custom attributes needs to be installed.

To read the assembly in PowerShell, this code row needs to be added:

$assembly = [Reflection.Assembly]::LoadFrom($File)

, where $File is the path to the DLL file.

To get all classes in the assembly, this code row is inserted:

$types = $assembly.GetTypes()

, where $types is an array with the classes of the assembly.

When running the script, both on the local computer and as a PowerShell task in the Azure DevOps pipeline, the GetTypes() method throws a ReflectionTypeLoadException and seems unable to load all the requested types in the assembly. To get around this problem, the assembly could be read through C# code instead of PowerShell code in the PowerShell task. In the article written by Furmanek [22], instructions are given on how to add and execute C# code inside a PowerShell script. The C# code can be written directly in a variable, and the variable can be defined as C# code with the following PowerShell code line:

Add-Type -TypeDefinition $code -Language CSharp

, where "$code" is the variable containing the C# code.

The assembly-reading C# code is written in the same way as described in section 3.3.1.4.

By using this method and executing the code on the local computer, all the types are read and the error no longer occurs. But when running the script in the Azure DevOps pipeline, the error still occurs. By adding the following try-catch statement to the code, the ReflectionTypeLoadException can be investigated:

try
{
    types = assembly.GetTypes();
}
catch (ReflectionTypeLoadException e)
{
    types = e.Types.Where(x => x != null).ToArray();
}

By executing this code, the exception can be handled and the types that did not cause an error can be read from the variable "types". When reading the variable, it appeared that some of the types could be read, but not all.

After some research and error message analysis, the conclusion is that the PowerShell task does not seem to have access to read all the types in the assembly in the Azure DevOps pipeline. A solution to this problem has not been found, and therefore the solution described in 3.3.1 will be the method of choice for reading the test automation assemblies.

3.4 Connect and send the test automations to Test Autobahn

After reading the attributes, the information about the test automations must be registered with Test Autobahn. This can be done by sending the information about the test automations through a REST POST request from the console application described in 3.3.1 to Test Autobahn. For this to function, Test Autobahn needs to implement a REST POST endpoint that takes a specified model with the test automations as input. The input model needs to be specified in Test Autobahn, and the console application needs access to this model to send the request. Therefore, the input model must be published as a NuGet package so the console application can access it. In the console application, this NuGet package containing the input model needs to be installed and used in the request JSON body.
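A hedged sketch of what the shared input model and the POST from the console application could look like; the model shape and endpoint URL are assumptions made for illustration:

using System.Collections.Generic;
using System.Net.Http;
using System.Net.Http.Json;
using System.Threading.Tasks;

// Shared contract model (published as a NuGet package so the console application can reference it).
public class TestAutomationRegistration
{
    public string MicroserviceName { get; set; }
    public string EndpointPath { get; set; }
    public string HttpMethod { get; set; }
    public string TestName { get; set; }
}

public static class TestAutobahnClient
{
    // Called by the console application once all attributes have been read.
    public static async Task RegisterAsync(List<TestAutomationRegistration> detected)
    {
        using var client = new HttpClient();
        var response = await client.PostAsJsonAsync(
            "https://testautobahn.example.local/api/testautomations", detected);
        response.EnsureSuccessStatusCode();
    }
}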


3.5 Scheduling runs on a daily basis with Quartz.NET

Quartz.NET is a scheduling library for .NET that can run as a hosted service to execute background tasks on a timer. There are two main concepts in Quartz.NET: jobs and schedulers. A job is the background task that is triggered to execute at a scheduled time, and the scheduler runs jobs based on triggers on a timed schedule [23]. By creating a Quartz.NET hosted service, a .NET Core application can run the assigned task on a schedule in the background. As with other hosted services, mentioned in 3.1.3, a Quartz.NET hosted service is enabled in the Startup.cs file and is started when the .NET Core application starts.

To schedule daily detection of microservices, endpoints, and test automations, Quartz.NET will be used in the solution. The scheduled job will collect all data, check which microservices and endpoints have test coverage, and save the computed data to the database.

This data will be presented in the documentation and statistics shown in the web interface. The scheduler will be set to run every weekday at a specific time.
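A minimal sketch of such a setup using the Quartz hosting extensions; the job name and the cron expression (every weekday at 06:00) are illustrative assumptions:

using System.Threading.Tasks;
using Microsoft.Extensions.DependencyInjection;
using Quartz;

// The job executed by the scheduler.
public class DetectionJob : IJob
{
    public async Task Execute(IJobExecutionContext context)
    {
        // Detect microservices and endpoints, calculate coverage and store the result here.
        await Task.CompletedTask;
    }
}

public static class QuartzSetup
{
    // Called from Startup.ConfigureServices.
    public static void AddDetectionSchedule(IServiceCollection services)
    {
        services.AddQuartz(q =>
        {
            q.UseMicrosoftDependencyInjectionJobFactory();

            var jobKey = new JobKey("DetectionJob");
            q.AddJob<DetectionJob>(options => options.WithIdentity(jobKey));
            q.AddTrigger(options => options
                .ForJob(jobKey)
                .WithCronSchedule("0 0 6 ? * MON-FRI"));   // every weekday at 06:00
        });

        services.AddQuartzHostedService(options => options.WaitForJobsToComplete = true);
    }
}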


Chapter 4 – Implementation

In this section, the implementation of the real-time auto-test monitoring system is presented. The implementation is based on the selected techniques presented in section 3.

4.1 System architecture

The different components of the real-time auto-test monitoring system are located both inside and outside Test Autobahn, as shown in Figure 4.1. Most of the functionality of the system is within Test Autobahn: the GUI, the frontend, the web application containing the API, the backend, the SQL database, and the three different projects that are used by external services. These projects are called Test Autobahn Microservice Detector, Test Autobahn Contract Models, and Test Autobahn Attributes, and they are built as NuGet packages so that external services can install and use the projects' functionality. The components outside Test Autobahn are the Test Autobahn Automation Detector, the test automation projects, and the microservices. The frontend is written in JavaScript with React.js and the rest of the components are written in C#. The microservices and the test automation projects, which already existed at Marginalen, need to install and use the NuGet packages to send information to Test Autobahn.


Figure 4.1: System components of the real-time auto-test monitoring system with the communication shown.

As shown in Figure 4.1, some dependencies exist across the different components. The Test Autobahn Microservice Detector (TAMD) is installed in every microservice to register the microservice and expose its endpoints to Test Autobahn. When registered, the Test Autobahn Web Application can access the endpoints through a call to the endpoint added by the installed TAMD. The Test Autobahn Contract Models package also needs to be installed in every microservice, since TAMD depends on that project. This part is explained in more detail in section 4.3.

The test automation projects have the Test Autobahn Attributes NuGet package installed and use the Test Autobahn Automation Detector (TAAD) to register the test automations. TAAD also has the Test Autobahn Attributes NuGet package installed and sends the detected test automations from the test automation projects to Test Autobahn through a REST API request. In section 4.4, the process of the test automation detection is explained in more detail.

In the GUI, all the documentation and statistics of the detected test automation coverage for the microservices and endpoints are presented. The frontend component controls the appearance of the GUI and fetches values from the database using REST API calls to the API of the Test Autobahn Web Application.

4.1.1 GUI

The design of the website is simple, with two different views as shown in Figures 4.2 and 4.3. In the first figure, the GUI for displaying information about all microservices in the Test environment is presented. The different environment tabs are displayed at the top of the table, and if the "UAT" tab is pressed, the view switches to the documentation for the microservices in the UAT environment. Below the tabs, the number of microservices with test coverage, the total number of microservices, and the test coverage percentage for all microservices in the Test environment are presented.

Below that section, a table shows all microservices in the Test environment. For each presented microservice, the total number of endpoints, the total number of endpoints with test coverage, the coverage percentage, and the date of the latest detection are displayed. The name of a microservice can be pressed to see more information about that microservice. When the link is pressed, the view switches to the page presented in Figure 4.3.

Figure 4.2: The GUI for showing all the microservices in a specific environment, Test in this case.

In Figure 4.3, the GUI for displaying information about a single microservice is presented. As in the previous view, the tabs for the different environments are still there. If one of those tabs is pressed, the view switches to the one showing all the microservices for the chosen environment. Below the tabs, the number of endpoints with test coverage, the total number of endpoints, and the test coverage percentage for all endpoints in the specific microservice are presented. In the table below that section, all the endpoints of the specific microservice are listed. For each endpoint, the path, HTTP method, coverage status, number of test automations that cover the endpoint, and the date of the latest detection are displayed.


Figure 4.3: The GUI showing information for a specific microservice, named "Microservice1" in this case.

Both pages presented contain example data since some real information about Marginalen’s microservices cannot be presented due to confidentiality.

4.2 Test Autobahn

In this section, the implementation of Test Autobahn is presented, including the code architecture, database structure, API, and the scheduled detection job.

4.2.1 Code architecture

The architecture of Test Autobahn is built on the onion layer architecture. As described in the article written by Singh Shekhawat [24], in the onion layer architecture the UI communicates with the business logic through interfaces. In the architecture, there are four different layers: the UI layer, the service layer, the repository layer, and the domain model layer. These layers point towards the center, which means that an outer layer can access an inner layer, but an inner layer cannot access an outer layer. The onion layer architecture implemented for Test Autobahn is shown in Figure 4.4.

In the outer layer, called the UI layer, the following components are located:

• UI (Web) – The web application for Test Autobahn containing controllers, models, and views.

• Infrastructure – Database context and repositories.

• Contract models – Models that are shared for external services.


• Attributes – Contains the custom attributes used on external test automation projects.

• Microservice detector – Contains the hosted service and the custom middleware that are deployed on external microservices.

In the second layer, called the service layer:

• Service Interface – Contains the services and their interfaces.

In the third layer, called the repository layer:

• Repository interface – Contains the interfaces for the repositories.

In the fourth and innermost layer, called the domain model layer:

• Domain models - Contains the models that are used in the database.

Figure 4.4: Onion layer architecture for Test Autobahn.
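To make the dependency direction concrete, the following is a minimal sketch of how the layers can reference each other. The entity mirrors the “Microservice” domain model described in Section 4.2.2, while the repository interface, the service class, and their members (IMicroserviceRepository, MicroserviceService, GetActiveAsync) are illustrative assumptions rather than the actual types in Test Autobahn:

using System.Collections.Generic;
using System.Threading.Tasks;

// Domain model layer (innermost): a plain entity with no outward dependencies.
public class Microservice
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string BaseUrl { get; set; }
    public bool Active { get; set; }
}

// Repository layer: an interface that only depends on the domain models.
public interface IMicroserviceRepository
{
    Task<IReadOnlyList<Microservice>> GetActiveAsync();
}

// Service layer: business logic that depends on the repository interface,
// never on the concrete repository in the outer Infrastructure project.
public class MicroserviceService
{
    private readonly IMicroserviceRepository _repository;

    public MicroserviceService(IMicroserviceRepository repository)
    {
        _repository = repository;
    }

    public Task<IReadOnlyList<Microservice>> GetActiveMicroservicesAsync()
        => _repository.GetActiveAsync();
}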

The folder structure for Test Autobahn’s projects is shown in Figure 4.5. In Test Autobahn, there are four different folders that each contain different projects. The projects in the folders “Outer layer – UI” and “Outer layer – Infrastructure” are part of the UI layer. In the folder “Application Core”, there are three different projects. The project “TestAutobahn.Core.ApplicationService” is the service layer, the project “TestAutobahn.Core.InfrastructureServices.Interfaces” is the repository layer, and the project “TestAutobahn.Core.DomainModels” is the domain model layer. Each project contains functionality located in multiple C# classes.

Figure 4.5: Folder structure for Test Autobahn projects.

Inversion of Control (IoC) is a design pattern principle used to invert different kinds of control in object-oriented design to achieve loose coupling [25]. In the folder “IoC Container”, the project “TestAutobahn.Ioc” is located, containing the IoC functionality.

In Figure 4.6, the structure of this project is presented. The project only contains one class, called “IoCContainer”, with one method, called “ConfigureIoCContainer”, which registers the different services of Test Autobahn. This method is called from the Startup.cs class in the project “TestAutobahn.Web”.

Figure 4.6: Class diagram for the project TestAutobahn.IoC.
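As an illustration, the following is a minimal sketch of what the IoCContainer class could look like; the class is shown as static, and the registered types (IMicroserviceRepository, MicroserviceRepository, MicroserviceService) are assumptions, since the actual registrations are not listed in the report:

using Microsoft.Extensions.DependencyInjection;

public static class IoCContainer
{
    // Registers Test Autobahn's services and repositories against their interfaces,
    // so that outer layers only depend on abstractions from the inner layers.
    public static void ConfigureIoCContainer(IServiceCollection services)
    {
        // Illustrative registrations; MicroserviceRepository is assumed to be a
        // concrete implementation located in the Infrastructure project.
        services.AddScoped<IMicroserviceRepository, MicroserviceRepository>();
        services.AddScoped<MicroserviceService>();
    }
}

Under this assumption, Startup.cs in “TestAutobahn.Web” would call IoCContainer.ConfigureIoCContainer(services); from its ConfigureServices method.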

4.2.2 Database structure

The database is built with .NET Entity Framework Core (EF Core), which is a lightweight, cross-platform version of the Entity Framework data access technology. In EF Core, as described in the article about Entity Framework Core published by Microsoft [26], data access is performed using a model, which is made up of entity classes, and a context object that represents a session with the database. In Test Autobahn, the model is placed in the project “TestAutobahn.Core.DomainModels” and the database context is placed in the project “TestAutobahn.Infrastructure.Database”. The database model contains multiple entity classes that define the different tables in the database. These classes, called domain models, build the structure of the database, which is shown in Figure 4.7. The database created with EF Core is a SQL database.


Figure 4.7: Database table structure for Test Autobahn.
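As a sketch of how the context can be declared, the snippet below assumes a context class named TestAutobahnContext; the class name and constructor are assumptions, while the DbSet properties follow the tables shown in Figure 4.7:

using Microsoft.EntityFrameworkCore;

public class TestAutobahnContext : DbContext
{
    public TestAutobahnContext(DbContextOptions<TestAutobahnContext> options)
        : base(options) { }

    // One DbSet per table in Figure 4.7.
    public DbSet<Microservice> Microservices { get; set; }
    public DbSet<Endpoint> Endpoints { get; set; }
    public DbSet<TestAutomationProject> TestAutomationProjects { get; set; }
    public DbSet<TestAutomation> TestAutomations { get; set; }
    public DbSet<DataCollectionTimestamp> DataCollectionTimestamps { get; set; }
    public DbSet<MicroserviceData> MicroserviceData { get; set; }
    public DbSet<EndpointData> EndpointData { get; set; }
}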

The domain model “Microservice”:

• Id - The identification number of the microservice.

• Name - The name of the microservice.

• BaseUrl - The base URL to the microservice web application.

• Active - Marks if the microservice is active or not.

• Production - Marks which environment the microservice is running in, where 0 = “Not set”, 1 = “Test” and 2 = “UAT”.


The domain model “Endpoint”:

• Id - The identification number of the endpoint.

• MicroserviceId - The identification number of the microservice the endpoint belongs to.

• Path - The path to the endpoint.

• Active - Marks if the endpoint is active or not.

• HttpMethod - Indicates which type of REST endpoint (GET, POST, etc.).

The domain model “TestAutomationProject”:

• Id - The identification number of the test automation project.

• Name - The name of the test automation project.

The domain model “TestAutomation”:

• Id - The identification number of the test automation.

• EndpointId - The identification number of the endpoint the test automation covers.

• TestName - The name of the test automation test method.

• Active - Marks if the test automation is active or not.

• TestAutomationProjectId - The identification number of the test automation project that the test automation is located in.

The domain model “DataCollectionTimestamp”:

• Id - The identification number of the data collection timestamp.

• Date - The time and date of when the timestamp was created.

The domain model “MicroserviceData”:

• Id - The identification number of the microservice data.

• MicroserviceId - The identification number of the microservice the data belongs to.

• NumCoveredEndpoints - The number of endpoints that are covered.

• NumTotEndpoints - The total number of endpoints.

• Status - The success rate for the test automation of the microservice.


• DataCollectionTimestampId - The identification number of the DataCollectionTimestamp, which indicates at which time the data was collected.

• Covered - Marks if the microservice is covered or not.

The domain model “EndpointData”:

• Id - The identification number of the endpoint data.

• EndpointId - The identification number of the endpoint the data belongs to.

• DataCollectionTimestampId - The identification number of the DataCollectionTimestamp, which indicates at which time the data was collected.

• Covered - Marks if the endpoint is covered or not.

• NumCovered - Contains the number of test automations that cover the endpoint.
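To illustrate how the listed properties translate into EF Core entity classes, the sketch below shows the “Endpoint” and “EndpointData” domain models; the property types are assumptions based on the descriptions above:

public class Endpoint
{
    public int Id { get; set; }
    public int MicroserviceId { get; set; }   // The microservice the endpoint belongs to.
    public string Path { get; set; }          // The path to the endpoint.
    public bool Active { get; set; }          // Whether the endpoint is active.
    public string HttpMethod { get; set; }    // GET, POST, etc.
}

public class EndpointData
{
    public int Id { get; set; }
    public int EndpointId { get; set; }                 // The endpoint the data belongs to.
    public int DataCollectionTimestampId { get; set; }  // The detection run the data belongs to.
    public bool Covered { get; set; }                   // Whether the endpoint is covered.
    public int NumCovered { get; set; }                 // Number of test automations covering it.
}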

4.2.3 NuGet package containing contract models

In the outer layer in the project “Marginalen.TestAutobahn.Web.Contracts”, the contract models for Test Autobahn are placed. The contract models are the models that external services will be able to use to send information to Test Autobahn’s endpoints. In Figure 4.8 a class diagram is shown with the different models and their dependencies.

Figure 4.8: Class diagram of the project “Marginalen.TestAutobahn.Web.Contracts”.
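As an illustration of what such a contract model can look like, the sketch below shows a hypothetical registration model; the actual model names and properties in Figure 4.8 are not reproduced in the report, so both the class name and its members are assumptions:

namespace Marginalen.TestAutobahn.Web.Contracts
{
    // Hypothetical contract model a microservice could use to register itself with Test Autobahn.
    public class MicroserviceRegistrationContract
    {
        public string Name { get; set; }        // Name of the microservice.
        public string BaseUrl { get; set; }     // Base URL of the microservice web application.
        public string Environment { get; set; } // For example "Test" or "UAT".
    }
}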


For external microservices to use these models, the project is produced as a NuGet package. In the project CSPROJ file, the following rows are added:

In the <PropertyGroup> tag:

<RootNamespace>Marginalen.TestAutobahn.Web.Contracts</RootNamespace>

<AssemblyName>Marginalen.TestAutobahn.Web.Contracts</AssemblyName>

In the <ItemGroup> tag:

<Folder Include=".nuget\" />

To create a NuGet package that all microservices can install and use, a pipeline is created in Azure DevOps. In Figure 4.9, the pipeline tasks for creating a NuGet package for the project are shown. This pipeline is written in YAML and is only triggered if something in the project file is changed. In the NuGet Pack task, the CSPROJ file for the project is packed, and in the NuGet Push task, the NuGet package is published as an artifact.

Figure 4.9: The pipeline tasks for creating a NuGet package for the project “Marginalen.TestAutobahn.Web.Contracts”.

4.2.4 API

The API of Test Autobahn consists of several REST endpoints located in different controllers in the project “TestAutobahn.Web”. In Test Autobahn, there are four different controllers with endpoints, and the API for each controller is stated below.


In the Detection Controller:

Table 4.1: REST API for POST requests that start a detection of microservices, endpoints, and test automations.

In the Microservice Controller:

Table 4.2: REST API for POST request that registers a microservice.
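A minimal sketch of how the registration endpoint in the Microservice Controller could be structured is shown below; the route, action name, and service interface are assumptions, and the request model reuses the hypothetical contract from Section 4.2.3:

using Microsoft.AspNetCore.Mvc;
using System.Threading.Tasks;

// Assumed application service interface from the service layer.
public interface IMicroserviceService
{
    Task RegisterMicroserviceAsync(MicroserviceRegistrationContract contract);
}

[ApiController]
[Route("api/[controller]")]
public class MicroserviceController : ControllerBase
{
    private readonly IMicroserviceService _microserviceService;

    public MicroserviceController(IMicroserviceService microserviceService)
    {
        _microserviceService = microserviceService;
    }

    // POST request that registers a microservice (Table 4.2).
    [HttpPost]
    public async Task<IActionResult> Register([FromBody] MicroserviceRegistrationContract contract)
    {
        await _microserviceService.RegisterMicroserviceAsync(contract);
        return Ok();
    }
}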


In the Test Automation Controller:

Table 4.3: REST API for POST request that adds test automations.


In the Documentation Controller:

Table 4.4: REST API for GET request that returns the overall statistics for the microservices during the latest detection.


Table 4.5: REST API for GET request that returns the endpoint statistics for a specific microservice at a specific timestamp.

4.2.5 Scheduling detection run with Quartz.NET

Quartz.NET hosted service is implemented in Test Autobahn to schedule a daily detection of microservices, endpoints, and test automations, as described in Section 3.5. The process of the scheduled detection job consists of two parts, presented in the flow diagrams in Figures 4.10 and 4.11.

In the first part, shown in Figure 4.10, the detection job fetches all the microservices from the database. For each microservice, the detection job calls the endpoint connected to the custom middleware, which is explained in Section 4.3. This endpoint returns all the REST endpoints for that microservice. Each endpoint found is added to or updated in the database, depending on whether it already existed. When this process is done for every microservice in the database, the process continues with the second part, shown in Figure 4.11.

Figure 4.10: Flow diagram of the first part of the detection job process, which detects all microservices and their endpoints.

In the second part of the process, all the found active microservices are fetched from the database and a new DataCollectionTimestamp is added to the database with the current time. For each microservice, all the active endpoints are fetched. For each active endpoint, the detection job looks for any active test automation covering that endpoint. If at least one test automation is found, the endpoint has coverage and a new EndpointData is added to the database with the “covered” variable set to true. If no test automations are found for an endpoint, the variable “covered” is set to false.

After all the endpoints have been checked, the total data for the microservice is stored in the database. This is done by inserting a MicroserviceData object into the database that contains the calculated total number of endpoints, the number of covered endpoints, and whether the microservice is covered or not.


Figure 4.11: Flow diagram of the second part of the detection job process, which finds the test automations corresponding to the found microservices and endpoints and adds the found data to the database.
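The core of this second part can be summarized with the sketch below. It is simplified to use the EF Core context directly instead of the repository layer, the class and method names are illustrative, and the criterion for marking a whole microservice as covered is an assumption; the flow otherwise follows Figure 4.11:

using System;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.EntityFrameworkCore;

public class CoverageCalculator
{
    private readonly TestAutobahnContext _context; // Context sketched in Section 4.2.2.

    public CoverageCalculator(TestAutobahnContext context) => _context = context;

    public async Task CalculateAsync()
    {
        // One DataCollectionTimestamp per detection run.
        var timestamp = new DataCollectionTimestamp { Date = DateTime.UtcNow };
        _context.DataCollectionTimestamps.Add(timestamp);
        await _context.SaveChangesAsync();

        foreach (var microservice in await _context.Microservices.Where(m => m.Active).ToListAsync())
        {
            var endpoints = await _context.Endpoints
                .Where(e => e.Active && e.MicroserviceId == microservice.Id)
                .ToListAsync();

            int coveredEndpoints = 0;
            foreach (var endpoint in endpoints)
            {
                // Count the active test automations covering this endpoint.
                int numCovered = await _context.TestAutomations
                    .CountAsync(t => t.Active && t.EndpointId == endpoint.Id);

                _context.EndpointData.Add(new EndpointData
                {
                    EndpointId = endpoint.Id,
                    DataCollectionTimestampId = timestamp.Id,
                    Covered = numCovered > 0,
                    NumCovered = numCovered
                });

                if (numCovered > 0) coveredEndpoints++;
            }

            // Store the aggregated data for the microservice.
            _context.MicroserviceData.Add(new MicroserviceData
            {
                MicroserviceId = microservice.Id,
                DataCollectionTimestampId = timestamp.Id,
                NumTotEndpoints = endpoints.Count,
                NumCoveredEndpoints = coveredEndpoints,
                Covered = coveredEndpoints > 0 // Assumed criterion for microservice coverage.
            });
        }

        await _context.SaveChangesAsync();
    }
}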

The detection job is scheduled to run every weekday at 6 in the morning. The timer is set by using a cron schedule in the scheduler; in Quartz.NET's six-field, seconds-first format, the expression to run the detection job every weekday at 6 in the morning is “0 0 6 ? * MON-FRI”.
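A minimal sketch of how such a schedule can be registered with Quartz.NET's hosted service is shown below; the job class, job key, and trigger names are illustrative, and the cron expression uses Quartz's six-field, seconds-first format:

using System.Threading.Tasks;
using Microsoft.Extensions.DependencyInjection;
using Quartz;

// Illustrative Quartz job; the actual detection logic (Figures 4.10 and 4.11) runs in Execute.
public class DetectionJob : IJob
{
    public Task Execute(IJobExecutionContext context)
    {
        // Detect microservices and endpoints, then calculate the test coverage.
        return Task.CompletedTask;
    }
}

public static class QuartzConfiguration
{
    public static void AddDetectionJobSchedule(IServiceCollection services)
    {
        services.AddQuartz(q =>
        {
            var jobKey = new JobKey("DetectionJob");
            q.AddJob<DetectionJob>(opts => opts.WithIdentity(jobKey));

            // Fire every weekday (Monday to Friday) at 06:00.
            q.AddTrigger(opts => opts
                .ForJob(jobKey)
                .WithIdentity("DetectionJob-trigger")
                .WithCronSchedule("0 0 6 ? * MON-FRI"));
        });

        // Run Quartz as a hosted service inside the web application.
        services.AddQuartzHostedService(options => options.WaitForJobsToComplete = true);
    }
}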

4.3 Test Autobahn Microservice Detector

In this section, the implementation of the Test Autobahn Microservice Detector (TAMD) will be presented, containing the hosted service introduced in Section 3.1.3 and the custom middleware introduced in Section 3.2.2. TAMD is a part of Test Autobahn and is packaged as a NuGet package so that all microservices can access this service. In Figure 4.12, a class diagram of TAMD is shown with its three different classes. The class MicroserviceDetectionJob contains the custom hosted service that registers a microservice, and the class TestAutobahnMiddleware contains the custom middleware that exposes the REST endpoints of the microservice. In the custom hosted service, called MicroserviceDetectionJob, information about the microservice using the service is collected, and then, through a REST POST request, the information about the microservice is registered to Test Autobahn. In the custom middleware, called TestAutobahnMiddleware, all the endpoints are exposed through a specified URL stated in the GetUrlBase() method. TestAutobahnExtension is an extension class that refers to these two other classes.

Figure 4.12: Class diagram for Test Autobahn Microservice Detector in the project Marginalen.TestAutobahn.Web.MicroserviceDetector.

The NuGet package for TAMD is implemented like the NuGet package described in Section 4.2.3, and the pipeline contains the same steps as in Figure 4.9.
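Based on the class names in Figure 4.12 and the installation steps in Section 4.3.1, the extension class can be sketched as follows; the exact wiring is an assumption, and it presumes that MicroserviceDetectionJob implements IHostedService and that TestAutobahnMiddleware follows the standard ASP.NET Core middleware convention:

using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.DependencyInjection;

public static class TestAutobahnExtension
{
    // Registers the hosted service that reports the microservice to Test Autobahn on startup.
    public static IServiceCollection AddTestAutobahn(this IServiceCollection services)
    {
        services.AddHostedService<MicroserviceDetectionJob>();
        return services;
    }

    // Adds the middleware that exposes the microservice's REST endpoints to Test Autobahn.
    public static IApplicationBuilder UseTestAutobahn(this IApplicationBuilder app, string statusControllerName)
    {
        return app.UseMiddleware<TestAutobahnMiddleware>(statusControllerName);
    }
}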

4.3.1 Installation and deployment of TAMD in a microservice

For a microservice to install and implement TAMD, the following steps need to be done:

1. Install the NuGet packages “Marginalen.TestAutobahn.Web.MicroserviceDetector” and “Marginalen.TestAutobahn.Web.Contracts”.

2. Add the following line in ConfigureServices(IServiceCollection services) in Startup.cs:

services.AddTestAutobahn();

(52)

3. Add the following line in Configure(IApplicationBuilder app, IWebHostEnvironment env) in Startup.cs:

app.UseTestAutobahn(statusControllerName);

where “statusControllerName” is a string containing the name of the controller containing the health checks for the microservice. If a health controller does not exist, an empty string is sent as an argument.

4. In appsettings.json, add the following:

"TestAutobahnBaseAddressSetting": "TestAutobahnBaseAddressSetting",
"TestAutobahnMicroserviceNameSetting": "TestAutobahnMicroserviceNameSetting",
"TestAutobahnEnvironmentSetting": "TestAutobahnEnvironmentSetting"

In appsettings.development.json, add the following:

"TestAutobahnBaseAddressSetting": "",
"TestAutobahnMicroserviceNameSetting": "",
"TestAutobahnEnvironmentSetting": ""

5. In the release pipeline in Azure DevOps, the settings listed in step 4 must be set as variables.

With these five steps, TAMD is installed and deployed on a microservice, and the microservice and its endpoints will be monitored. A microservice with TAMD deployed and started on a server immediately sends its registration to Test Autobahn.

4.4 Test automation detection

Based on the investigation presented in Section 3.3, the test automation detection process is implemented with the console application called Test Autobahn Automation Detector (TAAD). As described in Section 3.3.1, the console application reads the assembly and its custom attributes to determine which microservices and endpoints are being tested in a test automation project. To be able to read the test automation project assemblies, the console application receives a DLL file path as an argument. As described in Section 3.4, the console application sends the found test automations to Test Autobahn through a REST POST request after the assembly has been read. TAAD is made as a NuGet package containing an EXE file so that the console application can be run from every test automation pipeline in Azure DevOps. The NuGet package is installed on every test automation pipeline, and the EXE file is executed from a command-line task with the DLL file path as an argument. This test automation detection process for a test automation project is shown as a state diagram in Figure 4.13.
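To make the reflection step concrete, the sketch below shows how TAAD could read the custom attributes from a test automation DLL; the attribute names (MicroserviceUnderTestAttribute, EndpointUnderTestAttribute) are hypothetical stand-ins for the custom attribute models described in Section 3.3, and the REST POST request to Test Autobahn is only indicated by a comment:

using System;
using System.Reflection;

// Hypothetical stand-ins for the custom attributes placed on test classes and methods.
[AttributeUsage(AttributeTargets.Class)]
public class MicroserviceUnderTestAttribute : Attribute
{
    public string MicroserviceName { get; }
    public MicroserviceUnderTestAttribute(string microserviceName) => MicroserviceName = microserviceName;
}

[AttributeUsage(AttributeTargets.Method)]
public class EndpointUnderTestAttribute : Attribute
{
    public string Path { get; }
    public string HttpMethod { get; }
    public EndpointUnderTestAttribute(string path, string httpMethod)
    {
        Path = path;
        HttpMethod = httpMethod;
    }
}

public static class Program
{
    public static void Main(string[] args)
    {
        // The DLL file path of the test automation project is passed as the first argument.
        var assembly = Assembly.LoadFrom(args[0]);

        foreach (var type in assembly.GetTypes())
        {
            var microservice = type.GetCustomAttribute<MicroserviceUnderTestAttribute>();
            if (microservice == null) continue;

            foreach (var method in type.GetMethods())
            {
                var endpoint = method.GetCustomAttribute<EndpointUnderTestAttribute>();
                if (endpoint == null) continue;

                // Each match is a test automation covering one endpoint of one microservice;
                // TAAD collects these and sends them to Test Autobahn via a REST POST request.
                Console.WriteLine($"{microservice.MicroserviceName} {endpoint.HttpMethod} {endpoint.Path}: {method.Name}");
            }
        }
    }
}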
