
Virtualized Functional Verification of Cross-Platform Software Applications

William Antti

Computer Science and Engineering, bachelor's level

2019

Luleå University of Technology


Abstract

With so many developers writing code, and more choosing to become developers every day, tools that aid the work process are needed. With testing being done for multiple different devices and sources, there is a need to make it better and more efficient. In this thesis, connecting the variety of tools used in development, such as version control, project management, issue tracking and test systems, is explored as a possible solution. A possible solution was implemented and then evaluated through a questionnaire answered by developers. For example, 75% of the respondents answered 5 when asked whether they liked the connection between the issue tracking system and the test results, and 75% also gave a 5 when asked whether they liked the way the test results were presented. The answers about the implementation made it possible to conclude that a solution that addresses some of the presented problems is achievable: a better way to connect various tools to present and analyze the test results coming from multiple different sources.


Preface

I would like to thank my supervisors Kristoffer Karlsson and Daniel Wollbro at Spotin AB for their guidance and support, and for providing, together with the other employees at Spotin, a great place to work. Thank you also to Peter Parnes, my supervisor from Luleå University of Technology.


Contents

1 Introduction
   1.1 Background
   1.2 Purpose
   1.3 Problem
   1.4 Delimitations
   1.5 Thesis structure
2 Theory and Related work
   2.1 Related work
3 Methodology
   3.1 Research
   3.2 Agile development
   3.3 Evaluation
4 Implementation
   4.1 System architecture
   4.2 Setup
   4.3 Visualization
      4.3.1 Functional
      4.3.2 Unit
      4.3.3 Unit coverage
   4.4 APIs
      4.4.1 GitLab
      4.4.2 Slack
5 Results
   5.1 Architecture results
   5.2 Evaluation
6 Discussion
   6.1 Method discussion
   6.2 Ethics
   6.3 Future work
   6.4 Conclusion
References
Appendices


1 Introduction

In this thesis, I will present the findings on the possibilities of connecting multiple tools used by developers to improve test management and visualize the results.

1.1 Background

According to International Data Corporation there were an estimated 22 million software developers in 2018 [8]. These millions of developers are writing code every day for various projects. Just in web development there are different devices with different screen sizes, different interfaces for input, different browsers and different operating systems to think of when developing applications for the web. According to statistics by W3Counter from April 2019, browsers such as Chrome 73, Safari 12 and Firefox have market shares of 35%, 12% and 3% respectively, and platforms such as Windows 10, Android 8 and iOS 12 have market shares of 16%, 15% and 11% respectively. Together with the top three screen resolutions 640x360, 1366x768 and 1024x768, a pattern of great variance is shown [24].

Mistakes happen, that is only human, and because mistakes happen there is a need for tests. Developers need to verify and validate the software they create [19]. There are many frameworks for creating tests, such as JSUnit for unit testing [9], and there are applications that run these tests automatically; Selenium can automate tests for the web and different browsers, send them to run in virtual containers and collect the results when the tests are done [15]. The results are often presented as an XML file or a generated HTML page that can present the test results with graphs, as JMeter does for example [10].

Right alongside the developers there are tools used to assist in the development process. There are issue-tracking systems, test programs, communication tools, version control applications, servers, virtual containers to name a few.

Given the importance of testing and the growing development area, it is important to get an overview of the vast amount of test results. Using all the tools available to visualize and analyze the test results is an area where a lot of work can be done.

1.2 Purpose

The purpose of this thesis is to achieve an easier and better way to verify the test results coming from multiple different platforms, both right now and over time.


It also explores how a combination of the development tools used, such as issue-tracking systems, version control, and the test programs and their results, can aid in the verification and clarity of the results. Compiling, presenting and getting feedback on the tests is essential for the developers.

1.3 Problem

1. Is it possible to use a project management and issue tracking system for test management?

(a) Is it possible to bridge the tools used throughout the development process: test systems, project management, issue tracking, version control, team communication?

(b) Is it possible to gather the different test results (functional, unit, coverage) from all the different sources, which have been executed on different platforms (Chrome, Firefox, Safari, iOS, Android), and present them together in a unified way?

(c) Is it possible to store and present the results history to show how it changes over time, so that there is a record of the development verification gap and problem areas in the code over time?

1.4 Delimitations

This is a bachelor level thesis carried out over a 10-week period. The solution will use test data from Spotin AB, which means tests that ran on different browsers and devices. The tools and software used by the Spotin AB development team will also be the focus to use and connect. Making the solution cover all test areas and tools would be too broad and would take too long, but the implementation could be used as an example for the other areas. This thesis describes the steps taken to achieve an easier way to verify the test results coming from multiple different platforms.

1.5 Thesis structure

Section 2 is the theory with related work. It covers some theory behind testing and the tools used by developers. Section 3 is the methodology, describing how the development was researched, planned and executed. Section 4 is about the implementation: how each step was done and the decision making behind it. Section 5 presents the results, which show the finished solution and the answers from the questionnaire. Section 6 is the discussion, where an evaluation of the solution is done. Reflections on the method, results, ethics and future work are also in this section.


2 Theory and Related work

The software development life cycle describes how to define, develop, test and maintain software. It is there to improve the development process and the quality of the code. There are plenty of different models, such as Waterfall, the V model and incremental methods such as Agile [18].

Whichever popular model is picked, they all have testing as part of it. Functional tests are usually there to answer whether particular features work as described in the requirements specification [20]. Testing usually starts with unit tests, which test the smallest testable parts, such as functions [21]. Test coverage measures how much of the code has been exercised by some test [22].

In test management there are two main parts, planning and execution, according to the educational website Guru99. Planning consists of risk analysis, test estimation, test planning and test organization. Execution consists of test monitoring and control, issue management, and test reporting and evaluation [6]. This means that combining features and tools seems to be a great way of moving forward in the testing area to expand on test management. The variety of tools used will be explored below.

With data coming from different sources, presenting the data can be done in various ways, and it should be presented in the best way possible and in a unified way. According to David J. Slutsky, graphs are useful when the data are unfit for text; if the text would be too hard to understand or too numerous, graphs should be used [17].

According to In and Lee, data needs to be processed in some way before it can be presented, and there are three different forms: text, tabular or graphical. Reading data as text is much slower compared to the other two methods, since reading a long text takes more effort and is harder to understand. Text can still be useful to explain or highlight certain results. When data are collected for a specific purpose it is important to think through how and why, and to make the data possible to analyze in the way that fits the purpose best. Presenting data in a way that deceives is not acceptable [7].

Again according to In and Lee, when there is a small number of categories grouped together, a pie chart is very suitable [7]. For test result data, for example whether tests passed or failed, this seems very appropriate. For analyzing data over time, a line chart is a good fit, being very useful for observing patterns and trends [7].

Tabular data are good for summarizing and comparing changing data, as the study by In and Lee shows. Showing data with values of different units, such as time and amounts, is a great reason to use tables. Tables can also be expanded with multiple colors, creating heat maps. Having tables that show results and values linked to certain colors helps further visualize the information [7]. Here, having the test results shown with different colors in a table heat map would be very beneficial for presenting the data.

2.1 Related work

There are not a lot of applications that can combine the multitude of tests from different platforms and display them in a clear manner while also being connected with the multiple tools used by the developers.

Programs like JSUnit give the ability to create tests for client-side JavaScript for many different browsers. It can produce an XML report of the test results and also gives an easier overall view with a generated HTML site that shows the results for that specific test run more clearly than the XML file [9]. There are plenty of programs that create, run and produce simple test results just like JSUnit.

The applications that are more closely related can give a broader view of the system and its test results. Applications such as JMeter and Blazemeter are examples. JMeter can connect to and use multiple different test types and produce an easy-to-read HTML report of the results, giving more information than JSUnit alone. JMeter provides metrics based on APDEX, the Application Performance Index [10].

Blazemeter can be considered a candidate for achieving what is needed. Blazemeter can be connected with open source applications such as JMeter but offers much broader functions, for example better test result reports thanks to the cloud storage provided through services like Amazon Web Services. Blazemeter is provided as Software as a Service (SaaS); it has a subscription fee above 10 tests and it is not open source [2]. Using an application like Blazemeter runs the risk of locking and tying yourself to a third-party application that can change at any moment.

GitLab should also be mentioned: besides the obvious source code management, it has project management features such as issue boards and continuous integration, to name a few. But as with Blazemeter above, restricting the company to a third-party software with a subscription that is unfriendly to scalability is not beneficial. It is also not open source, so it is not flexible to changes that might be needed to show the test results in a beneficial way, and showing both a current view and a historical view might be hard [5].

Actual test management tools that combine some tools should also be mentioned. qTest and PractiTest are examples: qTest has integrations with issue tracking tools and the ability to start tests as well, and it can present the results in different ways with graphs and lists [23]. PractiTest also has the issue tracking connection and the ability to run the tests [12]. Both of these tools have a subscription cost, are not open source and run the risk of tying yourself to a third-party software as well. But the ability to have tests as issues in connection with the test management tool is a feature that is available and wanted.

There are a lot of applications that can run the tests and produce results. Mentioned above are applications that are connected to a database to store the results, applications that set up automatic tests, and test management applications with issue tracking and test specifications. But there is not one that gives a clear and easy path to see all the test results from all the various sources in connection with issue tracking, version control, project management and team communication; not one that connects all the above-mentioned tools and environments.


3 Methodology

The methodology of this thesis and how everything was done is described in this section. The process is broken down into a few parts. In the research part, the problem is researched by looking up relevant scientific articles and relevant tools used by people in the field. The implementation part gathers what is known from the research and what the wanted result of the thesis is, and implements what is needed with agile development. Then there is an evaluation at the end of whether the implementation is a successful solution to the problem or not.

3.1 Research

Researching related work, tools and development decisions for the implementation was done continuously. Looking for tools was done early on to see what is being used by other people and what the desired functions seem to be. Reputable sources were sought out with various search engines to ensure quality and reliable information that can be used and expanded on. The popular tools researched were inspired by the tools that Spotin AB uses and related tools with similar functions.

3.2 Agile development

An agile development method based on the functional requirements was used. The desired results of the project were written down as requirements and then developed in parts with an agile method called Agile Kanban [1]. It means continuously creating issues for whatever needs to be done and putting them on an Agile Kanban board. This gives the developer a clear view of what is being done and what needs to be done in the future.

3.3 Evaluation

The evaluation of the solution was done through a questionnaire. A group of people was selected; they used the created solution and evaluated it. The selected group was the employees in the Luleå office at Spotin AB. The group only has 4 members, but with their experience as developers and experience in the field their opinions carry a lot of weight. The questions were made as relevant as possible so that the answers could be analyzed to see if the implementation had been a successful solution to the problem. The questions asked were of the type where the participants rate the usefulness or functionality of a feature from 1 to 5. Having measurable values from the questionnaire results creates the opportunity to discuss how well the solution met the problems.


4 Implementation

This section covers the system architecture that runs the tests at Spotin, as well as the setup of the solution. The sections after the setup go into depth about the actual features that were implemented as part of the solution, how they were implemented and the decisions behind them.

4.1 System architecture

The system architecture for running tests can be seen below in Figure 1. Since GitLab runners execute the tests and return the test results, as seen in Figure 1, the test result data needs to be fetched from GitLab.

Figure 1: (Spotin 2019) Overview of system running tests

Getting the test results, putting them into the Redmine database and then visualizing them in the plugin is the overall picture seen in Figure 2.


Redmine is an open source project management and issue-tracking web application. It is a commonly used system, is also used by Spotin AB and is the one chosen to build the prototype with. It is open source, with documentation to use and follow, and it is built using the Ruby on Rails framework. It supports multiple languages and databases; here it is set up with MySQL, which means that is the database the test results will be stored in. Redmine's ability to host plugins makes it possible to connect the projects and issues to the test results more directly, in the same web application [13].

The intended solution is to bridge the various test results coming from different sources and also have a connection with the variety of tools used by the developers. The solution is a plugin which is set up using the Ruby on Rails framework and regular web development languages such as HTML, CSS and JavaScript.

Figure 2: GitLab and Redmine connection overview

4.2 Setup

Setup of the Redmine plugin was done following the Redmine developer guide using Ruby on Rails [14].

After the GitLab runners have executed the tests and produced results, the results must be stored in the database. They are stored in the database for searchability and to have a history of the test results. The results come in folders that contain XML files with the results of the tests, or a folder structure of generated HTML files for the coverage tests.

A decision was made to strip the necessary content from the XML files and insert that into the database, which gives the plugin the data it needs to showcase the results. Since the database is a relational database with a schema, just adding the XML directly to it did not seem to be the right decision. The database tables made for this and the data needed can be seen in Figure 3. Each XML file contains the results from a test suite and, within the same file, data from the individual test cases. Each XML file is therefore split into two different tables called TESTSUITES and TESTCASES, linked together with a foreign key inside the test case table, which can also be seen in Figure 3. Also shown is the UNITTESTS table, which stores the unit test results. A rough sketch of what such a schema could look like is shown below.
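The following is a minimal sketch, not the plugin's actual migration, of how such a schema could be expressed in Ruby on Rails. The thesis names the tables TESTSUITES, TESTCASES and UNITTESTS; the sketch uses Rails naming conventions instead, and the exact columns are assumptions based on Figure 3.

class CreateTestResultTables < ActiveRecord::Migration[5.2]  # Rails version bracket depends on the Redmine installation
  def change
    # One row per executed test suite (one report per browser/device run).
    create_table :test_suites do |t|
      t.string   :name
      t.string   :browser
      t.string   :version      # GitLab commit version
      t.datetime :timestamp
    end

    # Individual test cases, linked to their suite with a foreign key.
    create_table :test_cases do |t|
      t.references :test_suite, foreign_key: true
      t.string     :name
      t.string     :status     # passed, skipped or failed
    end

    # Aggregated unit test results.
    create_table :unit_tests do |t|
      t.integer  :passed
      t.integer  :skipped
      t.integer  :failed
      t.datetime :timestamp
    end
  end
end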

There is then a function inside the Ruby on Rails controller in the plugin that goes through the downloaded folder for every XML file. It scans through them for the relevant data that is to be stored in the database according to the database tables in Figure 3. A sketch of that parsing step follows after Figure 3.

Figure 3: MySQL workbench - tables for the test data
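The sketch below illustrates how such a parsing and storing step can look; it is not the author's actual plugin code. The downloaded folder is walked, each JUnit-style XML report is read with Ruby's standard REXML library, and the data is written through assumed Active Record models matching the tables in Figure 3.

require 'rexml/document'

def import_results(folder)
  # Walk the downloaded folder for every JUnit-style XML report.
  Dir.glob(File.join(folder, '**', '*.xml')).each do |path|
    doc = REXML::Document.new(File.read(path))

    doc.elements.each('//testsuite') do |suite_el|
      # One suite row per report; attribute names are assumptions.
      suite = TestSuite.create!(
        name:      suite_el.attributes['name'],
        browser:   suite_el.attributes['browser'],   # assumed; depends on how the reports label the device
        timestamp: suite_el.attributes['timestamp'],
        version:   suite_el.attributes['version']    # assumed to carry the GitLab commit version
      )

      suite_el.elements.each('testcase') do |case_el|
        # Derive a passed/skipped/failed status from the child elements of <testcase>.
        status = if case_el.elements['failure']
                   'failed'
                 elsif case_el.elements['skipped']
                   'skipped'
                 else
                   'passed'
                 end

        # Case row linked back to its suite with a foreign key, as in Figure 3.
        TestCase.create!(test_suite_id: suite.id,
                         name:          case_el.attributes['name'],
                         status:        status)
      end
    end
  end
end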

4.3 Visualization

Because of the many different test results there are different HTML tabs; in this case there is a functional, a unit and a coverage tab. HTML tabs were chosen because of their simplicity and because they are an already built-in function in HTML. All of them present their results in the way that was chosen for them.


4.3.1 Functional

The functional tab is the first of the three and also holds the primary results. It presents multiple different tests in various ways.

Figure 4: Screenshot of the whole functional HTML tab with a test suite expanded and filters selected.

• Different presentations: Having test results come from multiple different devices means that they are all shown as seen in Figure 4. The results are listed with the issue name, which is a link to the corresponding test suite/case in Redmine, the name of the test and the most recent result for each individual browser and device. This gives the user of the plugin a quicker way to see the results of each test compared to the previously mentioned ways in the theory. In the default view they are listed using a simple HTML table with one column for every browser; an HTML table is used because it is a quick and easy way. The results are queried from the database and shown with icons and the colors green, grey and red if they passed, were skipped or failed, or no color if there were no results at all.

What Figure 4 also shows is that when a test suite is clicked on it expands and shows the results for every test case in all the browsers in the same way as the test suite. It has a link to its issue page, its name and the results from the respective browser.

Graphs are very beneficial in showing data [17], and many of the test related tools that show test results also use graphs. Graphs give the ability to observe trends in the test results over time, so they are implemented in this plugin as well. Chart.js is the chosen JavaScript library to draw out the data with charts. It is open source with thorough documentation to follow, which makes it a great tool to use in this plugin [3]. The charts used are the doughnut chart and the line chart, as seen in Figure 5 and at the top of Figure 4. A query is made to the database to calculate the number of tests that have passed, been skipped and failed. Ruby on Rails with Active Record gives an easy method called count to calculate the amounts, and the results from the query are then used in the doughnut chart to present them in a clear way. The line chart gives a historical presentation of how the amounts of passed, skipped, failed and total results have changed over time; the x-axis shows the version number of the GitLab commit and the date, which makes it possible to show trends over time. This is possible because, as seen in Figure 3, the timestamps and version numbers are saved in the table and are then used to sort the data over time in the line graph. A sketch of how these queries can look is shown after Figure 6.

Because Redmine is a project management application, each test has been created as an issue. Each test issue describes the test and shows its history of creation and changes. With a Redmine hook [14] inside the test issue page, the results of each test suite or test case are showcased similarly to the plugin page: an HTML table, but with only that specific test and the results for every browser. The link from the plugin page makes this page easy to access, and showing the results for that specific test means the user does not have to go back to the plugin page to see them.

• Filters: Because of the wide variance of the test results, with test suites and test cases, filtering as an option to show everything or just what you want is beneficial. As seen in Figure 6 it is possible to filter on the individual browsers and on test suites or test cases. The HTML table will be updated according to the filters, and so will the graphs, as seen in Figure 5. The browser sends a POST request with the selected filters to Ruby on Rails, which then makes queries based on them if any are selected (see the sketch after Figure 6).

There is also a historical view in the filter options. It queries the database and shows all the results grouped by test name, giving more information than the regular list, since the default list only shows the most recent result for each browser. The historical view shows all the data, such as timestamp, version number and the result of the test, in the same way as the default view. Together with the historical view there are filter options to show only passed, skipped or failed tests.


Figure 5: Screenshot of the graphs seen in the functional test tab

Figure 6: Screenshot of the filter options
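As a rough illustration of the querying described above, the sketch below shows how a plugin controller could receive the POSTed filters, narrow the query accordingly and produce the counts used by the doughnut chart as well as per-commit totals for the line chart. The controller, model, column and parameter names are assumptions based on Figures 3 and 6, not the plugin's actual code.

class TestResultsController < ApplicationController
  def index
    # Assumes TestCase belongs_to :test_suite, matching the foreign key in Figure 3.
    scope = TestCase.joins(:test_suite)

    # Narrow the query by the browsers and suites selected in the filter form.
    scope = scope.where(test_suites: { browser: params[:browsers] }) if params[:browsers].present?
    scope = scope.where(test_suite_id: params[:suites])              if params[:suites].present?

    # Counts for the doughnut chart, using Active Record's count method.
    @passed  = scope.where(status: 'passed').count
    @skipped = scope.where(status: 'skipped').count
    @failed  = scope.where(status: 'failed').count

    # Totals per GitLab commit version for the line chart, in version order.
    @per_version = scope.group('test_suites.version').order('test_suites.version').count
  end
end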

4.3.2 Unit

The unit tab is the second one. Unit tests are also run, which creates results as well. The tab is there for overall statistical analysis.

• Presentation: The unit test results are presented with a doughnut chart, as shown in Figure 7. The amounts of passed, skipped and failed tests are queried and calculated in the same way as in the functional tests tab. There is no need to present a list of the individual unit tests since problems with those tests are fixed immediately.


Figure 7: Screenshot of the unit test results tab

4.3.3 Unit coverage

The coverage tab is the third and last tab. It covers the last type of test results available from Spotin that are useful to show.

• Presentation: As previously mentioned, some test programs produce HTML reports of their results. HTML reports for specific tests, such as the functional tests, do not give a broad or historical view, but for the coverage test linking to the generated HTML file is a good option. There is a whole folder structure of different HTML files that are generated to show the coverage test results [11].

The implementation is an HTML iframe which links to the generated HTML file and presents it. Since the whole coverage test contains multiple HTML files, the whole folder is saved on the Redmine web server and the source of the HTML iframe is pointed to the index.html file in the folder. This gives the plugin the ability to navigate through the whole folder, as seen in Figure 8, with access to everything the coverage test has to offer.


Figure 8: Screenshot of the coverage test results tab

4.4 APIs

The APIs used are the GitLab and Slack APIs. As previously explained, Spotin uses GitLab runners to execute the tests and return the results. How the connection between Redmine and GitLab works is described in this section, followed by the connection to Slack and its implementation.

4.4.1 GitLab

The plugin can fetch the test result data through an API. GitLab has an API that makes this possible via the CI pipeline that runs the tests. When the tests are done, an after-script makes it possible for the plugin to download the results through a link generated by the script. To be able to access the results and use the link, a token is needed. Together with the link, the token is then used to get the data until a new token is generated [4]. A sketch of such a download step is shown below.
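Below is a minimal sketch of how such a download could be done from the plugin; it is illustrative, not the plugin's actual code. The artifacts URL stands for the link generated by the after-script and the token value is a placeholder; PRIVATE-TOKEN is the request header GitLab uses for access tokens.

require 'net/http'
require 'uri'

def download_results(artifacts_url, token, target_path)
  uri = URI(artifacts_url)

  Net::HTTP.start(uri.host, uri.port, use_ssl: uri.scheme == 'https') do |http|
    request = Net::HTTP::Get.new(uri)
    request['PRIVATE-TOKEN'] = token   # token required to access the results

    http.request(request) do |response|
      raise "Download failed: #{response.code}" unless response.is_a?(Net::HTTPSuccess)

      # Stream the result archive to disk; it is unzipped and parsed afterwards.
      File.open(target_path, 'wb') do |file|
        response.read_body { |chunk| file.write(chunk) }
      end
    end
  end
end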

4.4.2 Slack

Slack is a widely popular communication tool used by companies. Slack also has very good documentation for their API that gives guidance to create applications that can receive data from outside applications [16].

A bot was added to a channel in Slack. The bot provides a webhook link that outside applications can send POST requests to. The POST request can send anything in JSON format; in this case the plugin sends a POST message with the total amounts of passed, skipped and failed tests, a link to the plugin page and the current date. It gives the development team an opportunity to share the results in their communication tool, in this case Slack, without everyone having to go to the plugin page.

The plugin users can press a simple HTML button which sends the POST request to the Slack channel through a JavaScript function; a sketch of the webhook call itself is shown below.
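A minimal sketch of the webhook call follows, written in Ruby for consistency with the other sketches even though the plugin triggers it from a JavaScript function in the browser. The webhook URL stands for the incoming-webhook link produced for the Slack bot, and Slack expects a JSON body with a text field; the helper name and message wording are illustrative.

require 'net/http'
require 'uri'
require 'json'
require 'date'

def notify_slack(webhook_url, passed, skipped, failed, plugin_url)
  # Message with the totals, a link to the plugin page and the current date.
  payload = {
    text: "Test results #{Date.today}: #{passed} passed, " \
          "#{skipped} skipped, #{failed} failed. Details: #{plugin_url}"
  }

  uri = URI(webhook_url)
  Net::HTTP.post(uri, payload.to_json, 'Content-Type' => 'application/json')
end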


5 Results

This section covers the implementation results: the overall architecture and the connections. It also covers the evaluation, which is presented here as clearly as possible.

5.1 Architecture results

The overall system architecture can be seen in Figure 9, with the GitLab API connection, the Slack API connection and the Redmine plugin that can be used for test management.

The functionality of the plugin, and how it works together with the APIs, has been shown in the implementation section.

Figure 9: GitLab, Slack and Redmine connection overview

5.2 Evaluation

The plugin was put live on the company's Redmine server for the developers to use and evaluate. Since they are developers, they have the knowledge and experience to test the validity of the plugin. A questionnaire was given to the developers after they had had time to test the plugin, and the answers are shown here. The questions can be seen in Appendix A.

Figure 10 and Figure 11 contain the results on whether the developers found the direct connection between the plugin and the issue-tracking system beneficial. The results show that 75% gave the highest score in the first question and 100% in the second. The responses show how successful the plugin is in bridging an issue-tracking tool and test results.

Figure 10: Issue-tracking connection with test results

Figure 11: Was the link between the issue page and test results beneficial?

Responses were needed to figure out if the solution was successful in bridging multiple tools. Figure 12 shows whether the developers found the connection to Slack, a communication tool, beneficial. The opinions here are more mixed: 50% gave the answer 4 and the rest are split between answering 3 and 5.


Figure 12: About feedback or notifications to Slack from the plugin

Figure 13 shows the results that answer whether the test data coming from multiple sources, such as different browsers and operating systems, is easy to understand in the way it is presented in the plugin. The responses were highly rated, with 75% giving an answer of 5.

Figure 13: Is it easy to understand the presentation of the various results?

Figure 14 asks about filtering. The answers are very mixed, with the highest value being 4, chosen by 50%. Answers of 2 and 3 were also given.


Figure 14: How beneficial were the filters?

The graphs and their representation of the data are then asked about in Figure 15 and Figure 16. Here the highest value was 5, which 50% of the participants chose in both questions. This shows a strong liking for graphs, with some mixed feelings, since the remaining answers were split between 2 and 4.


Figure 16: Is having a line chart that shows growth of the various test results good?

Questions about the historical view of the test results, instead of just the recent data, were asked and the answers are shown in Figure 17 and Figure 18. Neither reached a top score of 5, but both had high percentages of answers of 4, with 100% and 50% respectively.


Figure 18: Beneficial value of timestamps in the historical view

The only thing shown in the unit test tab is a doughnut chart, which is also asked about. 50% gave it a 5, as shown in Figure 19.

Figure 19: Usefulness of doughnut chart in unit test tab

Figure 20 and Figure 21 ask about the coverage tab, where the coverage test is linked right into the plugin with an HTML iframe. The participants strongly liked it, with 50% and 75% giving an answer of 5. The lowest value chosen was 3.


Figure 20: iframe connection in the coverage tab


6 Discussion

The implementation of the plugin is working and used on Spotin's Redmine server. The functioning plugin and the highly rated responses from the questionnaire show that the plugin is useful and successfully bridges the issue tracking system, the test results and various other tools used in the development process.

75% and 100% of the participants in the related questions said that the connection between the test results and the issues in Redmine was beneficial. Having them directly linked and showing the results both on the plugin page and on the issue page was clearly a good choice.

The much more mixed response to the Slack API connection is probably because, when the questionnaire was made, the test results stayed the same. Getting notifications or messages from the Slack bot with the same data is just unnecessary spam in the Slack channels. Having the plugin run over a longer time would probably make it easier to get more accurate data on the Slack connection. Comments were also made that only sending results from the plugin is not enough, and that the messages should contain something of more importance.

Displaying the various test results also gave highly rated responses. That 25% answered 3 both on how beneficial the presentation of the results was and on the filters was probably due to the design. The design does not show which filters need to be combined in order to work; for example, the passed filter only works together with history, and history only works when browsers have been selected. During the evaluation it was unclear in which way the users should click, but it was fixed afterwards. Questions on how the filters functioned to display the correct data came up during testing, and 25% gave an answer of 2 on the filters as well, showing a mixed opinion. But the overall numbers are high, and adding a better visual design to the filters and the HTML tables that present the data would probably push up the numbers even more.

The graphs that show the percentages, and the growth over time in the line chart, were very well received, with 50% answering 5 in both questions related to graphs. Having a graphical view that can show percentages, a historical view and growth is therefore very beneficial according to the results. But the live test results in the plugin only contained results from one day, so showing the growth from a single day is not really relevant for analysis. Here again, a longer test run with more data would have been more beneficial and could maybe have given an even higher response. 50% also gave an answer of 5 for the doughnut chart in the unit test tab, with 25% answering 3. The difference might be explained by the fact that it either showed nothing at all or showed 100% of the tests passing during the users' experience with the plugin.

The historical view in the graphs was also accompanied by a filter option showing tests grouped by name and timestamp. Nobody gave the answer 5, but the majority of the participants, 100% and 50%, answered 4 both on the benefit of showing all the results in the way it did and on the timestamps. As mentioned earlier, the design of the filter options seemed to be mildly confusing, since questions came up during the testing. Still, even though the visual design might be poorly done, it is a very strong result. It shows a big want and need for historically analyzing the test results. There was also 25% answering 2 on the benefit of timestamps. This is possibly because of the narrow question, since it is not the timestamps alone that make the view useful; it is all the information, including the version number from GitLab. Since every commit has a version number, combining the timestamps and the version numbers gives a very good presentation of the results over time.

The coverage tab inside the plugin gave a very positive response, with 50% and 75% answering 5 in the questions related to it. Connecting it through an iframe gave the users everything they need from the test. It shows everything and can easily be replaced by a new coverage test.

6.1 Method discussion

The method of researching and continuously developing worked well. Researching all the tools was hard; there was no time to test all of them, so the comparisons could have been better and more deeply connected. But overall the method was good, and it gave quite a solid ground to stand on when developing the plugin.

The implementation could have gone smoother if more in-depth research had been done about Ruby and the Ruby on Rails framework. I was new to both of them, so finding out how to do things correctly delayed things during the implementation.

Redmine was a good choice to stick with because, as previously mentioned, Redmine is open source and has good documentation on how to create a plugin. But since I was new to it, it was difficult; having more experience with Redmine would definitely help the state of the plugin.

Showing all the test results in various ways was fun to implement. Having a list view seemed to be a good choice according to the results, and using simple HTML tables was good and did not force me to use outside libraries to achieve the same effect. Having the plugin be part of a web application really gives a lot of easily accessible functionality, which is why creating a plugin with Redmine was great. Tools that generate the regular XML or HTML files are very restricted in showing data; when they are not open source you cannot present the data the way you or your company want.

I had some struggles choosing the correct JavaScript library for the graphs. I tried and looked around for relevant libraries just for Ruby on Rails, but they were much harder to use compared to Chart.js. I have previous experience with Chart.js, so the implementation was quick and easy. Which graphs to show was explained in the theory, which told us that the pie chart and the line chart are very good options for presenting the data we had from the test results. Exactly what to show in the graphs was also something to think about. To achieve a historical and analytical view, the final way of showing the commit numbers and the date on the x-axis of the line chart was a good way, I think.

Getting the test results from GitLab worked fine. GitLab has good documentation on its API, which can be set up so the test results can easily be downloaded as described in the implementation. Discussion on how to get the test results automatically came up during the implementation. Functions that can download the test results, unzip the files, scan through them, insert the data into the database and delete the files are implemented. But every time there is a commit the tests are run again and the link is redirected to the new test results, so downloading continually on every commit probably needs to happen in some way; the way the GitLab API and Redmine are set up creates some difficulties here. Any other tool that can give the same results in XML files in the same format would have worked as well from the perspective of the plugin.

Slack as well could have been more deeply connected. Any communication tool that allows HTTP requests could receive messages from the plugin, which is a good thing, but the connection could have been deeper with more functions in Slack. The decision to only send status updates with the amounts of passed, skipped and failed tests was fine, but as mentioned above, with little variance in the test results it is not really necessary.

The evaluation could have used more people for the questionnaire to give more answers; more participants would have given better statistics. It is quite a risk to have so few people answering the questionnaire, 4 in this case, and having more answers would have given a more solid confirmation of the validity of the plugin. The number of questions in the questionnaire could also have been higher, and clearer questions would have helped as well, to know more directly what each question is about without having to make assumptions. But I do say that the experience of the users behind the current answers gives them very solid legitimacy.

6.2 Ethics

The data used by the plugin, the test results, may be sensitive to handle. Showing how much of the code is working, now and over time, is perhaps sensitive data. The graph presents the test results for every commit, which shows the quality of every employee's code to everyone else with access. Outside interests, such as competitors, could maybe take advantage of it. Since Redmine requires a username and password for access it is acceptable, but handling the data outside of the Redmine plugin might be risky.

The large number of tests being developed on multiple platforms, for multiple platforms, and then run with different techniques creates a need to verify quickly and well. It is ethically and morally important for the developers to deliver what has been promised; they can declare with good confidence what works and what does not work if tests are being run and their results are used in the best way possible. Taking all the tools that are being used by the developers and combining them to aid test management is therefore morally important.

6.3 Future work

Everything in the plugin can be developed further. Adding more test results from even more sources could give a very broad analytical view. More filters could be added to see exactly what you are looking for and nothing else, both for the current test results and for future ones. Giving a better design to the lists and filters would be very beneficial.

The current plugin API connections with GitLab and Slack could also be expanded. Having more structured API calls with GitLab, to communicate when the test results are done and can be fetched, would help the users get the data into the plugin quicker. Having the plugin actually start the tests could be very beneficial in test management; as seen in the related work section, the programs that can start the tests are also the ones that present the results. Having this implementation with the ability to start the tests through API calls would perhaps make it a very marketable product.

Perhaps a more analytical view of the data could be very useful. There is all this data from different sources over time, and by using modern techniques such as machine learning a lot of the data could be processed and learned from.

6.4 Conclusion

In conclusion, this thesis set out to find whether it is possible to create an application that presents the test results in a clear way and that is also merged together with the various tools used by the developers.

Researching the area and its commonly used tools gave a solid background for implementing a prototype. A plugin inside Redmine with various features was created as a solution. The plugin presents the test results and has a connection with GitLab and Slack. A questionnaire was given to experienced developers to evaluate the validity of the plugin.

The functioning plugin in a live environment and the questionnaire results gave quite definitive answers that it is possible to create an application that can present the test results inside an issue tracking tool. 75% of the participants, when asked how beneficial the connection is, gave a solid 5 as their answer.


The good results from the questionnaire also showed that the presentation of the test results was very good, both the recent results from multiple different sources and the results over time. Presenting them in lists and graphs gave numbers such as 50% answering 5 in related questions, and even 100% answering 4 in the question about the historical view. Highly rated responses, with 4s and 5s in all of the questionnaire answers, gave solid ground to conclude that the needed results were achieved.

The results also showed that there is work to be done on the design, on adding more filters, and on expanding the APIs, such as the GitLab API, to be able to start the tests and grow the plugin as a test management tool.


References

[1] Atlassian, What is a kanban board?, https://www.atlassian.com/agile/kanban/boards, Visited (2019-04-07)

[2] Blazemeter, Jmeter and Performance testing, https://www.blazemeter.com/, Visited (2019-04-10)

[3] Chartjs, Open source HTML5 charts for your website, https://www.chartjs.org/, Visited (2019-04-10)

[4] GitLab, GitLab API, https://docs.gitlab.com/ee/api/, Visited (2019-04-24)

[5] GitLab, A full DevOps tool, https://about.gitlab.com/, Visited (2019-04-15)

[6] Guru99, Test management, https://www.guru99.com/test-management-phases-a-complete-guide-for-testing-project.html, Visited (2019-05-13)

[7] Junyong In, Sangseok Lee (2017), Statistical data presentation, https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5453888/, Visited (2019-05-13)

[8] IDC, IDC's world developer consensus, https://www.idc.com/getdoc.jsp?containerId=US44363318, Visited (2019-04-24)

[9] JSUnit, Java Script Testing Tool, http://www.jsunit.net/, Visited (2019-04-10)

[10] JMeter, Apache JMeter, https://jmeter.apache.org/, Visited (2019-04-10)

[11] PHPUnit, Code Coverage Analysis, https://phpunit.de/manual/6.5/en/code-coverage-analysis.html, Visited (2019-04-15)

[12] PractiTest, QA test management tools that works for you, https://www.practitest.com/, Visited (2019-05-13)

[13] Redmine, Redmine is a flexible project management web application, https://www.redmine.org/, Visited (2019-04-01)

[14] Redmine, Redmine Plugin tutorial, http://www.redmine.org/projects/redmine/wiki/Plugin_Tutorial, Visited (2019-04-01)

[15] Selenium, Selenium - Web Browser Automation, https://www.seleniumhq.org/, Visited (2019-04-10)

[16] Slack, Slack API, https://api.slack.com/, Visited (2019-04-24)

[17] David J. Slutsky (2014), The effective Use of Graphs, https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4078179/, Visited (2019-04-10)

[18] Techopedia, Software Development Life Cycle, https://www.techopedia.com/definition/22193/software-development-life-cycle-sdlc, Visited (2019-05-13)

[19] Tryqa, Why is tests needed?, http://tryqa.com/why-is-testing-necessary/, Visited (2019-04-24)

[20] Tryqa, What is Functional testing, http://tryqa.com/what-is-functional-testing-testing-of-functions-in-software/, Visited (2019-04-24)

[21] Tryqa, What is Unit testing, http://tryqa.com/what-is-unit-testing/, Visited (2019-04-24)

[22] Tryqa, What is Test Coverage, http://tryqa.com/what-is-test-coverage-in-software-testing-its-advantages-and-disadvantages/, Visited (2019-04-24)

[23] TRICENTIS qTest, Take The Pain Out Of Test Case Management, https://www.qasymphony.com/software-testing-tools/qtest-manager/test-case-management/, Visited (2019-05-13)

[24] W3Counter, Browser & Platform Market Share April 2019

Appendices

A
