Distributed Queries: An Evaluation of the Microservice Architecture


Master’s thesis, 30 ECTS | Information Technology 2020 | LIU-IDA/LITH-EX-A--20/019--SE

Distributed Queries:

An Evaluation of the

Microservice Architecture

Jesper Holmström

Supervisor: John Tinnerholm
Examiner: Daniel Ståhl


This document is made available on the Internet, or its future replacement, for a period of 25 years from the date of publication, barring exceptional circumstances.

Access to the document implies permission for anyone to read, download, and print single copies for personal use, and to use it unchanged for non-commercial research and for teaching. Subsequent transfers of copyright cannot revoke this permission. All other use of the document requires the author's consent. To guarantee authenticity, security, and accessibility, there are solutions of a technical and administrative nature.

The author's moral rights include the right to be named as the author, to the extent required by good practice, when the document is used as described above, as well as protection against the document being altered or presented in such a form or in such a context as is offensive to the author's literary or artistic reputation or character.

For additional information about Linköping University Electronic Press, see the publisher's home page: http://www.ep.liu.se/.

Copyright

The publishers will keep this document online on the Internet - or its possible replacement - for a period of 25 years starting from the date of publication barring exceptional circumstances.

The online availability of the document implies permanent permission for anyone to read, to download, or to print out single copies for his/her own use and to use it unchanged for non-commercial research and educational purposes. Subsequent transfers of copyright cannot revoke this permission. All other uses of the document are conditional upon the consent of the copyright owner. The publisher has taken technical and administrative measures to assure authenticity, security, and accessibility.

According to intellectual property law, the author has the right to be mentioned when his/her work is accessed as described above and to be protected against infringement.

For additional information about the Linköping University Electronic Press and its procedures for publication and for assurance of document integrity, please refer to its www home page: http://www.ep.liu.se/.


Dissertation for Master's Degree
(Master of Engineering)

Distributed Queries:
An Evaluation of the Microservice Architecture

Jesper Holmström

June 2020


Chinese Library Classification: TP31    University Code: 10213
UDC: 681    Security Level: Public

Dissertation for the Master's Degree in Engineering
(Master of Engineering)

Distributed Queries:
An Evaluation of the Microservice Architecture

Candidate: Jesper Holmström
Supervisor: A. Prof. Tonghua Su (HIT)
Associate Supervisor: A. Prof. Daniel Ståhl (LiU)
Industrial Supervisor: Peter Halvarsson, Software Engineer
Academic Degree Applied for: Master of Engineering
Specialty: Software Engineering
Affiliation: School of Software
Date of Defence: September 2020
Degree-Conferring Institution: Harbin Institute of Technology

Classified Index: TP311

U.D.C: 681


Dissertation for the Master's Degree in Engineering

Distributed Queries:
An Evaluation of the Microservice Architecture

Candidate: Jesper Holmström

Supervisor: A. Prof. Tonghua Su

Associate Supervisor: A. Prof. Daniel Ståhl
Prof. Kristian Sandahl
Doc. Candidate John Tinnerholm

Industrial Supervisor: Peter Halvarsson

Academic Degree Applied for: Master of Engineering

Specialty: Software Engineering

Affiliation: School of Software

Date of Defence: September 2020


Abstract (Chinese)

The microservice architecture is a new architectural style that, in contrast to the traditional monolithic approach with a single executable, structures an application as a set of small, independently deployable microservices. The microservice architecture is a distributed system, which brings new challenges and increased complexity. This thesis extends previous related research and investigates the implications of using the one-database-per-service pattern, as well as a solution to the resulting need for queries spanning multiple microservices. Two applications are presented, one with a microservice architecture and one monolithic counterpart, which are compared in terms of response time and throughput, with the API composition pattern chosen as the solution for distributed queries. The experimental results contribute to a deeper understanding of the difficulty of distributed queries and of the benefits and limitations of the API composition pattern, and show that the API composition pattern is a valid solution for distributed queries. However, the API composition pattern does perform worse than the monolithic prototype in terms of response time and throughput; the resulting insight is that, when applying the API composition pattern and the microservice architecture, the choice must be weighed carefully against the requirements of the system.

Keywords: microservices; distributed queries; API composition pattern; software architecture


Abstract

The microservice architecture is a new architectural style that structures an application into a set of small, independently deployable microservices, as opposed to the traditional monolith approach with a single executable. The microservice architecture is a distributed system, which results in new challenges and increased complexity. This study expands the previous related research and investigates the implications of using the one-database-per-service pattern and a solution to the resulting need for queries spanning multiple microservices. In this thesis, two applications are presented, one with the microservice architecture and one monolithic counterpart, which are compared in terms of response time and throughput. As a solution for the distributed queries, the API composition pattern was chosen. The results of the experiments provide a greater understanding of the difficulty of distributed queries as well as of the benefits and limitations of the API composition pattern. They show that the API composition pattern is a valid solution for distributed queries. However, it does perform worse in terms of response time and throughput than the monolith prototype. This leads to the insight that one must carefully choose, with respect to the requirements of the system, when to apply it.

Keywords: Microservices; Distributed Queries; API composition pattern; Software Architecture


Acknowledgments

I would like to thank Peter and all the people at Zenon for giving me the opportunity to conduct this thesis and for all the guidance received. I would like to extend my thanks to my examiner Daniel Ståhl for continuous support and helpful advice, and the same thanks to my supervisor John Tinnerholm for all the help and laughs. I would also like to thank Oscar Andell for all the discussions, my opponent Andreas Lundquist for providing great feedback on my work, and Wei Zheng for the translations. A final thanks to all of the people who have made all the extensive tryharding during the last five years easy.


Glossary

ACID – Atomicity, Consistency, Isolation, Durability – A set of attributes that help to ensure that database transactions are reliable.

AMQP – Advanced Message Queuing Protocol – Open standard protocol used in client/server messaging and Internet-of-Things devices.

API – Application Programming Interface – A specification consisting of protocols and routines. It specifies how components or software systems should communicate.

Back/front-end – A separation of concerns within the system, where the back-end is the data processing layer feeding data to the front-end (presentation layer).

CPU – Central Processing Unit – Electrical circuit and a core component of a computing device. It executes the instructions of a computer program.

DBMS – Database Management System – Software interacting with the database, the users, and the system.

DDD – Domain-Driven Design – An alternative approach to software development for complex needs, in which the implementation is connected to an evolving model.

DevOps – Development Operations – A set of tools and strategies that automate processes between software development and IT teams, increasing an organization's ability to deliver applications and services more frequently and reliably.

HTTP – HyperText Transfer Protocol – Protocol defining messages and transmissions, used by the World Wide Web.

IDE – Integrated Development Environment – A software application that provides the set of tools a developer needs for development as a single application, making development easier.

JSON – JavaScript Object Notation – An open standard file format commonly used for sending data between an application and a server.

Microsoft Azure – Cloud hosting and computation platform.

ODM – Object Document Mapping – Schema-based library used for modeling application data.

ODBPS – One-Database-Per-Service – Design pattern for databases within the microservice architecture where each database is private and can only be accessed by the microservice that owns the data.

REST – Representational State Transfer – An architectural style that provides interoperability between computer systems on the Internet. RESTful web services allow for requesting and modifying resources within the service.

SOA – Service-Oriented Architecture – Enterprise-wide software architecture that provides functionality as services; closely related to the microservice architecture, which is more in the scope of a single application.

SUT – System Under Test – The system that is currently being tested.

UI – User Interface – The place of interaction between human and computer.

ZenApp – The working name for the application developed in this thesis.


Table of Contents

ABSTRACT (CHINESE)
ABSTRACT
ACKNOWLEDGMENTS
GLOSSARY
CHAPTER 1 INTRODUCTION
    1.1 BACKGROUND
        1.1.1 Zenon AB and the future ZenApp
    1.2 THE PURPOSE OF THE PROJECT
    1.3 THE STATUS OF RELATED RESEARCH
    1.4 DELIMITATIONS
    1.5 MAIN CONTENT AND ORGANIZATION OF THE THESIS
CHAPTER 2 THEORY
    2.1 MONOLITHIC ARCHITECTURE
        2.1.1 Benefits and drawbacks
    2.2 MICROSERVICE ARCHITECTURE
        2.2.1 Characteristics of the microservice architecture
        2.2.2 Building microservices
        2.2.3 Data management in microservices
    2.3 DATA JOINING ALGORITHMS
    2.4 BENCHMARKING
    2.5 USABILITY, FEEDBACK, AND RESPONSE TIMES
    2.6 EMPIRICAL RESEARCH IN SOFTWARE ENGINEERING
CHAPTER 3 SYSTEM REQUIREMENT ANALYSIS
    3.1 THE GOAL OF THE SYSTEM
        3.1.1 High-level system design
    3.2 REQUIREMENTS
        3.2.1 The functional requirements
    3.3 BRIEF SUMMARY
CHAPTER 4 SYSTEM DESIGN
    4.1 SYSTEM ATTRIBUTES SELECTION
    4.2 MONOLITHIC ARCHITECTURE AND DESIGN
    4.3 MICROSERVICE ARCHITECTURE AND DESIGN
    4.4 SEQUENCE DIAGRAMS
        4.4.1 Requesting the user object
        4.4.2 Checking a user's subscriptions
    4.5 BRIEF SUMMARY
CHAPTER 5 SYSTEM IMPLEMENTATION
    5.1 THE ENVIRONMENT OF THE SYSTEM IMPLEMENTATION
        5.1.1 Deployment environment
    5.2 SCOPE OF IMPLEMENTATION
    5.3 KEY PROGRAM FLOWCHARTS
    5.4 BRIEF SUMMARY
CHAPTER 6 METHOD
    6.1 INFORMAL LITERATURE PRE-STUDY
    6.2 EXPERIMENT GOAL & HYPOTHESES
    6.3 PERFORMANCE EXPERIMENTS
        6.3.1 Experiment context
        6.3.2 Experiment setup
        6.3.3 Metrics
        6.3.4 Pre-experiment
        6.3.5 Experiments
CHAPTER 7 RESULTS
    7.1 EXPERIMENT #1
    7.2 EXPERIMENT #2
    7.3 EXPERIMENT #3
    7.4 HYPOTHESES
CHAPTER 8 DISCUSSION
    8.2 METHOD
        8.2.1 Reliability
        8.2.2 Threats to validity
    8.3 THE WORK IN A WIDER CONTEXT
CHAPTER 9 CONCLUSION
    9.1 FUTURE WORK
REFERENCES
STATEMENT OF ORIGINALITY AND LETTER OF AUTHORIZATION


Chapter 1 Introduction

1.1 Background

In today's rapidly developing software landscape, more and more large, complex systems are being built. Traditionally, companies have used monolithic architectures, with one single executable. As development progresses, new features are added, and the application continues to grow into a large, complex codebase [1, 2]. It can outgrow the capacity of the monolithic architecture and enter a state that some call “the monolithic hell” [2].

In this state, the monolithic architecture decays and many problems emerge. Versions of add-on software packages are dependent on other package versions, resulting in many build-time dependencies, a so-called “dependency hell” [3]. IDEs are overloaded because of the sheer codebase size. The large, complex codebase makes it hard for new co-workers to become productive, and the time to build the project is extended. If there is much traffic, scaling requires new instances of the entire application instead of just the bottleneck component. When a small component is updated, the entire application must be redeployed [2, 4]. As a result of all these problems, the production rate decreases, and the incorporation of continuous practices becomes hard [2].

A software architecture that has risen in popularity over the last decade is the microservice architecture [1, 5], partly as a response to the existing problems with the monolithic architecture [4]. The microservice architecture decouples components/modules into small microservices. These microservices can be independently updated, scaled, redeployed, and rewritten without interfering with the rest of the system [1]. Therefore, microservices make it easier to adopt continuous practices and DevOps, reducing time-to-market [5].

This architecture tries to solve many of the problems that exist with the monolithic architecture. However, several tasks that were trivial in a monolithic application become complex in the microservice architecture as a result of the nature of a distributed system [2, 6]. New challenges emerge, for example, distributed database transactions and queries, data persistence, latency between independent services, service discovery, and orchestration/choreography of microservices [2, 4, 7]. Some troublesome aspects, such as build-time dependency management, are reduced, but new dependencies emerge in runtime dependency management instead.

In many systems, the Database Management System (DBMS) is a core component, and a poor choice will negatively affect the entire system [8]. The choice becomes even harder in a distributed system, such as a microservice architecture.

To take advantage of the benefits the microservice architecture brings, it is important to keep the independence between microservices high. As a result, the data management design often follows the ODBPS pattern, “one-database-per-service” [9], meaning that the single database is split into multiple databases. With this pattern, a microservice's data is private and cannot be accessed directly by another microservice. Instead, the data can only be accessed through the provided network interface [7]. As a consequence, transactions and querying of data provided by several microservices are no longer straightforward and simple [2, 10-12]. In the perfect scenario, the splitting of the data source never results in a need to join the data together later on. However, this is seldom the case in a real-world scenario [10, 13].
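As a minimal sketch of what the pattern implies (the service classes, data, and method names below are invented for illustration and are not the thesis prototypes), each service owns its data privately, and a query spanning both can only be answered by going through their public interfaces:

```python
# Illustrative sketch of the one-database-per-service pattern.
# Each service owns a private data store (here a plain dict/list standing in
# for a real database) and exposes it only through a public interface; in a
# real system these methods would be REST endpoints.

class UserService:
    """Owns user data in its own private store."""

    def __init__(self):
        self._db = {1: {"id": 1, "name": "Alice"}}

    def get_user(self, user_id):
        # Public interface: the only way other services may read user data.
        return self._db.get(user_id)


class SubscriptionService:
    """Owns subscription data in its own private store."""

    def __init__(self):
        self._db = [{"user_id": 1, "service": "carwash"}]

    def get_subscriptions(self, user_id):
        return [s for s in self._db if s["user_id"] == user_id]


# A query such as "user 1 together with their subscriptions" can no longer be
# a single SQL join; the caller must compose results from both interfaces.
def user_with_subscriptions(users, subs, user_id):
    user = users.get_user(user_id)
    return {**user, "subscriptions": subs.get_subscriptions(user_id)}
```

The in-process method calls stand in for network requests; the essential point is that neither service can reach into the other's store directly.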

The research area of the microservice architecture is new, and this thesis aims to investigate the implications of distributed data from a querying perspective in a microservice architecture.

1.1.1 Zenon AB and the future ZenApp

This research is conducted in an industry context, as an investigation for a planned future application for the consulting company Zenon AB.

The purpose of the application, named ZenApp, is described as a subscription-based user-position application. The intended use of ZenApp is that a user should be able to subscribe to a service, and when this service is available and within a distance of interest, the user should be notified. As an example, a user subscribes to a carwash service and specifies how long a queue time they can tolerate, along with the distance of interest. The user then continues with daily activities. The app continuously tracks the position of the user, and when the user is within the distance of interest to the carwash, it checks the current queue time of the carwash. If the queue time is within the user's tolerance, the user is notified.
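The described check can be sketched as follows. This is a minimal illustration only; the function and field names are assumptions for this example and are not taken from the actual ZenApp implementation.

```python
# Hypothetical sketch of the ZenApp notification check described above.
# Field names and values are illustrative assumptions, not ZenApp code.

from dataclasses import dataclass


@dataclass
class Subscription:
    service_id: str
    max_queue_minutes: int          # queue time the user can tolerate
    distance_of_interest_km: float  # how close the service must be


def should_notify(distance_km: float, queue_minutes: int,
                  sub: Subscription) -> bool:
    """Notify only when the user is close enough AND the queue is acceptable."""
    within_distance = distance_km <= sub.distance_of_interest_km
    queue_ok = queue_minutes <= sub.max_queue_minutes
    return within_distance and queue_ok


# Example: a carwash subscription tolerating a 15-minute queue within 2 km.
sub = Subscription("carwash-1", max_queue_minutes=15,
                   distance_of_interest_km=2.0)
```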

Two prototype versions of the application were developed for the purpose of this thesis. The applications were carefully designed to isolate the architectural characteristics under investigation and to avoid unwanted complexity affecting the results. The functionality also helps to gain insight into how a fully featured future version of ZenApp could look.

1.2 The purpose of the project

The purpose of this project is to increase the knowledge about the microservice architecture in connection with distributed data management. Several related studies have been conducted in the area of performance benchmarking, where a monolithic application was compared to a corresponding microservice architecture counterpart. The related studies show that the response time is marginally higher in the microservice architecture than in the monolithic counterpart. The related work also shows that in a microservice architecture, when a single database is converted into a decentralized data management system, queries spanning multiple services are no longer trivial, and several new challenges appear. However, the previous work on performance benchmarking does not include multiple data sources within the microservice architecture, and thus only measures the response time as an effect of the increased network communication that originates in the decentralization of the system.

This thesis expands on this and includes another characteristic of the microservice architecture, namely distributed data management and the fact that data queries spanning multiple data sources are no longer straightforward.

These distributed queries increase network hops and traffic, but also introduce the need for joining data together, since a single microservice in some cases does not hold all the data needed to resolve the query. As a result, the performance, in terms of throughput and response time, is affected.

This thesis aims to investigate this gap in the research on distributed queries and the joining of data. This is done within a microservice setting, investigating what implications the distribution has on performance. The chosen query solution is the API composition pattern, and the join operation is done from an industry and engineering perspective with the use of a standardized join algorithm.
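To make this concrete, the sketch below shows an API-composition-style query resolved with a simple in-memory hash join. The service payloads are invented for illustration; the thesis' actual prototypes and join algorithm are described in later chapters.

```python
# Sketch of an API-composition query resolved with an in-memory hash join.
# The two "service responses" are invented JSON-like payloads; in a real
# composer they would come from HTTP calls to separate microservices.

def hash_join(left, right, left_key, right_key):
    """Classic hash join: build a hash table on one input, probe with the other."""
    table = {}
    for row in left:
        table.setdefault(row[left_key], []).append(row)
    joined = []
    for row in right:
        for match in table.get(row[right_key], []):
            joined.append({**match, **row})  # merge matching rows
    return joined


# Responses a composer might have fetched from two services:
users = [{"user_id": 1, "name": "Alice"}, {"user_id": 2, "name": "Bob"}]
subscriptions = [{"user_id": 1, "service": "carwash"},
                 {"user_id": 1, "service": "fuel"}]

result = hash_join(users, subscriptions, "user_id", "user_id")
```

The composer thus pays both for the extra network round trips and for joining the partial results in application code, which is exactly the overhead the experiments in this thesis measure.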

Research questions

The following research questions were derived:

RQ1. What are the implications on performance, regarding response time and throughput, when using a microservice architecture with distributed data instead of a traditional monolithic approach with a single database?

RQ2. How does the joining of multiple services' data affect the response time and throughput of a distributed query in a microservice architecture compared to the traditional monolithic approach?

RQ3. How does any change in response time and throughput affect the choice of either a traditional monolithic architecture or a microservice architecture?

1.3 The status of related research

This section covers the relevant research in the areas most central to the purpose and aim of the thesis. It is complemented by Chapter 2, Theory, which includes relevant topics to create a solid knowledge foundation.

Benchmarking of microservice architectures

Villamizar et al. performed a case study summarized in their paper, Evaluating the Monolithic and the Microservice Architecture Pattern to Deploy Web Applications in the Cloud [14]. They developed the same application in two different architectures, the monolithic and the microservice architecture, and discuss the two architectures together with the lessons learned in the process of designing and implementing the applications. Their monolithic application consisted of one front-end built in Angular.js (an open-source framework for front-end development in JavaScript), together with a web application consisting of two services: S1, a CPU-intensive payment plan generator that, based on several in-parameters, returns a new payment plan with a set of future payments, and S2, which returns existing payment plans together with their sets of payments. A PostgreSQL (open-source object-relational database, https://www.postgresql.org/) database stored persistent data.

The microservice application had the same services, S1 and S2, but as microservices. S2 had the connection to the PostgreSQL database, and the final service was the API gateway. The microservice application required each request to pass through the gateway and both services, i.e., there was no direct path between the microservices. Both applications were hosted on AWS.

Tests were executed using JMeter and ran continuously for ten minutes; 30 requests were sent to S1 and 1100 to S2 each minute. The average response time was marginally higher in the microservice architecture, showing that the difference between the architectures is not that great in terms of response time.

O. Al-Debagy and P. Martinek [15] conducted a comparative review of the performance differences between the monolithic architecture and the microservice architecture. As in the related work of Villamizar et al. [14], they developed two applications, and JMeter was used for the performance tests.

The microservice application consisted of an API gateway talking to three microservices. Response time and throughput per second were measured. The tests show that the architectures have similar performance under regular load, although the monolith performed slightly better with a low number of threads/users. Under high load, the monolithic application performed better by 6% in terms of throughput. The response time was very similar under both load scenarios, slightly higher for the microservice application and a bit more so in the higher-load scenario.
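For reference, the two metrics reported in these studies can be computed from raw measurements as follows; this is a minimal sketch with invented sample numbers, not data from the cited papers.

```python
# Minimal sketch of the two benchmark metrics used throughout this thesis:
# average response time and throughput. The sample latencies are invented.

def average_response_time_ms(latencies_ms):
    """Mean per-request latency over the measurement window."""
    return sum(latencies_ms) / len(latencies_ms)


def throughput_rps(n_requests, duration_s):
    """Completed requests per second over the measurement window."""
    return n_requests / duration_s


latencies = [12.0, 15.5, 11.2, 20.3]       # per-request latency in ms
avg = average_response_time_ms(latencies)  # mean latency of the window
rps = throughput_rps(len(latencies), 2.0)  # 4 requests over 2 seconds
```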

Both studies implement a microservice architecture application and compare it with a monolithic counterpart. Their findings agree that the difference in response time is rather small but increases with the load. However, both implementations of the microservice architecture lack the characteristics of distributed data sources and the one-database-per-service pattern. This makes their systems oversimplified in terms of both queries and transactions.

Kistowski, J. et al. [16] recognized the need for microservice architecture applications developed for research. To assist in a deeper understanding of the microservice architecture, TeaStore, a “state-of-the-art” microservice application, was developed. This application is meant to be benchmarked, tested, and evaluated in research to gain better and deeper knowledge about the real-world applicability of the microservice architecture. A monolith counterpart does not exist, and the TeaStore application only uses one database for storing persistent data.


Decentralized data management

Villaça et al. investigate and evaluate different solutions for querying data in a microservice architecture implementing decentralized data management [10]. The evaluation included five different querying solutions in a decentralized-data microservice context. These querying techniques are evaluated with regard to the quality attributes described in ISO/IEC 25010 (excluding functional suitability and security):

• Reliability (R)
• Performance efficiency (PE)
• Usability (U)
• Compatibility (C)
• Maintainability (M)
• Portability (P)

Their results consist of the strengths and weaknesses of each query solution. The summary of their evaluation is shown below in Table 1.

Table 1 Query strategy evaluation according to ISO 25010 [10]

Strategies / ISO 25010 Criteria          R     PE    U     C     M     P
Shared DB                                0%    100%  100%  0%    0%    0%
Data via Service Call (API composition)  0%    0%    100%  0%    80%   100%
CQRS                                     33%   100%  0%    100%  40%   100%
Event Data Dump                          33%   100%  0%    100%  40%   100%
Canonical Data Model                     0%    0%    100%  100%  80%   100%

The work by Villaça et al. is only theoretical; no actual implementations or performance tests were executed. Commenting on their results, they highlight that no strategy outperforms the rest across all the quality attributes, and all of the strategies have downsides. However, in some scenarios a best strategy exists. The paper's main contribution is a set of benefits and drawbacks of the solutions, to be implemented and observed in future work. Their evaluation of the API composition pattern agrees with Richardson [2], who states that the API composition pattern “is simple and should be used whenever possible” but also highlights performance issues. These patterns are described in section 2.2.3 in Chapter 2, Theory. This thesis extends their research and aims to increase the understanding of the API composition pattern.

1.4 Delimitations

The problem of distributed queries will be viewed from a microservice data management point of view, not from a query optimization point of view in a distributed data setting. The study focuses on queries that span multiple databases owned by different services and does not look into managing data consistency in a distributed data management system.

This study will not be conducted on a real-world system. Instead, two proof-of-concept ZenApp prototypes will be developed: one monolith version and one microservice version. These proof-of-concept prototypes will not be fully featured; instead, the aim is to enable the characteristics under investigation through well-designed systems with populated databases that allow for tests and experiments. The implementation will use a standardized algorithm for joining data and a well-known strategy for the multi-service querying. This is to increase the transferable knowledge and generalizability of the thesis' results regarding how the distribution of data in a microservice architecture affects performance.

1.5 Main content and organization of the thesis

The following content of the thesis is organized into four main parts.

The first part, Chapter 2, Theory, creates an extensive understanding of the research area and is complemented by the status of related research above (section 1.3). The chapter covers topics relevant to the research questions, method, and purpose of this thesis.

The second part is the next three chapters, 3, 4, and 5, which cover the requirements of ZenApp and the design and implementation of the prototypes in their respective architecture.

The third part, Chapter 6, Method, describes how the experiments and analyses were conducted. From the related work and the project purpose from Zenon, the research questions were derived. The research questions were concretized and helped formulate three hypotheses to guide the analysis. From the hypotheses, experiments were constructed. These experiments provide the data needed to revisit the hypotheses and research questions and guide the final analysis.

The final part covers the outcome of the experiments and connects the results back to the research questions and aim of the thesis. First, Chapter 7, Results, shows and explains the outcome of the experiments together with the tests of the hypotheses. The results are then further discussed and elaborated in Chapter 8, Discussion. In Chapter 9, Conclusion, the research questions are summarized and answered. The thesis is then wrapped up with a discussion of whether the purpose and aim of this thesis have been fulfilled, as well as a section with ideas for future work.


Chapter 2 Theory

This section covers the relevant theory of this thesis. First, it covers the benefits and drawbacks of the monolith and microservice architectures, to create a broad understanding of the research area. Additionally, the microservice section includes insights for building and deciding on the microservice architecture. The final part of the microservice section covers the microservice architecture from a data management point of view and contains the “shared database pattern” and the “one-database-per-service pattern.” Since the one-database-per-service pattern [2] makes querying non-trivial and introduces the problem of distributed queries, querying strategies and data joining algorithms are included. Finally, in order to put the thesis in a wider context, the theory also covers relevant topics such as the importance of response time, performance evaluation, and best practices for conducting research within the field of computer science.

2.1 Monolithic architecture

To better understand the microservices architecture and how it emerged and has evolved, one must first define, examine, and understand the traditional monolithic architecture.

Dragoni et al. define the monolithic application, or monolithic architecture, as follows: “a monolith is a software application whose modules cannot be executed independently” [4]. The complexity of these single executables is broken down into modules that use the same resources, such as memory, CPU, and data storage. M. Fowler and J. Lewis describe it in a wider context: business applications usually consist of three units, 1) a client UI, 2) a server-side unit (back-end), and 3) some kind of data storage unit [1]. In this case, the server-side unit is a typical monolithic application, “a single logical executable.”


Figure 1. Example monolith application adapted from [2]

The monolithic architecture can be summarized as a single executable that provides several services for external use, as seen in Figure 1. Since all development is made on a single executable, a single modification of the code can affect all of the services the application provides, resulting in a new build and redeployment of the entire application [2, 17].

2.1.1 Benefits and drawbacks

The monolithic architecture has several benefits: development and tools are oriented around a single executable, which makes testing and deployment easier; scaling is simple, as multiple instances can be run behind a load balancer; and finally, to some extent, even big changes can be considered trivial, since there is only a single application, which can be redesigned, rebuilt, and redeployed [2].

However, these benefits gradually disappear as the codebase of the single executable grows larger. The large size leads to an increase in complexity as a result of a large number of dependencies, causing tight coupling [2, 4]. The increasing complexity and growing codebase often become a vicious cycle: the difficulty of understanding the codebase leads developers to implement new features or modifications in an overly complex way, and with each iteration the overall structure becomes worse. The module boundaries that once were easy to obey (i.e., a module performs only the tasks it was designed for and nothing else) now need to be crossed. This consequently leads to a decrease in productivity, because the development process of correcting bugs and developing additional features becomes slow. It also leads to a decreased learning rate for new developers and an increased start-up time when loading the codebase into the IDE (Integrated Development Environment) or web container [2].

Parallel to the codebase growing, the deployment process gets long and troublesome [2]. Many developers are constantly working on and committing code to the same codebase, which makes it time-consuming to get the build into a releasable state. Often many developers want to change the same piece of code, but the ownership is unclear [18]. Feature branches can be used as a solution, but they only partially solve the problem, instead moving it to big merges followed by large suites of testing when the new feature is integrated into the build. This way of working makes it challenging to incorporate continuous practices, like continuous integration, continuous deployment, and continuous delivery, as part of the development process [2].

Scalability, once a benefit for the monolithic application, becomes inefficient for large monoliths [4]. When an application deals with an increase in incoming traffic, the traffic is typically not evenly distributed among the features of the application. Commonly, only a subset of the application’s functionality causes the need for additional instances to handle the increased workload. This results in inefficient scaling and an increase in operational costs.

In many cases, the interior modules of a monolithic application have different needs in terms of resources: some depend upon fast memory, whereas others need considerable computational power. Seldom does a one-solution-fits-all environment exist when it is time to deploy [4].

When functionality is delivered by libraries, software packages, or third-party providers, many compile-time dependencies are introduced into the application. Monoliths often fall victim to so-called “dependency hell” [3, 4]: adding and updating software packages can lead to conflicts, where the system fails to compile or behaves unexpectedly as a result of the dependencies.

The large monolithic application also leads to a “technology lock-in”, meaning that it might be very costly to change programming languages or frameworks [2].

As a result of all these drawbacks, the architectural style of microservices emerged to address these problems [1, 4].


2.2 Microservice Architecture

The microservice architecture style is an alternative way of implementing an application in contrast to the single monolith discussed in the previous section. The keywords of the microservice architectural style are flexible, modular, maintainable, and scalable software [4]. However, there is no clear definition of the term microservice architectural style. Martin Fowler and James Lewis describe it [1]:

“There is no formal definition of the microservices architectural style, but we can attempt to describe what we see as common characteristics for architectures that fit the label. As with any definition that outlines common characteristics, not all microservice architectures have all the characteristics, but we do expect that most microservice architectures exhibit most characteristics.”

Dragoni et al. simplify the microservices architecture definition to [4]: “A microservice architecture is a distributed application where all its modules are microservices.”

Furthermore, to follow up on what a microservice is:

“A microservice is a cohesive, independent process interacting via messages.”

In summary, the system consists of multiple independent, distributed services, each adding to the system’s functionality, in contrast to the monolithic approach, where the monolithic application delivers all functionality as one unit; see Figure 2.


The microservice architecture can solve or mitigate many of the problems and weaknesses of the monolithic architecture [2, 4]. With its independent, loosely coupled services, it makes development and deployment easier in the sense that a service can be redeveloped, rebuilt, and redeployed without taking the entire system down. A service can be maintained and managed by a team independently, which requires less coordination than the workflow of the monolithic architecture, where many developers work on the same codebase. The individual microservice provides clear module boundaries and fewer dependencies, and allows for a tailored technology stack for each service. It is, however, no silver bullet in software architecture practices. The distributed nature of microservices introduces new challenges and complexity on both an organizational and an implementation level [4, 13, 19-21]. This means that complexity is moved from the component level to the integration, i.e., the architectural level [22].

2.2.1 Characteristics of the microservice architecture

Zimmerman [23] performed a literature and case study review and found that the following attributes often occur within microservice architecture projects. To explain the microservice architecture, it is summarized below around common characteristics that Fowler and Zimmerman propose in their research [1, 23]:

Organizational:

A service encapsulates related functionalities. It contains business logic and data storage. These services are often centered around business capabilities. Teams follow the microservice from development to deployment and delivery. Teams are no longer organized into developers, testers, etc.; instead, the teams are small, so-called cross-functional teams, which often contain members from the entire technology stack [1, 10].

There is no agreement on how small or big a microservice should be. The name suggests that the granularity should be rather small, but how this is interpreted depends on the team: is it a couple of lines of code, a couple of thousand lines of code, is the database included? The results from the study by Soldani et al. show that many industries have great difficulty in identifying the business capability/bounded context for each microservice [13].


The loose coupling, the independence of the microservices, and the small team sizes enable effective use of DevOps (Development Operations) [5, 10]. Not having to coordinate and orchestrate changes with other teams in the organization to the same extent as in the monolithic application is one of the benefits of the microservice architecture [2]. This benefit allows teams to make local decisions without referring to the overall architecture and system design. However, this requires cross-service discussions so that the self-governed teams understand the overall business and system goals [7].

Distributed system:

A distributed system is more complex by nature. Instead of the simple method calls of the monolithic architecture, communication between services uses lightweight protocols such as REST/HTTP or AMQP8 [2]. Writing tests that span multiple services is challenging: a microservice in isolation is easy to test, but just because a microservice works in isolation does not mean that deviating behavior cannot occur when it collaborates with other microservices [1, 2, 4, 24].

It is important to preserve the independence between services and to restrict inter-service communication to the allowed interfaces and paths. This enables services to be developed, updated, and deployed without interfering with each other [24]. As a result of the distributed model, new ways to fail are introduced [2]. One type is partial failures, where microservices are not responding or suffer from high response times. When the health of an individual service decays, or the service fails completely, it is difficult to observe the status and health of each service [24, 25]. Failures are a big challenge, and failures will occur; as a result, microservices need to be designed for failure in order to be successful [1, 7, 12, 24]. The distribution also introduces an overhead in memory and CPU, since each microservice must be an executable [2].
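Designing for failure can be as simple as wrapping inter-service calls with retries and a fallback value so that a partial failure degrades gracefully instead of cascading. The following is a minimal sketch, not the thesis implementation; the service stub and names are hypothetical:

```python
import time

def call_with_retry(request_fn, retries=2, fallback=None):
    """Call an inter-service request; retry on failure, then fall back.

    request_fn is any callable performing the remote call (hypothetical
    stand-in here; a real system would also apply a request timeout).
    """
    for attempt in range(retries + 1):
        try:
            return request_fn()
        except Exception:
            if attempt < retries:
                time.sleep(0.01 * (2 ** attempt))  # exponential backoff
    return fallback  # degrade gracefully instead of cascading the failure

# A flaky service stub that fails twice before succeeding.
state = {"calls": 0}
def flaky_service():
    state["calls"] += 1
    if state["calls"] < 3:
        raise ConnectionError("service unavailable")
    return "ok"

print(call_with_retry(flaky_service))  # ok
```

In practice, patterns such as circuit breakers extend this idea, but the core of "design for failure" is that the caller always has a defined behavior when a collaborator does not respond.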

Automation:

With the distributed nature of microservices comes the challenge of managing and deploying a multitude of different services and instances of services. There exists a need for well-developed automation processes in areas such as testing, auto-scaling, observability, deployment, and failure detection in order to manage observation, maintenance, continuous deployment, and delivery [12, 24].

8 AMQP – Advanced Message Queuing Protocol – Open standard protocol used in client/server


The advancements in cloud services in recent years have made microservices easier to manage [1]. Containerization helps to enable this and is commonly used with microservices [26]. Containerization is a standardization technology that makes deployment easy in any environment through the use of lightweight containers packed with dependencies, libraries, and code [24, 27, 28]. This is achieved by essentially decoupling the deployment environment from the underlying infrastructure.

Continuous practices:

Microservices help to enable continuous practices, as deployment and delivery are otherwise hard to manage for large, complicated systems [2, 4, 29]. Continuous integration, delivery, and deployment require high testability, and automated tests and builds are necessities. Deployment is easier because one small change to the system, or within a microservice, does not require the entire application to be rebuilt and redeployed. The small size of the service also keeps redeployment times very low, since each service can have its own pipelines for continuous integration and delivery [30]. Being able to incorporate continuous practices in the software development lifecycle brings many benefits, such as verification of changes, quick deployment of changes to the product, and less time between an idea and generated business value [31].

Data persistence and management:

The criterion of loosely coupled services often results in the use of the one-database-per-service pattern (see section 2.2.3). This means that services are not allowed to access other services' data storage directly. Transactions and queries that span multiple sources become complex and are not as straightforward as with the single database used in, for example, a monolithic application [2]. Soldani et al. [13] highlight this challenge in their study, stating that data consistency, distributed transactions, and query complexity are big challenges of the microservice architecture.

Scalability:

Scaling within the microservice architecture does not imply scaling the entire application, as is the case with the monolithic approach. In the microservice architecture, only the subset of services that is under stress can be scaled, saving resources and reducing operational costs [1, 2, 17, 32].

Technology lock-in:

The technology lock-in experienced in the monolithic architecture is also mitigated, since there are no dependencies between the microservices in terms of tools, frameworks, or programming languages. As a result, each service can be tailored case by case, enabling polyglot programming9 [4, 6, 10]. However, due to the cloud providers, some vendor lock-in still exists.

2.2.2 Building microservices

It is clear that the microservice architecture brings several benefits. However, the architecture also introduces drawbacks in the form of extra complexity and new, difficult challenges that need to be researched [33]. Comparing the benefits and drawbacks discussed above, it is clear that the microservice architecture is no silver bullet: in some cases it will deliver benefits, whereas in other cases it will not [6]. Companies such as Netflix [34], LinkedIn [35], and Soundcloud [36] use the microservice architecture in large-scale products with great results. However, it is important to remember that the traditional monolithic architecture is a solid option, and not all applications are fit for a microservice structure [18].

Newman states in his book, “Microservices are not the goal, you do not win by having microservices,” and proposes that every team or company considering adopting a microservice architecture ask three simple questions [18]:

1. What are you hoping to achieve?

2. Have you considered alternatives to using microservices?

3. How will you know if the transition is working?

It is vital to understand what the goal of adopting a microservice architecture is, and to know whether the adoption is on the right path or whether the goal has been achieved. Goals can vary, for example, improving team freedom, reducing time-to-market, streamlining scaling, or reducing costs [12]. However, even if the goal is clear, many different development strategies and pitfalls exist, and opinions on which strategy is the most successful diverge [18, 37-39].


The strategies can be divided into two groups: the monolith first approach and the microservice first approach [38, 39]. The idea behind the monolith first strategy is that it can be hard to define the boundaries between the microservices without a full understanding of the system; developing the monolith first gives a solid understanding of the dependencies of the application. The monolith first approach exists on different levels. One way is to develop the monolith and later migrate parts of it, or the entire monolith, into microservices when boundaries and dependencies are better understood [18, 38]. Another is to first develop the monolith and, when problems emerge, build the additional new features as microservices. The counterpart is the microservice first approach. The reasoning behind this strategy is that it can be hard to migrate an existing monolith, because functionality can be hard to cut loose. One way of starting microservice first is to begin with larger microservices and make them more fine-grained over time [37, 39].

2.2.3 Data management in microservices

This section covers data management solutions, strategies, and patterns that are state-of-the-art in the context of microservices and decentralized data management [10]. Finally, all of the strategies and patterns covered in this section are visualized in Figure 6.

Persistent data storage implementations often differ from the monolith’s single database. Two groups of database architectural solutions are the “shared database pattern” and the “one-database-per-service pattern” [2, 10, 11].

In the shared database pattern, each service holds a connection to the shared database, and the different microservices directly access data owned by other services [40]. As a result, data is not private to each microservice. This has the benefits of simple queries and regular ACID10 transactions. However, this solution increases the coupling between services, which increases the need for coordination between teams and makes development less efficient. It also slows down the performance of the system when performing transactions, since each service blocks other services from accessing the database. Different database paradigms might be suitable for different services, which is not an option, since the shared database locks all services into one database type. This pattern should therefore be seen as an anti-pattern in the data management area of the microservice architecture [41]. However, it can be valid in some scenarios, for example, a system that rarely accesses and stores persistent data.

The one-database-per-service pattern means that data is private to each microservice and can only be accessed through the microservice's API endpoints [9].

Figure 3. Microservices with one-database-per-service pattern, adopted from [21]

This pattern can be implemented in three different ways: 1) each service has an independent database, as the name entails; 2) a shared database with private tables, only accessible from the service that owns the data; 3) a shared database with a private schema, only accessible from the service that owns the data [9, 41]. A summary of the two patterns is shown below in Table 2.

Table 2. Database pattern overview

Attribute \ Data storage solution     Shared DB without  Independent DB      Shared DB with   Shared DB with
                                      private data       server per service  private schemas  a private table
Tight coupling                        Yes                No                  No               No
Changes impact other services         Yes                No                  No               No
Tailored solution per service         No                 Yes                 No               No
Distributed transactions and queries


As a result of the distribution introduced by the one-database-per-service pattern, transactions and queries across multiple services are now more complex [2]. Queries that only concern data owned by one service are still trivial to implement. However, queries over data stored in different services' databases, accessible only through the corresponding services' APIs, are no longer trivial.

Figure 4. Microservices multiple database schemas example

Consider the simple data schemas of an example microservice architecture application shown in Figure 4. It contains one service handling customer information, the Customer Service, and another handling order information, the Order Service. In this scenario, a query could be “query all email addresses of customers with non-sent orders.” This query cannot be resolved without involving both microservices.

The next part of this section covers strategies for such distributed queries. As mentioned in the related work by Villaça et al. [10], five state-of-the-art strategies exist for decentralized data management. The first strategy investigated by Villaça et al. is the shared database pattern, which has already been covered and should be seen as a database architectural pattern rather than a query strategy.

API composition pattern:

The API composition pattern [2, 10, 12, 42] solves the above distributed query example by implementing a composer service with a findEmails() query function. This function is responsible for performing the distributed query. The composer calls each of the microservices that own the data needed to complete the query. The services holding the data query their databases and return the data to the composer service, which then joins the returned data and resolves the query.

The API composer pattern can be implemented in three different ways [2]: i) on the client-side, ii) in the API gateway, or iii) in a separate microservice.

Figure 5. API Composition example adopted from [2]

The findEmails() query introduces two different scenarios of composition: parallel composition and flow composition [43].

• Scenario 1: In parallel composition, the composer invokes the two requests to the services in parallel, see Scenario 1 in Figure 5. No filtering of results can be performed in the Customer Service, since it cannot know which customers have unsent orders; this results in an overhead of returned documents, as the Customer Service returns all customer objects. The Order Service, however, can return all orders with status unsent, resulting in no overhead.

• Scenario 2: In flow composition, the composer uses (or sometimes needs) the response from one service as input for the next service query, see Scenario 2 in Figure 5. In this scenario, the customer ids from the Order Service are used as input to the Customer Service, resulting in less overhead, since no customers outside the final response from the API composer are returned by the Customer Service. The flow composition, however, has a higher response time because of the chaining of requests.
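To make the two scenarios concrete, the findEmails() composition can be sketched in Python, with plain functions standing in for the HTTP calls to the two services. The data and function names are illustrative only, not taken from the thesis implementation:

```python
# Stand-ins for the two services' query endpoints; in a real system
# these would be HTTP requests to the Customer and Order Services.
CUSTOMERS = [
    {"id": 1, "email": "a@example.com"},
    {"id": 2, "email": "b@example.com"},
    {"id": 3, "email": "c@example.com"},
]
ORDERS = [
    {"id": 10, "customer_id": 1, "status": "unsent"},
    {"id": 11, "customer_id": 3, "status": "sent"},
]

def customer_service(ids=None):
    # The Customer Service cannot filter on order status;
    # without a list of ids it must return every customer.
    if ids is None:
        return list(CUSTOMERS)
    return [c for c in CUSTOMERS if c["id"] in ids]

def order_service():
    # The Order Service can filter locally on its own data.
    return [o for o in ORDERS if o["status"] == "unsent"]

def find_emails_parallel():
    # Scenario 1: both requests issued independently; the full
    # customer set is transferred (overhead), then joined in memory.
    customers = customer_service()
    unsent_ids = {o["customer_id"] for o in order_service()}
    return [c["email"] for c in customers if c["id"] in unsent_ids]

def find_emails_flow():
    # Scenario 2: the Order Service response feeds the Customer
    # Service request; less overhead, but the requests are chained.
    unsent_ids = {o["customer_id"] for o in order_service()}
    return [c["email"] for c in customer_service(unsent_ids)]

print(find_emails_parallel())  # ['a@example.com']
print(find_emails_flow())      # ['a@example.com']
```

Both compositions produce the same result; they differ only in how much data is transferred and in how many sequential round trips are needed.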


The drawbacks of the API composition pattern are that complex queries require large in-memory joins of datasets, that multiple requests replace the single request of a monolithic application, and a lack of consistency. However, the pattern “is simple and should be used whenever possible” [2] and does not introduce the same complexity to the system as the Command Query Responsibility Segregation pattern [2].

Command Query Responsibility Segregation pattern (CQRS):

Another solution for queries that span multiple services is the CQRS pattern. CQRS segregates the command side from the query side, splitting the data model in two [2]. The databases are replicated into a view-only database, sometimes referred to as a materialized view, which is responsible for supporting and handling the queries. For the materialized view to know when data has been manipulated on the command side, the system needs the Domain Event pattern, which is used to let other services know that a service has updated its data. The materialized view thus subscribes to the domain events of the command side, which are published each time data is updated.

The materialized view subscribes to all the data-providing services needed to resolve the distributed query. The materialized view can be implemented in a database suited for querying, while the command side can be optimized for updates, reads, and deletes, and can be scaled independently [44]. This solution introduces a replication latency when the view is updated, and considerably more complexity to the architecture compared to the API composition pattern.

From a querying perspective, if the materialized view is up to date, querying it is no different from querying a monolithic data source.
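As a sketch of the pattern, a materialized view can apply domain events published by the command side and then answer the earlier email query locally. The event names and fields below are hypothetical, chosen only to illustrate the mechanism:

```python
class MaterializedView:
    """Query side: a denormalized read model kept up to date by
    domain events published from the command side."""

    def __init__(self):
        self.emails_by_customer = {}
        self.unsent_customers = set()

    def on_event(self, event):
        # Apply each domain event to the read model (event schema is
        # illustrative; a real system would consume these from a broker).
        if event["type"] == "CustomerCreated":
            self.emails_by_customer[event["id"]] = event["email"]
        elif event["type"] == "OrderCreated" and event["status"] == "unsent":
            self.unsent_customers.add(event["customer_id"])
        elif event["type"] == "OrderSent":
            self.unsent_customers.discard(event["customer_id"])

    def emails_with_unsent_orders(self):
        # The distributed query is now a local lookup.
        return [self.emails_by_customer[c] for c in sorted(self.unsent_customers)]

view = MaterializedView()
# The command side would publish these events on every data update.
for e in [
    {"type": "CustomerCreated", "id": 1, "email": "a@example.com"},
    {"type": "CustomerCreated", "id": 2, "email": "b@example.com"},
    {"type": "OrderCreated", "customer_id": 1, "status": "unsent"},
]:
    view.on_event(e)
print(view.emails_with_unsent_orders())  # ['a@example.com']
```

The replication latency discussed above corresponds to the time between a command-side update and the view having applied the resulting event.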

Event Data Pump:

The event data pump is an extension of the data pump pattern, which consists of a central reporting database [10, 12]. This central reporting database continuously receives pumps of data at specific time intervals from the connected microservices' databases. The Event Data Pump extension instead uses events to know when to update the data in the reporting database.

Canonical Data Model (CDM):

In this strategy, a central orchestrating service exists, which contains a single data model and creates it by accessing the data-providing microservices in order to unify all the data into one model [10].

Figure 6. Summary of data patterns adopted from [45]

In Figure 6, all the discussed data patterns are illustrated, together with the relations between them. The CDM and the CQRS pattern are more complete data management systems that handle queries, while the Event Data Pump pattern and the API composition pattern are more of pure query strategies, making them the more relevant ones for this thesis.

2.3 Data joining algorithms

The API composer pattern requires an in-memory join of data. In query processing and optimization, the join operation is one of the most time-consuming operations [46]. All of the following join algorithms perform two-way joins; however, they can be chained to join more than two tables, for example, by first joining table A with table B, and then joining the result with table C.

The three algorithms were chosen as they have high industry relevance and are plausible to be used by the average developer without the need for database experts. The algorithms are well understood and are standard practices for implementing the join operation found in coursebooks [46-48] (for pseudocode, see Appendix A):

Nested Loop Join:

This is a brute-force algorithm that uses two nested loops to join the two tables: for each item in table A (outer loop), iterate over every item in table B (inner loop) and check whether the join condition is satisfied.
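The algorithm can be sketched in Python, using lists of dictionaries as stand-in tables (the sample data is illustrative; the thesis pseudocode is in Appendix A):

```python
def nested_loop_join(table_a, table_b, key_a, key_b):
    """Brute-force join: compare every row of A with every row of B."""
    result = []
    for a in table_a:                     # outer loop over table A
        for b in table_b:                 # inner loop over table B
            if a[key_a] == b[key_b]:      # join condition
                result.append({**a, **b})
    return result

customers = [{"id": 1, "email": "a@example.com"},
             {"id": 2, "email": "b@example.com"}]
orders = [{"customer_id": 2, "status": "unsent"}]
print(nested_loop_join(customers, orders, "id", "customer_id"))
# [{'id': 2, 'email': 'b@example.com', 'customer_id': 2, 'status': 'unsent'}]
```

The running time is proportional to |A| times |B|, which is why the algorithm is only attractive for small inputs.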

Sort-Merge Join:

A two-phase algorithm. First, both tables A and B are sorted on the join condition key. Then the tables are iterated in order to join them [49]. In this joining phase, both tables are iterated at the same time, and the joining condition is checked. If there is a match, both table pointers are incremented. If instead the key of table A is less than the key of table B, the table pointer of table A is incremented, and vice versa if the key of table B is smaller. The algorithm requires a small modification if the joining condition attribute is not unique.

Hash Join:

The hash join consists of a building phase and a probing phase. In the building phase, a hash table is constructed [50], preferably from the smaller of the tables. This is done by hashing the join condition key value and mapping all rows containing that join attribute value to it. In the probing phase, the second table is scanned, and matching rows are found by hashing the join key value and looking it up in the hash table [46].
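Continuing with Python lists of dictionaries as stand-in tables, the two remaining algorithms can be sketched as follows. This is an illustrative translation of the textbook descriptions, not the thesis implementation (the pseudocode is in Appendix A); the sort-merge variant includes the small modification needed for non-unique join keys:

```python
from collections import defaultdict

def sort_merge_join(table_a, table_b, key_a, key_b):
    """Sort both tables on the join key, then merge with two pointers."""
    a = sorted(table_a, key=lambda r: r[key_a])
    b = sorted(table_b, key=lambda r: r[key_b])
    result, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i][key_a] == b[j][key_b]:
            # Non-unique keys: emit all B rows sharing this key,
            # then advance A (j stays, so the next A row can rematch).
            k = j
            while k < len(b) and b[k][key_b] == a[i][key_a]:
                result.append({**a[i], **b[k]})
                k += 1
            i += 1
        elif a[i][key_a] < b[j][key_b]:
            i += 1
        else:
            j += 1
    return result

def hash_join(table_a, table_b, key_a, key_b):
    """Build a hash table (preferably on the smaller table), then probe."""
    if len(table_b) < len(table_a):       # build on the smaller side
        table_a, table_b, key_a, key_b = table_b, table_a, key_b, key_a
    buckets = defaultdict(list)
    for row in table_a:                   # building phase
        buckets[row[key_a]].append(row)
    return [{**match, **row}              # probing phase
            for row in table_b
            for match in buckets[row[key_b]]]

# Both algorithms produce the same joined rows.
customers = [{"id": 1, "email": "a@example.com"},
             {"id": 2, "email": "b@example.com"}]
orders = [{"customer_id": 2, "status": "unsent"}]
assert sort_merge_join(customers, orders, "id", "customer_id") == \
       hash_join(customers, orders, "id", "customer_id")
```

The sort-merge join costs roughly the two sorts plus one linear merge, while the hash join is linear in the table sizes given a well-behaved hash function.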

2.4 Benchmarking

With the use of benchmarking, an understanding of the microservice architecture’s performance compared to the monolithic architecture can be achieved [51]. Benchmarking “is the process of measuring quality and collecting information on system states…” and “… benchmarking requires a very high degree of control over the SUT to make results reproducible” [52]. Benchmarks of microservices often measure client-observable attributes, and because of the nature of cloud hosting, the benchmarking can be seen as a type of “black-box benchmarking” [52].

Artillery.io is an open-source npm library, and locust.io is a Python package. These tools simulate various numbers of users making different requests to an application, triggering different parts of the SUT (system under test) in order to perform the benchmarking.
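The principle behind such tools can be illustrated with a small closed-loop load generator that simulates concurrent users and records client-observed latencies. This is a simplified sketch, not how Artillery.io or locust.io are implemented, and the request function is a stand-in for an HTTP call to the SUT:

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

def fake_request():
    """Stand-in for an HTTP request to the system under test."""
    time.sleep(0.01)  # simulated service time
    return 200

def run_benchmark(request_fn, users=5, requests_per_user=10):
    """Simulate concurrent users and collect client-observed latencies."""
    def user_session(_):
        latencies = []
        for _ in range(requests_per_user):
            start = time.perf_counter()
            request_fn()
            latencies.append(time.perf_counter() - start)
        return latencies

    with ThreadPoolExecutor(max_workers=users) as pool:
        sessions = list(pool.map(user_session, range(users)))
    all_latencies = [latency for s in sessions for latency in s]
    return {
        "requests": len(all_latencies),
        "mean_s": statistics.mean(all_latencies),
        "max_s": max(all_latencies),
    }

stats = run_benchmark(fake_request)
print(stats["requests"])  # 50
```

Real load-testing tools add arrival-rate control, ramp-up phases, and richer reporting, but the measured quantity, latency as observed by the client, is the same.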

2.5 Usability, feedback, and response times

The time it takes for a system to give feedback to the user profoundly affects user satisfaction with the system, which in turn determines whether the system will be used. Usability in software engineering can be defined as “the ease of use and acceptability of a system for a particular class of users carrying out specific tasks in a specific environment” [56]. The longer the response time to a user’s input, the more crucial the feedback to the user becomes. Nielsen, in a study from 1994 [57], divides response times and the importance of feedback into a “power of 10” scale.

0.1s Within this time limit, the user feels that their actions are instant. The fast response time means that no feedback between the initiation and completion of the action is needed.

1s The one-second limit is the limit for the user’s mind not to trail away from the initiated action. The delay in the response is noticeable. Nielsen argues that feedback is not required, but it is essential that the user still feels it is the system delivering the result rather than their own actions.

10s The 10-second limit is critical to the user’s attention. If the response time exceeds this limit, users are likely to lose focus on the task altogether. Within this limit, feedback to the user is critical.

In a more recent study from 2005, Lindgaard et al. [58] tighten the limits even further in the context of first impressions of a webpage. The results show that users can form opinions about a website at exposure times as short as 50 ms, and the correlations were even stronger at 500 ms exposure. This shows the importance of short response times and feedback within a system.


Different systems are affected differently by response times. Low response times might be crucial in some systems, rendering some architectural solutions useless, for example, in real-time gaming, high-frequency financial trading, or a live presentation application [59].

2.6 Empirical research in software engineering

Experiments are often conducted in an isolated environment with a narrow scope. Experiments are disciplined by nature and under the control of the researcher [60]. The goal is to systematically change one defined variable while keeping the rest fixed. The change results in an outcome that is analyzed and compared to the pre-stated hypothesis [61].

Case studies are often conducted in the context of real-world projects [62]. Data is collected continuously as the project progresses, and from the data, an analysis can be conducted. Case studies are often centered around a particular aspect of the project, or compare relations between characteristics. The control over the environment is smaller compared to the controlled nature of experiments [60]. Experiments are described as “research in the small”; in contrast, case studies are described as “research in the typical” [63].

Kitchenham et al. [64] published the paper “Preliminary guidelines for empirical research in software engineering.” The purpose of this paper was to help researchers within the field of software engineering increase the quality of empirical research. The guidelines are divided into six groups.

Experimental context

In software engineering, the context of the experiment is important [64]. The context includes background information about the research area and what has led to the current research topic. It is important to analyze the problem and the problem statement and not to oversimplify the techniques used in the industry. Furthermore, the hypotheses should not be a description of the tests performed.

Questions about what previous research has been conducted on the topic need to be answered. This includes summaries of the related work and answering how it relates to the current research topic.

Experimental design

The experimental design must show whether the proposed experiments are good enough to answer the questions of the research topic. It is important to define what is being investigated, how data will be collected, and on what criteria the number of test runs is based. Conducting pre-experiments can answer many of these questions. Another question to answer is what areas of the experiment introduce biases, and what can be done to mitigate them.

A threat-to-validity section is important to include. It should clearly state whether the researchers or any other entity have in any way affected the results.

Conduct of the experiment and data collection

When conducting empirical studies within software engineering, replicability is key [64]. To ensure the replicability of an experiment, it is important to define and design data collection processes, together with descriptions of entities, attributes, and counting rules. However, it is not only important to have sufficient data collection processes; the processes for validating the data are equally important, and these validation methods need to be described.

Analysis

It is important to specify how the analysis of the results was performed and to understand the nature of the data analyzed [64]. Do the results point to this conclusion because of some attribute of the data? Kitchenham et al. warn against “fishing for results,” and during the analysis it should be asked: “How sensitive is the data analyzed?” Finally, it is also important to make sure the data fulfills the requirements of an experiment and to question whether the data allows the studied phenomena to arise.

Presentation of results

Kitchenham et al. highlight the importance of the presentation of the results [64]. The results should allow fellow researchers to learn from the study and understand the studied topic. It is also important that, from the raw data together with the procedure and analysis descriptions, third parties are able to replicate the results of the analysis.

Interpretation of results

A conclusion section should accompany the results. In this section, it is important to discuss the results in the context of related research [64]. The limitations of the study, covering internal and external validity, should also be discussed.


Chapter 3 System Requirement Analysis

The following three chapters, System Requirement Analysis, System Design, and System Implementation, cover the two prototypes of ZenApp, from requirements to deployment. Kitchenham et al.'s guidelines [64] were followed to the extent possible when defining requirements, designing, and implementing the application. This was done in order for the results to be replicable, but also to give an understanding of why decisions and selections were made.

3.1 The goal of the system

The goal of the implemented prototypes is both to show how an application like ZenApp can be designed, as a proof of concept, and, more importantly, to exhibit the characteristics of the phenomena under investigation, so that valid data on the proposed research problem can be found.

3.1.1 High-level system design

Figure 7. High-level system diagram [65]

The user of the system subscribes to a service. The user also specifies a distance of interest together with a criteria specification; in this carwash scenario, the user's tolerable queue time at the carwash. The user's position is tracked, and when the user is within the distance of interest, the queue time tolerance is checked. The user is notified when the criterion is fulfilled.

In Figure 7, the high-level system design and usage are shown. In the figure, the entity named Service 1 is the carwash, and the Third-Party Service 1 API is the entity providing the queue time to the application. In Table 3, the application and system entities are described in more detail.

Table 3. System and entities description – carwash scenario

Subscription: A subscription entails a user, a third-party service, a service criterion, and a distance of interest.

Service Criteria: A service criterion is specified per third-party service entity and depends on the type of service. The user decides when the criterion is satisfied, for example, queue time at the carwash < 5 minutes.

Distance of Interest: The distance of interest specifies within what range a user is interested in a specified service, for example, if the distance between the user and the carwash is less than 2 km. Both the service criterion and the distance of interest need to be fulfilled for a user to receive a notification.

Service-Handler: The part of the system that handles the connection and logic between a third-party service and the system.

Third-Party Service API: An external entity from which information about the state of the service is gathered, for example, the carwash API.

RESTful API: The part handling the users’ interactions from the front-end to the back-end, such as subscribing and unsubscribing to services.
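The entities and the notification condition described above can be sketched in code. The sketch below is purely illustrative and is not taken from the actual ZenApp implementation: the names (Subscription, should_notify), the haversine distance helper, and the example coordinates are all assumptions made for clarity.

```python
import math
from dataclasses import dataclass

# Illustrative model of the entities in Table 3 (hypothetical names,
# not the thesis's actual implementation).
@dataclass
class Subscription:
    user_id: str
    service_id: str
    max_queue_minutes: float       # service criterion: tolerable queue time
    distance_of_interest_km: float # distance of interest

def distance_km(a, b):
    """Approximate great-circle distance (haversine) between two
    (latitude, longitude) pairs given in degrees."""
    lat1, lon1 = map(math.radians, a)
    lat2, lon2 = map(math.radians, b)
    dlat, dlon = lat2 - lat1, lon2 - lon1
    h = (math.sin(dlat / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2)
    return 2 * 6371.0 * math.asin(math.sqrt(h))

def should_notify(sub, user_pos, service_pos, queue_minutes):
    """Both the distance of interest and the service criterion must be
    fulfilled before the user is notified (see Table 3)."""
    return (distance_km(user_pos, service_pos) <= sub.distance_of_interest_km
            and queue_minutes < sub.max_queue_minutes)

sub = Subscription("user-1", "carwash-1",
                   max_queue_minutes=5, distance_of_interest_km=2)
user = (58.41, 15.62)     # example position near Linköping
carwash = (58.40, 15.61)  # roughly 1.3 km away

print(should_notify(sub, user, carwash, queue_minutes=3))   # in range, short queue
print(should_notify(sub, user, carwash, queue_minutes=10))  # queue too long
```

In this sketch, the service-handler would supply `queue_minutes` from the third-party service API, while the RESTful API would create and delete `Subscription` objects on behalf of the user.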


Figure 8. Use Case Diagram [65]

In Figure 8, a use case diagram shows how the user will interact with the fully-featured version of ZenApp. However, the two versions of the application realized in this thesis, as described in section 1.1.1, are proof-of-concept applications and only concern the back-end of the system.

3.2 Requirements

The system requirements were derived from multiple sources. The functional requirements were derived from requests and suggestions by Zenon. The non-functional requirements were derived from the attribute selection in section 4.1, together with requests and suggestions by Zenon [65]. They were also influenced by related research and the personal preferences of the researcher.
