The run-time impact of business functionality when decomposing and adopting the microservice architecture



DEGREE PROJECT IN COMPUTER SCIENCE AND ENGINEERING,
SECOND CYCLE, 30 CREDITS
STOCKHOLM, SWEDEN 2018

The run-time impact of business functionality when decomposing and adopting the microservice architecture

RASTI FARADJ

KTH ROYAL INSTITUTE OF TECHNOLOGY

SCHOOL OF ELECTRICAL ENGINEERING AND COMPUTER SCIENCE


The run-time impact of business functionality when decomposing and adopting the microservice architecture

RASTI FARADJ

Master in Computer Science
Date: September 26, 2018
Supervisor: Hamid Faragardi
Examiner: Elena Troubitsyna

School of Computer Science and Communication


Abstract

In line with the growth of software, code bases are getting bigger and more complex. As a result, the architectural patterns that systems rely upon are becoming increasingly important. Recently, decomposed architectural styles have become a popular choice.

This thesis explores system behavior with respect to decomposing system granularity and the external communication between the resulting decomposed services. An e-commerce scenario was modeled and implemented at different granularity levels to measure the response time.

In establishing the communication, both REST with HTTP and JSON and the gRPC framework were utilized.

The results showed that decomposition has an impact on run-time behaviour and external communication. The highest granularity level, implemented with gRPC for communication establishment, adds 10 ms.

In the context of how the web behaves today, this can be interpreted as feasible, but there is as yet no discussion of whether it is theoretically desirable.


Sammanfattning

In line with growing software systems, code bases are becoming larger and more complex, and the architectures that systems are built upon are gaining ever greater importance.

This degree project explores how systems that apply the microservice architecture behave when decomposed, and how they are affected by the communication established between the resulting decomposed services.

An e-commerce scenario is modelled at different granularity levels, where REST with HTTP and JSON as well as gRPC are used to establish the communication.

The results show that the decomposition affects run-time behaviour and that the external communication becomes slower. A possible conclusion is that the impact of the external communication is acceptable in relation to how the web behaves today, but if one is to stay within theoretically optimal bounds, the impact can be regarded as too large.


Contents

1 Introduction
   1.1 Problem description
   1.2 Objective
   1.3 Delimitation
   1.4 Methodology
   1.5 Results
   1.6 Research contribution
   1.7 Outline

2 Background
   2.1 Software architecture
      2.1.1 Monolithic architecture
      2.1.2 Service-oriented architecture
      2.1.3 Microservice architecture
   2.2 Software decomposition
   2.3 External communication pattern
      2.3.1 REST
      2.3.2 gRPC
      2.3.3 Apache Kafka
   2.4 Data Serialisation
      2.4.1 JSON
      2.4.2 Protocol Buffers

3 Related work

4 Research method
   4.1 Research objective
   4.2 Research challenges
   4.3 Research question
   4.4 Scientific Perspective
   4.5 Research methodology
   4.6 Validity
      4.6.1 Construct validity
      4.6.2 Internal validity
      4.6.3 Conclusion validity

5 Implementation
   5.1 Granularity and communication
   5.2 Modelling of the architectural cases
   5.3 Implementation and realization
   5.4 Platform specification
   5.5 Measurement tool and parameters

6 Results
   6.1 Results from conducted experiments

7 Conclusion
   7.1 Discussion
   7.2 Recommendation
   7.3 Sustainability
   7.4 Future work

Bibliography


List of Figures

2.1 Evolution of System Architecture [34]
2.2 gRPC architectural overview [34]
2.3 Component orchestration in Apache Kafka [19]
4.1 Research methodology
5.1 Chosen communication patterns
5.2 Chosen granularity levels to implement
5.3 Monolithic architecture of order-placement
5.4 Decomposed architecture consisting of two services
5.5 Decomposed architecture consisting of eight services
5.6 Relations between services in figure 5.5
5.7 Service communication with HTTP and JSON
5.8 Service communication with gRPC


List of Tables

2.1 Example of endpoints utilizing HTTP verbs
5.1 Format of data sent between services
5.2 Parameters for performance measurement
6.1 Monolithic architecture - 10 Requests
6.2 Decomposed HTTP and JSON - 10 Requests
6.3 Decomposed with gRPC - 10 Requests
6.4 Monolithic architecture - 1000 Requests
6.5 Decomposed HTTP and JSON - 1000 Requests
6.6 Decomposed with gRPC - 1000 Requests


Chapter 1 Introduction

This thesis aims at exploring different software architectural styles.

Furthermore, it investigates the relation between decomposition and the run-time behavior of business functionality in software systems. Nowadays, we have reached a point where large monolithic software design and development are coming to an end: the increased complexity of big code stacks makes them unsustainable and unmaintainable. To deal with this, new design approaches have emerged in recent years, one example being the MicroService Architecture (MSA).

In this thesis, a known e-commerce scenario is developed using decomposition at different granularities. Together with distinctive communication patterns, the system is implemented using the MSA style. The impact of decomposition, in particular the run-time impact, is analyzed for the different granularity levels and communication patterns.

As systems grow, they become harder to maintain and develop because of internal dependencies and relations. The growth also has an impact on the deployment phases, because any change in the system requires rebuilding and redeploying the whole system. The emergence of MSA has made it possible to decouple and decompose the system into more manageable components and services [16, 20, 1].

The MSA approach itself is relatively new and unexplored, and there is no consensus regarding its formal definition [20, 36]. The first usage of the term "microservices" dates back to early 2011. Until then, this approach of building systems, which resembles the concepts of Service-Oriented Architecture (SOA), existed under different names. Netflix, for instance, used a similar style called fine-grained SOA. The main idea behind this innovation is to decompose large monolithic systems, where the whole application runs as a single unit, into smaller independent and loosely coupled components that make up the desired system.

There are several publications regarding MSA and its benefits. For instance, the scalability factor and the faster deployment cycles have been discussed in many journals and articles. This is due to the loosely coupled services, which result in more efficient development processes.

On the other hand, there is still a gap in the knowledge regarding the granularity and complexity of decomposing systems. It is hard to interpret the actual impact of decomposition and the benefit that comes from further decomposition. Improving the understanding of the impact of different granularity levels and different messaging patterns would therefore be of interest [41, 5, 36, 21].

When decomposing a system following the MSA principles, effort must be put into establishing external communication between the resulting services. There are several messaging patterns, of which the most commonly used is the REST style with HTTP and JSON. There are other alternatives one can make use of, for instance, Google's gRPC or the distributed streaming platform Apache Kafka.

1.1 Problem description

Despite the novelty of the microservice architectural style, several big IT companies have adopted the idea. Today, there is no standardized approach to deciding on the granularity of decomposition, nor is there any theoretical support for analyzing the system behavior after the decomposition. One can only assume that decomposed services are affected by the introduction of external communication, but to what extent? When decomposing a system, the run-time impact on business functionality should be analyzed, especially if there are requirements on response time to be fulfilled.

1.2 Objective

The primary goal of this thesis project is to investigate the run-time impact of decomposing a monolithic architecture and to analyze its effects on system behavior, especially regarding the run-time of business functionality. Furthermore, it is necessary to identify common methods which are effective in establishing external communication between services in a microservice architecture.

An architectural example will be presented and implemented at different granularity levels, along with different patterns for establishing communication. Regarding the external communication between the implemented services, the approaches that will be used are the REST style with HTTP and JSON, as well as Google's gRPC. The former is textual while the latter is binary, which gives a comprehensive idea of the possibilities and their resulting outcomes.

The ultimate objective of this project is to gain an understanding of the impact of the decomposition process on system behavior, to be used as a basis in the decision-making process regarding granularity levels.

1.3 Delimitation

The thesis only covers a limited number of granularity levels, which are enough to give a broad indication. They all concern a single business functionality. However, the result could be used to understand the impact on a system as a whole.

The communication patterns selected are an outcome of a pre-study. The chosen ones have either gained momentum lately or are already commonly used.

The cases covered in this thesis are relevant to different environments. The placement of the server and individual services can vary geographically. For consistency and reproducibility, this thesis only performs local tests. Testing on different hosting alternatives could have been carried out, but for now, only local tests have been performed.

Therefore, the measured overhead from the decomposition is mainly serialization/deserialization, not network latency. Consequently, environment-specific parameters such as CPU, RAM, and the network are neglected.

1.4 Methodology

This thesis adopts an empirical approach in which quantitative data are gathered. Furthermore, a qualitative method is applied to derive conclusions and recommendations. The practical part consists of tests which are executed in an iterative manner to allow for continuous improvements.

The first phase of the thesis consists of a state-of-the-art overview, which is meant to take advantage of the current research within the software architectural style area as well as the communication patterns.

1.5 Results

The obtained results demonstrate the impact of different granularity levels on run-time behavior. At the 90th percentile, the highest tested granularity level of eight services gives a penalty of 10 ms in comparison to a monolithic architecture when gRPC is used. With the usage of REST, the measured penalty was 46 ms.

Therefore, it can be concluded that among the examined communication styles, REST with HTTP and JSON, and gRPC, the latter performed better and is preferable because it serializes/deserializes faster and hence results in better response times.

These numbers may, at first glance, seem to have a significant impact, but looking at how the web behaves today they can be acceptable, depending on the use-case.

1.6 Research contribution

The thesis is valuable for those adopting the MSA architectural style, since it gives guidance on the relationship between granularity and its effect on run-time.

The deepened understanding of this behavior will contribute to today's research and the development of systems. The target audience is both system architects and developers with an interest in developing sustainable systems.


1.7 Outline

The structure of the thesis is as follows:

Chapter 2, Background This chapter provides the theoretical background and technologies used to conduct the study.

Chapter 3, Related work This chapter reviews related work in the scope of this study.

Chapter 4, Research method This chapter describes the method as well as the methodology. It also includes reasoning about the validity of this thesis.

Chapter 5, Implementation This chapter describes the implementation and how the results are obtained.

Chapter 6, Results This chapter includes the obtained results as well as a discussion about the results.

Chapter 7, Conclusion This chapter consists of the conclusions and discussion. This chapter also includes possibilities for future work in the scope of this thesis.


Chapter 2 Background

This chapter gives an overview of the background information needed to understand the problem that this thesis aims to explore. To this end, different architectural styles of software development, techniques involved in decomposing large code bases, and different messaging technologies for communication between services are discussed.

2.1 Software architecture

Software architectures consist of several different components depending on the intended functionality. When a client-server architectural style is used, there are three recurring parts: a client-side user interface, a server-side service comprising the core logic, and a database used for storing all the data.

The architectural style for the server-side has changed over the years to accommodate possible challenges that may arise. The history map given in figure 2.1 shows the overview of how the patterns have evolved and resulted in today’s microservice architectural style.


Figure 2.1: Evolution of System Architecture [34]

The following subsections cover some of the architectural styles needed to understand the background and origin of the microservice architecture. Different patterns as well as in-depth knowledge of the microservice architecture will be looked into.

2.1.1 Monolithic architecture

As mentioned in the previous section, software architectural styles usually have a server-side service that holds all the logic and functionality. A monolithic architecture is characterized by one cohesive, tightly coupled unit of code, as illustrated in figure 2.1. It is developed to work together, sharing the same resources and memory space. It can run processes simultaneously across multiple CPUs that share the same hardware and operating system. It is important to note that this approach has its advantages as well as disadvantages [43].

The major advantage of a monolithic architecture is that in-process invocations are carried out through language-level calls. In a decomposed system, external communication must be established, which introduces an overhead that does not exist in a monolithic architecture.

One could argue that with one code base, the complexity regarding deployment, testing and monitoring is lower, because the system is one unit, in contrast to a decomposed architecture. When examining a decomposed architecture, one has to deal with hops across machines and network boundaries, each of which may introduce new variables and faults into the system.


Advantage. Scaling a monolithic architecture can be done in two ways. One way is through duplication of the whole server-side, which is called horizontal scaling: multiple instances are placed behind a load balancer to distribute the incoming requests. However, this is costly, because the whole code base is duplicated rather than only the intended functionality. The other approach is vertical scaling, which means adding more computational power in the form of CPU and RAM. This is limited by the size of the server, since a single machine cannot be scaled infinitely. For a decomposed architecture, one can horizontally scale a specific service instead of the whole architecture, which is cost-effective and requires less effort. Nevertheless, if a stateful workload is considered - memory consumed by temporary storage and processing of state data - then scaling will be a difficult task. More discussion about this is presented in section 2.2. [22]

Disadvantage. With a monolithic architecture, modification within the system is complicated. Adding new features or actions to an existing system requires rebuilding and redeployment, and in some cases even correction of other parts of the system for it to work as a whole. Development cycles get longer, and additional time is spent coordinating the development teams. Another drawback is the limitation that comes with the tools involved: by definition, a monolithic structure is implemented using a single development stack, which can limit the ability to choose the right tool for the task.

2.1.2 Service-oriented architecture

The OASIS Reference model for Service-Oriented-Architecture (SOA) describes the term as follows:

"a paradigm for organizing and utilizing distributed capabilities that may be under the control of different ownership domains." [31]

In other words, it can be defined as services that encapsulate business functionality, are independent of the state or context of other services, and can be accessed through well-defined interfaces. This pattern gained much momentum in the mid-2000s, and various companies adopted this architectural style to bring better reuse of business functionality into their organizations. [31, 37, 40, 46]


SOA provides a flexible architecture by decomposing large server-sides into smaller services. It delivers agility by enabling rapid development and modification of software and services that concern only a specific business functionality. Although the term SOA has been discussed over the last few years, no convincing answer has been reached regarding the granularity of its services. There are both advantages and disadvantages to fine- and coarse-grained services. The more coarse-grained the services are, the bigger the encapsulation and the lower the reusability. In contrast, the more fine-grained the services are, the higher the number of services needed to realize a function, which requires more effort to compose. [31, 29, 45, 37]

Adoption of SOA requires a highly distributed communication and integration backbone, which is often realized with an Enterprise Service Bus (ESB) [37, 40, 45]. The functionality provided by the ESB is an integration platform that supports a variety of communications over multiple transport protocols and delivers capabilities for the SOA [37, 45]. In other words, it provides a connectivity layer between different services. It is pertinent to point out that the ESB does not include any business logic; it only constitutes infrastructure for inter-connecting services. Figure 2.1 above depicts an illustration of an SOA with ESB utilization.

2.1.3 Microservice architecture

In recent years, this new architectural style, the MicroService Architecture (MSA), has advanced and gained a lot of attention [16, 50, 20].

The approach itself is still relatively new and unexplored, and there is as yet no academic consensus regarding the formal definition of MSA [20, 36]. However, it can be characterized by small, independent and loosely coupled components that encapsulate business functionality and through their interaction compose the desired systems [20, 15, 50, 21].

The usage of the MSA term was noted for the first time at an architectural workshop in 2011. Until then, this approach of building systems, which resembles the concepts of SOA, had been observed under different names. Netflix, for instance, used a similar approach but called it fine-grained SOA [23].


As indicated, this is an approach that emerged from the earlier SOA style. The two architectural styles have similar advantages as well as disadvantages. Both differ from a monolithic architecture in being composed of services that handle different functionality.

Distinguishing these two architectural styles is a difficult task, mainly due to the lack of a formal definition for MSA. However, MSA mainly differs from SOA in service size and scope; as a result, the services in MSA tend to be significantly smaller. Some also claim that the difference lies in the way they handle data storage: SOA tends to have shared storage, while MSA can have an independent storage for each service.

Others claim that there is a difference in their means of communication, because services in SOA often utilize an ESB, which can be a single point of failure, while services in MSA are less elaborate and use simple messaging systems. [33, 6, 8, 39]

In MSA, communication is established through lightweight message-based communication, which can rely on synchronous and asynchronous messaging. The Representational State Transfer (REST) architecture has become one of the most common alternatives, mainly due to the usage of JSON, which is a completely platform-independent format. Google's gRPC is another approach - a framework for cross-platform communication that uses protocol buffers for data serialization. The working principles of these communication patterns are clarified later in this chapter, followed by a discussion of the differences between them.

2.2 Software decomposition

Some of the main issues with decomposed architectural styles such as MSA and SOA have to do with choosing the level of granularity and the composition of services. In other words, the question "how are the decomposed services structured?" must be answered. Many factors have to be taken into consideration when decomposing a system to obtain a good result. The problem of decomposition itself, i.e., identifying modules, components, and services, is not new; the literature reviewed has already addressed this challenge [7].

One of the first steps in decomposing a monolithic architecture is the identification of components. The components needed for reconstruction of the system represent independent services. The most challenging part is identifying the overall functionality and splitting it into smaller components. During this phase one should consider the coordination and composition of components, in order to avoid extra complexity in the form of communication establishment [21].

The concepts of coupling and cohesion are of importance when it comes to reducing external communication. Coupling is the degree of interdependence between modules; in the context of this thesis, it will be seen as the degree of interdependence between the decoupled services. Highly coupled systems have service units that are dependent on each other, while loosely coupled systems are made up of highly independent units. Cohesion is the measure of how well the modules within a service fit together. When building evolutionary architectures for software systems that are able to evolve rapidly and safely, both coupling and cohesion must be considered.

One way to decompose is with respect to the organization's business capabilities [42]. A business capability can be seen as a unique building block of a business [28]. This approach requires an understanding of the business and the organization's structure and processes, so that the system will be structured according to the organization.

Another approach is to decompose with respect to domains [9].

Domain Driven Design (DDD) [17] provides methodologies to analyze the underlying domain, guiding the design by information flow and processes. DDD strategic patterns guide the creation of a Context Map - showing all bounded contexts, their relations to each other, and the contracts between them - which can lay the foundation for decomposition into services. The advantage of this approach is that it yields a system that effectively reflects reality [47].

Decomposition is not always a trivial task. If the examined system includes a stateful workload, then some consistency mechanism or orchestration between the services must be utilized. "Statefulness" denotes dependencies on conditions and entities at an instant in time. In other words, data consistency is required in decomposition; stateful services are dependent on these moments and have to behave accordingly. Even though the desired characteristic of MSA is statelessness, there are scenarios where it is not achievable. There are different solutions to this, for instance, the Saga pattern. The idea behind the Saga pattern is that transactions span multiple services, i.e., transactions that update the concerned value and publish/trigger the next transaction. If a violation occurs, a series of compensatory transactions is triggered to undo the initial transaction [4, 44]. Stateful systems will not be covered in this thesis, but their importance in system decomposition is vital and therefore mentioned.
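The compensation logic described above can be sketched as a small, generic saga runner. This is a toy illustration only; the function and step names are hypothetical and not taken from the thesis implementation:

```python
# Toy sketch of the Saga pattern: each step pairs an action with a
# compensating action that undoes it if a later step fails.
def run_saga(steps):
    """steps: list of (action, compensation) pairs; returns True on success."""
    done = []
    try:
        for action, compensation in steps:
            action()                   # perform the local transaction
            done.append(compensation)  # remember how to undo it
    except Exception:
        for compensation in reversed(done):
            compensation()             # undo completed steps in reverse order
        return False
    return True
```

If, say, a payment step fails after stock has been reserved, the reservation's compensation is triggered, restoring consistency without a distributed transaction.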

2.3 External communication pattern

Software systems can make use of different types of communication patterns between the services within a system. One can, among other things, make use of the REST style with HTTP and JSON; another option is Google's gRPC. It is difficult to point out which one is preferable, because their usage depends on the context and the intended outcome. The following subsections give insight into the different communication patterns.

2.3.1 REST

Representational State Transfer, REST, is an architectural style that defines a set of constraints which, as a whole, describe how resources are defined and addressed. Roy Fielding describes the style as:

”. . . a set of architectural constraints that, when applied as a whole, emphasizes scalability of component interactions, generality of interfaces, independent deployment of components, and intermediary components to reduce interaction latency, enforce security, and encapsulate legacy systems.” [18]

REST itself does not specify the underlying protocols, the transport protocol, or the serialization method to use. However, due to acknowledged advantages such as the simplicity of HTTP, direct support for request/response-style communication, and no requirement for an intermediate broker, the HTTP protocol combined with JSON has become the most widely used alternative.

When utilizing HTTP as a transfer protocol, one can make use of capabilities that work well with the REST style. For example, the HTTP verbs (e.g., GET, POST and PUT) all have well-understood meanings as to how they operate on resources; GET retrieves and POST creates resources. This facilitates the creation of different endpoints. For instance, only one endpoint is necessary for the methods createOrder and editOrder, since the operations are expressed through the HTTP verbs. Table 2.1 below illustrates this use-case.

Table 2.1: Example of endpoints utilizing HTTP verbs

    Method   URL
    POST     /order
    PUT      /order

The style works in such a way that when a resource is requested, based on the endpoint and the HTTP verb, the data is processed and the result is returned. The reason REST has become a widely used alternative for communication establishment is mainly its platform independence.
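As a minimal sketch of the verb-based dispatch described above, a single /order endpoint can serve both creation and modification. The handler and resource names here are hypothetical, not the thesis code:

```python
# Hypothetical in-memory handler for a single /order endpoint:
# the HTTP verb, not the path, selects the operation (cf. Table 2.1).
orders = {}

def handle(method, path, body):
    if path == "/order" and method == "POST":   # createOrder
        order_id = len(orders) + 1
        orders[order_id] = body
        return 201, {"id": order_id, **body}
    if path == "/order" and method == "PUT":    # editOrder
        orders[body["id"]] = body
        return 200, body
    return 404, {}
```

A real service would bind such a handler to an HTTP server; the point is only that one URL plus the verb semantics replaces several operation-specific endpoints.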

2.3.2 gRPC

gRPC is a Remote Procedure Call framework developed by Google. The gRPC framework can be used in a cross-platform environment, thus supporting a variety of languages. gRPC is based on definitions of services that specify methods that can be called remotely, with corresponding parameters and return types. In short, usage is carried out by implementing the interface on the server side, to handle client calls, and by implementing local objects known as stubs on the client side, which implement the same methods as the server. Figure 2.2 depicts an overview of the system as well as the correlation between the services. [13]


Figure 2.2: gRPC architectural overview [34]

When it comes to data serialization, gRPC uses protocol buffers by default - Google's mechanism for serializing structured data. This is presented in detail later in this chapter.

The gRPC framework follows HTTP semantics over HTTP/2, which uses binary rather than text to keep the payload compact and efficient. The framework has several advantages that can be utilized. There is, among other things, support for both synchronous and asynchronous communication. With gRPC, one can also make use of the streaming capabilities; gRPC even allows one to establish client, server and bidirectional streaming between services. [13]
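A service definition of the kind described above might look as follows in a ".proto" file. This is a sketch with hypothetical service and message names, not the definitions used in the thesis implementation:

```proto
syntax = "proto3";

// Hypothetical order service: methods callable remotely,
// with their parameter and return types.
service OrderService {
  rpc PlaceOrder (OrderRequest) returns (OrderReply);
}

message OrderRequest {
  string item = 1;
  int32 quantity = 2;
}

message OrderReply {
  int32 order_id = 1;
  bool accepted = 2;
}
```

Running the protocol buffer compiler over this file generates the server-side interface and the client-side stubs for the chosen language.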

2.3.3 Apache Kafka

Apache Kafka is a publish-subscribe distributed messaging system that logs messages as they arrive. The main components in Apache Kafka are producers, brokers, consumers, and topics. These components cooperate in such a way that a producer writes messages to a topic, which is a category that stores messages. The broker, a Kafka cluster in charge of maintaining topics (in scenarios with large sets of data, they are partitioned), takes care of the messages between producers and consumers. The messages are later received by consumers that subscribe to one or more topics. It is important to mention that Apache Kafka is not self-sufficient; it depends on an external service called Apache ZooKeeper. This is an infrastructure that acts as a scheduler and a shared database from which all connected components get messages, e.g., producers will pull an instance and get the address of the broker for a topic, and consumers will store the offset of the last message they have read. The relation between the mentioned components can be seen in figure 2.3. [19]

Figure 2.3: Component orchestration in Apache Kafka [19]

To be able to handle more data than can be managed by one machine, partitions are created and spread out over a cluster of machines. The partitioning, which enables the handling of larger sets of data, makes Apache Kafka scalable with regard to total data size. [19]
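The producer/broker/consumer flow described above can be modelled in a few lines of code. This is a toy in-memory model for illustration only, not the real Kafka client API:

```python
# Toy model of Kafka's log-based publish-subscribe: producers append to a
# topic's log, and each consumer tracks its own read offset.
class Broker:
    def __init__(self):
        self.topics = {}  # topic name -> ordered log of messages

    def publish(self, topic, message):
        """Producer side: append a message to the topic's log."""
        self.topics.setdefault(topic, []).append(message)

    def poll(self, topic, offset):
        """Consumer side: return messages after `offset` and the new offset."""
        log = self.topics.get(topic, [])
        return log[offset:], len(log)
```

Because the broker only appends to a log and consumers keep their own offsets, many independent consumers can read the same topic at their own pace - the property that makes the real Kafka's partitioned logs scale.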

2.4 Data Serialisation

In a connected system, such as a microservice architecture, servers and clients are dependent on data exchange to accomplish tasks. Serialization is the process of converting data into a format that can be transmitted over a network and then be deconstructed and reconstructed in a different environment.

The data serialization formats that will be covered in this section are the textual JavaScript Object Notation (JSON) and the binary Protocol Buffers.

2.4.1 JSON

JavaScript Object Notation (JSON) is a syntax for serialization of structured data. It uses human-readable text to transmit data objects containing pairs of names and values. The names are strings, while the values can be any of the supported primitives, an object, or an array. A JSON object is represented by curly brackets encapsulating an arbitrary number of pairs of names and values. [14]
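For illustration, a small JSON object with three name-value pairs (the field names here are arbitrary examples):

```json
{
  "name": "alpha",
  "msg": "hello",
  "items": ["a", "b"]
}
```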

2.4.2 Protocol Buffers

Protocol Buffers is an automated mechanism for serializing structured data. The structure of the data is defined once, after which specially generated source code can be used to access the structured data.

The process is carried out through specification of how the data is to be serialized. This is done by defining message types in what are known as ".proto" files. Each message has one or more uniquely defined fields, where each field has a pair of name and value. Once the messages are defined, one can run the protocol buffer compiler for the desired language over the ".proto" file. This generates data access classes which provide accessors for the specified fields as well as methods to serialize and parse the whole structure to and from raw bytes. [11]
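As a minimal illustration (the message and field names are hypothetical), a ".proto" definition could look like this:

```proto
syntax = "proto3";

// Each field has a name, a type and a unique field number.
message Data {
  string name = 1;
  string msg  = 2;
}
```

Running, for example, `protoc --java_out=. data.proto` would then generate the Java data access classes for this message.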

Messages serialized in a binary format are much lighter than in any textual format, which makes them faster to parse. On the other hand, messages encoded in a binary protocol format are not in a textual format, which is a more convenient and human-readable representation that is easy to debug and edit. [11]


Chapter 3

Related work

This chapter gives an overview of the research on MSA and its related fields, such as the effect of decomposition on a system and prior architectural styles. The related research is presented in chronological order based on publication date. This is done to give a historical overview of the progress of research within the topic of this thesis.

In [37], the technologies and approaches that consolidate the principles of SOA are reviewed. The authors conclude that a highly distributed communication and integrated backbone is required, which is provided by an ESB. The research focuses on ESB and the range of functionality it offers. The authors propose different approaches to achieve essential ESB requirements such as orchestration, integrity and security of messages, among other things.

In [29], Krammer et al. (2011) discuss the choice of service granularity and, from an economic perspective, present a decision model. The authors explain how granularity affects development and maintenance, which, for non-optimal solutions, contributes to higher costs. They show that the finer the granularity, the higher the number of services and the more effort directed towards composition. In contrast, coarse-grained services have higher implementation cost and lower reusability.

In [49], comparisons are conducted for different data serialization formats, with emphasis placed on serialization speed, data size, and usability. The serialization formats examined are XML, JSON, Thrift and Protocol Buffers. The first two are well-known text-based formats while the latter two are relatively new binary formats. The research work concludes that Protocol Buffers is more efficient regarding both speed and size in comparison to JSON. The author also concludes that the textual format JSON is more beneficial than the binary formats because it is parsable on any platform.

In [30], there is a discussion about techniques used in identifying microservices in a system by considering business areas. The author concludes that creating a dependency graph based on functionality and storage helps decomposition. The author also concludes that there are scenarios that require additional efforts with respect to migration to a set of microservices:

• Subsystems that share the same database table.

• A microservice that represents an operation that is always in the middle of another operation.

• Business operations that involve more than one business subsystem on a transaction scope.

In [40], the author compares different architectural characteristics, specifically between MSA and SOA. The author differentiates the styles based on the fact that MSA is built on a share-as-little-as-possible pattern, whereas SOA is a share-as-much-as-possible pattern. Furthermore, the author discusses that SOA, with its integrated ESB, runs the risk of a potential single point of failure when compared to MSA and its API-layered communication establishment.

In [25], Hassan and Bahsoon emphasize that there is a lack of academic consensus regarding the definition as well as the properties of the paradigm of microservices. The authors further argue that there is a lack of design patterns for microservices.

Problems of finalizing the level of granularity of microservices are also discussed in [25]. The trade-off in size versus the number of microservices shows that the optimal level depends on the scenario in which the system is operating. Trade-offs are presented as indicators for designing the level of granularity, along with a road-map to be used when extracting microservices.


In [38], the authors discuss communication between services, especially in the scope of the Internet of Things. They discuss the recognition of JSON as the most efficient way to transfer data. Furthermore, Google's Protocol Buffers is introduced as a good method for standardized communication between services. They stress the benefits of Protocol Buffers; for instance, they discuss the effect of reduced network load and reduced message size on the overall overhead.

In [32], the authors use a case study of a service-based system developed on the Domain Object (DO) approach. The case study demonstrates DO to be suitable for service-based components, and the authors also recommend that its applicability to MSA be examined.

In [16], the authors give an academic point of view on MSA, with attention placed on some practical issues and potential solutions. The authors also discuss how the interaction between small independent services can lead to complex network activity, and how the complexity could be exploited and lead to external attacks. They look at MSA from an evolutionary perspective rather than a revolutionary angle.

In [24], the authors stress that there is still a lack of a general and systematic approach to design decisions when modeling microservices; optimal boundaries and optimal granularity levels are yet unknown. They formulate the research as a potential run-time decision problem: since most uncertainties relate to the behavior of the system, the run-time context is more suitable for consideration. They introduce a granularity adaptation aspect that responds to run-time triggers and can be used when monitoring microservices at run-time.

In [35], Mustafa and Marx Gomez conduct an empirical experiment to observe the response time and CPU utilization for a specific scenario with a particular workload. They test functionality at the granularity levels of two and three services. The results show that a more fine-grained decomposition is preferable in terms of response time and CPU utilization. The work concludes only that it needs to be repeated in a broader perspective, with more metrics and at larger scales, to strengthen the outcome.


In [6], an automated process of identifying microservices is presented.

This is done through semantic analysis of the concepts in the input specification against a reference vocabulary. The identification process consists of decomposition by matching the terms used in the OpenAPI specification as input against a reference vocabulary. The idea is that operations that share the same reference concepts are highly cohesive and should, therefore, be together. Future work would comprise additional non-functional aspects, such as response time, memory allocation and all other aspects that could affect the decomposition process.

In [15], Dragoni et al. state the basic building principles of MSA. They show that scalability is one of the key features provided by this new paradigm. They further argue that scalability is required for performance reasons, to cope with high load. They also discuss scalability of MSA in comparison to a monolithic architecture and highlight the problem of how well the paradigm will integrate with emerging platforms such as IoT and the cloud, which will, according to them, most likely dominate in the near future.

In [26], the author explores the service granularity aspect of MSA because it may have a considerable impact on latency. This is because message passing increases as the services become finer-grained, consequently also affecting the response time. This is why the optimal level of granularity is of interest. The work also focuses on identification of factors affecting latency and response time. The author also discusses how finer-grained services increase the number of in-process invocations, which is tolerable within the same host or container. However, when requests are made to external services, additional vulnerabilities are introduced, i.e., by the network connection and the load on the service. The author concludes that services co-existing within the same host have better performance in comparison to service invocations to other hosts through network links.


Chapter 4

Research method

This chapter gives a comprehensive and scientific perspective of the research conducted. The chapter includes a description of the objective, the challenges, and the research question. The chapter also covers the methodology as well as the validity of the thesis.

4.1 Research objective

The overall objective of this thesis is to analyze the run-time impact when decomposing a monolithic architecture and adopting a decomposed architectural style such as the MSA. To achieve this objective, numerous aspects and challenges must be considered, for instance, the system decomposition and the communication establishment between the resulting services. Consideration of these factors should result in a fair and comprehensive report.

4.2 Research challenges

One of the major challenges faced in this research has to do with decomposing an architecture and establishing communication between the resulting services.

Targeting the challenges of system decomposition required investigation of, among other things, components and how they are coordinated. Because the possibilities to decompose are numerous, especially when it comes to component granularity, this thesis only focuses on a single business functionality. The selected functionality was decomposed and tested at different granularity levels. The outcome should be applicable to a system as a whole, since no constraints have been considered and since granularity levels and run-time impacts were the aspects in focus.

The communication establishment was another significant challenge.

As in the case of several loosely coupled services, the communication establishment plays a major role. There were many different protocols and frameworks available for the communication. In this thesis, both binary and textual formats have been investigated. Another aspect considered regarding communication establishment is the location of services. They could be placed in different geographical places which, depending on the distance between them, would introduce latency. It is an important aspect that cannot be forgotten. However, this thesis has been narrowed down and only considers local simulations.

4.3 Research question

Based on the challenges and previously conducted research, and to narrow down the research to be conducted, the research question was specified.

• What is the impact on the system run-time behavior when de- composing a business functionality and adopting an architec- tural pattern such as MSA?

4.4 Scientific Perspective

The field of system architectural patterns and communication styles is a broadly researched area; thereby, a large number of publications and research articles were evaluated. The previous evaluations consisted of statistical data, which required making comparisons among the alternative outcomes.

To answer the research question, a qualitative research technique was adopted as the most appropriate method. This is a form of research technique defined as being exploratory and with the ability to answer the questions of why and how [10]. This approach was found to be the most appropriate when it comes to interpretation of the information within the field of this study. However, because a qualitative approach is not effective in analyzing statistical data, a quantitative research method was also utilized. This method makes use of empirical study through the use of statistical means to investigate observable phenomena [3]. Therefore, a combination of the two was required to conduct this research.

4.5 Research methodology

The research methodology that was used to fulfill the objective of this thesis is outlined in figure 4.1.

The research started with a project conducted together with an industrial partner, IBM. The project was related to system complexity and identification of factors that have an impact on the response time when adopting decomposed architectural styles such as the MSA. By studying literature about different architectural styles and the adoption of decomposed architectural styles, challenges were identified and a research question was formulated.

The literature review investigated the current state-of-the-art methods, trends and their corresponding challenges. The research looked at different architectural styles, decomposition patterns and different ways to establish communication. The limitations identified in existing research resulted in this thesis. To narrow down the scope of the thesis, the decision was made to only look at a sub-problem: the relationship between granularity level and system impact in terms of response time.

With the research question and its challenges specified, the next phase was to propose a solution. Once again, major related works needed to be thoroughly studied in order to be able to implement and analyze their results.

The actual implementation and simulation were carried out as an iterative process. Drawbacks were found, and adjustments and refinements were made before re-implementation.

4.6 Validity

This section focuses on the validity of this research work; the three different types of validity covered are construct validity, internal validity and conclusion validity.


4.6.1 Construct validity

Construct validity is achieved by an accurate representation of real-world scenarios through theoretical constructs. It is an indication of how well an experiment measures what it claims to do [52].

Construct validity is an important aspect to consider when validating the developed environments. In this thesis, this validity was used to examine different parameters along with their corresponding output to see if they follow expectations according to theory.

4.6.2 Internal validity

Internal validity is related to the presence of confounding variables - variables that have not been taken into account and that have an effect on the dependent variables. Such variables affect the outcome by introducing bias and increasing variance [48].

This thesis conducted experiments through simulation of environments with the ability to limit and change the number of parameters. This enabled experimenting with parameters such as the number of requests, parallel users and different communication protocols, and measuring their specific effect. Further actions could have been taken, such as investigating hardware-specific parameters; however, that is a suggestion for future work. The actions that have been taken capture the effect of different parameters to reduce the chance of confounding variables and obtain high internal validity.

4.6.3 Conclusion validity

Conclusion validity is a measurement of the extent to which conclusions about relationships of variables are reasonable. This is strongly related to the data collected and the analysis that was carried out on them [51].

In the end, there are essentially two possible outcomes from the data: one can either conclude a relationship or not. In this thesis, this has been reflected through detailed measurement of different variables, both dependent and independent, to reduce the chances of incorrect conclusions being drawn. Comments from researchers and engineers, with both academic and industrial backgrounds, have also been taken into account in drawing the conclusions.


Figure 4.1: Research methodology.


Chapter 5

Implementation

This chapter gives an overview of how the implementation was carried out along with the frameworks used. The different phases and the decision making within them are also clarified.

5.1 Granularity and communication

The selection of both communication patterns and granularity levels (see figures 5.1 and 5.2) was a result of a pre-study. These choices had to facilitate the analysis and conclusions necessary to fulfill the objective of the thesis.

Figure 5.1: Chosen communication patterns

Figure 5.2: Chosen granularity levels to implement


5.2 Modelling of the architectural cases

To achieve the objective of the thesis - investigating the run-time impact when decomposing an architecture into loosely coupled services - a scenario was modeled at different granularity levels. We modeled a business functionality within an e-commerce scenario, order placement to be specific. The functionality itself was not the essential aspect; the decomposition and communication patterns between the services were the subjects of consideration. The analysis and outcome should apply to any business functionality regardless of the scenario. This is due to the fact that no constraints or component-specific executions have been imposed. We were only interested in the granularity level and the communication establishment.

The choice of an e-commerce scenario is explained by the fact that the architectural components should be recognizable and self-explanatory; the same applies to the relations between the decomposed components.

The modeled architecture in figure 5.3 constitutes the base case - a monolithic architecture used as a reference point for comparisons with the other decomposed architectures. As seen in figure 5.3, it consists of a single service containing all of the core logic; all of the functionality is within the service and there is no external communication.

Figure 5.3: Monolithic architecture of order-placement

The second modeled architecture, as shown in figure 5.4, is a decomposed structure derived from the base case in figure 5.3. As seen in the figure, the inner functionality is divided into two cohesive services. To fulfill the initial order placement functionality, coordination and communication among the loose services are required.

As visualized in both figure 5.4 and 5.5, the decomposed architectures consisting of more than one service utilize an API gateway. The purpose was to have a single unified entry point for the external consumers. It also creates an abstraction level and is independent of the number of internal services.

Figure 5.4: Decomposed architecture consisting of two services

The modeled architecture in figure 5.5 constitutes the most fine-grained case. All of the extracted services were required to place an order; the difference, in comparison to the previous examples, is that each service is accountable for less computation and its responsibility is narrowed down.

Architectures with granularity levels between the one depicted in figure 5.4 and the one in figure 5.5 have also been modeled. The modeled granularity levels are two, four, six and eight. They all followed the logic of decomposing the previous architecture into smaller cohesive services with respect to business capabilities.

The relations between the services in figure 5.5 are depicted in figure 5.6, which shows how the services coordinate internally and the flow of messages to execute the intended functionality.


Figure 5.5: Decomposed architecture consisting of eight services

Figure 5.6: Relations between services in figure 5.5

5.3 Implementation and realization

The various architectures that have been implemented were all modeled in section 5.2, and all of them consisted of a varied number of services. The architectures and their corresponding services were all hosted through the Spring Boot framework - a Java-based framework for building web and enterprise applications - using Java servlets. To sum things up, Java servlets are server-side programs that run inside a Java-capable server such as Apache Tomcat. Tomcat is essentially an open-source Java servlet container developed by the Apache Software Foundation. The services were all implemented in Java, and, as mentioned, the utilized communication patterns were REST style with HTTP and JSON, and Google's gRPC. The reason why other patterns such as Apache Kafka were not implemented is that they are not suitable for the modeled case. The main scope of Apache Kafka is publish/subscribe systems, which is the opposite of the request/response style that the modeled case was built upon.

The primary interest when decomposing a system lies in the run-time impact, an aspect that is hard to quantify since it is dependent on the functionality range as well as the size. The computation within a service could hypothetically be seen as constant time; in this case, the computation time was neglected. Therefore, only the communication establishment is considered, which gives a more general result that applies to other scenarios. Thereby, the implemented services only consisted of message-passing logic.

The messages sent between the services consisted of an object in the magnitude of five characters. Different sizes of the transmitted data may be of interest, as they could affect the serialization time, but previous research has already explored serialization/deserialization of different sizes [49, 38]. Therefore, no consideration was made for sending different sizes of data. The choice of a consistent data size provided reproducible observations. The format of a message sent between services is depicted in table 5.1.

Table 5.1: Format of data sent between services

Data
  Name ( char(5) )
  Msg  ( char(5) )

In theory, a client could access each service directly through the respective endpoint. However, this option has its limitations; one of the main problems is the imbalance between the needs of the client and the exposed services. For instance, if a client wants to place an order, given that the system is constructed as the model in figure 5.5, the client has to make eight separate requests to obtain the intended result. In more complex scenarios, for instance, when rendering a page that requires several functionalities each consisting of multiple services, it would result in a high number of requests. Over a LAN it is conceivable, but over the public Internet or a mobile network, it would be impractical.

Internally, the communication is established through a point-to-point style. The services communicate directly without a gateway mediator, due to the presence of the API gateway, whose purpose is to handle external traffic. As mentioned, the two messaging patterns that were examined are REST style with HTTP and JSON, and also gRPC. Regardless of the internally established communication, the functionality is accessed through an HTTP request returning a JSON object.

The communication established between the services using HTTP is depicted in figure 5.7. The communication is carried out through initialization of a TCP channel between the dedicated services. The calling service, also known as the client, sends an HTTP GET request to the targeted service; the targeted service sends a response and closes the TCP connection.
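This request/response flow can be sketched in plain Java using only the standard library. The endpoint path and payload below are hypothetical, and a real deployment would use the Spring Boot services described above instead of this embedded server; the sketch only illustrates the TCP channel, the GET request and the returned JSON body.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.URL;

public class ServiceCallSketch {

    // Simulates one service calling another over HTTP and returning the JSON body.
    static String callService() throws IOException {
        // "Target service": a minimal HTTP endpoint returning a JSON object.
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/order", exchange -> {
            byte[] body = "{\"name\":\"alpha\",\"msg\":\"hello\"}".getBytes("UTF-8");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();

        // "Calling service": opens a TCP connection and sends an HTTP GET request.
        int port = server.getAddress().getPort();
        HttpURLConnection conn = (HttpURLConnection)
                new URL("http://localhost:" + port + "/order").openConnection();
        conn.setRequestMethod("GET");

        StringBuilder sb = new StringBuilder();
        try (InputStream in = conn.getInputStream()) {
            int c;
            while ((c = in.read()) != -1) sb.append((char) c);
        }
        conn.disconnect();  // the client closes the connection
        server.stop(0);
        return sb.toString();
    }

    public static void main(String[] args) throws IOException {
        System.out.println(callService());
    }
}
```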

Worth mentioning is that the services are all hosted locally; thereby, the latency factor is neglected. However, it is possible to locate services in geographically distinct locations. Hypothetically, this would affect the overall impact: in line with increasing granularity, the response time would increase. A distributed version of the implementation is planned as future work.

Figure 5.7: Service communication with HTTP and JSON

The life cycle of a gRPC communication establishment is depicted in figure 5.8. There are different types of communication patterns when using gRPC; in this case, the unary style was implemented - the client sends a single request and gets back a single response.

Figure 5.8: Service communication with gRPC


5.4 Platform specification

The experiments were performed locally on a machine with an Intel Dual-Core i5 running at 2.3 GHz and 8GB RAM. The machine ran on macOS High Sierra version 10.13.4 operating system.

5.5 Measurement tool and parameters

The testing and gathering of results were performed with the Apache JMeter framework - an open-source software for load testing functional behavior and measuring performance. JMeter enables measurement of many different server/protocol types, but for this specific case, the HTTP protocol was used.

To fulfill the objective of the thesis - investigating the run-time impact when decomposing an architecture into loosely coupled services - parameters to be tested were chosen and are presented in the table below:

Table 5.2: Parameters for performance measurement

Parameter           Description
Samples             Total number of executions.
Internal requests   Number of internal requests between the decomposed services.
Average             Average elapsed time.
Min                 Minimum elapsed time.
Max                 Maximum elapsed time.
90% Line            90th percentile; the value below which 90% of the samples fall.
Standard Deviation  A measure of how spread out the numbers are.
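To clarify the 90% Line parameter, the percentile can be computed from a set of sample times with the nearest-rank method, as sketched below (JMeter's exact computation may differ slightly, and the sample values here are made up):

```java
import java.util.Arrays;

public class PercentileSketch {

    // Nearest-rank percentile: the value below which p percent of the samples fall.
    static long percentile(long[] samples, double p) {
        long[] sorted = samples.clone();
        Arrays.sort(sorted);
        int rank = (int) Math.ceil(p / 100.0 * sorted.length);  // 1-based rank
        return sorted[Math.max(0, rank - 1)];
    }

    public static void main(String[] args) {
        long[] elapsedMs = {3, 2, 5, 7, 4, 3, 2, 6, 3, 4};  // hypothetical sample times
        System.out.println(percentile(elapsedMs, 90.0));    // prints 6
    }
}
```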


Chapter 6

Results

This chapter presents the performance results from the experiments conducted. The modeled and implemented architectures along with the different communication patterns have been tested, and the results obtained are presented in tables.

6.1 Results from conducted experiments

The outputs for the different scenarios are all presented in tables 6.1-6.6 below. The first three tables, 6.1-6.3, display the behavior when the business functionality was tested with a load of ten requests. The three following tables, 6.4-6.6, show the behavior with a load of one thousand requests.

The tables consist of output from both the monolithic and decomposed architectures, where the two communication styles REST and gRPC have been utilized.

Table 6.1: Monolithic architecture - 10 Requests

Services  Samples  90% Line  Avg (ms)  Min (ms)  Max (ms)  Std. Dev.
1         10       5         3         2         7         1.57


Table 6.2: Decomposed HTTP and JSON - 10 Requests

Services  Samples  Internal requests  90% Line  Avg (ms)  Min (ms)  Max (ms)  Std. Dev.
2         10       1                  10        9         7         20        3.55
4         10       3                  16        16        11        39        9.34
6         10       5                  28        26        22        72        9.21
8         10       7                  51        48        44        102       17.59

Table 6.3: Decomposed with gRPC - 10 Requests

Services  Samples  Internal requests  90% Line  Avg (ms)  Min (ms)  Max (ms)  Std. Dev.
2         10       1                  5         4         2         7         1.03
4         10       3                  8         6         5         17        2.96
6         10       5                  15        12        9         24        5.21
8         10       7                  15        14        11        41        5.41

As displayed in tables 6.1-6.3, the decomposition and the higher granularity affect the response time. With the use of eight services, within the 90th percentile and using gRPC, the penalty compared to the monolithic scenario was measured to be 10ms. With the use of REST, the penalty was recorded to be 46ms. The most interesting change in behavior is when the services were further decomposed from four to six, with three to five internal requests: the run-time almost doubled. This major difference can be seen for both gRPC and REST.

Table 6.4: Monolithic architecture - 1000 Requests

Services  Samples  90% Line  Avg (ms)  Min (ms)  Max (ms)  Std. Dev.
1         1000     2         1         <1        11        0.79


Table 6.5: Decomposed HTTP and JSON - 1000 Requests

Services  Samples  Internal requests  90% Line  Avg (ms)  Min (ms)  Max (ms)  Std. Dev.
2         1000     1                  8         6         4         40        1.70
4         1000     3                  13        10        7         80        3.54
6         1000     5                  46        35        16        158       11.69
8         1000     7                  58        64        39        364       22.19

Table 6.6: Decomposed with gRPC - 1000 Requests

Services  Samples  Internal requests  90% Line  Avg (ms)  Min (ms)  Max (ms)  Std. Dev.
2         1000     1                  4         2         1         18        1.22
4         1000     3                  6         3         2         37        1.79
6         1000     5                  14        10        4         74        7.82
8         1000     7                  15        9         5         129       6.03

Tables 6.4-6.6 show that the penalties within the 90th percentile for the different granularity levels are similar to the scenario with a load of ten requests. Once again, it can be seen that, when going from four to six services, and three to five internal requests, there is a bigger difference in run-time in comparison to the other levels.

However, one can also observe higher maximum and lower minimum response times. This could most likely have been improved with some form of scaling, that is, replication of services and distribution of load.

Looking at the two tested scenarios, loads of ten and a thousand requests, it can be deduced that usage of gRPC, in comparison to REST, performed more than three times better. It can also be seen that the step beyond four services and three internal requests has a greater impact on the outcome than the other steps. It is important to clarify that these numbers only display the impact on response time, which is a negative impact. However, it must be noted that higher granularity improves other non-functional parameters; more about this is discussed in the conclusions in chapter 7.


Chapter 7

Conclusion

7.1 Discussion

This thesis empirically shows that the decomposition of a business functionality does indeed impact the run-time negatively - an impact that increases in line with the granularity level. However, using this as a basis for not adopting a decomposed architecture would be misleading.

First of all, to what extent does decomposition of business functionality impact the run-time? The tables in section 6.1 outline the impact for different granularity levels. It is possible to confirm that Google's gRPC performs better than the REST style with HTTP and JSON; this has also been confirmed by previous works, as stated in chapter 3.

Since the results show that gRPC performed better in comparison to REST with HTTP and JSON, it can be argued that, among the examined communication patterns and with run-time as the deciding aspect, gRPC is preferable.

Comparing the run-time between the monolithic and the decomposed architectures displays differences that can be seen as relatively small; of course, the magnitude is relative and depends on the context. It is possible to reckon that the penalty with the usage of gRPC, within the 90th percentile and with a granularity of eight services, is 10ms. Simultaneously, usage of REST gives a penalty of 46ms.

To make sense of these numbers, they can be related to how the web behaves today. The HTTP Archive, an archive storing data crawled from the most popular sites, reports that the time for onLoad (loading a page's resources and their dependent resources) within the 90th percentile is 16.3 seconds on desktop and 42.3 seconds on mobile. Relative to these baselines, the observed delay with gRPC is 0.06% on desktop and 0.02% on mobile, and with REST it is 0.28% on desktop and 0.11% on mobile. In non-time-critical scenarios this is acceptable. It cannot be concluded that these baseline numbers stem exclusively from monolithic architectures, but they give a fair indication of how the web behaves and allow putting the generated result in context. [2, 12, 27]
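The relative overheads can be reproduced with a short calculation; the penalties and onLoad times below are the figures cited in this section:

```python
# Relative overhead of the measured decomposition penalties (90th percentile,
# eight services) against the HTTP Archive onLoad times cited above.
GRPC_PENALTY_MS = 10
REST_PENALTY_MS = 46
ONLOAD_DESKTOP_MS = 16_300   # 16.3 s
ONLOAD_MOBILE_MS = 42_300    # 42.3 s

def relative_overhead(penalty_ms: float, baseline_ms: float) -> float:
    """Penalty expressed as a percentage of the baseline load time."""
    return 100.0 * penalty_ms / baseline_ms

print(f"gRPC: {relative_overhead(GRPC_PENALTY_MS, ONLOAD_DESKTOP_MS):.2f}% desktop, "
      f"{relative_overhead(GRPC_PENALTY_MS, ONLOAD_MOBILE_MS):.2f}% mobile")
print(f"REST: {relative_overhead(REST_PENALTY_MS, ONLOAD_DESKTOP_MS):.2f}% desktop, "
      f"{relative_overhead(REST_PENALTY_MS, ONLOAD_MOBILE_MS):.2f}% mobile")
```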

Based on the results obtained, it can be concluded that when going from four to six services, and three to five internal requests, the run-time impact almost doubled, which is not the case between the other granularity levels. This behavior held for both of the conducted load tests: both ten and a thousand requests showed this distinct run-time impact. From this it can be interpreted that there is some kind of threshold above which the run-time is impacted more severely, possibly due to the communication establishment: the initially requested service calls other services and must await their responses, and the impact appears to grow more sharply when the chain exceeds four services.
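The chaining effect can be illustrated with a toy latency model; the per-hop overhead and per-service work below are illustrative assumptions, not the measurements from chapter 6:

```python
# Toy model of response time in a chain of synchronous service calls.
# Each internal request adds a fixed communication overhead on top of the
# work each service performs; the figures used are illustrative only.
def chain_response_time_ms(services: int, hop_overhead_ms: float,
                           service_work_ms: float) -> float:
    """Total latency when each service does its work and then calls the next."""
    internal_requests = services - 1
    return services * service_work_ms + internal_requests * hop_overhead_ms

# The added communication cost grows linearly with the chain length.
for n in (2, 4, 6, 8):
    print(n, chain_response_time_ms(n, hop_overhead_ms=1.5, service_work_ms=2.0))
```

Note that such a linear model cannot by itself explain the observed threshold around four services; it only shows how per-hop overhead accumulates in a synchronous call chain.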

As highlighted, the time penalty is relatively small. However, if a system consists of multiple functionalities that are all decomposed into high granularity levels, the accumulated impact could indeed be of concern. It is also important to acknowledge that in some cases there may be performance requirements to consider; depending on how they are devised, and if the run-time must stay below some strict limit, decomposition might not be possible. Once again, this observation is entirely context-dependent. In scenarios where the business functionality is not decomposed into such high granularity levels and the system does not consist of many functionalities, the overall run-time impact will be relatively small and possibly acceptable in exchange for the benefits that a decomposed architecture entails.

As mentioned, the choice of applying a decomposed architecture like MSA may not rest solely on run-time impact. There are cases where the penalty is accepted in exchange for, for example, faster development cycles and better re-usability. Consider working with a monolithic architecture where the teams are divided by business functionality: if a new feature or fix is to be released, the whole system must be rebuilt and redeployed. This means time must be spent on coordination among all teams and, in some cases, on correcting other parts that are necessary for the system to perform. When utilizing MSA and its loosely coupled services, this is not the case. Services can be rebuilt and redeployed independently; the only coordination required is ensuring that endpoints and data consumed by other services are not deprecated. Another essential difference between decomposed and monolithic architectures is the ability to scale. A monolithic architecture scales vertically, by adding more computational power to a single deployment, whereas a decomposed architecture can scale specific components horizontally by running replicas behind a load balancer, which is the stronger option. With this said, the run-time impact is indeed of interest, but there are other aspects that must be considered.
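The horizontal scaling of a single component can be sketched as a minimal round-robin dispatcher over service replicas; the replica addresses are hypothetical, and in practice this role is played by a dedicated load balancer such as nginx or HAProxy:

```python
from itertools import cycle

# Minimal round-robin balancer over replicas of one decomposed service.
# The replica addresses are hypothetical placeholders.
class RoundRobinBalancer:
    def __init__(self, replicas):
        self._replicas = cycle(replicas)

    def next_replica(self) -> str:
        """Return the address that should receive the next request."""
        return next(self._replicas)

lb = RoundRobinBalancer(["order-svc-1:8080", "order-svc-2:8080", "order-svc-3:8080"])
targets = [lb.next_replica() for _ in range(4)]
print(targets)  # cycles through the three replicas, then wraps around
```

The point of the sketch is that only the hot component needs more replicas; in a monolith, the same load would require scaling the entire deployment.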

7.2 Recommendation

With respect to the discussion above and with the support of the run-time behavior observed for the different granularity levels, the outcome is that a decomposed architectural style is recommended. As concluded, a decomposed architecture does affect the run-time negatively, but, as discussed, the magnitude is small relative to its context and to how the web behaves.

Furthermore, based upon the results obtained, a business functionality should not be decomposed into a granularity level higher than four services, since there appears to be a threshold beyond which further decomposition impacts the run-time more severely. A decomposed system will most likely result in far more than four services overall, but a single business functionality can have an unwanted impact on the run-time when decomposed further.

As long as the use case has no strict run-time requirements that make run-time the exclusive decision-making factor, decomposed styles have advantages that make them preferable, such as better scaling capabilities and more maintainable code bases.

As also mentioned in the discussion, a decomposed architecture with loosely coupled services gives development teams the ability to independently rebuild and redeploy features and fixes. Without this, the time needed for coordination would be considerable.

However, it is important to point out that the choice taken is always a trade-off which is dependent on the use case. Many of the mentioned
