
DEGREE PROJECT IN COMPUTER SCIENCE AND ENGINEERING, SECOND CYCLE, 30 CREDITS

STOCKHOLM, SWEDEN 2018

Container overhead in microservice systems

VILHELM FRIÐRIKSSON

KTH ROYAL INSTITUTE OF TECHNOLOGY


Container overhead in microservice systems

Vilhelm Friðriksson

2018-10-01

Master’s Thesis

Examiner

Gerald Q. Maguire Jr.

Academic adviser

Anders Västberg

KTH Royal Institute of Technology

School of Electrical Engineering and Computer Science (EECS)
Department of Communication Systems

SE-100 44 Stockholm, Sweden


Abstract

Containers have been gaining popularity in recent years due to their ability to provide higher flexibility, higher reliability and dynamic scalability to enterprise software systems. In order to fully utilize containers, software developers aim to build their software using a microservice architecture, meaning that instead of working on a single large codebase for the whole project, the software is split into smaller units. These microservices can be deployed in their own containers instead of the traditional virtual machine setup, where a server has to be configured with all necessary dependencies. Moving away from the monolithic software architecture to containerized microservices is bound to bring performance penalties due to increased network calls between services and container overhead. The integration must therefore be carefully planned in order to fully utilize the container setup while minimizing the overhead. The purpose of this thesis project was to measure how much overhead can be expected due to containers in an enterprise environment. By using a combination of virtual machines and Docker containers, a microservice system was deployed with four different deployment strategies and the system’s performance was measured by analyzing request response times under various loads. The services were made to run on a single server and on multiple servers, with and without Docker. The performance measurements showed that the system performed worse in every case when Docker was used. Furthermore, the results showed that Docker can have a significant negative impact on performance when there is a heavy load on the system.

Keywords: Microservices, Containers, Docker, Virtual machines, Cloud computing


Sammanfattning

Containers har blivit populärare under de senaste åren tack vare deras förmåga att ge högre flexibilitet, högre tillförlitlighet och dynamisk skalbarhet för företagsprogramvarusystem. För att fullt ut kunna använda containers har programutvecklarna för avsikt att bygga sin programvara med hjälp av mikroservicearkitekturen, vilket innebär att programvaran delas upp i mindre enheter istället för att arbeta på en enda stor kodbas för hela projektet. Dessa mikroservices kan distribueras i sina egna containers istället för den traditionella virtuella maskininstallationen, där en server måste konfigureras med alla nödvändiga beroenden.

Att flytta sig från monolitisk mjukvaruarkitektur till containeriserade microservices kommer att få prestandaförsämringar på grund av ökade nätverksanrop mellan tjänster och container-overhead. Integrationen måste därför noggrant planeras för att fullt ut utnyttja containeruppsättningen och minimera overhead. Syftet med detta avhandlingsprojekt var att mäta hur mycket overhead kan förväntas på grund av containers i en företagsmiljö. Genom att använda en kombination av virtuella maskiner och Docker-containers, implementerades ett microservices-system med fyra olika implementeringsstrategier och systemets prestanda mättes genom att analysera anropens svarstid under olika belastningar. Tjänsterna gjordes för att köras på en enda server och på flera servrar, med och utan Docker.

Prestandamätningarna visade att systemet var sämre i alla fall när Docker användes. Dessutom visade resultaten att Docker kan ha signifikant negativ inverkan på prestanda när det är tung belastning på systemet.

Keywords: Mikroservices, Containers, Docker, Virtuella maskiner, Molntjänster


Acknowledgements

I would like to express my sincere gratitude to my examiner, Professor Gerald Q. Maguire Jr., who always provided thorough feedback when asked and helped set the course straight when I found myself in trouble. I consider myself very lucky to have had the chance to work under his guidance.

Special thanks to the people at Betsson Group for giving me the chance to work on this project, all the help they provided and for making me feel welcome at Betsson.

Finally, heartfelt thanks to my parents, Friðrik Vilhelmsson and Ingibjörg María Ingvadóttir, for their endless support and encouragement.

Stockholm, September 2018 Vilhelm Friðriksson


Contents

1 Introduction 1

1.1 Problem Description . . . 3

1.2 Purpose . . . 3

1.3 Goals . . . 3

1.4 Research methodology . . . 4

1.5 Delimitations . . . 4

1.6 Outline . . . 5

2 Background 7

2.1 Betsson Group . . . 7

2.2 Software architecture . . . 8

2.2.1 Monolithic architecture . . . 8

2.2.2 Microservices . . . 9

2.3 Virtual Machines . . . 10

2.4 Containers . . . 12

2.5 Docker . . . 12

2.6 Using Docker . . . 16

2.7 Orchestration . . . 17

2.8 Cloud computing . . . 18

2.9 Technology stack . . . 19

2.9.1 .Net Core . . . 20

2.9.2 Couchbase . . . 20

2.9.3 RabbitMQ . . . 20

2.9.4 Taurus . . . 21

2.10 Zipkin . . . 22

2.11 Nmon . . . 22

2.12 Previous Work . . . 23

2.12.1 Virtualization . . . 23

2.12.2 Microservices . . . 25

2.13 Summary . . . 26

3 Methodology 27

3.1 The Test System . . . 27


3.2 System setup . . . 29

3.3 Test scenarios . . . 30

3.4 Measurements . . . 32

4 Results 35

4.1 Taurus results . . . 35

4.1.1 Single server setup . . . 35

4.1.2 Multiple servers setup . . . 38

4.2 Zipkin . . . 41

4.3 Summary . . . 44

5 Discussions 45

5.1 Time measurements . . . 45

5.2 Network calls . . . 48

5.3 Nmon . . . 50

5.4 Resource contention . . . 51

5.5 Previous research . . . 51

6 Conclusions and future work 53

6.1 Conclusions . . . 53

6.2 Limitations and future work . . . 54

6.2.1 Different hosting environments . . . 54

6.2.2 Container scheduler . . . 55

6.2.3 Grouping containers together . . . 55

6.2.4 Network configuration . . . 56

6.3 Reflections . . . 56

References 59


List of Figures

2.1 Microservices can be deployed independently based on workload . . . . 10

2.2 Hypervisors of type 1 and 2 . . . 11

2.3 Docker containers on a host machine . . . 13

2.4 Virtual machine and container setups . . . 15

3.1 Overview of the test system . . . 28

3.2 System architecture with services running on multiple servers. . . . 30

3.3 System architecture with services running on a single server. . . . 30

4.1 Log values of the median response times when using a single server . . 36

4.2 CDFs of request times when using a single server with Docker . . . . . 37

4.3 CDFs of request times when using a single server without Docker . . . 37

4.4 Log values of the median response times when using multiple servers . . . 38

4.5 CDFs of request times when using multiple servers with Docker . . . 39

4.6 CDFs of request times when using multiple servers without Docker . . . 40


List of Tables

2.1 Orchestrator tasks . . . 18

3.1 m4.xlarge instance specification. . . 29

4.1 Taurus load testing results for a single server with Docker . . . . 36

4.2 Taurus load testing results for a single server without Docker . . . . . 36

4.3 Taurus load testing results for multiple servers with Docker . . . . 38

4.4 Taurus load testing results for multiple servers without Docker . . . . 39

4.5 Docker related overhead for a single server . . . . 40

4.6 Docker related overhead for multiple servers . . . . 41

4.7 Zipkin trace results for the User service when using a single server . . . 42

4.8 Zipkin trace results for the User service when using multiple servers . . 42

4.9 Zipkin trace results for the Blacklist service when using a single server . . . 42

4.11 Zipkin trace results for the Geolocation service when using a single server . . . 42

4.10 Zipkin trace results for the Blacklist service when using multiple servers . . . 43

4.12 Zipkin trace results for the Geolocation service when using multiple servers . . . 43

5.1 Measured waiting time for the blacklist service when using a single server . . . 46

5.2 Measured request time for the blacklist service when using a single server . . . 46

5.3 Measured waiting time for the blacklist service when using multiple servers . . . 46

5.4 Measured request time for the blacklist service when using multiple servers . . . 47

5.5 Measured waiting time for the geolocation service request time when using a single server . . . . 47

5.6 Measured waiting time for the geolocation service when using multiple servers . . . 48

5.7 Measured network request time for blacklist service when using a single server . . . 48

5.8 Measured network request time for blacklist service when using multiple servers . . . 49

5.9 Measured network request time for geolocation service when using a single server . . . . 49

(10)

5.10 Measured network request time for geolocation service when using multiple servers . . . 49

5.11 Measured Docker overhead of network calls to the blacklist service . . . 50

5.12 Measured Docker overhead of network calls to the geolocation service . . . 50


List of Listings

2.1 Dockerfile example . . . 16

2.2 Docker build command . . . 16

2.3 Docker run command . . . 16

2.4 Example Taurus configuration file . . . 21

2.5 Running Taurus test . . . 21

2.6 Installing nmon . . . 22

2.7 Starting nmon . . . 22

3.1 Taurus load test file . . . 32


List of acronyms and abbreviations

AMQP Advanced Message Queuing Protocol

API Application Programming Interface

APM Application Performance Management

AWS Amazon Web Services

CDF Cumulative Distribution Function

CSV Comma Separated Values

HTTP Hypertext Transfer Protocol

IaaS Infrastructure as a Service

JSON JavaScript Object Notation

MQTT Message Queuing Telemetry Transport

NoSQL Not only SQL

PaaS Platform as a Service

RPC Remote Procedure Call

RPS Requests Per Second

SaaS Software as a Service

SDK Software Development Kit

STOMP Streaming Text Oriented Messaging Protocol

UI User Interface


Chapter 1

Introduction

The Internet has come a long way since its early days and today has become an integral part of modern society. Web services are expected to run every second of the day and be available to the whole world. Running services that people rely on, whether it is for work or play, requires plans for reliability and scalability. In addition to being robust and reliable, users also expect services to respond instantly, no matter what the current load is. As the quality requirements for web services have gotten more demanding, so has the complexity of running them. Realizing the importance of meeting these increasing demands, engineers have continuously worked to improve how their services are run.

The latest trend in the industry to realize fast, reliable, and scalable systems is to break software down into so-called microservices, where each service has a clear and well defined functionality. While it may seem to the user that he or she is interacting with one service, underneath there are multiple services interacting with each other to generate a response. Microservices have gained popularity because they make it possible to speed up both operations and development. It is easier to scale, upgrade, and handle failures when a service is composed of many isolated entities instead of being monolithic. However, the complications of running such a service increase as the required capacity of the service grows, because the system realizing the service consists of hundreds or thousands of independent services that need to be configured and monitored.

Containers are another technology that has been gaining popularity in recent years. Containers offer a way to isolate software from other processes running in the same environment. One host can run multiple containers which all are isolated from each other. Additionally, these containers can include everything that is needed to run a service (i.e., the code, system tools, packages, and settings), thus escaping the dependency hell [1] that operators have had to deal with when the requirements of multiple applications running on the same host could cause conflicts. Today it has become a common practice to combine microservices and container technology so


that the services are developed, tested, and deployed using containers.

Lastly, virtual machines have played an important part in realizing web services for a long time. They enable operators to quickly set up new (virtual) machines with the hardware and network specifications they require without having to go through the hassle of dealing with bare-metal servers. Virtual machines are the foundation of many modern web services, as in many cases the containerized microservices are running on top of virtual machines. This is the way the main cloud providers, such as Amazon Web Services and Google Cloud Platform, run their container services. On the surface the operator works with containers, but underneath the cloud platforms spawn virtual machines to host those containers. The performance analysis of modern web services can be complex because not only are the services themselves potential bottlenecks, but the nested virtualization technology in use further complicates the analysis.

Betsson Group has started to move their services into containerized microservices. Their aim is to transition the production back end platform from a traditional Windows/IIS-based server architecture into a more scalable Docker architecture in hopes of reaping benefits, such as higher flexibility, higher reliability, and dynamic scalability.

When running a large system of containerized services, automation becomes increasingly important. Container orchestration systems are used to handle most operational tasks, such as deployments, updates, assessments of resource utilization, and health checks. Therefore evaluations of container orchestrators are an important part of Betsson’s future work.

Equally important is to figure out how much overhead can be expected from moving away from the monolithic software to containerized microservices.

According to W. Felter et al., Docker has been shown to have minimal performance overhead compared to the host it is running on, and virtual machines have constantly improved performance-wise [2].

Moreover, the complexity of large scale systems requires more complicated benchmarks in order to estimate the virtualization overhead for such systems.

However, limited research has been done on measuring the overhead in microservice-based systems. Microservices tend to make multiple network calls in order to perform the same work that a single thread handles in a monolithic application. When combining virtual machines, containers, and microservices, the performance impact can therefore be substantial if the system is not developed in an appropriate manner.

The thesis focuses on estimating the performance impact Betsson can expect when transitioning away from their well tested architecture to a containerized microservice architecture running on top of virtual machines. Additionally, the


thesis looks at where the overhead comes from and whether it can be reduced.

1.1 Problem Description

Containerized microservices have become a very popular system architecture in recent years. However, this architecture comes with a cost. Docker containers have been shown to have minimal overhead compared to their hosts, but some workloads may cause issues, particularly when there are high disk and network workloads. Microservices can be particularly network heavy, as a single user request may involve communications between several services. In contrast, the response from a monolithic application is generally generated by a single thread, so the additional communication between services has an obvious performance impact. Little research has been done on the impact of containers on microservices and the impact of running containers on virtual machines (as is common practice in production systems). This degree project aims to close that gap by performing performance evaluations on a small microservice system running on virtual machines, both with and without container technology. Furthermore, it aims to answer the following questions:

• Is it possible to run containerized microservices on top of virtual machines without incurring an unacceptable decrease in performance?

• Where does the overhead mainly come from when using containers?

1.2 Purpose

The thesis discusses the performance overhead incurred from running microservices in containers hosted on virtual machines. The purpose of transitioning into a microservice architecture is to have the ability to run a more flexible, reliable, and scalable system. Previous work has shown that virtualization comes with a performance cost. The purpose of this work is to quantify how much performance loss can be expected when running a system of microservices in containers on top of virtual machines compared to running the same system of microservices directly on virtual machines.

1.3 Goals

The goal of this work is to evaluate the performance penalties experienced when running microservices in containers compared to a non-containerized solution.

Furthermore, a goal is to explain the nature of the overhead incurred, so that these evaluations can help create a system design appropriate for containerized microservices.


1.4 Research methodology

Quantitative and experimental research methods are used in order to perform the work for this project. The nature of experimental research is to work with variables and establish relationships between them. Manipulating variables in the environment may change the results of the experiment, which can then be further analyzed. This method is ideal when analyzing system performance. In this project a laboratory system will be tested with and without Docker and quantitative analysis is performed on the gathered performance metrics in order to compare the performance of the different experimental configurations (i.e., with and without containers).

As this project is to be carried out for a company, it is important that everything that is said in confidentiality remains confidential.

1.5 Delimitations

The thesis project only examines Docker as the container technology. There are various other technologies available, such as rawer implementations (e.g., LXC [3]) or popular open source projects (e.g., rkt [4]). However, Docker has been steadily gaining popularity in the industry in recent years and is currently used by Betsson in production; therefore, it is considered the most logical choice for their use together with microservices.

The experimental system does not depend on a container orchestrator or a service discovery mechanism to run. In a real-world dynamic containerized environment, the use of orchestration is very important. However, since the experimental system is not required to be dynamic or scalable it was decided that setting up orchestration would only add unnecessary overhead. Furthermore, this overhead of the orchestration would simply be additive; hence it would be independent of whether containers were used or not; therefore it was unnecessary for the experiments.

Security is not considered in this work in order to simplify the building of the system. The application programming interface (API) calls use Hypertext Transfer Protocol (HTTP)[5] without encryption, secrets are stored in plaintext configuration files, and no security precautions are made in how the system runs on the isolated experimental cluster. This of course would not be acceptable for a production system. However, again the additional overhead of security would be additive to either alternative (i.e., with or without containers), hence irrelevant to the experiments reported in this thesis.

The system is hosted in the cloud with a pay-as-you-go payment model. Cost


evaluations, such as finding the most cost effective way to run the system, were not a part of this project.

1.6 Outline

The following chapters describe the work performed for this degree project and the results of the project. Chapter 2 provides the background needed to understand the rest of the thesis. Specifically, it describes the virtualization techniques and the software architecture used for the project along with providing insights about the technology stack used. Finally, it presents previous work done related to the subject. Chapter 3 describes how the main work of the project was carried out.

It explains how the laboratory system was built for this project, its architecture, how it was set up, and how the performance evaluations were conducted. Chapter 4 showcases the results from the performance evaluations and Chapter 5 discusses these results. Lastly, Chapter 6 states the project’s conclusions with a summary of the results, the limitations of the project, and suggestions for possible future work.


Chapter 2

Background

This chapter explains the concepts needed to understand the thesis and offers insights into related work. The first section explains what Betsson Group is and what product it offers. Later sections explain more technical parts of the work, such as the difference between a monolithic software architecture and microservices, along with what virtual machines and containers are and how they are used to run enterprise systems.

Enterprise systems are usually built with large technology stacks, and Section 2.9 explains the various technologies used to build and run the system this thesis focuses on. Container orchestrators are explained in Section 2.7. The last section focuses on previous work regarding performance analysis of containers and microservice systems.

2.1 Betsson Group

Betsson Group is a Swedish gaming company and one of the largest listed gaming companies in the world, operating approximately 20 brands around the globe. Betsson offers a large variety of products. The largest revenue stream comes from their mobile casino, which aims to offer remote players a casino experience as close as possible to the real thing. With over 1,500 available games it is one of the world’s largest mobile casinos. Betsson’s sportsbook offers a large variety of bets on a range of sports, as well as political events, markets, and more. Betsson also offers live streaming of various sports, together with real-time in-game betting. Aside from the casino and betting services, Betsson also has a range of other games, such as poker variations, bingo, and scratch cards. Due to its popularity, Betsson’s system must handle a large number of requests every day and must be capable of handling large spikes in traffic during high profile events, such as final matches of popular tournaments.


2.2 Software architecture

The monolithic software architecture has been popular for a long time and may be considered as the industry norm. In recent years, developers have been turning towards microservices due to the many advantages they can bring; but as in all software related matters, there are no silver bullets and each use case must be considered carefully before choosing which architecture to use. The project focuses on microservices as the main software architecture. This section explains what microservices are and how they differ from the more traditional monolithic architecture.

2.2.1 Monolithic architecture

A monolithic (software) architecture means that all the software functionality is bundled into one component, hence creating a monolith. The result is a single deployable application consisting of one codebase. The increased popularity of microservices is due to the problems that may arise as the monolith grows larger.

Take for example a simple web application. At first it enables users to visit a website in order to view its content. Later on, a REST application programming interface (API) is added so that users can also interact with the web application (webapp) programmatically. As the monolith’s nature dictates, all of this new functionality is added to the application’s codebase. If done correctly, this should not greatly increase the complexity of the codebase. The monolithic way of dealing with complexity is to break down the program into modules. However, the application still remains a single executable artifact. As the application grows larger and more independent services are added, the complexity increases and the codebase becomes harder to maintain and evolve. At some point in time there may be several independent teams working on the same codebase, each team responsible for what is effectively a different service. Unfortunately, the teams are forced to use the same technology as the original application uses, although there may be technologies that are better suited for the service they are developing. Moreover, an update to one service means an update and redeployment of the whole application.

For large projects the whole process of making changes, testing, and deployment can prove difficult and time consuming. In many cases it also means downtime.

Scaling the application may also prove a nuisance. A certain functionality of the application, the REST API for example, may become very popular causing an increased number of requests. In order to handle an increased load the application can be deployed on multiple servers. However, although the increased load is just for a single functionality the whole application must be deployed.

This example focused on a small application in order to explain the difficulties


that may arise with the monolith. The purpose is not to assert that the monolithic approach is inferior. Instead the argument is that the architecture must be chosen to suit the project, and in many cases the monolith is the right choice.

Additionally, it is common to start with the monolithic approach and then evolve the system architecture into microservices if needed.

2.2.2 Microservices

Microservices have become a widespread software architecture in recent years due to their many advantages over the older monolithic model. There is no official standard defining what constitutes a microservice; rather, it is a term used to describe a certain methodology that software engineers have started to embrace.

A variety of information on the methodology can be found in articles and lectures available via the internet. Additionally, a few books have been written as well, such as Building Microservices by Sam Newman [6].

The idea behind microservices is to build a single application out of multiple smaller services that interact with each other through a well defined API. On the surface it looks like a single application with multiple functionalities, just like a large monolith. Behind the interface it is a combination of smaller services. A single microservice can be described as an application with a single responsibility that can be deployed, scaled, and tested independently [7]. The services are ideally business oriented. For example, splitting an application into the traditional three layers (user interface, application logic, and database layer) does not constitute a microservice architecture. Instead a service handles one business capability, user data for instance, and does so on every necessary layer (i.e., UI, logic, and persistent data storage). In the earlier monolithic example, the web application and the REST API would be divided into two independent services. If done correctly, this may remedy the pitfalls mentioned earlier about the monolithic architecture.

Dividing the software into loosely coupled modules should also make it easier to work with. In large projects, responsibilities for different services can be distributed among teams. This makes updates to the codebase easier, because now a given team updates and deploys a single service instead of working with the whole application.

Furthermore, this improves fault isolation and scalability. When a service goes down the other services continue to work, except for those functionalities relying on the failed service. As shown in Figure 2.1, services can be deployed independently, hence a heavily used service can be replicated on several servers without needing to replicate the other services. The servers on the left run monolithic applications and therefore an instance of each service is deployed on every server. The servers on the right run independently deployed microservices. Each service is also technologically independent, thus allowing each team to pick a different technology stack, one that is best suited for its specific job. Of course each technological decision must be well thought out since running too many stacks may impose other problems on the


company.

Figure 2.1: Microservices can be deployed independently based on workload

The cost of a microservice architecture must also be considered. Working with distributed systems is complicated. A single process is no longer responsible for handling a request, as a service may need to call upon other services remotely and these in turn may have to call on other remote services. The complexity and latency are additive, hence each inter-service interaction must be thoroughly analyzed. Additionally, working with persistent data becomes more complicated.

A monolith can update several things in one function call, but with microservices each service may be responsible for a single update, causing consistency issues unless handled correctly. Running multiple services requires an operational culture that can handle these multiple services. Running hundreds of services requires both a good overview and careful management. Many tools are available to help, as will be discussed later in the thesis (specifically in Section 2.7). Moreover, it is much harder to debug a distributed system when things go wrong.

All things considered, which architecture to choose depends on the project and the problems it is supposed to address - as well as the expected loads that the service must support.

2.3 Virtual Machines

The computers, or servers, used to run distributed systems can either be bare-metal or virtualized. These servers are usually located in data centers, i.e., facilities specially designed to house a large number of computers. As the naming indicates, bare-metal simply means that the server is a physical instance of a single computer.

The hardware of such a server has its own motherboard and this computer is not shared with others. In contrast, a virtual machine may be one of ten instances running on top of the same physical computer hardware. Performance-wise, bare-metal servers have been shown to offer better performance than virtual machines. However,


virtual machines are more desirable, especially in a large scale system that requires flexible scaling.

Working with bare-metal means that each server needs to be set up from scratch.

That includes buying the individual computers, setting them up in a data center and then configuring and operating them. This is a slow and expensive process. In order to be able to handle high workloads and achieve appropriate fault tolerance, multiple replicas of each server need to be available further increasing costs and complexity.

In contrast, virtual machines are not bound by these limitations and thus they are ideal for running larger system applications. In simple terms, virtual machines offer virtualization at the hardware level, thus they isolate a portion of the available hardware resources while making these resources available to the application, giving the impression of working with a bare-metal server. Virtual machines at the application level also exist but are not relevant in the context of this thesis.

The entity that creates and runs the virtual machines is called a hypervisor.

Hypervisors are conventionally categorized into two types (as shown in Figure 2.2).

A type 1 hypervisor runs on the bare-metal while type 2 hypervisors run on top of an operating system (just like normal applications). For both type 1 and type 2 hypervisors there are multiple solutions available for hardware virtualization. For large scale systems, type 1 hypervisors are usually used and the actual system runs on top of special enterprise level hardware.

Figure 2.2: Hypervisors of type 1 and 2

When using virtualization, operators do not have to work with individual physical computers. Instead a single physical computer can be used and its resources split into isolated pieces to run as virtual machines. Additionally, each virtual machine can run an operating system of its own choice. Each virtual machine can be configured just as a physical computer would. In a large enterprise setting, a


whole server stack could be virtualized on one computer (instead of hundreds of physical machines with one physical machine for each logical server). In order to achieve reliability and fault tolerance the server’s stack should be replicated on at least one other computer. When there is a need to scale out (increase the server’s capacity), it is simply a matter of allocating resources to new virtual machines and starting these virtual machines and their entire software stack. That can be done in a matter of minutes rather than having to order new hardware, wait for its delivery, and set it up from scratch. Moreover, when less capacity is needed some of the virtual machines can be shut down, hence the resources that were previously occupied can be assigned to other customers of the cloud provider.

2.4 Containers

Linux containers are not new, but only in recent years have they become a mainstream solution in enterprise systems. The main reason for their increased usage is projects such as Docker [1], an open source platform for containerization.

Linux containers differ from virtual machines because they run on a host system that has an operating system installed. They can be considered virtualization at the operating system level and are considered to be lightweight because they avoid the need for a hypervisor. When creating a container, a portion of the host’s resources is made available to the container. This degree project focuses on Docker as the container solution.

2.5 Docker

Docker makes use of kernel technology to create lightweight Linux containers that run on the host system. Its relatively simple interface makes it easy to create, start and destroy containers. Docker adds various features that differentiate it from plain LXC technology. These features include portable deployment, automatic build, versioning and an application focused API. These features have made Docker popular in enterprise settings.

Docker is built upon LXC and uses the same Linux kernel technology, namely namespaces and control groups, to isolate containers and monitor their resources.

Namespaces are responsible for process isolation, thus processes running inside a container cannot affect other processes on the host system and vice versa.

Namespaces introduce new network capabilities by providing each container with a network device and each interface with its own IP address. Control groups monitor the resources used by the container and limit resources. These resources include CPU, memory, network, disk I/O, etc.

Docker uses a client-server architecture. The server is the Docker daemon, which does most of the work, such as building and running containers. The client is


used to interact with the daemon. This can be done through a terminal on a system with the Docker command line tool installed. When using Docker’s run command to start a container, the client sends the request to the Docker daemon which in turn handles the request. The client and the daemon do not have to be running on the same system, hence the client can be connected to a Docker daemon that is running remotely. Additionally, a client is not limited to communicating with only one daemon. The interaction between the two can be via a REST API, UNIX sockets, or a network interface.

Figure 2.3: Docker containers on a host machine

Docker has three main network configurations. It is important to understand them in order to choose the one most suitable for the desired execution environment, since network overhead is one of Docker’s main sources of negative impacts on performance. The configurations are:

Bridge networking is the default configuration. It creates a private network that all containers on the host connect to, giving them the ability to communicate with each other. Containers can receive external connections by using port mapping.

Host networking disables all network isolation and uses the host’s network interface, thus exposing containers to the public network. Running a container on port 80 in this configuration means its applications will be available on port 80 on the host’s IP address. This configuration is faster than using bridge, but it comes with increased security risk.

An overlay network configuration is used to create a distributed network between several hosts by creating an overlay on top of the hosts’ networks. Docker handles the routing of packets to and from containers that are running on these


networks’ hosts. Docker provides IP address management, service discovery, multi-host connectivity, encryption, and load balancing.

For this project it was decided to use the bridge network configuration. The reasons for this choice were that this is the default configuration and it has the least security risks in a production system.

Docker uses bind mounts or volumes to store persistent data. Bind mounts are the more limited option, as they simply mount a file or a directory on the host machine into the container. Bind mounts perform well but depend on the directory structure of the host machine. When using volumes, a directory is created inside Docker’s storage directory on the host machine. The content of the volume is managed by Docker. Volumes do not increase the size of the containers using them and their existence is independent of the container’s life cycle. They are also easier to back up and migrate, and safer to share among containers.

In order to use Docker, Docker must be installed just as any other application.

Docker is available for Microsoft’s Windows, Apple’s MacOS, and various Linux distributions. The container acts as an executable image which Docker runs. This image is a package that should include the application to run as well as all of the dependencies needed to run it. A single image can be used to run many different independent containers. The difference between a virtual machine and Docker should be clear in a production environment (see Figure 2.4). When using a virtual machine, an OS must be installed, the desired software set up, and all the dependencies installed. A single virtual machine can run multiple applications.

Virtual machines have been an industry standard for a long time. However, having to think about dependencies can prove difficult when trying to maintain a service. Moreover, sometimes different versions of software libraries are needed for different services. Furthermore, updating one library for one service may break another (unrelated) service. When using Docker, the service and its dependencies are bundled together in an image. In order to avoid a potential dependency conflict, as may occur with a virtual machine setup, it is possible to run a single service in each container. A single server can run many different applications, each in their own isolated container. When code changes need to be applied to an application or other libraries installed, a new image can be created. Additionally, this new image can replace the old container without impacting any other applications running on the underlying physical system.


Figure 2.4: Virtual machine and container setups

Docker can also play a role in easing the whole process of developing software because if you can run a Docker image on one machine you can run it on any other machine that has Docker installed. Usually there are multiple environments needed in order to get software from the development environment to run in the production environment. In such a deployment process developers need to set up the dependencies themselves in order to try out their software. Moreover, a test environment needs to be set up in order to test the software and finally, as stated before, the production environment needs to be set up. When using Docker, developers create their images and run their software on their own machines.

Subsequently these images can be run in the test environment before being deployed into production. In theory, Docker should be able to greatly reduce the operation overhead of software development and deployment.

In order to deploy the containers in production they can be stored in a registry.

A registry can have many repositories and each repository can contain multiple images. After creating Docker images, these images are pushed to a registry of the developer’s choice and subsequently these images can be accessed by the production computers that will run the application(s). Deploying a container in production can be done using a single Docker command (as will be described in the next section). Docker offers their own registry solution called Docker Hub [https://hub.docker.com/]. This registry can host both public and private repositories, but enterprises may want to set up their own registry for security purposes.

The increased popularity of containers does not result in virtual machines becoming obsolete because the two virtualization techniques serve different purposes. In a production environment, virtual machines and containers tend to be run together. A virtual machine is set up and allocated the resources to run the desired applications, but rather than setting up the applications and running them natively, Docker can be installed and the services run in containers. This process can reduce the run time overhead and greatly decreases the operations and maintenance effort required for the production servers.


2.6 Using Docker

To create a Docker container a Dockerfile must first be created. Listing 2.1 shows the Dockerfile for one of the services that was built in this project.

FROM microsoft/aspnetcore:2.0 AS base
WORKDIR /app
EXPOSE 80

FROM microsoft/aspnetcore-build:2.0 AS build
WORKDIR /src
COPY UserService.sln ./
COPY UserService/UserService.csproj UserService/
RUN dotnet restore -nowarn:msb3202,nu1503
COPY . .
WORKDIR /src/UserService
RUN dotnet build -c Release -o /app

FROM build AS publish
RUN dotnet publish -c Release -o /app

FROM base AS final
WORKDIR /app
COPY --from=publish /app .
ENTRYPOINT ["dotnet", "UserService.dll"]

Listing 2.1: Dockerfile example

The container is based on the official Microsoft ASP.Net Core Version 2.0 Docker image. The source code for the service is copied from its local folder, built within the container and then the service is started. The EXPOSE command tells the container to listen for TCP connections on port 80 giving external network access to the service.

The container is built and tagged with a name chosen by the developer by executing a terminal command, as shown in Listing 2.2.

docker build --tag NAME_OF_CONTAINER PATH_TO_DOCKERFILE

Listing 2.2: Docker build command

Listing 2.3 shows a terminal command to run the Docker container locally.

docker run -d -p 80:80 NAME_OF_CONTAINER

Listing 2.3: Docker run command

The -d flag is used to run the container as a background (daemon) process and the -p flag is used for port mapping. In Listing 2.3, the host’s port 80 is mapped to the container’s exposed port 80. As a result the service is available via localhost on port


80. Note that this service is limited to access via other processes also running on this same host, as the bridge networking configuration was used.

2.7 Orchestration

As the size of the enterprise systems grow so does the importance of automation to manage all of the virtual machines or containers. Virtual machines and container technology provide the means to run a reliable and scalable distributed system.

However, in order to fully utilize these technologies, orchestration technology is needed. The purpose of orchestration technology is to automate the system administration work needed to run a distributed system. This orchestration technology should reduce the burden of manually administering the system and, ideally, do it better than any human could.

Automation has always been an important tool in a system administrator’s arsenal. The simplest form, writing scripts and setting up recurring jobs, can save a lot of manual labor in the long run. With time, automation tools have become more capable and complex. Paired with virtualization, servers can now be spawned and set up automatically in a matter of minutes without any human interaction.

The emergence of containers takes things even further when building dynamic and robust systems. The focus shifts from running machines to running applications. This change in focus makes it easier to make changes to the system, such as setting up new hardware and upgrading operating systems, without impacting already running applications. Developers and system administrators do not have to worry about machine and OS details since everything necessary for the service runs in containers [8]. However, this increased complexity calls for a highly capable orchestration tool. A system consisting of a few virtual machines may be running hundreds or thousands of containers. Currently there are many different solutions available, each with their own ideology. Some are built to run a system completely on their own, while others are expected to be bundled together with other tools. Most orchestration tools offer declarative configuration, meaning that operators set the desired state of the system. It is then up to the orchestration tool to match the current system state to the desired state. The desired state is a combination of many requirements. For example, what applications need to run and where they need to run, how many replicas of each machine or service should be instantiated, and how many resources an application is allowed to use. Table 2.1 contains short descriptions of fundamental tasks that must be taken care of in order to run a distributed containerized system and that orchestrators can help with.


Table 2.1: Orchestrator tasks

Container management: The most basic functionality. When the orchestrator senses that the desired state does not match the current state, it will automatically spawn or take down containers.

Naming and service discovery: When running a dynamic system with multiple services, configuring them becomes more difficult. The orchestrator should make it possible for applications to dynamically find and interact with other services. This can be implemented with a lightweight DNS or a distributed key-value store.

Monitoring: In order to keep the desired state the orchestrator must continuously monitor the system. Application health checks can be used to make sure services are running properly. When a service goes down for some extended period, the container can be restarted, or when a host goes down, its containers can be moved to another host. This self-healing capability is a fundamental part of a reliable distributed system.

Application-centric load balancing: Can be used to make full use of the system’s capabilities. When running replicated services the load is balanced between them.

2.8 Cloud computing

Virtualization has been one of the main driving forces for the establishment of cloud computing [9]. The concept of cloud computing can be described as a model that enables users to gain access to a shared pool of computing resources. The resources can be of various kinds, such as servers, storage, or applications.

There are three major service models identified within cloud computing:

Software as a Service (SaaS), Platform as a Service (PaaS) and Infrastructure as a Service (IaaS). SaaS is most likely the most common service model. In this model consumers have access to an application that is running on a cloud infrastructure.

The SaaS model enables software providers to provide customers access to specific software through various methods, such as web browsers and mobile applications.

These consumers only access the applications and do not know about the underlying infrastructure where these applications are running. PaaS enables customers to deploy their own applications on a cloud provider’s infrastructure. The application


owners avoid the hassle of setting up the infrastructure (such as servers and networks) and focus only on deploying their application. IaaS enables customers to provision computing resources, such as virtual machines, storage, and network for their own usage. They do not have access to the underlying cloud infrastructure but they are able to set up their own environments which are run on computers, networks, storage, etc. by the cloud provider.

There are four major deployment models. A private cloud is used by a single organization. It can be managed by the organization, a third party or a combination of both, and it can be hosted on premise or at a third party’s location. A community cloud is similar to the private cloud, but instead of a single organization being the customer there is a community of customers that use the cloud. Public clouds exist on the cloud provider’s premises and can be used by the general public. Finally, hybrid clouds are a combination of the other deployment models.

The 2018 IaaS magic quadrant by Gartner [10] identifies Amazon Web Services, Microsoft Azure, and Google Cloud as the current main cloud providers. They offer various services to run systems, which gives companies the option to avoid setting up and running their software on on-premise hosts. With the increased popularity of containerized microservices, these three cloud providers have also begun to offer container services to provide their customers the option of deploying their containers straight into the cloud. This abstracts away the underlying infrastructure that runs the containers, therefore facilitating the whole process. As stated before, containers are advertised as being fairly lightweight. However, these containers running in the cloud are actually running on top of virtual machines. For this project, virtual machines were used instead of a container service, but if there is a noticeable overhead measured when load testing applications in this way, it should also occur for a cloud container service as long as it uses virtual machines instead of bare metal servers. Using bare metal servers would avoid the hypervisor overhead and therefore presumably reduce the overall overhead.

2.9 Technology stack

In order to better mimic a production system and increase the complexity of the laboratory system, some technologies from Betsson’s tech stack were chosen.

The system itself is created using .NET Core, an open source implementation of the widely used .NET Standard. Couchbase Community edition is used for persistent storage and RabbitMQ for messaging. Docker was used to run RabbitMQ in its own container and Couchbase was installed on a dedicated database server. Each of these will be described below.


2.9.1 .Net Core

The .Net framework is essentially used for two things: (1) an execution engine to run applications on Windows and (2) a large library of reusable code for developers to use in their own applications. The .Net framework provides various services to the running applications and their developers such as memory management, a common type system, development frameworks, language interoperability and version compatibility. In 2016, an open source implementation of the .Net Standard was released, called .Net Core. This can run on various OSes, including Apple’s MacOS and some Linux distributions. The microservices were built using ASP.NET Core [11], which is a part of the .NET core stack. ASP.NET Core is primarily used to create internet applications and it allows developers to easily create REST APIs.

Kestrel, the default web server for running ASP.NET Core projects, was used to run the microservices for both the Docker and native solutions.
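To give a concrete impression of how small such a service can be, the following sketch shows a minimal ASP.NET Core controller for a login endpoint similar to the one targeted by the Taurus scenario in Listing 2.4. It is an illustration only: the route, class, and property names are assumptions and not the project’s actual code.

using Microsoft.AspNetCore.Mvc;

namespace UserService.Controllers
{
    // Hypothetical login endpoint; a real service would validate the
    // credentials against Couchbase and publish events via RabbitMQ.
    [Route("api/user")]
    public class UserController : Controller
    {
        [HttpPost("login")]
        public IActionResult Login([FromBody] LoginRequest request)
        {
            if (request?.Username == null || request.Password == null)
                return BadRequest();
            return Ok(new { Status = "logged-in", User = request.Username });
        }
    }

    public class LoginRequest
    {
        public string Username { get; set; }
        public string Password { get; set; }
    }
}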

2.9.2 Couchbase

Couchbase [12] is a commercial NoSQL (Not only SQL) database, marketed as an engagement database. It is built to be lightning fast, easily scalable, and able to hold multiple copies of data entities for high availability and data safety. Couchbase can store JSON documents and binary data. The data is stored in data containers called buckets, which can be replicated up to three times in the Couchbase cluster.

Multiple buckets can be created in the cluster for various purposes. Each data entity has a unique Document ID which is used to decide what server(s) it shall be stored on. Couchbase offers a REST API for client applications to interact with, but also offers a software development kit (SDK) for various programming environments for easy integration. In this project the SDK for C# is used. A community edition of Couchbase is available and as stated earlier was used in this project. Even though the clustering ability of Couchbase is one of its main strengths there will only be one instance running in the test environment.
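As a rough illustration of how a service might talk to Couchbase (not taken from the project’s code), the sketch below stores and retrieves a JSON document using the Couchbase .NET SDK 2.x API that was current at the time; the server address, bucket name, and document key are assumptions.

using System;
using System.Collections.Generic;
using Couchbase;
using Couchbase.Configuration.Client;

class CouchbaseExample
{
    static void Main()
    {
        // Connect to a (hypothetical) Couchbase node and open a bucket.
        var cluster = new Cluster(new ClientConfiguration
        {
            Servers = new List<Uri> { new Uri("http://couchbase-host:8091") }
        });
        var bucket = cluster.OpenBucket("users");

        // Upsert a JSON document keyed by its Document ID, then read it back.
        bucket.Upsert("user::test_user", new { Username = "test_user", Blacklisted = false });
        var result = bucket.Get<dynamic>("user::test_user");
        Console.WriteLine(result.Success);

        cluster.Dispose();
    }
}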

2.9.3 RabbitMQ

RabbitMQ [13] is a lightweight open source message broker. It is written in Erlang and was originally built to support AMQP [14], but the current version supports various other protocols such as STOMP [15], MQTT [16], and HTTP.

Essentially, RabbitMQ acts as an intermediary between a message sender (producer) and message receiver (consumer). RabbitMQ can be used to implement various messaging patterns such as worker queues, publish/subscribe, routing, topics, and remote procedure calls (RPC). The SDK for C# was used for easy integration of RabbitMQ into the system. The SDK is based upon AMQP 0-9-1 with additional abstractions.
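The sketch below shows what publishing a single message with the RabbitMQ .NET client could look like; the host name, queue name, and message payload are assumptions used only for illustration, not the project’s actual configuration.

using System.Text;
using RabbitMQ.Client;

class PublishExample
{
    static void Main()
    {
        // Connect to a (hypothetical) broker and publish one message
        // to a durable queue through the default exchange.
        var factory = new ConnectionFactory { HostName = "rabbitmq-host" };
        using (var connection = factory.CreateConnection())
        using (var channel = connection.CreateModel())
        {
            channel.QueueDeclare(queue: "user-events", durable: true,
                                 exclusive: false, autoDelete: false, arguments: null);
            var body = Encoding.UTF8.GetBytes("{\"event\":\"login\",\"user\":\"test_user\"}");
            channel.BasicPublish(exchange: "", routingKey: "user-events",
                                 basicProperties: null, body: body);
        }
    }
}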


2.9.4 Taurus

In order to create a workload on the system and analyze its performance for different configurations, a load testing tool was needed. This could be done with custom made code, but there are various open source load testing tools available. Taurus [17] was chosen for this project due to its easy configuration syntax and versatility. Taurus is not in itself a load testing tool but works as a wrapper for other load and functional testing tools such as Apache JMeter [18], Gatling [19], and Selenium [20]. The testing scenarios are configured with .yaml files in which the underlying testing tool and test parameters are configured. Taurus used JMeter as the load testing tool for all performance tests in this project. An example Taurus configuration file is shown in Listing 2.4 while the command to start Taurus is shown in Listing 2.5.

execution:
  concurrency: 10
  ramp-up: 1m
  hold-for: 10m
  scenario: login

scenarios:
  login:
    timeout: 500ms
    keepalive: false
    requests:
      - url: http://localhost:5001/api/user/login
        method: POST
        body:
          username: test_user
          password: test_password
        headers:
          Content-Type: application/json

Listing 2.4: Example Taurus configuration file

bzt taurus-config.yaml

Listing 2.5: Running Taurus test

Listing 2.4 shows an example Taurus configuration file. The main workload parameters are concurrency, throughput, ramp-up, and hold-for. Concurrency stands for the target number of concurrent virtual users. Instead of using a target value for requests per second, Taurus spawns concurrent users that interact with the test system in order to mimic a more realistic scenario. Throughput sets a limit to the maximum requests per second created. It can be used to establish a stable load of requests throughout the load test. The ramp-up parameter controls how quickly Taurus will spawn the target number of concurrent users. This can be set to 0 to start immediately with the full number of users, but it can also be useful to slowly ramp up the number in order to find the breaking point where the system cannot handle the load. Hold-for controls for how long the test will run with the full number of concurrent users.


The scenario shown in Listing 2.4 sends HTTP POST requests to http://localhost:5001/api/user/login. The requests contain JSON data with two fields, username and password. Given that Taurus is installed on the tester's computer and is accessible via the console, Taurus tests are run with a bzt command as shown in Listing 2.5. Taurus opens a console display that shows the progress of the test in real time. Once the test finishes, a folder containing information about the test is created.

2.10 Zipkin

Understanding the behavior of distributed systems can be very difficult. Zipkin[21] can help with this in a microservice system by collecting request timing data from the running services and presenting the results in a clear manner. Zipkin is based on Google's Dapper[22]. It was originally created by Twitter, but the project is now maintained by the OpenZipkin organization. Services are configured to send their timing data to Zipkin. When a request enters the system, it may require multiple services to finish a workload before the response is sent back; Zipkin shows how long each service took to finish its part of the workload for any given request. The overall and individual processing times are then shown in Zipkin's user interface (UI). This is useful when debugging latency problems because any bottlenecks can be easily found. For this project the Zipkin4Net[23] package was used, as it makes it fairly easy to configure services to send data to a running Zipkin instance.
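For reference, service-side registration with zipkin4net typically looks roughly like the sketch below. This is an assumption based on the library's public documentation rather than code from the project; exact namespaces, constructor signatures and the members of the ILogger interface may differ between versions, and the Zipkin collector address is a placeholder.

using System;
using zipkin4net;
using zipkin4net.Tracers.Zipkin;
using zipkin4net.Transport.Http;

class TracingSetup
{
    // Minimal logger; the member names of zipkin4net's ILogger are assumed here.
    class ConsoleLogger : ILogger
    {
        public void LogInformation(string message) => Console.WriteLine(message);
        public void LogWarning(string message) => Console.WriteLine(message);
        public void LogError(string message) => Console.WriteLine(message);
    }

    public static void Start()
    {
        // Record every trace; a lower sampling rate would be used under heavy load.
        TraceManager.SamplingRate = 1.0f;

        // Send completed spans to the Zipkin collector over HTTP (placeholder address).
        var sender = new HttpZipkinSender("http://localhost:9411", "application/json");
        var tracer = new ZipkinTracer(sender, new JSONSpanSerializer());
        TraceManager.RegisterTracer(tracer);
        TraceManager.Start(new ConsoleLogger());
    }

    public static void Stop()
    {
        // Flush and stop tracing when the service shuts down.
        TraceManager.Stop();
    }
}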

2.11 Nmon

Nmon[24] was used to conduct performance monitoring on the Linux servers running the microservices. Nmon can be installed on a Debian/Ubuntu system using the normal apt-get install command as shown in Listing 2.6.

sudo apt-get install nmon

Listing 2.6: Installing nmon

Nmon is a powerful tool capable of showing various performance metrics regarding CPU, memory, network and disk usage. It can show information in real time, but for this project it was used to save data to a comma separated value (CSV) file while the load tests were running. Listing 2.7 shows how nmon is configured to save the state of the server every 30 seconds for 30 iterations in a file named docker_loadtest.nmon. The saved data can be analyzed to learn about the state of the system while it handles the load from the experiments, and the resulting performance metrics can be compared between the different environments that were evaluated.

nmon -F docker_loadtest.nmon -s 30 -c 30

Listing 2.7: Starting nmon


2.12 Previous Work

This section presents previous work related to the project; studying it helped gain a better understanding of the subject. The focus of the thesis is to understand the performance penalties incurred by running microservices using Docker. As this section shows, performance penalties are expected compared to the monolithic approach due to the nature of containers and microservices. However, due to the complexity of such systems it can prove difficult to pinpoint exactly where the system experiences performance penalties.

Multiple aspects must be considered, ranging from the performance of virtualization techniques used to the granularity of the microservice system. This section starts by presenting work done to measure the performance of virtualization techniques, with the focus being on Docker. It then moves on to microservices and microservices running in containers.

2.12.1 Virtualization

Most direct performance studies follow the same methodology: multiple benchmark programs are run on different setups in order to show how virtualization compares to bare-metal performance. E. Casalicchio and V. Perciballi [25] researched tools to measure the workload generated by containerized applications with regard to CPU and I/O performance and showed the importance of carefully interpreting the results from monitoring tools, as different tools showed different results that are each correct in their own way. For example, Docker's stream of resource usage statistics for containers was found to show the CPU resources requested by containers, while other tools showed the actual resource usage on the host system. Such differences showed that fully understanding the monitoring tools being used is of great importance.

Felter, et al.[2] ran several benchmarks on a single bare-metal machine, KVM and Docker setups and compared their performances. The tests measured workload metrics for scenarios where one or more hardware resources were fully utilized. Their results showed that Docker and KVM introduce negligible overhead for CPU and memory intensive workloads. However, Docker and KVM did not perform as well as the bare-metal machine when it came to I/O intensive workloads. Docker's NAT was also shown to have a negative performance impact on workloads with high packet rates.

Li, et al.[26] performed a performance evaluation of a standalone Docker container running on a physical machine and compared this against a standalone virtual machine running on VMware Workstation 12 Pro. Similar to earlier findings, their experiments showed that modern virtualization techniques usually have minimal performance overhead. A key takeaway is that the overhead varies not only on a feature-by-feature basis, but also on a job-by-job basis.


Sharma, et al.[27] experimented with virtualization technologies in a data center environment and compared the performance of containers and virtual machines when more than one application was running on each physical server. The baseline results are similar to other findings, as there is a small overhead for CPU and memory operations (as they do not go through the hypervisor) while the performance of I/O intensive applications was poor. In multi-tenant situations, where there are multiple applications competing for resources on their physical host, containers experience a higher performance penalty due to interference from other tenants. This can potentially be reduced with strategic placement of containers.

J. Shetty, et al.[28] showed that Docker and bare-metal performance are similar, while virtual machines always have poorer performance. A noteworthy observation is how severe the virtual machine overhead was for write operations (54% lower throughput, while Docker's overhead was 13%). These authors also showed how the higher network latency of virtual machines negatively impacts the performance of HTTP servers. In particular, the OpenStack server had 32% lower throughput when running on Docker than the bare-metal server.

However, despite the evidence of performance penalties, running containers on top of virtual machines is common practice in enterprise environments, hence this is the main focus of this thesis. However, little literature is available on this subject.

Karatza and Mavridis [29] investigated how running Docker on top of KVM affected the containers’ performances by running several resource focused benchmarks. They showed that the extra virtualization layer brings expected performance penalties for all main resources, i.e. CPU, memory, disk and network interface.

Amazon Web Services (AWS) offers the possibility to both deploy and run services in the cloud, on both virtual machines and containers. However, their container based services run on top of virtual machines. By comparing the same services running on AWS, both on a VM and in containers, Salah, et al.[30] showed that while containers may have less overhead on a bare-metal machine, running containers on top of VMs has a clear negative impact on performance. Their research analyzed throughput, response time, and CPU utilization and showed that while the container solution had better CPU utilization, it clearly suffered in terms of a lower rate of request handling.

Although there seems to be a consensus that containers have less of a performance impact than virtual machines, this is not always the case, as Rosenberg[31] showed in a whitepaper for VMware. Due to the sophisticated resource scheduling of VMware's vSphere 6.5, running the benchmark applications on VMs was shown to have the highest performance, outclassing even bare-metal servers. The whitepaper also showed that running Docker on VMs proved to be better than on the bare-metal servers. The Docker and VM combination performed worse than VMs without Docker because of the overhead added to the storage and network stacks.
