
A comparative study of Docker and Vagrant regarding performance on machine level provisioning

MAIN FIELD: Computer science
AUTHOR: Martin Malmström & Viktor Zenk
SUPERVISOR: Kim Lood


This final thesis has been carried out at the School of Engineering at Jönköping University within computer science. The authors are responsible for the presented opinions, conclusions, and results.

Examiner: Johannes Schmidt
Supervisor: Kim Lood
Scope: 15 hp

Date: 2020-05-28

 

Postal address: Box 1026, 551 11 Jönköping
Visiting address: Gjuterigatan 5
Phone: 036-10 10 00


Abstract

Software projects can nowadays have complex infrastructures behind them, in the form of libraries and various other dependencies which need to be installed on the machines they are being developed on. Setting up this infrastructure on a new machine manually can be a tedious process prone to errors. This can be avoided by automating the process using a software provisioning tool, which can automatically transfer infrastructure between machines based on instructions which can be version controlled in similar ways as the source code. Docker and Vagrant are two tools which can achieve this. Docker encapsulates projects into containers, while Vagrant handles automatic setup of virtual machines.

This study compares Docker and Vagrant regarding their performance for machine level provisioning, both when setting up an infrastructure for the first time on a new machine, as well as when implementing a change in the infrastructure configuration. This was done by provisioning a project using both tools, and performing experiments measuring the time taken for each tool to perform the tasks.

The results of the experiments were analyzed, and showed that Docker performed significantly better than Vagrant in both tests. However, due to limitations of the study, this cannot be assumed to be true for all use cases and scenarios, and performance is not the only factor to consider when choosing a provisioning tool. Based on the data collected in this study, Docker is therefore the recommended tool, but more research is needed to determine whether other test cases yield different results.

Keywords

Docker, Vagrant, provisioning, DevOps, Infrastructure as Code, virtualization, containerization, software development.


Sammanfattning

Modern software projects can have a complex infrastructure behind them, in the form of libraries and other dependencies that must be installed on development machines. Configuring this infrastructure manually on a new machine can be a time-consuming process, and can also result in an incompletely or incorrectly configured solution. This can be avoided by automating the process using provisioning tools, which can automatically transfer infrastructures between machines based on instructions that can be version controlled in a similar way to the source code. Docker and Vagrant are two tools that can be used for this purpose. Docker encapsulates the project in containers, while Vagrant handles automatic configuration of virtual machines.

This study compares Docker and Vagrant with respect to their performance for machine level software provisioning, both for a first-time installation of the infrastructure on a new machine and for implementing a change in the configuration of the infrastructure. The comparison was made by implementing both solutions and then performing experiments to measure the time each tool needed to complete the two tasks.

The results of the experiments were analyzed and showed that Docker performed better than Vagrant in both tests. Due to limitations of the study, this cannot be assumed to hold for all use cases and scenarios, and performance is not the only factor to keep in mind when choosing a provisioning tool. Based on the data collected in this study, Docker is therefore the recommended tool, but more research is needed to determine whether other test cases yield different results.

Nyckelord

Docker, Vagrant, provisioning, DevOps, Infrastructure as Code, virtualization, containerization, software development


Glossary

 

VM  Virtual Machine
IaC  Infrastructure as Code
DevOps  A set of practices and tools to shorten software delivery times
CM  Configuration Management
CI  Continuous Integration
CD  Continuous Deployment
RDP  Remote Desktop Protocol
WinRM  Windows Remote Management
SSH  Secure Shell
SQL  Structured Query Language
Git  Version-control tool to track changes in source code
OS  Operating System
TCP  Transmission Control Protocol


Table of contents

1 Introduction
1.1 Background
1.2 Selection of provisioning tools
1.3 Problem statement
1.4 Purpose and research questions
1.5 Scope and delimitations
1.6 Company
1.7 Disposition
2 Theoretical framework
2.1 Link between research questions and theory
2.1.1 Virtual machines
2.1.2 Docker
2.1.2.1 Images
2.1.2.2 Containers
2.1.2.3 Docker Hub
2.1.2.4 Docker Registry
2.1.3 Vagrant
2.1.3.1 Synced folders
2.1.3.2 Vagrant cloud
2.2 Previous research
3 Methodology
3.1 Link between research questions and methods
3.1.1 Experiments as method
3.1.2 Formulating hypotheses
3.2 Work process
3.3 Approach
3.4 Data collection
3.5 Data analysis
3.6 Validity and reliability
3.7 The provisioned project
3.8 Configuration of Docker
3.8.1 Dockerfile
3.8.2 Compose
3.9 Configuration of Vagrant
3.9.1 Vagrantfile
3.10 Experiments
3.10.1 First-time installation
3.10.1.1 Docker
3.10.1.2 Vagrant
3.10.2 Configuration change
3.10.2.1 Docker
3.10.2.2 Vagrant
4 Empirical data and analysis
4.1 Provisioning fresh installation with Docker
4.2 Provisioning fresh installation with Vagrant
4.3 Provisioning configuration change with Docker
4.4 Provisioning configuration change with Vagrant
4.5 Statistical tools used
4.5.1 Normal distribution
4.5.2 Confidence interval
4.5.3 T-test
4.6 First-time installation analysis
4.7 Configuration change analysis
4.8 Combined metric
5 Conclusions and discussion
5.1 Findings
5.2 Implications
5.3 Limitations
5.4 Conclusions and recommendations
5.5 Further research
References
Appendices
Appendix A - build.ps1
Appendix B - Solr dockerfile
Appendix C - boot.cmd
Appendix D - docker-compose.yml
Appendix E - .env
Appendix F - main.cmd
Appendix G - InstallChocolatey.ps1
Appendix H - InstallBoxStarter.bat
Appendix I - RunBoxStarterGist.bat
Appendix J - BoxStarterGist.txt
Appendix K - install-sif220.ps1
Appendix L - installSolr.ps1
Appendix M - configureSolr.ps1
Appendix N - sqlServer.ps1
Appendix O - setupSitecore.ps1

1 Introduction

1.1 Background

Software development projects nowadays depend on environments and libraries supporting and connecting various components. Infrastructure needed to compile and execute software must often be installed onto machines used for development. As projects grow larger and more intricate, so do the infrastructures, which can introduce tedious work when a new machine is to be used for developing, such as when upgrading computers, or bringing a new developer onto the team ​[1]​.

Infrastructure as Code (IaC) refers to the process of managing and provisioning infrastructures on machines automatically, using versioned files with instructions. These files only need to be configured once and can then be reused. IaC enables Development and Operation (DevOps) teams to easily set up new environments, configure machines, and install new software that will be applied for the whole development team ​[2]​. ​Using automated tools to provision infrastructure minimizes the time required for setting up the environment, maximizing the time available for development, while also minimizing configuration errors, which increases the efficiency of the development team ​[3]​.

Provisioning means automatically installing development environments, to ensure that all dependencies and configurations are in place regardless of which machine the project is run on. This way, the project is ensured to be identical on all machines ​[4]​. So, a new developer downloads the project, starts the configuration, and waits for everything to be installed automatically. Thereafter, they are ready to start developing.

1.2 Selection of provisioning tools

Docker and Vagrant are two tools that could be used to provision an infrastructure. Vagrant is a tool used to automate the configuration of virtual machines, while Docker encapsulates the project into containers ​[5]​.

Other available tools for provisioning include Terraform and CloudFormation. Terraform is developed by HashiCorp, who also develop Vagrant, and they themselves state that Vagrant is more suitable for maintaining local development environments, which is the main focus of this study: provisioning on local machines. Terraform, in contrast, builds infrastructure focused on the cloud [6]. CloudFormation is Amazon Web Services' (AWS) own provisioning tool, which also focuses on cloud infrastructure [7].

There are also tools used for Configuration Management (CM) ​[8]​, such as Chef, Puppet, Ansible, and Saltstack. CM tools are more suitable for automating scripts, while provisioning tools are more suitable for automating installations of entire infrastructures ​[9]​. Docker and Vagrant are used within a closed environment, which facilitates migrating the environment between local machines. CM tools can however be used as a complement to a solution using Docker or Vagrant.

CM tools are thereby not of interest, as they are designed to maintain configurations, not to set them up [10]. Docker and Vagrant were selected over other provisioning tools such as Terraform and CloudFormation, as the latter two are designed for cloud servers, while this study focuses on local development. Another argument for selecting Docker is that it is a popular tool used extensively in the sector, for several reasons including its efficiency, scalability, and portability [11]. The two tools will be compared to evaluate the performance of each alternative.


1.3 Problem statement

When a project is initiated locally on a machine, all services and dependencies are only installed on that machine. Problems can then arise when the project needs to be moved to a new machine [12]. When the project is installed on a new computer, all dependencies must be installed manually, which can be a long and tedious process [1]. A manual workflow can also lead to dependence on specific individuals, as the situation may arise where only a few employees know how to conduct the installation process. If employees with this knowledge were to leave, it could cause problems for the company. Manual installations can also be a source of problems, as dependencies may be overlooked, resulting in an incomplete configuration. This poses a risk of difficult bugs, misunderstandings, or outdated versions of library and service dependencies.

The installations must be conducted each time the development environment is to be set up on a new machine, for example when hiring new staff or consultants, or when a developer's computer is upgraded or crashes. Similarly, when the infrastructure is altered, the change must be propagated to all development machines, which is also a process that would benefit from being automated, similarly to the versioning of the code. By using provisioning in a closed environment, the "it works on my machine" syndrome can be avoided [13].

1.4 Purpose and research questions

To resolve these issues and avoid the long processes, a versioned infrastructure solution can be implemented. There are several frameworks for achieving a versioned infrastructure, among them Docker and Vagrant. The purpose of this study is to compare Docker and Vagrant, to establish which alternative performs better for provisioning of infrastructure, both regarding the installation time on a new machine and the iteration time of a configuration change. Hence, the following research questions are central to the purpose of this report:

What differences are there between Docker and Vagrant for machine level provisioning regarding the time required for a first-time installation of a development environment?

What differences are there between Docker and Vagrant for machine level provisioning regarding the iteration time of a configuration change in a provisioned development environment?

The results of this study can be of value to all developers choosing a framework for implementing similar solutions to reduce their installation time, and to efficiently propagate changes out to other team members, which enables a more efficient working process.

1.5 Scope and delimitations

The study will focus on a performance comparison between Docker and Vagrant. Specifically, the time taken by each tool to provision a development environment on a new machine will be measured, as well as the iteration time for a configuration change. The study will be conducted on machines running Windows, as the provisioned project is limited to a Windows environment. It is worth noting that both Docker and Vagrant use Linux internally, and it cannot be ruled out that using Linux throughout the study would yield different results. This is however not within the scope.

As previously mentioned, CM tools are more suitable for maintaining configurations on existing servers or local machines, but not for setting up whole infrastructures. Therefore, CM tools are not within the scope of this study. Even though some form of configuration will be assessed, adding CM tools would make the scope too broad.


1.6 Company

The study is conducted with the support of OEM International AB, a technical trading company with over 30 affiliated firms. Within OEM, the study is supported by the digital marketing team, who manage all development, support, and maintenance of the company-wide digital business platform.

OEM do not currently have a solution in place for development environment provisioning, which is causing issues because of the reasons outlined above. They have large resource costs associated with setting up the environment on each new machine and have been tied to one consulting firm because of the unportable nature of the current environment. However, these problems are not unique to OEM. In fact, they are problems that must be addressed in all development environments to ensure efficient development ​[4]​.

OEM are interested in evaluating the alternatives for provisioning to make informed decisions on how to manage the configuration of their development environment. Today, they have support for Continuous Integration and Continuous Deployment (CI/CD) in their test and production environments, but not for provisioning of their development environment.

1.7 Disposition

The remainder of this report is structured as follows:

Chapter 2 - Theoretical framework: An introduction to the theoretical background of the tools used and evaluated.

Chapter 3 - Methodology: A description of the study and the work process.

Chapter 4 - Empirical data and analysis: The data gathered from the experiments is presented and analyzed here.

Chapter 5 - Conclusions and discussion: A summary and discussion of the study and its conclusions.

2 Theoretical framework

2.1 Link between research questions and theory

The choice of Docker and Vagrant as tools to compare was made because they are two popular tools, which can be used for the same purpose, but function differently internally. They are thereby not only attractive options for the technical solution, but also lend themselves well to a comparative study.

2.1.1 Virtual machines

Virtual machines (VMs) are a concept which was developed in the 1960s [14], allowing a computer system to be emulated on another physical computer. Among the possibilities of early VMs were things such as running Windows-oriented programs on Linux, despite the different architectures.

Virtual machines are able to run with the help of hypervisors, also known as virtual machine monitors. Hypervisors allocate the host computer's resources, such as processing and memory, to the virtual machine [15]. Hypervisors also allow multiple VMs to run on a single machine, and they come in two types: bare-metal hypervisors, which run directly on the hardware, and hosted hypervisors, which run as processes on the host machine [16].

There are some security benefits of using a virtual machine for certain kinds of services. A virtual machine has its own contained environment emulating a computer, with its own operating system, libraries, and binaries. The closed environment brings some benefits to the table. If something malicious hits the VM it can simply be torn down and brought up again without the malicious software ever seeing the host computer.

A drawback of virtual machines is their high hardware demands, since they share resources with the host machine's hardware. Virtual machines are resource heavy and require a fixed amount of resources (RAM, processing power, etcetera). Because of this, it can be difficult and resource-intensive to run multiple VMs on one physical machine, and VMs are therefore less suitable for handling deployment of smaller applications [17].


Figure 2.1​ - Architecture of virtual machines ​[18]

2.1.2 Docker

Docker is an open source tool ​[19]​, which encapsulates projects into so called containers. The containers are controlled by a ​Dockerfile​, which specifies how the project is to be configured. This file can be versioned (e.g. using Git), which ensures that all instances of the project on all machines function identically and are using the correct configuration.

Docker supplies a runtime engine to manage the containers. The project is split up into containers to simplify development, which makes it easier to install the project anywhere. Regardless of whether the project is run on a virtual machine, in the cloud, or on local machines with different operating systems, the container will function identically [20].

Such a solution facilitates the development of a project in large teams, where developing without it can create issues. If a project is developed in one operating system (e.g. Linux) but will be deployed on a server running a different OS (e.g. Windows), problems may arise as dependencies are managed differently. A containerized infrastructure solves this issue, by packing the project into a container with everything required to run it, which can then be run on any system. Due to the modular nature of containers, they are also easily scalable.


Figure 2.2​ - Architecture of Docker ​[18]

2.1.2.1 Images

An image contains "instructions" used to create one or several containers. An image consists of compressed software, and can for example contain the programs, services, packages, and dependencies required to create the container.

2.1.2.2 Containers

In many ways a container is similar to a virtual machine, but there are several key differences. A container does not have its own operating system like virtual machines. This way, it does not require the same amount of performance from the hardware. Containers also take up significantly less space than VMs, taking up space in the megabytes, as opposed to VMs taking up gigabytes of space ​[21]​. Since a virtual machine has its own operating system, libraries, and binaries, it demands higher performance from the hardware.

Containers are lightweight runnable instances of an image, which are executed in the host machine's operating system​[22]​. This makes them efficient, and no unnecessary resources are taken from the hardware.

2.1.2.3 Docker Hub

Docker Hub is a cloud platform for Docker container images, where developers can push and pull images when needed to build a project. There are public images that anyone can pull, and it also supports private repositories for private images. This enables teams to keep the images that their containers are built from in one place [23].


2.1.2.4 Docker Registry

There are some limitations to Docker Hub which can convince a development team to host their own images by setting up a Docker Registry. For one, Docker Hub only offers hosting of one private image for free, and a monthly fee must be paid to host additional private images. There are also other benefits to hosting the images yourself, such as having full control and ownership of the storage and distribution pipeline. Self-hosting does, however, come with drawbacks in the form of maintenance costs and a potentially more hassle-ridden development process [24].

2.1.3 Vagrant

Vagrant is open source and text based, which makes it suitable for versioning [25]. Vagrant does not use containers like Docker; instead, it is used to maintain VMs. Vagrant automates the initialization of VMs using a Vagrantfile, which contains properties such as the operating system and the software to be installed on the VM. The Vagrantfile can be versioned just like a Dockerfile, and works in a similar way [26].

2.1.3.1 Synced folders

When running code in an automatically configured VM, one needs a way to copy the code and other relevant files into the VM. With Vagrant, this is typically done using a synced folder. This is a folder on the host machine that is specified to be copied into a specific path on the VM. This way, all development can be done on the host machine, and be seamlessly deployed to the VM ​[27]​.
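As an illustration, a synced folder is declared with a single line in the Vagrantfile. The sketch below is generic; the paths are placeholders and are not taken from the Vagrantfile used in this study:

Vagrant.configure(2) do |config|
  # Copy the project folder on the host into C:\project inside the Windows guest
  config.vm.synced_folder ".", "C:\\project"
end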

2.1.3.2 Vagrant cloud

To start a VM using Vagrant, a .box file is needed, containing the basic installation of the machine, not least the OS. Such box files can be obtained in different ways, but most easily from the Vagrant Cloud, where the developer community can upload and download .box files with different configurations [28].

If a specific box is needed which cannot be found in Vagrant Cloud, Vagrant can also be used to generate a custom .box file from an existing VM, using the vagrant package command [29].

2.2 Previous research

In the initial phase of this work, a literature study was performed, reading articles, blog posts, and websites regarding the technical aspects of the tools to be evaluated, as well as a study of the current state of scientific research in the field. Regarding the former, a lot of resources in various forms were found, ranging from argumentative blog posts ​[4], [5], [30], [31] to technical documentation​[32], [33]​. Regarding scientific resources however, there was not a similar abundance. There have been many studies of Docker, containers, etcetera, and separately also of VMs and (to a lesser extent) Vagrant. Despite this, we were not able to find many works about comparisons of these technologies, and none comparing specifically Docker and Vagrant for provisioning of software environments. Because of this, we can confidently claim that this study is novel.

The most similar work encountered was a bachelor thesis from KTH Royal Institute of Technology, Improving Software Development Environment, Docker vs Virtual Machines [34], which qualitatively compares development in a Docker environment versus in a VM environment. Vagrant was however not within its scope, nor was it a quantitative study of performance, as this study is. We are thereby of the opinion that this work is significantly different from theirs.


3 Methodology

3.1 Link between research questions and methods

3.1.1 Experiments as method

The work was performed as a comparative study of the two tools, where experiment was the method used. Both solutions were implemented, after which the implementations were evaluated. Experiments are an essential part of well conducted research, and offer an objective comparison between two systems [35]. Based on this, experiments were chosen as the method, in order to assess scientifically, with quantitative data, whether Docker or Vagrant performs better with respect to the research questions.

3.1.2 Formulating hypotheses

Since a comparison between Vagrant and Docker was conducted, a hypothesis for each research question needed to be formulated. A first-time installation requires Vagrant to set up a virtual machine by installing an entire operating system and all required software, while Docker starts containers from images describing components of the infrastructure. Given this more lightweight approach, our inference was that Docker would be faster.

Hypothesis 1:

Docker performs better than Vagrant regarding the time required for a first-time installation of a development environment.

For the second research question, a configuration change was implemented. The configuration change chosen was increasing the amount of memory that Solr allocates to its Java Virtual Machine (JVM) [36]. This was predicted to be faster than installing the entire environment from scratch, and there was less confidence in which tool would be faster. However, for the same reasons as above, the hypothesis was that Docker would once again be faster.

Hypothesis 2:

Docker performs better than Vagrant regarding the iteration time of a configuration change in a provisioned development environment.

3.2 Work process

In the initial stages of the work process, a literature study was conducted to base the study on a deeper understanding of the field. Previous research, as well as a significant number of articles and blog posts were studied, in order to assess the current state of the art, and to ensure the questions were not already answered.

The first practical step was to implement the provisioning solution with each of the two tools, in a way that meets the criteria set in the problem statement. When both solutions were implemented the experiments were conducted. The experiments were conducted on a machine with neither Docker nor Vagrant solutions installed, where the project was provisioned using the Vagrant solution, and the time was measured. The same experiment was conducted using the Docker solution. Once the configurations had been installed, the second research question could be investigated by measuring the time taken for each tool to implement a configuration change.


To increase validity and reliability all experiments were performed five times. When all the experiments were completed, the data gathered from the experiments were analysed to evaluate the systems. Finally, the results and conclusions were presented in this report.

Figure 3.1​ - Workflow

3.3 Approach

The study used a quantitative approach, with a literature study and experiments. The literature study was conducted to form a deeper understanding in the field, to ensure quality in the research. Through experiments quantitative data was generated, which could be analysed to draw conclusions, further developing the knowledge in the field.

3.4 Data collection

The data collection consisted partly of studying literature and partly of collecting empirical data from experiments. A literature study was done in order to have a solid starting point for the study, as well as getting a technical understanding of how Vagrant and Docker are configured. Both tools are well documented, allowing for an efficient learning process.

The quantitative and main part of the data collection was done through experimentation. By measuring the time taken for the tools to perform the tasks, quantitative data was collected, which could be used for analysis and conclusions.

3.5 Data analysis

When all empirical data had been gathered from the experiments, a statistical evaluation of the repeated experiments was conducted, to be able to draw conclusions on consistent differences in performance of the two tools. The analysis was conducted with the help of statistical tools such as confidence intervals and t-tests, to evaluate the significance of the differences in the empirical data. Through a rigorous statistical analysis, the measured results could be validated, which increases the reliability of the conclusions.

3.6 Validity and reliability

To ensure high reliability, suitable preparations and literature studies were carried out before the solutions were implemented or tested. Finding reliable sources of information, and gathering sufficient information regarding the subject was given extra attention, to ensure that the claims in this report can be trusted.

For the test results to be as reliable as possible, each test was conducted five times. This was done to mitigate any temporary factors which might have affected the tests, such as network speed or unexpected resource usage by background processes. As mentioned, a thorough statistical analysis was conducted to validate the data collected. The conclusions drawn from the results were also given close attention, to properly investigate what could be concluded from the results, and ensure that no over-generalizing statements were made.


3.7 The provisioned project

The project used in the study was a web platform consisting of three main services, Sitecore, Solr, and an SQL database. Sitecore, as the name implies, acts as the core for the web platform. It provides a CMS (Content Management System), with which components for the web pages can be created and managed. Sitecore also provides a variety of additional tools, such as e-commerce. Sitecore is only available for Windows operating systems ​[37]​.

Solr is a search server, used to continuously index the contents of the web page, including file contents from files such as JSON, XML, and CSV. The indexes are stored in a document based database, and enable the site to be searched for data contents efficiently.

The database used is an SQL database, implemented in Microsoft SQL Server Express.

3.8 Configuration of Docker

There are two versions of Docker available for Windows, Docker Desktop and Docker Toolbox. This study was conducted using Docker Desktop, as it supports Windows containers, which Docker Toolbox does not. Windows containers were a requirement for the project, as it used Sitecore, which is bound to Windows. Docker Desktop is however limited to Windows 10, and specifically the versions Pro, Enterprise, and Education. This is because Docker Desktop requires Hyper-V, which is only included in the mentioned versions [38].

In order to start containers with Docker, images with instructions are needed. In some cases, there are public images available free of charge to be pulled directly from Docker Hub [23], but since Sitecore is proprietary software requiring a licence [39], such a solution could not be used. Instead, a private registry was set up locally on one machine, to act as a server to pull the images from.

The registry itself was built using a public Docker image [40], which was pulled using docker pull stefanscherer/registry-windows. This command downloads the image from Docker Hub. To start the registry, the following command was run:

docker run -d -p 5000:5000 --restart=always --name registry -v C:\registry:C:\registry stefanscherer/registry-windows:2.6.2

which starts a container hosting the registry. The command contains some notable flags:

● -d: Runs the container in the background and prints the ID of the container.
● -p 5000:5000: Forwards the container's port 5000 to the host's port 5000.
● --restart=always: Starts the container each time Docker is started.
● --name registry: Sets the name of the container to "registry".
● -v C:\registry:C:\registry: Mounts a volume. Since the container is built from scratch each time it is started, the data within the container is lost. Mounting a volume makes the data persist across sessions.
● stefanscherer/registry-windows:2.6.2: Specifies the image to be used.

Once the container was running it could be accessed locally, for example by viewing the catalog on http://localhost:5000/v2/_catalog. At this point the registry does not contain any images, and the catalog returns an empty JSON array.


Figure 3.2​ - Empty registry up and running
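For completeness, getting an image into a private registry like this one generally amounts to tagging it with the registry's address and pushing it. A minimal sketch follows; my-image is a placeholder, and for the Sitecore images this step is handled by the Build.ps1 script described below:

# Tag a locally built image with the registry's address and port
docker tag my-image:latest localhost:5000/my-image:latest
# Push it to the local registry
docker push localhost:5000/my-image:latest
# Other machines on the network pull it using the registry machine's address
docker pull 192.11.7.226:5000/my-image:latest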

Images for Sitecore, Solr, and SQL Server were needed in the registry. A community maintained repository was used, containing all these images [41]. Once the repository had been pulled, the following command was used to build the images and push them to the local registry:

.\Build.ps1 -Registry "localhost:5000" -SitecoreVersion 9.3.0 -includeSpe

See appendix A for the full Build.ps1 script. Despite specifying only version 9.3.0, this build took several hours to complete. Once completed, the catalog displayed a registry filled with images ready to be pulled by other machines.

Figure 3.3​ - The filled repository catalog

3.8.1 Dockerfile

Docker builds images by reading instructions from a Dockerfile, and executes the commands within it step by step. See code block 3.1 for a shortened version of the Dockerfile used for the Solr image. For the complete file, see ​appendix B​.

RUN Expand-Archive -Path 'C:\\temp\\packages\\*.zip' -DestinationPath 'C:\\temp';

RUN New-Item -Path 'C:\\downloads' -ItemType 'Directory' -Force | Out-Null; `
    curl.exe -sS -L -o C:\\downloads\\solr.zip $('https://archive.apache.org/dist/lucene/solr/{0}/solr-{0}.zip' -f $env:SOLR_VERSION); `
    Expand-Archive -Path 'C:\\downloads\\solr.zip' -DestinationPath 'C:\\temp'; `
    Move-Item -Path 'C:\\temp\\solr-*' -Destination 'C:\\solr';

FROM $BASE_IMAGE

USER ContainerAdministrator

COPY --from=builder ["C:\\solr", "C:\\solr"]

RUN MKDIR c:\\data

EXPOSE 8983

COPY Boot.cmd .

CMD Boot.cmd c:\\solr 8983

Code block 3.1 - Dockerfile for Solr image (shortened)

The RUN instructions execute commands in a layer on top of the image being built, and commit the results to the image. For example, in code block 3.1, a zip file is downloaded from the Solr website, extracted, and moved to a specific location within the container.

FROM specifies the image to be used for subsequent instructions. In this study, the images were pulled from a local registry, but they can also be pulled from Docker Hub.

USER sets the username used when running the image; the default value is root.

COPY copies files or directories from the host into the filesystem of the container.

EXPOSE 8983 exposes port 8983, by default using TCP.

The CMD Boot.cmd instruction will run the Boot.cmd script (see appendix C). It also passes two parameters, c:\\solr and 8983. In this case, the script will start Solr on port 8983. This is not to be confused with the RUN instruction, as CMD does not execute at build time, but specifies the command run when a container is started from the image.

Dockerfiles are typically built using docker build, and are usually located at the root of the project. The build context is sent to the Docker daemon, which runs in the background and executes the build. Starting the containers can be done using the docker run command [42].
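As a brief illustration (the image name is an example rather than the exact tag used in the study), building and starting a container from a Dockerfile in the current directory could look as follows:

# Build an image from the Dockerfile in the current directory and tag it
docker build -t sitecore-xp-solr .
# Start a container from the image in the background, publishing Solr's port to the host
docker run -d -p 8983:8983 sitecore-xp-solr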

3.8.2 Compose

To run multiple containers that work together in a project, a Compose file was used. A Compose file is used to configure all services and then start them all with one command. The community maintained repository provided some custom Compose files for different versions and use cases. docker-compose.xp.yml was used as a base, and was modified to fit the study. Among other things, the modifications included removing images, mounting volumes, and specifying that the previously configured registry was to be used as the source for the images. See code block 3.2 below for a snippet of the Compose file used. The full file is available in appendix D.

version: "2.4"
services:
  sql:
    image: 192.11.7.226:5000/sitecore-xp-sqldev
    volumes:
      - .\data\sql:C:\Data
    mem_limit: 2GB
    ports:
      - "44010:1433"
    environment:
      SA_PASSWORD: ${SQL_SA_PASSWORD}
      ACCEPT_EULA: "Y"
  solr:
    image: 192.11.7.226:5000/sitecore-xp-solr
    volumes:
      - .\data\solr:C:\Data
      - .\volumes\solr\bin:C:\solr\bin
    mem_limit: 10GB
    ports:
      - "44011:8983"

Code block 3.2 - docker-compose.yml (snippet)

image specifies the image the Docker daemon will pull. This references the previously configured registry, which was running on another machine and accessed over a local network.

volumes will symlink folders and files from the host to the container. This is done to share data with the container, similarly to Vagrant's synced folders. The second volume listed for the Solr image is of particular interest to the study, as it contains the configuration for Solr. This is how the configuration change for the second research question was sent to the container. When the JVM memory limit is specified in the linked configuration files, the configuration change is implemented the next time the Compose file is run.

mem_limit specifies the amount of RAM allocated to the container.

ports exposes ports from the container to the host.

Running the Compose file is done using docker-compose up, which pulls the images from the registry, builds the containers, and starts all the components. To stop the instances and destroy the containers, docker-compose down is used.


3.9 Configuration of Vagrant

For Vagrant to start a VM, it needs a base installation to start from. This is done using a box file. A box file is a compressed VM environment, not least containing the OS [43]. Just as container images can be found on Docker Hub, box files can be found on Vagrant Cloud. For this study however, a custom box was created. This was done for a couple of reasons, one being that a Windows box was needed, and Vagrant Cloud does not have as many Windows boxes. Building a custom box also enables full control over the configuration.

The box was configured to be as lightweight as possible. Thus, only the OS was installed, and no additional software. Instead, all software needed would be installed by provisioning using Vagrant. This way, developers also have full control over the VM, and any software can easily be modified, or removed if no longer needed, without having to modify the box file. The box was optimized to use as few resources as possible. To do this, a VM was created using VirtualBox, and configured with all the settings needed. The operating system chosen was Windows 10 Professional, as it has support for Remote Desktop Protocol (RDP), which is used to remote into the virtual machine from the host. Windows Remote Management (WinRM) was also enabled, which is the Windows equivalent of SSH (Secure Shell). SSH supplies a secure connection to a machine over insecure networks, but is not available on Windows by default, so WinRM was used instead. PowerShell was configured to run commands without restrictions, and lastly the drives were cleaned of unused files to reduce the size of the box. Once the VM was configured, it was packaged into a box file using the command vagrant package --base VirtualBoxVMName --output /path/to/output/windows.box. Finally, the box was added to the Vagrant project using vagrant box add /path/to/output/windows.box --name WindowsBox.

3.9.1 Vagrantfile

The purpose of the Vagrantfile is to describe what type of machines are required for a project and how to configure and provision those machines ​[44]​. The Vagrantfile is usually located at the root of the project, and typically ​one​ Vagrantfile is used for ​one​ project.

The VM is started using the vagrant up command. Provisioning the solution is done either by the vagrant provision command or by adding the --provision flag to vagrant up. When running these commands, Vagrant will look at the Vagrantfile and perform the necessary actions.

See code block 3.3 below, containing the Vagrantfile used in this study. The configuration of the Vagrant machine is specified in the Vagrant.configure(2) do |config| segment, ending with end on the final line [44].


Vagrant.configure(2) do |config|
  config.vm.box = "WindowsBox"
  config.vm.guest = :windows
  config.vm.communicator = "winrm"
  config.vm.boot_timeout = 600
  config.vm.graceful_halt_timeout = 600

  config.vm.network "private_network", ip: "192.168.50.4"
  config.vm.network :forwarded_port, guest: 3389, host: 3389, id: "rdp", auto_correct: true
  config.vm.network :forwarded_port, guest: 5985, host: 5985, id: "winrm", auto_correct: true
  config.vm.network :forwarded_port, guest: 8983, host: 8983, id: "solr", auto_correct: true
  config.vm.network :forwarded_port, guest: 443, host: 4443, id: "sitecore", auto_correct: true

  config.winrm.username = "vagrant"
  config.winrm.password = "vagrant"

  config.vm.provider "virtualbox" do |vb|
    vb.memory = "12288"
    vb.cpus = 2
    vb.name = "Windows_Vagrant"
  end

  config.vm.provision :shell, path: "shell/main.cmd"
  config.vm.provision :shell, path: "shell/InstallBoxStarter.bat"
  config.vm.provision :shell, path: "shell/RunBoxStarterGist.bat"
  config.vm.provision :shell, path: "shell/solr/installSolr.ps1"
  config.vm.provision :shell, path: "shell/solr/configureSolr.ps1"
  config.vm.provision :shell, path: "shell/sql/sqlServer.ps1"
  config.vm.provision :shell, path: "shell/sitecore/setupSitecore.ps1"
end

Code block 3.3 - Vagrantfile

config.vm.box specifies which base box the VM will be built from.

Vagrant defaults the operating system to Linux and since this is a Windows machine it had to be specified under ​config.vm.guest​. ​Vagrant also needs to know this information for specific OS configurations such as mounting folders and configuring networks.

Similarly, ​config.vm.communicator needed to be changed to ​ "winrm" from the default "ssh"​ since it is a Windows machine.


config.vm.boot_timeout and config.vm.graceful_halt_timeout default to 300 seconds and 60 seconds respectively, but Windows machines can be quite slow, for example due to Windows updates during startup and shutdown. Increasing these thresholds therefore prevents Vagrant from timing out and treating the VM as failed.

After that follow some necessary network configurations, using config.vm.network. The first line is a static IP address used to access the VM. This is used to connect from the host to the SQL server running on the VM. The other network lines are forwarded ports. By doing this, the ports on the VM can be accessed from the host.

By specifying config.winrm.username and config.winrm.password, WinRM will automatically log in to the VM.

config.vm.provider specifies the resources available to the VM. This can be modified at any time, to allocate hardware resources to the VM.

Lastly, config.vm.provision contains all the scripts that will be run during provisioning. The scripts can be read in full in appendices F-O, and function as follows:

main.cmd is the first script to run, which downloads and installs Chocolatey onto the VM. Chocolatey is a software management automation tool, which simplifies downloading and installing software from PowerShell. Then, using Chocolatey, InstallBoxStarter.bat installs Boxstarter, which is a tool used to automatically handle any reboots needed when installing additional software. After that, the RunBoxStarterGist.bat script follows the instructions in BoxStarterGist.txt (see appendix J), installing prerequisites such as the Java Runtime Environment, SQL Server Express, and various Sitecore prerequisites. Finally, the PowerShell scripts perform the final steps of installing, configuring, and starting Solr, SQL Server, and Sitecore.
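As an illustration of this bootstrap step, installing Chocolatey and then installing software through it typically looks like the following PowerShell sketch. This is the standard Chocolatey installation procedure rather than the exact contents of main.cmd or BoxStarterGist.txt, and the package names are only examples:

# Allow the install script to run in this session and download it over TLS 1.2
Set-ExecutionPolicy Bypass -Scope Process -Force
[System.Net.ServicePointManager]::SecurityProtocol = [System.Net.ServicePointManager]::SecurityProtocol -bor 3072
Invoke-Expression ((New-Object System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))

# Once Chocolatey is available, prerequisites can be installed unattended
choco install -y jre8 sql-server-express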

3.10 Experiments

Once two working implementations were in place, they were evaluated according to the research questions. These evaluations were conducted in the form of experiments, where the time taken for a first-time installation and for a configuration change was measured using both tools. The machine used for the experiments was a laptop of model Lenovo V130. See the technical specification of the machine in the table below.

OS Windows 10 Pro

CPU Intel Core i5-7200U 2.50 GHz

RAM 8 GB

Drive 256 GB SSD

Given the two test cases corresponding to the two research questions, and the two tools used for each test case, four experiments were designed. To increase the reliability of the results, each experiment was conducted five times.


3.10.1 First-time installation

3.10.1.1 Docker

For the first-time installation using Docker, the registry server was run on a separate machine, and the test machine was connected over a local wireless network. docker-compose up was run, and a timer was started to measure the time. Once the project was fully configured, the timer was stopped and the time was recorded. To reset the experiment, the containers were first destroyed using docker-compose down. In order to simulate another "first-time" installation, all image data also had to be removed. This was done using docker rmi $(docker images -a -q). The process was repeated five times.

3.10.1.2 Vagrant

For Vagrant, the box file was manually transferred to the testing machine. In a real use case, the box file would be stored on a server for the developers to download. The command used to start the VM was vagrant up --provision. Again, a timer was started when the command was run, and stopped once the VM was fully configured. Resetting the experiment was done using vagrant destroy, which tears down and deletes the VM, allowing further experiments to be conducted as if for the first time.

3.10.2 Configuration change

To answer the second research question, a configuration change needed to be implemented, and the implementation time measured. The configuration change used was doubling the RAM allocated to the JVM in Solr, from the default 512 MB to 1024 MB (1 GB). This was chosen in collaboration with OEM, as it represents a typical configuration change which might be used in a real use case.

The JVM memory allocation is specified in the configuration file solr.in.cmd. To measure the time taken to implement the change, this file was edited. In a real-life use case, it might be edited by one developer and then transferred to the others using tools such as Git. Once the altered solr.in.cmd had been obtained, the experiment served to measure how long it takes to implement it in the provisioned environments.
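In solr.in.cmd, the JVM heap size is controlled by the SOLR_JAVA_MEM variable, so the change amounts to roughly the following (the exact lines may differ slightly from the file used in the study):

REM Before the change: 512 MB JVM heap
set SOLR_JAVA_MEM=-Xms512m -Xmx512m

REM After the change: 1024 MB JVM heap
set SOLR_JAVA_MEM=-Xms1024m -Xmx1024m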

Figure 3.4​ - solr.in.cmd, with JVM RAM allocation set to 512 megabytes

Figure 3.5​ - Solr dashboard displaying 512 megabytes of JVM-memory


Figure 3.6​ - solr.in.cmd, with JVM RAM allocation set to 1 gigabyte

Figure 3.7​ - Solr dashboard displaying 1 gigabyte of JVM-memory

3.10.2.1 Docker

For Docker, the command docker-compose up was run once again, which built the containers using the new configuration. The configuration was mounted into the container in question using volumes. Since the images had not been deleted before these experiments, Docker did not need to pull them again, and this test was significantly faster. Between each of the five iterations of this experiment, docker-compose down was run, destroying the containers but preserving the images.

3.10.2.2 Vagrant

Applying the configuration change in Vagrant was done in a similar way as with Docker. The configuration file was altered, after which vagrant up --provision was run to rebuild the VM. Since no vagrant destroy was run in between tests, the initial installation was preserved, allowing for a faster process than the first-time install. Instead, the configuration was simply reverted in between the five tests.


4 Empirical data and analysis

This chapter presents the raw empirical data collected from the experiments, together with the statistical analysis of that data. All times were measured in whole seconds.

4.1 Provisioning fresh installation with Docker

Test number Time (mm:ss)

1 20:50

2 21:13

3 22:13

4 23:42

5 26:46

4.2 Provisioning fresh installation with Vagrant

Test number Time (mm:ss)

1 31:40

2 41:38

3 40:32

4 41:24

5 35:48

4.3 Provisioning configuration change with Docker

Test number Time (mm:ss)

1 01:22

2 01:29

3 01:12

4 01:22

5 01:17


4.4 Provisioning configuration change with Vagrant

Test number Time (mm:ss)

1 04:49

2 03:52

3 02:00

4 02:25

5 02:08

4.5 Statistical tools used

In order to process the empirical data, a statistical analysis was conducted. The analysis investigates the differences in the collected data and their significance, and uses t-tests to determine whether there is a significant difference between the groups of data. The following tools are among those used in this statistical analysis. For the calculations, Microsoft Excel and its built-in functions were used.

4.5.1 Normal distribution

If the experiments were to be run an infinite number of times, the results would be expected to be normally distributed. The goal of the analysis is to as thoroughly as possible define these normal distributions in order to compare them to each other. Since it is impractical to conduct such large (or infinite) numbers of tests, a sample size of five tests was used for each experiment. The five results were analyzed as normally distributed, and suitable conclusions were drawn as to how the data could be extrapolated to reveal information about the entire data population.

The ​mean and ​standard deviation are key data points defining the distribution of the data. The mean tells us the average result of all the data considered, while the standard deviation tells us how much we can expect the data to deviate from the mean ​[45]​. However, the mean and standard deviation of the sample set cannot be assumed to be equal to those of the entire data population. Thus, other tools must be used to assess how closely this models the actual data, and how they compare to other data sets.

4.5.2 Confidence interval

The confidence interval is a measurement used to give an idea of the accuracy of the mean value ​[45]​. It uses a ​confidence level​, which in this analysis has been set to 95%. The confidence interval gives a range (defined by a deviation in either direction of the mean), and states that the actual mean of the entire data set, with 95% certainty, lies within that range. Thus, a small confidence interval indicates that the found mean is likely close to the actual mean, while a large confidence interval indicates that the actual mean might differ from the found mean to a greater extent.
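Concretely, for a sample of $n$ measurements with mean $\bar{x}$ and sample standard deviation $s$, the intervals reported in the analysis below are consistent with the common normal-approximation formula (e.g. Excel's CONFIDENCE.NORM function):

$\bar{x} \pm z_{0.975} \cdot \frac{s}{\sqrt{n}}$, with $z_{0.975} \approx 1.96$ and $n = 5$,

which reproduces the reported confidence intervals to within rounding.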

4.5.3 T-test

When five results for each tool had been collected, two different mean values had been obtained. However, a mean difference in the sample data does not necessarily imply that there is a difference in the entire data population; it could be caused by a sampling error. Therefore, a t-test is used to determine the significance of the difference between two data sets.

The t-test defines a ​null hypothesis​, stating that there is no difference between the two means. Then, given the data, the probability of the null hypothesis being true is calculated. The result of this calculation (i.e. the probability), is known as the ​p-value (0 ≤ p ≤ 1). A high p-value indicates that the null hypothesis is likely to be true, meaning that the data sets are similar. A low p-value indicates that the data sets likely ​do not​have the same mean, and that the results are significantly different ​[45]​. In this study, a p-value below 0.05 is considered significant.
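Since each test number pairs one Docker run with one Vagrant run, the paired t-test used in this analysis computes the per-pair differences $d_i$ between the Vagrant and Docker times and evaluates

$t = \frac{\bar{d}}{s_d / \sqrt{n}}$, with $n - 1 = 4$ degrees of freedom,

where $\bar{d}$ is the mean difference and $s_d$ its sample standard deviation; the p-value is the two-tailed probability of a statistic at least this extreme under the t-distribution. In Excel this is presumably obtained with a call such as T.TEST(docker_times, vagrant_times, 2, 1), where 2 denotes two tails and 1 a paired test.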

4.6 First-time installation analysis

See below the data analysis of the experiments regarding the first-time installation.

Docker
Test number  Time
1  20:50
2  21:13
3  22:13
4  23:42
5  26:46
Mean  22:57
Standard deviation  2:24
Sample size  5
Confidence level  95%
Confidence interval  2:07

Vagrant
Test number  Time
1  31:40
2  41:38
3  40:32
4  41:24
5  35:48
Mean  38:12
Standard deviation  4:21
Sample size  5
Confidence level  95%
Confidence interval  3:49

Figure 4.1​ - First-time installation data analysis

This data can be visualized using a bar chart, as seen in figure 4.2 below. The blue bars visualize the mean times in the collected data, and the black error bars show the calculated confidence interval.


Figure 4.2​ - First-time installation data diagram

Given this data, we can see that Docker performed better in the experiments. To determine the significance of the observed difference, a two-tailed, paired t-test was carried out. The p-value obtained was 0.0024, which is well below the significance threshold of 0.05. Thus, we can expect Docker to perform better than Vagrant over a larger sample size as well.

4.7 Configuration change analysis

See below the data analysis and visualization of the experiments regarding the configuration change.

Docker
Test number  Time
1  1:22
2  1:29
3  1:12
4  1:22
5  1:17
Mean  1:20
Standard deviation  0:06
Sample size  5
Confidence level  95%
Confidence interval  0:06

Vagrant
Test number  Time
1  4:49
2  3:52
3  2:00
4  2:25
5  2:08
Mean  3:03
Standard deviation  1:14
Sample size  5
Confidence level  95%
Confidence interval  1:05

Figure 4.3​ - Configuration change data analysis


Figure 4.4​ - Configuration change data diagram

Again, Docker performed better in the experiments. It is worth noting that the times recorded for Vagrant fluctuated far more than those of Docker. The slowest recorded build time (4:49) was more than double that of the fastest (2:00). This is also reflected in the high standard deviation, as well as in the large confidence interval.

The t-test for this data generated a p-value of 0.0311. This is also below the significance threshold, but by a smaller margin than for the first-time installation. This is again mainly due to the large variance in the Vagrant data.

4.8 Combined metric

While it does not fall within the research questions or main scope of this study, a combined metric was calculated out of interest in finding the best tool across both metrics. To calculate the combined metric, the two metrics were normalized in order to be able to compare them to each other. The combined metric is defined as an equally weighted mean of the two measured metrics, and the same statistical tools were applied as for the main part of the analysis.
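The exact normalization is not spelled out, but the values in figure 4.5 are consistent with a min-max normalization over the ten measurements of each metric (both tools pooled), inverted so that faster runs score higher:

$x_{\text{norm}} = \frac{t_{\max} - t}{t_{\max} - t_{\min}}$

With this scheme the slowest recorded run of a metric scores 0, the fastest scores 1, and the combined metric is the equally weighted mean of a test's two normalized scores.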


Docker
Metric  First-time installation (normalized)  Configuration change (normalized)  Combined metric
Test 1  1.00  0.95  0.98
Test 2  0.98  0.92  0.95
Test 3  0.93  1.00  0.97
Test 4  0.86  0.95  0.91
Test 5  0.71  0.98  0.85
Mean  0.90  0.96  0.93
Standard deviation  0.12  0.03  0.05
Sample size  5  5  5
Confidence level  95%  95%  95%
Confidence interval  0.10  0.03  0.05

Vagrant
Metric  First-time installation (normalized)  Configuration change (normalized)  Combined metric
Test 1  0.48  0.00  0.24
Test 2  0.00  0.26  0.13
Test 3  0.05  0.78  0.42
Test 4  0.01  0.66  0.34
Test 5  0.28  0.74  0.51
Mean  0.16  0.49  0.33
Standard deviation  0.21  0.34  0.15
Sample size  5  5  5
Confidence level  95%  95%  95%
Confidence interval  0.18  0.30  0.13

Figure 4.5​ - Combined metric analysis


Figure 4.6​ - Combined metric visualization

Unsurprisingly, given the previous data, the combined metric also shows that Docker performed better than Vagrant. Note that a higher score in the normalized values is considered better, as opposed to the time values in the previous analyses. A t-test was once again conducted, generating a p-value of 0.0020, which once again suggests that the results are not likely to be caused by a sampling error.

5 Conclusions and discussion

5.1 Findings

In the initial stages of this study, a literature study was conducted. It was at this stage that the scope of the study was defined and the tools to be compared were selected. Docker and Vagrant were chosen due to their popularity, their extensive documentation, and the fact that they are systems which can be used for the same purposes but function very differently internally. This means that they lent themselves well to a comparative study.

The research questions, regarding the performance of both first-time installations and configuration changes, were chosen in collaboration with OEM International AB. The involvement of the technology firm ensured the relevance of the study, as it thereby addresses areas that are of interest to the industry.

To compare the two tools, provisioning solutions were implemented in both tools. The project used was a Sitecore web system, with additional modules such as Solr and SQL Server Express. Since Sitecore is bound to Windows, a Windows VM was used in Vagrant, and Windows containers were used in Docker.

To evaluate performance differences, experiments were designed and conducted. The experiments measured the time taken to perform the provisioning actions, and showed that Docker performed better than Vagrant in both use cases. A statistical analysis was conducted to compare the data objectively, and the analysis confirmed that the differences were sufficiently significant (i.e. not a result of sampling errors).

5.2 Implications

Synchronizing development environments between development machines is an issue which must be addressed by all development teams. A multitude of tools exist to assist this process, and deciding which to use can be a daunting and time consuming task. In this study, the performance of Docker and Vagrant regarding machine level provisioning is thoroughly studied, which can support developers in choosing which system to use.

5.3 Limitations

There are a few factors to consider when interpreting the results of this study, and some aspects which could have been conducted differently. One such factor is the fact that only one machine was used for testing. The initial plan was to use several machines, but due to resource limitations and the specific requirements from the tools tested, only one usable machine was available. Conducting the experiments on several machines would not only increase the reliability of the acquired data, but a more detailed analysis of how hardware resources affect the performance could have been carried out.

A better method of measuring the time taken to run the commands could also have been used. There is a built-in PowerShell cmdlet for measuring time, Measure-Command, which measures the time until the command is fully completed. In the case of Docker, the docker-compose up command for the project does not finish in that sense. Instead, once all the containers have been set up and started, it starts logging data. Thus, due to the lack of time available for finding a more precise method, the time was measured using a regular stopwatch. The results were rounded to the nearest second, as any further decimal precision would indicate a higher than actual accuracy.
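For reference, the Vagrant runs could have been timed along these lines; this is only a sketch, and it assumes the command returns when provisioning is complete, which, as noted, does not hold for docker-compose up:

# Measure-Command returns a TimeSpan for the enclosed script block
$elapsed = Measure-Command { vagrant up --provision }
Write-Output ("Provisioning took {0:n0} seconds" -f $elapsed.TotalSeconds)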


Another error source in the measurements is posed by the network speed. Since parts of the elapsed time was taken up by downloading data, the resulting time will to some extent depend on the network speed. To mitigate this error, all tests were conducted in the same location, and connected to the same network. However, fluctuations in the network speed may have impeded the results.

The scope of the study was also limited to the project at hand. This in turn limited the study to using Windows environments, while the tools are more commonly used with Linux. For a more complete assessment of the tools to be made, other types of projects and environments could have been studied.

5.4 Conclusions and recommendations

In this study, Docker and Vagrant have been compared, to assess the differences in performance for machine level provisioning. Both tools have been used to set up provisioning for a Sitecore web platform, using Solr as an indexing server and Microsoft SQL Server Express as a database. The two solutions were then tested with experiments, to determine the time taken for each tool to install the provisioned infrastructure on a new machine, as well as to implement a configuration change in said infrastructure. The results gathered from these experiments were analyzed, showing that Docker performed better than Vagrant in both test cases.

It is important to recognize the limitations of the study before drawing generalizing conclusions. Sufficient data has not been collected to confidently state that Docker will always perform better than Vagrant in all cases. However, considering the large performance difference shown in the experiment results, Docker is likely to perform better than Vagrant in most cases, at least those similar to the project configured in this study.

Pure performance is not the only factor which is relevant to consider when choosing a provisioning tool for a project. Usability, security, and documentation are all important aspects of a provisioning tool which have not been discussed in this study. However, looking only at performance, this study concludes with a recommendation to choose Docker over Vagrant for machine level provisioning.

5.5 Further research

As specified in section 5.3, there are factors regarding this study which could be improved upon in future studies in the area. Other projects with different environments could be provisioned, and the performance could be compared on different machines with varying specifications. Conducting similar studies using Linux environments would be particularly interesting, as both tools use Linux internally and are in many ways more optimized for Linux environments. Additionally, recent Docker versions have experimental support for running Windows and Linux containers simultaneously [46], the performance of which could also be interesting to evaluate.

Another tangentially related area worth investigating is that of different Container Orchestration tools, such as ​Docker Swarm and ​Kubernetes​. It could for example be interesting to assess different tools’ performance of load balancing.

References
