
Security Implications for Docker Container Environments Deploying Images From Public Repositories: A Systematic Literature Review

Bachelor Degree Project in Information Technology

IT610G, G2E, 22.5 HP

Spring term 2020

Date of examination: 2020-08-31

Dennis Tyresson


Abstract

Because of their ease of use and effectiveness, Docker containers have become immensely popular among system administrators worldwide. Docker elegantly packages entire applications within single software entities called images, allowing fast and consistent deployment across different host systems. However, it is not without drawbacks, as the close interaction with the operating system kernel gives rise to security concerns. The conducted systematic literature review aims to address concerns regarding the use of images from unknown sources. Multiple search terms were applied to a set of four scientific databases in order to find peer-reviewed articles that fulfill certain selection criteria. A final set of 13 articles was selected and evaluated by means of thematic coding. The analysis showed that users need to be wary of what images are used to deploy containers, as they might contain malicious code or other weaknesses. The use of automatic vulnerability detection, using static and dynamic detection, could help protect the user from bad images.


Table of Contents

1 Introduction
2 Background
   2.1 What is virtualization
   2.2 Containers
   2.3 Images
3 The problem
   3.1 Research question
   3.2 Motivation
   3.3 Limitation
4 Methodology
   4.1 Systematic Literature Review
      4.1.1 Databases
      4.1.2 Search Terms
      4.1.3 Selection criteria
   4.2 Thematic analysis
   4.3 Threats to Validity
      4.3.1 Construct validity
      4.3.2 Internal validity
      4.3.3 External validity
      4.3.4 Conclusion validity
   4.4 Ethical considerations
   4.5 Practical article selection
5 Analysis
   5.1 Threats to security
      5.1.1 Vulnerabilities and exploits
      5.1.2 Technical lag
   5.2 Tagging
      5.2.1 Challenges
      5.2.2 Solutions
6 Synthesis
   6.1 Threats to security
      6.1.1 Vulnerabilities and exploits
      6.1.2 Technical lag
   6.2 Tagging
      6.2.1 Challenges
      6.2.2 Solutions
7 Discussion
   7.1 Reviewing process
   7.2 Result validity
   7.3 Future research
8 Conclusion
Appendix A – Bibliography of Accepted Articles


1 Introduction

The use of operating system virtualization has furthered the technology that allows resources to be centralized in cloud computing. Virtualization is a technology that allows multiple applications to live isolated on the same host computer. A newer type of virtualization, called Operating System (OS) virtualization, has surged in recent years due to the release of the software Docker. Docker builds upon OS virtualization and packs complete services inside containers that can be deployed quickly and easily on any machine running the Docker engine. A service built upon the LAMP stack (Linux, Apache, MySQL and PHP), a set of services used for web hosting that would usually require the user to install separate components manually, could instead use a containerized solution to deploy a ready-to-use application with all components preinstalled. This feature has made containerization immensely popular.

The technology of OS virtualization is known to increase the security risk to the host system through its close interaction with the host kernel. This means that the container lives much closer to the highest order of system execution than in more classic virtualization, and threats to the container are of much more concern for the entire host system. Containers are deployed from images obtained from repositories. Regular Docker users download their images from publicly available repositories on Docker Hub. Anyone is free to create a repository and upload their image to Docker Hub, and a large number of repositories are developed by individual users rather than software organizations. By downloading an image from these repositories, the user willingly puts their trust in an unknown source and could expose the host system to any number of threats. This research will focus on finding out the security implications of deploying containers from publicly obtained images.

The project will be conducted as a systematic literature review, as this method is estimated to give the broadest view of the suggested topic and ensure a high probability of finding relevant information. A variety of scientific databases are chosen to look for research articles to evaluate. The articles are selected based on predefined criteria for exclusion and inclusion. Upon completion of the accepted bibliography, the articles will be evaluated using the thematic coding methodology explained by Braun and Clarke (2006).

The rest of the report is structured as follows:

Chapter 2 will discuss the background for this work, giving the reader an introduction to virtualization, containers and images.


2 Background

This section will briefly explain the concepts that are required to understand the aim of this research paper, as well as an overview of scientific work that has been conducted in the area relating to this project.

2.1 What is virtualization

Computers are an essential part of modern society and the demand for computing power is constantly increasing. In order to make better use of computing resources, the technology of virtualization has been adopted as an industry standard since the early 2000s. The idea of virtualization has existed for several decades, and a modern definition of a virtual machine was formulated by Popek and Goldberg (1974) as an efficient, isolated duplicate of the real machine. In their paper, they explain the concept of a virtual machine as a piece of software that runs on the real hardware and creates an isolated environment that, to the virtual machine, is almost indistinguishable from the real hardware. The exception is resource control, which enables strict allocation of resources to the virtual machine.

Modern computing ties programs to different privilege levels, which allow for restriction of what operations a program can perform. The kernel is a collection of programs that constitutes the central core of a computer operating system. It has complete control over everything that occurs in the system and sits at the level closest to the actual computer hardware. User space is a portion of the system memory in which non-privileged processes run. Any access to hardware by processes in user space has to be approved and performed by kernel space. In order to virtualize a computer, the virtualization program has to be able to exist in user space, as well as execute operations in kernel space, all while still being isolated. There are two generalized main strategies that are used to achieve this.

In full virtualization, the virtual machine does not know that it is virtualized, and any access to hardware is controlled by the virtualization software, the virtual machine monitor (VMM). The VMM exists in a space between user and kernel space and uses specific instructions to switch between the host and guest systems. The virtual machine requires its own kernel, and since it can only see the resources that the VMM allocates, it is completely isolated from the host system. In operating system virtualization, the virtual machine uses the same kernel as the host machine and is defined by a set of features in the host kernel that allow the virtual machines to exist as isolated entities in user space, called containers.

2.2 Containers

Operating system virtualization is often referred to as containers. Containerization is the isolation of complete software packages that can be deployed as a service. There are three main features that make current containerization technology possible:

Namespaces

Namespaces isolate a container's view of global system resources, such as process IDs, network interfaces and mount points, so that each container appears to have its own instance of these resources. Reshetova et al. (2014) describe the function of pid namespaces as a mechanism to group processes in order to control their ability to see and interact with one another.

Control Groups

Control groups, or cgroups, limit resource usage for a container by grouping its processes and applying restrictions to the group. Cgroups control a group's usage of main memory, CPU utilization and disk I/O. Laurén et al. (2017) explain the importance of control groups: "Because of their ability to limit resources used by containers, control groups are a key mechanism for preventing denial-of-service type attacks against containers." They also explain that the grouping of processes is performed hierarchically, which allows for further restrictions on nested groups that inherit their parents' limitations.

Capabilities

In traditional UNIX-related environments, the root user has absolute power over the system, without restrictions on access or modifications. This behaviour is not desired when launching containers as the root user. An individual container should, for example, not be able to adjust the host-wide system time. Capabilities limit the privilege of a process by only allowing it a portion of the capabilities of the root user. Laurén et al. (2017) point out that capabilities are an important security mechanism since they enable limiting program privileges to a minimum.
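
To make these mechanisms concrete, the following is a minimal sketch, assuming the Docker SDK for Python (docker-py) is installed and a local Docker daemon is running, of how a container can be started with a cgroup memory limit and all root capabilities dropped; the image name and the limit value are illustrative assumptions and not taken from the reviewed literature.

import docker

client = docker.from_env()

# Run a short-lived container with a cgroup restriction on main memory and
# with every root capability removed, keeping its privileges to a minimum.
output = client.containers.run(
    "alpine:3.12",        # assumed example image
    "id",                 # print the effective user and groups inside the container
    mem_limit="256m",     # cgroup limit on main memory
    cap_drop=["ALL"],     # drop all root capabilities
    remove=True,          # clean up the container after it exits
)
print(output.decode())

In a real deployment, only the specific capabilities the application needs would be added back, in line with the idea of limiting program privileges to a minimum described above.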

Since the container lives much closer to the host kernel than a classical virtual machine, there is a general concern about the lack of security in containerized systems. In a study conducted by the company Tripwire (2019), which surveyed 311 IT security professionals working with containers at companies with 100 or more employees, 94% of respondents were concerned about container security and 60% had had security incidents involving containers within the past year.

2.3 Images

Containers are deployed from images. The image is a prebuilt instruction for deployment of the container. Docker (2020) defines images as a lightweight, standalone, executable package of software that includes everything needed to run an application. Compared to running an

application in a fully virtualized machine, where the user has to install and configure all software packages manually, the container image is ready for use upon download.


The images are developed by community contributors, both official organizations and individual users. Henriksson and Falk (2017) point out that official repositories may be reviewed and promoted by Docker Hub, but the unofficial repositories far outnumber the official ones. As with all software installed from unknown sources, there is a question of security. The images may be insecure due to age and missing security updates, or they may even contain malicious code. Henriksson and Falk (2017) scanned Docker images for vulnerabilities and showed that 70% of the top 1,000 Docker images contained high severity issues and 54% contained critical severity issues.


3 The problem

As explained in chapter 2.3, the images come from various sources that may be unknown to a system administrator. Docker Hub is the largest registry, with over 100,000 repositories and images available for users to download and deploy on their systems. The repositories hold images that may be developed by anyone wanting to contribute their work to the registry. Without proper vetting of repositories, the images may contain code that is harmful to the host system or to the user deploying a container from the downloaded image. A simple search for popular software may result in tens of thousands of hits, possibly exposing the user to dangerous repositories.

3.1 Research question

The aim of the study is to map current work in the field of container security with regard to the repositories from which containers are deployed. The research question that this literature study aims to answer is:

”For Docker container environments deploying images from public repositories on Docker Hub, what are the security implications?”

3.2 Motivation

Even though containerization has become a popular tool, it is not without its limitations. The close interaction with the kernel space requires strict monitoring in order to secure the system. Docker is possibly the most popular containerization software, with millions of users worldwide. By focusing the research on Docker, the results will be relevant to as many people as possible who are looking to use a containerized environment. Docker containers are also leveraged by orchestration tools, such as Kubernetes, which aim to provide a layer of abstraction in order to automate management and scaling of large container environments. Such software, building upon Docker containers, will by extension also benefit from this research. The images used for deploying containers are often obtained from third-party repositories where the administrator has little to no control over the image contents. These images may contain vulnerabilities that can be leveraged by an attacker, or incorrectly configured software that results in resource exhaustion. Understanding what threats and vulnerabilities exist, and how a system administrator can protect against them, is the motivation of this study.

3.3 Limitation


4 Methodology

This chapter will explain in detail the method used for the review to answer the question described in chapter 3. The motivation for the use of a systematic literature review will also be shown, as well as clarifications for any decisions made regarding the performed review.

4.1 Systematic Literature Review

The popularity of containerization has brought about a significant amount of research. The conducted research touches many different areas and in order to make use of this gathered information, a systematic literature study is considered the most valid and rational approach. The use of a systematic literature study as a method to answer the question of this research is supported by Kitchenham (2004), who describes that "a systematic literature review is a means of identifying, evaluating and interpreting all available research relevant to a particular research question, or topic area, or phenomenon of interest" (p. 1).

Furthermore, Kitchenham (2004) mentions that systematic literature reviews can "provide information about the effects of some phenomenon across a wide range of settings and empirical methods" (p. 2). Because this research is performed by evaluation of a number of different sources with their own individual parameters, this is an important part of the motivation behind using systematic literature review as a research method.

Jesson et al. (2011) state that in order for a literature review to be considered systematic, it needs to follow six essential stages:

1. Define the research question

2. Design the plan

3. Search for literature

4. Apply exclusion and inclusion criteria

5. Apply quality assessment

6. Synthesis

Following these stages, the plan will be explained in detail in the following chapters.

4.1.1 Databases

Brereton et al. (2007) talk about the importance of using more than one electronic source in order to find as complete a set of data as possible. A single database is highly unlikely to hold all the available research. They also discuss the disadvantage of using several databases, as the databases might employ different methods for their search engines. This may require the researcher to define different search terms for the respective search engines, which will need to be considered.

In order to guarantee free access to the material, the databases used for this project are restricted to those provided for free by the University. Brereton et al. (2007) identify several valid databases to search for material regarding software engineering. Due to the overlap in topic area, the databases in question are considered to also be relevant for the topic discussed in this paper. These databases are:

IEEE Xplore
ACM Digital Library
ScienceDirect
SpringerLink

4.1.2 Search Terms

One of the important aspects of a structured literature review is the collection of data to investigate. Kitchenham (2004) stresses the importance of a fair synthesis of existing work by employing a predefined search strategy. The search strategy must allow sources to be included that both support and refute the hypothesis being investigated. It must also be correctly documented in order for the reader to evaluate the rigour and completeness of the research.

Kitchenham (2004) as well as Brereton et al. (2007) show that boolean operators used for database searches can enhance and refine the results. For example, the terms "container" and "operating system virtualization" may refer to the same thing, but generate vastly different results. Initial searches have shown that the term "container" on its own will yield results regarding physical containers that are used for transportation. That is why the use of boolean operators is an important tool for refining search strings. While booleans may be an important tool, it is also important to account for the difference in implementation between search engines, as suggested by Brereton et al. (2007). If there is a need to use multiple search strings for different databases, the individual search strings should be as semantically identical as possible while adhering to each database's implementation of boolean operators.

Initial testing of the search terms showed that the term "container" yields vastly more results than "operating system virtualization". Results did not show any significant difference when searching for alternate versions of the term "operating system virtualization". Because of the varying features offered by the databases' search engines, several search strings were designed. The first search string is used for the databases with the option of searching only abstracts (IEEE Xplore and ACM), which yielded more accurate results:

Title:(docker OR containe* OR "operating system virtualization") AND Abstract:(repositor* OR security OR vulnerabilit*)

The search string makes use of a wildcard feature to match either container or, in plural, containers. The same applies to repositories and vulnerabilities. SpringerLink does not offer the option of searching exclusively through abstracts, and ScienceDirect implements the feature in a different way, so a second search string was created to be more inclusive in order to generate more search hits.

Neither of the latter two databases supports wildcards, so the second search string spells out both the singular and plural forms of the words. In order to sort out unwanted and irrelevant results, all of the searches were followed by:

AND NOT (shipping OR cargo OR nuclear OR geological)
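
As an illustration, the search strings above can be assembled programmatically, which also documents them for reproducibility. The following Python sketch is a hypothetical helper: only the terms and boolean operators are taken from the text above, and since the exact second search string is not reproduced in this section, the full-text variant is an assumption based on its description.

TOPIC_TERMS = ['docker', 'containe*', '"operating system virtualization"']
FOCUS_TERMS = ['repositor*', 'security', 'vulnerabilit*']
NOISE_TERMS = ['shipping', 'cargo', 'nuclear', 'geological']

def abstract_query() -> str:
    # String for engines that can search titles and abstracts separately
    # (IEEE Xplore and ACM Digital Library).
    return (f"Title:({' OR '.join(TOPIC_TERMS)}) "
            f"AND Abstract:({' OR '.join(FOCUS_TERMS)}) "
            f"AND NOT ({' OR '.join(NOISE_TERMS)})")

def fulltext_query() -> str:
    # Broader, assumed variant for engines without abstract-only search or
    # wildcard support (SpringerLink and ScienceDirect); plurals are spelled out.
    topics = ['docker', 'container', 'containers',
              '"operating system virtualization"']
    focus = ['repository', 'repositories', 'security',
             'vulnerability', 'vulnerabilities']
    return (f"({' OR '.join(topics)}) AND ({' OR '.join(focus)}) "
            f"AND NOT ({' OR '.join(NOISE_TERMS)})")

print(abstract_query())
print(fulltext_query())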

4.1.3 Selection criteria

According to Wohlin et al. (2012), the criteria used for inclusion and exclusion are the basis for selecting primary studies and, in order to avoid bias, should be developed before the search for articles. They also mention that the criteria might be changed as the researcher gains knowledge of aspects that were not known before the search was conducted.

Kitchenham (2004) further claims that candidate studies should be processed with the backing of predefined selection criteria in order to assess if the studies are suitable to answer the research question.

Inclusion Criteria

IC1. Peer reviewed
IC2. Published in journal or conference
IC3. Written in English
IC4. Published between 2018 and present day
IC5. Relevant to the research topic

Exclusion Criteria

EC1. Does not meet inclusion criteria
EC2. Requires payment or login to access
EC3. Identical articles
EC4. Inconsistent description of scientific application
EC5. Obviously deviating results compared to related studies

Table 1: Inclusion and Exclusion criteria

The exclusion criteria are defined to further assure the quality of the articles. EC4 requires the articles to adopt and describe a clear scientific method. If an article appears to fail this condition, it will be discarded. Similarly, studies that fail to explain an obvious deviation in results compared to related studies will be discarded, according to EC5.
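
As a rough illustration of how the criteria in Table 1 could be applied mechanically to article metadata, consider the following Python sketch. The Article fields and the example values are hypothetical, and the duplicate and quality checks (EC3-EC5) are left as manual steps, since they require reading the articles.

from dataclasses import dataclass

@dataclass
class Article:
    title: str
    year: int
    language: str
    peer_reviewed: bool     # IC1
    venue_type: str         # IC2: "journal" or "conference"
    open_access: bool       # EC2: False means payment or login is required
    relevant: bool          # IC5: manual judgement of topical relevance

def accept(article: Article) -> bool:
    # Returns True only if the article meets IC1-IC5 and is not excluded by EC2.
    meets_inclusion = (article.peer_reviewed
                       and article.venue_type in ("journal", "conference")
                       and article.language == "English"
                       and article.year >= 2018
                       and article.relevant)
    return meets_inclusion and article.open_access

candidate = Article("Example container study", 2019, "English",
                    True, "conference", True, True)
print(accept(candidate))   # True for these assumed values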

4.2 Thematic analysis

Thematic analysis is a qualitative type of analysis used for scientific evaluation of data. Braun and Clarke (2006) explain that thematic analysis is a method for identifying, analysing and reporting patterns within data. By applying this method to the systematic literature review, the researcher is able to draw conclusions based on the whole content.

Alhojailan (2012) writes that "By using thematic analysis there is the possibility to link the various concepts and opinions of the learners and compare these with the data that has been gathered in different situations at different times during the project" (p. 40). This supports the aim to collect and interpret research in a meaningful way, from individual sources where parameters may vary. It is achieved by creating general themes that link all of the sources together at an abstract level. Braun and Clarke (2006) summarize the method into six phases:

1. Familiarizing with the data

2. Generating initial codes

3. Searching for themes

4. Reviewing the themes

5. Define and name the themes

6. Producing the report

The first step is to read all the data in order to get familiar with it. For this research, this means reading all of the papers that were found at least once in order to get a grasp of their contents and taking initial notes. In the next step, the initial codes are generated based on extracts from the data. Braun and Clarke (2006) claim that codes identify a feature of the data that appears interesting to the analyst. The coding may depend on whether the themes are data-driven or theory-driven. For this research, there is a specific question to be answered, which means that the themes will be more theory-driven and so the coding will encompass the research question. Braun and Clarke (2006) also emphasize the importance of working systematically through the data set, giving equal attention to all items.

The subsequent phases involve searching for and reviewing candidate themes, checking that the coded extracts form a coherent pattern within the themes, and also considering how the themes fit into the overall story in relation to the research question. The last step is to conduct a final analysis and write the report to demonstrate the results. Themes should be explained through extracts from the data that support, in compelling arguments, the story that the researcher identifies.

4.3 Threats to Validity

The validity of research is a consideration of the scientific method and an explanation of possible outcomes, which yields a level of confidence in the presented work. Research with a low level of validity should not be considered when examining an area of study. This chapter will discuss possible threats against validity during this research.

Wohlin et al. (2012) declare that "The validity of a study denotes the trustworthiness of the results, and to what extent the results are true and not biased by the researcher's subjective point of view" (p. 68). As all scientific work needs to consider threats to validity, there are also several considerations to take into account when performing a literature review. To explain how researchers can mitigate threats to validity, Wohlin et al. (2012) describe four categories of validity to consider. These four categories, and how they relate to this research project, are described in the following chapters.

4.3.1 Construct validity

The aspect of construct validity is a reflection of how the researcher plans to investigate a research question. The research question and the applied method should relate so that the chosen method accurately measures and describes what the researcher intends to.

In discussions with the supervisor, a systematic literature review has been deemed most suitable to answer the proposed research question. While the number of articles may prove great, it is implausible that a research project such as this will find all relevant material in existence, and more probable that articles are left out because they are unknown. To compensate for this, multiple databases with extensive content in related fields, and free

availability to the researcher, have been chosen to produce as inclusive searches as possible. The search terms are also designed to further produce inclusive results.

4.3.2 Internal validity

The aspect of internal validity concerns examined relations. Any scientific work that aims to explain relations must not be affected by factors beyond the researcher's control.

The use of multiple databases and inclusive search terms also helps generate a sample size that is big enough to increase the internal validity. The inclusion and exclusion criteria support the validity by keeping the examined data relevant to the research. To mitigate any personal bias by the researcher, the criteria are predefined and the selection process is conducted strictly according to them.

4.3.3 External validity

The aspect of external validity concerns the extent to which the findings of the research can be generalized beyond the studied setting.

This threat to validity should not be a problem for this project. The most dominant container software is Docker, and it is a fair assumption that the popularity of Docker will be reflected in the amount of research conducted with Docker as the target software. Considering that Docker is the main focus of this project, most of the data should be relevant. However, focusing on Docker Hub means that the results cannot be generalized to other registries.

4.3.4 Conclusion validity

Conclusion validity, also referred to as reliability, describes the reproducibility of the research. The scientific method should be detailed and consistent, so that the research could be reproduced with comparable results.

The threat to conclusion validity may be the biggest concern for a project involving an analysis by thematic coding. The analysis depends heavily on the analyst and how that person interprets the material. The researcher needs a fundamental understanding of the topic to be able to draw out the data of interest, and also to judge the relevance of an article. To ensure the reproducibility of the research, all work must be detailed in the resulting research paper. Any data of interest that is found should be considered both by itself and as part of the whole. The context of any quoted data must not be omitted when presented to the reader.

4.4 Ethical considerations

Ethical considerations are an important part of the scientific method. The purpose of ethical considerations is to protect anyone that might be affected by the conducted research, both directly and indirectly. The Swedish Research Council (2017) summarizes some guidelines about ethics:

1. You shall tell the truth about your research.

2. You shall consciously review and report the basic premises of your studies.

3. You shall openly account for your methods and results.

4. You shall openly account for your commercial interests and other associations.

5. You shall not make unauthorised use of the research results of others.

6. You shall keep your research organised, for example through documentation and filing.

7. You shall strive to conduct your research without doing harm to people, animals or the environment.


As the foundation of this project is based upon the work of others, it is important to distinguish between original and cited work so that no plagiarism occurs. The authors of the research that is examined during this project are directly affected by it. Correct citations are also important in order to convey the complete context and meaning as intended by the original author, and to not obfuscate any findings or to skew results based on pre-existing bias towards the subject. Other directly affected parties include the representatives of the software that is subject for this research, whose work must be given fair presentation.

Container software is becoming a big part of the IT world, used by both industry professionals and private users. The results of this research could potentially sway the opinions of software users already using, or considering using, a containerized solution. However, as mentioned in chapter 2, the topic of container security is well known, and the motivation to further the understanding of container security still makes this research a sensible undertaking.

4.5 Practical article selection

The four databases (ACM Digital Library, IEEE Xplore, ScienceDirect and SpringerLink) and the search terms used for collecting articles are described in chapters 4.1.1 and 4.1.2. The criteria for inclusion and exclusion were applied at the time of the search and when sorting the material that was found.

Database        Initial hits    Remaining after EC2    Remaining after IC5 (1st iteration)    Remaining after IC5 (2nd iteration)
ACM             34              32                     5                                      1
IEEE            69              69                     21                                     11
ScienceDirect   67              64                     5                                      0
SpringerLink    219             212                    14                                     1
Total           389             377                    45                                     13

Table 2: Practical article selection

Inclusion criteria 2 (published in journal or conference) and 4 (published between 2018 and present day) are satisfied by applying filters to the searches for all the databases before the search is performed. All of the database searches were performed on 2020-04-30, which yielded a total of 389 initial hits. To satisfy EC2 (requires payment or login to access), articles that require additional resources in order to be accessed were simply not followed up, resulting in a total of 377 articles. To further sort out the articles relevant to this study, the inclusion and exclusion criteria were applied. The main criterion that sorted out unwanted articles was IC5 (relevant to the research topic), which was applied in two iterations.

The remaining articles were double checked against published sources to ensure that they have been peer-reviewed. The entire article selection procedure is summarized in table 2.
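
The numbers in Table 2 can also be recomputed with a short script; the following Python sketch takes the per-database counts from the table, while the data layout itself is an assumption made for the illustration.

selection = {
    # database:      (hits, after EC2, after IC5 1st iter, after IC5 2nd iter)
    "ACM":           (34, 32, 5, 1),
    "IEEE":          (69, 69, 21, 11),
    "ScienceDirect": (67, 64, 5, 0),
    "SpringerLink":  (219, 212, 14, 1),
}

# Sum each column; the result matches the Total row in Table 2.
totals = [sum(column) for column in zip(*selection.values())]
print(totals)   # [389, 377, 45, 13]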


5 Analysis

This chapter will describe the application of the scientific method described in the previous chapters. The 13 remaining articles that have been accepted will be analysed using thematic coding, as described in chapter 4.2, in order to answer the proposed research question. In accordance with the method presented in chapter 4.2, the process is presented phase by phase.

Phase 1:

The first step in the analysis process was to get familiar with the data by reading all of the accepted articles in their entirety and taking initial notes. Everything that is relevant to the research topic is noted. The notes taken in the first phase are used as the foundation for all the following phases in a continuous workflow and are organized by individual articles and main categories; the results are structured in a way that eases phase 2.

Phase 2:

During this phase, the notes are compiled into codes. Similar or recurring notes are grouped and briefly described. This short description becomes the code, covering all of the included notes. Phase 2 results in a list of all the compiled codes that are then interpreted in the next phase. The complete list of codes can be found in Appendix B.
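
To illustrate what the phase 2 output can look like in practice, the following Python sketch groups notes into codes; the note texts and code labels are invented placeholders and do not reproduce the actual codes listed in Appendix B.

from collections import defaultdict

# Each tuple is (note taken in phase 1, preliminary code assigned in phase 2).
coded_notes = [
    ("scanners miss packages installed from source", "static detection limits"),
    ("layered images must be checked per layer",     "image layering"),
    ("half of the images not updated for 120 days",  "technical lag"),
    ("similar repository names, different contents", "tagging"),
]

codes = defaultdict(list)
for note, code in coded_notes:
    codes[code].append(note)

for code, notes in codes.items():
    print(f"{code}: {len(notes)} note(s)")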


Phase 3:

The resulting list of codes from the previous phase is interpreted into themes. These themes describe a broader aspect of the data they represent, in relation to the research topic. The codes that were generated in the previous phase may be used as themes on their own, or several codes may be grouped together to form a theme. Some of the themes may also have sub-themes. A thematic map, see figure 2, is generated to visualize all of the themes and their relations to each other. The theme Practical threats sits in the center of the map and addresses all of the practical threats that are mentioned in regard to the research topic; it has a sub-theme called Docker Hub shortcomings, which describes some possible enhancements that the registry is currently lacking. Technical lag is the theme for everything relating to how images are kept up-to-date in order to provide security, and Vuln(erability) detection challenges addresses the challenges of providing secure container images by detecting vulnerabilities and exploits before the image is run as a container. The Tagging theme explains why and how images should be tagged in order to provide users with more helpful information.

Phase 4:

In the fourth phase, the candidate themes created in the previous phase are refined by going back and looking at the coded extracts and see how they fit with the respective themes. The research articles used for the dataset are reread in order to find any extracts that might have been missed in the previous phases. Again, a thematic map is generated to visualize the results, see figure 3. As seen in the thematic map, some of the themes have collapsed into each other and the relations between the themes have also been changed to better fit the data.


Phase 5:

The final phase aimed at creating themes is to further refine the themes. Identified themes should tell a story about the coded extracts that are included within them. Closely connected themes are merged, and the names are revised in order to describe each theme more accurately and make it easier to understand. The final thematic map is generated, see figure 4.

5.1 Threats to security

The security aspects of containers can be centered around the threat models proposed by A13. Since this research focuses on images from unknown or semi-known sources, several of the proposed models are of interest, as they are in direct relation to images and provide a better understanding of the problem at hand. A13 presents a set of four use-cases that can take place during an attack against a containerized environment, of which three are relevant specifically to the use of publicly obtained images. A8 expands on this same idea by adding three more use-cases, but these are either not relevant with respect to images or already covered by the existing use-cases.

Use-case 1: Application to Container Attack

This scenario assumes that an application inside the container tries to attack the container itself. The application inside the container might be configured incorrectly or with malicious intent in order to take over the container.

Use-case 2: Container to Container Attack

In this scenario, not only the application running inside the container, but the container itself might try to target other containers running on the same host or running on other hosts in the network. The container could try to learn about the existence of other containers, their resource usage and target the container integrity or availability.

Use-case 3: Container to Host Attack

This scenario turns the container directly against the host. The container might try to attack the host confidentiality, integrity or availability, for example by exhausting the host resources. It could also try to break free of restrictions in order to gather information.


All of the mentioned attack scenarios can be realized by a number of attacks. A8 has created an attack taxonomy based on which layer the attack takes place at. Again, there are several attacks that are directly related to the use of publicly obtained images and constitute the real practical threats towards a container. Images downloaded from the internet primarily affect the container and application layers. A13 further supports these claims by presenting similar attack scenarios. Given the research aim of this study, the possible attacks and weaknesses are considered from a point where the container image presents the threat on its own, without further instructions by an attacker at runtime. The threat taxonomy is presented in table 3.

Malware: The image contains hidden malware such as viruses, worms, trojans or ransomware.

DoS: The instantiated container tries to exhaust the host resources or cause a kernel panic.

Privilege escalation: An attacker tries to gain root privileges by having the container modify memory or files.

ARP spoofing / MAC flooding: The container tries to intercept or manipulate communication on the virtual network to which it is connected, e.g. in order to carry out a man-in-the-middle attack.

Poisoned image: Images are verified by a signed manifest, but they are not checked when instantiated. An attacker might use a signed manifest in order to provide a different image than expected.

Vulnerable software: Applications inside the container might run software that contains vulnerabilities.

Image configuration defects: Unnecessarily enabled SSH or applications running with root privileges.

Table 3: Threat taxonomy

Image configuration defects relate to default configurations that the user is unaware of and do not include errors made by the user's own configuration. It is important to understand all of the relevant scenarios in order to find the vulnerabilities that can be exploited to realize an attack or that might cause harm in general. By knowing about possible threats beforehand, a user can better harden the system in order to protect the containerized environment.

5.1.1 Vulnerabilities and exploits

Detection of vulnerabilities and exploits is performed either statically, by evaluating the image before it is run, or dynamically, by evaluating the behaviour of the running container. One of the reviewed articles proposes an automated system that integrates static detection scanning in its process. Images are often built upon other images, resulting in an image-in-image structure. A1, A4, A5 and A9 all point out that static scanning needs to be able to check the layered structure of Docker images. The most basic image is usually an operating system image from an official repository. Any vulnerabilities in this image will propagate to the final image used for deploying a container, and so each layer possibly introduces new threats and vulnerabilities.

There are several challenges that arise when trying to detect vulnerabilities and exploits in containers. The very nature of containers makes them short-lived, dynamic and lightweight. A1 claims that this makes it hard to detect exploitations, as little training data is available, detection schemes cannot make assumptions about behaviour, and detection may not impose overhead that would impact performance. A1 also shows that packages that are manually installed from source code are not detected by popular scanning tools, as these tools do not extract image features. Scanning tools rely on remote vulnerability databases to detect vulnerabilities, meaning that they will not be able to detect zero-day vulnerabilities and that the detection rates are heavily dependent on the databases being up-to-date. A4 and A5 both present scanning tools that aim to monitor and inspect installed software packages inside container images and report the findings in a way that is easy for the user to interpret. Docker does provide something similar with the software Docker Trusted Registry, which is an enterprise image storage solution. The software comes with a built-in vulnerability scanner, but like the other proposed scanning tools it only scans package versions.

Regular antivirus software can be used to scan an image for viruses. A2 notes that such software suffers from the same downsides as the vulnerability scanning tools, as the viruses have to be known prior to the scan in order to be detected, and therefore proposes a way to detect viruses by leveraging machine learning techniques in a dynamic threat detection framework. The machine learning algorithm looks at the file information and predicts the probability of the file being malicious. Even if an image is found to contain vulnerabilities or viruses, A3 notes that Docker Hub does not have an official report system to flag the image as potentially dangerous.

Dynamic detection requires the image to be instantiated as a container. This means that the container, with all its vulnerabilities and exploits, must be allowed to run in order for the dynamic detection tool to evaluate the container behaviour. A3 explains that this is performed by monitoring processes and changes to registry and firewall rules, and by scanning ports and network activity. A1 looks at system metrics, such as CPU utilization, and system calls to determine if a container works as expected. A container subjected to dynamic vulnerability detection cannot be deployed in a production environment and requires its own sandboxing server. A3 further suggests automating this process by using Docker-in-Docker to provide isolation. Both A2 and A3 suggest monitoring the host network connection for suspicious activity. By filtering for source IP, the container DNS queries can be inspected and possibly malicious queries can be stopped by employing a DNS blacklist. A2 mentions that some malware will try to dynamically generate domains in order to circumvent detection; in their paper, they utilize a prediction model to find more potentially malicious DNS requests.
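
To give a concrete impression of the static package-version scanning described above, the following Python sketch lists the packages installed in an image and compares them against a stand-in vulnerability database. It is not one of the tools proposed in the reviewed articles: it assumes the docker-py SDK and an Alpine-based image where apk is available, and the hard-coded vulnerable versions are invented for the example.

import docker

# Stand-in for a remote vulnerability database: package name -> vulnerable versions.
KNOWN_VULNERABLE = {
    "openssl": {"1.1.1d-r0"},
    "busybox": {"1.31.1-r9"},
}

def list_packages(image: str) -> dict:
    # Briefly instantiate the image and read its installed apk packages.
    client = docker.from_env()
    raw = client.containers.run(image, ["apk", "info", "-v"], remove=True)
    packages = {}
    for line in raw.decode().splitlines():
        # apk prints e.g. "openssl-1.1.1d-r0"; the split is naive, since real
        # package names may themselves contain hyphens.
        name, _, version = line.partition("-")
        packages[name] = version
    return packages

def scan(image: str) -> list:
    # Return the packages whose installed version appears in the stand-in database.
    findings = []
    for name, version in list_packages(image).items():
        if version in KNOWN_VULNERABLE.get(name, set()):
            findings.append((name, version))
    return findings

print(scan("alpine:3.11"))

As noted above, such a scan only covers known package versions; manually installed software, zero-day vulnerabilities and malicious behaviour at runtime require the dynamic techniques that A1, A2 and A3 describe.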

5.1.2 Technical lag

Technical lag describes the difference between the version of a software package installed in an image and the latest available version; if an image ships version 1.1.1 of a package while the latest release is 1.1.3, the technical lag is 2 versions. Keeping the packages up-to-date is important in order to minimize software vulnerabilities. A9 and A7 both report that all containers contain software packages with high severity vulnerabilities. A6 further supports this by demonstrating that all official node-based images have vulnerabilities. However, A7 provides a possible explanation, noting that some of the vulnerable packages are so common that they are used by nearly all container images. Even so, A9 cites that both official and community images have been found to contain an average of 180 vulnerabilities and that many images have not been updated for approximately 100 days. The findings of A7 also support this, saying that more than half of Docker images have not been updated for 120 days.
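
A worked example of the technical lag measure, assuming a simple ordered release history, is sketched below in Python; the release numbers are illustrative.

def technical_lag(installed: str, releases: list) -> int:
    # Number of releases published after the installed version.
    return len(releases) - releases.index(installed) - 1

releases = ["1.0.9", "1.1.0", "1.1.1", "1.1.2", "1.1.3"]   # oldest to newest
print(technical_lag("1.1.1", releases))   # 2, as in the example above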

A6 and A7 argue that the reason that packages are held back or not updated could be that maintainers prioritize stability and compatibility over having the latest packages installed. Updating packages might also add more work for the maintainers. A7 and A6 both see a correlation between age and number of vulnerabilities in container images, and A7 also concludes that newer image versions tend to be more up-to-date. Even though images suffer from vulnerabilities, A7 found that vulnerabilities are fixed faster than other types of bugs and that high severity vulnerabilities are fixed the fastest, with an average of 2.1 months. In addition, they found that 65% of the bugs are without a fix.

The results from A7 and A6 cannot be generalized in their entirety, as their research focuses on more specific use-cases, such as a specific operating system base image and package family. However, they still present important data, with conclusions drawn from the respective research. It is worth pointing out again that vulnerabilities in parent images tend to propagate to child images.

5.2 Tagging

Docker Hub arguably does not provide a system for tagging repositories. It does provide something called tags, but these are more a way to distinguish between image versions than between different repositories. A10, A11 and A12 all agree that tagging could be a crucial tool for users to find exactly the right image to download. A10 provides an example of two seemingly similar community repositories which, upon further inspection, prove to be completely different. Most notable is that one of the images comes with SSH installed, something that could prove to be a security risk for an unknowing user, as discussed in chapter 5.1.

Docker Hub currently has over 100,000 repositories. A10, A11 and A12 are in complete agreement that tagging could help categorize and manage repositories in a way that is easy for users to browse when looking for images, while also providing the user with information about the repository without needing to read its documentation.

5.2.1 Challenges

Manually tagging the more than 100,000 repositories on Docker Hub is not feasible, so the generation of tags needs to be automated. One challenge for automation is that many repositories lack a description, which limits the data available for generating tags. Another thing to consider is the reasonableness of automated tags and their accuracy in describing the image content. A10, A11 and A12 all report their results in terms of recall, a measure of accuracy where the number of relevant items that are captured is divided by the total number of relevant items. It is important that the automatically generated tags describe the parts of the image content that are of interest to the user.
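
As a minimal illustration of the recall measure, the following Python sketch computes recall for an invented pair of generated and relevant tag sets.

def recall(generated: set, relevant: set) -> float:
    # Relevant tags that were actually captured, divided by all relevant tags.
    return len(generated & relevant) / len(relevant)

relevant_tags = {"webserver", "nginx", "ssl", "alpine"}
generated_tags = {"nginx", "alpine", "database"}
print(recall(generated_tags, relevant_tags))   # 0.5: half of the relevant tags were captured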

5.2.2 Solutions

The solution to the challenges of automatic tagging for Docker repositories, found by all of A10, A11 and A12, is to incorporate all the available information about the repository. This includes reading the dockerfile, which contains instructions for how the image will be built: it determines the base image to build the new image from, what instructions to run, which files to copy and which commands to run within the finished container. In addition, the repository description can also be used to extract information for tag generation.
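
The following Python sketch illustrates the kind of information extraction such approaches perform on a dockerfile. The parsing is deliberately naive and the example dockerfile content is invented, so it should be read as a sketch of the idea rather than as any of the proposed tagging tools.

EXAMPLE_DOCKERFILE = """\
FROM ubuntu:18.04
RUN apt-get update && apt-get install -y nginx
COPY site/ /var/www/html/
CMD ["nginx", "-g", "daemon off;"]
"""

def extract_tag_candidates(dockerfile: str) -> dict:
    # Collect the base image and the installed or copied software as tag candidates.
    info = {"base_image": None, "run_commands": [], "copied_files": []}
    for line in dockerfile.splitlines():
        instruction, _, args = line.strip().partition(" ")
        if instruction == "FROM":
            info["base_image"] = args
        elif instruction == "RUN":
            info["run_commands"].append(args)
        elif instruction == "COPY":
            info["copied_files"].append(args)
    return info

print(extract_tag_candidates(EXAMPLE_DOCKERFILE))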


6 Synthesis

This chapter will further condense and synthesize the results from chapter 5. Out of the 13 accepted research articles, explained in chapter 4.5, 28 initial notes were produced, which were finally organized into 2 themes with 2 sub-themes each. For more information about the themes, see the procedure in chapter 5. The goal of this chapter is to synthesise the results from chapter 5 in relation to the question this research is looking to answer.

6.1 Threats to security

Sultan et al. (2019) provide a foundation when looking at container security by defining four use-cases to which any and all container threats can be related. Out of the four use-cases, three are deemed relevant in direct relation to using images from unknown sources, such as public repositories. Tomar et al. (2020) support these claims and present an attack taxonomy that shows at which layer of the containerized environment an attack takes place and to which use-case they can be tied. Both of these research papers agree that there are several attack vectors and weak points to consider when dealing with a containerized environment.

6.1.1 Vulnerabilities and exploits

Images that contain vulnerabilities and exploits pose a threat in all three of the proposed use-cases. The methods used for detection are either static or dynamic, explained by Brady et al. (2020) as the evaluation of code before runtime or the evaluation of behaviour during runtime. For static vulnerability and exploit detection, Tunde-Onadele et al. (2019), Kwon and Lee (2020), Zerouali et al. (2019) and Watada et al. (2019) all agree that the layered structure of Docker images means that all layers need to be checked, because vulnerabilities are inherited from all the images used. Tunde-Onadele et al. (2019) claim that the nature of containers provides little training data for scanning programs and show that popular scanning tools have a hard time detecting manually installed packages. They also discuss the limitations of static detection, as such tools are dependent on vulnerability databases that need to be updated and will never be able to find zero-day exploits. Kwon and Lee (2020) and Zerouali et al. (2019) support this when they present their respective vulnerability scanning tools. Huang et al. (2019) show that antivirus software suffers from the same drawback and provide a way to increase the effectiveness of antivirus scanning by using machine learning.


6.1.2 Technical lag

The term technical lag is described by Zerouali, Mens et al. (2019) as the delta value between the used software package version and the latest version. Zerouali, Mens et al. (2019) as well as Watada et al. (2019) mention that all images contain high severity vulnerabilities. This is also supported by Zerouali, Cosentino et al. (2019), who show that all official node-based images contain vulnerabilities. Watada et al. (2019) refer to Shu et al. (2017), who showed that many images are left without updates for approximately 100 days, and are supported by Zerouali, Mens et al. (2019), who found that more than half of Docker images remain without updates for around 120 days. It is a reasonable assumption that maintainers would like to keep images up-to-date in order to keep them secure, but Zerouali, Mens et al. (2019) and Zerouali, Cosentino et al. (2019) argue that the reason for images being held back is that maintainers prioritize stability and backwards compatibility over the latest updates.

Zerouali, Mens et al. (2019) show that vulnerabilities are fixed the fastest among all types of bugs, with an average time to fix of 2.1 months. However, 65% of the bugs remain without a fix, and older images and versions tend to have an increased number of vulnerabilities. Similar findings by Zerouali, Cosentino et al. (2019) support this claim. These research papers cannot be generalized on their own, as they focus on specific operating system and software package images, but together they do support the argument that images and containers should be kept up-to-date.

6.2 Tagging

Yin, Chen et al. (2018), Yin, Zhou et al. (2018) and Chen et al. (2019) all agree that repository tagging is a crucial component for users to be able to select the optimal image to download. Yin, Zhou et al. (2018) even provide an example where the lack of tagging could fool an unsuspecting user: two repositories are seemingly similar by name but completely different in reality. All three papers show that automatic tagging would give users a better understanding of the differences between repositories on Docker Hub and would, by extension, help protect the user from downloading malicious or outdated images.

6.2.1 Challenges

Manually tagging images is a time consuming process that needs to be automated. However, automating the process has some related challenges. Yin, Chen et al. (2018) found that a large number of repositories lack a description, and so the training data that could be used for tagging is limited. Some repositories tie their description to the GitHub topics from which their application is built, but Yin, Chen et al. (2018), Yin, Zhou et al. (2018) and Chen et al. (2019) all show that relatively few repositories have this tie. Furthermore, Chen et al. (2019) found that the quality of GitHub topics can vary greatly. Any tags that are automatically generated must also be reasonable and accurate in describing the repository from which they stem.

6.2.2 Solutions


7 Discussion

This chapter will discuss implementation of the conducted research process and the validity of the process as well as the found results. It will also discuss ethical considerations and conclude with suggestions regarding future research.

7.1 Reviewing process

The conducted review process is made up of several individual phases which in turn can be broken down into even smaller parts. In the first phase, the plan for executing the review process is defined. Database selection and the search process proved to be somewhat more difficult than anticipated. The databases showed differences in search engine implementation, and multiple search terms had to be constructed in order to include as many relevant articles as possible while also excluding irrelevant results.

For the second phase of the research, the planned process was executed and resulted in 389 articles. The process of eliminating unwanted articles was tedious and labour intensive, as all the articles had to be manually inspected. Efforts were made to extract information about the articles using software scripting, but the results proved to be unreliable and ultimately the process had to be performed manually. By employing the inclusion and exclusion criteria, the number of articles was finally brought down to 13. This number could be considered relatively low, but as the research is qualitative in nature the number of articles was deemed sufficient and the work continued. The accepted articles were then analysed using the thematic coding methods described by Braun and Clarke (2006). See chapter 5 for a complete description of the process. Analysing the articles proved to be the most time consuming part of the research, as all of the articles had to be read and reread as part of the thematic coding process.

7.2 Result validity

The question of result validity is mentioned in chapters 5.1.2 and 6.1.2. The criterion IC4 (published between 2018 and present day) is fairly restrictive and could mean that relevant articles are left out. The motivation for the criterion is to limit the number of research articles and to ensure the relevance of the data by keeping it up-to-date. Two of the selected articles focus on somewhat narrow use-cases and can arguably not be generalized to the entire Docker Hub registry without further research. However, their respective results do support each other, and together they raise the probability of seeing similar results in future research.

The methodology for this research is well documented to provide transparency and reproducibility in order for the reader to fully assess its legitimacy. In addition, chapter 4.3 contains a comprehensive discussion about threats to validity, resulting in a thesis work that should be unbiased and credible.

7.3 Future research


8 Conclusion

For this research project, a set of scientific databases is searched with a selection of search terms related to the research topic. Articles are accepted into the research based on specifically chosen criteria for exclusion and inclusion. The final bibliography is then evaluated using thematic coding methodology to create an overview of the research topic and draw conclusions about the implications of using container images from public repositories. Following the explained research process, the proposed research question can finally be answered and the whole procedure can be summarized:

• There are several threats against a containerized environment that users should be aware of. By looking at the layers of the containerized environment, the possible attacks and weaknesses can be better explained and appropriate security measures implemented.

• Most images have known vulnerabilities. There are tools that can be used to detect such vulnerabilities in order to secure the container. It is generally a good idea to keep containers up-to-date with the latest versions, in order to minimize the number of vulnerabilities.

• Users looking to download images from Docker Hub should be vigilant about what repositories they download from. The name of the repository includes very little information, and if the repository is missing a description it might even be dangerous to use the image without scanning it first. A system for automatic tagging is needed in order to help users evaluate repositories.

• There are several challenges related to automatic tagging. By looking at all the available information and using a predictive algorithm, tags can be automatically generated with reasonable accuracy.

Looking at the outcome of this thesis project, it would be of great use for Docker Hub to implement an automatic tagging system, and for anyone looking to download an image to employ automatic tools for static and dynamic vulnerability and exploit detection. Docker Hub is the main source for many to find and download images to deploy on their systems. This research shows that users and system administrators deploying Docker containers from public repositories should take appropriate security measures and investigate the images that are to be deployed.

Based on the conducted research, there are several mitigations that the user should take when downloading an image from unknown repositories (a combined sketch of these steps follows the list below):

1. The user should be aware of the maintainer of the repository, preferably only downloading from trusted sources, such as official repositories or well-known maintainers, and verify the image before downloading. It is possible to configure Docker to only download images that can be verified, using an option called Docker Content Trust.

2. Images should be scanned by static vulnerability detectors and antivirus software, to ensure that the image does not contain harmful code or vulnerabilities.
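
A combined sketch of these two mitigations is given below in Python. It assumes the docker command line client is available; DOCKER_CONTENT_TRUST is the real environment variable that makes the client refuse unsigned images, while run_scanner is a placeholder for whichever static detection or antivirus tool is chosen.

import os
import subprocess

def pull_verified(image: str) -> None:
    # Pull with content trust enabled so that unsigned images are rejected.
    env = dict(os.environ, DOCKER_CONTENT_TRUST="1")
    subprocess.run(["docker", "pull", image], check=True, env=env)

def run_scanner(image: str) -> bool:
    # Placeholder for a static vulnerability and antivirus scan of the image.
    raise NotImplementedError("plug in the scanning tool of choice here")

def deploy_if_clean(image: str) -> None:
    pull_verified(image)
    if run_scanner(image):
        subprocess.run(["docker", "run", "-d", image], check=True)
    else:
        print(f"{image} failed the scan and was not deployed")

# Example invocation (commented out): deploy_if_clean("nginx:latest")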


References

Alhojailan, M. I. (2012). Thematic analysis: a critical review of its process and evaluation. West East Journal of Social Sciences, 1 (2012), pp. 29-47.

Brady, K., Moon, S., Nguyen, T. & Coffman, J. (2020). "Docker Container Security in Cloud Computing," 2020 10th Annual Computing and Communication Workshop and Conference (CCWC), Las Vegas, NV, USA, pp. 0975-0980. doi: 10.1109/CCWC47524.2020.9031195

Braun, V. & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, 3(2), pp. 77-101.

Brereton, P., Kitchenham, B., Budgen, D., Turner, M. & Khalil, M. (2007). "Lessons from applying the systematic literature review process within the software engineering domain," The Journal of Systems and Software, 80, pp. 571-583.

Chen, W., Zhou, J., Zhu, J. et al. (2019). Semi-Supervised Learning Based Tag Recommendation for Docker Repositories. Journal of Computer Science and Technology, 34, pp. 957-971. doi: 10.1007/s11390-019-1954-4

Docker (2020). What is a Container? Available at: https://www.docker.com/resources/what-container (Accessed 2020-04-07)

Falk, M. & Henriksson, O. (2017). Static Vulnerability Analysis of Docker Images. [Master thesis, Blekinge Institute of Technology]

Huang, D., Cui, H., Wen, S. & Huang, C. (2019). "Security Analysis and Threats Detection Techniques on Docker Container," 2019 IEEE 5th International Conference on Computer and Communications (ICCC), Chengdu, China, pp. 1214-1220. doi: 10.1109/ICCC47050.2019.9064441

Jesson, J., Matheson, L. & Lacey, F. M. (2011). Doing your literature review: Traditional and systematic techniques. Los Angeles, CA: Sage Publications.

Kitchenham, B. (2004). Procedures for performing systematic reviews. Keele, UK, Keele University, 33 (2004), pp. 1-26.

Kwon, S. & Lee, J. (2020). "DIVDS: Docker Image Vulnerability Diagnostic System," IEEE Access, vol. 8, pp. 42666-42673. doi: 10.1109/ACCESS.2020.2976874

Laurén, S., Reza Memarian, M., Conti, M. & Leppänen, V. (2017). Analysis of Security in Modern Container Platforms. In: Research Advances in Cloud Computing. Springer, Singapore. doi: 10.1007/978-981-10-5026-8_14

Lloyd Sealy Library (2019). Evaluating Information Sources: What Is A Peer-Reviewed Article? Retrieved March 13, 2020 from https://guides.lib.jjay.cuny.edu/c.php?g=288333&p=1922599


Popek, G. J. & Goldberg, R. P. (1974). Formal Requirements for Virtualizable Third Generation Architectures. Communications of the ACM, 17(7), pp. 412-421. doi: 10.1145/361011.361073

Reshetova, E., Karhunen, J., Nyman, T. & Asokan, N. (2014). Security of OS-Level Virtualization Technologies. Lecture Notes in Computer Science, vol. 8788. Springer, Cham. doi: 10.1007/978-3-319-11599-3_5

Sultan, S., Ahmad, I. & Dimitriou, T. (2019). Container Security: Issues, Challenges, and the Road Ahead. IEEE Access, 7, pp. 52976-52996. doi: 10.1109/ACCESS.2019.2911732

Swedish Research Council (2017). Good research practice. https://www.vr.se/download/18.5639980c162791bbfe697882/1555334908942/Good-Research-Practice_VR_2017.pdf

Tomar, A., Jeena, D., Mishra, P. & Bisht, R. (2020). "Docker Security: A Threat Model, Attack Taxonomy and Real-Time Attack Scenario of DoS," 2020 10th International Conference on Cloud Computing, Data Science & Engineering (Confluence), Noida, India, pp. 150-155. doi: 10.1109/Confluence47617.2020.9058115

Tripwire (2019). State of Container Security Report. Tripwire. https://3b6xlt3iddqmuq5vy2w0s5d3-wpengine.netdna-ssl.com/state-of-security/wp-content/uploads/sites/3/Tripwire-Dimensional-Research-State-of-Container-Security-Report.pdf

Tunde-Onadele, O., He, J., Dai, T. & Gu, X. (2019). "A Study on Container Vulnerability Exploit Detection," 2019 IEEE International Conference on Cloud Engineering (IC2E), Prague, Czech Republic, pp. 121-127. doi: 10.1109/IC2E.2019.00026

Watada, J., Roy, A., Kadikar, R., Pham, H. & Xu, B. (2019). "Emerging Trends, Techniques and Open Issues of Containerization: A Review," IEEE Access, vol. 7, pp. 152443-152472. doi: 10.1109/ACCESS.2019.2945930

Wohlin, C., Runeson, P., Höst, M., Ohlsson, M. C., Regnell, B. & Wesslén, A. (2012). Experimentation in software engineering. Heidelberg: Springer.

Yin, K., Chen, W., Zhou, J., Wu, G. & Wei, J. (2018). "STAR: A Specialized Tagging Approach for Docker Repositories," 2018 25th Asia-Pacific Software Engineering Conference (APSEC), Nara, Japan, pp. 426-435. doi: 10.1109/APSEC.2018.00057

Yin, K., Zhou, J., Chen, W., Wu, G., Zhu, J. & Wei, J. (2018). "D-Tagger: A Tag Recommendation Approach for Docker Repositories," Proceedings of the Tenth Asia-Pacific Symposium on Internetware (Internetware '18). Association for Computing Machinery, New York, NY, USA, Article 3, pp. 1-10. doi: 10.1145/3275219.3275220


Zerouali, A., Cosentino, V., Mens, T., Robles, G. & Gonzalez-Barahona, J. M. (2019). "On the Impact of Outdated and Vulnerable Javascript Packages in Docker Images," 2019 IEEE 26th International Conference on Software Analysis, Evolution and Reengineering (SANER), Hangzhou, China, pp. 619-623. doi: 10.1109/SANER.2019.8667984

Zerouali, A., Cosentino, V., Robles, G., Gonzalez-Barahona, J. M. & Mens, T. (2019). "ConPan: A Tool to Analyze Packages in Software Containers," 2019 IEEE/ACM 16th International Conference on Mining Software Repositories (MSR), Montreal, QC, Canada, pp. 592-596. doi: 10.1109/MSR.2019.00089

Zerouali, A., Mens, T., Robles, G. & Gonzalez-Barahona, J. M. (2019). "On the Relation between Outdated Docker Containers, Severity Vulnerabilities, and Bugs," 2019 IEEE 26th International Conference on Software Analysis, Evolution and Reengineering (SANER), Hangzhou, China, pp. 491-501. doi: 10.1109/SANER.2019.8668013


Appendix A – Bibliography of Accepted Articles

A1: A Study on Container Vulnerability Exploit Detection. Tunde-Onadele, O., He, J., Dai, T. & Gu, X. (2019)

A2: Security Analysis and Threats Detection Techniques on Docker Container. Huang, D., Cui, H., Wen, S. & Huang, C. (2019)

A3: Docker Container Security in Cloud Computing. Brady, K., Moon, S., Nguyen, T. & Coffman, J. (2020)

A4: DIVDS: Docker Image Vulnerability Diagnostic System. Kwon, S. & Lee, J. (2020)

A5: ConPan: A Tool to Analyze Packages in Software Containers. Zerouali, A., Cosentino, V., Robles, G., Gonzalez-Barahona, J. M. & Mens, T. (2019)

A6: On the Impact of Outdated and Vulnerable Javascript Packages in Docker Images. Zerouali, A., Cosentino, V., Mens, T., Robles, G. & Gonzalez-Barahona, J. M. (2019)

A7: On the Relation between Outdated Docker Containers, Severity Vulnerabilities, and Bugs. Zerouali, A., Mens, T., Robles, G. & Gonzalez-Barahona, J. M. (2019)

A8: Docker Security: A Threat Model, Attack Taxonomy and Real-Time Attack Scenario of DoS. Tomar, A., Jeena, D., Mishra, P. & Bisht, R. (2020)

A9: Emerging Trends, Techniques and Open Issues of Containerization: A Review. Watada, J., Roy, A., Kadikar, R., Pham, H. & Xu, B. (2019)

A10: STAR: A Specialized Tagging Approach for Docker Repositories. Yin, K., Chen, W., Zhou, J., Wu, G. & Wei, J. (2018)

A11: D-Tagger: A Tag Recommendation Approach for Docker Repositories. Yin, K., Zhou, J., Chen, W., Wu, G., Zhu, J. & Wei, J. (2018)

A12: Semi-Supervised Learning Based Tag Recommendation for Docker Repositories. Chen, W., Zhou, J., Zhu, J., Wu, G. & Wei, J. (2019)

A13: Container Security: Issues, Challenges, and the Road Ahead. Sultan, S., Ahmad, I. & Dimitriou, T. (2019)


Appendix B – List of Coded Extracts

Coded Extracts

Dynamic scan: automated threat detection
Layered structure need to be checked
Docker Hub flaws
Vulnerability detection challenges
Static automatic threat detection, weak
Vulnerability detection tools
Vulnerability database/scoring
Antivirus
Dynamic scan: network monitoring
Practical threats
Miscellaneous
Technical lag
Age and version in relation to bugs
All images are affected
Community response
Hinder updates
Difference official/community
Repository/deployment tools
OS packages and dependencies
Threat models
Benefits tagging: identify content
Current problems: lack of data
Workaround tagging: use available information
Effectivity automatic tagging
Tagging Algorithm
Previous work tagging
Benefits tagging: categories
Influence from GitHub
