
Linköping University | Department of Computer and Information Science

Bachelor thesis, 16 ECTS | Information technology

2018 | LIU-IDA/LITH-EX-G--18/031--SE

Slow rate denial of service attacks on dedicated- versus cloud based server solutions

En jämförelse mellan resursbindande denial of service attacker mot dedikerade och molnbaserade serverlösningar

Albin Andersson

Oscar Andell

Supervisor: Simin Nadjm-Tehrani
Examiner: Marcus Bendtsen



Copyright

The publishers will keep this document online on the Internet – or its possible replacement – for a period of 25 years starting from the date of publication barring exceptional circumstances. The online availability of the document implies permanent permission for anyone to read, to download, or to print out single copies for his/hers own use and to use it unchanged for non-commercial research and educational purpose. Subsequent transfers of copyright cannot revoke this permission. All other uses of the document are conditional upon the consent of the copyright owner. The publisher has taken technical and administrative measures to assure authenticity, security and accessibility. According to intellectual property law the author has the right to be mentioned when his/her work is accessed as described above and to be protected against infringement. For additional information about the Linköping University Electronic Press and its procedures for publication and for assurance of document integrity, please refer to its www home page: http://www.ep.liu.se/.

© Albin Andersson, Oscar Andell


Students in the 5 year Information Technology program complete a semester-long software development project during their sixth semester (third year). The project is completed in mid-sized groups, and the students implement a mobile application intended to be used in a multi-actor setting, currently a search and rescue scenario. In parallel they study several topics relevant to the technical and ethical considerations in the project. The project culminates by demonstrating a working product and a written report documenting the results of the practical development process including requirements elicitation. During the final stage of the semester, students create small groups and specialise in one topic, resulting in a bachelor thesis. The current report represents the results obtained during this specialisation work. Hence, the thesis should be viewed as part of a larger body of work required to pass the semester, including the conditions and requirements for a bachelor thesis.


Abstract

Denial of Service (DoS) attacks remain a serious threat to internet stability. A specific kind of low bandwidth DoS attack, called a slow rate attack, can with very limited resources potentially cause major interruptions to the availability of the attacked web servers. This thesis examines the impact of slow rate application layer DoS attacks against three different server solutions. The server solutions are a static cloud solution and a load-balancing cloud solution running on Amazon Web Services (AWS), as well as a dedicated server. To identify the impact in terms of responsiveness and service availability, a number of experiments were conducted on the web servers using publicly available DoS tools, and the response times of the requests were measured. The results show that the dedicated and static cloud based server solutions are severely impacted by the attacks while the AWS load-balancing cloud solution is not impacted nearly as much. We concluded that all solutions were impacted by the attacks and that the readily available DoS tools are sufficient for creating a denial of service state on certain web servers.


Acknowledgments

We would like to thank our supervisor Simin Nadjm-Tehrani for keeping us on track during the creation of this thesis as well as giving us valuable feedback. We would also like to thank our fellow students for giving us feedback, inspiration and support during this semester.


Contents

Abstract
Acknowledgments
Contents
List of Figures
List of Listings
List of Tables
1 Introduction
   1.1 Aim
   1.2 Research questions
   1.3 Related work
   1.4 Delimitations
2 Background
   2.1 Slow rate application layer denial of service attacks
   2.2 Server solutions
   2.3 Tools for performing denial of service attacks
3 Performing Denial of Service attacks
   3.1 Experimental environment
   3.2 Experiments
4 Results
   4.1 Slow header attack
   4.2 Slow body attack
5 Discussion
   5.1 Results
   5.2 Method
   5.3 Work in a wider context
6 Conclusion
   6.1 Future Work
Bibliography
7 Appendix
   7.1 Observer script
   7.2 Full server configurations
   7.3 Experiment 4
   7.4 Experiment 8


List of Figures

2.1 Legitimate HTTP GET header
2.2 Wireshark trace of a slow header TCP stream
2.3 Illegitimate HTTP POST request
3.1 Topology of the experiment setup
3.2 Overview of load generation and observation
4.1 Average response times with a load of 200 concurrent connections while under a slow header attack running a variable number of web sockets
4.2 Response time of 10000 requests under a slow header attack using 250 web sockets
4.3 Response time of 10000 requests with a slow header attack using 500 web sockets
4.4 Average response times with a load of 200 concurrent connections with a slow body attack running variable threads
4.5 Response time of 10000 requests with a slow body attack running 30 threads
5.1 Up-close view of the load balancing happening in experiment 7 (Figure 4.5)
7.1 Response times of 1000 requests with a load of 10 and 200 with a slow header attack using 250 web sockets
7.2 Response times of 1000 requests with a load of 10 and 200 with a slow header attack using 500 web sockets
7.3 Response times of 1000 requests with a load of 10 and 200 with a slow body attack using 20 threads


List of Listings

2.1 Implementation of keeping connections alive in slowloris.py
2.2 Implementation of keeping connections alive in Tor’s Hammer
3.1 Part of the Python script used for saving response times


List of Tables

3.1 Setup Cloud-based server
3.2 Load-balancer and auto-scaling configuration
3.3 Setup Dedicated server
3.4 Parameters of experiment 1: Effectiveness of slow header attack
3.5 Parameters of experiment 2: Server point of failure when under slow header attack
3.6 Parameters of experiment 3: Slow header attack over time
3.7 Parameters of experiment 4: Effects of load on a slow header attack
3.8 Parameters of experiment 5: Effectiveness of slow body attack
3.9 Parameters of experiment 6: Server point of failure when under slow body attack
3.10 Parameters of experiment 7: Slow body attack over time
3.11 Parameters of experiment 8: Effects of load on a slow body attack
4.1 Server points of failure while under a slow header attack
4.2 Average response time of 1000 requests with a slow header attack using 250 web sockets
4.3 Average response time of 1000 requests with a slow header attack using 500 web sockets
4.4 Server points of failure while under a slow body attack
4.5 Average response time of 1000 requests with a slow body attack using 20 threads
7.1 Full auto scale configuration
7.2 Full load balancer configuration


1 Introduction

Many aspects of modern life are dependent on near instant access to services and systems via the internet. Online transactions, credit card payments and communication via email and social media are a daily and necessary part of the lives of many. Similarly, companies and organizations providing online services are reliant on their systems being accessible for their users.

A denial of service attack is an attack that targets the availability of a system. These kinds of attacks occur frequently and have in the past caused high profile services and websites to become unavailable. In October 2016 a cyber attack directed at the DNS provider Dyn resulted in services like Twitter, Spotify and CNN suffering major interruptions for a couple of hours [1]. Efforts and research to defend against these attacks have been ongoing for many years, but they still remain a very serious threat to internet stability.

Denial of service can be accomplished in a number of ways by exploiting different weaknesses of network protocols and web servers. It is important to understand how these attacks affect the attacked services in order to be able to protect them. One type of denial of service attack is called a slow rate attack. It is so named because it requires very little bandwidth or computational power from the attacker. Slow rate attacks typically target the application layer of the network stack by for example using malformed requests to exhaust the server's available resources. In this thesis we will take a closer look at how slow rate denial of service attacks targeting the application layer affect web applications hosted on virtual machines in the cloud and how they compare to web applications hosted on physical dedicated servers.

1.1 Aim

The purpose of this thesis is to investigate and evaluate the impacts of two types of application layer slow rate denial of service attacks on web applications running on dedicated, physical servers versus web applications running on cloud based servers. After reading this thesis the reader should have an understanding of the threats posed by slow rate denial of service attacks, how they function, how they can affect web services hosted in the cloud and on dedicated servers, and how these hosting options differ from each other.


1.2 Research questions

This project aims to answer the following questions:

• What is the impact of application layer slow rate denial of service attacks against cloud based- and dedicated server solutions?

• How does the impact on the performance of dedicated- and cloud based server solutions compare while affected by an application layer slow rate denial of service attack?

These questions will be investigated by running experiments on virtual web servers in the cloud and on a self-hosted dedicated server. The dedicated server is implemented using an Apache HTTP server while the cloud-based server is implemented using an Amazon Web Services (AWS) solution. The experiments will be conducted under hardware, software and network conditions that are as similar as possible.

To carry out the experiments, two different scripts designed for server testing were used. These and similar tools are easily accessible on the web making it very easy for a potential adversary to execute an attack. These two scripts were specifically chosen for this thesis because of their ease of use and accessibility.

1.3 Related work

Slow rate denial of service attacks have been explored thoroughly in other works. Similar to this thesis, Bronte et al. [2] look at slow rate application layer attacks on web applications. The authors of the article run tests by launching slow rate attacks against an Apache web server and propose possible ways of detecting such attacks.

Similarly Aqil et al. [3] use combined stealthy application and transport layer attacks against a Unix system hosting an Apache web server. In a similar manner to this thesis they use an observer machine that issues legitimate requests to the server and measures the response times to determine the effects of the attack. What differentiates their implementation from ours is that they do not seem to utilize concurrent requests to simulate legitimate load. Finally they show an approach to detect such stealthy DoS attacks.

Muraleedharan and Janet [4] look at various slow rate HTTP denial of service attacks and analyze the abnormal network traffic generated by these attacks. The goal is to be able to use the traffic data as a way to detect incoming attacks.

While these works share similarities in implementation with this thesis project, they do not cover scenarios in cloud environments. Helalat [5] reviews cloud security with a focus on slow rate HTTP attacks. He provides a statistical and visual analysis of the attacks and the impact they have on the virtual cloud servers.

1.4 Delimitations

Server configurations, methods and settings to try to mitigate the attack will not be explored. The thesis is also limited to one machine to simulate an attacker. This results in distributed denial of service attacks not being evaluated.


2 Background

The following chapter will cover the theory and background of the thesis which includes the tools used for the experiments.

2.1 Slow rate application layer denial of service attacks

Slow rate denial of service attacks are attacks that require a very small amount of bandwidth and computing power to achieve the goals of the attack [6]. Because of this, these kinds of attacks can be conducted by a single or a few attackers and still have the effectiveness of large, flooding based denial of service attacks. These kinds of attacks are generally very hard to detect since they mostly look like normal network traffic. Instead of overwhelming networks or servers with massive amounts of traffic, they use clever ways of attacking, either by consuming huge amounts of resources on the server or by using exploits to crash it.

Application layer attacks, also called layer 7 attacks, are denial of service attacks specifically targeting the application layer of the network stack [7]. These kinds of attacks target protocols such as HTTP, HTTPS and DNS and typically target things such as CPU and memory resources by effectively locking these resources with incomplete requests and slow transmission rates. This means a single slow layer 7 attack has the potential to crash an entire web server, regardless of the hardware the server is running on [4].

Slow header attack

A slow header denial of service attack [8][6][3], often called a slow loris attack, uses HTTP GET requests to fill up a web server's available connections. This attack can be carried out with a limited number of machines that send incomplete requests to the server. The malicious requests are created by not sending the string ”\r\n\r\n”, a double line break which marks the end of the HTTP-header. Figure 2.1 shows a legitimate HTTP GET request.


Figure 2.1: Legitimate HTTP GET header

The highlighted extra line break ”\r\n” tells the server that the request header has been completed. If this line break is omitted from the HTTP-header, the server will keep the connection alive until a double line break is received or the connection eventually times out. The attacker will then continue to send illegitimate HTTP-header fields, resulting in a trace as shown in Figure 2.2.

Figure 2.2: Wireshark trace of a slow header TCP stream.

These incomplete requests will fill up the server’s connection pool since the server will not break the connection until the request is complete. This results in legitimate connections not being served by the server. Since the attack only sends a small amount of data for every incomplete request, the attacker requires only minimal computational power and bandwidth to execute this kind of attack.
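To make the mechanism concrete, the following is a minimal sketch of the idea; it is not code from the thesis or from any of the tools described later, and the target host, port and header values are placeholder assumptions. The sketch opens a single connection, sends a GET request whose header is never terminated, and then trickles bogus header fields to keep the connection occupied.

import random
import socket
import time

# Placeholder target; in a real experiment this would be the attacked web server.
HOST, PORT = "victim.example.com", 80

# Open a TCP connection and send a GET request header that is never finished:
# the final blank line ("\r\n\r\n") is deliberately omitted.
s = socket.create_connection((HOST, PORT), timeout=10)
s.send("GET / HTTP/1.1\r\nHost: {}\r\n".format(HOST).encode("ascii"))

# Periodically send another harmless-looking header field so the server keeps
# waiting for the rest of the request instead of timing the connection out.
for _ in range(10):
    s.send("X-a: {}\r\n".format(random.randint(1, 5000)).encode("ascii"))
    time.sleep(15)

s.close()

A real attack, such as the slowloris.py script described in Section 2.3, does this for hundreds of connections in parallel.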

Slow body attack

A slow body denial of service attack [8][6][5] uses HTTP POST requests to fill up a server's available resources, thus making the server unresponsive to other, legitimate connections.

Figure 2.3: Illegitimate HTTP POST request

In Figure 2.3 an illegitimate HTTP POST request is shown. Unlike in the slow loris attack, the header is complete and ends with an empty line. The complete header specifies an abnormally large body content length, 10000 bytes in the case of Figure 2.3. An attacker will then send the contents of the body of the POST request at a very slow rate, often single bytes at a time while waiting several minutes between packets. Figure 2.3 shows the body of the request, which consists of random characters. Since the content length is defined to be so long, this will take a very long time to complete and will use up the server's available connections while consuming almost none of the attacker's computational power and bandwidth. This will deny service to any legitimate traffic since the server will be busy handling the incoming illegitimate traffic.

By doing this an attacker can keep many connections to the server active for a prolonged amount of time which, if done on a large enough scale, will deny service by the server.


2.2 Server solutions

There exist multiple solutions for hosting websites or other applications on the web and they all function differently. For this thesis the solutions detailed below are examined.

Cloud servers

Cloud computing makes services available to users over the internet; these services are often shared or distributed between many machines, which enables an application to allocate computing power where it is needed [9].

There are many benefits of cloud computing [5]. It allows companies to access computing power and storage on demand without the need to buy and configure additional IT infrastructure. It also allows resources to scale dynamically to meet the workload demand. Another benefit of employing cloud services is that it reduces the workload of software and hardware maintenance and other IT related work. Despite the benefits, cloud computing is vulnerable to many of the threats faced by traditional infrastructure, such as loss of data, hardware failure and insecurities in APIs and interfaces [10].

Amazon Web Services (AWS) is a cloud computing platform which offers a wide variety of services and allows users to create virtual machines to host web servers and various other internet applications. The service used for this thesis project is AWS Elastic Beanstalk. Elastic Beanstalk is a service that allows customers to deploy and manage web applications in the AWS cloud. The service allows users to set up their applications in either single instance or load-balancing and auto-scaling environments [11]. Auto-scaling means that application instances are added and removed dynamically to handle increases and decreases of traffic to the application. Traffic to the application instances is distributed by a load-balancer, which acts as an access point to all of the instances. By default the load-balancer listens to HTTP traffic and forwards it to the environment. AWS also monitors the health of the application and routes incoming network traffic to available instances.

Dedicated servers

A dedicated server is a server running on hardware which is not shared with other servers, as opposed to cloud solutions which typically share the hardware between many virtual machines. This gives the customer full control over the server settings as well as the configuration of the operating system, which results in a very flexible environment. It can also increase performance since the environment can be adjusted to the specific workload it is employed to do.

Apache

The server solutions examined in this thesis all use an Apache HTTP web server. The Apache HTTP Server Project is an open source HTTP server developed by the Apache Software Foundation. It was released in 1995 and quickly became the leading server solution [12]. According to W3Techs [13], it is currently the most used web server, used by 47.1% of the servers surveyed.

Apache is a thread-based web server and is known to be vulnerable to slow rate attacks. This is because Apache, by default, dedicates resources to every connection instead of dynamically allocating resources where they are needed [14]. Apache servers have a maximum timeout after which ongoing connections are dropped, which for most Apache servers is 300 seconds [7]. This means that a resource depleting attack, such as a typical application layer slow rate denial of service attack, has a big effect since it can effectively bind all resources of the server.


2.3 Tools for performing denial of service attacks

In this section the tools we used for performing denial of service attacks are described.

slowloris.py by Gokberk Yaltirakli

Slowloris.py is an open source low bandwidth denial of service tool developed by Gokberk Yaltirakli [15]. This simple Python script lets users execute a slow header attack on a server by specifying a target URL and the number of web sockets to be used in the attack. The script will then establish the specified number of connections to the target server and keep them alive for as long as possible, occupying the server threads. This is accomplished by sending keep-alive headers on all ongoing connections at 15 second intervals (shown in Listing 2.1). Broken connections are discarded and recreated, keeping the number of connections constant.

Listing 2.1: Implementation of keeping connections alive in slowloris.py

while True:
    for s in list(list_of_sockets):
        try:
            s.send("X-a: {}\r\n".format(random.randint(1, 5000)).encode("utf-8"))
        except socket.error:
            list_of_sockets.remove(s)
    ...
    time.sleep(15)

Tor’s Hammer

Tor’s Hammer is a low-bandwidth tool written in Python that is used for performing a slow body HTTP attack. The version used in this thesis [16] works by first sending a complete HTTP POST header and then sending one random character at a time, sleeping for somewhere between 0.1 and 3 seconds between characters (as shown in Listing 2.2). The script uses multithreading to allow multiple active illegitimate connections at once, and the number of threads used by the attacking machine is specified by the attacker.

Listing 2.2: Implementation of keeping connections alive in Tor’s Hammer

socks.send("POST / HTTP/1.1\r\n"
           "Host: %s\r\n"
           "User-Agent: %s\r\n"
           "Connection: keep-alive\r\n"
           "Keep-Alive: 900\r\n"
           "Content-Length: 10000\r\n"
           "Content-Type: application/x-www-form-urlencoded\r\n\r\n"
           % (host, random.choice(useragents)))
for i in range(0, 9999):
    p = random.choice(string.letters + string.digits)
    socks.send(p)
    time.sleep(random.uniform(0.1, 3))


3 Performing Denial of Service attacks

This chapter describes the testing environment and the configurations used for the execution of the experiments.

3.1 Experimental environment

Figure 3.1: Topology of the experiment setup

The experiments were conducted in two different environments: a cloud environment with the settings shown in Table 3.1 and a dedicated environment with the characteristics shown in Table 3.3. The topology of the testing environment is shown in Figure 3.1. The attacker machine separately executes the attacks on the different server solutions while the observer machine measures the response times of legitimate requests.


Configuration of cloud servers

Table 3.1: Setup Cloud-based server
AWS virtual environment: t2.micro
AWS environment type: Single instance or load-balanced, auto-scaling
OS: Ubuntu Server 16.04 LTS
Web server: Apache/2.4.2
CPU: Intel Xeon E5-2676 v3 @ 2.40 GHz running 1 thread
RAM: 1 GB
Disk: 8 GB

By using the free tier of Amazon Web Services, a virtual machine running Ubuntu Server 16.04 LTS was created. This virtual machine was set up to deploy two separate web applications with the AWS Elastic Beanstalk service.

The first web application was configured to run in a load-balancing and auto-scaling environment. This allows more instances of the application to be added to accommodate an increase in load. The specific auto-scaling parameters and configuration used for this project are shown in Table 3.2. The auto-scaling configuration uses the average latency as a metric for detecting an increase in load. This metric was chosen because it is similar to the main metric used in our experiments, i.e. response time. The measurement period is the time between the points at which the server evaluates its current state and health. To be able to quickly respond to a denial of service attack this is set to the minimum value of 60 seconds. The idle timeout for the load-balancer is set to the default and recommended value of 60 seconds. This means that the load-balancer will close ongoing connections after 1 minute. The full configuration for the load-balanced and auto-scaling server can be seen in Tables 7.1 and 7.2 in the appendix. From now on we will refer to the load-balanced and auto-scaling server as simply the load-balanced server.

Table 3.2: Load-balancer and auto-scaling configuration
Scale based on: Average latency
Add instance when: > 5 seconds
Remove instance when: < 1 second
Measurement period: 60 seconds
Idle timeout: 60 seconds
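The thesis does not state how these settings were applied (most likely through the AWS console). Purely as an illustration, the snippet below is a sketch of how an equivalent latency trigger could be expressed programmatically through the boto3 Elastic Beanstalk API; the environment name and region are placeholder assumptions, and the values mirror Table 3.2.

import boto3

# Placeholder environment name and region; the values mirror Table 3.2.
eb = boto3.client("elasticbeanstalk", region_name="us-east-1")

eb.update_environment(
    EnvironmentName="slow-dos-test-env",
    OptionSettings=[
        # Scale on average latency: add an instance above 5 s, remove one below 1 s.
        {"Namespace": "aws:autoscaling:trigger", "OptionName": "MeasureName", "Value": "Latency"},
        {"Namespace": "aws:autoscaling:trigger", "OptionName": "Statistic", "Value": "Average"},
        {"Namespace": "aws:autoscaling:trigger", "OptionName": "Unit", "Value": "Seconds"},
        {"Namespace": "aws:autoscaling:trigger", "OptionName": "UpperThreshold", "Value": "5"},
        {"Namespace": "aws:autoscaling:trigger", "OptionName": "LowerThreshold", "Value": "1"},
        # Measurement period and breach duration are given in minutes.
        {"Namespace": "aws:autoscaling:trigger", "OptionName": "Period", "Value": "1"},
        {"Namespace": "aws:autoscaling:trigger", "OptionName": "BreachDuration", "Value": "1"},
    ],
)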

The second application was configured to run in a single instance environment which means it does not have a load-balancer and does not allow more instances to be dynamically added. Due to the limitations of the AWS free tier, the virtual machine is running on hardware located in the US.

Dedicated server configuration

Table 3.3: Setup Dedicated server
OS: Ubuntu Server 16.04 LTS
Web server: Apache/2.4.2
CPU: Intel Core i5-2450M @ 2.50 GHz
RAM: 4 GB


The Linux machine acting as a dedicated server (Table 3.3) hosts the web application on an Apache/2.4.2 web server with its default configuration. This means that, among other things, the maximum number of clients is set to the default value of 256. This server was hosted on the same local network as the attacking and observation machines. This means that the dedicated solution will have a naturally shorter round trip time compared to the cloud solutions because of the physical distance to the latter.

3.2 Experiments

To measure the effectiveness of the denial of service attacks, the response time of the applications was used as a metric. The response time was measured by issuing multiple HTTP GET requests to the server and recording, on the same machine (the observer), the time between sending the request and receiving the response from the server. The variations in configurations and settings of the tools and load generators are detailed under each specific experiment. All of the server solutions were set up to serve a Python 3.6 application which outputs static HTML content.

Load generation and observation

Figure 3.2: Overview of load generation and observation

To simulate legitimate web traffic and measure response times to the servers we created the Python script shown in Appendix 7.1. The script uses multiple concurrent threads to send HTTP GET requests to a chosen website URL. The program measures the amount of time between sending the request and an ”HTTP 200 OK” response from the server, as shown in Listing 3.1. Other responses such as ”500 Internal Server Error” are discarded and counted as failed requests. For each request the timestamp and response time are saved to a file to allow further analysis of the data.

Listing 3.1: Part of the Python script used for saving response times

response = requests.get(url)
if response.status_code == 200:
    arr.append([response.elapsed.total_seconds(), time_stamp])

In this script we define two different parameters, which can be specified by the user. The first is the total number of measurement requests. This specifies the total number of HTTP GET requests sent to the server during a test. The second is the load, which specifies how many of these requests are sent


concurrently. Figure 3.2 shows the relation between the measurement requests and the load. The measurement requests are placed in a queue until a thread is available and are then sent to the server.

During the experiments, this script will be run on the observer machine as seen in Figure 3.1. Henceforth in this thesis we will call this script the observer script.

Slow header attack

The slow header attack experiments were executed with the slowloris.py script described in Section 2.3. The attacking script was left running on the attacking machine for each intensity of the attack until all measurements were complete. Tables 3.4, 3.5, 3.6 and 3.7 show the different slow header test configurations. The number of attacking web sockets for each experiment is called attacking web sockets.

Table 3.4: Parameters of experiment 1: Effectiveness of slow header attack
Attack type: Slow header
Load: 200
Measurement requests: 500
Attacking web sockets: 0-1000, increments of 100

The goal of experiment 1 (Table 3.4) was to examine how the different servers reacted to the number of illegitimate connections generated by the slow header attack. In the experiment, the load script was configured to measure the response time of the servers 500 times with a load of 200 concurrent connections. For each server, the number of web sockets used by the attacking machine was incremented by 100. The attacks have the potential to cause very long response times, and the time to perform the experiments could therefore become very long. Because of this we chose to use 500 measurement requests in most experiments.

Table 3.5: Parameters of experiment 2: Server point of failure when under slow header attack
Attack type: Slow header
Load: 200
Measurement requests: 500
Attacking web sockets: Incremented until failure

In experiment 2 (Table 3.5), the goal was to find out if there is a point of failure where the servers become completely unavailable. This is done by incrementing the number of illegitimate connections and measuring the number of failed requests. For this test ”completely unavailable” is defined to be the point where 100% of the requests fail.

Table 3.6: Parameters of experiment 3: Slow header attack over time
Attack type: Slow header
Load: 200
Measurement requests: 10000
Attacking web sockets: 250, 500

In experiment 3 (Table 3.6) we measure the effectiveness of the attack over a longer period of time to determine if the effects on the servers remain constant or change during the duration of the attack. This is done by increasing the number of measurements in the observer script and presenting them in the order they were sent. This means that we can observe how the server response times change over time. The test was carried out under a slow header attack using 250 and 500 web sockets.


Table 3.7: Parameters of experiment 4: Effects of load on a slow header attack
Attack type: Slow header
Load: 10 & 200
Measurement requests: 1000
Attacking web sockets: 250, 500

To explore if and how the slow header attack is affected by competing legitimate traffic, the slow header attacks are executed on servers with different levels of load. In this experiment the observer script has two configurations. The first configuration runs the measurements using only 10 concurrent connections while the second uses 200. Since we only evaluate two intensities of the slow header attack in this experiment it will be quicker to perform. Because of this the number of measurement requests is set to 1000 to potentially observe more long term effects.

Slow body attack

The configurations of the slow body experiments are shown in Tables 3.8, 3.9, 3.10 and 3.11. The attacks were launched using the tool Tor’s Hammer as described in Section 2.3. Similar to the slow header attack, the script was left running for each intensity of the attack until all measurements were complete. The number of threads used by the attacking machine is called attacking threads.

Table 3.8: Parameters of experiment 5: Effectiveness of slow body attack
Attack type: Slow body
Load: 200
Measurement requests: 500
Attacking threads: 5 - 400

Similar to experiment 1, experiment 5 examines how the server reacts to different intensities of the same attack. This is done to get an overview of how effective the slow body attacks are against the servers. In this experiment the observer script was executed 20 seconds after the attack tool was started, as recommended by the creator of the tool. The number of threads used in the attack was incremented by 5 until we reached 40 active threads. The number was then incremented by 10 to 50 active threads and later 60 active threads. The next set of tests started at 100 threads and was incremented by 50 until we reached the final test of 400 attacking threads.

Table 3.9: Parameters of experiment 6: Server point of failure when under slow body attack
Attack type: Slow body
Load: 200
Measurement requests: 500
Attacking threads: Incremented until failure

Similar to experiment 2, experiment 6 examines if there exists a point where the servers become completely inaccessible while under attack from a slow body denial of service attack. The measurements were made approximately 1 minute after the attack was started to ensure that the attack had taken full effect.


Table 3.10: Parameters of experiment 7: Slow body attack over time
Attack type: Slow body
Load: 200
Measurement requests: 10000
Attacking threads: 30

Experiment 7 seeks to examine how the slow body attack affects the servers over a longer period of time. Like experiment 3 this is done by increasing the number of requests made by the observer script and presenting them in chronological order.

Table 3.11: Parameters of experiment 8: Effects of load on a slow body attack
Attack type: Slow body
Load: 10 & 200
Measurement requests: 1000
Attacking threads: 20

Similar to experiment 4 this experiment aims to explore how the attack is affected by competing, legitimate traffic. As in experiment 4 the load of the first test of the experiment is 10 concurrent connections followed by another test with a load of 200 concurrent connections. This test also uses 1000 measurement requests for the same reason as experiment 4.


4 Results

In this chapter the results of the experiments detailed in Chapter 3 are presented.

4.1 Slow header attack

Experiment 1: Effectiveness of slow header attack

In Section 3.1 we introduced the three server solutions examined in this thesis: the dedicated server, the static cloud server and the load-balanced cloud server. The first experiment, presented in Figure 4.1, looks at the average response times and the number of failed requests to the servers under different intensities of the slow header attack. A logarithmic scale was used for Figure 4.1 since the results varied too much in magnitude to be easily represented on a linear scale.


As can be seen in Figure 4.1 there are some big differences between the server solutions. The dedicated server was not noticeably affected at all until between 200 and 300 sockets, where it stopped responding altogether.

The cloud solutions, however, behaved differently. The static cloud solution steadily increased in response time while the load-balanced cloud solution remained roughly the same across all tests.

An anomaly can be observed in the static cloud server. When the number of attacking web sockets reached 600 about 1/5 of the measurement requests failed, while in the next test with 700 web sockets, no requests failed. We are unsure about why exactly this occurred but we observed that the static cloud server became very unstable past 500 attacking web sockets.

Experiment 2: Point of failure for server under slow header attack

Table 4.1: Server points of failure while under a slow header attack
Static cloud server: ~1500
Load-balanced cloud server: None
Dedicated server: 256

The results of experiment 2 are shown in Table 4.1. The dedicated server became completely unavailable after 256 web sockets were used in the attack. An observation is that this is the same as the default number of max clients mentioned in Section 3.1, which is most likely the reason for this behaviour. For the static cloud server the point of failure is not as clear. We found no clear cutoff point where the server was completely unavailable for all requests, but after 1500 sockets a vast majority of the requests failed or had a response time of more than 20 seconds. The load-balancing cloud server is not noticeably affected by the header attack and has no point of failure.

Experiment 3: Slow header attack over time


Figure 4.2 shows the effects of the slow header denial of service attack over 10000 HTTP GET requests in a scatter plot. The right column shows the baseline test for the different server solutions. Each request is presented as a dot and the requests are ordered in the order they were sent to the server. For the static cloud and dedicated server, the figure shows significant effects on the response times of requests; however, the effects are sporadic. In the case of the load-balancing cloud server, no clear effects of the denial of service attack can be seen.

Figure 4.3: Response time of 10000 requests with a slow header attack using 500 web sockets

With 500 web sockets used in the attack the dedicated server is unavailable and no requests were served. Because of that, Figure 4.3 only shows the results of the cloud solutions. During the attack, the static cloud server experienced three major spikes in response time, with some requests taking over 60 seconds to complete. The load-balanced server also experienced a spike in response time, with maximum response times of around 5 seconds.

Both Figure 4.2 and Figure 4.3 show a large spread in response times, indicating that the slow header attack is not able to constantly maintain a denial of service state on the targeted servers.

Experiment 4: Effects of load on a slow header attack

Table 4.2: Average response time of 1000 requests with a slow header attack using 250 web sockets

Server                 Load 10    Load 200
Static Cloud           0.31 s     4.92 s
Load-balanced Cloud    0.29 s     0.51 s
Dedicated              0.16 s     2.96 s

Table 4.2 shows that when the intensity of the attack is lower (250 sockets), the response time is negatively affected by a larger load. The case with a load of 10 concurrent connections does not show any major increase in response time.


Table 4.3: Average response time of 1000 requests with a slow header attack using 500 web sockets

Server                 Load 10        Load 200
Static Cloud           27.68 s        13.68 s
Load-balanced Cloud    0.34 s         0.45 s
Dedicated              Unavailable    Unavailable

In the test with a higher attack intensity (500 sockets) the opposite is observed. Table 4.3 indicates that a larger load gives a shorter average response time than the case with a load of 10. While this is noteworthy, the complete scatter plots shown in appendix Figures 7.1 and 7.2 do not indicate any outliers that might disproportionately affect the results. One possible explanation for this behavior is that with 250 sockets the attack occupies some but not all of the server's available connections (with Apache's default limit of 256 clients mentioned in Section 3.1, for instance, 250 attacking sockets leave only a handful of slots free). That means that with high load the legitimate connections have to compete with each other for the few available resources, while with low load the few available connections are sufficient to serve the legitimate connections. When the number of web sockets is increased to 500, all of the server's available connections are occupied and legitimate traffic has to compete with illegitimate traffic to reach the server. This makes the attack less effective when the server is under higher load.

4.2 Slow body attack

Experiment 5: Effectiveness of slow body attack

Figure 4.4: Average response times with a load of 200 concurrent connections with a slow body attack running variable threads

A logarithmic scale was used for experiment 5 in Figure 4.4.

As can be seen in Figure 4.4 the response times of the dedicated server stayed fairly constant until the point of failure between 250 and 300 threads used, when the server became unresponsive and failed all requests. The static and load-balancing cloud servers both followed roughly the same pattern of steadily increasing in response times and the number


of failed requests as the number of threads used in the attack increased. In the case of 400 threads all the requests sent from the observer to the load-balanced server failed. The static cloud server barely remained responsive with response times of up to 2 minutes and 4 out of 5 requests failing.

Experiment 6: Point of failure for server under slow body attack

Table 4.4: Server points of failure while under a slow body attack
Static cloud server: ~400
Load-balanced cloud server: None
Dedicated server: 256

The dedicated server became unavailable when the attack used 256 threads. This is unsurprising for the same reasons explained in the results of experiment 2: the default maximum number of clients in Apache is set to 256. The points of failure of the cloud servers are harder to define. At around 400 threads the static cloud server became unresponsive, but this number varied a bit between experiments. The load-balanced cloud server became unresponsive for a short while but later returned to normal levels. This means that the load-balancing cloud solution had no clear point of failure.

Experiment 7: Slow body attack over time

Figure 4.5: Response time of 10000 requests with a slow body attack running 30 threads

The results of experiment 7 are shown in Figure 4.5. The dedicated server remained consistent with experiment 5 (Figure 4.4) and no noticeable effects of the attack are visible. The first requests sent to the load-balancing cloud server experienced response times of around 50 seconds.


The static cloud server experienced response times of up to two minutes while failing approximately 40% of all requests, as can be seen in the gaps in the plot.

This experiment seemed to create a more consistent denial of service state on the static cloud solution compared to the slow header attack shown in experiment 3 (Figures 4.2 and 4.3). The slow body and slow header attacks both manage to create response times of around a minute but the major difference is that in the case of the slow header attack this only occurs in spikes while it happens quite consistently in the case of the slow body attack.

Experiment 8: Effects of load on a slow body attack

Table 4.5: Average response time of 1000 requests with a slow body attack using 20 threads

Server                 Load 10    Load 200
Static Cloud           4.21 s     23.50 s
Load-balanced Cloud    0.54 s     7.82 s
Dedicated              0.17 s     3.10 s

The results of experiment 8 can be seen in Table 4.5. While under a load of 10 concurrent connections, only the static cloud server was impacted in a major way while the other solutions remained fairly healthy. Every server solution tested was impacted when using a load of 200 concurrent connections and experienced a major increase in response times. The load-balancing cloud server, however, was only impacted for a short duration of the test before returning to normal levels. This experiment indicates that the slow body attack is also affected by legitimate load, similar to the slow header attack.


5 Discussion

In this chapter we will discuss, examine and evaluate our results and methodology. We will also be looking at our work in a wider context.

5.1 Results

The experiments clearly show that all of the tested server solutions are vulnerable to slow rate application layer denial of service attacks. There are however differences in how the servers are affected. The experiments show that the slow header and slow body attacks have different properties and affect the servers differently. In this section we will discuss the attacks and the different servers separately.

Denial of service attack impacts

Using the slow header attack, the attacking machine can create a state of denial of service on both the dedicated and static cloud server, causing large delays in response times or even making the requests to the server fail to retrieve the page content. The experiments also show that the slow header attack does not noticeably affect the load-balanced cloud server. The slow header attack is also shown to be affected by legitimate server load, but this impact seems to depend on the intensity of the attack, as shown in experiment 4 (Tables 4.2 and 4.3).

The slow body attack is shown to be able to cause a state of denial of service on all server solutions presented in this thesis, with varying effectiveness. The attacking machine could make both the static cloud server and the dedicated server unavailable for an indefinite amount of time. While not being able to take it down completely, the slow body attack could also cause interruptions to the load-balanced cloud server. Experiment 8, presented in Table 3.11, shows that the amount of load also has an impact on the effectiveness of the slow body attack. The experiment seems to indicate that a server under heavy load is affected more severely.


Dedicated server

The dedicated server shows a clear point of failure for both the slow header and body attacks since it becomes completely unavailable for an indefinite amount of time when the number of illegitimate connections exceeds the maximum, shown in Tables 4.1 and 4.4. An explanation for this result is that the attacks manage to use up the Apache server's client slots, which default to 256. When there were still connection slots available the server was accommodating all the requests, but as soon as all slots filled up the service was completely denied.

Static cloud server

The static cloud server showed a gradual decrease in performance as the intensity of the tests increased. Figure 4.1 and Figure 4.4 from experiments 1 and 5 illustrate this point quite well. The static server did however resist a crash when subjected to these loads; it slowed down significantly but stayed online even after the dedicated server had long since become unavailable. There could be many reasons for this behaviour. One explanation might be the proximity of the attack. The cloud servers are located in the US while the attacks are launched from Sweden. This makes things like packet loss more likely to occur, and the attack is not able to occupy server resources as effectively. It could also be an effect of the cloud architecture.

Load-balanced cloud server

The slow header experiments (Section 4.1) seem to show that the slow loris attack has no or very little impact on the server. Response times do not increase when increasing the intensity of the attack and no point of failure was found. On the other hand, the slow body experiments (Section 4.2) show that the server is severely affected by this type of attack. We speculate that the reason the effectiveness of the different attacks varies so widely is the load-balancer in front of the server. The load-balancer forwards HTTP requests to the correct server instance, but this does not seem to occur in the case of a slow header attack. This could be because the slow header attack never completes the HTTP-header and as such is never interpreted as an HTTP request and will not be forwarded to the server. This explanation raises another question: why is the load-balancer itself not affected by the slow header denial of service? One possible answer is that the load-balancer might be running on a different web server solution than Apache and is not as vulnerable to these kinds of slow rate application layer attacks.

When the load-balancing server is subjected to the slow body attack it initially seems to be affected in a similar manner to the static cloud server. The results of experiment 5 (Figure 4.4) show long response times and that a large number of requests fail. The reason why this attack is effective while the header attack is not might be that the slow body attack actually completes the HTTP-header. This might cause the load-balancer to forward the malicious traffic to the targeted server.

When looking at the attack over a longer period of time in experiment 7 (Figure 4.5), we observe that the initial requests sent from the observer have very long response times of around 50 seconds. After roughly 50 seconds the response times drop significantly. This could possibly be the result of the auto-scaling feature. The auto-scaling is triggered by the average latency and creates new instances to accommodate the load. Another possible explanation is that the load-balancer closes the illegitimate connections after they exceed the timeout period. Regardless of the explanation, the attack is effectively mitigated.

While the attack was mitigated, this does not necessarily mean that the load-balanced solution has any kind of protection against these kinds of attacks. It could simply be a symptom of this solution having more resources than the other solutions. The load-balancer can scale the web application to accommodate the illegitimate connections. This results in a higher cost, since you pay for the resources you use, which is why you typically do not let it scale indefinitely. This means that an attack using more illegitimate connections should be able to cause a state of denial of service similar to the static cloud solution.


5.2 Method

The choice to use response time as the main evaluation metric gives a general understanding of how the servers are impacted. It does however have some potential problems. It does not take into account failed requests when the server is completely unresponsive. During the majority of the tests this had little or no effect on the results, since the parameters used in the tests were chosen to slow down but not completely crash the servers. In the case of experiments 1 and 4, where it did have an impact, we showed the number of failed requests for each server. Another problem with response time in terms of the load-balancing and auto-scaling server is that it does not take into account the time for the server to mitigate the attack. During the tests this server initially experiences very long response times followed by a return to the baseline, as seen in Figure 5.1. This can potentially skew the average and give a misleading view of the results.

Figure 5.1: Up-close view of the load balancing happening in experiment 7 (Figure 4.5)

The application running on the web servers was a simple Python application serving static content. The results might have been different, and a more realistic simulation of backend systems, if the application had been more complex with database queries and calculations. To produce fair results, we configured the server solutions to serve the same content through an Apache HTTP server. However, late in the testing process we discovered that there is a difference between the dedicated server and the cloud servers. The difference lies in how Apache connects incoming network traffic to the Python application. The cloud servers use a web server gateway interface which specifies how the web server communicates with the web application. This communication between the application and the web server could potentially be a bottleneck which has not been examined. This difference in implementation might be an explanation for why the cloud servers were affected more severely by the slow body attack and showed a more gradual degradation when subjected to different intensities of attack.
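The thesis does not include the web application itself; as an illustration of the gateway interface mentioned above, a minimal WSGI application serving static HTML of the kind described in Section 3.2 could look as follows. The page content and function name are placeholders, not the actual application used in the experiments.

# A minimal WSGI application: the web server (for example Apache together with
# a WSGI gateway) calls this function once per incoming request.
def application(environ, start_response):
    body = b"<html><body><h1>Static test page</h1></body></html>"
    start_response("200 OK", [
        ("Content-Type", "text/html; charset=utf-8"),
        ("Content-Length", str(len(body))),
    ])
    return [body]

On the cloud servers every request passes through this extra hand-off between Apache and the application, which is the potential bottleneck discussed above.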

5.3 Work in a wider context

As discussed in chapter 1, today's society is very reliant on the stability and availability of online services. This means that an effective denial of service attack can potentially have devastating consequences. We believe that to be able to prevent and defend against these attacks one must know how they function and how they can affect online infrastructure.


6 Conclusion

The aim of this thesis was to investigate and evaluate the impact of application layer slow rate denial of service attacks on web applications hosted on dedicated- versus cloud based server solutions. The questions we chose to investigate as a way to fulfill the aim of the thesis are how the servers are impacted by these types of attacks as well as how the different server solutions differ from each other when under an attack of this kind. We have investigated this by conducting tests and experiments on these server solutions under attacks using varying settings and intensity.

The performance of all investigated server solutions was heavily impacted by at least one of the attempted attacks. The server solution that performed best was the load-balancing and auto-scaling cloud server. It was completely immune to the slow header attack and managed to mitigate the slow body attack to the extent of not becoming completely unresponsive. It did however experience major slowdowns in response time during the first minute of the attack before the load-balancer kicked in.

The dedicated server solution and the static cloud based solution differed quite a bit in behaviour. The static cloud based solution gradually slowed down and became more unstable as the attacks intensified, until eventually becoming unresponsive, while the dedicated solution showed low impact until it reached a point of failure and became completely unresponsive.

The two denial of service tools used by the attacking machine were chosen over other options for their accessibility and ease of use. Despite not requiring any advanced knowledge of the technology, these two scripts were shown to have the potential to cause major interruptions to internet applications. This effectively showcases the potential threat slow rate application layer attacks may pose towards unprotected web servers.

6.1 Future Work

Since this thesis solely investigates how different server solutions are impacted by slow rate application layer denial of service attacks a natural progression would be to investigate how to stop and/or mitigate these kinds of attacks on the different server solutions. One could for example examine how different settings to the load-balancer and auto-scaling features of the cloud server impacts the effectiveness of the attack. Tweaking the configuration files of the Apache web server might also yield some interesting results. Following this one could also


attempt to alter the attacks to circumvent the above measures to get a further understanding of how the different server settings impact the attacks.

Another possible option is to attempt to distribute the attacks across many machines, that is to run the attack from a large number of devices, and see if the impact is different compared to the non-distributed versions. One might be able to investigate what measures are available for stopping and mitigating these kinds of distributed denial of service attacks.

One could also expand the scope to investigate, for example, high rate denial of service attacks or slow rate attacks that do not specifically target the application layer.


Bibliography

[1] C. Johnston and S. Thielman. “Major cyber attack disrupts internet service across Europe and US”. In: The Guardian (Oct. 21, 2016). URL: https://www.theguardian.com/technology/2016/oct/21/ddos-attack-dyn-internet-denial-service (visited on 03/21/2018).

[2] Robert Bronte, Hossain Shahriar, and Hisham M. Haddad. “Mitigating Distributed Denial of Service Attacks at the Application Layer”. In: Proceedings of the Symposium on Applied Computing. SAC ’17. Marrakech, Morocco: ACM, 2017, pp. 693–696. ISBN: 978-1-4503-4486-9. DOI: 10.1145/3019612.3019919.

[3] A. Aqil, A. O. F. Atya, T. Jaeger, S. V. Krishnamurthy, K. Levitt, P. D. McDaniel, J. Rowe, and A. Swami. “Detection of stealthy TCP-based DoS attacks”. In: MILCOM 2015 - 2015 IEEE Military Communications Conference. Oct. 2015, pp. 348–353. DOI: 10.1109/MILCOM.2015.7357467.

[4] N. Muraleedharan and B. Janet. “Behaviour analysis of HTTP based slow denial of service attack”. In: Wireless Communications, Signal Processing and Networking (WiSPNET), 2017 International Conference on. IEEE, 2017, pp. 1851–1856.

[5] Seyed Milad Helalat. “An Investigation of the Impact of the Slow HTTP DOS and DDOS attacks on the Cloud environment”. MA thesis. Blekinge Institute of Technology, 2017, p. 74.

[6] Nikhil Tripathi and Neminath Hubballi. “Slow rate denial of service attacks against HTTP/2 and detection”. In: Computers & Security 72 (2018), pp. 255–272. ISSN: 0167-4048. DOI: 10.1016/j.cose.2017.09.009.

[7] V. Durcekova, L. Schwartz, and N. Shahmehri. “Sophisticated Denial of Service attacks aimed at application layer”. In: 2012 ELEKTRO. May 2012, pp. 55–60. DOI: 10.1109/ELEKTRO.2012.6225571.

[8] Enrico Cambiaso, Gianluca Papaleo, and Maurizio Aiello. “Taxonomy of slow DoS attacks to web applications”. In: International Conference on Security in Computer Networks and Distributed Systems. Springer, 2012, pp. 195–204.

[9] Dan C. Marinescu. Cloud computing: theory and practice. Morgan Kaufmann, 2017.

[10] Mark D. Ryan. “Cloud Computing Privacy Concerns on Our Doorstep”. In: Commun. ACM 54.1 (2011).

[11] Amazon Web Services. AWS Elastic Beanstalk Developer Guide. https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features-managing-env-types.html?icmpid=docs_elasticbeanstalk_console. 2018. (Visited on 04/20/2018).

[12] R. T. Fielding and G. Kaiser. “The Apache HTTP Server Project”. In: IEEE Internet Computing 1.4 (July 1997), pp. 88–90. ISSN: 1089-7801. DOI: 10.1109/4236.612229.

[13] W3Techs. Usage of web servers broken down by ranking. https://w3techs.com/technologies/cross/web_server/ranking. 2017. (Visited on 04/27/2018).

[14] Apache Software Foundation. Apache HTTP Server Version 2.4 Documentation. https://httpd.apache.org/docs/2.4/. 2018. (Visited on 04/27/2018).

[15] Gokberk Yaltirakli. About Slowloris. https://gkbrk.com/2016/09/about-slowloris/. 2016. (Visited on 03/21/2018).

[16] dotfighter. Tor’s Hammer. https://github.com/dotfighter/torshammer. 2012. (Visited on 03/21/2018).


7 Appendix

7.1 Observer script

Listing 7.1: The Python script used for creating load and measuring response time

import requests
from threading import Thread
import queue
import numpy
import time
import sys
from multiprocessing import Pool

arr = []
concurrent = 10
numberOfTests = 1000
failed = 0

def doWork():
    while True:
        url = q.get()
        ts = time.time()
        try:
            response = requests.get(url)
            print(response.elapsed, response.status_code)
            if response.status_code == 200:
                # Save the response time and timestamp of a successful request
                arr.append([response.elapsed.total_seconds(), ts])
            else:
                # Any other status code is counted as a failed request
                arr.append([0, ts])
        except Exception as ex:
            print(ex)
            arr.append([0, ts])
        q.task_done()

# Start the worker threads that consume the request queue
q = queue.Queue(concurrent * 2)
for i in range(concurrent):
    t = Thread(target=doWork)
    t.daemon = True
    t.start()

try:
    for i in range(0, numberOfTests):
        q.put("http://example.com/")
    q.join()

    counter = 0
    avg = 0
    stdarray = []
    open('data.txt', 'w').close()
    for val in arr:
        if val[0] != 0:
            avg = avg + val[0]
            with open('data.txt', 'a') as data_file:
                data_file.write(str(val[1]) + ': ' + str(val[0]) + '\n')
            counter = counter + 1
            stdarray.append(val[0])
    avg = avg / len(arr)
    print("Average response time:")
    print(avg)
    print("STD:")
    print(numpy.std(stdarray, ddof=1))
    for val in arr:
        if val[0] == 0:
            failed = failed + 1
    print("Failed requests")
    print(failed)
    print((failed * 100) / numberOfTests, "%")
except KeyboardInterrupt:
    sys.exit(1)

7.2 Full server configurations

Table 7.1: Full auto scale configuration
Minimum instance count: 1
Maximum instance count: 4
Availability Zones: Any
Scaling cooldown: 1 second
Trigger measurement: Latency
Trigger statistic: Average
Measurement period: 1 minute
Breach duration: 1 minute
Upper threshold: 5 seconds
Upper breach scale increment: 1
Lower threshold: 1


Table 7.2: Full load balancer configuration
Listener port: 80
Protocol: HTTP
Secure listener port: OFF
Connection draining: 20 seconds
Health check interval: 10 seconds
Health check timeout: 5 seconds
Healthy check count threshold: 3
Unhealthy check count threshold: 5
Idle timeout: 60 seconds

7.3 Experiment 4

Figure 7.1: Response times of 1000 requests with a load of 10 and 200 with a slow header attack using 250 web sockets


Figure 7.2: Response times of 1000 requests with a load of 10 and 200 with a slow header attack using 500 web sockets

7.4 Experiment 8

Figure 7.3: Response times of 1000 requests with a load of 10 and 200 with a slow body attack using 20 threads
