How to design a trustworthy IPsec VPN device employing nested tunnels?

N/A
N/A
Protected

Academic year: 2021

Share "How to design a trustworthy IPsec VPN device employing nested tunnels?"

Copied!
145
0
0

Loading.... (view fulltext now)

Full text


Alexander Spottka

Information Security, master's level (60 credits) 2018

Luleå University of Technology

Department of Computer Science, Electrical and Space Engineering


ABSTRACT

Enterprises use site-to-site Virtual Private Network (VPN) technology to transmit data securely over insecure networks, such as the Internet. By utilizing commercial VPN products, organizations partially rely on the vendors to keep their communication out of reach of malicious groups or individuals. These VPN servers consist of thousands of subcomponents, which can be grouped into hardware, operating system, general software, protocols, and algorithms. The main idea of this study is to design an IPsec VPN architecture based on IPsec nesting. This is achieved by designing two servers that consist of different subcomponents at each layer, so that a vulnerability in one component will not necessarily put the entire IPsec communication at risk. The subcomponents selected for deployment are investigated and reviewed with respect to their trustworthiness, judged against criteria defined later in the study. This trust analysis can serve as a starting point for a framework for future trust assessments.


ACKNOWLEDGEMENTS

Before presenting the details of this study, I want to thank Combitech, and especially Per Westerberg and Daniel Arvidsson, for their support of this thesis. Despite their busy schedules, we managed to get the thesis off the ground. Furthermore, I wish to thank Tero Päivärinta and Parvaneh Westerlund: even though I decided to pursue this study on short notice, they supported me with detailed feedback and kept the process simple. Personally, I want to thank my girlfriend Melina, who, despite her own work, took care of everything else so that I could stay focused on the study. Lastly, I want to express my appreciation for such a caring family at home; my mother Dagmar and my brother Matthias especially kept me motivated and helped me unwind after long days of writing.


CONTENTS

List of Tables
List of Figures

1 Introduction
1.1 Literature Review
1.1.1 Trust
1.1.2 Hardware
1.1.3 Operating System
1.1.4 IPsec Applications
1.1.5 Protocols and Cryptography
1.2 Research Question
1.3 Motivation
1.4 Limitation
1.5 Target Group
1.6 Outline

2 Theoretical Context
2.1 Assurance and Security
2.2 IPsec
2.2.1 Symmetric vs. Asymmetric Cryptography
2.2.2 Key Exchange
2.2.3 Functionality of IPsec

3 Methodology
3.1 Research Method
3.2 Reliability and Validity
3.3 Ethical Considerations

4 Design
4.1 Hardware
4.1.1 UDOOx86
4.1.2 ODROID-XU4
4.2 Operating System
4.2.1 RedHat Enterprise Linux
4.2.2 Debian Jessie
4.3 IPsec Applications
4.3.1 strongSwan
4.3.2 Libreswan
4.4 Cryptography
4.4.1 ESP-Encryption Algorithms
4.4.2 Integrity
4.4.3 Diffie-Hellman Group
4.4.4 Digital Signatures
4.5 Finalized Design

5 Implementation
5.1 Configuration of the strongSwan servers
5.2 Configuration of the Libreswan servers

6 Demonstration

7 Evaluation
7.1 Summarized trust score for all
7.2 IPsec Testing and Certification Program Criteria Version 3.0
7.2.1 IPsec
7.2.2 Logging
7.2.3 Administration
7.2.4 Authentication using Certificates
7.3 Performance

8 Discussion

9 Conclusion
9.1 Future research

A Appendix A

Bibliography


LIST OF TABLES

4.1 Pros and cons of SECO and their product UDOOx86
4.2 Trust Score for SECO/UDOOx86
4.3 Pros and cons of Hardkernel and their product ODROID-XU4
4.4 Trust Score for Hardkernel/ODROID-XU4
4.5 Pros and cons of RHEL v.7
4.6 Trust Score for RHEL v.7
4.7 Pros and cons of Debian
4.8 Trust Score for Debian
4.9 Pros and cons of strongSwan
4.10 Trust Score for strongSwan
4.11 Pros and cons of Libreswan
4.12 Trust Score for Libreswan
4.13 Algorithm Choices
4.14 Trust Score for AES, CHACHA20, Camellia
4.15 Trust Score for SHA
4.16 Trust Score for MODP and X25519
4.17 Trust Score for RSA, ECDSA, and EdDSA
4.18 Algorithm Choices
4.19 Final design
7.1 Trust Score for Device 1
7.2 Trust Score for Device 2
7.3 Throughput result
7.4 Average Throughput in relation to baseline
7.5 Latency result


LIST OF FIGURES

2.1 IP vs. IPsec datagram
3.1 IPsec topology
4.1 IPsec topology
4.2 Architecture of the artefact
6.1 Raw ICMP traffic
6.2 Traffic after establishing one IPsec tunnel
6.3 Traffic after establishing nested IPsec tunnels
A.1 UDP throughput, no tunnel
A.2 TCP throughput, no tunnel
A.3 UDP throughput with tunnel between RHEL machines
A.4 TCP throughput with tunnel between RHEL machines
A.5 UDP throughput with tunnel between Debian machines
A.6 TCP throughput with tunnel between Debian machines
A.7 UDP throughput with tunnel between both machines
A.8 TCP throughput with tunnel between both machines


CHAPTER 1

INTRODUCTION

In times of information becoming one of the world's most valuable assets, it has become imperative to protect it. IT systems have become increasingly abstract and incomprehensible for the average user. The creation of a globe-spanning system requires secure communication, free from eavesdropping and manipulation. Virtual Private Networks (VPNs) offer the capability to establish secure channels over an untrusted medium, such as the Internet. That statement only holds if VPN devices can assure the correctness of their functionality, so that the user has a high degree of assurance of their safety. This study attempts to clarify the terminology of trust, since Gollmann points out its multiple definitions, which lead to miscommunication and therefore to vulnerable structures [1]. Trust cannot be taken for granted, especially since Edward Snowden revealed the large-scale digital surveillance conducted by intelligence agencies in the United States of America [2], which abused critical vulnerabilities in a wide span of firewall products. Still, a thorough investigation of the correctness of VPN servers is currently unfeasible, since they consist of a high number of subcomponents. As described in the journal paper of Nemec et al. [3], history has shown that a particular flaw in a widely used cryptographic library, distributed on Trusted Platform Modules (TPMs) and other carriers such as ID cards and passports, resulted in roughly a quarter of the keys on the test machines in that study being vulnerable. Using a VPN server that contains a flawed TPM endangers the confidentiality, integrity, and availability of the established communication channel. This example shows the importance of assurance in third-party components and their manufacturers, given today's hypercomplex products such as VPN servers. The amount of trust one puts into the organizations developing such subcomponents needs to be questioned.
In order to decrease the risk of compromise, Daniel Arvidsson, VPN expert at Combitech, had the initial idea of layering VPN tunnels. This results in a failover architecture in which one tunnel can be attacked without the whole system being compromised. Each


VPN server in this architecture will further be divided into modular sub-groups that interact with each other. The first layer is the hardware. Most important for cryptographic activities is the processor, since it holds the TPM, which stores highly sensitive properties such as private keys [4]. On top of the hardware lies the operating system (OS). The OS provides a platform for applications as well as basic functionality for accessing hardware components. For this study, applications are narrowed down to Internet Protocol Security (IPsec) VPN applications.

IPsec applications need to be configured to utilize desired protocols and encryption algorithms.

Organizations such as NIST publish standards and recommendations, e.g. Key Management [5], to guide administrators in securing their devices and networks. End users often have to trust what vendors develop and sell: individuals rarely have the knowledge or capacity to analyse the entirety of the VPN encryption devices they are using. While this is already worrisome, the impact on critical infrastructures and high-security systems deserves particular scrutiny. Government institutions have a vested interest in keeping classified information out of reach of potentially harmful parties. Furthermore, critical communication systems need to be reliable and protected at any cost in order to protect society. This study's purpose is to investigate the possibilities of building a VPN device with potentially untrusted components, yet with a high degree of assurance. Redundancy of subcomponents is a central concept and is elaborated further in the report. In detail, the idea is to employ redundancy in each block of subcomponents (hardware, operating system (OS), software, and protocols). A redundant system using different subcomponents, e.g. encryption algorithms and operating systems, will likely be less affected by the sudden appearance of vulnerabilities due to its flexible nature. A potential scenario is that an encryption algorithm is found to have a vulnerability: if the VPN device is already layering encryption algorithms, the consequences are less severe. This will certainly affect performance, which is investigated throughout this thesis.
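To make the layering idea concrete, the following Python sketch illustrates why compromising one layer does not expose the payload. It is purely illustrative and not part of the artefact: the XOR "ciphers" are insecure stand-ins for two distinct real algorithms (e.g. AES and ChaCha20), and all names are chosen here for the example.

```python
# Toy illustration of nested tunnels: two independent "ciphers" layered
# on top of each other. Repeating-key XOR is NOT secure -- it only stands
# in for two distinct real algorithms (e.g. AES and ChaCha20).

def xor_layer(data: bytes, key: bytes) -> bytes:
    """Apply a repeating-key XOR; the operation is its own inverse."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def nested_encrypt(payload: bytes, inner_key: bytes, outer_key: bytes) -> bytes:
    # Inner tunnel first, then the outer tunnel wraps the result.
    return xor_layer(xor_layer(payload, inner_key), outer_key)

def nested_decrypt(blob: bytes, inner_key: bytes, outer_key: bytes) -> bytes:
    # Unwrap in reverse order: outer layer, then inner layer.
    return xor_layer(xor_layer(blob, outer_key), inner_key)

payload = b"classified report"
blob = nested_encrypt(payload, b"inner-secret", b"outer-secret")

# Breaking ONE layer (here: knowing the outer key) still leaves the
# attacker with inner-layer ciphertext, not the plaintext.
partially_broken = xor_layer(blob, b"outer-secret")
assert partially_broken != payload
assert nested_decrypt(blob, b"inner-secret", b"outer-secret") == payload
```

The point of the sketch is architectural, not cryptographic: as long as the two layers use independent keys and independent implementations, an attacker who breaks one layer still faces the other.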

1.1 Literature Review

Literature and research on the trustworthiness and assurance of VPN encryption devices are sparse.

On the other hand, there is plenty of material on trust in subcomponents such as processors, encryption, and software. The relevant findings connected to this study are presented below. Research investigating the possibilities of developing a highly distrusting VPN device employing nested tunnels could not be identified during the literature review. After discussion with the external supervisor Daniel Arvidsson, this may be due to the proprietary nature of commercial VPN devices. The idea for this thesis stems from the need for high-assurance VPN connections and was identified by Daniel Arvidsson; it can therefore be classified as a niche topic that has not yet been investigated by researchers.

Nevertheless, the following subsections provide a frame of information about the security of subcomponents. They also show results from other researchers tackling the issue of assurance


of the defined subcomponents. These results cannot always be applied directly, due to their high degree of complexity.

1.1.1 Trust

Researchers from different fields have published plenty of definitions of trust. Gollmann identifies the diverging definitions as problematic and states: “First, security needs clarity and precision. Using a term like trust that has many different meanings (many more than explored in this paper) is unlikely to promote clarity and precision” [1]. Without a shared notion of trust, discussing trusted systems becomes increasingly difficult: if a project leader has a different understanding of trust than the employees, communicating about trusted systems according to the project leader's vision will prove problematic. Important to mention is that trust in components and humans does not necessarily lead to a secure system, but rather towards predictability [6].

In their article, McKnight et al. investigated over sixty-five definitions of trust from various researchers and developed an interdisciplinary typology for trust. They condensed sixteen categories of trust into four trust constructs. The constructs that apply best to the relationship between users and vendors are Institution-based Trust (IBT) and Trusting Beliefs (TB). IBT is defined as follows: “Institution-based Trust means one believes the needed conditions are in place to enable one to anticipate a successful outcome in an endeavor or aspect of one's life”. The definition of TB according to [7] is: “Trusting Beliefs means one believes (and feels confident in believing) that the other person has one or more traits desirable to one in a situation in which negative consequences are possible”. The sub-constructs of IBT and TB will serve as the trustworthiness criteria of this study. Necessary to mention is that situational normality has been excluded, since longer observation periods would be necessary for an accurate assessment.

Therefore, the criteria for assessing trust are the following:

• Structural Assurance

• Competence

• Benevolence

• Integrity

• Predictability

Structural Assurance achieves assurance through regulations, guarantees, contracts, processes, policies, and other methodical approaches. A product is more likely to be trusted if the vendor shows competence in its core business, for example by publishing scientific papers or participating in conferences. Benevolence is also considered an important factor in today's IT landscape: even technology giants like Intel and Samsung present a benevolent attitude, such that one may believe they are human and caring, and this often extends to several topics beyond their customers' privacy. A major issue in technology is integrity; even well-respected organizations such as the NSA, NIST, and the IETF lost credibility after the Snowden revelations [2]. Lastly, predictability describes that a party is trusted more if its actions are consistent and therefore foreseeable.

A problem with trust in general is that it offers no assurance of the correctness of a fact. The Oxford dictionary defines trust as follows: “Firm belief in the reliability, truth, or ability of someone or something”. Merely believing in the security of a nuclear power plant seems oddly unsatisfactory; therefore, a system should be trustworthy instead of simply trusted. Trustworthy is defined by the Oxford English dictionary as: “Able to be relied on as honest or truthful”. Trustworthiness is the property of a computer system to function exactly as intended. In the case of VPN devices, trustworthiness is the assurance of secure communication over an insecure medium, such as the Internet. Therefore, critical encryption devices should not merely be trusted, but should demonstrate a high level of trustworthiness.
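To illustrate how the five criteria above could be operationalized, the following Python sketch aggregates per-criterion ratings into a single score. The 1-5 scale, the equal weighting, and the example ratings are assumptions made here for illustration; they are not the scoring method used in the thesis.

```python
# Hypothetical sketch: aggregating the five trust criteria (from the
# McKnight et al. constructs) into one score per vendor/subcomponent.
# The 1-5 scale and equal weighting are assumptions for illustration.

CRITERIA = ("structural_assurance", "competence", "benevolence",
            "integrity", "predictability")

def trust_score(ratings: dict) -> float:
    """Average the 1-5 ratings over the five criteria."""
    if set(ratings) != set(CRITERIA):
        raise ValueError("a rating is required for every criterion")
    return sum(ratings[c] for c in CRITERIA) / len(CRITERIA)

# Example: a fictitious vendor rated on each criterion.
vendor = {"structural_assurance": 4, "competence": 5, "benevolence": 3,
          "integrity": 4, "predictability": 4}
print(trust_score(vendor))  # -> 4.0
```

A real assessment might weight the criteria differently (e.g. integrity higher than benevolence); the point is only that the five sub-constructs yield a comparable score per subcomponent.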

1.1.2 Hardware

Hardware is the base layer of every IT system, and a vast variety of hardware is in use all over the globe. A hardware platform is a physical component consisting of a circuit board and several integrated circuits (ICs). These circuits are fabricated in semiconductor fabrication plants, commonly located in Asia for reasons of production cost. These factories are often shared by several customers that need customized hardware for their projects. Companies like Samsung and Intel hold respectable market shares, but there are also other producers with different supply chains and security mechanisms. Commonly, the Trusted Platform Module (TPM) is considered the root of trust, which means that, in case of compromise, the entire system must be considered endangered. ISO/IEC 11889-1:2009 defines the TPM and its use.

It is described as a passive component that receives commands and produces responses. The commands sent to the TPM perform primitive and confidential calculations, mostly related to cryptographic keys [4]. Highly complex hardware builds the foundation for every server, PC, smartphone, and more, each with different operating systems and user applications. The problem with low-level hardware products is their high complexity and the difficulty of analysing them. A research team around Joanna Rutkowska faced the challenge of analysing an Intel x86-based notebook. Their results were insufficient boot security and the identification of a black-box element that cannot be analysed. This element (the Intel Management Engine) has access to memory as well as the network card, which poses a risk to the assurance of data confidentiality [8]. This is just one example that highlights the severity of the problem: it is not advisable to simply trust hardware manufacturers. The disclosures of Spectre and Meltdown, presented by joint researchers from different universities, further strengthen this statement [9] [10]. Given those vulnerabilities, millions of devices are exposed if no


countermeasures are taken by the processor manufacturers or end users. Another important task in hardware security is securing the supply chain against malicious manipulation.

Adam Waksman identified the hardware supply chain as a weak link in the system and developed an approach using self-built tools to prevent hardware manipulation during the production process [11]. The same problem is discussed in a paper by researchers of University College London and Masaryk University, cooperating with the security governance company “Enigma Bridge”. Their product, called Myst, combines redundancy and cryptographic techniques to withstand malicious or faulty subcomponents [12]. These artefacts are not mature enough to be used at larger scale, and incorporating them would more than double the time required for this thesis; still, they seem worth mentioning for the interested reader.

Another angle on hardware security is the view of the industry, which helps in understanding the entire scope of this study. The industry traditionally has three methods to ensure product trustworthiness: trusted foundries, split manufacturing, and post-fabrication inspection. The Trusted Foundry program, initiated by the US Department of Defense, accredits hardware suppliers if they have a trusted product flow: ”A trusted supply chain begins with trusted design and continues with trusted mask, foundry, packaging/assembly, and test services” [13]. Establishing secure, separate product flows and trusted associations is costly and requires the foundry to be run in-house; even so, the risk of an inside job is always present.

Split manufacturing is more cost-effective: the design is handled in-house while fabrication is handled by a third-party supplier. Integrated circuits can fall victim to attacks between production and arrival on-site. Chen et al. analysed whether split manufacturing can prevent attacks such as hardware Trojans, with unsatisfactory results; however, they present approaches for defending against the most common attacks [14]. The lowest level of trustworthiness is achieved by post-fabrication inspection: the previously mentioned papers show that, despite intense post-fabrication inspection, malicious ICs end up in production environments.

1.1.3 Operating System

Hardware security is the foundation for building a secure system; on top of it lies the operating system. Research and regulations concerning operating systems were assessed prior to the research phase of this thesis. The International Organization for Standardization (ISO) standard 15408-3:2008 covers the methodology of grading operating system software with security levels, enabling end users to decide whether a product meets their security requirements [15]. Evaluation Assurance Levels (EAL) range from one to seven, with seven denoting the highest assurance. Operating system vendors like Cisco, Checkpoint, and Red Hat have achieved EAL 4+. Section 2.1 explains further why EAL might not be the best sole criterion for the assurance of operating systems. An interesting approach worth mentioning is QubesOS: Joanna Rutkowska and her team created an operating system that follows the principle of ”security by compartmentalization“. It consists of a bare-metal hypervisor that invokes lightweight virtual machines for the user's various activities (browsing, USB usage, etc.). Unfortunately, QubesOS's hardware requirements cannot be met by an affordable Single Board Computer (SBC) [16].

1.1.4 IPsec Applications

On top of the operating system, applications are installed; assembling a system from untrusted components therefore also involves software. This study's concerns are limited to IPsec applications. A major problem in application security is assuring the correctness of third-party software: in the past, there have been several major security breaches because companies relied on flawed third-party software. A paper by researchers of Carnegie Mellon University asks whether third-party certification of software is necessary. It reaches no conclusion that would yield a definite answer to the problem, but it contains information relevant to this thesis [17]. Other papers propose different models for mitigating the risk of implementing third-party software: one is based on software wrapping, which offers developers an understanding of what the third-party software does, while another presents three controls, two technical and one a clear policy on the usage of third-party software. The importance of collaborating with vendors is stated, but not shown [18] [19]. As with the other subcomponents, it would be possible to dive into the details and apply all models and recommendations from previous papers; this, however, is not the objective of this study. Instead, the definition of trust given in 1.1.1 will be empirically applied to vendors and products.

1.1.5 Protocols and Cryptography

IT systems range from a microchip in a personal computer to highly complex, global networks.

Common protocols and standards help build these systems. Most protocols are widely deployed and developed by organizations connected with governments. The question arises whether they really are as trustworthy as we might think, especially after the revelations of Edward Snowden became public. In an article that aims to summarize and order the leaked documents, the authors discuss the technicalities and procedures that led to what is known from the leaks. Standards are described as “an instance of governance in the absence of government”, since they define conditions all over the globe without a higher authority of control. This can be interpreted positively, due to the distribution of power, but also negatively, because of the missing control organ. Furthermore, most standards are overly complex and therefore often incorrectly implemented, which makes them vulnerable. The authors cite IPsec as an example: it has an unnecessary number of options, which invites less than correct implementations [2].

Encryption is another hot topic. According to the leaked documents, the National Security Agency (NSA) implemented a backdoor by design into the Dual Elliptic Curve Deterministic Random


Bit Generator (Dual EC DRBG). Still, it passed the requirements of ISO and the National Institute of Standards and Technology (NIST) and was implemented in multiple software libraries [20]. History has shown that this was not the only case of deliberate weakening of standards by political actors; troublingly, there may be more encryption standards out there that we blindly trust. At the Black Hat security conference in 2017, two researchers presented a mathematically backdoored algorithm: a symmetric encryption algorithm, BSA-1, with many parallels to the widely deployed, standardized AES algorithm.

Interestingly, BSA-1 passed all cryptographic tests of ISO and NIST, and the researchers state that no one has found the intentionally placed backdoor to this date. Hence, the question arises how one can believe that the cryptographic standards, shaped by the enormous cryptographic competence of the NSA, do not contain a similar backdoor [21]. Another problem is unintentionally insecure or badly implemented protocols. An example is the key exchange protocol Diffie-Hellman: in the journal article “Imperfect Forward Secrecy: How Diffie-Hellman Fails in Practice”, researchers present their findings on weaknesses in the protocol's implementations. They advise cryptographers and system architects to look at the other side of the table and collaborate more closely to prevent further security violations [22].

1.2 Research Question

• How can trust models be employed on the subcomponents of VPN servers?

• How can a trustworthy IPsec VPN device be designed from untrusted subcomponents, based on subcomponent redundancy?

1.3 Motivation

The idea for this thesis was developed together with Daniel Arvidsson and Tomas Stewén (Combitech). The expected result will benefit Combitech along with VPN/network architects and potential users. The results of this work provide a trust-oriented implementation of IPsec VPNs, one of the fundamental building blocks of privacy and communication security. The prototype introduces the concept of nested IPsec tunnels implemented on trust-evaluated subcomponents.

1.4 Limitation

The architecture designed in this study could be taken to a new level with more resources available. This project had the time frame of a regular master's thesis project, and processes needed to be adjusted accordingly; with a more extensive time frame, further research and features could have been pursued. Another crucial limitation is financial: during the phase of determining the final choices of subcomponents to be


implemented, some candidate components had to be excluded because of their price. During the research, it became clear that the potential extent of this topic exceeds the limits of a master's thesis; researching hardware trustworthiness alone would provide enough content for extensive research. An expert reader might therefore find the results slightly shallow. Nevertheless, this study provides an overview of the concept of trust and can serve as an entry point into further research.

1.5 Target Group

This study is aimed at technical operators whose responsibilities include the implementation and maintenance of VPN solutions. It may offer new views on current IPsec solutions and propose an alternative to them. Also, the architecture is mainly based on open-source components, which might be of interest to the open-source community. The design created in this study can be replicated and used to tunnel traffic anywhere in the world.

1.6 Outline

After the literature review in 1.1, the terminology in chapter 2 clarifies terms such as trust and trustworthiness that are essential for defining trust in third parties and their products. Afterwards, chapter 3, Methodology, introduces the concept of Design Science Research (DSR) and how exactly it is applied in this study. Chapter 4 validates the subcomponent choices and their levels of trust, and furthermore presents an initial design of the functional artefact. Lastly, chapter 5, Implementation, documents the actual assembly and configuration of the functional IPsec server architecture.


CHAPTER 2

THEORETICAL CONTEXT

This section aims to provide theoretical context, helping the reader to understand the following chapters. In order to understand trusted and untrusted VPN systems, the reader needs an exact understanding of the terminology used in this study.

It is of high importance to carefully distinguish between trust, trustworthiness, and assurance with regard to Information Technology (IT) security.

2.1 Assurance and Security

Discussing and designing a secure system based on untrusted subcomponents requires the reader to understand what “secure” exactly means. In reality, no system can be classified as entirely secure; it is more realistic to define and validate degrees of assurance that something is functioning as intended. One way to evaluate security requirements is the Common Criteria (CC). The CC were developed by the United States, Canada, France, Germany, the Netherlands, and the United Kingdom and became an ISO/IEC standard. They grew out of the Trusted Computer System Evaluation Criteria (TCSEC), often referred to as the “Orange Book”, as well as the European equivalent, the Information Technology Security Evaluation Criteria (ITSEC). A CC evaluation tests the Target of Evaluation (ToE) against sixty security functional requirements. Depending on the degree of success, a ToE can achieve one of seven Evaluation Assurance Levels (EAL); higher assurance levels imply a higher degree of security [15]. Criticism was levelled against the CC by William Jackson in his 2007 report: vendors of IT products such as Symantec, as well as researchers, see evaluation as necessary but label CC certification non-cost-effective, and products are often certified solely in order to be sellable to governmental institutions. Another flaw is that EAL1-EAL4 only require theoretical documentation of the ToE and the development process for evaluation. Critics argue that


evaluating a product solely on its paperwork is insufficient [23]. The 1991 book “Computers at Risk” discusses the predecessors of the CC, as well as guidelines on how to achieve a higher degree of assurance. Even though it was published twenty-six years ago, the procedure is still relevant. Assurance evaluation is divided into two stages: design evaluation and implementation evaluation. Evaluation of the design assures that the design provides the necessary functionality, and should be performed by an external, qualified party. Detecting and correcting problems in the design phase is considered cheaper and less troublesome than in later stages, such as the implementation phase [24].

Vendors' implementations of IPsec encryption devices are tested by laboratories such as the International Computer Security Association (ICSA) or FIPS 140-1-approved testing labs. Often, cost-efficiency is the critical factor deciding whether a product is certified or not; certifying this study's prototype will not be possible, for cost reasons.

2.2 IPsec

VPNs can use different protocol suites to operate. Popular choices are the Layer 2 Tunnelling Protocol (L2TP), OpenVPN, the Secure Socket Tunnelling Protocol (SSTP), and IP Security (IPsec).

For this study, IPsec was chosen since it operates independently of the applications running on top of it. Furthermore, IPsec is the main VPN standard, defined in various IETF documents [25]; thus, it is preferable for organizations that have compliance responsibilities towards a third party.

IPsec was developed to add an additional security layer on top of the IP protocol. In the Request for Comments (RFC) defining the IPsec standard, the Internet Engineering Task Force (IETF) states the services as follows: “The set of security services offered includes access control, connectionless integrity, data origin authentication, detection and rejection of replays (a form of partial sequence integrity), confidentiality (via encryption), and limited traffic flow confidentiality” [26]. Problems with IPsec were identified when the German news magazine Der Spiegel published leaked documents. These leaked NSA documents show that the key exchange protocol embedded in IPsec, the Internet Key Exchange (IKE), contains vulnerabilities, which may result in a leak of the symmetric keys used for encryption [27]. Security researchers have analysed the documents and stated that, if correctly implemented, there is no risk in using IPsec; correct implementation involves the use of Perfect Forward Secrecy and the avoidance of Pre-Shared Keys (PSK).

2.2.1 Symmetric vs. Asymmetric cryptography

The understanding of encryption algorithms is essential to understanding IPsec, including IKE. Encryption algorithms lay the foundation for authenticity, confidentiality, and integrity in computer systems. Cryptographic algorithms can commonly be grouped into two categories:

public-key and private-key, i.e. asymmetric and symmetric algorithms. Symmetric cryptography utilizes


a single private key that encrypts and decrypts data. It is imperative to protect this key at all costs; otherwise, the payload is endangered. A problem with symmetric-key cryptography is that it does not scale on a system such as the Internet, due to the impracticalities of key distribution.

Asymmetric cryptography solves this problem by utilizing a secret/private key and a public key.

The public key can be published and stands in mathematical relation to the private key in such a way that authentication and encryption can be performed.
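The scalability difference can be made concrete with a short counting sketch (purely illustrative, not part of the artefact): with symmetric cryptography every pair of communicating parties needs its own shared secret, whereas with asymmetric cryptography each party needs only one key pair.

```python
def symmetric_keys_needed(parties: int) -> int:
    # With purely symmetric cryptography, every pair of parties
    # must share its own secret key: parties choose 2.
    return parties * (parties - 1) // 2

def asymmetric_keys_needed(parties: int) -> int:
    # With asymmetric cryptography, each party needs one key pair
    # (one private key kept secret, one public key published).
    return 2 * parties

print(symmetric_keys_needed(1000))   # 499500 secret keys to distribute
print(asymmetric_keys_needed(1000))  # 2000 keys, only 1000 of them secret
```

For a thousand parties, the symmetric approach requires almost half a million distributed secrets, which is exactly the impracticality of key distribution mentioned above.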

2.2.2 Key Exchange

In IPsec, keys can be exchanged out of band, e.g. via USB sticks or physical delivery, or via the IKE protocol. Manual key exchange runs into scalability problems as the number of communication partners increases. The scalable IKEv1/IKEv2 protocol utilizes the Internet Security Association and Key Management Protocol (ISAKMP) to establish the previously mentioned SA. Since this study focuses on embedding redundancy in mechanisms, one VPN tunnel will utilize IKEv2 while the other will use keys transported out of band. The reasons for using IKEv2 over IKEv1 are improvements such as mobility support, NAT traversal, restriction to secure cryptographic properties, and DDoS resilience mechanisms.
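The mathematical core of IKE’s key agreement is a Diffie-Hellman exchange. The following toy sketch illustrates only the principle; the parameters are arbitrary small values chosen for illustration, not one of the standardized DH groups that IKEv2 actually negotiates.

```python
import secrets

def dh_shared_secrets(p: int, g: int):
    # Each side picks a private exponent and sends only g^x mod p.
    a = secrets.randbelow(p - 2) + 1   # initiator's private value
    b = secrets.randbelow(p - 2) + 1   # responder's private value
    A = pow(g, a, p)                   # initiator -> responder
    B = pow(g, b, p)                   # responder -> initiator
    # Both sides now derive the same secret without it ever crossing
    # the wire: (g^b)^a = (g^a)^b (mod p).
    return pow(B, a, p), pow(A, b, p)

initiator_secret, responder_secret = dh_shared_secrets(p=0xFFFFFFFB, g=5)
assert initiator_secret == responder_secret
```

Because fresh exponents can be drawn for every negotiation, this construction is also what enables the Perfect Forward Secrecy recommended earlier.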

2.2.3 Functionality of IPsec

The two main protocols are the Authentication Header (AH) and the Encapsulating Security Payload (ESP). AH only provides authentication, whereas ESP offers authentication as well as encryption.

Since this study requires both, ESP is the protocol of choice. The datagram of a regular IP packet differs from that of an IPsec packet. The exact differences between an IP datagram and an IPsec ESP tunnel-mode datagram can be seen in Figure 2.1, taken from [28]. The payload of the outer IPv4 packet is used to nest the encrypted IP and TCP headers, as well as the payload, along with some other necessary values. Thus, a potential eavesdropper can see neither the content nor the final recipient’s IP address.

In order to establish an IPsec tunnel between two hosts, a Security Association (SA) is needed. An SA is the relation between IPsec peers and defines which security services (algorithms, protocols, etc.) are used for the connection. SAs can be seen as unidirectional policies, which can be explained as follows: to initiate a two-way communication tunnel, the server offers the client an SA; if the client accepts this policy, it sends it back. A security association contains variables needed for a successful IPsec tunnel creation, such as a Security Parameter Index (SPI), the destination address, whether the connection will be based on the ESP or AH protocol, keys, and others. The SPI can be compared to port numbers in UDP- or TCP-based connections: it enables the receiving OS to identify the corresponding SA for an incoming packet and therefore how to process it further [29]. IPsec can operate in two modes, namely transport mode and tunnel mode. Transport mode only encapsulates the payload of the raw IPv4 packet, whereas tunnel mode encapsulates the entire IP packet including its header. Therefore, tunnel-mode VPNs are commonly installed on gateways, which act as tunnel entrances or exits.


Figure 2.1: IP vs. IPsec datagram

After arrival of the packet, the outer IP and ESP headers are stripped and the original IP packet is delivered to its destination address. This study will focus on a solution based on ESP in tunnel mode.
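The tunnel-mode encapsulation described above can be sketched at the byte level. This is a deliberately simplified model: the XOR cipher stands in for ESP’s real encryption (e.g. AES), the fixed strings stand in for real IP headers, and the ESP trailer fields (padding, next header, ICV) are omitted.

```python
def xor_cipher(data: bytes, key: bytes) -> bytes:
    # Placeholder for ESP's real encryption; XOR is NOT secure.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

OUTER_HEADER = b"OUTER-IP-HEADER-20B!"   # stand-in for the new outer IP header

def esp_tunnel_encapsulate(inner_packet: bytes, spi: int, seq: int, key: bytes) -> bytes:
    # Tunnel mode: the ENTIRE inner packet (original IP header + payload)
    # becomes the encrypted ESP payload behind a fresh outer IP header.
    esp_header = spi.to_bytes(4, "big") + seq.to_bytes(4, "big")
    return OUTER_HEADER + esp_header + xor_cipher(inner_packet, key)

def esp_tunnel_decapsulate(wire_packet: bytes, key: bytes) -> bytes:
    # The receiving gateway strips the outer IP and ESP headers and decrypts,
    # recovering the original packet for delivery to the final recipient.
    return xor_cipher(wire_packet[len(OUTER_HEADER) + 8:], key)

inner = b"\x45\x00INNER-IP-HEADER+TCP+PAYLOAD"
wire = esp_tunnel_encapsulate(inner, spi=0x1001, seq=1, key=b"session-key")
assert esp_tunnel_decapsulate(wire, key=b"session-key") == inner
assert inner not in wire   # eavesdropper sees neither inner header nor payload
```

Nesting two tunnels, as this study proposes, simply repeats this encapsulation: the output of the first gateway becomes the inner packet of the second.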


CHAPTER 3: METHODOLOGY

The method applied in this study is called Design Science Research (DSR). In the Journal of Management Information Systems, Peffers et al. describe design science research and its suitability for academic IT projects. Researchers from different areas were involved in the outline of DSR and brought input from their fields in order to decide on the elements that should define the DSR process. Their argument revolves around whether traditional methodologies are necessarily the preferable approach to solving problems in information technology studies, since those methodologies are rooted in the social and natural sciences, while IT projects tend to aim at applicable research solutions [30]. According to Peffers et al., this is commonly the case for engineering projects but should also apply to computer science projects [31]. In his paper

“Design Science Research in Top Information Systems Journal”, the author argues that DSR publications are indeed needed to further drive innovation. He further elaborates that DSR helps combine all factors of Information Systems. Another phenomenon worth mentioning is the exponential growth of scientific progress around the globe, which requires academia to accept relatively new methodologies such as DSR [32]. DSR fits this project for multiple reasons. This project’s purpose is designing and finally creating a prototype of a VPN encryption device artefact, with a high focus on subcomponent trustworthiness. The creation of artefacts is and has been one main driving force of knowledge building. Examples are classical architecture and aeronautical engineering, where structured construction, followed by a reflection process, led humanity to its current state [31]. The main challenge of this study is the design and assembly of technologies from separate areas (hardware, OS, software, protocols/standards) into one artefact. There are different models for structuring a DSR study. After the literature review, the six-phase model of Peffers et al.

[30] seemed to be the preferable choice for this study. It consists of the following phases that


need to be carried out sequentially, with potential re-initiation of previous activities in case their output was insufficient.

3.1 Research Method

The process of DSR is summarized in the following activities, as defined by Peffers et al. [30]. Their correlation to related methodological work will be explained in the respective activities below.

Activity 1: Problem identification and motivation. In activity one, it must be justified why the solution is meaningful for the academic world. It should motivate the researcher and audience to pursue the solution. Furthermore, the value for non-academic parties can be stated.

Alongside the problem identification, the problem is divided into tasks which will be solved during this study [30]. Sonnenberg and vom Brocke describe a similar activity, involving practices such as literature review, assertion, and expert interviews, as the “justification” activity [33]. They combine problem identification and objective definition into one phase. This study’s problem can be defined as follows: as mentioned before, current IPsec devices, e.g. firewalls, rely on a single layer of subcomponents. If a subcomponent is found to have a vulnerability, the security of the entire system relies on patching or other measures from third parties. However, the priorities of vendors and consumers might not align, which puts the VPN device at risk. As Figure 3.1 shows, this study approaches this problem by configuring

Figure 3.1: IPsec topology

a site-to-site VPN architecture. In practice, it consists of two IPsec servers that are going to be designed to employ entirely different subcomponents. They will be configured to nest two IPsec tunnels towards each site. This means that an attacker must compromise both devices in order to break the system. This adds another layer of security. A vulnerability or backdoor in one server on any level will not necessarily result in the breach of the entire communication.

Activity 2: Definition of the objectives for a solution. In activity two, objectives for the artefact must be defined. These objectives are based on the previous problem definition. This can only be done with knowledge gathered from the literature review, along with a clear consideration of this study’s scope [30]. The objectives defined below are the indicators for evaluation


later in the study. The objectives for this project are the following:

Firstly, the components and organizations used in this project will be investigated with regard to their trustworthiness, based on the criteria justified and defined in 1.1.1. Each criterion is scored from one to five, with five being considered very well satisfied. The grading of the different subjects is based on material found throughout the research phase. The amount of material available on public platforms, such as the Internet and newspaper articles, differs from organization to organization.

Secondly, the artefact should be designed, prototyped, and well documented. It remains to be seen whether performance, cryptographic functionality, and the general implementation satisfy the following frameworks:

1. “IPsec Testing and Certification Program Criteria Version 3.0” by the independent testing organization ICSA [34]

2. Simplified performance test according to the IETF draft "Methodology for Benchmarking IPsec Devices" [35]

Activity 3: Design and development. In activity three, the artefact is designed and realized. An artefact can be an object, a model, a method, or something else that embeds value for academia as well as industry. Questions on how the functional requirements are satisfied are answered here. Peffers et al. further state that the design can only be moulded into the actual artefact by applying theory gathered in the literature research process [30]. The artefact of this study is an IPsec VPN device, created by carefully selecting, installing, and configuring hardware, operating systems, IPsec applications, protocols, and cryptographic algorithms. The word “carefully” implies that reasoning and justification are involved in the selection of components, as well as in other design choices. In the beginning, this artefact will be a conceptual design. Afterwards, the implementation/development will take place.

Here, it will be clear if the design can be applied in a practical context. Once a prototype is functional, activity four will be initiated.

Activity 4: Demonstration

The demonstration phase, as defined by Peffers et al. [30], aims to validate whether the artefact yields relevant results with respect to the problem definition. They mention that artefacts can be tested in experiments, case studies, or any other suitable activity. This activity can be correlated to the

“prototyping and experimentation” activity of Sonnenberg et al. [33]. The demonstration will be an experiment that establishes a nested VPN tunnel. Success at this phase is defined by communication through the nested tunnels. In the case of sufficient results, the evaluation phase is initiated, and a deeper inspection of the previously formulated objectives is performed.


Activity 5: Evaluation

Activity five reveals the success of the study. According to Peffers et al. [30],

phase five involves determining the level of satisfaction of the artefact with respect to the research questions. The objectives need to be brought into relation with the results. In activity two, trust evaluation criteria were defined in order to measure the trustworthiness of the artefact. The final “applicability check” phase of Sonnenberg and vom Brocke [33] can be correlated to activity five of Peffers et al. [30].

In order to evaluate the level of trust of the artefact, the values of each subcomponent will be accumulated into a final score. The score will be translated into a percentage value that reflects the level of trust. The higher this level, the more trustworthy the artefact can be considered.
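The translation from criterion scores to a percentage can be sketched as follows for a single subcomponent; the aggregation across all subcomponents works analogously. The sample scores are those assigned to SECO/UDOOx86 in Table 4.2.

```python
CRITERIA = ("Structural Assurance", "Competence", "Benevolence",
            "Integrity", "Predictability")
MAX_SCORE_PER_CRITERION = 5

def trust_percentage(scores: dict) -> float:
    # Accumulate the five criterion scores (each graded 1-5) and
    # translate the total into a percentage of the maximum (25).
    total = sum(scores[c] for c in CRITERIA)
    return 100.0 * total / (MAX_SCORE_PER_CRITERION * len(CRITERIA))

# Sample input: the scores assigned to SECO/UDOOx86 in Table 4.2.
seco_udoo = {"Structural Assurance": 4, "Competence": 5,
             "Benevolence": 4, "Integrity": 3, "Predictability": 4}
print(trust_percentage(seco_udoo))  # 80.0
```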

Afterwards, the performance of the artefact is measured. It needs to be noted that the above-mentioned testing frameworks will not be used in their entirety due to their extensive structure.

Only requirements related to the study objectives are going to be tested. In the respective sections, it will be explained why certain requirements might be relevant or irrelevant. The exact setup of the experiment will be explained in Chapter 7.

Activity 6: Communication

Communication is only mentioned in the paper of Peffers et al. [30]. Not only researchers and security experts, but also decision makers with little to no technical background, are intended to understand the relevance of the study. This study will be reviewed by Combitech beforehand in order to prevent any leak of confidential information. Afterwards, it will be made available to the public on the Digitala Vetenskapliga Arkivet (DiVA).

3.2 Reliability and Validity

The results of this thesis rest on two main pillars. The evaluation of trust can be seen as a starting point for a more in-depth analysis; every subcomponent trust analysis could have filled an entire thesis project if performed thoroughly. Nevertheless, all gathered information is backed by its references. The evaluation itself was conducted as objectively as humanly possible. Still, unconscious, subjective influence is likely to be found in the results, and the reader is encouraged to challenge and question the outcome of this evaluation. This builds the foundation for the more practical part. Testing the artefact against the previously mentioned frameworks is also an extensive task in itself. Limiting the frameworks to their most essential parts enabled the author to use a well-thought-out testing methodology without unnecessary overhead. In the end, the target of evaluation is a prototype, not a commercial product that requires extensive CC evaluation. Performance tests were executed in the same environment, in short succession, in order to decrease non-IPsec-related noise.

Noise can be parallel traffic, high CPU usage, or other factors. This ensures that measured values can


be compared to the best of the author’s belief.

3.3 Ethical Considerations

This work was developed without research participants and therefore cannot harm anyone’s privacy or dignity. Persons mentioned in the report gave consent to being mentioned or are known as public personalities. Furthermore, all non-material resources used in this study can be found in the reference section at the end of the thesis. The thesis itself will be made publicly available after being reviewed by Combitech for potential information leakage. Data and results presented in this study are either taken from other researchers, marked with a citation, or developed by the author of this thesis.


CHAPTER 4: DESIGN

Before diving into the design phase, the general architecture must be explained in order for the reader to understand the design choices. The artefact of this study is a nested site-to-site IPsec architecture. The initial idea of the topology can be seen in Figure 4.1.

The corresponding hosts establish tunnels between each other, resulting in a layered tunnel over the insecure medium. This thesis revolves around the idea of trust. All commercial VPN

Figure 4.1: IPsec topology

servers rely on hardware, OSs, software, and algorithms. It is therefore necessary to analyse all subcomponents thoroughly and evaluate potential alternatives that may be implemented in the artefact. Besides the previously mentioned evaluation of products and the organizations behind them, practicality is also important to consider; the best subcomponents cannot be utilized if they are not compatible. To counter this problem, a bottom-up approach was followed: the first decision revolves around the hardware, followed by the OS, applications, and so on.


4.1 Hardware

Hardware is the foundation of every computer system. As presented in the literature review in 1.1, security issues exist in processors, RAM, graphics cards, and other parts. The main issue of this industry is its dependence on a chain of trust, upon which every intermediary party relies. Simply put, the majority of hardware is based on microcontrollers fabricated by a few companies. If one of these vendors has a security bug in one of its products, the flaw cascades down to various end devices. An example of this was the ROCA vulnerability, a cryptographic weakness located in the software library RSALib of Infineon, a major microcontroller vendor from Germany. According to the researchers, it affected millions of end devices [3]. A design decision for this study’s artefact is to base the VPN device on Single-Board Computers (SBCs). One reason behind this decision is that the performance requirements of small/medium-sized businesses are likely to be satisfied by capable SBCs. Common proprietary products of market leaders often come with a considerable price tag, license costs, and service fees. Another idea is to case the two connected units into a single product of small form factor. Baseline requirements for the selection of SBCs are the following:

• CPU with minimum 2.0 GHz

• 2GB of RAM

• Gigabit Ethernet

• Max. 200 €/device
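Expressed as a small, hypothetical filter (the candidate entries and their spec values below are illustrative, not measured data from the thesis), the baseline check amounts to:

```python
def meets_baseline(board: dict) -> bool:
    # Encodes the four baseline requirements listed above.
    return (board["cpu_ghz"] >= 2.0          # CPU with minimum 2.0 GHz
            and board["ram_gb"] >= 2         # 2 GB of RAM
            and board["gigabit_ethernet"]    # Gigabit Ethernet
            and board["price"] <= 200)       # max. 200 per device

# Hypothetical candidate boards; names and values are placeholders.
candidates = [
    {"name": "board-a", "cpu_ghz": 2.24, "ram_gb": 4,
     "gigabit_ethernet": True, "price": 150},
    {"name": "board-b", "cpu_ghz": 1.4, "ram_gb": 1,
     "gigabit_ethernet": False, "price": 35},
]
shortlist = [b["name"] for b in candidates if meets_baseline(b)]
print(shortlist)  # ['board-a']
```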

Furthermore, both SBCs should integrate different processor brands. Even though Spectre and Meltdown affect the majority of Intel, AMD, and ARM processors, the likelihood of all three types of processors falling victim to the same future vulnerabilities can be considered lower. During the hardware search, it became obvious that hardly any boards based on x86 processors complying with the previously mentioned parameters could be found. The most promising board is the UDOOx86.

The UDOOx86 was developed by the company SECO in cooperation with Aidilab from Italy. It utilizes the Intel Celeron N3160 processor; hence not only SECO’s but also Intel’s level of trustworthiness needs to be examined. For the other device, the ODROID-XU4 board with an embedded ARM architecture is the preferred choice.

4.1.1 UDOOx86

The company behind the UDOOx86 is SECO, which is based in Italy with branches in other countries.

SECO’s manufacturing is entirely in-house, from design through hardware, software, and testing [36]. This limits the risk of problems in the supply chain from potential third-party producers.

Unfortunately, there are no security-oriented reviews of the UDOOx86. On the other hand, no vulnerabilities are to be found in any public vulnerability database. In terms of reputation,


Pros:
• Transparency of board components (excluding processor)
• SECO with thirty years of business experience
• In-house production -> control over entire production process
• Quality Management Systems (ISO 9001:2008) certified
• Engaged in research and development

Cons:
• No CC evaluation
• Processor vulnerable to Spectre/Meltdown vulnerabilities
• Integrity issues of Intel CPU according to [16]

Table 4.1: Pros and cons of SECO and their product: UDOOx86

SECO appears to be focused on business rather than on appearing in the press. On the other hand, Intel is the market leader for CPUs with a market share of 78 percent. Intel receives considerable respect from the industry and has immense financial resources to push innovative products.

The Intel Transparent Supply Chain addresses the need of end users to track their parts and to prevent malicious “dangerous fakes” from being introduced. Still, some vulnerabilities appear in Intel products: over the last nineteen years, 103 vulnerabilities were registered in the vulnerability database of cvedetails.com [37]. Spectre and Meltdown, as well as the impenetrability of the Intel Management Engine (ME) mentioned in 1.1.2, have impacted the trustworthiness of Intel heavily.

Moreover, the duopoly nature of the processor market leaves little choice but to buy components from Intel or AMD. Leading vendors like Cisco also use Intel processors. This analysis of all attainable factors of SECO and the UDOOx86 leads to the numerical score seen in Table 4.2.

Criteria for Trust      Score
Structural Assurance    4
Competence              5
Benevolence             4
Integrity               3
Predictability          4
Total                   20

Table 4.2: Trust Score for SECO/UDOOx86


Pros:
• Transparency of board components (excluding processor)
• No known vulnerabilities of ODROID (excluding processor)
• Active community
• Passed EC declaration, meaning that the XU4 is aligned with the technical drawings

Cons:
• No CC evaluation
• Processor vulnerable to Spectre/Meltdown vulnerabilities
• Little information about Hardkernel (even in Korean)
• No information about manufacturing process

Table 4.3: Pros and cons of Hardkernel and their product ODROID-XU4

4.1.2 ODROID-XU4

The second board had to utilize a different processor. Since AMD processors also rely on the x86 architecture, the search was narrowed down to ARM processors. After researching different products, the ODROID-XU4 was chosen. ODROID platforms are developed by Hardkernel Co., Ltd., which is based in South Korea. Hardkernel supports the open-source movement, with its production and design process being completely transparent. On their wiki page, information such as revision history, block diagrams, schematics, the board layout, and in-depth specifications can be found. Opening up the design strengthens trustworthiness, since it is less likely that backdoors are hidden. Nevertheless, there is no definite proof of a higher or lower level of trust with regard to closed- or open-source design choices [38]. In vulnerability databases, neither ODROID nor Hardkernel appears, which can be an indicator of a secure product.

As the processor of choice, engineers at Hardkernel implemented the Samsung Exynos 5422, which combines a Cortex™-A15 cluster at 2 GHz with a Cortex™-A7 cluster into one octa-core chip. This concept is called ARM big.LITTLE and aims to intelligently incorporate a powerful processor and a battery-saving processor into one. These CPUs are prone to speculative-execution and indirect-branch-prediction attacks, just like the previously presented Intel processors. ARM processors for the 64/32-bit architecture were introduced in 2011 and are therefore relatively new. On CVE Details, only twelve vulnerabilities are registered, including Spectre and Meltdown. Historically, no scandals or major upsets concerning ARM Holdings were found. Nevertheless, ARM sells the architecture as intellectual property, in this case to Samsung, which may add any features it desires. The final processor can be referred to as a System on Chip (SoC). ARM TrustZone intends to establish trust in ARM platforms, but Joanna Rutkowska describes that vendors can turn TrustZone into something similar to Intel ME [8]. In the end, this does not positively affect the trustworthiness.

This analysis of all attainable factors of Hardkernel and the ODROID-XU4 leads to the numerical score seen in Table 4.4.


Criteria for Trust      Score
Structural Assurance    2
Competence              4
Benevolence             5
Integrity               4
Predictability          3
Total                   18

Table 4.4: Trust Score for Hardkernel / ODROID-XU4

4.2 Operating System

On top of the hardware resides the operating system. When researching compatible operating systems, trust and security were of great importance. Unfortunately, it was not possible to receive free copies of Microsoft’s and Apple’s operating systems; with regard to redundancy, using two completely different operating systems would have been of great interest. Nevertheless, two Linux distributions with different origins were chosen: RHEL version seven, which is based on the Fedora distribution, and Debian Jessie. These operating systems were then analysed against the trust requirements defined in 1.1.1.

4.2.1 RedHat Enterprise Linux

On the UDOOx86 board, the Linux enterprise solution RHEL provided by Red Hat will be installed. For developers, RHEL is free of charge but may not be used for commercial operation.

This implies that a license would be required for the artefact in the case of productive use. Another option would be the use of a non-commercial operating system. RHEL is based on its predecessor Red Hat Linux, which was discontinued in 2003 in favour of RHEL. In 2016, RHEL version seven achieved Common Criteria certification at EAL 4+. Security functionalities in RHEL include auditing, cryptographic support, packet filtering, identification, authentication, discretionary access control, mandatory access control, security management, runtime protection mechanisms, and Linux container framework support [39]. A CC evaluation ensures a high level of structural assurance. Red Hat supports open-source software organizations in order to improve the quality of open-source products. This way, it benefits from the fact that open-source software is reviewed by many eyes. In return, Red Hat hardens the security of these products to use them in its enterprise solutions. Therefore, it can be said that RHEL products are partially open source.

RHEL was used by all of the airlines, telecommunication companies, healthcare companies, and commercial banks on the 2014 Fortune 500 list [40]. These areas have critical security requirements, and this consensus indicates strong competence within Red Hat and their product. Furthermore, Red Hat practises reactive product security, which has led to fast reactions to critical vulnerabilities. In


Pros:
• CC EAL 4+ evaluation
• Support of Open Source community
• Widely used in critical business areas
• Fast and transparent reaction towards vulnerabilities

Cons:
• Hardening, refining of Open Source software not transparent
• License costs

Table 4.5: Pros and cons of RHEL v.7

RHEL version seven, 82 percent of all vulnerabilities were fixed within one day [41]. Documenting vulnerability management publicly indicates integrity and benevolence. RHEL releases new versions about every three to four years, and every version since five has been supported for a minimum of ten years. This means that users can predict the near future of their operating system.

This analysis of all attainable factors of RHEL v.7 yields the numerical score seen in Table 4.6.

Criteria for Trust      Score
Structural Assurance    5
Competence              5
Benevolence             3
Integrity               5
Predictability          5
Total                   23

Table 4.6: Trust Score for RHEL v.7

4.2.2 Debian Jessie

The operating system running on the ODROID-XU4 was Debian Jessie. One reason is that Debian is developed by users from all over the world; there is no company behind Debian with a financial interest in adding unnecessary features. This strengthens the benevolence of Debian. Debian also states that it is not certified under CC or by another organization for cost reasons. Still, certain parts of the CC certification process are included in the Linux Test Project (LTP), which is available for Debian. Releases of Debian come in three flavours:

stable, testing, and unstable, with stable being the recommended version to use. The testing distribution is “frozen” when considered mature enough. Frozen means that the development of new features is slowed down, sometimes stopped completely. When the number of bugs is below the


Pros:
• Holistic approach of transparency
• Quality assurance process
• Strong community following ethical values
• Non-profit

Cons:
• No CC certification

Table 4.7: Pros and cons of Debian

maximum allowed limit, the frozen testing distribution becomes the new stable version [42]. This shows that the developers of Debian carefully inspect and release their distribution, which supports structural assurance. To become a Debian Developer (DD), the community requires active work in some form for the Debian project, e.g. maintenance of packages. This ensures that developers display a certain degree of knowledge, which supports the competence criterion. Unlike most companies, the developers also provide technical support, often leading to users receiving help in less than fifteen minutes. Integrity is provided by the open culture embedded in the Debian project. The entire structure of Debian can be seen on the website of the Debian operating system [43]. Furthermore, all sources are publicly available. The stability and consistency found throughout Debian support the last criterion: predictability. Users of Debian can expect the current testing version to be the next stable version. The summarized evaluation of Debian with regard to trust can be seen in Table 4.7. This analysis of all attainable factors of Debian yields the numerical score seen in Table 4.8.

Criteria for Trust      Score
Structural Assurance    4
Competence              5
Benevolence             5
Integrity               5
Predictability          4
Total                   23

Table 4.8: Trust Score for Debian

4.3 IPsec Applications

VPN server applications are the next building block on top of the OS. Two Linux-supporting IPsec suites needed to be chosen. After researching potential candidates, the decision fell on strongSwan and Libreswan. The similar names of strongSwan and Libreswan might make it


Pros:
• Holistic approach of transparency
• Implements IKEv1 (RFC 2409) and IKEv2 (RFC 4306)
• Decreasing number of vulnerabilities over the last versions

Cons:
• No CC certification

Table 4.9: Pros and cons of strongSwan

seem to the reader that they are related. The thought is not far-fetched, since both originate from the discontinued FreeS/WAN IPsec project. Libreswan stayed closer to the original implementation, while strongSwan was completely rewritten. Hence, the redundant approach of this thesis is supported.
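As a preview of the configuration work, a minimal strongSwan ipsec.conf connection definition for one ESP tunnel in tunnel mode with IKEv2 might look like the sketch below. All names, addresses, and subnets are placeholders rather than the values used in the artefact, and Libreswan uses a similar but not identical syntax.

```
# Hypothetical connection definition; addresses and subnets are placeholders.
conn site-a-to-site-b
    keyexchange=ikev2
    type=tunnel
    authby=pubkey
    # local gateway and the network behind it
    left=192.0.2.1
    leftsubnet=10.0.1.0/24
    # remote gateway and the network behind it
    right=198.51.100.1
    rightsubnet=10.0.2.0/24
    # IKE and ESP proposals
    ike=aes256-sha256-modp2048
    esp=aes256-sha256
    auto=start
```

Using authby=pubkey rather than a shared secret reflects the earlier remark that correct IPsec implementation avoids Pre-Shared Keys.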

4.3.1 strongSwan

strongSwan is an IPsec implementation that is completely open source. Its roots lie in the discontinued FreeS/WAN project. The leading maintainer is Andreas Steffen, professor for security in communications and head of the Institute for Internet Technologies and Applications at the University of Applied Sciences Rapperswil in Switzerland. The entire architecture and source code are transparent, leading to increased integrity and benevolence. This is further supported by the open culture of issue tracking. strongSwan libraries are tested with unit tests on a Kernel-based Virtual Machine (KVM) to ensure the same environment for each test. Still, no other information about the actual development processes or certification can be found, which has a negative impact on structural assurance. strongSwan was analysed on the online platform Black Duck Open Hub (BDOH). Its results include contributors, vulnerabilities, commits, and more. Two metrics are calculated for each project uploaded to BDOH: the Security Confidence Index (SCI) and the Vulnerability Exposure Index (VEI). Further information on the calculation of these values can be found on the BDOH website. strongSwan is an active project and scores better than average on the SCI and nearly perfectly on the VEI metric. These are indicators of competence as well as integrity of the development team. On Open Hub, it could be established which developers made which commits. The main contributors are Tobias Brunner and Andreas Steffen, with about six to seven hundred commits each over the past years.

Both are highly proficient in developing security-focused applications, including strongSwan, which indicates high competence. strongSwan has been under development since 2005 and can be considered stable. Additionally, the support of the University of Applied Sciences Rapperswil is a positive indicator for predictability. This analysis of all attainable factors of strongSwan yields the numerical score seen in Table 4.10.


4.3. IPSEC APPLICATIONS

Pros:
- High level of transparency
- Implements various RFC standards
- Few reported vulnerabilities
- Favorable security track-record

Cons:
- No CC certification
- Open vulnerabilities found

Table 4.11: Pros and cons of Libreswan

Criteria for Trust      Score
Structural Assurance    3
Competence              4
Benevolence             5
Integrity               5
Predictability          4
Total                   21

Table 4.10: Trust Score for strongSwan
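The totals in the trust-score tables are obtained by simply summing the five criteria, each rated on a scale of 1 to 5. A minimal sketch of this aggregation, using the strongSwan values from Table 4.10 (the dictionary structure is only an illustration, not part of the thesis method):

```python
# Illustrative sketch of the trust-score aggregation used in the
# tables: five criteria, each rated 1-5, summed into a total.
strongswan_scores = {
    "Structural Assurance": 3,
    "Competence": 4,
    "Benevolence": 5,
    "Integrity": 5,
    "Predictability": 4,
}

total = sum(strongswan_scores.values())
print(f"Total trust score: {total}")  # Total trust score: 21
```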

4.3.2 Libreswan

As described before, Libreswan also originated from the discontinued FreeS/WAN project. Just like strongSwan, Libreswan has a highly transparent development culture. The source code is published in its GitHub repository and updated continuously. Transparency automatically affects integrity positively. The authors of Libreswan can also be found on GitHub. Most commits are made by Paul Wouters, who was actively involved in the FreeS/WAN project. This can be an indicator of competence. Nevertheless, very little information about the development process is given. Similar to strongSwan, there are no policies or certifications describing the workflow of Libreswan, which has a negative impact on structural assurance. However, Libreswan satisfies several RFC standards regarding IPsec, IKEv1, and IKEv2. Security vulnerabilities are reported on the website, with links to the official CVE entries. This way, users can immediately see what potential problems could arise, which positively affects benevolence. On BDOH, Libreswan scored nearly perfectly in the previously mentioned vulnerability analysis. The development team works on Libreswan constantly, which can be concluded by inspecting the GitHub repository.

The maintenance of the source code is predictable and steady, whereas announcements are distributed solely via a mailing list. This analysis of all attainable factors of Libreswan yields the numerical score shown in Table 4.12.


Criteria for Trust      Score
Structural Assurance    3
Competence              4
Benevolence             5
Integrity               5
Predictability          4
Total                   21

Table 4.12: Trust Score for Libreswan

4.4 Cryptography

IPsec requires several cryptographic primitives in order to be functional: an encryption algorithm for the encryption and decryption of packets (EA), a hash algorithm for satisfying integrity and authenticity needs (HA), Diffie-Hellman for key agreement (DH), and an asymmetric algorithm for authentication in the form of digital signing and verification (AA). These four groups were extracted from a CC evaluation document of a VPN device by the "Bundesamt für Sicherheit in der Informationstechnik" [39]. Since there are two tunnels, two different cipher suites must be chosen while conducting a trust analysis. The strongSwan documentation lists the supported cipher suites, which follow the documents published by the Internet Assigned Numbers Authority (IANA) [44]. Possible EAs are 3DES, Cast128, Blowfish, AES, Camellia, and CHACHA20, all available with different key sizes and modes. Supported HAs are MD5, SHA-1, AES-XCBC, AES-CMAC, AES-GMAC, and SHA-2. Possible Diffie-Hellman groups are MODP, ECP, ECPBP, and Curve. Since AA algorithms are independent of the IPsec application, no such ciphers are mentioned in the strongSwan documentation.

Regarding supported algorithms, the Libreswan project refers to RFC 8221, titled "Cryptographic Algorithm Implementation Requirements and Usage Guidance for Encapsulating Security Payload (ESP) and Authentication Header (AH)" [45]. This document lists the following EAs: DES, 3DES, Blowfish, 3IDEA, DES-IV32, AES, and CHACHA20. Mentioned HAs are MD5, SHA-1, DES-MAC, KPDK-MD5, AES-XCBC, AES-GMAC, and SHA-2. IKEv2-related specifications can be found in RFC 8247, which is also referenced on the Libreswan website [46]. Viable DH groups are MODP with different key lengths. The defined authentication algorithms are RSA, Shared Key, DSS, ECDSA, and Digital Signature. Any assumption about the security or vulnerability of these algorithms presupposes a correct implementation in software libraries and the use of sufficient key lengths.



Cryptographic property        strongSwan     Libreswan
Encryption/Decryption         AES            AES
                              3DES           3DES
                              Blowfish       Blowfish
                              Cast128        DES
                              Camellia       DES-IV32
                              CHACHA20       CHACHA20
                                             3IDEA
Integrity and Authenticity    MD5            MD5
                              SHA-1          SHA-1
                              AES-XCBC       AES-XCBC
                              AES-GMAC       AES-GMAC
                              AES-CMAC       AES-CMAC
                              SHA-2          SHA-2
                              DES-MAC        KPDK-MD5
Diffie-Hellman Mode           MODP           MODP
                              ECP            NIST, X25519
Signatures/Verification       ECDSA          ECDSA
                              RSASSA-PSS*    RSASSA-PSS
                              DSS            DSS

Table 4.13: Algorithm Choices

*"RSA Signature Scheme with Appendix-Probabilistic Signature Scheme"
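As an illustration of how such a cipher suite would be pinned in practice, the fragment below sketches a strongSwan swanctl.conf proposal for one of the two tunnels. This is a hedged example only: the connection name, peer address, and the particular suite are placeholder assumptions, not the design choice of this thesis.

```
# Hypothetical swanctl.conf fragment (strongSwan): restricts IKE and
# ESP to one explicit EA-HA-DH suite instead of the built-in defaults.
connections {
    outer-tunnel {                       # placeholder connection name
        remote_addrs = 203.0.113.10      # placeholder peer address
        proposals = aes256-sha256-ecp256           # IKE: EA-HA-DH
        children {
            site {
                esp_proposals = aes256-sha256-ecp256   # ESP suite
            }
        }
    }
}
```

In the nested design, the second server would be configured analogously in Libreswan's conn syntax with a disjoint suite, so that a break of one algorithm family does not compromise both tunnels.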

4.4.1 ESP-Encryption Algorithms

The fact that strongSwan and Libreswan support the encryption algorithms listed in Table 4.13 does not imply their robustness against cryptographic attacks. In the RFC documents mentioned before, only the Advanced Encryption Standard (AES) and CHACHA20 are described as secure, both currently and for the foreseeable future. A paper on the insecurity of 64-bit block ciphers discusses collision-attack vulnerabilities of 3DES and Blowfish [47]. DES is ruled out due to its small key size of 56 bits. These algorithms will therefore not be considered as design choices.
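The collision attacks on 64-bit block ciphers exploit the birthday bound: after roughly 2^(n/2) blocks encrypted under one key, repeated ciphertext blocks become likely and leak information about the plaintext. The sketch below (an illustration added here, not taken from the cited paper) approximates this collision probability for m encrypted blocks and an n-bit block size with 1 - e^(-m(m-1)/2^(n+1)):

```python
import math

def collision_probability(m_blocks: float, block_bits: int) -> float:
    """Birthday-bound approximation of the probability that at least
    two of m_blocks ciphertext blocks collide for an n-bit block cipher."""
    n_space = 2.0 ** block_bits
    # -expm1(-x) equals 1 - exp(-x) but stays accurate for tiny x
    return -math.expm1(-m_blocks * (m_blocks - 1) / (2.0 * n_space))

# 64-bit blocks (3DES, Blowfish): 2^32 blocks is only ~32 GiB of traffic
print(f"{collision_probability(2.0 ** 32, 64):.2f}")   # ~0.39
# 128-bit blocks (AES, Camellia): the same volume stays far below the bound
print(f"{collision_probability(2.0 ** 32, 128):.1e}")  # ~2.7e-20
```

This is why the same traffic volume that is safe under AES already yields a substantial collision probability under a 64-bit block cipher.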

Camellia was designed by researchers at Mitsubishi Electric and NTT in Japan. The robustness of the Japanese cipher is comparable to that of AES, which will be presented later.
