Tamper Protection for Cryptographic Hardware : A survey and analysis of state-of-the-art tamper protection for communication devices handling cryptographic keys



Linköpings universitet

Linköping University | Department of Electrical Engineering

Master’s thesis, 30 ECTS | Electrical Engineering

2020 | LIU-ISY/LITH-EX-A--20/5306--SE

Tamper Protection for Cryptographic Hardware

A survey and analysis of state-of-the-art tamper protection for communication devices handling cryptographic keys

En undersökning och analys av moderna manipuleringsskydd för kommunikationsenheter som hanterar kryptografiska nycklar

Emil Johansson

Supervisor: Assoc. Prof. Jacob Wikner
Examiner: Prof. Jan-Åke Larsson


Upphovsrätt

Detta dokument hålls tillgängligt på Internet - eller dess framtida ersättare - under 25 år från publiceringsdatum under förutsättning att inga extraordinära omständigheter uppstår.

Tillgång till dokumentet innebär tillstånd för var och en att läsa, ladda ner, skriva ut enstaka kopior för enskilt bruk och att använda det oförändrat för ickekommersiell forskning och för undervisning. Överföring av upphovsrätten vid en senare tidpunkt kan inte upphäva detta tillstånd. All annan användning av dokumentet kräver upphovsmannens medgivande. För att garantera äktheten, säkerheten och tillgängligheten finns lösningar av teknisk och administrativ art.

Upphovsmannens ideella rätt innefattar rätt att bli nämnd som upphovsman i den omfattning som god sed kräver vid användning av dokumentet på ovan beskrivna sätt samt skydd mot att dokumentet ändras eller presenteras i sådan form eller i sådant sammanhang som är kränkande för upphovsmannens litterära eller konstnärliga anseende eller egenart.

För ytterligare information om Linköping University Electronic Press se förlagets hemsida http://www.ep.liu.se/.

Copyright

The publishers will keep this document online on the Internet - or its possible replacement - for a period of 25 years starting from the date of publication barring exceptional circumstances.

The online availability of the document implies permanent permission for anyone to read, to download, or to print out single copies for his/her own use and to use it unchanged for non-commercial research and educational purpose. Subsequent transfers of copyright cannot revoke this permission. All other uses of the document are conditional upon the consent of the copyright owner. The publisher has taken technical and administrative measures to assure authenticity, security and accessibility.

According to intellectual property law the author has the right to be mentioned when his/her work is accessed as described above and to be protected against infringement.

For additional information about the Linköping University Electronic Press and its procedures for publication and for assurance of document integrity, please refer to its www home page: http://www.ep.liu.se/.


Abstract

This master’s thesis was conducted at Sectra Communications AB. The aim of the thesis was to investigate the state of the art of physical hardware tampering attacks and the corresponding protections and mitigations, and finally to combine these into a protection model that conforms to the FIPS 140-2 standard. The method used to investigate and evaluate the different attacks was a literature search, looking for articles presenting attacks that have been used against real targets, as well as attacks that have no record of being attempted on a real target but are theoretically possible. After this, an attack tree was constructed, which was then developed into a flowchart. The flowchart describes and visualizes how the different attacks could take place.

A qualitative risk analysis was conducted to evaluate and classify the different attacks. This showed which attacks would most likely have the greatest impact on a cryptographic communication device if used against it, and also which of these attacks one must prioritize protecting the device against. The attacks regarded to have the highest impact on a cryptographic communication device were memory freezing attacks and radiation imprinting attacks.

After this, a protection model was developed. This was done by placing protections and mitigations in the attack flowchart, showing how one could stop the different attacks. To investigate the different protections, an evaluation process was conducted, comparing their attributes to the requirements of the FIPS 140-2 standard. This evaluation process then resulted in a combined protection model that covers the requirements of the FIPS 140-2 standard.

This thesis concludes that there are many different protections available, and that to create solutions that protect the intended system one must perform a deep attack vector analysis, thus finding the weaknesses and vulnerabilities one must protect against.


Acknowledgments

I would like to thank my supervisor Mattias Fransson and my colleagues at Sectra for their advice and the technical expertise they have shared with me during this thesis. I would also like to thank my supervisor, Assoc. Prof. Jacob Wikner, and examiner, Prof. Jan-Åke Larsson, at Linköping University for guiding and helping me to finalize this master’s thesis.

A special thanks to my parents, Pär and Eva-Lena Johansson, for teaching me the importance of being curious, the necessity of critical thinking, and to always have a goal to work towards. Finally, I would like to thank my girlfriend, Jennifer Sundström, for her love and unconditional support during the entirety of my studies.


Contents

Abstract iii

Acknowledgments iv

Contents v

List of Figures vii

List of Tables x

Acronyms x

1 Introduction 1
1.1 Motivation . . . 1
1.2 Background . . . 1
1.3 Aim of the Thesis . . . 2
1.4 Research Questions . . . 3
1.5 Delimitations . . . 3

2 Theory 5
2.1 Cryptography in Communication . . . 5
2.2 Encryption Keys and Key Management . . . 5
2.3 Physical Attacks and Tampering . . . 6
2.4 Physical Security and Anti-Tampering . . . 9
2.5 Physical Unclonable Functions - PUF . . . 10
2.6 Physical Unclonable Functions as Tamper-Resistant Enclosures . . . 13
2.7 FIPS 140-2 . . . 15

3 Method 20
3.1 Literature Study . . . 20
3.2 Threat Modelling . . . 21
3.3 Mitigation and Protection Modelling . . . 24
3.4 Protection Testing and Protection Evaluation . . . 24
3.5 Protection Compilation Model . . . 27

4 Results 28
4.1 Information on the Results . . . 28
4.2 Attack Flowchart and Attack Analysis . . . 28
4.3 Protection Model . . . 42

5 Discussion and Analysis 48
5.1 Results . . . 48
5.2 Method . . . 49


6 Conclusion 52


List of Figures

2.1 Two MOSFETs with an STI oxide layer between them. . . . 8
2.2 Background data patterns of one of the four chosen SRAM memory cells investigated for radiation imprinting. Total radiation doses: (a) not radiated, (b) 2000 Gy(Si), and (c) 3000 Gy(Si) - Zheng et al. - “Pattern Imprinting in Deep Sub-Micron Static Random Access Memories Induced by Total Dose Irradiation”. . . . 9
2.3 A secure communication device with an authentication PUF, a strong PUF, authenticating itself against a secure server with the challenge-response pairs of the strong PUF. . . . 10
2.4 A MOSFET transistor with width W and length L noted. . . . 11
2.5 An example of a one-bit ring-oscillator PUF as illustrated by Kirkpatrick and Bertino in their article “Software Techniques to Combat Drift in PUF-Based Authentication Systems”. . . . 12
2.6 A six-transistor SRAM memory cell - By Inductiveload - Own work, Public Domain, https://commons.wikimedia.org/w/index.php?curid=5771850 . . . 13
2.7 Immler et al.’s proposed sensing mesh layout with Ri(1), Ro(1), Ti(1), To(1) intertwined and layered upon each other in a mesh form. The capacitive sensor cell is visualized as a grey circle in the intersection between the R(1) and T(1) circuits. . . . 14
2.8 The simplified equivalent circuit diagram of the sensor cells generating the seed for the key. This shows that the combined capacitance of the sensor cells, Cs, is equal to the sum of all the sensor cells, Cs,1, Cs,2, etc. . . . 14
2.9 A visualization of Immler et al.’s proposed protection model with a PUF enclosing envelope with vias in the PCB bulk detecting any attempts made by adversaries to penetrate the bulk of the PCB. . . . 15
2.10 A summary table of physical security requirements of the FIPS 140-2 standard taken from the National Institute of Standards and Technology document The Federal Information Processing Standard Publication 140-2, Security Requirements for Cryptographic Modules. . . . 18
3.1 The theoretical system used for the threat analysis and protection modelling performed in this thesis. Secure communication devices sending classified, secret information between each other. . . . 20
3.2 An attack tree of physical attacks against an embedded device storing cryptographic keys. . . . 22
3.3 An impact and probability matrix used to perform qualitative risk analysis. Here, the rows correspond to the plausibility of an attack being attempted and being successful, and the columns to the overall impact if the attack were to be successful. . . . 22
3.4 A radar chart used to visualize the overall threat of an attack. The axes of the radar chart are “Impact”, “Plausibility”, “Adversary”, and “Protection Implementation”. The larger the area of the plot, the larger the threat. . . . 23
3.5 The testing setup for the light detection tests with the light sensor placed inside a
4.1 Flowchart of different attack pathways, starting at an X-ray inspection of the target device and moving forward in different attack pathways, e.g., “Radiation Imprinting”, “Temperature Imprinting”, “Expose Chip Surface”, etc., with the goal of obtaining secret keys stored in the device hardware. . . . 29
4.2 Attacks that were analyzed in this master’s thesis are highlighted, and those that were not analyzed are greyed out. . . . 29
4.3 A qualitative risk analysis of an adversary using X-ray to inspect the PCB of a communication device handling secret keys. Here the matrix shows that X-ray inspection is of medium-high risk. . . . 31
4.4 The summarized threat analysis of an X-ray inspection. The axes of the radar chart are “Impact”, “Plausibility”, “Adversary”, and “Protection Implementation”. These describe the overall impact of the attack, the plausibility of an adversary performing the attack, possible adversaries, and how difficult or easy it is to implement protection against the attack. . . . 32
4.5 A qualitative risk analysis of an adversary opening the casing of a communication device handling secret keys. Here the matrix shows that opening a device is of medium-high risk. . . . 32
4.6 The summarized threat analysis of an adversary trying to open a device. The axes of the radar chart are “Impact”, “Plausibility”, “Adversary”, and “Protection Implementation”. These describe the overall impact of the attack, the plausibility of an adversary performing the attack, possible adversaries, and how difficult or easy it is to implement protection against the attack. . . . 33
4.7 A qualitative risk analysis of an adversary removing the protective epoxy surrounding the PCB of a communication device handling secret keys. Here the matrix shows that an adversary removing the protective epoxy coating is of medium risk. . . . 34
4.8 The summarized threat analysis of an adversary removing the protective epoxy coating of a device PCB. The axes of the radar chart are “Impact”, “Plausibility”, “Adversary”, and “Protection Implementation”. This attack is concluded to have a serious impact on a system, it is possible that adversaries from small/medium size organizations to intelligence agencies can perform the attack, and it is easy to implement protection against this attack. . . . 35
4.9 The pathway an attacker could take to perform a radiation imprinting attack. The pathway starts with an X-ray inspection, then separates into two different pathways: opening the device to then remove any epoxy present in the device, or directly using the X-ray to imprint the memory. This would fixate the memory, which could then be read at will, thus exposing any secret keys or information residing in the memory. . . . 36
4.10 A qualitative risk analysis of an adversary using a radiation imprinting attack on a communication device handling secret keys. Here the matrix shows that an adversary performing a radiation imprinting attack is of high risk. . . . 36
4.11 The summarized threat analysis of a radiation imprinting attack. The axes of the radar chart are “Impact”, “Plausibility”, “Adversary”, and “Protection Implementation”. This attack is concluded to have a catastrophic impact on a system, it is possible for large organizations or intelligence agencies to perform this attack, and it is difficult to implement protections against this attack. . . . 37
4.12 The pathway an attacker could take to perform an electron beam read attack. As in the other pathways, it starts with an X-ray inspection, to then move on to opening the device and removing any epoxy. After these steps, the adversary would need to remove the IC capsule of the memory that is to be read. . . . 38
4.13 A qualitative risk analysis of an adversary using an electron microscope to perform an electron beam read attack on a communication device handling secret keys. Here the matrix shows that an adversary using an electron microscope to perform an electron beam read attack is of medium risk. . . . 38
4.14 The summarized threat analysis of an electron beam read attack. The axes of the radar chart are “Impact”, “Plausibility”, “Adversary”, and “Protection Implementation”. This attack is concluded to have a catastrophic impact on a system, but it is almost impossible to perform, the only possible adversaries are large organizations or intelligence agencies, and it is easy to implement protection against this attack. . . . 39
4.15 The pathway an attacker could take to perform a temperature memory remanence attack. As in the other attacks, this one starts with an X-ray inspection of the target device, opening the device, and removing any epoxy present. After this, the target memory would be frozen with a cooling agent, to then be removed from the target device and read. . . . 40
4.16 A qualitative risk analysis of an adversary using a freezing attack to achieve memory remanence on a communication device handling secret keys. Here the matrix shows that an adversary performing a temperature memory remanence attack is of high risk. . . . 40
4.17 The summarized threat analysis of a temperature remanence attack. The axes of the radar chart are “Impact”, “Plausibility”, “Adversary”, and “Protection Implementation”. This attack is concluded to have a catastrophic impact on a system, it is almost certain to be attempted by all kinds of adversaries, and it is easy to implement protection against this attack. . . . 41
4.18 A total qualitative risk analysis of an adversary using the attacks presented in this thesis to obtain secret information or secret keys on a cryptographic communication device. Here the matrix shows that the attacks presented in this thesis are of medium to high risk, and therefore need to be protected against. . . . 41
4.19 The summarized threat analysis of all the attacks presented in this thesis. The axes of the radar chart are “Impact”, “Plausibility”, “Adversary”, and “Protection Implementation”. The combined attacks cover all the different levels of the axes. . . . 42
4.20 A flowchart where the green boxes represent different protection components that can mitigate, detect, or protect against attacks such as freezing attacks and radiation imprinting. . . . 43
4.21 Proposed protection model using the conventional sensor methods combined with a tamper-evident PUF-based key generation system. The model is constructed in such a way that the outer layer of the model is also the outer layer of the intended device. Thus, the casing, screws, and switches are the first line of defence, then the light detection sensor, and so forth. . . . 46


List of Tables

2.1 Summarized table showing the requirements a level 4 multiple-chip standalone cryptographic module must meet to conform to the FIPS 140-2 standard. . . . 19
3.1 Requirement fulfillment table of the level 4 FIPS 140-2 plus the extra criterion of irradiation detection. . . . 27
4.1 The requirements of the FIPS 140-2 standard, and the extra criterion of irradiation measurement, with examples of how different protection components cover these requirements. . . . 45
4.2 A table of the physical security requirements of the FIPS 140-2 standard with the extra criterion of radiation measuring, and the corresponding evaluation of the combined protection model presented in Figure 4.21. . . . 47


Acronyms

AISEC Fraunhofer Institute for Applied and Integrated Security. 2
ASIC Application Specific Integrated Circuit. 11
CHES Conference on Cryptographic Hardware and Embedded Systems. 2
CMOS Complementary Metal Oxide Semiconductor. 8, 11
CSP Critical Security Parameter. 16
DSO Defence Science Organisation (Singapore). 2
EEPROM Electrically Erasable Programmable Read-Only Memory. 7
EPROM Erasable Programmable Read-Only Memory. 6, 7
FIPS Federal Information Processing Standard. iii, x, 3, 4, 15–19, 27, 45, 47–51, 54
FPGA Field-Programmable Gate Array. 11
HDD Hard Disk Drive. 6
HSM Hardware Security Module. 2
IC Integrated Circuit. 8, 10, 17, 30, 37, 46, 49, 50, 52
IoT Internet of Things. 2, 48, 49
JTAG Joint Test Action Group. 33
NIST National Institute of Standards and Technology. 15, 16, 50
PCB Printed Circuit Board. 7, 9, 15, 29–36, 44, 45, 52, 53
PUF Physical Unclonable Function. 2, 3, 10–15, 20, 21, 27, 43–47, 52–54
RAM Random Access Memory. 6, 7, 9, 39
SRAM Static Random Access Memory. 6, 8, 10, 12, 13, 31, 35, 46, 49
STI Shallow Trench Isolation. 8
STRIDE Spoofing, Tampering, Repudiation, Information Disclosure, Denial-of-Service, Elevation of Privileges. 21


1 Introduction

1.1 Motivation

High assurance devices that handle cryptographic keys, such as secure communication devices used by nation-states, militaries, and intelligence services, are under constant threat. Today, software security is a well-researched field and is widely used to protect devices from being attacked remotely through wireless communication protocols. Furthermore, the cryptographic functions currently in use are so secure that, without the secret cryptographic key, the information in a message is practically impossible to read out.

As these devices have left the security of secure facilities and moved into the mobile world, other threat vectors have been discovered. The following are examples of threats that have become more prevalent in recent years:

• Watering Hole - where end-points of secure communication are targeted.

• Supply-Chain Attacks - targeting less secure parts of the infrastructure to then move vertically in it.

• Social Engineering Attacks - to obtain information or physical access to devices, giving adversaries the possibility to analyze devices in laboratory environments and thus evaluate different attack possibilities for extracting the cryptographic keys.

Since many of the attacks described above can be mitigated and protected against using restrictive user policies, access control, and software protection mechanisms, hardware tampering attacks have become a more prevalent and effective way of trying to extract the secret cryptographic keys.

1.2 Background

Throughout the history of cryptography, tamper resistance has been widely used by the military and governments. In navies, it was common to have weighted naval codebooks so that, if they were about to be captured, the codebooks could be thrown overboard. British government officials carrying state secrets used lead-lined dispatch boxes to make sure that they would sink if thrown into the water. There are many examples of these kinds of solutions, everything from water-soluble ink to cipher machines with self-destruct thermite charges. [1]

As secret information has gone from being physically carried to being encrypted and sent via communication devices, the goal of tamper protection has moved from protecting the information itself to protecting the keys that encrypt it. One such example is the Hardware Security Module (HSM).

HSMs are microcomputers with encryption capabilities and a key memory that is zeroed if the metal enclosure it is placed in is opened [1]. Although they were theoretically secure, researchers such as Ross Anderson, author of the book Security Engineering: A Guide to Building Dependable Distributed Systems and Professor of Security Engineering at the Computer Laboratory at the University of Cambridge, have managed, together with their research groups, to successfully break into HSMs and steal the keys with several different techniques, across several revisions of the HSMs. One of these attacks was as simple as cutting the casing open, circumventing the early zeroization solution based on mechanical switches in the case. Another was a data remanence attack: freezing the static memory holding the key, cutting the power of the device, opening it, and placing the memory in a new open device; more on these attacks in Chapter 2. [1]

Although these attacks worked, they relied on the attacker having physical access to the HSM, and as these modules are meant to be kept under lock and key in server rooms, that is hard for attackers to achieve. As encryption has become more and more prevalent, due to the rise of cellphones and the Internet, a new challenge has presented itself: protecting keys in hardware that is not locked in secure server rooms but operating in untrusted environments.

Currently, there are many different hardware tamper protection solutions available on the market, everything from passive solutions such as epoxy coatings to active ones that use sensor technologies to detect intrusion attempts. The ideal tamper protection would be a completely self-reliant, non-battery-driven solution where it is impossible to access the hardware physically, and where any attempt to access or change something in the hardware is detected right away. Currently, such protection does not exist, but many attempts are being made to create such a solution. One of them was presented at the Conference on Cryptographic Hardware and Embedded Systems (CHES), 2019 in Atlanta, USA, where a research group from the Fraunhofer Institute for Applied and Integrated Security (AISEC), the Defence Science Organisation (Singapore) (DSO), and the Technische Universität München (TUM) presented a paper called “Secure Physical Enclosures from Covers with Tamper-Resistance” [2].

In this paper, Immler et al. presented a new way of using a Physical Unclonable Function (PUF), a physical object or component from which one can derive a completely random number seed to use in key generation [3], while at the same time using it as a passive tamper-resistant and tamper-evident physical enclosure to protect the cryptographic keys in an embedded system.

The possibility of having a passive, battery-less, tamper-evident and tamper-detecting solution is very interesting for low-power devices operating in hostile environments, such as Internet of Things (IoT) devices. Although IoT devices are the most notable application, PUF architectures in cryptographic devices are also of great interest.

1.3 Aim of the Thesis

The purpose of this master’s thesis is to review the conventional use of sensor technologies, i.e., using several or different kinds of sensors for tamper detection, and the newly presented PUF technology for tamper protection and tamper detection. This will be done by investigating and analyzing different physical hardware tamper attacks, and how one could protect a generic cryptographic communication device.


The thesis will also evaluate different kinds of solutions to understand what aspects of physical security they cover, and finally compare them to each other.

The thesis will conclude with a discussion on whether the conventional use of sensors, the PUF solution, or a mixed solution is more practical and feasible in a secure cryptographic communication product.

1.4 Research Questions

Below are the research questions that this master’s thesis will address.

1. Which attacks does the conventional sensor and epoxy protection approach detect, or protect embedded devices against?

a) What sensors and components are most important when defending against tamper attacks?

i. Are these sensors practical?
ii. How hard are they to implement?
iii. Is the price of the sensors reasonable?
iv. Are they hard to perform maintenance on, or do they prohibit maintenance of the device?

2. Which attacks does the presented PUF protection approach defend against?

a) Are there any attacks that it does not protect against or mitigate?
b) Is the implementation of a weak tamper evident PUF practical?
c) Is the manufacturing price reasonable?
d) Are they hard to perform maintenance on, or do they prohibit maintenance of the device?

3. Is a mixed tamper protection solution, using both sensors and the PUF approach, the best solution for high assurance security devices?

4. Does this mixed tamper protection solution fulfill the security requirements of a Level 4 cryptographic module in the Federal Information Processing Standard (FIPS 140-2) standard?

These questions will be answered in Chapter 5, the discussion part of this thesis.

1.5 Delimitations

The focus of this thesis is evaluating the means of detecting and protecting against physical hardware tampering attacks. The attacks will not be attempted and evaluated on their success, but rather on their impact on a system. Nor will every possible physical tamper attack be tested, but rather those that are deemed to be of medium or high risk for a system.

Side-channel attacks, such as timing attacks and power consumption attacks, are also left out of this master’s thesis, due to the nature of side-channel attacks being dependent on device design and implementation. Social engineering, watering hole attacks, and supply-chain attacks are also outside the scope of this thesis, since we assume that the device is already in the hands of an adversary.

The current Covid-19 pandemic forced this thesis to become more theoretical than planned. All physical tests had to be canceled, as the companies where the tests were planned to be performed canceled our test appointments due to company restrictions. Instead of performing physical tests of different tamper protections, the thesis therefore theoretically evaluates the different protections against each other and against an established standard.

The FIPS 140-2 standard was chosen as the measure of whether the protections presented in this thesis are secure. This standard was chosen as it is deemed the most reliable, and as it is open for anyone to read and compare against.


2 Theory

The theory chapter is based on earlier and related work done in the field. It also gives the reader the chance to gather the information necessary to understand the topics and theories presented in the thesis.

2.1 Cryptography in Communication

Modern communication is dependent on all facets of cryptography: asymmetric key cryptography for key exchange and digital signatures, symmetric cryptography for message encryption, and one-way hash functions for message authentication. These technologies enable end-to-end encryption in communication systems. [1]

Current cryptographic algorithms should all be built upon Kerckhoffs’ principle, which states that the security of a cryptographic system should be key dependent, i.e., the security must depend on the choice of keys; everything else about the cryptographic system should be viewed as public knowledge [4]. Thus, the security of most communication systems lies in the security of their key management and key storage.

This forces one to consider the physical security of the locations where these keys are stored and managed. In embedded devices this would be the memory of the device. Thus, the physical security of hardware such as memories is of great importance when it comes to securing encryption in communication.
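Kerckhoffs’ principle can be made concrete with a minimal sketch (my own illustration, not taken from the cited sources): a one-time-pad style XOR cipher whose algorithm is entirely public, so that all of the security rests in the secrecy of the key.

```python
import secrets

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """One-time-pad style XOR. The algorithm is public knowledge
    (Kerckhoffs' principle), so secrecy rests entirely on the key."""
    if len(key) < len(data):
        raise ValueError("key must be at least as long as the message")
    return bytes(d ^ k for d, k in zip(data, key))

message = b"launch code 0000"
key = secrets.token_bytes(len(message))  # the only secret in the system

ciphertext = xor_cipher(message, key)
assert xor_cipher(ciphertext, key) == message  # the key holder can decrypt
```

Everything here except `key` can be published without weakening the system, which is exactly why key storage and key management become the critical assets to protect.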

2.2 Encryption Keys and Key Management

To protect sensitive information, encryption is used to hide it as scrambled data. All forms of encryption use encryption/decryption keys to hide the information in such a way that only the parties that hold the keys can read it.

Volatile and Non-Volatile Memory

The difference between volatile and non-volatile memory is that volatile memory only maintains its data as long as the device has a power supply; if the device loses power, the information is lost. Non-volatile memory retains its data even if the power is lost.


Many cryptographic computations using secret keys are performed in volatile Static Random Access Memory (SRAM). These keys are, as derived from Kerckhoffs’ principle, not meant to be readable by any malicious party. Therefore security engineers are interested in the data retention time of such Random Access Memory (RAM) devices once the power supply has been removed. Research has shown that so-called freezing attacks, where memory cells are chilled to low temperatures to retain their contents, have been able to preserve SRAM contents for several minutes at temperatures ranging from room temperature down to −50 °C [5].

In electronic embedded devices these secret keys are stored in non-volatile memories, historically Erasable Programmable Read-Only Memory (EPROM), Flash, and Hard Disk Drives (HDD). While solid-state non-volatile memories offer better physical security than disk drives, they are inherently simple to reverse engineer [6].

Exploiting the weaknesses of volatile and non-volatile memories is a priority for attackers trying to obtain the secret encryption keys. Attacks such as cold boot attacks, radiation imprinting, and reverse engineering are examples of a wider spectrum of attacks called physical hardware attacks.

2.3 Physical Attacks and Tampering

As computers and other computing devices have left the safety of server rooms, the physical security of these devices has become a vulnerability. The logical security, i.e., the algorithmic security of the cryptographic systems, is widely viewed as secure. Thus, the security of the system relies, as Kerckhoffs’ principle states, on the security of the secret keys, and since these keys must be stored somewhere on these devices, the security of the keys relies on the physical security of the devices.

There are numerous physical attacks that can be performed on these devices, and to contextualize them they are divided into two separate categories: invasive and non-invasive attacks.

Invasive Attacks

Invasive attacks depend on depackaging chips, removing epoxy, or physically accessing the circuitry of the hardware to gather secret information such as keys and other sensitive data [7]. All of these attacks start with opening a device and exposing its underlying hardware.

Open Device

Opening a device is most often one of the first steps taken by an adversary. Exposing the underlying hardware of a device helps a malicious actor locate where sensitive information, such as cryptographic keys, may be stored. It can also help the adversary locate any tamper-protection circuitry that needs to be avoided or disabled.

Freezing RAM Memory and Cold Boot Attacks

When the hardware has been exposed and the memory chip located, a so-called freezing attack on the RAM can be performed. An adversary performs this attack by lowering the temperature of a volatile memory chip with a cooling agent. The RAM can then retain its contents for several minutes even when the power is removed, enabling the adversary to remove or disable tamper circuitry and read the memory contents, or even move the entire memory to a new device. [5][8]

A cold boot attack, on the other hand, does not need a cooling agent to be efficient. It relies on the fact that a computer that is reset without following proper shutdown procedures, e.g. a cold/hard reboot, briefly holds information in memory that can be stolen. This is a known method of obtaining encryption keys, passwords, network credentials, and any stored data from devices. [9]

Cold boot attacks are nothing new, and there have therefore been developments to make them less effective. One such development was made by the Trusted Computing Group: overwriting the contents of the RAM when power is restored. [9] This safeguard was, however, worked around in 2018 by the security researchers Olle Segerdahl and Pasi Saarinen. They found a way to disable it by physically manipulating the hardware of the computer: rewriting the non-volatile memory chip to disable the memory-overwriting setting and enabling booting from an external device. This enabled them to perform a cold boot attack from a special program on a USB stick. [9]

The attack process of this cold boot attack is as follows [9]:

1. Attacker gains physical access to device.

2. Attacker manipulates firmware settings.

3. Attacker performs cold reboot into USB.

4. Attacker obtains encryption keys from device memory.

One of the researchers, Segerdahl, describes the attack as follows: “It’s not exactly easy to do, but it’s not a hard enough issue to find and exploit for us to ignore the probability that some attackers have already figured this out.” Segerdahl also thinks that there is no easy mitigation for computer vendors. End users therefore need to take this attack into account and be vigilant in their handling of computers and in their shutdown procedures.

Electron Beam Read with an Electron Microscope

Using a conventional scanning electron microscope, individual bits of an EPROM, EEPROM, or RAM can be read, and possibly written to. Normally non-readable secret keys can then be stolen and/or modified. To enable this attack the enclosure of the chip must be chemically etched away, exposing the silicon. [8]

Also, most electron microscopes need to have the sample in vacuum, since particles in the air would otherwise scatter the electrons. An exception to this is Liquid-Phase Electron Microscopy, which can view samples in a liquid instead [10].

The possible attackers are few. The cost of obtaining, and the knowledge required to operate, an electron microscope are so high that possible attackers must have almost limitless resources and expertise. This excludes home attackers and small companies, leaving large companies, universities, and nation states as the plausible attackers.

Non-invasive Attacks

A non-invasive attack uses only externally available data, parameters, and information to gather and deduce secret information, such as cryptographic keys, from hardware [7]. This could be anything from gathering electromagnetic radiation leaking from the device to something as simple as connecting to an open hardware communication port, for example a USB port.

X-ray

X-ray imaging is widely used in Printed Circuit Board (PCB) manufacturing and PCB assembly to detect faulty soldering. It can also be used to inspect the topology of the PCB, component placement, and the associated conductive tracks [11].


Radiation Imprinting

Schott and Zugich showed in their article “Pattern Imprinting in CMOS Static RAMs from Co-60 Irradiation” that when various Complementary Metal Oxide Semiconductor (CMOS) SRAMs were irradiated with Co-60, the pattern stored in the memory was imprinted during irradiation. This imprinted pattern then became the preferred state of the SRAM memory at consecutive power-ups. They also discovered that the imprinting occurs before the damaging failure levels of the devices are reached. [12]

Radiation imprinting is possible due to a bias-dependent shift in the transistors' threshold voltages. Radiation shifts the thresholds of the transistors towards a preferred state because radiation-induced charge is trapped in the gate oxide. [13]

Zheng et al., in the article “Pattern Imprinting in Deep Sub-Micron Static Random Access Memories Induced by Total Dose Irradiation”, investigate whether current deep sub-micron SRAMs are also vulnerable to radiation imprinting. In their work they describe that the gate oxide layer of sub-micron SRAMs is so thin that oxide traps are eliminated by tunnel annealing. Thus, the gate oxide layer is naturally resistant to radiation. However, they theorize that the Shallow Trench Isolation (STI) oxide, an Integrated Circuit (IC) feature that creates isolation regions between transistors [14], is vulnerable because its thickness is several hundred nanometers, much thicker than the gate oxide, as illustrated in Figure 2.1.

Figure 2.1: Two MOSFETs with an STI oxide layer between them.

Zheng et al. finally conclude, after testing four commercial types of sub-micron SRAM with a direct radiation dose rate of 1 Gy/s, that all of the tested sub-micron SRAMs were affected by the radiation, imprinting up to 90% of the memory at an irradiation dose of 3000 Gy(Si). [13] In Figure 2.2 Zheng et al. illustrate this with a bitmap of a small part of a memory cell, where the black spots represent '0' and the white spots represent '1'. Bitmap (a) is a cell that has not been irradiated, bitmap (b) is a cell that has been irradiated with 2000 Gy(Si), and bitmap (c) is a memory cell irradiated with 3000 Gy(Si). When they measured the bit distribution of the memory cells, i.e. the percentage of bits which are ones, they found that the unirradiated cell, (a), had a bit distribution of 49.81%, bitmap (b) 78.09%, and, as mentioned above, (c) a distribution of 90%, i.e. the memory had been irradiated to hold 90% of the bits as a logical '1'.
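The bit-distribution metric used by Zheng et al. is simply the fraction of bits that read back as '1'. As a minimal illustration (the byte values below are invented, not measured data), such a metric could be computed from a raw memory dump as follows:

```python
def bit_distribution(dump: bytes) -> float:
    """Fraction of '1' bits in a memory dump: close to 0.5 for an
    unirradiated SRAM power-up state, drifting towards 1.0 (or 0.0)
    as an imprinted pattern becomes the preferred state."""
    ones = sum(bin(byte).count("1") for byte in dump)
    return ones / (8 * len(dump))

print(bit_distribution(bytes([0xFF, 0x00])))  # -> 0.5
print(bit_distribution(bytes([0xFF, 0xF0])))  # -> 0.75
```

A distribution far from 0.5 over a large dump is thus a simple statistical indicator that a preferred pattern has been imprinted.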


Figure 2.2: Background data patterns of one of the four chosen SRAM memory cells investigated for radiation imprinting. Total radiation doses: (a) not irradiated, (b) 2000 Gy(Si), and (c) 3000 Gy(Si) - Zheng et al. - “Pattern Imprinting in Deep Sub-Micron Static Random Access Memories Induced by Total Dose Irradiation”

Thus, by irradiating volatile RAM in the X-ray band, wavelengths of 0.03 to 3 nm [15], the memory contents can be imprinted such that when the circuit is powered down, or overwritten, the original contents will still be present after power-up and can be read freely at will [8].

Radiation Imprinting: Who can perform the attack?

In the article “Advanced Electronics X-Ray Dosimetry for Space and Airborne Applications”, Lindsay Quarrie describes the difficulty of finding information about the dosage rates of PCB X-ray machines, and how it is necessary to perform dosage measurements on a machine to know the exact dosage rate of that individual machine. [16]

The information provided by Quarrie indicates that it is almost impossible to know the actual radiation dose delivered by a specific PCB X-ray machine, which makes it hard to say who may be able to perform a radiation imprinting attack similar to the one described by Zheng et al. in “Pattern Imprinting in Deep Sub-Micron Static Random Access Memories Induced by Total Dose Irradiation”, in which a dose rate of 1 Gy/s was delivered.

What can be concluded from this is that a university [13], a company, or a threat actor with virtually unlimited resources, for example a nation state, could perform this kind of attack.

2.4 Physical Security and Anti-Tampering

The term 'physical security' has long been viewed as the protection mechanism that defends material assets from physical harm such as water, fire, theft, and other physical damage. As computers have evolved, and with them their threats, physical security has also come to mean the technologies that safeguard and protect information against physical attacks and/or tampering. The protection mechanisms of these technologies consist of four components: resistance, detection, response, and evidence [8].

• Tamper Resistance - An embedded device's capability of resisting physical attacks from adversaries. One example of tamper resistance is enclosing the hardware in hard epoxy, keeping the attacker from gaining access to the hardware. [8]

• Tamper Detection - An embedded device's capability of detecting a physical attack performed by an adversary. This could be circuitry detecting probing attempts performed on the hardware, or circuitry detecting if the epoxy is being removed. [8]


• Tamper Response - An embedded device's method, and speed, of responding to physical attacks. Take the example presented under tamper detection: the response would then be to alert the system of the probing attempt and delete any secrets held within the device. [8]

• Tamper Evidence - An embedded device's capability of proving that an attack has occurred. If a probing attack is detected and alerted to the system, one would also like evidence that an attack has taken place. This could be a software implementation, but the memory itself could also serve as evidence, if one can see that the memory has been erased. [8]

There are many different ways of obtaining and implementing these components, and in the next section some of these are presented.

2.5 Physical Unclonable Functions - PUF

Physical Unclonable Functions are physical objects from which a digital fingerprint, or key, is derived. The key is derived from the object's physical attributes and characteristics. In electronics, common objects and attributes are the gate delays of an IC or the initial state of an SRAM memory cell [3].

PUFs are based on the idea that the small manufacturing differences between ICs are large enough to generate unique keys from, even though the ICs come from the same factory and manufacturing batch. [3]

Different types of Physical Unclonable Functions

There are typically two applications for PUFs: authentication and secure key generation. The PUFs used for these are typically called 'strong' and 'weak' PUFs, respectively.

According to Herder et al. in the article “Physical Unclonable Functions and Applications”, every PUF can be described as a black-box challenge-response system. A PUF is passed an input, the challenge c, and returns a response, r = f(c), where f(), determined by the physical attributes of the PUF, is the relation between the input and output of the PUF. The security of PUFs relies on the difficulty of modelling the physical attributes, the function f, of the object, and of manufacturing identical ICs with the same parameters. [3]

Herder et al. also state that the difference between the two types of PUFs is the number of unique challenges, c, that the PUF can generate responses to. Weak PUFs can only process a small number of challenges, whilst strong PUFs can process a number so large that it is not feasible to determine every single challenge-response pair in a limited amount of time.

Strong PUFs can process a large number of challenge-response pairs. Thus, the PUF does not need any additional cryptographic hardware and can be authenticated directly. An example can be seen in Figure 2.3, where a secure communication device uses a strong PUF to authenticate itself against the corresponding secure server.

Figure 2.3: A secure communication device with an authentication PUF, a strong PUF, authenticating itself against a secure server using the challenge-response pairs of the strong PUF.
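The authentication flow of Figure 2.3 can be sketched in a few lines. This is a toy model, not a construction from Herder et al.: the server is assumed to record challenge-response pairs (CRPs) during an enrollment phase and to spend one unused pair per authentication attempt, and the 'PUF' f(c) is simulated with a seeded pseudo-random function.

```python
import random

class SimulatedPUF:
    """Toy stand-in for r = f(c): a fixed secret mapping unique to one chip."""
    def __init__(self, chip_seed: int):
        self.chip_seed = chip_seed

    def respond(self, challenge: int) -> int:
        # Deterministic per (chip, challenge), unpredictable across chips.
        return random.Random(self.chip_seed * 1_000_003 + challenge).getrandbits(32)

class CRPServer:
    """Server side: enrolled CRPs are spent one at a time and never reused."""
    def __init__(self, crp_table):
        self.unused = dict(crp_table)

    def authenticate(self, device) -> bool:
        challenge, expected = self.unused.popitem()   # consume an unused pair
        return device.respond(challenge) == expected

genuine = SimulatedPUF(chip_seed=42)
server = CRPServer({c: genuine.respond(c) for c in range(8)})  # enrollment

print(server.authenticate(genuine))           # the genuine device passes
print(server.authenticate(SimulatedPUF(7)))   # a clone with the wrong f(c)
                                              # fails, with overwhelming
                                              # probability
```

Because each pair is discarded after use, an eavesdropper who records one exchange gains nothing for the next authentication, which is why a strong PUF needs a very large challenge space.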


Weak PUFs, on the other hand, hold only a small number of challenge-response pairs, sometimes only one. Thus, weak PUFs need additional cryptographic hardware to derive keys. Since weak PUFs have only a small number of pairs, the challenge-response pairs must be kept secret. If they can be measured, or are obtained by an adversary, any device can emulate that specific PUF. [3]

The name 'strong PUF' might imply that they are more secure than weak PUFs, but according to Mahmoud et al. this is misleading. In their article “Combined Modeling and Side Channel Attacks on Strong PUFs” they show that by combining a power-analysis side-channel attack with machine learning they were able to predict a 256-bit key at a prediction rate of 96.1% with a training time of 700 seconds. Although the data they used were synthetic challenge-response data, they deemed their attack severe and applicable to real PUFs that have not implemented their proposed side-channel mitigation. [17]

The responses of PUFs are, as mentioned in the beginning of Section 2.5, most often derived from the initial state of a memory cell or similar random physical data. This randomness most often comes from the transistor mismatch in CMOS designs.

Transistor Mismatch

Threshold voltage mismatches are, according to Lovett et al. in their article “Optimizing MOS Transistor Mismatch”, dependent on the width-to-length ratio, W/L, of the transistors. They concluded that for transistors of the same layout area, the smaller the W/L ratio, the better the threshold matching, i.e. less of a mismatch [18]. In transistor manufacturing it is very difficult to produce sub-micron transistors that have exactly the same W/L ratio. Thus, mismatches in the transistors' threshold voltages are bound to occur.

Figure 2.4: A MOSFET transistor with width W and length L noted.

Ring-Oscillator PUF

The ring-oscillator PUF uses the manufacturing variability of intrinsic circuit gate delay. It is often implemented using either an Application Specific Integrated Circuit (ASIC) or a Field-Programmable Gate Array (FPGA). The architecture consists of N identical ring-oscillators that, due to the different delays in the oscillators' inverters, output different frequencies. Two frequencies are measured and compared to generate one of the PUF's output bits. Due to correlations between the comparisons, the maximum number of bits that can be extracted from the PUF is log2(N!). This PUF is considered a weak one, since it has a limited number of challenge bits. Once the ring-oscillator is fabricated its frequency is fixed, and so are the output bits of the PUF [3].


Figure 2.5: An example of a one bit ring-oscillator PUF as illustrated by Kirkpatrick and Bertino in their article “Software Techniques to Combat Drift in PUF-Based Authentication Systems”
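The comparison step can be illustrated with a short sketch (the nominal frequency and the spread below are made-up numbers, not measurements): each output bit is simply the sign of the frequency difference between a pair of oscillators.

```python
import random

def ro_bit(freq_a_hz: float, freq_b_hz: float) -> int:
    """One PUF output bit: which of two 'identical' oscillators runs faster."""
    return int(freq_a_hz > freq_b_hz)

# Hypothetical per-chip frequencies: nominally 200 MHz, with a small
# manufacturing spread that differs from chip to chip.
rng = random.Random(1)
freqs = [200e6 + rng.gauss(0, 2e6) for _ in range(4)]
bits = [ro_bit(freqs[i], freqs[j]) for i in range(4) for j in range(i + 1, 4)]
print(bits)  # six pairwise comparison bits for N = 4 oscillators
```

Note that the pairwise bits are correlated: if oscillator 1 is faster than 2, and 2 is faster than 3, then 1 must be faster than 3. This transitivity is why the extractable entropy is bounded by the number of orderings, log2(N!), rather than by the number of pairs.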

Although the weak PUF is theoretically secure, the ring-oscillator has some factors that need to be taken into account. Ring-oscillators are temperature dependent, so the frequency of the oscillator will change with the temperature. This will ultimately change the response of the PUF. Another thing that needs to be taken into account before implementing a ring-oscillator as a PUF is its size. Since the number of extractable bits scales as log2(N!), the required oscillator count grows with the key length. A 256-bit key would in this example require:

256 = log2(N!) => N ≈ 58

This means that to generate a 256-bit key, 58 ring-oscillators are needed, together with the counters and comparison logic for the frequency measurements. This takes up considerable area where the intended PUF is to be placed.
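The required count depends on the logarithm base used. Taking log2(N!), the number of bits needed to encode an ordering of the N measured frequencies, the smallest sufficient N can be found numerically; a small sketch:

```python
import math

def min_oscillators(key_bits: int) -> int:
    """Smallest N such that log2(N!) >= key_bits, accumulating
    log2(N!) = log2(2) + log2(3) + ... + log2(N) incrementally."""
    n, log2_factorial = 1, 0.0
    while log2_factorial < key_bits:
        n += 1
        log2_factorial += math.log2(n)
    return n

print(min_oscillators(256))  # -> 58
```

Summing logarithms instead of computing N! directly avoids the huge intermediate factorial values.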

SRAM PUF

An SRAM PUF is a weak PUF architecture that has only one challenge-response pair: the challenge is the power-up of the device and the response is the preferred power-up state of the SRAM. This state is determined by the positive feedback loops of the SRAM structure. An SRAM cell has two stable states, '1' and '0'. The positive feedback forces the cell into one of these states, depending on the write operation, and prevents it from falling out of it accidentally. Since there is no write operation at power-up, the SRAM is put in a theoretically meta-stable state where the two feedback loops are trying to push it into both '1' and '0' at the same time. This state is, as mentioned, only theoretical, since in practice one feedback loop is always stronger due to threshold mismatches between the transistors introduced in the manufacturing process. This, together with thermal noise, naturally causes the SRAM cell to fall into a relaxed '1' or '0' state, generating a random bit sequence that can be used as an encryption key. [3]

In Figure 2.6 a six-transistor SRAM cell is depicted. As an example of how the threshold voltage mismatch works, assume that transistor M2 in Figure 2.6 has a lower threshold voltage than M4. Transistor M2 then starts to conduct before M4, pushing Q to '1'. Depending on the mismatch between M1 and M3, this gives either the response pair (Q = 1, Q̄ = 0) or (Q = 0, Q̄ = 1) [19].
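A toy simulation of this behaviour (all parameters are invented for illustration): each cell is given a fixed threshold mismatch at 'manufacture', and thermal noise can flip only the cells whose mismatch is small.

```python
import random

def sram_powerup(n_bits: int, chip_seed: int, noise: float = 0.05) -> list:
    """Toy SRAM power-up: the sign of each cell's fixed mismatch decides the
    bit; cells with |mismatch| below the noise level may flip either way."""
    mfg = random.Random(chip_seed)                 # per-chip manufacturing variation
    mismatch = [mfg.gauss(0.0, 1.0) for _ in range(n_bits)]
    thermal = random.Random()                      # fresh thermal noise per power-up
    return [int(m > 0) ^ (abs(m) < noise and thermal.random() < 0.5)
            for m in mismatch]

chip_a = sram_powerup(128, chip_seed=1)
chip_b = sram_powerup(128, chip_seed=2)
print(sum(a != b for a, b in zip(chip_a, chip_b)))  # roughly half the bits differ
```

Repeated power-ups of the same chip agree on all but the weakly biased cells, which is why real SRAM PUF designs add error correction (fuzzy extraction) before using the power-up pattern as a key.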


Figure 2.6: A six transistor SRAM memory cell - By Inductiveload - Own work, Public Domain, https://commons.wikimedia.org/w/index.php?curid=5771850

Although the ring-oscillator and SRAM PUFs are some of the most common PUFs, a PUF can be created from any physical object that can generate a challenge-response pair. Therefore, attempts are being made to create PUFs from other objects.

2.6 Physical Unclonable Functions as Tamper-Resistance Enclosures

In the article “Secure Physical Enclosures from Covers with Tamper-Resistance”, Immler et al. present how one could use a PUF as a tamper-detecting and tamper-evident enclosure. They propose enclosing the circuit to be protected with a one-time flex Printed Circuit Board (PCB), which is the PUF itself. Thus, they obtain the challenge-response pairs from the enclosure, and these are then used to derive an encryption key. [2]

Key Derivation

Immler et al.'s solution uses the physical attributes of the one-time flex PCB for the challenge-response pairs. Their flex PCB design is a circuit mesh that consists of separate wires: Ri(1), Ro(1), Ti(1), and To(1), where the intersections between the R and T wires form the capacitive sensor cells. The wires are intertwined and layered upon each other in a mesh form.

Figure 2.7: Immler et al.'s proposed sensing mesh layout with Ri(1), Ro(1), Ti(1), To(1). The capacitive sensor cell is visualized as a grey circle in the intersection between the R(1) and T(1) circuits.

The sensor cells within the mesh can be described with a simplified equivalent circuit, illustrated in Figure 2.8. The PUF-generated key can then be derived from the sensor cells, i.e. the capacitances between the R and T wires in the mesh, after some normalization and error correction to mitigate, for example, temperature differences [2].

Figure 2.8: The simplified equivalent circuit diagram of the sensor cells generating the seed for the key. This shows that the combined capacitance of the sensor cells, Cs, is equal to the sum of all the sensor cells, Cs,1, Cs,2, etc.
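As a rough illustration of the idea (not Immler et al.'s actual algorithm; the cell values, grid step, and the use of SHA-256 are all assumptions made here), a key could be derived by normalizing the measured capacitances, quantizing each cell to a grid, and hashing the result:

```python
import hashlib

def derive_key(cell_caps_pf, step_pf=0.25):
    """Quantize each sensor-cell capacitance to a grid and hash the result.
    Drift common to all cells cancels in the normalization; per-cell drift
    well below half a grid step maps to the same key. Real designs add
    error-correcting codes on top of such quantization."""
    baseline = sum(cell_caps_pf) / len(cell_caps_pf)        # crude normalization
    quantized = [round((c - baseline) / step_pf) for c in cell_caps_pf]
    return hashlib.sha256(repr(quantized).encode()).hexdigest()

cells = [4.10, 3.87, 4.52, 3.64, 4.01, 4.33]     # hypothetical pF readings
drifted = [c + 0.02 for c in cells]              # uniform temperature drift
assert derive_key(drifted) == derive_key(cells)  # key survives common drift
```

Conversely, a drilled or pried mesh shifts at least one cell by more than a grid step, so the derived key no longer matches and the secret is effectively erased.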

Security: Hiding in plain sight

The security of the proposed solution lies in the approach of hiding the secret key in plain sight. For an adversary to gather information about the PUF, they need to open the enclosure to probe the PUF itself. Opening the enclosure will damage it, altering the physical attributes of the PUF and thus changing the weak PUF's challenge-response pair, ultimately destroying the encryption key.

Intrusion Sensing

In addition to generating the encryption key, the PUF mesh detects adversaries who try to penetrate the envelope by drilling holes or prying it open. If a wire in the mesh is cut, a tamper detection mechanism will detect that one of the circuits in the mesh is open, and perform safety precautions to keep any secrets present in memory safe. Also, if a mesh wire is cut, the physical capacitance of the corresponding sensor node changes, altering the challenge-response pair and deleting the key.
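A minimal sketch of such a monitoring-and-response logic (the threshold values, cell counts, and function names are invented for illustration; a real implementation runs in continuously powered hardware):

```python
REFERENCE_CAPS_PF = [4.1, 3.9, 4.5, 3.6]   # per-cell values recorded at enrollment
TOLERANCE_PF = 0.15                         # drift the error correction can absorb

def mesh_intact(continuity_ok, measured_pf):
    """Envelope counts as intact only if no mesh circuit is open and every
    sensor-cell capacitance is still within tolerance of its reference."""
    return all(continuity_ok) and all(
        abs(m - r) <= TOLERANCE_PF
        for m, r in zip(measured_pf, REFERENCE_CAPS_PF))

def zeroize(memory: bytearray) -> None:
    """Tamper response: overwrite all plaintext secrets."""
    for i in range(len(memory)):
        memory[i] = 0

secrets = bytearray(b"supersecretkey!!")
# A cut wire opens circuit 3 even though the capacitances still read normal:
if not mesh_intact([True, True, False, True], [4.1, 3.9, 4.5, 3.6]):
    zeroize(secrets)
assert all(b == 0 for b in secrets)
```

The two checks are complementary: continuity detection catches a powered attack immediately, while the capacitance comparison destroys the key even if the attack happens while the device is unpowered.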

To achieve full envelope protection, enclosing the entire PCB, Immler et al.'s model fixates two separate flex-PCBs with adhesive to both sides of the PCB to be protected. To then achieve the full envelope protection, the PCB has to have circuits that go through the bulk of the PCB, called vias, that can detect any attempt by an adversary to penetrate the bulk of the PCB. This is visualized in Figure 2.9.

Figure 2.9: A visualization of Immler et al.'s proposed protection model with a PUF-enclosing envelope, with vias in the PCB bulk detecting any attempts made by adversaries to penetrate the bulk of the PCB.

Threats and Attacks

The main attack vector the authors consider is that of trying to open the device, either by breaking the enclosure open or by drilling a small hole to use for probing. What they discovered was that prying the envelope open will damage the structure of the PUF, altering the capacitances illustrated in Figure 2.8 and Figure 2.7, thus changing the challenge-response pair of the PUF and deleting the secret key.

Immler et al. also tried attacks that involved drilling small holes into the envelope, enabling probing of the challenge-response pair, or of the key, without either destroying any sensing wires or altering the capacitances of the mesh. They managed to work around the tamper detection mechanism that detects open circuits in the mesh by performing the drilling attack while the device was powered off, and then repairing the broken circuits in the mesh. Although they succeeded with the repair, they did not manage to keep the PUF's physical integrity intact. That is, the capacitances changed so much that the key was destroyed.

These tests performed by Immler et al. verify their working theory of being able to protect against drilling and probing attacks, which leads them to conclude that they do in fact meet the requirements of the FIPS 140-2 standard.

2.7 Federal Information Processing Standard 140-2

Federal Information Processing Standard 140-2, or FIPS 140-2, is a standard developed by the National Institute of Standards and Technology (NIST) that specifies requirements for cryptographic modules used in security systems protecting sensitive information in telecommunication and computer systems, as defined in Section 5131 of the Information Technology Management Reform Act of 1996, Public Law 104-106. [20]


General Physical Requirements

The general physical security requirements of the FIPS 140-2 standard should, according to NIST, “...apply to all physical embodiments” (Security Requirements for Cryptographic Modules). This means that for an entity to state that it conforms to the FIPS 140-2 standard, it must also conform to the general physical security requirements of the standard. These requirements are quoted below:

• “Documentation shall specify the physical embodiment and the security level for which the physical security mechanisms of a cryptographic module are implemented.”

• “Documentation shall specify the physical security mechanisms of a cryptographic module.”

• “If a cryptographic module includes a maintenance role that requires physical access to the contents of the module or if the module is designed to permit physical access (e.g., by the module vendor or other authorized individual), then:”

– “a maintenance access interface shall be defined,”

– “the maintenance access interface shall include all physical access paths to the contents of the cryptographic module, including any removable covers or doors,”

– “any removable covers or doors included within the maintenance access interface shall be safeguarded using the appropriate physical security mechanisms,”

– “all plaintext secret and private keys and CSPs shall be zeroized when the maintenance access interface is accessed, and”

– “documentation shall specify the maintenance access interface and how plaintext secret and private keys and CSPs are zeroized when the maintenance access interface is accessed.”

- Security Requirements for Cryptographic Modules, Federal Information Processing Standard (FIPS) 140-2, NIST [20]

The general requirements are to be fulfilled for any module at any of the four security levels, although for security level four there are additional requirements beyond those of levels one, two, and three.

Security Levels

The FIPS 140-2 standard has four security levels, each stricter than the preceding one. Since the focus of this thesis is the physical security of devices, the descriptions of the security levels below focus on the physical security requirements. [20]

Security Level 1

The lowest level of security in the FIPS 140-2 standard. It requires no specific physical security layer or mechanisms beyond what is required for production-grade components. One example of such a cryptographic module is a personal computer encryption board.

Security Level 2

In level two, the security of the physical layer is enhanced with the added requirement of tamper evidence. This includes the use of tamper-evident seals, epoxy coatings, or pick-resistant locks on doors or removable covers of the module.


Security Level 3

In addition to the requirements of level two, level three attempts to prevent unauthorized access to cryptographically sensitive information, such as encryption keys. The intention of the required security mechanisms of level three is to detect and respond to attempts at unauthorized physical access with a high probability of success. The mechanisms may include strong enclosures and memory-zeroization circuitry.

Security Level 4

The fourth security level is the highest one defined in the FIPS 140-2 standard. In addition to the requirements of the preceding levels, it provides a complete protection envelope around the module. Thus, an attempt at penetrating the enclosure from any direction has a high probability of being detected.

The security mechanisms of this level should also protect against security compromises due to environmental factors causing fluctuations outside of the module's operational ranges, for example temperature and voltage fluctuations.

Physical Security Requirements

This section covers the specific physical security requirements a module must conform to in the FIPS 140-2 standard. NIST describes the requirement as follows:

“A cryptographic module shall employ physical security mechanisms in order to restrict unauthorized physical access to the contents of the module and to deter unauthorized use or modification of the module (including substitution of the entire module) when installed. All hardware, software, firmware, and data components within the cryptographic boundary shall be protected.” - Security Requirements for Cryptographic Modules, NIST [20]

Within the physical security requirements of the standard there are sub-requirements for different kinds of modules. The standard specifies requirements for three different embodiments of a cryptographic module [20]:

• Single-chip cryptographic modules - Single IC chips or smart cards that may not be physically protected.

• Multiple-chip embedded cryptographic modules - Two or more ICs interconnected within an embedded unit that may not be physically protected.

• Multiple-chip standalone cryptographic modules - Two or more ICs interconnected within an embedded unit enclosure that is physically protected.

Figure 2.10 shows a summary table from the FIPS 140-2 standard describing the differences in security levels between different cryptographic modules, which NIST uses to present the physical security requirements of the FIPS 140-2 standard for the three different cryptographic module embodiments [20].


Figure 2.10: A summary table of the physical security requirements of the FIPS 140-2 standard, taken from the National Institute of Standards and Technology document Federal Information Processing Standard Publication 140-2, Security Requirements for Cryptographic Modules.

In this thesis, multiple-chip standalone cryptographic modules at security level 4 are what will be investigated.

Requirements Security Level 4

For multiple-chip standalone cryptographic modules at security level four, the additional requirements are the following:

• The enclosing material of the module “...shall be encapsulated by a tamper detection envelope...” that uses detection mechanisms such as:

– Cover switches - micro-switches, permanent magnetic actuators, magnetic Hall effect switches, etc.

– Motion detectors - infrared, ultrasonic, microwave, etc.

These mechanisms should be able to detect tampering by means of drilling, cutting, grinding, or dissolving of the enclosing potting material, i.e. epoxy potting, or enclosure. [20]

• The module should use continuous monitoring, with tamper response and zeroization circuitry, of the enclosing envelope. This circuitry should, upon tamper detection, zero all plaintext secrets and private cryptographic keys. Furthermore, it shall remain operational whenever plaintext cryptographic keys are contained in the module. [20]

This means that for a multiple-chip standalone cryptographic module to meet the requirements for security level 4, it needs to be enclosed in a tamper detection envelope or casing that can detect drilling, cutting, grinding, and other means of tampering. The module should also zero any memory holding secret information or keys upon tamper detection.

FIPS 140-2 Conclusion

To summarize the FIPS 140-2 standard, Table 2.1 shows the different requirements the standard puts on a level 4 multiple-chip standalone cryptographic module, with corresponding examples.


Table 2.1: Summarized table showing the requirements a level 4 multiple-chip standalone cryptographic module must meet to conform to the FIPS 140-2 standard.

FIPS 140-2 requirement | Example
Envelope protection | Epoxy potting, hard casing, etc.
Cover switches | Mechanical or magnetic switches.
Tamper detection | Circuitry detecting probing attempts.
Memory zeroization | Circuitry that deletes secrets from memory upon tamper detection.
Continuous monitoring | Tamper detection and memory zeroization circuitry is always monitoring.


3 Method

When evaluating two or more systems that are meant to solve the same problem, the method used is of great importance, not only to enable the reader to reproduce the work presented in the thesis but also to let the reader evaluate the method itself.

In this thesis a theoretical system was used to model protection and analyze threats. The system consists of a cryptographic cell phone that uses encryption to enable secure communications.

Figure 3.1: The theoretical system used for the threat analysis and protection modelling performed in this thesis. Secure communication devices sending classified, secret information between each other.

3.1 Literature Study

First, a literature study of the state of the art in tamper protection and PUFs was conducted. Most of the articles used in this thesis were gathered using Google Scholar and researchgate.net.

The literature was used to gather information on the current methods used to protect against physical tampering attacks, the kinds of attacks that are used, and different implementations of PUF architectures in electronic devices handling cryptographic keys.



The main article studied in this thesis is the work by Immler et al., “Secure Physical Enclosures from Covers with Tamper-Resistance”. As mentioned in Chapter 1, they propose a new way of using a PUF as a physical enclosure. This solution and the conventional sensor solution are what have been evaluated in this thesis.

3.2 Threat Modelling

To visualize and better understand the kinds of threats an embedded device faces, threat modelling can be used. There are many ways to develop a threat model. Firstly, the threats against a theoretical cryptographic system were explored using the STRIDE method described below.

STRIDE

In the book Threat Modeling, the author Shostack presents the STRIDE method of exploring system threats. STRIDE is a mnemonic for identifying threats against a system [21]:

• Spoofing - Pretending to be something or someone you are not.

• Tampering - Altering something you are not allowed to alter, e.g. memory bits, data packets, etc.

• Repudiation - ”...claiming you didn’t do something (regardless of whether you did or not).” - Shostack

• Information Disclosure - Exposing secret information to people who do not have permission to see it.

• Denial-of-Service - Attacks with the goal of disrupting a system’s service or crashing it.

• Elevation of Privilege - Attacks with the goal of performing actions the attacker is not allowed to perform.
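As a small illustration of how the STRIDE categories can be applied in practice, the following Python sketch tags a few threats against the theoretical cryptographic cell phone with the category each falls under. The threat descriptions are invented for this example and are not taken from the thesis.

```python
from enum import Enum


class Stride(Enum):
    """The six STRIDE threat categories."""
    SPOOFING = "Spoofing"
    TAMPERING = "Tampering"
    REPUDIATION = "Repudiation"
    INFORMATION_DISCLOSURE = "Information disclosure"
    DENIAL_OF_SERVICE = "Denial-of-service"
    ELEVATION_OF_PRIVILEGE = "Elevation of privilege"


# Hypothetical threats against the cryptographic cell phone, each tagged
# with the STRIDE category it falls under.
threats = {
    "Probe memory bus to read key bits": Stride.INFORMATION_DISCLOSURE,
    "Overwrite firmware via debug port": Stride.TAMPERING,
    "Impersonate the peer device during key exchange": Stride.SPOOFING,
    "Jam the radio link": Stride.DENIAL_OF_SERVICE,
}

# Filter out the threats in one category, e.g. for the tamper protection analysis.
tampering = [t for t, c in threats.items() if c is Stride.TAMPERING]
```

Grouping threats this way makes it easy to check that every category has been considered for the system under analysis.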

This method, together with Weingart’s article “Physical Security Devices for Computer Subsystems”, forms the basis of the threat modelling done in this thesis. The attacks presented in “Physical Security Devices for Computer Subsystems” were evaluated based on the severity of the attack, the likelihood of the attack succeeding, and the kind of attacker capable of performing it.

Attack Tree

An attack tree was constructed to visualize how an attacker could use different attacks to access the cryptographic keys of an embedded device. The attacks were ordered such that the first attack vector is located at the root of the tree; the attacker then moves upwards through the tree, using the attacks in series, to finally access the cryptographic key. When creating an attack tree, one starts at the top of the tree by asking ”what is the goal of an attacker?”. One then asks how an attacker would achieve this goal; the answer becomes the next node of the tree. This process continues until one thinks that every possible attack has been covered. [21]
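The construction process described above can be sketched as a simple recursive data structure. The Python sketch below is hypothetical; the node names are illustrative and are not taken from the attack tree in this thesis.

```python
class AttackNode:
    """A node in an attack tree: a goal plus the sub-attacks that achieve it."""

    def __init__(self, goal, children=None):
        self.goal = goal
        self.children = children or []

    def leaves(self):
        """Return the leaf attacks, i.e. the concrete entry points an
        attacker would start from."""
        if not self.children:
            return [self.goal]
        result = []
        for child in self.children:
            result.extend(child.leaves())
        return result


# Start with the attacker's goal, then repeatedly ask "how would an
# attacker achieve this?" to grow the tree one level at a time.
root = AttackNode("Extract cryptographic key", [
    AttackNode("Read key from memory", [
        AttackNode("Bypass tamper envelope"),
        AttackNode("Probe memory bus"),
    ]),
    AttackNode("Recover key via side channel", [
        AttackNode("Measure power consumption"),
    ]),
])
```

Enumerating the leaves of the finished tree gives the list of concrete attacks that the protection mechanisms must address.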



Figure 3.2: An attack tree of physical attacks against an embedded device storing cryptographic keys.

This attack tree covers the attacks deemed the most dangerous for an embedded device holding secret keys, such as an encrypted cellphone. These attacks are based on the work done by Weingart in the article “Physical Security Devices for Computer Subsystems”.

Qualitative Risk Analysis

Qualitative risk analysis is a method of analyzing the impact and probability of a risk occurring using an impact and probability matrix [22]. This can be seen in Figure 3.3.

Every attack of interest in this thesis will be analyzed and placed in the matrix to visualize their individual risk.

Figure 3.3: An impact and probability matrix used to perform qualitative risk analysis. Here, the rows correspond to the plausibility of an attack being attempted and succeeding, and the columns to the overall impact if the attack were successful.

The main purpose of doing a qualitative risk analysis is to assess the impact and risk of a threat, so that mitigations and protections can be prioritized accordingly.
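A minimal sketch of such a matrix lookup is shown below. The Python code is hypothetical: the five-step probability and impact scales and the score thresholds are assumptions made for illustration and may differ from the matrix in Figure 3.3.

```python
# Ordered scales for probability (rows) and impact (columns),
# from least to most severe. These labels are assumed for illustration.
PROBABILITY = ["rare", "unlikely", "possible", "likely", "almost certain"]
IMPACT = ["negligible", "minor", "moderate", "major", "catastrophic"]


def risk_level(probability: str, impact: str) -> str:
    """Place an attack in the impact/probability matrix and return a
    coarse risk rating based on the combined score."""
    score = PROBABILITY.index(probability) + IMPACT.index(impact)
    if score >= 6:
        return "high"
    if score >= 3:
        return "medium"
    return "low"


# Usage: a likely, catastrophic attack lands in the high-risk corner,
# while a rare, negligible one stays in the low-risk corner.
level = risk_level("likely", "catastrophic")
```

The point of the lookup is only to make the prioritization repeatable: two analysts placing the same attack in the matrix should arrive at the same risk rating.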

Threat Summarizing

To summarize the threats discovered in this thesis, a radar chart was used. This chart is meant to visualize how large a threat an attack poses, and can be seen in Figure 3.4.



Figure 3.4: A radar chart used to visualize the overall threat of an attack. The axes of the radar chart are ”Impact”, ”Plausibility”, ”Adversary”, and ”Protection Implementation”. The larger the area of the plot, the larger the threat.

An example of this can be seen in Figure 3.4, where the vertical axes show the impact of the attack and the possible adversaries, while the horizontal axes show the plausibility of an adversary using the attack and the difficulty of implementing protection against it. For each axis, the further out from the center, the greater the threat. Consider, for example, an attack whose impact is catastrophic: the plausibility of it being attempted is then almost certain, and the possible adversaries range from a home hacker to an intelligence agency. If, in addition, protection against it is extremely difficult to implement, the overall threat is very high.

• Impact - The impact of a successful attack, on a scale ranging from negligible to catastrophic.

• Plausibility - The plausibility of an attack happening.

• Adversary - This axis represents who the possible adversaries might be, and is derived from the report “ECRYPT Yearly Report on Algorithms and Keysizes”, which presents five possible adversaries [23]:

1. Hacker
2. Small Organization
3. Medium Organization
4. Large Organization
5. Intelligence Agency
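Since the overall threat is read off as the area of the radar plot, it can be computed directly from the four axis scores. The Python sketch below is a hypothetical illustration; the 1 to 5 scoring scale and the even angular spacing of the axes are assumptions, not taken from the thesis.

```python
import math

# The four axes of the radar chart in Figure 3.4.
AXES = ["Impact", "Plausibility", "Adversary", "Protection Implementation"]


def radar_area(scores):
    """Area of the radar polygon spanned by the scores (one per axis),
    with the axes spaced evenly around the circle. Computed as the sum
    of the triangles between each pair of adjacent axes."""
    n = len(scores)
    angle = 2 * math.pi / n
    return sum(
        0.5 * scores[i] * scores[(i + 1) % n] * math.sin(angle)
        for i in range(n)
    )


# A catastrophic, near-certain attack open to any adversary and hard to
# protect against fills the chart; a benign one barely registers.
severe = radar_area([5, 5, 5, 5])
benign = radar_area([1, 1, 1, 1])
```

One consequence of using the area is that the ordering of the axes matters slightly, since each score is multiplied by its neighbours; the chart is therefore best read as a rough visual summary rather than a precise metric.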
