Master Thesis Project

Security Risk Analysis based on Data Criticality

Author: Luyuan Zhou

Supervisor: Narges Khakpour
External supervisor: Simon Roe
Examiner: Francesco Flammini
Reader: Alisa Lincke

Semester: VT 2019

Course Code: 4DV50E

Subject: Computer Science


Abstract

Nowadays, security risk assessment has become an integral part of network security, as everyday life has become interconnected with and dependent on computer networks. A network contains various types of data, often with different criticality in terms of the availability, confidentiality, or integrity of the information. The more critical the data, the greater the risk when it is exploited, so data criticality has an impact on network security risks. The challenge in diminishing security risks in a specific network is how to conduct network security risk analysis based on data criticality. Interesting aspects of this challenge are how to integrate security metrics with threat modeling, and how to consider and combine the various elements that affect network security during security risk analysis. To the best of our knowledge, there exist no security risk analysis techniques based on threat modeling that consider the criticality of data. By extending security risk analysis with data criticality, we take its impact on the network into account in security risk assessment. To acquire the corresponding security risk value, a method is needed for integrating data criticality into graphical attack models using relevant metrics. In this thesis, an approach for calculating a security risk value that considers data criticality is proposed. Our solution integrates the impact of data criticality by extending the attack graph with data criticality information. A network contains vulnerabilities that pose potential threats to it. First, the combination of these vulnerabilities and data criticality is identified and precisely described. Thereafter, the interaction between the vulnerabilities, captured by the attack graph, is taken into account and the final security metric is calculated and analyzed. The new security metric can be used by network security analysts to rank the security levels of objects in the network. By doing this, they can find objects that need additional attention in their daily network protection work.

The security metric could also be used to help them prioritize vulnerabilities that need to be fixed when the network is under attack. In general, network security analysts can find effective ways to resolve exploits in the network based on the value of the security metric.

Keywords: network security risk assessment, attack graph, data criticality, security metric, threat modeling


Preface

First of all, I want to thank my supervisor, Dr. Narges Khakpour, and my external supervisor, Simon Roe of the Outpost24 company, for providing such an interesting topic and for their help and guidance during the project research and thesis writing. I also want to thank Charilaos Skandylas for answering my technical questions and for his advice on writing. Finally, I want to thank my examiner, Francesco Flammini, for his assistance in the process of completing this project.


Contents

1. Introduction
  1.1. Background
  1.2. Motivation
  1.3. Problem Statement
  1.4. Method
  1.5. Contributions
  1.6. Target groups
  1.7. Report Structure
2. Background
  2.1. Security risk assessment
  2.2. Threat modeling
    2.2.1. STRIDE
    2.2.2. T-MAP
    2.2.3. HARMs
    2.2.4. Attack Tree
    2.2.5. Attack Graph
  2.3. Network Security Metrics
    2.3.1. CVSS
    2.3.2. Data criticality
3. Method
  3.1. Scientific Approach
  3.2. Running Example
  3.3. The Approach Outline
    3.3.1. Step 1 - Generating Attack Graph using MulVAL
    3.3.2. Step 2 - Augmenting Attack Graph with Data Sensitivity Information
    3.3.3. DSAC (Data-Sensitivity Aware Cost) Security Metrics
    3.3.4. Enhanced DSAC cost metric
4. Implementation and Evaluation
  4.1. Implementation
  4.2. Case Study
  4.3. Results
  4.4. Evaluation
  4.5. Discussion
5. Conclusions and Future Work
6. References
A. Interaction rules added based on MulVAL's interaction_rules.P
B. Partial configuration information of the network


1. Introduction

There is a saying in The Art of War: “Precise knowledge of self and precise knowledge of the threat leads to victory”. An essential part of the work of a cybersecurity team concerns minimizing security risk; therefore, comprehending the actions and behaviors of attackers is critical to succeeding in this goal. Threat modeling can be used to capture the relationships between the vulnerabilities in a network. Hosts with sensitive data are more valuable and may expose their owners to more risk than other hosts in the network. Therefore, they need more attention and protection. The overall goal of this report is to integrate threat modeling with data criticality for security risk evaluation, helping security analysts formulate more effective strategic decisions. These decisions should be applicable to the daily protection of the network, and may also be applied to the repair work after the network is attacked.

1.1. Background

The general field related to this thesis is security risk analysis. The first step of a security risk assessment is to identify vulnerabilities on the target that can be exploited; these are then evaluated, and the final result is used to develop effective strategies for mitigating exploits or recovering a network from attacks. Threat modeling can be applied to support network security analysts in performing security risk assessment by visualizing information about networks and attacks. The classification information of the vulnerabilities in the network comes from CVSS (https://www.first.org/cvss/v3.0/user-guide), the Common Vulnerability Scoring System; it is used in related fields to evaluate vulnerabilities [14].

When determining security risk, other factors must be considered in addition to the information on the vulnerability itself. Hosts in the network store different types of data, since different users have access to different data, have different responsibilities, play different roles, and exhibit different behaviors in the network. In a network with diverse vulnerabilities, because the objects in the network are interconnected, an attacker can exploit the vulnerabilities on one object to access other objects and gain higher privileges in the network in order to achieve her goals. Objects can be computers or terminal devices that are connected to each other and share resources.

Threat modeling can be used to visualize the associations between vulnerabilities; it is the key to a focused defense. It is the use of abstractions to aid in thinking about risks [1]. Several models and languages have been introduced for threat modeling in the literature [2]. The threat modeling process proposed by Microsoft (https://www.microsoft.com/en-us/) consists of the following steps: identify assets, create an architecture overview, decompose the application, identify the threats, document the threats, and rate the threats [66]. Without threat modeling, it is hard to deal with IT security problems when they happen [1]. Common network threats are DDoS attacks [53], Trojan horses [54], phishing [55], and worms [56]. Threat modeling techniques include the attack tree [18], the Petri net [52], and the attack graph [8], among others. The attack graph is a model that enables the modeling of the different scenarios in which an attacker implements an attack [3]. It can help network defenders track the movements of the attacker and allows us to consider dependencies among attack steps. Moreover, the nodes in the attack graph can be assigned information on


security metrics. Therefore, we consider the attack graph a suitable choice. There are different types of attack graphs, and each uses different information and knowledge, such as attributes of attackers, network vulnerabilities, attack data collected by monitoring mechanisms, and network topology information. Creating an attack graph with the least amount of information requires the connection information between hosts and the vulnerability information on each host [4]. The logical attack graph is one kind of attack graph [35].

The first attack graph generation tool was proposed by a group from Princeton University. MulVAL [49] and NetSPA [44] are typical logic-based tools in the field of attack graph generation [15]. Nodes in the attack graph are of two types: exploit nodes, which represent the exploitation of vulnerabilities, and condition nodes, which indicate the prerequisites or consequences of exploitation. Based on the attack graph, Wang et al. [4] proposed an approach to calculate the probability of exploitation. Cao et al. [5] scored the attack impact of exploitation via their algorithm.

Lord Kelvin, a well-known British mathematical physicist and engineer who made significant contributions to thermodynamics and electromagnetism, said, "Measuring things is necessary for improving things." A metric is composed of a series of measurements; it describes things by quantifying attributes [7], [57]. Security metrics are metrics proposed and used to measure the security of a system or a network. The security status of a network can be understood through them, they can be used in the process of making relevant security decisions, and they are helpful in choosing the configuration that best improves the ability of a network to withstand attacks [7], [57], [58].

1.2. Motivation

Ensuring network security is the most basic requirement for using a network and is thus critical to any network. The following are security risk analysis methods and security metrics that have been proposed. Graphical threat modeling can be used to identify the threats that a system or a network may face; Hussain et al. [28] summarized some existing threat modeling approaches, such as STRIDE [59], abuser stories [60], and fuzzy logic [59]. Metrics used for security risk analysis were discussed in detail in [7], for example, the VEA-bility metric [61] for assessing network security, the false positive and false negative rates of an intrusion detection system [62], which are related to the effectiveness of a strategy, and the number of users with passwords following the password management security policy, which is a metric associated with compliance [63].

Through literature surveys, we find that there have been several studies on performing security risk analysis using security metrics. Edbro and Hansson [6] proposed a method that can help network security analysts make decisions about the prioritization of repairing hosts with vulnerabilities more quickly than doing this prioritization manually. Additionally, they came up with ways to find high-risk users through the data they collected in a background study. In “Identifying high risk targets in a corporate multi-user network”, Edbro and Hansson [6] verified that their method is more accurate than the state-of-the-art method by comparing the values of accuracy, Root Mean Square Error (RMSE), and Cohen’s kappa, since they focus not only on the regular information on vulnerabilities but also on the users of the hosts and their access, rather than doing the prioritization manually. The value of a security metric is calculated as a result of analyzing risk objectives. Ramos et al. [7] proposed properties and classifications of security metrics. The basic properties of security metrics are granularity, availability, cost-effectiveness, localization, and validation. The classifications of security metrics are target type, objective type, construction type, automation level, measurement consistency, measurement type, and measurement moment.

Existing security risk assessment methods based on attack graphs have considered vulnerability information [5], and some have also considered user behavior [6], [16], attack behavior [3], the overall risk of a network [10], the exploitation probability of vulnerabilities [4], and insider threats [17]. Although these methods help identify higher-risk scenarios, data criticality is still not considered. Data sensitivity allows organizations to pay further attention to network nodes that have access to or contain sensitive enterprise information. For example, consider two hosts on the same network: the host of a junior developer has almost no access to the production environment, while the database administrator's host has access to all the customer data of the company. If we assume that the cost of both attacks is the same, then from an attacker's perspective, compromising the database administrator's host is highly effective because it allows her to gain access to more sensitive company information. It can be seen that the higher the criticality of the data, the higher the risk when it is exploited; therefore, data criticality has an impact on the overall security risk. By extending the attack model with data criticality, the impact of data criticality in the network can be considered in the process of security risk assessment. To assess the security risk considering data criticality, novel approaches and security metrics should be proposed.

1.3. Problem Statement

When the network is attacked, network security analysts need to prioritize the vulnerabilities to be patched. Vulnerabilities that have a greater impact on network security should be prioritized to minimize the loss of assets in the network. To help network security analysts formulate policies, a suitable security risk assessment method should be proposed that considers the costs of attacks and defenses while taking into account the criticality of data. This problem can be formulated in the following specific research questions.

RQ1: How can we instrument the data criticality information into a threat model so that it enables us to perform security risk analysis considering the criticality of data?

RQ2: How can we perform security risk analysis using the above-extended threat model considering data criticality?

RQ3: How can we compute the security metrics for the combination of vulnerabilities and data criticality and considering the impact between vulnerabilities?

RQ4: How can we verify and evaluate the results of the experiment?

1.4. Method

Threat modeling aims to consider exploitations and the dependencies between vulnerabilities. We need an approach for security risk analysis that considers the criticality of data in addition to the dependencies of vulnerabilities in order to obtain a proper security assessment. To achieve this goal, the following steps have been followed:


1. Literature survey on threat modeling, data criticality, and security metric.

2. Incorporation of vulnerability information and data criticality into an attack graph.

3. Proposing a novel security risk analysis approach defined over attack graphs that considers the dependencies of vulnerabilities as well as the criticality of data stored on each component.

4. Implementation and evaluation of the approach on a real-life case study provided by Outpost24.

Outpost24 is a leading network security assessment company founded in 2001; it is committed to helping its customers reduce their cyber exposure and improve their cyber resilience.

We intend to provide a new method and a series of security metrics for network security risk assessment. In order to consider data sensitivity when measuring the security risks of a network, we need to find a suitable threat model that is able to integrate data sensitivity information. Through literature surveys, we find that vulnerability information and data sensitivity information can be combined and incorporated into the attack graph. The data criticality and vulnerability information can each be refined into a set of metrics. The criticality of the data is reflected in confidentiality, integrity, and availability. Vulnerability metrics include the potential damage to the confidentiality, integrity, and availability of the asset in case of exploitation of vulnerabilities. We refer to the vulnerability metrics in the CVSS database. The defined data criticality depends on the real-life network environment, policies, and corporate rules and regulations. Therefore, relevant policies need to be considered when scoring the criticality of data stored in objects in the network. These scores should be obtained from the company that uses the network; thus, we assume that experts with knowledge of network security and data classification score the data stored in the objects in the network by criticality. Security metrics considering data sensitivity information are contained in the nodes of the attack graph. An attack graph consists of nodes and edges and shows the dependencies between attack steps. For example, if an attacker wants to exploit vulnerability v2, she must first exploit vulnerability v1; v1 can be regarded as one of the conditions for exploiting v2, which reflects the dependency between the vulnerabilities. We propose security metrics that take into account the sensitivity of the data and the dependencies among vulnerabilities in the attack graph to measure the security risks of a network, and we introduce their calculation methods. We evaluate these security metrics on a real-life network.

In general, we propose a new approach for integrating data criticality information in an attack graph, along with security metrics and algorithms to calculate the security metrics automatically. Our method should be automatic, efficient, and applicable to large networks. Our security metrics should help us understand the relationship between the data stored in network nodes and the potential threats to a network. The proposed method should be able to help network administrators identify important assets and critical parts of a network that need attention and protection. We implemented the approach and evaluated it on a real-life case study provided by Outpost24.


1.5. Contributions

The contribution of this thesis is a security risk analysis method based on threat modeling that considers the criticality of the data stored on each host. More specifically:

We first propose a new security metric that can be used to assess the security risk level of objects in the network.

We then propose a method for considering the interactions between vulnerabilities based on threat modeling.

1.6. Target groups

This thesis contains an in-depth study of threat modeling-based network security risk assessment methods. All groups involved in the cybersecurity risk assessment domain ought to benefit from this work, for example, network engineers responsible for the design, construction, operation, and maintenance of networks. In the process of building a network, more protection measures can be provided for important objects. When patching vulnerabilities in a network, the patching order can be sorted so as to prioritize the repair of critical vulnerabilities. The new security metric should also be helpful for researchers in creating new techniques to deal with network threats. In addition, all organizations (enterprises) may face cybersecurity risks.

1.7. Report Structure

This section provides a brief overview of the remaining part of this thesis. The following list is a summary of each chapter.

- Chapter 2 introduces research in the domains relevant to this thesis and the reasons for choosing the methods and techniques used.

- Chapter 3 proposes a new method for security risk analysis that takes data criticality into account.

- Chapter 4 mainly describes the implementation of the proposed method, as well as the results of applying the proposed method on a case study provided by Outpost24 and the analysis of the results.

- Chapter 5 concludes the report and discusses future work.


2. Background

This chapter introduces the methods and technologies of the related research areas. We first conduct a literature survey of network security risk analysis and data classification, as they are the basis of our research. Then we analyze the characteristics and usage of various threat modeling techniques, as we want to find a threat model that can be combined with data criticality and reflects the relationships between vulnerabilities. Finally, we discuss the attributes of security metrics and the technologies we use in the proposed security risk analysis method.

2.1. Security risk assessment

With the coming of the internet age, numerous enterprises and organizations have suffered enormous losses due to network attacks. Attacks on networks always exist: even where corresponding security measures are in place, not all network attacks can be ruled out [22]. Security risk assessment is one of the most important parts of security risk management [20]. It can protect the network security of enterprises by identifying the threats in the network and their priorities, and it helps network security analysts rationally plan network resources to achieve an effective allocation of them, thereby strengthening the resilience of the network environment. Security risk assessment is therefore essential for guaranteeing the security of a network [23].

The results of the security risk analysis ought to demonstrate:

1. The current security risk level of the network.

2. The threats that the network faces and the impact of these threats on the network.

3. The conclusion with recommendations.

The 3 main factors of security risk assessment are:

1. The effect of the exploited vulnerability on the network.

2. The method for determining the effectiveness of in-place countermeasures.

3. Recommendations to minimize risk against identified threats.

An integrated network security risk analysis consists of nine steps:

• Worth assessment: The foremost step in security risk analysis is to identify the worth of the protected objects in the network. The exploitation of an object with higher value causes a greater loss, e.g., the leakage of stored data or the loss of system functions. Such objects in the network should therefore receive more attention.

• Delimiting the threats faced: Identifying the threats related to the objects defined in the previous step, as well as determining the extent of impact and the frequency of occurrence of these threats, such as unauthorized login, illegal reading of sensitive data, and vulnerabilities in system configuration settings.

• Definition of vulnerability: The definition of security risk levels by investigating vulnerability interactions. A vulnerability does not by itself cause damage to the system automatically; a dormant vulnerability does not affect the system until it is triggered.

• Correlation between vulnerabilities and threats: A vulnerability poses threats to the system when it is exploited. The accuracy of the assessment increases dramatically by correlating vulnerabilities with threats, as there are miscellaneous categories of vulnerabilities and threats in the network.

• The influence of threats on the system: Threats are potential hazards classified as disclosure, tampering, devastation, and repudiation of service [21]. Disclosure concerns the confidentiality of information. Tampering can disguise the consequences caused by threats. Devastation represents substantial damage to the protected object and is more severe than tampering. Repudiation of service affects the use of features. This classification can be employed to support decision making in order to achieve a reasonable allocation of effective resources.

• In-Place Countermeasures: Choosing in-place countermeasures belongs to the preliminary data collection work of the security risk analysis. In-place countermeasures can be broken down into technical countermeasures and administrative countermeasures. They can also be broken down into preventive, detective, and corrective. Preventive countermeasures aim to forestall the occurrence of an incident that threatens the system. Detective countermeasures ensure that an alert is issued when a threat is detected. Corrective countermeasures signify the automatic patching of threats after they are appraised; the status of certain vulnerabilities can be altered from exploitable to ordinary.

• Residual Risk: Not all threats can be detected and mitigated even when in-place countermeasures exist; these remaining threats are known as residual risks. Therefore, they need to be further evaluated. The evaluation requires identifying: 1) objects with great value to the network, and the probability of them facing threats; 2) all meaningful and necessary countermeasures apart from the in-place countermeasures. These conclusions can be used to select countermeasures for the remaining threats that cannot be detected or mitigated.

• Complementary Countermeasures: Countermeasures for the remaining threats are formulated by considering both cost and efficiency in order to attain the goal of minimizing risk.

• Security risk assessment report: The report embodies the outcome and course of the security risk analysis. It should be a reference document that can be applied to develop measures for risk recovery and security protection [21].

2.2. Threat modeling

Threat modeling is a method used to evaluate system security. It is closely related to security risk assessment and helps network security analysts appraise the principal objects in the network that require protection, as it enables the consideration and calculation of the intricate web of threats in the network. Threat modeling can be used to treat issues from distinct perspectives, such as those of attackers, assets, maintenance personnel, and systems [24], [25]. There are many threat modeling techniques, e.g., Attack Trees, STRIDE, Elevation of Privilege, T-MAP, Petri Net, Data Flow Diagram, Activity Diagram, Risk Reduction Overview, Abuser Stories, Fuzzy Logic, CORAS, and Attack Graph [28], [29]. The following is a concise characterization of several common techniques.

2.2.1. STRIDE

The STRIDE threat modeling approach was put forward by Microsoft; it is a tool for identifying the threats that a network confronts. STRIDE divides threats into six categories whose initials make up its name. The following are the six types of threats [26], [27].


• Spoofing: Gaining unauthorized access as a user who does not have access privileges.

• Tampering: Illegal modification of stored data.

• Repudiation: A user refuses to acknowledge that they have performed an operation.

• Information Disclosure: Confidential data can be freely accessed by unauthorized users.

• Denial of Service: The system loses the capability to function.

• Elevation of Privilege: A user with limited permissions acquires higher access privileges.

Finally, the possible threats to the network and the corresponding mitigation measures are given.

2.2.2. T-MAP

T-MAP is a threat modeling technique based on Commercial-Off-The-Shelf (COTS) systems [28]. The principle of its calculation is to quantify security risks by considering the weight assigned to attack paths on COTS systems. Its strength is that it pays more attention to the business worth of the enterprise and the network security environment [29]. When calculating the threat on each attack path, T-MAP considers not only the security impact of the vulnerability but also its impact on the interests of the company. Finally, the risk value of the entire attack scenario is acquired by summing the risk values of the individual attack paths. The development of T-MAP uses multiple class diagrams, including UML class diagrams, access class diagrams, vulnerability class diagrams, target asset class diagrams, and affected value class diagrams. A software tool called Tiramisu can automate the T-MAP framework. It requires three types of input data: vulnerability information, the IT infrastructure information of the organization, and the dependency between the infrastructure information and the business worth of the organization [30].
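To illustrate the aggregation step, the sketch below sums per-path risk values into an overall scenario risk, as T-MAP does; the path names and weights are hypothetical placeholders, not values from T-MAP or this thesis.

```python
# Sketch of T-MAP-style aggregation: the risk of the whole attack scenario
# is the sum of the risk values of its individual attack paths.
# Path names and weights below are hypothetical.
attack_path_risk = {
    "internet -> web server -> customer database": 7.5,
    "internet -> mail server": 2.0,
    "vpn gateway -> admin workstation -> customer database": 5.5,
}

scenario_risk = sum(attack_path_risk.values())
print(f"Overall scenario risk: {scenario_risk}")  # 15.0
```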

2.2.3. HARMs

When an attack model enumerates all full attack paths, its complexity increases exponentially; this class of problems is called the scalability problem. Dynamic adjustments in the network system, such as changes to network configuration settings or vulnerabilities, may render the attack model unusable; this is the dynamic adjustment problem. To solve the scalability problem and the dynamic adjustment problem, Hong et al. [31] proposed the hierarchical attack representation models (HARMs). HARMs use a layered approach for network security analysis with two layers: the upper layer contains the topology information of the network, and the lower layer contains the vulnerability information.

Its build process has five steps: preprocessing, construction, representation, evaluation, and modification. During preprocessing and construction, the network information is handled and the attack model is built, respectively. The attack model is drafted and stored during the representation process. The evaluation process includes the identification of probable attack paths and the feasibility of the success of the entire attack. In the modification process, changes in the network are captured, and appropriate changes to the attack model are made. Scalability issues still exist when this method is applied to large-scale networks.


2.2.4. Attack Tree

The attack tree model was proposed by B. Schneier in 1999 [18]. It provides a way to describe the security threats a system faces and the attacks that the system may suffer. The tree structure is used to represent the attacks that the system may face. The root node represents the target or asset to be attacked, and the leaf nodes denote the means to achieve the sub-goals of the attack [19].

The attack tree has a multi-level data structure, which means it has multiple levels of nodes, including the root node and leaf nodes. The lower nodes of a node are its child nodes, which are refinements of that node. A refinement indicates either conjunction or disjunction; these are the relationships between child nodes. Leaf nodes cannot be refined further, which means they have no child nodes [51]. A lower node directly drawn from the current node is a child node of the current node, and the current node is the parent node of its child nodes. Starting from the bottom of the attack tree, child nodes must be realized in order to achieve the goal of the parent node. The root node is the ultimate goal of the attacker; when the root node is attained, the attack has been realized. If the attacker has several final intents, several attack trees need to be considered in an integrated analysis. There are two relationships among child nodes, “AND” and “OR”. OR nodes are used to denote alternatives, while AND nodes are used to represent different steps towards the same goal. That is, OR means that the parent node can be realized when one of the directly connected child nodes is satisfied, and AND means that all directly connected child nodes must be satisfied simultaneously to bring about the goal of the parent node [38].

The attack tree mainly considers the problem from the perspective of the attacker; it can be used to analyze the attacker's possible attack targets and feasible methods [37]. Furthermore, the intuitive explanation of attack attributes is beneficial for associating security risks with the attributes involved in an attack, for example the attack cost required by the attacker, the duration required for the attack, the access privilege, and the technical skills of the attacker; all of these can be included in an attack tree [34]. After the attack tree is formed, the leaf nodes are assigned values, which are finally used to calculate the security risk value of the attack goal. The assignment of the initial values of the leaf nodes needs to be carried out by experts. An automated tool called SecurITree, developed by Amenaza Technologies, can be used to implement the above threat model; it is a graphical tool for simulating attack trees based on a mathematical attack tree model [28], [34].
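As a small illustration of the AND/OR semantics and the bottom-up use of leaf values described above, the sketch below propagates attacker costs through a toy attack tree. The propagation rule (minimum over OR children, sum over AND children) is one common convention, and the tree itself is hypothetical; neither is taken from this thesis.

```python
# Hypothetical attack tree: the root is the attacker's goal, leaves carry
# expert-assigned costs, and internal nodes are either "AND" or "OR".
tree = {
    "open safe":         ("OR",  ["pick lock", "learn combination"]),
    "learn combination": ("AND", ["find written combo", "read combo"]),
}
leaf_cost = {"pick lock": 30, "find written combo": 10, "read combo": 5}

def attacker_cost(node):
    """Propagate leaf costs to the root: OR = cheapest child, AND = sum of children."""
    if node in leaf_cost:
        return leaf_cost[node]
    kind, children = tree[node]
    child_costs = [attacker_cost(c) for c in children]
    return min(child_costs) if kind == "OR" else sum(child_costs)

print(attacker_cost("open safe"))  # 15: "learn combination" (10 + 5) beats "pick lock" (30)
```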

The limitation of the attack tree is that it can only be used to make judgments about and analyses of known attacks. It relies on the attack knowledge of the analysts who build the attack tree. One currently effective solution is to use the attack information stored in the standard attack libraries of common systems [36].

2.2.5. Attack Graph

The attack graph model was proposed by Phillips and Swiler [8]. It is frequently applied for security risk analysis of the vulnerabilities in a network. It can be used to visualize the behavior of attackers exploiting the vulnerabilities in the network to achieve their attack goals. There are always vulnerabilities in a network, and there are associations between them; for instance, one vulnerability exploitation may create favorable conditions for another exploitation, and thus pose a threat to critical resources in the network. Simulating attacks on a network with vulnerabilities is employed in order to find these associations. The attack graph can be used to visualize the attack paths, so that network defenders can identify potential threats and high-risk areas in the network and specifically enhance the network's security defense capabilities. The input for establishing an attack graph is information about the network topology, attackers, systems, and vulnerabilities, and incomplete information can lead to poor security risk analysis [42]. Eventually, a rich view displaying the affiliations among the vulnerabilities is formed [43].

Larger attack graphs contain more vulnerabilities. If many attack paths pass through the same vulnerability, then this vulnerability has a great impact on the network. Correspondingly, if few paths pass through a vulnerability, its impact on the network is minimal [35].

The network defender can select the priorities for vulnerability patching and network reconfiguration according to the severity of the intrusion consequences, in order to guarantee the functioning and security of the objects in the network. These priorities are defined by the defenders based on the data illustrated by the attack graph [40], [41].

Since the idea of the attack graph was raised in 1998, researchers have done a lot of studies regarding it. In the early days, some researchers (called the Red Team) conducted the security analysis of a network in the form of a hand-drawn attack graph. This method is both cumbersome and error-prone, and has low scalability when the network scale needs to be broadened. Eventually, automated methods for attack graph generation were demanded. Such methods can generate two versions of attack graphs: the detailed attack graph contains all possible attack paths in the network, while the brief attack graph only contains the attack paths towards a specific goal of the attacker. The attack graph generation methods proposed in the early days include model checking, custom algorithms, and logic-based methods [35]. Attack graphs can be used to quantify and analyze the cybersecurity risks of enterprises.

The Petri net is suitable for describing asynchronous and concurrent computer system models [20]. Compared with the attack tree and the Petri net, the attack graph has a stronger ability to describe the network attack process and a wider range of applications.

The genre of attack graphs incorporates the exploit dependency attack graph, the multiple prerequisite attack graph, and the logical attack graph, among others. The attack graph that enumerates the sequences in which vulnerabilities may be exploited is the exploit dependency attack graph. The following is the exploit dependency attack graph from [35]. Oval nodes represent vulnerability information in the graph; other nodes represent network configuration information and the skill of the attacker.


Figure 2.1: The exploit dependency attack graph from [35]

The http(H0, H1) node in the graph is a network condition node, which means that the http service on host 1 can be accessed from host 0. The user(H2) node is an attacker ability node that indicates the attacker has user privilege on host 2. A directed edge points to the consequence of an attack, and the opposite direction is the prerequisite of the attack. There is an ssh vulnerability ’cve-2002-0640’ on node ssh(H0, H2). User(H2) and http(H2, H1) are necessary conditions for achieving V1(H2, H1), which can eventually lead to user(H1). The exploit dependency attack graph can enumerate the attack paths intuitively and concisely. An attack path illustrates the exploitation of critical resources of the network. As shown in Figure 2.1, the attacker has two possible attack paths to gain the privilege on host 3:

1. V2(H0, H2) → V1(H2, H1) → V3(H1, H3) → V4(H3).

2. V1(H0, H1) → V3(H1, H3) → V4(H3).
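The two paths above can be recovered mechanically. The sketch below encodes a simplified version of Figure 2.1 that keeps only the exploit nodes (our own simplification) and enumerates all exploit sequences that reach the goal V4(H3).

```python
# Simplified exploit-dependency graph from Figure 2.1, keeping only exploit
# nodes: an edge v -> w means that exploiting v enables exploiting w.
edges = {
    "V2(H0,H2)": ["V1(H2,H1)"],
    "V1(H0,H1)": ["V3(H1,H3)"],
    "V1(H2,H1)": ["V3(H1,H3)"],
    "V3(H1,H3)": ["V4(H3)"],
    "V4(H3)":    [],
}
start_exploits = ["V2(H0,H2)", "V1(H0,H1)"]  # exploits directly reachable by the attacker
goal = "V4(H3)"

def attack_paths(node, goal, path=()):
    """Depth-first enumeration of all exploit sequences ending at the goal."""
    path = path + (node,)
    if node == goal:
        yield path
    for nxt in edges.get(node, []):
        yield from attack_paths(nxt, goal, path)

for s in start_exploits:
    for p in attack_paths(s, goal):
        print(" -> ".join(p))
# V2(H0,H2) -> V1(H2,H1) -> V3(H1,H3) -> V4(H3)
# V1(H0,H1) -> V3(H1,H3) -> V4(H3)
```

Counting how often each exploit appears across the enumerated paths also gives a rough indication of which vulnerabilities lie on many attack paths, in the spirit of the observation above about high-impact vulnerabilities.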

The attack graph produced by the network security planning architecture (NetSPA) approach is called a multiple prerequisite attack graph. There are three genres of nodes in this attack graph, which represent states, conditions, and vulnerabilities. A state node indicates the privilege that the attacker needs to access a particular host. A condition node represents a necessary condition for exploiting a vulnerability. A vulnerability node refers to the vulnerability information. A directed edge between a state node and a condition node represents the ability that the attacker can gain based on that state. A directed edge between a condition node and a vulnerability node indicates the vulnerability that can be exploited when the condition is satisfied. Correspondingly, a directed edge between a vulnerability node and a state node refers to the state that the attacker can reach by successfully exploiting the vulnerability. If the attacker has not achieved her final attack goal, then each attack on the attack path can be understood as preparation for the next attack.


Figure 2.2: The multiple prerequisite attack graph from [35]

As shown in Figure 2.2, the elliptical nodes, rectangular nodes, and triangular nodes respectively represent state nodes, condition nodes, and vulnerability nodes. A, B, C, D, and E denote the privileges of the attacker on distinct hosts. The edges from A to "Can Reach H1, H2" and on to V1 and V2 mean that after the attacker gains the ability of state A, she can access H1 and H2, and the vulnerabilities can then be exploited by her. Only after V1 is exploited can the attacker attain state B and access host 3. After that, the attacker can achieve state D by exploiting V3.

A logical attack graph is one of the principal methods used for corporate network security assessment and protection. "It helps network security analysts analyze latent threats in the network by analyzing the nexus between network configurations and vulnerabilities" [46].

Figure 2.3 is an example of a logical attack graph. As can be seen from the graph, there are three types of nodes: oval nodes, rectangular nodes, and diamond nodes. The rectangular nodes are the primitive fact nodes representing the network configuration settings, such as network connectivity, firewall rules, and data on the host. Oval nodes are derivation nodes representing vulnerability exploitation; they depict the actions of the attacker to gain capabilities. Derived fact nodes denote privileges gained by the attacker; she gains these abilities by exploiting vulnerabilities. For example, the attacker can view user profiles by taking over the account of a manager. Such nodes are represented by diamond nodes in the logical attack graph [45].


Figure 2.3: The logical attack graph from [45]

There are two main relationships between nodes in a logical attack graph: the AND-relation and the OR-relation. An oval node can be satisfied only when all of its preconditions are satisfied; therefore the oval nodes have an AND-relation. For example, a5 is a derivation node in Figure 2.3; it can only be utilized if c3 and p2 are satisfied. The predecessors of derived fact nodes are rule nodes. A diamond node can be achieved when any one of its rule nodes is satisfied, so the diamond nodes have an OR-relation in the logical attack graph. For instance, g stands for the root privilege; it is the goal of the attacker. As long as one of a5 and a4 is satisfied, g can be achieved.

The edges in the logical attack graph are used to represent the dependencies between nodes, that is, causality. For example, if the attacker wants to exploit the vulnerability at a1, then there must be a connection between c2 and a1, and the condition on c2 must have been satisfied; hence there is an edge from c2 to a1. P3 is the node derived from a1, so there is an edge from a1 to p3. MulVAL is a prevalent tool for generating attack graphs.
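A minimal sketch of this AND/OR evaluation, using the node names mentioned above for Figure 2.3; preconditions that the text does not state (for example, what a4 requires, and whether p2 is available) are filled in as illustrative guesses.

```python
# Sketch of logical attack graph evaluation (AND for rule/oval nodes,
# OR for derived/diamond nodes), loosely following Figure 2.3.
# Preconditions not stated in the text are illustrative guesses.

primitive_facts = {"c2", "c3", "p2"}           # assumed-true configuration facts
rule_nodes      = {"a1": ["c2"],               # AND: all preconditions required
                   "a5": ["c3", "p2"],
                   "a4": ["c1"]}                # c1 is not in the fact base here
derived_nodes   = {"p3": ["a1"],               # OR: any one supporting rule suffices
                   "g":  ["a4", "a5"]}          # g = attacker's goal (root privilege)

def satisfied(node, seen=frozenset()):
    """Recursively check whether a node is reachable by the attacker."""
    if node in primitive_facts:
        return True
    if node in seen:                            # guard against cycles
        return False
    seen = seen | {node}
    if node in rule_nodes:                      # AND-relation
        return all(satisfied(p, seen) for p in rule_nodes[node])
    if node in derived_nodes:                   # OR-relation
        return any(satisfied(r, seen) for r in derived_nodes[node])
    return False

print(satisfied("g"))   # True, via a5 (c3 AND p2)
print(satisfied("p3"))  # True, via a1 (c2)
```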

2.2.5.1. MulVAL

MulVAL and the MIT-implemented NetSPA system are logic-based implementations of attack graph generation. In 2006, MIT introduced NetSPA, which uses the multiple prerequisite attack graph to represent the potential attack paths through which attackers can exploit known vulnerabilities. Ou et al. proposed MulVAL in 2005 [49]. The full name of MulVAL is multi-host, multistage vulnerability analysis; it is a tool for generating attack graphs by depicting the interactions between the vulnerabilities that an attacker might exploit and the configurations that exist in the network. It generates the attack paths from exploits to possible attack targets [9], [35], [48].

MulVAL has powerful network data acquisition capabilities and performance advantages. MulVAL uses the Datalog language to describe the network and is a security analyzer based on Datalog. The information in the vulnerability database, the configuration information of each host, and other related information are encoded into Datalog through the processing of the program, so that the inference engine can analyze and calculate the interactions between the various components in the network. The size of the logical attack graph generated by MulVAL grows with the size of the network as O(n²), where n is the number of machines in the network [47]. In the logic proof graph generated by MulVAL, there are three types of nodes. The primitive fact nodes are represented by rectangles, which indicate host information and configuration settings. The derived fact nodes represent privileges; they are expressed as diamonds and generated by particular rules. The interaction rule nodes can be seen as exploitations; they are expressed as ovals [5].

In order to generate an attack graph, MulVAL needs two kinds of input files: a rule file and a data input file. The rule file covers the rules for the interaction of facts in the network. MulVAL has its own default rules, and users can establish and employ their own rule files according to their different demands. The data input file summarizes the configuration information of the hosts and servers in the network [42].

Figure 2.4: The framework of MulVAL [47]

Figure 2.4 shows the framework of MulVAL. The Open Vulnerability Assessment Language (OVAL) is a language that can be used to evaluate the security of vulnerabilities. MulVAL generates the logical attack graph by integrating the scan results of the OVAL scanner with the interactions between vulnerabilities. The scan results are vulnerability information and network configuration data [9]. Figure 2.5 is a simple logical attack graph generated by MulVAL. The attacker is located at node 26; if she exploits the vulnerability at node 24, then she can access the web server at node 17. The value on the right side of each node indicates the event occurrence probability calculated by MulVAL's algorithm. The probability of the event contained in node 17 is 0.9971. Our own metric is defined later in this study, and an exhaustive interpretation of it is given in the Method chapter.

Figure 2.5: A logical attack graph of MulVAL


2.3. Network Security Metrics

In this section, we first introduce the fundamental concepts, attributes, and classifications of security metrics. We then describe the information gathered about the metrics used in this study, which includes the information in the Common Vulnerability Scoring System (CVSS) database and the definition of data criticality.

Security metrics can be employed to support network security analysts in enhancing network resilience by discovering the most effective network configuration. Metrics are generated from a set of measurements and rules for analyzing those measurements; measurements are raw data [7]. Simple measurements have no comparative value; they need to be statistically analyzed and condensed into metrics.

When defining a new security metric, some of its attributes need to be considered. These properties are listed below:

• Granularity: Full consideration should be given to the differences between attribute values. This means that the scale of the metric should be sufficiently fine-grained and should not be treated as a coarse distinction.

• Availability: Its functionality should be implementable; it must be possible to compute the metric for the target system.

• Cost-effectiveness: The calculation cost of the metric should be reasonable.

• Localization: To facilitate the use of the metric, a metric should have explicit information pertaining to the type of scale, the range of values of the scale, and their implications.

• Validation: The metric used ought to be related to the security attribute values of the system being measured, so it is important to select a metric that reflects these values.

Security metrics can be classified according to the following criteria:

• Target Type: The target being evaluated, such as software or a network.

• Objective Type: The reference to be analyzed, such as economics or effectiveness.

• Construction Type: How the metric is extracted; it includes two types, measurement-based and model-based. Models such as attack trees and attack graphs belong to the model-based type. Such models demand input parameters such as attacker properties, vulnerability information, and network configuration information.

• Automation Level: In view of the extent of automation in the procedure of computing metrics, metrics can be classified into manual, automatic, and semi-automatic.

• Measurement Consistency: Security metrics are classified as subjective or objective according to whether they depend on human subjective judgment. Subjective judgment is unavoidable in this research area.

• Measurement Type: Security metrics can be divided into quantitative and qualitative according to the type of measurement. Quantitative metrics can be used for arithmetic calculations, while qualitative metrics focus on the expression of written meaning.

• Measurement Moment: There are dynamic security metrics and static security metrics. The properties of statically measured objects do not vary over time. If the properties of the target change dynamically, dynamic security metrics are required.


2.3.1. CVSS

CVSS, the Common Vulnerability Scoring System, is used as an open industry standard to measure the severity of vulnerabilities and to help determine the urgency and importance of the incidents that need to be addressed. Its main role is to help users establish standards for measuring the severity of vulnerabilities, so that vulnerabilities can be compared and the priority of patching them determined. The maximum score for a vulnerability is 10 and the minimum score is 0. Vulnerabilities with scores between 7 and 10 are considered serious, intermediate vulnerabilities are scored from 4 to 6.9, and low-level vulnerabilities are rated on a scale of 0 to 3.9. CVSS consists of three basic metric groups: base, temporal, and environmental. Each of them comprises a set of metrics [10], as can be seen in Figure 2.6.

Figure 2.6: CVSS Metric Groups
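The severity bands described above translate directly into a small lookup; a minimal sketch:

```python
def cvss_severity(score: float) -> str:
    """Map a CVSS score (0-10) to the severity bands described above."""
    if not 0.0 <= score <= 10.0:
        raise ValueError("CVSS scores range from 0 to 10")
    if score >= 7.0:
        return "serious"
    if score >= 4.0:
        return "intermediate"
    return "low"

print(cvss_severity(9.3))  # serious
print(cvss_severity(5.0))  # intermediate
```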

The following is an introduction to the groups and their metrics [7], [39]:

• Base Metric Group: Represents the fundamental and intrinsic features of vulnerabilities that do not change over time or with the operating environment (a sketch of how these metrics combine into the base score follows this list). This group has the following metrics:

- Access Vector: This metric describes how an attacker could exploit a vulnerability, the metric values are local, adjacent network, and network.

- Access Complexity: This metric depicts how easy it is for an attacker to exploit the vulnerability; the easier the exploitation, the higher the rating. Corresponding metric values are high, medium, and low.

- Authentication: This metric relates to the authentication requirements the attacker must meet to exploit a vulnerability. Vulnerabilities with lower authentication requirements are rated higher. Available values are multiple, single, and none.

- Impact: Describes the potential damage to three aspects of the asset when the vulnerability is exploited: confidentiality, integrity, and availability. The metric values are complete, partial, and none.

• Temporal Metric Group: These metrics represent features of vulnerabilities that change over time, in contrast to the base metric group. This group consists of the following metrics:

- Exploitability: Indicates the difficulty of exploiting a vulnerability. It involves the exploitation technique and the attributes of the exploit code; harder exploits have lower scores. It has five metric values: unproven, proof-of-concept, functional, high, and not defined.


- Remediation Level: Represents the patch that can be used to fix a vulnerability. Available metric values are official fix, temporary fix, workaround, unavailable, as well as not defined.

- Report Confidence: Information on the likelihood of exploitation and the capability gained by the attacker through the vulnerability exploitation. The values of the metric are unconfirmed, uncorroborated, confirmed, and not defined.

• Environmental Metric Group: Metrics closely related to the operating environment. This group contains the following metrics:

- Collateral Damage Potential: Possible tangible losses resulting from damage to critical resources. Corresponding metric values are none, low, low-medium, medium-high, high, and not defined.

- Target Distribution: It is utilized to gauge the vulnerable parts of the system as a percentage of the total system. The metric values are none, low, medium, high, and not defined.

- Security Requirements: Consider the importance of key resources to the confidentiality, integrity, and availability of the organization. Selectable metric values include low, medium, high, and not defined.
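As noted under the base metric group, those metrics combine into a single base score. The sketch below follows the published CVSS v2 equations and metric weights; these values come from the CVSS v2 specification, not from this thesis, and the example at the end is illustrative.

```python
# CVSS v2 base score, following the equations in the CVSS v2 specification.
# The numeric weights for AV, AC, Au, and C/I/A are from that specification.
AV  = {"local": 0.395, "adjacent network": 0.646, "network": 1.0}
AC  = {"high": 0.35, "medium": 0.61, "low": 0.71}
AU  = {"multiple": 0.45, "single": 0.56, "none": 0.704}
CIA = {"none": 0.0, "partial": 0.275, "complete": 0.66}

def base_score(av, ac, au, c, i, a):
    impact = 10.41 * (1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a]))
    exploitability = 20 * AV[av] * AC[ac] * AU[au]
    f_impact = 0.0 if impact == 0 else 1.176
    return round((0.6 * impact + 0.4 * exploitability - 1.5) * f_impact, 1)

# Example: a remotely reachable, low-complexity exploit with partial C/I/A impact.
print(base_score("network", "low", "none", "partial", "partial", "partial"))  # 7.5
```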

2.3.2. Data criticality

Data criticality can be defined by data classification. The key factor required for data classification is meta-data. Meta-data could be the value or extent of security, privacy, or other related policies for data items. Therefore, it should be obtained from experts who have a deep knowledge of the organization's policies, procedures, and business rules, as well as government mandates on data secrecy and safety [11], [13].

The classification of data can be defined as: public, private, between private and mission critical, mission critical, confidential, secret, and top secret. The following aspects need to be considered during the event analysis:

• The degree of protection required by the target, concerns about security measures, intellectual property rights, and organizational institutions.

• Relevant access restrictions and protection provisions issued by the government.

• The criticality of data and its value to the company.

• Clear-cut requirements for data access privileges at distinct levels of confidentiality.

• Comprehend the internal and external operations the data will confront.

• The life cycle, storage device, storage location, backup, and deletion of data need to be confirmed.

• Define the scale of data security protection that needs to be provided to distinct users [11].


Figure 2.7: Meta-data value [13]

After the data is classified, values are assigned to the different categories so that arithmetic operations can be performed: public = 0-10 (i.e., a value from 0 to 10), private = 8-20, private but not mission critical = 16-30, mission critical = 28-40, confidential = 37-50, secret = 48-60, and top secret = 58-70. Figure 2.7 displays an example; the first column is the customer information stored in the database [13]. The second and third columns are the values of the meta-data given the policy of the organization and the government regulatory policy, respectively. For instance, the value of customerID under the organization policy is 68, and the value of customerID is 39 when only the government regulatory policy is considered. This is just a simple instance used to demonstrate a process of defining data criticality; new methods can be proposed based on this process without limitation.
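The overlapping value ranges above can be captured in a small lookup; a minimal sketch (the ranges come from the example in [13], the lookup function is ours):

```python
# Value ranges for each data classification, as in the example above.
classification_ranges = {
    "public":                           (0, 10),
    "private":                          (8, 20),
    "private but not mission critical": (16, 30),
    "mission critical":                 (28, 40),
    "confidential":                     (37, 50),
    "secret":                           (48, 60),
    "top secret":                       (58, 70),
}

def possible_classifications(value: int):
    """Return all classifications whose (overlapping) value range contains the score."""
    return [name for name, (lo, hi) in classification_ranges.items() if lo <= value <= hi]

print(possible_classifications(68))  # ['top secret']
print(possible_classifications(39))  # ['mission critical', 'confidential']
```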

We use a security metric that takes into account the criticality of the data affected by an exploitation to represent the impact of a vulnerability exploitation. We determine the data criticality based on three parameters, as done in [12]: the criticality of confidentiality, integrity, and availability. Their scores are provided on the client side by the user in this experiment. We explain them in detail in Section 3.3.2.


3. Method

In this chapter, we discuss our approach for performing security risk analysis considering data criticality. We first choose the attack graph as the threat modeling technique and instrument it with data criticality. We then propose a set of security metrics to measure the security risks and an algorithm to calculate these security metrics from the instrumented attack graphs.

3.1. Scientific Approach

A combination of quantitative and qualitative methods is used in this thesis project. Quantitative methods are used to score the severity or probability of vulnerability exploitation as well as the criticality of data. We use a security metric that considers vulnerability information and data criticality information, and we use the information contained in the nodes of the attack graph to compute the value of the security metric. The qualitative approach is applied to the calculation of the security metrics and to a real-life case study, while considering the relationships among vulnerabilities. We discover the trajectory of an attacker based on the attack graph, so as to consider the causality of exploitation, and then propose algorithms for the security metrics. In the case study section, we classify threat levels for the attacker's behavior and suggest priorities for repairs and defense strategies.

3.2. Running Example

We use a small case study of a network consisting of 3 hosts with a few vulnerabilities on each. In this case, host plycent02 runs CentOS (https://www.centos.org/) 6.10 and an ssh service. Host plydeb01 runs the Debian (https://www.debian.org/) operating system 9.0 and an ssh service. Host plyubu01 runs the Ubuntu (https://ubuntu.com/) operating system 16.04 and an ssh service.

Host plycent02 contains 2 vulnerabilities. The vulnerability ’cve-2017-9076’ is exploited when the attacker executes code on host plyubu01. The vulnerability ’cve-2017-18017’ is exploited when the attacker executes code on host plycent02. Host plydeb01 contains 3 vulnerabilities. The vulnerability ’cve-2018-7566’ is exploited when the attacker executes arbitrary code on host plydeb01. The vulnerability ’cve-2018-1000004’ is exploited when the attacker executes code on host plydeb01 as a user. The vulnerability ’cve-2018-13405’ is exploited when the attacker executes code on host plydeb01 with root privileges. Host plyubu01 contains 2 vulnerabilities. The vulnerability ’cve-2017-18017’ is exploited when the attacker executes code on host plyubu01 as a user. The vulnerability ’cve-2017-0861’ is exploited when the attacker executes code on host plyubu01 with root privileges. The topology of this example is shown in Figure 3.1.

4

https://www.centos.org/

5

https://www.debian.org/

6

https://ubuntu.com/


Figure 3.1: Network topology of the example

3.3. The Approach Outline

Our approach consists of three steps, which are discussed in the following subsections. Figure 3.2 shows the general architecture of our approach. In brief, first, an attack graph showing the attack scenarios is generated, i.e. one that considers vulnerabilities and exploit dependencies (See Section 3.3.1). Then it is augmented with data sensitivity information (See Section 3.3.2), and finally a few new security metrics are introduced and algorithms to calculate the security risks using the introduced security metrics are proposed (See Section 3.3.3).


Figure 3.2: Our approach

3.3.1. Step 1- Generating Attack Graph using MulVAL

In the first step, we specify the network configurations and vulnerabilities using Horn clauses and then feed them into MulVAL [9] to generate an attack graph. For example, we specify our running example, whose topology is shown in Figure 3.1. Table 3.1 shows the list of predicates and their semantics; a minimal illustrative specification sketch is given after the table.

Predicate: Semantics

host X: X is a host.
node I: I is a node.
hacl(h1,h2,pr,port): the host h1 has access to a service running on the host h2 with the protocol pr and the port port.
RULE N: N is a rule.
RULE N (E): E is an action of exploiting.
p(h,ide): the attacker with the identity ide can gain privilege p on the host h.
p(h,pr,port): the attacker can gain privilege p on the host h with the protocol pr and the port port.
networkServiceInfo(h,’p’,pr,port,user): the user user has access to port port on host h, host h runs on platform p, and the protocol is pr.
vulExists(h,’id’,’p’,m,priv): id is the identification code of a vulnerability, the vulnerability exists on host h which runs on platform p, and the attacker exploits the vulnerability through method m to gain privilege priv.
attackerLocated(internet): the attacker is located on the internet.

Table 3.1: Predicates and their semantics
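As referred to above, the fragment below is a minimal sketch of how part of the running example could be written down as MulVAL-style input facts using only the predicates of Table 3.1. The program name sshd, the user root, the exploit method remoteExploit, the gained privilege privEscalation, and the attackGoal/execCode predicates are illustrative assumptions and are not taken from this thesis; only the host names, the CVE identifiers, and the tcp/22 reachability come from the running example.

% Attacker location and reachability (tcp/22, as in the configuration nodes of Figure 3.3).
attackerLocated(internet).
hacl(internet, plycent02, tcp, 22).
hacl(internet, plyubu01, tcp, 22).

% Service information; the program name 'sshd' and the user root are placeholder values.
networkServiceInfo(plycent02, 'sshd', tcp, 22, root).
networkServiceInfo(plyubu01, 'sshd', tcp, 22, root).

% Vulnerabilities; the affected program, method, and gained privilege are placeholders.
vulExists(plycent02, 'cve-2017-18017', 'sshd', remoteExploit, privEscalation).
vulExists(plyubu01, 'cve-2017-18017', 'sshd', remoteExploit, privEscalation).

% Attack goal: execute code on host plyubu01 (predicate names as commonly used in MulVAL).
attackGoal(execCode(plyubu01, _)).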

Given the specification of the network and its vulnerabilities and an attack goal, we use MulVAL to generate an attack graph that shows all the attack scenarios as well as the probabilistic information about each attack step. Figure 3.3 partially shows the attack graph of our running example where the attacker is located on the internet and its goal is to execute code on host plyubu01.

Figure 3.3: Part of the attack graph of our running example

As shown in this figure, each rule node has a probability that indicates how likely the corresponding attack step is to succeed, while the probability of exploit nodes indicates the probability of reaching that specific node. For example, the probability of performing direct network access is 0.8, as shown in Figure 3.3. The probability of performing a remote exploit of a server program is 0.768, and the probability of reaching node 26 is 0.768. The configuration nodes indicate existing network settings. For example, the probability that the internet has access to a service running on host plycent02 with the protocol tcp and port 22 is 1, and the probability that the internet has access to a service running on host plyubu01 with the protocol tcp and port 22 is 1.
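To make these numbers concrete with a worked example: under the common multiplicative reading of such graphs, a rule node's probability is the product of the probabilities of its preconditions and the rule's own success probability. If the remote-exploit rule had a base success probability of 0.96 (an assumed value, not taken from this thesis or from CVSS), then

0.8 × 0.96 = 0.768,

which matches the value shown for the remote-exploit rule node and for node 26 in Figure 3.3.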

3.3.2. Step 2- Augmenting Attack Graph with Data Sensitivity Information

After obtaining the attack graph from step 1, we augment it with data sensitivity information to obtain an attack graph that contains both the vulnerability dependencies and the data sensitivity information. To this end, we instrument data sensitivity information into the nodes of the attack graph, i.e. we assign such information to the attack steps and consequences. We first determine the sensitivity of data based on three parameters (a small illustrative encoding is sketched after the list below):

• Confidentiality criticality criteria (cCrit), which refers to the criticality of data in terms of confidentiality. Data confidentiality is related to the disclosure of sensitive data. The confidentiality of the data ensures that sensitive data is not disclosed to unauthorized users while authorized users can access the data [64].

• Integrity criticality criteria (iCrit), which represents the criticality of data in terms of integrity. Data integrity refers to the accuracy and validity of data over its lifecycle [65]. It can also be understood as the quality of the data. Data integrity is about the protection of data, for example, preventing data from being improperly modified while it is maintained.

• Availability criticality criteria (aCrit), which indicates the criticality of data in terms of availability. Data availability ensures that data can be accessed and used in a timely and reliable manner [65].
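As referred to above, the following is a minimal sketch of how these three scores could be recorded per host before being attached to the corresponding attack-graph nodes. The predicate name dataCriticality and all numeric scores are hypothetical and serve only to illustrate the shape of this information; they are not MulVAL predicates and not values used elsewhere in this thesis.

% Hypothetical facts of the form dataCriticality(Host, cCrit, iCrit, aCrit).
% The predicate name and the scores are illustrative assumptions only.
dataCriticality(plycent02, 4, 5, 2).
dataCriticality(plydeb01, 2, 3, 4).
dataCriticality(plyubu01, 3, 1, 1).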

The network security analysts and domain experts define the criticality of data based on their business policies and organizational preferences. For instance, the web server in our running example contains information that is very critical for the business, and it is more critical than host plyubu01 from a data sensitivity point of view. If the web server becomes unavailable due to an attack, it can cause serious consequences for the business, e.g. the loss of customers' information and trust, and the loss of money, prestige, and competitiveness. The web server is important from all three aspects of confidentiality, integrity, and availability.

Consider the host plyubu01, which stores the local data of a user who always keeps a backup copy of her data in a cloud-based system. If the attacker gets control over this host, it can impact the user from the data confidentiality perspective; however, its impact on the availability or integrity of the data is not significant, as the user can restore her data from the cloud. In the same scenario, if she does not keep a backup copy of her data in a cloud-based system, the impact on all aspects of the data is significant, as the user cannot restore her data. Consider the host plycent02, which stores the financial data of a company. If the attacker deletes some of the data but the most important data remains, this can impact the company from the data integrity perspective; however, the impact on the availability or confidentiality of the data is not significant, as the most important data is still available to the company. If the attacker only reads this data, it can impact the company from the data confidentiality perspective; however, its impact on integrity and availability is not significant, as the company can still use the complete data.

The impact of an attack on the data depends on two factors: the criticality of data and the impact of the attack. The impact of an attack can be obtained from vulnerability databases such as CVSS (Common Vulnerability Scoring System) [14]. In CVSS, the impact of an attack is defined along the three aspects of confidentiality, integrity, and availability. Hence, we define the three parameters cImpact, iImpact, and aImpact to represent the impact of an attack in terms of confidentiality, integrity, and availability, respectively.

For example, there is a vulnerability on the web server that can be exploited in our running example (see node 10 in Figure 3.4). The id of the vulnerability is ’CVE-2002-0392’, which exists in the program httpd. The attacker can escalate her privileges by exploiting this vulnerability remotely. Based on the data stored in the web server, we set its cCrit, iCrit, and aCrit to 5, 3, and 3, respectively. By searching for the vulnerability id in the CVSS database, the impact of exploiting the vulnerability can be found. The impact of exploiting the vulnerability ’CVE-2002-0392’ in terms of confidentiality (cImpact), integrity (iImpact), and availability (aImpact) is partial in all three cases. None, partial, and complete in the CVSS database correspond to 0.0, 0.275, and 0.660 respectively [50], so cImpact, iImpact, and aImpact are all 0.275.
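As a purely illustrative combination of the numbers above (this is not the definition of the VEC; the security metrics and their calculation are introduced in Section 3.3.3), if criticality and impact were combined by a simple sum of products over the three aspects, this node would contribute

cCrit × cImpact + iCrit × iImpact + aCrit × aImpact = 5 × 0.275 + 3 × 0.275 + 3 × 0.275 = 3.025.

Any such formula is stated here only as an assumption for illustration; the actual metric used in this thesis is defined later.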

Figure 3.4 shows the attack graph augmented with data criticality information. The nodes of the original attack graph contain the probability of events; in the new attack graph, this probability is replaced by the VEC (vulnerability exploitation cost). The VEC is a security metric for attacks that exploit vulnerabilities, and it takes data sensitivity information into account; we use cCrit, iCrit, and aCrit to calculate it. To better understand the following content, it is best to read Table 3.2 first.

Figure 3.4: The Logic proof graph from MulVAL

Abbreviation: Implication

cCrit: The criticality of data in terms of confidentiality.
iCrit: The criticality of data in terms of integrity.
aCrit: The criticality of data in terms of availability.
CVSS: Common Vulnerability Scoring System.
cImpact: Potential damages in terms of confidentiality of the asset in case of exploitation of vulnerabilities.
iImpact: Potential damages in terms of the integrity of the asset in case of exploitation of vulnerabilities.
aImpact: Potential damages in terms of availability of the asset in case of exploitation of vulnerabilities.

Table 3.2: Abbreviations and their meanings

References
