

Institutionen för systemteknik

Department of Electrical Engineering

Examensarbete

Reliability calculations for complex systems

Examensarbete utfört i Reglerteknik vid Tekniska högskolan vid Linköpings universitet

av

Malte Lenz och Johan Rhodin

LiTH-ISY-EX--11/4441--SE

Linköping 2011

Department of Electrical Engineering
Linköpings tekniska högskola
Linköpings universitet


Reliability calculations for complex systems

Examensarbete utfört i Reglerteknik

vid Tekniska högskolan i Linköping

av

Malte Lenz och Johan Rhodin

LiTH-ISY-EX--11/4441--SE

Handledare: André Carvalho Bittencourt

isy, Linköpings universitet

Examinator: Torkel Glad

isy, Linköpings universitet


Avdelning, Institution (Division, Department): Division of Automatic Control, Department of Electrical Engineering, Linköping University, SE-581 83 Linköping, Sweden

Datum (Date): 2011-06-08

Språk (Language): Engelska/English

Rapporttyp (Report category): Examensarbete

URL för elektronisk version: http://www.control.isy.liu.se, http://www.ep.liu.se

ISRN: LiTH-ISY-EX--11/4441--SE

Titel (Title): Tillförlitlighetsberäkningar för komplexa system / Reliability calculations for complex systems

Författare (Author): Malte Lenz och Johan Rhodin


Abstract

Functionality for efficient computation of properties of system lifetimes was developed, based on the Mathematica framework. The model of these systems consists of a system structure and the components' independent lifetime distributions. The components are assumed to be non-repairable. In this work a very general implementation was created, allowing a large number of lifetime distributions from Mathematica for all the component distributions. All system structures with a monotone increasing structure function can be used. Special effort has been made to compute fast results when using the exponential distribution for component distributions. Standby systems have also been modeled in similar generality. Both warm and cold standby components are supported. During development, a large collection of examples was also used to test functionality and efficiency. A number of these examples are presented. The implementation was evaluated on large real world system examples, and was found to be efficient. New results are presented for standby systems, especially for the case of mixed warm and cold standby components.

Sammanfattning

Funktionalitet för effektiv beräkning av systems livstidsegenskaper har utvecklats, baserat på Mathematicas ramverk. Modellerna för dessa system består av en systemstruktur och komponenternas oberoende livstidsdistributioner. Komponenterna antas vara icke reparerbara. En mycket generell implementation som hanterar ett stort antal distributioner från Mathematica som komponenters distributioner har utvecklats. Alla systemstrukturer med en monotont växande strukturfunktion kan användas. Särskild hänsyn har tagits för att uppnå effektiva uträkningar när exponentialdistributionen används för komponenter. Standbysystem har också modellerats med motsvarande generalitet. Både varma och kalla standbykomponenter stöds. Under utvecklingen har ett stort antal exempel använts för utvärdering av korrekthet och effektivitet. Ett antal av dessa exempel presenteras. Implementationen har även utvärderats på stora verklighetsbaserade system, och konstaterats vara effektiv. Nya resultat presenteras för standbysystem, speciellt för fallet med blandade varma och kalla standbykomponenter.


Acknowledgments

We would like to thank Roger Germundsson and Wolfram Research for the opportunity to do our thesis project at the Wolfram Research headquarters in Champaign, Illinois. Special thanks to Oleksandr Pavlyk for all his support. Without his help, ideas and pointers this project would not have gotten as far as it has. Also, the whole statistics team at Wolfram Research deserves many thanks for building an excellent framework on which we rely, as well as answering all our questions.

We would also like to thank Henrik Tidefelt for his aid in everything between Mathematica, LaTeX and restaurants in Champaign.

At Linköping University we would like to thank André Bittencourt and Torkel Glad at the Department of Electrical Engineering, ISY.

Champaign, Illinois, March 2011 Malte Lenz and Johan Rhodin


Contents

Notation

1 Introduction
  1.1 Purpose and goal
  1.2 Outline of the thesis

2 Theoretical background
  2.1 Reliability measures
    2.1.1 Distribution functions
    2.1.2 Lifetime distributions
    2.1.3 Properties of the lifetime distribution
  2.2 Boolean logic
  2.3 Systems of components
    2.3.1 Series system
    2.3.2 Parallel system
    2.3.3 Mixed system
    2.3.4 Structure function to survival function
    2.3.5 Standby systems
  2.4 Graph theory
    2.4.1 Adjacency matrix
    2.4.2 Path and cut vectors
    2.4.3 Minimal cut set
  2.5 Importance measures
    2.5.1 Structural importance
    2.5.2 Birnbaum importance
    2.5.3 Risk Achievement Worth and Risk Reduction Worth
    2.5.4 Improvement potential
    2.5.5 Barlow-Proschan importance
    2.5.6 Criticality importance
    2.5.7 Fussell-Vesely importance
  2.6 Special functions

3 Structure function
  3.1 Check if a boolean function is increasing
  3.2 Converting a boolean expression
  3.3 Structure function to survival function
    3.3.1 Expanding to remove exponents

4 Reliability distribution
  4.1 Properties for some basic systems
    4.1.1 Serial system
    4.1.2 Parallel system
    4.1.3 2 out of 3 system
    4.1.4 Simple mixed system
    4.1.5 Bridge system
  4.2 Parallelization
    4.2.1 Parallelization on system level
    4.2.2 Parallelization on component level
    4.2.3 Comparison
  4.3 Properties for real world systems
    4.3.1 Cockpit information system
    4.3.2 Airplane
    4.3.3 Electrical diesel generator system

5 Optimizations
  5.1 Simplification of distributions
  5.2 Special cases of properties
    5.2.1 Exponentially distributed components
    5.2.2 Hazard function

6 Importance measures
  6.1 Properties for a bridge system
    6.1.1 Structural importance
    6.1.2 Birnbaum importance
    6.1.3 Risk Achievement Worth
    6.1.4 Risk Reduction Worth
    6.1.5 Improvement potential
    6.1.6 Barlow-Proschan importance
    6.1.7 Criticality importance
    6.1.8 Fussell-Vesely importance
  6.2 Comparison on a simple system

7 rbd modeling
  7.1 Boolean expression to rbd
  7.2 rbd to boolean expression

8 Standby systems
  8.1 Cold standby
    8.1.1 Cold standby with perfect switching
    8.1.2 Cold standby with imperfect switching
  8.2 Warm standby
    8.2.1 Warm standby with perfect switching
    8.2.2 Warm standby with imperfect switching
    8.2.3 Mixed standby
  8.3 Applications

9 Conclusions and future work
  9.1 Conclusions
  9.2 Future work
    9.2.1 Graph editor
    9.2.2 Special system structures
    9.2.3 Repairable systems
    9.2.4 Censored data
    9.2.5 Dependent lifetime distributions
    9.2.6 Accelerated life
    9.2.7 Real world reliability data verification

A Computable properties

B Airplane cockpit system
  B.1 System presentation
    B.1.1 rbd
    B.1.2 Structure function
    B.1.3 Basic events

C Electrical diesel generator system
  C.1 System presentation
    C.1.1 Structure function
    C.1.2 Basic events

Bibliography


Notation

Abbreviations

Abbreviation Meaning

cdf Cumulative Distribution Function

ccdf Complementary Cumulative Distribution Function

cnf Conjunctive Normal Form

dnf Disjunctive Normal Form

pdf Probability Density Function

mgf Moment Generating Function

mttf Mean Time To Failure

raw Risk Achievement Worth

rbd Reliability Block Diagram

rrw Risk Reduction Worth

Probability and Statistics

Notation Meaning

T ∼ L T is distributed according to lifetime distribution L

P(A) Probability of the event A

P(A|B) Conditional probability of event A given event B

E(T) or ⟨T⟩   Expectation of T

µ′_n   The n'th moment around 0 (see definition 2.18)

µ Mean (see definition 2.19)

µn n’th central moment (see definition 2.20)

σ2 Variance (see definition 2.21)

σ Standard deviation (see definition 2.22)

β2 Kurtosis (see definition 2.24)

γ1 Skewness (see definition 2.23)

MT(s) Moment-generating function (see definition 2.25)

ϕT(s) Characteristic function (see definition 2.27)


Reliability

Notation Meaning

Φ(x⃗)   Structure function for the component states x⃗ (see definition 2.32)

Importance measures

Notation Meaning

Iφ(i)   Structural importance of component i (see definition 2.46)

IB(i)(t)   Birnbaum importance of component i at time t (see definition 2.48)

IIP(i)(t)   Improvement potential of component i at time t (see definition 2.52)

IB−P(i)   Barlow-Proschan importance of component i (see definition 2.53)

Iraw(i)(t)   Risk Achievement Worth of component i at time t (see definition 2.50)

Irrw(i)(t)   Risk Reduction Worth of component i at time t (see definition 2.51)

ICR−F(i)(t)   Criticality importance (failure oriented) of component i at time t (see definition 2.55)

ICR−S(i)(t)   Criticality importance (success oriented) of component i at time t (see definition 2.56)

IF−V(i)(t)   Fussell-Vesely measure of component i at time t (see definition 2.57)

Boolean operators

Notation Meaning

a ∧ b   Conjunction of a and b, “a and b”

a ∨ b   Disjunction of a and b, “a or b”

¬a Negation of a, “not a”

a ∨ b (with overbar)   Negation of a disjunction of a and b, ¬(a ∨ b)

a ∧ b (with overbar)   Negation of a conjunction of a and b, ¬(a ∧ b)

a ⇒ b If a is true, b must be true, ¬a ∨ b

majority True if more than half of the arguments are true (see definition 2.31)


Graph theory

Notation Meaning

vi Vertex in a graph

vivk Edge from vertex vi to vertex vk in a graph

Special functions

Notation Meaning

Γ(a, z) Incomplete gamma function (see definition 2.58)

min(t) The minimum of t


1 Introduction

“It is scientific only to say what is more likely and what less likely, and not to be proving all the time the possible and impossible.”

Richard P. Feynman

Reliability engineering deals with the construction and study of reliable systems. This is used in a wide range of applications such as semiconductor design and production, aerospace, nuclear engineering and space flight.

By studying the configurations and the lifetimes of components in complex systems, one can draw conclusions regarding the optimal design for reliability. By using importance measures, it is possible to draw conclusions about which components are the most important to improve to achieve better reliability of the whole system.

The first examples of reliability calculations and estimates can be found in the investigations of John Graunt in 1662. Graunt studied the probability of survival for humans to different ages [Graunt, 1662, p. 75]. From this first step it took a long time before the field of reliability emerged and became frequently used. It was not until the end of the Second World War that the field of reliability engineering expanded rapidly, due to mass manufacturing, statistical quality control and the computational resources at hand [Saleh and Marais, 2006, p. 251].

Modern day reliability measures and methods depend heavily on the contributions of W. Weibull and Z.W. Birnbaum. Birnbaum developed the first importance measure, which can be used to rank components in a system according to how important they are. Weibull developed the distribution that now bears his name and is a standard tool in reliability applications. Richard E. Barlow and Frank Proschan are frequently credited as the founders of the reliability field in its form today, and their book Mathematical Theory of Reliability [Barlow and Proschan, 1965] is one of the standard texts in the field.


1.1 Purpose and goal

This thesis uses the software Mathematica from Wolfram Research for its implementation. This is an application that supports a wide array of mathematical computation. The high level language used in Mathematica also lends itself very well to quickly implementing efficient new algorithms and functionality. In version 8 of Mathematica, an extensive new framework for probability and statistics was created. This framework makes it easy to create new distributions, and calculate properties for them.

The purpose of this thesis is to implement functionality in Mathematica for the basics in the field of reliability, and to then explore how Mathematica's mathematics framework can be used to extend the amount of computations possible. To this end, we look at modeling non-repairable systems, standby systems, and how to determine the importance of components in a system. From these models, the goal is to be able to efficiently compute properties, both symbolically and numerically. A part of the goal is to introduce this functionality in a future release of Mathematica.

1.2 Outline of the thesis

The thesis is structured as follows:

• Chapter 2 presents the theoretical background necessary to understand reliability calculations from the areas of statistics, graph theory and boolean logic.

• Chapter 3 shows how to represent the structure of a system and how to map it to the survival function.

• Chapter 4 describes the reliability distribution and shows some important properties for different system configurations.

• Chapter 5 shows some special cases of distributions and distribution properties for which optimizations can be done.

• Chapter 6 shows results for importance measures.

• Chapter 7 presents prototypes for converting back and forth between reliability block diagrams and boolean structure functions.

• Chapter 8 covers standby systems and how to calculate their reliability.

• Chapter 9 discusses conclusions and future work.

• Appendix A presents a list of lifetime properties we can calculate.

• Appendix B defines the system of an airplane cockpit.

• Appendix C defines the electrical diesel generator system.


2 Theoretical background

“Each problem that I solved became a rule which served afterwards to solve other problems.”

René Descartes

In this chapter we will present the theoretical background of the parts of reliability engineering that are interesting in the scope of the thesis. Readers familiar with probability and statistics might want to read sections 2.3 and 2.5, and then focus on the later chapters.

2.1 Reliability measures

We define some basic terminology that is used in the description of reliability systems.

2.1 Definition (Working and Failed). A component or system is working when it is performing its intended function. A component or system is failed when it is not performing as intended.

2.2 Definition (State). The state of a component is defined as a boolean variable x(t), where

x(t) = 1 (or true) (2.1)

if the component is working at time t, and

x(t) = 0 (or false) (2.2)

if the component is failed at t.

With this definition we can define the time to failure T.


2.3 Definition (Time To Failure). The time to failure T of a component is defined as

T = min{t : x(t) = 0}   (2.3)

assuming that the component is not repairable.

Time to failure T can be any of a large number of units. One example would be a unit of time, such as the number of hours a component is used. Another example is a unit of distance, such as how far a car is driven.

2.4 Definition (Lifetime distribution). The lifetime distribution L is defined as the probability distribution of the time to failure T .

2.1.1 Distribution functions

Based on the previous definitions, there are a few different functions describing the probability of times to failure. These are presented in the following definitions. Throughout the scope of this thesis we assume all distributions to be continuous and univariate.

2.5 Definition (Cumulative Distribution Function, cdf). The cdf F(t) describes the probability that a specific component fails before the time t:

F(t) = P (T ≤ t) (2.4)

where T ∼ L (T is distributed according to lifetime distribution L). The following conditions are fulfilled by all cumulative distribution functions:

All components fail eventually:

lim_{t→∞} F(t) = 1   (2.5)

A failed component never starts working again:

F(t) is nondecreasing   (2.6)

Components always work at t ≤ 0:

F(t) = 0, t ≤ 0   (2.7)

2.6 Definition (Probability Density Function, pdf). The pdf f (t) describes the probability that a specific component fails at the time t:

f (t)∆t ≈ P (t < T ≤ t + ∆t) (2.8)


It can also be defined in terms of the cdf:

f(t) = dF(t)/dt   (2.9)

The following conditions are fulfilled by all probability density functions for lifetimes:

All components fail eventually:

∫_0^∞ f(t) dt = 1   (2.10)

A failed component never starts working again:

f (t) ≥ 0, t ≥ 0 (2.11)

Components always work at t < 0:

f (t) = 0, t < 0 (2.12)

2.7 Definition (Survival Function / Reliability Function). The survival function, in some literature called the reliability function, describes the probability that a specific component is working at time t:

S(t) = P (T > t), t ≥ 0 (2.13)

where T ∼ L, assuming that the component is not repairable. The following conditions are fulfilled by all survival functions:

All components fail eventually:

lim_{t→∞} S(t) = 0   (2.14)

A failed component never starts working again:

S(t) is nonincreasing (2.15)

Components always work at t ≤ 0:

S(t) = 1, t ≤ 0 (2.16)

In statistics, this function is usually called the Complementary Cumulative Distribution Function (ccdf), because it can be defined in terms of the cdf:

S(t) = 1 − F(t)   (2.17)


2.8 Definition (Hazard Function). The hazard function describes the failure rate at time t, given that the component is still working at that time:

h(t) = lim_{∆t→0} P(t ≤ T < t + ∆t | T ≥ t) / ∆t = f(t)/S(t), t ≥ 0   (2.18)

where T ∼ L.

Once one of these functions is known, all the others can be calculated as needed, as can be seen for example in Leemis [2009, Table 3.1, p. 62]. As an example, the conversions from the survival function to the other functions are given here.

h(t) = −S′(t)/S(t)   (2.19)

F(t) = 1 − S(t)   (2.20)

f(t) = −S′(t)   (2.21)
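These conversions can be sketched numerically. The following Python fragment is an illustration only (the thesis's implementation is in Mathematica, and the function name is ours): it recovers F(t), f(t) and h(t) from a survival function by a central finite difference, and is checked against the exponential distribution, whose hazard is the constant λ.

```python
import math

def survival_to_others(S, t, dt=1e-6):
    """Given a survival function S, recover F, f and h at time t
    using F = 1 - S, f = -S' and h = f/S, with S' approximated by
    a central difference."""
    F = 1.0 - S(t)                            # F(t) = 1 - S(t)
    dS = (S(t + dt) - S(t - dt)) / (2 * dt)   # numerical S'(t)
    f = -dS                                   # f(t) = -S'(t)
    h = f / S(t)                              # h(t) = f(t) / S(t)
    return F, f, h

# Check against an exponential lifetime, S(t) = exp(-lam*t), where
# f(t) = lam*exp(-lam*t) and h(t) is the constant lam.
lam = 0.5
S = lambda t: math.exp(-lam * t)
F, f, h = survival_to_others(S, 2.0)
```

Evaluated at t = 2 with λ = 0.5, the recovered hazard is λ to within the differentiation error.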

2.1.2 Lifetime distributions

There are a few distributions that are most often used as lifetime distributions. In theory any distribution where the pdf is 0 for t ≤ 0 can be used. This requirement comes from the assumption that a system does not fail before time t = 0. Here we present the distributions used in this thesis.

Exponential distribution

The exponential distribution is the most commonly used lifetime distribution. It is defined as follows:

2.9 Definition (Exponential distribution). The exponential distribution can be defined by its pdf:

f(t) = λe^(−λt) for t > 0, 0 otherwise   (2.22)

where λ is called the failure rate of the component and λ > 0.

The exponential distribution has the important memoryless property:

P(T ≥ t) = P(T ≥ t + s | T ≥ s), t ≥ 0, s ≥ 0   (2.23)

which means that a used component that has survived to time s is as good as a new component. The exponential distribution is the only distribution with this property [Leemis, 2009, pp. 325-326]. The exponential distribution has a hazard function h(t) that is a constant λ for t > 0, and 0 otherwise. It is sometimes called the Epstein distribution [Saunders, 2010, p. 14].
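The memoryless property (2.23) is easy to check numerically. The Python sketch below (illustrative, not the thesis code) compares the conditional survival S(t + s)/S(s) with S(t) for an exponential survival function, and contrasts it with a Weibull-type survival function, for which the property fails:

```python
import math

# Memorylessness, eq. (2.23): P(T >= t + s | T >= s) = P(T >= t).
# In terms of the survival function, the left side is S(t + s) / S(s).
lam = 1.3
S = lambda u: math.exp(-lam * u)   # exponential survival function

t, s = 0.7, 2.5
conditional = S(t + s) / S(s)      # P(T >= t + s | T >= s)
unconditional = S(t)               # P(T >= t)

# A Weibull survival function with shape 2 (wear-out) is NOT memoryless:
Sw = lambda u: math.exp(-u ** 2)
conditional_w = Sw(t + s) / Sw(s)  # differs strongly from Sw(t)
```

For the exponential case the two probabilities agree exactly; for the Weibull case an aged component is markedly worse than a new one.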


Weibull distribution

A very commonly used distribution is the Weibull distribution, named after the Swedish mathematician Waloddi Weibull who used the distribution for a large number of applications, for example the strength of Indian cotton or Bofors steel [Weibull, 1951, p. 293].

2.10 Definition (Weibull distribution). The Weibull distribution can be defined by its survival function:

S(t) = e^(−((t−µ)/β)^α) for t > µ, 1 otherwise   (2.24)

where α is called the shape parameter, β the scale parameter, and µ the location parameter.

For this distribution to make sense as a lifetime distribution, µ must be non-negative.
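As an illustrative sketch (in Python rather than the thesis's Mathematica), the three-parameter Weibull survival function of definition 2.10 can be written directly; with α = 1 and µ = 0 it reduces to an exponential distribution with rate 1/β:

```python
import math

def weibull_survival(t, alpha, beta, mu=0.0):
    """Survival function of the three-parameter Weibull distribution,
    eq. (2.24): S(t) = exp(-((t - mu)/beta)**alpha) for t > mu, else 1."""
    if t <= mu:
        return 1.0
    return math.exp(-((t - mu) / beta) ** alpha)

# alpha = 1, mu = 0 gives the exponential distribution with rate 1/beta:
exp_check = weibull_survival(2.0, 1.0, 4.0)   # = exp(-2/4) = exp(-0.5)
```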

Erlang distribution

The Erlang distribution was developed by the Danish mathematician Agner Krarup Erlang for modeling of telephone systems. This distribution comes up in relation to standby systems, as we show in section 8.1.1.

2.11 Definition (Erlang distribution). We define the Erlang distribution by its probability density function:

f(t) = λ^k t^(k−1) e^(−λt) / (k−1)! for t > 0, 0 otherwise   (2.25)

where k is called the shape parameter and λ the rate parameter.

Pareto distribution

The Pareto distribution takes two parameters.

2.12 Definition (Pareto distribution). The Pareto distribution is most readily defined by its survival function:

S(t) = (t/k)^(−α) for t > k, 1 otherwise   (2.26)

where k is the minimum value parameter and α the shape parameter.

Fréchet distribution

The Fréchet distribution, as used in this thesis, takes two parameters.

2.13 Definition (Fréchet distribution). The Fréchet distribution can be defined by its cdf:

F(t) = e^(−(t/β)^(−α)) for t > 0, 0 otherwise   (2.27)

where α is the shape parameter and β the scale parameter.

Lognormal distribution

The lognormal distribution is based on the normal distribution.

2.14 Definition (Normal distribution). The normal distribution can be defined by its pdf:

f(t) = e^(−(t−µ)²/(2σ²)) / (√(2π) σ)   (2.28)

where µ is the mean and σ the standard deviation.

where µ is the mean and σ the standard deviation.

2.15 Definition (Lognormal distribution). If X is a normally distributed random variable, the variable

Y = eX (2.29)

will be distributed according to a lognormal distribution.

Hypoexponential distribution

A distribution that comes up in standby systems is the hypoexponential distribution. We show this in section 8.1.1.

2.16 Definition (Hypoexponential distribution). The hypoexponential distribution is most readily defined relative to the exponential distribution. If X_i are k independently exponentially distributed random variables with failure rates λ_i, then the random variable X

X = Σ_{i=1}^{k} X_i   (2.30)

will be hypoexponentially distributed.

Order distribution

The order distribution is a derived distribution, in the sense that it relates to a “parent” distribution.

2.17 Definition (Order distribution). The order distribution is the distribution of the k'th smallest element in a sorted list of n samples from the parent distribution.

The order distribution comes up in reliability as a natural representation of some special systems.
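For a continuous parent with cdf F, the cdf of the k'th smallest of n independent samples is the binomial sum Σ_{j=k}^{n} C(n,j) F(t)^j (1 − F(t))^(n−j), i.e. the probability that at least k samples fall below t. A Python sketch (illustrative only; the function name is ours): k = 1 recovers the minimum (the lifetime of a series system) and k = n the maximum (a parallel system).

```python
import math

def order_cdf(t, k, n, parent_cdf):
    """CDF of the k'th smallest of n independent samples from a parent
    distribution: P(at least k of the n samples are <= t)."""
    p = parent_cdf(t)
    return sum(math.comb(n, j) * p ** j * (1 - p) ** (n - j)
               for j in range(k, n + 1))

# Exponential parent with rate 1:
F = lambda t: 1 - math.exp(-t)
t, n = 0.8, 5
cdf_min = order_cdf(t, 1, n, F)   # minimum: 1 - S(t)**n (series system)
cdf_max = order_cdf(t, n, n, F)   # maximum: F(t)**n (parallel system)
```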



2.1.3 Properties of the lifetime distribution

Lifetimes can be characterized by expectations, such as the mean time to failure (mttf), or probabilities, such as the probability of the system working until a time t, or the probability of the system working until a time t2, given it works at time t1.

2.18 Definition (Moment). The n'th moment of a lifetime distribution L with pdf f(t) is defined as

µ′_n = ⟨T^n⟩ = ∫_{−∞}^{∞} t^n f(t) dt   (2.31)

if the integral converges, where T ∼ L.

The mean is a moment deemed important enough to deserve its own name.

2.19 Definition (Mean). The mean µ of a lifetime distribution L is defined as the first moment of L:

µ = µ′_1   (2.32)

Another family of properties of a lifetime distribution are the central moments.

2.20 Definition (Central moment). The n'th central moment of a lifetime distribution L with pdf f(t) is defined as

µ_n = ⟨(T − µ)^n⟩ = ∫_{−∞}^{∞} (t − µ)^n f(t) dt   (2.33)

where T ∼ L, if the integral converges.

With the central moments, a few more named properties can be defined.

2.21 Definition (Variance). The variance σ² of a lifetime distribution L is defined as the second central moment of L:

σ² = µ_2   (2.34)

2.22 Definition (Standard Deviation). The standard deviation σ of a lifetime distribution L is defined as the square root of the variance of L:

σ = √µ_2   (2.35)

2.23 Definition (Skewness). The skewness γ_1 of a lifetime distribution L is defined as a function of central moments:

γ_1 = µ_3 / µ_2^{3/2}   (2.36)

2.24 Definition (Kurtosis). The kurtosis β_2 of a lifetime distribution L is defined as a function of central moments:

β_2 = µ_4 / µ_2²   (2.37)
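The moments and the derived quantities above can be approximated by numerical integration of the pdf. A Python sketch (illustrative only, not the thesis's symbolic computation), checked against the exponential distribution with λ = 2, for which µ = 1/λ, σ² = 1/λ², γ₁ = 2 and β₂ = 9:

```python
import math

def moment(f, n, about=0.0, upper=60.0, steps=200_000):
    """n'th moment of a lifetime pdf f about a point, by the trapezoidal
    rule on [0, upper] (the pdf is 0 for t < 0; upper must be chosen so
    the tail beyond it is negligible)."""
    h = upper / steps
    total = 0.5 * ((0.0 - about) ** n * f(0.0)
                   + (upper - about) ** n * f(upper))
    for i in range(1, steps):
        t = i * h
        total += (t - about) ** n * f(t)
    return total * h

lam = 2.0
f = lambda t: lam * math.exp(-lam * t)        # exponential pdf

mean = moment(f, 1)                           # mu       = 1/lam  = 0.5
var = moment(f, 2, about=mean)                # sigma^2  = 1/lam^2 = 0.25
skew = moment(f, 3, about=mean) / var ** 1.5  # gamma_1  = 2
kurt = moment(f, 4, about=mean) / var ** 2    # beta_2   = 9
```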

A sequence of moments can also be represented as a function that can be used to generate these moments. These are defined as follows.

2.25 Definition (Moment-generating function). The moment-generating function is defined as:

M_T(s) = E(e^{sT})   (2.38)

if this expectation exists.

2.26 Definition (Central moment-generating function). The central moment-generating function is defined as:

CM_T(s) = M_T(s) e^{−µs}   (2.39)

2.27 Definition (Characteristic function). The characteristic function is defined as:

ϕ_T(s) = E(e^{isT})   (2.40)

if this expectation exists.

2.2 Boolean logic

Boolean logic is used in reliability to define how the reliability of a system depends on the underlying components. Boolean functions can be represented in different but logically equivalent ways. Two representations are cnf and dnf.

2.28 Definition (Conjunctive Normal Form, CNF). A boolean expression is in Conjunctive Normal Form when it is a conjunction (∧) of disjunctions (∨), as follows:

(x_1 ∨ x_2 ∨ . . . ) ∧ (x_n ∨ x_{n+1} ∨ . . . ) ∧ . . .   (2.41)

where x_n is a literal or a negation of a literal.


2.29 Definition (Disjunctive Normal Form, DNF). A boolean expression is in Disjunctive Normal Form when it is a disjunction (∨) of conjunctions (∧), as follows:

(x_1 ∧ x_2 ∧ . . . ) ∨ (x_n ∧ x_{n+1} ∧ . . . ) ∨ . . .   (2.42)

where x_n is a literal or a negation of a literal.

A requirement on the boolean functions used for reliability systems is that they are monotone increasing.

2.30 Definition (Monotone increasing boolean function). A monotone increasing boolean function is a function such that f(x_1, ..., x_n) ≤ f(y_1, ..., y_n) for all x, y where x_i ≤ y_i for all i and x_i, y_i ∈ {0, 1}.

An alternative definition is that in a monotone increasing boolean function, the minimal dnf and cnf forms contain no negations [Biere and Gomes, 2006, p. 228].

This is also sometimes called a positive unate boolean function.

majority is a boolean function sometimes used in reliability.

2.31 Definition (majority). majority(e_1, e_2, ..., e_n) = true if the majority of the boolean variables e_k are true. If exactly half of the e_k are true, majority gives false.
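Definition 2.30 can be checked by brute force for small n: a boolean function is monotone increasing exactly when flipping any single input from 0 to 1 never changes the output from 1 to 0 (any componentwise increase is a chain of such flips). A Python sketch of this check (illustrative; chapter 3.1 covers the check used in the actual implementation):

```python
from itertools import product

def is_monotone_increasing(phi, n):
    """Brute-force check of definition 2.30: raising any single input
    from 0 to 1 must never decrease the output of phi."""
    for x in product((0, 1), repeat=n):
        for i in range(n):
            if x[i] == 0:
                y = x[:i] + (1,) + x[i + 1:]
                if phi(*x) > phi(*y):
                    return False
    return True

# The majority function of definition 2.31 (exact ties give False):
def majority(*args):
    return sum(args) > len(args) / 2

series = lambda a, b, c: a and b and c      # monotone (series system)
with_negation = lambda a, b: a and not b    # not monotone
```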

2.3 Systems of components

The previous definitions describe single components. Now we expand the scope and look at systems of components. For this we need a few more definitions. We can define a structure function φ(x⃗).

2.32 Definition (Structure function). We define the structure function of a system with the component states given as the vector x⃗ as follows:

φ(x⃗) = 1   (2.43)

if the system works with the component states x⃗, and

φ(x⃗) = 0   (2.44)

if the system does not work with the component states x⃗.

The structure function is a boolean expression, where the states of the components are represented by the state as per definition 2.2.

The system structures most commonly found in literature are the series system and the parallel system. These are presented below.



Figure 2.1:A serial system with two components.

2.3.1 Series system

The series system is the system where all components are needed for the system to function. The rbd (see section 2.4) of a simple serial system with two components is shown in figure 2.1. The time to failure T then is the time to failure for the first component that fails. With the structure function we can express this as follows:

φ(x⃗) = min{x_1, x_2, . . . , x_n} = ∏_{i=1}^{n} x_i   (2.45)

or equivalently:

φ(x⃗) = x_1 ∧ x_2 ∧ · · · ∧ x_n   (2.46)

2.3.2 Parallel system


Figure 2.2:A parallel system with two components.

The parallel system is the system where only one of the components is needed for the system to work. A simple parallel system with two components is shown in figure 2.2. The time to failure T then is the time to failure for the last component to fail. With the structure function we can express this as follows:

φ(x⃗) = max{x_1, x_2, . . . , x_n} = 1 − ∏_{i=1}^{n} (1 − x_i)   (2.47)

or equivalently:

φ(x⃗) = x_1 ∨ x_2 ∨ · · · ∨ x_n   (2.48)

2.3.3 Mixed system

It can be shown that any system with an increasing structure function and no irrelevant components can be seen as a series arrangement of parallel systems, or equivalently, as a parallel arrangement of series systems, see Leemis [2009, pp. 27-29]. In this way, more complex systems can be modeled. For such a general system, we define the reliability distribution.

2.33 Definition (Reliability distribution). The reliability distribution of a system is defined as the lifetime distribution of that system.

2.3.4 Structure function to survival function

Once we have the structure function, the survival function can be calculated by replacing the state variables of the components in the structure function with their survival functions. For the series system this gives

Sserial(t) = S1(t) · S2(t) · · · Sn(t) (2.49)

and for the parallel system

S_parallel(t) = 1 − (1 − S_1(t)) · (1 − S_2(t)) · · · (1 − S_n(t))   (2.50)

as seen, for example, in Leemis [2009, pp. 18-19].
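Equations (2.49) and (2.50) translate directly into code. The following Python sketch (illustrative, not the thesis's Mathematica implementation) builds system survival functions from component survival functions; for exponential components, a series system is again exponential, with the failure rates adding:

```python
import math

def series_survival(component_survivals):
    """Eq. (2.49): a series system works only while every component works."""
    def S(t):
        out = 1.0
        for Si in component_survivals:
            out *= Si(t)
        return out
    return S

def parallel_survival(component_survivals):
    """Eq. (2.50): a parallel system works while any component works."""
    def S(t):
        out = 1.0
        for Si in component_survivals:
            out *= 1.0 - Si(t)
        return 1.0 - out
    return S

# Two exponential components with rates 1.0 and 2.0:
S1 = lambda t: math.exp(-1.0 * t)
S2 = lambda t: math.exp(-2.0 * t)
S_series = series_survival([S1, S2])      # = exp(-3t): rates add
S_parallel = parallel_survival([S1, S2])
```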

2.3.5 Standby systems

For some real world systems, a few more advanced concepts in the system model may be considered for modeling. For example, in a critical system, standby components are often used, which will be switched on and used when the original component fails. Systems using this concept are called standby systems.

There are three different categories of standby components: hot standby, warm standby and cold standby [Kuo and Zuo, 2002, p. 129]. Hot standby components are always switched on, and have the same failure distribution as normal components. Cold standby components are switched off until they are needed, and therefore cannot fail before that time. Warm standby components have some probability of failing while waiting to be used. This probability is normally lower than the failure probability of the active component.

A further complication in the real world is that the component responsible for the switching itself can fail. This can be modeled with imperfect switching, either as a component with a lifetime distribution, or as a probability of failure on each switch.


Cold standby

The cold standby system with perfect switching fails when the last component fails, and the system lifetime is equal to the sum of the component lifetimes, as we assume that switching takes no time and that no component fails before it is switched on. The survival function, which describes the probability that the system works until time t, can be found intuitively.

2.34 Example: Two component cold standby, perfect switching

We consider a standby system with one component in standby, as shown in figure 2.3.

Figure 2.3: A standby system with two components.

Also assume that the switch always works as intended. This system works until time t in either of the following scenarios:

• component 1 survives until time t

• component 1 fails at time x < t, and component 2 survives longer than t − x

As these two scenarios are mutually exclusive, the survival function follows:

S_cold_standby_2(t) = S_1(t) + ∫_0^t f_1(x) S_2(t − x) dx    (2.51)
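As an illustration, equation 2.51 can be checked numerically. The Python sketch below (ours, not part of the thesis implementation) evaluates the convolution term with the trapezoidal rule for two exponential components with rate 1, a case for which the closed form e^{−t}(1 + t) is known:

```python
import math

def s_cold_standby_2(S1, f1, S2, t, n=20000):
    # Eq. (2.51): S(t) = S1(t) + integral_0^t f1(x) S2(t - x) dx,
    # with the integral evaluated by the trapezoidal rule
    h = t / n
    acc = 0.5 * (f1(0.0) * S2(t) + f1(t) * S2(0.0))
    for i in range(1, n):
        x = i * h
        acc += f1(x) * S2(t - x)
    return S1(t) + acc * h

# two identical exponential(1) components
S1 = lambda u: math.exp(-u)   # survival function of component 1
f1 = lambda u: math.exp(-u)   # pdf of component 1
S2 = lambda u: math.exp(-u)   # survival function of component 2

t = 2.0
# closed form for this case: e^{-t}(1 + t)
print(s_cold_standby_2(S1, f1, S2, t), math.exp(-t) * (1 + t))
```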

The general case for n components can be found in a similar way as in example 2.34.


2.35 Definition (Survival function, cold standby system). The survival function of a cold standby system with n components, where we assume perfect switching, is defined as [Kuo and Zuo, 2002, Equation 4.133, p. 131]:

S_cold_standby_n(t) = S_1(t) + ∫_0^t f_1(x_1) S_2(t − x_1) dx_1 +
    + ∫_0^t f_1(x_1) ∫_0^{t−x_1} f_2(x_2) S_3(t − x_1 − x_2) dx_2 dx_1 + ⋯ +
    + ∫_0^t f_1(x_1) ∫_0^{t−x_1} f_2(x_2) ⋯ ∫_0^{t−x_1−x_2−⋯−x_{n−2}} f_{n−1}(x_{n−1}) S_n(t − x_1 − ⋯ − x_{n−1}) dx_{n−1} ⋯ dx_2 dx_1    (2.52)

Warm standby

The warm standby case considers the possibility that components fail while they are in standby, waiting to go into operation. This possibility of failure is modeled by a lifetime distribution for the standby mode, in addition to the lifetime distribution while the component is operational. There is very little information in the literature on how to compute warm standby systems exactly. This is probably because it would be extremely tedious to do by hand, as the complexity grows rapidly with the number of standby components. Results for two components can be found, for example, in Kuo and Zuo [2002, Equation 4.138, p. 138]. The result is reproduced here for convenience:

S_warm_standby_2(t) = S_1(t) + ∫_0^t f_1(x) S_2sb(x) S_2op(t − x) dx    (2.53)

where S_2sb(t) is the survival function for component 2 in standby mode, and S_2op(t) the survival function for component 2 while it is operating.
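Equation 2.53 can likewise be verified numerically. In the Python sketch below (ours; the standby rate mu = 0.5 is an arbitrary assumed value), both components operate with rate 1, and the integral has the closed form e^{−t}(1 + (1 − e^{−μt})/μ):

```python
import math

mu = 0.5  # assumed standby failure rate (arbitrary illustration value)

f1   = lambda x: math.exp(-x)        # pdf of operating component 1, exponential(1)
S2sb = lambda x: math.exp(-mu * x)   # component 2 survival in standby mode
S2op = lambda x: math.exp(-x)        # component 2 survival while operating

def s_warm_standby_2(t, n=20000):
    # Eq. (2.53): S(t) = S1(t) + integral_0^t f1(x) S2sb(x) S2op(t-x) dx,
    # evaluated with the trapezoidal rule
    h = t / n
    acc = 0.5 * (f1(0.0) * S2sb(0.0) * S2op(t) + f1(t) * S2sb(t) * S2op(0.0))
    for i in range(1, n):
        x = i * h
        acc += f1(x) * S2sb(x) * S2op(t - x)
    return math.exp(-t) + acc * h

t = 2.0
# closed form for these rates: e^{-t} * (1 + (1 - e^{-mu t}) / mu)
print(s_warm_standby_2(t), math.exp(-t) * (1 + (1 - math.exp(-mu * t)) / mu))
```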

2.4 Graph theory

An alternative way to define a structure function is through a reliability block diagram (rbd). This is essentially a graph, defining a system structure. If a path can be found from the left (or the start vertex) to the right (the end vertex), the system works, and if no path can be found the system has failed. Each component can be represented by a vertex, usually presented in the form of a rectangular block. A failed vertex is represented by removing that vertex and all connecting edges from the rbd.


2.36 Example: rbd of a simple mixed system

Consider a system with 3 components, x, y and z. For the system to work, x has to work, and y or z has to work. This can be represented either by a boolean function:

φ(t) = x(t) ∧ (y(t) ∨ z(t)) (2.54)

or as a reliability block diagram, as shown in figure 2.4.

It can easily be seen that this figure represents the described system. To get from the start vertex to the end vertex, a path has to go through x, and then either y or z.

Figure 2.4: A simple mixed system.

2.4.1 Adjacency matrix

An adjacency matrix is a matrix describing a graph as follows.

2.37 Definition (Adjacency matrix). Given a graph G, the adjacency matrix is a matrix of size n × n, where n is the number of vertices in G. Each entry a_ij contains the number of edges from vertex v_i to vertex v_j.

This is illustrated in the following example.

2.38 Example: Adjacency matrix for a simple mixed system Consider the graph in figure 2.4. The adjacency matrix for this system is:

        S  x  y  z  E
    S ( 0  1  0  0  0 )
    x ( 0  0  1  1  0 )
    y ( 0  0  0  0  1 )
    z ( 0  0  0  0  1 )
    E ( 0  0  0  0  0 )    (2.55)
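The adjacency matrix in equation 2.55, together with a graph search, suffices to evaluate the reliability block diagram: the system works exactly when the end vertex is reachable from the start vertex after the failed component vertices are removed. A small Python sketch (ours, for illustration):

```python
# Adjacency matrix of the RBD in figure 2.4: vertices S, x, y, z, E
verts = ["S", "x", "y", "z", "E"]
A = [
    [0, 1, 0, 0, 0],  # S -> x
    [0, 0, 1, 1, 0],  # x -> y, x -> z
    [0, 0, 0, 0, 1],  # y -> E
    [0, 0, 0, 0, 1],  # z -> E
    [0, 0, 0, 0, 0],  # E has no outgoing edges
]

def reachable(A, failed=()):
    # Depth-first search from S to E, skipping failed component vertices
    n = len(A)
    dead = {verts.index(v) for v in failed}
    seen, stack = set(), [0]
    while stack:
        i = stack.pop()
        if i in dead or i in seen:
            continue
        seen.add(i)
        for j in range(n):
            if A[i][j]:
                stack.append(j)
    return (n - 1) in seen

print(reachable(A))                 # True: all components working
print(reachable(A, failed=("x",)))  # False: x alone is a cut set
print(reachable(A, failed=("y",)))  # True: the path via z remains
```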


2.4.2 Path and cut vectors

The structure of the system can also be described with path vectors or cut vectors.

2.39 Definition (Path vector). A path vector is a vector ~x for which the following property holds:

φ(~x) = 1 (2.56)

Equivalently, a path vector is a vector of component states ~x for which the system works.

A subset of the path vectors are the minimal path vectors.

2.40 Definition (Minimal path vector). The set of minimal path vectors are the path vectors for which the system will stop working if any working component in the vector fails.

2.41 Example: Minimal path vectors of a simple mixed system

Consider the system in figure 2.4. The set of path vectors are {1, 1, 0}, {1, 0, 1} and {1, 1, 1}. Of these, the last one is not minimal.

States for which the system does not work are defined by cut vectors.

2.42 Definition (Cut vector). A cut vector is a vector ~x for which the following property holds:

φ(~x) = 0 (2.57)

Equivalently, a cut vector is a vector of component states ~x for which the system does not work.

As with path vectors, a subset of cut vectors, the minimal cut vectors, can be defined.

2.43 Definition (Minimal cut vector). The set of minimal cut vectors are the cut vectors for which the system would work if any of the failed components was repaired.

2.44 Example: Minimal cut vectors of a simple mixed system

Consider the system in figure 2.4. The set of minimal cut vectors are {0, 1, 1} and {1, 0, 0}.

2.4.3 Minimal cut set

For each minimal cut vector there is a minimal cut set which contains the failed components in that cut vector.


2.45 Example: Minimal cut sets of a simple mixed system

Consider the system in figure 2.4. The minimal cut sets of this system are {y, z} and {x}.
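The path and cut vector definitions above lend themselves to brute-force enumeration for small systems. The Python sketch below (ours) recovers the minimal path vectors and minimal cut sets of the system in figure 2.4:

```python
from itertools import product

def phi(x, y, z):
    # structure function of the mixed system: x and (y or z)
    return x and (y or z)

comps = ("x", "y", "z")

# path vectors: states for which the system works
path_vectors = [v for v in product((0, 1), repeat=3) if phi(*v)]

def is_minimal_path(v):
    # minimal if failing any working component breaks the system
    for i in range(3):
        if v[i]:
            w = list(v); w[i] = 0
            if phi(*w):
                return False
    return True

minimal_paths = [v for v in path_vectors if is_minimal_path(v)]

# cut vectors: states for which the system is failed
cut_vectors = [v for v in product((0, 1), repeat=3) if not phi(*v)]

def is_minimal_cut(v):
    # minimal if repairing any failed component restores the system
    for i in range(3):
        if not v[i]:
            w = list(v); w[i] = 1
            if not phi(*w):
                return False
    return True

# minimal cut sets: the failed components of each minimal cut vector
minimal_cuts = [{comps[i] for i in range(3) if not v[i]}
                for v in cut_vectors if is_minimal_cut(v)]

print(minimal_paths)  # the two minimal path vectors
print(minimal_cuts)   # the minimal cut sets {x} and {y, z}
```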

2.5 Importance measures

When designing and analyzing a system, it is of interest to know the importance of the different components in a system and how they contribute to the overall reliability of the system. There are several measures for this, depending on how much information is available, and what measure of importance is of interest. The simplest case is structural importance. If more advanced analysis is desired, there are many importance measures that account for the lifetime distributions.

2.5.1 Structural importance

The simplest measure of how important a component is for the reliability of a system is the structural importance. It only takes into account the structure of the system, and not the lifetime distributions of the components. As such, it is relatively easy to calculate, and can for example be used in the design phase or when the lifetime distributions are not known. It is also an alternative when the more advanced measures would be too time consuming to compute or difficult to use.

2.46 Definition (Structural importance). When φ(s_i, ~x) is the structure function where component i is in state s, the structural importance of a component i in a system with n components is defined as

I_φ(i) = 1/2^{n−1} · Σ_{~x | x_i = 1} [φ(1_i, ~x) − φ(0_i, ~x)]    (2.58)

for i = 1, 2, . . . , n.

The result can be seen as a measure of how much the system would suffer from the component going from a working state (φ(1i, ~x)) to a failed state (φ(0i, ~x)).

2.47 Example: Structural importance for a mixed system

Calculate the structural importance for the mixed system shown in figure 2.5. For component x the state vectors are (1,0,0), (1,0,1), (1,1,0) and (1,1,1). Of these four vectors, the last three correspond to a working system, and hence the structural importance is 1/4 [(0 − 0) + (1 − 0) + (1 − 0) + (1 − 0)] = 3/4.

For component y the summation over the state vectors yields: 1/4 [(0 − 0) + (0 − 0) + (1 − 0) + (1 − 1)] = 1/4. Similarly, component z has the same structural importance as component y.
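Definition 2.46 can be evaluated mechanically by summing over all states of the remaining components. A Python sketch (ours) reproducing the values 3/4 and 1/4 from the example:

```python
from itertools import product

def phi(x, y, z):
    # structure function of the mixed system: x and (y or z)
    return x and (y or z)

def structural_importance(i, n=3):
    # Eq. (2.58): average, over the 2^{n-1} states of the other components,
    # of phi with component i working minus phi with component i failed
    total = 0
    for rest in product((0, 1), repeat=n - 1):
        hi = list(rest); hi.insert(i, 1)   # component i working
        lo = list(rest); lo.insert(i, 0)   # component i failed
        total += int(bool(phi(*hi))) - int(bool(phi(*lo)))
    return total / 2 ** (n - 1)

print(structural_importance(0))  # x: 3/4
print(structural_importance(1))  # y: 1/4
print(structural_importance(2))  # z: 1/4
```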


Figure 2.5: A simple mixed system.

2.5.2 Birnbaum importance

The Birnbaum importance (sometimes also called reliability importance) is somewhat more advanced than the structural importance, since it takes into account the lifetime distributions of the components. It does, however, ignore the lifetime distribution of the component studied. The original version of this measure was first introduced in Birnbaum [1968, pp. 9-11]. However, that definition only used a fixed probability for each component, ignoring the time component. A later adaptation can be found in Natvig and Gåsemyr [2009, p. 605], which is presented here:

2.48 Definition (Birnbaum importance). If S(x_i, t) is the survival function for the system at time t given that component i has state x_i, and S_i(t) is the survival function for component i, the Birnbaum importance I_B^{(i)} of a component i in a system at time t is defined as

I_B^{(i)}(t) = ∂S(t) / ∂S_i(t)    (2.59)

or equivalently

I_B^{(i)}(t) = S(1_i, t) − S(0_i, t)    (2.60)

for i = 1, 2, . . . , n.

For a component with a high Birnbaum importance, a small change in reliability will give a large increase in system reliability. This can be used to decide which component to focus improvement efforts on in the design of a system.

2.49 Example: Birnbaum importance for a mixed system

Consider the same system as in example 2.47, but this time with exponential lifetime distributions for all components. Let the failure rate be 1 for all components.


The Birnbaum importance of component x is S(1_x, t) − S(0_x, t). The system does not work with component x in a failed state, so S(0_x, t) = 0, and S(1_x, t) corresponds to a parallel system of two exponential distributions. The parallel system can be calculated with equation 2.47:

S(1_x, t) = 1 − (1 − S_1(t))(1 − S_2(t)) = 2e^{−t} − e^{−2t}    (2.61)

where S_1(t) = S_2(t) = e^{−t}.

For component y, the working state corresponds to a system with just component x and survival function S(1_y, t) = e^{−t}, and the failed state corresponds to a serial system of components x and z with survival function S(0_y, t) = e^{−2t}. Equation 2.60 now gives the Birnbaum importance as

I_B^{(y)}(t) = S(1_y, t) − S(0_y, t) = e^{−t} − e^{−2t}    (2.62)

The Birnbaum importance plotted as a function of time can be seen in figure 2.6.

Figure 2.6: The Birnbaum importance for component x and y over time.
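The computation in example 2.49 can be reproduced numerically from equation 2.60. In the Python sketch below (ours), all three components of the mixed system are exponential with rate 1:

```python
import math

def S_system(sx, sy, sz):
    # survival of the mixed system x and (y or z) with independent components
    return sx * (1 - (1 - sy) * (1 - sz))

def birnbaum(i, t):
    # Eq. (2.60): S(1_i, t) - S(0_i, t), all components exponential(1)
    s = [math.exp(-t)] * 3
    hi = s.copy(); hi[i] = 1.0   # component i surely working
    lo = s.copy(); lo[i] = 0.0   # component i surely failed
    return S_system(*hi) - S_system(*lo)

t = 1.0
print(birnbaum(0, t))  # x: 2e^{-t} - e^{-2t}
print(birnbaum(1, t))  # y: e^{-t} - e^{-2t}
```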

2.5.3 Risk Achievement Worth and Risk Reduction Worth

In nuclear power stations, the two related measures Risk Achievement Worth (raw) and Risk Reduction Worth (rrw) are often used [Rausand and Høyland, 2004, pp. 190 - 191].

2.50 Definition (Risk Achievement Worth, raw). The importance I_raw^{(i)} of a component i is defined as

I_raw^{(i)}(t) = (1 − S(0_i, t)) / (1 − S(t))    (2.63)

for i = 1, 2, . . . , n.

The raw measure represents how much the component is worth for the reliability of the system. Components with high raw values are the ones that will impact the system the most if their reliability would go down.


2.51 Definition (Risk Reduction Worth, rrw). The importance I_rrw^{(i)} of a component i is defined as

I_rrw^{(i)}(t) = (1 − S(t)) / (1 − S(1_i, t))    (2.64)

for i = 1, 2, . . . , n.

The rrw represents with what ratio the system reliability would be improved by replacing the component with a perfect one.

2.5.4 Improvement potential

The improvement potential I_IP^{(i)} of a component i also describes how much the system reliability would be increased by replacing i with a perfect component.

2.52 Definition (Improvement potential).

I_IP^{(i)}(t) = S(1_i, t) − S(t)    (2.65)

for i = 1, 2, . . . , n.

It can also be defined in terms of I_B^{(i)}(t) as

I_IP^{(i)}(t) = I_B^{(i)}(t) · (1 − S_i(t))    (2.66)

This is related to rrw and is sometimes called the rrw calculated as a difference [Modarres et al., 2010, p. 309].
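For the mixed system of figure 2.5 with exponential(1) components, the measures raw, rrw and improvement potential can all be evaluated directly from their definitions. A Python sketch (ours, for illustration):

```python
import math

def S_system(sx, sy, sz):
    # survival of the mixed system x and (y or z) with independent components
    return sx * (1 - (1 - sy) * (1 - sz))

t = 1.0
p = math.exp(-t)          # survival of each exponential(1) component at t
S = S_system(p, p, p)     # system survival at t

# Risk Achievement Worth for x (eq. 2.63): (1 - S(0_x, t)) / (1 - S(t))
raw_x = (1 - S_system(0.0, p, p)) / (1 - S)
# Risk Reduction Worth for x (eq. 2.64): (1 - S(t)) / (1 - S(1_x, t))
rrw_x = (1 - S) / (1 - S_system(1.0, p, p))
# Improvement potential for x (eq. 2.65): S(1_x, t) - S(t)
ip_x = S_system(1.0, p, p) - S

print(raw_x, rrw_x, ip_x)
```

Since the system fails whenever x fails, S(0_x, t) = 0 and raw_x reduces to 1/(1 − S(t)).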

2.5.5 Barlow-Proschan importance

This measure is a weighted version of the Birnbaum importance, where the weight is the pdf fi(t) of the component i being investigated. The measure can be seen as the probability that the component i fails at the time the system fails.

2.53 Definition (Barlow-Proschan importance).

I_B−P^{(i)} = ∫_0^∞ I_B^{(i)}(t) f_i(t) dt = ∫_0^∞ [S(1_i, t) − S(0_i, t)] f_i(t) dt    (2.67)

for i = 1, 2, . . . , n.

2.54 Example: Barlow-Proschan importance

Calculate the Barlow-Proschan importance for component x in figure 2.5 when all components have exponential lifetime distributions with failure rate 1. When component x is working, the survival function will be S(1_x, t) = −e^{−2t} + 2e^{−t}


and when component x is failed the survival function is zero. Equation 2.67 gives:

I_B−P^{(x)} = ∫_0^∞ (−e^{−2t} + 2e^{−t}) e^{−t} dt = 2/3    (2.68)
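The value 2/3 in equation 2.68 can be confirmed by numerical integration, truncating the infinite upper limit at a point where the integrand is negligible. A Python sketch (ours):

```python
import math

def integrand(t):
    # I_B^{(x)}(t) * f_x(t) for exponential(1) components:
    # (2e^{-t} - e^{-2t}) * e^{-t}
    return (2 * math.exp(-t) - math.exp(-2 * t)) * math.exp(-t)

# trapezoidal rule on [0, T]; the tail beyond T = 40 is negligible
T, n = 40.0, 200000
h = T / n
total = 0.5 * (integrand(0.0) + integrand(T))
for i in range(1, n):
    total += integrand(i * h)
total *= h

print(total)  # approximately 2/3
```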

2.5.6 Criticality importance

The criticality importance measure can be defined as either success or failure oriented. For the failure oriented case, the criticality importance I_CR^{(i)} is the probability that component i is critical for the system and failed at time t, given that we know the system is failed at t.

2.55 Definition (Criticality importance, failure oriented). The failure oriented criticality importance I_CR−F^{(i)} of a component i at time t is defined as

I_CR−F^{(i)}(t) = I_B^{(i)}(t) · F_i(t) / F(t)    (2.69)

for i = 1, 2, . . . , n.

This measure is often called Fussell-Vesely importance, but is not to be confused with the definition of Fussell-Vesely importance used in this thesis.

The success oriented version is very similar. Instead of using the cdf, we use the survival function:

2.56 Definition (Criticality importance, success oriented). The success oriented criticality importance I_CR−S^{(i)} of a component i at time t is defined as

I_CR−S^{(i)}(t) = I_B^{(i)}(t) · S_i(t) / S(t)    (2.70)

for i = 1, 2, . . . , n.

2.5.7 Fussell-Vesely importance

The Fussell-Vesely measure was suggested by W.E. Vesely in Vesely [1970], and was further developed by J.B. Fussell in Fussell [1975].

2.57 Definition (Fussell-Vesely importance). The importance I_F−V^{(i)}(t) of a component i is the probability that at least one minimal cut set that contains i, MCS_i, is failed at time t, given that the system is failed at time t:

I_F−V^{(i)}(t) = P(∪ MCS_i)


The Fussell-Vesely importance can be interpreted as the fraction of the system risk that is associated with component i.

2.6 Special functions

To be able to represent certain expressions in a concise way, we define the incomplete Gamma function.

2.58 Definition (Incomplete Gamma function Γ). The incomplete Gamma function Γ(a, z) is defined, for positive a and z, as:

Γ(a, z) = ∫_z^∞ t^{a−1} e^{−t} dt

3 Structure function

“By a model is meant a mathematical construct which, with the addition of certain verbal interpretations, describes observed phenomena. The justification of such a mathematical construct is solely and precisely that it is expected to work.”

John von Neumann

Given the structure function in a boolean form, we need to do certain conversions and changes on it to get a form which can be efficiently used in computation. In this chapter, we present these operations.

3.1 Check if a boolean function is increasing

We first want to validate that the given function does indeed represent a valid system. The requirement is that the function is increasing. According to definition 2.30, a monotone increasing boolean function is equivalent to a boolean function which does not contain ¬ in its cnf or dnf form. To check if a boolean expression is monotone increasing is then a matter of converting the expression to conjunctive normal form (cnf) and checking if the result contains any ¬. This can easily be done with the builtin function BooleanConvert. However, this does more work than we actually need, and is therefore slower than necessary. The alternative approach is to structurally take the expression in question apart on ∧ and ∨ recursively, and only convert to cnf for the subparts of the expression that actually contain other operators, such as ∨, ∧ and ¬. A speed comparison for the systems defined in appendices B and C is shown in table 3.1.
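For small systems, the monotonicity requirement can also be checked by brute force over all 2^n state vectors, which makes a useful reference implementation for testing. A Python sketch (ours, not the Mathematica implementation discussed above):

```python
from itertools import product

def is_increasing(phi, n):
    # brute force: flipping any component from 0 to 1 must never
    # take the system from working to failed
    for x in product((0, 1), repeat=n):
        for i in range(n):
            if x[i] == 0:
                y = list(x); y[i] = 1
                if phi(*x) and not phi(*y):
                    return False
    return True

print(is_increasing(lambda x, y, z: x and (y or z), 3))  # True: a valid system
print(is_increasing(lambda x, y: x and not y, 2))        # False: contains negation
```

This check is exponential in the number of components, so it only serves as a correctness reference for the structural approach described above.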


System            BooleanConvert   Our implementation
Airplane cockpit  0.13             0.47
Diesel generator  2780             1.36

Table 3.1: Computation time in milliseconds for checking if a function is monotone increasing, averaged over 1000 runs.

3.2 Converting a boolean expression

To make computation easy, and to be able to use simple replacement rules for the conversion of the structure function to a survival function, the boolean expression for the structure function should be in a form only containing ∧ and ∨. A simple solution, used in our first approach, was to just use the builtin Mathematica function BooleanConvert for converting to conjunctive normal form. However, this results in a very large expression for many systems, as it, again, does more work than needed. The only thing we need is to do the following conversions:

a ⇒ b → ¬a ∨ b    (3.1)
¬(a ∨ b) → ¬a ∧ ¬b    (3.2)
¬(a ∧ b) → ¬a ∨ ¬b    (3.3)

We also need to convert the majority function to the appropriate combination of ∧ and ∨.

An efficient way to do these conversions is to use Mathematica's pattern matching [Wolfram Research, 2011] to look for the functions we need to convert, and then only convert these subfunctions. This is somewhat slower than running BooleanConvert on an expression with a large number of the unwanted functions, but gives a smaller final representation. The reason this is slower than the builtin function is most likely that the builtin function is more optimized. On an expression with only a few of the unwanted functions, our implementation is found to be faster, and also returns a much smaller expression, which is beneficial for further calculation and memory use. This second type of expression is the one that occurs most in real world systems.

Table 3.2 shows the computation time for BooleanConvert and our implementation for the systems given in appendices B and C.

System            BooleanConvert   Our implementation
Airplane cockpit  0.13             1.65
Diesel generator  2793             4.97

Table 3.2: Computation time in milliseconds for converting a boolean function, averaged over 1000 runs.


3.3 Structure function to survival function

The survival function of a system can be calculated with S(t) = E(Φ(~x)) [Leemis, 2009, p. 32]. The first step of this calculation is to represent the structure function in its polynomial form using the following relations:

x_i ∧ x_j → x_i · x_j    (3.4)
x_i ∨ x_j → 1 − (1 − x_i) · (1 − x_j)    (3.5)

An intuitive way to explain these rules is to see the complex system as consisting of a combination of the special cases of serial and parallel systems.

We then need to take the expectation of this expression. All states are Bernoulli variables, which gives that E(X^n) = E(X)†. This means we can find the survival function of the system, S(t), by expanding the expression, replacing all exponents with 1, and then replacing each variable with the corresponding survival function, as the survival function at time t is the probability of a component working at time t, which is the expectation of the state variable at time t.

3.3.1 Expanding to remove exponents

To be able to replace the states above with the survival functions for the components, all exponents have to be removed from the expression we get from the replacement procedure. To do this, we started out by using the builtin Mathematica function Expand, which expands the whole expression as much as possible. Given a large system, such as the system describing the information system in an airplane cockpit (see appendix B), this quickly results in large memory consumption. An alternative solution was devised, where instead of expanding everything, we only expand on the variables that actually need expanding, i.e. the ones that have an exponent higher than 1. This was checked with the builtin function Exponent. This gives a much smaller expression that takes less memory to handle, and thus enables the implementation to handle much larger systems. It also gives a speedup in computation time, as fewer computations are needed to expand the expression.

Later on an even larger example was tried, namely a system of diesel generators in a nuclear power plant (see appendix C). This system took a relatively long time to calculate properties for, and almost all the time was spent in expanding. To speed up the calculations even more, another solution to give an increase in execution speed was investigated and implemented.

As most of the time is spent in expanding the expression, we naturally want to expand as little as possible. The only time we need to expand is if there are exponents that are not 1 in the polynomial. That only occurs if a variable in the boolean expression occurs multiple times. A natural approach to take then is to get rid of as many duplications as possible in the boolean expressions, by applying different boolean algebra transformation rules. This gives a remarkable increase in computation speed for many systems. A problem with this approach is that there are systems for which there is no way to represent the boolean function without duplicating variables. Therefore, this is only a partial solution, and after applying the transformation, checks for exponents must still be done. However, these checks are inexpensive compared to expanding, so a substantial decrease in computation time is still achieved.

† Proof: The mgf of a Bernoulli distribution is M_X(t) = q + pe^t, and since E(X^n) = M_X^{(n)}(0) = d^n M_X(t)/dt^n |_{t=0}, we have E(X^n) = E(X) for Bernoulli random variables.

The simplifications used, with boolean functions g_1 and g_2, are:

g_1(e_1, b) ∧ b ∧ g_2(e_2, b) → g_1(e_1, 1) ∧ b ∧ g_2(e_2, 1)
g_1(e_1, b) ∨ b ∨ g_2(e_2, b) → g_1(e_1, 0) ∨ b ∨ g_2(e_2, 0)
d_1 ∨ (a_1 ∧ b ∧ a_2) ∨ d_2 ∨ (c_1 ∧ b ∧ c_2) ∨ d_3 → d_1 ∨ d_2 ∨ d_3 ∨ (b ∧ ((a_1 ∧ a_2) ∨ (c_1 ∧ c_2)))
d_1 ∧ (a_1 ∨ b ∨ a_2) ∧ d_2 ∧ (c_1 ∨ b ∨ c_2) ∧ d_3 → d_1 ∧ d_2 ∧ d_3 ∧ (b ∨ ((a_1 ∨ a_2) ∧ (c_1 ∨ c_2)))    (3.6)

The first two rules do pure simplifications, while the other two factor out expressions or variables. All variables with an index may or may not be present. Some examples where the rules are used are found in example 3.1. The rules are applied recursively to each subexpression.

3.1 Example: Boolean simplifications

A few boolean expressions are simplified according to the rules in equation 3.6. The first rule is used:

(x ∨ y) ∧ y = y (3.7)

The second rule works similarly:

(x ∧ y) ∨ z ∨ y = z ∨ y (3.8)

The third rule:

(x ∧ y) ∨ (x ∧ v) = x ∧ (y ∨ v) (3.9)

Applying the last rule in a similar manner:

(x ∨ y) ∧ (x ∨ v) = x ∨ (y ∧ v)    (3.10)

Finding the highest exponent of a polynomial

To find the highest exponent of our polynomial, we first used the Mathematica builtin function Exponent. We found, however, that a simplistic approach which recursively calculates the exponent of a certain variable in a polynomial performed better. The calls to Exponent were replaced with this new function, giving an increase in performance. This new function simply splits the expression recursively on multiplication and addition. On a multiplication, it returns the sum of the exponents, and on an addition the maximum of the exponents. If an expression does not contain the variable, it returns 0. Once a single symbol is reached, it returns 1 if the symbol is the variable we want the exponent for. A speed comparison is shown in table 3.3.

System            Exponent   Our implementation
Airplane cockpit  74.3       18.7
Diesel generator  191.2      190.8

Table 3.3: Computation time in milliseconds for finding exponents of all variables, averaged over 100 runs.

Expanding and exponent removal

An effective way of expanding and removing all exponents, which we finally used, is based on the Shannon expansion, presented in Shannon's master's thesis [Shannon, 1938, p. 34]. The approach is to recursively apply the Shannon expansion to the parts of the expression that contain the given variable. The order of variables on which to expand is chosen so that the one with the lowest exponent gets expanded first. This gives the result that we work with a smaller expression as long as possible.

The Shannon expansion on variable x of the function f works on the following principle:

f (x) = x (f (1) − f (0)) + f (0) (3.11)

We show our implementation in an example:

3.2 Example: Shannon expansion

Let us start with the polynomial for the system that works when two out of three components work:

1 − (1 − xy)(1 − xz)(1 − yz) (3.12)

We want to expand on the variables that have a degree higher than 1, and remove these exponents. The degrees are 2 for all variables. In this example we will illustrate the principle by expanding on x. Let the expanding function have the name SExp. First we split SExp over the minus:

SExp(1 − (1 − xy)(1 − xz)(1 − yz)) = SExp(1) − SExp((1 − xy)(1 − xz)(1 − yz))    (3.13)

1 does not contain x, so we can remove the expansion around it:

= 1 − SExp((1 − xy)(1 − xz)(1 − yz)) (3.14)

To expand on x, we take the polynomial from the input, and apply equation 3.11. We first compute f (1) and f (0):

f (1) = (1 − 1y)(1 − 1z)(1 − yz) = (1 − y)(1 − z)(1 − yz)

f (0) = (1 − 0y)(1 − 0z)(1 − yz) = (1 − yz) (3.15)

We can now expand (from equation 3.14):

= 1 − [x(f(1) − f(0)) + f(0)] =
= 1 − [x((1 − y)(1 − z)(1 − yz) − (1 − yz)) + (1 − yz)]    (3.16)

We now have an expression with degree 1 in x. The same procedure can be used for y and z. This allows us to use the replacement procedure in equations 3.4 and 3.5.

Table 3.4 shows the time required to remove all exponents from the diesel generator system in appendix C. The cockpit system example also used in the previous speed comparisons is not shown, as both solutions are instantaneous. This is because there are no exponents to remove. The times given are after applying the simplification rules in 3.6. In both cases, expanding is only done on the variables that require it, and with the variable with the lowest exponent first. We can see that our final implementation is significantly faster than the simple approach of expanding with Expand and then removing exponents.

System            Expand and Replace   Our implementation
Diesel generator  15.140               0.44

Table 3.4: Computation time in seconds for removing all exponents.

3.3 Example: Boolean expression to survival function

In this example we will go from a boolean expression to a survival function, using the same steps as the final implementation in Mathematica. We start out by defining our boolean expression:

¬(¬x ∨ ¬y) ∧ (z ∨ x)    (3.17)

The first thing we do is check that the expression is indeed a monotone increasing boolean function. The algorithm for this is the one presented in section 3.1. Let us call the recursive function IncrQ for the purpose of this example. The following chain of calls will be the result.

IncrQ(¬(¬x ∨ ¬y) ∧ (z ∨ x)) =
= IncrQ(¬(¬x ∨ ¬y)) ∧ IncrQ(z ∨ x) =
= IncrQ(BooleanConvert(¬(¬x ∨ ¬y))) ∧ IncrQ(z) ∧ IncrQ(x) =
= IncrQ(x ∧ y) ∧ true ∧ true =
= IncrQ(x) ∧ IncrQ(y) = true    (3.18)

This result shows that the given function is monotone increasing. We continue by converting the expression to one with only ∧ and ∨, as discussed in section 3.2. Let us call the function for this Conv. The following is the result on running the function on our expression.

Conv(¬(¬x ∨ ¬y) ∧ (z ∨ x)) =
= Conv(¬(¬x ∨ ¬y)) ∧ Conv(z ∨ x) =
= BooleanConvert(¬(¬x ∨ ¬y)) ∧ (z ∨ x) =
= (x ∧ y) ∧ (z ∨ x)    (3.19)

We now have a function consisting only of our three variables, x, y and z, as well as ∧ and ∨. The next step is to reduce our function as much as possible by applying the rules in equation 3.6. We can apply the first rule.

(x ∧ y) ∧ (z ∨ x) =
= x ∧ y ∧ (z ∨ x) =
= x ∧ y ∧ (z ∨ 1) =
= x ∧ y ∧ 1 = x ∧ y    (3.20)

As we can see, the function was in fact not dependent on z, making it an irrelevant component that does not impact the reliability of the system. We now do the replacement procedure given in equations 3.4 and 3.5:

x ∧ y → x · y (3.21)

We check that there are no exponents in the result, which is trivial in this case. Finally, we replace each variable with that component's survival function, arriving at the system's survival function.


4 Reliability distribution

“The theory of probabilities is basically just common sense reduced to calculus.”

Pierre-Simon Laplace

When the survival function is known, a large number of properties follow from fairly straightforward definitions. A complete list of properties and functions that can now be calculated for the system distribution is presented in table A.1. Since our implementation is integrated into the Mathematica framework, we can use the very large number of distributions already included in Mathematica as component distributions. This includes parametric distributions, non-parametric distributions and derived distributions. Parametric distributions are distributions defined by parameters, such as the exponential distribution or the Weibull distribution. Non-parametric distributions are distributions constructed directly from data. This can be done by smoothing the data, or by using a histogram of the data as a pdf. Under derived distributions, we find distributions defined as a function of a random variable, a truncated version of another distribution, or the order distribution presented in definition 2.17. A distribution can also be defined by simply giving a formula for the pdf or the survival function.

This thorough framework of distributions allows flexible modeling, either via parametric distributions, or directly from data collected during testing of components.

4.1 Properties for some basic systems

The standard example systems in the literature are the parallel, serial and k-out-of-n systems. The most commonly used distribution for the lifetime distributions is the exponential distribution, because of its simplicity in calculation. Since the
