Reliability based design optimization with experiments on demand
T. Dersjö a,b,∗, M. Olsson a
a Department of Solid Mechanics, Royal Institute of Technology, SE-100 44, Stockholm, Sweden
b Truck Chassis Development, Scania CV AB, SE-151 87, Södertälje, Sweden
Abstract
In this paper, an algorithm for reliability based design optimization (RBDO) is presented. It incorporates a novel procedure in which experiments are performed one at a time, where and when they are needed. The procedure is called experiments on demand. The experiment procedure utilizes properties specific to RBDO and the problem at hand, augmented by the concept of D-optimality familiar from traditional design of experiments. Furthermore, an adaptive surrogate model fitting scheme is proposed which balances numerical stability and convergence rate as well as accuracy. Benchmarked against algorithms in the literature, the number of experiments needed for convergence was reduced by up to 80 % for a frequently used analytical problem and by up to 19 % for an application example. The accuracy of the reliability index is in line with that of the most efficient algorithm against which it was benchmarked, but up to 3 % lower than that of the most accurate algorithm.
Keywords: Experiments on demand, Reliability based design optimization, Surrogate model
1. Introduction
Design of experiments (DoE) is a practice for extraction of a maximum amount of information from a given number of experiments. Specifically, it aims to reduce uncertainty caused by randomness. DoE is an integral part of the response surface methodology (RSM), a methodology for optimization of products and processes, see Myers and Montgomery (2002). A response surface is an approximation of a response obtained through experiments on a poorly understood physical system, i.e. a black-box function. The drive to reduce the number of expensive physics-based computer model evaluations has generated significant interest in DoE and RSM within simulation-driven development of structural components, see Roux et al. (1998), Redhe et al. (2002), and Youn and Choi (2004a). However, it has become apparent that the differences are not insignificant, see Simpson et al. (2001). A major difference between
∗ Corresponding author
Email addresses: tdersjo@kth.se (T. Dersjö), mart@kth.se (M. Olsson)
physical and virtual experimentation is that all virtual input can be controlled, whereas this is seldom true for physical experimentation. Hence, responses from virtual experiments on deterministic systems are fully deterministic while physical responses are generally not, even if the physical system is deterministic. Thus, a branch of DoE called design of computer experiments (DoCE), which considers deterministic responses, has emerged, see Sacks et al. (1989), Simpson et al. (2008), and Chen et al. (2006).
In DoCE, the concepts from DoE believed to be valid in computer experiments are further evaluated and additional considerations have been suggested. In Jin et al. (2005), the computational cost required to construct optimal designs was investigated and an efficient algorithm for that purpose was presented.
The use of response surfaces fitted using D-optimal experiment designs in crashworthiness-related problems has been studied in Redhe et al. (2002). However, within virtual optimization, there are also differences. In reliability based design optimization (RBDO), the randomness inherent to physical systems is taken into account by considering the design variables as stochastic instead of deterministic. This does not mean that they cannot be controlled in simulations. It does however mean that the response for the mean value is not as important as the response at the so-called most probable failure point (MPFP).
In Youn and Choi (2004a), it is stated that DoCE methods for deterministic optimization are not appropriate for RBDO applications since they do not produce samples near the MPFP, and an integrated DoCE/RSM method suitable for RBDO is proposed. However, the experiments are still performed in sets.
In this paper, an RBDO algorithm employing a problem-dependent computer experiment procedure is presented: experiments on demand (EoD). It is a one-experiment-at-a-time approach. The justification for the work is that computer experiments on physics-based models can be so expensive that the algorithms presented to date make RBDO industrially unfeasible, e.g. for large-scale FE-models. The goal of the proposed algorithm is to reduce the number of computer experiments needed for convergence in RBDO problems. This is achieved by making the most of the information available before performing another experiment, and by conducting experiments one at a time rather than in sets. Moreover, every new experiment is placed where it is predicted to add the most useful information, i.e. where the demand for new information is the highest. The definition of demand is here made by specifically considering the core of RBDO, which is the most probable failure point (MPFP). Furthermore, if experiments have already been performed in the vicinity of the MPFP, the next experiment is added in a D-optimal way. Although D-optimal designs are developed to deal with randomness and not model bias, a D-optimal design is not very different from a bias-optimal design if applied to a limited region of interest, according to Box and Draper (1971). Thus, the D-optimality-augmented experiments on demand procedure utilized in this paper combines aspects of both RBDO and classical DoE into an advantageous scheme that reduces the number of expensive computer model evaluations needed for convergence. The particular choice of D-optimality is not crucial, but distributing the experiments in a space-filling fashion in the design space is.
The outline of this paper is as follows: The RBDO algorithm is presented in Section 2, the EoD procedure is described in Section 3, Section 4 describes the adaptive surrogate model fit, and in Section 5, the algorithm is applied to numerical problems as well as problems from solid mechanics. A discussion of the results is given in Section 6 and conclusions are given in Section 7.
2. Reliability based design optimization
In reliability based design optimization (RBDO) the problem can be stated as

\min_{\boldsymbol{\mu}} C(\boldsymbol{\mu}) \quad \text{s.t.} \quad
\begin{cases}
p_{f,j}(\mathbf{X}) \le \alpha_j, & j = 1 \ldots N_C \\
\mu_i^L < \mu_i < \mu_i^U, & i = 1 \ldots N_X
\end{cases}
\tag{1}
Throughout this paper, \mathbf{X} = [X_1 \ldots X_{N_X}]^T is the design variable vector and its lowercase counterpart \mathbf{x} denotes realizations thereof, \boldsymbol{\mu} = [\mu_1 \ldots \mu_{N_X}]^T is the design variable mean value vector where \mu_i = E[X_i], C is the objective function (cost), p_{f,j} is the j:th failure probability, \alpha_j is the value of the j:th target failure probability, and \mu_i^L and \mu_i^U are the lower and upper bounds of design variable i, respectively. The probability of failure can be formulated using a failure function G and a limit state g separating the safe and the failure domains. Conventionally, g = 0 is used, so that
p_{f,j}(\mathbf{X}) = P(G_j(\mathbf{X}) \le 0) = \int_{G_j(\mathbf{x}) \le 0} f_{X_1 \ldots X_{N_X}}(\mathbf{x}) \, dx_1 \cdots dx_{N_X}, \tag{2}
where P(•) denotes the probability of the event and f_X is the joint probability distribution function of the design variables. In RBDO, the integral is almost without exception solved using either analytical formulations, such as the first order reliability method (FORM), see Madsen et al. (1986), or sampling based methods such as Monte Carlo simulation (MCS), see Rubinstein (1981). The majority of RBDO formulations use FORM for reliability assessment. FORM is based on an isoprobabilistic transformation of the design variables to normed normally distributed variables followed by a linearization of the failure function limit state (g = 0). Numerical formulations for the isoprobabilistic transformation have been proposed in HongShuang et al. (2008) and Noh et al. (2009). For non-linear functions, it was shown in Nikolaidis and Burdisso (1988) that the point of linearization is of utmost importance. Two points dominate the literature: the most probable failure point (MPFP) and the minimum performance target point (MPTP). It was shown by Youn and Choi (2004b) that formulations based on the MPTP are less sensitive to the non-linearity of the isoprobabilistic transformation than those based on the MPFP.
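To make the FORM estimate concrete, the following minimal sketch (with an illustrative linear failure function that is not from the paper) computes the reliability index as the distance from the origin of standard normal space to the linearized limit state g = 0, and the corresponding failure probability p_f = Φ(−β):

```python
import math
import numpy as np

def form_estimate(grad_g, g0):
    """FORM for a failure function linearized in standard normal u-space,
    g(u) ~ g0 + grad_g . u, with the safe domain at g > 0.
    beta is the distance from the origin to the hyperplane g = 0."""
    beta = g0 / np.linalg.norm(grad_g)
    p_f = 0.5 * math.erfc(beta / math.sqrt(2.0))  # Phi(-beta)
    return beta, p_f

# Hypothetical linearization: g(u) = 3 - u1 - u2
beta, p_f = form_estimate(np.array([-1.0, -1.0]), 3.0)
```

For this hyperplane, β = 3/√2 ≈ 2.12, so the FORM failure probability is about 1.7 %.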
Finally, in the formulations, the linearization point is either determined exactly for each iteration in the optimization or gradually converging approximations are used. In both Yang and Gu (2004) and Aoues and Chateauneuf (2010), the so-called single-loop single-variable RBDO algorithm, which is based on an approximate MPTP, was found to be most efficient and highly stable. Thus, the approximate MPTP approach presented for normally distributed variables by Chen et al. (1997), and further developed to apply to general distributions in Wang and Kodiyalam (2002), is used in this work.
2.1. Algorithm description
The RBDO algorithm employed in this work is presented in Fig. 1.
The computer experiments are presented in Sec- tion 3. The surrogate model used in this work is
\hat{r}(\mathbf{x}) = r_0 + a \left( \mathbf{n}^T \mathbf{x} + p \right)^{\gamma}. \tag{3}
It is a function with a hyperplane as limit state but with non-linearity in the gradient direction. For a justification of the model and a description of the coefficient fitting scheme, see Section 4.
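As an illustration, Eq. (3) and the resulting failure function G = r_max − r̂ can be sketched as follows; the coefficient values used below are hypothetical, chosen only so that the power term is well defined:

```python
import numpy as np

def r_hat(x, r0, a, n, p, gamma):
    """Directional surrogate of Eq. (3): a hyperplane limit state with a
    power-law non-linearity along the gradient direction n."""
    return r0 + a * (n @ x + p) ** gamma

def failure_function(x, r_max, r0, a, n, p, gamma):
    """G = r_max - r_hat; failure corresponds to G <= 0."""
    return r_max - r_hat(x, r0, a, n, p, gamma)

# Illustrative coefficients: n is a unit normal, gamma = 2
n = np.array([0.6, 0.8])
val = r_hat(np.array([1.0, 0.5]), r0=0.0, a=1.0, n=n, p=0.0, gamma=2.0)
G_val = failure_function(np.array([1.0, 0.5]), 2.0, 0.0, 1.0, n, 0.0, 2.0)
```

Here n^T x + p = 1.0, so r̂ = 1.0 and G = 1.0, i.e. the illustrative point lies in the safe domain.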
Based on the assumption that the design variables can be separated into their mean values \mu_i and a function of their normed normally distributed counterparts u_i as

X_i = \mu_i + H_i(u_i), \tag{4}

the equivalent standard deviation matrix components \hat{\sigma}_{ii} are estimated through
\hat{\sigma}_{ii,j}^{(k,l)} = \left. \frac{\partial H_i}{\partial u_i} \right|_{\mu_i^{(k,l)}, \, u_j^{*(k,l)}}, \tag{5}
Figure 1: Flowchart for the RBDO algorithm employed in this work. [Flowchart steps: set \mu_i^{(1)}, \mu_i^L, \mu_i^U, \beta, f_{X_1 X_2 \ldots X_{N_X}}; set k = 1, l = 1; design the computer experiment D_x^{(k)}; perform the experiments G_j(D_x^{(k)}); fit \hat{G}_j^{(k)}, j = 1, 2, \ldots, N_C; compute the N_C equivalent standard deviation matrices \hat{\boldsymbol{\sigma}}_j^{(k,l)}; compute the N_C u-space MPPs u_j^{*(k,l+1)}; solve for \boldsymbol{\mu}^{(k,l+1)}: min C(\boldsymbol{\mu}) s.t. \hat{G}_j^{(k,l)}(\boldsymbol{\mu}, u_j^{*(k,l+1)}) \ge 0; compute the CAP u^{*(k,l+1)}; check convergence in \boldsymbol{\mu} w.r.t. l (if not converged, l = l + 1) and then w.r.t. k (if not converged, k = k + 1); otherwise the solution has converged.]
The condition in Eq. (4) holds for a variety of probability distributions such as the Normal, the Lognormal, the Gumbel, the Uniform and the three-parameter Weibull if the variables are independent. If the design variables are normally distributed, then H_i(u_i) = \sigma_i u_i and the MPTP is independent of l. For other distributions and dependent variables, algorithms have been proposed by Noh et al. (2009). The important point here is that the approximate MPTP from the previous iteration u_j^{*(k,l)} is used.
In the RBDO algorithm, G = r_max − r, where r is the surrogate model presented in Eq. (3), is used to formulate the failure function. Having approximated the equivalent standard deviation matrix \hat{\boldsymbol{\sigma}}_j, approximate MPTPs u_j^{*(k,l+1)} are computed as
\mathbf{u}_j^{*(k,l+1)} = \frac{\hat{\boldsymbol{\sigma}}_j^{(k,l)} \nabla \hat{r}_j^{(k)}}{\left\| \hat{\boldsymbol{\sigma}}_j^{(k,l)} \nabla \hat{r}_j^{(k)} \right\|} \, \beta_j = \pm \hat{\boldsymbol{\sigma}}_j^{(k,l)} \hat{\mathbf{n}}_j^{(k)} \beta_j \tag{6}
where \beta_j = \Phi^{-1}(1 - \alpha_j) is the reliability index, \Phi(•) is the cumulative probability distribution for a normed normally distributed variable, and \hat{\mathbf{n}}_j^{(k)} is the k:th estimate of the j:th limit state normal. It can be noted here that the sign in Eq. (6) depends on the signs of the coefficients \hat{a} and \hat{\gamma}, i.e. the estimates of a and \gamma in Eq. (3). Finally, the (k,l):th optimization problem can be stated as
\min_{\boldsymbol{\mu}} \pm \hat{\mathbf{n}}_c^T \boldsymbol{\mu} \quad \text{s.t.} \quad
\begin{cases}
(\Delta r_j / \hat{a}_j)^{1/\hat{\gamma}_j} - p_j - \hat{\mathbf{n}}_j^T \hat{\boldsymbol{\sigma}}_j \mathbf{u}_j^* \le \hat{\mathbf{n}}_j^T \boldsymbol{\mu} \\
\mu_i^L < \mu_i < \mu_i^U
\end{cases}
\tag{7}
where the superscripts k, l have been dropped for readability but are clear from Eqs. (5) and (6), \hat{\mathbf{n}}_c is the normal of the cost function limit state, and \Delta r_j = r_{max,j} - r_{0j}. The sign in Eq. (7) is determined by the signs of the coefficients \hat{a}_c and \hat{\gamma}_c. As can be noted in Eq. (7), the surrogate model employed in this work facilitates the solution of the RBDO problem through a series of linear optimization problems. Thus, linear programming algorithms, such as the simplex method, can be used to solve the optimization problem once the coefficients have been estimated.
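Since each (k,l) subproblem in Eq. (7) is linear in μ, it can be handed to any off-the-shelf LP solver. The following sketch uses SciPy's `linprog` with illustrative numbers (a single constraint, and normals, right-hand side, and bounds that are not from the paper):

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative data: one linear reliability constraint n_j^T mu >= b_j,
# cost direction n_c, and box bounds on the design variable means.
n_c = np.array([1.0, 1.0])        # cost-function limit-state normal
n_j = np.array([0.6, 0.8])        # constraint limit-state normal
b_j = 2.0                         # right-hand side, as in Eq. (7)

res = linprog(c=n_c,              # minimize n_c^T mu
              A_ub=-n_j[None, :], # n_j^T mu >= b_j  ->  -n_j^T mu <= -b_j
              b_ub=[-b_j],
              bounds=[(0.0, 5.0), (0.0, 5.0)])
```

For these numbers the optimum sits at the vertex μ = (0, 2.5) with cost 2.5, since the second variable satisfies the constraint most cheaply per unit of cost.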
3. Experiment scheme
3.1. First iteration
In the first iteration, it is assumed that prior information specific to the problem at hand is not available. Therefore, a design which requires the least number of experiments needed to fit the surrogate model is used. For the directional surrogate model, the minimum number of experiments needed is N_X + 2. Also, the experiments \mathbf{x}_m need to be placed so that the projected design variable experiment vector [p] = [p_{m,j} = \mathbf{n}_j^T \mathbf{x}_m] has at least N_X + 1 unique entries for all constraints j = 1, \ldots, N_C. The following experiment design
D^{(1)} = \begin{bmatrix}
\boldsymbol{\mu}^{(1)T} \\
\left[ \boldsymbol{\mu}^{(1)} + \beta \hat{\sigma}_1^{(1)} \mathbf{e}_1 \right]^T \\
\vdots \\
\left[ \boldsymbol{\mu}^{(1)} + \beta \hat{\sigma}_{N_X}^{(1)} \mathbf{e}_{N_X} \right]^T \\
\left[ \boldsymbol{\mu}^{(1)} - \beta \sum_i \hat{\sigma}_i^{(1)} \mathbf{e}_i \right]^T
\end{bmatrix}, \tag{8}
where \mathbf{e}_i, i = 1, \ldots, N_X, are the basis vectors for the design variables, fulfills this uniqueness condition. It is a Koshal design augmented with an additional experiment in the negative direction of the sum of the Koshal experiments. For problems where each constraint only depends on one variable, it is the design which maximizes the minimum distance between the experiments p_m.
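A sketch of constructing the initial design of Eq. (8), assuming independent variables with equivalent standard deviations σ̂_i collected in a vector:

```python
import numpy as np

def initial_design(mu, sigma, beta):
    """Initial experiment design of Eq. (8): the mean point, one Koshal-type
    point mu + beta*sigma_i*e_i per variable, and one extra point in the
    negative direction of the sum of the Koshal steps (N_X + 2 points)."""
    mu = np.asarray(mu, dtype=float)
    sigma = np.asarray(sigma, dtype=float)
    koshal = mu + beta * np.diag(sigma)  # row i: mu + beta*sigma_i*e_i
    extra = mu - beta * sigma            # mu - beta * sum_i sigma_i * e_i
    return np.vstack([mu, koshal, extra])

# Illustrative values: two variables, so N_X + 2 = 4 experiments
D1 = initial_design(mu=[0.0, 0.0], sigma=[1.0, 2.0], beta=3.0)
```

For this two-variable example the design contains the mean point, the two Koshal points (3, 0) and (0, 6), and the augmenting point (−3, −6).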
3.2. Determining the demand
A technique for experiment design based on experiments on demand (EoD) requires a way to determine the demand. In this subsection, the approach taken in this work is outlined. However, some background is needed and will be given first. The approach described here is intended for reliability based design optimization (RBDO). One major difference between traditional deterministic optimization and RBDO is that the design variables are considered stochastic instead of deterministic and that the constraints are stated as the probability of an event rather than the complete prohibition of that event.
Due to this, information about the constraints at the design variables' mean values is not of primary importance in RBDO, even though the design variable means are what can be altered during the optimization. Instead, information about the constraint at the most probable failure point is of highest importance in RBDO. This causes a problem, since this point may differ between constraints. The standard way to deal with this problem is to perform a set of experiments, often only enough to estimate the gradients, at each of the constraints, or at least at the active constraints. However, experiments are expensive, and lowered computational cost is, along with increased stability and accuracy, the driving force for the development of new algorithms. Thus, the approach taken here is to extract as much information as possible out of every conducted experiment before making a new one. This means that experiments are not made in sets (as in DoE) but one at a time (EoD). Where to perform the next experiment is determined by where it is most needed, i.e. where the demand is the highest. Here, the demand in iteration k has been determined using the change in reliability estimate \Delta\beta_j for all the active constraints between iterations k and k − 1. It is stated as
\Delta\beta_j^{(k)} = \beta_j(\boldsymbol{\mu}^{(k)}) - \beta_j(\boldsymbol{\mu}^{(k-1)}), \quad \forall j \in \Omega^{(k)}, \tag{9}

where \Omega^{(k)} is the set of active constraints with cardinality N_{\Omega^{(k)}} and
\beta_j(\boldsymbol{\mu}^{(k)}) = \frac{(\Delta r_j^{(k)} / a_j^{(k)})^{1/\gamma_j^{(k)}} - p_j^{(k)} - \mathbf{n}_j^{(k)T} \boldsymbol{\mu}^{(k)}}{\mathbf{n}_j^{(k)T} \boldsymbol{\sigma}_j^{(k)} \mathbf{n}_j^{(k)}} \, \mathrm{sign}(a_j^{(k)}) \, \mathrm{sign}(\gamma_j^{(k)}), \tag{10}
with \Delta r_j^{(k)} = r_{max,j} - r_{0j}^{(k)}, and analogously for \beta_j(\boldsymbol{\mu}^{(k-1)}). The constraint for which the demand for new experiments is highest is denoted j_d.
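The demand computation of Eqs. (9) and (10) can be sketched as follows. Note two assumptions: the equivalent standard deviation matrix is passed in explicitly, and j_d is selected by the largest absolute change |Δβ_j|, which is one plausible reading of "highest demand":

```python
import numpy as np

def reliability_index(dr, a, gamma, p, n, sigma, mu):
    """Eq. (10): first-order reliability index of constraint j at mean mu.
    sigma is the equivalent standard deviation matrix."""
    num = (dr / a) ** (1.0 / gamma) - p - n @ mu
    den = n @ sigma @ n
    return num / den * np.sign(a) * np.sign(gamma)

def most_demanding(beta_now, beta_prev, active):
    """Eq. (9): pick j_d, the active constraint whose reliability estimate
    changed the most between iterations k-1 and k (assumed: absolute change)."""
    return max(active, key=lambda j: abs(beta_now[j] - beta_prev[j]))

# Illustrative values
b = reliability_index(dr=4.0, a=1.0, gamma=2.0, p=0.0,
                      n=np.array([1.0, 0.0]), sigma=np.eye(2),
                      mu=np.array([0.0, 0.0]))
jd = most_demanding({1: 2.0, 2: 3.0}, {1: 1.9, 2: 2.2}, active=[1, 2])
```

In the illustrative call, b = (4/1)^{1/2} = 2, and constraint 2 is selected since its reliability estimate changed by 0.8 versus 0.1 for constraint 1.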
3.3. D-optimal augmentation
A straightforward way to add experiments to the total experiment design would be to place the next experiment at the MPFP of the constraint for which the demand is the highest. However, there are potential drawbacks with this. There is no guarantee that the experiments are not linearly dependent or even aligned, making estimates of the limit state normal \mathbf{n} poor. Hence, the demand condition has in this work been augmented by a D-optimality criterion. A D-optimal design is the solution to

\max \det \left( D^T D \right). \tag{11}
In Eq. (11), the experiment design matrix D can be expressed in any space: the design variable space \mathbf{x}, the normed normal space \mathbf{u}, or another arbitrarily scaled space \mathbf{v}. The space in which the design variables are described will however be important. The normed normal space offers a natural scaling in RBDO applications. Thus, a scaling to the normed normal space \mathbf{u} is the starting point for the algorithm used to determine the next experiment in this paper.
The experiments are further centered at the MPFP and scaled to unit length as
v m =
u m − u ∗ j
d
u m − u ∗ j
d
, (12)
where m = 1, \ldots, N_E. In RBDO applications, the experiments closest to the MPFP are the most important. Thus, the experiments \mathbf{v}_m are weighted into fictitious experiments \tilde{\mathbf{v}}_m using a radial contraction W_m as

\tilde{\mathbf{v}}_m = \mathbf{v}_m W_m, \tag{13}

where

W_m = w_j \, e^{-\lambda \left\| \mathbf{u}_m - \mathbf{u}_{j_d}^* \right\|}. \tag{14}
The fictitious experiments make up a fictitious experiment design matrix [D] = [\tilde{\mathbf{v}}_m] composed of all the experiments conducted up to the current point.
In Eq. (14), the common factor w_j is chosen so that max(W_m) = 1. The purpose of the weight function is to make the experiments close to the MPFP more influential in the determinant in Eq. (11). The weight function will be further explained in Section 4, where it has been used for another purpose.
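The centering, scaling, and radial contraction of Eqs. (12)-(14) can be sketched as follows (λ is a tuning parameter, the function names are hypothetical, and experiments are assumed not to coincide with the MPFP):

```python
import numpy as np

def weighted_design(U, u_mpfp, lam):
    """Eqs. (12)-(14): center the experiments u_m at the MPFP, scale them to
    unit length (Eq. (12)), and contract them radially with weights
    W_m = w_j * exp(-lam * ||u_m - u*||), where w_j makes max(W_m) = 1."""
    diff = np.asarray(U, dtype=float) - np.asarray(u_mpfp, dtype=float)
    dist = np.linalg.norm(diff, axis=1)   # ||u_m - u*||, assumed nonzero
    V = diff / dist[:, None]              # unit-length directions, Eq. (12)
    W = np.exp(-lam * dist)
    W /= W.max()                          # common factor w_j, max(W_m) = 1
    return V * W[:, None]                 # fictitious experiments, Eq. (13)

def d_criterion(D):
    """D-optimality measure of Eq. (11): det(D^T D)."""
    return np.linalg.det(D.T @ D)

# Illustrative experiments in u-space around an MPFP at the origin
Dt = weighted_design(U=[[1.0, 0.0], [0.0, 2.0], [2.0, 2.0]],
                     u_mpfp=[0.0, 0.0], lam=0.5)
```

The experiment nearest the MPFP keeps unit length (its weight is 1), while more distant ones are contracted toward the origin before the determinant is evaluated.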
In this work, one experiment \mathbf{d}^{(k)} is added in each iteration. The experiment added in iteration k is number m = N_X + k + 1, k > 1. It is selected as one of the experiments in a full factorial design with absolute component values of one, i.e. all the corners of a hypercube with side length 2 centered at the origin. The design matrix D is thus augmented with each of the hypercube corners \mathbf{c}, one at a time. The experiment \mathbf{c}_{opt} which renders the largest value for the determinant
[Figure: experiments plotted in the (u_1, u_2)-plane around the MPFP \mathbf{u}_{j_d}^{*(k)}.]