Reliability based design optimization with experiments on demand

T. Dersjö a,b,∗, M. Olsson a

a Department of Solid Mechanics, Royal Institute of Technology, SE-100 44, Stockholm, Sweden

b Truck Chassis Development, Scania CV AB, SE-151 87, Södertälje, Sweden

Abstract

In this paper, an algorithm for reliability based design optimization (RBDO) is presented. It incorporates a novel procedure in which experiments are performed one at a time where and when they are needed. The procedure is called experiments on demand. The experiment procedure utilizes properties specific to RBDO and the problem at hand, augmented by the concept of D-optimality familiar from traditional design of experiments. Furthermore, an adaptive surrogate model fitting scheme is proposed which balances numerical stability and convergence rate as well as accuracy. Benchmarked against algorithms in the literature, the number of experiments needed for convergence was reduced by up to 80 % for a frequently used analytical problem and by up to 19 % for an application example. The accuracy of the reliability index is in line with the most efficient algorithm against which it was benchmarked but up to 3 % lower than the most accurate algorithm.

Keywords: Experiments on demand, Reliability based design optimization, Surrogate model

1. Introduction

Design of experiments (DoE) is a practice for extraction of a maximum amount of information from a given number of experiments. Specifically, it aims to reduce uncertainty caused by randomness. DoE is an integral part of the response surface methodology (RSM), a methodology for optimization of products and processes, see Myers and Montgomery (2002). A response surface is an approximation of a response obtained through experiments on a poorly understood physical system, i.e. a black-box function. The drive to reduce the number of expensive physics-based computer model evaluations has generated significant interest in DoE and RSM within simulation-driven development of structural components, see Roux et al. (1998), Redhe et al. (2002), and Youn and Choi (2004a). However, it has become apparent that the differences between physical and virtual experimentation are not insignificant, see Simpson et al. (2001).

∗ Corresponding author
Email addresses: tdersjo@kth.se (T. Dersjö), mart@kth.se (M. Olsson)

A major difference between physical and virtual experimentation is that all virtual input can be controlled, whereas this is seldom true for physical experimentation. Hence, responses from virtual experiments on deterministic systems are fully deterministic while physical responses are generally not, even if the physical system is deterministic. Thus, a branch of DoE called design of computer experiments (DoCE), which considers deterministic responses, has emerged, see Sacks et al. (1989), Simpson et al. (2008), and Chen et al. (2006).

In DoCE, the concepts from DoE believed to be valid in computer experiments are further evaluated and additional considerations have been suggested. In Jin et al. (2005), the computational cost required to construct optimal designs was investigated and an efficient algorithm for that purpose was presented.

The use of response surfaces fitted using D-optimal experiment designs in crashworthiness related problems has been studied in Redhe et al. (2002). However, within virtual optimization, there are also differences. In reliability based design optimization (RBDO), the randomness inherent to physical systems is taken into account by considering the design variables as stochastic instead of deterministic. This does not mean that they cannot be controlled in simulations. It does however mean that the response for the mean value is not as important as the response at the so-called most probable failure point (MPFP).

In Youn and Choi (2004a), it is stated that DoCE for deterministic optimization is not appropriate for RBDO applications since it does not produce samples near the MPFP, and an integrated DoCE/RSM method suitable for RBDO is proposed. However, the experiments are still performed in sets.

In this paper, an RBDO algorithm employing a problem-dependent computer experiment procedure is presented: experiments on demand (EoD). It is a one-experiment-at-a-time approach. The justification for the work is that computer experiments on physics-based models can be so expensive that the algorithms presented to date make RBDO industrially unfeasible, e.g. for large scale FE-models. The goal of the proposed algorithm is to reduce the number of computer experiments needed for convergence in RBDO problems. This is achieved by making the most out of the information available before performing another experiment, and by conducting experiments one at a time rather than in sets. Moreover, every new experiment is placed where it is predicted to add the most useful information, i.e. where the demand for new information is the highest. The definition of demand is here made by specifically considering the core of RBDO, which is the most probable failure point (MPFP). Furthermore, if experiments have already been performed in the vicinity of the MPFP, the next experiment is added in a D-optimal way. Although D-optimal designs are developed to deal with randomness and not model bias, a D-optimal design is not very different from a bias-optimal design if applied to a limited region of interest, according to Box and Draper (1971). Thus, the D-optimality augmented experiments on demand procedure utilized in this paper combines aspects of both RBDO and classical DoE into an advantageous scheme that reduces the number of expensive computer model evaluations needed for convergence. The particular choice of D-optimality is not crucial, but distributing the experiments in a space-filling fashion in the design space is.

The outline of this paper is as follows: the RBDO algorithm is presented in Section 2, the EoD procedure is described in Section 3, Section 4 describes the adaptive surrogate model fit, and in Section 5, the algorithm is applied to numerical problems as well as problems from solid mechanics. A discussion of the results is given in Section 6 and conclusions are given in Section 7.

2. Reliability based design optimization

In reliability based design optimization (RBDO) the problem can be stated as

$$
\begin{aligned}
\min_{\boldsymbol{\mu}} \quad & C(\boldsymbol{\mu}) \\
\text{s.t.} \quad & p_{f,j}(\mathbf{X}) \le \alpha_j, \quad j = 1, \dots, N_C \\
& \mu_i^L < \mu_i < \mu_i^U, \quad i = 1, \dots, N_X
\end{aligned}
\tag{1}
$$

Throughout this paper, $\mathbf{X} = [X_1 \dots X_{N_X}]^T$ is the design variable vector and its lowercase counterpart $\mathbf{x}$ means realizations thereof, $\boldsymbol{\mu} = [\mu_1 \dots \mu_{N_X}]^T$ is the design variable mean value vector where $\mu_i = \mathrm{E}[X_i]$, $C$ is the objective function (cost), $p_{f,j}$ is the $j$:th failure probability, $\alpha_j$ is the value of the $j$:th target failure probability, and $\mu_i^L$ and $\mu_i^U$ are the lower and upper bound of design variable $i$, respectively. The probability of failure can be formulated using a failure function $G$ and a limit state $g$ separating the safe and the failure domain. Conventionally, $g = 0$ is used, so that

$$
p_{f,j}(\mathbf{X}) = P\left(G_j(\mathbf{X}) \le 0\right)
= \int_{G_j(\mathbf{x}) \le 0} f_{X_1 \dots X_{N_X}}(\mathbf{x})\, \mathrm{d}x_1 \dots \mathrm{d}x_{N_X},
\tag{2}
$$

where $P(\bullet)$ denotes the probability of the event and $f_{\mathbf{X}}$ is the joint probability distribution function of the design variables. In RBDO, the integral is almost without exception solved using either analytical formulations, such as the first order reliability method (FORM), see Madsen et al. (1986), or sampling based methods such as Monte Carlo simulation (MCS), see Rubinstein (1981). The majority of RBDO formulations use FORM for reliability assessment. FORM is based on an isoprobabilistic transformation of design variables to normed normally distributed variables followed by a linearization of the failure function limit state ($g = 0$). Numerical formulations for the isoprobabilistic transformation have been proposed in HongShuang et al. (2008) and Noh et al. (2009). For non-linear functions, it was shown in Nikolaidis and Burdisso (1988) that the point of linearization is of utmost importance. Two points dominate the literature: the most probable failure point (MPFP) and the minimum performance target point (MPTP). It was shown by Youn and Choi (2004b) that formulations based on the MPTP are less sensitive to the non-linearity of the isoprobabilistic transformation than those based on the MPFP.

Finally, in the formulations, the linearization point is either determined exactly for each iteration in the optimization or gradually converging approximations are used. In both Yang and Gu (2004) and Aoues and Chateauneuf (2010), the so-called single loop-single variable RBDO algorithm, which is based on an approximate MPTP, was found to be most efficient and highly stable. Thus, the approximate MPTP approach presented for normally distributed variables by Chen et al. (1997) and further developed to apply to general distributions in Wang and Kodiyalam (2002) is used in this work.

2.1. Algorithm description

The RBDO algorithm employed in this work is presented in Fig. 1.

The computer experiments are presented in Section 3. The surrogate model used in this work is

$$
\hat{r}(\mathbf{x}) = r_0 + a\left(\mathbf{n}^T \mathbf{x} + p\right)^{\gamma}.
\tag{3}
$$

It is a function with a hyperplane as limit state but with non-linearity in the gradient direction. For a justification of the model and a description of the coefficient fitting scheme, see Section 4.
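As an illustration of Eq. (3), a minimal numerical sketch of the directional surrogate model is given below. The function and variable names are introduced here for illustration only and are not taken from the paper.

```python
import numpy as np

def directional_surrogate(x, n, p, r0, a, gamma):
    """Evaluate r_hat(x) = r0 + a * (n^T x + p)**gamma, i.e. Eq. (3).

    x : (N_X,) design point, n : (N_X,) unit limit-state normal,
    p, r0, a, gamma : scalar coefficients (illustrative names).
    """
    return r0 + a * (n @ x + p) ** gamma

# Minimal usage with illustrative numbers only:
n = np.array([0.6, 0.8])          # unit normal of the limit-state hyperplane
x = np.array([4.0, 5.0])
print(directional_surrogate(x, n, p=1.0, r0=0.0, a=2.0, gamma=0.5))
```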

Based on the assumption that the design variables can be separated into their mean values $\mu_i$ and a function of their normed normally distributed counterparts $u_i$ as

$$
X_i = \mu_i + H_i(u_i),
\tag{4}
$$

the equivalent standard deviation matrix components $\hat{\sigma}_{ii}$ are estimated through

$$
\hat{\sigma}_{ii,j}^{(k,l)} = \left. \frac{\partial H_i}{\partial u_i} \right|_{\mu_i^{(k,l)},\, \mathbf{u}_j^{*(k,l)}},
\tag{5}
$$

Figure 1: Flowchart for the RBDO algorithm employed in this work. The recoverable steps are, in order: set $\boldsymbol\mu^{(1)}$, $\mu_i^L$, $\mu_i^U$, $\beta$, and $f_{X_1 X_2 \dots X_{N_X}}$ (with $k = 1$, $l = 1$); design the computer experiment $\mathbf D_x^{(k)}$; perform the experiments $G_j(\mathbf D_x^{(k)})$; fit $\hat G_j^{(k)}$, $j = 1, 2, \dots, N_C$; compute the $N_C$ equivalent standard deviation matrices $\hat{\boldsymbol\sigma}_j^{(k,l)}$; compute the $N_C$ u-space MPPs $\mathbf u_j^{*(k,l+1)}$; solve for $\boldsymbol\mu^{(k,l+1)}$ from $\min C(\boldsymbol\mu)$ s.t. $\hat G_j^{(k,l)}(\boldsymbol\mu, \mathbf u_j^{(k,l+1)}) \ge 0$; compute the CAP $\mathbf u^{(k,l+1)}$; check convergence in $\boldsymbol\mu$ with respect to $l$ (if not converged, set $l = l + 1$ and repeat the inner loop); check convergence in $\boldsymbol\mu$ with respect to $k$ (if not converged, set $k = k + 1$ and design a new experiment); otherwise the solution has converged.
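The loop structure of Fig. 1 can be summarized in a compact structural sketch. All callables and names below are placeholders for the steps described in Sections 3 and 4; this is not the authors' implementation.

```python
def rbdo_eod(mu0, bounds, beta,
             design_experiment, run_experiment, fit_surrogates,
             equivalent_std, mptp_update, solve_lp, converged):
    """Structural sketch of the double loop in Fig. 1; all callables are placeholders."""
    experiments, responses = [], []
    mu, mu_prev_outer, k = list(mu0), None, 1
    while True:
        new_x = design_experiment(k, mu, experiments)        # Section 3: initial design or EoD
        experiments += new_x
        responses += [run_experiment(x) for x in new_x]
        surrogates = fit_surrogates(experiments, responses)  # Section 4: weighted least squares fit
        l = 1
        while True:                                          # inner loop over l
            sigma_hat = equivalent_std(surrogates, mu)       # Eq. (5)
            u_star = mptp_update(surrogates, sigma_hat, beta)  # Eq. (6)
            mu_new = solve_lp(surrogates, sigma_hat, u_star, bounds)  # Eq. (7)
            done_inner = converged(mu_new, mu)
            mu, l = mu_new, l + 1
            if done_inner:
                break
        if mu_prev_outer is not None and converged(mu, mu_prev_outer):
            return mu, experiments, responses
        mu_prev_outer, k = mu, k + 1
```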


The condition in Eq. (4) holds for a variety of probability distributions such as the Normal, the Lognormal, the Gumbel, the Uniform and the three-parameter Weibull if the variables are independent. If the design variables are normally distributed, then $H_i(u_i) = \sigma_i u_i$ and the MPTP is independent of $l$. For other distributions and dependent variables, algorithms have been proposed by Noh et al. (2009). The important point here is that the approximate MPTP from the previous iteration, $\mathbf u_j^{*(k,l)}$, is used.
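To illustrate Eq. (5), the following sketch evaluates the equivalent standard deviation for a normal variable and for one assumed lognormal parametrization; the names and the lognormal form are assumptions introduced here, not taken from the paper.

```python
import numpy as np

def sigma_hat_normal(sigma_i, u_star_i):
    # Eq. (5) for a normal variable: H_i(u_i) = sigma_i * u_i, so dH/du = sigma_i
    # irrespective of the MPTP component u_star_i.
    return sigma_i

def sigma_hat_lognormal(lam, zeta, u_star_i):
    # Assumed lognormal parametrization X_i = exp(lam + zeta * u_i),
    # i.e. H_i(u_i) = exp(lam + zeta * u_i) - mu_i; dH/du evaluated at u*_i.
    return zeta * np.exp(lam + zeta * u_star_i)

# Example: a lognormal with median exp(lam) = 10, zeta = 0.1, evaluated at u* = -2.
print(sigma_hat_lognormal(np.log(10.0), 0.1, -2.0))
```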

In the RBDO algorithm, $G = r_{max} - r$, where $r$ is the surrogate model presented in Eq. (3), is used to formulate the failure function. Having approximated the equivalent standard deviation matrix $\hat{\boldsymbol\sigma}_j$, approximate MPTPs $\mathbf u_j^{*(k,l+1)}$ are computed as

$$
\mathbf u_j^{*(k,l+1)} = \frac{\hat{\boldsymbol\sigma}_j^{(k,l)}\, \hat{\nabla} \hat r_j^{(k)}}{\left\lVert \hat{\boldsymbol\sigma}_j^{(k,l)}\, \hat{\nabla} \hat r_j^{(k)} \right\rVert}\, \beta_j = \pm\, \hat{\boldsymbol\sigma}_j^{(k,l)}\, \hat{\mathbf n}_j^{(k)}\, \beta_j
\tag{6}
$$

where $\beta_j = \Phi^{-1}(1 - \alpha_j)$ is the reliability index, $\Phi(\bullet)$ is the cumulative probability distribution for a normed normally distributed variable, and $\hat{\mathbf n}_j^{(k)}$ is the $k$:th estimate of the $j$:th limit state normal. It can be noted here that the sign in Eq. (6) depends on the sign of the coefficients $\hat a$ and $\hat\gamma$, i.e. the estimates of $a$ and $\gamma$ in Eq. (3). Finally, the $(k,l)$:th optimization problem can be stated as

$$
\begin{aligned}
\min_{\boldsymbol\mu} \quad & \pm\, \hat{\mathbf n}_c^T \boldsymbol\mu \\
\text{s.t.} \quad & \left(\Delta r_j / \hat a_j\right)^{1/\hat\gamma_j} - p_j - \hat{\mathbf n}_j^T \hat{\boldsymbol\sigma}_j \mathbf u_j \ge \hat{\mathbf n}_j^T \boldsymbol\mu \\
& \mu_i^L < \mu_i < \mu_i^U
\end{aligned}
\tag{7}
$$

where superscripts $k, l$ have been dropped for readability but are clear from Eqs. (5) and (6), $\hat{\mathbf n}_c$ is the normal of the cost function limit state, and $\Delta r_j = r_{max,j} - r_j^0$. The sign in Eq. (7) is determined by the signs of the coefficients $\hat a_c$ and $\hat\gamma_c$. As can be noted in Eq. (7), the surrogate model employed in this work facilitates the solution of the RBDO problem through a series of linear optimization problems. Thus, linear programming algorithms, such as the simplex method, can be used to solve the optimization problem once the coefficients have been estimated.
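Since the subproblem in Eq. (7) is linear in µ once the coefficients are estimated, it can be handed to a standard LP solver. The sketch below uses scipy.optimize.linprog as one possible choice; the data layout and names are assumptions and the sign handling of Eq. (7) is only indicated.

```python
import numpy as np
from scipy.optimize import linprog

def solve_subproblem(n_c, n_j, a_j, gamma_j, p_j, r_max_j, r0_j,
                     sigma_j, u_j, mu_lo, mu_up):
    """Solve one instance of Eq. (7): min +/- n_c^T mu subject to linear constraints.

    n_c     : (N_X,) cost-function limit-state normal
    n_j     : (N_C, N_X) constraint limit-state normals, one row per constraint
    sigma_j : (N_C, N_X, N_X) equivalent standard deviation matrices
    u_j     : (N_C, N_X) approximate MPTPs in u-space
    """
    # Right-hand sides: (dr_j / a_j)^(1/gamma_j) - p_j - n_j^T sigma_j u_j
    rhs = np.array([
        ((r_max_j[j] - r0_j[j]) / a_j[j]) ** (1.0 / gamma_j[j])
        - p_j[j] - n_j[j] @ sigma_j[j] @ u_j[j]
        for j in range(len(a_j))
    ])
    res = linprog(c=n_c,                       # the sign flips with sign(a_c), sign(gamma_c)
                  A_ub=n_j, b_ub=rhs,          # n_j^T mu <= rhs_j
                  bounds=list(zip(mu_lo, mu_up)),
                  method="highs")
    return res.x
```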

3. Experiment scheme

3.1. First iteration

In the first iteration, it is assumed that prior information specific to the problem at hand is not known. Therefore, a design which requires the least number of experiments needed to fit the surrogate model is used. For the directional surrogate model, the minimum number of experiments needed is $N_X + 2$. Also, the experiments $\mathbf x_m$ need to be placed so that the projected design variable experiment vector $[\mathbf p] = [p_{m,j} = \mathbf n_j^T \mathbf x_m]$ has at least $N_X + 1$ unique entries for all constraints $j = 1, \dots, N_C$. The following experiment design

$$
\mathbf D^{(1)} =
\begin{bmatrix}
\boldsymbol\mu^{(1)\,T} \\
\left[\boldsymbol\mu^{(1)} + \beta\, \hat\sigma_1^{(1)} \mathbf e_1\right]^T \\
\vdots \\
\left[\boldsymbol\mu^{(1)} + \beta\, \hat\sigma_{N_X}^{(1)} \mathbf e_{N_X}\right]^T \\
\left[\boldsymbol\mu^{(1)} - \beta \sum_i \hat\sigma_i^{(1)} \mathbf e_i\right]^T
\end{bmatrix},
\tag{8}
$$

where $\mathbf e_i$, $i = 1, \dots, N_X$, are the basis vectors for the design variables, fulfills this uniqueness condition. It is a Koshal design augmented with an additional experiment in the negative direction of the sum of the Koshal experiments. For problems where each constraint only depends on one variable, it is the design which maximizes the minimum distance between experiments $p_m$.
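A compact sketch of how the initial design of Eq. (8) can be assembled is given below; all names are assumptions.

```python
import numpy as np

def initial_design(mu, sigma_hat, beta):
    """Koshal design of Eq. (8) augmented with one extra experiment in the
    negative direction of the sum of the Koshal steps: N_X + 2 rows in total."""
    mu = np.asarray(mu, dtype=float)
    sigma_hat = np.asarray(sigma_hat, dtype=float)   # equivalent std devs, shape (N_X,)
    n_x = mu.size
    rows = [mu.copy()]
    for i in range(n_x):
        step = np.zeros(n_x)
        step[i] = beta * sigma_hat[i]                # mu + beta * sigma_i * e_i
        rows.append(mu + step)
    rows.append(mu - beta * sigma_hat)               # mu - beta * sum_i sigma_i * e_i
    return np.vstack(rows)

# Example: two design variables, std devs 0.5, and beta = 3.
print(initial_design([5.0, 5.0], [0.5, 0.5], beta=3.0))
```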

3.2. Determining the demand

A technique for experiment design based on experiments on demand (EoD) requires a way to determine the demand. In this subsection, the approach taken in this work is outlined. However, some background is needed and will be given first. The approach described here is intended for reliability based design optimization (RBDO). One major difference between traditional deterministic optimization and RBDO is that design variables are considered stochastic instead of deterministic and that the constraints are stated as the probability of an event rather than the complete prohibition of that event. Due to this, information about the constraints at the design variables' mean values is not of primary importance in RBDO, even though the design variable means are what can be altered during the optimization.

Instead, information about the constraint at the most probable failure point is of highest importance in RBDO. This causes a problem since this point may differ between constraints. The standard way to deal with this problem is to perform a set of experiments, often only enough to estimate the gradients, at each of the constraints, or at least at the active constraints. However, experiments are expensive, and lowered computational cost is, along with increased stability and accuracy, the driving force for development of new algorithms. Thus, the approach taken here is to extract as much information as possible out of every conducted experiment before making a new one. This means that experiments are not made in sets (as in DoE) but one at a time (EoD). Where to perform the next experiment is determined by where it is most needed, i.e. where the demand is the highest. Here, the demand in iteration $k$ has been determined using the change in reliability estimate $\Delta\beta_j$ for all the active constraints between iterations $k$ and $k-1$. It is stated as

$$
\Delta\beta_j^{(k)} = \beta_j(\boldsymbol\mu^{(k)}) - \beta_j(\boldsymbol\mu^{(k-1)}), \quad \forall j \in \Omega^{(k)},
\tag{9}
$$

where $\Omega^{(k)}$ is the set of active constraints with cardinality $N^{(k)}$ and

$$
\beta_j(\boldsymbol\mu^{(k)}) = \frac{\left(\Delta r_j^{(k)} / a_j^{(k)}\right)^{1/\gamma_j^{(k)}} - p_j^{(k)} - \mathbf n_j^{(k)\,T} \boldsymbol\mu^{(k)}}{\mathbf n_j^{(k)\,T} \boldsymbol\sigma_j^{(k)} \mathbf n_j^{(k)}}\, \operatorname{sign}\!\left(a_j^{(k)}\right) \operatorname{sign}\!\left(\gamma_j^{(k)}\right),
\tag{10}
$$

with $\Delta r_j^{(k)} = r_{max,j} - r_j^{0(k)}$ and analogously for $\beta_j(\boldsymbol\mu^{(k-1)})$. The constraint for which the demand for new experiments is highest is denoted $j_d$.
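The demand measure of Eqs. (9) and (10) can be evaluated directly from the current surrogate coefficients. The sketch below is one possible reading: in particular, the denominator of Eq. (10) follows the reconstruction above, and ranking the active constraints by the absolute change is an assumption.

```python
import numpy as np

def beta_estimate(mu, n, sigma, a, gamma, p, r_max, r0):
    """Surrogate-based reliability index of Eq. (10) for one constraint.
    Assumes (r_max - r0)/a > 0 so the fractional power is defined."""
    dr = r_max - r0
    num = (dr / a) ** (1.0 / gamma) - p - n @ mu
    den = n @ sigma @ n          # denominator as reconstructed above (assumption)
    return num / den * np.sign(a) * np.sign(gamma)

def highest_demand(active, mu_k, mu_km1, coeffs):
    """Return j_d, the active constraint with the largest |delta beta_j|, Eq. (9)."""
    deltas = {}
    for j in active:
        c = coeffs[j]            # dict of the j-th surrogate coefficients (assumed layout)
        deltas[j] = abs(beta_estimate(mu_k, **c) - beta_estimate(mu_km1, **c))
    return max(deltas, key=deltas.get)
```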

3.3. D-optimal augmentation

A straightforward way to add experiments to the total experiment design would be to place the next experiment at the MPFP of the constraint for which the demand is the highest. However, there are potential drawbacks with this. There is no guarantee that the experiments are not linearly dependent or even aligned, making estimates of the limit state normal $\mathbf n$ poor. Hence, the demand condition has in this work been augmented by a D-optimality criterion. A D-optimal design is the solution to

$$
\max \det\left(\mathbf D^T \mathbf D\right).
\tag{11}
$$

In Eq. (11), the experiment design matrix $\mathbf D$ can be expressed in any space: the design variable space $\mathbf x$, the normed normal space $\mathbf u$, or another arbitrarily scaled space $\mathbf v$. The space in which the design variables are described will however be important. The normed normal space offers a natural scaling in RBDO applications. Thus, a scaling to normed normal space $\mathbf u$ is the starting point for the algorithm used to determine the next experiment in this paper.

The experiments are further centered at the MPFP and scaled to unit length as

$$
\mathbf v_m = \frac{\mathbf u_m - \mathbf u_{j_d}}{\left\lVert \mathbf u_m - \mathbf u_{j_d} \right\rVert},
\tag{12}
$$

where $m = 1, \dots, N_E$. In RBDO applications, the experiments closest to the MPFP are most important. Thus, the experiments $\mathbf v_m$ are weighted into fictitious experiments $\tilde{\mathbf v}_m$ using a radial contraction $W_m$ as

$$
\tilde{\mathbf v}_m = \mathbf v_m W_m,
\tag{13}
$$

where

$$
W_m = w_j\, e^{-\lambda \left\lVert \mathbf u_m - \mathbf u_{j_d} \right\rVert}.
\tag{14}
$$

The fictitious experiments make up a fictitious experiment design matrix $[\mathbf D] = [\tilde{\mathbf v}_m]$ composed of all the experiments conducted up to the current point. In Eq. (14), the common factor $w_j$ is chosen so that $\max(W_m) = 1$. The purpose of the weight function is to make those experiments close to the MPFP more influential in the determinant in Eq. (11). The weight function will be further explained in Section 4, where it has been used for another purpose.

In this work, one experiment $\mathbf d^{(k)}$ is added in each iteration. The experiment added in iteration $k$ is number $m = N_X + k + 1$, $k > 1$. It is selected as one of the experiments in a full factorial design with absolute component values of one, i.e. all the corners of a hypercube with side length 2 centered at the origin. The design matrix $\mathbf D$ is thus augmented with each of the hypercube corners $\mathbf c$ one at a time. The experiment $\mathbf c_{opt}$ which renders the largest value for the determinant

$$
\det\left(\mathbf D(\tilde{\mathbf v}_m, \mathbf c)^T \mathbf D(\tilde{\mathbf v}_m, \mathbf c)\right)
\tag{15}
$$

is chosen as the added experiment. The procedure will place the experiments at a distance $\sqrt{N_X}$ away from the center (the MPFP in RBDO) with components $\pm 1$. However, experiments should preferably be placed closer to the MPFP in RBDO applications. Hence, the solution obtained from Eq. (15) is scaled using a maximum component length $\Delta u_{max}$ in the procedure employed here. The experiment that is added in iteration $k$, described in u-space as $\mathbf u^{(k)}$, is formed using the solution $\mathbf c_{opt}^{(k)}$ to Eq. (15) as

$$
\mathbf u^{(k)} = \mathbf u_{j_d}^{*(k)} + \mathbf c_{opt}^{(k)}\, \Delta u_{max}.
\tag{16}
$$

In design variable space, the added experiment is

$$
\mathbf x^{(k)} = \boldsymbol\mu^{(k)} + \boldsymbol\sigma_{j_d}^{(k)} \left(\mathbf u_{j_d}^{*(k)} + \mathbf c_{opt}^{(k)}\, \Delta u_{max}\right).
\tag{17}
$$

Figure 2: Illustration of the experiments on demand approach and the measures involved. Experiments are represented with dots: the initial experiments are shown in black, the intermediate ones in gray, and the new experiment in white.
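The D-optimal augmentation of Eqs. (12)-(16) amounts to a small combinatorial search over the hypercube corners. A hedged sketch follows; the names and the small-distance guard are assumptions.

```python
import numpy as np
from itertools import product

def next_experiment(U, u_mpfp, decay, du_max):
    """Pick the next experiment by D-optimal augmentation, Eqs. (12)-(16).

    U      : (N_E, N_X) experiments performed so far, in u-space
    u_mpfp : (N_X,) MPFP of the highest-demand constraint j_d, in u-space
    decay  : decay factor lambda of the radial contraction in Eq. (14)
    du_max : maximum component length used to scale the chosen corner
    """
    diff = U - u_mpfp
    dist = np.maximum(np.linalg.norm(diff, axis=1), 1e-12)   # guard against zero distance
    V = diff / dist[:, None]                  # Eq. (12): centre at the MPFP, unit length
    W = np.exp(-decay * dist)
    W /= W.max()                              # Eq. (14): w_j chosen so that max(W_m) = 1
    V_tilde = V * W[:, None]                  # Eq. (13): fictitious experiments

    n_x = U.shape[1]
    best_corner, best_det = None, -np.inf
    for corner in product([-1.0, 1.0], repeat=n_x):   # hypercube corners, |c_i| = 1
        D = np.vstack([V_tilde, corner])
        det = np.linalg.det(D.T @ D)          # Eq. (15)
        if det > best_det:
            best_det, best_corner = det, np.asarray(corner)

    return u_mpfp + best_corner * du_max      # Eq. (16)
```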

4. Surrogate model fit

When experiments have been performed, a surrogate is fitted to the responses. Every surrogate model includes a number of independent coefficients that need to be fitted to the data, and thus requires a minimum number of responses. If, as in the algorithm used in this work, all experiments are used, it is likely that more data than needed is available. A decision can then be made to attribute higher weight to some data than to others. In RBDO applications, the apparent choice is to attribute higher weight to those experiments that are close to the MPFP. For these reasons, a two-step weighted least squares method has been used to estimate the surrogate model coefficients in this work. The surrogate model is stated in Eq. (3) and the set of unknown coefficients is $\mathcal A_j = \{\mathbf n_j, a_j, \gamma_j, r_j^0\}$, $j = 1, \dots, N_C$, where $\lVert \mathbf n_j \rVert = 1$ and the iteration number $k$ has been left out for readability.

In the first step, a linear weighted least squares problem is solved analytically. The surrogate model limit state normal estimate $\hat{\mathbf n}_j$ is then taken as the (signed) normalized hyperplane normal, where the sign is set so that $\hat{\mathbf n}_j^T \mathbf u_j > 0$. The limit state normal estimate $\hat{\mathbf n}_j$ is then used as input to the second least squares problem, which is solved numerically to obtain the remaining coefficient estimates $\hat a_j$, $\hat\gamma_j$, and $\hat r_j^0$. The weight function matrix $\mathbf W_j$, which is used in both steps, is a diagonal matrix with components

$$
W_{mm} = w_j\, e^{-\lambda \left\lVert \mathbf u_m - \mathbf u_j \right\rVert},
\tag{18}
$$

where $m = 1, \dots, N_E$ indicates the experiment and $N_E$ is the total number of experiments performed, and the common factor $w_j$ is chosen for normalization so that the largest weight is 1 and the remaining weights are smaller. In Eq. (18), $\mathbf u_j$ is the MPFP approximated using the surrogate model from the prior iteration.

Index $k$ has again been left out for readability. In Eq. (18), a decay factor $\lambda$ is introduced. In Dersjö and Olsson (2012), a study was presented in which a larger $\lambda$ rendered faster convergence and more accurate results. Too large a decay factor $\lambda$ will, on the other hand, render an ill-conditioned weight matrix. One way to deal with this is to have an adaptive decay factor. It is natural to use the distance from the $j$:th MPFP to experiment no. $m$, $\Delta u_{m,j}$. The ratio $\kappa$ between the weights for the furthest experiment that is required to fit the model parameters ($m = N_X + 2$) and the closest one ($m = 1$) is stated as

$$
\kappa = \frac{e^{-\lambda_j \left\lvert \Delta u_{N_X+2,\,j} \right\rvert}}{e^{-\lambda_j \left\lvert \Delta u_{1,\,j} \right\rvert}}.
\tag{19}
$$

For a particular $\kappa$, the decay factor is found as

$$
\lambda_j = \ln(\kappa) \,/\, \left( \left\lvert \Delta u_{1,\,j} \right\rvert - \left\lvert \Delta u_{N_X+2,\,j} \right\rvert \right).
\tag{20}
$$

In this work, different values of $\kappa$ have been used. A reasonable choice is 0.1, i.e. the weight is 10 times larger for the closest relative to the furthest of the needed experiments.
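A sketch of the adaptive decay factor of Eqs. (19)-(20) and the resulting diagonal weight matrix of Eq. (18) is given below. Sorting the distances to identify the closest and the furthest needed experiments is an assumption about the bookkeeping, not a statement of the authors' implementation.

```python
import numpy as np

def weight_matrix(U, u_mpfp, n_x, kappa=0.1):
    """Diagonal weights of Eq. (18) with the adaptive decay factor of Eq. (20).

    Requires at least N_X + 2 experiments in U, the minimum needed for the fit.
    """
    dist = np.linalg.norm(U - u_mpfp, axis=1)          # |delta u_{m,j}| for every experiment
    d_sorted = np.sort(dist)
    d_close, d_far = d_sorted[0], d_sorted[n_x + 1]    # closest and furthest needed experiments
    lam = np.log(kappa) / (d_close - d_far)            # Eq. (20); positive since kappa < 1
    w = np.exp(-lam * dist)
    w /= w.max()                                       # common factor w_j: largest weight = 1
    return np.diag(w), lam
```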

5. Examples

The algorithm outlined in this paper has been applied to two frequently used benchmark problems. A target reliability of $\beta = 3 \Rightarrow 1 - \alpha = 0.9987$ has been used for both examples. The RoC (i.e. the move limit) reduction scheme of Stander and Craig (2002) has been applied to the inner loop, whereas the outer loop uses an RoC which is a fixed ratio of the design domain. The first outer loop RoC is 1/10 of the global design domain and the first inner loop RoC is 1/10 of its respective outer loop. Furthermore, the mean values are not changed during the first iteration; it is used to determine the MPFP only. The convergence criteria used are those of Aoues and Chateauneuf (2010) and the threshold is $10^{-3}$. For problems with multiple limit states, a check of all active constraints is performed when convergence is first achieved. If all the active constraints are not satisfied, the optimization continues.

5.1. Short column design

A common application benchmark example is the design of a beam. Various problem formulations have been used. Here, the problem formulation from Aoues and Chateauneuf (2010) is used with minor modifications. A short column with rectangular cross-section of dimensions $b$ and $h$ is subjected to a normal force $F$ and two perpendicular bending moments $M_1$ and $M_2$. The objective is to minimize the cross-section area of the beam. The problem is stated as

$$
\begin{aligned}
\min_{\boldsymbol\mu} \quad & C = \mu_b\, \mu_h \\
\text{s.t.} \quad & P\left(G(\mathbf X) \ge 0\right) \le \alpha \\
& 2 < \mu_i < 10,
\end{aligned}
\tag{21}
$$

where $\mu_i$ is in mm. The limit state function expresses the condition of plastic collapse of the cross section. It is formulated with respect to an elastic ideally plastic constitutive law and takes the form

$$
G(\mathbf X) = 1 - \frac{4 M_1}{b h^2 f_y} - \frac{4 M_2}{b^2 h f_y} - \frac{F^2}{\left(b h f_y\right)^2}.
\tag{22}
$$

The normally distributed design variables are the mean cross-section breadth $b$ and height $h$. The moments $M_1$ and $M_2$, the force $F$, and the yield stress $f_y$ are all normally distributed random parameters, i.e. their mean values and standard deviations are fixed. Values for the design variables and parameters are presented in Table 1. The modifications made to the problem are the scale and the upper and lower bounds. The problem has been solved using the original scaling as well, and the results were in line with the results presented here. The upper and lower bounds in Aoues and Chateauneuf (2010) are prescribed using a ratio. This does not change the outcome of the optimization presented here since neither bound is active in any iteration.

Table 1: Values for the design variables and parameters for the short column design problem.

         | b/mm | h/mm | f_y/MPa | M_1/Nmm | M_2/Nmm | F/N
  E[•]   | µ_b  | µ_h  | 40      | 125     | 250     | 250
  V[•]   | 0.5  | 0.5  | 4       | 37.5    | 75      | 50
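As a hedged illustration of how β_MC in Table 2 can be obtained, the sketch below performs a crude Monte Carlo check of Eq. (22) with the values of Table 1 as reconstructed here. The failure convention g ≤ 0 follows Eq. (2), and the assignment of the moment columns follows the table above; both are assumptions where the excerpt is ambiguous.

```python
import numpy as np
from scipy.stats import norm

def beta_mc_short_column(mu_b, mu_h, n_samples=10**6, seed=0):
    """Crude Monte Carlo estimate of the reliability index for Eq. (22),
    using the means and standard deviations of Table 1 as reconstructed here."""
    rng = np.random.default_rng(seed)
    b  = rng.normal(mu_b, 0.5,   n_samples)
    h  = rng.normal(mu_h, 0.5,   n_samples)
    fy = rng.normal(40.0, 4.0,   n_samples)
    m1 = rng.normal(125.0, 37.5, n_samples)
    m2 = rng.normal(250.0, 75.0, n_samples)
    f  = rng.normal(250.0, 50.0, n_samples)
    g = 1.0 - 4*m1/(b*h**2*fy) - 4*m2/(b**2*h*fy) - f**2/((b*h*fy)**2)
    p_fail = np.mean(g <= 0.0)      # failure taken as g <= 0, the convention of Eq. (2)
    return -norm.ppf(p_fail)

# For example, the converged design of Table 2:
# print(beta_mc_short_column(4.387, 5.540))
```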

In the RBDO algorithm described in this paper, the parameters $\kappa$ and $\Delta u_{max}$ need to be set by the user. Values for these parameters in the ranges $0.1 \le \kappa \le 0.2$ and $0 \le \Delta u_{max} \le 0.2$ have been evaluated in this paper, in 2 and 3 steps, respectively. The minimum and maximum number of function evaluations needed for convergence was 29 and 73, respectively. There was a trend in the efficiency as a function of $\kappa$: the efficiency increased for smaller values of $\kappa$. For the value $\kappa = 0.1$, no more than 43 experiments were needed for convergence regardless of $\Delta u_{max}$, and the average was 36. The effect of $\Delta u_{max}$ was not as distinct, although the lowest number of experiments needed for convergence was achieved for $\Delta u_{max} = 0.1$ for both values of $\kappa$. Parameter values $\kappa = 0.1$ and $\Delta u_{max} = 0.1$ were used to render the results reported in Table 2. In Table 2, $\beta_{MC}$ is the reliability index computed using the true constraint and $10^6$ Monte Carlo samples, CM denotes the computational method used, Dir SM/EoD denotes the directional surrogate model and experiments on demand algorithm presented in this work, and SLA and PMA denote the single loop algorithm and the performance measure approach, which were used in Aoues and Chateauneuf (2010) to compute the values cited on the corresponding lines here.

Table 2: RBDO results for the short column design example.

  State (CM)              | µ_b   | µ_h   | C     | β_MC  | No of exps
  Start (-)               | 5     | 5     | 10    | 3.00  | -
  Converged (Dir SM/EoD)  | 4.387 | 5.540 | 24.30 | 2.932 | 29
  Converged (SLA)         | -     | -     | 23.35 | 2.925 | 36
  Converged (PMA)         | -     | -     | 23.72 | 2.999 | 570

The convergence is at best about 19 % faster than the most efficient algorithm reported in Aoues and Chateauneuf (2010), measured in the number of experiments needed for convergence. On average, it is equally efficient as the SLA algorithm for $\kappa = 0.1$. The accuracy is comparable to the SLA algorithm but significantly lower than the PMA algorithm. The converged solutions are very consistent. The error in reliability index can partly be explained by the convexity of the limit state. It is not clear from Aoues and Chateauneuf (2010) if this has been accounted for in the results therein. The converged solution mean value, the limit states, and the experiments leading up to it are displayed in Fig. 3.

The constraint in Eq. (22) is very close to a function of the form $G_{ex} = 1 - b_0 h_0/(bh)$. For a function such as $G_{ex}$ and an objective function $C = bh$, both the constraint and the objective are constant for the set of variable combinations which satisfy $b \propto 1/h$. Thus, there is a significant risk that a sequential linear programming optimizer will fluctuate around the solution if a constant move limit is used. This has been experienced with the algorithm presented here as well.

Figure 3: Plot of the limit state $G = 0$ (solid line), the surrogate model limit state $\hat G = 0$ (dashed line), the converged solution $\boldsymbol\mu$ (x marker), and the experiments (+ markers) obtained with $\kappa = 0.1$ and $\Delta u_{max} = 0.1$. The total number of experiments is $N_E = 29$.

The relatively fast convergence for the example may be somewhat surprising since the limit state function in this example is fairly different from the surrogate model. In particular, the true limit state function is, given all other variables fixed, linear in the load-related parameters $F$, $M_1$, and $M_2$ and inversely proportional to the yield stress parameter $f_y$ and the design variables $b$ and $h$. In the surrogate model fit approach taken here, the exponent $\gamma < 0$ in all iterations. However, since the mean values of the parameters are fixed, their MPFP components $x_i$ do not change much as soon as the first u-space MPFP $u_i$ has been computed. The surrogate model is thus able to approximate the true constraint (as a function of the design variables given the MPFP components of the stochastic parameters) in an accurate manner. It is believed that this is representative for applied design problems, where the design variables are typically related to the geometry and the stochastic parameters are related to boundary conditions, material properties and loads.

5.2. Mathematical RBDO benchmark problem

A frequently referenced benchmark problem for RBDO was presented in Youn and Choi (2004b). The problem is stated as

$$
\begin{aligned}
\min_{\boldsymbol\mu} \quad & C = \mu_1 + \mu_2 \\
\text{s.t.} \quad & P\left(G_j(\mathbf X) \ge 0\right) \le \alpha \\
& 0 < \mu_i < 10
\end{aligned}
\tag{23}
$$

where

$$
\begin{aligned}
G_1(\mathbf X) &= X_1^2 X_2 / 20 - 1 \\
G_2(\mathbf X) &= (X_1 + X_2 - 5)^2 / 30 + (X_1 - X_2 - 12)^2 / 120 - 1 \\
G_3(\mathbf X) &= 80 / (X_1^2 + 8 X_2 + 5) - 1
\end{aligned}
\tag{24}
$$

The optimization results are summarized in Table 3. The same denominations as in Table 2 have been used.

Table 3: RBDO results for the mathematical benchmark example.

  State (CM)              | µ_1   | µ_2   | C     | β_MC   | No of exps
  Start (-)               | 5     | 5     | 10    | 2.499  | -
  Converged (Dir SM/CAP)  | 3.444 | 3.264 | 6.709 | 2.9601 | 32
  Converged (Dir SM/EoD)  | 3.440 | 3.287 | 6.726 | 2.9686 | 18
  Converged (SLA)         | -     | -     | 6.757 | 2.9998 | 90
  Converged (PMA)         | -     | -     | 6.725 | 2.9970 | 540

In Table 3, the results row denoted Dir SM/CAP contains the results presented in Dersjö and Olsson (2012). The directional surrogate model was then used with a single constraint approximation point (CAP) for the constraints. It is the starting point from which the algorithm presented in this work has been developed. As for the previous example, a study of the influence of the parameters $\kappa$ and $\Delta u_{max}$ has been performed. The results were fairly consistent for the ranges $0.1 \le \kappa \le 0.2$ and $0 \le \Delta u_{max} \le 0.2$. The minimum number of function evaluations needed for convergence was 18 and the maximum was 23. The converged mean values differed only in the third decimal between the solutions, making the reliability estimates highly consistent. The nominal values were again chosen as $\kappa = 0.1$ and $\Delta u_{max} = 0.1$, and the results in Table 3 are based on these. Compared to the algorithm in Dersjö and Olsson (2012), the algorithm is up to 43 % more efficient and slightly more accurate. The efficiency is improved by 80 % compared to the best algorithm in Aoues and Chateauneuf (2010). However, the accuracy is reduced by approximately 1 %, measured in reliability index. This is somewhat surprising since the objective function is practically identical. Thus, a Monte Carlo simulation with $10^7$ samples was performed for the solution reported in Youn and Choi (2004b), that is $\boldsymbol\mu = [3.441, 3.290] \Rightarrow C = 6.731$. It was found that the reliability index is $\beta = 2.9707$. It is unclear to the authors how the reliability index in Aoues and Chateauneuf (2010) was determined.
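A sketch of such a Monte Carlo check for the reported solution is given below. The standard deviations of X1 and X2 are not restated in this excerpt, so the value σ = 0.3 commonly used for this benchmark is an assumption, and the code returns per-constraint indices since it is not stated whether a component or a system index is reported.

```python
import numpy as np
from scipy.stats import norm

def beta_mc_benchmark(mu, sigma=0.3, n_samples=10**6, seed=0):
    """Per-constraint Monte Carlo reliability indices for Eq. (24).

    sigma = 0.3 is an assumed value; the paper used 10**7 samples, reduced
    here to keep memory use modest. Inactive constraints may return inf."""
    rng = np.random.default_rng(seed)
    x1 = rng.normal(mu[0], sigma, n_samples)
    x2 = rng.normal(mu[1], sigma, n_samples)
    g = [
        x1**2 * x2 / 20.0 - 1.0,
        (x1 + x2 - 5.0)**2 / 30.0 + (x1 - x2 - 12.0)**2 / 120.0 - 1.0,
        80.0 / (x1**2 + 8.0 * x2 + 5.0) - 1.0,
    ]
    return [-norm.ppf(np.mean(gj <= 0.0)) for gj in g]

# print(beta_mc_benchmark([3.441, 3.290]))   # solution reported in Youn and Choi (2004b)
```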

For these values of $\kappa$ and $\Delta u_{max}$, the extra constraint satisfaction check proved unnecessary. However, for other values of $\kappa$ and $\Delta u_{max}$, it was revealed that not all active constraints were satisfied. The effect of the constraint check is that the convergence rate is decreased, but the accuracy is significantly improved in some cases.

The converged solution mean value, the limit states, and the experiments leading up to it are displayed in Fig. 4.

Figure 4: Plot of the limit states $G_j = 0$ (solid lines), the surrogate model limit states $\hat G_j = 0$ (dashed lines), the converged solution $\boldsymbol\mu$ (x marker), and the experiments (+ markers) obtained with $\kappa = 0.1$ and $\Delta u_{max} = 0.1$. The total number of experiments is $N_E = 18$.

6. Discussion

The algorithm for RBDO presented in this work exhibits both efficiency and accuracy. It is thus very promising for use in industrial applications with large-scale computations. There are however potential pitfalls with the approach, and they are discussed in this section.


The surrogate model used in this paper is non-linear only in the gradient direction. Thus, all partial second derivatives of the surrogate model with respect to the design variables are completely determined by the gradient and the sign of the exponent $\gamma$. The functional form of the surrogate model is believed to be representative for engineering applications where the design variables are related to geometric properties of the design while loads, material data, and boundary conditions are considered stochastic parameters, i.e. their means are fixed. However, when this is not the case, slower convergence is likely.

A D-optimal experiment design gives the most accurate estimate of $\mathbf n$ if the response is a linear function with normally distributed noise. It has been chosen here for just that reason: to obtain an accurate estimate, $\hat{\mathbf n}$, of the limit state normal. However, it is not necessarily optimal for obtaining accurate estimates of the other surrogate model coefficients $r_j^0$, $a_j$, $\gamma_j$. This is because a D-optimal design does not ensure that the experiments are well distributed along the p-direction in design space. Also, there is still a risk that the gradient estimate is poor if fewer than $N_X + 1$ linearly independent experiments have been performed in the vicinity of the MPFP.

Moreover, there is a risk that a large number of experiments is performed at what is believed to be the MPFP of one of the constraints. Because the experiments are performed at the MPFP for that constraint, the change in the surrogate model is likely to be the highest there, and thus so is the demand according to the definition used here. However, when other constraints are eventually checked, it may be revealed that the surrogate model estimates are far from accurate at the MPFP, and thus large changes in the mean values are enforced. This would render the experiments performed at the first high-demand constraint less valuable. This may be viewed as a type of sub-optimization. The fact that all experiments are utilized in the surrogate model fit mitigates this effect to some extent, and this pitfall has not been encountered during the work on this paper. However, the extent of this risk should be evaluated in future work.

7. Conclusions

In this paper, an RBDO algorithm based on a surrogate model fitted by experiments on demand (EoD) has been presented. The experiment design has further been augmented by a D-optimality criterion for improved robustness. Also, the surrogate model is fitted through a weighted least squares procedure in which the weights are adaptive and modified based on the amount of information readily available. The goal is to decrease the number of experiments needed for convergence, which is important in applications where they are computationally costly. Only then can RBDO become a common practice in simulation-based development of structural components. The results are promising. The number of experiments needed for convergence for an application example is up to 19 % lower than for the least expensive algorithm presented in Aoues and Chateauneuf (2010). The accuracy is on the same order, but it is up to 3 % lower than for other reported algorithms. Part of this is due to the use of FORM and part of it is due to errors related to the algorithm in this paper. For a mathematical example, 43 % fewer experiments were needed for convergence compared to the results in Dersjö and Olsson (2012) and 80 % fewer compared to the results reported in Aoues and Chateauneuf (2010). The accuracy, measured by deviation from the target reliability index, is on the order of per mille (not counting the FORM error, which is about 1 % for the highly non-linear problem used).

Acknowledgements

This research was financially supported by Scania CV AB and the Swedish Governmental Agency for Innovation Systems (VINNOVA). The support is gratefully acknowledged.

References

Aoues, Y., Chateauneuf, A., 2010. Benchmark study of numerical methods for reliability-based design optimization. Structural and Multidisciplinary Optimization 41, 277–294.

Box, G. E. P., Draper, N. R., 1971. Factorial designs, the |X′X| criterion, and some related matters. Technometrics 13, 731–742.

Chen, V. C. P., Tsui, K.-L., Barton, R. R., Meckesheimer, M., 2006. A review on design, modeling and applications of computer experiments. IIE Transactions 38, 273–291.

Chen, X., Hasselman, T. K., Neill, D. J., 1997. Reliability based structural design optimization for practical applications. In: Proc. 38th AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conference. AIAA-97-1403. pp. 2724–2732.

Dersjö, T., Olsson, M., 2012. A directional surrogate model tailored for efficient reliability based design optimization. Technical report HLF 2012:518, Department of Solid Mechanics, Royal Institute of Technology, SE-100 44 Stockholm, Sweden.

HongShuang, L., ZhenZhou, L., XiuKai, Y., 2008. Nataf transformation based point estimate method. Chinese Science Bulletin 53(17), 2586–2592.

Jin, R., Chen, W., Sudjianto, A., 2005. An efficient algorithm for constructing optimal design of computer experiments. Journal of Statistical Planning and Inference 134, 268–287.

Madsen, H. O., Krenk, S., Lind, N. C., 1986. Methods of Structural Safety. Prentice-Hall, Englewood Cliffs.

Myers, R. H., Montgomery, D. C., 2002. Response Surface Methodology - Process and Product Optimization Using Designed Experiments, 2nd Edition. John Wiley & Sons Inc., NY, NY.

Nikolaidis, E., Burdisso, R., 1988. Reliability based optimization: A safety index approach. Computers and Structures 28(6), 781–788.

Noh, Y., Choi, K. K., Du, L., 2009. Reliability-based design optimization of problems with correlated input variables using a Gaussian copula. Structural and Multidisciplinary Optimization 38, 1–16.

Redhe, M., Forsberg, J., Jansson, T., Marklund, P.-O., Nilsson, L., 2002. Using the response surface methodology and the D-optimality criterion in crashworthiness related problems - an analysis of the surface approximation error versus the number of function evaluations. Structural and Multidisciplinary Optimization 24, 185–194.

Roux, W. J., Stander, N., Haftka, R. T., 1998. Response surface approximations for structural optimization. International Journal for Numerical Methods in Engineering 42, 517–534.

Rubinstein, R. R., 1981. Simulation and the Monte Carlo Method. Wiley.

Sacks, J., Schiller, S. B., Welch, W. J., 1989. Design for computer experiments. Technometrics 31 (1), 41–47.

Simpson, T. W., Peplinski, J. D., Koch, P. N., Allen, J. K., 2001. Metamodels for computer-based engineering design: Survey and recommendations. Engineering with Computers 17, 129–150.

Simpson, T. W., Toropov, V., Balabanov, V., Viana, F. A. C., 2008. Design and analysis of computer experiments in multidisciplinary design optimization: a review of how far we have come - or not. In: Proc. 12th AIAA/ISSMO Multidisciplinary Analysis and Optimization Conference. AIAA-2008-5802.

Stander, N., Craig, K. J., 2002. On the robustness of a simple domain reduction scheme for simulation-based optimization. Engineering with Computers 19 (4), 431–450.

Wang, L., Kodiyalam, S., 2002. An efficient method for probabilistic and robust design with non-normal distributions. In: Proc. 43rd AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conference. Denver, Colorado, pp. 1–13.

Yang, R., Gu, L., 2004. Experience with approximate reliability-based optimization methods. Structural and Multidisciplinary Optimization 26(1-2), 152–159.

Youn, B. D., Choi, K. K., 2004a. A new response surface methodology for reliability-based design optimization. Computers and Structures 82, 241–256.

Youn, B. D., Choi, K. K., 2004b. Selecting probabilistic approaches for reliability-based design optimization. AIAA Journal 42 (1).


Appendix A. Fitting scheme

In this work, a weighted least squares fit has been used to fit the surrogate model to the responses. It has been noted that whether or not the converged solution is the global minimum depends on the start guess. Thus, an effort has been made to develop a scheme to compute start guesses for the optimization. Two start guess estimate schemes are used in the optimization. One is used in the first iteration, where no information about the response is available, and the other is used in the subsequent iterations, where information from prior iterations can be utilized. In the following presentation, the tilde sign $\tilde{\;}$ and the hat sign $\hat{\;}$ are both used to indicate estimates, where $\tilde{\;}$ indicates the start guess estimate and $\hat{\;}$ indicates the weighted least squares estimate. In the first scheme, the start guess is estimated using:

1. Estimate $\nabla_x r_j$ using the $N_X + 1$ first experiments and responses as
$$
[\tilde r_j^0,\ \hat\nabla_x r_j]^T = [\mathbf D^{(1)}]^{-1} \mathbf r_j,
$$
where the experiment design matrix expressed in design space is generally stated as
$$
\mathbf D = \begin{bmatrix} 1 & x_{11} & \dots & x_{N_X 1} \\ \vdots & & \ddots & \vdots \\ 1 & x_{1 N_E} & \dots & x_{N_X N_E} \end{bmatrix}
$$
and in this work the experiment design in the first iteration is
$$
\mathbf D^{(1)} = \begin{bmatrix}
1 & \mathbf x^{*(1)\,T} \\
1 & \left[\mathbf x^{*(1)} + \beta \hat\sigma_1^{(1)} \mathbf e_1\right]^T \\
\vdots & \vdots \\
1 & \left[\mathbf x^{*(1)} + \beta \hat\sigma_{N_X}^{(1)} \mathbf e_{N_X}\right]^T \\
1 & \left[\mathbf x^{*(1)} - \beta \sum_i \hat\sigma_i^{(1)} \mathbf e_i\right]^T
\end{bmatrix}
$$
where $\mathbf e_i$ is the basis vector for design variable $i$.

2. Set $\hat{\mathbf n}_j^{(1)} = \pm \hat\nabla_x r_j / \lVert \hat\nabla_x r_j \rVert$, chosen so that $\hat{\mathbf n}_j^{(1)\,T} \mathbf x_j^{*(1)} = \hat p_j^{*(1)} \ge 0$, with $\mathbf x_j^{*(1)} = \boldsymbol\mu^{(1)} + \hat{\boldsymbol\sigma}_j^{(1)} \mathbf u_j^{*(1)}$ and
$$
\mathbf u_j^{*(1)} = \frac{\hat{\boldsymbol\sigma}_j^{(1)} \hat\nabla \hat r_j^{(1)}}{\left\lVert \hat{\boldsymbol\sigma}_j^{(1)} \hat\nabla \hat r_j^{(1)} \right\rVert}\, \beta_j.
$$

3. Set
$$
p_j^{(1)} = \begin{cases} 0 & \text{if } \hat p_{min,j}^{(1)} \ge 0 \\ \hat p_{min,j}^{(1)} & \text{else,} \end{cases}
$$
where $\hat p_{min,j}^{(1)}$ is the smallest value of $\hat p_j^{(1)}$ in the design domain.

4. Approximate the first and second derivatives $\mathrm{d}r_j/\mathrm{d}p_j$ and $\mathrm{d}^2 r_j/\mathrm{d}p_j^2$ of $r_j$ using the Taylor expansion
$$
r_{T,j} = r_j(p_j^{*(1)}) + \frac{\mathrm{d}r_j}{\mathrm{d}p_j}\,(p_j - p_j^{*(1)}) + \frac{1}{2}\frac{\mathrm{d}^2 r_j}{\mathrm{d}p_j^2}\,(p_j - p_j^{*(1)})^2,
$$
a least squares fit, and all $N_X + 2$ experiments.

5. Estimate $\gamma_j^{(1)}$ as
$$
\tilde\gamma_j^{(1)} = \mathrm{E}[\hat p_j^{(1)}]\, \frac{\mathrm{d}^2 r_j/\mathrm{d}p_j^2}{\mathrm{d}r_j/\mathrm{d}p_j} + 1.
$$

6. Estimate $a_j^{(1)}$ as
$$
\tilde a_j^{(1)} = \frac{\mathrm{d}^2 r_j/\mathrm{d}p^2 + \mathrm{d}r_j/\mathrm{d}p}{\tilde\gamma_j^{(1)} \left( \mathrm{E}[\hat p_j^{(1)}]^{\tilde\gamma_j^{(1)} - 1} + \left(\tilde\gamma_j^{(1)} - 1\right) \mathrm{E}[\hat p_j^{(1)}]^{\tilde\gamma_j^{(1)} - 2} \right)}.
$$

7. Estimate $r_j^{0(1)}$ as
$$
\tilde r_j^{0(1)} = \frac{1}{n_e} \sum_e \left( r_{e,j} - \tilde a_j^{(1)} \hat p_{e,j}^{(1)\, \tilde\gamma_j^{(1)}} \right).
$$

8. Find the solution $\hat{\mathbf c}_j^{(1)} = [\hat r_j^{0(1)}, \hat a_j^{(1)}, \hat\gamma_j^{(1)}]^T$ to the weighted least squares problem
$$
\min_{\mathbf c_j}\ \left(\mathbf r - \hat{\mathbf r}(\hat{\mathbf n}_j^{(1)}, \mathbf c_j)\right)^T \mathbf W_j \left(\mathbf r - \hat{\mathbf r}(\hat{\mathbf n}_j^{(1)}, \mathbf c_j)\right)
$$
using the start guess $\tilde{\mathbf c}_j^{(1)} = [\tilde r_j^{0(1)}, \tilde a_j^{(1)}, \tilde\gamma_j^{(1)}]^T$. The weight function matrix $\mathbf W_j$ is a diagonal matrix with components $W_{mm} = w_j e^{-\lambda \lVert \mathbf u_m - \mathbf u_j \rVert}$. The weight function ensures that those experiments $m = 1, \dots, N_E$ closest to the MPTP are given the highest weight, since they to a higher degree determine the failure probability than those far from the MPTP. The decay factor $\lambda$ determines the extent to which the distance affects the weight.

In all iterations but the first, the coefficients are estimated using:

1. Estimate $\nabla_x r_j^{(k,l)}$ using
$$
[\tilde r_j^0,\ \hat\nabla_x r_j^{(k,l)}]^T = \left(\mathbf D_T^T \mathbf W_j \mathbf D_T\right)^{-1} \mathbf D_T^T \mathbf W_j\, \mathbf r_j^{(k)}.
$$

2. Set $\hat{\mathbf n}_j^{(k,l)} = \pm \hat\nabla_x r_j^{(k,l)} / \lVert \hat\nabla_x r_j^{(k,l)} \rVert$, chosen so that $\hat{\mathbf n}_j^{(k,l)\,T} \mathbf x_j^{*(k,l)} = \hat p_j^{*(k,l)} \ge 0$, with $\mathbf x_j^{*(k,l)} = \boldsymbol\mu^{(k,l)} + \hat{\boldsymbol\sigma}_j^{(k,l)} \mathbf u_j^{*(k,l)}$ and
$$
\mathbf u_j^{*(k,l)} = \frac{\hat{\boldsymbol\sigma}_j^{(k,l)} \hat\nabla \hat r_j^{(k,l)}}{\left\lVert \hat{\boldsymbol\sigma}_j^{(k,l)} \hat\nabla \hat r_j^{(k,l)} \right\rVert}\, \beta_j.
$$

3. Set
$$
p_j^{(k,l)} = \begin{cases} 0 & \text{if } \hat p_{min,j}^{(k,l)} \ge 0 \\ \hat p_{min,j}^{(k,l)} & \text{else.} \end{cases}
$$

4. Find the solution $\hat{\mathbf c}_j^{(k,l)} = [\hat r_j^{0(k,l)}, \hat a_j^{(k,l)}, \hat\gamma_j^{(k,l)}]^T$ to the weighted least squares problem
$$
\min_{\mathbf c_j}\ \left(\mathbf r - \hat{\mathbf r}(\hat{\mathbf n}_j^{(k,l)}, \mathbf c_j)\right)^T \mathbf W \left(\mathbf r - \hat{\mathbf r}(\hat{\mathbf n}_j^{(k,l)}, \mathbf c_j)\right)
$$
using the start guess $\tilde{\mathbf c}_j^{(k,l)} = [\tilde r_j^{0(k,l-1)}, \hat a_j^{(k,l-1)}, \hat\gamma_j^{(k,l-1)}]^T$.
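A condensed sketch of the two-step weighted least squares fit described above: step one estimates the gradient, and hence the limit state normal, from a weighted linear fit; step two refines r0, a and γ with a nonlinear least squares solver started from the previous estimates. All names are assumptions, and the sign convention of step 2 as well as the projection shift p_j of step 3 are omitted for brevity.

```python
import numpy as np
from scipy.optimize import least_squares

def fit_directional_surrogate(D, r, W, c0):
    """Two-step weighted least squares fit of Eq. (3).

    D  : (N_E, N_X) experiments in design space
    r  : (N_E,) responses
    W  : (N_E,) diagonal weights of Eq. (18)
    c0 : start guess [r0, a, gamma] (the tilde quantities of the appendix)
    """
    sw = np.sqrt(W)

    # Step 1: weighted linear fit of [r0_lin, grad]; the normalized gradient
    # gives the limit-state normal estimate n_hat.
    A = np.column_stack([np.ones(len(r)), D])
    coeff, *_ = np.linalg.lstsq(A * sw[:, None], r * sw, rcond=None)
    grad = coeff[1:]
    n_hat = grad / np.linalg.norm(grad)

    # Step 2: weighted nonlinear fit of (r0, a, gamma) with n_hat held fixed.
    p = D @ n_hat                              # projections n_hat^T x_m (assumed > 0 here)
    def residual(c):
        r0, a, gamma = c
        return sw * (r - (r0 + a * p ** gamma))
    sol = least_squares(residual, x0=c0)
    return n_hat, sol.x
```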
