Contents lists available at ScienceDirect

Reliability Engineering and System Safety

journal homepage: www.elsevier.com/locate/ress

Information Reuse for Importance Sampling in Reliability-Based Design Optimization

Anirban Chaudhuri a,⁎, Boris Kramer b, Karen E. Willcox c

a Massachusetts Institute of Technology, Cambridge, MA, 02139, USA
b University of California, San Diego
c University of Texas at Austin, Austin, TX, 78712, USA

ARTICLE INFO

Keywords: Information reuse; Importance sampling; Biasing density; Probability of failure; Reliability analysis; Optimization under uncertainty; Reliability-based optimization; RBDO

ABSTRACT

This paper introduces a new approach for importance-sampling-based reliability-based design optimization (RBDO) that reuses information from past optimization iterations to reduce computational effort. RBDO is a two-loop process (an uncertainty quantification loop embedded within an optimization loop) that can be computationally prohibitive due to the numerous evaluations of expensive high-fidelity models needed to estimate the probability of failure in each optimization iteration. In this work, we use the existing information from past optimization iterations to create efficient biasing densities for importance sampling estimates of the probability of failure. The method involves two levels of information reuse: (1) reusing the current batch of samples to construct an a posteriori biasing density with optimal parameters, and (2) reusing the a posteriori biasing densities of the designs visited in past optimization iterations to construct the biasing density for the current design. We demonstrate, for the RBDO of a benchmark speed reducer problem and a combustion engine problem, that the proposed method leads to computational savings in the range of 51% to 76% compared to building biasing densities with no reuse in each iteration.

1. Introduction

Designing efficient and robust engineering systems requires dealing with expensive computational models while taking into account uncertainties in parameters and surrounding conditions. Reliability-based design optimization (RBDO) is a framework to minimize a prescribed cost function while simultaneously ensuring that the design is reliable (i.e., has a small probability of failure). RBDO is a two-loop process involving an outer-loop optimization with an inner-loop reliability analysis in each optimization iteration, as shown in Fig. 1(a). The reliability analysis requires estimating a probability of failure. RBDO approaches include: fully coupled two-loop methods that evaluate the reliability at each optimization iteration, single-loop methods that introduce optimality criteria for an approximation of the reliability estimate [1–4], and decoupled approaches that transform RBDO into a series of deterministic optimization problems with corrections [5,6]. Surveys of existing RBDO methods can be found in Refs. [7–9]. In this work, we concentrate on a two-loop RBDO method. For mildly nonlinear systems, the reliability can be estimated using first-order and second-order reliability methods [10,11]. However, strongly nonlinear systems typically require Monte Carlo sampling. Monte Carlo methods are also more appropriate for systems with multiple failure regions, which first-order and second-order reliability methods cannot handle. The high cost of Monte Carlo sampling renders the RBDO problem computationally prohibitive in the presence of expensive-to-evaluate models. Thus, efficient methods are needed for evaluating the reliability constraint in each RBDO iteration for nonlinear systems.

One way to reduce the computational cost of RBDO is to use cheap-to-evaluate surrogate evaluations in place of the expensive high-fidelity evaluations in the Monte Carlo estimation of the probability of failure. Several methods use surrogates that have been adaptively refined around the failure boundary. Dubourg et al. [12] proposed refining kriging surrogates using a population-based adaptive sampling technique through subset simulation for RBDO. Bichon et al. [13,14] combined adaptive Gaussian-process-based global reliability analysis with efficient global optimization (a.k.a. Bayesian optimization) for RBDO. Qu and Haftka [15] presented a method to solve the RBDO problem by building surrogates for the probability sufficiency factor, which defines a probability of failure on the safety factor instead of on the limit state function directly. Recent work has developed a quantile-based RBDO method using adaptive kriging surrogates [16], which uses the quantile (a.k.a. value-at-risk) instead of the probability of failure constraint. A comprehensive review of the use of surrogates in RBDO can be found in [17]. We note that sampling directly from surrogate models, while computationally cheaper, introduces a bias, whereas the method we propose is unbiased. Surrogate models nevertheless offer substantial computational savings [17], and it would be possible to extend the proposed method to incorporate surrogates. Moreover, these methods do not reuse information from previous design iterations, which is the source of computational savings that we explore in this work.

https://doi.org/10.1016/j.ress.2020.106853
Received 30 January 2019; Received in revised form 11 November 2019; Accepted 21 January 2020
⁎ Corresponding author. E-mail addresses: [email protected] (A. Chaudhuri), [email protected] (B. Kramer), [email protected] (K.E. Willcox).
Reliability Engineering and System Safety 201 (2020) 106853
Available online 22 January 2020. 0951-8320/© 2020 Elsevier Ltd. All rights reserved.

Another approach to reducing computational cost is to use importance sampling for the reliability analysis, which can allow a drastic reduction in the number of samples needed to estimate the reliability (or failure probability). Importance sampling is, in general, an efficient method for reliable systems and can be used to estimate small failure probabilities [18,19] or other measures of risk [20–23]. Although importance-sampling-based approaches can increase the efficiency of probability of failure estimation, they can still require many samples if the biasing density is not constructed appropriately. Various efficient adaptive importance sampling methods [19] that iteratively approach the optimal biasing density have been devised to meet this challenge. The cross-entropy method is a popular adaptive importance sampling method that uses the Kullback–Leibler (KL) divergence to iteratively get closer to the optimal biasing density [24–26]. Adaptive importance sampling has also been implemented using mixture models to account for multiple failure regions [27,28]. Subset simulation is another method for efficiently estimating failure probabilities; it converts a small failure probability into a series of larger conditional failure probabilities [29–31]. Surrogate-based importance sampling methods have also been proposed to further improve computational efficiency [32–36]. In this work, we develop an importance sampling method that builds good biasing densities using the optimization setup in RBDO.

We propose a new importance-sampling-based RBDO method that reuses information from past optimization iterations for computationally efficient evaluation of the reliability constraint, as illustrated in Fig. 1(b). At the core of the IRIS-RBDO (Information Reuse for Importance Sampling in RBDO) method, we propose to build a good importance sampling biasing density by reusing data from previous optimization iterations. The proposed method reduces the computational time for probability of failure estimates in each RBDO iteration through two levels of information reuse:

1. At the current design iteration, once the reliability estimate is computed via importance-sampling-based Monte Carlo sampling, we reuse the current batch of samples from the reliability estimate to form an a posteriori biasing density. The a posteriori biasing density is built in an optimal way by minimizing the KL divergence to the optimal (zero-variance) biasing density.

2. At the next design iteration, we reuse the available a posteriori biasing densities from nearby designs explored in past iterations to construct a mixture density at the current iteration. The motivation is that nearby designs are likely to have similar failure regions, so reusing the a posteriori biasing densities from the existing nearby designs can lead to efficient biasing densities.

In our IRIS-RBDO framework, the information from past optimization iterations acts as a surrogate for building biasing densities in each RBDO iteration. The optimization history is a rich source of information. Reusing information from past optimization iterations for optimization under uncertainty has previously been done in the context of robust optimization using control variates [37–39]. Cook et al. [40] extended the control-variates method for information reuse to a larger class of estimators for robust optimization. The probabilistic re-analysis method, which has been used for RBDO [41] and for probability of failure sensitivity analysis under mixed uncertainty [42], can also be seen as a reuse method. Probabilistic re-analysis creates an offline library with a large number of samples using a density encompassing the entire design and random variable space, and then uses importance sampling to re-weight the samples according to a specific density in each RBDO iteration [43]. However, RBDO using probabilistic re-analysis depends on restrictive assumptions on the structure of the design and random variables, and does not guarantee the accuracy of the probability of failure estimates in each RBDO iteration, since it is sensitive to the quality of the existing offline samples. Beaurepaire et al. [44] proposed a method for reusing existing information in the context of RBDO through bridge importance sampling, an adaptive sampling scheme that uses Markov chain Monte Carlo to sample from intermediate densities. The initial density for bridge importance sampling at the current design is constructed from that existing density from previous optimization iterations whose mode yields a limit state function value at the current design that is closest to the failure boundary while remaining within the failure threshold. This requires multiple new evaluations of the limit state function to implement the reuse. While similar in purpose, our method is fundamentally different from the existing work of Beaurepaire et al. [44] in the way information from past optimization iterations is reused through a two-step process. We connect directly to variance reduction for probability of failure estimates by building the a posteriori biasing densities (leading to optimal biasing densities for the existing designs). Note that no new evaluations of the expensive limit state function are required to implement our information reuse. We next outline several important advantages of our method.

Fig. 1. Two-loop process for RBDO using (a) the high-fidelity model, and (b) the proposed information reuse method.

The key contribution of this paper is a new approach for reusing information from past optimization iterations in the context of importance-sampling-based RBDO. There are several advantages to the proposed IRIS-RBDO method. First, the method is computationally efficient: it does not require building a biasing density from scratch at every iteration and can build efficient biasing densities by reusing existing information. The computational efficiency is demonstrated through two numerical experiments. Second, the method can overcome bad initial biasing densities by reusing samples to build (at every design iteration) a posteriori biasing densities for future reuse. Third, the method is potentially useful for building biasing densities for disconnected feasible regions or multiple failure regions because it uses a mixture of existing biasing densities. Mixture densities have been shown to be useful for probability of failure estimation with multiple disconnected failure regions [27,28]. The novelty of our approach lies in the way we choose the mixture components from the existing biasing densities, and the mixing weights, in the context of RBDO. Fourth, there is no bias in the IRIS-RBDO reliability analysis, and it can maintain a desired level of accuracy in the reliability estimates in every optimization iteration.

The rest of the paper is structured as follows. Section 2 provides the RBDO formulation and background on methods used to estimate the probability of failure. Section 3 describes the details of the proposed IRIS-RBDO method along with the complete algorithm. The effectiveness of IRIS-RBDO is shown on a benchmark speed reducer problem in Section 4 and a combustion engine model in Section 5. Section 6 presents the conclusions.

2. Reliability-based design optimization (RBDO)

This section describes the RBDO formulation used in this work (Section 2.1), followed by existing Monte Carlo methods for estimating the probability of failure: Section 2.2 describes the standard Monte Carlo estimate and Section 2.3 the importance sampling estimate of the probability of failure.

2.1. RBDO formulation

The inputs to the system are $n_d$ design variables $d \in \mathcal{D} \subseteq \mathbb{R}^{n_d}$ and an $n_r$-dimensional random variable $Z \colon \Xi \to \Omega \subseteq \mathbb{R}^{n_r}$ defined on the sample space $\Xi$ and with probability density function $p$, henceforth called the nominal density. Here, $\mathcal{D}$ denotes the design space and $\Omega$ denotes the random sample space. A realization of $Z$ is denoted as $z \in \Omega$. We use $z \sim p$ to indicate that the realizations are sampled from the distribution $p$. We are interested in the RBDO problem that uses a reliability constraint (herein, a failure probability) to drive the optimization. The RBDO problem formulation used in this work is

\[
\begin{aligned}
\min_{d \in \mathcal{D}} \quad & J(d) = \mathbb{E}_p[f(d, Z)] \\
\text{subject to} \quad & \mathbb{P}(g(d, Z) < 0) \le P_{\text{thresh}},
\end{aligned}
\qquad (1)
\]

where $J \colon \mathcal{D} \to \mathbb{R}$ is the cost function, $f \colon \mathcal{D} \times \Omega \to \mathbb{R}$ is the quantity of interest, $g \colon \mathcal{D} \times \Omega \to \mathbb{R}$ is the limit state function, and $P_{\text{thresh}}$ is the acceptable threshold on the probability of failure. Without loss of generality, failure of the system is defined by $g(d, Z) < 0$. By

\[
\mathbb{E}_p[f(d, Z)] = \int_\Omega f(d, z)\, p(z)\, \mathrm{d}z
\]

we denote the expectation of the random variable $f(\,\cdot\,, Z)$ with respect to the density $p$.

2.2. Monte Carlo estimate for probability of failure

The solution of the RBDO problem given by Eq. (1) is obtained via an iterative procedure in which the optimizer evaluates a sequence of design iterates while seeking a minimum-cost solution that satisfies the reliability constraint. Let $d_t$ be the design in optimization iteration $t$ and define the corresponding failure set as

\[
\mathcal{G}_t = \{ z \in \Omega \mid g(d_t, z) < 0 \}. \qquad (2)
\]

We emphasize that evaluating the limit state function $g$, and hence checking whether $z \in \mathcal{G}_t$, requires evaluating an expensive-to-evaluate model. The indicator function $\mathbb{I}_t \colon \mathcal{D} \times \Omega \to \{0, 1\}$ is defined as

\[
\mathbb{I}_t(d_t, z) =
\begin{cases}
1, & z \in \mathcal{G}_t, \\
0, & \text{else}.
\end{cases}
\qquad (3)
\]

The Monte Carlo estimate of the probability of failure $P(d_t) := \mathbb{P}(g(d_t, Z) < 0)$ is given by

\[
\hat{P}^{\mathrm{MC}}_{p, t}(d_t) = \frac{1}{m_t} \sum_{i=1}^{m_t} \mathbb{I}_t(d_t, z_i), \qquad z_i \sim p, \qquad (4)
\]

where $z_i$, $i = 1, \dots, m_t$, are the $m_t$ samples from the probability density $p$ used in iteration $t$. The subscript on $\hat{P}$ denotes the density from which the random variables are sampled to compute the estimate.

In Monte Carlo simulation for estimating small probabilities, the number of samples required to achieve a fixed level of accuracy in the probability estimate scales inversely with the probability itself. Due to the low probability of failure of reliable designs, standard Monte Carlo sampling would be computationally infeasible for expensive-to-evaluate limit state functions because the number of samples, $m_t$, required to reach an acceptable level of accuracy would be prohibitively large.

2.3. Importance sampling estimate for probability of failure

Importance sampling is a change of measure, from the nominal density to a biasing density, that is corrected via re-weighting of the samples drawn from the new measure. In probability of failure estimation, a biasing density is sought so that many samples lie in the set $\mathcal{G}_t$. In this work, we use a parametric biasing density denoted by $q_{\theta_t}$ for optimization iteration $t$, where $\theta_t$ denotes the parameters of the distribution. The biasing density must satisfy $\operatorname{supp}(\mathbb{I}_t(d_t, Z)\, p(Z)) \subseteq \operatorname{supp}(q_{\theta_t}(Z))$. The importance sampling estimate for $P(d_t)$ is given by

\[
\hat{P}^{\mathrm{IS}}_{q_{\theta_t}, t}(d_t) = \frac{1}{m_t} \sum_{i=1}^{m_t} \mathbb{I}_t(d_t, z_i')\, \frac{p(z_i')}{q_{\theta_t}(z_i')}, \qquad z_i' \sim q_{\theta_t}, \qquad (5)
\]

where $z_i'$, $i = 1, \dots, m_t$, are the $m_t$ samples from the probability density $q_{\theta_t}$ used in iteration $t$. The ratio $p(z_i')/q_{\theta_t}(z_i')$ is called the importance weight, or likelihood ratio.

The unbiased sample variance of the importance sampling estimate is $\hat{\sigma}^2_{m_t}/m_t$, where $\hat{\sigma}^2_{m_t}$ is estimated by

\[
\hat{\sigma}^2_{m_t} = \frac{1}{m_t - 1} \sum_{i=1}^{m_t} \left( \mathbb{I}_t(d_t, z_i')\, \frac{p(z_i')}{q_{\theta_t}(z_i')} - \hat{P}^{\mathrm{IS}}_{q_{\theta_t}, t} \right)^{\!2}, \qquad z_i' \sim q_{\theta_t}.
\]

The relative error, or coefficient of variation, of the probability of failure estimate is given by

\[
\hat{e}\big(\hat{P}^{\mathrm{IS}}_{q_{\theta_t}, t}\big) = \frac{1}{\hat{P}^{\mathrm{IS}}_{q_{\theta_t}, t}} \sqrt{\frac{\hat{\sigma}^2_{m_t}}{m_t}}. \qquad (6)
\]

The importance sampling estimate of the failure probability is unbiased, i.e.,

\[
\mathbb{E}_p[\mathbb{I}_t(d_t, \cdot)] = \mathbb{E}_{q_{\theta_t}}\!\left[ \mathbb{I}_t(d_t, \cdot)\, \frac{p}{q_{\theta_t}} \right].
\]
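The estimator of Eq. (5) and the relative error of Eq. (6) can be sketched as follows for normal nominal and biasing densities with diagonal covariance (an illustration under assumed densities, not the paper's implementation):

```python
import numpy as np

def is_failure_probability(g, d, rng, mu_p, sig_p, mu_q, sig_q, m):
    """Importance sampling estimate of P(g(d, Z) < 0) and its coefficient
    of variation, cf. Eqs. (5)-(6), for diagonal normal densities p and q."""
    z = rng.normal(mu_q, sig_q, size=(m, len(mu_q)))         # z_i' ~ q
    # log-densities up to the common (2*pi)^(-n/2) factor, which cancels in p/q
    log_p = -0.5 * np.sum(((z - mu_p) / sig_p) ** 2, axis=1) - np.sum(np.log(sig_p))
    log_q = -0.5 * np.sum(((z - mu_q) / sig_q) ** 2, axis=1) - np.sum(np.log(sig_q))
    w = (g(d, z) < 0.0) * np.exp(log_p - log_q)              # I_t * p/q
    p_hat = np.mean(w)
    sigma2 = np.sum((w - p_hat) ** 2) / (m - 1)              # unbiased sample variance
    return p_hat, np.sqrt(sigma2 / m) / p_hat                # estimate, relative error

# Same hypothetical limit state as before; biasing density shifted towards the
# failure boundary so that roughly half the samples fail.
rng = np.random.default_rng(1)
g = lambda d, z: 4.0 - z[:, 0] - z[:, 1]
p_hat, rel_err = is_failure_probability(
    g, None, rng, np.zeros(2), np.ones(2), np.full(2, 2.0), np.ones(2), 10**4)
```

With only $10^4$ samples this gives a relative error of roughly 2% on the same problem that required about $10^6$ plain Monte Carlo samples, which illustrates why a well-placed biasing density is worth constructing.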

3. IRIS-RBDO: Information reuse in importance sampling for RBDO

We propose an efficient importance-sampling-based RBDO method that reduces computational cost through two levels of reusing existing information:


1. Reusing existing samples from the reliability computation in the current iteration to build an a posteriori biasing density with optimal parameters (see Theorem 1) that minimize the Kullback–Leibler divergence measure, as described in Section 3.1.

2. Reusing existing biasing densities from nearby designs, as described in Section 3.2.

The complete IRIS-RBDO algorithm is summarized in Section 3.3.

3.1. Reusing samples for a posteriori biasing density with optimal parameters

The first level of information reuse consists of building an a posteriori biasing density. We propose a method for approximating the optimal biasing density at the current iteration $t$ by reusing the current batch of samples that were used in the probability of failure computation. This first level of reuse builds on ideas developed in the cross-entropy method for probability of failure estimation [24], applied here to reusing samples in the RBDO setup. The novelty lies in the way the KL divergence is applied to reusing existing data in the optimization loop (after obtaining the reliability estimate), which is tailored to the two-loop RBDO setup. While we do use the KL divergence as the distance measure to formulate the optimization problem that yields the a posteriori density, our approach differs from the cross-entropy method (where the KL divergence is used during the reliability estimate at a particular design in the RBDO iteration). Note that this reuse method can also be combined with the cross-entropy method, where the initial density can be defined by the mixture of these a posteriori densities (as described in Section 3.2) to reduce the number of cross-entropy iterations.

After evaluating the importance sampling failure probability estimate $\hat{P}^{\mathrm{IS}}_{q_{\theta_t}, t}(d_t)$ with density $q_{\theta_t}$, we can compute an a posteriori biasing density that is close to the optimal (also known as zero-variance) biasing density. It is known [45, Chapter 9] that the theoretical optimal biasing density results in the estimate $\hat{P}^{\mathrm{IS}}_{h_t^*}(d_t)$ having zero variance, and is given by

\[
h_t^*(z) = \frac{\mathbb{I}_t(d_t, z)\, p(z)}{P(d_t)}, \qquad (7)
\]

where the superscript on $h_t^*$ denotes that it is the optimal biasing density for RBDO iteration $t$. However, due to the occurrence of $P(d_t)$ in Eq. (7), it is not practical to sample from this density.

Consequently, we want to find an a posteriori biasing density that is close to $h_t^*$. In this work, the KL divergence [46] is used as the distance measure between two distributions. We thus define a density $q_\theta$ parameterized by $\theta$ and want to minimize its KL divergence to $h_t^*$, which is defined as

\[
\mathrm{KL}(h_t^* \parallel q_\theta) = \mathbb{E}_{h_t^*}\!\left[ \ln \frac{h_t^*}{q_\theta} \right] = \int_\Omega \ln\!\left( \frac{h_t^*(z)}{q_\theta(z)} \right) h_t^*(z)\, \mathrm{d}z. \qquad (8)
\]

The optimal parameters of $q_\theta$ for RBDO iteration $t$ are given by $\theta_t^*$, where the superscript denotes that it is the optimal solution. Then $\theta_t^*$ can be found by solving the optimization problem

\[
\begin{aligned}
\theta_t^* &= \arg\min_\theta\; \mathrm{KL}(h_t^* \parallel q_\theta) \\
&= \arg\min_\theta\; \mathbb{E}_{h_t^*}\!\left[ \ln \frac{h_t^*}{q_\theta} \right] \\
&= \arg\min_\theta\; \int_\Omega \ln(h_t^*(z))\, h_t^*(z)\, \mathrm{d}z - \int_\Omega \ln(q_\theta(z))\, h_t^*(z)\, \mathrm{d}z \\
&= \arg\min_\theta\; -\int_\Omega \ln(q_\theta(z))\, h_t^*(z)\, \mathrm{d}z \\
&= \arg\min_\theta\; -\,\mathbb{E}_{h_t^*}[\ln(q_\theta)] \\
&= \arg\min_\theta\; -\,\mathbb{E}_p[\mathbb{I}_t(d_t, \cdot)\, \ln(q_\theta)],
\end{aligned}
\qquad (9)
\]

where in the third step the first integral does not depend on $\theta$ and can be dropped, and in the last step we used the definition of the optimal biasing density from Eq. (7) and dropped the factor $1/P(d_t)$, as it does not affect the optimization. Since the integral requires evaluating the failure region, we again use importance sampling with density $q_{\theta_t}$ to obtain an efficient estimate, i.e.,

\[
\mathbb{E}_p[\mathbb{I}_t(d_t, \cdot)\, \ln(q_\theta)] = \mathbb{E}_{q_{\theta_t}}\!\left[ \mathbb{I}_t(d_t, \cdot)\, \frac{p}{q_{\theta_t}}\, \ln(q_\theta) \right]. \qquad (10)
\]

Overall, we obtain the closest biasing density in KL distance via

\[
\begin{aligned}
\theta_t^* &= \arg\min_\theta\; -\,\mathbb{E}_{q_{\theta_t}}\!\left[ \mathbb{I}_t(d_t, \cdot)\, \frac{p}{q_{\theta_t}}\, \ln(q_\theta) \right] \\
&= \arg\min_\theta\; -\sum_{i=1}^{m_t} \mathbb{I}_t(d_t, z_i')\, \frac{p(z_i')}{q_{\theta_t}(z_i')}\, \ln(q_\theta(z_i')),
\end{aligned}
\qquad (11)
\]

where we replaced the expectation by an importance sampling estimate and the $z_i'$ are sampled from the biasing density $q_{\theta_t}$. Note that the $z_i'$ are existing samples that are reused after the probability of failure for $d_t$ has already been estimated.

We choose a multivariate normal distribution as the parametric distribution for $q_\theta$. However, the method can be applied to any choice of parametric distribution. In order to find analytic solutions for the parameters, the chosen distribution can be mapped to an exponential family. One could also directly sample from the zero-variance optimal biasing density $h_t^*$ using Markov chain Monte Carlo; this could be a possible extension of the proposed method.

For the case of the multivariate normal distribution, i.e., $q_{\theta_t^*} = \mathcal{N}(\mu_t, \Sigma_t)$ (and for several other parametric distributions, specifically the exponential family of distributions), an analytic solution for the optimal parameters $\theta_t^*$ can be derived and shown to be the global optimum of Eq. (11), as described below.

Theorem 1. Let $q_{\theta_t^*} = \mathcal{N}(\mu_t, \Sigma_t)$ be a multivariate normal distribution, with mean vector $\mu_t = [\mu_t^1, \dots, \mu_t^{n_r}]$ and symmetric positive definite covariance matrix $\Sigma_t = [\Sigma_t^{j,k}]_{j,k = 1, \dots, n_r}$. Let $z_i' = [z'_{i,1}, \dots, z'_{i,n_r}] \sim q_{\theta_t}$ represent the $i$th sample vector and $z'_{i,j}$ the $j$th entry of the $i$th sample vector for $j \in \{1, \dots, n_r\}$. Then the parameters $\theta_t^* = \{\mu_t, \Sigma_t\}$ given by

\[
\mu_t^j = \frac{\displaystyle\sum_{i=1}^{m_t} \mathbb{I}_t(d_t, z_i')\, \frac{p(z_i')}{q_{\theta_t}(z_i')}\, z'_{i,j}}{\displaystyle\sum_{i=1}^{m_t} \mathbb{I}_t(d_t, z_i')\, \frac{p(z_i')}{q_{\theta_t}(z_i')}}, \qquad j = 1, \dots, n_r, \qquad (12)
\]

\[
\Sigma_t^{j,k} = \frac{\displaystyle\sum_{i=1}^{m_t} \mathbb{I}_t(d_t, z_i')\, \frac{p(z_i')}{q_{\theta_t}(z_i')}\, (z'_{i,j} - \mu_t^j)(z'_{i,k} - \mu_t^k)}{\displaystyle\sum_{i=1}^{m_t} \mathbb{I}_t(d_t, z_i')\, \frac{p(z_i')}{q_{\theta_t}(z_i')}}, \qquad j, k = 1, \dots, n_r, \qquad (13)
\]

are the global optimum for the optimization problem given by Eq. (11).

Proof. See Appendix A. □
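In practice, the closed-form optimum of Theorem 1 is a weighted sample mean and covariance of the reused samples, with weights $w_i = \mathbb{I}_t(d_t, z_i')\, p(z_i')/q_{\theta_t}(z_i')$. A minimal sketch (our illustration, not the authors' code):

```python
import numpy as np

def kl_mvn_parameters(z, w):
    """A posteriori biasing density parameters of Eqs. (12)-(13):
    weighted mean and covariance of the reused samples z (shape (m, n_r))
    with weights w_i = I_t(d_t, z_i') * p(z_i') / q_{theta_t}(z_i')."""
    w = w / w.sum()                    # normalize; the common denominator cancels
    mu = w @ z                         # Eq. (12)
    zc = z - mu
    cov = (w[:, None] * zc).T @ zc     # Eq. (13)
    return mu, cov

# Illustration: reuse samples z ~ N(0, I_2) with "failure" z_1 > 1 and, for
# simplicity, p/q = 1 (sampling from the nominal density). The fitted mean
# then approximates E[z_1 | z_1 > 1] (about 1.53) in the first coordinate.
rng = np.random.default_rng(3)
z = rng.standard_normal((5000, 2))
mu, cov = kl_mvn_parameters(z, (z[:, 0] > 1.0).astype(float))
```

The fitted density concentrates on the failure region, with a variance shrunk in the direction normal to the failure boundary, which is exactly the behavior illustrated for the KL-MVN density in Example 2.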

Constructing the a posteriori biasing density by reusing the existing samples as proposed here can help overcome a bad initial biasing density. Example 2 presents a two-dimensional illustration of the effectiveness of the a posteriori biasing density constructed through the first level of information reuse in the proposed IRIS-RBDO method. These a posteriori biasing densities are then stored in a database for future optimization iterations to facilitate the second level of reuse in IRIS-RBDO, as described in Section 3.2.

Example 2. We give a simple example to illustrate the reuse of samples to build the a posteriori biasing density, which constitutes the first level of information reuse in IRIS-RBDO. We compare with a common method to build biasing densities following [18], where the biasing density is chosen to be the normal distribution with mean shifted to the most probable failure point (MPP; see Appendix B for how it is computed) and the same standard deviation as the nominal density.

Given is a two-dimensional random variable $Z$ with nominal density

\[
p = \mathcal{N}\!\left( \begin{bmatrix} 1 \\ 10 \end{bmatrix}, \begin{bmatrix} 0.1^2 & 0 \\ 0 & 3^2 \end{bmatrix} \right).
\]

The limit state function is $g(z) = 18 - z_1 - z_2$. Failure is defined as $g(z) < 0$. In this case, the MPP is located at $z_1 = 1.0078$, $z_2 = 16.9922$. Therefore, the MPP-based biasing density is

\[
q_{\mathrm{MPP}} = \mathcal{N}\!\left( \begin{bmatrix} 1.0078 \\ 16.9922 \end{bmatrix}, \begin{bmatrix} 0.1^2 & 0 \\ 0 & 3^2 \end{bmatrix} \right).
\]

KL-MVN denotes the first level of information reuse in IRIS-RBDO used for building the a posteriori biasing density with optimal parameters $\theta^*$ for the multivariate normal distribution using the KL divergence, as described in Section 3.1. In this case,

\[
q_{\theta^*} = \mathcal{N}\!\left( \begin{bmatrix} 1.0107 \\ 18.0124 \end{bmatrix}, \begin{bmatrix} 0.01 & 0.0112 \\ 0.0112 & 0.8812 \end{bmatrix} \right).
\]

We compute the probability of failure using importance sampling to be $9.7 \times 10^{-3}$. The coefficient of variation of the probability of failure estimate is 0.0164 using the MPP-based biasing density and 0.0081 using the KL-MVN-based biasing density. In both cases, $10^4$ samples are drawn from the biasing densities. For this example, we observe that the KL-MVN biasing density (constructed without any additional calls to the limit state function) is the better biasing density, since it reduces the coefficient of variation by roughly a factor of two compared to the MPP-based biasing density. Notice that this example has a linear limit state function, which makes it easy to find the MPP and thus leads to a good MPP-based biasing density. This makes the factor-of-two reduction in the coefficient of variation achieved by the KL-MVN biasing density all the more notable, and the gains will potentially be much higher for nonlinear limit states, where finding a good biasing density using the MPP is much more difficult.

Fig. 2 shows the resulting biasing densities and the failure boundary. Note that the MPP-based biasing density successfully tilts the distribution towards the failure region. However, since the MPP-based method uses the same variance as the nominal density with the mean on the failure boundary, samples drawn from that biasing density are far from the failure boundary and about 50% of the samples lie in the safe region. This results in either small importance weights (which are prone to numerical errors and increase the variance of the estimate) or uninformative samples (in the safe region, with indicator function equal to zero). As can be seen from Fig. 2, the information-reuse-based a posteriori biasing density places a large portion of samples in the failure region, and close to the failure boundary, which leads to good importance weights and variance reduction.
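To make the illustrative example concrete, the following sketch (our own code, not from the paper; the nominal density, limit state, and MPP values are taken from the reconstruction above) estimates the probability of failure by importance sampling with the MPP-based biasing density:

```python
# Importance sampling for the illustrative example:
# nominal p = N([1, 10], diag(0.1, 3)^2), limit state g(z) = 18 - z1 - z2.
import numpy as np

rng = np.random.default_rng(0)
m = 10_000                                    # samples from the biasing density

mean_p = np.array([1.0, 10.0])                # nominal mean
std_p = np.array([0.1, 3.0])                  # nominal standard deviations

# MPP-based biasing density: mean shifted to the MPP, same standard deviation.
mean_q = np.array([1.0078, 16.9922])
z = rng.normal(mean_q, std_p, size=(m, 2))    # draw from q_MPP

def log_normal_pdf(z, mean, std):
    # Log-density of an independent (diagonal-covariance) multivariate normal.
    return np.sum(-0.5 * ((z - mean) / std) ** 2
                  - np.log(std * np.sqrt(2.0 * np.pi)), axis=1)

w = np.exp(log_normal_pdf(z, mean_p, std_p) - log_normal_pdf(z, mean_q, std_p))
indicator = (18.0 - z[:, 0] - z[:, 1]) < 0.0  # failure: g(z) < 0

p_fail = np.mean(indicator * w)               # importance sampling estimate
cov = np.std(indicator * w, ddof=1) / (np.sqrt(m) * p_fail)
print(p_fail, cov)
```

With 10⁴ samples this reproduces a failure probability near 9.7 × 10⁻³; swapping in the KL-MVN mean and covariance quoted above should give the lower coefficient of variation reported in the text.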

3.2. Reusing biasing densities from nearby designs for failure probability computation

The second level of information reuse involves reusing the existing a posteriori biasing densities (see Section 3.1) from past RBDO iterations to construct a biasing density at the current design iteration t. We propose to reuse the a posteriori biasing densities corresponding to existing designs d₀, …, d_{t−1} from past optimization iterations that are close in design space.

A neighborhood of designs is defined as M_t := { d_j | 0 ≤ j ≤ t − 1, ‖d_j − d_t‖₂ ≤ r }, where r is the radius of the hypersphere defined by the user. The weights β_j, j = 0, …, t − 1, for the existing a posteriori biasing densities are defined according to the relative distance of the design point d_t to previously visited design points as

β_j = { ‖d_j − d_t‖₂^{−1} / Σ_{d_i ∈ M_t} ‖d_i − d_t‖₂^{−1},  if ‖d_j − d_t‖₂ ≤ r,
      { 0,  else.                                                        (14)

Note that Σ_{j=0}^{t−1} β_j = 1. The weights for each biasing density in our second level of information reuse reflect a correlation between the designs, which exists if designs are close to each other. Nearby designs from past optimization iterations will likely have similar failure regions and thus similar biasing densities. We set the radius of the hypersphere to include only designs that are in close proximity to the current design, and otherwise set the weight to zero, as seen in Eq. (14). In this work, we chose the ℓ₂-norm as the distance metric; however, any other norm can also be used in this method.

The information reuse biasing density for the current design iteration t is defined by the mixture of existing a posteriori biasing densities as given by

q_θt := Σ_{j=0}^{t−1} β_j q_θ*j,                                          (15)

where the q_θ*j are all the a posteriori biasing densities constructed using the first level of information reuse (see Section 3.1) from the past RBDO iterations j ∈ {0, …, t − 1} that are stored in a database. Using a mixture distribution for constructing the biasing density also has the potential to capture multiple disconnected failure regions.
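A minimal sketch of these two steps, assuming the inverse-distance form of Eq. (14) as reconstructed above (the function names `reuse_weights` and `sample_mixture` are ours):

```python
import numpy as np

def reuse_weights(d_t, past_designs, r):
    """Inverse-distance weights beta_j; zero for designs outside the radius-r ball."""
    dists = np.array([np.linalg.norm(d_j - d_t) for d_j in past_designs])
    inv = np.where(dists <= r, 1.0 / np.maximum(dists, 1e-12), 0.0)
    total = inv.sum()
    return inv / total if total > 0 else inv   # normalize so the weights sum to 1

def sample_mixture(rng, weights, means, covs, m):
    """Draw m samples from the mixture of stored a posteriori normal densities."""
    comp = rng.choice(len(weights), size=m, p=weights)   # pick mixture components
    return np.array([rng.multivariate_normal(means[j], covs[j]) for j in comp])

rng = np.random.default_rng(1)
d_t = np.array([0.50, 0.50])
past = [np.array([0.49, 0.50]), np.array([0.52, 0.48]), np.array([0.90, 0.90])]
beta = reuse_weights(d_t, past, r=0.1)        # third design lies outside the radius
samples = sample_mixture(rng, beta, past, [0.01 * np.eye(2)] * 3, m=100)
print(beta, samples.shape)
```

Closer past designs receive larger weights, and a design outside the hypersphere gets weight zero and is never sampled.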

3.3. Algorithm and implementation details

Algorithm 1 describes the implementation of the IRIS-RBDO method, which consists of two levels of information reuse for an RBDO problem as described by Eq. (1).

In this work, we set the radius of the specified hypersphere r to 0.5% of the largest diagonal of the hypercube defining the design space. We note that a distance measure of this nature works best if the design space is normalized so that each design variable has the same scale. The second level of information reuse described in Section 3.2 is used only if there are nearby designs. If there are no nearby designs (M_t = ∅) for the current optimization iteration t, we use an MPP-based [18] method (see Appendix B on how to compute it) for building a biasing density for importance sampling with no reuse. However, any other method can be chosen, depending on the user's preference, to build the biasing density for importance sampling with no reuse. Note that we do not put any restrictions on how many designs to reuse, i.e., we reuse

Fig. 2. Illustrative example comparing a posteriori biasing density with MPP-based biasing density.

A. Chaudhuri, et al. Reliability Engineering and System Safety 201 (2020) 106853



Require: Cost function J(·), limit state function g(·), nominal density p, design space D, initial design d₀, hypersphere radius r, coefficient of variation tolerance ε_tol, maximum number of samples m_max, threshold on probability of failure P_thresh
Ensure: Optimal design d*, optimal cost J_opt

1:  procedure IRIS_RBDO(p, D, d₀, r, ε_tol, m_max, P_thresh)
2:    Build biasing density q_θ0 for d₀ from scratch   ▷ For MPP-based method, see Appendix B
3:    (P^IS_{qθ0}(d₀), G₀) = ProbabilityOfFailure(p, q_θ0, d₀, ε_tol, m_max)   ▷ See Algorithm 2
4:    Evaluate cost function J(d₀)
5:    if P^IS_{qθ0}(d₀) ≤ P_thresh then   ▷ Check if design is reliable
6:      J_opt ← J(d₀)   ▷ Assign optimal cost
7:      d* ← d₀   ▷ Assign optimal design
8:    else
9:      J_opt ← 10¹⁰   ▷ Initialize optimal cost
10:   end if
11:   t = 0
12:   while optimization not converged do
13:     Calculate μ_t using Eq. (12) and samples from q_θt
14:     Calculate Σ_t using Eq. (13) and samples from q_θt
15:     q_θ*t ← N(μ_t, Σ_t)   ▷ Reusing existing samples: a posteriori biasing density
16:     t ← t + 1
17:     Get design d_t from optimizer
18:     M_t ← {d_j | 0 ≤ j ≤ t − 1, ‖d_j − d_t‖₂ ≤ r}   ▷ Neighborhood designs
19:     if M_t ≠ ∅ then   ▷ Nearby designs exist
20:       Compute weights β_j ∀ j = 0, …, t − 1 using Eq. (14)
21:       q_θt ← Σ_{j=0}^{t−1} β_j q_θ*j   ▷ Reusing existing biasing densities: mixture of q_θ*j
22:     else
23:       Build biasing density q_θt from scratch   ▷ No nearby previous designs
24:     end if
25:     (P^IS_{qθt}(d_t), G_t) = ProbabilityOfFailure(p, q_θt, d_t, ε_tol, m_max)   ▷ See Algorithm 2
26:     Evaluate cost function J(d_t)
27:     if J(d_t) ≤ J_opt and P^IS_{qθt}(d_t) ≤ P_thresh then
28:       J_opt ← J(d_t)   ▷ Assign optimal cost
29:       d* ← d_t   ▷ Assign optimal design
30:     end if
31:   end while
32:   return d*, J_opt
33: end procedure

Algorithm 1. IRIS-RBDO: Information reuse in importance-sampling-based RBDO.



Require: Limit state function g(·), design d_t, nominal density p, biasing density q_θt, coefficient of variation tolerance ε_tol, maximum number of samples m_max
Ensure: Probability of failure estimate P^IS_{qθt}(d_t), failure set G_t

1:  procedure ProbabilityOfFailure(p, q_θt, d_t, ε_tol, m_max)
2:    m_t = 0   ▷ Number of samples
3:    m_add = 100   ▷ 100 samples are added at a time
4:    e(P^IS_{qθt}) = 100 ε_tol
5:    Z'_t = ∅
6:    while (m_t ≤ m_max) and (e(P^IS_{qθt}) > ε_tol) do
7:      Get m_add samples {z'₁, …, z'_{m_add}} from q_θt
8:      m_t ← m_t + m_add
9:      Z'_t ← Z'_t ∪ {z'₁, …, z'_{m_add}}
10:     Compute probability of failure P^IS_{qθt}(d_t) = (1/m_t) Σ_{i=1}^{m_t} I_{G_t}(d_t, z'_i) p(z'_i)/q_θt(z'_i)
11:     Calculate coefficient of variation of the probability of failure estimate e(P^IS_{qθt}) using Eq. (6)
12:   end while
13:   G_t ← {z | z ∈ Z'_t, g(d_t, z) < 0}   ▷ Failure set
14:   return P^IS_{qθt}(d_t), G_t
15: end procedure

Algorithm 2. Probability of failure estimate.



information from all nearby designs within the specified radius r; see Eq. (14). However, the user can choose to limit the number of reused designs.

Algorithm 2 shows the implementation of the failure probability estimation required in every iteration of Algorithm 1. We require the coefficient of variation, defined in Eq. (6), of the probability of failure estimate to be within an acceptable tolerance ε_tol at iteration t. That is, we require

e(P^IS_{qθt}) ≤ ε_tol.                                                    (16)

The value of ε_tol can be set by the user depending on the level of accuracy required for a specific application. We choose ε_tol ∈ [10⁻², 10⁻¹] in the applications presented in this paper. We follow an iterative process to add samples and check whether the coefficient of variation is below the specified error tolerance. In this work, 100 samples are added at a time. To ensure termination of the algorithm in case this criterion is not met, we set a maximum number of samples m_max that shall not be exceeded at any design iteration. Note that m_max can typically be set by taking into account the values of P_thresh and ε_tol.
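The batch-wise loop of Algorithm 2 can be sketched as follows, on a toy 1-D problem; the coefficient-of-variation formula is our assumed form of Eq. (6), and all names are ours:

```python
# Add 100 samples at a time until the coefficient of variation of the importance
# sampling estimate drops below eps_tol, or until m_max samples have been used.
import numpy as np

def probability_of_failure(rng, g, log_p, log_q, sample_q,
                           eps_tol=0.05, m_max=500_000):
    vals = np.empty(0)
    while True:
        z = sample_q(rng, 100)                        # m_add = 100 samples per batch
        w = np.exp(log_p(z) - log_q(z))               # importance weights p/q
        vals = np.concatenate([vals, (g(z) < 0) * w]) # indicator times weight
        p_hat = vals.mean()
        # Coefficient of variation of the estimator (assumed form of Eq. (6)).
        cov = (np.std(vals, ddof=1) / (np.sqrt(len(vals)) * p_hat)
               if p_hat > 0 else np.inf)
        if cov <= eps_tol or len(vals) >= m_max:
            return p_hat, cov, len(vals)

# Example: nominal p = N(0, 1), failure when g(z) = 3 - z < 0, biasing q = N(3, 1).
rng = np.random.default_rng(2)
log_p = lambda z: -0.5 * z ** 2 - 0.5 * np.log(2.0 * np.pi)
log_q = lambda z: -0.5 * (z - 3.0) ** 2 - 0.5 * np.log(2.0 * np.pi)
p_hat, cov, m = probability_of_failure(
    rng, lambda z: 3.0 - z, log_p, log_q,
    lambda rng, n: rng.normal(3.0, 1.0, n))
print(p_hat, cov, m)
```

The true failure probability here is 1 − Φ(3) ≈ 1.35 × 10⁻³; the loop stops as soon as the estimate meets the requested tolerance, well before m_max.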

After estimating P^IS_{qθt}(d_t) within the specified relative error tolerance ε_tol, we reuse the existing samples to construct the a posteriori biasing density q_θ*t with optimal parameters θ*_t using the method described in Section 3.1. The algorithm then proceeds to the next optimization iteration.
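A sketch of this first level of reuse, assuming Eqs. (12) and (13) are the self-normalized weighted sample mean and covariance over the failed samples (the function name and the sampling-from-p shortcut are ours; with samples drawn from a biasing density q, the weights would be I(g < 0) p/q rather than the indicator alone):

```python
import numpy as np

def a_posteriori_normal(z_fail, w_fail):
    """Weighted sample mean/covariance of the failed samples (assumed Eqs. (12)-(13))."""
    wn = w_fail / w_fail.sum()               # self-normalized weights
    mu = wn @ z_fail                          # weighted mean
    diff = z_fail - mu
    sigma = (wn[:, None] * diff).T @ diff     # weighted covariance
    return mu, sigma

# Reuse the example of Section 3.1: p = N([1, 10], diag(0.1, 3)^2), g(z) = 18 - z1 - z2.
rng = np.random.default_rng(3)
z = rng.normal([1.0, 10.0], [0.1, 3.0], size=(10_000, 2))   # samples from p
fail = (18.0 - z[:, 0] - z[:, 1]) < 0.0                     # failed samples
w = np.ones(fail.sum())                                     # weights are 1 when sampling from p
mu, sigma = a_posteriori_normal(z[fail], w)
print(mu, sigma)
```

The fitted mean lands inside the failure region (near the [1.01, 18.01] value quoted in the illustrative example), with no additional limit state evaluations beyond the samples already in hand.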

Remark 3 (Defensive importance sampling). Importance sampling (with a good biasing density) is efficient for small probabilities, and we are commonly interested in low probabilities of failure in reliable engineering design applications. It should be noted that for large probabilities, importance sampling can be inefficient. To guard against such cases when building biasing densities for importance sampling with no reuse, one can use defensive importance sampling (described in Appendix C) in combination with a method of choice for importance sampling (in our case, MPP-based). However, we found biasing densities built using IRIS to be efficient without the use of defensive importance sampling, even for higher probabilities of failure. One reason is that we build the a posteriori biasing density via a weighted sample average, as given by Eqs. (12) and (13), where the weights depend on the distance from the nominal density, naturally and efficiently encoding the defensive part. For high probabilities of failure, IRIS should lead to a similar number of required samples as defensive importance sampling or generic Monte Carlo sampling, and we see that to be the case in our numerical experiments.

Remark 4 (Optimizer and gradients). We emphasize that this work does not develop a particular optimization algorithm for RBDO but provides a general method for efficiently integrating information from past optimization iterations into the reliability analysis. In this work, we show the efficiency of the proposed method for both a gradient-free optimizer and a gradient-based optimizer. Typically, it is difficult to accurately estimate gradients of the probability of failure, making it challenging to use a gradient-based optimizer for RBDO. However, for high-dimensional optimization problems, gradient-based optimizers are often the only choice. If one wants to use a gradient-based optimizer for black-box optimization, finite differences (although not the most efficient choice) are a generic way of calculating derivatives in a number of off-the-shelf optimizers. The IRIS method offers an advantage when finite differences are used to calculate the gradients because it reuses biasing densities from a very close design (since the finite difference step size is very small), which leads to efficient estimates of the probability of failure gradients. For differentiable P̂^IS_q(d), the finite difference estimate of the derivative of the probability of failure at any given design d is

dP̂^IS_q(d)/dd_i ≈ ( P̂^IS_q(d + γ e_i) − P̂^IS_q(d) ) / γ,   i = 1, …, n_d,

where γ is a small perturbation and e_i is the ith unit vector. We can ensure that every probability of failure estimate meets a set error tolerance ε_tol, as seen in Algorithm 2. The error in the probability of failure estimates directly affects the variance of the finite difference estimator for the derivatives, which ensures that this variance is also below a certain tolerance. The variance of the finite difference estimate of the derivatives using IRIS is bounded as

Var[ dP̂^IS_q(d)/dd_i ] ≈ (1/γ²) ( Var[P̂^IS_q(d + γ e_i)] + Var[P̂^IS_q(d)] )
                        ≤ (ε_tol²/γ²) ( (P̂^IS_q(d + γ e_i))² + (P̂^IS_q(d))² ).

Notice that since γ (≪ r) is a small perturbation, we can use the IRIS method for the remaining n_d probability of failure estimates required in the finite difference scheme after estimating P̂^IS_q(d). Thus, using IRIS we need substantially fewer samples to estimate P̂^IS_q(d + γ e_i) while maintaining the same error tolerance ε_tol in the probability of failure estimates required in the finite difference scheme, as compared to a regular Monte Carlo estimator or importance sampling without reuse. The variance of the derivative is also controlled through the error tolerance given by Eq. (16). Further variance reduction, at the cost of additional bias, can be obtained by using common random numbers [47,48] in the finite difference scheme. There are also several approximate methods available for estimating the gradient of the probability of failure that can be used to solve the RBDO problem [12,49–51].

4. Benchmark problem: Speed reducer

The speed reducer problem used in Ref. [52, Ch. 10] is a benchmark problem in RBDO. Here, we make the problem more challenging by modifying the limit state functions in order to obtain lower probabilities of failure for the system. We set a lower threshold probability of failure of P_thresh = 10⁻³, as compared to 10⁻² in Ref. [52]. The tolerance on the coefficient of variation in the probability of failure estimation within IRIS-RBDO is set to ε_tol = 0.01. This makes the estimation of the lower failure probabilities within the specified tolerance more expensive and challenging than in the original problem. The inputs to the system are six design variables defined in Table 1 and three uncertain variables defined as uncorrelated random variables in Table 2. Note that the method can handle any distribution for the random variables, and they do not need to be uncorrelated. We fix the number of gear teeth to 17.

Table 1
Design variables d = [d1, …, d6] ∈ D ⊂ R⁶ used in the speed reducer application.

Design variable | Lower bound (mm) | Upper bound (mm) | Initial design (mm) | Best design (mm)
d1              | 2.6              | 3.6              | 3.5                 | 3.5
d2              | 0.7              | 0.8              | 0.7                 | 0.7
d3              | 7.3              | 8.3              | 7.3                 | 7.3
d4              | 7.3              | 8.3              | 7.72                | 7.88
d5              | 2.9              | 3.9              | 3.35                | 3.45
d6              | 5.0              | 5.5              | 5.29                | 5.34



The RBDO problem formulation used in this work is given by

min_{d ∈ D}  J(d) = E_p[ f(d, Z) ]
subject to   P( g_i(d, Z) < 0 ) ≤ P_thresh = 10⁻³,   i = 1, 2, 3,        (17)

where

f(d, z) = 0.7854 d₁ z₁² (3.3333 · 17² + 14.9334 · 17 − 43.0934)
          − 1.5079 d₁ (d₅² + z₃²) + 7.477 (d₅³ + z₃³)
          + 0.7854 (d₃ d₅² + z₂ z₃²)                                      (18)

is a cost function that penalizes the material used in the manufacturing process, with units of mm³. The limit state functions are

g₁(d, z) = 1 − (1.93 z₂³) / (17 z₁ z₃⁴),

g₂(d, z) = 1120 − A₁/B₁,   A₁ = [ (745 z₂ / (17 z₁))² + 16.9 × 10⁶ ]^{0.5},   B₁ = 0.1 z₃³,

g₃(d, z) = 870 − A₂/B₂,    A₂ = [ (745 d₄ / (17 z₁))² + 157.5 × 10⁶ ]^{0.5},  B₂ = 0.1 d₆³.   (19)

We used the COBYLA (constrained optimization by linear approximation) optimizer from the NLopt package to run the optimization, and set a cut-off for the maximum number of samples used in each optimization iteration of m_max = 5 × 10⁵. COBYLA is a gradient-free optimizer. We compare the efficiency of probability of failure estimates using the proposed IRIS-RBDO method, which reuses information, to importance sampling with no information reuse. We also compare these results to subset simulation [29], a state-of-the-art method in reliability analysis and failure probability estimation.¹
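As a hedged illustration of the outer loop, the sketch below runs COBYLA (SciPy's implementation rather than NLopt's, purely for a self-contained example) on a toy RBDO problem with an analytic failure probability, so the reliability constraint is deterministic:

```python
import math
from scipy.optimize import minimize

P_THRESH = 0.1

def p_fail(d):
    # Analytic P(Z > d1 + d2) for Z ~ N(0, 1): stand-in for the inner-loop estimate.
    return 0.5 * math.erfc((d[0] + d[1]) / math.sqrt(2.0))

res = minimize(
    lambda d: d[0] ** 2 + d[1] ** 2,                       # cost function J(d)
    x0=[1.0, 1.0],                                         # feasible initial design
    method="COBYLA",                                       # gradient-free optimizer
    constraints=[{"type": "ineq", "fun": lambda d: P_THRESH - p_fail(d)}],
)
print(res.x, res.fun, p_fail(res.x))
```

The optimum sits on the reliability boundary, d₁ + d₂ = Φ⁻¹(0.9) ≈ 1.2816; in the actual RBDO setting, p_fail would be the Algorithm 2 estimate instead of a closed-form expression.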

Fig. 3 (a) shows the IRIS-RBDO convergence history versus the cumulative computational cost in terms of the number of samples used. We see that the optimization requires around 2 × 10⁵ samples before it finds the first feasible design. The probability of failure history in Fig. 3 (c) shows the progress of designs from infeasible to feasible regions during the optimization. The best design obtained in this case is given in Table 1; it has an associated cost of 3029.2 mm³.

The total number of samples used in each optimization iteration in IRIS-RBDO versus importance sampling with no reuse and subset simulation is shown in Fig. 4 (a). Note that we show the plots for the same designs in each RBDO iteration for all the cases, which makes it a one-to-one comparison. When no designs are nearby, our method also builds a biasing density with no reuse; hence the two markers overlap in those iterations. Otherwise, IRIS-RBDO always outperforms the other methods. IRIS-RBDO leads to overall computational savings of around 51% compared to importance sampling with no reuse and subset simulation throughout the optimization. The current implementation of subset simulation performs similarly to importance sampling with no reuse for the speed reducer problem. For this problem, subset simulation using 10⁴ samples in each level does not meet the error tolerance (see Fig. 4 (c)). However, IRIS and importance sampling with no reuse meet the set tolerance (ε_tol = 0.01) on the coefficient of variation in the probability of failure estimate for every optimization iteration, as seen in Fig. 4 (c).

Fig. 4 (b) compares the performance of IRIS-RBDO with importance sampling with no reuse and subset simulation by showing the number of samples required for the corresponding probability of failure estimates. For the case with no reuse, we see that the required number of samples is approximately inversely proportional to the respective probability of failure. However, for IRIS the required number of samples depends on the quality and amount of information reused. In this case, using IRIS we considerably reduce the number of samples required even for lower probabilities of failure, due to the extra information about the failure boundary encoded in the biasing density by reusing information from past optimization iterations. As noted before, when the markers overlap it means that there were no nearby designs and the biasing density was built with no reuse (here, MPP-based) during IRIS-RBDO.

The number of designs reused in each optimization iteration of IRIS-RBDO is shown in Fig. 5. Reusing designs leads to computational savings because of better biasing densities. As the iteration converges, IRIS-RBDO finds many close designs and beneficially reuses the biasing densities; compare this to Fig. 4 (a) to see how reuse saves model evaluations. However, note that the computational savings are not directly proportional to the number of reused designs. For iterations where no designs were reused (typically in the early design-space exploration stage), the biasing density was built without any information reuse (here, MPP-based).

5. RBDO for a combustion engine model

In this section, we show the effectiveness of the proposed IRIS-RBDO method when applied to the design of a combustion engine. The advent of reusable rockets for space flight requires new, more durable engine designs [53]. Satisfying reliability constraints is important not only for safety but also for durability, as failure to meet reliability constraints results in excessive wear of the engine, in turn limiting the rocket's repeated use. The computational model used to analyze the combustion engine is described in Section 5.1. Section 5.2 describes the RBDO problem formulation, and the results are discussed in Section 5.3.

5.1. Computational model

We consider the continuously variable resonance combustor (CVRC), a single-element model rocket combustor, as illustrated in Fig. 6. The CVRC is an experiment at Purdue University that has been studied extensively, both experimentally [54–56] and computationally [57–59].

5.1.1. Governing equations and geometry

The experiment is modeled with a quasi-1D² partial differential equation model. Fig. 7 shows the computational domain of the combustor, plotting the one-dimensional spatial variable x versus the combustor radius R(x). From left to right, the figure shows the five important combustor segments, separated by the dashed lines: the injector, a back-step, the combustion chamber, a back-step, and the nozzle. The injector length is L_i = 3.49 cm, the chamber length is L_c = 38.1 cm, the length of both back-steps is fixed at L_bs = 3.81 cm, and the nozzle length is L_n = 0.635 cm. The spatial variable is thus considered as

Table 2
Uncertain variables modeled as a vector-valued random variable Z ∈ R³ with realization z = [z1, z2, z3] used in the speed reducer application.

Random variable | Distribution | Mean | Standard deviation (μm)
z1              | Normal       | d2   | 1
z2              | Normal       | d4   | 30
z3              | Normal       | d6   | 21

¹ We use the recent Markov chain Monte Carlo implementation for subset simulation of [30] (https://www.mathworks.com/matlabcentral/fileexchange/57947-monte-carlo-and-subset-simulation-example?s_tid=prof_contriblnk), where we use 10⁴ samples in each level. Note that we tried different sample sizes for each level and settled on 10⁴ in order to get close to the set error tolerance. For this problem, subset simulation does not meet the set error tolerance, as seen later. We use the approximate error estimate for subset simulation from [30].

² The three-dimensional state variables are averaged across the combustor, resulting in a stream-wise dependence of the state variables, i.e., the states are x-dependent.



x ∈ [−L_i, 2L_bs + L_c + L_n]. The injector radius is given by R_i and the combustion chamber radius by R_c. The nozzle radius is R_t = 1.0401 cm at the throat and 1.0922 cm at the exit.

The quasi-1D Euler partial differential equation model for the CVRC [59,60] is given as

∂/∂t (ρ, ρu, E, ρY_ox)ᵀ = −(1/A) ∂/∂x ( ρuA, (ρu² + p)A, u(E + p)A, ρuY_ox A )ᵀ
                           + ( 0, (p/A) dA/dx, 0, 0 )ᵀ
                           + ω_f ( 1, u, h₀, −1/C_f/o )ᵀ,                 (20)

which we solve for the steady-state solution (∂/∂t = 0) via pseudo-time stepping. In the following, we compute steady-state solutions of Eq. (20), i.e., time-independent solutions. The velocity is u(x) and Y_ox(x) is the oxidizer mass fraction. The state variables are the density ρ(x), the specific momentum ρ(x)u(x), the total energy E(x), and ρ(x)Y_ox(x). The equation of state

E = p/(γ − 1) + (1/2) ρu²                                                 (21)

relates energy and pressure p(x) via the heat capacity ratio γ. In the source terms of Eq. (20), which model the chemical reaction and the cross-sectional area variation, h₀ denotes the heat of reaction, which is taken to be constant. The fuel-to-oxidizer ratio is the parameter C_f/o. Moreover, A = A(x) = πR(x)² encodes the cross-sectional area of the combustor as a function of x. Fuel at a mass flow rate ṁ_f is injected through an annular ring at the back-step after the oxidizer injector, centered at the coordinate x = L_f; see also Fig. 7. The forcing function ω_f in Eq. (20) is then modeled as

ω_f(x, ṁ_f) = ṁ_f (1 + sin(ξ(x))) / ∫_{−L_i}^{2L_bs + L_c + L_n} A(x) (1 + sin(ξ(x))) dx,   (22)

Fig. 3. Optimization progress using IRIS-RBDO showing (a) the objective function value for designs from all optimization iterations, (b) magnified convergence plot of feasible designs against the cumulative computational cost in terms of number of samples, and (c) probability of failure history in each optimization iteration for the speed reducer problem.



ξ(x) = { 2π (x − l_s)/(l_f − l_s) − π/2,  if l_s < x < l_f,
       { 0,  else,                                                        (23)

where l_s = L_f − 4.44 and l_f = L_f + 3.18. The computational model is a finite-volume discretization with upwinding, where we use 800 non-uniform finite volume elements and a fourth-order Runge-Kutta integration scheme. The CPU time required for one evaluation of the computational model is on average around 20 seconds.

5.1.2. Boundary conditions

The inlet boundary condition is modeled via a subsonic inlet. At the inlet, we prescribe the oxidizer mass flow rate ṁ_ox and the oxidizer concentration Y_ox. The inlet stagnation temperature T₀ is determined as follows: we prescribe a reference temperature T_∞ and a reference pressure p_∞, which are typically given from upstream components of an engine. We then use the relation

T₀ = T_∞ + (ṁ_ox² R_gas² T_∞²) / (2 A² p_∞² C_p)

with the universal gas constant R_gas = 8.314 × 10³ J/(kmol K) and the specific heat of the fuel C_p = 4.668 × 10³ J/(kg K). Due to the subsonic nature of the boundary, the pressure is extrapolated from the domain. Having ṁ_ox, Y_ox, T₀, p at the inlet allows us to compute the boundary conditions for the state variables. The downstream boundary is modeled as a supersonic outlet, with constant extrapolation of the state variables.

5.1.3. Design variables

We define a four-dimensional design space with the following design variables d ∈ D ⊂ R⁴: the geometric parameters of the injector radius R_i, the combustion chamber radius R_c, and the location of the fuel injection L_f (see Fig. 7), as well as the mass flow rate ṁ_f that enters the forcing model in Eq. (22). The design variables d = [R_i, R_c, L_f, ṁ_f] and their respective bounds are given in Table 3.

Fig. 4. Comparison of IRIS, importance sampling (IS) with no reuse, and subset simulation for the same designs showing (a) number of samples required in each optimization iteration, (b) number of samples required to calculate the corresponding probabilities of failure, and (c) error in probability of failure estimate (quantified by the coefficient of variation) in each optimization iteration for the speed reducer problem.



5.1.4. Uncertain variables

The reference pressure p_∞ and the reference temperature T_∞ are typically measured from upstream components of the combustion engine and are therefore subject to uncertainty. They enter the inlet boundary conditions; see Section 5.1.2. Another uncertain variable is the fuel-to-oxidizer ratio C_f/o, which enters the forcing term in the governing equations, Eq. (20), and is also uncertain in practice. Since all three uncertain variables are known within certain bounds, we model them as a vector-valued random variable Z with a uniform probability distribution. A realization of the random variable is z = [p_∞, T_∞, C_f/o] ∈ R³. We list the three uncertain variables and their respective probability distributions (in this case, uncorrelated) in Table 4.

5.2. RBDO formulation: Objective function and reliability constraints

Having defined both the design variables and the uncertain variables, we note that solutions of the state Eqs. (20) and (21) depend on the design d = [R_i, R_c, L_f, ṁ_f] and a realization z = [p_∞, T_∞, C_f/o] of the uncertain parameters, i.e., the pressure is p(x) = p(x; d, z). We next describe the cost function for RBDO and the reliability constraints, which

Fig. 5. Number of designs reused in IRIS in each optimization iteration for the speed reducer problem.

Fig. 6. CVRC experimental configuration from [55]. The computational domain for the reactive flow computations is given in Fig. 7.

Fig. 7. Computational domain (dashed area in Fig. 6) for the CVRC model combustor and its segments. The injector radius R_i and combustion chamber radius R_c are design parameters (for this plot chosen as the mean of the design parameter intervals). The location of the fuel source, L_f, is also a design variable.

Table 3
Design variables d = [R_i, R_c, L_f, ṁ_f] ∈ D ⊂ R⁴ used in the combustion engine problem.

Design variable | Description                       | Range               | Initial design | Best design
R_i             | Injector radius                   | [0.889, 1.143] cm   | 1.02           | 1.14
R_c             | Combustion chamber radius         | [1.778, 2.54] cm    | 2.16           | 2.41
L_f             | Location of fuel injection        | [3.5, 4] cm         | 3.75           | 3.5
ṁ_f            | Mass flow rate for fuel injection | [0.026, 0.028] kg/s | 0.027          | 0.026



then completes the RBDO problem formulation from Eq. (1).

5.2.1. Cost function

We are interested in maximizing the C* ("C-star") efficiency, also known as the characteristic exhaust velocity, a common measure of the energy available from the combustion process of the engine. To compute C*, we need the total mass flow rate at the exhaust, ṁ_out = ṁ_ox + ṁ_f. The oxidizer mass flow rate ṁ_ox is given via ṁ_ox = ṁ_f/(φ C_f/o), with the equivalence ratio φ computed from reference mass-flow and oxidizer-flow rates as φ = 0.0844/C_f/o. We then obtain ṁ_ox = 11.852 ṁ_f. The outlet mass flow rate follows as ṁ_out(d) = 12.852 ṁ_f. Recall that ṁ_f is a design variable.

The C* efficiency measure is defined as

C*(d, z) = p̄(d, z) A_t / ṁ_out(d),

with units of m/s. Here, A_t = πR_t² denotes the area of the nozzle throat (see Fig. 7 for the nozzle radius R_t) and

Table 4
Random variable Z ∈ R³ with realization z = [p_∞, T_∞, C_f/o] used in the combustion engine problem.

Uncertain variable | Description                   | Distribution | Range
p_∞                | Upstream pressure             | Uniform      | [1.3, 1.6] MPa
T_∞                | Upstream oxidizer temperature | Uniform      | [1000, 1060] K
C_f/o              | Fuel-to-oxidizer ratio        | Uniform      | [0.10, 0.11]

Fig. 8. Optimization progress using IRIS-RBDO showing (a) convergence history of mean C* values for designs from all optimization iterations, (b) magnified convergence plot of feasible designs vs the cumulative computational cost in terms of number of samples, and (c) probability of failure history in each optimization iteration for the combustion engine problem.



p̄(d, z) := 1/(L_i + 2L_bs + L_c + L_n) ∫_{−L_i}^{2L_bs + L_c + L_n} p(x; d, z) dx

is the spatial mean of the steady-state pressure. We then define the quantity of interest f: D × Ω → R as

f(d, z) = −C*(d, z),

and recall that the RBDO objective is to minimize the cost function from Eq. (1).

5.2.2. Reliability constraint

The reliability constraint is based on the maximum pressure, as engines are unsafe if the maximum chamber pressure exceeds a certain threshold. Here, we limit the pressure deviation in the engine relative to the inflow pressure to 13.5% to define failure; i.e., the engine is safe if

max_x (p(x; d, z) − p_∞)/p_∞ < 0.135.

The limit state function g: D × Ω → R for this example is

g(d, z) = 0.135 − max_x (p(x; d, z) − p_∞)/p_∞,

where the pressure p(x; d, z) is computed by solving Eqs. (20)–(21) for the design d and a realization z of the random variable. Note that p_∞ is an uncertain variable, defined in Section 5.1.4. Failure of the system is defined by g(d, z) < 0. Recall from Eq. (1) that the reliability constraint is P(g(d, Z) < 0) ≤ P_thresh. For the CVRC application, the threshold on the reliability constraint is set at P_thresh = 0.005 with error tolerance ε_tol = 0.05 in Eq. (16).
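The limit state can be evaluated from any pressure profile returned by the solver; the sketch below uses a made-up Gaussian pressure bump in place of the quasi-1D solution p(x; d, z):

```python
# Evaluate the pressure-based limit state from a spatial pressure profile.
import numpy as np

def limit_state(p_profile, p_inf):
    """g(d, z) = 0.135 - max_x (p(x; d, z) - p_inf) / p_inf; failure when g < 0."""
    return 0.135 - np.max((p_profile - p_inf) / p_inf)

p_inf = 1.45e6                                  # upstream pressure realization [Pa]
x = np.linspace(0.0, 0.5, 801)                  # stand-in spatial grid
p_safe = p_inf * (1.0 + 0.10 * np.exp(-((x - 0.2) / 0.05) ** 2))   # 10% peak: safe
p_unsafe = p_inf * (1.0 + 0.20 * np.exp(-((x - 0.2) / 0.05) ** 2)) # 20% peak: failure
print(limit_state(p_safe, p_inf), limit_state(p_unsafe, p_inf))
```

A 10% peak deviation gives g > 0 (safe), while a 20% peak deviation gives g < 0 (failure), matching the 13.5% threshold.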

5.3. Results of RBDO

We use the fmincon optimizer in MATLAB to run the optimization. fmincon is a gradient-based optimizer that uses finite differences to estimate the gradients. The maximum number of samples allowed in each optimization iteration for estimating the probability of failure is set to m_max = 10^4. Note that in this case, the m_max value is governed by the cost

Fig. 9. (a) Comparison of IRIS and importance sampling (IS) with no reuse for the same designs showing number of samples required in each optimization iteration, (b) number of samples required to calculate the corresponding probabilities of failure, and (c) the error in probability of failure estimate (quantified by the coefficient of variation) using IRIS in each optimization iteration for the combustion engine problem.


of evaluating the computational model. The IRIS-RBDO convergence history in Fig. 8(a) shows that more than 2.5 × 10^4 samples are required before the optimizer finds the first feasible design. The probability of failure history in Fig. 8(c) shows the progress of designs from infeasible to feasible regions during the optimization of the combustion engine. The best design obtained through RBDO is given in Table 3, and the optimal mean C* efficiency obtained is 1426.2 m/s.

Fig. 9(a) shows the number of samples used in each optimization iteration by IRIS-RBDO compared to importance sampling with no reuse. Note that the comparison is shown for the same designs in each optimization iteration so that we can make a direct comparison of computational efficiency. When the biasing density was built with no reuse (here, MPP-based), the number of required samples reached the maximum of 10^4 in most of the optimization iterations. There were only six cases for IRIS-RBDO that reached the maximum number of samples. In this case, IRIS-RBDO leads to an overall computational saving of around 76% compared to importance sampling with no reuse. The efficiency of IRIS-RBDO can also be seen from Fig. 9(b), which shows the required number of samples for the corresponding probability of failure estimates. Specifically for low probabilities of failure, the required number of samples is substantially lower compared to building biasing densities with no reuse.

Fig. 9(c) shows that the coefficient of variation (error) in the probability of failure estimate for IRIS-RBDO is below the set tolerance (ε_tol = 0.05) for all but six optimization iterations. In all of the cases where the error tolerance was not met for IRIS-RBDO, the required number of samples reached m_max, which is set to 10^4 (as seen from Fig. 9(a)). These were also the cases where there were no nearby designs, which means that even IRIS builds the biasing density with no information reuse (here, MPP-based).

The number of designs reused in each optimization iteration by IRIS-RBDO is shown in Fig. 10. All six cases of IRIS-RBDO that required 10^4 samples were cases where no nearby designs were available, i.e., no information was reused. However, for most of the cases where information was reused, the required number of samples was lower compared to building biasing densities with no reuse (see Fig. 9(a)). The required number of samples is the same when there are zero reused designs. We can also see the additional advantage of the IRIS method for a gradient-based optimizer using finite differences. As pointed out in Remark 4, after estimating the probability of failure at a particular design, the next n_d probability of failure estimates required for the finite difference estimate of the derivative will always use the IRIS method in our implementation with a gradient-based optimizer. This can be clearly seen from the first n_d + 1 (here, five) optimization iterations shown in Fig. 10, where designs are reused after the first iteration. The efficiency of using the IRIS method is reflected by concurrently looking at the number of samples required by IRIS in the first n_d + 1 optimization iterations in Fig. 9(a). Note that in this case, an optimization iteration refers to either a probability of failure estimate at a given design or the probability of failure estimates required for the gradient.

6. Concluding remarks

This paper introduced IRIS-RBDO (Information Reuse for Importance Sampling in RBDO), a new importance-sampling-based RBDO method. IRIS-RBDO reuses information from past optimization iterations for computationally efficient reliability estimates. The method achieves this by building efficient biasing densities through two levels of information reuse: (1) reusing the current batch of samples to build an a posteriori biasing density with optimal parameters for all designs, and (2) reusing a mixture of the a posteriori biasing densities from nearby past designs to build the biasing density for the current design. The rich source of existing information from past RBDO iterations helps in constructing very efficient biasing densities. The method can also overcome bad initial biasing densities, and it introduces no bias in the reliability estimates. We show the efficiency of IRIS-RBDO through a benchmark speed reducer problem and a combustion engine problem. IRIS-RBDO leads to computational savings of around 51% for the speed reducer problem and around 76% for the combustion engine problem as compared to building biasing densities with no reuse (using the MPP in this case). In this work, we develop the information reuse idea for importance sampling in RBDO, but the method can be easily extended to building initial biasing densities for adaptive importance sampling schemes in the RBDO setup, which we will explore in future work.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgments

This work has been supported in part by the Air Force Office of Scientific Research (AFOSR) MURI on managing multiple information sources of multi-physics systems, Award Numbers FA9550-15-1-0038 and FA9550-18-1-0023, and by the Air Force Center of Excellence on Multi-Fidelity Modeling of Rocket Combustor Dynamics under award FA9550-17-1-0195. The authors thank Dr. Cheng Huang for valuable discussions relating to the CVRC design problem, and Dr. Alexandre Marques and Jiayang Xu for making this solver for the CVRC available.

Appendix A. Proof for Theorem 1

The proof follows from similar work in the cross-entropy method [24], where KL divergence is applied in a different context than in this RBDO work. In this section, we derive the analytic solution for the parameters of the multivariate normal density, which is the distribution chosen in this work. Consider the multivariate normal density with parameters θ = {μ, Σ}:

Fig. 10. Number of designs reused in IRIS in each optimization iteration for the combustion engine problem.


q_θ(z) = (1 / √((2π)^{n_r} |Σ|)) exp( −(1/2) (z − μ)^⊤ Σ^{−1} (z − μ) ).   (A.1)

Taking the logarithm of q_θ(z), we get

ln(q_θ(z)) = −(n_r/2) ln(2π) − (1/2) ln|Σ| − (1/2) (z − μ)^⊤ Σ^{−1} (z − μ).   (A.2)

Then the objective function of the optimization problem given by Eq. (11) at iteration t can be rewritten as

ℒ(μ, Σ) = ∑_{i=1}^{m_t} I_t(d_t, z_i) [p(z_i)/q_{θ_t}(z_i)] [ (n_r/2) ln(2π) + (1/2) ln|Σ| + (1/2)(z_i − μ)^⊤ Σ^{−1} (z_i − μ) ]

        = (n_r/2) ln(2π) ∑_{i=1}^{m_t} I_t(d_t, z_i) [p(z_i)/q_{θ_t}(z_i)] + (1/2) ln|Σ| ∑_{i=1}^{m_t} I_t(d_t, z_i) [p(z_i)/q_{θ_t}(z_i)]
          + (1/2) ∑_{i=1}^{m_t} I_t(d_t, z_i) [p(z_i)/q_{θ_t}(z_i)] (z_i − μ)^⊤ Σ^{−1} (z_i − μ),   (A.3)

where z_i ∼ q_{θ_t}. The local optimum of Eq. (11), given by parameters θ_t* = {μ_t, Σ_t} for RBDO iteration t, can be found by equating the gradients of Eq. (A.3) to zero (Karush–Kuhn–Tucker (KKT) conditions). The local optimum μ_t is found by setting the gradient of ℒ(μ, Σ) w.r.t. μ to zero, as given by

∇_μ ℒ = (1/2) ∑_{i=1}^{m_t} I_t(d_t, z_i) [p(z_i)/q_{θ_t}(z_i)] ( −2 Σ^{−1} (z_i − μ) ) = 0,   (A.4)

which then leads to the solution for the parameter μt as given by

−∑_{i=1}^{m_t} I_t(d_t, z_i) [p(z_i)/q_{θ_t}(z_i)] Σ^{−1} (z_i − μ_t) = 0   ⟹   μ_t = ( ∑_{i=1}^{m_t} I_t(d_t, z_i) [p(z_i)/q_{θ_t}(z_i)] z_i ) / ( ∑_{i=1}^{m_t} I_t(d_t, z_i) [p(z_i)/q_{θ_t}(z_i)] ).   (A.5)

We used the fact that Σ is symmetric positive definite to obtain the derivative in Eq. (A.4). The expression given by Eq. (12) can then be derived by writing out each entry of the vector in Eq. (A.5) as given by

[μ_t]_j = ( ∑_{i=1}^{m_t} I_t(d_t, z_i) [p(z_i)/q_{θ_t}(z_i)] [z_i]_j ) / ( ∑_{i=1}^{m_t} I_t(d_t, z_i) [p(z_i)/q_{θ_t}(z_i)] ),   j = 1, …, n_r.   (A.6)

Since in Eq. (12) the indicator function I_t(d_t, z_i) = 1 only for the failed samples z_i, the indicator function can be removed by taking the sum over the failed samples.

In order to show that Eq. (12) is the global minimum, we take the second-order partial derivative of ℒ(μ, Σ) w.r.t. μ, as given by

∇²_μ ℒ = ∑_{i=1}^{m_t} I_t(d_t, z_i) [p(z_i)/q_{θ_t}(z_i)] Σ^{−1}.   (A.7)

We get convexity in μ because ∇²_μ ℒ is positive definite. We know that the local minimum in convex optimization must also be the global minimum [61]. Thus, Eq. (12) is the global optimum for μ.

In order to derive the local optimum Σ_t, we rewrite ℒ(μ, Σ) using traces due to their usefulness in calculating derivatives of quadratic forms. Note that (z_i − μ)^⊤ Σ^{−1} (z_i − μ) is a scalar and thus is equal to its trace, tr((z_i − μ)^⊤ Σ^{−1} (z_i − μ)). Since the trace is invariant under cyclic permutations, we have

tr((z_i − μ)^⊤ Σ^{−1} (z_i − μ)) = tr(Σ^{−1} (z_i − μ)(z_i − μ)^⊤).   (A.8)

We can take the derivative of the above expression w.r.t. the matrix Σ^{−1} to get

∂/∂Σ^{−1} ( tr(Σ^{−1} (z_i − μ)(z_i − μ)^⊤) ) = (z_i − μ)(z_i − μ)^⊤.   (A.9)

Also note that since Σ^{−1} is a symmetric positive definite matrix, we have

∂ ln|Σ^{−1}| / ∂Σ^{−1} = (Σ^{−1})^{−1} = Σ.   (A.10)

Using Eq. (A.8) and the fact that the determinant of the inverse of a matrix is the inverse of the determinant, ℒ(μ, Σ) can be rewritten as

ℒ(μ, Σ) = −(1/2) ln|Σ^{−1}| ∑_{i=1}^{m_t} I_t(d_t, z_i) [p(z_i)/q_{θ_t}(z_i)] + (1/2) ∑_{i=1}^{m_t} I_t(d_t, z_i) [p(z_i)/q_{θ_t}(z_i)] tr(Σ^{−1} (z_i − μ)(z_i − μ)^⊤).   (A.11)

We can substitute the optimum value μ_t given by Eq. (A.5) into Eq. (A.11) to get

ℒ(Σ^{−1}) = −(1/2) ln|Σ^{−1}| ∑_{i=1}^{m_t} I_t(d_t, z_i) [p(z_i)/q_{θ_t}(z_i)] + (1/2) ∑_{i=1}^{m_t} I_t(d_t, z_i) [p(z_i)/q_{θ_t}(z_i)] tr(Σ^{−1} (z_i − μ_t)(z_i − μ_t)^⊤).   (A.12)

The local optimum Σ_t is found by taking the gradient of ℒ(Σ^{−1}) w.r.t. the matrix Σ^{−1} using the properties described in Eqs. (A.9) and (A.10), and equating it to zero, as given by


∇_{Σ^{−1}} ℒ = −(1/2) Σ ∑_{i=1}^{m_t} I_t(d_t, z_i) [p(z_i)/q_{θ_t}(z_i)] + (1/2) ∑_{i=1}^{m_t} I_t(d_t, z_i) [p(z_i)/q_{θ_t}(z_i)] (z_i − μ_t)(z_i − μ_t)^⊤ = 0,   (A.13)

which then yields

Σ_t = ( ∑_{i=1}^{m_t} I_t(d_t, z_i) [p(z_i)/q_{θ_t}(z_i)] (z_i − μ_t)(z_i − μ_t)^⊤ ) / ( ∑_{i=1}^{m_t} I_t(d_t, z_i) [p(z_i)/q_{θ_t}(z_i)] ).   (A.14)

The expression given by Eq. (13) can be derived by writing out each entry of the matrix in Eq. (A.14) to get

[Σ_t]_{j,k} = ( ∑_{i=1}^{m_t} I_t(d_t, z_i) [p(z_i)/q_{θ_t}(z_i)] ([z_i]_j − [μ_t]_j)([z_i]_k − [μ_t]_k) ) / ( ∑_{i=1}^{m_t} I_t(d_t, z_i) [p(z_i)/q_{θ_t}(z_i)] ),   j, k = 1, …, n_r.

As noted before, the indicator function I_t(d_t, z_i) = 1 only for the failed samples z_i and can be removed by taking the sum over the failed samples in Eq. (13).
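Taken together, Eqs. (12)–(13) say that the optimal biasing parameters are a self-normalized weighted sample mean and covariance over the failed samples. A minimal numerical sketch (our own variable names; `weights` holds the importance ratios p(z_i)/q_{θ_t}(z_i)):

```python
import numpy as np

def optimal_biasing_params(samples, weights, failed):
    """Weighted-mean / weighted-covariance updates of Eqs. (12)-(13).

    samples : (m, n_r) array of z_i drawn from the current biasing density
    weights : (m,) importance weights p(z_i) / q_theta(z_i)
    failed  : (m,) boolean indicator I_t(d_t, z_i), True where g < 0
    """
    w = weights[failed]       # the indicator removes the safe samples
    zf = samples[failed]
    mu = w @ zf / w.sum()                              # Eq. (12)
    diff = zf - mu
    sigma = (w[:, None] * diff).T @ diff / w.sum()     # Eq. (13)
    return mu, sigma

# With equal weights and all samples failed, this reduces to the ordinary
# sample mean and (biased) sample covariance.
samples = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 2.0], [2.0, 2.0]])
mu, sigma = optimal_biasing_params(samples, np.ones(4), np.full(4, True))
```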

In order to show that Eq. (13) is the global minimum, we take the second-order derivative of ℒ(Σ^{−1}) w.r.t. Σ^{−1}, as given by

∇²_{Σ^{−1}} ℒ = (1/2) ∑_{i=1}^{m_t} I_t(d_t, z_i) [p(z_i)/q_{θ_t}(z_i)] (Σ ⊗ Σ).   (A.15)

We get convexity in Σ^{−1} because ∇²_{Σ^{−1}} ℒ is positive definite. Thus, Eq. (13) is the global optimum for Σ [61].

Appendix B. Most probable failure point

The point with the maximum likelihood of failure is called the most probable failure point (MPP); see [18,62] for further reading. Typically, this is found by mapping Z ∼ p to the standard normal space, U ∼ 𝒩(0, diag(1)) in ℝ^{n_r}. Let the mapping be done by using some transformation u = T[z]. Then the MPP can be found by minimizing the distance from the mean to the limit state failure boundary g(z) = 0 in the standard normal space. The optimization problem used to find the MPP is given by

min_{u ∈ ℝ^{n_r}} ‖u‖₂   subject to   g(T^{−1}[u]) ≤ 0.   (B.1)
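The appendix does not prescribe a particular solver for problem (B.1); one standard choice is the Hasofer–Lind–Rackwitz–Fiessler (HL-RF) iteration, sketched below for an illustrative affine limit state already expressed in standard normal space (the example g and its gradient are our own assumptions, not the paper's):

```python
import numpy as np

def hlrf_mpp(g, grad_g, u0, tol=1e-10, max_iter=100):
    """HL-RF fixed-point iteration for the MPP problem (B.1).

    g, grad_g : limit state and its gradient in standard normal space
    u0        : starting point for the iteration
    """
    u = np.asarray(u0, dtype=float)
    for _ in range(max_iter):
        grad = grad_g(u)
        # Project onto the linearization of g(u) = 0 closest to the origin.
        u_new = (grad @ u - g(u)) / (grad @ grad) * grad
        if np.linalg.norm(u_new - u) < tol:
            return u_new
        u = u_new
    return u

# Illustrative affine limit state g(u) = beta - a^T u with ||a|| = 1:
# the MPP is beta * a, at distance beta from the origin.
beta = 3.0
a = np.array([3.0, 4.0]) / 5.0
u_star = hlrf_mpp(lambda u: beta - a @ u, lambda u: -a, np.zeros(2))
```

For an affine limit state the iteration converges in one step; nonlinear limit states generally require the full loop (and HL-RF may need safeguards to converge).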

Appendix C. Defensive importance sampling

While exploring the design space, the system can have small and large failure probabilities. For small failure probabilities, importance sampling with q is an efficient sampling scheme. For large failure probabilities, standard sampling from the nominal density p leads to good convergence of the estimate. Defensive importance sampling [63] proposes to sample from the mixed biasing density

q_α := (1 − α) q + α p.

In [63], it is suggested to use 0.1 ≤ α < 0.5. However, this is for computing small failure probabilities only. An adaptive approach to choosing α can be used to account for both rare and common events. Algorithm 3 describes one such adaptive method, where we start with α = 1 (i.e., sampling from the nominal density) and estimate the mean. We then decrease α if the mean has not converged (which is often the case for small failure probabilities), effectively sampling more from the biasing

Require: Nominal density p, biasing density (can be a mixture) q, design d_t.
Ensure: Adaptive mixture density q_α.

 1: procedure AdaptiveISdensity(p, q, P_thresh)
 2:   α₀ = 1, k = 1
 3:   while P_IS(d_t) not converged do
 4:     q_α^{θ_t} := (1 − α) q + α p
 5:     m_k = k P_thresh^{−1}
 6:     Compute P_IS(d_t) with samples from q_α^{θ_t}
 7:     Assign α = |# of failed samples| / m_k
 8:     k = k + 1
 9:   end while
10:   return P_IS^{q_α^{θ_t}}(d_t)
11: end procedure

Algorithm 3. Adaptive defensive importance sampling.
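A minimal one-dimensional sketch of Algorithm 3 (our own simplifications: Gaussian nominal and biasing densities, a fixed number of rounds in place of the convergence test, and a small floor on α so the mixture stays defensive):

```python
import numpy as np

rng = np.random.default_rng(1)

def norm_pdf(z, mu, sd):
    return np.exp(-0.5 * ((z - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

def adaptive_defensive_is(g, p_mu, p_sd, q_mu, q_sd, p_thresh, n_rounds=3):
    """Adaptive defensive IS in the spirit of Algorithm 3.

    Nominal p = N(p_mu, p_sd^2), biasing q = N(q_mu, q_sd^2);
    q_alpha = (1 - alpha) q + alpha p, starting from alpha = 1.
    """
    alpha, k = 1.0, 1
    for _ in range(n_rounds):
        m = round(k / p_thresh)                 # m_k = k * P_thresh^-1
        from_p = rng.random(m) < alpha          # mixture component choice
        z = np.where(from_p,
                     rng.normal(p_mu, p_sd, m),
                     rng.normal(q_mu, q_sd, m))
        q_alpha = (1 - alpha) * norm_pdf(z, q_mu, q_sd) \
                  + alpha * norm_pdf(z, p_mu, p_sd)
        w = norm_pdf(z, p_mu, p_sd) / q_alpha   # importance weights p / q_alpha
        failed = g(z) < 0
        p_is = np.mean(w * failed)              # IS estimate of P(g < 0)
        alpha = max(failed.sum() / m, 1e-3)     # step 7, floored away from 0
        k += 1
    return p_is

# Rare event Z > 3 under p = N(0, 1), biasing density centered on the
# failure region; the true probability is about 1.35e-3.
p_hat = adaptive_defensive_is(g=lambda z: 3.0 - z, p_mu=0.0, p_sd=1.0,
                              q_mu=3.0, q_sd=1.0, p_thresh=1e-3)
```

After the first round, α drops to roughly the observed failure fraction, so subsequent rounds draw mostly from the biasing density when the event is rare.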


density. Combining defensive importance sampling with IRIS, the information reuse biasing density with defensive importance sampling is given by

q_α := (1 − α) ∑_{i=0}^{t−1} γ_i q_{θ_i*} + α p,

where the γ_i are the mixture weights of the information reuse biasing density.

References

[1] Kuschel N, Rackwitz R. Two basic problems in reliability-based structural optimization. Math Methods Oper Res 1997;46(3):309–33.
[2] Yang R, Gu L. Experience with approximate reliability-based optimization methods. Struct Multidiscip Optim 2004;26(1–2):152–9.
[3] Liang J, Mourelatos ZP, Nikolaidis E. A single-loop approach for system reliability-based design optimization. J Mech Des 2007;129(12):1215–24.
[4] Liang J, Mourelatos ZP, Tu J. A single-loop method for reliability-based design optimisation. Int J Product Dev 2008;5(1–2):76–92.
[5] Du X, Chen W. Sequential optimization and reliability assessment method for efficient probabilistic design. ASME 2002 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers; 2002. p. 871–80.
[6] Sopory A, Mahadevan S, Mourelatos ZP, Tu J. Decoupled and single loop methods for reliability-based optimization and robust design. ASME 2004 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers; 2004. p. 719–27.
[7] Yao W, Chen X, Luo W, van Tooren M, Guo J. Review of uncertainty-based multidisciplinary design optimization methods for aerospace vehicles. Progr Aerosp Sci 2011;47(6):450–79.
[8] Valdebenito MA, Schuëller GI. A survey on approaches for reliability-based optimization. Struct Multidiscip Optim 2010;42(5):645–63.
[9] Aoues Y, Chateauneuf A. Benchmark study of numerical methods for reliability-based design optimization. Struct Multidiscip Optim 2010;41(2):277–94.
[10] Rackwitz R. Reliability analysis–a review and some perspectives. Struct Saf 2001;23(4):365–95.
[11] Hohenbichler M, Gollwitzer S, Kruse W, Rackwitz R. New light on first- and second-order reliability methods. Struct Saf 1987;4(4):267–84.
[12] Dubourg V, Sudret B, Bourinet J-M. Reliability-based design optimization using kriging surrogates and subset simulation. Struct Multidiscip Optim 2011;44(5):673–90.
[13] Bichon BJ, McFarland JM, Mahadevan S. Efficient surrogate models for reliability analysis of systems with multiple failure modes. Reliab Eng Syst Saf 2011;96(10):1386–95.
[14] Bichon BJ, Eldred MS, Mahadevan S, McFarland JM. Efficient global surrogate modeling for reliability-based design optimization. J Mech Des 2013;135(1):11009.
[15] Qu X, Haftka R. Reliability-based design optimization using probabilistic sufficiency factor. Struct Multidiscip Optim 2004;27(5):314–25.
[16] Moustapha M, Sudret B, Bourinet J-M, Guillaume B. Quantile-based optimization under uncertainties using adaptive kriging surrogate models. Struct Multidiscip Optim 2016;54(6):1403–21.
[17] Moustapha M, Sudret B. Surrogate-assisted reliability-based design optimization: a survey and a unified modular framework. Struct Multidiscip Optim 2019:1–20.
[18] Melchers R. Importance sampling in structural systems. Struct Saf 1989;6(1):3–10.
[19] Liu JS. Monte Carlo strategies in scientific computing. Springer Science & Business Media; 2008.
[20] Rockafellar RT, Royset JO. On buffered failure probability in design and optimization of structures. Reliab Eng Syst Saf 2010;95(5):499–510.
[21] Harajli MM, Rockafellar RT, Royset JO. Importance sampling in the evaluation and optimization of buffered failure probability. Proceedings of 12th International Conference on Applications of Statistics and Probability in Civil Engineering (ICASP12), Vancouver, Canada, July 12–15, 2015. 2015. https://doi.org/10.14288/1.0076214.
[22] Depina I, Papaioannou I, Straub D, Eiksund G. Coupling the cross-entropy with the line sampling method for risk-based design optimization. Struct Multidiscip Optim 2017;55(5):1589–612.
[23] Heinkenschloss M, Kramer B, Takhtaganov T, Willcox K. Conditional-value-at-risk estimation via reduced-order models. SIAM/ASA J Uncertainty Quantif 2018;6(4):1395–423.
[24] Rubinstein RY, Kroese DP. The cross-entropy method: a unified approach to combinatorial optimization, Monte-Carlo simulation and machine learning. Springer Science & Business Media; 2013.
[25] Kroese DP, Rubinstein RY, Glynn PW. The cross-entropy method for estimation. Handbook of Statistics. 31. Elsevier; 2013. p. 19–34.
[26] Wang Z, Song J. Cross-entropy-based adaptive importance sampling using von Mises-Fisher mixture for high dimensional reliability analysis. Struct Saf 2016;59:42–52.
[27] Cornuet J-M, Marin J-M, Mira A, Robert CP. Adaptive multiple importance sampling. Scandinavian J Stat 2012;39(4):798–812.
[28] Kurtz N, Song J. Cross-entropy-based adaptive importance sampling using Gaussian mixture. Struct Saf 2013;42:35–44.
[29] Au S-K, Beck JL. Estimation of small failure probabilities in high dimensions by subset simulation. Probab Eng Mech 2001;16(4):263–77.
[30] Papaioannou I, Betz W, Zwirglmaier K, Straub D. MCMC algorithms for subset simulation. Probab Eng Mech 2015;41:89–103.
[31] Au S-K. On MCMC algorithm for subset simulation. Probab Eng Mech 2016;43:117–20.
[32] Li J, Li J, Xiu D. An efficient surrogate-based method for computing rare failure probability. J Comput Phys 2011;230(24):8683–97.
[33] Dubourg V, Sudret B, Deheeger F. Metamodel-based importance sampling for structural reliability analysis. Probab Eng Mech 2013;33:47–57.
[34] Peherstorfer B, Kramer B, Willcox K. Combining multiple surrogate models to accelerate failure probability estimation with expensive high-fidelity models. J Comput Phys 2017;341:61–75. https://doi.org/10.1016/j.jcp.2017.04.012.
[35] Peherstorfer B, Kramer B, Willcox K. Multifidelity preconditioning of the cross-entropy method for rare event simulation and failure probability estimation. SIAM/ASA J Uncertainty Quantif 2018;6(2):737–61.
[36] Kramer B, Marques A, Peherstorfer B, Villa U, Willcox K. Multifidelity probability estimation via fusion of estimators. J Comput Phys 2019;392:385–402.
[37] Ng LW-T. Multifidelity approaches for design under uncertainty. Massachusetts Institute of Technology; 2013.
[38] Ng LW-T, Willcox KE. Multifidelity approaches for optimization under uncertainty. Int J Numer Methods Eng 2014;100(10):746–72.
[39] Ng LW-T, Willcox KE. Monte Carlo information-reuse approach to aircraft conceptual design optimization under uncertainty. J Aircraft 2016;53(2):427–38. https://doi.org/10.2514/1.C033352.
[40] Cook LW, Jarrett JP, Willcox KE. Generalized information reuse for optimization under uncertainty with non-sample average estimators. Int J Numer Methods Eng 2018;115(12):1457–76.
[41] Zhang G, Nikolaidis E, Mourelatos ZP. An efficient re-analysis methodology for probabilistic vibration of large-scale structures. J Mech Des 2009;131(5):51007.
[42] Chaudhuri A, Waycaster G, Price N, Matsumura T, Haftka RT. NASA uncertainty quantification challenge: an optimization-based methodology and validation. J Aerosp Inf Syst 2015;12(1):10–34.
[43] Kuczera RC, Mourelatos ZP, Nikolaidis E, Li J. A simulation-based RBDO method using probabilistic re-analysis and a trust region approach. ASME 2009 International Design Engineering Technical Conferences and Computers and Information in Engineering Conference. American Society of Mechanical Engineers; 2009. p. 1149–59.
[44] Beaurepaire P, Jensen H, Schuëller G, Valdebenito M. Reliability-based optimization using bridge importance sampling. Probab Eng Mech 2013;34:48–57.
[45] Owen AB. Monte Carlo theory, methods and examples. 2013.
[46] Kullback S, Leibler RA. On information and sufficiency. Ann Math Stat 1951;22(1):79–86.
[47] Rubinstein R, Samorodnitsky G. Variance reduction by the use of common and antithetic random variables. J Stat Comput Simul 1985;22(2):161–80.
[48] Glasserman P, Yao DD. Some guidelines and guarantees for common random numbers. Manage Sci 1992;38(6):884–908.
[49] Marti K. Differentiation of probability functions: the transformation method. Comput Math Appl 1995;30(3–6):361–82.
[50] Jensen H, Valdebenito M, Schuëller G, Kusanovic D. Reliability-based optimization of stochastic systems using line search. Comput Methods Appl Mech Eng 2009;198(49–52):3915–24.
[51] Lacaze S, Brevault L, Missoum S, Balesdent M. Probability of failure sensitivity with respect to decision variables. Struct Multidiscip Optim 2015;52(2):375–81.
[52] Sobieszczanski-Sobieski J, Morris A, van Tooren M. Multidisciplinary design optimization supported by knowledge based engineering. John Wiley & Sons; 2015.
[53] SpaceX to begin testing on reusable Falcon 9 technology this year. https://www.nasaspaceflight.com/2012/01/spacex-testing-reusable-falcon-9-technology-this-year/; 2012. Accessed: 2018-11-14.
[54] Miller K, Sisco J, Nugent N, Anderson W. Combustion instability with a single-element swirl injector. J Propul Power 2007;23(5):1102–12.
[55] Yu Y, Sisco JC, Rosen S, Madhav A, Anderson WE. Spontaneous longitudinal combustion instability in a continuously-variable resonance combustor. J Propul Power 2012;28(5):876–87.
[56] Feldman T, Harvazinski M, Merkle C, Anderson W. Comparison between simulation and measurement of self-excited combustion instability. 48th AIAA/ASME/SAE/ASEE Joint Propulsion Conference. 2012. p. 4088.
[57] Smith R, Ellis M, Xia G, Sankaran V, Anderson W, Merkle C. Computational investigation of acoustics and instabilities in a longitudinal-mode rocket combustor. AIAA J 2008;46(11):2659–73.
[58] Smith R, Xia G, Anderson W, Merkle C. Computational studies of the effects of oxidiser injector length on combustion instability. Combust Theory Model 2012;16(2):341–68.
[59] Frezzotti ML, Nasuti F, Huang C, Merkle C, Anderson WE. Response function modeling in the study of longitudinal combustion instability by a quasi-1D Eulerian solver. 51st AIAA/ASME/SAE/ASEE Joint Propulsion Conference. 2015. p. 27–9.
[60] Xu J, Duraisamy K. Reduced-order reconstruction of model rocket combustor flows. 53rd AIAA/SAE/ASEE Joint Propulsion Conference. 2017. p. 4918. https://doi.org/10.2514/6.2017-4918.
[61] Rockafellar RT. Lagrange multipliers and optimality. SIAM Rev 1993;35(2):183–238.
[62] Du X, Chen W. A most probable point-based method for efficient uncertainty analysis. J Des Manuf Autom 2001;4(1):47–66.
[63] Hesterberg T. Weighted average importance sampling and defensive mixture distributions. Technometrics 1995;37(2):185–94.
