    EngOpt 2008 - International Conference on Engineering Optimization

    Rio de Janeiro, Brazil, 01 - 05 June 2008.

    Global Optimal Solutions for Reservoir Engineering Applications

Leonardo José do Nascimento Guimarães

Bernardo Horowitz

Silvana Maria Bastos Afonso

Univ. Federal de Pernambuco, Dept. Eng. Civil, Recife-PE, Brazil, [email protected], [email protected], [email protected]

1. Abstract
In oil reservoir management it is necessary to periodically update the simulation model in order to better predict its future behavior. The problem of history matching consists in determining rock and rock/fluid interaction properties in such a way that observed fluid productions are reproduced. It is a medium-size inverse problem whose objective function measures the deviation of the predicted from the observed production. The major difficulty results from the fact that each function evaluation requires a complete reservoir simulation, covering the observed production history, which is computationally expensive. Moreover, the problem is known to be multimodal, with several local minima.

A common approach to tackle these problems is to construct cheap global approximation models of the responses, often called metamodels or surrogates. These are based on simulation results obtained for a limited number of designs using global data-fitting techniques such as DACE. All repetitive analyses required by the optimization algorithm are performed on the cheap metamodels. In this study a two-stage approach is employed, based on the Efficient Global Optimization algorithm, EGO, due to Jones. First, an initial sample of designs is obtained using a Design of Experiments (DOE) technique such as Latin Hypercube. Parallel simulation runs for the initial sample are used to construct a Kriging metamodel. In the second stage the metamodel is used to guide the search for promising designs, which are added to the sample in order to update the model until a suitable termination criterion is fulfilled. The selection of designs adaptively added to the sample should balance the need for improving the value of the objective function against that of improving the quality of the prediction, so that one does not get trapped in a local minimum. In the EGO algorithm this balance is achieved through the use of the Expected Improvement merit function.

In this study the original EGO algorithm is modified to exploit parallelism. The modified algorithm is applied to two problems. The first is a small two-variable multimodal problem that can be graphically represented. The second is a history matching problem which uses an in-house finite element simulator.

    2. Keywords: Global optimization; EGO; Reservoir engineering

3. Introduction
In oil reservoir management it is necessary to periodically update the simulation model in order to better predict its future behavior. The problem of history matching consists in determining rock and rock/fluid interaction properties in such a way that observed fluid productions are reproduced. This is necessary since it is not possible to directly measure the spatial distribution of these properties. It is a medium-size inverse problem whose objective function measures the deviation of the predicted from the observed production. The major difficulty results from the fact that each function evaluation requires a complete reservoir simulation, covering the observed production history, which may require several hours of CPU time. Moreover, the problem is known to be multimodal, with several local minima. As simulations are computationally expensive, their direct use in optimization algorithms is infeasible. A common approach to tackle this problem is to construct cheap global approximation models of the responses, often called metamodels or surrogates. These are based on simulation results obtained for a limited number of designs using global data-fit techniques such as DACE [15]. All repetitive analyses required by the optimization algorithm are performed on the cheap metamodel. The ability of the metamodel to tame general nonsmoothness of the simulation responses is well recognized [3]. Should nonsmoothness become more significant, specific techniques may have to be employed [5].

In this study a two-stage approach is employed, based on the efficient global optimization algorithm, EGO [11]. The EGO algorithm has recently been applied to reservoir engineering for reservoir characterization as well as EOR [13, 14, 20]. First, an initial sample of designs is obtained using a Design of Experiments (DOE) technique, such as Latin Hypercube [9]. Simulation runs for the design points are used to construct the metamodel. In the second stage the metamodel is used to guide the search for promising designs, which are then added to the sample in order to update the metamodel until a suitable termination criterion is fulfilled. The selection of designs to be added to the sample is called the Infill Sampling Criterion (ISC) [16]. The ISC should balance the need of improving the value of the objective function against that of improving the quality of the prediction, so that one does not get trapped in a local minimum. Considering the computational cost of the simulation and the availability of parallel computing, it is highly desirable for the ISC to include in each iteration multiple promising designs whose simulations may be performed concurrently. In this study the ISC proposed by Jones et al. [10], which uses the expected improvement merit function, is modified to exploit parallelism.

Initially the optimization strategy as well as the construction of DACE metamodels are described. Then, two examples are presented. The first is a two-variable multimodal problem that can be represented graphically. The second is a history matching problem for a two-dimensional, two-phase small reservoir model.

4. Optimization Strategy
Consider problem (1), where the global optimizer is sought subject to box-type constraints:

\min_{\mathbf{x} \in \mathbb{R}^n} f(\mathbf{x}), \quad \text{subject to: } x_i^l \le x_i \le x_i^u, \; i = 1, \dots, n \qquad (1)


As mentioned above, the optimization strategy adopted in this study is based on the efficient global optimization algorithm, EGO [10], modified by a proposed parallel infill sampling criterion. The strategy is described in the following sections, starting with the construction of the DACE metamodels, followed by the original EGO infill sampling criterion. The EGO algorithm is then briefly described, and finally the motivation and implementation of the proposed parallel sampling criterion are presented.

    4.1. DACE Metamodels

Metamodel construction typically involves one of the following strategies: data fitting schemes [7, 12, 18], Taylor series expansions [7, 8], and reduced basis [1]. Data-fit surrogates typically involve interpolation or (polynomial) regression of a set of data generated from the high-fidelity model. Regression models present two major drawbacks for metamodel construction: (1) the difficulty of specifying the regression terms, as the functional form of the high-fidelity function is unknown a priori; (2) the assumption of independent errors (the correlation between the points is not considered).

Interpolation models commonly used as surrogates are based on techniques known as Kriging, a well-known stochastic process model in the field of statistics and geostatistics, which by the late eighties started to be used as an approximation technique for outputs obtained from deterministic computer simulations. Kriging models [6, 10, 18] differ from regression models in the sense that they in general give a global approximation to the response and can also capture oscillatory response trends. Moreover, the sample values are assumed to exhibit spatial correlation, with response values modeled via a Gaussian process around each sample location. After the unknowns are estimated, some validation checks need to be conducted to judge the quality of the generated substitute model. This work focuses on Kriging-type approximations. The main aspects of this methodology are described next.

4.1.1. Design of Experiments (DOE)
The first step in the construction of a data-fitting based metamodel is to generate the sampling points. These are unique locations in the design space determined by a design of experiments (DOE) [7, 12] approach. At these locations the response values of the high-fidelity model are obtained to construct the approximate model (for instance, Kriging interpolation is strongly influenced by the sampling locations). The selection of samples is a very important stage in building a reliable metamodel. Specifically, for computationally expensive function evaluations one must seek an effective sampling plan, meaning the minimum number of points that ensures a metamodel of good accuracy.

Commonly considered approaches are Monte Carlo, Quasi Monte Carlo, Latin Hypercube sampling (LHS), orthogonal arrays, and centroidal Voronoi tessellation (CVT) [7, 12]. In this work LHS will be used throughout. To obtain an LHS sample, the range of each variable is divided into p bins (subintervals) of equal probability. For ndv design variables this partitioning yields a total of p^ndv subintervals in the parameter space. Next, p samples are randomly selected in the design domain under the following restrictions: each sample is randomly placed inside a domain partition, and for each one-dimensional projection (x_i) of the samples and partitions there is one and only one sample in each partition [9].

The randomness inherent in the procedure means that there is more than one arrangement of samples that meets the LHS criteria. As LHS is stochastic in nature, it is advisable to run the scheme several times and select the best sample for usage. This can be done automatically following the suggestion of [12], in which for each LHS a quantity \Phi is obtained as:

\Phi = \sum_{i=1}^{m-1} \sum_{j=i+1}^{m} \frac{1}{(x_i - x_j)^2 + (y_i - y_j)^2} \qquad (2)

where m is the total number of samples. According to [12], the LHS distribution that gives the minimum value of \Phi is the selected sample.
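The selection procedure above can be sketched as follows: draw several random Latin Hypercube samples and keep the one that minimizes the inverse-squared-distance criterion of Eq. (2), here generalized to ndv dimensions. This is an illustrative sketch, not the paper's code; all function names are assumptions.

```python
import numpy as np

def latin_hypercube(p, ndv, rng):
    """p samples in [0,1]^ndv: one random point per equal-probability bin."""
    u = rng.random((p, ndv))                               # position inside each bin
    bins = np.array([rng.permutation(p) for _ in range(ndv)]).T
    return (bins + u) / p                                  # one sample per bin per axis

def phi_criterion(sample):
    """Sum of inverse squared pairwise distances (smaller = better spread)."""
    d2 = np.sum((sample[:, None, :] - sample[None, :, :]) ** 2, axis=-1)
    iu = np.triu_indices(len(sample), k=1)                 # each pair counted once
    return np.sum(1.0 / d2[iu])

def best_lhs(p, ndv, n_trials=20, seed=0):
    """Run the stochastic LHS scheme several times, keep the minimum-Phi one."""
    rng = np.random.default_rng(seed)
    candidates = [latin_hypercube(p, ndv, rng) for _ in range(n_trials)]
    return min(candidates, key=phi_criterion)

sample = best_lhs(p=10, ndv=2)
```

Note that the one-sample-per-bin restriction holds by construction: each column of `bins` is a permutation of 0..p-1.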

4.1.2. Kriging Formulation
In this technique the following model is considered for the true unknown function:

f(\mathbf{x}) = \sum_{j=1}^{k} \beta_j N_j(\mathbf{x}) + \epsilon(\mathbf{x}) \qquad (3)

In the above equation the first part is a linear regression of the data with k regressors, in which the \beta_j (j = 1, \dots, k) are the unknowns, and \epsilon(\mathbf{x}), the error, is responsible for creating a localized deviation from the global model. As previously mentioned, polynomials are generally used to construct N_j(\mathbf{x}). A traditional approach, called ordinary Kriging, employs a zero-order (constant) function. In the Kriging process a correlation between the errors, related to the distance between the corresponding points, is assumed. Different forms of correlation functions may be employed [8]. In this work the following Gaussian correlation form is assumed:

\mathrm{Corr}\left[\epsilon(\mathbf{x}^{(i)}), \epsilon(\mathbf{x}^{(j)})\right] = \exp\left[-d(\mathbf{x}^{(i)}, \mathbf{x}^{(j)})\right] \qquad (4)

in which d(\mathbf{x}^{(i)}, \mathbf{x}^{(j)}) is a special normalized distance given by

d(\mathbf{x}^{(i)}, \mathbf{x}^{(j)}) = \sum_{k=1}^{ndv} \theta_k \left| x_k^{(i)} - x_k^{(j)} \right|^{p_k}, \qquad \theta_k \ge 0, \; p_k \in [1, 2] \qquad (5)

where ndv is the total number of variables, the \theta_k are the unknown correlation parameters used to fit the model, which measure the importance or activity of x_k, and p_k relates to the smoothness of the function in terms of the variable x_k [10]. In this work p_k = 2 is considered. As already pointed out, this correlation form is so powerful that a simple constant term for the regression part of (3) (ordinary Kriging) can be used in place of the model previously presented, such that

f(\mathbf{x}^{(i)}) = \mu + \epsilon(\mathbf{x}^{(i)}), \qquad i = 1, \dots, m \qquad (6)

where \mu is the mean of the stochastic process and \epsilon(\mathbf{x}^{(i)}) is Normal(0, \sigma^2). In the Kriging model a total of 2k + 2 unknowns (\mu, \sigma^2, \theta_1, \dots, \theta_k, p_1, \dots, p_k) have to be obtained to compose the approximation. These parameters are estimated by maximizing the likelihood function, lf, given by

lf = \frac{1}{(2\pi)^{m/2} (\sigma^2)^{m/2} |\mathbf{R}|^{1/2}} \exp\left[ -\frac{(\mathbf{f} - \mathbf{1}\mu)' \mathbf{R}^{-1} (\mathbf{f} - \mathbf{1}\mu)}{2\sigma^2} \right] \qquad (7)

in which \mathbf{f} = (f^{(1)}, \dots, f^{(m)})^T collects the true function values at the samples, \mathbf{1} = (1, \dots, 1)^T, and \mathbf{R} is an m \times m correlation matrix with unit values along the diagonal, whose (i, j) entry is \mathrm{Corr}[\epsilon(\mathbf{x}^{(i)}), \epsilon(\mathbf{x}^{(j)})] between any two of the m sampled data points \mathbf{x}^{(i)} and \mathbf{x}^{(j)}.

The maximization of the lf function (Eq. (7)) leads to [10]:

\hat{\mu} = \frac{\mathbf{1}' \mathbf{R}^{-1} \mathbf{f}}{\mathbf{1}' \mathbf{R}^{-1} \mathbf{1}} \quad \text{and} \quad \hat{\sigma}^2 = \frac{(\mathbf{f} - \mathbf{1}\hat{\mu})' \mathbf{R}^{-1} (\mathbf{f} - \mathbf{1}\hat{\mu})}{m} \qquad (8)

Substituting the above estimates into Eq. (7) and maximizing the lf function, the remaining unknowns (the correlation parameters \theta and p) are obtained. After that, the best linear unbiased predictor (BLUP) at any point of the design domain can be obtained as

\hat{f}(\mathbf{x}) = \hat{\mu} + \mathbf{r}' \mathbf{R}^{-1} (\mathbf{f} - \mathbf{1}\hat{\mu}) \qquad (9)

in which \mathbf{r}(\mathbf{x}) is a correlation vector, which correlates an untried \mathbf{x} with the m sampled data points. The Kriging formulation also allows obtaining the mean squared error of the predictor at any untried point [10], as follows:

s^2(\mathbf{x}) = \hat{\sigma}^2 \left[ 1 - \mathbf{r}' \mathbf{R}^{-1} \mathbf{r} + \frac{\left(1 - \mathbf{1}' \mathbf{R}^{-1} \mathbf{r}\right)^2}{\mathbf{1}' \mathbf{R}^{-1} \mathbf{1}} \right] \qquad (10)

It will be convenient to work hereafter with the square root of the mean squared error, s = \sqrt{s^2(\mathbf{x})}, denoted RMSE.

In order to assess the adequacy of the Kriging model we use the PRESS error measure, also referred to as the leave-one-out cross-validation method. It assesses the accuracy of the model when individual data points are omitted from the data used to create the approximation:

PRESS = \sum_{i=1}^{m} \left( f_i - \hat{f}_{-i} \right)^2 \qquad (11)

where \hat{f}_{-i} is the approximation obtained by omitting the i-th data point from the data set, evaluated at that point.
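Eqs. (4)-(11) can be assembled into a minimal ordinary-Kriging sketch: Gaussian correlation with a fixed \theta (no likelihood search), the \mu and \sigma^2 estimates of Eq. (8), the BLUP of Eq. (9), the MSE of Eq. (10), and the PRESS measure of Eq. (11). Function and variable names here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def corr_matrix(X, theta):
    """Gaussian correlation of Eqs. (4)-(5) with p_k = 2."""
    d = np.sum(theta * (X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    return np.exp(-d)

def fit_ordinary_kriging(X, f, theta):
    m = len(f)
    R = corr_matrix(X, theta) + 1e-10 * np.eye(m)   # jitter against ill-conditioning
    Ri = np.linalg.inv(R)
    one = np.ones(m)
    mu = one @ Ri @ f / (one @ Ri @ one)            # Eq. (8)
    sigma2 = (f - mu) @ Ri @ (f - mu) / m           # Eq. (8)
    return dict(X=X, f=f, theta=theta, Ri=Ri, mu=mu, sigma2=sigma2)

def predict(model, x):
    X, f, th = model["X"], model["f"], model["theta"]
    r = np.exp(-np.sum(th * (X - x) ** 2, axis=-1)) # correlation vector r(x)
    Ri, mu = model["Ri"], model["mu"]
    one = np.ones(len(f))
    fhat = mu + r @ Ri @ (f - mu)                   # Eq. (9), BLUP
    s2 = model["sigma2"] * (1 - r @ Ri @ r
         + (1 - one @ Ri @ r) ** 2 / (one @ Ri @ one))   # Eq. (10)
    return fhat, max(s2, 0.0)

def press(X, f, theta):
    """Leave-one-out error of Eq. (11)."""
    total = 0.0
    for i in range(len(f)):
        keep = np.arange(len(f)) != i
        m = fit_ordinary_kriging(X[keep], f[keep], theta)
        fi, _ = predict(m, X[i])
        total += (f[i] - fi) ** 2
    return total

# Usage on a 1-D toy function: the predictor interpolates the samples
# (fhat = f, s^2 = 0 at sampled points, up to the jitter).
X = np.linspace(0.0, 10.0, 8).reshape(-1, 1)
f = np.sin(X).ravel()
theta = np.array([1.0])
model = fit_ordinary_kriging(X, f, theta)
fhat3, s2_3 = predict(model, X[3])
```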

4.2. Infill Sampling Criterion

The second stage of the metamodel-based approach used in this study searches at each iteration for promising design points to enter the training sample set. The simplest approach would be to choose the minimizer of the predictor itself. This strategy would put too much emphasis on the local behavior of the objective function, forcing convergence to the local minimum closest to the predictor minimizer. This can be easily demonstrated by the example shown in Fig. 1, where the exact, true function is depicted with a dashed line while the Kriging approximation based on five samples is plotted with a continuous line. There is a larger concentration of samples on the left, which can easily happen in the case of a larger number of variables, especially after some new designs are added to the sample. Also plotted at the bottom of the figure is the value of the RMSE. The standard error is zero at the sampled points and larger in the poorly sampled right half of the plot. If one chooses the predictor minimizer to enter the sample, it is clear that the process will eventually converge to the local minimum on the left, missing the global minimum of the objective function.

The example shows that one should not concentrate only on improving the value of the objective function but also on improving the accuracy of the predictor. In the latter case, designs with large prediction uncertainty should also be included in the training sample. In order to have a balanced approach one should search for points where the probability of improvement is higher [10]. Let f_{\min} be the current best value of the objective function:


Figure 1. Initial DACE model

f_{\min} = \min\left\{ f(\mathbf{x}_1), f(\mathbf{x}_2), \dots, f(\mathbf{x}_N) \right\} \qquad (12)

The improvement, I(\mathbf{x}), over f_{\min} at \mathbf{x} is defined by:

I(\mathbf{x}) = \max\left\{ f_{\min} - f(\mathbf{x}), 0 \right\} \qquad (13)

If one assumes that f(\mathbf{x}) is a realization of a Gaussian process, there is some probability that it will improve upon f_{\min}. If one weights the improvement by the associated probability one gets the expected improvement function:

EI(\mathbf{x}) = E\left[ I(\mathbf{x}) \right] \qquad (14)

which is the expected value of the improvement at \mathbf{x}. It can be shown that [10]:

EI(\mathbf{x}) = \left( f_{\min} - \hat{f}(\mathbf{x}) \right) \Phi\!\left( \frac{f_{\min} - \hat{f}(\mathbf{x})}{s} \right) + s\, \phi\!\left( \frac{f_{\min} - \hat{f}(\mathbf{x})}{s} \right) \qquad (15)

where \Phi is the standard normal cumulative distribution function and \phi is the standard normal density function.
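Eq. (15) is straightforward to evaluate given a predictor value and its RMSE at an untried point; the sketch below uses the error function from the standard library (names are illustrative).

```python
import math

def expected_improvement(fmin, fhat, s):
    """EI of Eq. (15) for predictor value fhat and RMSE s at an untried point."""
    if s <= 0.0:
        return 0.0                                          # EI vanishes at sampled points
    z = (fmin - fhat) / s
    Phi = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))        # standard normal CDF
    phi = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi) # standard normal density
    return (fmin - fhat) * Phi + s * phi
```

As the text discusses next, the first term rewards predicted decrease below fmin and the second rewards large prediction uncertainty s.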

The first term is the predicted difference between the current minimum and \hat{f}(\mathbf{x}), weighted by the probability of improvement [17]. Hence it is large where \hat{f}(\mathbf{x}) is likely to be smaller than f_{\min}. The second term is large when \hat{f}(\mathbf{x}) is close to f_{\min} and s is large, denoting much uncertainty in the prediction. Therefore the expected improvement function strikes a balance between exploiting regions of the design space where good solutions have been found and exploring regions that are undersampled and thus have greater uncertainty [2]. A plot of the expected improvement function is shown at the bottom of Fig. 2. It can be seen that it has two local maxima: one in the region of expected function decrease and another in the poorly sampled area.

The optimization of the expected improvement will add to the sample a point very close to the local minimum of the true objective function. But once this new point enters the training sample set, the resulting expected improvement function attains a maximum close to the global minimum, as can be seen from Fig. 3. Now it can be readily perceived that the addition of this new design to the sample will ensure final convergence to the true global minimum.

Figure 2. Expected improvement function

As can be observed from Figs. 2 and 3, the expected improvement function vanishes at sampled points and tends to be highly multimodal [10]. The local maxima of the expected improvement function are at designs with the highest probability of function decrease or predictor uncertainty. Therefore those designs are promising points to enter the sample in order to better update the metamodel [19].

4.3. The EGO Algorithm
The algorithm proposed by [10] is schematically described below:

[1] Generate a small number of samples from the objective function:
a) An initial sample of N = 10d has been suggested, where N = initial sample size and d = number of variables [10]. Others propose N = (d+1)(d+2)/2, the number of points necessary to fit a full quadratic polynomial [2].
b) Latin Hypercube or another sampling technique is used to generate the initial sample [9].
[2] Construct an ordinary Kriging based metamodel from the initial sample:
a) Other techniques such as radial basis functions may also be used [19].
b) The metamodel is cross-validated by leaving one observation out at a time and predicting it based on the remaining sample points. If cross-validation fails, a log or inverse transformation is tried.
[3] Find the design that maximizes the expected improvement function.
[4] If the maximum improvement is less than TOL * f_{\min}, stop. The suggested value for TOL is 1%, but this value may have to be reviewed if a log or inverse transformation was applied [10].
[5] Add the new design to the sample and update the metamodel. Go to step [3].

As the computational cost of reservoir simulation is high and the availability of parallel computation has increased dramatically in the oil industry, it is highly desirable to include multiple designs in the ISC. In fact, as suggested earlier, local maxima of the expected improvement function are generally promising points whose addition to the sample may increase the efficiency of the algorithm.
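Steps [1]-[5] can be sketched end-to-end for a 1-D toy function, using ordinary Kriging with a fixed correlation parameter and a dense grid in place of the expected-improvement maximizer (the paper uses DIRECT for that search). This is an illustrative sketch under those simplifications; the toy objective and all names are assumptions.

```python
import math
import numpy as np

def true_f(x):                        # toy multimodal objective (not the paper's)
    return np.sin(x) + np.sin(10 * x / 3)

def krig(X, f, xs, theta=20.0):
    """Ordinary Kriging predictor and RMSE on grid xs (fixed theta, p=2)."""
    R = np.exp(-theta * (X[:, None] - X[None, :]) ** 2) + 1e-10 * np.eye(len(X))
    Ri = np.linalg.inv(R)
    one = np.ones(len(X))
    mu = one @ Ri @ f / (one @ Ri @ one)
    s2h = (f - mu) @ Ri @ (f - mu) / len(X)
    r = np.exp(-theta * (X[:, None] - xs[None, :]) ** 2)
    fhat = mu + r.T @ Ri @ (f - mu)
    s2 = s2h * np.clip(1 - np.einsum('ij,ik,kj->j', r, Ri, r)
                       + (1 - one @ Ri @ r) ** 2 / (one @ Ri @ one), 0, None)
    return fhat, np.sqrt(s2)

def expected_improvement(fmin, fhat, s):
    z = np.where(s > 0, (fmin - fhat) / np.maximum(s, 1e-300), 0.0)
    Phi = 0.5 * (1 + np.vectorize(math.erf)(z / math.sqrt(2)))
    phi = np.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)
    return np.where(s > 0, (fmin - fhat) * Phi + s * phi, 0.0)

X = np.linspace(2.7, 7.5, 5)          # step [1]: small initial sample
f = true_f(X)
xs = np.linspace(2.7, 7.5, 400)
for _ in range(30):
    fhat, s = krig(X, f, xs)          # step [2]: (re)build metamodel
    ei = expected_improvement(f.min(), fhat, s)
    if ei.max() < 0.01 * abs(f.min()):   # step [4]: stop at TOL = 1%
        break
    xnew = xs[np.argmax(ei)]          # step [3]: EI maximizer
    X, f = np.append(X, xnew), np.append(f, true_f(xnew))  # step [5]
best = f.min()
```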

4.4. Parallel infill sampling criterion

We propose below a parallel infill sampling criterion. It is motivated by the following observations:

Local maxima of the expected improvement function are designs where there is either a high probability of objective function decrease or high predictor uncertainty. These are promising designs for improving the objective function as well as the metamodel's predictive capability.

As one includes a local maximum of the expected improvement in the sample, the value of the updated expected improvement drastically reduces in the neighborhood of that point, and another local maximum becomes dominant, as can be seen in Fig. 3.

Let np be the number of available processors and nd ≤ np the number of promising designs to enter the sample. The proposed change to step [3] of the EGO algorithm is given below:

Copy current sample set to temporary working sample set.
For i = 1, ..., nd
  Maximize the expected improvement function, obtaining x_i*.
  Append the pair (x_i*, \hat{f}(x_i*)) to the temporary working sample.
  Temporarily update the Kriging metamodel.
Next i
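The loop above can be sketched as follows: nd promising points are picked per iteration by maximizing EI, temporarily "believing" the Kriging prediction at each pick (no true-function call, no refit of the correlation parameters) before searching for the next one. The sketch is 1-D with a fixed theta and a grid search in place of DIRECT; all names are illustrative.

```python
import math
import numpy as np

THETA = 20.0

def krig_predict(X, f, xs):
    """Ordinary Kriging predictor and RMSE on points xs (fixed THETA, p=2)."""
    R = np.exp(-THETA * (X[:, None] - X[None, :]) ** 2) + 1e-10 * np.eye(len(X))
    Ri = np.linalg.inv(R)
    one = np.ones(len(X))
    mu = one @ Ri @ f / (one @ Ri @ one)
    s2h = (f - mu) @ Ri @ (f - mu) / len(X)
    r = np.exp(-THETA * (X[:, None] - xs[None, :]) ** 2)
    fhat = mu + r.T @ Ri @ (f - mu)
    s2 = s2h * np.clip(1 - np.einsum('ij,ik,kj->j', r, Ri, r)
                       + (1 - one @ Ri @ r) ** 2 / (one @ Ri @ one), 0, None)
    return fhat, np.sqrt(s2)

def ei(fmin, fhat, s):
    z = np.where(s > 0, (fmin - fhat) / np.maximum(s, 1e-300), 0.0)
    Phi = 0.5 * (1 + np.vectorize(math.erf)(z / math.sqrt(2)))
    return np.where(s > 0, (fmin - fhat) * Phi
                    + s * np.exp(-0.5 * z * z) / math.sqrt(2 * math.pi), 0.0)

def select_infill_points(X, f, xs, nd):
    """Modified step [3]: nd designs from one metamodel, believer updates."""
    Xw, fw = X.copy(), f.copy()              # temporary working sample
    picks = []
    for _ in range(nd):
        fhat, s = krig_predict(Xw, fw, xs)
        xstar = xs[np.argmax(ei(fw.min(), fhat, s))]
        fstar, _ = krig_predict(Xw, fw, np.array([xstar]))
        Xw = np.append(Xw, xstar)            # append the pair (x*, fhat(x*)) --
        fw = np.append(fw, fstar[0])         # predictor value, not the true f
        picks.append(xstar)
    return picks          # true f at these points is then evaluated concurrently

X = np.linspace(0.0, 1.0, 4)
f = np.sin(6 * X)
picks = select_infill_points(X, f, np.linspace(0, 1, 200), nd=3)
```

Because the EI of the believed point collapses once it joins the working sample, successive picks migrate toward the remaining local maxima, mirroring Fig. 4.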

The following remarks further detail the proposed implementation:

The DIRECT algorithm [4] is used to optimize the expected improvement function. It is a derivative-free global optimizer that reaches a solution by selecting and subdividing, at each iteration, the hyper-cubes that are most likely to contain the global optimum.

Observe that the true function is never invoked to compute function values at new sample points. The cheap metamodel is always used instead.

Figure 3. Functions after addition of first local maximum


Figure 4. Functions after addition of first promising point to temporary sample

As the Kriging predictor is used to compute \hat{f}(x_i*), there is no need to recompute the maximum likelihood parameters for metamodel updating.

In Fig. 4 the maximizer of the expected improvement function, evaluated using the Kriging predictor, is added to the temporary working sample and the metamodel is updated. Note that the added point lies on the continuous line representing the approximating metamodel. Note also that the resulting function filters out the added solution, so that the next local maximum may be found by the global optimizer.

The correlation matrix R becomes ill-conditioned when samples get clustered around a given design. This happens because rows and columns of R become almost identical [16]. Therefore we use the following safeguard when adding a point to the sample: if the minimum distance of the new point to the existing sample is less than tol, then perturb the new point in a random direction so that its distance from the nearest point varies randomly from one to ten times tol. The value used for tol is:

tol = 10^3 \, \epsilon_M \qquad (16)

where \epsilon_M = machine epsilon.
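The clustering safeguard can be sketched as below: if a new point falls within tol of the existing sample, it is pushed a random distance of one to ten times tol away from the nearest sample point. This is an illustrative sketch with assumed names.

```python
import numpy as np

TOL = 1e3 * np.finfo(float).eps   # Eq. (16): tol = 10^3 * machine epsilon

def safeguard(x_new, sample, rng=np.random.default_rng(0)):
    """Return x_new, perturbed away from the sample if it is too close."""
    dists = np.linalg.norm(sample - x_new, axis=1)
    if dists.min() >= TOL:
        return x_new                               # far enough: keep as is
    direction = rng.standard_normal(x_new.shape)   # random unit direction
    direction /= np.linalg.norm(direction)
    nearest = sample[np.argmin(dists)]
    return nearest + direction * TOL * rng.uniform(1.0, 10.0)
```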

In step [5] we add the nd additional design points to the sample, where the true functions are concurrently evaluated to update the metamodel for the next iteration.

5. Examples
In order to demonstrate the potential of the proposed algorithm and the efficacy of the parallel infill sampling criterion, we present below two illustrative examples. The first is an analytical two-variable multimodal problem that can be graphically represented. The second is a two-dimensional, two-phase history matching problem with a five-spot reservoir geometry.

5.1. Example 1
It is required to find the global minimizer of the problem below (the MATLAB peaks function with reversed sign):

\min_{\mathbf{x} \in \mathbb{R}^2} f(\mathbf{x}) = -3(1 - x_1)^2 e^{-x_1^2 - (x_2 + 1)^2} + 10\left( \frac{x_1}{5} - x_1^3 - x_2^5 \right) e^{-x_1^2 - x_2^2} + \frac{1}{3} e^{-(x_1 + 1)^2 - x_2^2}

subject to: -3 \le x_1, x_2 \le 3 \qquad (17)

As can be seen from Fig. 5(a), the function presents several local minima. The exact global minimum is fmin = -8.1062 at x1 = -0.0093, x2 = 1.5814. Using the standard EGO algorithm with tolerance equal to 1x10^-3, after 6 iterations one gets f = -8.0986 at x1 = -0.0250, x2 = 1.5999. Using one additional sample per iteration, one gets after 4 iterations f = -8.1045 at x1 = -0.0205, x2 = 1.5745. In two of the iterations the best point turned out to be the additional sample. This attests to the advantage of using the parallel infill sampling criterion. Note that not only did the number of iterations decrease but the value of the objective function also improved. The sample points, as well as the final Kriging metamodel, can be seen in Fig. 5(b).

5.2. Example 2

Consider the two-dimensional reservoir shown in Fig. 6(a), where the two regions indicated have different unknown permeabilities k1 and k2. Production data for 27 years is shown in Fig. 6(b). These curves were obtained for k1 = 10^-16 m^2 and k2 = 3x10^-17 m^2 by solving the reservoir simulation problem described below, hereafter called the exact solution. It is required to find the values of the unknown permeabilities that best match the production information. This can be formulated as an optimization problem whose objective function is the mismatch between the predicted and observed oil and/or water production. In the present case we used oil production only, as indicated below:


\min f(x_1, x_2) = \sum_{j=1}^{30} \left[ \log \frac{COP_j^{calc}}{COP_j^{obs}} \right]^2, \quad \text{subject to: } 15.523 \le x_1, x_2 \le 17 \qquad (18)

where x_1 = -\log(k_1), x_2 = -\log(k_2), COP_j^{obs} = observed cumulative oil production at the j-th time step, and COP_j^{calc} = calculated cumulative oil production at the j-th time step; the total number of adopted time steps is 30. The calculated cumulative oil production at each time step is obtained from an in-house finite element two-phase reservoir simulator. In the formulation of this problem, the mass conservation of the oil and water phases is obtained by solving the equations below:
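The history-matching objective of Eq. (18) reduces to a sum of squared log ratios over the 30 time steps; a sketch, with illustrative array names (a real evaluation would call the reservoir simulator to produce the calculated history):

```python
import numpy as np

def mismatch(cop_calc, cop_obs):
    """Eq. (18): squared log ratio of calculated to observed cumulative oil
    production, summed over the time steps."""
    return float(np.sum(np.log(cop_calc / cop_obs) ** 2))

cop_obs = np.linspace(1.0, 30.0, 30)   # placeholder observed history
```

A perfect match gives a zero objective, and any uniform over- or under-prediction by a factor c contributes 30 (log c)^2.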

\frac{\partial (\phi \rho_\alpha s_\alpha)}{\partial t} + \nabla \cdot (\rho_\alpha \mathbf{q}_\alpha) = 0, \qquad \alpha = w, o \qquad (19)

where s_\alpha = saturation and \rho_\alpha = density of phase \alpha (w = water, o = oil), \phi = rock porosity, and \mathbf{q}_\alpha is the Darcy fluid flow, given by:

\mathbf{q}_\alpha = -\frac{k \, k_{r\alpha}}{\mu_\alpha} \left( \nabla p_\alpha - \rho_\alpha \mathbf{g} \right) \qquad (20)

where k = rock absolute permeability (the unknowns), k_{r\alpha} = phase relative permeability (a function of phase saturation), \mu_\alpha = phase viscosity, p_\alpha = phase pressure, and \mathbf{g} is the gravity vector.

The reservoir is discretized by the finite element mesh shown in Fig. 7. In the reservoir simulation problem, the unknowns are the water saturation (s_w) and the oil pressure (p_o). Oil and water saturations satisfy the constraint s_o + s_w = 1. The initial water saturation is 0.5 and the oil pressure is 4.5 MPa for the entire reservoir. The injection rate is fixed at 0.01 m^3/s and the bottom-hole pressure at the producer well is 1 MPa. Fig. 8(a) shows the water pressure distribution and Fig. 8(b) the oil saturation of the exact solution after 3 years of operation. Notice that because the central portion has a much lower permeability, the saturation front there lags behind the rest of the reservoir.

(a) (b)
Figure 6. Reservoir geometry and production data for the exact solution (k1 = 10^-16 m^2 and k2 = 3x10^-17 m^2)

(a) (b)
Figure 5. Contours of the exact function and its Kriging model at final solution


Fig. 9 shows the objective function and the final sampling, with the global minimum also indicated by a green star. Observe that this function has two local minima, which is a common feature of this kind of problem. The problem was solved in 19 iterations using three additional samples per iteration. Often these additional sampling points were chosen as the best value of the objective function during the optimization process. Observe from Fig. 9 that samples next to both local minima were extracted. The obtained solution is x1 = 15.998 and x2 = 16.5997, which is very close to the exact values (x1 = 16 and x2 = 16.523).

6. Conclusions
In reservoir engineering many problems can be formulated as optimization problems. History matching is one such problem, known to be multimodal. Also, every function evaluation involves a complete reservoir simulation, which may require several hours of CPU time. A common approach to tackle these problems is to construct cheap global approximation models of the responses, often called metamodels or surrogates.

In this study a two-stage approach is employed, based on the Efficient Global Optimization algorithm, EGO, due to Jones. First an initial sample of designs is obtained using a Design of Experiments technique. Parallel simulation runs for the initial sample are used to construct a Kriging metamodel. In the second stage the metamodel is used to guide the search for promising designs, which are added to the sample in order to update the model until a suitable termination criterion is fulfilled. The selection of designs adaptively added to the sample should balance the need for improving the value of the objective function against that of improving the quality of the prediction, so that one does not get trapped in a local minimum. In the EGO algorithm this balance is achieved through the use of the Expected Improvement merit function.

The original EGO algorithm is modified to exploit parallelism by selecting some local maxima of the merit function to enter the sample at each iteration. The modified algorithm is applied successfully to two problems. In both, the advantages of adopting the proposed parallel infill sampling criterion are noticeable: the total number of iterations is decreased; often the additional samples turned out to be the best design, thereby improving the quality of the surrogate; and the quality of the final solution is also improved. Since evaluation of the additional sampling points is done concurrently, the total CPU time to solve the problem is significantly decreased.

    (a) (b)

    Figure 8. Water pressure and oil saturation at 3 years (exact solution)

    Figure 7. Finite element mesh


    Figure 9. Contours of the objective function and final sampling

7. References
1. Afonso, S.M.B., and Patera, T. Structural optimization in the framework of the reduced basis method, Iberian Latin American Congress on Computational Methods, Ouro Preto, Brazil, 2003.
2. Bichon, B.J., Eldred, M.S., Swiler, L.P., Mahadevan, S., and McFarland, J.M. Multimodal reliability assessment for complex engineering applications using efficient global optimization, paper AIAA-2007-1946 in Proceedings of the 48th AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conference, Honolulu, 2007.
3. Eldred, M.S. and Dunlavy, D.M. Formulations for surrogate-based optimization with data fit, multifidelity, and reduced order methods, paper AIAA-2006-7117, ISSMO Multidisciplinary Analysis and Optimization Conference, Portsmouth, 2006.
4. Finkel, D.E. DIRECT optimization algorithm user's guide, Center for Research in Scientific Computation, CRSC-TR03-11, North Carolina State University, Raleigh, 2003.
5. Forrester, A.I.J., Keane, A.J. and Bressloff, N.W. Design and analysis of noisy computer experiments, AIAA Journal, 2006, 44(10), 2331-2339.
6. Gano, S.E., and Renaud, J.E. Variable fidelity optimization using a Kriging based scaling function, 10th AIAA/ISSMO Multidisciplinary Analysis and Optimization Conference, Albany, 2004.
7. Giunta, A.A. and Watson, L.T. A comparison of approximation modeling techniques: polynomial versus interpolating models, paper AIAA-98-4758 in Proceedings of the 7th AIAA/USAF/NASA/ISSMO Symposium on Multidisciplinary Analysis and Design, St. Louis, 1998.
8. Giunta, A.A. and Eldred, M.S. Implementation of a trust region model management strategy in the DAKOTA optimization toolkit, paper AIAA-2000-4935 in Proceedings of the 8th AIAA/USAF/NASA/ISSMO Symposium on Multidisciplinary Analysis and Optimization, Long Beach, 2000.
9. Giunta, A.A., Wojtkiewicz, S.F., and Eldred, M.S. Overview of modern design of experiments methods for computational simulations, paper AIAA-2003-0649 in Proceedings of the 41st Aerospace Sciences Meeting, Reno, 2003.
10. Jones, D.R., Schonlau, M., and Welch, W.J. Efficient global optimization of expensive black-box functions, Journal of Global Optimization, 1998, 13(4), 455-492.
11. Jones, D.R. A taxonomy of global optimization methods based on response surfaces, Journal of Global Optimization, 2001, 21, 345-383.
12. Keane, A.J. and Nair, P.B. Computational Approaches for Aerospace Design: The Pursuit of Excellence, Wiley, 2005.
13. Queipo, N.V., Pintos, S., Contreras, N., Rincon, N., and Colmenares, J. Surrogate modeling-based optimization for the integration of static and dynamic data into a reservoir description, SPE 63065, SPE Annual Technical Conference, Dallas, 2000.
14. Queipo, N.V., Goicochea, J.V., and Pintos, S. Surrogate modeling-based optimization of SAGD processes, Journal of Petroleum Science and Engineering, 2002, 35, 83-93.
15. Sacks, J., Welch, W.J., Mitchell, T.J. and Wynn, H.P. Design and analysis of computer experiments, Statistical Science, 1989, 4(4), 409-435.
16. Sasena, M.J. Flexibility and efficiency enhancements for constrained global design optimization with Kriging approximations, PhD Thesis, University of Michigan, Ann Arbor, 2002.
17. Schonlau, M. Computer experiments and global optimization, PhD Thesis, University of Waterloo, Ontario, Canada, 1997.


18. Simpson, T.W., Peplinski, J.D., Koch, P.N., and Allen, J.K. Metamodels for computer-based engineering design: survey and recommendations, Engineering with Computers, 2001, 17, 129-150.
19. Sobester, A., Leary, S.J. and Keane, A.J. On the design of optimization strategies based on global response surface approximation models, Journal of Global Optimization, 2005, 33(1), 31-59.
20. Zerpa, L.E., Queipo, N.V., Pintos, S., and Salager, J.L. An optimization methodology of alkaline-surfactant-polymer flooding processes using field scale numerical simulation and multiple surrogates, Journal of Petroleum Science and Engineering, 2005, 47, 197-208.