
Transcript of tutr_dx_ASO

  • Using Adaptive Single-Objective Optimization

    To choose the correct optimization method for a given problem, you must understand your problem;
    to understand your problem, you must first explore it, which requires the selection of an optimization
    method. DesignXplorer's Adaptive Single-Objective Optimization (ASO) is a robust, adaptive algorithm
    that simplifies this process, allowing you to explore your design space during an actual optimization
    run.

    In this advanced tutorial, we'll use four different optimization scenarios (including one that uses ASO)
    to explore the design space and find the global optimum for the same problem. We'll examine the
    results and benefits of each method for solving this particular problem, learning how the performance
    of different algorithms in combination compares with the performance of an ASO system.

    Note

    This advanced tutorial assumes that you are familiar with ANSYS Workbench and

    DesignXplorer's Goal Driven Optimization functionality. For an introduction to Goal Driven

    Optimization in version 14.5, see the tutorial Performing a Goal Driven Optimization Study.

    This tutorial is divided into the following sections:

    1. What is Adaptive Single-Objective Optimization?

    2. Problem Definition

    3. Basic Project Setup

    4. Scenario 1: Kriging-NLPQL Response Surface Optimization to NLPQL Direct Optimization

    5. Scenario 2: NLPQL Direct Optimization to NLPQL Direct Optimization

    6. Scenario 3: Screening Direct Optimization to NLPQL Direct Optimization

    7. Scenario 4: Adaptive Single-Objective Direct Optimization

    8. Time to Spare?

    9. What Have We Learned?

    1. What is Adaptive Single-Objective Optimization?

    Adaptive Single-Objective Optimization is a gradient-based mathematical optimization method that

    is available only for Direct Optimization systems. It combines a Latin Hypercube Sampling (LHS) Design

    of Experiments, a Kriging response surface, and the Nonlinear Programming by Quadratic Lagrangian

    (NLPQL) optimization algorithm with domain reduction to locate the global optimum.

    When we say an optimization method is "adaptive," it means that it is internally powered by response

    surface technology. When the level of accuracy is not acceptable, it performs design point updates and

    refines the surface. When the level of accuracy is good enough, it uses approximation instead.
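The adaptive idea can be sketched in miniature. The following toy Python loop is an illustration only, not DesignXplorer's implementation: it fits a cheap polynomial surrogate to a one-dimensional "expensive" function, trusts the surrogate when its prediction matches a real evaluation, and performs a design point update to refine it when it does not.

```python
import numpy as np

def expensive(x):
    # Stand-in for a real simulation: a smooth quartic objective.
    return 0.05 * x**4 + (x - 1.3)**2

xs = list(np.linspace(-3.0, 3.0, 4))        # initial design of experiments
ys = [expensive(x) for x in xs]
grid = np.linspace(-3.0, 3.0, 601)

for _ in range(20):
    coeffs = np.polyfit(xs, ys, deg=min(4, len(xs) - 1))  # cheap surrogate
    x_star = grid[np.polyval(coeffs, grid).argmin()]      # surrogate minimum
    predicted, actual = np.polyval(coeffs, x_star), expensive(x_star)
    if abs(predicted - actual) < 1e-3:   # accurate enough: use approximation
        break
    xs.append(x_star)                    # not accurate: design point update
    ys.append(actual)                    # refines the surrogate next pass

print(x_star, actual)   # near the true minimum at x ~ 1.15
```

The loop stops evaluating the expensive function as soon as the surrogate is trustworthy near its own minimum, which is the essence of an adaptive method.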

    2. Problem Definition

    The problem is a non-convex analytic function with two input parameters. The definition of the problem

    is as follows:

    Release 14.5 - © SAS IP, Inc. All rights reserved. - Contains proprietary and confidential information
    of ANSYS, Inc. and its subsidiaries and affiliates.

  • Minimize:

    f(x1, x2) = 3(1 - x1)^2 * exp(-x1^2 - (x2 + 1)^2)
              - 10(x1/5 - x1^3 - x2^5) * exp(-x1^2 - x2^2)
              - (1/3) * exp(-(x1 + 1)^2 - x2^2)

    This analytic function has three local maxima, one local minimum, and one global minimum point at
    (0.2282; -1.6256), with a corresponding objective function value of -6.5511.
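The stated optimum is easy to verify numerically. Assuming the objective is the classic "peaks" test function (an assumption, made because it matches the stated minimum location and value), a quick check in Python:

```python
import numpy as np

# Assumed 'peaks'-style form of the tutorial's analytic function,
# chosen to match the stated optimum at (0.2282, -1.6256).
def f(x1, x2):
    return (3 * (1 - x1)**2 * np.exp(-x1**2 - (x2 + 1)**2)
            - 10 * (x1 / 5 - x1**3 - x2**5) * np.exp(-x1**2 - x2**2)
            - np.exp(-(x1 + 1)**2 - x2**2) / 3)

print(f(0.2282, -1.6256))   # about -6.5511, the stated global minimum
```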

    3. Basic Project Setup

    Download the Project Input Files

    1. To access tutorials and their input files on the ANSYS Customer Portal, go to http://support.ansys.com/
    training.

    2. Download the ANSYS_DX_ASO.zip file.

    3. Extract the files AnalyticFunction2D.xlsx (Excel input file) and AnalyticFunction2D.inp (MAPDL input file).

    Create the Project

    1. Open ANSYS Workbench 14.5.


  • 2. Create a new project.

    3. Add a Component System to the Project Schematic, as follows:

    For Windows, add a Microsoft Office Excel system.

    For Linux, add a Mechanical APDL system.

    4. Attach the input file.

    For Windows, attach AnalyticFunction2D.xlsx.

    For Linux, attach AnalyticFunction2D.inp.

    5. Define input and output parameters.

    Right-click the Analysis cell and select Edit Configuration or Edit.

    In the Analysis workspace, define inputs and outputs as follows:

    6. Return to the Project Schematic. Note that the Parameter Set bar has been added.

    7. Update the project.

    8. Save the project as DX_ASO.wbpj.

    Next, we'll run four different optimization scenarios, compare their results, and determine which
    optimization method was best for this particular problem.

    4. Scenario 1: Kriging-NLPQL Response Surface Optimization to NLPQL

    Direct Optimization

    For this scenario, we'll begin by setting up and running a Response Surface Optimization. Then we'll
    plug the results of the first optimization into a Direct Optimization.

    4.1. Run the Response Surface Optimization

    First, add a Response Surface Optimization system to your Project Schematic. We'll configure and
    update each Response Surface Optimization component in order: Design of Experiments, Response
    Surface, and Optimization. Then we'll run the optimization and view the results.


  • Configure and Update Design of Experiments

    1. Open the Design of Experiments workspace.

    2. In the Properties view for the Design of Experiments node, set properties as follows:

    Set Design of Experiments Type to Optimal Space-Filling Design.

    Set Samples Type to User-Defined Samples.

    Set Number of Samples to 10.

    3. In the Properties view for each input parameter, set properties as follows:

    Set Lower Bound to -3.

    Set Upper Bound to 3.

    4. Update the Design of Experiments component.

    The DOE generates a table of design points that are solved and used as input for the response surface
    calculation.
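DesignXplorer's Optimal Space-Filling design is a refinement of Latin Hypercube Sampling. A plain LHS over the same bounds can be sketched with SciPy's `qmc` module; this is an illustrative stand-in, not the Optimal Space-Filling algorithm itself:

```python
import numpy as np
from scipy.stats import qmc

# Latin Hypercube sample of 10 points in [-3, 3] x [-3, 3]
# (illustrative stand-in for the Optimal Space-Filling DOE).
sampler = qmc.LatinHypercube(d=2, seed=42)
unit = sampler.random(n=10)                 # samples in the unit square
doe = qmc.scale(unit, [-3, -3], [3, 3])     # rescale to parameter bounds

print(doe.shape)   # (10, 2)
```

Each of the 10 strata per input dimension receives exactly one sample, which is what gives the design its space-filling character.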

    Configure and Update the Response Surface

    Because our problem is a type of function that cannot be approximated with a quadratic response

    surface, we need to select an alternate type of response surface. Kriging is a good choice because it

    can approximate the function by using automatic refinement to enrich the response surface and obtain

    the required accuracy.
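The core idea behind a Kriging surrogate can be sketched in a few lines as a Gaussian-kernel interpolator. This toy version (DesignXplorer's Kriging is more sophisticated, with trend functions and tuned hyperparameters) reproduces its training points exactly and interpolates smoothly between them:

```python
import numpy as np

# Minimal Gaussian-kernel interpolator, a toy stand-in for Kriging.
def fit_surrogate(X, y, length=1.0, nugget=1e-10):
    d2 = ((X[:, None, :] - X[None, :, :])**2).sum(-1)
    K = np.exp(-d2 / (2 * length**2)) + nugget * np.eye(len(X))
    w = np.linalg.solve(K, y)          # kernel weights
    def predict(x):
        k = np.exp(-((X - x)**2).sum(-1) / (2 * length**2))
        return k @ w
    return predict

# Tiny demo on 5 samples of a 2-D function
X = np.array([[-2., -2.], [0., 0.], [2., 2.], [-2., 2.], [2., -2.]])
y = np.array([np.sin(a) + np.cos(b) for a, b in X])
surrogate = fit_surrogate(X, y)
print(surrogate(np.array([0., 0.])), y[1])   # interpolates training data
```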

    1. Open the Response Surface workspace.

    2. In the Properties view for the Response Surface node, set properties as follows:

    Set Response Surface Type to Kriging.

    Set Refinement Type to Auto.

    Set Maximum Number of Refinement Points to 100.

    Set Maximum Predicted Relative Error (%) to 5.

    3. In the Outline view, select the Min-Max Search check box.

    4. Update the Response Surface component.

    Review Results

    The Response Surface Properties view shows that the Kriging with automatic refinement converged
    after 54 additional refinement points were created. If you select the Min-Max Search node in the
    Outline view, the Table view shows the approximate value of the objective function (-5.8019) and the
    parameter values at the minimum (P1 = 0.35345 and P2 = -1.5925). We will use these minimum values
    to initialize the Optimization component of the Response Surface Optimization.


  • Under the Metrics node, select Convergence Curves to see the auto-Kriging Convergence Curves chart.

    Configure and Update the Optimization

    1. Open the Optimization workspace.

    2. In the Properties view for the Optimization node, set properties as follows:

    Set Optimization Method to NLPQL.

    Set Derivative Approximation to Central Difference.

    3. Select the Objectives and Constraints node in the Outline view and add an Objective of Minimize to

    parameter P3.

    4. In the Properties view for the Domain node, set the Starting Value property for each input to the

    minimum found earlier by the Response Surface Min-Max Search:

    Set P1 to 0.35345.

    Set P2 to -1.5925.

    5. Update the Optimization component.

    Review Results

    In the Optimization Properties view, the Optimization Status shows that the optimization has
    converged. If you select Candidate Points under the Results node, the Table view shows that the best
    candidate is the original NLPQL Starting Point (this is expected because the Min-Max Search is based
    on the NLPQL algorithm). When verified, this candidate point has an objective function value of -6.4009.

    4.2. Run the Direct Optimization

    Next, we'll use the NLPQL method for the Direct Optimization. Although we know it is dependent on
    the Starting Point, we can get reasonable starting points for the inputs by using the results of the
    Response Surface Optimization. Also, we can use the response surface exploration to reduce the domain
    of the Direct Optimization to +/-0.3 in each direction.

    1. Open the Optimization workspace.

    2. In the Properties view for the Optimization node, set properties as follows:

    Set Optimization Method to NLPQL.

    Set Derivative Approximation to Forward Difference.

    3. Select the Outline view Objectives and Constraints node and add an Objective of Minimize to para-

    meter P3.

    4. In the Properties view for the Domain node, assign values for each input parameter, as follows:

    For P1:

    Starting Value = 0.35345

    Lower Bound = 0.053447

    Upper Bound = 0.65345

    For P2:

    Starting Value = -1.5925

    Lower Bound = -1.8925


  • Upper Bound = -1.2925

    Note

    Starting Value is set to the minimum found earlier by the Response Surface Min-Max
    Search; the Lower Bound and Upper Bound are set to -0.3 and +0.3 of the Starting
    Value, respectively.

    5. Update the Optimization component.

    Review Results

    In the Optimization Properties view, the Optimization Status shows that the optimization has
    converged. Four iterations and 12 design points were needed to find the minimum. If you select
    Candidate Points under the Results node, the Table view shows that Candidate Point 1 now matches
    the expected objective function value of -6.5511.
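With SciPy, an analogous gradient-based step over the same reduced domain looks like this. SLSQP stands in for NLPQL (both are sequential quadratic programming methods), and the "peaks"-style objective from the problem definition is assumed:

```python
import numpy as np
from scipy.optimize import minimize

# Assumed 'peaks'-style objective from the problem definition.
def f(p):
    x1, x2 = p
    return (3 * (1 - x1)**2 * np.exp(-x1**2 - (x2 + 1)**2)
            - 10 * (x1 / 5 - x1**3 - x2**5) * np.exp(-x1**2 - x2**2)
            - np.exp(-(x1 + 1)**2 - x2**2) / 3)

# Start from the Min-Max Search result, with the domain reduced to +/-0.3.
res = minimize(f, x0=[0.35345, -1.5925], method="SLSQP",
               bounds=[(0.053447, 0.65345), (-1.8925, -1.2925)])
print(res.x, res.fun)   # converges near (0.2282, -1.6256), f about -6.5511
```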

    4.3. How effective was the approach used for Scenario 1?

    The Response Surface Optimization is a good way to explore the design space, but for this example is

    expensive in terms of the number of design points required (10 to build the Design of Experiments,

    and 54 to enrich the Kriging response surface). Once built, the Kriging response surface does allow us

    to find the area containing the global minimum, but the Response Surface Optimization method alone

    cannot obtain an accurate candidate point (unless more design points are generated to further enrich

    the Kriging response surface). Running an NLPQL Direct Optimization afterward, with the Response

    Surface Optimization candidate as the starting point and with a reduced domain, is a good way to get

    more accuracy from the response surface-based approach.

    5. Scenario 2: NLPQL Direct Optimization to NLPQL Direct Optimization

    In Scenario 2, we will begin by running a Direct Optimization that uses the NLPQL optimization
    method. Then, we'll run a second Direct Optimization that is exactly the same as the first, except
    with a different starting point. Take the optimization property values from the screenshots provided.

    Configure and Update the First NLPQL Direct Optimization

    We will begin with the first NLPQL Direct Optimization system.

    1. Add a Direct Optimization system to the Project Schematic.


  • 2. Open the Optimization workspace.

    3. Set optimization properties as shown below:

    When the Optimization node is selected, set optimization Properties as follows:

    When the Objectives and Constraints node is selected, edit the optimization Table as follows:

    When the Domain node input parameters are selected, edit the parameter Properties as follows:

    4. Update the Optimization component.

    Review Results

    In the Optimization node Properties view, the Optimization Status property shows you that the
    optimization has not converged within 20 iterations (the number defined by the Maximum Number of
    Iterations property). During those 20 iterations, NLPQL ran 104 design points. In the Table view summary,
    you can see that the objective function of the candidate point is 0.00072229.

    Configure and Update the Second NLPQL Direct Optimization

    Next, we'll run the second NLPQL Direct Optimization system. Note that we will be changing the starting
    point, but will not be using the results of the last optimization; the objective function obtained was
    not close enough to the expected value to be usable.

    1. Add a Direct Optimization system to the Project Schematic.

    2. Open the Optimization workspace.

    3. In the Properties view, configure this optimization exactly the same way as the last one, with the following

    exception: Give input parameter P2 a Starting Value of 2.

    Note

    This is a randomly selected value, not based on the results of the previous optimization

    system.

    4. Update the Optimization component.


  • Review Results

    In the Optimization node Properties view, the Optimization Status shows that the optimization has

    converged. Eight iterations and 31 design points were needed to find the global minimum. In the Table

    view summary, you can see that the objective function of the candidate point is -6.5511, the expected

    value.

    How effective was the approach used for Scenario 2?

    In Scenario 2, if we didn't know the global minimum ahead of time, we might think that the results of
    the first NLPQL Direct Optimization are good, until we run the second one and achieve a better result.

    The two optimizations in this scenario illustrate the importance of the Starting Value in a
    gradient-based optimization method such as NLPQL, especially when the objective function is not convex and

    contains several local optima. Because this problem has one local minimum and one global minimum,

    the NLPQL algorithm alone cannot find the global optimum without a good starting point. This is also

    true of the Mixed-Integer Sequential Quadratic Programming (MISQP) optimization method.
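The starting-point sensitivity is easy to reproduce with any SQP-type optimizer. Here SciPy's SLSQP (again a stand-in for NLPQL) is run from two different starts over the full [-3, 3] domain, assuming the "peaks"-style objective:

```python
import numpy as np
from scipy.optimize import minimize

# Assumed 'peaks'-style objective from the problem definition.
def f(p):
    x1, x2 = p
    return (3 * (1 - x1)**2 * np.exp(-x1**2 - (x2 + 1)**2)
            - 10 * (x1 / 5 - x1**3 - x2**5) * np.exp(-x1**2 - x2**2)
            - np.exp(-(x1 + 1)**2 - x2**2) / 3)

bounds = [(-3, 3), (-3, 3)]
good = minimize(f, x0=[0.3, -1.6], method="SLSQP", bounds=bounds)
bad = minimize(f, x0=[-1.3, 0.2], method="SLSQP", bounds=bounds)
print(good.fun)   # reaches the global minimum, about -6.5511
print(bad.fun)    # trapped by the nearby local minimum
```

Same algorithm, same bounds: only the starting point differs, and only one run finds the global optimum.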

    6. Scenario 3: Screening Direct Optimization to NLPQL Direct Optimization

    In Scenario 3, we will apply what we learned from Scenario 2 (that NLPQL needs a good starting point).
    We'll begin by running a Screening Direct Optimization to explore the design space. Results from this
    optimization will then be used as the starting point for an NLPQL Direct Optimization. Take the
    optimization property values from the screenshots provided.

    Configure and Update the Screening Direct Optimization

    1. Add a Direct Optimization system to the Project Schematic.

    2. Open the Optimization workspace.

    3. In the Properties view, set optimization properties as shown below:

    When the Optimization node is selected, set optimization Properties as follows:


  • When the Objectives and Constraints node is selected, edit the optimization Table as follows:

    When the Domain node input parameters are selected, edit the parameter Properties as follows:

    4. Update the Optimization component.

    Review Results

    In the Optimization node Properties view, the Optimization Status property shows that the Screening
    optimization used 20 evaluations to generate a sample set of 20 design points and identify three
    candidate points. In the Table summary view, you can see that for the best candidate, parameter P1 has
    a value of 0.75, parameter P2 has a value of -1.725, and the objective value of the function (output
    P3) is -4.2983.


  • Configure and Update the NLPQL Direct Optimization

    1. Add a Direct Optimization system to the Project Schematic.

    2. Open the Optimization workspace.

    3. In the Properties view, set optimization properties as shown below:

    When the Optimization node is selected, set optimization Properties as follows:

    When the Objectives and Constraints node is selected, edit the optimization Table as follows:


  • When the Domain node input parameters are selected, edit the parameter Properties as follows:

    4. Update the Optimization component.

    Review Results

    In the Optimization node Properties view, the Optimization Status property shows that the optimization

    has converged. Five iterations and the creation of 18 new design points were needed to find the global

    minimum. In the Table summary view, you can see that for the best candidate, the objective value of

    the function is -6.5511, again matching the expected value.

    How effective was the approach used for Scenario 3?

    Scenario 3 is very effective because we begin by running a Screening optimization to explore the design

    space and find the best candidate point, which is then used as the starting point for the NLPQL Direct

    Optimization. This approach is more effective than the two previous ones; instead of 54 total design

    points, it requires only 38 (20 samples to run the Screening, and 18 samples for the NLPQL to reach

    convergence).

    However, keep in mind that the candidate point found by the Screening must be good enough

    to guarantee convergence of the NLPQL; convergence depends on the space-filling ability of the

    Screening to create enough samples to adequately explore the parameter space. Also, a Screening
    optimization could be expensive when you have a large number of input parameters.
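Scenario 3's two-stage idea, a space-filling screening to find a starting point followed by a gradient-based polish, can be sketched as follows (SciPy's LHS and SLSQP standing in for Screening and NLPQL, with the "peaks"-style objective assumed):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import qmc

# Assumed 'peaks'-style objective from the problem definition.
def f(p):
    x1, x2 = p
    return (3 * (1 - x1)**2 * np.exp(-x1**2 - (x2 + 1)**2)
            - 10 * (x1 / 5 - x1**3 - x2**5) * np.exp(-x1**2 - x2**2)
            - np.exp(-(x1 + 1)**2 - x2**2) / 3)

# Stage 1: screening -- 20 space-filling samples, keep the best candidate.
samples = qmc.scale(qmc.LatinHypercube(d=2, seed=1).random(20),
                    [-3, -3], [3, 3])
best = min(samples, key=f)

# Stage 2: gradient-based polish starting from the screening candidate.
res = minimize(f, x0=best, method="SLSQP", bounds=[(-3, 3), (-3, 3)])
print(f(best), "->", res.fun)
```

Whether the polish reaches the global optimum depends, as the text notes, on whether the screening sample lands in the right basin.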


  • 7. Scenario 4: Adaptive Single-Objective Direct Optimization

    In Scenario 4, we will run only a single Adaptive Single-Objective Direct Optimization system. Take the

    optimization property values from the screenshots provided.

    Configure the ASO Direct Optimization System

    1. Add a Direct Optimization system to the Project Schematic.

    2. Open the Optimization workspace.

    3. In the Properties view, set optimization properties as shown below:

    When the Optimization node is selected, set optimization Properties as follows:

    When the Objectives and Constraints node is selected, edit the optimization Table as follows:

    When the Domain node input parameters are selected, edit the parameter Properties as follows:


  • 4. Update the Optimization component.

    Review Results

    In the Optimization node Table view, the Status property shows that the optimization converged after
    55 evaluations. The Candidate Points section shows that for the best candidate, parameter P1 has a value
    of 0.2281, parameter P2 has a value of -1.6252, and the corresponding objective function value is
    -6.5511. These values show that the optimization has reached the expected global minimum.

    In the Trade-off chart, note that the refinement is targeted to a small area of the surface.

    The History charts for the input parameters show the successive steps of the domain reduction performed

    by the Adaptive Single-Objective method.

    To view each chart in the Charts view, select the objective and/or constraint or input parameter under
    the Outline view Domain node. The following History charts show the evolution of parameters P1, P2,
    and P3.


  • How effective was the approach used for Scenario 4?

    Although the convergence required 55 points, more than the number needed for Scenario 2 (a total of
    54) or Scenario 3 (a total of 38), the main benefit of the Adaptive Single-Objective optimization method
    is its ease of use; it automatically zooms in on a solution by adaptive methods. The Adaptive
    Single-Objective method:

    - Offers a fully automated method of finding the global optimum, using sampling, a response surface,
      and the NLPQL algorithm.

    - Employs targeted refinement (the refinement of the internal response surface is driven by the
      optimization objective), so does not expend time or resources on refining the surface in areas not
      relevant to the optimization.

    - Finds the optimal point without requiring results from a prior optimization.

    - Reaches a high level of accuracy early in the optimization process, accelerating the optimization
      process by enabling you to accept intermediate results.

    8. Time to Spare?

    Try to find the global maximum of the objective function by using the Adaptive Single-Objective
    optimization method.

    9. What Have We Learned?

    During this tutorial, we learned that DesignXplorer offers multiple ways to find the global optimum for
    a given function.

    We looked at different types of optimization: Response Surface Optimization, Direct Optimization, and

    an approach that combined systems of both types.


  • Response Surface Optimization allows you to select the type of Design of Experiments and response

    surface best suited to your problem.

    In general, a Response Surface Optimization is excellent for design exploration and finding an
    approximated optimum quickly. It is not as effective for optimization purposes, though, because the

    response surface is built before the optimization objectives are defined, which prevents targeted

    refinement. Because the entire design space is refined, the optimization could be very expensive,

    requiring a large number of design point updates to obtain a response surface that is accurate all

    over. It is not the best approach for optimization because time and resources are spent on parts

    of the design space that are not relevant to the optimization.

    A Response Surface Optimization is a good way to quickly find interesting areas of the design

    space, but does sacrifice some accuracy to achieve greater efficiency. For very complex response

    surfaces, the number of trials to generate a high enough quality response surface may exceed the

    number that would have been required for a direct solve. Response Surface Optimization offers

    candidate verification and is excellent for design exploration of sensitivities and responses.

    Direct Optimization does not have a single associated response surface, but can retrieve information

    via data links from other design exploration components that contain design point data, giving you

    the ability to reuse data. It is a good choice when a large number of parameters, or difficulty building
    a good response surface, makes Response Surface Optimization infeasible.

    Overall, it is the more efficient, accurate approach for an optimization study. Refinement is driven
    by the objective, with the creation of a new response surface on a smaller domain with each iteration;
    the smaller the domain, the easier the surface construction and the more accurate the approximation.
    Although Direct Optimization uses real solves, each design point update is worth the expense; each
    update is targeted on the area most relevant to the optimization, allowing the refinement process to
    progressively zoom in on the optimum.

    We looked at different optimization methods: NLPQL, Screening, and Adaptive Single-Objective.

    NLPQL can add accuracy to the response surface-based approach, but is highly dependent on the

    quality of the starting point.

    Screening is a good option for the initial exploration of a design space because its space-filling abilities
    allow it to locate a viable candidate point (possibly to be used as a starting point for an NLPQL
    optimization). Screening can be expensive, though, when there are many input parameters.

    Adaptive Single-Objective is an adaptive method that combines the best of DesignXplorer technologies:

    a DOE, an internal response surface, domain reduction and error prediction. It provides both accuracy

    and speed without needing prior results to initialize the optimization, and allows you to balance your

    available time and resources with your desired level of accuracy. While a Response Surface Optimization

    or the NLPQL algorithm may be sufficient for exploring problems that are convex or smooth, the Adaptive

    Single-Objective algorithm is a better optimization choice when you are not already very familiar with

    your problem.
