
This may be the author's version of a work that was submitted/accepted for publication in the following source:

Periaux, Jacques, Gonzalez, Felipe, & Lee, D. (2012) MOO methods for multidisciplinary design using parallel evolutionary algorithms, game theory and hierarchical topology: theoretical aspects (part 1). In Periaux, J. & Verstraete, T. (Eds.) Introduction to optimization and multidisciplinary design in aeronautics and turbomachinery [Lecture Series 2012-03]. von Karman Institute for Fluid Dynamics, Belgium, pp. 1-37.

This file was downloaded from: https://eprints.qut.edu.au/70240/

© Consult author(s) regarding copyright matters

This work is covered by copyright. Unless the document is being made available under a Creative Commons Licence, you must assume that re-use is limited to personal use and that permission from the copyright owner must be obtained for all other uses. If the document is available under a Creative Commons License (or other specified license) then refer to the Licence for details of permitted re-use. It is a condition of access that users recognise and abide by the legal requirements associated with these rights. If you believe that this work infringes copyright please provide details by email to [email protected]

Notice: Please note that this document may not be the Version of Record (i.e. published version) of the work. Author manuscript versions (as Submitted for peer review or as Accepted for publication after peer review) can be identified by an absence of publisher branding and/or typeset appearance. If there is any doubt, please refer to the published source.

http://store.vki.ac.be/introduction-to-optimization-and-multidisciplinary-design-in-aeronautics-and-turbomachinery.html


VKI lecture series on Introduction to Optimization and Multidisciplinary Design in Aeronautics and Turbomachinery, May 7-11, 2012


MOO Methods for Multidisciplinary Design Using Parallel Evolutionary Algorithms, Game Theory and Hierarchical Topology: Theoretical Aspects (Part 1)

J. Periaux†, L. F. Gonzalez* and D. S. Lee††

† CIMNE/UPC, Barcelona, Spain and JYU, Jyvaskyla, Finland
e-mail: (jperiaux, ds.chris.lee)@gmail.com

* School of Engineering Systems, Queensland University of Technology (QUT), Australian Research Centre for Aerospace Automation (ARCAA), Brisbane, Australia
e-mail: [email protected]

†† CIMNE/UPC, Barcelona, Spain
e-mail: [email protected]

1. SUMMARY

Two lecture notes describe recent developments of evolutionary multi-objective optimization (MO) techniques in detail, together with their advantages and drawbacks compared to traditional deterministic optimisers.

The role of Game Strategies (GS), such as Pareto, Nash or Stackelberg games, as companions or pre-conditioners of multi-objective optimizers is presented and discussed on simple mathematical functions in Part I, as well as their implementation on simple aeronautical model optimisation problems on the computer, using a friendly design framework, in Part II.

Real-life (robust) design applications dealing with UAV systems or civil aircraft, using the combined EAs and Game Strategies material of Parts I and II, are solved and discussed in Part III, providing the designer with new compromise solutions useful for digital aircraft design and manufacturing.

Many details related to Lecture Notes Parts I, II and III can be found by the reader in [68].

2. INTRODUCTION AND MOTIVATION

Optimisation is an integral part of global aeronautical design, as small changes in geometry can yield gains in structural weight and reductions in aerodynamic drag. In aerospace engineering design and optimisation the engineer usually faces a problem that involves not one single objective but numerous objectives and multi-physics environments. Hence a systematic approach, which accounts for the interaction and trade-offs between multiple objectives, variables, constraints and disciplines, is required. This approach is called Multi-objective (MO) and Multidisciplinary Design Optimisation (MDO).

Capturing the solution of an MO and MDO problem in aeronautics requires the use of CFD and FEA computations which are time-consuming, involving the evaluation of candidate solutions of non-linear equations with several million mesh points and the computation of prohibitively expensive gradients. There are different approaches for solving an MDO problem using traditional deterministic optimisation techniques [1,2,5,6,29,57,59].

New algorithms such as Evolutionary Algorithms (EAs) are well suited to complex problems where the search space can be multi-modal, non-convex or discontinuous, with multiple local minima and with noise. There are also problems where we look for a set of Pareto solutions, a Nash equilibrium point, or other solutions such as those issued from Stackelberg games. Optimisation techniques can be combined with approximation techniques for expensive computations, for multi-fidelity analysis, for complex MDO problems incorporating additional compatibility constraints and variables into the system, and in applications with complicated search spaces where the design space dimension varies.


Evolutionary Algorithms (EAs) were pioneered in the late 1960s by J. Holland [22] and I. Rechenberg [48]. EAs are based on Darwinian evolution, whereby populations of individuals evolve over a search space and adapt to the environment by the use of different mechanisms such as mutation, crossover and selection. An attractive feature of EAs is that they evaluate multiple candidate members of a population and are capable of finding non-dominated solutions distributed on the so-called Pareto front. EAs have been successfully applied to different aeronautical design and CFD problems, and due to their robustness properties they have recently been used to explore their capabilities for aircraft, wing, aerofoil and rotor blade design [9-12,18-20,24-26,36-41,50]. However, one drawback of EAs is that they are slow to converge, as they require a large number of function evaluations, and their performance degrades as the number of variables increases. Hence the continuing challenge in the scientific community has been to develop robust and fast numerical techniques to overcome these difficulties and make the complex task of design and optimisation with EAs operational in aeronautics. In these notes we describe the implementation of a numerical method for the optimisation of aeronautical systems that uses a robust evolutionary technique, which is scalable to preliminary design studies with higher-fidelity models for the solution.
The content of Lecture Part I is organised as follows: Section 3 gives the reader an overview of evolutionary methods; Section 4 describes the mechanics of EAs; Section 5 discusses parallel EAs; Section 6 covers multi-objective EAs and game theory; constrained optimisation with EAs and Game Strategies is described in Section 7; hierarchical EAs are described in Section 8; asynchronous EAs in Section 9; the definition of Hybrid Games and their application to the optimisation of a bump for drag reduction are introduced, and results discussed, in Section 10; the development of evolutionary algorithms for design and optimisation in aeronautics is considered in Section 11; Section 12 compares EAs with other optimisation methods; applications of EAs and the advantages and limitations of canonical EAs for aeronautical design problems are described in Section 13; finally, conclusions are presented in Section 14.

3. EVOLUTIONARY METHODS

3.1.1 A Brief History of EAs

The content of the following section is inspired by the book of M. Mitchell [33]; the reader may consult this reference for further details. The idea of using evolution as an optimisation tool for engineering problems appeared among computer scientists in the late 1950s and early 1960s. The idea was to evolve a population of candidate solutions to a problem, using operators inspired by natural selection. In the 1960s, Rechenberg introduced "Evolution Strategies" (ES) for airfoil design, an approach continued by H.-P. Schwefel [52]. Other computer scientists developed evolution-inspired algorithms for optimisation and machine learning during the same period, when the electronic computer first appeared. Genetic Algorithms (GAs) were invented by J. Holland in the late 1960s and developed by Holland and his students (D. Goldberg among many others) at the University of Michigan to study on the computer the phenomenon of adaptation as it occurs in nature. Holland's book "Adaptation in Natural and Artificial Systems" [22] presented the genetic algorithm as an abstraction of biological evolution, a major innovation built on the biological concepts of population, selection, crossover and mutation. The theoretical foundation of genetic algorithms was built on the notions of "schemas" and "building blocks", which are explained in detail in many books devoted to Genetic Algorithms (see D. Goldberg, for instance [17]). In the last decade there has been widespread interaction among researchers studying evolutionary computation methods, and GAs, Memetic or Cultural Algorithms, Evolution Strategies, Differential Evolution (DE), Evolutionary Programming and other evolutionary approaches have been unified by the scientific community under the umbrella name Evolutionary Algorithms (EAs).

3.1.2 EA Fundamentals

One of the emerging techniques for MDO and MO problems is Evolutionary Algorithms (EAs). Evolutionary algorithms are design and optimisation algorithms that mimic the natural


process of 'survival of the fittest'. Broadly speaking, they operate simply through the iterated mapping of one population of solutions to another population of solutions. This contrasts with conventional deterministic search techniques such as the simplex method, the conjugate gradient method and others, which proceed from one given sub-optimal solution to another until an optimum solution is reached. Evolutionary algorithms are not deterministic, so that for identical problems and identical starting conditions, the evolution of the solution will not follow the same path on repeated applications. It is for this reason that EAs fall into the category of stochastic (randomised) optimisation methods. Other stochastic methods in use are the Monte-Carlo approach, the directed random walk and simulated annealing.

The process of evolution in EAs is of course not completely random, because in that case the performance of the algorithm would be no better than simple guessing, and at worst would be equivalent to complete enumeration of the parameter search space. Evolutionary algorithms work by exploiting population statistics to some greater or lesser extent, so that when newer individual solutions or offspring are generated from parents, some will have inferior characteristics and some will have superior characteristics. The general working principle of the iterated mapping then reduces to generating an offspring population, removing a certain number of "below average" evaluated individuals, and obtaining the subsequent population. This can be summarised as the repeated application of two operators on the population, the variation operator (the generation of offspring) and the selection operator (the survival of the fittest) [17]. The various approaches to EAs in the literature differ only in the operation of these two operators.
Evolutionary algorithms for parameter optimisation seem to have originated independently in two separate streams: Genetic Algorithms (GAs) and Evolution Strategies (ESs).

Some of the advantages of EAs are that they require no derivatives or gradients of the objective function, have the capability of finding globally optimal solutions amongst many local optima, are easily executed in parallel, and can be adapted to arbitrary solver codes without major modification. Another major advantage of EAs is that they can tackle multi-objective problems directly (by considering vector fitness values rather than the more traditional weighted aggregation of several criteria). It is shown in the sequel of these notes how this feature is used intensively for the capture of Pareto solutions of multi-criteria optimisation problems.
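To make the vector-fitness idea concrete, a minimal non-dominated filter for two minimisation objectives can be sketched as follows (illustrative only; the function names and sample values are assumptions, not from these notes):

```python
def dominates(a, b):
    """True if fitness vector a Pareto-dominates b (minimisation):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated(population):
    """Return the members of `population` (lists of objective values)
    that no other member dominates, i.e. the current Pareto front."""
    return [p for p in population if not any(dominates(q, p) for q in population)]

# Example: minimise both objectives
pop = [[1.0, 4.0], [2.0, 2.0], [4.0, 1.0], [3.0, 3.0]]
front = non_dominated(pop)  # [3.0, 3.0] is dominated by [2.0, 2.0]
```

A multi-objective EA typically applies such a filter to the evaluated population each generation to retain the non-dominated set.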

3.2.1 Genetic Algorithms

The simple Genetic Algorithm was founded on principles developed by Holland [22] in 1975, and a number of research topics, both in theory and application, have developed from it. It is generally accepted, however, that the modern GA was placed on its strong foundation in optimisation research by Goldberg [17]. Goldberg's initial applications of the GA were in real-world topics such as gas pipeline control. The original GA technique revolved around a single binary (base-2) string encoding of the chromosome, which is the genetic material each individual carries. The binary-coded GA's variation operator comprises two parts, crossover and mutation. Crossover interchanges portions of parental chromosomes, while mutation involves the random switching of letters in the chromosome. The selection operator has taken many forms, the most basic being the stochastic fitness-proportionate (or roulette-wheel) method. Genetic Algorithms have developed significantly in the past decade, and these developments will be considered further throughout this discussion.

Why use GAs?

Further details on this section can be found in [32, 62]. Genetic Algorithms are search procedures based on the mechanics of natural selection and Darwin's main principle: survival of the fittest. They were introduced by J. Holland [22], who explained the adaptive process of natural systems and laid down the two main principles of GAs: the ability of a simple bit-string representation to encode complicated structures, and the power of simple transformations to improve such structures. A few years later, D. Goldberg brought GAs into non-convex optimisation theory for the quantitative study of optima and gave a decisive thrust to the GA research field. A major line of research for GAs is robustness: they are computationally simple and powerful in their search for improvement, and are not limited by restrictive assumptions about the search space


(continuity, existence of derivatives, unimodality). Furthermore, they accommodate discontinuous environments well (see later in these notes the capture of discontinuous Pareto fronts). GAs are search procedures which use semi-random search but allow a wider exploration of the search space compared to classical optimisation methods, which are not so robust but work well in a narrow problem domain.

How are GAs different from usual numerical optimisation tools?

- GAs are indifferent to problem specifics: an example in airfoil shape optimisation is the value of the drag of an airfoil, which serves as the so-called fitness function;

- GAs use codings of the decision variables, adapting artificial chromosomes or individuals rather than the parameters themselves. In practice, the GA user codes the possible solutions as finite-length strings;

- GAs process populations through successive generations, in contrast to point-by-point conventional methods which use only local information and can be trapped in a local minimum;

- GAs use randomised operators instead of strictly deterministic rules.

The above items contribute to the robustness of GAs.

General Presentation of GAs Using Binary Coding

GAs are different from the conventional search procedures encountered in engineering optimisation. To understand the mechanism of GAs, consider a minimisation problem with a cost index J = f(x), where the parameter is x. The first step of the optimisation process is to encode x as a finite-length string. The length of the binary string is chosen according to the required accuracy. For a binary string of length l = 8 bits, the lower bound xmin is mapped to 00000000 and the upper bound xmax is mapped to 11111111, with a linear mapping in between. Then for any given string the corresponding value x can be calculated as x = xmin + d/(2^l - 1) * (xmax - xmin), where d is the integer decoded from the binary string. With this coding, the initial population is constituted of N individuals, each of them a potential solution to the problem.
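The decoding just described can be sketched as follows (illustrative; the function name is an assumption):

```python
def decode(bits, x_min, x_max):
    """Map a binary string to a real value:
    x = x_min + d / (2**l - 1) * (x_max - x_min),
    where d is the integer value of the string and l its length."""
    l = len(bits)
    d = int(bits, 2)
    return x_min + d / (2 ** l - 1) * (x_max - x_min)

decode("00000000", -1.0, 1.0)  # lower bound: -1.0
decode("11111111", -1.0, 1.0)  # upper bound: 1.0
```

With l = 8 there are 256 distinct values, so the resolution is (x_max - x_min)/255; a longer string gives finer accuracy.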

We must now define a set of GA operators that use the initial population and then create a new population at every generation. There are many GA operators, but the most important are selection, crossover and mutation. Selection consists in choosing the solutions which are going to form a new generation. The main idea is that selection should depend on the value of the fitness function: the higher the fitness, the higher the probability of the individual being chosen (akin to the concept of survival of the fittest). But it remains a probability, which means it is not a deterministic choice: even solutions with a comparatively low fitness may be chosen, and they may prove very good in the course of events (e.g. if the optimisation is trapped in a local minimum). Two major selection techniques are the Roulette Wheel and the Tournament (see Goldberg [17]).

Reproduction is a process by which a string is copied into the following generation. It may be copied with no change, but it may also undergo a mutation, according to a fixed mutation probability Pm. However, the main way to fill up the new generation is through the operator called crossover.

A  001 | 01110        A'  001 10010
B  111 | 10010        B'  111 01110

( | marks the cutting site)
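The exchange sketched above can be written as a one-point crossover (illustrative; the function name is an assumption):

```python
def one_point_crossover(a, b, cut):
    """Swap the tails of two equal-length bit strings after position `cut`."""
    return a[:cut] + b[cut:], b[:cut] + a[cut:]

# The example from the diagram: cut after the third bit
one_point_crossover("00101110", "11110010", 3)  # ('00110010', '11101110')
```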

First, two strings are randomly selected and put in the mating pool. Second, a position along the two strings is selected according to a uniform random law. Finally, based on the crossover probability Pc, the paired strings exchange all characters following the cross site. Clearly the crossover randomly exchanges structured information between parents A and B to produce two offspring A' and B', which are expected to combine the best characters of their


parents. The last operator, called mutation, is a random alteration of a bit at a string position, based on the mutation probability Pm. In the present case, a mutation means flipping a bit from 0 to 1 and vice versa. The mutation operator enhances population diversity and enables the optimisation to escape local minima of the search space.

Description of a Simple Genetic Algorithm

For a problem P, the minimisation of f(x), a simple binary-coded GA works as follows:

(1) generate randomly a population of N individuals;
(2) evaluate the fitness function of each individual phenotype;
(3) select a pair of parents with a probability of selection depending on the value of the fitness function (one individual can be selected several times);
(4) crossover the selected pair at a randomly selected cutting point with probability Pc to form new children;
(5) mutate the two offspring by flipping a bit with probability Pm;
(6) repeat steps 3, 4 and 5 until a new population has been generated;
(7) go to step 2 until convergence.

After several generations, one or several highly fit individuals in the population represent the solution of the problem P. The main parameters to adjust for convergence are the size N of the population, the length l of the bit string, and the probabilities Pc and Pm of crossover and mutation respectively. Several minimisation examples using GAs are implemented step by step on the computer, with results presented in the companion notes.
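The seven steps above can be sketched as a minimal binary-coded GA (a sketch only; the test problem, the tournament-selection choice and the parameter values are assumptions, not prescriptions from these notes):

```python
import random

def mutate(ind, Pm):
    """Step (5): flip each bit independently with probability Pm."""
    return "".join(b if random.random() > Pm else "10"[int(b)] for b in ind)

def simple_ga(fitness, l=8, N=20, Pc=0.8, Pm=0.02, generations=50):
    """Steps (1)-(7): random initialisation, fitness-based selection,
    one-point crossover with probability Pc, bit-flip mutation with
    probability Pm; minimises `fitness`."""
    pop = ["".join(random.choice("01") for _ in range(l)) for _ in range(N)]
    for _ in range(generations):
        scored = [(fitness(ind), ind) for ind in pop]
        def select():                               # binary tournament
            a, b = random.sample(scored, 2)
            return min(a, b)[1]                     # lower cost wins
        children = []
        while len(children) < N:
            p1, p2 = select(), select()
            if random.random() < Pc:                # one-point crossover
                cut = random.randint(1, l - 1)
                p1, p2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            children += [mutate(p1, Pm), mutate(p2, Pm)]
        pop = children[:N]
    return min(pop, key=fitness)

# Example: minimise (x - 100)^2 where x is the decoded integer in [0, 255]
best = simple_ga(lambda s: (int(s, 2) - 100) ** 2)
```

The parameters N, l, Pc and Pm play exactly the roles described in the text: enlarging N widens the search, while Pm trades diversity against disruption.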

3.2.2 Evolution Strategies

Of particular interest to this work are Evolution Strategies (ES) [51]. The first algorithm worked with only two individuals, one parent and one offspring. Each individual was real-coded: each problem variable was assigned a floating-point value in the chromosome. The variation operator applied a random mutation to each floating-point value in the parental chromosome to arrive at the offspring individual. The selection operator was entirely deterministic, simply the result of a competition between parent and offspring to determine which remained. In the standard nomenclature this strategy is denoted the (1+1) ES, the first digit indicating the number of parents, the '+' indicating competition between parents and offspring, and the final digit indicating the number of offspring. From the beginning the ES has been designed almost exclusively with real coding in mind, as opposed to original GA variants where real-parameter optimisation comes about through piecewise interpretation of the binary chromosome associated with each individual. An evolution strategy would therefore seem a logical starting point for evolutionary optimisation using real-coded problem variables.
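A minimal (1+1) ES in this spirit can be sketched as follows (illustrative; the objective function, the fixed step size and the iteration count are assumptions):

```python
import random

def one_plus_one_es(f, x0, sigma=0.5, iters=200):
    """(1+1) ES: one real-coded parent produces one offspring by
    Gaussian mutation; deterministic selection keeps the better of
    the two (minimisation of f)."""
    parent, f_parent = list(x0), f(x0)
    for _ in range(iters):
        child = [xi + random.gauss(0.0, sigma) for xi in parent]
        f_child = f(child)
        if f_child <= f_parent:          # competition parent vs offspring
            parent, f_parent = child, f_child
    return parent, f_parent

# Example: minimise the sphere function from the starting point (3, -2)
best, f_best = one_plus_one_es(lambda x: sum(xi * xi for xi in x), [3.0, -2.0])
```

Because the parent is replaced only when the offspring is at least as good, the best cost found is non-increasing over the iterations.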

Subsequent developments in ESs introduced multi-membered populations for both parents and offspring. The first algorithm of this type was the (μ+1) ES [52]. This worked by applying a variation operator to the parent population to produce a single offspring. The offspring is selected by determining whether it is better than the worst of the μ parents, and if so it replaces that worst member. Both the (1+1) ES and the (μ+1) ES used deterministic control of the mutation size (the variations applied to the design variables), which were normally distributed when applied to real-coded problems. Recent developments in both GAs and ESs have greatly modified their variation and selection operators, to the point where it is not clear whether such a nomenclature division is still justified. The main difference that exists between them today is the predominance of adaptive mutations in ESs, which have made them very attractive for real-coded optimisation, although GA research has produced some related concepts.


A pseudo-code of a canonical evolution strategy is illustrated in Algorithm 1. A population P^0 is initialised and then evaluated. Then, for a number of generations g, and while a stopping condition (maximum number of function evaluations or target fitness value) is not met, offspring P^(g+1) go recursively through the process of recombination, mutation, evaluation and selection.

EP, ES and GA have distinguishing features according to their evolution modelling and representation, but this classification has been blurred as the features of one method have been incorporated into other methods. For instance, some GA applications and developments have abandoned the bit string for a floating-point representation, ES uses some forms of crossover operators for reproduction, and EP has been extended beyond the evolution of finite-state machines. As there is no longer a clear separation between these methods, they are collectively regarded as "Evolutionary Computation" or "Evolutionary Algorithms". In general, an EA (including ES, GA and EP) can be defined as indicated in Algorithm 1.

Algorithm 1: Canonical Evolution Strategy.

4. MECHANICS OF EAs

EAs share common elements: representation of individuals, a fitness function, and an iterative process based on fitness-driven selection, recombination, mutation and elitism, subject to the dilemma between exploration and exploitation of the search space.

Representation of Individuals

There are many types of representations, the most common being binary and floating-point. The binary representation uses a bit string to represent an individual. With this representation, the real design variables are transformed into binary numbers that are concatenated to form a chromosome. This chromosome encodes the total number of design variables of the problem. In floating-point representation, a vector of real numbers characterises an individual. In aerofoil design, for example, the design variables are the control points of a Bézier or spline curve that generates the aerodynamic shape. It has been reported by different researchers that real-coded EAs have outperformed binary-coded EAs in many applications. The explanation for this is that in a binary representation the variables are concatenated to represent an individual, and this results in a long string that is difficult to handle. Another problem is that the binary representation of real design parameters suffers from so-called "Hamming cliffs", a discrepancy between the representation space and the problem space. As a consequence, it is difficult for a binary-coded EA to exploit the search in the vicinity of the current population. On

Initialise: P^0 <- init
Evaluate: f(P^0)
g = 0
while stopping condition not met:
    Recombine: P_R^(g+1) <- reco(P^g)
    Mutate: P_M^(g+1) <- mut(P_R^(g+1))
    Evaluate: f(P_M^(g+1))
    Select: P^(g+1) <- sel(P^g ∪ P_M^(g+1)) (plus strategy), or
            P^(g+1) <- sel(P_M^(g+1)) (comma strategy)
    g = g + 1
end while


the other hand, a real-coded representation is conceptually closer to the real design space, and the length of the string vector is equal to the number of design variables.

Fitness Function

Similar to the concept of survival of the fittest in nature, EAs use a fitness function to evaluate performance, determine the quality of the vector string, and define whether the individual is a suitable candidate for the next generation. The fitness function is a critical aspect of EAs; a general rule is that it should reflect as closely as possible the desired real aspect of the solution. Examples of fitness functions in aeronautics are drag minimisation, aerodynamic performance or gross weight.

Evaluation

Evaluation is the means by which each individual in the population is assessed. This could be, for example, an analytical function or a complex CFD or FEA analysis. For a real-world problem in aeronautics the means of evaluation can be a panel code or a CFD code that evaluates the flow around the aerofoil and provides estimates of the lift and drag coefficients, or an aero-structural analysis that computes the aerodynamic performance and structural weight used to compute the fitness function.

Selection

The selection process is where individuals compete and are selected to produce offspring for the next generation; design candidates are selected by comparing their fitness values. Several parent selection techniques have been proposed, but their application is usually problem-dependent [17, 32]. A method that is normally used is fitness-proportional selection, in which the selection probability of an individual is calculated by dividing its fitness by the sum of the fitnesses of all individuals. Parents can also be selected by roulette-wheel selection [17] or Stochastic Universal Sampling [32]: an individual is selected by spinning a wheel that is divided according to the selection probabilities. Tournament selection operates by choosing some individuals randomly from a population and selecting the best from this group to survive into the next generation. Its simplest form is binary tournament selection, whereby two individuals are selected at random from the population and the fitter of the two is copied to the mating pool. Another method for selection is ranking, whereby individuals are ranked by their fitness values: the best individual receives rank 1, the second rank 2, and so on, and a selection probability is reassigned in accordance with the ranking order.
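The two most common schemes described above can be sketched as follows (illustrative; fitness is assumed here to be maximised, and the sample population is invented):

```python
import random

def proportional_select(pop, fitnesses):
    """Fitness-proportionate (roulette-wheel) selection: the probability
    of picking individual i is fitnesses[i] / sum(fitnesses)."""
    return random.choices(pop, weights=fitnesses, k=1)[0]

def tournament_select(pop, fitnesses, k=2):
    """Tournament selection: draw k individuals at random and keep the
    fittest of the group (k = 2 gives binary tournament selection)."""
    contestants = random.sample(range(len(pop)), k)
    return pop[max(contestants, key=lambda i: fitnesses[i])]

pop = ["A", "B", "C", "D"]
fit = [1.0, 2.0, 3.0, 4.0]
winner = tournament_select(pop, fit)   # one of "A".."D", biased towards "D"
```

Raising the tournament size k increases the selection pressure, which connects directly to the discussion of premature convergence below.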

An appropriate level of selection pressure is critical for the success of the evolution. If too much pressure is applied, diversity may be lost and premature convergence occurs: because the population is not infinite, some individuals that are comparatively highly fit but not optimal rapidly dominate the population. The basic idea is then to control the number of reproductive opportunities each individual has, in order to prevent highly fit individuals from taking over the population. On the other hand, if a low selection pressure is imposed, the search is ineffective and takes excessive time to converge.

Recombination

Recombination, also known as crossover, is the process in which two or more parent individuals (chromosomes) are combined to produce an offspring chromosome. Recombination is necessary whenever an offspring is to have multiple parents, since mutation by itself provides no mixing of the chromosomes.

Mutation and Adaptation

The importance of mutation is to keep diversity in the population and to expand the search to areas that cannot be represented by the current population. Different mutation operators have been proposed. A common method is uniform mutation, whereby a random number is added, with probability p, to each component of the individual [17, 32]. In Gaussian mutation, a number drawn from a Gaussian distribution with zero mean is added to each component of the individual vector. Another approach, developed by Hansen and Ostermeier [21], uses Covariance Matrix Adaptation (CMA) with mutative strategy parameter control (MSC), which is applied to the adaptation of all parameters of an N-dimensional normal mutation distribution and provides a second-order estimate of the problem topology; their results indicate strong performance, particularly on ill-conditioned problems.

Elitism

Another aspect of EAs is the elitist strategy. Because the process of evolution in EAs depends on stochastic operators, there is no guarantee of a monotonic improvement in the fitness function value. With an elitist strategy, the best individuals are copied to the next generation without applying any evolutionary operators.

Exploration – Exploitation Dilemma

In EAs, a critical aspect is the balance between exploration of new areas of the search space and exploitation of the knowledge learned so far to progress in the evolution. As these are conflicting objectives, EA researchers have developed different alternatives to balance this trade-off. When developing or selecting an EA it is therefore important to test the algorithm so that it has a good balance between these two criteria, and to benchmark it on different mathematical test functions to assess its robustness and performance. An area that has shown promising results is the use of sub-populations that explore and exploit different regions of the search space, or that are refined as the evolution progresses.
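To make the operators discussed above concrete, here is a minimal real-coded GA sketch (a didactic toy, not the authors' implementation; all numerical choices are illustrative), combining binary tournament selection, arithmetic crossover, Gaussian mutation and elitism:

```python
import random

def evolve(fitness, n_vars, pop_size=40, generations=100,
           p_mut=0.1, sigma=0.3, n_elite=2):
    # Random initial population in [-5, 5]^n_vars.
    pop = [[random.uniform(-5, 5) for _ in range(n_vars)]
           for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(pop, key=fitness)             # minimisation: best first
        elite = [ind[:] for ind in ranked[:n_elite]]  # elitism: keep best unchanged
        children = []
        while len(children) < pop_size - n_elite:
            p1 = min(random.sample(ranked, 2), key=fitness)  # binary tournament
            p2 = min(random.sample(ranked, 2), key=fitness)
            a = random.random()                       # arithmetic crossover
            child = [a * u + (1 - a) * v for u, v in zip(p1, p2)]
            for i in range(n_vars):                   # Gaussian mutation
                if random.random() < p_mut:
                    child[i] += random.gauss(0, sigma)
            children.append(child)
        pop = elite + children
    return min(pop, key=fitness)
```

On a simple quadratic such as f(v) = (v[0] - 1)^2 + (v[1] - 2)^2 this sketch converges to the neighbourhood of the optimum within the default budget; elitism guarantees the best fitness never worsens between generations.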

4 DISTRIBUTED AND PARALLEL EAs

One of the main drawbacks of EAs is their CPU time, as they require many function evaluations. For this reason it was soon realised that some form of parallel computing needed to be incorporated. EAs are particularly well suited to parallel computing, as candidate individuals or populations can be sent to remote machines for evaluation. The most common approach to parallelisation is the global parallel EA. This consists of a master-slave implementation in which the master controls the process and sends individuals to solver nodes, where their fitness is evaluated by the slave processors; the master then collects the results and applies the evolutionary operators to produce the next generation. A different approach is to divide the global population into several sub-populations [43].
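A master-slave evaluation loop can be sketched with Python's multiprocessing; the fitness function below is an illustrative stand-in for an expensive CFD/FEA solver, and the worker count is arbitrary:

```python
from multiprocessing import Pool

def fitness(x):
    # Stand-in for an expensive CFD/FEA evaluation of one individual.
    return sum(xi ** 2 for xi in x)

def evaluate_population(population, n_workers=4):
    """Master: farm individuals out to worker (slave) processes."""
    if n_workers == 1:                         # serial fallback, useful for testing
        return [fitness(x) for x in population]
    with Pool(n_workers) as pool:
        return pool.map(fitness, population)   # one individual per task, order preserved

if __name__ == "__main__":
    pop = [[1.0, 2.0], [0.0, 0.0], [3.0, -1.0]]
    print(evaluate_population(pop))  # [5.0, 0.0, 10.0]
```

Because `pool.map` preserves the population order, the master can apply selection and the other operators exactly as in the serial algorithm.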

Instead of relying on a single large population, parallel genetic algorithms (PGAs) can use a network of interconnected small populations, defining a paradigm called the island model [34]. Figure 1 shows such a model, with the greyed area corresponding to a node (or sub-population) and its connections to its neighbours. The idea is that each of these sub-populations evolves independently for a given period of time (or epoch). After each epoch, a period of migration and information exchange takes place before the isolation resumes.
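One migration step of such an island model can be sketched as follows; the ring topology and the replace-worst policy are illustrative assumptions, not prescribed by the text:

```python
# Island-model sketch: each sub-population evolves independently for an
# epoch; at migration time the best individual of each island is sent to
# its neighbour on a ring, replacing the neighbour's worst member
# (minimisation: lower fitness value is better).
def migrate_ring(islands, fitness):
    bests = [min(isl, key=fitness) for isl in islands]    # pick emigrants first
    for i, isl in enumerate(islands):
        incoming = bests[(i - 1) % len(islands)]          # from the left neighbour
        isl[isl.index(max(isl, key=fitness))] = incoming  # replace worst member
    return islands
```

Collecting all emigrants before any replacement mimics the synchronous exchange at the end of an epoch: every island sends out the best it had during isolation, not one polluted by an incoming migrant.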


Figure 1: Parallel EAs

Figure 2 shows that this sort of approach can lead different sub-populations to explore different regions of the search space during the isolation stage. Yet the most promising solutions are shared by the whole set of sub-populations, since those solutions are sent to the neighbours.

Figure 2: Isolation and migration stages.

The main idea is to use small interconnected sub-populations instead of a single large population; these sub-populations evolve independently on each node for a period called an epoch. After each epoch, migration and information exchange take place between nodes, followed by a new period of isolation.

With this approach sub-populations can explore different regions of the search space; this improves robustness and makes it easier to escape from local minima. Another common approach is to preserve the global population while parallelising the EA operators, which are restricted to neighbouring individuals. This is considered an extension of the second approach and is categorised as a cellular EA. Details of these methods can be found in Veldhuizen et al. [60] and Cantu-Paz et al. [7]. A comparison showing the robustness and superiority of parallel EAs over serial EAs through a better diversity strategy is demonstrated on mathematical functions in reference [54].


5 MULTI-OBJECTIVE EAs AND GAME THEORY

Aeronautical design problems often require the simultaneous optimisation of inseparable objectives and an associated set of constraints. A multi-criteria optimisation problem can be formulated as:

Minimise  f_i(x),  i = 1, ..., N    (1)

subject to the constraints:

g_j(x) ≤ 0, j = 1, ..., M;    h_k(x) = 0, k = 1, ..., K    (2)

where the f_i are the objective functions, N is the number of objectives, x is the vector whose arguments are the decision variables, and g_j and h_k are the inequality and equality constraints.

There are many variants and developments of multi-objective approaches; these include the lexicographic approach, traditional aggregating functions, and Pareto and Nash approaches [31]. These concepts are extended and applied in combination with EAs in the following sections.

5.1 Game Theory and EAs

The simplest way to address a multiple-objective optimisation problem is to use a scalar objective, generally obtained through some linear combination of weighted objectives. Such an approach may be of interest in some cases, particularly if the weight of each criterion is known beforehand, but besides its ad hoc character it has several drawbacks: there is a loss of information, and the weights associated with each objective must be defined. Moreover, the behaviour of the algorithm is very sensitive to, and biased by, the values of these weights [8, 51]. Schaffer was the first to propose a genetic algorithm approach for multiple objectives through his Vector Evaluated Genetic Algorithm (VEGA [51]), but it was biased towards the extrema of each objective. Goldberg proposed a solution to this particular problem with both non-dominance Pareto ranking and sharing, in order to distribute the solutions over the entire Pareto front [16]. This idea was further developed in [23] and [56] and has led to many applications. All of these approaches are based on Pareto ranking and use either sharing or mating restrictions to ensure diversity; a good overview can be found in [16]. In the following, the first section presents a Pareto-based multi-objective algorithm inspired by the Non-dominated Sorting Genetic Algorithm (NSGA [56]); it is a cooperative approach which gives a whole set of solutions, the Pareto front. The second section is devoted to an original non-cooperative multiple-objective algorithm, which is based not on Pareto dominance but on Nash equilibria.

Pareto Optimality

The main interest of a Pareto based evolutionary algorithm is that the optimisation process does not have to combine the objectives in any way. It derives the fitness values of the solutions directly from the comparison of their respective objective vectors.

A common way to represent the solution to a multi-objective problem is by the use of the concept of Pareto optimality or non-dominated individuals [13]. Figure 3 shows the Pareto optimality concept for a problem with two conflicting objectives. A solution to a given multi-objective problem is the Pareto optimal set, found using a cooperative game which computes the set of non-dominated solutions; this spans the complete range of compromise designs between the two objectives. Most real-world problems involve a number of inseparable objectives where there is no unique optimum, but a set of compromise individuals known as Pareto optimal (or non-dominated) solutions. We use the Pareto optimality principle, whereby a solution to a multi-objective problem is considered Pareto optimal if there are no other solutions that better satisfy all the


objectives simultaneously. The objective of the optimisation is then to provide a set of Pareto optimal solutions that represent the trade-offs amongst the objectives.

Figure 3: Pareto optimality.

For a minimisation problem, a vector x1 is said to be partially less than a vector x2 if and only if:

for all i: f_i(x1) ≤ f_i(x2)   and   there exists i: f_i(x1) < f_i(x2)    (3)
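This dominance test, and the non-dominated filter built on it, translate directly into code (a straightforward O(n^2) sketch for small populations):

```python
def dominates(f1, f2):
    """True if objective vector f1 Pareto-dominates f2 (minimisation):
    no worse in every objective, strictly better in at least one."""
    return (all(a <= b for a, b in zip(f1, f2))
            and any(a < b for a, b in zip(f1, f2)))

def non_dominated(objectives):
    """Return the non-dominated subset of a list of objective vectors."""
    return [f for f in objectives
            if not any(dominates(g, f) for g in objectives if g is not f)]
```

For example, among the vectors (1, 3), (2, 2), (3, 1) and (2, 3), the first three are mutually non-dominated while (2, 3) is dominated by (1, 3).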

In this case the solution x1 dominates the solution x2. As EAs consider multiple points simultaneously, they are capable of finding a number of solutions in a Pareto set. Pareto selection ranks the population and selects the non-dominated individuals for the Pareto front. A comprehensive theory, literature review and implementation of multi-objective EAs (MOEAs), including the NSGA-II and VEGA algorithms, is given by Deb in reference [13].

Nash equilibrium

When it comes to multiple-objective optimisation, Pareto GAs have become a de facto standard. With the introduction of non-dominance Pareto ranking and sharing (in order to distribute the solutions over the entire Pareto front), Pareto GAs are a very efficient way to find a wide range of solutions to a given problem. However, another multiple-objective scheme, this time a non-cooperative one, was presented by J. Nash in the early 50's [35]. This approach introduced the notion of player and aimed at solving multiple-objective optimisation problems originating from Game Theory and Economics.

Definition

Nash optima define a non-cooperative multiple-objective optimisation approach first proposed by J. F. Nash [35]. Since it originated in Game Theory and Economics, the notion of player is often used, and we keep it here. For an optimisation problem with n objectives, a Nash strategy consists in having n players, each optimising his own criterion. However, each player has to optimise his criterion with all the other criteria fixed by the rest of the players. When no player can further improve his criterion, the system has reached a state of equilibrium called a Nash equilibrium. For a set of two variables x and y, let E be the search space for the first criterion and F the search space for the second criterion. A strategy pair (x̄, ȳ) ∈ E × F is said to be in Nash equilibrium if and only if:


f_E(x̄, ȳ) = inf_{x ∈ E} f_E(x, ȳ)
f_F(x̄, ȳ) = inf_{y ∈ F} f_F(x̄, y)

The definition of a Nash equilibrium may also be generalised to N players. In this case a solution (u_1, ..., u_m), with m the total number of variables, is a Nash equilibrium if and only if, for every player i and every v_i:

f_i(u_1, ..., u_{i-1}, u_i, u_{i+1}, ..., u_m) ≤ f_i(u_1, ..., u_{i-1}, v_i, u_{i+1}, ..., u_m)

where f_i is the optimisation criterion for player i.

With a classical approach, the main problem with Nash equilibria is that they are very difficult to find. It is generally easier to prove that a given solution is a Nash equilibrium; exhibiting such a solution may prove very hard, and it becomes almost impossible when the criteria are non-differentiable functions. The next section shows that GAs offer an elegant alternative.

Merging Nash Equilibrium and GAs

The idea is to bring together GAs and the Nash strategy in order to make the GA search for the Nash equilibrium [15, 17, 35]. In the following, we present how such a merger can be achieved with 2 players trying to optimise 2 different objectives. Of course, it is possible to have n players optimising n criteria, as presented in the previous section, but to make things as clear as possible we restrict ourselves to n = 2. Let s = XY be the string representing the potential solution of a dual-objective optimisation problem. X corresponds to the subset of variables handled by Player 1 and optimised along criterion 1; Y corresponds to the subset of variables handled by Player 2 and optimised along criterion 2. Thus, as advocated by Nash theory, Player 1 optimises s with respect to the first criterion by modifying X while Y is fixed by Player 2. Symmetrically, Player 2 optimises s with respect to the second criterion by modifying Y while X is fixed by Player 1. The next step consists in creating two different populations, one for each player: Player 1's optimisation task is performed by population 1, whereas Player 2's optimisation task is performed by population 2. Let X_{k-1} be the best value found by Player 1 at generation k-1, and Y_{k-1} the best value found by Player 2 at generation k-1. At generation k, Player 1 optimises X_k while using Y_{k-1} in order to evaluate s (in this case s = X_k Y_{k-1}). At the same time, Player 2 optimises Y_k while using X_{k-1} (s = X_{k-1} Y_k). After the optimisation process, Player 1 sends the best value X_k to Player 2, who will use it at generation k+1. Similarly, Player 2 sends the best value Y_k to Player 1, who will use it at generation k+1. Nash equilibrium is reached when neither Player 1 nor Player 2 can further improve his criterion. As for the repartition of the variables between the players (i.e. which player should optimise which variable), it depends on the structure of the problem. If the problem has n criteria and n variables, it is straightforward that each player should optimise a different variable. However, for problems with more optimisation variables than criteria, the players must share the variables among themselves. The repartition may be arbitrary, but in most real-life problems the structure of the problem is likely to suggest a way to divide those variables. This is illustrated in figure 4.
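A drastically simplified sketch of this scheme follows: each "player" below is reduced to a (1+1)-style stochastic improver over a single real variable rather than a full GA population, but the alternation and the exchange of previous-generation bests follow the scheme just described (all numerical choices are illustrative):

```python
import random

def nash_evolve(f_A, f_B, x0=0.0, y0=0.0, generations=2000, sigma=0.2):
    """Player 1 improves x against f_A with y frozen at the value received
    from player 2 at the previous generation, and symmetrically for player 2."""
    x, y = x0, y0
    for _ in range(generations):
        x_prev, y_prev = x, y                 # bests exchanged last generation
        cand = x + random.gauss(0, sigma)     # player 1 mutates his variable
        if f_A(cand, y_prev) < f_A(x, y_prev):
            x = cand                          # keep only improvements
        cand = y + random.gauss(0, sigma)     # player 2 mutates his variable
        if f_B(x_prev, cand) < f_B(x_prev, y):
            y = cand
    return x, y
```

When each player's conditional problem has a unique optimum and the alternating best responses form a contraction, this loop settles near the Nash equilibrium, in line with the behaviour reported for the full Nash/GA.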


Figure 4: Nash EAs.

Generalisation to N players

The generalisation of this approach to N players derives naturally from the case with 2 players.

Let us consider N players optimising a set of N objective functions. The optimisation variables are distributed among the players in such a way that each player handles a subset of the set of optimisation variables. Let v_1, ..., v_N be these subsets of optimisation variables (each v_i can consist of several variables). The Nash GA will then work by using N different populations. Population 1 will optimise v_1 using criterion f_1 while all the other variables are fixed by the other players; population 2 will optimise v_2 using criterion f_2 while all the other variables are fixed by the other players, and so on. The different players still have to send each other information about their best result after each generation. This setting may seem similar to that of the so-called island model in parallel genetic algorithms (PGAs [34]). However, there is a fundamental difference, in the sense that PGAs use the same criterion for each sub-population whereas Nash GAs use different criteria (thus introducing the notion of equilibrium). We first developed Nash GAs with binary-coded GAs to solve combinatorial discrete problems [53], but for the examples presented here we use a version based on real-coded GAs. In the following, each player is therefore evolved by a real-coded GA, using a non-uniform mutation scheme [31]. It also uses distance-dependent mutation, a technique we developed to maintain diversity in small populations: instead of a fixed mutation rate, each offspring has its mutation rate computed after each mating, and this rate depends on the distance between the two parents (for more details, see [53]). Ensuring diversity is of particular interest, since it makes it possible to use very small populations; this was quite useful, since we used populations as small as 10 individuals in the examples presented in the following.

5.2 Nash equilibrium of a two-function minimisation problem

Let us consider a game with two players A and B, with the following objective functions:


f_A(x, y) = (x - 1)^2 + (x - y)^2
f_B(x, y) = (y - 3)^2 + (x - y)^2

The following sections describe how to determine the Nash equilibrium of these two functions, analytically and numerically.

Analytical solving

The definitions offered above for a Nash equilibrium are the mathematical ones, but they are not very practical to use. A way to find a Nash equilibrium is to use the notion of rational reaction set. Let D_A be the rational reaction set for A and D_B the rational reaction set for B:

D_A = { (x̄, ȳ) such that f_A(x̄, ȳ) ≤ f_A(x, ȳ) for all x }
D_B = { (x̄, ȳ) such that f_B(x̄, ȳ) ≤ f_B(x̄, y) for all y }

An intuitive insight into rational reaction sets is that they are the sets of the best solutions a player can achieve for the different strategies of his opponent. D_A and D_B can be built by finding the x and y that satisfy the following equations:

D_A = { x | ∂f_A(x, y)/∂x = 0 }
D_B = { y | ∂f_B(x, y)/∂y = 0 }

The Nash equilibrium is the intersection of the two rational reaction sets D_A and D_B.

If we go back to the example presented above, the rational reaction set D_A is the solution of the equation:

∂f_A(x, y)/∂x = 0

which gives:

∂f_A(x, y)/∂x = 2(x - 1) + 2(x - y) = 0  =>  y = 2x - 1

It follows that the rational reaction set D_A is the line y = 2x - 1. The second rational reaction set D_B is defined by the solution of the equation:

∂f_B(x, y)/∂y = 0

which gives:

∂f_B(x, y)/∂y = 2(y - 3) - 2(x - y) = 0  =>  y = (x + 3) / 2


Hence, the rational reaction set D_B is the line y = (x + 3) / 2.

Since the Nash equilibrium is the intersection of the two rational reaction sets, it can be determined by solving the system:

y = 2x - 1
y = (x + 3) / 2

which gives x = 5/3 and y = 7/3, i.e. the Nash equilibrium is the point (5/3, 7/3) ≈ (1.66, 2.33).
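This equilibrium can also be checked numerically by iterating the two rational reactions as a fixed-point scheme (a simple sketch, distinct from the GA of the next section):

```python
# Iterate the two best responses: player A's reaction x = (1 + y) / 2
# (equivalent to the line y = 2x - 1) and player B's reaction y = (x + 3) / 2.
def best_response_iteration(x=0.0, y=0.0, iterations=50):
    for _ in range(iterations):
        x = (1 + y) / 2     # player A minimises f_A with y frozen
        y = (x + 3) / 2     # player B minimises f_B with x frozen
    return x, y

x, y = best_response_iteration()   # converges to (5/3, 7/3) ≈ (1.66, 2.33)
```

The composed map contracts the error by a factor of 4 per iteration, so convergence to machine precision takes only a few dozen iterations.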

Optimisation by Nash GAs

Figure 5: Computed Nash equilibrium.

Figure 5 shows the evolution of the best chromosome for the two populations. Population 1, which optimises criterion f_A, converges towards 0.88; population 2, which optimises criterion f_B, converges towards 0.88 as well. Both populations converge after only 50 generations. Those values seem different from the ones found analytically, but this is because the figure shows the optimisation process in the objective space. We can easily check that:


f_A(5/3, 7/3) = 8/9 ≈ 0.88   and   f_B(5/3, 7/3) = 8/9 ≈ 0.88

The point (0.88, 0.88) is actually the image (f_A, f_B) of the point (1.66, 2.33) in the plane of the criteria, which means that we did find the right point, (5/3, 7/3), in the (x, y) plane.

Note: if the objective functions f_A and f_B are permuted, with player A still handling x and player B still handling y, then the Nash equilibrium is not a single point. The analytical solution yields the line x = y, and different runs of the algorithm converge towards different points on that line.

Stackelberg equilibrium

A Stackelberg equilibrium is the solution of a hierarchical (competitive) game; a description can be found in Loridan and Morgan [29]. A Stackelberg game has a non-symmetrical structure with completely different roles for the players. For instance, in the case of an optimisation problem with two criteria, Nash and Stackelberg games are both implemented with two players; each player is in charge of one criterion and chooses his best decision in a rational reaction set. During a Nash game, the two players choose their best strategies according to the one decided by the other player in order to improve their own criteria (their gain). The players in a Nash game have symmetric roles, while in a Stackelberg game the leader and follower roles of the players are hierarchically defined.

A two-player Stackelberg equilibrium can be characterised as follows. Suppose A denotes the search space of the first player (the leader) and B the search space of the second player (the follower); then a strategy pair (x*, y*) ∈ A × B is a Stackelberg equilibrium if and only if:

f_A(x*, y*) = inf_{x ∈ A} f_A(x, y*)

where f_A denotes the gain of the first player, and y* is the solution of the following minimisation problem with respect to the decision variable y:

f_B(x, y*) = inf_{y ∈ B} f_B(x, y),   with x frozen by the leader

where f_B denotes the gain of the second player and x is the design variable value received from the leader.

Nash and Stackelberg games are implemented with GAs in the following section.

Nash/GAs and Stackelberg/GAs

Genetic algorithms are robust in capturing the global solution of multi-modal optimisation problems. On the other hand, Nash games can be used for design under conflict and Stackelberg games for hierarchical design. It is therefore quite natural to combine the two approaches (a Nash game with GAs, or a Stackelberg game with GAs) in order to solve multi-criteria design optimisation problems either under conflict or with a hierarchy. The resulting decentralised optimisation can be considered as a set of decision-maker algorithms for real design in aerospace engineering, well suited to distributed parallel computing environments.


Figure 6: Nash/GA flowchart.

In a Nash game, each player uses a GA to improve his own criterion along the generations, constrained by the strategies of the other player. In applications, the design variables are geometrically split between the players, who symmetrically exchange their best strategies (best chromosomes) at each generation. This process continues until no player can further improve his criterion; at this stage the system has reached the Nash equilibrium. One evident property of Nash/GAs is their inherent parallel structure during evolution. A flowchart of a Nash/GA is shown in figure 6. Let S = XY be the string representing the potential solution of a dual-objective optimisation, where X corresponds to the first criterion and Y to the second one. Player 1 optimises X (Y is fixed by player 2) and player 2 optimises Y (X is fixed by player 1). Each player has his own GA with its own population. Nash equilibrium is reached when neither player can further improve his criterion (see [64] for more details).

Figure 7: Stackelberg/GA flowchart.

The GA implementation of a Stackelberg game with two players is described in figure 7. Let S = XY, where X and Y are the strategy sets of the first player (the leader) and the second player (the follower), respectively. For each individual x frozen from the leader's decision set, the follower searches for the corresponding Y* that improves his gain.



Once all the individuals of the leader's decision set have received the corresponding Y* values, the leader changes X to improve his gain. This process is repeated until the leader can no longer improve his gain, which means that the system has reached a Stackelberg equilibrium.

5.3 A simple example using Nash/GAs and Stackelberg/GAs approaches

Two analytical, differentiable functions are chosen to illustrate how multi-objective optimisation problems are solved through the Nash/GAs and Stackelberg/GAs approaches:

F1: minimise f1(x, y) = (x - 1)^2 + (x - y)^2
F2: minimise f2(x, y) = (y - 3)^2 + (x - y)^2,   with -5 ≤ x, y ≤ 5    (4)

The optimisation problem is to minimise f1 and f2 in the search space x, y ∈ [-5, 5] using Nash/GAs and Stackelberg/GAs in a two-player game, in which the global search space is decomposed into two parts and each player takes one. With the Nash/GAs approach the two players have symmetrical roles during the game, but in the Stackelberg/GAs approach their roles are non-symmetrical and nested.

The analytical solutions for the Nash and Stackelberg strategies can easily be calculated according to the mathematical definition of the game [64], and are shown in table 1.

                      Nash        Stackelberg     Stackelberg
                                  (F1 leader)     (F2 leader)
Numerical    X        1.666625    1.399945        1.799998
             Y        2.333309    2.200030        2.600001
             F1       0.888857    0.800092        1.280001
             F2       0.888943    1.280002        0.800004
Analytical   X        5/3         1.4             1.8
             Y        7/3         2.2             2.6
             F1       8/9         0.8             1.28
             F2       8/9         1.28            0.8

Table 1: Analytical and numerical results: comparison
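The Stackelberg column for the F1 leader can be reproduced by a brute-force nested search (an illustrative sketch of the leader-follower nesting, not the GA implementation; the grid resolution is an arbitrary choice):

```python
def f1(x, y):
    return (x - 1) ** 2 + (x - y) ** 2

def f2(x, y):
    return (y - 3) ** 2 + (x - y) ** 2

def stackelberg_f1_leader(lo=-5.0, hi=5.0, step=0.02):
    """Player 1 (criterion f1, variable x) leads; player 2 (criterion f2,
    variable y) reacts by minimising f2 with the leader's x frozen."""
    grid = [lo + i * step for i in range(int(round((hi - lo) / step)) + 1)]
    best = None
    for x in grid:                               # leader's decision
        y = min(grid, key=lambda yy: f2(x, yy))  # follower's rational reaction
        if best is None or f1(x, y) < f1(*best):
            best = (x, y)
    return best

x, y = stackelberg_f1_leader()   # approaches (1.4, 2.2), i.e. F1 = 0.8, F2 = 1.28
```

Swapping the roles of the two criteria (and of the two variables) in the same nesting reproduces the F2-leader column of the table.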

Figure 8: Game solutions reaching equilibrium.

Figure 9: Comparison of convergence histories.

The numerical solutions obtained using the Nash/GAs and Stackelberg/GAs approaches are shown in table 1. It can be noticed that the computed GA solutions agree very well with the analytical solutions.


Figure 8 shows the solution traces during the GA optimisation process. In the case of a Stackelberg strategy, different leader definitions provide different optimal solutions, and these solutions are different from the Nash equilibrium. Finally, figure 9 compares the convergence histories of the different optimisation strategies, with equilibria reached after about 25 generations.

6 APPLICATION OF EAs TO CONSTRAINED PROBLEMS

Engineering problems usually involve a number of constraints due to technical, manufacturing and human-resource requirements and limitations. It is necessary to incorporate those constraints into the design and optimisation process to obtain a realistic design. Evolutionary algorithms are unconstrained optimisation procedures, so constraint-handling techniques have been introduced to incorporate constraints into the fitness functions.

Different approaches have been developed to satisfy design constraints [24, 28]. The use of a penalty function is the most common approach and is based on adding penalties to the objective function [24, 49]. When applying a penalty to an infeasible individual it is important to determine whether it is to be penalised simply for being infeasible, or also by some amount related to its degree of infeasibility and the number of constraints violated. As reported by different researchers [49], penalties which are functions of the distance from feasibility perform better than those which are only a function of the number of violated constraints. Joines and Houck [24] describe static and dynamic penalties. In the former, the user defines several levels of violation, and a penalty coefficient is chosen for each so that the coefficient increases as a higher level of violation is reached; the drawback of this approach is that it requires a large number of parameters. In the latter, the penalty increases dynamically as the optimisation progresses through the generations.
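A dynamic penalty of this kind can be sketched as follows; the (C * t) ** alpha form and the default constants are illustrative assumptions, not taken from the text:

```python
# Dynamic penalty sketch: the penalty grows with the generation number t,
# so infeasible individuals are tolerated early in the run and punished
# increasingly harshly later on.
def dynamic_penalty(violations, t, C=0.5, alpha=2.0):
    """`violations` holds the constraint violation amounts (each >= 0)."""
    return (C * t) ** alpha * sum(v ** 2 for v in violations)
```

The penalised fitness is then simply f(x) + dynamic_penalty(violations(x), t); feasible individuals (all violations zero) are never penalised, at any generation.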

Other methods include annealing penalties, similar to simulated annealing, in which the penalty coefficients are changed once the algorithm is trapped in a local optimum; the main problem with this approach is that it is sensitive to the values of its parameters and it is difficult to choose an appropriate cooling scheme. Adaptive penalties work by modifying the penalty based on feedback from the last k generations; the inconvenience here is choosing the number of generations to wait before the adaptation is applied.

The death penalty is the easiest way to handle constraints and works by rejecting infeasible individuals. The main drawback is the loss of information contained in the discarded individuals; it can also be lengthy, especially in cases where the feasible region is difficult to approach.

Within an algorithm, constraints are specified by the user. They may take the form of simple upper and lower bounds on the object variables, but many more complicated constraints exist, and these must be satisfied during the optimisation process. Problems are often posed so that only certain combinations of object variables can be considered, or their bounds are not simply 'upper' and 'lower' but also 'not between' and 'not if'. Object variables merely represent the genotype (numerical representation) of the individual; further constraints will probably exist on the phenotype (physical representation) of the individual as well, such as weight, geometry or some other undesirable physical characteristic. Often, whether there has been an excursion from the phenotypic constraints can only be determined after the fitness function has been applied, and this may slow overall performance.

Two basic methods of handling constraints are considered in this work: the 'rejection' method and the 'penalty' method. The rejection method simply involves rejecting any individual which does not comply with the constraints, by not allowing it an opportunity to contest insertion into the main population. The merit of the rejection method is that no penalty scheme needs to be devised for handling individuals that are out of bounds, and therefore only solutions which fully satisfy the constraints are admitted. The disadvantage of this approach is that individuals which are close to the feasible region but outside it are rejected out of hand, even though they may contain useful genetic information.


VKI lecture series on Introduction to Optimization and Multidisciplinary Design in Aeronautics and Turbomachinery, May 7-11, 2012


The penalty method involves adding some penalty to the fitness f which (in the context of a minimisation problem) reduces its fitness with respect to other individuals in the population, reducing the likelihood that it will be selected next time. For example, if a certain solution-dependent value s must be less than a given value v, we can construct a penalty function:

f' = f + h(s - v) · (s - v)²

where f' is the (possibly) penalised fitness, f is the original fitness and h(·) is the Heaviside step function. The advantage of the penalty method is that individuals with good genetic material can be allowed to converge from outside the boundary to inside the boundary, if possible. In this work, multi-objective fitnesses are penalised by adding equal values of the penalty to each fitness value. This ensures that, between two otherwise equal solutions, one which is penalised can never dominate one which is not. The disadvantage of the penalty method is, of course, that the penalty function needs to be devised with some care, especially considering there may be many such functions to devise. In the example given, a question of weighting arises: should a more severe term such as 10⁵(s - v)³ or a less severe term such as (s - v)^(1/4) have been used instead? In these cases problem-specific knowledge is required, so the user must make a 'best guess' of the penalty to apply, or run a number of cases to gain some experience with the particular case involved. Possibly the best compromise is the use of both rejection and penalty methods together, so that rejection is used on solutions that are obviously not feasible and will not lead to further improvement, while penalties are applied to solutions that show promise but exceed allowable limits by a small margin. These two methods are used throughout this work, and they will be referred to as 'hard' (rejection) and 'soft' (penalty) bounds respectively.
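The hard/soft bound combination can be sketched as follows. This is a minimal illustration of the Heaviside penalty and the rejection test for a minimisation problem; the margin parameter used to decide between rejection and penalisation is an assumption for demonstration.

```python
# Sketch of 'hard' (rejection) and 'soft' (penalty) bound handling for a
# minimisation problem. The quadratic soft penalty follows the Heaviside
# example f' = f + h(s - v) * (s - v)^2; the margin is an assumed parameter.

def heaviside(x):
    return 1.0 if x > 0.0 else 0.0

def soft_penalised_fitness(f, s, v):
    """Add a quadratic penalty when the solution-dependent value s exceeds v."""
    return f + heaviside(s - v) * (s - v) ** 2

def hard_bound_ok(s, v, margin):
    """Reject outright when s exceeds v by more than the allowed margin."""
    return s <= v + margin

# A candidate slightly over the limit keeps a (penalised) chance to evolve:
print(soft_penalised_fitness(1.0, s=2.1, v=2.0))  # 1.0 + 0.1**2 = 1.01
# ...while a grossly infeasible one is rejected by the hard bound:
print(hard_bound_ok(s=5.0, v=2.0, margin=0.5))    # False
```

Applied together, the hard bound plays the rejection role for hopeless candidates and the soft penalty preserves the genetic material of near-feasible ones.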

7 HIERARCHICAL GENETIC (EVOLUTIONARY) ALGORITHMS (HEAs)

Hierarchical Genetic Algorithms (HGAs), or, more generally, Hierarchical EAs (HEAs), are a particular approach developed by Sefrioui and Périaux [55]. This approach uses a hierarchical topology for the layout of the sub-populations; Figure 10 illustrates this concept. The bottom layer can be entirely devoted to exploration, the intermediate layer is a compromise between exploitation and exploration, and the top layer concentrates on refining solutions.

The main feature is the interaction between the given layers. The best solutions progress from the bottom layer to the top layer where they are refined. This circulation of solutions up and down the tree becomes even more interesting if we keep in mind that each node can be handled by a different EA where specific parameters can be tuned. In other words, the nodes of each layer can have a different purpose, defined by their associated EA:

1. The top layer concentrates on refining solutions. That can be achieved by tuning the EA in a way that takes very small steps between successive crossover and mutation operations.

2. The intermediate layer is a compromise between exploitation and exploration.

3. The bottom layer can be entirely devoted to exploration. That means that the EA can make large leaps in the search space.

All the bottom layer nodes can use a less accurate, fast model to compute the fitness function of the individuals of the sub-population. Even though these solutions may be evaluated rather roughly, the hierarchical topology allows their information content to be used. As these solutions are sent up to the intermediate layer during the migration phase, they are re-evaluated using a more precise model to give a more accurate representation of the actual quality of the solution. However, model two is also a compromise model, as it is deliberately not too precise for the sake of speed. The process is repeated again by sending the solutions up to the top layer during the migration process. These solutions are re-evaluated with model one, the most precise model, which gives a genuinely accurate value for the fitness function. In a practical implementation, the models can take the following forms:

Bottom Level: a coarse grid, or simple algebraic equations and historical trends for the different physics involved.

Intermediate Level: an intermediate grid, or more complex models for the physics involved: potential flow solvers or panel methods for aerodynamics, simple FEA modules for structures, and blade element theory or a helicoidal vortex model for propulsion.

Top Level: a refined grid, or more precise models: Navier-Stokes equations for aerodynamics and refined FEA for structural analysis.
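A minimal sketch of this three-layer, multi-fidelity idea is given below: cheap, noisy evaluations and large mutation steps at the bottom, with the best solutions migrating upward to be re-evaluated more precisely and refined with small steps. The toy objective, the noise levels standing in for model fidelity, and the step sizes are all illustrative assumptions, not the HEA implementation of Sefrioui and Périaux.

```python
# Toy sketch of the three-layer hierarchical topology with multi-fidelity
# models: 'low'/'mid' fidelities perturb the true fitness, 'high' is exact.
# All parameters here are illustrative assumptions.
import random

def evaluate(x, fidelity):
    """Noisy surrogate of the true fitness; 'high' fidelity is exact."""
    true = sum(xi ** 2 for xi in x)               # toy objective to minimise
    noise = {"low": 0.2, "mid": 0.05, "high": 0.0}[fidelity]
    return true * (1.0 + random.uniform(-noise, noise))

def evolve(pop, step, fidelity, iters=50):
    """Simple (mu+1) loop: mutate a random parent, drop the worst member."""
    for _ in range(iters):
        parent = random.choice(pop)
        child = [xi + random.gauss(0.0, step) for xi in parent]
        pop.append(child)
        pop.sort(key=lambda x: evaluate(x, fidelity))
        pop.pop()                                  # discard the worst
    return pop

random.seed(1)
bottom = [[random.uniform(-5, 5) for _ in range(2)] for _ in range(8)]
bottom = evolve(bottom, step=1.0, fidelity="low")        # exploration
middle = evolve(bottom[:4], step=0.3, fidelity="mid")    # migrate best up
top = evolve(middle[:2], step=0.05, fidelity="high")     # refinement
print(evaluate(top[0], "high"))
```

The key point mirrored here is that rough bottom-layer evaluations are cheap to produce, and their information content is recovered when migrants are re-evaluated at higher fidelity.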

Whitney et al. [62, 63] tested the performance of a traditional EA, a hierarchical EA and a hierarchical EA with multiple models, based on the computational expense required. It was found that, when compared to a traditional single-population EA implementation, the hierarchical approach can speed up the optimisation process by a factor of three.

Figure 10 illustrates the three-layer topology of EAs, which allows the flexibility of using different mutation parameters for exploration or exploitation of the search space, and also different kinds of models or meshes for a given problem.

Figure 10: Hierarchical Topology

8 ASYNCHRONOUS PARALLEL EAS

Parallelisation strategies are common with EAs, but researchers often refer to the use of parallel computing without giving much detail on the parallelisation strategy employed. A variant of the classical parallel EA implementation was proposed by Whitney [62, 63]. In this case the remote solvers do not need to run at the same speed (synchronously) or even on the same local network. Solver nodes can be added or removed dynamically during execution. This parallel implementation requires modifications to the canonical evolution strategy, which ordinarily evaluates entire populations simultaneously. The distinctive feature of an asynchronous approach is that it generates only one candidate solution at a time and re-incorporates only one individual at a time, rather than an entire population at every generation as is usual with traditional EAs. Consequently, solutions can be generated and returned out of order. This allows the implementation of an asynchronous fitness evaluation, giving the method its name. This is an extension of the work by Wakunda and Zell [61]. With an asynchronous approach there is no waiting time (or bottleneck) for individuals to return. As soon as a solution is available, it is incorporated back into the process. Figure 11 illustrates these concepts.

Figure 11: Parallel Computing and Asynchronous Evaluation
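The asynchronous scheme can be sketched with a thread pool standing in for the remote solver nodes. This is an illustrative toy, not Whitney's implementation: the uneven sleep times merely mimic heterogeneous solvers, and one individual is re-incorporated (steady-state replacement) as soon as its evaluation finishes, with no generational barrier.

```python
# Sketch of asynchronous steady-state evaluation: one candidate is generated
# at a time and each finished evaluation is re-incorporated immediately,
# without waiting for a whole generation. Parameters are illustrative.
import random
import time
from concurrent.futures import ThreadPoolExecutor, as_completed

def fitness(x):
    time.sleep(random.uniform(0.0, 0.01))   # uneven solver time per node
    return sum(xi ** 2 for xi in x)          # toy objective to minimise

random.seed(0)
population = [[random.uniform(-3, 3)] for _ in range(6)]
population = [(x, fitness(x)) for x in population]

with ThreadPoolExecutor(max_workers=4) as pool:
    pending = set()
    for _ in range(4):                       # keep all workers busy
        parent = random.choice(population)[0]
        child = [parent[0] + random.gauss(0.0, 0.5)]
        pending.add(pool.submit(lambda c=child: (c, fitness(c))))
    done = 0
    while done < 20:                         # 20 asynchronous insertions
        fut = next(as_completed(pending))    # take whichever finishes first
        pending.remove(fut)
        child, f = fut.result()
        worst = max(range(len(population)), key=lambda i: population[i][1])
        if f < population[worst][1]:         # steady-state replacement
            population[worst] = (child, f)
        done += 1
        parent = random.choice(population)[0]
        new = [parent[0] + random.gauss(0.0, 0.5)]
        pending.add(pool.submit(lambda c=new: (c, fitness(c))))

print(min(f for _, f in population))
```

Because results are consumed in completion order rather than submission order, a slow solver never blocks the faster ones, which is the essence of removing the generational bottleneck.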

Examples of the performance of the parallel asynchronous EA will be presented in the second part of these notes.

9 HYBRID PARETO-NASH GAMES

The Hybrid-Game uses the concepts of Nash games and Pareto optimality, and hence it can simultaneously produce a Nash equilibrium and a set of Pareto non-dominated solutions [65]. The Nash game is implemented to speed up the search for one of the global solutions. The global solution, or elite design, from the Nash game is seeded to the Pareto game at every generation. Each Nash player has its own design criteria and uses its own optimisation strategy. The hybrid Nash-HAPEA topology takes the shape of a trigonal pyramid seen from the top, as shown in Figure 12.

Figure 12. Example: Hybrid-Game Topology.

It can be seen that the optimiser consists of three Nash players and one Pareto player. The Nash players are located in a symmetrical array at 60° intervals (Line 1, Line 2 and Line 3). Each Nash player can have two hierarchical sub-players. As an example, consider a problem with six design variables (DV1 to DV6). The design variables are distributed as follows: Nash-Player 1 (black circle) considers only the black square design components (DV1, DV4); DV2 and DV5 are considered by Nash-Player 2 (blue circle); and Nash-Player 3 considers DV3 and DV6. The Pareto player considers the whole design variable span (DV1 to DV6). The Pareto player and the multi-fidelity Nash sub-players are optional, depending on the problem definition. For instance, the Pareto player will be used if the problem considers multi-objective or multidisciplinary design optimisation, but it will not be used if the problem considers a reconstruction or inverse design optimisation problem, since Pareto fronts are not required. The multi-fidelity Nash sub-players will not be used if the problem requires more than four Nash players. The topology of hybrid Nash-HAPEA is flexible; if there are four Nash players then the shape will be a quadrangular pyramid.

9.1 Algorithms for HAPMOEA and HYBRID-GAME

The algorithms for HAPMOEA and Hybrid-Game are shown in Figures 13a and 13b, where it is assumed that the problem considers the fitness function f = min (x1, x2, x3). The validation of Hybrid-Game and HAPMOEA can be found in Reference [9].

9.1.1 HAPMOEA-L3 (Figure 13a)

The hybrid Pareto-Nash method has eight main steps, as follows:

Step 1: Define the population size and number of generations for the hierarchical topology (node 0 to node 6), the dimension of the decision variables (x1, x2, x3) and their design bounds, and the model quality (layer 1 (node 0): high; layer 2 (nodes 1 and 2): intermediate; layer 3 (nodes 3 to 6): low).

Step 2: Initialise seven random populations for nodes 0 to 6.

while the termination condition (generation count, elapsed time or pre-defined fitness value) is not met:

Step 3: Generate offspring using mutation or recombination operations.
Step 4: Evaluate offspring against the fitness functions.
Step 4-1: Evaluate offspring for each node at the corresponding fidelity (high, intermediate or low).
Step 5: Sort each node's population based on its fitness.
Step 6: Insert the best individual into the non-dominated population of each node.
end

Step 7: Designate results: the Pareto optimal front obtained by node 0 on the first layer (precise model) for a multi-objective design problem; otherwise, plot the convergence of the optimisation based on the best-so-far individual.

Step 8: Do post-optimisation processing; if the problem considers aerodynamic wing design, a Mach sweep will be plotted corresponding to the objectives (CD, CL, L/D).

9.1.2 HYBRID GAME (Figure 13b)

The method has eight main steps, as follows:

Step 1: Define the population size and number of generations for the Nash players (N-Player 1, N-Player 2, N-Player 3) and the Pareto player (P-Player), the dimension of the decision variables (x1, x2, x3) and their design bounds. Split the decision variables between the players (N-Player 1: x1; N-Player 2: x2; N-Player 3: x3; P-Player: x1, x2, x3).

Step 2: Initialise a random population for each player.

while the termination condition (generation count, elapsed time or pre-defined fitness value) is not met:

Step 3: Generate offspring using mutation or recombination operations.
Step 4: Evaluate offspring against the fitness functions.
Step 4-1: Evaluate offspring in the Nash game.
  N-Player 1: use x1 with design variables x2, x3 fixed by N-Player 2 and N-Player 3.
  N-Player 2: use x2 with design variables x1, x3 fixed by N-Player 1 and N-Player 3.
  N-Player 3: use x3 with design variables x1, x2 fixed by N-Player 1 and N-Player 2.
Step 4-2: Evaluate offspring for the P-Player.
  if the first offspring of the generation is being considered:
    P-Player: use the elite design (x1*, x2*, x3*) obtained by the Nash game at Step 4-1.
  else:
    P-Player: use x1, x2, x3 obtained by the mutation or recombination operations as the default.
Step 5: Sort each player's population based on its fitness.
Step 6: Insert the non-dominated individual into the best population for the P-Player.

end

Step 7: Designate results:
P-Player: plot the Pareto optimal front for a multi-objective design problem; otherwise plot the convergence of the optimisation based on the best-so-far individual.
Nash game: plot the Nash equilibrium obtained by N-Player 1, N-Player 2 and N-Player 3.

Step 8: Do post-optimisation processing; if the problem considers aerodynamic wing design, a Mach sweep will be plotted corresponding to the objectives (CD, CL, L/D).
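The Nash decomposition in Step 4-1 can be illustrated with a toy separable objective, where each player's EA is replaced by a crude one-dimensional search over its own variable while the other players' elite values stay fixed. Everything here (the objective, the line search and the iteration count) is an assumption for demonstration, not the HAPMOEA code.

```python
# Toy sketch of the Nash decomposition: each player optimises only its own
# variable with the others' elite values fixed, iterating towards a Nash
# equilibrium. The objective and the 1-D search are illustrative.

def f(x1, x2, x3):
    # separable toy objective; the minimum is at (1, 2, 3)
    return (x1 - 1) ** 2 + (x2 - 2) ** 2 + (x3 - 3) ** 2

def best_response(f_partial, lo=-10.0, hi=10.0, steps=2001):
    """Crude 1-D grid search standing in for a player's own EA."""
    grid = [lo + i * (hi - lo) / (steps - 1) for i in range(steps)]
    return min(grid, key=f_partial)

elite = [0.0, 0.0, 0.0]                 # shared elite design (x1*, x2*, x3*)
for _ in range(3):                      # a few Nash iterations
    elite[0] = best_response(lambda v: f(v, elite[1], elite[2]))  # Player 1
    elite[1] = best_response(lambda v: f(elite[0], v, elite[2]))  # Player 2
    elite[2] = best_response(lambda v: f(elite[0], elite[1], v))  # Player 3

print(elite)   # converges near [1.0, 2.0, 3.0]
```

For this separable objective the players reach equilibrium almost immediately; the speed-up claimed for the Hybrid-Game comes from exactly this shrinking of each player's search space.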

Figure 13a. Algorithm of HAPMOEA. Figure 13b. Algorithm of Hybrid-Game

9.2 Example of a 2-D Design Optimisation of a Bump for Drag Reduction Problems

9.2.1 SCB design optimisation using HAPMOEA and Hybrid-Game

The baseline design (RAE 2822) is shown in Figure 14. The problem considers the critical flow conditions M∞ = 0.8, Cl = 0.175, Re = 18.63 × 10⁶, where two shocks occur: one on the suction side and one on the pressure side of the RAE 2822.


Figure 14. Baseline design (RAE 2822) aerofoil.

9.2.2 SCB Design using HAPMOEA and Hybrid-Game

Problem definition

The problem considers a single-objective SCB design optimisation using HAPMOEA (Eq. 1) and Hybrid-Game (Equations 1-3) to minimise total drag at the flow conditions M = 0.8, Cl = 0.175, Re = 18.63 × 10⁶.

f_GP(U_SCB, L_SCB) = min Cd_Total = min (Cd_Viscous + Cd_Wave)   (1)

f_NP1(U_SCB, L_SCB*) = min Cd_Total   (2)

f_NP2(U_SCB*, L_SCB) = min Cd_Total   (3)

where U_SCB* and L_SCB* represent the elite SCB designs obtained by Nash-Player 1 and Nash-Player 2 respectively.

Design variables

A Shock Control Bump (SCB) can be constructed using three design variables, namely length, height and peak position, as shown below. The centre of the SCB will be located where the shock occurs on the transonic aerofoil design. The SCB is parameterised with Bézier splines.
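As an illustration only (the notes do not give the exact parameterisation), a bump driven by the three named design variables could be built from a quadratic Bézier curve as follows; the control-point placement, in particular the factor of 2 pulling the crest up, is a hypothetical choice, not the authors' scheme.

```python
# Illustrative SCB shape from three design variables: chordwise length,
# height and peak position, via a quadratic Bezier curve. The control-point
# placement is an assumption for demonstration only.

def scb_bump(length, height, peak_pos, n=5):
    """Return (x, y) samples of a bump of given chordwise length and height,
    with its crest near fraction `peak_pos` of the bump length."""
    p0 = (0.0, 0.0)
    p1 = (peak_pos * length, 2.0 * height)   # crest control point
    p2 = (length, 0.0)
    pts = []
    for i in range(n):
        t = i / (n - 1)
        x = (1 - t) ** 2 * p0[0] + 2 * (1 - t) * t * p1[0] + t ** 2 * p2[0]
        y = (1 - t) ** 2 * p0[1] + 2 * (1 - t) * t * p1[1] + t ** 2 * p2[1]
        pts.append((x, y))
    return pts

# e.g. values of the same order as Table 2: length 29 %c, height 0.6 %c
bump = scb_bump(length=0.29, height=0.006, peak_pos=0.84, n=5)
print(max(y for _, y in bump))   # maximum bump height, 0.006
```

With the crest control point at twice the target height, the quadratic Bézier attains exactly the specified height at its midpoint while starting and ending flush with the aerofoil surface.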

Aerodynamic tools

In this example the coupled Euler/boundary-layer code MSES written by Drela [67] is utilised. MSES is a coupled viscous/inviscid method for the analysis and design of multi-element or single-element airfoils. A streamline-based Euler discretisation and a two-equation integral boundary layer formulation are coupled through the displacement thickness and solved simultaneously by a full Newton method. To obtain a prescribed lift coefficient Cl, the angle of attack of the aerofoil is adapted. Figure 15 shows an example of a mesh produced by MSES and used during the optimisation.


Figure 15. RAE 2822 aerofoil mesh obtained by MSES.

Implementation for HAPMOEA

The following conditions are used for the multi-resolution/population hierarchical populations:
- 1st Layer: population size of 10 with a computational grid of 36 × 213 points (Node 0).
- 2nd Layer: population size of 20 with a computational grid of 24 × 131 points (Node 1, Node 2).
- 3rd Layer: population size of 20 with a computational grid of 24 × 111 points (Node 3 to Node 6).

Note: the grid conditions on the 2nd and 3rd layers produce less than a 5% accuracy error when compared to the precise model on the 1st layer (Node 0).

Implementation for Hybrid-Game

The conditions for Hybrid-Game are:
- Global Player: population size of 10 with a computational grid of 36 × 213 points.
- Nash-Player 1: population size of 10 with a computational grid of 24 × 131 points.
- Nash-Player 2: population size of 10 with a computational grid of 24 × 131 points.

Interpretation of numerical results

The HAPMOEA (Figure 16, upper) was allowed to run for 12 hours and 1,295 function evaluations (f = 0.01285), while the Hybrid-Game (Figure 16, lower) ran for 1.15 hours and 406 function evaluations to reach the same fitness function value (f = 0.01282), using a single 4 × 2.8 GHz processor. The computational cost of Hybrid-Game is only 9.5% of the HAPMOEA computational cost, due to the Nash-game characteristics (decomposition of the design problem) and the evaluation mechanism. In other words, the use of a Nash game alongside the hierarchical topology improves the MOEA efficiency by about 90%. Table 1 compares the aerodynamic characteristics obtained by the baseline design (RAE 2822) and the baseline with the optimal SCB obtained by HAPMOEA and Hybrid-Game. It can be seen that the baseline design with the optimal SCB reduces wave drag by 75%, which leads to a 33% reduction in total drag. This drag reduction improves the lift-to-drag ratio by 49%.


Figure 16. Convergence of the objective obtained by HAPMOEA (upper) and Hybrid-Game (lower).

Table 1. Aerodynamic characteristics obtained by HAPMOEA and Hybrid-Game.

SCB                       | CdTotal         | CdWave          | L/D
Baseline                  | 0.01918         | 0.00886         | 9.13
With SCB (HAPMOEA)        | 0.01285 (-33%)  | 0.00225 (-75%)  | 13.59 (+49%)
With SCB (Hybrid-Game)    | 0.01282 (-33%)  | 0.00214 (-76%)  | 13.65 (+50%)

Figure 17 compares the shape of the baseline design and the baseline with the SCB geometry obtained by HAPMOEA (upper) and Hybrid-Game (lower). The optimal SCBs obtained by HAPMOEA are located between (0.5604, 0.0595) and (0.8506, 0.0270) on the suction side, and between (0.3633, -0.0591) and (0.6618, -0.0287) on the pressure side. The design parameters for the optimal SCBs are shown in Table 2. The optimal double SCBs obtained by Hybrid-Game are located between (0.5566, 0.0597) and (0.8543, 0.0263) on the suction side, and between (0.3624, -0.0591) and (0.6619, -0.0287) on the pressure side of the RAE 2822.

Figure 17. Geometry comparison between the baseline design and the baseline with the optimal D-SCB obtained by HAPMOEA (upper) and Hybrid-Game (lower).


Table 2. Optimal SCB design parameters.

SCB                    | SCB_L (%c) | SCB_H (%c) | SCB_P (%SCB_L)
U_SCB (HAPMOEA)        | 29.02      | 0.603      | 83.8
L_SCB (HAPMOEA)        | 29.85      | 0.643      | 84.8
U_SCB (Hybrid-Game)    | 29.77      | 0.644      | 84.8
L_SCB (Hybrid-Game)    | 29.94      | 0.643      | 84.8

Figure 18 compares the Cp contours obtained by the baseline design and the baseline with the optimal SCB. It can be seen that both the upper and lower SCBs decelerate the supersonic flow, and the position of the shock is moved towards the trailing edge when compared to the baseline design shown in Figure 18 (upper). Using only the optimal SCB on the suction side of the RAE 2822 reduces the total drag by 14%, while applying the optimal SCBs on both the suction and pressure sides produces 33% lower total drag when compared to the baseline design.

Figure 18. P/P0 contour obtained by the baseline (upper) and the baseline design with the optimal D-SCB - Hybrid-Game (lower).


Figure 19. Drag reduction (upper) and lift to drag ratio improvement (lower) at normal flow conditions.

Note: Cond_i represents the ith flow condition. The optimal double SCB is also tested at five different flight conditions to show the benefit of using a double SCB. The histograms in Figure 19 compare the total drag and the lift-to-drag ratio obtained by the baseline design and by the optimal double SCBs from HAPMOEA and Hybrid-Game. It can be seen that the optimal double SCB, designed at the critical flight condition, reduces total drag by 15 to 33% and improves the lift-to-drag ratio by 18 to 49% at normal flight conditions when compared to the baseline design.

Cond1: M = 0.750, Cl = 0.690, Re = 18.63 × 10⁶
Cond2: M = 0.760, Cl = 0.560, Re = 18.63 × 10⁶
Cond3: M = 0.770, Cl = 0.430, Re = 18.63 × 10⁶
Cond4: M = 0.785, Cl = 0.300, Re = 18.63 × 10⁶
Cond5: M = 0.800, Cl = 0.175, Re = 18.63 × 10⁶

To summarise this optimisation test case, the use of the optimal double SCB obtained by HAPMOEA and Hybrid-Game is beneficial at both normal and critical flow conditions, due to a significant transonic drag reduction. In addition, Hybrid-Game improves the MOEA efficiency while generating optimal solutions of quality comparable to the HAPMOEA software. More details on this optimisation problem, solved with the Hybrid-Game/HAPEA software, can be found in [66].

10 THE DEVELOPMENT OF EVOLUTIONARY ALGORITHMS FOR DESIGN AND OPTIMISATION IN AERONAUTICS


The potential benefits of EAs for optimisation problems in engineering have been recognised for some time. The following sub-sections describe the application of EAs to some aeronautical and engineering areas.

Single and Multi-element Aerofoil Design

The most common application of EAs to aerodynamic shape optimisation is aerofoil design, which includes works by Marco et al. [30], Whitney et al. [62], Périaux et al. [41] and Obayashi [37]. Optimisation of cascade aerofoils using EAs can be found in Obayashi et al. [36, 37]. Some studies on the use of parallel genetic algorithms for aerofoil design and optimisation are presented in Jones et al. [25], Sefrioui et al. [53] and De Falco et al. [14]. These applications give a general overview of the benefits of EAs for aerofoil shape optimisation and give insight into some of the complexities that arise when applying and coupling EAs for multi-objective problems and parallel computations. There are only a few studies that apply EAs to multi-element high-lift aerofoil design. Quagliarella [45], for example, used a viscous solver for high-lift aerofoil design and produced optimal results for single- and multi-objective problems. Gonzalez et al. [18] used an Euler solver for the reconstruction of the target pressure distribution over a three-element aerofoil set. This study illustrated that an optimal combination of design variables for the slat and flap can be obtained, but it is necessary to use the full Navier-Stokes equations and a good topology representation that does not vary with the mesh to fully account for the changes in the evolution process and the complexities of this type of flow.

Intake/Nozzle Design

There are a few applications of EAs to nozzle design and optimisation. Sefrioui and Périaux [55], for example, used a Nash genetic algorithm for two objectives and were able to find a compromise Nash equilibrium solution. In a different study, Sefrioui and Périaux used a Hierarchical Genetic Algorithm to find optimal nozzle geometries [55]. Whitney et al. [63] compared different EA approaches for inverse nozzle design. Optimisation results for 2-D and 3-D air intakes were studied by Knight [26]; in that study Knight was able to find optimal geometries for single and multiple flight points.

Wing Design

Different studies explore the potential benefits of EAs for wing design and optimisation. Obayashi, for example, applied EAs to several wing planform design problems [36, 38]. In these studies, different niching and elitist models are applied to a Multi Objective Genetic Algorithm (MOGA) to find optimal Pareto fronts for transonic and supersonic wing design. The transonic case considered three objectives: minimise aerodynamic drag, minimise aircraft weight and maximise the fuel weight stored in the wing. The constraints imposed were lift greater than the given aircraft weight and structural strength greater than the aerodynamic loads.

Other applications of EAs to wing design include the work by Oyama et al. [40], Takahashi et al. [58] and Anderson et al. [4]. Gonzalez et al. [20] illustrated the application of a hierarchical topology of EAs for multi-objective multidisciplinary wing design. Important results of these studies indicate the importance of variable-fidelity models, the broad applicability of EAs, and their ability to find optimal Pareto solutions for three-dimensional applications and problems with more than two objectives.

Aircraft Design

On aircraft design, Crispin [10] applied GAs to aircraft conceptual and preliminary design and found them useful for obtaining reasonable and feasible designs. Crossley et al. [11] applied GAs to helicopter conceptual design and were able to show the effectiveness of GAs and obtain optimal feasible configurations. One result of this work was that the use of parametric variations conducted by GAs can significantly reduce the time and money spent in the early stages of aircraft design. Ali and Behdinan [3] applied GAs to determine an optimal combination of design variables for a medium-size transport aircraft. They studied different selection and crossover strategies and showed that, with a GA approach, it was possible to generate feasible and efficient conceptual designs. In these studies, the authors also highlighted the effectiveness and importance of EAs in saving money in the initial stages of the design process. Parmee and Watson [42] proposed a preliminary airframe design method using co-evolutionary multi-objective genetic algorithms. Their algorithm was able to find locally optimal solutions for individual objectives after a few generations and to identify paths to trace the trade-off surface to some extent. This research also mentions that on-line sensitivity analysis has a role to play as the number of objectives increases, and suggests that quicker, less detailed runs can easily be achieved using smaller population sizes.

Other applications of EAs to aircraft design include the work by Cvetkovick et al. [12] and Raymer [46, 47]. Not only traditional transport or commercial aircraft have been studied and optimised using EAs: a few studies on the benefits of EAs in exploring large variations in the design space have been conducted for novel, unconventional configurations such as Unmanned Aerial Vehicles (UAVs/UCAVs) and Micro Air Vehicles [18-20].

11 COMPARATIVE STUDIES OF EAS AND OTHER METHODS FOR MDO

A considerable amount of research has been devoted to comparing EAs with traditional deterministic techniques [44]. A comprehensive study on the use of adjoint methods and genetic algorithms for multi-objective viscous aerofoil optimisation was performed by Pulliam et al. [44]. These studies provide an indication of the benefits, broad applicability and performance of EAs, but in general it can be said that the suitability of a method depends both on the particular problem to be solved and on the specific parameters of the evolutionary method. Little research has been conducted comparing the application of EAs and other methods for MDO. Raymer [46] applied and compared different MDO methods (Monte Carlo, Random Walk, Simulated Annealing, GAs and orthogonal steepest descent search) to enhance aircraft conceptual design and MDO. In this research Raymer applied these methods to four aircraft concepts: a fighter, a commercial airliner, an asymmetrical light twin and a tactical UAV, showing the broad applicability of GAs. One of the conclusions of his work is that aircraft conceptual design can be improved by the proper application of optimisation methods for MDO. He found that the proper selection of a technique can reduce the weight and cost of an aircraft concept through minor changes in the design variables. His results also indicate that the orthogonal steepest descent method provides a slightly better result for the same number of function evaluations but that, as the number of variables increases, the evolutionary methods appear superior. Raymer limited his research to single-objective problems, used only one fidelity of model for the aircraft analysis, considered only seven design variables, and excluded propulsion variables other than engine size (via T/W).

12 ADVANTAGES AND LIMITATIONS OF TRADITIONAL EAS FOR AERONAUTICAL PROBLEMS

Even though there are definite advantages to using EAs, they have not seen widespread use in industry or in multidisciplinary design optimisation applications. The main reason for this is the computational expense involved, as they require more function evaluations than gradient-based methods. Therefore, the continuing challenge has been to improve their performance and develop new numerical techniques. A possible and realistic avenue is the application of robust and efficient evolutionary methods for MDO. One viable alternative is the Hierarchical Asynchronous Parallel Evolutionary Algorithm (HAPEA) [18-20, 62, 63]. The foundation of this algorithm lies in traditional evolution strategies, and it incorporates the concepts of multi-criteria optimisation, hierarchical topology, parallel computing, asynchronous evaluation and game theory (Pareto tournament, Nash approach).

13 CONCLUSIONS

This lecture described the basic concepts of EAs, and a short review of different approaches and industrial needs for MDO was presented. Details of evolutionary algorithms and their specific


VKI lecture series on Introduction to Optimization and Multidisciplinary Design in Aeronautics and Turbomachinery, May 7-11, 2012


applications to aeronautical design problems were discussed. The paper provided specific details on a particular EA used in this research. There are many different methods, architectures and applications of optimisation and multidisciplinary design optimisation for aeronautical problems; however, further research and alternative methods are still required to address the industrial and academic challenges and needs of this industry. EAs can be an alternative option to satisfy some of these needs, as they are easily coupled, particularly adaptable, easily parallelised, require no gradient of the objective function(s), and have been used for multi-objective optimisation and successfully applied to different aeronautical design problems. Nonetheless, EAs have seen little application at an industrial level, owing to the computational expense involved: they require a larger number of function evaluations than traditional deterministic techniques.

Continuing research has focused on the development and application of canonical evolutionary algorithms to aeronautical design problems. It is therefore desirable to have a single framework that allows:

- Solving single- and multi-objective problems that can be deceptive, discontinuous or multi-modal
- Incorporation of different game strategies (Pareto, Nash, Stackelberg)
- Implementation of multi-fidelity approaches
- Parallel computations
- Asynchronous evaluations
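Of the game strategies mentioned, the Nash approach can be sketched with a toy two-player game: each player controls one design variable and repeatedly plays a best response to the other's current choice until neither can improve. The coupled quadratic objectives and all names below are illustrative assumptions, not the framework's implementation.

```python
def nash_iteration(best_response_1, best_response_2, x0, y0, tol=1e-9, max_iter=1000):
    """Alternate best responses until a fixed point (a Nash equilibrium) is reached."""
    x, y = x0, y0
    for _ in range(max_iter):
        x_new = best_response_1(y)      # player 1 optimises its own variable
        y_new = best_response_2(x_new)  # player 2 replies to the update
        if abs(x_new - x) < tol and abs(y_new - y) < tol:
            return x_new, y_new
        x, y = x_new, y_new
    return x, y

# Toy coupled quadratics: player 1 minimises (x-1)^2 + (x-y)^2 over x,
# player 2 minimises (y-2)^2 + (x-y)^2 over y; best responses in closed form.
br1 = lambda y: (1 + y) / 2
br2 = lambda x: (2 + x) / 2
x_star, y_star = nash_iteration(br1, br2, 0.0, 0.0)
# Equilibrium at x* = 4/3, y* = 5/3: each best response maps the other's value back.
```

In an evolutionary Nash game, each closed-form best response would be replaced by an EA optimising one player's objective over its own subset of design variables.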

This framework and its use in many applications are described in detail in Part 2 and Part 3 of the notes.

ACKNOWLEDGEMENTS

The authors gratefully acknowledge E. Whitney, K. Srinivas and M. Sefrioui for many valuable discussions and interactions on hierarchical EAs, asynchronous EAs and game theory; J.F. Wang is also acknowledged by the second author for implementing and testing GA software coupled to game strategies. Finally, we are also grateful to K. Deb for fruitful discussions in several places, Sydney, Bangalore and recently Jyvaskyla, on multi-objective optimisation problems.

REFERENCES

1. Alexandrov, N. M. and Lewis, R. M. Comparative Properties of Collaborative Optimization and Other Approaches to MDO, in Proceedings of the First ASMO UK / ISSMO Conference on Engineering Design Optimization, La Jolla, California, July 8-9, 1999.

2. Alexandrov, N. M. and Lewis, R. M. Analytical and Computational Properties of Distributed Approaches to MDO, AIAA 2000-4718, September 2000.

3. Ali, N. and Behdinan, K. Conceptual Aircraft Design - A Genetic Search and Optimisation Approach, in Proceedings of the 23rd International Congress of Aeronautical Sciences, ICAS 2002, Toronto, Canada, 2002.

4. Anderson, M. B. and Gebert, G. A. Using Pareto Genetic Algorithms for Preliminary Subsonic Wing Design, AIAA Paper 96-4023, AIAA, Washington, D. C., 1996.


5. Bartholomew, P. The Role of MDO within Aerospace Design and Progress towards an MDO Capability, AIAA-98-4705, pp. 2157-2165, 7th AIAA/USAF/NASA/ISSMO Symposium on Multidisciplinary Analysis and Optimization, AIAA, St. Louis, MO, 1998.

6. Braun, R., Gage, P., Kroo, I. and Sobieski, I. Implementation and Performance Issues in Collaborative Optimization, NASA-AIAA-96-4017, 1996.

7. Cantu-Paz, E. Efficient and Accurate Parallel Genetic Algorithms, Kluwer Academic Publishers, 2000.

8. Coello Coello, C. A., Van Veldhuizen, D. A. and Lamont, G. B. Evolutionary Algorithms for Solving Multi-Objective Problems, Kluwer Academic Publishers, New York, March 2002.

9. Chen, H. Q., Mantel, B., Periaux, J. and Sefrioui, M. Solution of some Nonlinear Fluid Dynamics Problems by means of Genetic Algorithms, in Experimentation, Modeling and Computation in Flow, Turbulence and Combustion, Eds. Chetverushkin, B., Desideri, J.-A., Kuznetsov, Y. A., Muzafarov, K. A., Periaux, J. and Pironneau, O., John Wiley, Volume 2, Computational Methods in Applied Sciences, 1997.

10. Crispin, Y. Aircraft Conceptual Optimization Using Simulated Evolution, AIAA Paper 94-0092, 1994.

11. Crossley, W. A. and Laananen, H. Design of Helicopters via Genetic Algorithm, Journal of Aircraft, 33(6), November-December 1996.

12. Cvetković, D. and Parmee, I. C. Designer's Preferences and Multi-objective Preliminary Design Processes, in I. C. Parmee, editor, Proceedings of the Fourth International Conference on Adaptive Computing in Design and Manufacture (ACDM'2000), pages 249-260, PEDC, University of Plymouth, UK, 2000. Springer.

13. Deb, K. Multi-Objective Optimization Using Evolutionary Algorithms, Wiley, 2002.

14. De Falco, I., Del Balio, R., Della Cioppa, A. and Tarantino, E. A Parallel Genetic Algorithm for Transonic Airfoil Optimisation, Evolutionary Computation, 1, 1995.

15. Dasgupta, D. and Michalewicz, Z. Evolutionary Algorithms in Engineering Applications, Springer-Verlag, 1997.

16. Fonseca, C. M. and Fleming, P. J. An Overview of Evolutionary Algorithms in Multiobjective Optimisation, Evolutionary Computation, 3(1), pages 1-16, 1995.

17. Goldberg, D. Genetic Algorithms in Search, Optimization and Machine Learning, Addison-Wesley, 1989.

18. Gonzalez, L. F., Whitney, E. J., Periaux, J., Sefrioui, M. and Srinivas, K. A Robust Evolutionary Technique for Inverse Aerodynamic Design, in Design and Control of Aerospace Systems Using Tools from Nature, Proceedings of the 4th European Congress on Computational Methods in Applied Sciences and Engineering, Volume II, ECCOMAS 2004, Jyvaskyla, Finland, July 24-28, 2004, Eds. P. Neittaanmaki, T. Rossi, S. Korotov, E. Onate, J. Periaux and D. Knorzer, University of Jyvaskyla, Jyvaskyla, CD ISBN 951-39-1869-6.


19. González, L. F., Whitney, E. J., Srinivas, K., Wong, K. C. and Périaux, J. Multidisciplinary Aircraft Conceptual Design Optimisation Using a Hierarchical Asynchronous Parallel Evolutionary Algorithm (HAPEA), in I. C. Parmee, editor, Proceedings of the Sixth International Conference on Adaptive Computing in Design and Manufacture (ACDM'2004), volume 6, Bristol, UK, April 2004. Springer-Verlag.

20. Gonzalez, L. F., Whitney, E., Periaux, J., Sefrioui, M. and Srinivas, K. A Robust Evolutionary Technique for Inverse Aerodynamic Design, in Design and Control of Aerospace Systems Using Tools from Nature, Proceedings of the 4th European Congress on Computational Methods in Applied Sciences and Engineering, Volume 2, 2004.

21. González, L., Whitney, E., Srinivas, K. and Periaux, J. Multidisciplinary Aircraft Design and Optimisation Using a Robust Evolutionary Technique with Variable Fidelity Models, AIAA Paper 2004-4625, in CD Proceedings of the 10th AIAA/ISSMO Multidisciplinary Analysis and Optimization Conference, Aug. 30 - Sep. 1, 2004, Albany, NY.

22. Hansen, N. and Ostermeier, A. Completely Derandomized Self-Adaptation in Evolution Strategies, Evolutionary Computation, 9(2), pages 159-195, MIT Press, 2001.

23. Holland, J. H. Adaptation in Natural and Artificial Systems, University of Michigan Press, 1975.

24. Horn, J., Nafpliotis, N. and Goldberg, D. A Niched Pareto Genetic Algorithm for Multiobjective Optimisation, in Proceedings of the First IEEE Conference on Evolutionary Computation, 1994.

25. Joines, J. A. and Houck, C. R. On the Use of Non-Stationary Penalty Functions to Solve Nonlinear Constrained Optimization Problems with GAs, in International Conference on Evolutionary Computation, pages 579-584, 1994.

26. Jones, B., Crossley, W. and Lyrintzis, A. Aerodynamic and Aeroacoustic Optimization of Airfoils via a Parallel Genetic Algorithm, in Proceedings of the Seventh AIAA/USAF/NASA/ISSMO Symposium on Multidisciplinary Analysis and Optimisation, St. Louis, Missouri, September 1998.

27. Knight, D. Applications of Genetic Algorithms to High Speed Air Intake Design, in K. C. Giannakoglou, D. T. Tsahalis, J. Périaux, K. D. Papailiou and T. Fogarty, editors, Evolutionary Methods for Design Optimization and Control with Applications to Industrial Problems, pages 43-50, Athens, Greece, 2001. International Center for Numerical Methods in Engineering (CIMNE).

28. Kroo, I., Altus, S., Braun, R., Gage, P. and Sobieski, I. Multidisciplinary Optimization Methods for Aircraft Preliminary Design, AIAA 94-4325, Fifth AIAA/USAF/NASA/ISSMO Symposium on Multidisciplinary Analysis and Optimization, September 7-9, Panama City, Florida, 1994.

29. Kumar, V. Algorithms for Constraint-Satisfaction Problems: A Survey, AI Magazine, 13(1), pages 32-44, 1992.

30. Loridan, P. and Morgan, J. A Theoretical Approximation Scheme for Stackelberg Games, Journal of Optimization Theory and Applications, 61(1), pages 95-110, 1989.

31. Marco, N., Lanteri, S., Désidéri, J. A. and Périaux, J. A Parallel Genetic Algorithm for Multi-objective Optimisation in Computational Fluid Dynamics, INRIA Report, 1998.


32. Marler, R. T. and Arora, J. S. Survey of Multi-objective Optimization Methods for Engineering, Structural and Multidisciplinary Optimization, 26(6), pages 369-395, 2004.

33. Michalewicz, Z. Genetic Algorithms + Data Structures = Evolution Programs, Artificial Intelligence Series, Springer-Verlag, 1992.

34. Mitchell, M. An Introduction to Genetic Algorithms, The MIT Press, 1996.

35. Mühlenbein, H., Schomisch, M. and Born, J. The Parallel Genetic Algorithm as Function Optimiser, Parallel Computing, 17(6-7), pages 619-632, 1991.

36. Nash, J. F. Equilibrium Points in N-person Games, Proceedings of the National Academy of Sciences, 36, pages 48-49, 1950.

37. Obayashi, S. and Tsukahara, T. Comparison of Optimization Algorithms for Aerodynamic Shape Design, AIAA Journal, 35(8), pages 1413-1415, August 1997.

38. Obayashi, S. Aerodynamic Optimization with Evolutionary Algorithms, von Karman Institute for Fluid Dynamics Lecture Series, Belgium, April 1997.

39. Obayashi, S. Multidisciplinary Design Optimization of Aircraft Wing Planform Based on Evolutionary Algorithms, in Proceedings of the 1998 IEEE International Conference on Systems, Man, and Cybernetics, La Jolla, California, IEEE, October 1998.

40. Obayashi, S. and Takanashi, S. Genetic Optimization of Target Pressure Distributions for Inverse Design Methods, AIAA Journal, 34(5), pages 881-886, May 1996.

41. Oyama, A., Liou, M.-S. and Obayashi, S. Transonic Axial-Flow Blade Shape Optimization Using Evolutionary Algorithm and Three-Dimensional Navier-Stokes Solver, 9th AIAA/ISSMO Symposium on Multidisciplinary Analysis and Optimization, Atlanta, Georgia, September 2002.

42. Périaux, J., Sefrioui, M., Whitney, E. J., González, L., Srinivas, K. and Wang, J. Multi-Criteria Aerodynamic Shape Design Problems in CFD Using a Modern Evolutionary Algorithm on Distributed Computers, in S. Armfield, P. Morgan and K. Srinivas, Eds., Proceedings of the Second International Conference on Computational Fluid Dynamics (ICCFD2), Sydney, Australia, July 2002. Springer.

43. Periaux, J., Lee, D. S., Gonzalez, L. F. and Srinivas, K. Fast Reconstruction of Aerodynamic Shapes Using Evolutionary Algorithms and Virtual Nash Strategies in a CFD Design Environment, Journal of Computational and Applied Mathematics, 232(1), pages 61-71, 2009.

44. Parmee, I. and Watson, A. H. Preliminary Airframe Design Using Co-Evolutionary Multiobjective Genetic Algorithms, in W. Banzhaf, J. Daida, A. E. Eiben, M. H. Garzon, V. Honavar, M. Jakiela and R. E. Smith, Eds., Proceedings of the Genetic and Evolutionary Computation Conference, volume 2, pages 1657-1665, Orlando, Florida, USA, Morgan Kaufmann, July 1999.

45. Pettey, C. B., Leuze, M. R. and Grefenstette, J. J. A Parallel Genetic Algorithm, in Proceedings of the Second International Conference on Genetic Algorithms, Cambridge, Massachusetts, 1987.


46. Pulliam, T. H., Nemec, M., Holst, T. and Zingg, D. W. Comparison of Evolutionary (Genetic) Algorithm and Adjoint Methods for Multi-Objective Viscous Airfoil Optimizations, in Proceedings of the 41st Aerospace Sciences Meeting and Exhibit, Reno, Nevada, January 2003. AIAA.

47. Quagliarella, D. and Vicini, A. Designing High-Lift Airfoils Using Genetic Algorithms, in K. Miettinen, M. Mäkelä, P. Neittaanmäki and J. Periaux, Eds., Proceedings of EUROGEN'99, Jyväskylä, Finland, 1999. University of Jyväskylä.

48. Raymer, D. Enhancing Aircraft Conceptual Design using Multidisciplinary Optimization, PhD Thesis, KTH, Department of Aeronautics, FLYG 2002-2, 2002.

49. Raymer, D. Aircraft Design: A Conceptual Approach, American Institute of Aeronautics and Astronautics, Third Edition, 1999.

50. Rechenberg, I. Evolutionsstrategie: Optimierung technischer Systeme nach Prinzipien der biologischen Evolution, Frommann-Holzboog Verlag, Stuttgart, 1973.

51. Richardson, J. T., Palmer, M. R., Liepins, G. and Hilliard, M. Some Guidelines for Genetic Algorithms with Penalty Functions - Incorporating Problem Specific Knowledge into Genetic Algorithms, in Schaffer, J. D., Ed., Proceedings of the Third International Conference on Genetic Algorithms, pages 191-197, George Mason University, 1989. Morgan Kaufmann.

52. Ruben, P., Chung, J. and Behdinan, K. Aircraft Conceptual Design Using Genetic Algorithms, in Proceedings of the 8th AIAA/USAF/NASA/ISSMO Symposium on Multidisciplinary Analysis and Optimization, AIAA, Long Beach, California, September 2000.

53. Schaffer, J. D. Multiple Objective Optimization with Vector Evaluated Genetic Algorithms, pages 93-100, Carnegie-Mellon, Pittsburgh, 1985. Lawrence Erlbaum Associates.

54. Schwefel, H.-P. Evolutionsstrategie und numerische Optimierung, PhD Thesis, Technische Universität Berlin, 1975.

55. Sefrioui, M., Periaux, J. and Ganascia, J.-G. Fast Convergence Thanks to Diversity, in Evolutionary Programming V, Proceedings of the 5th Annual Conference on Evolutionary Programming, L. J. Fogel, P. J. Angeline and T. Bäck, editors, MIT Press, 1996.

56. Sefrioui, M. Algorithmes Evolutionnaires pour le Calcul Scientifique: Application à l'Electromagnétisme et à la Mécanique des Fluides Numériques, Thèse de doctorat, Université Paris 6, 1998.

57. Sefrioui, M. and Périaux, J. A Hierarchical Genetic Algorithm Using Multiple Models for Optimization, in M. Schoenauer, K. Deb, G. Rudolph, X. Yao, E. Lutton, J. J. Merelo and H.-P. Schwefel, editors, Parallel Problem Solving from Nature, PPSN VI, pages 879-888, Springer, 2000.

58. Srinivas, N. and Deb, K. Multiobjective Optimisation Using Non-dominated Sorting in Genetic Algorithms, Evolutionary Computation, 2(3), pages 221-248, 1994.

59. Sobieski, J. and Haftka, R. Multidisciplinary Aerospace Design Optimization: Survey of Recent Developments, AIAA Paper No. 96-0711, 1996.


60. Takahashi, S., Obayashi, S. and Nakahashi, K. Transonic Shock-free Wing Design with Multiobjective Genetic Algorithms, in Proceedings of the International Conference on Fluid Engineering, Tokyo, Japan, July 1997. JSME.

61. Thomas, Z. and Green, A. Multidisciplinary Design Optimization Techniques: Implications and Opportunities for Fluid Dynamics Research, AIAA Paper 1999-3798, June 1999.

62. Van Veldhuizen, D. A., Zydallis, J. B. and Lamont, G. B. Considerations in Engineering Parallel Multi-objective Evolutionary Algorithms, IEEE Transactions on Evolutionary Computation, 7(2), pages 144-173, April 2003.

63. Wakunda, J. and Zell, A. Median-selection for Parallel Steady-state Evolution Strategies, in Marc Schoenauer, Kalyanmoy Deb, Günter Rudolph, Xin Yao, Evelyne Lutton, Juan Julian Merelo and Hans-Paul Schwefel, Eds., Parallel Problem Solving from Nature - PPSN VI, pages 405-414, Berlin, Springer, 2000.

64. Whitney, E. J. A Modern Evolutionary Technique for Design and Optimisation in Aeronautics, PhD Thesis, The University of Sydney, 2003.

65. Whitney, E., Sefrioui, M., Srinivas, K. and Périaux, J. Advances in Hierarchical, Parallel Evolutionary Algorithms for Aerodynamic Shape Optimisation, JSME (Japan Society of Mechanical Engineers) International Journal, 45(1), 2002.

66. Wang, J. F. Distributed Design Optimisation Using GAs and Game Theory and Related Applications to High Lift CFD Problems in Aerodynamics, Thèse 3ème cycle, Université Paris 6, 2001.

67. Lee, D. S. Uncertainty Based Multiobjective and Multidisciplinary Design Optimization in Aerospace Engineering, PhD Thesis, The University of Sydney, Sydney, NSW, Australia, section 10.7, pp. 348-370, 2008.

68. Lee, D. S., Gonzalez, L. F., Srinivas, K., Auld, D. and Periaux, J. Multi-objective/Multidisciplinary Design Optimisation of Blended Wing Body UAV via Advanced Evolutionary Algorithms, AIAA Paper 2007-36, 45th AIAA Aerospace Sciences Meeting and Exhibit, 2007.

69. Lee, D. S., Periaux, J., Gonzalez, L. F., Srinivas, K. and Onate, E. Active Flow Control Bump Design Using Hybrid Nash-Game Coupled to Evolutionary Algorithms, in Proceedings of the V European Conference on Computational Fluid Dynamics, ECCOMAS CFD 2010, J. C. F. Pereira and A. Sequeira (Eds.), Lisbon, Portugal, 14-17 June 2010.

70. Drela, M. A User's Guide to MSES 2.95, MIT Computational Aerospace Sciences Laboratory, September 1996.

71. Gonzalez, L. F., Periaux, J. and Lee, D. S. Evolutionary Optimisation and Game Strategies for Advanced Multi-physics Design in Aerospace Engineering, Springer, to appear 2010.