
Local Optimization in Global Multi-Objective Optimization Algorithms

Algirdas Lančinskas, Julius Žilinskas
Institute of Mathematics and Informatics, Vilnius University
Vilnius, Lithuania

[email protected], [email protected]

Pilar Martínez Ortigosa
Dept. of Computer Architecture and Electronics, University of Almería
Almería, Spain
[email protected]

Abstract—A hybrid Multi-Objective Optimization Algorithm based on the NSGA-II algorithm is presented and evaluated. The local optimization algorithm called SASS has been modified in order to be suitable for multi-objective optimization, where the local optimization is intended towards non-dominated points. The modified local optimization algorithm has been incorporated into NSGA-II in order to improve performance.

Keywords—multi-objective optimization; Pareto set; Non-dominated Sorting Genetic Algorithm; Single Agent Stochastic Search.

I. INTRODUCTION

Many real-world optimization problems deal with more than one objective function. The difficulty arises when there are conflicts among the different objectives. Indeed, many optimization problems with multiple objectives exist in science and industry [1]. They are called Multi-objective Optimization Problems (MOPs).

In general, the n-dimensional MOP with m objectives $f_1(\mathbf{x}), f_2(\mathbf{x}), \ldots, f_m(\mathbf{x})$ is to find a vector

$\mathbf{x}^* = (x_1^*, x_2^*, \ldots, x_n^*),$

which minimizes the objective vector

$\mathbf{F}(\mathbf{x}) = (f_1(\mathbf{x}), f_2(\mathbf{x}), \ldots, f_m(\mathbf{x})).$

The vector $\mathbf{x} = (x_1, x_2, \ldots, x_n)$ is called the decision vector, and the set consisting of all decision vectors is called the search space (denoted by $X$). The corresponding set of objective vectors is called the objective space [2].
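As an illustration, the sketch below (Python with NumPy; the function name F and the particular bi-objective form are only an example, patterned after test function F1 of Table I) shows how a decision vector is mapped to an objective vector:

```python
import numpy as np

def F(x):
    """Objective vector F(x) = (f1(x), f2(x)) for a bi-objective problem
    of the form of test function F1 in Table I; x lies in [0, 1]^n."""
    x = np.asarray(x, dtype=float)
    n = x.size
    f1 = x[0]
    g = 1.0 + 9.0 * np.sum(x[1:]) / (n - 1)
    f2 = g * (1.0 - np.sqrt(f1 / g))
    return np.array([f1, f2])

# Evaluate a random decision vector of dimension n = 50
print(F(np.random.rand(50)))
```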

Objectives of a given MOP often conflict with each other: improvement of one objective can lead to deterioration of another. Thus it is impossible to find a single solution which optimizes all objectives at once. Instead, there exists a set of compromise solutions called the Pareto set [3].

There are many methods for solving a given MOP, for instance mathematical programming techniques such as scalarization [4] or continuation methods [5], which are very efficient in finding single solutions but may have trouble finding the entire Pareto set in certain cases. There are also global methods, including multi-objective evolutionary algorithms (MOEAs) [6].

There are many techniques for designing MOEAs, each with different characteristics and advantages. The advantages of a particular technique can be better exploited by hybridizing two or more different techniques, for example hybridizing different search methods, search and updating methods, or different search methods in different search phases. A special case of hybrid MOEAs, known as memetic MOEAs [7], is the hybrid of a local optimization method and a MOEA. One of the first memetic algorithms was proposed in [8], and one of the most recent approaches to designing memetic MOEAs is proposed in [9].

In this paper we focus on evolutionary algorithms, which are based on the mechanisms of biological evolution: reproduction, mutation, recombination, and selection of individuals, as in nature. In particular we focus on the Non-dominated Sorting Genetic Algorithm (NSGA) [10], a very well known evolutionary algorithm for multi-objective optimization, and on our proposed Multi-Objective Single Agent Stochastic Search (MOSASS). Our proposed memetic algorithm is developed by hybridizing these two algorithms. In relation to nature, NSGA imitates the biological evolution of individuals, while MOSASS imitates the adaptation of individuals to natural conditions.

II. MULTI-OBJECTIVE OPTIMIZATION

A. Dominance Relation and Pareto Optimality

Suppose $\mathbf{a}$ and $\mathbf{b}$ are two decision vectors from the search space of a given MOP. With respect to the dominance relation, these decision vectors can be related to each other in two possible ways: either one dominates the other, or neither of them dominates the other [10].

The decision vector $\mathbf{a}$ dominates [11] the decision vector $\mathbf{b}$ (denoted by $\mathbf{a} \prec \mathbf{b}$) if:

(i) $f_i(\mathbf{a}) \le f_i(\mathbf{b})$ for all $i \in \{1, \ldots, m\}$, and

(ii) there exists at least one $j \in \{1, \ldots, m\}$ such that $f_j(\mathbf{a}) < f_j(\mathbf{b})$.

The decision vector $\mathbf{a}$ is called a dominator of the decision vector $\mathbf{b}$.

The decision vector $\mathbf{a}$ is indifferent [1] to the decision vector $\mathbf{b}$ (denoted by $\mathbf{a} \sim \mathbf{b}$) if:

$\mathbf{a} \nprec \mathbf{b} \;\wedge\; \mathbf{b} \nprec \mathbf{a}.$
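The dominance and indifference relations translate directly into code; a minimal sketch in Python (the helper names dominates and indifferent are ours, and the numeric values below are merely illustrative, in the spirit of Figure 1(a)):

```python
import numpy as np

def dominates(Fa, Fb):
    """True if objective vector Fa dominates Fb (minimization):
    no worse in every objective and strictly better in at least one."""
    Fa, Fb = np.asarray(Fa), np.asarray(Fb)
    return bool(np.all(Fa <= Fb) and np.any(Fa < Fb))

def indifferent(Fa, Fb):
    """True if neither objective vector dominates the other."""
    return not dominates(Fa, Fb) and not dominates(Fb, Fa)

# Illustrative values: F(x) dominates F(y) and F(z); F(y) and F(z) are indifferent
Fx, Fy, Fz = [1.0, 1.0], [2.0, 3.0], [3.0, 2.0]
print(dominates(Fx, Fy), dominates(Fx, Fz), indifferent(Fy, Fz))  # True True True
```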


The decision vector $\mathbf{a} \in X$ is non-dominated with respect to the set $X$ if there is no decision vector $\mathbf{b} \in X$ such that $\mathbf{b} \prec \mathbf{a}$ [1], i.e. the decision vector $\mathbf{a}$ has no dominators with respect to the search space $X$.

The set $X' \subset X$ of decision vectors that are non-dominated with respect to a set $X$ is called a Pareto set, and the corresponding set of objective vectors is called a Pareto front [1].

The set of decision vectors that are non-dominated with respect to the entire search space is called the Pareto-optimal set, and the corresponding set of objective vectors is called the Pareto-optimal front [1].

Three objective vectors $\mathbf{F}(\mathbf{x})$, $\mathbf{F}(\mathbf{y})$ and $\mathbf{F}(\mathbf{z})$ of the corresponding decision vectors $\mathbf{x}$, $\mathbf{y}$ and $\mathbf{z}$ are illustrated in Figure 1(a): the decision vector $\mathbf{x}$ has no dominators and dominates the vectors $\mathbf{y}$ and $\mathbf{z}$, while the decision vectors $\mathbf{y}$ and $\mathbf{z}$ have one dominator each and are indifferent to each other. Figure 1(b) illustrates a set of objective vectors with the Pareto front (filled points).

B. Pareto Ranking

In multi-objective optimization the fitness value (the value that prescribes the optimality) of a particular decision vector can be estimated by its relation to the other decision vectors.

One method of computing fitness values for the decision vectors is to let them directly reflect the dominance relation: the number of dominators of each vector is assigned as its fitness value. This procedure is called Pareto ranking, and the assigned values are Pareto ranks. A lower Pareto rank means a better solution, and the best possible Pareto rank is equal to zero, which is assigned to decision vectors belonging to the Pareto set.
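A direct (quadratic-time) implementation of Pareto ranking, as a sketch in Python using the dominance test of Section II.A:

```python
import numpy as np

def dominates(Fa, Fb):
    Fa, Fb = np.asarray(Fa), np.asarray(Fb)
    return bool(np.all(Fa <= Fb) and np.any(Fa < Fb))

def pareto_ranks(objectives):
    """Assign to each objective vector the number of its dominators;
    rank 0 marks the non-dominated (Pareto set) members."""
    N = len(objectives)
    ranks = np.zeros(N, dtype=int)
    for i in range(N):
        for j in range(N):
            if i != j and dominates(objectives[j], objectives[i]):
                ranks[i] += 1
    return ranks

# Example: the first and third vectors have no dominators (rank 0)
print(pareto_ranks([[0.5, 0.5], [1.0, 1.0], [0.2, 2.0]]))  # [0 1 0]
```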

C. Performance Metrics

The notion of quality of a solution of a MOP is substantially more complex than for a single-objective optimization problem, because the solution of a MOP involves multiple objectives. The solution of a MOP (Pareto set) usually must have the following properties [13]:

• the distance from the obtained Pareto front to the Pareto-optimal front should be minimized;

• objective vectors in the obtained Pareto front must be uniformly distributed, i.e. distances between neighboring vectors must be similar (in some cases the Pareto front can consist of clusters of uniformly distributed decision vectors);

• the extent of the obtained Pareto front should be maximized: for each objective, a wide range of values should be covered by the non-dominated objective vectors.

Figure 1. Illustration of (a) three objective vectors and (b) the Pareto front.

We now introduce several frequently used metrics for estimating the quality of the obtained Pareto front, which will be used in the discussion of the results of our experimental investigation.

Size of the Pareto front is the number of non-dominated objective vectors in the obtained Pareto front.

Size of dominated space is the hyper-volume of a region made by the obtained Pareto front and the given reference point (Ref) [1].

Two examples of dominated space are shown in Figure 2. The Pareto front in Figure 2(a) has one more point and a better (more nearly uniform) distribution than the Pareto front in Figure 2(b); therefore the dominated space of the first one is larger.

Note that the hyper-volume value depends on the reference point.
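For two objectives the dominated space can be computed by sweeping the front points in order of the first objective and summing rectangles; a sketch (Python, assuming minimization, a mutually non-dominated front and a reference point dominated by every front point):

```python
def hypervolume_2d(front, ref):
    """Hyper-volume dominated by a 2-objective front (list of (f1, f2)
    points, minimization) with respect to the reference point ref."""
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in sorted(front):          # ascending f1, hence descending f2
        hv += (ref[0] - f1) * (prev_f2 - f2)
        prev_f2 = f2
    return hv

# Two front points and the reference point (1, 1): dominated area 0.36
print(hypervolume_2d([(0.2, 0.8), (0.6, 0.3)], ref=(1.0, 1.0)))
```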

Distance to the Pareto-optimal front gives the average distance from the obtained Pareto front $P'$ to the Pareto-optimal front $P$. We denote it by $D(P')$ and calculate it using the following formula [13]:

$D(P') = \frac{1}{|P'|} \sum_{\mathbf{a}' \in P'} \min \left\{ \| \mathbf{F}(\mathbf{a}') - \mathbf{F}(\mathbf{a}) \| : \mathbf{a} \in P \right\}.$

Pareto extent gives the maximum extent in each dimension of the objective vector to estimate the range over which the Pareto front spreads out. In the case of two objectives, this equals the distance between the two outer points of the Pareto front. In general this metric can be expressed, as given in [13], by

$E(P') = \sqrt{ \sum_{i=1}^{m} \max \left\{ | f_i(\mathbf{a}') - f_i(\mathbf{b}') | : \mathbf{a}', \mathbf{b}' \in P' \right\} },$

where m is the number of objectives.
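Both metrics are straightforward to compute once the objective vectors of the obtained front P′ and of a (sampled) Pareto-optimal front P are available; a sketch in Python (the function names are ours):

```python
import numpy as np

def distance_to_front(F_obtained, F_optimal):
    """D(P'): average, over the obtained front, of the minimum Euclidean
    distance to a sampled representation of the Pareto-optimal front."""
    F_obtained, F_optimal = np.asarray(F_obtained), np.asarray(F_optimal)
    dists = [np.min(np.linalg.norm(F_optimal - a, axis=1)) for a in F_obtained]
    return float(np.mean(dists))

def pareto_extent(F_obtained):
    """E(P'): spread metric following the formula above, combining the
    largest range covered in each objective."""
    F_obtained = np.asarray(F_obtained)
    ranges = F_obtained.max(axis=0) - F_obtained.min(axis=0)
    return float(np.sqrt(np.sum(ranges)))

front = [[0.1, 0.9], [0.5, 0.4], [0.9, 0.1]]
optimal = [[0.0, 1.0], [0.5, 0.5], [1.0, 0.0]]
print(distance_to_front(front, optimal), pareto_extent(front))
```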

D. Non-dominated Sorting Genetic Algorithm

The Non-dominated Sorting Genetic Algorithm (NSGA) was proposed in [10] and was one of the first MOEAs. Since then, NSGA has been applied to various problems [14, 15]. An updated version of the algorithm, NSGA-II, has been proposed in [12]. The algorithm starts with an initial parent population P consisting of N randomly generated decision vectors.

Figure 2. Illustration of dominated spaces of two Pareto fronts.


Binary tournament selection, recombination and mutation genetic operators are used to create a new child population Q (procedure make-new-pop in Algorithm 1). Usually the new population has the same size as the parent population P. Then the parent and child populations are combined into one population R. The elements of the combined population are sorted according to their Pareto ranks (procedure nondominated-sorting) and the first N elements are chosen as the new parent population for the next generation. When two or more decision vectors with the same rank compete for selection, a density estimation based on measuring the crowding distance [12] to the surrounding vectors of the same rank is used to choose the most promising solutions.

The NSGA-II algorithm is given in Algorithm 1.

III. MULTI-OBJECTIVE SINGLE AGENT STOCHASTIC SEARCH ALGORITHM

A. Single Agent Stochastic Search Algorithm

The Single Agent Stochastic Search (SASS) is a random local optimization algorithm. Like most random optimization algorithms, SASS is based on a single agent stochastic search strategy [16]. The neighbor point is calculated by adding a random vector $\xi$ to the point $s$ which has the best fitness value found so far. S. S. Rao and D. C. Karnop [17, 18] use a uniform random variable for selecting a new neighbor point, while J. Matyas utilized Gaussian perturbations [19]. F. J. Solis and J. B. Wets enhanced this approach by evaluating $s - \xi$ if the evaluation at $s + \xi$ does not improve the current value of the objective function [20]. This approach for choosing the neighbor point is used in the SASS algorithm, which is given in Algorithm 2.

The standard deviation of the random perturbation $\sigma$ specifies the size of the sphere that most likely contains the perturbation vector, and the bias $b$ locates the center of the sphere based on the direction of a past successful trial. The contraction and expansion of $\sigma$ are performed when the number of successes ($scnt$) or failures ($fcnt$) in selecting a neighbor point that decreases the value of the objective function is greater than the user-given constants $Scnt$ and $Fcnt$, respectively. The contraction ($ct$) and expansion ($ex$) constants, as well as the upper and lower bounds of the standard deviation ($\sigma_{sup}$ and $\sigma_{inf}$), are given by the user as input parameters.

Algorithm 1: NSGA-II
Input: P, itermax;
t := 0;
while t < itermax
    Q := make-new-pop(P);
    R := P ∪ Q;
    nondominated-sorting(R);
    P := R[0 : N];
    t := t + 1;
Output: P;
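A compact sketch of this elitist loop in Python is given below. It is not the authors' implementation: the variation operators (binary tournament on Pareto ranks, uniform crossover, Gaussian mutation) and the crowding distance computed over the whole combined population are simplifications chosen to keep the sketch short, and all function names are ours.

```python
import numpy as np

rng = np.random.default_rng(0)

def dominates(Fa, Fb):
    return bool(np.all(Fa <= Fb) and np.any(Fa < Fb))

def pareto_ranks(F):
    N = len(F)
    return np.array([sum(dominates(F[j], F[i]) for j in range(N) if j != i)
                     for i in range(N)])

def crowding_distance(F):
    """Crowding distance of each objective vector (computed over the whole
    set here for brevity; NSGA-II computes it within each rank group)."""
    F = np.asarray(F)
    N, m = F.shape
    dist = np.zeros(N)
    for k in range(m):
        order = np.argsort(F[:, k])
        dist[order[0]] = dist[order[-1]] = np.inf
        span = F[order[-1], k] - F[order[0], k]
        span = span if span > 0 else 1.0
        for idx in range(1, N - 1):
            dist[order[idx]] += (F[order[idx + 1], k] - F[order[idx - 1], k]) / span
    return dist

def make_new_pop(P, ranks, pc=0.8, pm=None):
    """Child population: binary tournament selection on ranks, uniform
    crossover with probability pc, Gaussian mutation with rate pm."""
    N, d = P.shape
    pm = 1.0 / d if pm is None else pm
    Q = np.empty_like(P)
    for i in range(N):
        a, b = rng.integers(N, size=2)
        p1 = P[a] if ranks[a] <= ranks[b] else P[b]
        a, b = rng.integers(N, size=2)
        p2 = P[a] if ranks[a] <= ranks[b] else P[b]
        mask = rng.random(d) < 0.5 if rng.random() < pc else np.zeros(d, bool)
        child = np.where(mask, p2, p1)
        child = child + (rng.random(d) < pm) * rng.normal(0.0, 0.1, d)
        Q[i] = np.clip(child, 0.0, 1.0)
    return Q

def nsga2(F, d, N=100, generations=100):
    """Elitist main loop of Algorithm 1."""
    P = rng.random((N, d))
    for _ in range(generations):
        FP = np.array([F(x) for x in P])
        Q = make_new_pop(P, pareto_ranks(FP))
        R = np.vstack([P, Q])
        FR = np.array([F(x) for x in R])
        ranks, crowd = pareto_ranks(FR), crowding_distance(FR)
        order = sorted(range(len(R)), key=lambda i: (ranks[i], -crowd[i]))
        P = R[np.array(order[:N])]        # keep the N best combined members
    return P

# Usage: P_final = nsga2(F, d=50), with F as in one of the test problems
```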

The values 0.4 and 0.2, which modify the bias value, and the constant values $ex = 2$, $ct = 0.5$, $Scnt = 5$, $Fcnt = 3$, $\sigma_{sup} = 1$, $\sigma_{inf} = 10^{-5}$ are taken from [20]. The stopping criterion is based on the maximum number of iterations [17].
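A minimal Python sketch of the SASS iteration of Algorithm 2, using the parameter values quoted above as defaults (f is a generic scalar objective; box constraints are not handled here):

```python
import numpy as np

def sass(f, s, iters=1000, ex=2.0, ct=0.5, Scnt=5, Fcnt=3,
         sigma_sup=1.0, sigma_inf=1e-5, seed=0):
    """Single Agent Stochastic Search local optimization (Algorithm 2)."""
    rng = np.random.default_rng(seed)
    s = np.asarray(s, dtype=float)
    b = np.zeros_like(s)
    sigma, scnt, fcnt = 1.0, 0, 0
    fs = f(s)
    for _ in range(iters):
        if scnt > Scnt:
            sigma *= ex                       # expand after repeated successes
        elif fcnt > Fcnt:
            sigma *= ct                       # contract after repeated failures
        elif sigma < sigma_inf:
            sigma = sigma_sup                 # reset a vanishing step size
        xi = rng.normal(b, sigma)             # xi ~ N(b, sigma * I)
        f_plus = f(s + xi)
        if f_plus < fs:                       # success at s + xi
            s, fs = s + xi, f_plus
            b = 0.2 * b + 0.4 * xi; scnt += 1; fcnt = 0
        else:
            f_minus = f(s - xi)
            if f_minus < fs:                  # success at the opposite point
                s, fs = s - xi, f_minus
                b = b - 0.4 * xi; scnt += 1; fcnt = 0
            else:                             # failure
                b = 0.5 * b; fcnt += 1; scnt = 0
    return s

# Usage: locally minimize a simple quadratic from a fixed starting point
print(sass(lambda x: float(np.sum(x ** 2)), np.ones(5), iters=500))
```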

B. Multi-Objective Single Agent Stochastic Search

The SASS algorithm has been successfully used in evolutionary algorithms for single-criterion optimization problems [21, 22]. We modified it to make it applicable to multi-objective optimization and call the new algorithm Multi-Objective Single Agent Stochastic Search (MOSASS).

The MOSASS algorithm starts with an initial decision vector $s$, an empty archive and a constant $limit$, which defines how many points can be stored in the archive. The other SASS parameters ($ex$, $ct$, $Scnt$, $Fcnt$, $\sigma_{sup}$, $\sigma_{inf}$, $iter$) are given as MOSASS input parameters as well.

As in SASS, the correction of the standard deviation $\sigma$ and the selection of a neighbor decision vector $s + \xi$ are performed. Once the objective function at the selected neighbor decision vector has been evaluated, the dominance relation between the vectors $s + \xi$ and $s$ is checked. If $s + \xi \prec s$, then the current vector $s$ is replaced by the new one $s + \xi$ and the algorithm continues to the next iteration. If the neighbor decision vector $s + \xi$ does not dominate the current decision vector $s$, but is indifferent to $s$ and has no dominators among the decision vectors stored in the archive, the decision vector $s + \xi$ is added to the archive as a non-dominated decision vector. If $s + \xi$ is dominated by $s$ or by at least one decision vector from the archive, then the neighbor decision vector $s + \xi$ is changed to the opposite one $s - \xi$, and the fitness checking procedure is repeated with it.

Algorithm 2: SASS
Input: s, ex, ct, Scnt, Fcnt, σ_sup, σ_inf, iter;
b := 0; k := 0; scnt := 0; fcnt := 0; σ := 1;
while k < iter
    σ := ex·σ,    if scnt > Scnt;
         ct·σ,    if fcnt > Fcnt;
         σ_sup,   if σ < σ_inf;
         σ,       otherwise;
    Generate a random vector ξ ~ N(b, σI);
    if f(s + ξ) < f(s)
        s := s + ξ;
        b := 0.2·b + 0.4·ξ; scnt := scnt + 1; fcnt := 0;
    else if f(s − ξ) < f(s)
        s := s − ξ;
        b := b − 0.4·ξ; scnt := scnt + 1; fcnt := 0;
    else
        b := 0.5·b; fcnt := fcnt + 1; scnt := 0;
    k := k + 1;
Output: s.


After every iteration all dominated points are removed from the archive, and it is verified whether the number of elements in the archive exceeds 2·limit. If the size of the archive exceeds 2·limit, the archive is reduced by removing the decision vectors with the lowest crowding distances to the surrounding decision vectors.

The detailed MOSASS algorithm is given in Algorithm 3.
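The essential difference from SASS is the acceptance rule for a trial point. A sketch of a single MOSASS trial step in Python is given below (F returns the objective vector, the archive is a plain list of decision vectors, and the crowding-based archive reduction is omitted for brevity); the bias and step-size bookkeeping around this step is the same as in SASS, with the returned success flag driving scnt and fcnt.

```python
import numpy as np

def dominates(Fa, Fb):
    return bool(np.all(Fa <= Fb) and np.any(Fa < Fb))

def mosass_trial(F, s, xi, archive):
    """One MOSASS trial: try s + xi and, if it is rejected, try s - xi.
    Returns the (possibly updated) point, the updated archive and a
    success flag."""
    Fs = F(s)
    for candidate in (s + xi, s - xi):
        Fc = F(candidate)
        if dominates(Fc, Fs):
            return candidate, archive, True        # move to the dominating point
        indifferent = not dominates(Fs, Fc)
        unbeaten = all(not dominates(F(a), Fc) for a in archive)
        if indifferent and unbeaten:
            # store the non-dominated point and drop archive members it dominates
            archive = [a for a in archive if not dominates(Fc, F(a))] + [candidate]
            return s, archive, True
    return s, archive, False                       # both trial points rejected
```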

C. Memetic NSGA+MOSASS Algorithm

In order to improve some performance metrics of the original NSGA-II algorithm, we developed a memetic algorithm by hybridizing it with our proposed MOSASS algorithm. After a number of NSGA-II generations, k selected decision vectors are locally optimized using a number of MOSASS iterations. The number of NSGA-II generations after which MOSASS is performed, the number of MOSASS iterations, and the value of k are given by the user.

All non-dominated decision vectors found so far (suppose there are K such vectors) are chosen for the local optimization. If K > k, then K − k random vectors are rejected. If K < k, then an additional k − K dominated decision vectors are randomly chosen from the rest of the population.
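A sketch of this selection step in Python (ranks are the Pareto ranks of Section II.B, i.e. the number of dominators; the function name is ours):

```python
import numpy as np

def select_for_local_search(population, ranks, k, rng=None):
    """Pick k decision vectors for MOSASS: all non-dominated vectors,
    randomly trimmed if there are more than k of them, or randomly
    topped up with dominated vectors if there are fewer."""
    rng = rng or np.random.default_rng()
    nondominated = [i for i, r in enumerate(ranks) if r == 0]
    dominated = [i for i, r in enumerate(ranks) if r > 0]
    if len(nondominated) > k:
        chosen = list(rng.choice(nondominated, size=k, replace=False))
    else:
        extra = rng.choice(dominated, size=k - len(nondominated), replace=False)
        chosen = nondominated + list(extra)
    return [population[i] for i in chosen]
```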

IV. EXPERIMENTAL INVESTIGATION

The proposed NSGA+MOSASS algorithm has been experimentally investigated by solving different multi-objective optimization problems. The results obtained with NSGA+MOSASS have been compared with those obtained with the original NSGA-II, implemented as given in the literature [12].

Both algorithms have been run for 25000 function evaluations. A set of well known test functions (Table I) has been used, and 30 independent runs have been performed for each of them. A crossover probability equal to 0.8 and a mutation rate equal to 1/d have been used for the genetic operations. The population size has been chosen equal to 100.

Local optimization has been performed after every 1000 function evaluations for 10 decision vectors. 10 MOSASS iterations have been performed for each decision vector.

The performance of the algorithms has been compared by the four metrics described in Section II.C. The reference point $Ref = (x, y)$ used for the hyper-volume calculation was chosen so that x is equal to the maximum abscissa and y to the maximum ordinate of the Pareto-optimal front of the respective test function.

The results of the experimental investigation are presented in Table II and Figure 3. The table contains the values of the four performance metrics (Pareto size, hyper-volume, distance to Pareto-optimal front and Pareto extent) obtained during the experimental investigation, as well as the time required for the computations. All values are averages of 30 independent runs.

Algorithm 3: MOSASS
Input: s, archive, limit, ex, ct, Scnt, Fcnt, σ_sup, σ_inf, iter;
b := 0; k := 0; scnt := 0; fcnt := 0; σ_0 := 1;
while k < iter
    σ_k := ex·σ_{k−1},   if scnt > Scnt;
           ct·σ_{k−1},   if fcnt > Fcnt;
           σ_sup,        if σ_{k−1} < σ_inf;
           σ_{k−1},      otherwise;
    Generate a random vector ξ ~ N(b, σ_k I);
    if (s + ξ) ≺ s
        s := s + ξ;
        b := 0.2·b + 0.4·ξ; scnt := scnt + 1; fcnt := 0;
    else if (s + ξ) ∼ s and ∄ s′ ∈ archive : s′ ≺ (s + ξ)
        archive := archive ∪ (s + ξ);
        b := 0.2·b + 0.4·ξ; scnt := scnt + 1; fcnt := 0;
    else if (s − ξ) ≺ s
        s := s − ξ;
        b := b − 0.4·ξ; scnt := scnt + 1; fcnt := 0;
    else if (s − ξ) ∼ s and ∄ s′ ∈ archive : s′ ≺ (s − ξ)
        archive := archive ∪ (s − ξ);
        b := b − 0.4·ξ; scnt := scnt + 1; fcnt := 0;
    else
        b := 0.5·b; fcnt := fcnt + 1; scnt := 0;
    if |archive| ≥ 2·limit
        Reduce the size of the archive to the limit value;
    k := k + 1;
Output: s, archive.

The standard deviations of each metric are also presented. For clarity, the improvement (in percent) of the performance metrics obtained using the NSGA+MOSASS algorithm is presented graphically in Figure 3.

Table II and Figure 3 show that the results obtained by NSGA+MOSASS are better than the results obtained with the original NSGA-II with respect to all metrics. The average number of points in the resulting Pareto front was increased by up to 40 % for all test functions, although the standard deviation increased too. The average hyper-volume of the obtained Pareto front was increased by up to 14 %, while its standard deviation was reduced for all test functions. NSGA+MOSASS was able to find Pareto fronts which are closer to the Pareto-optimal front: the average distance to the Pareto-optimal front was up to 68 % lower, so this performance metric was improved by up to 68 %. The Pareto extent was also increased and its standard deviation was reduced for all test functions except test function F4, for which the standard deviation increased slightly. The NSGA+MOSASS algorithm was up to 21 % faster for all test functions. This is due to fewer non-dominated sorting procedures being performed in NSGA+MOSASS during local optimization.


TABLE I. TEST FUNCTIONS USED DURING THE EXPERIMENTAL INVESTIGATION

F1 (n = 50): $f_1(\mathbf{x}) = x_1$; $f_2(\mathbf{x}) = g(\mathbf{x}) \left( 1 - \sqrt{f_1(\mathbf{x}) / g(\mathbf{x})} \right)$; $g(\mathbf{x}) = 1 + \frac{9}{n-1} \sum_{i=2}^{n} x_i$; $0 \le x_i \le 1$, $i = 1, \ldots, n$.

F2 (n = 50): $f_1(\mathbf{x}) = x_1$; $f_2(\mathbf{x}) = g(\mathbf{x}) \left( 1 - \left( f_1(\mathbf{x}) / g(\mathbf{x}) \right)^2 \right)$; $g(\mathbf{x}) = 1 + \frac{9}{n-1} \sum_{i=2}^{n} x_i$; $0 \le x_i \le 1$, $i = 1, \ldots, n$.

F3 (n = 50): $f_1(\mathbf{x}) = x_1$; $f_2(\mathbf{x}) = g(\mathbf{x}) \left( 1 - \sqrt{f_1(\mathbf{x}) / g(\mathbf{x})} - \frac{f_1(\mathbf{x})}{g(\mathbf{x})} \sin(10 \pi f_1(\mathbf{x})) \right)$; $g(\mathbf{x}) = 1 + \frac{9}{n-1} \sum_{i=2}^{n} x_i$; $0 \le x_i \le 1$, $i = 1, \ldots, n$.

F4 (n = 50): $f_1(\mathbf{x}) = 1 - e^{-4 x_1} \sin^6(6 \pi x_1)$; $f_2(\mathbf{x}) = g(\mathbf{x}) \left( 1 - \left( f_1(\mathbf{x}) / g(\mathbf{x}) \right)^2 \right)$; $g(\mathbf{x}) = 1 + 9 \left( \frac{\sum_{i=2}^{n} x_i}{n-1} \right)^{0.25}$; $0 \le x_i \le 1$, $i = 1, \ldots, n$.

TABLE II. RESULTS OF THE EXPERIMENTAL INVESTIGATION

Test problem | Algorithm   | Pareto size (avg / stdev) | Hyper-volume (avg / stdev) | Distance to Pareto-optimal (avg / stdev) | Pareto extent (avg / stdev) | Time (avg)
F1           | NSGA        | 84.3 / 6.67               | 0.624 / 0.0048             | 0.0251 / 0.0032                          | 1.36 / 0.0185               | 3.68
F1           | NSGA+MOSASS | 91.9 / 5.34               | 0.649 / 0.0026             | 0.0087 / 0.0017                          | 1.39 / 0.0102               | 2.97
F2           | NSGA        | 58.2 / 4.97               | 0.268 / 0.0063             | 0.0441 / 0.0049                          | 1.15 / 0.0353               | 3.40
F2           | NSGA+MOSASS | 81.4 / 9.95               | 0.306 / 0.0058             | 0.0141 / 0.0038                          | 1.27 / 0.0337               | 2.92
F3           | NSGA        | 55.5 / 5.64               | 0.742 / 0.0067             | 0.0114 / 0.0016                          | 1.89 / 0.0876               | 3.63
F3           | NSGA+MOSASS | 62.3 / 6.53               | 0.761 / 0.0031             | 0.0071 / 0.0009                          | 1.90 / 0.0307               | 2.86
F4           | NSGA        | 76.5 / 9.11               | 0.048 / 0.0019             | 0.2451 / 0.0101                          | 0.73 / 0.0015               | 9.28
F4           | NSGA+MOSASS | 82.0 / 9.52               | 0.052 / 0.0012             | 0.2308 / 0.0111                          | 0.75 / 0.0016               | 7.75

Figure 3. Improvement of the performance metrics (in %) obtained using the NSGA+MOSASS algorithm for test functions F1, F2, F3 and F4.


V. CONCLUSIONS

A hybrid multi-objective optimization algorithm has been proposed and investigated. The algorithm is based on NSGA-II and on a local multi-objective optimization algorithm developed by modifying the single-objective local optimization algorithm SASS. The advantages of the proposed algorithm have been evaluated by an experimental investigation. The results of the investigation show that the incorporation of the developed local multi-objective optimization algorithm into NSGA-II improves all quality metrics that have been measured.

ACKNOWLEDGEMENTS

This work has been funded by grant TIN2008-01117 from the Spanish Ministry of Science and Innovation and P08-TIC-3518 from the Junta de Andalucía, in part financed by the European Regional Development Fund (ERDF).

This work has also been funded by a grant (No. LSS-580000-1405) from the Agency for Science, Innovation and Technology (Eurostars project "Production Effectiveness Navigator", PEN E!6232).

The authors would like to thank COST Action IC0805 “Open European Network for High Performance Computing on Complex Environments”.

REFERENCES

[1] S. Mostaghim, "Multi-objective Evolutionary Algorithms: Data Structures, Convergence and Diversity," PhD Thesis, Universität Paderborn, 2004.

[2] J.J. Durillo, A.J. Nebro, C.A.C. Coello, J. García-Nieto, F. Luna, E. Alba, “A Study of Multiobjective Metaheuristics When Solving Parameter Scalable Problems,” IEEE Transactions on Evolutionary Computation, vol. 14(4), p. 618-635, 2010, ISSN: 1089-778X.

[3] W. Stadler, “A survey of multicriteria optimization or the vector maximum problem,” Journal of Optimization Theory and Applications, vol. 29 (1) p. 1–52, 1979.

[4] J Fliege, B.F. Svaiter, “Steepest descent methods for multicriteria optimization,” Mathematical Methods of Operations Research, vol. 51(3), p. 479–494, 2000.

[5] C. Hillermeier, “Nonlinear Multiobjective Optimization-A Generalized Homotopy Approach,” Birkhauser, 2001.

[6] K. Deb, “Multi-Objective Optimization using Evolutionary Algorithms,” John Wiley & Sons, Chichester, UK, 2001. ISBN 0-471-87339-X.

[7] A. Zhou, B. Y. Qu, H. Li, S. Z. Zhao, P. N. Suganthan, Q. Zhang, "Multiobjective Evolutionary Algorithms: A Survey of the State of the Art," Swarm and Evolutionary Computation, vol. 1, p. 32-49, 2011.

[8] H. Ishibuchi, T. Murata, “A multiobjective genetic local search algorithm and its application to flowshop scheduling,” IEEE Transactions on Systems, Man, and Cybernetics Part C: Applications and Reviews, vol. 28 (3), p 392–403, 1998.

[9] A. Lara, G. Sanchez, C.A.C. Coello, O. Schutze, “HCS: A New Local Search Strategy for Memetic Multi-objective Evolutionary Algorithms,” IEEE transactions on evolutionary computation, vol 14(1), 2010.

[10] N. Srinivas, K. Deb, “Multi-Objective function optimization using non-dominated sorting genetic algorithms,” Evolutionary Computation, vol. 2(3), p. 221–248, 1995.

[11] I. F. Sbalzarini , S. Müller , P. Koumoutsakos, “Multiobjective Optimization Using Evolutionary Algorithms,” Center for Turbulence Research, Proceedings of the Summer Program 2000, p. 63-74, 2000.

[12] K. Deb, S. Agrawal, A. Pratap, T. Meyarivan, “A Fast Elitist Non-Dominated Sorting Genetic Algorithm for Multi-Objective Optimization: NSGA-II,” Lecture Notes in Computer Science, vol. 1917/2000, p. 849-858, 2000.

[13] E. Zitzler and K. Deb and L. Thiele, “Comparison of Multiobjective Evolutionary Algorithms: Empirical Results,” Evolutionary Computation, vol. 8(2), p. 173-195, 2000.

[14] K. Mitra, K. Deb, S. K. Gupta, “Multiobjective dynamic optimization of an industrial Nylon 6 semibatch reactor using genetic algorithms,” Journal of Applied Polymer Science, vol. 69(1), p. 69–87, 1998.

[15] D. S. Weile, E. Michielssen, D. E. Goldberg, “Genetic algorithm design of Pareto-optimal broad band microwave absorbers,” IEEE Transactions on Electromagnetic Compatibility, vol. 38(4), 1996.

[16] L. G. Casado, I. Garcia, P. G. Szabo, T. Csendes, “Packing Equal Circles in a Square II. – New Results for Up to 100 Circles Using the TAMSASS-PECS Algorithm,” New Trends in Equilibrium Systems, Boston, 1-9, 2000.

[17] S. S. Rao, "Optimization Theory and Applications," John Wiley and Sons, New York, 1978.

[18] D. C. Karnop, “Random Search techniques for optimization problems,” Automatica, vol. 1, p. 111-121, 1963.

[19] J. Matyas, "Random optimization," Automation and Remote Control, vol. 26, p. 244-251, 1965.

[20] F. J. Solis, J. B. Wets, “Minimization by Random Search Techniques,” Math of Operations Research, vol. 6(1), p. 19-50, 1981.

[21] P. M. Ortigosa, I. Garcia, M. Jelasity, “Reliability and performance of UEGO, a clustering-based global optimizer,” Journal of Global Optimization, vol. 19 (3), p 265-289, 2001.

[22] P. M. Ortigosa, J. L. Redondo, I. Garcia, J. J. Fernandez, “A population global optimization algorithm to solve the image alignment problem in electron crystallography,” Journal of Global Optimization, vol. 37 (4), p. 527-539.
