A Comparison of Multiobjective Evolutionary Algorithms
Manish Khare
School of IT, Centre for Development of Advanced Computing, Noida, India.
[email protected]

Tushar Patnaik
School of IT, Centre for Development of Advanced Computing, Noida, India.

Ashish Khare
Department of Electronics and Communication, University of Allahabad, India.
In this paper, a systematic comparison of various evolutionary approaches to multiobjective optimization using six carefully chosen test functions is given. Each test function involves a particular feature that is known to cause difficulty in the evolutionary optimization process, mainly in converging to the Pareto-optimal front (e.g., multimodality and deception). By investigating these different problem features separately, it is possible to predict the kind of problems to which a certain technique is or is not well suited. In contrast to what was suspected beforehand, the experimental results indicate a hierarchy of the algorithms under consideration.
Keywords: evolutionary algorithms, multiobjective optimization, Pareto optimality, test functions.
The term evolutionary algorithm (EA) stands for a
class of stochastic optimization methods that
simulate the process of natural evolution. The origin
of EAs can be traced back to the late 1950s, and
since the 1970s several evolutionary methodologies
have been proposed, mainly genetic algorithms,
evolutionary programming, and evolution strategies.
All of these approaches operate on a set of candidate
solutions. Using strong simplifications, this set is
subsequently modified by the two basic principles:
selection and variation. While selection mimics the
competition for reproduction and resources among
living beings, the other principle, variation imitates
the natural capability of creating new living beings
by means of recombination and mutation [1]. The
paper is structured as follows: Section 2 introduces
key concepts of multiobjective optimization and
defines the terminology used in this paper
mathematically. Section 3 gives a brief overview of
the multiobjective EAs. Section 4 describes the test functions, their construction, and their choice. Section 5 gives experimental results and a comparison of the different evolutionary algorithm approaches. Section 6 gives the result analysis and conclusion.
Optimization problems involving multiple, conflicting objectives are often approached by aggregating the objectives into a scalar function and solving the resulting single-objective optimization problem. In
contrast, in this study, we are concerned with finding
a set of optimal trade-offs, the so-called Pareto-
optimal set. In the following, we formalize this well-known concept and also define the difference
between local and global Pareto-optimal sets.
A multiobjective search space is partially ordered in
the sense that two arbitrary solutions are related to
each other in two possible ways: either one
dominates the other or neither dominates [1].
Let us consider, without loss of generality, a multiobjective minimization problem with m decision variables (parameters) and n objectives:

  Minimize  y = f(x) = (f1(x), …, fn(x))
  where     x = (x1, …, xm) ∈ X
            y = (y1, …, yn) ∈ Y    (1)

and where x is called the decision vector, X the parameter space, y the objective vector, and Y the objective space. A decision vector a ∈ X is said to dominate a decision vector b ∈ X (written a ≻ b) if and only if

  ∀ i ∈ {1, …, n} : fi(a) ≤ fi(b)  and
  ∃ j ∈ {1, …, n} : fj(a) < fj(b)    (2)
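The dominance relation of Equation 2 translates directly into code. The following is a minimal Python sketch, assuming minimization as in Equation 1:

```python
def dominates(a, b):
    """Return True if objective vector a dominates b (minimization):
    a is no worse than b in every objective and strictly better in
    at least one (Equation 2)."""
    return (all(ai <= bi for ai, bi in zip(a, b))
            and any(ai < bi for ai, bi in zip(a, b)))

# (1, 2) dominates (2, 2); neither of (1, 3) and (3, 1) dominates the other.
```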
National Conference on Emerging Trends in Software & Networking Technologies, April 17-18, 2009 (ETSNT'09)
Amity Institute of Information Technology, Amity University, Uttar Pradesh 205
-
7/31/2019 NC. 1. ETSNT 2009 - A Comparison of Multiobjective Evolutionary Algorithms
2/5
Let a ∈ X be an arbitrary decision vector.
1. The decision vector a is said to be nondominated regarding a set X′ ⊆ X if and only if there is no vector in X′ which dominates a; formally

  ∄ a′ ∈ X′ : a′ ≻ a    (3)

If it is clear from the context which set X′ is meant, we simply leave it out.
2. The decision vector a is Pareto-optimal if and only if a is nondominated regarding X.
Pareto-optimal decision vectors cannot be improved
in any objective without causing a degradation in at
least one other objective; they represent, in our
terminology, globally optimal solutions. However,
analogous to single-objective optimization problems,
there may also be local optima which constitute a
nondominated set within a certain neighborhood.
This corresponds to the concepts of global and local
Pareto-optimal sets introduced in [2].
Consider a set of decision vectors X′ ⊆ X.
1. The set X′ is denoted as a local Pareto-optimal set if and only if

  ∀ a ∈ X′ : ∄ a′ ∈ X : a′ ≻ a  ∧  ||a′ − a|| < ε  ∧  ||f(a′) − f(a)|| < δ    (4)

where ||·|| is a corresponding distance metric and ε > 0, δ > 0.
2. The set X′ is called a global Pareto-optimal set if and only if

  ∀ a ∈ X′ : ∄ a′ ∈ X : a′ ≻ a    (5)
Note that a global Pareto-optimal set does not necessarily contain all Pareto-optimal solutions. If we refer to the entirety of the Pareto-optimal solutions, we simply write Pareto-optimal set; the corresponding set of objective vectors is denoted as the Pareto-optimal front.
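Definition 2 suggests a straightforward (if quadratic-time) filter for extracting the nondominated set from a finite sample of objective vectors. This Python sketch is illustrative only, not an efficient implementation:

```python
def nondominated(points):
    """Return the subset of objective vectors not dominated by any
    other vector in the sample (minimization). O(n^2) filter."""
    def dominates(a, b):
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# (3, 3) is dominated by (2, 2); (4, 4) is dominated by every front point.
front = nondominated([(1, 4), (2, 2), (4, 1), (3, 3), (4, 4)])
```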
Two major problems must be addressed when an evolutionary algorithm is applied to multiobjective optimization [4]:
1. How to accomplish fitness assignment and selection, respectively, in order to guide the search towards the Pareto-optimal set.
2. How to maintain a diverse population in order to prevent premature convergence and achieve a well-distributed trade-off front.
Often, different approaches are classified with regard
to the first issue, where one can distinguish between
criterion selection, aggregation selection, and Pareto
selection. Methods performing criterion selection
switch between the objectives during the selection
phase. Each time an individual is chosen for
reproduction, potentially a different objective will
decide which member of the population will be
copied into the mating pool. Aggregation selection is
based on the traditional approaches to multiobjective
optimization where the multiple objectives are
combined into a parameterized single-objective function. The parameters of the resulting function are systematically varied during the same run in order to find a set of Pareto-optimal solutions.
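Aggregation selection can be sketched in a few lines. The two objectives below are hypothetical stand-ins (not from the paper), and a crude grid search stands in for the evolutionary search; the point is only how varying the weight parameter picks out different trade-off points:

```python
def f1(x):
    # Hypothetical objective 1: squared distance from 0.
    return x * x

def f2(x):
    # Hypothetical objective 2: squared distance from 2.
    return (x - 2) ** 2

def aggregate(x, w):
    """Weighted-sum scalarization: the parameterized single-objective
    function that aggregation selection optimizes."""
    return w * f1(x) + (1 - w) * f2(x)

# Systematically varying w traces out different trade-off points.
xs = [i / 100 for i in range(201)]  # grid over [0, 2]
best_per_weight = {w: min(xs, key=lambda x: aggregate(x, w))
                   for w in (0.1, 0.5, 0.9)}
```

Note that a weighted sum, however the weights are varied, cannot reach solutions on nonconvex portions of the trade-off front, which is one reason the nonconvex test function below is interesting.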
Deb [2] has identified several features that may cause
difficulties for multiobjective EAs in
1) converging to the Pareto-optimal front and
2) maintaining diversity within the population.
Concerning the first issue, multimodality, deception, and isolated optima are well-known problem areas in single-objective evolutionary optimization. The second issue is important in order to achieve a well-distributed nondominated front. However, certain characteristics of the Pareto-optimal front may prevent an EA from finding diverse Pareto-optimal solutions: convexity or nonconvexity, discreteness, and nonuniformity. For each of the six problem
features mentioned, a corresponding test function is
constructed following the guidelines in Deb [2]. We
thereby restrict ourselves to only two objectives in
order to investigate the simplest case first. In our
opinion, two objectives are sufficient to reflect
essential aspects of multiobjective optimization.
Moreover, we do not consider maximization or
mixed minimization/maximization problems.
  Minimize    T(x) = (f1(x1), f2(x))
  subject to  f2(x) = g(x2, …, xm) · h(f1(x1), g(x2, …, xm))
  where       x = (x1, x2, …, xm)    (6)

Here f1 is a function of the first decision variable only, g is a function of the remaining m − 1 variables, and h is a function of the values of f1 and g. The test functions differ in these three functions as well as in the number of variables m and in the values the variables may take.
DEFINITION 4 [2]: Six test functions T1, …, T6 follow the scheme given in Equation 6.

The test function T1 has a convex Pareto-optimal front:

  f1(x1) = x1
  g(x2, …, xm) = 1 + 9 · (Σ_{i=2..m} xi) / (m − 1)
  h(f1, g) = 1 − √(f1/g)    (7)
where m = 30 and xi ∈ [0, 1]. The Pareto-optimal front is formed with g = 1.
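Equation 7 can be evaluated in a few lines. A minimal Python sketch of T1, with m taken from the length of the input vector:

```python
import math

def t1(x):
    """Test function T1 (convex front), per Equation 7."""
    f1 = x[0]
    g = 1 + 9 * sum(x[1:]) / (len(x) - 1)
    h = 1 - math.sqrt(f1 / g)
    return f1, g * h

# With x2..xm = 0 we get g = 1, so the Pareto-optimal front is
# f2 = 1 - sqrt(f1).
```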
The test function T2 is the nonconvex counterpart to T1:

  f1(x1) = x1
  g(x2, …, xm) = 1 + 9 · (Σ_{i=2..m} xi) / (m − 1)
  h(f1, g) = 1 − (f1/g)^2    (8)

where m = 30 and xi ∈ [0, 1]. The Pareto-optimal front is formed with g = 1.
The test function T3 represents the discreteness feature: its Pareto-optimal front consists of several noncontiguous convex parts.

  f1(x1) = x1
  g(x2, …, xm) = 1 + 9 · (Σ_{i=2..m} xi) / (m − 1)
  h(f1, g) = 1 − √(f1/g) − (f1/g) · sin(10π f1)    (9)

where m = 30 and xi ∈ [0, 1]. The Pareto-optimal front is formed with g = 1.
The introduction of the sine function in h causes
discontinuity in the objective space.
The test function T4 contains 21^9 local Pareto-optimal sets and therefore tests the EA's ability to deal with multimodality.

  f1(x1) = x1
  g(x2, …, xm) = 1 + 10(m − 1) + Σ_{i=2..m} (xi^2 − 10 cos(4π xi))
  h(f1, g) = 1 − √(f1/g)    (10)

where m = 10, x1 ∈ [0, 1], and x2, …, xm ∈ [−5, 5]. The global Pareto-optimal front is formed with g = 1, the best local Pareto-optimal front with g = 1.25.
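The multimodal g of Equation 10 is Rastrigin-like; a minimal Python sketch of T4:

```python
import math

def t4(x):
    """Test function T4 (Equation 10): x[0] in [0,1], x[1:] in [-5,5].
    The Rastrigin-like g term creates many local Pareto-optimal fronts."""
    f1 = x[0]
    g = 1 + 10 * (len(x) - 1) + sum(
        xi ** 2 - 10 * math.cos(4 * math.pi * xi) for xi in x[1:]
    )
    return f1, g * (1 - math.sqrt(f1 / g))

# With x2..xm = 0 every cosine term contributes -10, so g = 1 and the
# global front f2 = 1 - sqrt(f1) is reached.
```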
The test function T5 describes a deceptive problem and distinguishes itself from the other test functions in that xi represents a binary string:

  f1(x1) = 1 + u(x1)
  g(x2, …, xm) = Σ_{i=2..m} v(u(xi))
  h(f1, g) = 1/f1    (11)

where u(xi) gives the number of ones in the bit vector xi (unitation), and

  v(u(xi)) = 2 + u(xi), if u(xi) < 5
             1,         if u(xi) = 5

and m = 11, x1 ∈ {0, 1}^30, and x2, …, xm ∈ {0, 1}^5. The true Pareto-optimal front is formed with g(x) = 10, while the deceptive Pareto-optimal front is represented by the solutions for which g(x) = 11. The global Pareto-optimal front as well as the local ones are convex.
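Because T5 operates on bit strings, its evaluation reduces to counting ones. A minimal Python sketch of Equation 11, with bit strings represented as lists of 0/1 (a representation choice, not prescribed by the paper):

```python
def unitation(bits):
    """u(x): number of ones in a bit string."""
    return sum(bits)

def v(u):
    """Deceptive subfunction of Equation 11 for 5-bit substrings."""
    return 2 + u if u < 5 else 1

def t5(x1, rest):
    """Test function T5: x1 is a 30-bit string, rest holds the ten
    5-bit strings x2..x11."""
    f1 = 1 + unitation(x1)
    g = sum(v(unitation(xi)) for xi in rest)
    return f1, g * (1.0 / f1)

# All-ones substrings give v = 1 each, so g = 10 (the true front);
# the deceptive basin u < 5 pulls the search towards larger g.
```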
The test function T6 includes two difficulties caused by the nonuniformity of the search space: first, the Pareto-optimal solutions are nonuniformly distributed along the global Pareto front (the front is biased towards solutions for which f1(x) is near one); second, the density of the solutions is lowest near the Pareto-optimal front and highest away from the front:

  f1(x1) = 1 − exp(−4 x1) · sin^6(6π x1)
  g(x2, …, xm) = 1 + 9 · ((Σ_{i=2..m} xi) / (m − 1))^0.25
  h(f1, g) = 1 − (f1/g)^2    (12)

where m = 10 and xi ∈ [0, 1]. The Pareto-optimal front is formed with g(x) = 1 and is nonconvex.
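For completeness, a minimal Python sketch of Equation 12:

```python
import math

def t6(x):
    """Test function T6 (Equation 12): biased f1 distribution and a
    nonconvex global front."""
    f1 = 1 - math.exp(-4 * x[0]) * math.sin(6 * math.pi * x[0]) ** 6
    g = 1 + 9 * (sum(x[1:]) / (len(x) - 1)) ** 0.25
    return f1, g * (1 - (f1 / g) ** 2)

# With x2..xm = 0, g = 1 and the front is f2 = 1 - f1^2 (nonconvex).
```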
We compare six algorithms on the six test functions described in the previous section.
1. Non-dominated Sorting Genetic Algorithm
(NSGA) [3].
2. Elitist Non-dominated Sorting Genetic
Algorithm (NSGA II) [4].
3. Strength Pareto Evolutionary Approach (SPEA) [5].
4. Improved Strength Pareto Evolutionary
Approach (SPEA2) [6].
5. Pareto Archived Evolution Strategy (PAES) [7].
6. Pareto Envelope-based Selection Algorithm (PESA) [8].
Independent of the algorithm and the test function,
each simulation run was carried out using the
following parameters:
Number of generations: 250
Population size: 100
Number of objectives: 2
Crossover rate: 0.9
Mutation rate: 0.03
Niching parameter: 0.48862
Domination pressure tdom: 10
In Figs. 1-6, the nondominated fronts achieved by the different algorithms are visualized. Per algorithm and
test function, the outcomes of the first five runs were unified, and then the dominated solutions were removed from the union set; the remaining points are plotted in the figures. Also shown are the Pareto-optimal fronts (lower curves), as well as additional reference curves (upper curves). The latter curves allow a more precise evaluation of the obtained trade-off fronts and were calculated by adding 0.1 · |max{f2(x)} − min{f2(x)}| to the f2 values of the Pareto-optimal points.
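The reference-curve construction is a one-line shift of the Pareto-optimal f2 values; a minimal Python sketch (the helper name is ours, not from the paper):

```python
def reference_curve(front_f2, offset_factor=0.1):
    """Shift Pareto-optimal f2 values upward by
    offset_factor * |max f2 - min f2|, producing the upper reference
    curve used to judge the obtained trade-off fronts."""
    span = abs(max(front_f2) - min(front_f2))
    return [f2 + offset_factor * span for f2 in front_f2]

# For a front spanning f2 in [0, 1], the reference curve is f2 + 0.1.
curve = reference_curve([0.0, 0.5, 1.0])
```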
[Figure: nondominated fronts, f2 vs. f1, for NSGA2, NSGA, SPEA, SPEA2, PAES, and PESA.]
Fig 1. Test function T1 (convex).
[Figure: nondominated fronts, f2 vs. f1, for NSGA2, NSGA, SPEA, SPEA2, PAES, and PESA.]
Fig 2. Test function T2 (nonconvex).
[Figure: nondominated fronts, f2 vs. f1, for NSGA2, NSGA, SPEA, SPEA2, PAES, and PESA.]
Fig 3. Test function T3 (discrete).
[Figure: nondominated fronts, f2 vs. f1, for NSGA, NSGA2, SPEA, SPEA2, PAES, and PESA.]
Fig 4. Test function T4 (multimodal).
[Figure: nondominated fronts, f2 vs. f1, for NSGA, NSGA2, SPEA, SPEA2, PAES, and PESA.]
Fig 5. Test function T5 (deceptive).
[Figure: nondominated fronts, f2 vs. f1, for NSGA2, NSGA, SPEA, SPEA2, PAES, and PESA.]
Fig 6. Test function T6 (nonuniform).
We have carried out a systematic comparison of
several multiobjective EAs on six different test
functions. Major results are:
Each test function requires the choice of three functions, each of which controls a particular aspect of difficulty for a multiobjective GA. One function, f1, tests an algorithm's ability to handle difficulties along the Pareto-optimal region; the function g tests its ability to handle difficulties lateral to the Pareto-optimal region; and the function h tests its ability to handle difficulties arising from the different shapes of the Pareto-optimal region.
It has been seen that, for the chosen test problems and parameter settings, NSGA-II and PESA outperform the other multiobjective evolutionary algorithms. NSGA-II and SPEA2 both follow an elitism mechanism to prevent the loss of nondominated solutions, but NSGA-II still gives better results than SPEA, as NSGA-II does not require specifying the external population size that SPEA requires. If a large external population is used, there is a danger of the external population being overcrowded with nondominated solutions; on the other hand, if a small external population is used, the effect of elitism is lost. NSGA-II, SPEA, and SPEA2 give better results than NSGA, as NSGA lacks the elitism property that prevents the loss of nondominated solutions. PAES and PESA give better results than NSGA, NSGA-II, SPEA, and SPEA2 on four test functions, T1 (convex), T2 (nonconvex), T3 (discrete), and T6 (nonuniform), and worse results than the other methods on T4 (multimodal) and T5 (deceptive). However, PAES and PESA have one big disadvantage: these two algorithms give good results only with a very large number of iterations (20,000-25,000), whereas the other four algorithms (NSGA, NSGA-II, SPEA, SPEA2) work properly with few iterations (250). SPEA2 gives better performance than SPEA, but not better performance than all the other algorithms (NSGA, NSGA-II, PAES, and PESA).
Among the considered test functions, T4 (multimodal) and T5 (deceptive) seem to be the hardest problems, since none of the algorithms was able to evolve a global Pareto-optimal set. Finally, it can be observed that a biased search space together with a nonuniformly represented Pareto-optimal front (T6) makes it difficult for the MOEAs to evolve a well-distributed nondominated set.
I would like to acknowledge Dr. Alok Singh, Reader, University of Hyderabad, and Dr. P. R. Gupta, Head, School of IT, CDAC, Noida, for the help and guidance provided by them from time to time. We also wish to thank the Amity University staff for publishing this paper in the conference.
[1]. Eckart Zitzler (1999): Evolutionary Algorithms for Multiobjective Optimization: Methods and Applications. Ph.D. Thesis, Swiss Federal Institute of Technology (ETH), Zurich.
[2]. Deb, K. (1999): Multi-objective genetic algorithms: Problem difficulties and construction of test problems. Evolutionary Computation, 7(3): 205-230.
[3]. N. Srinivas & Kalyanmoy Deb (1994): Multiobjective Optimization Using Nondominated Sorting in Genetic Algorithms. Evolutionary Computation, 2(3): 221-248.
[4]. Kalyanmoy Deb, Samir Agarwal, Amrit Pratap & T. Meyarivan (2000): A Fast Elitist Non-Dominated Sorting Genetic Algorithm for Multi-Objective Optimization: NSGA-II. Indian Institute of Technology, Kanpur: Kanpur Genetic Algorithms Laboratory (KanGAL) Report No. 200001.
[5]. Eckart Zitzler & Lothar Thiele (1998): An Evolutionary Algorithm for Multiobjective Optimization: The Strength Pareto Approach. Technical Report 43, Zurich, Switzerland: Computer Engineering and Networks Laboratory (TIK), Swiss Federal Institute of Technology (ETH).
[6]. Eckart Zitzler, Marco Laumanns & Lothar Thiele (2001): SPEA2: Improving the Strength Pareto Evolutionary Algorithm. Technical Report 103, Zurich, Switzerland: Computer Engineering and Networks Laboratory (TIK), Swiss Federal Institute of Technology (ETH).
[7]. J. D. Knowles & D. W. Corne (2000): Approximating the Nondominated Front Using the Pareto Archived Evolution Strategy. Evolutionary Computation, 8(2): 149-172.
[8]. D. W. Corne, J. D. Knowles & M. J. Oates (2000): The Pareto Envelope-Based Selection Algorithm for Multiobjective Optimization. In Proceedings of the Sixth International Conference on Parallel Problem Solving from Nature (PPSN VI): 839-848.