
International Journal of Computational Intelligence Research. ISSN 0973-1873, Vol. 2, No. 3 (2006), pp. 287–308. © Research India Publications. http://www.ijcir.info

Multi-Objective Particle Swarm Optimizers: A Survey of the State-of-the-Art

Margarita Reyes-Sierra and Carlos A. Coello Coello

CINVESTAV-IPN (Evolutionary Computation Group), Electrical Engineering Department, Computer Science Section, Av. IPN No. 2508, Col. San Pedro Zacatenco, México D.F. 07300, MÉXICO

[email protected], [email protected]

Abstract: The success of the Particle Swarm Optimization (PSO) algorithm as a single-objective optimizer (mainly when dealing with continuous search spaces) has motivated researchers to extend the use of this bio-inspired technique to other areas. One of them is multi-objective optimization. Despite the fact that the first proposal of a Multi-Objective Particle Swarm Optimizer (MOPSO) is over six years old, a considerable number of other algorithms have been proposed since then. This paper presents a comprehensive review of the various MOPSOs reported in the specialized literature. As part of this review, we include a classification of the approaches, and we identify the main features of each proposal. In the last part of the paper, we list some of the topics within this field that we consider promising areas of future research.

I. Introduction

Optimization problems that have more than one objective function are rather common in every field or area of knowledge. In such problems, the objectives to be optimized are normally in conflict with each other, which means that there is no single solution to these problems. Instead, we aim to find good “trade-off” solutions that represent the best possible compromises among the objectives.

Particle Swarm Optimization (PSO) is a heuristic search technique (which is considered an evolutionary algorithm by its authors [18]) that simulates the movements of a flock of birds aiming to find food. The relative simplicity of PSO and the fact that it is a population-based technique have made it a natural candidate to be extended for multi-objective optimization.

Moore and Chapman proposed the first extension of the PSO strategy for solving multi-objective problems in an unpublished manuscript from 1999¹ [41]. After this early attempt, a great interest in extending PSO arose among researchers but, interestingly, the next proposal was not published until 2002. Nevertheless, there are currently over twenty-five different proposals of MOPSOs reported in the specialized literature. This paper provides the first survey of this work, attempting to classify these proposals and to delineate some of the potential research paths that could be followed in the future by researchers in this area.

¹ This paper may be found in the EMOO repository located at: http://delta.cs.cinvestav.mx/˜ccoello/EMOO/

The remainder of this paper is organized as follows. In Section II, we provide some basic concepts from multi-objective optimization required to make the paper self-contained. Section III presents an introduction to the PSO strategy, and Section IV presents a brief discussion about extending the PSO strategy for solving multi-objective problems. A complete review of the MOPSO approaches is provided in Section V. We provide a brief discussion about the convergence properties of PSO and MOPSO in Section VI. In Section VII, possible paths of future research are discussed and, finally, we present our conclusions in Section VIII.

II. Basic Concepts

We are interested in solving problems of the type²:

minimize f(x) := [f1(x), f2(x), ..., fk(x)]   (1)

subject to:

gi(x) ≤ 0,  i = 1, 2, ..., m   (2)

hi(x) = 0,  i = 1, 2, ..., p   (3)

where x = [x1, x2, ..., xn]^T is the vector of decision variables, fi : ℝⁿ → ℝ, i = 1, ..., k, are the objective functions and gi, hj : ℝⁿ → ℝ, i = 1, ..., m, j = 1, ..., p, are the constraint functions of the problem.

² Without loss of generality, we will assume only minimization problems.


Figure 1: Dominance relation in a bi-objective space (objective axes f1 and f2; the marked points are the dominated solutions).

To describe the concept of optimality in which we are interested, we next introduce a few definitions.

Definition 1. Given two vectors x, y ∈ ℝᵏ, we say that x ≤ y if xi ≤ yi for i = 1, ..., k, and that x dominates y (denoted by x ≺ y) if x ≤ y and x ≠ y.

Figure 1 shows a particular case of the dominance relation in the presence of two objective functions.

Definition 2. We say that a vector of decision variables x ∈ X ⊂ ℝⁿ is nondominated with respect to X if there does not exist another x′ ∈ X such that f(x′) ≺ f(x).

Definition 3. We say that a vector of decision variables x* ∈ F ⊂ ℝⁿ (F is the feasible region) is Pareto-optimal if it is nondominated with respect to F.

Definition 4. The Pareto Optimal Set P* is defined by:

P* = {x ∈ F | x is Pareto-optimal}

Definition 5. The Pareto Front PF* is defined by:

PF* = {f(x) ∈ ℝᵏ | x ∈ P*}

Figure 2 shows a particular case of the Pareto front in the presence of two objective functions.

We thus wish to determine the Pareto optimal set from the set F of all the decision variable vectors that satisfy (2) and (3). Note, however, that in practice not all of the Pareto optimal set is normally desirable (e.g., it may not be desirable to have different solutions that map to the same values in objective function space) or achievable.
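As an illustration (our own sketch, not code from the paper), Definitions 1 and 2 translate directly into a dominance test and a nondominance filter; the function names are ours:

```python
def dominates(fx, fy):
    # fx dominates fy (minimization, Definition 1): fx is no worse in every
    # objective and strictly better in at least one.
    return all(a <= b for a, b in zip(fx, fy)) and any(a < b for a, b in zip(fx, fy))

def nondominated(front):
    # Keep only the objective vectors that no other vector dominates (Definition 2).
    return [p for p in front if not any(dominates(q, p) for q in front)]
```

Note that a vector never dominates itself (the strict-improvement condition fails), so the filter can compare each vector against the whole set.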

III. Particle Swarm Optimization

James Kennedy and Russell C. Eberhart [30] originally proposed the PSO algorithm for optimization. PSO is a population-based search algorithm based on the simulation of the social behavior of birds within a flock. Although originally adopted for balancing weights in neural networks [17], PSO soon became a very popular global optimizer, mainly in problems in which the decision variables are real numbers³ [32, 19].

Figure 2: The Pareto front of a set of solutions in a two-objective space (axes f1 and f2; the plot distinguishes the Pareto front solutions from the dominated solutions).

According to Angeline [3], we can make two main distinctions between PSO and an evolutionary algorithm:

1. Evolutionary algorithms rely on three mechanisms in their processing: parent representation, selection of individuals and the fine tuning of their parameters. In contrast, PSO only relies on two mechanisms, since PSO does not adopt an explicit selection function. The absence of a selection mechanism in PSO is compensated by the use of leaders to guide the search. However, there is no notion of offspring generation in PSO as with evolutionary algorithms.

2. A second difference between evolutionary algorithms and PSO has to do with the way in which the individuals are manipulated. PSO uses an operator that sets the velocity of a particle to a particular direction. This can be seen as a directional mutation operator in which the direction is defined by both the particle’s personal best and the global best (of the swarm). If the direction of the personal best is similar to the direction of the global best, the angle of potential directions will be small, whereas a larger angle will provide a larger range of exploration. In contrast, evolutionary algorithms use a mutation operator that can set an individual in any direction (although the relative probabilities for each direction may be different). In fact, the limitations exhibited by the directional mutation of PSO have led to the use of mutation operators similar to those adopted in evolutionary algorithms.

There are two key aspects that, we believe, have made PSO so popular:

³ It is worth noting that there have been proposals to use alternative encodings with PSO (e.g., binary [31] and integer [26]), but none of them has been as popular as the original proposal, in which the algorithm operates using vectors of real numbers.


1. The main algorithm of PSO is relatively simple (since, in its original version, it only adopts one operator for creating new solutions, unlike most evolutionary algorithms) and its implementation is, therefore, straightforward. Additionally, there is plenty of source code of PSO available in the public domain (see, for example: http://www.swarmintelligence.org/codes.php).

2. PSO has been found to be very effective in a wide variety of applications, being able to produce very good results at a very low computational cost [32, 20].

In order to establish a common terminology, in the following we provide some definitions of several technical terms commonly used:

• Swarm: Population of the algorithm.

• Particle: Member (individual) of the swarm. Each particle represents a potential solution to the problem being solved. The position of a particle is determined by the solution it currently represents.

• pbest (personal best): Personal best position of a given particle, so far. That is, the position of the particle that has provided the greatest success (measured in terms of a scalar value analogous to the fitness adopted in evolutionary algorithms).

• lbest (local best): Position of the best particle member of the neighborhood of a given particle.

• gbest (global best): Position of the best particle of the entire swarm.

• Leader: Particle that is used to guide another particle towards better regions of the search space.

• Velocity (vector): This vector drives the optimization process; that is, it determines the direction in which a particle needs to “fly” (move) in order to improve its current position.

• Inertia weight: Denoted by W, the inertia weight is employed to control the impact of the previous history of velocities on the current velocity of a given particle.

• Learning factor: Represents the attraction that a particle has toward either its own success or that of its neighbors. Two learning factors are used: C1 and C2. C1 is the cognitive learning factor and represents the attraction that a particle has toward its own success. C2 is the social learning factor and represents the attraction that a particle has toward the success of its neighbors. Both C1 and C2 are usually defined as constants.

• Neighborhood topology: Determines the set of particles that contribute to the calculation of the lbest value of a given particle.

In PSO, particles are “flown” through hyperdimensional search space. Changes to the position of the particles within the search space are based on the social-psychological tendency of individuals to emulate the success of other individuals.

The position of each particle is changed according to its own experience and that of its neighbors. Let x_i(t) denote the position of particle p_i at time step t. The position of p_i is then changed by adding a velocity v_i(t) to the current position, i.e.:

x_i(t) = x_i(t−1) + v_i(t)   (4)

The velocity vector reflects the socially exchanged information and, in general, is defined in the following way:

v_i(t) = W·v_i(t−1) + C1·r1·(x_pbest_i − x_i(t)) + C2·r2·(x_leader − x_i(t))   (5)

where r1, r2 ∈ [0, 1] are random values.

Particles tend to be influenced by the success of anyone they are connected to. These neighbors are not necessarily particles which are close to each other in parameter (decision variable) space, but instead are particles that are close to each other based on a neighborhood topology that defines the social structure of the swarm [32].
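To make Equations (4) and (5) concrete, one particle’s flight step can be sketched as follows (an illustrative sketch, not code from the paper; the function name and default parameter values are our own choices):

```python
import random

def update_particle(x, v, pbest, leader, W=0.4, C1=2.0, C2=2.0):
    # Equation (5): new velocity from the inertia, cognitive and social terms,
    # with r1 and r2 drawn uniformly from [0, 1].
    r1, r2 = random.random(), random.random()
    new_v = [W * vi + C1 * r1 * (pi - xi) + C2 * r2 * (li - xi)
             for xi, vi, pi, li in zip(x, v, pbest, leader)]
    # Equation (4): the particle "flies" by adding the new velocity to its position.
    new_x = [xi + vi for xi, vi in zip(x, new_v)]
    return new_x, new_v
```

Setting C1 = C2 = 0 and W = 1 reduces the step to pure inertia, which is a convenient sanity check.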

Particles can be connected to each other in any kind of neighborhood topology, represented as a graph. In the following, we list some typical neighborhood graphs used in PSO.

• Empty graph: In this topology, particles are isolated. Each particle is connected only with itself, and it compares its current position only to its own best position found so far (pbest) [19]. In this case, C2 = 0 in Equation 5.

• Local best: In this topology, each particle is affected by the best performance of its k immediate neighbors. Particles are influenced by the best position within their neighborhood (lbest), as well as by their own past experience (pbest) [19]. When k = 2, this structure is equivalent to a ring topology such as the one shown in Figure 3. In this case, leader = lbest in Equation 5.

• Fully connected graph: This topology is the opposite of the empty graph. The fully connected topology connects all members of the swarm to one another. Each particle uses its history of experiences in terms of its own best solution so far (pbest) but, in addition, the particle uses the position of the best particle from the entire swarm (gbest). This structure is also called star topology in the PSO community [19]. See Figure 4. In this case, leader = gbest in Equation 5.

• Star network: In this topology, one particle is connected to all the others, and they are connected to only that one (called the focal particle) [19]. See Figure 5. Particles are isolated from one another, as all information has to be communicated through the focal particle. The focal particle compares the performances of all the particles in the swarm and adjusts its trajectory towards the best of them. That performance is eventually communicated to the rest of the swarm. This structure is also called wheel topology in the PSO community. In this case, leader = focal in Equation 5.

Figure 3: The ring neighborhood topology that represents the local best scheme when k = 2. In this common local best case, each particle is affected only by its two immediately adjacent neighbors. Each circle represents a particle.

Figure 4: The fully connected graph represents the fully connected neighborhood topology (each circle represents a particle). All members of the swarm are connected to one another.

Figure 5: The star network topology (each circle represents a particle). The focal particle is connected to all the other particles, and they are connected to only that one.

Figure 6: The tree network topology (each circle represents a particle). All particles are arranged in a tree. A particle is influenced by its own best position so far (pbest) and by the best position of the particle that is directly above it in the tree. Here, we show an example of a topology defined by a regular tree with a height equal to 3, a degree equal to 4 and a total of 21 particles.

• Tree network: In this topology, all particles are arranged in a tree, and each node of the tree contains exactly one particle [28]. See Figure 6. A particle is influenced by its own best position so far (pbest) and by the best position of the particle that is directly above it in the tree (its parent). If a particle at a child node has found a solution that is better than the best solution so far of the particle at the parent node, the two particles are exchanged. In this way, this topology offers a dynamic neighborhood. This structure is also called hierarchical topology in the PSO community. In this case, leader = pbest_parent in Equation 5.
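As a small illustration of how a topology determines the leader (our own sketch, not from the paper), the ring topology of the local best scheme reduces to an index computation over the swarm:

```python
def ring_neighbors(i, n, k=2):
    # Indices of the k immediate neighbors of particle i on a ring of n
    # particles (k = 2 gives the left and right neighbors of Figure 3).
    half = k // 2
    return [(i + d) % n for d in range(-half, half + 1) if d != 0]

def lbest(i, pbest_fitness, k=2):
    # Index of the best (minimization) pbest within particle i's ring
    # neighborhood, including particle i itself.
    n = len(pbest_fitness)
    candidates = ring_neighbors(i, n, k) + [i]
    return min(candidates, key=lambda j: pbest_fitness[j])
```

The index returned by `lbest` plays the role of the leader in Equation 5 for the local best scheme.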

The neighborhood topology is likely to affect the rate of convergence, as it determines how much time it takes for the particles to find out about the location of good (better) regions of the search space. For example, since in the fully connected topology all particles are connected to each other, all particles receive the information of the best solution from the entire swarm at the same time. Thus, when using such a topology, the swarm tends to converge more rapidly than when using local best topologies, since in the latter case the information about the best position of the swarm takes a longer time to be transferred. However, for the same reason, the fully connected topology is also more susceptible to premature convergence (i.e., to converging to local optima) [20].

Begin
    Initialize swarm
    Locate leader
    g = 0
    While g < gmax
        For each particle
            Update Position (Flight)
            Evaluation
            Update pbest
        EndFor
        Update leader
        g++
    EndWhile
End

Figure 7: Pseudocode of the general PSO algorithm.

Figure 7 shows the way in which the general (single-objective) PSO algorithm works. First, the swarm is initialized. This initialization includes both positions and velocities. The corresponding pbest of each particle is initialized, and the leader is located (usually the gbest solution is selected as the leader). Then, for a maximum number of iterations, each particle flies through the search space updating its position (using (4) and (5)) and its pbest and, finally, the leader is updated too.
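The pseudocode of Figure 7 can be turned into a minimal, self-contained sketch for the fully connected (gbest) topology. This is an illustration under our own parameter choices and toy objective, not the authors’ implementation:

```python
import random

def sphere(x):
    # Toy objective: sum of squares, minimized at the origin.
    return sum(xi * xi for xi in x)

def pso(f, dim, n_particles=20, gmax=100, W=0.5, C1=1.5, C2=1.5, bounds=(-5.0, 5.0)):
    lo, hi = bounds
    # Initialize positions and velocities.
    X = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    P = [x[:] for x in X]          # pbest positions
    Pf = [f(x) for x in X]         # pbest objective values
    g = min(range(n_particles), key=Pf.__getitem__)
    gbest, gbest_f = P[g][:], Pf[g]  # leader = gbest (fully connected topology)
    for _ in range(gmax):
        for i in range(n_particles):
            # Equations (5) and (4): velocity, then position ("flight").
            r1, r2 = random.random(), random.random()
            V[i] = [W * v + C1 * r1 * (p - x) + C2 * r2 * (gb - x)
                    for x, v, p, gb in zip(X[i], V[i], P[i], gbest)]
            X[i] = [x + v for x, v in zip(X[i], V[i])]
            fx = f(X[i])
            if fx < Pf[i]:                 # update pbest
                P[i], Pf[i] = X[i][:], fx
                if fx < gbest_f:           # update leader
                    gbest, gbest_f = X[i][:], fx
    return gbest, gbest_f
```

On the two-dimensional sphere function this sketch reliably approaches the origin within a couple of hundred iterations.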

IV. Particle Swarm Optimization for Multi-Objective Problems

In order to apply the PSO strategy to solving multi-objective optimization problems, it is obvious that the original scheme has to be modified. As we saw in Section II, the solution set of a problem with multiple objectives does not consist of a single solution (as in global optimization). Instead, in multi-objective optimization, we aim to find a set of different solutions (the so-called Pareto optimal set). In general, when solving a multi-objective problem, there are three main goals to achieve [73]:

1. Maximize the number of elements of the Pareto optimalset found.

2. Minimize the distance of the Pareto front produced by our algorithm with respect to the true (global) Pareto front (assuming we know its location).

3. Maximize the spread of solutions found, so that we can have a distribution of vectors as smooth and uniform as possible.

Given the population-based nature of PSO, it is desirable to produce several (different) nondominated solutions within a single run. So, as with any other evolutionary algorithm, the three main issues to be considered when extending PSO to multi-objective optimization are [13]:

1. How to select particles (to be used as leaders) in order to give preference to nondominated solutions over those that are dominated?

2. How to retain the nondominated solutions found during the search process, in order to report solutions that are nondominated with respect to all the past populations and not only with respect to the current one? Also, it is desirable that these solutions are well spread along the Pareto front.

3. How to maintain diversity in the swarm in order to avoidconvergence to a single solution?

As we saw in the previous section, when solving single-objective optimization problems, the leader that each particle uses to update its position is completely determined once a neighborhood topology is established. However, in the case of multi-objective optimization problems, each particle might have a set of different leaders from which just one can be selected in order to update its position. Such a set of leaders is usually stored in a place different from the swarm, which we will call the external archive⁴: a repository in which the nondominated solutions found so far are stored. The solutions contained in the external archive are used as leaders when the positions of the particles of the swarm have to be updated. Furthermore, the contents of the external archive are also usually reported as the final output of the algorithm.

Figure 8 shows the way in which a general MOPSO algorithm works. We have marked in italics the processes that make this algorithm different from the general PSO algorithm for single-objective optimization.

First, the swarm is initialized. Then, a set of leaders is also initialized with the nondominated particles from the swarm. As we mentioned before, the set of leaders is usually stored in an external archive. Later on, some sort of quality measure is calculated for all the leaders in order to select (usually) one leader for each particle of the swarm. At each generation, for each particle, a leader is selected and the flight is performed. Most of the existing MOPSOs apply some sort of mutation operator⁵ after performing the flight. Then, the particle is evaluated and its corresponding pbest is updated. A new particle usually replaces its pbest particle when this particle is dominated or when both are incomparable (i.e., they are nondominated with respect to each other). After all the particles have been updated, the set of leaders is updated, too. Finally, the quality measure of the set of leaders is recalculated. This process is repeated for a certain (usually fixed) number of iterations.

⁴ This external archive is also used by many Multi-Objective Evolutionary Algorithms (MOEAs).

⁵ The mutation operators adopted in the PSO literature have also been called turbulence operators.

Begin
    Initialize swarm
    Initialize leaders in an external archive
    Quality(leaders)
    g = 0
    While g < gmax
        For each particle
            Select leader
            Update Position (Flight)
            Mutation
            Evaluation
            Update pbest
        EndFor
        Update leaders in the external archive
        Quality(leaders)
        g++
    EndWhile
    Report results in the external archive
End

Figure 8: Pseudocode of a general MOPSO algorithm.
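The pbest replacement rule just described can be sketched as follows (our own illustrative code; replacing on incomparability is only one of the choices reported in the literature):

```python
def dominates(fx, fy):
    # Pareto dominance for minimization.
    return all(a <= b for a, b in zip(fx, fy)) and any(a < b for a, b in zip(fx, fy))

def update_pbest(pbest_f, new_f):
    # Keep the old pbest only when it dominates the new position; replace it
    # when it is dominated or when the two vectors are incomparable.
    if dominates(pbest_f, new_f):
        return pbest_f
    return new_f
```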

As we can see, given the characteristics of the PSO algorithm, the issues that arise when dealing with multi-objective problems are related to two main algorithmic design aspects [64]:

1. Selection and updating of leaders:

• How to select a single leader out of a set of nondominated solutions which are all equally good? Should we select this leader in a random way, or should we use an additional criterion (to promote diversity, for example)?

• How to select the particles that should remain in the external archive from one iteration to another?

2. Creation of new solutions:

• How to promote diversity through the two main mechanisms used to create new solutions: the updating of positions (Equations 4 and 5) and the mutation (turbulence) operator.

These issues are discussed in more detail in the next subsections.

A. Leaders in Multi-Objective Optimization

Since the solution of a multi-objective problem consists of a set of equally good solutions, it is evident that the concept of leader traditionally adopted in PSO has to be changed.

A few researchers have avoided the problem of defining a new concept of leader for multi-objective problems by adopting aggregating functions (i.e., weighted sums of the objectives) or approaches that optimize each objective separately. We will briefly discuss these approaches in Section V.

However, it is important to indicate that the majority of the currently proposed MOPSO approaches redefine the concept of leader.

As we mentioned before, the selection of a leader is a key component in the design of a MOPSO approach. The most straightforward approach is to consider every nondominated solution as a potential leader and, then, select just one of them. In this way, a quality measure that indicates how good a leader is becomes very important. Obviously, such a measure can be defined in several different ways. As we will see in Section V, there already exist different proposals to deal with this issue.

One possible way of defining such a quality measure is related to density measures. Diversity may be promoted through this process by means of mechanisms based on quality measures that indicate the closeness of the particles within the swarm.

Several authors have proposed leader selection techniques that are based on density measures. In order to aid the understanding of the specific approaches that are going to be described later on, we present here two of the most important density measures used in the area of multi-objective optimization:

• Nearest neighbor density estimator [16]: The nearest neighbor density estimator gives us an idea of how crowded the closest neighbors of a given particle are, in objective function space. This measure estimates the perimeter of the cuboid formed by using the nearest neighbors as the vertices. See Figure 9.

• Kernel density estimator [22, 15]: When a particle is sharing resources with others, its fitness is degraded in proportion to the number and closeness of the particles that surround it within a certain perimeter. The neighborhood of a particle is defined in terms of a parameter called σshare, which indicates the radius of the neighborhood. Such neighborhoods are called niches. See Figure 10.
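As an illustrative sketch (our own code), the nearest neighbor density estimator is commonly computed as a crowding distance: per objective, the particles are sorted, and each particle accumulates the normalized distance between its two neighbors along that objective; boundary particles are assigned infinity so that they are always preferred:

```python
def crowding_distance(front):
    # front: list of objective vectors (at least two). For each particle, sum
    # over the objectives the normalized side lengths of the cuboid whose
    # vertices are its two nearest neighbors (see Figure 9).
    n, k = len(front), len(front[0])
    dist = [0.0] * n
    for m in range(k):
        order = sorted(range(n), key=lambda i: front[i][m])
        dist[order[0]] = dist[order[-1]] = float("inf")  # boundary particles
        span = front[order[-1]][m] - front[order[0]][m] or 1.0
        for pos in range(1, n - 1):
            i = order[pos]
            dist[i] += (front[order[pos + 1]][m] - front[order[pos - 1]][m]) / span
    return dist
```

Particles with a larger crowding distance lie in less crowded regions and make better leaders under this criterion.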

B. Retaining and Spreading Nondominated Solutions

As we mentioned before, it is important to retain the nondominated solutions found throughout the search process, so that, at the end, we can report those solutions that are nondominated with respect to all the previous populations. This is important not only for pragmatic reasons, but also for theoretical ones [54].


Figure 9: The nearest neighbor density estimator for an example with two objective functions (axes f1 and f2; the cuboid around particle i is formed by its neighbors i−1 and i+1). Particles with a larger value of this estimator are preferred.

Figure 10: For each particle, a niche of radius σshare is defined. Particles whose niche is less crowded are preferred.

The most straightforward way of retaining solutions that are nondominated with respect to all the previous populations (or swarms) is to use an external archive. Such an archive will allow the entrance of a solution only if: (a) it is nondominated with respect to the contents of the archive, or (b) it dominates any of the solutions within the archive (in this case, the dominated solutions have to be deleted from the archive).
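Rules (a) and (b) can be sketched directly (an illustration, not the paper’s code; the function names are ours):

```python
def dominates(fx, fy):
    # Pareto dominance for minimization.
    return all(a <= b for a, b in zip(fx, fy)) and any(a < b for a, b in zip(fx, fy))

def update_archive(archive, candidate):
    # Rule (a): reject the candidate if some archive member dominates it.
    if any(dominates(a, candidate) for a in archive):
        return archive
    # Rule (b): accept it and delete every archive member it dominates.
    return [a for a in archive if not dominates(candidate, a)] + [candidate]
```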

This approach has, however, the drawback of increasing the size of the archive very quickly. This is an important issue, because the archive has to be updated at each generation. Thus, this update may become computationally very expensive if the size of the archive grows too much. In the worst case, all members of the swarm may wish to enter the archive at each generation. Thus, the corresponding updating process, at each generation, has a complexity of O(kN²), where N is the size of the swarm and k is the number of objectives. In this way, the complexity of the updating process for the complete run is O(kMN²), where M is the total number of iterations.

Thus, mainly due to practical reasons, archives tend to be bounded [13], which makes necessary the use of an additional criterion to decide which nondominated solutions to retain once the archive is full. In evolutionary multi-objective optimization, researchers have adopted different techniques to prune the archive (e.g., clustering [74] and geographical-based schemes that place the nondominated solutions in cells in order to favor less crowded cells when deleting in-excess nondominated solutions [34]). However, the use of an archive introduces additional issues: for example, do we impose additional criteria to enter the archive instead of just using nondominance (e.g., using the distribution of solutions as an additional criterion)?

Note that, strictly speaking, three archives should be used when extending PSO for multi-objective optimization: one for storing the global best solutions, one for the personal best values, and a third one for storing the local best (if applicable). However, in practice, few authors report the use of more than one archive in their MOPSOs.

Besides the use of an external file, it is also possible to use a plus selection in which parents compete with their children, and those which are nondominated (and possibly comply with some additional criterion, such as providing a better distribution of solutions) are selected for the following generation. In the case of PSO, a plus selection involves selecting from a merge of two consecutive swarms.

More recently, other researchers have proposed the use of relaxed forms of dominance. The main one adopted in PSO has been ε-dominance [36], which is illustrated in Figure 11. The main use of this concept in multi-objective PSO has been to filter solutions in the external archive. By using ε-dominance, we define a set of boxes of size ε and only one nondominated solution is retained for each box (e.g., the one closest to the lower left-hand corner). This is illustrated in Figure 12, for a bi-objective case. The use of ε-dominance,


Figure 11: To the left, we can see the area that is dominated by a certain solution. To the right, we graphically depict the ε-dominance concept. In this case, the area being dominated has been extended by a value proportional to the parameter ε (which is defined by the user).

Figure 12: An example of the use of ε-dominance in an external archive. Solution 1 dominates solution 2, therefore solution 1 is preferred. Solutions 3 and 4 are incomparable. However, solution 3 is preferred over solution 4, since solution 3 is closer to the lower left-hand corner, represented by point (2ε, 2ε). Solution 5 dominates solution 6, therefore solution 5 is preferred. Solution 7 is not accepted, since its box, represented by point (2ε, 3ε), is dominated by the box represented by point (2ε, 2ε).

as proposed in [36] and illustrated in Figure 12, guarantees that the retained solutions are nondominated with respect to all solutions generated during the run. It is worth noting, however, that, when using ε-dominance, the size of the final external archive depends on the ε-value, which is normally a user-defined parameter [36]. Mostaghim and Teich [43] have found that, when comparing ε-dominance against existing clustering techniques for fixing the archive size, the ε-dominance method can find solutions much faster (computationally speaking) than the clustering technique, with a comparable (and even better, in some cases) convergence and diversity.
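A hedged sketch of this box-based filtering (assuming minimization; each box holds at most one representative, and ties within a box are broken by keeping the point nearest the box's lower left-hand corner, as in Figure 12):

```python
import math

def box(f, eps):
    """Box index of objective vector f for box size eps (minimization)."""
    return tuple(math.floor(v / eps) for v in f)

def box_dominates(b1, b2):
    """Pareto dominance between box indices."""
    return all(x <= y for x, y in zip(b1, b2)) and b1 != b2

def eps_insert(archive, f, eps):
    """archive maps box index -> representative objective vector."""
    b = box(f, eps)
    for ob in list(archive):
        if box_dominates(ob, b):
            return False          # candidate's box is dominated: reject
        if box_dominates(b, ob):
            del archive[ob]       # candidate's box dominates: evict
    if b in archive:
        # same box: keep the point closer to the box's lower left corner
        corner = tuple(i * eps for i in b)
        def dist(p):
            return sum((x - c) ** 2 for x, c in zip(p, corner))
        if dist(f) < dist(archive[b]):
            archive[b] = f
            return True
        return False
    archive[b] = f
    return True
```

Because at most one solution survives per box, the archive size is bounded by the box grid rather than by the number of nondominated solutions found, which is why it depends on the ε-value.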

C. Promoting Diversity while Creating New Solutions

It is well known that one of the most important features of the PSO algorithm is its fast convergence. This is a positive feature as long as we don't have premature convergence (i.e., convergence to a local optimum).

Premature convergence is caused by the rapid loss of diversity within the swarm. So, the appropriate promotion of diversity in PSO is a very important issue in order to control its (normally fast) convergence.

As we mentioned in Section IV-A, when adopting PSO for solving multi-objective optimization problems, it is possible to promote diversity through the selection of leaders. However, this can also be done through the two main mechanisms used for creating new solutions:

1. Updating of positions. As we mentioned in Section III, the use of different neighborhood topologies determines how fast the information is transferred through the swarm (since a neighborhood determines who the leader particle is in Equation 5). Since in a fully connected topology all particles are connected with each other, the information is transferred faster than in the case of a local best or a tree topology, since in these cases particles have smaller neighborhoods. Under the same argument, a specific neighborhood topology also determines how fast diversity is lost within the swarm. Since in a fully connected topology the transfer of information is fast, when using this topology, diversity within the swarm is also lost rapidly. In this way, topologies that define neighborhoods smaller than the entire swarm for each particle can also preserve diversity within the swarm for a longer time.
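A small illustration of the local best (ring) topology just described (a sketch of ours, shown single-objective for clarity; the neighborhood radius `k` is an assumed parameter):

```python
def ring_neighbors(i, swarm_size, k=1):
    """Indices in particle i's ring (lbest) neighborhood of radius k,
    including i itself; the ring wraps around the swarm."""
    return [(i + d) % swarm_size for d in range(-k, k + 1)]

def lbest(swarm, fitness, i, k=1):
    """Index of the best particle in i's ring neighborhood
    (minimization, for illustration)."""
    return min(ring_neighbors(i, len(swarm), k), key=lambda j: fitness[j])
```

With k equal to half the swarm size the ring degenerates into the fully connected topology, which is the limiting case where information (and diversity loss) spreads fastest.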

On the other hand, diversity can also be promoted by means of the inertia weight (W in Equation 5). As defined in Section III, the inertia weight is employed to control the impact of the previous history of velocities on the current velocity. Thus, the inertia weight influences the trade-off between global (wide-ranging) and local (nearby) exploration abilities [58]. A large inertia weight facilitates global exploration (searching new areas) while a smaller inertia weight tends to facilitate local exploration to fine-tune the current search area. The


value of the inertia weight may vary during the optimization process. Shi [59] asserted that, by linearly decreasing the inertia weight from a relatively large value to a small one through the course of the PSO run, the PSO tends to have more global search ability at the beginning of the run and more local search ability near the end of the run. On the other hand, Zheng et al. [72] argue that either global or local search ability is associated with a small inertia weight and that a large inertia weight provides the algorithm more chances to be stabilized. In this way, inspired by the process of the simulated annealing algorithm, the authors proposed to use an increasing inertia weight through the PSO run.
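Both schedules can be written down directly (a sketch; the endpoint values 0.9 and 0.4 are commonly used in the PSO literature but are our assumption here, not values taken from [59] or [72]):

```python
def linear_inertia(t, t_max, w_start=0.9, w_end=0.4):
    """Shi's linearly decreasing schedule: more global search early
    in the run, more local search near the end."""
    return w_start - (w_start - w_end) * t / t_max

def increasing_inertia(t, t_max, w_start=0.4, w_end=0.9):
    """An increasing schedule in the spirit of Zheng et al. [72]."""
    return w_start + (w_end - w_start) * t / t_max
```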

The addition of velocity to the current position to generate the next position is similar to the mutation operator in evolutionary algorithms, except that “mutation” in PSO is guided by the experience of a particle and that of its neighbors. In other words, PSO performs “mutation” with a “conscience” [58].

2. Through the use of a mutation (or turbulence) operator.

As mentioned in the previous section, when a particle updates its position, a mutation with “conscience” occurs. Sometimes, however, some unconsciousness or “craziness”, as called by Kennedy and Eberhart in the original proposal of PSO [30], is needed. Craziness, also referred to as turbulence, reflects a change in a particle's flight which is out of its control [21].

In general, when a swarm stagnates, that is, when the velocities of the particles are almost zero, it becomes unable to generate new solutions which might lead the swarm out of this state. This behavior can lead to the whole swarm being trapped in a local optimum from which it becomes impossible to escape. Since the global best individual attracts all members of the swarm, it is possible to lead the swarm away from a current location by mutating a single particle, if the mutated particle becomes the new global best. This mechanism potentially provides a means both of escaping local optima and of speeding up the search [62].

In this way, the use of a mutation operator is very important in order to escape from local optima and to improve the exploratory capabilities of PSO. When a solution is chosen to be mutated, each component is then mutated (randomly changed), or not, with a certain probability. In fact, different mutation operators have been proposed that mutate components of either the position or the velocity of a particle.
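One possible shape of such an operator, applied to the position vector (a hedged sketch under our own assumptions: uniform perturbations scaled to a fraction of each variable's range, with the per-component probability `p_mut` and the `scale` factor both illustrative parameters, not taken from any specific MOPSO):

```python
import random

def turbulence(position, bounds, p_mut=0.1, scale=0.1):
    """Perturb each component of the position, with probability p_mut,
    uniformly within a fraction (scale) of that variable's range, and
    clip the result back into the feasible bounds."""
    new = list(position)
    for i, (lo, hi) in enumerate(bounds):
        if random.random() < p_mut:
            delta = (hi - lo) * scale
            new[i] = min(hi, max(lo, new[i] + random.uniform(-delta, delta)))
    return new
```

The open design questions named below (how much probability, when in the run, which component) correspond directly to `p_mut`, to how the caller schedules calls to this operator, and to the per-component loop.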

In our experience, the choice of a good mutation operator is a difficult task that has a significant impact on performance. On the other hand, once we have selected a specific mutation operator, another difficult task is to decide how much mutation to apply: with how much

probability, in which moments of the process, in which specific component of a particle, etc.

Several proposed approaches have used different mutation operators. However, there are also approaches which do not use any kind of mutation operator and that show good performance. So, the use of mutation is an issue that certainly deserves a more careful study.

V. A Taxonomy of Approaches

The taxonomy that we propose to classify the current MOPSOs is the following:

• Aggregating approaches

• Lexicographic ordering

• Sub-Population approaches

• Pareto-based approaches

• Combined approaches

• Other approaches

We will discuss next each of these types of approaches. Also, Table 1 summarizes all the different approaches and indicates their most important features.

A. Aggregating Approaches

Under this category we consider approaches that combine (or “aggregate”) all the objectives of the problem into a single one. In other words, the multi-objective problem is transformed into a single-objective one. This is not a new idea, since aggregating functions can be derived from the well-known Kuhn-Tucker conditions for nondominated solutions [35].

• Parsopoulos and Vrahatis [50]: This algorithm adopts three types of aggregating functions: (1) a conventional linear aggregating function (where weights are fixed during the run), (2) a dynamic aggregating function (where weights are gradually modified during the run), and (3) the bang-bang weighted aggregation approach (where weights are abruptly modified during the run)6 [29]. In all cases, the authors adopt the fully connected topology.
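The three weighting styles can be sketched for a bi-objective problem as follows (our illustration, in the spirit of the schemes cited above; the oscillation period `freq` is an assumed parameter, and w2 = 1 − w1 throughout):

```python
import math

def linear_aggregate(f, w):
    """Conventional weighted sum of the objective vector f."""
    return sum(wi * fi for wi, fi in zip(w, f))

def dynamic_weight(t, freq=200):
    """Gradually oscillating weight w1(t) for two objectives."""
    return abs(math.sin(2 * math.pi * t / freq))

def bang_bang_weight(t, freq=200):
    """Abruptly switching weight w1(t): alternates between 1 and 0."""
    return float(math.sin(2 * math.pi * t / freq) >= 0)
```

Sweeping the weights through the run is what lets a single-objective PSO trace out different points of the trade-off surface instead of converging to one compromise.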

• Baumgartner et al. [6]: This approach, based on the fully connected topology, uses linear aggregating functions. In this case, the swarm is equally partitioned into n subswarms, each of which uses a different set of weights and evolves in the direction of its own swarm leader. The approach adopts a gradient technique to identify the Pareto optimal solutions.

6 This approach has the peculiarity of being able to generate nonconvex portions of the Pareto front, which is something that traditional linear aggregating functions cannot do [14].


B. Lexicographic Ordering

In this method, the user is asked to rank the objectives in order of importance. The optimum solution is then obtained by minimizing the objective functions separately, starting with the most important one and proceeding according to the assigned order of importance of the objectives [40]. Lexicographic ordering tends to be useful only when few objective functions are used (two or three), and it may be sensitive to the ordering of the objectives [10].
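The underlying comparison rule can be sketched as follows (a hedged illustration assuming minimization; `order` lists objective indices from most to least important, and `tol` is an assumed tolerance deciding when two values count as equal):

```python
def lex_better(f_a, f_b, order, tol=1e-9):
    """True if objective vector f_a is lexicographically better than f_b
    under the given priority order of objective indices (minimization).
    Lower-priority objectives only matter when all higher-priority
    objectives are (approximately) tied."""
    for i in order:
        if abs(f_a[i] - f_b[i]) > tol:
            return f_a[i] < f_b[i]
    return False  # all objectives tied: neither is better
```

The sensitivity to objective ordering mentioned above is visible directly in the code: swapping the entries of `order` can reverse the outcome of the comparison.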

• Hu and Eberhart [24]: In this algorithm, only one objective is optimized at a time, using a scheme similar to lexicographic ordering [13]. This approach adopts the ring (local best) topology. No external archive is adopted in this case. However, in a further version of this approach [25], the authors incorporate an external archive (called “extended memory”) and introduce some further improvements to their dynamic neighborhood PSO approach.

C. Sub-Population Approaches

These approaches involve the use of several subpopulations as single-objective optimizers. Then, the subpopulations somehow exchange information or recombine among themselves, aiming to produce trade-offs among the different solutions previously generated for the objectives that were separately optimized.

• Parsopoulos et al. [49] studied a parallel version of the Vector Evaluated Particle Swarm (VEPSO) method for multi-objective problems. VEPSO is a multi-swarm variant of PSO, which is inspired by the Vector Evaluated Genetic Algorithm (VEGA) [56, 57]. In VEPSO, each swarm is evaluated using only one of the objective functions of the problem under consideration, and the information it possesses for this objective function is communicated to the other swarms through the exchange of their best experience (gbest particle). The authors argue that this process can lead to Pareto optimal solutions.

• Chow and Tsui [8]: In this paper, the authors use PSO as an autonomous agent response learning algorithm. To that end, the authors propose to decompose the award function of the autonomous agent into a set of local award functions and, in this way, to model the response extraction process as a multi-objective optimization problem. A modified PSO called “Multi-Species PSO” is introduced by considering each objective function as a species swarm. A communication channel is established between the neighboring swarms for transmitting the information of the best particles, in order to provide guidance for improving their objective values. Also, the authors use the flight formula of the fully connected topology, but include a neighbor swarm reference velocity. Such velocity is directly related to the best particle within each subswarm (similar to lbest).

D. Pareto-Based Approaches

These approaches use leader selection techniques based on Pareto dominance. The basic idea of all the approaches considered here is to select as leaders the particles that are nondominated with respect to the swarm. Note, however, that several variations of the leader selection scheme are possible, since most authors adopt additional information to select leaders (e.g., information provided by a density estimator) in order to avoid a random selection of a leader from the current set of nondominated solutions.

• Moore and Chapman [41]: This algorithm was presented in an unpublished document and it is based on Pareto dominance. The authors emphasize the importance of performing both an individual and a group search (a cognitive component and a social component). In this approach, the personal best (pbest) of a particle is a list of all the nondominated solutions it has found in its trajectory. When selecting a pbest, a particle from the list is randomly chosen. Since the ring topology is used, when selecting the best particle of the neighborhood, the solutions contained in the pbest lists are compared, and a nondominated solution with respect to the neighborhood is chosen. The authors don't indicate how they choose the lbest particle when more than one nondominated solution is found in the neighborhood.

• Ray and Liew [53]: This algorithm (based on a fully connected topology) uses Pareto dominance and combines concepts of evolutionary techniques with the particle swarm. The approach uses a nearest neighbor density estimator to promote diversity (by means of a roulette selection scheme of leaders based on this value) and a multilevel sieve to handle constraints (for this, the authors adopt the constraint and objective matrices proposed in some of their previous research [52]). The set of leaders maintained by the authors can be considered an external archive.

• Fieldsend and Singh [21]: This approach uses an unconstrained elite external archive (in which a special data structure called “dominated tree” is adopted) to store the nondominated individuals found along the search process. The archive interacts with the primary population in order to define leaders. The selection of the gbest for a particle in the swarm is based on the structure defined by the dominated tree. First, a composite point of the tree is located based on dominance relations, and then the closest member (in objective function space) of the composite point is chosen as the leader. On the other hand, a set of personal best particles found (nondominated) is also maintained for each swarm member, and the selection is performed uniformly. This approach


also uses a “turbulence” operator that is basically a mutation operator that acts on the velocity value used by the PSO algorithm.

• Coello et al. [11, 12]: This proposal is based on the idea of having an external archive in which every particle will deposit its flight experiences after each flight cycle. The updates to the external archive are performed considering a geographically-based system defined in terms of the objective function values of each particle. The search space explored is divided into hypercubes. Each hypercube receives a fitness value based on the number of particles it contains. Thus, in order to select a leader for each particle of the swarm, a roulette-wheel selection using these fitness values is first applied, to select the hypercube from which the leader will be taken. Once a hypercube has been selected, the leader is randomly chosen. This approach also uses a mutation operator that acts both on the particles of the swarm and on the range of each design variable of the problem to be solved.

In more recent work, Toscano and Coello [66] use the concept of Pareto dominance to determine the flight direction of a particle. The authors adopt clustering techniques to divide the population of particles into several swarms. This aims to provide a better distribution of solutions in decision variable space. Each sub-swarm has its own set of leaders (nondominated particles). In each sub-swarm, a PSO algorithm is executed (leaders are randomly chosen) and, at some point, the different sub-swarms exchange information: the leaders of each swarm are migrated to a different swarm, in order to vary the selection pressure. Also, this approach does not use an external archive, since elitism in this case is an emergent process derived from the migration of leaders.

• Srinivasan and Hou [61]: This approach, called Particle Swarm Inspired Evolutionary Algorithm (PS-EA), is a hybrid between PSO and an evolutionary algorithm. The main aim is to use EA operators (mutation, for example) to emulate the workings of PSO mechanisms, based on a fully connected topology. Since the authors mention that the final swarm constitutes the final solution (Pareto front), we conclude that a plus selection is performed at each iteration of the algorithm. Also, the authors use a niche count and a Pareto ranking approach in order to assign a fitness value to the particles of the swarm. However, the selection technique used is not described in the paper.

• Mostaghim and Teich [44]: They propose a sigma method in which the leader for each particle is selected in order to improve the convergence and diversity of a MOPSO approach. The idea of the sigma method is similar to compromise programming [13]. In order to select a leader for each particle of the swarm, a sigma value is assigned to each particle of the swarm and of the external archive. Each particle of the swarm selects as its leader the particle of the external archive with the closest sigma value. The use of the sigma values makes the selection pressure of PSO even higher, which may cause premature convergence in some cases. The authors also use a “turbulence” operator, which is applied in decision variable space. This approach has been successfully applied to the molecular force field parametrization problem [42].

In further work, Mostaghim and Teich [43] studied the influence of ε-dominance [36] on MOPSO methods. ε-dominance is compared with existing clustering techniques for fixing the external archive size, and the solutions are compared in terms of computational time, convergence and diversity. The results show that the ε-dominance method can find solutions much faster than the clustering technique, with a comparable (and even better, in some cases) convergence and diversity. The authors suggest a new density measure (the sigma method) inspired by their previous work [44]. Also, based on the idea that the initial external archive from which the particles have to select a leader has an influence on the diversity of solutions, the authors propose the use of successive improvements adopting a previous external archive of solutions. In this way, in more recent work, Mostaghim and Teich [45] propose a new method called covering MOPSO (cvMOPSO), which retakes this idea. This method works in two phases. In phase 1, a MOPSO algorithm is run with an external archive of restricted size, and the goal is to obtain a good approximation of the Pareto front. In phase 2, the nondominated solutions obtained from phase 1 are considered as the input external archive of the cvMOPSO. The particles in the swarm of the cvMOPSO are divided into subswarms around each nondominated solution after the first generation. The task of the subswarms is to cover the gaps between the nondominated solutions obtained from phase 1. No restrictions on the archive size are imposed in phase 2.
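For the bi-objective case, the sigma value of [44] is, as we recall it, σ = (f1² − f2²)/(f1² + f2²), so that points on the same ray from the origin share a sigma value; each particle then follows the archive member whose sigma value is closest to its own. A hedged sketch (assuming two objectives and objective values not both zero):

```python
def sigma2(f):
    """Sigma value of a bi-objective vector (as in the sigma method
    of Mostaghim and Teich [44], to the best of our recollection).
    Undefined at the origin, which we assume is never reached."""
    f1, f2 = f
    return (f1 ** 2 - f2 ** 2) / (f1 ** 2 + f2 ** 2)

def sigma_leader(particle_f, archive):
    """Archive member with the sigma value closest to the particle's."""
    s = sigma2(particle_f)
    return min(archive, key=lambda g: abs(sigma2(g) - s))
```

Because each particle is pulled toward an archive member on (nearly) its own ray, the pressure toward the front is strong, which is consistent with the premature-convergence caveat noted above.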

• Bartz et al. [5]: This approach starts from the idea of introducing elitism (through the use of an external archive) into PSO. Different methods for selecting and deleting particles (leaders) from the archive are analyzed to generate a satisfactory approximation of the Pareto front. The deletion methods analyzed are based on the contribution of each particle to the diversity of the Pareto front. Selection methods are either inversely related to the fitness value or based on the previous success of each particle. The authors provide some statistical analysis in order to assess the impact of each of the parameters used by their approach.

• Li [37]: This approach is based on a fully connected topology and incorporates the main mechanisms of the


NSGA-II [16] into the PSO algorithm. In this approach, once a particle has updated its position, instead of comparing the new position only against the pbest position of the particle, all the pbest positions of the swarm and all the new positions recently obtained are combined in just one set (giving a total of 2N solutions, where N is the size of the swarm). Then, the approach selects the best solutions among them to form the next swarm (by means of a nondominated sorting). The author doesn't specify which values are assigned to the velocity of pbest positions, in order to consider them as particles. This approach also selects the leaders randomly from the leaders set (stored in an external archive) among the best of them, based on two different mechanisms: a niche count and a nearest neighbor density estimator. This approach uses a mutation operator that is applied at each iteration step only to the particle with the smallest density estimator value (or the largest niche count).
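The 2N-to-N selection by nondominated sorting can be sketched roughly as follows (our simplification: ties within the last front are broken by order of appearance, whereas NSGA-II uses a density estimator; minimization is assumed and n must not exceed the pool size):

```python
def dominates(a, b):
    """Pareto dominance between objective vectors (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated_sort_select(combined, n):
    """Pick the n best of a combined pool by repeatedly peeling off
    nondominated fronts until n solutions have been gathered."""
    remaining = list(combined)
    selected = []
    while len(selected) < n:
        front = [p for p in remaining
                 if not any(dominates(q, p) for q in remaining if q != p)]
        remaining = [p for p in remaining if p not in front]
        selected.extend(front[:n - len(selected)])
    return selected
```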

• Reyes and Coello [60]: This approach is based on Pareto dominance and the use of a nearest neighbor density estimator for the selection of leaders (by means of a binary tournament). This proposal uses two external archives: one for storing the leaders currently used for performing the flight and another for storing the final solutions. The density estimator factor is used to filter out the list of leaders whenever the maximum limit imposed on such a list is exceeded. Only the leaders with the best density estimator values are retained. On the other hand, the concept of ε-dominance is used to select the particles that will remain in the archive of final solutions. Additionally, the authors propose a scheme in which they subdivide the population (or swarm) into three different subsets. A different mutation operator is applied to each subset. Note, however, that for all other purposes, a single swarm is considered (e.g., for selecting leaders). This approach is based on a fully connected topology.

• Alvarez-Benitez et al. [2]: The authors propose methods based exclusively on Pareto dominance for selecting leaders from an unconstrained nondominated (external) archive. Three different selection techniques are presented: one technique that explicitly promotes diversity (called Rounds by the authors), one technique that explicitly promotes convergence (called Random), and finally one technique that is a weighted probabilistic method (called Prob) and forms a compromise between Random and Rounds. Also, the authors propose and evaluate four mechanisms for confining particles to the feasible region, that is, constraint-handling methods. The authors show that probabilistic selection favoring archival particles that dominate few particles provides good convergence towards the Pareto front while properly covering it at the same time. Also, they conclude that allowing particles to explore regions close to

the constraint boundaries is important to ensure convergence to the Pareto front. This approach uses a turbulence factor that is added to the position of the particles with a certain probability.

• Ho et al. [23]: The authors propose a novel formula for updating the velocity and position of particles, based on three main modifications to the known flight formula for the fully connected topology. First, since the authors argue that the random factors r1 and r2 in Equation 5 are not completely independent, they propose to use r2 = 1 − r1. Second, they propose to incorporate the term (1 − W) in the second and third terms of Equation 5, where W = rnd(0, 1). Third (and last), under the argument of allowing a particle to sometimes fly back, the authors propose to allow the first term of Equation 5 to be negative with a 50% probability. On the other hand, the authors introduce a “craziness” operator in order to promote diversity within the swarm. This “craziness” operator is applied (with a certain probability) to the velocity vector before updating the position of a particle. Finally, the authors introduce one external archive for each particle and one global external archive for the whole swarm. The archive of each particle stores the latest Pareto solutions found by the particle, and the global archive stores the current Pareto optimal set. Every time a particle updates its position, it selects its personal best from its own archive and the global best from the global archive. In both cases, the authors use a roulette selection mechanism based on the fitness values of the particles (assigned using the mechanism originally proposed by Zitzler et al. [74] for the SPEA algorithm) and on an “age” variable that the authors introduce and that is increased at each generation.
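Putting the three modifications together for a single dimension gives roughly the following (a reconstruction of ours from the description above, not the authors' code; the acceleration constants c1 and c2 are assumed, since the paper's values are not given here):

```python
import random

def ho_velocity(v, x, pbest, gbest, c1=2.0, c2=2.0):
    """Sketch of the modified velocity rule attributed to Ho et al. [23]:
    r2 = 1 - r1; a (1 - W) factor on the cognitive and social terms,
    with W = rnd(0, 1); and a 50% chance of reversing the momentum term
    so a particle can sometimes fly back."""
    r1 = random.random()
    r2 = 1.0 - r1                      # dependent random factors
    W = random.random()                # random inertia weight
    sign = 1.0 if random.random() < 0.5 else -1.0
    return (sign * W * v
            + (1 - W) * c1 * r1 * (pbest - x)
            + (1 - W) * c2 * r2 * (gbest - x))
```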

• Villalobos-Arias et al. [68]: The authors propose a new mechanism to promote diversity in multi-objective optimization problems. Although the approach is independent of the search engine adopted, they incorporate it into the MOPSO proposed in [12]. The new approach is based on the use of stripes that are applied on the objective function space. Based on an analysis for a bi-objective problem, the main idea of the approach is that the Pareto front of the problem is “similar” to the line determined by the minimal points of the objective functions. In this way, several points (that the authors call stripe centers) are distributed uniformly along such a line, and the particles of the swarm are assigned to the nearest stripe center. When using this approach for solving multi-objective problems with PSO, one leader is used in each stripe. Such a leader is selected by minimizing a weighted sum of the minimal points of the objective functions. The authors show that their approach overcomes the drawbacks of other popular mechanisms such as ε-dominance [36] and the sigma method proposed in [44].


• Salazar-Lechuga and Rowe [55]: The main idea of this approach is to use PSO to guide the search with the help of niche counts (applied on objective function space) [22] to spread the particles along the Pareto front. The approach uses an external archive to store the best particles (nondominated particles) found by the algorithm. Since this external archive helps to guide the search, the niche count is calculated for each of the particles in the archive, and the leaders are chosen from this set by means of a stochastic sampling method (roulette wheel). Also, the niche count is used as a criterion to update the external archive. Each time the archive is full and a new particle wants to get in, its niche count is compared with the niche count of the worst solution in the archive. If the new particle is better than the worst particle, then the new particle enters the archive and the worst particle is deleted. Niche counts are updated when inserting or deleting a particle from the archive.
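A niche count in the classic fitness-sharing sense can be sketched as follows (our illustration using Goldberg-style sharing with a triangular sharing function; the niche radius `sigma_share` is an assumed user parameter, and distances are Euclidean in objective space):

```python
import math

def niche_count(f, archive, sigma_share):
    """Niche count of objective vector f: the sum of sharing-function
    values over the archive. Crowded regions yield large counts, so a
    smaller count means a less crowded (more desirable) solution."""
    count = 0.0
    for g in archive:
        d = math.dist(f, g)
        if d < sigma_share:
            count += 1.0 - d / sigma_share
    return count
```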

• Raquel and Naval [51]: As in [60], this approach incorporates the concept of a nearest neighbor density estimator for selecting the global best particle and also for deleting particles from the external archive of nondominated solutions. When selecting a leader, the archive of nondominated solutions is sorted in descending order with respect to the density estimator, and a particle is randomly chosen from the top part of the list. On the other hand, when the external archive is full, it is again sorted in descending order with respect to the density estimator value and a particle is randomly chosen to be deleted from the bottom part of the list. This approach uses the mutation operator proposed in [12], in such a way that it is applied only during a certain number of generations at the beginning of the process. Finally, the authors adopt the constraint-handling technique from the NSGA-II [16].

• Zhao and Cao [71]: This approach is very similar to the proposal of Coello and Lechuga [11]. However, the authors indicate that they maintain two external archives, although one of them is actually a list that keeps the pbest particle for each member of the swarm. The other external archive stores the nondominated solutions found along the evolutionary process. This truncated archive is similar to the adaptive grid of PAES [34]. The authors apply their approach to solve the economic load dispatch problem. With this aim, they employ a fuzzy-based mechanism to extract the best compromise solution, in which they incorporate the preferences of the decision maker. The approach adopts a linear membership function to represent the goals of each objective function. This membership function is adopted to modify the ranking of the nondominated solutions, so as to focus the search on the single solution that attains the maximum membership in the fuzzy set.

• Janson and Merkle [27] proposed a hybrid particle swarm optimization algorithm for multi-objective optimization, called ClustMPSO. ClustMPSO combines the PSO algorithm with clustering techniques to divide all particles into several subswarms. For this aim, the authors use the K-means algorithm. Each subswarm has its own nondominated front, and the total nondominated front is obtained from the union of the fronts of all the subswarms. Each particle randomly selects its neighborhood best (lbest) particle from the nondominated front of the swarm to which it belongs. Also, a particle only selects a new lbest particle when the current one is no longer a nondominated solution. On the other hand, the personal best (pbest) of each particle is updated based on dominance relations. Finally, the authors define that a subswarm is dominated when none of its particles belongs to the total nondominated front. In this way, when a subswarm is dominated for a certain number of consecutive generations, the subswarm is relocated. The proposed algorithm is tested on an artificial multi-objective optimization function and on a real-world problem from biochemistry, called the molecular docking problem. The authors reformulate the molecular docking problem as a multi-objective optimization problem and, in this case, the updating of the pbest particle is also based on the weighted sum of the objectives of the problem. ClustMPSO outperforms a well-known Lamarckian Genetic Algorithm that had been previously adopted to solve such a problem.

E. Combined Approaches

• Mahfouf et al. [39]: The authors propose an Adaptive Weighted PSO (AWPSO) algorithm, in which the velocity is modified by including an acceleration term that increases as the number of iterations increases. This aims to enhance the global search ability at the end of the run and to help the algorithm to jump out of local optima. Also, a weighted aggregating function is introduced within the algorithm for performance evaluation and to guide the selection of the personal and global bests. The authors use dynamic weights to generate Pareto optimal solutions. When the population is losing diversity, a mutation operator is applied to the positions of certain particles and the best of them are retained. Finally, the authors include a nondominated sorting algorithm to select the particles from one iteration to the next. Since plus selection is adopted, an external archive is not necessary in this case. This approach is applied to the optimal design of heat-treated alloy steels.

• Xiao-hua et al. [69]: The authors propose an Intelligent Particle Swarm Optimization (IPSO) algorithm for multi-objective problems based on an Agent-Environment-Rules (AER) model to provide an appropriate selection pressure to propel the swarm population towards the Pareto optimal front. In this model, the authors modify the fully connected flight formula by including the lbest position of the neighborhood of each particle. The neighborhood of a particle is determined by a lattice-like topology. On the other hand, each particle is taken as an agent particle with the ability of memory, communication, response, cooperation and self-learning. Each particle has its position, velocity and energy, the latter of which is related to its fitness. All particles live in a lattice-like environment, called an agent lattice, and each particle is fixed on a lattice-point. In order to survive in the system, they compete or cooperate with their neighbors so that they can gain more resources (increase their energies). Each particle has the ability of cloning itself, and the number of clones produced depends on the energy of the particle. General agent particles and latency agent particles (those which have smaller energy but contain certain features, e.g., favoring diversity, that make them good candidates to be cloned) will be cloned. The aim of the clonal operator (which is modeled on the clonal selection theory also adopted with artificial immune systems [46]) is to increase the competition among particles, maintain the diversity of the swarm and improve the convergence of the process. Also, a clonal mutation operator is used. Leaders are selected based on the energy values of the particles. Finally, this approach adopts an external archive in order to store the nondominated solutions found throughout the run and to provide the final solution set.

F. Other Approaches

Here, we consider the approaches that could not fit any of the main categories previously described.

• Li [38]: This author proposes the maximinPSO, which uses a fitness function derived from the maximin strategy proposed by Balling [4] to determine Pareto-domination. The author shows that one advantage of this approach is that no additional clustering or niching technique is needed, since the maximin fitness of a solution can tell us not only if a solution is dominated or not, but also if it is clustered with other solutions, i.e., the approach also provides diversity information. In this approach, for each particle, a different leader is selected for each of the decision variables to form a single global best. Leaders (stored in an external archive) are randomly selected based on the maximin fitness.
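The maximin fitness that drives this approach is easy to compute; the following sketch uses the usual formulation, the maximum over the other solutions of the minimum over objectives of the pairwise objective differences (assuming minimization; the objective vectors are made up):

```python
def maximin_fitness(i, objs):
    """Maximin fitness of solution i among objective vectors objs (minimization).
    A negative value means the solution is nondominated; values near zero also
    signal that the solution is crowded by others, so the same number carries
    diversity information."""
    return max(
        min(fi - fj for fi, fj in zip(objs[i], objs[j]))
        for j in range(len(objs)) if j != i
    )

# Three nondominated points and one point dominated by (2.0, 2.0).
objs = [(1.0, 4.0), (2.0, 2.0), (4.0, 1.0), (3.0, 3.0)]
fit = [maximin_fitness(i, objs) for i in range(len(objs))]
print(fit)
```

The dominated solution is the only one with a positive fitness, which is the property the selection mechanism exploits.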

• Zhang et al. [70]: This approach (based on a fully connected topology) attempts to improve the selection of gbest and pbest when the velocity of each particle is updated. For each objective function, there exist both a gbest and a pbest for each particle. In order to update the velocity of a particle, the algorithm defines the gbest of a particle as the average of the complete set of gbest particles. Analogously, the pbest is computed using either a random choice or the average from the complete set of pbest values. This choice depends on the dispersion degree between the gbest and pbest values of each particle.
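The composite gbest of this scheme reduces to a coordinate-wise average; a tiny sketch with hypothetical per-objective gbest positions:

```python
# Hypothetical per-objective global bests for a 2-variable, 3-objective
# problem: one gbest position per objective function.
gbests = [
    [0.2, 0.8],   # gbest for objective 1
    [0.6, 0.4],   # gbest for objective 2
    [0.1, 0.9],   # gbest for objective 3
]

# The composite gbest used in the velocity update is the coordinate-wise
# average of the per-objective gbest positions.
composite = [sum(col) / len(gbests) for col in zip(*gbests)]
print(composite)
```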

VI. Convergence Properties of PSO and MOPSO

Recently, some theoretical studies about the convergence properties of PSO have been published. As in the case of many evolutionary algorithms, these studies have concluded that the performance of PSO is sensitive to control parameter choices [20].

Most of the theoretical studies are based on simplified PSO models, in which a swarm consisting of one particle of one dimension is studied. The pbest and gbest particles are assumed to be constant throughout the process. Also, the terms φ1 = c1·r1 and φ2 = c2·r2 (used in Equation 5) are assumed to be constant. Under these conditions, particle trajectories and convergence of the swarm have been analyzed.

In the theoretical studies developed about PSO, convergence has been defined as follows:

Definition 6. Considering the sequence of global best solutions {gbest_t}, t = 0, 1, 2, ..., we say that the swarm converges iff

    lim (t→∞) gbest_t = p,

where p is an arbitrary position in the search space. Since p refers to an arbitrary solution, Definition 6 does not mean convergence to a local or global optimum.

The first studies on the convergence properties of PSO were developed by Ozcan and Mohan [47, 48]. Ozcan and Mohan studied a PSO under the conditions previously described but, in addition, their model did not consider the inertia weight. They concluded that, when 0 < φ < 4, where φ = φ1 + φ2, the trajectory of a particle is a sinusoidal wave where the initial conditions and parameter choices determine the amplitude and frequency of the wave. Also, they concluded that the periodic nature of the trajectory may cause a particle to repeatedly search regions of the search space already visited, unless another particle in its neighborhood finds a better solution.

In [67], van den Bergh developed a model of PSO under the same conditions, but considering the inertia weight. Van den Bergh proved that, when w > (1/2)(c1 + c2) − 1, the particle converges to the point

    (φ1·pbest + φ2·gbest) / (φ1 + φ2).

In this way, if c1 = c2, the particle converges to the point

    (pbest + gbest) / 2.

Since these conclusions were obtained under the assumption of φ1 and φ2 being constants, van den Bergh generalized his model considering the stochastic nature of φ1 and φ2. In this


| Approach | Neighborhood topology | Leader selection based on | External archive | Dynamic W | Mutation operator |
|---|---|---|---|---|---|
| Aggregating approaches | | | | | |
| Parsopoulos and Vrahatis [50] | fully connected | single-objective | no | yes (1.0 → 0.4) | no |
| Baumgartner et al. [6] | fully connected | single-objective | no | no | no |
| Lexicographic ordering | | | | | |
| Hu and Eberhart [24] | ring | single-objective | no | yes, rnd(0.5, 1.0) | no |
| Hu et al. [25] | ring | single-objective | yes | yes, rnd(0.5, 1.0) | no |
| Sub-population approaches | | | | | |
| Parsopoulos et al. [49] | fully connected | single-objective | yes | no | no |
| Chow and Tsui [8] | fully connected | single-objective | no | no | no |
| Pareto-based approaches | | | | | |
| Moore and Chapman [41] | ring | dominance | no | no | no |
| Ray and Liew [53] | fully connected | density estimator | yes | no | no |
| Fieldsend and Singh [21] | fully connected | dominance & closeness | yes | no | yes |
| Coello et al. [11, 12] | fully connected | density of solutions | yes | no | yes |
| Toscano and Coello [66] | fully connected | randomly | no | no | no |
| Srinivasan and Hou [61] | fully connected | niche count & dominance | no | no | yes |
| Mostaghim and Teich [44] | fully connected | sigma value | yes | no | yes |
| Mostaghim and Teich [43] | fully connected | sigma value | yes | no | yes |
| Mostaghim and Teich [45] | fully connected | sigma value | yes | no | yes |
| Bartz et al. [5] | fully connected | density of solutions; success | yes | no | no |
| Li [37] | fully connected | niche count; density estimator | yes | yes (1.0 → 0.4) | yes |
| Reyes and Coello [60] | fully connected | density estimator | yes | yes, rnd(0.1, 0.5) | yes |
| Alvarez-Benitez et al. [2] | fully connected | dominance | yes | no | yes |
| Ho et al. [23] | fully connected | fitness & age | yes | yes (proposed) | yes |
| Villalobos-Arias et al. [68] | fully connected | stripes | yes | no | yes |
| Salazar-Lechuga and Rowe [55] | fully connected | niche count | yes | no | no |
| Raquel and Naval [51] | fully connected | density estimator | yes | no | yes |
| Zhao and Cao [71] | fully connected | fuzzy membership | yes | no | no |
| Janson and Merkle [27] | fully connected | random | yes | no | no |
| Combined approaches | | | | | |
| Mahfouf et al. [39] | fully connected | single-objective | no | yes, rnd(0.15, 1.0) | yes |
| Xiao-hua et al. [69] | fully connected & lattice | energy value | yes | yes (0.6 → 0.2) | yes |
| Other approaches | | | | | |
| Li [38] | fully connected | maximin fitness | yes | yes (1.0 → 0.4) | no |
| Zhang et al. [70] | fully connected | composite leader | no | yes (0.8 → 0.4) | no |

Table 1: Complete list of the MOPSO proposals reviewed. For each proposal, we indicate the corresponding neighborhood topology adopted, the leader selection scheme used and whether the approach incorporates some dynamic scheme for the inertia weight (W), an external archive and a mutation operator.


case, he concluded (assuming uniform distributions) that the particle then converges to the position

    (1 − a)·pbest + a·gbest,

where a = c2 / (c1 + c2). In this way, van den Bergh showed that a particle converges to a weighted average between its personal best and its neighborhood best position.

As we said before, in order to ensure convergence, the condition w > (1/2)(c1 + c2) − 1 must hold. However, it is possible to choose values of c1, c2 and w such that the condition is violated and the swarm still converges [67]: if

    φratio = φcrit / (c1 + c2)

is close to 1.0, where φcrit = sup {φ | 0.5φ − 1 < w}, φ ∈ (0, c1 + c2], the swarm has convergent behavior. This implies that the trajectory of the particle will converge most of the time, occasionally taking divergent steps.
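These results are straightforward to check numerically. The sketch below iterates the deterministic one-particle, one-dimensional model (constant pbest, gbest, φ1 and φ2) with arbitrary parameter values that satisfy the convergence condition, and the trajectory settles on the predicted point (φ1·pbest + φ2·gbest)/(φ1 + φ2):

```python
# Deterministic one-particle, one-dimensional PSO model used in the
# theoretical studies: pbest, gbest, phi1 and phi2 are held constant.
w, phi1, phi2 = 0.7, 1.0, 1.0   # w > 0.5*(phi1 + phi2) - 1 holds
pbest, gbest = 1.0, 3.0
x, v = 10.0, 0.0                # arbitrary starting point

for _ in range(500):
    v = w * v + phi1 * (pbest - x) + phi2 * (gbest - x)
    x = x + v

# Predicted limit: (phi1*pbest + phi2*gbest) / (phi1 + phi2) = 2.0
print(round(x, 6))
```

With φ1 = φ2 (i.e., c1 = c2 in the stochastic model), the limit is (pbest + gbest)/2, as stated above.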

The studies developed by Ozcan and Mohan, and by van den Bergh, consider trajectories that are not constricted. In [9], Clerc and Kennedy provide a theoretical analysis of particle behavior in which they introduce a constriction coefficient whose objective is to prevent the velocity from growing out of bounds.

As we have seen, the convergence of PSO has been proved. However, we can only ensure the convergence of PSO to the best position visited by all the particles of the swarm. In order to ensure convergence to a local or global optimum, two conditions are necessary:

1. The gbest_{t+1} solution can be no worse than the gbest_t solution (monotonic condition).

2. The algorithm must be able to generate a solution in the neighborhood of the optimum with nonzero probability, from any solution x of the search space.

In [67], van den Bergh provides a proof showing that the basic PSO is not a local (nor a global) optimizer. This is due to the fact that, although PSO satisfies the monotonic condition indicated above, once the algorithm reaches the state where x = pbest = gbest for all particles in the swarm, no further progress will be made. The problem is that this state may be reached before gbest reaches a minimum, whether local or global. The basic PSO is therefore said to converge prematurely. In this way, the basic PSO algorithm is not a local (global) search algorithm, since it has no guaranteed convergence to a local (global) minimum from an arbitrary initial state.

Also, van den Bergh suggests two ways of extending PSO in order to make it a global search algorithm. The first is related to the generation of new random solutions. In general, the introduction of a mutation operator is useful. Nevertheless, forcing PSO to perform a random search in an area surrounding the global best position, that is, forcing the global best position to change in order to prevent stagnation (by means of a hill-climbing search, for example), is also a suitable mechanism [20]. On the other hand, van den Bergh also proposes to use a "multi-start PSO", in which, when the algorithm has converged (under some criterion), it records the best solution found and the particles are randomly reinitialized.

To the best of our knowledge, to date there are no studies about the convergence properties of MOPSOs. From the discussion previously provided, we can conclude that it is possible to ensure convergence by correctly setting the parameters of the flight formula. But, as in the case of single-objective optimization, such a property does not ensure convergence to the true Pareto front. In the case of multi-objective optimization, we may conclude that we still need conditions (1) and (2) to ensure convergence. However, in this case, condition (1) may change to:

1. The solutions contained in the external archive at iteration t + 1 should be nondominated with respect to the solutions generated in all iterations τ, 0 ≤ τ ≤ t + 1, so far (monotonic condition).

The use of the ε-dominance based archiving proposed in [36] ensures this condition, but the normal dominance-based strategies do not, unless they make sure that, for any solution discarded from the archive, one with an equal or dominating objective vector is accepted. In this way, given a MOPSO approach, and assuming it satisfies condition (1), it remains to explore whether it satisfies condition (2), to ensure global convergence to the true Pareto front.
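An ε-dominance archive in the spirit of [36] can be sketched as follows (assuming minimization and a single ε shared by all objectives; the tie-breaking inside a box is simplified here to favoring the newcomer, whereas [36] keeps the solution closest to the box corner):

```python
import math

EPS = 0.1  # epsilon per objective (assumed identical for all objectives)

def box(f, eps=EPS):
    """Box index of an objective vector (minimization)."""
    return tuple(math.floor(v / eps) for v in f)

def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def eps_accept(archive, f):
    """Insert f into an epsilon-dominance archive (list of objective vectors).
    At most one solution per box is kept, which bounds the archive size and
    yields the monotonicity property of condition (1)."""
    b = box(f)
    for g in archive:
        gb = box(g)
        if dominates(gb, b):
            return archive          # f's box is dominated: reject f
        if gb == b and (dominates(g, f) or g == f):
            return archive          # same box, incumbent no worse: reject f
    # f survives: drop members in boxes it dominates, and its own box's member
    kept = [g for g in archive if not dominates(b, box(g)) and box(g) != b]
    kept.append(f)
    return kept

A = []
for f in [(0.95, 0.31), (0.52, 0.50), (0.51, 0.49), (0.10, 0.90)]:
    A = eps_accept(A, f)
print(A)
```

Note how (0.52, 0.50) is evicted once the slightly better (0.51, 0.49) arrives, while mutually nondominated solutions in distant boxes coexist.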

VII. Future Research Paths

As we have seen, despite the fact that MOPSOs started to be developed less than ten years ago, the growth of this field has exceeded even the most optimistic expectations. Judging from the papers that we reviewed, the core of the work on MOPSOs has focused on algorithmic aspects, but there is much more to do in this area. In this section, we will provide some insights regarding topics that we believe are worth investigating within the next few years:

• Emphasis on Efficiency: The current MOPSOs are not particularly complex algorithms (in terms of their data structures, memory management and so on), and are quite effective (more so than state-of-the-art multi-objective evolutionary algorithms in some cases). So, why make things more complicated regarding algorithmic design? Is there room for new developments in this regard? We believe that there is, but we have to focus our work in a new direction. For example, few people have tried to exploit the very high convergence rate commonly associated with PSO to design an "ultra-efficient" MOPSO. It would be very useful (for real-world applications) to have a MOPSO that could produce reasonably good approximations of the Pareto front of multi-objective optimization problems with 20 or 30 decision variables using less than 5000 fitness function evaluations. A first attempt to design such a type of MOPSO is reported in [64], but more work in that direction is certainly expected, since this topic has recently been explored with other types of multi-objective evolutionary algorithms as well [33].

• Self-Adaptation of Parameters in MOPSOs: The design of MOPSOs with no parameters that have to be fine-tuned by the user is another topic that is worth studying. In evolutionary multi-objective optimization in general, the use of self-adaptation or on-line adaptation mechanisms is scarce (see for example [63, 1, 7]), and we are only aware of one multi-objective evolutionary algorithm which was designed to be parameterless: the microGA2 [65]. The design of a parameterless MOPSO requires a careful study of the velocity update formula adopted in PSO, and an assessment of the impact of each of its components on the performance of a MOPSO. Even the inertia and learning factors, which are normally assumed constant in PSO, may benefit from an on-line adaptation mechanism when dealing with multi-objective optimization problems.7

• Theoretical Developments: There is not much theoretical work on PSO in general (see for example [9]) and, therefore, the lack of research on theoretical aspects of MOPSOs is, by no means, surprising. It would be interesting to perform a theoretical study of the run-time and convergence properties of a MOPSO (see Section VI). Other aspects such as the fitness landscapes and dynamics of a MOPSO are also very attractive theoretical research topics.

• Applications: Evidently, no algorithm will ever be useful if we cannot find a good application for it. MOPSOs have been used in a few applications (see Section V), but not as extensively as other multi-objective evolutionary algorithms. The reason may be that MOPSOs are younger and less known than, for example, multi-objective genetic algorithms. However, a well-designed MOPSO may be quite useful in real-world applications, mainly if, as we mentioned before, its very fast convergence rate is properly exploited. At some point in the near future, we believe that there will be an important growth in the number of applications that adopt MOPSOs as their search engine.

VIII. Conclusions

We have reviewed the state-of-the-art regarding extensions of PSO to handle multiple objectives. We started by providing a short introduction to PSO in which we described its basic algorithm and its main topologies. We have also indicated the main issues that have to be considered when extending PSO to multi-objective optimization, and then we have analyzed each of them in more detail.

7 Readers interested in this topic may be interested in looking at the work of Maurice Clerc, available at: http://clerc.maurice.free.fr/pso/

We have also proposed a taxonomy to classify the currenttechniques reported in the specialized literature, and we haveprovided a survey of approaches based on such a taxonomy.

Finally, we have provided some topics that seem (from the authors' perspective) very promising paths for future research in this area. Considering the current rate of growth of this area, we expect a lot more activity within the next few years. However, the switch to new areas different from pure algorithm development may attract newcomers to this field and may help to keep it alive for several more years.

Acknowledgments

The authors gratefully acknowledge the comments from Marco Laumanns and the anonymous reviewers, which greatly helped them to improve the contents of this paper.

The first author acknowledges support from CONACyT through a scholarship to pursue graduate studies at the Computer Science Section of the Electrical Engineering Department at CINVESTAV-IPN. The second author acknowledges support from CONACyT project number 45683.

References

[1] Hussein A. Abbass. The self-adaptive Pareto differential evolution algorithm. In Congress on Evolutionary Computation (CEC'2002), volume 1, pages 831–836, Piscataway, New Jersey, May 2002. IEEE Service Center.

[2] Julio E. Alvarez-Benitez, Richard M. Everson, and Jonathan E. Fieldsend. A MOPSO algorithm based exclusively on Pareto dominance concepts. In Third International Conference on Evolutionary Multi-Criterion Optimization, EMO 2005, pages 459–473, Guanajuato, Mexico, 2005. Springer-Verlag. Lecture Notes in Computer Science Vol. 3410.

[3] Peter J. Angeline. Evolutionary optimization versus particle swarm optimization: Philosophy and performance differences. In V.W. Porto, N. Saravanan, D. Waagen, and A.E. Eiben, editors, Evolutionary Programming VII, 7th International Conference, EP 98, pages 601–610, San Diego, California, USA, March 1998. Springer. Lecture Notes in Computer Science Vol. 1447.

[4] Richard Balling. The maximin fitness function; multi-objective city and regional planning. In Carlos M. Fonseca, Peter J. Fleming, Eckart Zitzler, Kalyanmoy Deb, and Lothar Thiele, editors, Second International Conference on Evolutionary Multi-Criterion Optimization, EMO 2003, pages 1–15, Faro, Portugal, April 2003. Springer. Lecture Notes in Computer Science Vol. 2632.

[5] Thomas Bartz-Beielstein, Philipp Limbourg, Konstantinos E. Parsopoulos, Michael N. Vrahatis, Jorn Mehnen, and Karlheinz Schmitt. Particle swarm optimizers for Pareto optimization with enhanced archiving techniques. In Congress on Evolutionary Computation (CEC'2003), volume 3, pages 1780–1787, Canberra, Australia, December 2003. IEEE Press.

[6] U. Baumgartner, Ch. Magele, and W. Renhart. Pareto optimality and particle swarm optimization. IEEE Transactions on Magnetics, 40(2):1172–1175, March 2004.

[7] Dirk Buche, Sibylle Muller, and Petros Koumoutsakos. Self-adaptation for multi-objective evolutionary algorithms. In Carlos M. Fonseca, Peter J. Fleming, Eckart Zitzler, Kalyanmoy Deb, and Lothar Thiele, editors, Second International Conference on Evolutionary Multi-Criterion Optimization, EMO 2003, pages 267–281, Faro, Portugal, April 2003. Springer. Lecture Notes in Computer Science Vol. 2632.

[8] Chi-kin Chow and Hung-tat Tsui. Autonomous agent response learning by a multi-species particle swarm optimization. In Congress on Evolutionary Computation (CEC'2004), volume 1, pages 778–785, Portland, Oregon, USA, June 2004. IEEE Service Center.

[9] Maurice Clerc and James Kennedy. The particle swarm–explosion, stability, and convergence in a multidimensional complex space. IEEE Transactions on Evolutionary Computation, 6(1):58–73, February 2002.

[10] Carlos A. Coello Coello. A comprehensive survey of evolutionary-based multiobjective optimization techniques. Knowledge and Information Systems. An International Journal, 1(3):269–308, August 1999.

[11] Carlos A. Coello Coello and Maximino Salazar Lechuga. MOPSO: A proposal for multiple objective particle swarm optimization. In Congress on Evolutionary Computation (CEC'2002), volume 2, pages 1051–1056, Piscataway, New Jersey, May 2002. IEEE Service Center.

[12] Carlos A. Coello Coello, Gregorio Toscano Pulido, and Maximino Salazar Lechuga. Handling multiple objectives with particle swarm optimization. IEEE Transactions on Evolutionary Computation, 8(3):256–279, June 2004.

[13] Carlos A. Coello Coello, David A. Van Veldhuizen, and Gary B. Lamont. Evolutionary Algorithms for Solving Multi-Objective Problems. Kluwer Academic Publishers, New York, May 2002.

[14] Indraneel Das and John Dennis. A closer look at drawbacks of minimizing weighted sums of objectives for Pareto set generation in multicriteria optimization problems. Structural Optimization, 14(1):63–69, 1997.

[15] Kalyanmoy Deb and David E. Goldberg. An investigation of niche and species formation in genetic function optimization. In J. David Schaffer, editor, Proceedings of the Third International Conference on Genetic Algorithms, pages 42–50, San Mateo, California, June 1989. George Mason University, Morgan Kaufmann Publishers.

[16] Kalyanmoy Deb, Amrit Pratap, Sameer Agarwal, and T. Meyarivan. A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Transactions on Evolutionary Computation, 6(2):182–197, April 2002.

[17] Russell C. Eberhart, Roy Dobbins, and Patrick K. Simpson. Computational Intelligence PC Tools. Morgan Kaufmann Publishers, 1996.

[18] Russell C. Eberhart and Yuhui Shi. Comparison between genetic algorithms and particle swarm optimization. In V.W. Porto, N. Saravanan, D. Waagen, and A.E. Eiben, editors, Proceedings of the Seventh Annual Conference on Evolutionary Programming, pages 611–619. Springer-Verlag, March 1998.

[19] Andries P. Engelbrecht, editor. Computational Intelligence: An Introduction. John Wiley & Sons, England, 2002.

[20] Andries P. Engelbrecht. Fundamentals of Computational Swarm Intelligence. John Wiley & Sons, 2005.

[21] Jonathan E. Fieldsend and Sameer Singh. A multi-objective algorithm based upon particle swarm optimisation, an efficient data structure and turbulence. In Proceedings of the 2002 U.K. Workshop on Computational Intelligence, pages 37–44, Birmingham, UK, September 2002.

[22] David E. Goldberg and Jon Richardson. Genetic algorithms with sharing for multimodal function optimization. In Proceedings of the Second International Conference on Genetic Algorithms, pages 41–49. Lawrence Erlbaum Associates, 1987.

[23] S.L. Ho, Yang Shiyou, Ni Guangzheng, Edward W.C. Lo, and H.C. Wong. A particle swarm optimization-based method for multiobjective design optimizations. IEEE Transactions on Magnetics, 41(5):1756–1759, May 2005.

[24] Xiaohui Hu and Russell Eberhart. Multiobjective optimization using dynamic neighborhood particle swarm optimization. In Congress on Evolutionary Computation (CEC'2002), volume 2, pages 1677–1681, Piscataway, New Jersey, May 2002. IEEE Service Center.


[25] Xiaohui Hu, Russell C. Eberhart, and Yuhui Shi. Particle swarm with extended memory for multiobjective optimization. In Proceedings of the 2003 IEEE Swarm Intelligence Symposium, pages 193–197, Indianapolis, Indiana, USA, April 2003. IEEE Service Center.

[26] Xiaohui Hu, Russell C. Eberhart, and Yuhui Shi. Swarm intelligence for permutation optimization: A case study on n-queens problem. In Proceedings of the 2003 IEEE Swarm Intelligence Symposium, pages 243–246, Indianapolis, Indiana, USA, 2003. IEEE.

[27] Stefan Janson and Daniel Merkle. A new multi-objective particle swarm optimization algorithm using clustering applied to automated docking. In Maria J. Blesa, Christian Blum, Andrea Roli, and Michael Sampels, editors, Hybrid Metaheuristics, Second International Workshop, HM 2005, pages 128–142, Barcelona, Spain, August 2005. Springer. Lecture Notes in Computer Science Vol. 3636.

[28] Stefan Janson and Martin Middendorf. A hierarchical particle swarm optimizer. In Congress on Evolutionary Computation (CEC'2003), pages 770–776, Canberra, Australia, 2003. IEEE Press.

[29] Yaochu Jin, Tatsuya Okabe, and Bernhard Sendhoff. Dynamic weighted aggregation for evolutionary multi-objective optimization: Why does it work and how? In Lee Spector, Erik D. Goodman, Annie Wu, W.B. Langdon, Hans-Michael Voigt, Mitsuo Gen, Sandip Sen, Marco Dorigo, Shahram Pezeshk, Max H. Garzon, and Edmund Burke, editors, Proceedings of the Genetic and Evolutionary Computation Conference (GECCO'2001), pages 1042–1049, San Francisco, California, 2001. Morgan Kaufmann Publishers.

[30] James Kennedy and Russell C. Eberhart. Particle swarm optimization. In Proceedings of the 1995 IEEE International Conference on Neural Networks, pages 1942–1948, Piscataway, New Jersey, 1995. IEEE Service Center.

[31] James Kennedy and Russell C. Eberhart. A discrete binary version of the particle swarm algorithm. In Proceedings of the 1997 IEEE Conference on Systems, Man, and Cybernetics, pages 4104–4109, Piscataway, New Jersey, 1997. IEEE Service Center.

[32] James Kennedy and Russell C. Eberhart. Swarm Intelligence. Morgan Kaufmann Publishers, San Francisco, California, 2001.

[33] Joshua Knowles and Evan J. Hughes. Multiobjective optimization on a budget of 250 evaluations. In Carlos A. Coello Coello, Arturo Hernandez Aguirre, and Eckart Zitzler, editors, Third International Conference on Evolutionary Multi-Criterion Optimization, EMO 2005, pages 176–190, Guanajuato, Mexico, March 2005. Springer. Lecture Notes in Computer Science Vol. 3410.

[34] Joshua D. Knowles and David W. Corne. Approximating the nondominated front using the Pareto archived evolution strategy. Evolutionary Computation, 8(2):149–172, 2000.

[35] Harold W. Kuhn and Albert W. Tucker. Nonlinear programming. In J. Neyman, editor, Proceedings of the Second Berkeley Symposium on Mathematical Statistics and Probability, pages 481–492, Berkeley, California, 1951. University of California Press.

[36] Marco Laumanns, Lothar Thiele, Kalyanmoy Deb, and Eckart Zitzler. Combining convergence and diversity in evolutionary multi-objective optimization. Evolutionary Computation, 10(3):263–282, Fall 2002.

[37] Xiaodong Li. A non-dominated sorting particle swarm optimizer for multiobjective optimization. In Erick Cantu-Paz et al., editors, Proceedings of the Genetic and Evolutionary Computation Conference (GECCO'2003), pages 37–48, July 2003. Springer. Lecture Notes in Computer Science Vol. 2723.

[38] Xiaodong Li. Better spread and convergence: Particle swarm multiobjective optimization using the maximin fitness function. In Kalyanmoy Deb et al., editors, Proceedings of the Genetic and Evolutionary Computation Conference (GECCO'2004), pages 117–128, Seattle, Washington, USA, June 2004. Springer-Verlag. Lecture Notes in Computer Science Vol. 3102.

[39] Mahdi Mahfouf, Min-You Chen, and Derek Arthur Linkens. Adaptive weighted particle swarm optimisation for multi-objective optimal design of alloy steels. In Parallel Problem Solving from Nature - PPSN VIII, pages 762–771, Birmingham, UK, September 2004. Springer-Verlag. Lecture Notes in Computer Science Vol. 3242.

[40] Kaisa Miettinen. Nonlinear Multiobjective Optimization. Kluwer Academic Publishers, Boston, Massachusetts, 1999.

[41] Jacqueline Moore and Richard Chapman. Application of particle swarm to multiobjective optimization. Department of Computer Science and Software Engineering, Auburn University, 1999.

[42] S. Mostaghim, M. Hoffmann, P.H. Konig, Th. Frauenheim, and J. Teich. Molecular force field parametrization using multi-objective evolutionary algorithms. In 2004 Congress on Evolutionary Computation (CEC'2004), volume 1, pages 212–219, Portland, Oregon, USA, June 2004. IEEE Service Center.


[43] Sanaz Mostaghim and Jurgen Teich. The role ofε-dominance in multi objective particle swarm optimiza-tion methods. InCongress on Evolutionary Compu-tation (CEC’2003), volume 3, pages 1764–1771, Can-berra, Australia, December 2003. IEEE Press.

[44] Sanaz Mostaghim and Jurgen Teich. Strategies for find-ing good local guides in multi-objective particle swarmoptimization (MOPSO). InProceedings of the 2003IEEE Swarm Intelligence Symposium, pages 26–33, In-dianapolis, Indiana, USA, April 2003. IEEE ServiceCenter.

[45] Sanaz Mostaghim and Jurgen Teich. Covering pareto-optimal fronts by subswarms in multi-objective parti-cle swarm optimization. InCongress on Evolution-ary Computation (CEC’2004), volume 2, pages 1404–1411, Portland, Oregon, USA, June 2004. IEEE ServiceCenter.

[46] Leandro Nunes de Castro and Jonathan Timmis.An In-troduction to Artificial Immune Systems: A New Com-putational Intelligence Paradigm. Springer-Verlag,2002.

[47] Ender Ozcan and Chilukuri K. Mohan. Analysis of asimple particle swarm optimization system. InIntel-ligent Engineering Systems Through Artificial NeuralNetworks, pages 253–258, 1998.

[48] Ender Ozcan and Chilukuri K. Mohan. Particle swarmoptimization: Surfing the waves. InCongress on Evo-lutionary Computation (CEC’1999), pages 1939–1944,Washington D.C., USA, 1999. IEEE Press.

[49] Konstantinos E. Parsopoulos, Dimitris K. Tasoulis, andMichael N. Vrahatis. Multiobjective optimization us-ing parallel vector evaluated particle swarm optimiza-tion. In Proceedings of the IASTED International Con-ference on Artificial Intelligence and Applications (AIA2004), volume 2, pages 823–828, Innsbruck, Austria,February 2004. ACTA Press.

[50] Konstantinos E. Parsopoulos and Michael N. Vrahatis. Particle swarm optimization method in multiobjective problems. In Proceedings of the 2002 ACM Symposium on Applied Computing (SAC'2002), pages 603–607, Madrid, Spain, 2002. ACM Press.

[51] Carlo R. Raquel and Prospero C. Naval, Jr. An effective use of crowding distance in multiobjective particle swarm optimization. In Proceedings of the Genetic and Evolutionary Computation Conference (GECCO'2005), pages 257–264, Washington, DC, USA, June 2005. ACM Press.

[52] Tapabrata Ray, Tai Kang, and Seow Kian Chye. An evolutionary algorithm for constrained optimization. In Darrell Whitley, David Goldberg, Erick Cantú-Paz, Lee Spector, Ian Parmee, and Hans-Georg Beyer, editors, Proceedings of the Genetic and Evolutionary Computation Conference (GECCO'2000), pages 771–777, San Francisco, California, 2000. Morgan Kaufmann.

[53] Tapabrata Ray and K.M. Liew. A swarm metaphor for multiobjective design optimization. Engineering Optimization, 34(2):141–153, March 2002.

[54] Günter Rudolph. On a multi-objective evolutionary algorithm and its convergence to the Pareto set. In Proceedings of the 5th IEEE Conference on Evolutionary Computation, pages 511–516, Piscataway, New Jersey, 1998. IEEE Press.

[55] Maximino Salazar-Lechuga and Jonathan Rowe. Particle swarm optimization and fitness sharing to solve multi-objective optimization problems. In Congress on Evolutionary Computation (CEC'2005), pages 1204–1211, Edinburgh, Scotland, UK, September 2005. IEEE Press.

[56] J. David Schaffer. Multiple Objective Optimization with Vector Evaluated Genetic Algorithms. PhD thesis, Vanderbilt University, 1984.

[57] J. David Schaffer. Multiple objective optimization with vector evaluated genetic algorithms. In Genetic Algorithms and their Applications: Proceedings of the First International Conference on Genetic Algorithms, pages 93–100. Lawrence Erlbaum, 1985.

[58] Yuhui Shi and Russell Eberhart. Parameter selection in particle swarm optimization. In Evolutionary Programming VII: Proceedings of the Seventh Annual Conference on Evolutionary Programming, pages 591–600, New York, USA, 1998. Springer-Verlag.

[59] Yuhui Shi and Russell Eberhart. Empirical study of particle swarm optimization. In Congress on Evolutionary Computation (CEC'1999), pages 1945–1950, Piscataway, NJ, 1999. IEEE Press.

[60] Margarita Reyes Sierra and Carlos A. Coello Coello. Improving PSO-based multi-objective optimization using crowding, mutation and ε-dominance. In Third International Conference on Evolutionary Multi-Criterion Optimization, EMO 2005, pages 505–519, Guanajuato, Mexico, 2005. Springer-Verlag, Lecture Notes in Computer Science Vol. 3410.

[61] Dipti Srinivasan and Tian Hou Seow. Particle swarm inspired evolutionary algorithm (PS-EA) for multiobjective optimization problem. In Congress on Evolutionary Computation (CEC'2003), volume 3, pages 2292–2297, Canberra, Australia, December 2003. IEEE Press.


[62] Andrew Stacey, Mirjana Jancic, and Ian Grundy. Particle swarm optimization with mutation. In Proceedings of the Congress on Evolutionary Computation, pages 1425–1430, Canberra, Australia, 2003. IEEE Press.

[63] Kay Chen Tan, Tong Heng Lee, and Eik Fun Khor. Evolutionary algorithms with dynamic population size and local exploration for multiobjective optimization. IEEE Transactions on Evolutionary Computation, 5(6):565–588, December 2001.

[64] Gregorio Toscano Pulido. On the Use of Self-Adaptation and Elitism for Multiobjective Particle Swarm Optimization. PhD thesis, Computer Science Section, Department of Electrical Engineering, CINVESTAV-IPN, Mexico, September 2005.

[65] Gregorio Toscano Pulido and Carlos A. Coello Coello. The micro genetic algorithm 2: Towards online adaptation in evolutionary multiobjective optimization. In Carlos M. Fonseca, Peter J. Fleming, Eckart Zitzler, Kalyanmoy Deb, and Lothar Thiele, editors, Second International Conference on Evolutionary Multi-Criterion Optimization, EMO 2003, pages 252–266, Faro, Portugal, April 2003. Springer, Lecture Notes in Computer Science Vol. 2632.

[66] Gregorio Toscano Pulido and Carlos A. Coello Coello. Using clustering techniques to improve the performance of a particle swarm optimizer. In Kalyanmoy Deb et al., editors, Proceedings of the Genetic and Evolutionary Computation Conference (GECCO'2004), pages 225–237, Seattle, Washington, USA, June 2004. Springer-Verlag, Lecture Notes in Computer Science Vol. 3102.

[67] Frans van den Bergh. An Analysis of Particle Swarm Optimizers. PhD thesis, Department of Computer Science, University of Pretoria, Pretoria, South Africa, 2002.

[68] Mario Alberto Villalobos-Arias, Gregorio Toscano Pulido, and Carlos A. Coello Coello. A proposal to use stripes to maintain diversity in a multi-objective particle swarm optimizer. In Proceedings of the 2005 IEEE Swarm Intelligence Symposium, pages 22–29, Pasadena, California, USA, June 2005. IEEE Press.

[69] Zhang Xiao-hua, Meng Hong-yun, and Jiao Li-cheng. Intelligent particle swarm optimization in multiobjective optimization. In Congress on Evolutionary Computation (CEC'2005), pages 714–719, Edinburgh, Scotland, UK, September 2005. IEEE Press.

[70] L.B. Zhang, C.G. Zhou, X.H. Liu, Z.Q. Ma, and Y.C. Liang. Solving multi objective optimization problems using particle swarm optimization. In Congress on Evolutionary Computation (CEC'2003), volume 3, pages 2400–2405, Canberra, Australia, December 2003. IEEE Press.

[71] Bo Zhao and Yi-jia Cao. Multiple objective particle swarm optimization technique for economic load dispatch. Journal of Zhejiang University SCIENCE, 6A(5):420–427, 2005.

[72] Yong-Ling Zheng, Long-Hua Ma, Li-Yan Zhang, and Ji-Xin Qian. On the convergence analysis and parameter selection in particle swarm optimization. In Proceedings of the Second International Conference on Machine Learning and Cybernetics, pages 1802–1807. IEEE Press, 2003.

[73] Eckart Zitzler, Kalyanmoy Deb, and Lothar Thiele. Comparison of Multiobjective Evolutionary Algorithms: Empirical Results. Evolutionary Computation, 8(2):173–195, Summer 2000.

[74] Eckart Zitzler and Lothar Thiele. Multiobjective evolutionary algorithms: A comparative case study and the strength Pareto approach. IEEE Transactions on Evolutionary Computation, 3(4):257–271, November 1999.

Author Biographies

Margarita Reyes-Sierra received the B.Sc. degree in mathematics and the M.Sc. degree in artificial intelligence from the Universidad Veracruzana, Mexico, in 2000 and 2002, respectively.

She is currently working towards the Ph.D. degree in the Department of Electrical Engineering at the Centro de Investigación y de Estudios Avanzados del Instituto Politécnico Nacional (CINVESTAV-IPN), in Mexico City.

Her current research interests include evolutionary multiobjective optimization, particle swarm optimization, and evolutionary algorithms in general.

Carlos A. Coello Coello received the B.Sc. degree in civil engineering from the Universidad Autónoma de Chiapas, Mexico, and the M.Sc. and Ph.D. degrees in computer science from Tulane University, USA, in 1991, 1993, and 1996, respectively.

He is currently a professor (CINVESTAV-3D Researcher) at the Electrical Engineering Department of CINVESTAV-IPN, in Mexico City, Mexico. Dr. Coello has authored and co-authored over 120 technical papers and several book chapters. He has also co-authored the book Evolutionary Algorithms for Solving Multi-Objective Problems (Kluwer Academic Publishers, 2002). Additionally, Dr. Coello has served on the program committees of over 40 international conferences and has been a technical reviewer for over 35 international journals. He is currently associate editor of the IEEE Transactions on Evolutionary Computation and a member of the editorial boards of the journals Evolutionary Computation, Soft Computing, Engineering Optimization, and the International Journal of Computational Intelligence Research. He also chairs the Task Force on Multi-Objective Evolutionary Algorithms of the IEEE Computational Intelligence Society. He is a senior member of the IEEE, and is a member of the ACM, Sigma Xi, and the Mexican Academy of Sciences.

His major research interests are: evolutionary multiobjective optimization, constraint-handling techniques for evolutionary algorithms, and evolvable hardware.