
IEEE TRANSACTIONS ON EVOLUTIONARY COMPUTATION, VOL. 21, NO. 1, FEBRUARY 2017 131

A Vector Angle-Based Evolutionary Algorithm for Unconstrained Many-Objective Optimization

Yi Xiang, Yuren Zhou, Miqing Li, and Zefeng Chen

Abstract—Taking both convergence and diversity into consideration, this paper suggests a vector angle-based evolutionary algorithm for unconstrained (with box constraints only) many-objective optimization problems. In the proposed algorithm, the maximum-vector-angle-first principle is used in the environmental selection to guarantee the wideness and uniformity of the solution set. With the help of the worse-elimination principle, worse solutions in terms of convergence (measured by the sum of normalized objectives) are allowed to be conditionally replaced by other individuals. Therefore, the selection pressure toward the Pareto-optimal front is strengthened. The proposed method is compared with four other state-of-the-art many-objective evolutionary algorithms on a number of unconstrained test problems with up to 15 objectives. The experimental results show the competitiveness and effectiveness of the proposed algorithm in keeping a good balance between convergence and diversity. Furthermore, the results on two problems from practice (with irregular Pareto fronts) show that our method significantly outperforms its competitors in terms of both the convergence and diversity of the obtained solution sets. Notably, the new algorithm has the following good properties: 1) it is free from a set of supplied reference points or weight vectors; 2) it has fewer algorithmic parameters; and 3) its time complexity is low. Given both its good performance and nice properties, the suggested algorithm could be an alternative tool when handling optimization problems with more than three objectives.

Index Terms—Convergence and diversity, decomposition, evolutionary algorithm, many-objective optimization.

I. INTRODUCTION

MULTIOBJECTIVE problems (MOPs) with at least four conflicting objectives are informally known as many-objective optimization problems (MaOPs) [1], [2]. MaOPs exist widely in real-world applications [1], such as engineering design [3], water distribution system design [4], the airfoil design problem [5], and the automotive engine calibration problem [6].

Manuscript received November 2, 2015; revised February 19, 2016, May 14, 2016, and June 22, 2016; accepted July 3, 2016. Date of publication July 7, 2016; date of current version January 26, 2017. This work was supported in part by the Natural Science Foundation of China under Grant 61472143 and Grant 61170081, and in part by the Scientific Research Special Plan of the Guangzhou Science and Technology Programme under Grant 201607010045. (Corresponding author: Yuren Zhou.)

Y. Xiang, Y. Zhou, and Z. Chen are with the School of Data and Computer Science, Sun Yat-sen University, Guangzhou 510006, China, and also with the Collaborative Innovation Center of High Performance Computing, Guangzhou 510006, China (e-mail: [email protected]; [email protected]).

M. Li is with the Center of Excellence for Research in Computational Intelligence and Applications, School of Computer Science, University of Birmingham, Birmingham B15 2TT, U.K.

Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org.

Digital Object Identifier 10.1109/TEVC.2016.2587808

Recently, MaOPs have caught wide attention from the evolutionary multiobjective optimization (EMO) community. A number of many-objective evolutionary algorithms (MaOEAs) have been proposed to deal with optimization problems having more than three objectives [7]–[15].

Generally, an MOP with only box constraints can be stated as follows:

Minimize F(x) = (f_1(x), f_2(x), ..., f_m(x))^T
subject to: x ∈ Ω    (1)

where m is the number of objectives. For MaOPs, m ≥ 4. In (1), Ω = ∏_{i=1}^{n} [l_i, u_i] ⊆ R^n is called the decision space. Here, n is the number of decision variables, and u_i and l_i are the upper and lower bounds of the ith decision variable, respectively.

Unlike in single-objective optimization problems, there is no single best solution for an MOP, but rather a set of nondominated solutions known as the Pareto-optimal set (PS). The objective vectors F(x) of all x ∈ PS constitute the Pareto front (PF) [16]. Since multiobjective evolutionary algorithms (MOEAs) are typically capable of finding multiple tradeoff solutions in a single simulation run, they have been widely and successfully used to solve optimization problems with mostly two or three objectives [16]–[18].

It has been shown that Pareto-based MOEAs, such as the nondominated sorting genetic algorithm II (NSGA-II) [17], the improved strength Pareto evolutionary algorithm (SPEA2) [19], the Pareto archived evolution strategy [20], and the Pareto envelope-based selection algorithm II (PESAII) [21], encounter difficulties when handling MaOPs, mainly due to the obstacles caused by the dominance resistance (DR) and the active diversity promotion (ADP) phenomena [22], [23]. The DR phenomenon refers to the incomparability of solutions in terms of the Pareto dominance relation; the main reason is that the proportion of nondominated solutions in a population tends to rise rapidly as the number of objectives increases [1]. In fact, the probability of any two solutions being comparable is η = 1/2^{m−1} in an m-dimensional objective space [10] (a small Monte Carlo check of this figure is sketched after this paragraph). For two or three objectives, this probability is 1/2 and 1/4, respectively. However, η is already as low as 0.002 when m takes the value of 10. As a result of the DR phenomenon, the dominance-based primary selection criterion fails to distinguish between solutions, and the diversity-based secondary selection criterion is then activated to determine the survival of solutions in the environmental selection.
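The 1/2^{m−1} figure assumes the two objective vectors have independent, continuously distributed components. A small Monte Carlo check (our own sketch, not part of the original study) reproduces it:

```python
import random

def comparable(a, b):
    """True if a strictly dominates b or b strictly dominates a (componentwise)."""
    return all(x < y for x, y in zip(a, b)) or all(x > y for x, y in zip(a, b))

def estimate_eta(m, trials=100_000):
    """Estimate the probability that two random points in [0,1]^m are comparable."""
    hits = sum(
        comparable([random.random() for _ in range(m)],
                   [random.random() for _ in range(m)])
        for _ in range(trials)
    )
    return hits / trials

for m in (2, 3, 10):
    print(m, estimate_eta(m), 1 / 2 ** (m - 1))  # estimates approach 0.5, 0.25, ~0.002
```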



The activation of the diversity-based criterion leads to the so-called ADP problem, which may be harmful to the convergence of the approximated PFs. According to the experiment carried out in [10], NSGA-II without the diversity maintenance mechanism obtains even better convergence metric [24] results than the original NSGA-II on the 5-objective DTLZ2 problem. Furthermore, for the 10-objective problem, the diversity maintenance mechanism even makes the population gradually move away from the true PF.

Many efforts have been made to improve the scalability of Pareto-based MOEAs to make them suitable for MaOPs. To address the DR phenomenon, a number of modified or relaxed Pareto dominance relations have been proposed to enhance the selection pressure toward the true PF. The ε-dominance [25], CDAS-dominance [26], α-dominance [27], fuzzy Pareto dominance [28], L-dominance [29], θ-dominance [30], and so on [31]–[33] have been experimentally verified to be more effective than the original Pareto dominance relation when searching toward the optimum.

Regarding the ADP phenomenon, several customized diversity-based approaches have been developed to weaken or avoid the adverse impact of diversity maintenance. Wagner et al. [34] showed that assigning the crowding distance of boundary solutions a zero value, instead of infinity, could significantly improve the convergence of NSGA-II on MaOPs. Adra and Fleming [35] introduced a diversity management method named DM1 that adaptively determines whether or not to activate diversity promotion according to the distribution of the population. It has been shown by the experimental results on a set of test problems with up to 20 objectives that NSGA-II with the DM1 operator performed consistently better than NSGA-II in terms of both convergence and diversity [1]. Focusing on the secondary selection criterion for Pareto-based algorithms, Li et al. [23] introduced a shift-based density estimation (SDE) strategy and applied it to three popular Pareto-based algorithms, i.e., NSGA-II, SPEA2, and PESAII, forming three new algorithms (i.e., NSGA-II+SDE, SPEA2+SDE, and PESAII+SDE). In SDE, poorly converged individuals are pulled into crowded regions and assigned a high density value; thus they are easily eliminated during the evolutionary process. It has been demonstrated by experimental results on DTLZ and TSP test problems with up to ten objectives that SPEA2+SDE was very effective in balancing convergence and diversity. Recently, Deb and Jain [8] adopted a new diversity maintenance mechanism in the framework of NSGA-II, and proposed a reference-point-based evolutionary algorithm (NSGA-III) for many-objective optimization. In each generation of NSGA-III, the population is divided into different layers by using the nondominated sorting procedure. Then, objective vectors are adaptively normalized. Next, each population member is associated with a particular reference line based on a proximity measure. Finally, a niche preservation operation is applied to ensure a set of diversified solutions. In NSGA-III, the reference points can be either generated systematically or supplied by the users.

In MaOEAs, balancing convergence and diversity is of high importance, and is also the main challenge in the field. Yang et al. [7] proposed a grid-based evolutionary algorithm (GrEA) for MaOPs. In GrEA, the grid dominance and grid difference defined in the grid environment are used to increase the selection pressure toward the true Pareto-optimal front. Besides, three grid-based criteria, i.e., grid ranking, grid crowding distance, and grid coordinate point distance, are introduced to guarantee a good distribution among solutions. It was revealed by the experimental results on 52 test instances that GrEA was effective and competitive in balancing convergence and diversity. Praditwong and Yao [36] and Wang et al. [37] proposed the two-archive algorithm (Two_Arch) and its improved version Two_Arch2. In these algorithms, the nondominated solution set is divided into two archives, namely the convergence archive (CA) and the diversity archive (DA). In Two_Arch2, the two archives are assigned different selection principles, i.e., indicator-based and Pareto-based for CA and DA, respectively. Additionally, a new Lp-norm-based (p < 1) diversity maintenance scheme is designed in Two_Arch2 for MaOPs. The experimental results showed that Two_Arch2 performed well on a set of MaOPs with satisfactory convergence, diversity, and complexity. Li et al. [11] developed a steady-state algorithm, MOEA/DD, for MaOPs by combining dominance- and decomposition-based approaches to achieve a balance between convergence and diversity. In MOEA/DD, a two-layer weight vector generation method is designed to produce a set of well-distributed weight vectors. Cheng et al. [38] proposed a many-objective evolutionary algorithm (named MaOEA-DDFC) which is based on directional diversity (DD) and favorable convergence (FC). In this algorithm, a mating selection based on FC is used to enhance the selection pressure, and an environmental selection based on DD and FC is applied to balance diversity and convergence. Experimental results showed that the proposed algorithm performed competitively against seven state-of-the-art algorithms on a set of test instances.

Decomposition-based algorithms decompose an MOP into a set of subproblems, and optimize them in a collaborative way. The multiobjective evolutionary algorithm based on decomposition (MOEA/D) [39] is a representative of this kind of algorithms. In these algorithms, a predefined set of weight vectors is used to specify multiple search directions toward different parts of the PF. Since the weight vectors of the subproblems are widely distributed, the obtained solutions are expected to have a wide spread over the PF. Decomposition-based methods seem to be exempt from the insufficient selection pressure problem, because they do not rely on the Pareto dominance relation to distinguish between individuals. MOEA/D and MOEA/DD were demonstrated to be highly competitive when handling MaOPs [8], [11].

However, decomposition-based algorithms face their own challenges. First, decomposition-based methods need to specify a set of weight vectors, which significantly affect the algorithms' performance in terms of diversity [1], and the configuration of weight vectors is not always an easy task. Although the two-layer weight vector generation method is adopted in MOEA/DD to produce weight vectors, it still suffers from the curse of dimensionality. For instance, in an m = 50 objective space, this method will generate C(50 + 2 − 1, 2) + C(50 + 1 − 1, 1) = 1275 + 50 = 1325 weight vectors even for h1 = 2 and h2 = 1 (see the short check after this paragraph).


Here, h1 and h2 are the number of divisions in the boundary layer and the inside layer, respectively. Second, how to maintain the uniformity of the intersection points of the specified search directions and the problem's PF is a key issue in decomposition-based EMO algorithms. When facing problems with a highly irregular PF (e.g., a discontinuous or degenerate front), uniformly distributed weight vectors cannot guarantee the uniformity of the intersection points [40].
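For illustration, the two-layer count above can be reproduced with the standard simplex-lattice formula C(m + h − 1, h) applied to each layer; the short sketch below assumes exactly that counting rule (function name is ours):

```python
from math import comb

def two_layer_weight_count(m, h1, h2):
    """Number of weight vectors from the two-layer generation method:
    C(m + h1 - 1, h1) boundary-layer vectors plus C(m + h2 - 1, h2) inside-layer vectors."""
    return comb(m + h1 - 1, h1) + comb(m + h2 - 1, h2)

print(two_layer_weight_count(50, 2, 1))  # 1275 + 50 = 1325
```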

Given the above, we here consider a new optimizer based on the search directions of the evolutionary population itself. In the evolutionary process, the current population is an approximation of the true PF, and this approximation becomes more and more accurate as the evolution proceeds [41]. Therefore, the search can be guided by the current population, and the new method does not require reference points or weight vectors. Specifically, this paper proposes a vector angle-based evolutionary algorithm (VaEA) to solve MaOPs. VaEA takes both convergence and diversity into consideration, and is expected to achieve a good balance between them. In each generation of VaEA, the objective vectors are normalized. Then, the critical layer Fl is identified by using the nondominated sorting method. In the environmental selection, the maximum-vector-angle-first principle is applied to selecting solutions from Fl one by one so as to ensure the diversity of the population. By using the worse-elimination principle, VaEA allows worse solutions in terms of convergence (measured by the sum of the normalized objectives) to be conditionally replaced by other individuals so as to keep a balance between convergence and diversity.

It should be noted here that approximating the PF along multiple search directions is one of the major features of decomposition-based algorithms. From this point of view, VaEA shares the same idea as decomposition-based methods, because individuals in the current population define multiple search directions toward the PF. However, there exists a significant difference between VaEA and decomposition-based methods: in decomposition-based algorithms, an MOP is usually converted into a number of scalar optimization subproblems [39] or a set of simple MOPs [42], but this is not the case in VaEA. The main properties of VaEA can be summarized as follows.

1) No weight vectors or reference points need to be specified beforehand. Actually, the evolutionary process in VaEA is dynamically guided by the current population because: a) the population in different generations is usually distinct and b) the search directions in each generation are dynamically selected according to the maximum-vector-angle-first principle. From this point of view, VaEA shares a similar idea with some decomposition algorithms [43]–[45], where the weight vectors are adaptively adjusted according to the distribution of the current population; thus, in these algorithms the search directions are also dynamically changed.

2) Parameter-less property. Apart from some common GA parameters, such as the population size, the termination condition, and the parameters of the genetic operators, VaEA does not need to configure any new algorithmic parameter. In MOEA/DD, three more parameters are involved, i.e., the penalty parameter in PBI, the neighborhood size, and the probability used to select within the neighborhood [11]. These parameters have to be carefully configured in the practical application of the algorithm.

3) Lower time complexity. As will be shown in Section II-C, the worst-case time complexity of VaEA is max{O(N log^{m−2} N), O(mN^2)}, which is equal to that of NSGA-III, MOEA/DD, and Two_Arch2, and lower than those of MaOEA-DDFC, GrEA, and SPEA2+SDE.

In [26], the concept of the vector angle was also adopted in the design of MOEAs. However, our method is totally different from the method in [26]. First and foremost, the vector angle in [26] is used to constrict or expand the dominance area of solutions, with the aim of enhancing the selection pressure toward the optimal fronts; in this paper, the vector angle is adopted for the purpose of diversity promotion. Second, the method in [26] considers the vector angle between a solution and each coordinate axis, whereas in VaEA we calculate the angle between each pair of solutions as a measure of closeness. Finally, the vector angle in [26] can be set to any value between 0 and π, while the angle in VaEA is restricted to the interval [0, π/2]. The recently proposed MOEA/D-M2M [42] also uses the concept of the vector angle. In MOEA/D-M2M, with the help of vector angles, an MOP is divided into a number of simple multiobjective subproblems which are solved in a collaborative way. The major difference between VaEA and MOEA/D-M2M is that the vector angle in the latter is used to divide the whole objective space into a number of subregions so as to specify different subpopulations, each approximating a small part of the true PF; as stated before, the angle in VaEA is adopted to measure the closeness between a pair of solutions. Furthermore, a set of direction vectors has to be chosen in MOEA/D-M2M, but in VaEA we do not need to specify any direction vector.

In the remainder of this paper, we first describe the proposed VaEA in detail in Section II. Then, we present the simulation results on a set of test instances in Section III. Thereafter, in Section IV, VaEA is applied to two problems from practice, and is compared with two competitive algorithms, NSGA-III and MOEA/DD. Finally, Section V concludes this paper and gives some remarks for future studies.

II. PROPOSED ALGORITHM: VAEA

A. General Framework

The pseudocode of the proposed method is shown in Algorithm 1. VaEA shares a common framework that is employed by many evolutionary algorithms. First, a population P with N solutions is randomly initialized in the whole decision space Ω. Then the more promising solutions are selected into the mating pool according to the fitness value of each individual. Next, a set of offspring solutions Q is generated by applying crossover and mutation operations. Finally, N solutions are selected from the union of P and Q by an environmental selection procedure.


Algorithm 1 Framework of the Proposed VaEA
1: Initialization(P)
2: G = 1
3: While G ≤ Gmax do
4:     P′ = Mating_selection(P)
5:     Q = Variation(P′)
6:     S = P ∪ Q
7:     P = Environmental_selection(S)
8:     G = G + 1
9: End While
10: Return P

The above steps are repeated until the number of generations G reaches its maximum value, i.e., Gmax.

As we will describe in the following section, VaEA performs a careful elitist preservation strategy and maintains diversity among solutions by putting more emphasis on solutions that have the maximum vector angle to the individuals which have already survived. Therefore, we do not use any special reproduction operations in VaEA. The offspring population Q is constructed by applying the usual crossover and mutation operations to parents that are randomly picked from P. As in NSGA-III, the well-known simulated binary crossover (SBX) [46] (with a relatively large value of the distribution index) and polynomial mutation operators are used in the variation step (line 5) of Algorithm 1.

B. Environmental Selection

The framework of the environmental selection in VaEA is similar to that of NSGA-II [17] or NSGA-III [8], but the niche-preservation operation is quite different from other existing MaOEAs. Algorithm 2 shows the general framework of the environmental selection procedure. First, the set S is normalized by the function Normalization (line 2 in Algorithm 2). Then the norm of each solution in S is calculated in the normalized objective space (line 3). The nondominated sorting procedure is used to divide solutions into different layers, and then the last layer Fl is determined (lines 4–8). If the population is full, P is returned. Otherwise, K = N − |P| solutions from Fl are added into P one by one by using the functions Associate and Niching (lines 13 and 14), which are designed on the basis of vector angles. In what follows, we describe them in more detail.

1) Normalization: The normalization procedure is shown in Algorithm 3. The ideal point Zmin = (z_1^min, z_2^min, ..., z_m^min)^T is determined by finding the minimum value of each objective over all solutions in S, i.e., z_i^min = min_{1 ≤ j ≤ 2N} f_i(x_j), x_j ∈ S, i = 1, 2, ..., m. Similarly, the nadir point Zmax = (z_1^max, z_2^max, ..., z_m^max)^T can be worked out (line 2 in Algorithm 3). For each solution x_j ∈ S, its objective vector F(x_j) is normalized to F′(x_j) = (f′_1(x_j), f′_2(x_j), ..., f′_m(x_j))^T through the following equation:

f′_i(x_j) = (f_i(x_j) − z_i^min) / (z_i^max − z_i^min),   i = 1, 2, ..., m.    (2)

During the normalization process, the fitness value of x_j [denoted by fit(x_j)] is also calculated.

Algorithm 2 Environmental_Selection(S)
Input: S, the union of P and Q
Output: P, the new population
1: P = ∅, i = 1
2: Normalization(S)
3: Compute_norm(S)
4: (F1, F2, ...) = Non-dominated-sorting(S)
5: While |P| + |Fi| ≤ N
6:     P = P ∪ Fi and i = i + 1
7: End While
8: The last front to be included: Fl = Fi
9: If |P| = N, then
10:     return P
11: Else
12:     Number of solutions to be chosen from Fl: K = N − |P|
13:     Associate each member of Fl with a solution in P: Associate(P, Fl)
14:     Choose K solutions one by one from Fl to construct the final P: P = Niching(P, Fl, K)
15: End If
16: Return P

Algorithm 3 Normalization(S)
Input: S, the union of P and Q
1: Find Zmin, where z_i^min = min_{1 ≤ j ≤ 2N} f_i(x_j), i = 1, 2, ..., m
2: Find Zmax, where z_i^max = max_{1 ≤ j ≤ 2N} f_i(x_j), i = 1, 2, ..., m
3: For j = 1 to 2N
4:     fit(x_j) = 0
5:     For i = 1 to m
6:         f′_i(x_j) = (f_i(x_j) − z_i^min)/(z_i^max − z_i^min)
7:         fit(x_j) = fit(x_j) + f′_i(x_j)
8:     End For
9: End For

This fitness assignment method considers the sum of the normalized objective values. Mathematically,

fit(x_j) = ∑_{i=1}^{m} f′_i(x_j).    (3)

This fitness value roughly reflects the convergence of an individual. Two factors, the number of objectives and the performance in each objective, determine this estimation function. A solution with good performance in the majority of objectives is likely to obtain a better (i.e., lower) fit value. It is notable that the accuracy of the estimation can be influenced by the shape of an MaOP's PF [10]. However, as will be shown in Algorithm 5, the fitness value in VaEA is used only to decide which of two closest solutions, in terms of the vector angle (in the objective space), survives. Hence, the influence of the shape of the PF may be alleviated.
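For concreteness, a minimal NumPy sketch of the normalization in (2) and the fitness assignment in (3) might look as follows; the array shapes, the zero-range guard, and all names are our own and not taken from the reference implementation:

```python
import numpy as np

def normalize_and_fitness(F):
    """F: objective matrix of shape (2N, m) for the union population S.
    Returns the normalized objectives F' of (2) and the fitness values of (3)."""
    z_min = F.min(axis=0)                                 # ideal point
    z_max = F.max(axis=0)                                 # nadir point estimated from S
    span = np.where(z_max > z_min, z_max - z_min, 1.0)    # guard against a zero range
    F_norm = (F - z_min) / span                           # eq. (2)
    fitness = F_norm.sum(axis=1)                          # eq. (3): sum of normalized objectives
    return F_norm, fitness
```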

2) Computation of Norm and Vector Angles: The function Compute_norm(S) (line 3 in Algorithm 2) calculates the norm (in the normalized objective space) of each solution in S.


For x_j, its norm [denoted as norm(x_j)] is defined as

norm(x_j) ≜ sqrt( ∑_{i=1}^{m} f′_i(x_j)^2 ).    (4)

The norm is stored in an attribute named norm and is used to calculate vector angles between two solutions in the normalized objective space. The vector (acute) angle between solutions x_j and y_k is defined as

angle(x_j, y_k) ≜ arccos | F′(x_j) • F′(y_k) / (norm(x_j) · norm(y_k)) |    (5)

where F′(x_j) • F′(y_k) is the inner product of F′(x_j) and F′(y_k), defined as

F′(x_j) • F′(y_k) = ∑_{i=1}^{m} f′_i(x_j) · f′_i(y_k).    (6)

Obviously, angle(x_j, y_k) ∈ [0, π/2].
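A compact sketch of (4)–(6), again in NumPy and assuming objective vectors already normalized by (2); the epsilon guard against zero norms is our addition:

```python
import numpy as np

def vector_angle(f1, f2, eps=1e-12):
    """Acute angle between two normalized objective vectors, as in (5)."""
    n1 = np.linalg.norm(f1)                                  # eq. (4)
    n2 = np.linalg.norm(f2)
    cos = abs(np.dot(f1, f2)) / max(n1 * n2, eps)            # eq. (6) in the numerator
    return float(np.arccos(np.clip(cos, 0.0, 1.0)))          # value lies in [0, pi/2]
```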

3) Association: After determining the last layer Fl by using the nondominated sorting procedure, the proposed algorithm selects K = N − |P| solutions from Fl on the basis of vector angles if the population P is not full. Before describing the association operation, we first give some basic definitions.

Definition 1: The target solution of x_j ∈ Fl is defined as the solution in P to which x_j has the minimum vector angle. We denote this minimum vector angle by θ(x_j), and the index of the corresponding target solution by γ(x_j). If x_j has multiple target solutions, we simply select one of them at random.

Definition 2: The vector angle from a solution x_j ∈ Fl to the population P is defined as the angle between x_j and its target solution, i.e., angle(x_j, P) = θ(x_j).

Definition 3: Extreme solutions in a solution set are defined as the solutions that have the minimum angle to the m axis vectors (1, 0, ..., 0), (0, 1, ..., 0), ..., and (0, 0, ..., 1). We denote these extreme solutions by e_i, i = 1, 2, ..., m.

The association procedure is given in Algorithm 4. If P is empty (this is common when a higher-dimensional objective space is considered), we first sort the solutions in Fl in ascending order according to the fitness values defined by (3). Then, the m extreme solutions in Fl are added into P (lines 3–4 in Algorithm 4). The inclusion of extreme solutions aims at promoting diversity, especially the extensiveness of the population. Thereafter, the first m best converged solutions in terms of the fitness value are considered and added into P (lines 5–6 in Algorithm 4). Note that well-converged solutions may also be extreme solutions; in this case, we only add these solutions once. Once a solution has been added, it is removed from Fl. For each of the remaining members of Fl (i.e., x_j), its γ(x_j) and θ(x_j) are initialized to −1 and ∞, respectively (lines 9–10 in Algorithm 4). Then, for each y_k ∈ P, the angle between x_j and y_k is calculated according to (5). Finally, γ(x_j) and θ(x_j) are updated accordingly if a smaller angle is found (lines 13–16 in Algorithm 4). Fig. 1 illustrates the association operation in a 2-D objective space. According to the vector angles, both x1 and x2 are associated with y1; x3 is associated with y2, and x4 is associated with y4. For x3, θ(x3) and γ(x3) are σ and 2, respectively.

Algorithm 4 Association(P, Fl)
Input: P and Fl
1: If P = ∅
2:     Fl = Sort(Fl) // sort Fl according to fitness values
3:     // first, add the m extreme solutions e_i into P
4:     P = P ∪ {e_i}, i = 1, ..., m
5:     // second, add the first m best converged solutions
6:     P = P ∪ {x_j}, j = 1, ..., m
7: End If
8: For each x_j ∈ Fl
9:     γ(x_j) = −1
10:     θ(x_j) = ∞
11:     For each y_k ∈ P
12:         Calculate the angle between x_j and y_k using (5)
13:         If angle < θ(x_j)
14:             γ(x_j) = k
15:             θ(x_j) = angle
16:         End If
17:     End For
18: End For

Fig. 1. Illustration of the association operation.
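A rough Python rendering of the association loop of Algorithm 4 is given below; it reuses the vector_angle sketch from above, skips the special handling of an initially empty P (extreme and best-fitness members), and all names are illustrative:

```python
import numpy as np

def associate(P_norm, Fl_norm):
    """For each member of Fl, record the index (gamma) of its closest solution
    in P in terms of the vector angle, and that minimum angle (theta)."""
    gamma, theta = [], []
    for f in Fl_norm:
        angles = [vector_angle(f, p) for p in P_norm]
        k = int(np.argmin(angles))       # index of the target solution in P
        gamma.append(k)
        theta.append(angles[k])
    return np.asarray(gamma), np.asarray(theta)
```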

4) Niche-Preservation: In VaEA, the niche-preservation procedure shown in Algorithm 5 considers both the convergence and the diversity of the population, which are guaranteed by the worse-elimination and maximum-vector-angle-first principles, respectively. Each member of the last layer Fl has a flag value that indicates whether this solution has been added into the population P or not; it is initialized to false at the beginning of the procedure (lines 1 and 2 in Algorithm 5). Next, for each k ≤ K, the indexes of the members of Fl that have the maximum and minimum vector angles to P (the definition of the vector angle from a solution to a set is given by Definition 2) are worked out and denoted by ρ and μ, respectively (lines 4 and 5).

Line 6 in Algorithm 5 is the maximum-vector-angle-first principle, which aims to select the best solution in terms of the maximum vector angle and add it into the population to construct the new P. Details of this principle are shown in Algorithm 6. If ρ is null (meaning that all members of Fl have been added), the procedure returns P.


Algorithm 5 P = Niching(P, Fl, K)
Input: P, Fl and K
Output: The new P
1: T = |Fl| // |Fl| returns the cardinality of Fl
2: flag(x_j) = false, x_j ∈ Fl, j = 1, 2, ..., T
3: For k = 1 to K
       // find the index with the maximum vector angle
4:     ρ = arg max_{j=1,...,T} {θ(x_j) | x_j ∈ Fl ∧ (flag(x_j) == false)}
       // find the index with the minimum vector angle
5:     μ = arg min_{j=1,...,T} {θ(x_j) | x_j ∈ Fl ∧ (flag(x_j) == false)}
6:     P = Maximum-vector-angle-first(P, ρ, T)
7:     P = Worse-elimination(P, μ, T)
8: End For
9: Return P

Otherwise, x_ρ is added into P and its flag is set to true (lines 2–3). When x_ρ has been added, the vector angles from the remaining individuals in Fl to the new population P should be updated accordingly. To achieve this, we just need to calculate the vector angle between x_ρ and each member of Fl whose flag value is false (line 6). If a smaller angle is found, the update is executed according to lines 8 and 9.

For example, in Fig. 1, x2 has the maximum vector angle (2σ) to P = {y1, y2, y3, y4}. According to the maximum-vector-angle-first principle, x2 will be added. Clearly, the improvement of the distribution (mainly the uniformity) of P by adding x2 is more obvious than by adding other solutions, e.g., x3 and x4. After x2 is added, the region around it can be exploited by the algorithm and better solutions along the direction of x2 may be attainable. Now suppose two more solutions need to be added; then the choices will be x1 and x3. On the contrary, x4 can never be considered because it has a zero vector angle to P. This is also understandable because a better solution, y4, in the same direction as x4 has already been included in the population P.

According to the above procedure, our proposed VaEA selects solutions dynamically and is expected to keep a well-distributed population. However, as discussed elsewhere [10], most individuals in a population are incomparable in terms of convergence, since Pareto dominance often loses its effectiveness in differentiating individuals when dealing with MaOPs. In fact, for many-objective problems, most individuals fall into the first layer of the nondominated sorting procedure, thereby providing insufficient selection pressure toward the true PF. As a result, the diversity-based selection criterion plays a crucial role in determining the survival of individuals, which may make the individuals in the final population well distributed but also far away from the desired PF.

To alleviate the above phenomenon, VaEA allows worse solutions in terms of convergence to be conditionally replaced by other individuals so as to keep a balance between convergence and diversity. This is called the worse-elimination principle (line 7 in Algorithm 5), and it is described in detail in Algorithm 7.

Algorithm 6 P = Maximum-Vector-Angle-First(P, ρ, T)
Input: P, ρ and T
Output: The new P
1: If ρ == null, return P
2: P = P ∪ {x_ρ} // x_ρ is orderly added
3: flag(x_ρ) = true
4: For j = 1 to T
5:     If flag(x_j) == false // x_j is a member of Fl
6:         Calculate the angle between x_j and x_ρ using (5)
7:         If angle < θ(x_j) // x_j is associated with x_ρ
               // update the vector angle from x_j to P
8:             θ(x_j) = angle
               // update the corresponding index
9:             γ(x_j) = |P|
10:        End If
11:    End If
12: End For
13: Return P
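The following sketch mirrors the maximum-vector-angle-first step of Algorithm 6 in Python, assuming the gamma/theta bookkeeping produced by the association sketch; a plain set stands in for the flag array, and the code is illustrative rather than the authors' implementation:

```python
def max_vector_angle_first(P_norm, Fl_norm, gamma, theta, selected):
    """Pick the unselected member of Fl with the largest angle to P, add it to P,
    and update the angles of the remaining unselected members."""
    candidates = [j for j in range(len(Fl_norm)) if j not in selected]
    if not candidates:
        return None                      # rho is null: everything already added
    rho = max(candidates, key=lambda j: theta[j])
    P_norm.append(Fl_norm[rho])          # x_rho joins the new population
    selected.add(rho)
    for j in candidates:
        if j == rho:
            continue
        ang = vector_angle(Fl_norm[j], Fl_norm[rho])
        if ang < theta[j]:               # x_j is now closest to x_rho
            theta[j] = ang
            gamma[j] = len(P_norm) - 1
    return rho
```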

Algorithm 7 P = Worse-Elimination(P, μ, T)
Input: P, μ and T
Output: The new P
1: If (μ ≠ null) ∧ (θ(x_μ) < (π/2)/(N + 1))
2:     r = γ(x_μ) // y_r is the rth solution in P
3:     If (fit(y_r) > fit(x_μ)) ∧ (flag(x_μ) == false)
4:         Replace y_r with x_μ
5:         flag(x_μ) = true
6:         For j = 1 to T
7:             If flag(x_j) == false
8:                 Calculate the angle between x_j and x_μ by (5)
9:                 If γ(x_j) ≠ γ(x_μ)
10:                    If angle < θ(x_j)
11:                        θ(x_j) = angle
12:                        γ(x_j) = r
13:                    End If
14:                Else
15:                    θ(x_j) = angle
16:                End If
17:            End If
18:        End For
19:    End If
20: End If
21: Return P

Specifically, if the angle between x_μ (not null) and its target solution is smaller than σ = (π/2)/(N + 1), where N is the population size, then the target solution y_r ∈ P (where r = γ(x_μ) is the index of the target solution) is replaced by x_μ, provided that x_μ has not yet been added and is better than y_r in terms of the fitness value calculated according to (3) (see lines 1–5 in Algorithm 7). The basic idea behind the worse-elimination principle is that it is often unnecessary to keep multiple solutions with identical search directions. In VaEA, the similarity of search directions is measured by the vector angle. If the angle between two solutions is smaller than the threshold σ = (π/2)/(N + 1), they are deemed to search in a similar direction; in this case, only the better one survives.


Fig. 2. Illustration of updating angles in the worse-elimination principle.

The value of σ is the angle between two adjacent solutions under an ideal division of the directions among N solutions. Taking Fig. 1 as an example, σ = (π/2)/(8 + 1) ≈ 0.1745 rad = 10°. Usually, the population size N is a large integer (e.g., over 100), so σ takes a very small value, which is small enough to identify solutions with essentially the same search direction.
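The replacement test of the worse-elimination principle can be summarized in a few lines; the sketch below assumes fitness values computed by (3) and uses the threshold σ just described (the function name is ours):

```python
import math

def worse_elimination_condition(theta_mu, fit_target, fit_mu, N):
    """x_mu may replace its target solution y_r only if the two search in
    almost the same direction and x_mu has the better (smaller) fitness."""
    sigma = (math.pi / 2) / (N + 1)      # threshold angle for 'same direction'
    return theta_mu < sigma and fit_mu < fit_target
```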

After the replacement, the angles from each member x_j ∈ Fl to the new population P should be updated accordingly. We consider the following two scenarios.

1) γ(x_j) ≠ γ(x_μ) means that x_j and x_μ are associated with different individuals in P. In this case, we just need to check whether the angle between x_j and x_μ is smaller than θ(x_j) or not. If so, θ(x_j) and γ(x_j) are updated to angle and r, respectively (lines 6–12 in Algorithm 7). Otherwise, they are kept unchanged.

2) γ(x_j) = γ(x_μ) indicates that x_j and x_μ are associated with the same individual in P (i.e., y_r). In this situation, we just need to update the value of θ(x_j) (line 15 in Algorithm 7). It should be noted that some other member of P might actually be closer to x_j if the angle between x_j and x_μ is larger than θ(x_j), but this happens only in a very extreme situation: x_j is closest to y_r and the angle between x_μ and y_r is very small, so, with high probability, x_j is closest to x_μ as well.

To illustrate this update procedure, we give an example in Fig. 2 in a 2-D objective space. Suppose all eight solutions are in the first layer and Y1 = (1, 0) has been added into P. According to the maximum-vector-angle-first principle, Y2 = (0, 1) and Yr = (0.5, 0.5) are then included in order. Once Yr has been added, the procedure identifies a solution Xμ = (0.45, 0.51) that searches in a similar direction. Since the advantage of Xμ over Yr in the first objective is larger than its disadvantage in the second objective, Xμ is better than Yr in terms of the fitness value (fit(Xμ) = 0.96 < fit(Yr) = 1). Therefore, Yr is replaced by Xμ. Next, the angles from Xi (i = 1, 2, 3, 4) to P should be modified (or, equivalently, the target solutions should be updated). For X1, its target solution is different from that of Xμ (the former is associated with Y2, while the latter is associated with Yr), but the angle between X1 and Xμ is smaller than the original angle (i.e., the angle between X1 and Y2).

TABLE I TIME COMPLEXITY OF PEER ALGORITHMS

Therefore, according to the first scenario, the target solution of X1 is modified to Xμ. For X4, its target solution is also different from that of Xμ, but the angle between X4 and Xμ is larger than the original one. Hence, it is unnecessary to change its target solution.

X2, X3, and Xμ have the same target solution Yr. This falls into the second scenario. Therefore, both X2 and X3 are still associated with the rth solution in P (Xμ is now in the rth position). However, θ(X2) and θ(X3) should be modified accordingly after Yr is replaced by Xμ.

C. Computational Complexity Analysis

The normalization (line 2 in Algorithm 2) of a population with 2N solutions having m objectives requires O(mN) divisions. The calculation of the norms (line 3 in Algorithm 2) for a population of size 2N also needs O(mN) additions. The nondominated sorting (line 4 in Algorithm 2) has a time complexity of O(N log^{m−2} N) [47] in terms of the number of comparisons. Both the association and niching operations (lines 13 and 14 in Algorithm 2) require O(mN^2) additions in the worst case. Therefore, the overall worst-case complexity of one generation of VaEA is max{O(N log^{m−2} N), O(mN^2)}.¹ The worst-case time complexities of some MaOEAs are summarized in Table I.

D. Discussion

Having described the details of VaEA, this section presents the similarities and differences between VaEA and NSGA-III as well as MOEA/DD. In particular, the relationship between VaEA and MOEA/D is discussed. First and foremost, as stated previously, the major difference is that our proposed VaEA does not need a set of reference points or weight vectors to be provided beforehand. As described in Section I, it is not easy to specify a set of well-distributed reference points or weight vectors. Other differences can be summarized as follows.

1) Similarities and Differences Between VaEA and NSGA-III:

1) Both of them use the nondominated sorting procedure to divide the union population into different layers, and the last layer is important and should be identified first.

¹If we only consider comparisons, the time complexity of VaEA would be max{O(N log^{m−2} N), O(N^2)}. The nondominated sorting requires O(N log^{m−2} N) comparisons, and both the association and niching operations need O(N^2) comparisons.


2) Both of them involve the normalization of the union population. However, the normalization procedure of NSGA-III is much more complicated than that of VaEA: in VaEA, the population is normalized by using only the ideal and nadir points of the population, while the normalization in NSGA-III needs to calculate another component, i.e., the intercepts along each objective axis, which is based on the solution of a system of linear equations [30]. Therefore, the normalization of VaEA is computationally cheaper than that of NSGA-III.

3) Both VaEA and NSGA-III use a niching technique to balance diversity and convergence. In NSGA-III, each population member is associated with a reference line based on the perpendicular distance, which could to some extent be measured by angles. The niche-preserving operation in both NSGA-III and VaEA adds solutions from Fl one by one. However, the differences are also clear. First, the association targets are different: in NSGA-III, the members are associated with reference lines that are determined by a set of predefined reference points, while members in VaEA are associated with already included solutions. Second, in NSGA-III, all members of S_t = ∪_{i=1}^{l} F_i need an association operation, while in VaEA, only members of Fl need this operation. Finally, the association in VaEA is dynamic, while that in NSGA-III is static: in VaEA, a member of Fl may be associated with another solution when a new solution is added into the population, whereas a member in NSGA-III is associated with a fixed reference line.

2) Similarities and Differences Between VaEA and MOEA/DD:

1) Both of them divide the population into several nondomination levels based on the concept of Pareto dominance. However, the selection procedure in MOEA/DD does not fully obey the decision made by Pareto dominance. In particular, a solution that is associated with an isolated subregion will be preserved without reservation even if it belongs to the last nondomination level. Although this mechanism benefits diversity maintenance, it slows the convergence speed of MOEA/DD [11].

2) Unlike VaEA, MOEA/DD is a steady-state algorithm. That is to say, once a new solution has been generated, it is used to update the nondomination level structure as well as the population.

3) Both of them use a scalar function to measure the convergence of a solution. In VaEA, the convergence is evaluated by the fitness value defined in (3), while in MOEA/DD the PBI function is used. Although the PBI function provides a joint measurement of both convergence and diversity, it introduces a parameter θ that should be determined beforehand. Note that we do not consider diversity when evaluating a solution in the worse-elimination principle, because the diversity among solutions is guaranteed by the maximum-vector-angle-first principle.

3) Relationship Between VaEA and MOEA/D: In the association procedure of VaEA (Algorithm 4), by computing the angles, a solution in the last front is actually associated with a search direction pointing toward the ideal point in the normalized objective space. One of the major features of MOEA/D is that it approximates the PF along multiple search directions; that is, the search in MOEA/D is guided by a number of search directions (or subproblems). From this point of view, VaEA uses the same idea as MOEA/D with dynamic decomposition methods [43], [45], where the search directions are adaptively or dynamically changed during the search process so as to obtain a better distribution. However, VaEA does not convert an MOP into a number of scalar optimization subproblems, which is one of the major differences between the two algorithms.

III. SIMULATION RESULTS

This section is devoted to the experimental verification of the performance of the VaEA algorithm.²

We compare the proposed algorithm with MOEA/DD³ [11], NSGA-III⁴ [8], MOEA/D⁵ [39], and MOMBI2⁶ [49] on test problems from both the DTLZ [50] and WFG [51] test suites. These peer algorithms have been demonstrated to be effective when handling MaOPs, and they can be divided into two classes: 1) reference-points/weight-vectors-based algorithms (MOEA/DD, NSGA-III, and MOEA/D) and 2) an indicator-based algorithm (MOMBI2). Brief descriptions of MOEA/DD, MOEA/D, and NSGA-III can be found in Section I. MOMBI2 is one of the recently proposed indicator-based MaOEAs; it uses the R2 indicator as its selection criterion. Compared with the hypervolume (HV) [52], R2 is attractive due to its low computational cost and weak-Pareto compatibility [49]. In MOMBI2, individuals are ranked by using the achievement scalarizing function. Like MOEA/D and MOEA/DD, MOMBI2 also needs a set of supplied weight vectors. As shown by the experimental results in [49], MOMBI2 presented superiorities over other MOEAs, and could be a suitable alternative for solving MaOPs.

A. Test Problems

For the purpose of performance comparison, four test problems from the DTLZ test suite (DTLZ1 to DTLZ4) and nine from the WFG test suite (WFG1 to WFG9) are chosen for our empirical studies. All these problems can be scaled to any number of objectives and decision variables. For each problem, the number of objectives is set to 3, 5, 8, 10, and 15, respectively. According to [50], the number of decision variables is set to n = m + r − 1 for the DTLZ test instances, where r = 5 for DTLZ1 and r = 10 for DTLZ2 to DTLZ4.

²The code of VaEA is available at https://www.researchgate.net/profile/Xiang_Yi9/publications.

³The code of MOEA/DD is downloaded from http://www.cs.bham.ac.uk/∼likw/code/MOEADD.zip.

⁴The code of NSGA-III is downloaded from http://learn.tsinghua.edu.cn:8080/2012310563/ManyEAs.rar, which is maintained by X. Yao's research team.

⁵The code of MOEA/D is downloaded from http://dces.essex.ac.uk/staff/zhang/webofmoead.htm.

⁶The code of MOMBI2 is downloaded from http://jmetal.github.io/jMetal/.


Following the suggestions in [51], the number of decision variables is set to n = k + l for the WFG test instances, where the position-related parameter is k = 2 × (m − 1) and the distance-related parameter is l = 20.
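As a quick aid, the per-suite variable counts can be computed as follows (a trivial sketch of the settings above; function names are ours):

```python
def n_dtlz(m, r):
    """DTLZ: n = m + r - 1, with r = 5 for DTLZ1 and r = 10 for DTLZ2-4."""
    return m + r - 1

def n_wfg(m, l=20):
    """WFG: n = k + l, with position-related k = 2*(m-1) and distance-related l = 20."""
    return 2 * (m - 1) + l

print(n_dtlz(10, 10), n_wfg(10))  # 19 variables for DTLZ2-4 and 38 for WFG at m = 10
```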

According to [51], the PFs of the above test problems have various characteristics (e.g., linear, convex, concave, mixed, multimodal, etc.), and they pose a significant challenge for an EMO algorithm to find a well-converged and well-distributed solution set.

B. Performance Metrics

In our experimental study, three widely used metrics are chosen to evaluate the performance of each algorithm: the inverted generational distance (IGD) [53], the generalized SPREAD (or Δ) [54], and the HV [52]. IGD and HV provide a joint measurement of both the convergence and the diversity of the obtained solutions, while the generalized SPREAD concentrates on evaluating the distribution of an approximated solution set.

1) IGD [39], [53]: Let P be an approximation set, and P* be a set of nondominated points uniformly distributed along the true PF. The IGD metric is then defined as

IGD(P) = (1 / |P*|) ∑_{z* ∈ P*} dist(z*, P)    (7)

where dist(z*, P) is the Euclidean distance between z* and its nearest neighbor in P, and |P*| is the cardinality of P*. The advantages of the IGD measure are twofold [55]: one is its computational efficiency; the other is its generality, since if |P*| is large enough to cover the true PF very well, then both the convergence and the diversity of the approximation set P can be measured by IGD(P). Due to these two nice properties, IGD has been frequently used to evaluate the performance of MOEAs on MaOPs [7], [8], [23], [56], etc. For an EMO algorithm, a smaller IGD value is desirable.⁷
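IGD as defined in (7) is straightforward to compute; the following is a minimal sketch (our own, not the competition code cited in the footnote):

```python
import numpy as np

def igd(reference_front, approximation):
    """Mean distance from each reference point z* to its nearest neighbor in P, as in (7)."""
    ref = np.asarray(reference_front)     # shape (|P*|, m)
    app = np.asarray(approximation)       # shape (|P|, m)
    d = np.linalg.norm(ref[:, None, :] - app[None, :, :], axis=2)  # pairwise distances
    return d.min(axis=1).mean()
```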

2) Generalized SPREAD [54]: To measure the extent of spread of the set of computed solutions, the metric SPREAD [57] was first proposed for two-objective problems only. In the context of many-objective optimization, however, this metric is not applicable. In this paper, we instead use the generalized SPREAD (denoted by Δ), which was first proposed in [54]:

Δ = ( ∑_{i=1}^{m} dist(e_i, S) + ∑_{z ∈ S} |dist(z, S) − d̄| ) / ( ∑_{i=1}^{m} dist(e_i, S) + |S| · d̄ )    (8)

where S is a set of solutions, S* is the set of Pareto-optimal solutions, e_1, e_2, ..., e_m are the m extreme solutions in S*, and

dist(z, S) = min_{y ∈ S, y ≠ z} ||F(z) − F(y)||_2    (9)

d̄ = (1 / |S*|) ∑_{x ∈ S*} dist(x, S).    (10)

In this paper, the Δ metric is implemented by the open source software jMetal⁸ [58].

In the calculation of both IGD(P) and Δ, a set of reference Pareto-optimal solutions is needed.

⁷In our empirical studies, the IGD metric is implemented by the code from http://dces.essex.ac.uk/staff/qzhang/moeacompetition09.htm.

⁸http://jmetal.github.io/jMetal/

TABLE II NUMBER OF DIVISIONS WHEN GENERATING WEIGHT VECTORS

TABLE III NUMBER OF REFERENCE POINTS USED FOR THE CALCULATION OF IGD AND Δ ON WFG TEST PROBLEMS

For DTLZ1 to DTLZ4, the same method as in [11] is used to generate the reference Pareto-optimal points. Specifically, a set of well-distributed weight vectors is first produced. For m ≤ 5, Das and Dennis's systematic approach [59] is utilized to produce the weight vectors, where a parameter p should be specified. For m > 5, the weight vectors are generated by the two-layer generation method [8], [11], which involves two parameters h1 and h2, denoting the number of divisions for the boundary and inside layers, respectively. Thereafter, the intersection points of the weight vectors and the Pareto-optimal surface of the DTLZ test problems can be exactly located (because analytical forms of the exact Pareto-optimal surfaces of these test problems are known). Finally, these intersection points can be regarded as the reference points. The Pareto-optimal surface of DTLZ1 is a hyperplane, while that of DTLZ2 to DTLZ4 is a hypersphere. As discussed in [55], the number of reference points for calculating IGD should be large enough to cover the true front as well as possible. Therefore, we use relatively large numbers of divisions in the weight vector generation method. The number of divisions for different numbers of objectives is listed in Table II, and the last column gives the corresponding number of reference points for the DTLZ test problems.

For the WFG test problems, the range of the ith (i = 1, 2, ..., m) objective is [0, 2i]. Since the Pareto-optimal front of WFG4–9 is also a hypersphere, we can obtain the reference points of WFG4–9 by multiplying the ith objective of the reference points of DTLZ2–4 by 2i. Hence, the number of reference points is the same as for the DTLZ test problems, which is shown in the last column of Table III. For WFG1 and WFG2, the PFs are irregular. Because their shape functions are known, we sample a large number of points in the underlying space [0, 1]^{m−1}, and then calculate the objective values according to the analytic expressions of the shape functions. Finally, we remove all dominated points, and the remaining points constitute the final reference set.


TABLE IV POPULATION SIZE (N) FOR DIFFERENT NUMBERS OF OBJECTIVES

Based on the above method, we obtain the number of reference points for WFG1 and WFG2, which is given in columns 2 and 3 of Table III. WFG3 was originally designed to have a degenerate PF, i.e., a straight line in the objective space. However, according to the latest study [60], WFG3 with three or more objectives also has a nondegenerate PF whose derivation, even for the case of three objectives, is extremely difficult. Given this, as suggested in [60], we construct the reference point set by selecting all the nondominated solutions from the solutions returned by 20 runs of each algorithm. The number of reference points for each number of objectives is presented in column 4 of Table III.

3) HV: This metric measures the volume of the objective space between the obtained solution set and a specified reference point in the objective space. For HV, a larger value is preferable. For the calculation of HV, two issues are crucial: one is the scaling of the search space, and the other is the choice of the reference point. We normalize the objective values of the obtained solutions according to the ranges of the problems' PFs, and set the reference point to 1.1 times the upper bounds of the true PFs. In this paper, Monte Carlo sampling [61] is applied to approximate HV, where the number of sampling points is set to 1 000 000. Note that solutions dominated by the reference point are discarded in the calculation of HV. In this paper, the HV is implemented by the codes in [10].
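A rough sketch of such a Monte Carlo estimator is given below, under the setup just described: the objective values are assumed to be already normalized per dimension and the reference point fixed at 1.1 in every coordinate. It is an illustration of the sampling idea only, not the code of [10].

import java.util.List;
import java.util.Random;

public class MonteCarloHV {

    static double estimate(List<double[]> front, int m, int samples, long seed) {
        Random rng = new Random(seed);
        double ref = 1.1;           // reference point value in every objective
        int hits = 0;
        for (int s = 0; s < samples; s++) {
            // Sample a point uniformly in the box [0, 1.1]^m.
            double[] x = new double[m];
            for (int i = 0; i < m; i++) x[i] = rng.nextDouble() * ref;
            // The sample is counted if at least one solution dominates it (<= in every objective).
            for (double[] f : front) {
                boolean covered = true;
                for (int i = 0; i < m; i++) {
                    if (f[i] > x[i]) { covered = false; break; }
                }
                if (covered) { hits++; break; }
            }
        }
        // Fraction of covered samples times the volume of the sampling box.
        return (double) hits / samples * Math.pow(ref, m);
    }

    public static void main(String[] args) {
        List<double[]> front = List.of(new double[]{0.0, 1.0}, new double[]{1.0, 0.0}, new double[]{0.5, 0.5});
        System.out.println(estimate(front, 2, 1_000_000, 42L));
    }
}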

C. General Experimental Settings

The parameter settings for the experiments are listed below unless otherwise mentioned.

1) Population Size: According to [8], the population size in NSGA-III is set as the smallest multiple of four larger than the number of reference points (H) produced by the so-called two-layer reference point (or weight vector) generation method. The population size in VaEA is kept the same as in NSGA-III. Since MOMBI2 involves a binary tournament selection, it requires the population size to be an even number; therefore, in MOMBI2, we use the same population size as in NSGA-III and VaEA. For MOEA/DD and MOEA/D, the population size is set to N = H, as recommended by their developers [11], [39]. The value of N for different numbers of objectives is summarized in Table IV.
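The rule above can be illustrated with a short sketch; the binomial counts follow the standard two-layer construction, and the (m, h1, h2) combination used in main() is a commonly reported setting rather than a reproduction of Table II, so the printed numbers only exemplify the rule and are not asserted to be the exact entries of Table IV.

public class PopulationSize {

    // C(n, k), computed incrementally so every intermediate value is an integer.
    static long binomial(int n, int k) {
        long r = 1;
        for (int i = 1; i <= k; i++) r = r * (n - k + i) / i;
        return r;
    }

    // Number of reference points produced by h divisions in an m-objective space.
    static long layerSize(int m, int h) {
        return binomial(m - 1 + h, h);
    }

    public static void main(String[] args) {
        int m = 10, h1 = 3, h2 = 2;                       // a common two-layer setting for ten objectives
        long H = layerSize(m, h1) + layerSize(m, h2);     // boundary layer + inside layer
        long N = (H / 4 + 1) * 4;                         // smallest multiple of four larger than H
        System.out.println("H = " + H + ", N = " + N);    // H = 275, N = 276 for this combination
    }
}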

2) Number of Independent Runs and Termination Condition: All algorithms are independently run 20 times on each test instance and terminated when a predefined maximum number of function evaluations (MFE) is reached.

TABLE V
SETTINGS OF MFE FOR DIFFERENT NUMBERS OF OBJECTIVES

The settings of MFE for different numbers of objectives are listed in Table V. For VaEA, the termination condition can be easily determined by Gmax = MFE/N.

3) Parameter Settings for Operators: In all algorithms, SBX and polynomial mutation are used to generate offspring solutions. The crossover probability pc and mutation probability pm are set to 1.0 and 1/n, respectively. For the SBX operator, the distribution index is ηc = 30, and the distribution index of the mutation operator is ηm = 20 [8], [11].
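As an illustration of the mutation settings (ηm = 20 and pm = 1/n), the following sketch implements the common simplified form of polynomial mutation; the compared algorithms may use the boundary-dependent variant, so this should be read as indicative rather than as their exact operator.

import java.util.Arrays;
import java.util.Random;

public class PolynomialMutation {

    static void mutate(double[] x, double[] lower, double[] upper, double etaM, Random rng) {
        double pm = 1.0 / x.length;                         // p_m = 1/n
        for (int i = 0; i < x.length; i++) {
            if (rng.nextDouble() > pm) continue;            // mutate each variable with probability p_m
            double u = rng.nextDouble();
            double range = upper[i] - lower[i];
            double delta = (u < 0.5)
                    ? Math.pow(2.0 * u, 1.0 / (etaM + 1.0)) - 1.0
                    : 1.0 - Math.pow(2.0 * (1.0 - u), 1.0 / (etaM + 1.0));
            // Perturb and clip back into the box constraints.
            x[i] = Math.min(upper[i], Math.max(lower[i], x[i] + delta * range));
        }
    }

    public static void main(String[] args) {
        double[] x = {1.5, 2.0, 2.5};
        mutate(x, new double[]{1, 1, 1}, new double[]{3, 3, 3}, 20.0, new Random(1));
        System.out.println(Arrays.toString(x));
    }
}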

4) Parameter Settings for Algorithms: Following the practice in [11] and [39], the PBI approach is used in MOEA/D and MOEA/DD with a penalty parameter θ = 5. In both algorithms, the neighborhood size T is set to 20. For MOEA/DD, a probability δ is set for choosing a neighboring partner of a parent in the mating selection procedure. As recommended by its developers, this value is set to δ = 0.9. Following the practice in [49], the two parameters of MOMBI2 are set as ε = 1e−3 and α = 0.5, respectively.
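For reference, a minimal sketch of the PBI scalarizing function with θ = 5, written directly from its standard definition g(x | w, z*) = d1 + θ d2 (and not taken from the MOEA/D or MOEA/DD code), is given below; d1 measures progress along the weight vector (convergence) and d2 the perpendicular deviation from it (diversity).

public class PBI {

    static double pbi(double[] f, double[] w, double[] zStar, double theta) {
        int m = f.length;
        double[] d = new double[m];
        double wNorm = 0.0;
        for (int i = 0; i < m; i++) {
            d[i] = f[i] - zStar[i];                 // translate by the ideal point z*
            wNorm += w[i] * w[i];
        }
        wNorm = Math.sqrt(wNorm);
        double dot = 0.0;
        for (int i = 0; i < m; i++) dot += d[i] * w[i];
        double d1 = Math.abs(dot) / wNorm;          // distance along the weight direction
        double d2 = 0.0;
        for (int i = 0; i < m; i++) {
            double proj = d1 * w[i] / wNorm;        // projection of d onto the weight direction
            d2 += (d[i] - proj) * (d[i] - proj);
        }
        d2 = Math.sqrt(d2);                         // perpendicular distance
        return d1 + theta * d2;
    }

    public static void main(String[] args) {
        double[] f = {0.6, 0.8}, w = {0.5, 0.5}, z = {0.0, 0.0};
        System.out.println(pbi(f, w, z, 5.0));      // about 1.697 for this toy input
    }
}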

D. Results and Analysis

Table VI shows the median and interquartile range (IQR) results on all the 20 DTLZ test instances in terms of the IGD metric. The significance of the differences between VaEA and the peer algorithms is determined using the well-known Wilcoxon rank sum test [62]. As shown, MOEA/DD is the most effective algorithm in terms of the number of the best or the second best results it obtains. NSGA-III performs very competitively with MOEA/DD. MOEA/D obtains the best performance on DTLZ1, achieving the best results for the 8-, 10-, and 15-objective test instances. The performance of MOMBI2 is poorer than that of its competitors, and it performs best only on DTLZ2 and DTLZ4 with five objectives. Our proposed VaEA obtains significantly better performance than the other algorithms on the 10- and 15-objective DTLZ2 and DTLZ4, and shows clear improvements over MOMBI2 and MOEA/D on 13 and 7 out of the 20 test instances, respectively.

It is found from the results that better performance on the DTLZ test problems is obtained by the decomposition- and reference point-based algorithms, i.e., MOEA/DD, MOEA/D, and NSGA-III. On one hand, this finding is consistent with those in [8] and [11]. On the other hand, the phenomenon can be well interpreted. Since the PFs of the DTLZ test problems are regular, being either a hyperplane or a hypersphere, decomposition- or reference point-based algorithms predefine a set of well-distributed weight vectors or reference points which can promote the diversity of the obtained solutions. Furthermore, all algorithms adopt specific mechanisms to keep a good


TABLE VI
MEDIAN AND IQR (IN BRACKETS) OF IGD METRIC ON DTLZ TEST INSTANCES. THE BEST AND THE SECOND BEST RESULTS FOR EACH TEST INSTANCE ARE SHOWN WITH DARK AND LIGHT GRAY BACKGROUND, RESPECTIVELY

Fig. 3. Final solution set of the five algorithms on the ten-objective DTLZ3, shown by parallel coordinates. (a) VaEA. (b) MOEA/DD. (c) NSGA-III. (d) MOEA/D. (e) MOMBI2.

balance between diversity and convergence. For example, the PBI approach used in MOEA/DD and MOEA/D simultaneously measures the diversity and convergence of a solution. In NSGA-III, the nondominated sorting method and a well-designed niching procedure collaboratively make the algorithm effective.

To visually examine the distribution of the solutions, Figs. 3 and 4 plot, by parallel coordinates, the final


Fig. 4. Final solution set of the five algorithms on the 15-objective DTLZ4, shown by parallel coordinates. (a) VaEA. (b) MOEA/DD. (c) NSGA-III. (d) MOEA/D. (e) MOMBI2.

solutions of one run with respect to the ten-objective DTLZ3 and the 15-objective DTLZ4, respectively. The run shown is the one whose IGD result is closest to the median value. As shown in Fig. 3, the solutions obtained by MOEA/D, MOEA/DD, and NSGA-III distribute similarly, while those found by MOMBI2 fail to cover the first three objectives well. Although the solutions obtained by VaEA have a good distribution, some extreme values for the second, the third, and the sixth objectives are observed. As a result, VaEA has a larger IGD value than MOEA/D, MOEA/DD, and NSGA-III do (because of its relatively poor convergence), but performs better than MOMBI2, whose solutions have an extremely poor distribution. For the solutions of the 15-objective DTLZ4 (see Fig. 4), all algorithms perform similarly in terms of convergence, i.e., the values of each objective are within [0, 1]. However, there indeed exist differences concerning the distribution. The solutions obtained by MOEA/D are unable to cover the third, eighth, ninth, and thirteenth objectives well, and those obtained by MOMBI2 fail to cover the second objective; it seems that all values of the second objective of the obtained solutions tend to zero. Compared with the solutions of MOEA/DD and NSGA-III, those found by VaEA are able to cover the whole PF. It seems that the solutions obtained by MOEA/DD and NSGA-III mainly concentrate on the boundary or the middle parts of the front.

Table VII gives the median and IQR results on all the WFG test instances in terms of the IGD metric. As shown, VaEA and NSGA-III perform best, presenting a clear advantage over the other three algorithms on the majority of the test instances. More specifically, VaEA obtains the best and second best IGD results on 24 and 11 out of the 45 test instances, respectively, while NSGA-III obtains 11 best results and 24 second best results. VaEA performs best on WFG3, WFG4, WFG5, WFG6, WFG7, and WFG9, while NSGA-III outperforms the other algorithms on WFG2 and WFG8. For WFG3, MOEA/DD is competitive with VaEA. On the three-objective WFG4 to WFG7, MOMBI2 performs best but struggles on higher numbers of objectives. Statistically, VaEA shows significant improvement over the other algorithms on most of the test instances. Specifically, the proportion of the test instances where VaEA performs better than MOEA/DD, NSGA-III, MOEA/D, and MOMBI2 is 36/45, 25/45, 42/45, and 33/45, respectively. Conversely, the proportion where VaEA is defeated by the peer algorithms is 7/45, 14/45, 2/45, and 10/45 for MOEA/DD, NSGA-III, MOEA/D, and MOMBI2, respectively. The results demonstrate that MOEA/DD and MOEA/D obtain relatively poor performance on this test suite compared with their counterparts. This may be due to the fact that the PFs of the WFG test problems are irregular, being discontinuous or mixed, and are scaled with different ranges in each objective. A set of well-distributed weight vectors cannot guarantee a good distribution of the obtained solutions. For discontinuous PFs, some weight vectors are not associated with any Pareto-optimal solution. Another reason accounting for the failure of MOEA/DD may be its lack of a normalization procedure before the evaluation of a solution. The reason for the better performance of VaEA on some WFG test problems may be that the maximum-vector-angle-first principle can not only ensure the width of the search region, but also dynamically adjust the search direction of the whole population. Therefore, unlike MOEA/DD or MOEA/D, where the search directions are fixed by weight vectors, VaEA is less likely to be trapped into


TABLE VII
MEDIAN AND IQR (IN BRACKETS) OF IGD METRIC ON ALL THE WFG TEST INSTANCES

local optima, or to bias the search toward certain regions of the true PF, on biased problems such as WFG1 and WFG9, or on multimodal problems such as WFG4. What is more, the diversity and convergence in VaEA are kept balanced by introducing the worse-elimination principle.

The HV results on the WFG test instances are given in Table VIII. It can be seen from the table that VaEA performs best on WFG2, WFG3, and WFG9, while NSGA-III outperforms the other algorithms on WFG5, WFG7, and WFG8. MOMBI2 is competitive on some test instances, such as the 3- and 5-objective WFG3 and WFG4. MOEA/DD and MOEA/D obtain worse HV values on almost all the test instances, except for WFG1 with three and five objectives. Note that for problems with complicated fronts (such as WFG1 and WFG2) and with higher-dimensional objective spaces (such as the 15-objective WFG3-4 and WFG6-9), VaEA obtains the best performance in terms of the HV metric.

Taking the ten-objective WFG9 and the 15-objective WFG1 as examples, the final solutions obtained by all the algorithms are shown in Figs. 5 and 6. It is clear from Fig. 5 that the solutions


TABLE VIII
MEDIAN AND IQR (IN BRACKETS) OF HV METRIC ON ALL THE WFG TEST INSTANCES

of VaEA have the highest quality in terms of both convergence and distribution. For NSGA-III and MOMBI2, their solutions are distributed similarly: the middle parts of each objective are not well covered. The final solutions found by MOEA/DD fail to sufficiently cover the top half of the sixth to tenth objectives. The solutions obtained by MOEA/D concentrate only on a small part of the true PF, resulting in the largest IGD (and smallest HV) values. WFG1 is a hard problem due to its mixed and biased Pareto-optimal front. For the 15-objective WFG1, all algorithms encounter difficulties when solving this problem (see Fig. 6). The solutions obtained by MOEA/DD and MOEA/D converge into a small middle part of the PF, while those obtained by NSGA-III and MOMBI2 struggle to cover a wider region in some objectives, e.g., the 2nd to the 11th objectives. For VaEA, much better boundary values are found for each objective, which may be attributed to the survival of extreme solutions in the association operation of Algorithm 4.

Table IX shows the results concerning the Δ metric on the 5-, 10-, and 15-objective DTLZ and WFG test problems. As seen, VaEA obtains significantly better performance than all the peer algorithms on almost all the test instances, except for


Fig. 5. Final solution set of the five algorithms on the ten-objective WFG9, shown by parallel coordinates. (a) VaEA. (b) MOEA/DD. (c) NSGA-III. (d) MOEA/D. (e) MOMBI2.

Fig. 6. Final solution set of the five algorithms on the 15-objective WFG1, shown by parallel coordinates. (a) VaEA. (b) MOEA/DD. (c) NSGA-III. (d) MOEA/D. (e) MOMBI2.

DTLZ1, DTLZ3, 5-objective WFG1, and 5-objective WFG2. On DTLZ1 and DTLZ3, MOEA/DD performs best. For WFG1 and WFG2 with five objectives, MOEA/D and NSGA-III are the two most effective algorithms. The good distributions (in terms of both uniformity and extensiveness) of the solutions obtained by VaEA are reflected in Figs. 3–6, and the better performance of


TABLE IX
MEDIAN AND IQR (IN BRACKETS) OF Δ METRIC ON ALL THE 5-, 10-, AND 15-OBJECTIVE TEST INSTANCES

VaEA in terms of the Δ metric is attributed to the maximum-vector-angle-first principle, which dynamically adds solutions and emphasizes those located in the sparsest regions in terms of vector angles.
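Since the vector angle is the quantity this principle operates on, the following small sketch (our own illustration, not an excerpt from VaEA) shows how the angle between two objective vectors is obtained from their dot product; the maximum-vector-angle-first selection then repeatedly picks the remaining candidate whose smallest angle to the already-selected solutions is the largest.

public class VectorAngle {

    // Angle (in radians) between two objective vectors.
    static double angle(double[] a, double[] b) {
        double dot = 0.0, na = 0.0, nb = 0.0;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i];
            na += a[i] * a[i];
            nb += b[i] * b[i];
        }
        double cos = dot / (Math.sqrt(na) * Math.sqrt(nb));
        // Clamp to [-1, 1] to guard against rounding noise before acos.
        return Math.acos(Math.max(-1.0, Math.min(1.0, cos)));
    }

    public static void main(String[] args) {
        System.out.println(Math.toDegrees(angle(new double[]{1, 0}, new double[]{1, 1}))); // about 45 degrees
    }
}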

E. Comparison of Evolutionary Trend and Running Time

In this section, the evolutionary trends of the selected algorithms are plotted in Fig. 7, where all the 15-objective WFG test problems are considered. In these figures, the x-coordinate is the number of function evaluations, while the y-coordinate is the value of the IGD metric. It is observed from Fig. 7 that VaEA performs better than the other algorithms on almost all the test problems during the whole evolutionary process. For WFG8, VaEA and NSGA-III obtain very similar performance. Intuitively, VaEA and NSGA-III are better than MOEA/DD and MOEA/D. For WFG2, larger fluctuations of the IGD values are found for all algorithms, which may be due to the discontinuity of the Pareto-optimal front. It is interesting to find that the trajectory of MOEA/DD rises after a certain number of function evaluations on almost all the test problems, except for WFG9. This phenomenon may be attributed to the mechanism adopted in MOEA/DD whereby solutions belonging to the worst nondomination level, but associated with an isolated subregion, are deemed important for population diversity and are preserved without reservation [11] (see Fig. 8). These solutions may be far away from the Pareto-optimal front. Once they have been


Fig. 7. Evolutionary trajectories of IGD on all the 15-objective WFG test problems. (a) WFG1. (b) WFG2. (c) WFG3. (d) WFG4. (e) WFG5. (f) WFG6. (g) WFG7. (h) WFG8. (i) WFG9.

Fig. 8. F belongs to the worst nondomination level, but it is associated with an isolated subregion. Therefore, it survives in MOEA/DD. According to [11], this mechanism may slow down the convergence speed. Note that this figure is borrowed from the original study of MOEA/DD [11].

found, as discussed in [11], they harm the convergence of the population, resulting in a rise of the IGD value.

To evaluate the speed of all the considered algorithms, we record the actual running time (in seconds) of each algorithm on each test problem with ten objectives. The hardware configuration is as follows: an Intel Core i5-5200U CPU at 2.20 GHz and 8.00 GB of RAM. All algorithms are implemented in Java, and the experimental results are shown in Fig. 9. Clearly, MOEA/D and MOMBI2 are the fastest and slowest algorithms, respectively. VaEA is faster than NSGA-III, but slower than MOEA/DD. Although VaEA, NSGA-III, and MOEA/DD have the same worst-case time complexity O(mN²), they behave differently in terms of the actual running speed. For the pairwise comparison between VaEA and MOEA/DD, both of them divide the union population into different layers; however, MOEA/DD uses an efficient nondomination level update method [48] to renew the nondomination level structure, which saves running time to some extent. Compared with NSGA-III, VaEA utilizes a simpler normalization procedure. In fact, the normalization in NSGA-III needs to determine the intercepts of a hyperplane, which involves solving a system of linear equations [8]. Hence, VaEA runs relatively faster than NSGA-III. Similar observations are found for other numbers of objectives.
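Our reading of this "simpler normalization" is an objective-wise min-max rescaling by the extreme values of the current population; the following rough sketch is written under that assumption and is not code extracted from VaEA.

import java.util.Arrays;

public class SimpleNormalization {

    // Rescale every objective to [0, 1] using the population's own minima and maxima.
    static double[][] normalize(double[][] objs) {
        int n = objs.length, m = objs[0].length;
        double[] min = new double[m], max = new double[m];
        Arrays.fill(min, Double.POSITIVE_INFINITY);
        Arrays.fill(max, Double.NEGATIVE_INFINITY);
        for (double[] f : objs) {
            for (int j = 0; j < m; j++) {
                min[j] = Math.min(min[j], f[j]);
                max[j] = Math.max(max[j], f[j]);
            }
        }
        double[][] out = new double[n][m];
        for (int i = 0; i < n; i++) {
            for (int j = 0; j < m; j++) {
                double range = Math.max(max[j] - min[j], 1e-12); // avoid division by zero
                out[i][j] = (objs[i][j] - min[j]) / range;
            }
        }
        return out;
    }

    public static void main(String[] args) {
        double[][] f = {{1.0, 10.0}, {3.0, 20.0}, {2.0, 40.0}};
        System.out.println(Arrays.deepToString(normalize(f)));
    }
}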


Fig. 9. Running time of all the algorithms on the ten-objective test problems.

Fig. 10. Comparison between VaEA and its three variants on the five-objective DTLZ1, DTLZ3, WFG1, and WFG2. VaEA-a: the algorithm adding only extreme solutions. VaEA-b: the algorithm adding only better converged solutions. VaEA-c: the algorithm adding random solutions.

Fig. 11. Comparison between VaEA and its three variants on the 15-objective DTLZ1, DTLZ3, WFG1, and WFG2. VaEA-a: the algorithm adding only extreme solutions. VaEA-b: the algorithm adding only better converged solutions. VaEA-c: the algorithm adding random solutions.

F. Further Discussions

Recall that m extreme solutions and the first m best solutions in terms of the fitness value (i.e., better converged solutions) are preferentially added if the population P is empty (see Algorithm 4). Does the inclusion of these types of solutions really contribute to the performance improvement of VaEA? To answer this question, we carry out the following experiment to compare the performance of VaEA with three of its variants: the algorithm adding only extreme solutions (denoted by VaEA-a), the algorithm adding only better converged solutions (denoted by VaEA-b), and the algorithm adding m random solutions (denoted by VaEA-c). The IGD values on the five-objective DTLZ1, DTLZ3, WFG1, and WFG2 are shown in Fig. 10. The main reason for choosing these four test problems is that they are representative of the DTLZ and WFG test suites and are deemed to be hard problems. As shown in Fig. 10, the differences among all the algorithms are very small on DTLZ1 and DTLZ3. For WFG1, the best algorithm is VaEA-a, while VaEA performs best on WFG2. Generally speaking, the four algorithms perform competitively on the five-objective test problems.

Now, we consider the 15-objective test problems, and the results are shown in Fig. 11. It can be seen that the performance of VaEA-a degrades seriously on DTLZ1 and DTLZ3, indicating that adding only extreme solutions is not enough for the performance improvements. Similarly, the inclusion of


Fig. 12. Comparison of IGD values for different coefficients of σ on the ten-objective DTLZ2 test problem.

random solutions can also lead to a deterioration of the performance (see VaEA-c on DTLZ3). VaEA-b obtains slightly better IGD than VaEA on DTLZ1, but performs worse on DTLZ3, WFG1, and WFG2. From the above analysis, the algorithm adding both extreme and better converged solutions (i.e., VaEA) provides better overall performance, indicating the usefulness of jointly adding the two types of solutions.

Finally, we investigate the condition in the worse-elimination principle. As described in Section II-B4, if the angle between a solution and its target solution is smaller than σ = (π/2)/(N + 1), then a worse solution may be replaced according to the worse-elimination principle. In the previous experimental studies, the threshold is fixed to σ = (π/2)/(N + 1). In this section, we multiply σ by a coefficient, namely 0.5, 1.0, 1.5, 2.0, 2.5, or 3.0, and run the VaEA algorithm under each configuration. Taking the ten-objective DTLZ2 as an example, the IGD values for the different coefficients are given in Fig. 12. We find no significant differences among the considered coefficients, which implies that VaEA is not sensitive to the setting of the condition in the worse-elimination principle.
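A tiny sketch of this sensitivity study, assuming the reading σ = (π/2)/(N + 1) and an example population size, is given below.

public class WorseEliminationThreshold {
    public static void main(String[] args) {
        int N = 276;                                   // an example population size, not a value from Table IV
        double sigma = (Math.PI / 2.0) / (N + 1);      // threshold angle assumed above
        for (double c : new double[]{0.5, 1.0, 1.5, 2.0, 2.5, 3.0}) {
            System.out.printf("coefficient %.1f -> threshold %.6f rad%n", c, c * sigma);
        }
    }
}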

IV. PRACTICAL APPLICATIONS IN TWO PROBLEMS FROM PRACTICE

In this part, VaEA is compared with the two most competitive algorithms, MOEA/DD and NSGA-III, on two engineering problems. The first problem has three objectives and the second one has nine objectives.

A. Problem Description

1) Crashworthiness Design of Vehicles: This problem aims at the structural optimization for crashworthiness criteria in the automotive industry [8]. The thicknesses of five reinforced members around the frontal structure are chosen as the design variables (denoted as ti, i = 1, 2, . . . , 5), which significantly affect the crash safety. The mass of the vehicle (mass), the integration of collision acceleration (Ain), and the toe board intrusion in the “offset-frontal crash” (intrusion) are taken as objectives. For each variable, the lower and upper bounds are 1 mm and 3 mm, respectively. A mathematical formulation of the three objectives is available in the original study [63].

2) Car Side-Impact Problem: The original car side-impact problem described in [64] and [65] is a single-objective optimization problem with ten constraints (g1 to g10). There are seven decision variables (x1 to x7) and four stochastic parameters (x8 to x11). In this paper, each constraint is converted to an objective by minimizing the constraint violation value. Thus, there are 11 objectives in total (ten converted objectives as well as the original objective function f). As suggested in [64], g5, g6, and g7 can be combined by taking their average value. Therefore, the number of objectives reduces to nine in the new version of the car side-impact problem. A mathematical formulation of the original objective as well as all constraints can be found elsewhere [65].
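The conversion of an inequality constraint g_i(x) ≤ 0 into a minimization objective via its violation value can be sketched as follows; the constraint used in main() is a hypothetical placeholder, not one of the formulas from [65].

import java.util.function.ToDoubleFunction;

public class ConstraintToObjective {

    // Objective = amount by which the constraint is violated (zero when feasible).
    static ToDoubleFunction<double[]> asObjective(ToDoubleFunction<double[]> g) {
        return x -> Math.max(0.0, g.applyAsDouble(x));
    }

    public static void main(String[] args) {
        // Hypothetical constraint g(x) = x0 + x1 - 2 <= 0.
        ToDoubleFunction<double[]> g = x -> x[0] + x[1] - 2.0;
        ToDoubleFunction<double[]> f = asObjective(g);
        System.out.println(f.applyAsDouble(new double[]{1.5, 1.0})); // 0.5, i.e., violated by 0.5
    }
}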

B. Results on Two Problems From Practice

Since the true PFs of the above two practical problems are not known, we independently run VaEA, MOEA/DD, and NSGA-III on each problem 40 times, and combine all the returned solutions of each run of each algorithm to construct the reference PFs (with dominated points removed), which are used to calculate the Δ and IGD metrics. For the nine-objective car side-impact problem, the population size N is 210 by letting h1 = 3 and h2 = 2, and the MFE is set to 210 × 2000. All other experimental configurations are kept the same as in Section III-C. Table X gives the median and IQR of each metric on the two engineering problems mentioned before. From the statistical results, VaEA shows a significant improvement over the other two algorithms in terms of both metrics on both practical problems, which strongly demonstrates the usefulness of our proposed VaEA for practical applications.
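For completeness, a generic IGD sketch against such a reference front is shown below (this is not the code referenced in footnote 7): the metric is the average, over the reference points, of the Euclidean distance to the nearest obtained solution, so smaller values indicate both better convergence and better coverage.

import java.util.List;

public class IGD {

    static double igd(List<double[]> referenceFront, List<double[]> obtained) {
        double sum = 0.0;
        for (double[] r : referenceFront) {
            double best = Double.POSITIVE_INFINITY;
            for (double[] a : obtained) {
                double d = 0.0;
                for (int i = 0; i < r.length; i++) d += (r[i] - a[i]) * (r[i] - a[i]);
                best = Math.min(best, Math.sqrt(d));
            }
            sum += best;
        }
        return sum / referenceFront.size();
    }

    public static void main(String[] args) {
        List<double[]> ref = List.of(new double[]{0.0, 1.0}, new double[]{1.0, 0.0});
        List<double[]> obt = List.of(new double[]{0.1, 1.0}, new double[]{1.0, 0.1});
        System.out.println(igd(ref, obt)); // approximately 0.1
    }
}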

Fig. 13 plots the PFs obtained by VaEA, MOEA/DD, and NSGA-III on the crashworthiness design problem. Obviously, VaEA and NSGA-III outperform MOEA/DD concerning the distribution of the final solutions. MOEA/DD seems to find solutions only on the boundaries of the front, and a large portion of the front is not covered by any nondominated points. As a result, much larger values of Δ and IGD are obtained by this algorithm. The solutions found by VaEA and NSGA-III are distributed similarly. However, the number of nondominated points found by VaEA in the top left part is larger than that found by NSGA-III (34 versus 25). Furthermore, some parts of the approximation set of NSGA-III are more crowded than those of VaEA. Therefore, our proposed VaEA presents better metric values.

In the modified car side-impact problem, there are nine objectives, which makes visualization much more difficult. As suggested in [10], the solutions are shown in Fig. 14 with respect to the 2-D objective space of f1 and f2. In this problem, the ranges of f1 and f2 in the reference PF are [15.6154, 42.5883] and [0, 1.3964], respectively. As shown in Fig. 14(a), the solutions of VaEA are distributed more widely than those of MOEA/DD. Compared with MOEA/DD, the solutions found by VaEA have lower f2 values, meaning that a much wider area has been explored by our approach. It seems that


Fig. 13. Final solutions obtained by (a) VaEA, (b) MOEA/DD, and (c) NSGA-III on the crashworthiness design problem.

Fig. 14. Pairwise comparison between VaEA and MOEA/DD as well as NSGA-III on the car side-impact problem. The final solutions of the algorithms are shown with respect to the 2-D objective space of f1 and f2. (a) VaEA versus MOEA/DD. (b) VaEA versus NSGA-III.

TABLE X
RESULTS (MEDIAN AND IQR OF Δ AND IGD) OF VAEA, MOEA/DD, AND NSGA-III ON TWO ENGINEERING PROBLEMS

the solutions returned by MOEA/DD are mainly concentrated in the middle part of the 2-D space. For the pairwise comparison between VaEA and NSGA-III, both algorithms have found a set of widely distributed solutions. However, there indeed exist differences in terms of the uniformity of the solutions. It is observed in Fig. 14(b) that obvious clusters (marked with red ellipses) can be identified among the solutions of NSGA-III. Therefore, VaEA achieves better performance in terms of the Δ metric.

V. CONCLUSION

To balance convergence and diversity, this paper proposes a new MaOEA, named VaEA, using the concept of the vector angle. The proposed algorithm is tested on DTLZ and WFG test problems with up to 15 objectives. The experimental results demonstrate that VaEA is quite competitive with NSGA-III, and is better than MOEA/DD, MOEA/D, and MOMBI2 on the majority of the test instances. In particular, VaEA shows significant improvement over the other peer algorithms in terms of the distribution of the obtained solutions. Our proposed VaEA is able to find a set of properly distributed solutions, and keeps a good balance between convergence and diversity.

Furthermore, VaEA has been applied to two practical problems with irregular PFs. As shown, our algorithm outperforms its competitors in terms of both the IGD and the generalized spread (Δ) metrics. VaEA could be particularly suitable for practical applications because of its inherent properties of being free from reference points or weight vectors, introducing


no additional algorithmic parameters, and having a low time complexity. Applying VaEA to constrained MaOPs is one of our subsequent studies.

ACKNOWLEDGMENT

The authors would like to thank the anonymous reviewers for providing valuable comments to improve this paper, and extend special thanks to Dr. K. Li (University of Birmingham) for his insightful suggestions.

REFERENCES

[1] B. Li, J. Li, K. Tang, and X. Yao, “Many-objective evolutionary algorithms: A survey,” ACM Comput. Surveys, vol. 48, no. 1, pp. 1–35, Sep. 2015.

[2] M. Farina and P. Amato, “On the optimal solution definition for many-criteria optimization problems,” in Proc. Annu. Meeting North Amer. Fuzzy Inf. Process. Soc., New Orleans, LA, USA, 2002, pp. 233–238.

[3] P. J. Fleming, R. C. Purshouse, and R. J. Lygoe, “Many-objective optimization: An engineering design perspective,” in Evolutionary Multi-Criterion Optimization. Heidelberg, Germany: Springer, 2005, pp. 14–32.

[4] G. Fu, Z. Kapelan, J. Kasprzyk, and P. Reed, “Optimal design of water distribution systems using many-objective visual analytics,” J. Water Resources Plan. Manag., vol. 139, no. 6, pp. 624–633, Nov./Dec. 2013.

[5] U. K. Wickramasinghe, R. Carrese, and X. Li, “Designing airfoils using a reference point based evolutionary many-objective particle swarm optimization algorithm,” in Proc. IEEE Congr. Evol. Comput. (CEC), Barcelona, Spain, 2010, pp. 1–8.

[6] R. J. Lygoe, M. Cary, and P. J. Fleming, “A real-world application of a many-objective optimisation complexity reduction process,” in Evolutionary Multi-Criterion Optimization (LNCS 7811). Heidelberg, Germany: Springer, 2013, pp. 641–655.

[7] S. Yang, M. Li, X. Liu, and J. Zheng, “A grid-based evolutionary algorithm for many-objective optimization,” IEEE Trans. Evol. Comput., vol. 17, no. 5, pp. 721–736, Oct. 2013.

[8] K. Deb and H. Jain, “An evolutionary many-objective optimization algorithm using reference-point-based nondominated sorting approach, part I: Solving problems with box constraints,” IEEE Trans. Evol. Comput., vol. 18, no. 4, pp. 577–601, Aug. 2014.

[9] M. Li, S. Yang, and X. Liu, “Diversity comparison of Pareto front approximations in many-objective optimization,” IEEE Trans. Cybern., vol. 44, no. 12, pp. 2568–2584, Dec. 2014.

[10] M. Li, S. Yang, and X. Liu, “Bi-goal evolution for many-objective optimization problems,” Artif. Intell., vol. 228, pp. 45–65, Nov. 2015.

[11] K. Li, K. Deb, Q. Zhang, and S. Kwong, “An evolutionary many-objective optimization algorithm based on dominance and decomposition,” IEEE Trans. Evol. Comput., vol. 19, no. 5, pp. 694–716, Oct. 2015.

[12] S. Bandyopadhyay and A. Mukherjee, “An algorithm for many-objective optimization with reduced objective computations: A study in differential evolution,” IEEE Trans. Evol. Comput., vol. 19, no. 3, pp. 400–413, Jun. 2015.

[13] M. Asafuddoula, T. Ray, and R. Sarker, “A decomposition-based evolutionary algorithm for many objective optimization,” IEEE Trans. Evol. Comput., vol. 19, no. 3, pp. 445–460, Jun. 2015.

[14] P. C. Roy, M. M. Islam, K. Murase, and X. Yao, “Evolutionary path control strategy for solving many-objective optimization problem,” IEEE Trans. Cybern., vol. 45, no. 4, pp. 702–715, Apr. 2015.

[15] R. Cheng, Y. Jin, M. Olhofer, and B. Sendhoff, “A reference vector guided evolutionary algorithm for many-objective optimization,” IEEE Trans. Evol. Comput., to be published, doi: 10.1109/TEVC.2016.2519378.

[16] K. Deb, “Multi-objective optimization,” in Multi-Objective Optimization Using Evolutionary Algorithms. New York, NY, USA: Wiley, 2001, pp. 13–46.

[17] K. Deb, A. Pratap, S. Agarwal, and T. Meyarivan, “A fast and elitist multiobjective genetic algorithm: NSGA-II,” IEEE Trans. Evol. Comput., vol. 6, no. 2, pp. 182–197, Apr. 2002.

[18] C. A. C. Coello, G. T. Pulido, and M. S. Lechuga, “Handling multiple objectives with particle swarm optimization,” IEEE Trans. Evol. Comput., vol. 8, no. 3, pp. 256–279, Jun. 2004.

[19] E. Zitzler, M. Laumanns, and L. Thiele, “SPEA2: Improving the strength Pareto evolutionary algorithm,” Dept. Elect. Eng., Swiss Federal Inst. Technol. (ETH), Zürich, Switzerland, Tech. Rep. 103, 2001.

[20] J. Knowles and D. Corne, “The Pareto archived evolution strategy: A new baseline algorithm for Pareto multiobjective optimisation,” in Proc. Congr. Evol. Comput. (CEC), vol. 1. Washington, DC, USA, 1999, pp. 98–105.

[21] D. W. Corne et al., “PESA-II: Region-based selection in evolutionary multiobjective optimization,” in Proc. Genet. Evol. Comput. Conf. (GECCO), San Francisco, CA, USA, 2001, pp. 283–290.

[22] R. C. Purshouse and P. J. Fleming, “On the evolutionary optimization of many conflicting objectives,” IEEE Trans. Evol. Comput., vol. 11, no. 6, pp. 770–784, Dec. 2007.

[23] M. Li, S. Yang, and X. Liu, “Shift-based density estimation for Pareto-based algorithms in many-objective optimization,” IEEE Trans. Evol. Comput., vol. 18, no. 3, pp. 348–365, Jun. 2014.

[24] K. Deb and S. Jain, “Running performance metrics for evolutionary multi-objective optimization,” Kanpur Genet. Algorithms Lab. (KanGAL), Indian Inst. Technol., KanGAL Tech. Rep. 2002004, 2002.

[25] M. Laumanns, L. Thiele, K. Deb, and E. Zitzler, “Combining convergence and diversity in evolutionary multiobjective optimization,” Evol. Comput., vol. 10, no. 3, pp. 263–282, Sep. 2002.

[26] H. Sato, H. E. Aguirre, and K. Tanaka, “Controlling dominance area of solutions and its impact on the performance of MOEAs,” in Evolutionary Multi-Criterion Optimization (LNCS 4403). Heidelberg, Germany: Springer, 2007, pp. 5–20.

[27] K. Ikeda, H. Kita, and S. Kobayashi, “Failure of Pareto-based MOEAs: Does non-dominated really mean near to optimal?” in Proc. Congr. Evol. Comput., vol. 2. Seoul, South Korea, 2001, pp. 957–962.

[28] Z. He, G. G. Yen, and J. Zhang, “Fuzzy-based Pareto optimality for many-objective evolutionary algorithms,” IEEE Trans. Evol. Comput., vol. 18, no. 2, pp. 269–285, Apr. 2014.

[29] X. Zou, Y. Chen, M. Liu, and L. Kang, “A new evolutionary algorithm for solving many-objective optimization problems,” IEEE Trans. Syst., Man, Cybern. B, Cybern., vol. 38, no. 5, pp. 1402–1412, Oct. 2008.

[30] Y. Yuan, H. Xu, B. Wang, and X. Yao, “A new dominance relation-based evolutionary algorithm for many-objective optimization,” IEEE Trans. Evol. Comput., vol. 20, no. 1, pp. 16–37, Feb. 2016.

[31] F. di Pierro, S.-T. Khu, and D. A. Savic, “An investigation on preference order ranking scheme for multiobjective evolutionary optimization,” IEEE Trans. Evol. Comput., vol. 11, no. 1, pp. 17–45, Feb. 2007.

[32] M. Farina and P. Amato, “A fuzzy definition of ‘optimality’ for many-criteria optimization problems,” IEEE Trans. Syst., Man, Cybern. A, Syst., Humans, vol. 34, no. 3, pp. 315–326, May 2004.

[33] A. L. Jaimes, C. A. C. Coello, H. Aguirre, and K. Tanaka, “Adaptive objective space partitioning using conflict information for many-objective optimization,” in Evolutionary Multi-Criterion Optimization (LNCS 6576). Heidelberg, Germany: Springer, 2011, pp. 151–165.

[34] T. Wagner, N. Beume, and B. Naujoks, “Pareto-, aggregation-, and indicator-based methods in many-objective optimization,” in Evolutionary Multi-Criterion Optimization (LNCS 4403), S. Obayashi, K. Deb, C. Poloni, T. Hiroyasu, and T. Murata, Eds. Heidelberg, Germany: Springer, 2007, pp. 742–756.

[35] S. F. Adra and P. J. Fleming, “Diversity management in evolutionary many-objective optimization,” IEEE Trans. Evol. Comput., vol. 15, no. 2, pp. 183–195, Apr. 2011.

[36] K. Praditwong and X. Yao, “A new multi-objective evolutionary optimisation algorithm: The two-archive algorithm,” in Proc. Comput. Intell. Security, vol. 1. Guangzhou, China, 2006, pp. 286–291.

[37] H. Wang, L. Jiao, and X. Yao, “Two_Arch2: An improved two-archive algorithm for many-objective optimization,” IEEE Trans. Evol. Comput., vol. 19, no. 4, pp. 524–541, Aug. 2015.

[38] J. Cheng, G. G. Yen, and G. Zhang, “A many-objective evolutionary algorithm with enhanced mating and environmental selections,” IEEE Trans. Evol. Comput., vol. 19, no. 4, pp. 592–605, Aug. 2015.

[39] Q. Zhang and H. Li, “MOEA/D: A multiobjective evolutionary algorithm based on decomposition,” IEEE Trans. Evol. Comput., vol. 11, no. 6, pp. 712–731, Dec. 2007.

[40] M. Li, S. Yang, and X. Liu, “Pareto or non-Pareto: Bi-criterion evolution in multi-objective optimization,” IEEE Trans. Evol. Comput., to be published, doi: 10.1109/TEVC.2015.2504730.

[41] H.-L. Liu, L. Chen, Q. Zhang, and K. Deb, “An evolutionary many-objective optimisation algorithm with adaptive region decomposition,” School Appl. Math., Guangdong Univ. Technol., Guangzhou, China, COIN Tech. Rep. 2016014, 2016.


[42] H.-L. Liu, F. Gu, and Q. Zhang, “Decomposition of a multiobjective optimization problem into a number of simple multiobjective subproblems,” IEEE Trans. Evol. Comput., vol. 18, no. 3, pp. 450–455, Jun. 2014.

[43] H. Li and D. Landa-Silva, “An adaptive evolutionary multi-objective approach based on simulated annealing,” Evol. Comput., vol. 19, no. 4, pp. 561–595, Dec. 2011.

[44] J. Siwei, C. Zhihua, Z. Jie, and O. Yew-Soon, “Multiobjective optimization by decomposition with Pareto-adaptive weight vectors,” in Proc. 7th Int. Conf. Nat. Comput. (ICNC), vol. 3. Shanghai, China, Jul. 2011, pp. 1260–1264.

[45] Y. Qi et al., “MOEA/D with adaptive weight adjustment,” Evol. Comput., vol. 22, no. 2, pp. 231–264, Jun. 2014.

[46] K. Deb and R. B. Agrawal, “Simulated binary crossover for continuous search space,” Complex Syst., vol. 9, pp. 115–148, Apr. 1995.

[47] M. T. Jensen, “Reducing the run-time complexity of multiobjective EAs: The NSGA-II and other algorithms,” IEEE Trans. Evol. Comput., vol. 7, no. 5, pp. 503–515, Oct. 2003.

[48] K. Li, K. Deb, Q. Zhang, and S. Kwong, “Efficient non-domination level update approach for steady-state evolutionary multiobjective optimization,” Dept. Elect. Comput. Eng., Michigan State Univ., East Lansing, MI, USA, COIN Tech. Rep. 2014014, 2014.

[49] R. H. Gómez and C. A. C. Coello, “Improved metaheuristic based on the R2 indicator for many-objective optimization,” in Proc. Genet. Evol. Comput. Conf., Madrid, Spain, 2015, pp. 679–686.

[50] K. Deb, L. Thiele, M. Laumanns, and E. Zitzler, Scalable Test Problems for Evolutionary Multiobjective Optimization. London, U.K.: Springer, 2005.

[51] S. Huband, P. Hingston, L. Barone, and L. While, “A review of multiobjective test problems and a scalable test problem toolkit,” IEEE Trans. Evol. Comput., vol. 10, no. 5, pp. 477–506, Oct. 2006.

[52] E. Zitzler and L. Thiele, “Multiobjective evolutionary algorithms: A comparative case study and the strength Pareto approach,” IEEE Trans. Evol. Comput., vol. 3, no. 4, pp. 257–271, Nov. 1999.

[53] C. A. C. Coello, G. B. Lamont, and D. A. V. Veldhuizen, Evolutionary Algorithms for Solving Multi-Objective Problems, 2nd ed. New York, NY, USA: Springer, 2007.

[54] A. Zhou, Y. Jin, Q. Zhang, B. Sendhoff, and E. Tsang, “Combining model-based and genetics-based offspring generation for multi-objective optimization using a convergence criterion,” in Proc. IEEE Congr. Evol. Comput. (CEC), Vancouver, BC, Canada, 2006, pp. 892–899.

[55] H. Ishibuchi, H. Masuda, Y. Tanigaki, and Y. Nojima, “Difficulties in specifying reference points to calculate the inverted generational distance for many-objective optimization problems,” in Proc. IEEE Symp. Comput. Intell. Multi-Criteria Decis. Making (MCDM), Orlando, FL, USA, 2014, pp. 170–177.

[56] H. Jain and K. Deb, “An evolutionary many-objective optimization algorithm using reference-point based nondominated sorting approach, part II: Handling constraints and extending to an adaptive approach,” IEEE Trans. Evol. Comput., vol. 18, no. 4, pp. 602–622, Aug. 2014.

[57] K. Deb, Multi-Objective Optimization Using Evolutionary Algorithms, vol. 16. New York, NY, USA: Wiley, 2001.

[58] J. J. Durillo and A. J. Nebro, “jMetal: A Java framework for multi-objective optimization,” Adv. Eng. Softw., vol. 42, no. 10, pp. 760–771, 2011.

[59] I. Das and J. E. Dennis, “Normal-boundary intersection: A new method for generating the Pareto surface in nonlinear multicriteria optimization problems,” SIAM J. Optim., vol. 8, no. 3, pp. 631–657, 1998.

[60] H. Ishibuchi, H. Masuda, and Y. Nojima, “Pareto fronts of many-objective degenerate test problems,” IEEE Trans. Evol. Comput., to be published, doi: 10.1109/TEVC.2015.2505784.

[61] J. Bader and E. Zitzler, “HypE: An algorithm for fast hypervolume-based many-objective optimization,” Evol. Comput., vol. 19, no. 1, pp. 45–76, Jul. 2011.

[62] J. Derrac, S. García, D. Molina, and F. Herrera, “A practical tutorial on the use of nonparametric statistical tests as a methodology for comparing evolutionary and swarm intelligence algorithms,” Swarm Evol. Comput., vol. 1, no. 1, pp. 3–18, Mar. 2011.

[63] X. Liao, Q. Li, X. Yang, W. Zhang, and W. Li, “Multiobjective optimization for crash safety design of vehicles using stepwise regression model,” Struct. Multidiscipl. Optim., vol. 35, no. 6, pp. 561–569, Jun. 2008.

[64] L. Gu et al., “Optimisation and robustness for crashworthiness of side impact,” Int. J. Veh. Design, vol. 26, no. 4, pp. 348–360, 2001.

[65] K. Deb et al., “Reliability-based optimization using evolutionary algorithms,” IEEE Trans. Evol. Comput., vol. 13, no. 5, pp. 1054–1074, Oct. 2009.

Yi Xiang was born in Hubei, China, in 1986. He received the M.S. degree in applied mathematics from Guangzhou University, Guangzhou, China, in 2013. He is currently pursuing the Ph.D. degree with Sun Yat-sen University, Guangzhou.

His current research interests include many-objective optimization, mathematical modeling, and data mining.

Yuren Zhou received the B.Sc. degree in mathematics from Peking University, Beijing, China, in 1988, and the M.Sc. degree in mathematics and the Ph.D. degree in computer science from Wuhan University, Wuhan, China, in 1991 and 2003, respectively.

He is currently a Professor with the School of Data and Computer Science, Sun Yat-sen University, Guangzhou, China. His current research interests include design and analysis of algorithms, evolutionary computation, and social networks.

Miqing Li received the B.Sc. degree in computer science from the School of Computer and Communication, Hunan University, Changsha, China, the M.Sc. degree in computer science from the College of Information Engineering, Xiangtan University, Xiangtan, China, and the Ph.D. degree from the Department of Computer Science, Brunel University London, Uxbridge, U.K.

His current research interests include evolutionary computation, many-objective optimization, and dynamic optimization.

Zefeng Chen received the B.Sc. degree from Sun Yat-sen University, Guangzhou, China, and the M.Sc. degree from the South China University of Technology, Guangzhou. He is currently pursuing the Ph.D. degree with the School of Data and Computer Science, Sun Yat-sen University.

His current research interests include evolutionary computation, multiobjective optimization, and machine learning.