Particle Swarm Optimizer for Constrained Optimization

Saber M. Elsayed 1, Ruhul A. Sarker 1 and Efrén Mezura-Montes 2

1 School of Engineering and Information Technology, University of New South Wales, ADFA Campus, Canberra 2600, Australia, E-mails: [email protected], [email protected]

2 Departamento de Inteligencia Artificial, Universidad Veracruzana, Sebastián Camacho 5, Centro, Xalapa, Veracruz, 91000, Mexico, E-mail: [email protected]

2013 IEEE Congress on Evolutionary Computation, June 20-23, Cancún, México

Abstract—Recently, the Particle Swarm Optimizer (PSO) has become a popular tool for solving constrained optimization problems. However, there is no guarantee that PSO will perform consistently well for all problems and will not be trapped in local optima. In this paper, a PSO algorithm is introduced that uses two new mechanisms: the first maintains a better balance between intensification and diversification, and the second helps the algorithm escape from local solutions. Furthermore, all the basic parameters are determined self-adaptively. The performance of the proposed algorithm is analyzed by solving the CEC2010 constrained optimization problems. The algorithm shows consistent performance and is superior to other state-of-the-art algorithms.

Index Terms—Constrained optimization, particle swarm optimization, diversity mechanism

I. INTRODUCTION

Many practical decision processes require solving Constrained Optimization Problems (COPs). In a single objective COP, the values for a set of decision variables are determined by either maximizing or minimizing a given objective function subject to satisfying a number of functional constraints as well as the variable bounds. A COP with a minimization objective can be described as follows:

$\min f(\vec{x})$

Subject to:
$g_k(\vec{x}) \le 0, \quad k = 1, 2, \dots, K$
$h_e(\vec{x}) = 0, \quad e = 1, 2, \dots, E$
$L_j \le x_j \le U_j, \quad j = 1, 2, \dots, D$  (1)

where $\vec{x} = (x_1, x_2, \dots, x_D)$ is a vector of D decision variables, $f(\vec{x})$ is the objective function, $g_k(\vec{x})$ is the k-th inequality constraint, $h_e(\vec{x})$ is the e-th equality constraint, and $L_j$ and $U_j$ are the lower and upper limits of $x_j$.

The special structure of a COP as a mathematical problem, and the variation of function properties, make it difficult to solve using either analytical or computational deterministic methods. So, researchers and practitioners have attempted to use different population-based stochastic search methods to solve such problems, such as PSO [1-4], genetic algorithms (GA) [5] and differential evolution (DE) [6]. PSO basically simulates the search process by mimicking the social behavior of flocks of birds and schools of fish [1]. The concept of PSO is very simple and it is not difficult to implement. For this reason, PSO has become very popular within a short period of time. However, in any PSO algorithm, the choice of parameters plays a pivotal role in its success. In addition, PSO can be trapped in local optima. To overcome these problems, many studies have been conducted, such as the adaptation of algorithmic parameters and the ensemble of search operators [7]. Some other approaches have also been employed with PSO for solving problems with multi-modal functions, such as: (i) Crowding [8]: an offspring replaces the nearest individual. (ii) Fitness sharing [9]: modifies the search landscape by reducing the payoff in densely populated regions; here, highly similar individuals in the population are penalized by reducing their fitness. (iii) Speciation [10]: the individuals are divided into different groups, in which the individuals within a predefined radius of the species seed are recognized as the same species. Most recently, a small number of studies [11-12] have incorporated a restart mechanism (with the Covariance Matrix Adaptation evolution strategy) into evolutionary algorithms, where the population size can be increased by injecting new individuals under certain conditions. Although the addition of new individuals at any stage of the evolution process may increase the chance of escaping from local optima, it does not guarantee improvement for all types of problems. In addition, if the number of new individuals is too large, the algorithm may experience slow convergence. Therefore, in this paper, we propose the following two points that may help to improve the performance of a PSO algorithm: (i) the injected new individuals should evolve independently of the individuals of the population for a predefined number of generations, and (ii) there should be limited information exchange from the original population to accelerate the search process of the new individuals. Once the algorithm shows a sign of escaping from local optima, the population size can then be reduced to the initial number of individuals.

The key contributions of this paper can be stated as follows: (a) a new PSO variant that provides a good balance between the global and local PSO variants; (b) a new automatic individual injection method that allows PSO to escape from local optima; and (c) a new self-adaptive mechanism to update the PSO parameters appropriately.

To judge the performance of the proposed algorithm, a set of constrained problems from the IEEE CEC'2010 constrained optimization competition [13] has been solved. The reason for choosing this problem set is that these problems possess functions with diverse mathematical properties, such as linear, non-linear, multimodal, separable, and non-separable. The constraints are of both equality and inequality types. In some problems, the feasible region is considerably small with respect to the whole search space. From the results, it is clear that the proposed algorithm performs not only consistently for all problems but also better than the state-of-the-art algorithms.

This paper is organized as follows. After the introduction, Section II presents an overview of PSO. Section III describes the proposed algorithm and its components. The experimental results and the analysis of those results are presented in Section IV. Finally, the conclusions and future work are given in Section V.

II. PARTICLE SWARM OPTIMIZATION

PSO is a population-based stochastic global optimization method. It takes inspiration from the movements of a flock of birds searching for food [1]. A PSO algorithm starts with an initial swarm of particles, which fly through the problem hyperspace with given velocities. Each particle's velocity is updated at each generation based on its own and its neighbors' experiences [14-15]. The movement of each particle then naturally evolves towards an optimal or near-optimal solution.

A. Operators and parameters

In the PSO field, particles have been studied in two general types of neighbourhoods: global and local. In the global one, each particle's movement is biased by its personal best position and also by the best particle found so far. This leads the algorithm to have a quick convergence pattern. However, it may thus become trapped in local optima. On the other hand, in the local PSO, each particle's velocity is adjusted according to both its personal best and the best particle within its neighborhood. In this variant, the algorithm is most probably able to escape from local solutions, but it may suffer from a slower convergence pattern [15].

These two variants can be stated as follows: in a D-dimensional search space, the position and velocity of particle z are represented as the vectors $\vec{x}_z = (x_{z,1}, \dots, x_{z,D})$ and $\vec{v}_z = (v_{z,1}, \dots, v_{z,D})$, respectively. Let $\vec{p}_z = (p_{z,1}, \dots, p_{z,D})$ and $\vec{p}_l = (p_{l,1}, \dots, p_{l,D})$ be the best position remembered by particle z and its neighbours' best position so far, respectively. The velocity of particle z at generation t can be calculated as indicated in Eqs. (2) and (3) for the global and local PSO, respectively. In both cases, the position of particle z is updated as in Eq. (4).

(1) Global PSO:

$v_{z,j}^{t} = w\,v_{z,j}^{t-1} + c_1 r_1\left(p_{z,j} - x_{z,j}^{t-1}\right) + c_2 r_2\left(p_{g,j} - x_{z,j}^{t-1}\right)$  (2)

(2) Local PSO:

$v_{z,j}^{t} = w\,v_{z,j}^{t-1} + c_1 r_1\left(p_{z,j} - x_{z,j}^{t-1}\right) + c_2 r_2\left(p_{l,j} - x_{z,j}^{t-1}\right)$  (3)

$x_{z,j}^{t} = x_{z,j}^{t-1} + v_{z,j}^{t}$  (4)

where $v_{z,j}^{t-1}$ is the velocity of particle z at generation t−1, $x_{z,j}^{t-1}$ its position, $w$ is the inertia weight factor, $c_1$ and $c_2$ are the acceleration coefficients, $r_1$ and $r_2$ are uniform random numbers within [0, 1], and $p_{l,j}$ is the local best for individual z.
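To make the update rules concrete, the following minimal Python sketch implements Eqs. (2)-(4); the function and variable names are ours, and the default w, c1 and c2 values are the commonly cited ones mentioned in the next paragraph, not settings prescribed by this paper.

import numpy as np

def velocity_update(v, x, p_best, guide, w=0.729, c1=1.49445, c2=1.49445, rng=None):
    """One velocity update per Eq. (2) (global PSO) or Eq. (3) (local PSO).

    `guide` is the global-best position for Eq. (2) or the neighbourhood-best
    position for Eq. (3).
    """
    rng = np.random.default_rng() if rng is None else rng
    r1 = rng.random(x.shape)   # uniform random numbers in [0, 1]
    r2 = rng.random(x.shape)
    return w * v + c1 * r1 * (p_best - x) + c2 * r2 * (guide - x)

def position_update(x, v):
    """Position update per Eq. (4)."""
    return x + v

# Example: one particle in a 5-dimensional search space (illustrative values).
rng = np.random.default_rng(0)
x = rng.uniform(-5.0, 5.0, 5)        # current position
v = np.zeros(5)                      # current velocity
p_best = x.copy()                    # personal best position
g_best = rng.uniform(-5.0, 5.0, 5)   # global (or neighbourhood) best position
v = velocity_update(v, x, p_best, g_best, rng=rng)
x = position_update(x, v)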

Clerc and Kennedy [16] proposed an alternative version of PSO, in which the velocity adjustment is given as follows:

$v_{z,j}^{t} = \chi\left[v_{z,j}^{t-1} + c_1 r_1\left(p_{z,j} - x_{z,j}^{t-1}\right) + c_2 r_2\left(p_{g,j} - x_{z,j}^{t-1}\right)\right]$  (5)

where $\chi = \frac{2}{\left|2 - \varphi - \sqrt{\varphi^2 - 4\varphi}\right|}$ is the constriction factor and $\varphi = c_1 + c_2$, $\varphi > 4$. This version of PSO eliminates the parameter $w$; with $\varphi = 4.1$, $\chi = 0.729$. Other researchers have mentioned that $c_1 = c_2 = 1.49445$ is a good choice [7]. Furthermore, because PSO has a tendency to converge prematurely, Mezura-Montes and Flores-Mendoza [17] proposed a PSO with a dynamic (deterministic) adaptation mechanism for two of its parameters, which starts with low velocity values for some particles and increases these values during the search process. They also chose not to adapt all particles; instead, some particles use fixed values of those two parameters.
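As a quick numerical check of the constriction formula above (a sketch; the helper name is ours):

from math import sqrt

def constriction_factor(c1, c2):
    """Clerc-Kennedy constriction factor, valid for phi = c1 + c2 > 4."""
    phi = c1 + c2
    return 2.0 / abs(2.0 - phi - sqrt(phi * phi - 4.0 * phi))

# With c1 = c2 = 2.05 (phi = 4.1), chi is approximately 0.729.
print(constriction_factor(2.05, 2.05))   # ~0.7298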

Motivated by the fact that a particle is not simply influenced by the best performer among its neighbors, Mendes et al. [18] proposed to make particles fully informed, i.e., a particle uses information from all of its neighbors rather than just from the best one. Although their proposed algorithm was better than other state-of-the-art algorithms, it was not able to obtain consistent results over all the tested unconstrained problems.

Parsopoulos and Vrahatis [19] proposed a unified PSO variant that harnessed both the local and global variants of PSO in the hope of combining their exploration and exploitation abilities without imposing additional requirements in terms of function evaluations.

For the purpose of handling multi-modal problems, Qu et al. [20] proposed a distance-based locally informed PSO, which eliminated the need to specify any niching parameter and enhanced the fine search ability of PSO. The algorithm used several local bests to guide the search of each particle. The neighborhoods were estimated in terms of Euclidean distance. The algorithm was tested on a set of unconstrained problems and showed superior performance in comparison to niching algorithms. However, our preliminary empirical results showed that this algorithm is poor at handling constrained problems.

For the local PSO, there are different neighborhood structures that may affect the performance of swarms, such as ring, wheel and random local topologies. In the ring topology, each particle is connected with two neighbors. In the wheel topology, individuals are isolated from one another and information is only communicated to a focal individual, while in the random topology each individual gets information from a random individual.
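The three topologies can be illustrated with a short sketch (function names are ours); the neighbour lists returned here would then be used to pick each particle's local best:

import random

def ring_neighbors(n):
    """Ring topology: each particle is connected with its two adjacent particles."""
    return {z: [(z - 1) % n, (z + 1) % n] for z in range(n)}

def wheel_neighbors(n, focal=0):
    """Wheel topology: information is only communicated through a focal particle."""
    return {z: [focal] if z != focal else [i for i in range(n) if i != focal]
            for z in range(n)}

def random_neighbors(n, rng=random):
    """Random topology: each particle gets information from one random particle."""
    return {z: [rng.randrange(n)] for z in range(n)}

# Example with a swarm of 5 particles.
print(ring_neighbors(5))   # {0: [4, 1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}
print(wheel_neighbors(5))  # {0: [1, 2, 3, 4], 1: [0], 2: [0], 3: [0], 4: [0]}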

B. A Review on Solving COPs by PSO

In the last few years, PSO has gradually gained much attention in solving COPs. Cagnina et al. [21] proposed a hybrid PSO for solving COPs. In their algorithm, they introduced different methods to update each particle's information, as well as the use of a double population and a special shake mechanism to maintain diversity. The algorithm obtained competitive results; however, it could not obtain the optimal solutions for several test problems. As mentioned earlier, Mezura-Montes and Flores-Mendoza [17] proposed a PSO with a dynamic adaptation of two different PSO parameters. The algorithm was tested on 24 test problems. Although their proposed algorithm was better than other algorithms, it was not able to obtain the optimal solutions on many occasions, and its performance was not consistent. Elsayed et al. [7] proposed a multi-topology PSO algorithm with a local search technique, which was tested by solving a small set of constrained problems. Their results were very encouraging for further investigation. The PSO algorithm developed by Hu and Eberhart [22] started with a group of feasible solutions, and a feasibility function was used to check the constraint satisfaction of the newly explored solutions. All particles kept only feasible solutions in their memory. The algorithm was tested on 11 test problems, but the results were not consistent. Liang and Suganthan [15] proposed a dynamic multi-swarm PSO with a constraint-handling mechanism for solving COPs, in which the sub-swarms were adaptively assigned to explore different constraints according to their difficulties. In addition, a Sequential Quadratic Programming (SQP) method was employed to improve the algorithm's local search ability. Although the algorithm showed superior performance to many other algorithms, it seems that its search capabilities were highly dependent on SQP. The algorithm by Toscano-Pulido and Coello Coello [23] used the feasibility rule to handle constraints, as well as a turbulence operator to improve the exploratory capability of the algorithm. However, the results obtained were not consistent. Krohling and Coelho [24] introduced a PSO algorithm in which the acceleration coefficients of PSO were generated using a Gaussian probability distribution. In addition, two populations were evolved, one for the variable vector and the other for the Lagrange multiplier vector. However, their algorithm was not able to obtain the optimal solution over all runs. He and Wang [25] proposed a co-evolutionary PSO algorithm, in which two kinds of swarms, one for evolutionary exploration and exploitation and the other for evolving the penalty factors, were co-evolved. The algorithm was competitive with other algorithms in solving different engineering problems.

Motivated by multi-objective optimization techniques, Ray and Liew [26] proposed an effective multilevel information-sharing strategy within a swarm to handle single-objective, constrained and unconstrained, optimization problems. A better performer list (BPL) was generated by a multilevel Pareto ranking scheme that treats every constraint as an objective, while the particles that were not in the BPL gradually congregated around their closest neighbors in the BPL.

III. AUTOMATIC PARTICLES INJECTION PSO ALGORITHM WITH AN ARCHIVE

In this section, we first describe the proposed algorithm, which we introduce as the Automatic Particles Injection PSO (API-PSO). We then discuss the self-adaptive mechanism and the constraint handling technique used in this research.

A. API-PSO

It is well accepted that global PSO has the ability to converge quickly, but it may still be trapped in local optima. On the contrary, a local PSO topology can avoid premature convergence, but its convergence to a solution is slower. Consequently, it is useful to adopt a PSO that strikes a balance between these two variants. In addition, the maintenance of diversity is still an important aspect in designing a PSO algorithm.

Considering the abovementioned facts, the proposed PSO uses an archive of particles and gradually places more emphasis on the best particles in the archive. This helps to maintain a balance between diversification and intensification. In addition, an automatic injection of new particles is used to help the algorithm escape from local solutions. Furthermore, a mutation operator is integrated with the proposed algorithm. The proposed algorithm proceeds as follows:

1. At generation t = 0, generate an initial random population. The variables in each particle must be within their ranges, as follows:

$x_{z,j}^{0} = L_j + rand(0,1) \times (U_j - L_j), \quad j = 1, 2, \dots, D$  (6)

Also, for each particle, generate its own $\chi$, $c_1$ and $c_2$ values. The adaptation of these control parameters is discussed in Section III.B.

2. Set the values of two parameters: the first defines the archive size, while the second determines the number of best particles which should be inserted into the archive.

3. Sort the entire population based on fitness function and/or constraint violation.

4. Fill the archive with the best particles, i.e., those that have the best fitness values and/or the lowest constraint violations.

5. Calculate the centre vector ($\vec{c}$) of all particles in the current population, i.e. $\vec{c} = \frac{1}{PS}\sum_{z=1}^{PS}\vec{x}_z$, where PS is the population size.

6. Calculate the Euclidean distance of each particle $\vec{x}_z$ to $\vec{c}$, and then sort the particles from the largest to the smallest distance.

7. Add the best particles, based on their Euclidean distance, to the archive (a sketch of Steps 3-7 is given after this list).

8. For each particle $\vec{x}_z$ in the population, evolve it using Eqs. (7), (8) and (9):

If $rand \le CFE/FFE$:

$v_{z,j}^{t} = \chi_z\left[v_{z,j}^{t-1} + c_{1,z} r_1\left(p_{z,j} - x_{z,j}^{t-1}\right) + c_{2,z} r_2\left(arch_{\Theta,j} - x_{z,j}^{t-1}\right)\right]$  (7)

Else:

$v_{z,j}^{t} = \chi_z\left[v_{z,j}^{t-1} + c_{1,z} r_{1,j}\left(p_{z,j} - x_{z,j}^{t-1}\right) + c_{2,z} r_{2,j}\left(arch_{\Theta_j,j} - x_{z,j}^{t-1}\right)\right]$  (8)

$x_{z,j}^{t} = x_{z,j}^{t-1} + v_{z,j}^{t}$  (9)

where $arch_{\Theta}$ is a particle selected from the archive pool, $\Theta$ is an integer random number within $[1, |arch|]$, while CFE and FFE are the current and maximum numbers of fitness evaluations, respectively. We must mention here that the difference between Eqs. (7) and (8) is that in (7) $\Theta$, $r_1$ and $r_2$ are fixed to update all variables, while in (8), for every $j = 1, 2, \dots, D$, a random $\Theta_j$, $r_{1,j}$ and $r_{2,j}$ are selected. Furthermore, the algorithm places more emphasis on (8) at the early stages of the search process, to maintain diversity, and then adaptively shifts its emphasis to (7). We must mention here that Steps 4 and 5 are discarded once a predefined condition is met.

9. A mutation operator is applied with a predefined probability. In this mutation, the new particle is shifted to explore new areas, as detailed in Eq. (10):

(10)

where one control value is a random number within [0.1, 0.5], while the others are integer random numbers. These values are based on [27]. The mutation step is applied after PSO updates a particle.

10. Automatic injection of individuals [28]: if the best particle within the entire population does not improve by more than a predefined tolerance for a given number of generations, new particles are generated, as in Table I, and each new particle is assigned its own $\chi$, $c_1$ and $c_2$ values.

TABLE I. PSEUDO-CODE OF ADDING NEW PARTICLES

/* inherit information from previously evolved particles */
Each new particle inherits part of its information from a randomly selected, previously evolved particle, where the selection index is an integer random number.

11. For Ω generations, all new particles are evolved using the same PSO algorithm, as shown in Step 2 to Step 9. The new particles are then merged with the particles obtained from previous generations, and hence the population size increases. All particles are then evolved by PSO as described earlier. If the best particle improves, the population size is shrunk to its initial size (the best individuals are selected). However, if the best particle does not change for a given number of generations, new particles are added as described earlier. To sum up, the population size is reduced to its initial value if one of the following conditions is met: (i) the best particle in the population improves; (ii) a predefined maximum size is reached and the best particle has not improved for a given number of generations; (iii) the known best solution is reached with a tolerance of 1e-04 (this condition is not considered if the global optimal solution is not known).

12. If one or more of the stopping criteria are met, stop; else, go to Step 3.
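The following minimal Python sketch illustrates Steps 3-7 (illustrative names and sizes, not the authors' code; the actual sorting also accounts for constraint violations, which is omitted here for brevity):

import numpy as np

def build_archive(population, fitness, n_best, n_far):
    """Illustrative archive construction.

    population : (PS, D) array of particle positions
    fitness    : (PS,) array of fitness values (smaller is better; all assumed feasible)
    n_best     : number of fittest particles to insert (Step 4)
    n_far      : number of most-distant particles to insert (Steps 5-7)
    """
    order = np.argsort(fitness)                          # Step 3: sort by fitness
    archive = [population[i] for i in order[:n_best]]    # Step 4: best particles

    centre = population.mean(axis=0)                     # Step 5: centre vector of the swarm
    dist = np.linalg.norm(population - centre, axis=1)   # Step 6: Euclidean distances
    far_order = np.argsort(-dist)                        # largest distance first
    archive += [population[i] for i in far_order[:n_far]]  # Step 7: most diverse particles
    return np.array(archive)

# Example with a random swarm of 30 particles in 10 dimensions.
rng = np.random.default_rng(1)
pop = rng.uniform(-5.0, 5.0, (30, 10))
fit = rng.random(30)
arch = build_archive(pop, fit, n_best=5, n_far=5)
print(arch.shape)   # (10, 10)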

B. PSO Parameters Self-Adaptation Control

According to the PSO literature, the performance of a PSO algorithm is greatly dependent on its parameters ($\chi$, $c_1$ and $c_2$). It is very common to use fixed parameter values during the evolution process. However, we believe the use of self-adaptive values for these parameters would improve the algorithm's performance.

To do this, each particle in the current population is assigned three additional values, as seen in Fig. 1: one for the constriction factor $\chi$, while the other two are for $c_1$ and $c_2$. These values are initialized within their own boundaries, as shown in Eqs. (11)-(13).

Fig. 1. Initialization of the initial particles and PSO parameters

$\chi_z = \chi_{min} + rand(0,1) \times (\chi_{max} - \chi_{min})$  (11)
$c_{1,z} = c_{1,min} + rand(0,1) \times (c_{1,max} - c_{1,min})$  (12)
$c_{2,z} = c_{2,min} + rand(0,1) \times (c_{2,max} - c_{2,min})$  (13)

where $\chi_{min}$ and $\chi_{max}$ are the lower and upper bounds for $\chi$, $c_{1,min}$ and $c_{1,max}$ are the lower and upper bounds for $c_1$, while $c_{2,min}$ and $c_{2,max}$ are the lower and upper bounds for $c_2$.

Following the PSO mechanism as in Eq. (5), the velocity of each parameter is then calculated, and thus the new parameter value is the sum of its previous value and its new velocity, as shown below:

$v_{\chi_z}^{t} = 0.5\left[v_{\chi_z}^{t-1} + r_1\left(\chi_{lbest_z} - \chi_z\right) + r_2\left(\chi_{gbest} - \chi_z\right)\right]$  (14)
$\chi_z = \chi_z + v_{\chi_z}^{t}$  (15)

where $\chi_{lbest_z}$ is the $\chi$ value of the best local particle of particle z, $\chi_{gbest}$ is the $\chi$ value of the global best particle obtained so far, while $r_1$ and $r_2$ are uniform random numbers between zero and one. Note that 0.5 represents a constriction factor, as in (5), and a value of 0.5 is used in this research based on our empirical results.

$v_{c_{1,z}}^{t} = 0.5\left[v_{c_{1,z}}^{t-1} + r_1\left(c_{1,lbest_z} - c_{1,z}\right) + r_2\left(c_{1,gbest} - c_{1,z}\right)\right]$  (16)
$c_{1,z} = c_{1,z} + v_{c_{1,z}}^{t}$  (17)

where $c_{1,lbest_z}$ is the $c_1$ value of the local best for particle z, $c_{1,gbest}$ is the $c_1$ value of the global best particle found so far, while $r_1$ and $r_2$ are uniform random numbers within [0, 1].

$v_{c_{2,z}}^{t} = 0.5\left[v_{c_{2,z}}^{t-1} + r_1\left(c_{2,lbest_z} - c_{2,z}\right) + r_2\left(c_{2,gbest} - c_{2,z}\right)\right]$  (18)
$c_{2,z} = c_{2,z} + v_{c_{2,z}}^{t}$  (19)

where $c_{2,lbest_z}$ is the $c_2$ value of the local best for particle z, $c_{2,gbest}$ is the $c_2$ value of the global best particle found so far, while $r_1$ and $r_2$ are uniform random numbers within [0, 1].
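A minimal sketch of this self-adaptation step, under the assumption that each of $\chi$, $c_1$ and $c_2$ follows the update form of Eqs. (14)-(19); names are illustrative:

import numpy as np

def adapt_parameter(value, velocity, local_best, global_best, lo, hi, rng):
    """PSO-style self-adaptation of one parameter (Eqs. 14-19).

    value, velocity : current parameter value and its velocity for particle z
    local_best      : parameter value of the local best of particle z
    global_best     : parameter value of the global best particle
    lo, hi          : parameter bounds used at initialization (Eqs. 11-13)
    """
    r1, r2 = rng.random(), rng.random()
    velocity = 0.5 * (velocity + r1 * (local_best - value) + r2 * (global_best - value))
    value = value + velocity
    # Keep the parameter inside its bounds (our assumption, not stated in the paper).
    value = min(max(value, lo), hi)
    return value, velocity

rng = np.random.default_rng(2)
chi, v_chi = 0.6, 0.0
chi, v_chi = adapt_parameter(chi, v_chi, local_best=0.65, global_best=0.72,
                             lo=0.4, hi=0.729, rng=rng)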

C. Constraint Handling

To deal with the constraints, in this paper, the superiority of feasible solutions method [29] is used, in which a set of three feasibility criteria was used: i) between two feasible solutions, the fittest one (according to fitness function) is better, ii) a feasible solution is always better than an infeasible one, iii) between two infeasible solutions, the one having the smaller sum of constraint violation is preferred.

In this paper, the sum of constraint violations is calculated as follows:

$\Theta(\vec{x}) = \sum_{k=1}^{K} \max\left(0, g_k(\vec{x})\right) + \sum_{e=1}^{E} \max\left(0, \left|h_e(\vec{x})\right| - \varepsilon\right)$  (20)

where $g_k(\vec{x})$ is the k-th inequality constraint and $h_e(\vec{x})$ is the e-th equality constraint. The equality constraints are also transformed into inequalities of the form shown in Eq. (21), where $\varepsilon$ is a small value (0.0001):

$-\varepsilon \le h_e(\vec{x}) \le \varepsilon, \quad \text{for } e = 1, \dots, E$  (21)
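A minimal Python sketch of the superiority-of-feasible-solutions comparison and the violation measure of Eqs. (20)-(21); the function names are our own, with ε = 0.0001 as stated above:

def violation(g_values, h_values, eps=1e-4):
    """Sum of constraint violations (Eq. 20): g_k(x) <= 0 and |h_e(x)| <= eps."""
    v = sum(max(0.0, g) for g in g_values)
    v += sum(max(0.0, abs(h) - eps) for h in h_values)
    return v

def better(f1, v1, f2, v2):
    """Feasibility rules [29]: True if solution 1 is preferred over solution 2."""
    if v1 == 0 and v2 == 0:
        return f1 <= f2          # both feasible: smaller objective wins
    if v1 == 0 or v2 == 0:
        return v1 == 0           # feasible beats infeasible
    return v1 <= v2              # both infeasible: smaller violation wins

# Example: solution 1 is feasible, solution 2 violates one inequality constraint.
v1 = violation(g_values=[-0.5], h_values=[0.00005])
v2 = violation(g_values=[0.3], h_values=[0.0])
print(better(f1=10.0, v1=v1, f2=5.0, v2=v2))   # True: feasible beats infeasible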

IV. EXPERIMENTAL RESULTS

In this section, the performance of the proposed algorithm is analyzed by solving a set of recent benchmark problems on constrained optimization (18 problems, each with 10 and 30 dimensions) [13]. Firstly, API-PSO is compared with the state-of-the-art algorithms. Secondly, the effect of the proposed diversity mechanism is discussed. The algorithm was coded using Matlab 7.8.0 (R2009a) and was run on a PC with a 3.0 GHz Core 2 Duo processor, 3.5 GB of RAM, and Windows XP. The algorithm was run 25 times for each test problem, where the stopping criterion was to run for up to 200K FFEs for the 10D instances and 600K FFEs for 30D. Note that the evaluation of the constraints is counted as one fitness evaluation.

API-PSO uses the following parameters: a population size of 90; $\chi \in [0.4, 0.729]$, $c_1 \in [1.4, 2]$ and $c_2 \in [1.4, 2]$, with $\chi$, $c_1$ and $c_2$ self-updated as shown in Section III.B; archive-related sizes of PS/2 and PS/3; a mutation probability of 0.25; injection-related settings of 60, and of 20 and 50 for non-separable and separable functions, respectively; a tolerance of 1.0E-08; and Ω = 3. Note that these values are based on our empirical studies.

A. Comparison to the state-of-the-art algorithms

Here, the solutions of the proposed algorithm are compared with those of a DE algorithm (εDEag [30]), which won the CEC'2010 constrained optimization competition, as well as with the best PSO algorithm from the same competition, namely the Co-evolutionary Comprehensive Learning Particle Swarm Optimizer (CO-CLPSO) [31]. The detailed computational results for both the 10D and 30D instances are shown in Appendix A.

To start with, it is important to report that API-PSO is able to reach a 100% feasibility ratio for all 36 instances (18 each for 10D and 30D). εDEag obtained a 100% feasibility ratio for only 35 test instances, as it achieved only a 12% feasibility ratio for C12 with 30D, while the CO-CLPSO algorithm was able to obtain a 94.4% feasibility ratio for the 10D and 87.3% for the 30D test problems.

To judge the quality of the solutions, a summary of the comparison is reported in Table II. From this table, for the 10D results, API-PSO is competitive with εDEag, while it is better than CO-CLPSO based on the average results. For the 30D problems, API-PSO is superior to the other algorithms for the majority of test problems.

Table II. COMPARISON OF API-PSO WITH εDEag AND CO-CLPSO

D     Comparison                 Fitness   Better   Equal   Worse   Test
10D   API-PSO – to – εDEag       Best      5        12      1       ≈
10D   API-PSO – to – εDEag       Average   7        7       4       ≈
10D   API-PSO – to – CO-CLPSO    Best      13       5       0       +
10D   API-PSO – to – CO-CLPSO    Average   17       1       0       +
30D   API-PSO – to – εDEag       Best      15       1       2       +
30D   API-PSO – to – εDEag       Average   14       0       4       +
30D   API-PSO – to – CO-CLPSO    Best      17       1       0       +
30D   API-PSO – to – CO-CLPSO    Average   18       0       0       +

To judge the difference between API-PSO and the other algorithms statistically, a non-parametric test, the Wilcoxon Signed Rank Test [32], is performed. For a 5% significance level, a summary of the comparison is reported in Table II. In presenting the test results, one of three signs (+, − and ≈) is assigned, where the "+" sign means that the first algorithm is significantly better than the second, the "−" symbol denotes that the first algorithm is worse than the other, and "≈" indicates that there is no significant difference between the two algorithms. From this table, one can see that API-PSO is superior to CO-CLPSO based on the average results.

Based on the statistical test, for the best and average results, API-PSO is superior to the other two algorithms for the 30D problems. However, for the 10D problems, the algorithm is better than CO-CLPSO, and there is no significant difference when compared to εDEag.
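As an illustration of how such a test could be run (the data below are made up, and scipy is only one possible implementation; the paper does not state which was used):

from scipy.stats import wilcoxon

# Illustrative (made-up) per-problem average results for two algorithms.
algo_a = [0.10, 0.00, 1.25, 3.40, 0.00, 2.10, 0.05, 0.00, 1.90, 0.75]
algo_b = [0.12, 0.00, 1.80, 3.90, 0.10, 2.10, 0.07, 0.02, 2.40, 0.90]

# Wilcoxon signed-rank test at the 5% significance level; pairs with zero
# difference are discarded by the default zero_method.
stat, p_value = wilcoxon(algo_a, algo_b)
print(f"statistic={stat}, p-value={p_value:.4f}, significant={p_value < 0.05}")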

We have also compared the performance of these algorithms graphically using performance profiles [33]. Performance profiles are a tool to compare the performance of a set of algorithms on a set of test problems, with a comparison goal (such as the computational time or the average number of fitness evaluations) required to reach a certain level of a performance indicator (such as the optimal fitness). Due to the lack of appropriate data from the published algorithms, the performance profiles are defined in this section with a different comparison goal. In this case, the optimal solutions are not known for many problems, and no algorithm recorded the number of fitness evaluations used to obtain a predefined fitness value. For this reason, the comparison goal is defined as the average fitness value obtained after running the algorithms for a fixed number of fitness evaluations (200K and 600K FFEs for the 10D and 30D problems, respectively). The performance profiles are presented in Fig. 2.
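For reference, here is a sketch of how such profiles could be computed from a matrix of average fitness values, following the usual ratio-to-best formulation of [33]; the handling of negative values below is our own assumption, since the paper does not detail it.

import numpy as np

def performance_profile(values, taus):
    """values[p, a]: average fitness of algorithm a on problem p (minimization).

    Returns rho[a, t]: fraction of problems on which algorithm a's performance
    ratio is within a factor tau of the best algorithm for that problem.
    """
    # Shift so every entry is positive before taking ratios (our assumption for
    # problems whose best fitness values are negative or zero).
    shifted = values - values.min(axis=1, keepdims=True) + 1.0
    ratios = shifted / shifted.min(axis=1, keepdims=True)
    return np.array([[np.mean(ratios[:, a] <= tau) for tau in taus]
                     for a in range(values.shape[1])])

# Illustrative data: 4 problems, 3 algorithms.
vals = np.array([[0.0, 0.1, 0.5],
                 [-10.0, -9.5, -2.0],
                 [4.5, 6.7, 0.6],
                 [-64.0, -68.0, -63.0]])
print(performance_profile(vals, taus=[1.0, 2.0, 5.0]))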

This figure shows that API-PSO has a probability of 0.75 and 0.9 of obtaining the best fitness value for the 10D and 30D problems, respectively, and reaches a probability of 1.0 at values of 10.5E+10 and 12.0E+14, respectively. It is also clear that εDEag is the second best algorithm, while CO-CLPSO is the third.

(a) 10D

(b) 30D

Fig. 2. Average fitness values performance profiles comparing API-PSO with different state-of-the-art algorithms for both 10D and 30D problems, in (a) and (b), respectively.

B. The effect of the proposed injection mechanism

In this section, we show the benefit of incorporating the diversity mechanism into the algorithm in achieving better results as well as in escaping from local optima.

Here, the algorithm was run without the adaptive individual injection mechanism to solve both the 10D and 30D test problems. The parameters were set the same as those in the previous section. A comparison of the average results for five test problems is presented in Table III. These problems are multimodal and can be used as evidence of the benefit of the proposed methodology. Note that there is no significant difference for the other test problems.

Table III. COMPARISON BETWEEN API-PSO WITH AND WITHOUT THE INJECTION MECHANISM

Prob.   10D With          10D Without       30D With          30D Without
C01     -7.467701E-01     -7.441055E-01     -7.991352E-01     -7.686186E-01
C06     -5.775752E+02     -5.771758E+02     -5.306378E+02     -5.306377E+02
C08      4.469705E+00      9.326829E+00      9.464406E+01      1.189919E+02
C12     -5.255054E+00     -1.992458E-01     -1.992624E-01     -1.992613E-01
C13     -6.471591E+01     -6.469729E+01     -6.228464E+01     -6.154768E+0

Besides the above-mentioned benefit, the proposed injection mechanism can help the algorithm escape from local optima. This can be seen from the convergence plots in Fig. 3.

(a) C01 (10D)

(b) C08 (10D), the y-axis is in a log scale

Fig. 3. The convergence plots of the proposed algorithm with and without the injection mechanism for C01 and C08 with 10D

For further clarification, the changes in population size for two problems (C01 and C08) are depicted in Fig. 4.

(a) C01 (10D)

(b) C08 (10D)

Fig. 4. The change of population size for C01 and C08 with 10D for one run

V. CONCLUSIONS AND FUTURE WORK

There is no doubt that the particle swarm optimizer is highly dependent on its algorithmic parameters and that its performance is poor when solving constrained problems, unless it is hybridized with other search techniques, as it can easily be trapped in local solutions.

Bearing these facts in mind, in this paper an improved PSO algorithm was proposed for solving constrained problems; however, it can also be applied to unconstrained ones. In the proposed algorithm, an archive of both the best and the more diversified individuals was used. At the early stages of the evolution process, the algorithm places more emphasis on the local PSO variant, and then adaptively emphasizes the best individuals. Furthermore, a new self-adaptive mechanism was proposed to adapt the PSO parameters. In addition, a diversity mechanism was embedded in PSO to allow it to escape from local optimum solutions.

The algorithm was tested on the CEC2010 test problems and showed better performance in comparison to other state-of-the-art algorithms. Also, the proposed diversity mechanism was able to help the algorithm get away from local solutions.

For future work, we intend to analyse all parameters as well as the computational time of the algorithm. In addition, we intend to apply the algorithm to solving real-world applications.

REFERENCES

[1] J. Kennedy and R. Eberhart, “Particle swarm optimization,” in proceeding IEEE International Conference on Neural Networks, 1995, pp. 1942-1948.

[2] M. A. Montes de Oca, J. Pena, T. Stutzle, C. Pinciroli, and M. Dorigo, “Heterogeneous particle swarm optimizers,” in proceeding IEEE Congress on Evolutionary Computation, 2009, pp. 698-705.

[3] K. E. Parsopoulos and M. N. Vrahatis, “Particle swarm optimization method for constrained optimization problems,” Intelligent Technologies–Theory and Application: New Trends in Intelligent Technologies, vol. 76, pp. 214-220, 2002.

[4] G. Yue-Jiao, Z. Jun, H. S. Chung, C. Wei-Neng, Z. Zhi-Hui, L. Yun, and S. Yu-Hui, “An Efficient Resource Allocation Scheme Using Particle Swarm Optimization,” IEEE Transactions on Evolutionary Computation, vol. 16, pp. 801-816, 2012.

[5] D. Goldberg, Genetic Algorithms in Search, Optimization, and Machine Learning. MA: Addison-Wesley, 1989.

[6] R. Storn and K. Price, “Differential Evolution - A simple and efficient adaptive scheme for global optimization over continuous spaces,” International Computer Science Institute Technical Report, TR-95-012, 1995.

[7] S. Elsayed, R. Sarker, and D. Essam, “Memetic Multi-Topology Particle Swarm Optimizer for Constrained Optimization,” in proceeding IEEE Congress on Evolutionary Computation, World Congress on Computational Intelligence (WCCI2012), Brisbane, 2012.

[8] C. A. C. Coello, G. T. Pulido, and M. S. Lechuga, “Handling multiple objectives with particle swarm optimization,” IEEE Transactions on Evolutionary Computation, vol. 8, pp. 256-279, 2004.

[9] M. Salazar-Lechuga and J. E. Rowe, “Particle swarm optimization and fitness sharing to solve multi-objective optimization problems,” in proceeding The 2005 IEEE Congress on Evolutionary Computation, 2005, pp. 1204-1211, Vol. 2.

[10] D. Parrott and L. Xiaodong, “Locating and tracking multiple dynamic optima by a particle swarm model using speciation,” IEEE Transactions on Evolutionary Computation, vol. 10, pp. 440-458, 2006.

[11] A. Auger and N. Hansen, “A restart CMA evolution strategy with increasing population size,” in proceeding IEEE Congress on Evolutionary Computation, 2005, pp. 1769-1776.

[12] J. A. Vrugt, B. A. Robinson, and J. M. Hyman, “Self-Adaptive Multimethod Search for Global Optimization in Real-Parameter Spaces,” IEEE Transactions on Evolutionary Computation, vol. 13, pp. 243-259, 2009.

[13] R. Mallipeddi and P. N. Suganthan, “Problem definitions and evaluation criteria for the CEC 2010 competition and special session on single objective constrained real-parameter optimization,” Technical Report, Nanyang Technological University, Singapore, 2010.

[14] Y. del Valle, G. K. Venayagamoorthy, S. Mohagheghi, J. C. Hernandez, and R. G. Harley, “Particle Swarm Optimization: Basic Concepts, Variants and Applications in Power Systems,” IEEE Transactions on Evolutionary Computation, vol. 12, pp. 171-195, 2008.

[15] J. J. Liang and P. N. Suganthan, “Dynamic Multi-Swarm Particle Swarm Optimizer with a Novel Constraint-Handling Mechanism,” in proceeding IEEE Congress on Evolutionary Computation, 2006, pp. 9-16.

[16] M. Clerc and J. Kennedy, “The particle swarm - explosion, stability, and convergence in a multidimensional complex space,” IEEE Transactions on Evolutionary Computation, vol. 6, pp. 58-73, 2002.


[17] E. Mezura-Montes and J. Flores-Mendoza, “Improved Particle Swarm Optimization in Constrained Numerical Search Spaces. Nature-Inspired Algorithms for Optimisation.” vol. 193, R. Chiong, Ed., ed: Springer Berlin / Heidelberg, 2009, pp. 299-332.

[18] R. Mendes, J. Kennedy, and J. Neves, “The fully informed particle swarm: simpler, maybe better,” IEEE Transactions on Evolutionary Computation, vol. 8, pp. 204-210, 2004.

[19] K. Parsopoulos and M. Vrahatis, “Unified Particle Swarm Optimization for Solving Constrained Engineering Optimization Problems. Advances in Natural Computation.” vol. 3612, L. Wang, et al., Eds., ed: Springer Berlin / Heidelberg, 2005, pp. 582–591.

[20] B. Y. Qu, P. N. Suganthan, and S. Das, “A Distance-based Locally Informed Particle Swarm Model for Multimodal Optimization,” IEEE Transactions on Evolutionary Computation, vol. PP, pp. 1-1, 2012.

[21] L. C. Cagnina, S. C. Esquivel, and C. A. Coello Coello, “Solving constrained optimization problems with a hybrid particle swarm optimization algorithm,” Engineering Optimization, vol. 43, pp. 843-866, 2011.

[22] X. Hu and R. Eberhart, “Solving Constrained Nonlinear Optimization Problems with Particle Swarm Optimization,” in proceeding the 6th World Multiconference on Systemics, Cybernetics and Informatics, 2002, pp. 203-206.

[23] G. T. Pulido and C. A. Coello Coello, “A constraint-handling mechanism for particle swarm optimization,” in proceeding IEEE Congress on Evolutionary Computation, 2004, pp. 1396-1403.

[24] R. A. Krohling and L. dos Santos Coelho, “Coevolutionary Particle Swarm Optimization Using Gaussian Distribution for Solving Constrained Optimization Problems,” IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 36, pp. 1407-1416, 2006.

[25] Q. He and L. Wang, “An effective co-evolutionary particle swarm optimization for constrained engineering design problems,” Engineering Applications of Artificial Intelligence, vol. 20, pp. 89-99, 2007.

[26] T. Ray and K. M. Liew, “A swarm with an effective information sharing mechanism for unconstrained and constrained single objective optimisation problems,” in proceeding IEEE Congress on Evolutionary Computation, 2001, pp. 75-80.

[27] S. Elsayed, R. Sarker, and T. Ray, “Parameters Adaptation in Differential Evolution,” in proceeding IEEE Congress on Evolutionary Computation, Brisbane, 2012, pp. 1-8.

[28] S. Elsayed and R. Sarker, “Differential Evolution with Automatic Population Injection Scheme,” in proceeding IEEE Symposium Series on Computational Intelligence, Singapore, 2013, accepted.

[29] K. Deb, “An Efficient Constraint Handling Method for Genetic Algorithms,” Computer Methods in Applied Mechanics and Engineering, vol. 186, pp. 311-338, 2000.

[30] T. Takahama and S. Sakai, “Constrained optimization by the ε constrained differential evolution with an archive and gradient-based mutation,” in proceeding IEEE Congress on Evolutionary Computation, 2010, pp. 1-9.

[31] J. J. Liang, Z. Shang, and Z. Li, “Coevolutionary Comprehensive Learning Particle Swarm Optimizer,” in proceeding IEEE Congress on Evolutionary Computation 2010, pp. 1-8.

[32] G. W. Corder and D. I. Foreman, Nonparametric Statistics for Non-Statisticians: A Step-by-Step Approach. Hoboken, NJ: John Wiley, 2009.

[33] H. J. C. Barbosa, H. S. Bernardino, and A. M. S. Barreto, “Using performance profiles to analyze the results of the 2006 CEC constrained optimization competition,” in proceeding IEEE Congress on Evolutionary Computation (CEC), 2010, pp. 1-8.


APPENDIX A

FUNCTION VALUES ACHIEVED BY API-PSO, εDEag AND CO-CLPSO FOR THE CEC2010 TEST PROBLEMS.

Prob.  Alg.  |  10D: Best, Mean, St. d  |  30D: Best, Mean, St. d
(each row below gives, for one problem, the results of API-PSO, εDEag and CO-CLPSO in turn)

C01 API-PSO -7.473104E-01 -7.467701E-01 1.869851E-03 -8.199343E-01 -7.991352E-01 1.480511E-02 εDEag -7.473104E-01 -7.470402E-01 1.323339E-03 -8.218255E-01 -8.208687E-01 7.103893E-04 CO-CLPSO -7.4731E-01 -7.3358E-01 1.7848E-02 -8.0688E-01 -7.1598E-01 5.0252E-02

C02 API-PSO -2.277710E+00 -2.277706E+00 3.651729E-06 -2.280920E+00 -2.271482E+00 5.597414E-03 εDEag -2.277702E+00 -2.269502E+00 2.3897790E-02 -2.169248E+00 -2.151424E+00 1.197582E-02 CO-CLPSO -2.2777 -2.2666 1.4616E-02 -2.2809 -2.2029 1.9267E-01

C03 API-PSO 0.00000E+00 0.000000E+00 0.000000E+00 1.482175E-22 5.835052E-16 2.342356E-15 εDEag 0.000000E+00 0.000000E+00 0.0000000E+00 2.867347E+01 2.883785E+01 8.047159E-01 CO-CLPSO 2.4748E-13 3.5502E-01 1.77510 - - -

C04 API-PSO -1.0000E-05 -1.0000E-05 0.0000000E+00 -3.333286E-06 -3.332210E-06 1.020993E-09 εDEag -9.992345E-06 -9.918452E-06 1.5467300E-07 4.698111E-03 8.162973E-03 3.067785E-03 CO-CLPSO -1.0000E-05 -9.3385E-06 1.0748E-06 -2.9300E-06 1.1269E-01 5.6335E-01

C05 API-PSO -4.836106E+02 -4.836106E+02 0.0000000E+00 -4.836106E+02 -4.531034E+02 4.094375E+01 εDEag -4.836106E+02 -4.836106E+02 3.89035E-13 -4.531307E+02 -4.495460E+02 2.899105E+00 CO-CLPSO -4.8361E+02 -4.8360E+02 1.9577E-02 -4.8360E+02 -3.1249E+02 8.8332E+01

C06 API-PSO -5.786624E+02 -5.775752E+02 5.398528E-01 -5.306379E+02 -5.306378E+02 5.513569E-05 εDEag -5.786581E+02 -5.786528E+02 3.6271690E-03 -5.285750E+02 -5.279068E+02 4.748378E-01 CO-CLPSO -5.7866E+02 -5.7866E+02 5.7289E-04 -2.8601E+02 -2.4470E+02 3.9481E+01

C07 API-PSO 0.00000E+00 0.000000E+00 0.0000000E+00 8.478939E-24 2.477412E-15 1.132094E-14 εDEag 0.00000E+00 0.000000E+00 0.0000000E+00 1.147112E-15 2.603632E-15 1.233430E-15 CO-CLPSO 1.0711E-09 7.9732E-01 1.6275 3.7861E-11 1.1163 1.8269

C08 API-PSO 0.000000E+00 4.469705E+00 4.453766E+00 9.422069E-22 9.464406E+01 9.687806E+01 εDEag 0.00000E+00 6.727528E+00 5.560648E+00 2.518693E-14 7.831464E-14 4.855177E-14 CO-CLPSO 9.6442E-10 6.0876E-01 1.4255 4.3114E-14 4.7517E+01 1.1259E+02

C09 API-PSO 0.000000E+00 3.102937E-27 7.259108E-27 4.871558E-22 8.786423E-16 3.372742E-15 εDEag 0.00000E+00 0.000000E+00 0.0000000E+00 2.770665E-16 1.072140E+01 2.821923E+01 CO-CLPSO 3.7551E-16 1.9938E+10 9.9688E+10 1.9695E+02 1.4822E+08 2.4509E+08

C10 API-PSO 0.000000E+00 0.000000E+00 0.000000E+00 7.891501E-24 1.727637E-16 3.794349E-16 εDEag 0.000000E+00 0.000000E+00 0.0000000E+00 3.252002E+01 3.326175E+01 4.545577E-01 CO-CLPSO 2.3967E-15 4.9743E+10 2.4871E+11 3.1967E+01 1.3951E+09 5.8438E+09

C11 API-PSO -1.522713E-03 -1.522713E-03 8.025779E-14 -3.923437E-04 -3.923419E-04 2.933523E-09 εDEag -1.52271E-03 -1.52271E-03 6.3410350E-11 -3.268462E-04 -2.863882E-04 2.707605E-05 CO-CLPSO - - - - - -

C12 API-PSO -1.141266E+02 -5.255054E+00 2.281799E+01 -1.992634E-01 -1.992624E-01 6.990582E-07 εDEag -5.700899E+02 -3.367349E+02 1.7821660E+02 -1.991453E-01 3.562330E+02 2.889253E+02 CO-CLPSO -1.2639E+01 -2.3369 2.4329E+01 -1.9926E-01 -1.9911E-01 1.1840E-04

C13 API-PSO -6.842937E+01 -6.471591E+01 1.846606E+00 -6.437975E+01 -6.228464E+01 1.048709E+00 εDEag -6.842937E+01 -6.842936E+01 1.0259600E-06 -6.642473E+01 -6.535310E+01 5.733005E-01 CO-CLPSO -6.842936E+01 -6.397445E+01 2.134080E+00 -6.2752E+01 -6.0774E+01 1.1176

C14 API-PSO 0.000000E+00 0.000000E+00 0.000000E+00 8.380874E-25 1.883721E-10 9.367101E-10 εDEag 0.000000E+00 0.000000E+00 0.0000000E+00 5.015863E-14 3.089407E-13 5.608409E-13 CO-CLPSO 5.7800E-12 3.1893E-01 1.1038 3.28834e-09 0.0615242 0.307356

C15 API-PSO 0.000000E+00 0.000000E+00 0.000000E+00 2.879271E-21 1.686001E-01 8.430006E-01 εDEag 0.000000E+00 1.798980E-01 8.8131560E-01 2.160345E+01 2.160376E+01 1.104834E-04 CO-CLPSO 3.0469E-12 2.9885 3.3147 5.7499E-12 5.1059E+01 9.1759E+01

C16 API-PSO 0.000000E+00 0.000000E+00 0.000000E+00 0.000000E+00 0.000000E+00 0.000000E+00 εDEag 0.000000E+00 3.702054E-01 3.7104790E-01 0.000000E+00 2.168404E-21 1.062297E-20 CO-CLPSO 0.000000E+00 5.9861E-03 1.3315E-02 0.000000E+00 5.2403E-16 4.6722E-16

C17 API-PSO 0.000000E+00 2.958228E-33 3.142513E-33 2.173439E-03 1.185898E-01 1.486494E-01 εDEag 1.463180E-17 1.249561E-01 1.9371970E-01 2.165719E-01 6.326487E+00 4.986691E+00 CO-CLPSO 7.6677E-17 3.7986E-01 4.5284E-01 1.5787E-01 1.3919 4.2621

C18 API-PSO 7.257520E-29 6.728198E-23 1.394038E-22 7.048406E-12 2.323985E-04 1.119240E-03 εDEag 3.731440E-20 9.678765E-19 1.8112340E-18 1.226054E+00 8.754569E+01 1.664753E+02 CO-CLPSO 7.7804E-21 2.3192E-01 9.9559E-01 6.0047E-02 1.0877E+01 3.7161E+01
