Multiple Particle Swarm Optimizers with Inertia Weight with Diversive Curiosity and Its

Performance Test ∗

Hong Zhang, Member IAENG †

Abstract— This paper presents a new method of curiosity-driven multi-swarm search, called multiple particle swarm optimizers with inertia weight with diversive curiosity (MPSOIWα/DC). Compared to a plain MPSOIW, it has the following outstanding features: (1) Decentralization in multi-swarm exploration with hybrid search, (2) Concentration in evaluation and behavior control with diversive curiosity, (3) Practical use of the results of the evolutionary PSOIW, and (4) Their effective combination. This achievement expands the applied objects of cooperative PSO through the multi-swarm's decision-making. To demonstrate the effectiveness of the proposal, computer experiments on a suite of multidimensional benchmark problems are carried out. We examine the intrinsic characteristics of the proposal, and compare its search performance with that of other methods. The obtained experimental results clearly indicate that the search performance of the MPSOIWα/DC is superior to that of the EPSOIW, PSOIW, OPSO, RGA/E, and MPSOα/DC on the given benchmark problems.

Keywords: cooperative particle swarm optimization, evolutionary particle swarm optimizer with inertia weight, hybrid search, localized random search, stagnation, exploitation and exploration, diversive and specific curiosity, swarm intelligence

1 Introduction

In a few words, optimization is to find the most suitable value of a function within a given domain. Traditional optimization methods such as the steepest descent method, the conjugate gradient method, and quasi-Newton methods may be good at solution accuracy and exact computation, but their operations are brittle and they require detailed information about the search environment. In contrast, the methods of genetic and evolutionary computation (GEC)1 provide a more robust, efficient, and expandable approach for treating highly nonlinear, multimodal optimization problems and complex practical problems in the real world without prior knowledge [13, 18, 28].

∗This paper was originally presented at IMECS2011 [37]. This is a substantially extended version.

†Hong Zhang is with the Department of Brain Science and Engineering, Graduate School of Life Science & Systems Engineering, Kyushu Institute of Technology, 2-4 Hibikino, Wakamatsu, Kitakyushu 808-0196, Japan (phone/fax: +81-93-695-6112; email: [email protected]).


As a new member of GEC, particle swarm optimization (PSO) [8, 15] has been successfully applied in different fields of science, technology, engineering, and applications [22]. This is because the technique has distinctive features, i.e. information exchange, intrinsic memory, and directional search in its mechanism and composition, compared to other members such as genetic algorithms [12] and evolutionary programming [11].

For improving the convergence, solution accuracy, and search efficiency of a plain particle swarm optimizer (the PSO), many basic variants of the PSO were proposed, such as the particle swarm optimizer with inertia weight [24], the canonical particle swarm optimizer [5, 6], and the fully informed particle swarm [16]. The principal objective of these optimizers (algorithms) was to strengthen the search strategy and information transfer within a particle swarm, increasing diversification for efficient search. In recent years especially, the method of multi-swarm search has been developing rapidly from the basis of single-swarm search. A large number of studies and investigations on cooperative PSO2 in relation to symbiosis, interaction, and synergy are in the researchers' spotlight. Various kinds of cooperative PSO algorithms, for example hybrid PSO, multi-population PSO, and multiple PSO with a decision-making strategy, were published [1, 10, 20, 32] for attaining higher search ability and solution accuracy by deepening group search with cooperative actions.

In comparison with those methods that operate only a single particle swarm, it is an indisputable fact that different attempts and strategies for reinforcing multi-swarm search can be perfected.

1GEC usually refers to genetic algorithms (GAs), genetic programming (GP), evolutionary programming (EP), and evolution strategies (ES).

2Cooperative PSO is generally considered as multiple swarms (or sub-swarms) searching for a solution (serially or in parallel) and exchanging some information during the search according to some communication strategy.


These attempts mainly focus on the rationality of information propagation, cooperation, optimization, hierarchical expression, and intelligent control within the particle swarms for efficiently finding an optimal solution in a wide search space. Regarding their effect, many publications and reports have shown that the methods of cooperative PSO have better adaptability and extensibility than those of uncooperative PSO for dealing with different optimization problems [4, 14, 20, 36].

Owing to the great demand for higher search performance, utilizing the techniques of group search and parallel processing with intelligent strategies has become an extremely important approach to optimization. For expanding cooperative PSO research and improving the search performance of the plain multiple particle swarm optimizers with inertia weight (MPSOIW), this paper presents multiple particle swarm optimizers with inertia weight with diversive curiosity (MPSOIWα/DC), a new method of cooperative PSO.

In comparison with the plain MPSOIW, the proposal has the following outstanding features: (1) Decentralization in multi-swarm exploration with hybrid search3 (MPSOIWα), (2) Concentration in evaluation and behavior control with diversive curiosity (DC), (3) Practical use of the results of the evolutionary PSOIW (PSOIW∗), and (4) Their effective combination. Owing to these features, the MPSOIWα/DC can be expected to alleviate stagnation in optimization, and to enhance search ability and solution accuracy by enforcing group decision-making and managing the trade-off between exploitation and exploration in the multi-swarm's heuristics.

From the viewpoint of methodology, the MPSOIWα/DC is an analogue of the method of multiple particle swarm optimization with diversive curiosity [35], which has been successfully applied to the plain multiple particle swarm optimizers (MPSO) and the multiple canonical particle swarm optimizers (MCPSO) [34]. Nevertheless, the creation and actualization of the proposal not only improve the search performance of the plain MPSOIW, but also expand the applied objects and area of curiosity-driven multi-swarm search. This is precisely our motivation and study purpose: to further deepen the approach of cooperative PSO in an integrated way, i.e. reinforcement of hybrid search, parameter selection, and swarm intelligence.

The rest of the paper is arranged as follows. In Section 2, the algorithms of the PSO and PSOIW are briefly described. Section 3 introduces the structure and features of the MPSOIWα/DC, and the adopted localized random search (LRS) and internal indicator, respectively. Section 4 analyzes and discusses the experimental results for a suite of multidimensional benchmark problems to verify the effectiveness of the proposed method.

3The hybrid search, here, is composed of the particle swarm optimizer with inertia weight and the localized random search.

Finally, the concluding remarks appear in Section 5.

2 Basic Algorithms

For convenience in the following description of the PSO and PSOIW, let the search space be $N$-dimensional, $\Omega \in \Re^N$; the number of particles in a swarm be $P$; and the position and velocity of the $i$-th particle be $\vec{x}^{\,i} = (x^i_1, x^i_2, \cdots, x^i_N)^T$ and $\vec{v}^{\,i} = (v^i_1, v^i_2, \cdots, v^i_N)^T$, respectively.

2.1 The PSO

At the beginning of the PSO search, the position and velocity of the $i$-th particle are generated at random; they are then updated by

$$\vec{x}^{\,i}_{k+1} = \vec{x}^{\,i}_k + \vec{v}^{\,i}_{k+1}$$
$$\vec{v}^{\,i}_{k+1} = w_0\,\vec{v}^{\,i}_k + w_1\,\vec{r}_1 \otimes (\vec{p}^{\,i}_k - \vec{x}^{\,i}_k) + w_2\,\vec{r}_2 \otimes (\vec{q}_k - \vec{x}^{\,i}_k) \qquad (1)$$

where $w_0$ is an inertia coefficient, $w_1$ is a coefficient for individual confidence, and $w_2$ is a coefficient for swarm confidence. $\vec{r}_1, \vec{r}_2 \in \Re^N$ are two random vectors in which each element is uniformly distributed over $[0, 1]$, and the symbol $\otimes$ is an element-wise operator for vector multiplication. $\vec{p}^{\,i}_k \,\bigl(= \arg\max_{j=1,\cdots,k} g(\vec{x}^{\,i}_j)$, where $g(\cdot)$ is the criterion value of the $i$-th particle at time-step $k\bigr)$ is the local best position of the $i$-th particle up to now, and $\vec{q}_k \,\bigl(= \arg\max_{i=1,2,\cdots} g(\vec{p}^{\,i}_k)\bigr)$ is the global best position among the whole swarm. In the original PSO, the parameter values $w_0 = 1.0$ and $w_1 = w_2 = 2.0$ are used [15].

To prevent particles from spreading out to infinity during the search, a boundary value, $v_{max}$, is introduced into the above update rule to limit the maximum velocity of each particle:

$$v^{ij}_k = \begin{cases} v_{max}, & \text{if } v^{ij}_k > v_{max} \\ -v_{max}, & \text{if } v^{ij}_k < -v_{max} \end{cases} \qquad (2)$$

where $v^{ij}_k$ is the $j$-th element of the $i$-th particle's velocity $\vec{v}^{\,i}_k$ at time-step $k$.
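To make the update rule concrete, the following is a minimal sketch of one PSO step following Eqs. (1) and (2). It is our own illustration rather than the authors' code; the function name and array shapes are assumptions, and the defaults use the original parameter values together with $v_{max} = 5.12$ from Table 2.

```python
import numpy as np

def pso_step(x, v, p_best, q_best, w0=1.0, w1=2.0, w2=2.0, v_max=5.12):
    """One PSO update following Eqs. (1)-(2).

    x, v   : (P, N) arrays of current positions and velocities
    p_best : (P, N) local best positions of the particles
    q_best : (N,)   global best position of the whole swarm
    """
    P, N = x.shape
    r1 = np.random.rand(P, N)    # elements of r1, r2 uniform over [0, 1]
    r2 = np.random.rand(P, N)
    # Eq. (1): inertia + individual confidence + swarm confidence
    v = w0 * v + w1 * r1 * (p_best - x) + w2 * r2 * (q_best - x)
    v = np.clip(v, -v_max, v_max)    # Eq. (2): clamp every velocity component
    x = x + v
    return x, v
```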

2.2 The PSOIW

As is commonly known, weak convergence is a disadvantage of the PSO [2, 5]. This is because the value of the inertia coefficient, $w_0 = 1.0$, is not the best one for managing the trade-off between exploitation and exploration [30, 31, 32]. For improving the convergence of the PSO by achieving a search shift that smoothly changes from exploratory mode to exploitative mode during optimization, Shi et al. modified the update rule of the particle's velocity in Eq. (1) by constantly reducing the inertia coefficient over time-steps [9, 24] as follows.

$$\vec{v}^{\,i}_{k+1} = w(k)\,\vec{v}^{\,i}_k + w_1\,\vec{r}_1 \otimes (\vec{p}^{\,i}_k - \vec{x}^{\,i}_k) + w_2\,\vec{r}_2 \otimes (\vec{q}_k - \vec{x}^{\,i}_k) \qquad (3)$$


where $w(k)$ is a variable inertia weight which is linearly reduced from a starting value, $w_s$, to a terminal value, $w_e$, with the increment of time-step $k$ as follows:

$$w(k) = w_s + \frac{w_e - w_s}{K} \times k \qquad (4)$$

where $K$ is the maximum number of iterations for the PSOIW search. In the original PSOIW, the boundary values are set to $w_s = 0.9$ and $w_e = 0.4$, respectively, and $w_1 = w_2 = 2.0$ are kept as in the original PSO. Since the terminal value, $w_e$, is smaller than 1.0, it is clear that the PSOIW has better convergence than the PSO.
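The time-varying weight of Eq. (4) drops directly into the update of Eq. (3); a small sketch, reusing the hypothetical pso_step above (only the constant $w_0$ changes):

```python
def psoiw_step(x, v, p_best, q_best, k, K, w_s=0.9, w_e=0.4,
               w1=2.0, w2=2.0, v_max=5.12):
    """PSOIW update: Eq. (3) with the linearly damped weight of Eq. (4)."""
    w_k = w_s + (w_e - w_s) * k / K   # Eq. (4): w_s at k = 0, w_e at k = K
    return pso_step(x, v, p_best, q_best, w0=w_k, w1=w1, w2=w2, v_max=v_max)
```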

Based on the effect of the linearly damped inertia weight, the weakness of the PSO is overcome, and the solution accuracy is also improved by the PSOIW run. However, the search results often converge to local solutions when treating multimodal problems. Moreover, the phenomenon called stagnation in optimization is easily caused as the terminal value $w_e$ becomes small. Consequently, this fault is fatal when the PSOIW alone is used to treat complex optimization problems.

3 The MPSOIWα/DC

In order to thoroughly conquer the above-mentioned shortcoming of the PSOIW, we propose to use multiple particle swarm optimizers with inertia weight with diversive curiosity, called MPSOIWα/DC. This is a powerful method integrating different approaches, which includes the practical use of parameter selection, and intelligent, hybrid, multi-swarm search, to comprehensively manage the trade-off between exploitation and exploration in the multi-swarm's heuristics and to alleviate stagnation by group decision-making.

Concretely, Figure 1 illustrates a flowchart of the MPSOIWα/DC, which shows the data processing and information control in the method. Its procedure is the same as that of the MPSOα/DC [35], except that the PSOIW is implemented instead of the PSO. Its detailed characteristics are described below.

Figure 1: A flowchart of the MPSOIWα/DC

In the MPSOIWα/DC, plural PSOIWs are executed in parallel, and the localized random search (LRS) [35] is implemented to find the most suitable solution from a limited space around the solution found by each PSOIW. The consecutive action of the PSOIW and the LRS here constitutes a hybrid search (i.e. a memetic algorithm [19]). Then the best solution, $\vec{q}^{\,b}_k$, is determined by maximum selection from the whole set of solutions found by the multi-swarm search (i.e. redundant search). Subsequently, it is put into a solution set serving as a storage memory for information processing.

As an internal indicator in the multi-swarm, its role is to monitor whether the status of the best solution $\vec{q}^{\,b}_k$ continues to change or not at every time-step, making up the concentration in evaluation and behavior control. Concretely, while the value of the output $y_k$ is zero, the multi-swarm concentrates on exploring the surroundings of the solution $\vec{q}^{\,b}_k$ (“cognition”). Once the value of the output $y_k$ becomes positive, it indicates that the multi-swarm has lost interest in searching the region around the solution $\vec{q}^{\,b}_k$, i.e. it has a feeling of boredom (“motivation”).


Table 1: Functions and criteria of the given suite of multi-benchmark problems (the original table also shows each function's landscape for $N = 2$). The search space for each problem is limited to $\Omega \in (-5.12, 5.12)^N$; every criterion has the common form $g(\vec{x}) = \frac{1}{f(\vec{x}) + 1}$.

Sphere: $f_{Sp}(\vec{x}) = \sum_{d=1}^{N} x_d^2$

Griewank: $f_{Gr}(\vec{x}) = \frac{1}{4000}\sum_{d=1}^{N} x_d^2 - \prod_{d=1}^{N} \cos\!\left(\frac{x_d}{\sqrt{d}}\right) + 1$

Rastrigin: $f_{Ra}(\vec{x}) = \sum_{d=1}^{N} \left(x_d^2 - 10\cos(2\pi x_d) + 10\right)$

Rosenbrock: $f_{Ro}(\vec{x}) = \sum_{d=1}^{N-1} \left[100\left(x_{d+1} - x_d^2\right)^2 + \left(1 - x_d\right)^2\right]$

Schwefel: $f_{Sw}(\vec{x}) = \sum_{d=1}^{N} \left(\sum_{j=1}^{d} x_j\right)^2$

Hybrid: $f_{Hy}(\vec{x}) = f_{Ra}(\vec{x}) + 2 f_{Sw}(\vec{x}) + \frac{1}{12} f_{Gr}(\vec{x}) + \frac{1}{20} f_{Sp}(\vec{x})$

with the criteria $g_{Sp}, g_{Gr}, g_{Ra}, g_{Ro}, g_{Sw}, g_{Hy}$ defined accordingly.
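As a concrete reading of Table 1, the sketch below implements one of the listed functions (Rastrigin) together with the common criterion transformation; the function names are our own.

```python
import numpy as np

def f_rastrigin(x):
    """Rastrigin function of Table 1; its minimum is 0 at the origin."""
    return np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0)

def criterion(f, x):
    """Common criterion of Table 1: g(x) = 1 / (f(x) + 1), so g -> 1
    as x approaches the optimum and g -> 0 far away from it."""
    return 1.0 / (f(x) + 1.0)

print(criterion(f_rastrigin, np.zeros(5)))   # N = 5 as in the experiments -> 1.0
```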

The psychological concepts of “cognition” and “motivation” are explained further in the formulation below.

Due to the big reduction of boredom behavior in the whole multi-swarm search, the efficiency of finding an optimal or near-optimal solution will be drastically improved. It is to be noted here that the repeated initialization decided by the signal $d_k = 1$ in Figure 1 is merely one expression style that introduces the mechanism of diversive curiosity for realizing an active search. Of course, this implementation style is not the only one; it can also be performed by other operations in practice.

For convenience in understanding the details of the two important parts, i.e. the LRS and the internal indicator, their mechanisms are described minutely in the next subsections.

3.1 The LRS

As is generally known, random search methods are the simplest stochastic optimization methods with undirected search, and they are effective and robust in handling many complex optimization problems [25]. To efficiently obtain an optimal solution by using a hybrid search method, we introduce the LRS [34] into the MPSOIWα/DC to find the most suitable solution from a limited space surrounding the solution found by each PSOIW. With this additional operation, the probability of escaping from a local solution rises steeply, for efficient searching.

Concretely, the procedure of the LRS is implemented as follows.

step-1: Let $\vec{q}^{\,s}_k$ be a solution found by the $s$-th particle swarm at time-step $k$, and set $\vec{q}^{\,s}_{now} = \vec{q}^{\,s}_k$. Give the terminating condition, $J$ (the total number of LRS runs), and set $j = 1$.

step-2: Generate a random vector $\vec{d}_j \in \Re^N \sim N(0, \sigma_N^2)$ (where $\sigma_N$ is a small positive value given by the user, which determines the small limited space). Check whether $\vec{q}^{\,s}_k + \vec{d}_j \in \Omega$ is satisfied or not; if $\vec{q}^{\,s}_k + \vec{d}_j \notin \Omega$, then adjust $\vec{d}_j$ to move $\vec{q}^{\,s}_k + \vec{d}_j$ to the nearest valid point within $\Omega$. Set $\vec{q}_{new} = \vec{q}^{\,s}_k + \vec{d}_j$.

step-3: If $g(\vec{q}_{new}) > g(\vec{q}^{\,s}_{now})$, then set $\vec{q}^{\,s}_{now} = \vec{q}_{new}$.

step-4: Set $j = j + 1$. If $j \le J$, then go to step-2.

step-5: Set $\vec{q}^{\,s}_k = \vec{q}^{\,s}_{now}$ to correct the solution found by the $s$-th particle swarm at time-step $k$. Stop the search.
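A sketch of these steps in Python; the box-constraint handling by clipping and the defaults ($J = 10$, $\sigma_N^2 = 0.05$, $\Omega = (-5.12, 5.12)^N$ from Table 2) are our reading of the procedure, not the authors' code.

```python
import numpy as np

def localized_random_search(q, g, J=10, sigma=0.05**0.5,
                            lower=-5.12, upper=5.12):
    """LRS: refine the solution q of one swarm by J Gaussian perturbations.

    q : (N,) solution found by the s-th swarm;  g : criterion to maximize
    """
    q_now = q.copy()                                     # step-1
    for _ in range(J):                                   # step-4 loop counter
        d = np.random.normal(0.0, sigma, size=q.shape)   # step-2
        # the perturbation is always applied to the original q, and the
        # result is moved back to the nearest valid point of Omega if needed
        q_new = np.clip(q + d, lower, upper)
        if g(q_new) > g(q_now):                          # step-3: keep improvements
            q_now = q_new
    return q_now                                         # step-5: corrected solution
```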

Due to the complementary nature of the hybrid search used, the correctional function appears close in search effect to the HGAPSO [14], which implements a plain GA and the PSO with mixed operations for improving the adaptation to treat various blended-distribution problems.


Table 2: The major parameters used in the MPSOIWα/DC.

the number of individuals, $M$: 10
the number of generations, $G$: 20
the number of particles, $P$: 10
the number of particle swarms, $S$: 3
the duration of judgment, $L$: 10∼90
the number of iterations, $K$: 400
the maximum velocity, $v_{max}$: 5.12
the range of the LRS, $\sigma_N^2$: 0.05
the number of LRS runs, $J$: 10
the tolerance coefficient, $\varepsilon$: $10^{-6} \sim 10^{-2}$

Table 3: The resulting appropriate values of parameters in the PSOIW for each given 5D benchmark problem.

Problem      $w_s$            $w_e$            $w_1$            $w_2$
Sphere       0.7267±0.0521    0.1449±0.1574    1.2116±0.6069    1.9266±0.0651
Griewank     0.7761±0.0822    0.2379±0.2008    1.2661±0.6332    0.2827±0.0708
Rastrigin    2.0716±0.9143    0.8816±0.6678    12.942±7.9204    5.0663±1.5421
Rosenbrock   0.7702±0.1660    0.5776±0.2137    1.9274±0.3406    1.9333±0.4541
Schwefel     0.8552±0.3210    0.1253±0.2236    1.6106±1.3345    1.9610±1.3754
Hybrid       1.4767±0.2669    0.6101±0.5335    5.0348±1.7687    9.2134±5.0915

3.2 Internal Indicator

Curiosity, as a general concept in psychology, is an emotion related to the natural inquisitive behavior of humans and animals, and its importance and effect in motivating search cannot be ignored [7, 21]. Berlyne categorized it into two types: diversive curiosity (DC)4 and specific curiosity (SC)5 [3]. On the mechanism of the former, Loewenstein insisted that “diversive curiosity occupies a critical position at the crossroad of cognition and motivation” [17].

4Diversive curiosity signifies the instinct to seek novelty, to take risks, and to search for adventure.

5Specific curiosity signifies the instinct to investigate a specific object for its full understanding.

Based on the assumption that “cognition” is the act of exploitation and “motivation” is the intention to explore, Zhang et al. proposed the following internal indicator for distinguishing and detecting the above two behavior patterns in the multi-swarm search [32, 33].

$$y_k(L, \varepsilon) = \max\left( \varepsilon - \frac{\sum_{l=1}^{L} \left| g(\vec{q}^{\,b}_k) - g(\vec{q}^{\,b}_{k-l}) \right|}{L},\ 0 \right) \qquad (5)$$

where $\vec{q}^{\,b}_k \,\bigl(= \arg\max_{s=1,\cdots,S} g(\vec{q}^{\,s}_k)$, where $S$ is the number of plural particle swarms$\bigr)$ denotes the best solution found by the whole set of particle swarms at time-step $k$. As the two adjustable parameters of the internal indicator, $L$ is the duration of judgment, and $\varepsilon$ is a positive tolerance coefficient (sensitivity).
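A direct transcription of Eq. (5) as a sketch; `g_history` (our name) is assumed to hold the criterion values $g(\vec{q}^{\,b}_k)$ of the best solution, one per time-step, oldest first.

```python
def internal_indicator(g_history, L, eps):
    """Eq. (5): y_k = max(eps - (1/L) * sum_{l=1..L} |g(q_b_k) - g(q_b_{k-l})|, 0).

    Returns 0 while the best solution keeps changing ("cognition"); a
    positive value signals boredom ("motivation"), the re-initialization trigger.
    """
    if len(g_history) < L + 1:
        return 0.0                      # not enough history to judge yet
    g_k = g_history[-1]
    mean_change = sum(abs(g_k - g_history[-1 - l]) for l in range(1, L + 1)) / L
    return max(eps - mean_change, 0.0)
```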

It is obvious that the bigger the value of the coefficient $\varepsilon$ is, the higher the probability of $\frac{1}{L}\sum_{l=1}^{L} \bigl| g(\vec{q}^{\,b}_k) - g(\vec{q}^{\,b}_{k-l}) \bigr| < \varepsilon$ is, and vice versa. The change of the output $y_k$ reflects the result of group decision-making generated by the whole set of particle swarms about the present search. Since ineffective behavior and useless attempts of the multi-swarm search in optimization are overcome by this reliable way of alleviating stagnation, the search efficiency can be greatly enhanced.
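Putting the pieces together, the following is one possible reading of the Figure 1 flow, reusing the psoiw_step, localized_random_search, and internal_indicator sketches above. The dict-based swarm state, the exact order of best-position updates, and whether the LRS correction feeds back into the swarm are our assumptions, not the paper's specification.

```python
import numpy as np

def mpsoiw_dc(g, S=3, P=10, N=5, K=400, L=50, eps=1e-4,
              lo=-5.12, hi=5.12, J=10, sigma=0.05**0.5):
    """Sketch of the MPSOIWα/DC loop: S PSOIWs with the LRS (hybrid search),
    maximum selection of the best solution, and DC-driven re-initialization.
    g maps an (N,) point to its criterion value (to be maximized)."""
    def new_swarms():
        return [{"x": np.random.uniform(lo, hi, (P, N)),
                 "v": np.zeros((P, N))} for _ in range(S)]

    swarms, g_hist, q_b = new_swarms(), [], None
    for k in range(K):
        for s in swarms:
            # local and global bests of this swarm
            if "p" not in s:
                s["p"] = s["x"].copy()
            else:
                for i in range(P):
                    if g(s["x"][i]) > g(s["p"][i]):
                        s["p"][i] = s["x"][i]
            s["q"] = max(s["p"], key=g).copy()
            # hybrid search: PSOIW step (Eqs. 3-4) followed by the LRS
            s["x"], s["v"] = psoiw_step(s["x"], s["v"], s["p"], s["q"], k, K)
            s["q"] = localized_random_search(s["q"], g, J, sigma, lo, hi)
        q_k = max((s["q"] for s in swarms), key=g)   # best of the multi-swarm
        if q_b is None or g(q_k) > g(q_b):
            q_b = q_k                                # store in the solution set
        g_hist.append(g(q_b))
        if internal_indicator(g_hist, L, eps) > 0:   # boredom: signal d_k = 1
            swarms, g_hist = new_swarms(), []        # re-initialize all swarms
    return q_b
```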

4 Computer Experiments

To facilitate comparison and analysis of the performance indexes of the proposed method, we use a suite of multi-benchmark problems [27] and the corresponding criteria in Table 1.

The fitness landscapes of the given benchmark problems in two dimensions (Table 1) clearly display their distribution characteristics: the Sphere problem is unimodal with axial symmetry, the Rosenbrock problem is unimodal with asymmetry, the Schwefel problem is unimodal with line symmetry, the Griewank and Rastrigin problems are multimodal with different distribution densities and axial symmetry, and the Hybrid composition problem is multimodal with different distribution densities and line symmetry.

For dealing with these problems, Table 2 gives the major parameters of the MPSOIWα/DC employed in the following computer experiments.

4.1 Preliminaries

In order to ensure higher search performance of the MPSOIWα/DC, the optimized PSOIW is applied for the accomplishment of an efficient multi-swarm search. To do this, we adopt a method of meta-optimization of the PSOIW heuristics, called the evolutionary particle swarm optimizer with inertia weight (EPSOIW) [38], which estimates appropriate values of the parameters in the PSOIW corresponding to each given problem. The method uses a real-coded genetic algorithm and a cumulative fitness function for stable estimation.



As the result of the meta-optimization runs, Table 3 shows the obtained parameter values of the estimated PSOIW corresponding to each given 5D benchmark problem over 20 trials. We can see that the obtained average of the parameter values of each PSOIW is quite different from that of the original PSOIW. This fact indicates that the parameter values in the original PSOIW are not the best ones for the given problems, that different problems should be solved with different proper parameter values in the PSOIW, and that implementing the EPSOIW to find a rational PSOIW with high performance is necessary.

Since the average of the parameter values $w_e$ is less than 1 for each problem, the search behavior of the particles finally converges at an optimal or near-optimal solution. In contrast to this, the averages of the parameter values $w_1$ and $w_2$ drastically exceed 1 for the Rastrigin and Hybrid problems. This result indicates that the global search has more randomization, without restriction, for efficiently dealing with these two complex multimodal problems by escaping from local minima.

Consequently, the estimated PSOIWs in Table 3, as the optimized optimizers PSOIW∗, are used in the MPSOIWα/DC for improving the convergence and search accuracy, and thereby the search performance of the proposed method on the given benchmark problems.

4.2 Performance of the MPSOIW∗α/DC

Figure 2: The distributions of the average of criterion values and the average of re-initialization frequencies with tuning the parameters, L and ε, for each problem.

For clarifying the characteristics of the proposed method, Figure 2 gives the search performance of the MPSOIW∗α/DC for each benchmark problem over 20 trials while tuning the parameters of the internal indicator, L and ε. The following outstanding features of the MPSOIW∗α/DC are observed.

• The average of re-initialization frequencies increases monotonically with the increment of the tolerance parameter, $\varepsilon$, and the decrement of the duration of judgment, $L$, for each benchmark problem.



• The average of criterion values does not change at all with tuning the parameters, $L$ and $\varepsilon$, for the Rastrigin and Hybrid problems.

• For obtaining better search performance, the following recommended parameter ranges for the MPSOIW∗α/DC are available: $L^*_{Sp} \in (10 \sim 90)$ and $\varepsilon^*_{Sp} \in (10^{-6} \sim 10^{-2})$ for the Sphere problem; $L^*_{Gr} \in (10 \sim 90)$ and $\varepsilon^*_{Gr} \in (10^{-5} \sim 10^{-3})$ for the Griewank problem; $L^*_{Ra} \in (10 \sim 90)$ and $\varepsilon^*_{Ra} \in (10^{-6} \sim 10^{-2})$ for the Rastrigin problem; $L^*_{Ro} \in (10 \sim 90)$ and $\varepsilon^*_{Ro} \in (10^{-5} \sim 10^{-3})$ for the Rosenbrock problem; $L^*_{Sw} \in (10 \sim 90)$ and $\varepsilon^*_{Sw} \in (10^{-6} \sim 10^{-2})$ for the Schwefel problem; and $L^*_{Hy} \in (10 \sim 90)$ and $\varepsilon^*_{Hy} \in (10^{-6} \sim 10^{-2})$ for the Hybrid problem.

As to the results of the Rastrigin and Hybrid problems, the obtained averages of the criterion values in Figure 2(c) and Figure 2(f) are mostly unchanged with tuning the parameters, $L$ and $\varepsilon$. This phenomenon suggests that the optimized PSOIW∗ has powerful search ability for dealing with the two multimodal problems.

On the other hand, due to the stochastic factor in the PSOIW search and the complexity of the Rosenbrock problem, some irregular change in the experimental results can be observed in Figure 2(d). Moreover, because of the effect of the used hybrid search, the fundamental finding in psychology, “the zone of curiosity” [7], is not distinguished except for the Rosenbrock problem. Hence, the surface of “the average of criterion values” appears to be a plane that does not change with the parameters $L$ and $\varepsilon$. This means that the MPSOIW∗α/DC has good adaptability for efficiently dealing with the given benchmark problems within the set ranges of the parameters $L$ and $\varepsilon$.



We also observe that the average of re-initialization frequencies is over 300 in the case of the parameters $L = 10$ and $\varepsilon = 10^{-2}$ for the Rosenbrock problem in Figure 2(d). Since the average of the criterion values is lower there than in the other cases, this result shows that the search behavior of the multi-swarm seems to have entered “the zone of anxiety” [7], which makes the search performance of the MPSOIWα/DC lower. However, although the average of re-initialization frequencies is close to 150 in the same case for the Hybrid problem in Figure 2(f), the situation of anxiety does not appear.

4.3 Performance Comparison

To clarify the intrinsic characteristics and the respective effects of the multi-swarm and hybrid search in the MPSOIWα/DC, the following experiments are carried out.

4.3.1 Optimized Swarms vs. Non-optimized Swarms

Figure 3: The performance comparison between the MPSOIW∗α/DC and MPSOIWα/DC.

For confirming the effectiveness of the optimized PSOIW used in the MPSOIWα/DC, Figure 3 shows the resulting difference, $\Delta_{ON} = g^*_O - g^*_N$ ($g^*_O$: the average of criterion values of the MPSOIW∗α/DC; $g^*_N$: the average of criterion values of the MPSOIWα/DC). The obtained results indicate that the search performance of the MPSOIWα/DC is greatly improved by the introduction of the optimized PSOIW, especially for the Rastrigin, Rosenbrock, and Hybrid problems. Since all of the differences are positive, the effect of the optimized PSOIW in the optimization is remarkable.



4.3.2 Single Swarm vs. Multiple Swarms

Figure 4: The performance comparison between the MPSOIW∗α/DC and PSOIW∗α/DC.

For equal treatment in search, the number of particles used in the single swarm is the same as the total number of particles used in the multi-swarm. Figure 4 shows the resulting difference, $\Delta_{PS} = g^*_P - g^*_S$ ($g^*_P$: the average of criterion values of the MPSOIW∗α/DC; $g^*_S$: the average of criterion values of the PSOIW∗α/DC). We can see that the search performance of the MPSOIW∗α/DC and the PSOIW∗α/DC appears to be the same for the Rastrigin and Hybrid problems. This is because of the effect of the EPSOIW, i.e. the optimized PSOIWs are suitable for efficiently dealing with the Rastrigin and Hybrid problems.

Comparing the differences between the two methods for the Sphere, Rastrigin, Rosenbrock, and Hybrid problems, we observe that the search performance of the MPSOIW∗α/DC is better than that of the PSOIW∗α/DC under the low-sensitivity condition of $\varepsilon \le 10^{-5}$. This fact suggests that superior search performance can be obtained by operating a single swarm under the high-sensitivity condition of $\varepsilon \ge 10^{-4}$.


Table 4: The mean and standard deviation of criterion values of each method for each 5D benchmark problem over 20 trials. The values in bold signify the best result for each problem.

Problem      MPSOIW∗α/DC    EPSOIW         PSOIW          OPSO           RGA/E          MPSOα/DC
Sphere       1.0000±0.000   1.0000±0.000   1.0000±0.000   1.0000±0.000   0.9990±0.000   1.0000±0.000
Griewank     1.0000±0.000   0.9847±0.006   0.8505±0.119   0.9448±0.043   0.9452±0.078   1.0000±0.000
Rastrigin    1.0000±0.000   1.0000±0.000   0.2325±0.159   0.2652±0.118   0.9064±0.225   1.0000±0.000
Rosenbrock   0.9959±0.006   0.6070±0.217   0.5650±0.179   0.3926±0.197   0.3898±0.227   0.9893±0.012
Schwefel     1.0000±0.000   1.0000±0.000   1.0000±0.000   0.7677±0.412   0.9875±0.214   1.0000±0.000
Hybrid       1.0000±0.000   0.8025±0.405   0.3905±0.374   0.3061±0.359   0.1531±0.133   1.0000±0.000

4.3.3 Effect of the LRS

Figure 5: The performance comparison between the MPSOIW∗α/DC and MPSOIW∗/DC.

For investigating the performance difference between the MPSOIW∗α/DC and the MPSOIW∗/DC, Figure 5 shows the experimental results obtained on the same problems. Note that the difference between them is defined by $\Delta_{PN} = g^*_P - g^*_N$ ($g^*_N$: the average of criterion values of the MPSOIW∗/DC). In contrast to the preceding results, the search performance of the MPSOIW∗α/DC is better than that of the MPSOIW∗/DC in most cases for each given benchmark problem except the Rastrigin problem. This fact clearly indicates that the LRS plays an important role in drastically improving the search performance of the MPSOIW∗/DC.

On the other hand, the effect of the LRS is not remarkable for the Sphere, Schwefel, and Hybrid problems in any case. These results fit in with the “no free lunch” (NFL) theorem [29]. They suggest that the effect of the LRS closely depends on the object of search, which is related to how the parameter values for the running number, $J$, and the search range, $\sigma_N^2$, are set, as well as to the inherent features of the given benchmark problems. This is also a hot topic regarding how to rationally manage the trade-off between computational cost and search performance [26]. A detailed discussion of this issue is omitted here.

4.3.4 Comparison with Other Methods

For further illuminating the effectiveness of the proposed method, we compare its search performance with that of other methods such as the EPSOIW, PSOIW, OPSO (optimized particle swarm optimization) [18], and RGA/E.

Table 4 gives the experimental results obtained by implementing these methods with 20 trials. It is clearly shown that the search performance of the MPSOIW∗α/DC is better than that of the EPSOIW, PSOIW, OPSO, and RGA/E. The results sufficiently reflect that merging multiple hybrid search with the mechanism of diversive curiosity plays an active role in handling these benchmark problems. In particular, a big increase in search performance is achieved for the Rosenbrock problem, where the average of criterion values steeply rises from 0.5650 (PSOIW) to 0.9959 by implementing the MPSOIW∗α/DC.

5 Conclusion

A new method of cooperative PSO, multiple particle swarm optimizers with inertia weight with diversive curiosity (MPSOIWα/DC), has been proposed in this paper. Owing to the essential strategies of decentralization in search and concentration in evaluation and behavior control, the combination of the adopted hybrid search and the mechanism of diversive curiosity, theoretically, has a good capability to greatly improve search efficiency and to alleviate stagnation in handling complex optimization problems.

Applications of the MPSOIWα/DC to a suite of 5D benchmark problems demonstrated its effectiveness well. The obtained experimental results verified that unifying the characteristics of multi-swarm search and the LRS is successful and effective. In comparison with the search performance of the EPSOIW, PSOIW, OPSO, and RGA/E, the proposed method shows an enormous latent capability in treating different benchmark problems, and the outstanding power of multi-swarm search. Accordingly, the basis of development studies of cooperative PSO research in swarm intelligence and optimization is further expanded and consolidated.

It is left for further study to apply the MPSOIWα/DC to data mining, system identification, multi-objective optimization, practical problems in the real world, and dynamic environments.

Acknowledgment

This research was partially supported by a Grant-in-Aid for Scientific Research (C) (22500132) from the Ministry of Education, Culture, Sports, Science and Technology, Japan.


References

[1] Bergh, F. van den, Engelbrecht, A. P., “A cooperative approach to particle swarm optimization,” IEEE Transactions on Evolutionary Computation, Vol.8, Issue 3, pp.225-239, 2004.

[2] Bergh, F. van den, “An Analysis of Particle Swarm Optimizers,” IEEE Transactions on Evolutionary Computation, 2006.

[3] Berlyne, D., Conflict, Arousal, and Curiosity, McGraw-Hill Book Co., New York, USA, 1960.

[4] Chang, J. F., Chu, S. C., Roddick, J. F., Pan, J. S., “A parallel particle swarm optimization algorithm with communication strategies,” Journal of Information Science and Engineering, No.21, pp.809-818, 2005.

[5] Clerc, M., Kennedy, J., “The particle swarm - explosion, stability, and convergence in a multidimensional complex space,” IEEE Transactions on Evolutionary Computation, Vol.6, No.1, pp.58-73, 2002.

[6] Clerc, M., Particle Swarm Optimization, Hermes Science Publications, UK, 2006.

[7] Day, H., “Curiosity and the Interested Explorer,” Performance and Instruction, Vol.21, No.4, pp.19-22, 1982.

[8] Eberhart, R. C., Kennedy, J., “A new optimizer using particle swarm theory,” Proceedings of the Sixth International Symposium on Micro Machine and Human Science, pp.39-43, Nagoya, Japan, 1995.

[9] Eberhart, R. C., Shi, Y., “Comparing inertia weights and constriction factors in particle swarm optimization,” Proceedings of the 2000 IEEE Congress on Evolutionary Computation, Vol.1, pp.84-88, La Jolla, CA, 2000.

[10] El-Abd, M., Kamel, M. S., “A Taxonomy of Cooperative Particle Swarm Optimizers,” International Journal of Computational Intelligence Research, ISSN 0973-1873, Vol.4, No.2, pp.137-144, 2008.

[11] Fogel, D. B., Evolutionary Computation: Toward a New Philosophy of Machine Intelligence, Piscataway, NJ: IEEE Press, 1995.

[12] Goldberg, D. E., Genetic Algorithms in Search, Optimization and Machine Learning, Reading, MA: Addison-Wesley, 1989.

[13] Hema, C. R., Paulraj, M. P., Nagarajan, R., Yaacob, S., Adom, A. H., “Application of Particle Swarm Optimization for EEG Signal Classification,” Biomedical Soft Computing and Human Sciences, Vol.13, No.1, pp.79-84, 2008.

[14] Juang, C.-F., “A Hybrid of Genetic Algorithm and Particle Swarm Optimization for Recurrent Network Design,” IEEE Transactions on Systems, Man and Cybernetics, Part B, Vol.34, No.2, pp.997-1006, 2004.

[15] Kennedy, J., Eberhart, R. C., “Particle swarm optimization,” Proceedings of the 1995 IEEE International Conference on Neural Networks, pp.1942-1948, Piscataway, New Jersey, USA, 1995.

[16] Kennedy, J., Mendes, R., “Population structure and particle swarm performance,” Proceedings of the IEEE Congress on Evolutionary Computation (CEC2002), pp.1671-1676, Honolulu, Hawaii, USA, May 12-17, 2002.

[17] Loewenstein, G., “The Psychology of Curiosity: A Review and Reinterpretation,” Psychological Bulletin, Vol.116, No.1, pp.75-98, 1994.

[18] Meissner, M., Schmuker, M., Schneider, G., “Optimized Particle Swarm Optimization (OPSO) and its application to artificial neural network training,” BMC Bioinformatics, Vol.7, No.125, 2006.

[19] Moscato, P., “On Evolution, Search, Optimization, Genetic Algorithms and Martial Arts: Towards Memetic Algorithms,” Technical Report, Caltech Concurrent Computation Program, Report 826, California Institute of Technology, Pasadena, CA 91125, 1989.

[20] Niu, B., Zhu, Y., He, X., “Multi-population Cooperation Particle Swarm Optimization,” LNCS, No.3630, pp.874-883, Springer Heidelberg, DOI 10.1007/11553090_88, 2005.

[21] Opdal, P. M., “Curiosity, Wonder and Education seen as Perspective Development,” Studies in Philosophy and Education, Vol.20, No.4, pp.331-344, Springer Netherlands, DOI 10.1023/A:1011851211125, 2001.

[22] Poli, R., Kennedy, J., Blackwell, T., “Particle swarm optimization - An overview,” Swarm Intelligence, Vol.1, pp.33-57, DOI 10.1007/s11721-007-0002-0, 2007.

[23] Pasupuleti, P., Battiti, R., “The Gregarious Particle Swarm Optimizer (G-PSO),” Proceedings of the IEEE Congress on Evolutionary Computation, pp.84-88, 2000.

[24] Shi, Y., Eberhart, R. C., “A modified particle swarm optimizer,” Proceedings of the IEEE International Conference on Evolutionary Computation, pp.69-73, Piscataway, NJ: IEEE Press, 1998.

[25] Solis, F. J., Wets, R. J-B., “Minimization by Random Search Techniques,” Mathematics of Operations Research, Vol.6, No.1, pp.19-30, 1981.

[26] Spall, J. C., “Stochastic Optimization,” in: J. Gentle et al. (eds.), Handbook of Computational Statistics, Springer Heidelberg, pp.169-197, 2004.

[27] Suganthan, P. N., Hansen, N., Liang, J. J., Deb, K., Chen, Y.-P., Auger, A., Tiwari, S., “Problem Definitions and Evaluation Criteria for the CEC 2005 Special Session on Real-Parameter Optimization,” 2005. Available: http://www.ntu.edu.sg/home/epnsugan/index_files/CEC-05/Tech-Report-May-30-05.pdf


[28] Valle, Y. del, Venayagamoorthy, G. K., Mohagheghi, S., Hernandez, J.-C., Harley, R. G., “Particle Swarm Optimization: Basic Concepts, Variants and Applications in Power Systems,” IEEE Transactions on Evolutionary Computation, Vol.12, No.2, pp.171-195, 2008.

[29] Wolpert, D. H., Macready, W. G., “No Free Lunch Theorems for Optimization,” IEEE Transactions on Evolutionary Computation, Vol.1, No.1, pp.67-82, 1997.

[30] Zhang, H., Ishikawa, M., “Evolutionary Particle Swarm Optimization (EPSO) – Estimation of Optimal PSO Parameters by GA,” Proceedings of the IAENG International MultiConference of Engineers and Computer Scientists 2007 (IMECS2007), Vol.1, pp.13-18, Hong Kong, China, March 21-23, 2007.

[31] Zhang, H., Ishikawa, M., “Evolutionary Particle Swarm Optimization – Metaoptimization Method with GA for Estimating Optimal PSO Methods,” in: O. Castillo et al. (eds.), Trends in Intelligent Systems and Computer Engineering, Lecture Notes in Electrical Engineering, Vol.6, pp.75-90, Springer Heidelberg, 2008.

[32] Zhang, H., Ishikawa, M., “Particle Swarm Optimization with Diversive Curiosity – An Endeavor to Enhance Swarm Intelligence,” IAENG International Journal of Computer Science, Vol.35, Issue 3, pp.275-284, 2008.

[33] Zhang, H., Ishikawa, M., “Characterization of particle swarm optimization with diversive curiosity,” Neural Computing & Applications, pp.409-415, Springer London, DOI 10.1007/s00521-009-0252-4, 2009.

[34] Zhang, H., Zhang, J., “The Performance Measurement of a Canonical Particle Swarm Optimizer with Diversive Curiosity,” Proceedings of the International Conference on Swarm Intelligence (ICSI 2010), LNCS 6145, Part I, pp.19-26, Beijing, China, 2010.

[35] Zhang, H., “Multiple particle swarm optimizers with diversive curiosity,” Proceedings of the IAENG International MultiConference of Engineers and Computer Scientists 2010 (IMECS2010), Vol.1, pp.174-179, Hong Kong, China, March 17-19, 2010.

[36] Zhang, H., Ishikawa, M., “The performance verification of an evolutionary canonical particle swarm optimizer,” Neural Networks, Vol.23, Issue 4, pp.510-516, DOI 10.1016/j.neunet.2009.12.002, 2010.

[37] Zhang, H., “Multiple particle swarm optimizers with inertia weight with diversive curiosity,” Proceedings of the IAENG International MultiConference of Engineers and Computer Scientists 2011 (IMECS2011), Vol.1, pp.21-26, Hong Kong, China, March 16-18, 2011.

[38] Zhang, H., “A Proposal of Evolutionary Particle Swarm Optimizer with Inertia Weight,” Technical Report of IEICE, Vol.110, No.388, NC-2010-115, pp.153-159, 2011. (in Japanese)
