Hybridizing ant colony optimization with firefly algorithm for unconstrained optimization problems



Applied Mathematics and Computation 224 (2013) 473–483



0096-3003/$ - see front matter © 2013 Published by Elsevier Inc. http://dx.doi.org/10.1016/j.amc.2013.07.092

Corresponding author. E-mail address: [email protected] (R.M. Rizk-Allah).

R.M. Rizk-Allah, Elsayed M. Zaki, Ahmed Ahmed El-Sawy
Department of Basic Engineering Science, Faculty of Engineering, Minoufia University, Shebin El-Kom, Egypt


Keywords: Ant colony optimization; Firefly algorithm; Unconstrained optimization

Abstract

We propose a novel hybrid algorithm named ACO–FA, which integrates ant colony optimization (ACO) with the firefly algorithm (FA) to solve unconstrained optimization problems. The proposed algorithm combines the merits of both ACO and FA and has two characteristic features. First, the algorithm is initialized by a population of random ants that roam through the search space. During this roaming, these ants evolve by integrating ACO and FA, where FA works as a local search to refine the positions found by the ants. Second, the performance of FA is improved by reducing the randomization parameter so that it decreases gradually as the optima are approached. Finally, the proposed ACO–FA algorithm is tested on several benchmark problems from the literature, and the numerical results demonstrate the superiority of the proposed algorithm in finding the global optimal solution.


1. Introduction

Optimization problems are important for the industrial as well as the scientific world in many applications. Many optimization problems exhibit attributes such as high nonlinearity and multimodality, and solving this kind of problem is usually a complex task. Moreover, in many instances, complex optimization problems present noise and/or discontinuities, which make traditional deterministic methods inefficient at finding global solutions. Meanwhile, global optimization methods based on meta-heuristics, which are robust alternatives for solving complex optimization problems and do not require any properties of the objective function, have been developed.

Due to the computational drawbacks of existing numerical methods, researchers have to rely on meta-heuristic algorithms based on simulations to solve complex optimization problems. A common feature of meta-heuristic algorithms is that they combine rules and randomness to imitate natural phenomena. These phenomena include the biological evolutionary process (e.g., the genetic algorithm (GA) [9] and differential evolution (DE) [23]), animal behavior (e.g., particle swarm optimization (PSO) [11] and the ant colony algorithm (ACA) [5]), and the physical annealing process (e.g., simulated annealing (SA) [12]). Over the last decades, many meta-heuristic algorithms and their improved variants have been successfully applied to various engineering optimization problems [14,26,17,21,13,22,18]. They have outperformed conventional numerical methods by providing better solutions for some difficult and complicated real-world optimization problems.

Among the existing meta-heuristic algorithms, a well-known one is ACO, a stochastic search procedure based on observations of the social behaviors of real insects or animals. The original ACO algorithm is known as the ant system [4], which was proposed by Dorigo to solve the traveling salesman problem. Since then, several algorithms based on ACO


have been presented, such as the ant colony system, the MAX–MIN Ant System, and the rank-based ant system [5]. These algorithms are all based on the idea of updating the pheromone information to search for the shortest route. Currently, ACO has also been applied to continuous problems, where interesting results have been discovered [3,6,15,16].

Recently, hybridization has been recognized as an essential aspect of high-performing algorithms. Pure algorithms cannot reach an optimal solution in a reasonable time; thus, pure algorithms are almost always inferior to hybridizations. Therefore, some researchers have recently started investigating the incorporation of the ACO algorithm with other techniques, such as hybrid ant algorithms combined with beam search [2], constraint propagation [15], scatter search [24], simulated annealing for solving the open shop scheduling problem [7], and a mutation genetic operator obtained by introducing a genetic algorithm into the ant colony system [10], among others [3,6,19,16,20].

A promising new meta-heuristic algorithm is the firefly algorithm, which is inspired by the social behavior of fireflies and the phenomenon of bioluminescent communication. There are two important issues in the firefly algorithm: the variation of light intensity and the formulation of attractiveness. Yang [25] assumed for simplicity that the attractiveness of a firefly is determined by its brightness, which in turn is associated with the objective function. In general, attractiveness is proportional to brightness. Furthermore, every member of the firefly swarm is characterized by its brightness, which can be directly expressed as the inverse of the objective function for a minimization problem.

In this paper we propose a novel hybrid algorithm named ACO–FA for solving unconstrained optimization problems. The motivation for a new hybrid algorithm is to overcome the drawback of the classical ant colony algorithm, which is not suitable for continuous optimization. This methodology consists of two phases. The first employs the meta-heuristic search of ACO, where groups of candidate values of the variables are constructed and each value in a group has its trail information. At each iteration of ACO the solutions are constructed using the trail information, while the other phase employs the firefly algorithm to improve the solution quality. The proposed algorithm has several characteristic features. First, the algorithm is initialized by a set of random ants that roam through the search space. During this roaming, these ants evolve by integrating ACO and FA, where FA works as a local search to refine the positions found by the ants. Second, the performance of FA is improved by reducing the randomization parameter so that it decreases gradually as the optima are approached. Finally, the proposed ACO–FA algorithm is tested on several benchmark problems from the literature, and the numerical results demonstrate the superiority of the proposed approach in finding the global optimal solution.

The remainder of the paper is organized as follows. In Section 2 we describe some preliminaries on optimization problems. In Sections 3 and 4, ACO and FA are briefly introduced. In Section 5, the hybrid of ant colony optimization with the firefly algorithm, named ACO–FA, is proposed and explained in detail. Experiments and discussions are presented in Section 6. Finally, we conclude the paper in Section 7.

2. Preliminaries

The general numerical unconstrained optimization problem can be defined as follows [1]: find x such that

min F(x),  x = (x_1, x_2, ..., x_n) ∈ R^n,    (1)

where x ∈ X ⊆ S. The objective function F is defined on the search space S ⊆ R^n, and the set X ⊆ S defines the feasible region. Usually, the search space S is defined as an n-dimensional rectangle in R^n, with the domains of the variables defined by their lower and upper bounds:

x_j^L ≤ x_j ≤ x_j^U,  j = 1, 2, ..., n.    (2)
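The problem statement of Eqs. (1) and (2) can be sketched in a few lines. The sphere function used here as F and the bounds are illustrative stand-ins, not benchmarks from this paper:

```python
import random

def sphere(x):
    """Example objective F(x) = sum of x_j^2, minimized at the origin."""
    return sum(v * v for v in x)

def random_point(lower, upper):
    """Sample x uniformly inside the box x_j^L <= x_j <= x_j^U of Eq. (2)."""
    return [random.uniform(lo, hi) for lo, hi in zip(lower, upper)]

lower, upper = [-1.28, -1.28], [1.28, 1.28]
x = random_point(lower, upper)
print(sphere([0.0, 0.0]))  # prints 0.0, the global optimum of the sphere
```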

3. Ant colony optimization (ACO)

ACO makes use of agents, called ants, which mimic the behavior of real ants in how they manage to establish shortest-route paths from their colony to feeding sources and back [5]. Ants communicate information through pheromone trails, which influence the routes the ants follow and eventually lead to a solution route.

ACO was initially designed to solve the traveling salesman problem (TSP). In the TSP, each city in a given set of n cities has to be visited exactly once, and the tour ends in the initial city. We call d_ij, i, j = 1, 2, ..., n, the length of the path between cities i and j. In the case of the Euclidean TSP, d_ij is the Euclidean distance between i and j (i.e., d_ij = ‖x_i − x_j‖). The cities and the routes between them can be represented as a connected graph (N, E), where N is the set of towns and E is the set of edges between towns (a fully connected graph in the Euclidean TSP).

The ants move from one city to another following the pheromone trails on the edges. Let τ_ij(t) be the trail intensity on edge (i, j) at iteration t. Then each ant k, k = 1, 2, ..., m, chooses the next city to visit depending on the intensity of the associated trail. When the ants have completed their city tours, the trail intensity is updated according to Eqs. (3) and (4):

τ_ij(t + 1) = ρ τ_ij(t) + Δτ_ij,  t = 1, 2, ..., T,    (3)


Δτ_ij = Σ_{k=1}^{m} Δτ_ij^k,    (4)

where ρ is a coefficient such that (1 − ρ) represents the evaporation of trail between iterations t and t + 1, T is the total number of iterations, and Δτ_ij^k is the quantity per unit length of trail substance (pheromone in real ants) laid on edge (i, j) by the kth ant between iterations t and t + 1; it is given by Eq. (5):

Δτ_ij^k = C/W_k if the kth ant uses edge (i, j) in its tour, and Δτ_ij^k = 0 otherwise,    (5)

where C is a constant and W_k is the tour length of the kth ant. An ant k at city i chooses the city j to go to with a probability p_ij^k(t), which is a function of the distance between the towns and of the amount of pheromone trail present on the connecting edge, using Eq. (6):

p_ij^k(t) = [τ_ij(t)]^α [η_ij]^β / Σ_{h ∈ U_k} [τ_ih(t)]^α [η_ih]^β  for all j ∈ U_k, and p_ij^k(t) = 0 otherwise,    (6)

where U_k is the set of cities that can be chosen by the kth ant at city i for the next step, η_ij = 1/d_ij is a heuristic function defined as the visibility of the path between cities i and j, and the parameters α and β determine the relative influence of the trail information and the visibility [4].
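One iteration of the ant-system rules of Eqs. (3)–(6) can be sketched on a toy TSP instance. The city coordinates and the values of m, α, β, ρ, and C below are illustrative choices, not values taken from this paper:

```python
import math
import random

cities = [(0, 0), (0, 1), (1, 0), (1, 1)]   # toy instance, 4 cities
n, m = len(cities), 5                        # n cities, m ants
alpha, beta, rho, C = 1.0, 2.0, 0.9, 1.0     # illustrative parameters

d = [[math.dist(a, b) for b in cities] for a in cities]   # distances d_ij
tau = [[1.0] * n for _ in range(n)]                        # trail intensities

def next_city(i, unvisited):
    """Choose j with probability p_ij^k(t) of Eq. (6), via roulette wheel."""
    weights = [(tau[i][j] ** alpha) * ((1.0 / d[i][j]) ** beta)
               for j in unvisited]
    r, acc = random.uniform(0, sum(weights)), 0.0
    for j, w in zip(unvisited, weights):
        acc += w
        if r <= acc:
            return j
    return unvisited[-1]

def tour_length(tour):
    return sum(d[tour[k]][tour[(k + 1) % n]] for k in range(n))

# Build m complete tours.
tours = []
for _ in range(m):
    tour, unvisited = [0], list(range(1, n))
    while unvisited:
        j = next_city(tour[-1], unvisited)
        tour.append(j)
        unvisited.remove(j)
    tours.append(tour)

# Trail update, Eqs. (3)-(5): evaporation plus C/W_k on every used edge.
delta = [[0.0] * n for _ in range(n)]
for tour in tours:
    W = tour_length(tour)                     # W_k, the k-th ant's tour length
    for k in range(n):
        i, j = tour[k], tour[(k + 1) % n]
        delta[i][j] += C / W                  # Eq. (5)
        delta[j][i] += C / W
tau = [[rho * tau[i][j] + delta[i][j] for j in range(n)] for i in range(n)]
```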

4. Firefly algorithm (FA)

The firefly algorithm was developed by Yang [25] and is based on the idealized behavior of the flashing characteristics of fireflies. For simplicity, these flashing characteristics can be idealized as the following three rules:

• All fireflies are unisex, so one firefly is attracted to other fireflies regardless of their sex.
• Attractiveness is proportional to brightness; thus, for any two flashing fireflies, the less bright one will move towards the brighter one. The attractiveness is proportional to the brightness, and both decrease as the distance between them increases. If no firefly is brighter than a particular firefly, it moves randomly.
• The brightness or light intensity of a firefly is affected or determined by the landscape of the objective function to be optimized.

In the FA, there are two important issues: the variation of light intensity and the formulation of attractiveness. For simplicity, we can always assume that the attractiveness of a firefly is determined by its brightness or light intensity, which in turn is associated with the encoded objective function. In the simplest case, for a maximization problem, the brightness I(x) of a firefly at a particular location x can be chosen as I(x) ∝ F(x). However, the attractiveness β is relative; it should be seen in the eyes of the beholder or judged by the other fireflies. Thus, it will vary with the distance r_ij between firefly i and firefly j. As light intensity decreases with the distance from its source, and light is also absorbed in the media, we should allow the attractiveness to vary with the degree of absorption.

In the simplest form, the light intensity I(r) varies with the distance r monotonically and exponentially [25], as in Eq. (7):

I = I_0 e^{−γr},    (7)

where I_0 is the original light intensity and γ is the light absorption coefficient. As a firefly's attractiveness is proportional to the light intensity seen by adjacent fireflies, we can now define the attractiveness β of a firefly by Eq. (8):

β = β_0 e^{−γr²},    (8)

where β_0 is the attractiveness at r = 0. It is worth pointing out that the exponent γr² can be replaced by other functions, such as γr^m with m > 0.

The distance between any two fireflies i and j, at x_i and x_j respectively, is the Cartesian distance, calculated using Eq. (9):

r_ij = ‖x_i − x_j‖ = ( Σ_{d=1}^{n} (x_{i,d} − x_{j,d})² )^{1/2},    (9)

where x_{i,d} is the dth component of the spatial coordinate x_i of the ith firefly. The movement of a firefly i attracted to another, more attractive (brighter) firefly j is determined [25] according to Eq. (10):

x_i = x_i + β_0 e^{−γr²}(x_j − x_i) + α_1(rand − 0.5),    (10)


where the second term is due to the attraction, while the third term is randomization, with α_1 being the randomization parameter and rand a random number drawn uniformly from [0, 1].
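The movement rule of Eqs. (8)–(10) can be sketched as follows; the parameter values β_0 = 1, γ = 1 and α_1 = 0.2 are illustrative (they coincide with the settings later listed in Table 2):

```python
import math
import random

beta0, gamma, alpha1 = 1.0, 1.0, 0.2

def move_towards(xi, xj):
    """Move firefly i toward the brighter firefly j, Eq. (10)."""
    r2 = sum((a - b) ** 2 for a, b in zip(xi, xj))   # r_ij^2 from Eq. (9)
    beta = beta0 * math.exp(-gamma * r2)              # attractiveness, Eq. (8)
    return [a + beta * (b - a) + alpha1 * (random.random() - 0.5)
            for a, b in zip(xi, xj)]

xi, xj = [0.0, 0.0], [1.0, 1.0]
xi_new = move_towards(xi, xj)   # pulled toward xj, plus small random jitter
```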

5. The proposed algorithm

Traditional ACO is a framework for discrete optimization problems. The agents in ACO work as teammates in an exploration team. The cooperation between the ants is based on pheromone communication. In the traveling salesman problem (TSP), which is a discrete optimization problem, the mission of the agents is to find the shortest way to traverse all the cities and return to the original city. The number of roads interconnecting the cities is limited and fixed. However, optimization problems in the continuous domain are different. There are no longer "fixed" roads for the agents to explore; walking in any direction in the n-dimensional (n > 1) domain can lead to quite a different result. Besides, because of its stochastic behavior, ACO may suffer from slow convergence.

To overcome these disadvantages, we introduce ACO–FA, which integrates ACO with FA to direct the ants to search in the n-dimensional domain. This methodology consists of two phases. The first employs the heuristic search of ACO, where groups of candidate values of the variables are constructed and each value in a group has its trail information. In each iteration of ACO the solutions are constructed using the trail information, while the other phase employs the firefly algorithm to improve the solution quality.

The main steps of the ACO–FA are summarized as follows:

5.1. Step 1: Initialization

For a continuous problem of n dimensions, the algorithm assigns a random vector x = (x_1, x_2, ..., x_n) to each ant, so that the m solutions for variable i represent a group of candidate values. A pheromone weighting τ_0 is attached to every variable of every ant. Fig. 1 describes the initialization, where the thickness of the color in the circles corresponds to the amount of pheromone associated with every variable in the candidate groups.

5.2. Step 2: Evaluation

Evaluate the desired objective function in n variables for each ant.

5.3. Step 3: Pheromone update

After evaluating each ant according to the objective function, the pheromone τ_ij on each variable i, i = 1, 2, ..., n, of ant j, j = 1, 2, ..., m, is updated according to the evolution step. Pheromone is updated as usual, following Eq. (11): first, the pheromone is reduced by a constant factor to simulate evaporation and prevent premature convergence; then some pheromone is laid on the components of the candidate groups. Accordingly, the pheromone concentration associated with each possible route (variable value) is changed in a way that reinforces good solutions, as in Eq. (11):

τ_ij(t + 1) = ρ τ_ij(t) + Δτ_ij,
Δτ_ij = C/F(x) for x_ij ∈ the candidate groups, and Δτ_ij = 0 otherwise,    (11)

where C is a constant, τ_ij(t + 1) is the revised pheromone concentration associated with option x_ij at iteration t + 1, τ_ij(t) is the pheromone concentration at the previous iteration t, and Δτ_ij(t) is the change in pheromone concentration.
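A minimal sketch of the update of Eq. (11), assuming positive objective values F(x); ρ = 0.9 and C = 400 follow Table 2, while the pheromone matrix shape and the fitness values are illustrative:

```python
rho, C = 0.9, 400.0

def update_pheromone(tau, fitness):
    """Eq. (11): evaporate by rho, then deposit C/F(x) on each kept value.

    tau[i][j]: pheromone of the j-th candidate value of variable i;
    fitness[j]: objective value F(x) of the ant that owns column j
    (assumed strictly positive in this sketch).
    """
    return [[rho * tau[i][j] + C / fitness[j]
             for j in range(len(tau[i]))]
            for i in range(len(tau))]

tau0 = [[20.0, 20.0], [20.0, 20.0]]   # initial pheromone tau_0 = 20, as in Table 2
tau1 = update_pheromone(tau0, [10.0, 40.0])
# column 0: 0.9*20 + 400/10 = 58.0; column 1: 0.9*20 + 400/40 = 28.0
```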

Fig. 1. The initialization of ACO.


5.4. Step 4: Solution construction

Once the pheromone has been updated after an iteration, the next iteration starts by changing the ants' paths (i.e., the associated variable values) in a manner that respects the pheromone concentration, as in Fig. 2. For each ant and for each dimension, a new candidate group is constructed to replace the old one. As such, an ant k chooses the jth value in the group of candidate values for variable i at iteration t by roulette wheel selection, according to the transition probability expressed in Eq. (12):

p_ij^k(t) = [τ_ij(t)]^α / Σ_{l ∈ allowed_k} [τ_il(t)]^α  for x_ij, x_il ∈ allowed_k, i = 1, 2, ..., n, k = j = 1, 2, ..., m, and p_ij^k(t) = 0 otherwise,    (12)

where p_ij^k(t) is the probability that option x_ij is chosen by ant k for variable i at iteration t, and allowed_k is the set of candidate values contained in group i.
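The roulette-wheel choice of Eq. (12) can be sketched as follows; α = 1 follows Table 2, while the pheromone column and candidate values are illustrative:

```python
import random

alpha = 1.0

def choose_value(pheromone_column, values):
    """Pick a candidate value with probability proportional to tau^alpha,
    the transition rule of Eq. (12)."""
    weights = [t ** alpha for t in pheromone_column]
    r, acc = random.uniform(0, sum(weights)), 0.0
    for w, v in zip(weights, values):
        acc += w
        if r <= acc:
            return v
    return values[-1]

# The third value carries 6/10 of the pheromone, so it is chosen most often.
value = choose_value([1.0, 3.0, 6.0], [0.2, 0.5, 0.9])
```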

5.5. Step 5: FA-based search for updating the candidate groups

FA was developed by Yang [25]. The FA is inspired by the social behavior of fireflies and the phenomenon of bioluminescent flashes. Bioluminescent flashes have two fundamental functions: to attract mating partners (communication) and to attract potential prey. In addition, flashing may also serve as a protective warning mechanism. The flashing light can be formulated in such a way that it is associated with the objective function to be optimized, which makes it possible to formulate new optimization algorithms.

In order to make the ants survey the overall search space, FA is applied to update the candidate groups for the ants, which ensures highly preferable positions in the search space and increases the probability of finding a better solution. With ACO alone, the candidate groups have a fixed, specified size; under this condition, the best solution might easily be trapped in a local optimum. Applying FA to the candidate groups of the ants leads to better-performing candidate groups over time. The FA-based search process motivates the fireflies to search new regions, including some lesser explored ones, and enhances the fireflies' capability to explore the vast search space. The FA handles m fireflies, equal to the number of ants in ACO. Each firefly generates a solution around the best position found among all ants. In the rest of this paper, we discuss the implementation of FA in three steps as follows.

Fig. 2. Solution construction of ACO.

Fig. 3. ACO–FA: a hybrid between ACO and FA.

5.6. Step 5.1: Initialization

Initialize a swarm of fireflies at the best ant position obtained, where each firefly contains n variables (i.e., the position of the ith firefly in the n-dimensional search space can be represented as x_i = (x_{i1}, x_{i2}, ..., x_{in})). Furthermore, every member of the swarm is characterized by its light intensity (i.e., initialize each firefly with a distinctive light intensity I_0(x_i), i = 1, 2, ..., m).
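This seeding step can be sketched as below. The perturbation width eps is an assumption for illustration; the paper does not specify how the swarm is spread around the best ant position:

```python
import random

def init_fireflies(best_x, m, eps=0.1):
    """Seed m fireflies near the best ant position found by ACO.

    Each firefly is best_x plus uniform noise in [-eps/2, eps/2) per
    coordinate (eps is a hypothetical spread, not from the paper).
    """
    return [[v + eps * (random.random() - 0.5) for v in best_x]
            for _ in range(m)]

best_ant = [0.5, -0.2, 1.0]          # illustrative best position, n = 3
swarm = init_fireflies(best_ant, m=5)
```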

Fig. 4. The pseudo code of the proposed algorithm.


5.7. Step 5.2: Light intensity

Calculate the light intensity I_i(x_i), i = 1, 2, ..., m, for each firefly, which in turn is associated with the encoded objective function; for a minimization problem, the brighter firefly represents the minimum value of I(x).

5.8. Step 5.3: Attractiveness and movement

In the firefly algorithm, the attractiveness of a firefly is determined by its light intensity, which in turn is associated with the encoded objective function, and by the distance r_ij between firefly i and firefly j. In addition, light intensity decreases with the distance from its source, and light is also absorbed in the media, so we should allow the attractiveness to vary with the degree of absorption. As a firefly's attractiveness is proportional to the light intensity seen by adjacent fireflies, we can define the attractiveness β of a firefly by Eq. (8). On the other hand, the movement of a firefly i attracted to another, more attractive (brighter) firefly j is determined by Eq. (13).

Table 1. Test functions (test function; dimension; domain; optimum value).

F1 = x1² + 2x2² − 0.3 cos(3πx1) − 0.4 cos(4πx2) + 0.7; 2; [−1.28, 1.28]; 0
F2 = [cos(2πx1) + cos(2.5πx1) − 2.1] · [2.1 − cos(3πx2) + cos(3.5πx2)]; 2; [−1, 1]; −16.09172
F3 = [0.002 + Σ_{j=1}^{25} (j + Σ_{i=1}^{2} (xi − a_ij)^6)^{−1}]^{−1}; 2; [−65.536, 65.536]; 0.9980
  where the first row of a repeats (−32, −16, 0, 16, 32) five times and the second row takes each of −32, −16, 0, 16, 32 five times in turn.
F4 = (x2 − (5.1/4π²)x1² + (5/π)x1 − 6)² + 10(1 − 1/8π) cos(x1) + 10; 2; x1 ∈ [−5, 10], x2 ∈ [0, 15]; 0.3978873
F5 = (4 − 2.1x1² + x1⁴/3)x1² + x1x2 + (4x2² − 4)x2²; 2; x1 ∈ [−3, 3], x2 ∈ [−2, 2]; −1.0316285
F6 = [1 + (x1 + x2 + 1)²(19 − 14x1 + 3x1² − 14x2 + 6x1x2 + 3x2²)] · [30 + (2x1 − 3x2)²(18 − 32x1 + 12x1² + 48x2 − 36x1x2 + 27x2²)]; 2; [−5, 5]; 3
F7 = [Σ_{i=1}^{5} i cos((i+1)x1 + i)] · [Σ_{i=1}^{5} i cos((i+1)x2 + i)]; 2; [−10, 10]; −186.73091
F8 = 100(x2 − x1²)² + (1 − x1)²; 2; [−10, 10]; 0
F9 = exp{(1/2)(x1² + x2² − 25)²} + sin⁴(4x1 − 3x2) + (1/2)(x1 + x2 − 10)²; 2; [−5, 5]; 1
F10 = (1/10)[12 + x1² + (1 + x2²)/x1² + (x1²x2² + 100)/(x1x2)⁴]; 2; [0, 10]; 1.74
F11 = 100(x2 − x1²)² + (1 − x1)² + 90(x4 − x3²)² + (1 − x3)² + 10.1[(x2 − 1)² + (x4 − 1)²] + 19.8(x2 − 1)(x4 − 1); 4; [−10, 10]; 0
F12 = (x1 + 10x2)² + 5(x3 − x4)² + (x2 − 2x3)⁴ + 10(x1 − x4)⁴; 4; [−5, 5]; 0
F13 = Σ_{i=1}^{19} [(xi²)^(x_{i+1}² + 1) + (x_{i+1}²)^(xi² + 1)]; 20; [−1, 4]; 0
F14 = (π/20)[10 sin²(πx1) + Σ_{i=1}^{19} ((xi − 1)²(1 + 10 sin²(πx_{i+1}))) + (x20 − 1)²]; 20; [−10, 10]; 0
F15 = −Σ_{i=1}^{4} ci exp[−Σ_{j=1}^{3} a_ij(xj − p_ij)²]; 3; [0, 1]; −3.86278
  with a = [3 10 30; 0.1 10 35; 3 10 30; 0.1 10 35], c = (1, 1.2, 3, 3.2), and
  p = [0.3689 0.1170 0.2673; 0.4699 0.4387 0.7470; 0.1091 0.8732 0.5547; 0.0381 0.5743 0.8828].

Table 2. The algorithm parameters.

Size of the ant colony (m): 100
Number of iterations for the ant colony (T): 20
Number of iterations for the firefly algorithm (Tc): 20
Maximum generations for the overall algorithm (W): 50
Initial pheromone for each element in the pheromone matrix (τ_0): 20
Evaporation rate (ρ): 0.9
Constant of ACO (C): 400
Pheromone weight (α): 1
Initial light intensity (I_0): 0
Initial attractiveness (β_0): 1
Light absorption coefficient (γ): 1
Randomness reduction constant (θ): 0.9
Randomization parameter (α_1): 0.2




x_i = x_i + β_0 e^{−γr²}(x_j − x_i) + α_1(rand − 0.5).    (13)

Since each member of the swarm explores the problem space taking into account the results obtained by the others, the randomization term may cause a firefly to lose its best location. We therefore introduce a modification of the randomization term that makes the fireflies approach the optimum. A further improvement to the convergence of the algorithm is to vary the randomization parameter α_1 so that it decreases gradually as the optima are approached. For example, we can use

α_{t+1} = α_t θ (1 − t/Tc),  t = 1, 2, ..., Tc,    (14)

where Tc is the maximum number of generations for FA, and θ ∈ [0, 1] is the randomness reduction constant. The flow chart describing the working of ACO–FA is shown in Fig. 3, and the basic steps of the proposed ACO–FA algorithm can be summarized as the pseudo code shown in Fig. 4.
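The schedule of Eq. (14) can be sketched in a few lines; θ = 0.9, α_1 = 0.2 and Tc = 20 follow Table 2:

```python
theta, Tc = 0.9, 20

def decay_schedule(alpha1):
    """Eq. (14): alpha_{t+1} = alpha_t * theta * (1 - t/Tc), so the random
    step shrinks monotonically and reaches zero at the final generation."""
    alphas, a = [alpha1], alpha1
    for t in range(1, Tc + 1):
        a = a * theta * (1 - t / Tc)
        alphas.append(a)
    return alphas

alphas = decay_schedule(0.2)   # alphas[0] = 0.2, alphas[-1] = 0.0
```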

Table 3. The comparison of solution quality. For ACO–FA, the best/average function value, the number of function evaluations (NFE, best/average), and the solution time in seconds (best/average) are reported; for the compared algorithms, the function value and NFE are listed. NA*: not available.

F1: ACO–FA 0 / 1.792475E−20; NFE 3150/3099; time 0.2810/0.3388. SZGA 0.298002E−7 (NFE 4000); DRASET 0 (957).
F2: ACO–FA −16.09172 / −16.09172; NFE 2100/1780; time 0.2500/0.2266. SZGA −16.09172 (4000); DRASET −16.09172 (22).
F3: ACO–FA 0.9980 / 0.9980; NFE 1600/1600; time 0.2970/0.3108. SZGA 0.9980 (2000); DRASET 0.9980 (1823).
F4: ACO–FA 0.3978873 / 0.3978873; NFE 200/200; time 0.6400/0.641. SZGA 0.39789 (4000); DRASET 0.39788737 (219); PSACO 0.3979 (209); CPSO 0.3979 (NA*); PSO 0.4960 (NA*); GA 0.4021 (NA*).
F5: ACO–FA −1.03162845 / −1.0316284; NFE 880/916; time 0.0940/0.0958. SZGA −1.03163 (3000); DRASET −1.0316284 (1738).
F6: ACO–FA 3 / 3; NFE 1370/1566; time 0.1720/0.2359. SZGA 3 (4000); DRASET 3 (2550); HS 3 (400,000); PSACO 3 (240); CPSO 3 (NA*); PSO 4.6260 (NA*); GA 3.1471 (NA*).
F7: ACO–FA −186.73091 / −186.73091; NFE 1600/1650; time 0.2030/0.2031. SZGA −186.73091 (3000); DRASET −186.73091 (1665); PSACO −186.7309 (534); CPSO −186.7274 (NA*); PSO −180.3265 (NA*); GA −182.1840 (NA*).
F8: ACO–FA 0 / 3.311002E−11; NFE 10250/11400; time 3.8750/4.6172. DRASET 3.9053E−15 (11,623); HS 5.684341886E−15 (50,000).
F9: ACO–FA 1 / 1; NFE 4250/4337; time 1.0310/1.0608. DRASET 1 (29,663); HS 1 (45,000).
F10: ACO–FA 1.744151 / 1.744151; NFE 1100/1100; time 0.0790/0.0790. DRASET 1.74415200796 (251); HS 1.74415 (800).
F11: ACO–FA 8.9518E−15 / 1.3022714E−15; NFE 32135/36750; time 19.5310/15.0280. SZGA 0.13074E−5 (175,438); DRASET 3.72E−12 (4985); HS 4.8515E−9 (570,000).
F12: ACO–FA 0 / 7.00841E−16; NFE 4800/4815; time 2.8900/4.8530. DRASET 8.17E−9 (79,990); HS 1.254032468E−12 (100,000).
F13: ACO–FA 3.5906E−18 / 3.3480E−18; NFE 16500/5835; time 5/1.6668. SZGA 0.25422E−7 (320,000); DRASET 2.45E−16 (49,325).
F14: ACO–FA 4.1651E−17 / 3.34037E−17; NFE 10200/9780; time 2.8440/2.8312. SZGA 0.230033E−3 (239,521); DRASET 5.93E−12 (19,994).
F15: ACO–FA −3.8628 / −3.8628; NFE 1500/1860; time 2.907/2.900. PSACO −3.8628 (2000); CPSO −3.8610 (NA*); PSO −3.8572 (NA*); GA −3.8571 (NA*).


6. Experiments and discussions

An extensive set of experiments has been conducted in order to show the ACO–FA algorithm's effectiveness for unconstrained optimization. Fifteen common test functions were used in the experiments, and the results were compared with prominent algorithms reported in [8,19]: the successive zooming genetic algorithm (SZGA), harmony search (HS), the dynamic random search technique (DRASET) [8], chaotic particle swarm optimization (CPSO), particle swarm optimization (PSO), the genetic algorithm (GA), and particle swarm ant colony optimization (PSACO) [19]. The test functions, which are benchmarks from [8,19], are listed in Table 1, which gives their equations, dimensions, domains, and optimal values. The selected test functions were run on a PC with a Pentium 4 3.0 GHz processor and 1.0 GB of RAM while testing the performance of ACO–FA. The ACO–FA algorithm was coded in MATLAB 7.

6.1. Parameters setting

The proposed algorithm contains a number of parameters that affect its performance. Extensive experimental tests were conducted to see the effect of different values on the performance of the proposed algorithm. Based on these observations, the parameters have been set as in Table 2.

Table 4. The performance assessment (absolute error = |obtained value - optimum value|).

Function  Proposed algorithm  Compared algorithms
F1        0                   SZGA: 2.98002E-8; DRASET: 0
F2        0                   SZGA: 0; DRASET: 0
F3        0                   SZGA: 0; DRASET: 0
F4        0                   SZGA: 2.7000E-6; DRASET: 0.7000E-7; PSACO: 1.2700E-5; CPSO: 1.2700E-5; PSO: 9.81127E-2; GA: 4.2127E-3
F5        0.5000E-7           SZGA: 1.5000E-6; DRASET: 1.0000E-7
F6        0                   SZGA: 0; DRASET: 0; HS: 0; PSACO: 0; CPSO: 0; PSO: 1.6260E0; GA: 1.4710E-1
F7        0                   SZGA: 0; DRASET: 0; PSACO: 1.0000E-5; CPSO: 3.5100E-3; PSO: 6.4044E0; GA: 4.5469E0
F8        0                   DRASET: 3.9053E-15; HS: 5.6843E-15
F9        0                   DRASET: 0; HS: 0
F10       4.1510E-3           DRASET: 4.1520E-3; HS: 4.1500E-3
F11       8.9518E-15          SZGA: 1.3074E-6; DRASET: 3.7200E-12; HS: 4.8515E-9
F12       0                   DRASET: 8.1700E-9; HS: 1.2540E-12
F13       3.5906E-18          SZGA: 2.5422E-8; DRASET: 2.4500E-16
F14       4.1651E-17          SZGA: 2.3003E-4; DRASET: 5.9300E-12
F15       2.0000E-5           PSACO: 2.0000E-5; CPSO: 1.7800E-3; PSO: 5.5800E-3; GA: 5.6800E-3



6.2. The comparison of solution quality

In order to examine the capability of ACO–FA on unconstrained optimization problems, a comparison is made with prominent algorithms from the literature. Each test function was solved by ACO–FA 10 times, with the starting values of the variables selected randomly from the solution space for each run. The results found by ACO–FA, namely the best and average function values, the number of function evaluations (NFE) and the solution time in seconds, are recorded in Table 3, whereas for the other algorithms only the function value and the number of function evaluations are given, because the solution times and the best and average function values are not reported for some algorithms. The best function values found by ACO–FA are the same as, or very close to, the average function values for all functions except F1, F5, F8 and F12.
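The experimental protocol above (10 independent runs per function from random starting points, recording the best and average values, the NFE and the solution time) can be sketched as follows. The harness is an illustrative assumption, not the authors' code: `random_search` is a hypothetical stand-in for ACO–FA, and the sphere function and bounds are chosen only for the example.

```python
import time
import numpy as np

def benchmark(algorithm, objective, bounds, runs=10):
    """Run `algorithm` several times from random starting points and record
    the statistics reported in Table 3: best and average objective value,
    number of function evaluations (NFE), and average solution time."""
    best_values, nfe_counts, times = [], [], []
    for _ in range(runs):
        calls = [0]                      # mutable counter for evaluations
        def counted(x):
            calls[0] += 1
            return objective(x)
        start = time.perf_counter()
        value = algorithm(counted, bounds)   # assumed to return its best f
        times.append(time.perf_counter() - start)
        best_values.append(value)
        nfe_counts.append(calls[0])
    return {"best": min(best_values),
            "average": float(np.mean(best_values)),
            "NFE": int(np.mean(nfe_counts)),
            "time": float(np.mean(times))}

# Hypothetical stand-in optimizer: pure random search on the 2-D sphere
# function over [-5, 5]^2 (ACO-FA would be plugged in here instead).
def random_search(f, bounds, iters=200):
    lo, hi = bounds
    return min(f(np.random.uniform(lo, hi, 2)) for _ in range(iters))

stats = benchmark(random_search, lambda x: float(np.sum(x**2)), (-5.0, 5.0))
```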

As can be deduced from Table 3, ACO–FA succeeds in finding the optimum solution of the given functions and outperforms the prominent algorithms for all of them. Moreover, ACO–FA can find the global minimum in fewer iterations than the compared algorithms, except for the functions F1, F2, F6 and F10.

6.3. The performance assessment

After discussing the experimental results, we analyze the performance of the proposed ACO–FA through a further analysis based on the absolute error, calculated as the absolute difference between the value obtained by an algorithm and the optimum value; the results are recorded in Table 4.
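The metric of Table 4 can be written as a one-line helper; the function values used below are hypothetical placeholders, not numbers from the paper's tables.

```python
def absolute_error(obtained, optimum):
    """Performance metric of Table 4: |obtained value - optimum value|."""
    return abs(obtained - optimum)

# Illustrative use with made-up values for a function whose optimum is 0:
errors = {name: absolute_error(v, 0.0)
          for name, v in {"ACO-FA": 1e-15, "SZGA": 1e-6}.items()}
```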

The results obtained are presented in Table 4. From these results it is clear that the proposed ACO–FA algorithm is more precise than the other algorithms, since the values it obtains are closer to the optimum. All these results demonstrate that ACO–FA is more accurate than the compared algorithms.

In this subsection, a comparative study has been carried out to assess the proposed algorithm with respect to solution quality. On the one hand, evolutionary techniques often suffer from limited solution quality; the proposed algorithm improves it by combining the merits of two heuristic algorithms.

On the other hand, unlike classical techniques, the proposed algorithm searches from a population of points, not a single point, and can therefore provide a globally optimal solution. In addition, the proposed algorithm uses only objective function information, not derivatives or other auxiliary knowledge, so it can deal with the non-smooth, non-continuous and non-differentiable functions that actually arise in practical optimization problems. Another advantage is that the simulation results show the superiority of the proposed algorithm over those reported in the literature, so the ACO–FA algorithm is quite competitive with the other existing methods. Finally, the simplicity of the procedure makes the proposed algorithm suitable for handling complex problems of realistic dimensions.

7. Conclusions

This paper presents a hybrid algorithm combining two heuristic optimization techniques, ACO and FA. The proposed algorithm integrates the merits of both ACO and FA: the algorithm is initialized by a set of random ants roaming through the search space, and during this roaming the ants evolve by integrating ACO and FA, where FA works as a local search to refine the positions found by the ants. In addition, the performance of FA is improved by reducing the randomization parameter so that it decreases gradually as the optimum is approached. The comparison of numerical results shows that there is scope for research in hybridizing swarm intelligence methods to solve difficult continuous optimization problems, and that the hybrid ACO–FA is a promising and valuable tool for solving unconstrained nonlinear optimization problems. A careful observation reveals the following benefits of the proposed optimization algorithm.

1. It can efficiently overcome the drawback of the classical ant colony algorithm, which is not suitable for continuous optimization.

2. It is competitive when compared with the other existing algorithms.

3. It can find the global minimum for the problems very efficiently.

4. The candidate paths to be selected by an ant change dynamically rather than being fixed, and the solutions tend to be diverse and global by means of applying the firefly algorithm to the component values of the population at each iteration.
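The firefly-style local move used inside the hybrid, with a randomization weight that shrinks over the iterations, might be sketched as follows. The linear decay schedule, parameter names and default values are assumptions for illustration only: the paper states that the randomization parameter decreases gradually, without fixing a particular schedule here.

```python
import numpy as np

def firefly_move(x_i, x_j, t=0, t_max=100, beta0=1.0, gamma=1.0,
                 alpha0=0.5, alpha_inf=0.01, rng=None):
    """One firefly-style update: solution x_i is attracted toward a brighter
    (better) solution x_j, plus a random perturbation whose weight alpha
    decreases linearly from alpha0 to alpha_inf as iteration t approaches
    t_max (an assumed schedule, for illustration)."""
    rng = np.random.default_rng() if rng is None else rng
    r2 = float(np.sum((x_i - x_j) ** 2))            # squared distance
    beta = beta0 * np.exp(-gamma * r2)              # attractiveness
    alpha = alpha_inf + (alpha0 - alpha_inf) * (1.0 - t / t_max)
    return x_i + beta * (x_j - x_i) + alpha * (rng.random(x_i.shape) - 0.5)
```

With the random term switched off (alpha0 = alpha_inf = 0) the move is a pure pull toward the brighter solution, scaled by the distance-dependent attractiveness.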

Future work will focus on two directions: (i) the application of ACO–FA to constrained optimization problems; and (ii) the extension of the method to multi-objective problems.

Acknowledgments

The authors are grateful to the anonymous reviewers for their valuable comments and helpful suggestions which greatlyimproved the paper’s quality.



References

[1] M.S. Bazaraa, C.M. Shetty, Nonlinear Programming Theory and Algorithms, Wiley, New York, 1979.
[2] C. Blum, Beam-ACO – hybridizing ant colony optimization with beam search: an application to open shop scheduling, Computers & Operations Research 32 (6) (2005) 1565–1591.
[3] L. Chen, J. Shen, L. Qin, H.J. Chen, An improved ant colony algorithm in continuous optimization, Journal of Systems Science and Systems Engineering 12 (2) (2003) 224–235.
[4] M. Dorigo, Optimization, Learning and Natural Algorithms (in Italian), Ph.D. Dissertation, Dipartimento di Elettronica, Politecnico di Milano, Italy, 1992.
[5] M. Dorigo, T. Stützle, Ant Colony Optimization, MIT Press, London, 2004.
[6] J. Dreo, P. Siarry, An ant colony algorithm aimed at dynamic continuous optimization, Applied Mathematics and Computation 181 (2006) 457–467.
[7] P. Hadi, T.M. Reza, Solving a multi-objective open shop scheduling problem by a novel hybrid ant colony optimization, Expert Systems with Applications 38 (2011) 2817–2822.
[8] K. Hamzacebi, F. Kutay, Continuous functions minimization by dynamic random search technique, Applied Mathematical Modelling 31 (2007) 2189–2198.
[9] J.H. Holland, Adaptation in Natural and Artificial Systems, University of Michigan Press, Ann Arbor, MI, 1975.
[10] Y. Jingan, Z. Yanbin, An improved ant colony optimization algorithm for solving a complex combinatorial optimization problem, Applied Soft Computing 10 (2010) 653–660.
[11] J. Kennedy, R.C. Eberhart, Particle swarm optimization, in: Proceedings of the IEEE International Conference on Neural Networks, 1995, pp. 1942–1948.
[12] S. Kirkpatrick, C. Gelatt, M. Vecchi, Optimization by simulated annealing, Science 220 (1983) 671–680.
[13] Y. Marinakis, M. Marinaki, G. Dounias, Particle swarm optimization for pap-smear diagnosis, Expert Systems with Applications 35 (4) (2008) 1645–1656.
[14] M.H. Mashinchi, A.O. Mehmet, P. Witold, Hybrid optimization with improved Tabu search, Applied Soft Computing 11 (2011) 1993–2006.
[15] B. Meyer, A. Ernst, Integrating ACO and constraint propagation, in: M. Dorigo, M. Birattari, C. Blum, L.M. Gambardella, F. Mondada, T. Stützle (Eds.), Proceedings of ANTS 2004 – Fourth International Workshop on Ant Colony Optimization and Swarm Intelligence, Lecture Notes in Computer Science, vol. 3172, Springer, Berlin, 2004, pp. 166–177.
[16] A.A. Mousa, W.F. Abd El-Wahed, R.M. Rizk-Allah, A hybrid ant colony optimization approach based local search scheme for multiobjective design optimizations, Electric Power Systems Research 81 (2011) 1014–1023.
[17] A.Y. Qing, Dynamic differential evolution strategy and applications in electromagnetic inverse scattering problems, IEEE Transactions on Geoscience and Remote Sensing 44 (1) (2006) 116–125.
[18] M. Serrurier, H. Prade, Improving inductive logic programming by using simulated annealing, Information Sciences 178 (6) (2008) 1423–1441.
[19] P.S. Shelokar, P. Siarry, V.K. Jayaraman, B.D. Kulkarni, Particle swarm and ant colony algorithms hybridized for improved continuous optimization, Applied Mathematics and Computation 188 (2007) 129–142.
[20] C. Shyi-Ming, C. Chih-Yao, Parallelized genetic ant colony systems for solving the traveling salesman problem, Expert Systems with Applications 38 (2011) 3873–3883.
[21] J.H.V. Sickel, K.Y. Lee, J.S. Heo, Differential evolution and its applications to power plant control, in: International Conference on Intelligent Systems Applications to Power Systems, ISAP, 2007, pp. 1–6.
[22] K.M. Sim, W.H. Sun, Ant colony optimization for routing and load-balancing: survey and new directions, IEEE Transactions on Systems, Man, and Cybernetics, Part A 33 (5) (2003) 560–572.
[23] R. Storn, Differential evolution design of an IIR-filter, in: IEEE International Conference on Evolutionary Computation, Nagoya, 1996, pp. 268–273.
[24] Z. Xiaoxia, T. Lixin, A new hybrid ant colony optimization algorithm for the vehicle routing problem, Pattern Recognition Letters 30 (2009) 848–855.
[25] X.S. Yang, Nature-Inspired Metaheuristic Algorithms, Luniver Press, 2008.
[26] J.J. Zhang, Y.H. Zhang, R.Z. Gao, Genetic algorithms for optimal design of vehicle suspensions, in: IEEE International Conference on Engineering of Intelligent Systems, 2006, pp. 1–6.