    Genetical Swarm Optimization: Self-Adaptive Hybrid Evolutionary Algorithm for Electromagnetics

    Francesco Grimaccia, Marco Mussetta, and Riccardo E. Zich

    Abstract—A new effective optimization algorithm, developed specifically for electromagnetic applications and called genetical swarm optimization (GSO), is presented. This hybrid algorithm combines the properties of two of the most popular evolutionary optimization approaches now in use for the optimization of electromagnetic structures: particle swarm optimization (PSO) and genetic algorithms (GAs). The effectiveness of the algorithm is tested here against both of its "ancestors," GA and PSO, on an electromagnetic application, the optimization of a linear array. The proposed method proves to be a general-purpose tool able to adapt effectively to different electromagnetic optimization problems.

    Index Terms—Array synthesis, evolutionary algorithms, hybridization strategies, optimization techniques.

    I. INTRODUCTION

    The general aim of optimization algorithms is to find a solution that represents a global maximum or minimum in a suitably defined solution domain, that is, to find the best solution to a given problem among all the possible ones [1]. Global search methods present two competing goals, exploration and exploitation: exploration is important to ensure that every part of the solution domain is searched enough to provide a reliable estimate of the global optimum; exploitation, instead, is important to concentrate the search effort around the best solutions found so far, searching their neighborhoods to reach better solutions. Many search algorithms achieve these two goals using local search methods, global search approaches, or a dedicated combination of both strategies: the latter algorithms are commonly known as hybrid methods.

    Electromagnetic optimization problems generally involve a large number of parameters, which can be continuous, discrete, or both, and they often include constraints on the allowable values. Furthermore, the solution domain of electromagnetic optimization problems often has nondifferentiable and discontinuous regions, and it relies on model approximations of the true electromagnetic phenomena to save computational resources. All these reasons make the optimization problem very complex and difficult to solve. Traditional approaches, better known as local algorithms, generally need to compute many derivatives in order to optimize the objective function; these techniques are often Newton-based methods related to gradient descent algorithms. The large number of variables and the unavoidable mixture of discrete and continuous parameters involved, e.g., in antenna optimization problems, mean that it is generally difficult to use traditional optimization methods to find the best solution within the Pareto domain, as reported in recent literature [2], [3]. Moreover, for intrinsic reasons, traditional local methods strongly depend on the initial guess, which has to be suitably chosen inside the solution domain.

    In recent years several evolutionary algorithms have been developed for the optimization of all kinds of electromagnetic problems. The advantages of these techniques are the capability of finding a global solution without being trapped in local optima and the possibility of handling nonlinear and discontinuous problems with large numbers of variables. However, these algorithms have a strong stochastic basis, and therefore they need many iterations to reach a significant result; their performance is thus evaluated, first of all, in terms of speed of convergence. In order to improve the speed of convergence of evolutionary algorithms, in this paper the authors introduce a new kind of hybrid method, called genetical swarm optimization (GSO), consisting of a cooperation between a genetic algorithm (GA) and particle swarm optimization (PSO). In the following two paragraphs the most important features of GA and PSO are presented, while in Section II the GSO itself is illustrated. In particular, in Section II-B the new self-adaptive feature is presented, and Section III highlights some numerical results derived from the application of these techniques to the optimization of a linear array of radiators.

    Manuscript received March 1, 2006; revised October 7, 2006. The authors are with the Department of Electrical Engineering, Politecnico di Milano, I-20133 Milano, Italy (e-mail: [email protected]). Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org. Digital Object Identifier 10.1109/TAP.2007.891561

    A. Genetic Algorithms and Their Hybrids

    One of the most effective evolutionary algorithms developed so far is certainly the GA, which is now quite popular in the electromagnetics community and widely used [4], [5]. The GA simulates natural evolution, in terms of survival of the fittest, by adopting pseudo-biological operators. The set of parameters that characterizes a specific problem is called an individual, or chromosome, and it is composed of a list of genes. Each gene contains a suitable encoding of a single parameter, and each chromosome represents a point in the search space. For each individual of the population a fitness function is evaluated, resulting in a score assigned to the individual. Based on these fitness scores, a new population is generated iteratively, each successive population being referred to as a generation. The GA basically uses three operators (selection, crossover, and mutation) to manipulate the genetic composition of the population.
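
    For illustration, the following minimal Python sketch (not from the paper; the real-coded chromosomes, binary tournament selection, one-point crossover, Gaussian mutation, and all parameter values are assumptions) shows how the three operators mentioned above produce one new generation:

        import random

        def ga_generation(pop, fitness, p_mut=0.1, sigma=0.05):
            """One GA generation: selection, crossover, and mutation.
            pop: list of real-coded chromosomes (lists of >= 2 floats).
            Assumes the fitness function is to be maximized."""
            scores = [fitness(ind) for ind in pop]

            def select():
                # Binary tournament: the fitter of two random individuals wins.
                a, b = random.randrange(len(pop)), random.randrange(len(pop))
                return pop[a] if scores[a] > scores[b] else pop[b]

            new_pop = []
            while len(new_pop) < len(pop):
                p1, p2 = select(), select()
                cut = random.randrange(1, len(p1))     # one-point crossover
                child = p1[:cut] + p2[cut:]
                child = [g + random.gauss(0, sigma)    # per-gene Gaussian mutation
                         if random.random() < p_mut else g for g in child]
                new_pop.append(child)
            return new_pop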

    The GA is very efficient at exploring the entire search space, but it is relatively poor at finding the precise local optimum in the region where it converges. Many enhancements of the traditional GA have been proposed, either by modifying the structure of the population or the role that an individual plays in it (distributed GA, cellular GA, and symbiotic GA), or by modifying the basic operators of the traditional GA or simply adding new ones, in an attempt to obtain a faster convergence rate. To overcome this weakness, hybrid GAs (Lamarckian GAs, Baldwinian GAs, and memetic algorithms [6], [7]) use local improvement procedures as part of the evaluation of the individuals of the population. These procedures essentially complement the global search strategy of the GA.

    B. Particle Swarm Optimization

    One of the most recently developed evolutionary optimization techniques is the so-called PSO, which is based on a model of social interaction between independent individuals (particles) and uses social knowledge (i.e., the experience accumulated during the evolution) to search the parameter space, controlling the trajectories of the particles according to a swarm-like set of rules [8], [9]. The position of each particle, which represents a particular solution of the problem, is used to compute the value of the function to be optimized. Individual particles traverse the problem hyperspace and are attracted by both the position of their best past performance and the position of the global best performance of the whole swarm.

    Starting from a random population of particles, each $i$th particle is defined by its position vector $X_i$ in the space of the parameters; such a particle also has a random velocity $V_i$ in the parameter space. The main PSO operator is the velocity update, which takes into account the best position reached by the whole swarm and the best position that the agent itself has reached along its past path, resulting in a migration of the swarm towards the global optimum.

    At each iteration the particle moves according to its velocity, and the fitness function to be optimized, $f(X)$, is evaluated for each particle at its current position. The value of the cost function is then compared with the best value obtained during the previous iterations; moreover, the best value ever obtained by each particle is stored, together with the corresponding position $P_i$. The velocity of the particle is then stochastically updated following the well-known updating rule reported in (1), based on the attraction of the position $P_i$ of its personal optimum and of the position $P_g$ of the global optimum, i.e., the position with the best fitness value ever reached by the whole swarm:

    $$V_{i+1} = \omega V_i + \eta_1 r_1 (P_i - X_i) + \eta_2 r_2 (P_g - X_i) \qquad (1)$$

    where $\omega$ is a friction factor, $\eta_1$ and $\eta_2$ are constants, while $r_1$ and $r_2$ are random positive numbers with a uniform distribution and a maximum value $r_M$.
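
    A direct transcription of the update rule (1) might look as follows in Python (a sketch only: the default values of $\omega$, $\eta_1$, $\eta_2$, and $r_M$, and the position update $X_{i+1} = X_i + V_{i+1}$, are assumptions based on common PSO practice, since the paper does not fix them here):

        import random

        def pso_step(X, V, P, Pg, omega=0.7, eta1=2.0, eta2=2.0, r_max=1.0):
            """Velocity update of (1), applied componentwise to one particle.
            X, V: current position and velocity; P: personal best; Pg: global best."""
            X_new, V_new = [], []
            for x, v, p, pg in zip(X, V, P, Pg):
                r1 = random.uniform(0.0, r_max)   # uniform random factors in [0, r_M]
                r2 = random.uniform(0.0, r_max)
                v_next = omega*v + eta1*r1*(p - x) + eta2*r2*(pg - x)
                V_new.append(v_next)
                X_new.append(x + v_next)          # the particle moves by its velocity
            return X_new, V_new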

    II. THE GENETICAL SWARM OPTIMIZATION

    Some comparisons of the performance of GAs and PSO can be found in the literature, underlining the reliability and the convergence speed of both methods, but keeping them separate [10], [11]. Both algorithms have shown good performance on some particular applications but not on others: for example, we noticed in our simulations that sometimes GAs outperformed PSO, but occasionally the opposite happened, showing the typical application-driven character of any single technique. This is due to the different search methods adopted by the two algorithms, the typical selection-crossover-mutation approach versus the velocity-update one. Nevertheless, the population-based representation of the parameters that characterize a particular solution is the same for both algorithms; consequently, it is possible to implement a hybrid technique in order to effectively exploit the qualities and uniqueness of the two algorithms.

    Some attempts have been made in this direction [12], [13], with good results, but with a weak integration of the two strategies: most of the time one technique is used mainly as a pre-optimizer for the initial population of the other. In [12], for example, the authors test two different combinations of GA and PSO, using the results of one algorithm as a starting point for the other (in both orders) to optimize a profiled corrugated horn antenna. In [13], instead, the upper half of the best-performing individuals in a population is regarded as an elite and, before the GA operators are applied, this elite is first enhanced by means of PSO instead of being reproduced directly into the next generation.

    The new hybrid approach proposed here, called genetical swarm optimization (GSO), consists of a stronger cooperation between GA and PSO, maintaining their integration for the entire optimization run. In our algorithm, in fact, during each iteration the population is divided into two parts, and each part is evolved with one of the two techniques; the two parts are then recombined into the updated population for the next iteration. The population is then again divided randomly into two parts for the next run, in order to take advantage of both genetic and particle swarm operators. Fig. 1 shows the idea behind the algorithm and the way the two techniques are mixed.

    The population update concept can be easily understood by considering that part of the individuals have been substituted by newly generated ones by means of the GA, while the remaining ones are the same as in the previous generation but have been moved in the solution space by the PSO. This kind of update results in an evolutionary process where individuals improve their score not only through natural selection of the fittest or through the sharing of good knowledge, but through both mechanisms at the same time.

    Fig. 1. Splitting of the population in subgroups during the iterations.

    Fig. 2. Typical flowchart illustrating the steps of the GSO and the interactionsbetween GA and PSO.

    In Fig. 2 it is possible to better understand the key steps of the GSO algorithm through an intuitive flowchart. The algorithm stops after a predefined number of iterations.

    A. Hybridization Strategies

    In the proposed procedure, the authors introduced a driving parameter, called the Hybridization Coefficient (hc), that expresses the percentage of the population that in each iteration is evolved with GA: hc = 0 means the procedure is a pure PSO (the whole population is processed according to the PSO operators), hc = 1 means a pure GA (the whole population is optimized according to the GA operators), while 0 < hc < 1 means that the corresponding fraction of the population is evolved with GA and the remaining fraction with PSO.
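
    Putting the pieces together, one GSO iteration driven by hc can be sketched in Python as follows (again a sketch under stated assumptions: it reuses the illustrative ga_generation and pso_step helpers above, the random split and recombination follow Fig. 1, and the update of personal and global bests after fitness evaluation is omitted for brevity):

        import random

        def gso_iteration(pop, vel, pbest, gbest, fitness, hc):
            """One GSO iteration: an hc-fraction of the population is evolved
            by GA, the rest is moved by PSO, then both parts are recombined."""
            idx = list(range(len(pop)))
            random.shuffle(idx)                 # new random split at every iteration
            n_ga = int(round(hc * len(pop)))    # hc = 0: pure PSO; hc = 1: pure GA
            ga_idx, pso_idx = idx[:n_ga], idx[n_ga:]

            # GA part: selected individuals are replaced by newly bred offspring.
            offspring = ga_generation([pop[i] for i in ga_idx], fitness)
            for i, child in zip(ga_idx, offspring):
                pop[i] = child

            # PSO part: remaining individuals keep their identity and are moved.
            for i in pso_idx:
                pop[i], vel[i] = pso_step(pop[i], vel[i], pbest[i], gbest)

            return pop, vel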