
International Journal of Applied Engineering Research ISSN 0973-4562 Volume 13, Number 6 (2018) pp. 3789-3809

© Research India Publications. http://www.ripublication.com

3789

K-means cluster algorithm-based evolutionary approach for constrained multi-objective optimization

A. A. Mousa1,2, M. Higazy1,3, Yousria Abo-Elnaga4

1 Department of Mathematics and Statistics, Faculty of Sciences, Taif University, Saudi Arabia. 2 Department of Basic Engineering Science, Faculty of Engineering, Shebin El-Kom, Menoufia University, Egypt. 3 Department of Physics and Engineering Mathematics, Faculty of Electronic Engineering, Menofia University, Egypt. 4 Department of Basic Science, Higher Technological Institute, Tenth of Ramadan City, Egypt.

Abstract

Evolutionary algorithms are among the most popular approaches for generating the non-dominated solutions of a multi-objective optimization problem. In several real-life applications, the decision maker requires an approximation of the non-dominated set in order to select a final preferred solution. Unfortunately, the cardinality of the Pareto optimal set may be very large or even infinite. Moreover, owing to this overflow of information, the decision maker (DM) may not wish to deal with an excessively large number of Pareto optimal solutions. In this paper, a new enhanced evolutionary algorithm enriched with a modified k-means clustering scheme is presented. In phase I, the K-means clustering method partitions the population into a predetermined number of subpopulations with dynamic sizes, so that distinct genetic algorithm (GA) operators can be applied to each subpopulation instead of a single GA operator being applied to the whole population. In phase II, the K-means algorithm makes the method practical by allowing the decision maker to control the resolution of the Pareto-set approximation by choosing an appropriate k value (the number of required clusters). To demonstrate the merit of the proposed approach compared to state-of-the-art evolutionary algorithms, diverse numerical studies are carried out using a suite of multimodal test functions taken from the literature.

Keywords: K-means cluster algorithm, evolutionary approaches, constrained optimization, multiobjective optimization

INTRODUCTION

Multiobjective Optimization Problems (MOPs) are problems with more than one objective. Such problems have a set of possible solutions rather than a unique optimum: these solutions are optimal in the wider sense that no other solution in the search space dominates them when all objectives are considered. As in [1,2], these solutions are called Pareto optimal solutions. MOPs appear in several areas of knowledge, for example economics [3-6], machine learning [6-8] and electrical power systems [9-12]. Few of the developed evolutionary algorithms are constructed for solving constrained multi-objective optimization problems. In spite of the several studies on solving constrained optimization problems, there are not enough studies on procedures for handling constraints. For instance, in [13] Fonseca suggested treating constraints as high-priority objectives. In [14], Harada suggested a few effective constraint-handling guidelines and constructed a Pareto descent repair method. For MOPs, because the objective vector cannot be directly used as the fitness value, a correctly constructed fitness-assignment algorithm is always needed. Most of the existing MOEAs assign fitness values using the Pareto dominance relationship [15-16].

Al Malki et al. [17] presented a hybrid genetic K-means algorithm for identifying the most significant solutions from the Pareto front; they implemented clustering techniques to organize and classify the solutions of various problems. In [18], Al Malki et al. presented a hybrid genetic algorithm with K-means for solving clustering problems. They succeeded in eliminating the empty-cluster problem by using a hybrid form of the k-means algorithm and GAs, and their proposition was validated by the results of simulation tests on various data collections. The use of the K-means clustering technique in [19] enables the algorithm to apply various GA operators to each subpopulation rather than a single GA operator to the whole population.

In this article, a new optimization algorithm is proposed which runs in two stages. In the first stage, the algorithm combines the principal characteristics of K-means with a GA clustering method and uses the result as a search engine to produce the correct Pareto optimal front; the K-means clustering technique enables the algorithm to apply various GA operators to each subpopulation rather than a single GA operator to the whole population. In the second stage, the k-means cluster algorithm is adopted as a reduction algorithm to improve the spread of the solutions found so far. The use of k-means also makes the algorithm practical by allowing the decision maker to control the resolution of the Pareto-set approximation by choosing an appropriate k value: k-means clustering aims to partition n observations into k clusters in which each observation belongs to the cluster with the nearest mean. Finally, various kinds of multi-objective benchmark problems are reported to stress the importance of hybridization algorithms in generating Pareto optimal sets. Simulation results of the proposed approach are compared to those reported in the literature; the comparison demonstrates the superiority of the proposed approach and confirms its potential to solve multi-objective optimization problems.

The organization of this paper is as follows. The multi-objective optimization problem is presented in Section 2. An overview of GAs is given in Section 3. Clustering algorithms are introduced in Section 4. K-means for clustering problems is introduced in Section 5. The proposed algorithm is presented


in Section 6. The simulation results are discussed in Section 7. Finally, we conclude the paper in Section 8.

MULTIOBJECTIVE OPTIMIZATION (MO)

Mathematically, a general minimization problem of $M$ objectives can be presented as [20-21]:

$$\text{Minimize } f_i(x), \quad i = 1, 2, \ldots, M, \qquad (1)$$
$$\text{subject to the constraints } g_j(x) \le 0, \quad j = 1, 2, \ldots, J,$$

and $x = (x_1, x_2, \ldots, x_n)$, where the dimension of the decision variable space is $n$, the $i$-th objective function is $f_i(x)$ and the $j$-th inequality constraint is $g_j(x)$. The main task of the MO problem is then to find $x$ that optimizes $f_i(x)$. The evaluation of solutions uses the concept of Pareto dominance, because the notion of an optimum solution in MO differs from that in single-objective optimization (SO).

Definition 1 (Pareto dominance). A vector $u = (u_1, u_2, \ldots, u_M)$ is said to dominate a vector $v = (v_1, v_2, \ldots, v_M)$ ($u$ dominates $v$, denoted by $u \prec v$) for a MO minimization problem if and only if

$$\forall i \in \{1, \ldots, M\}: u_i \le v_i \;\wedge\; \exists i \in \{1, \ldots, M\}: u_i < v_i, \qquad (2)$$

where $M$ is the dimension of the objective space.

Definition 2 (Pareto optimality). A solution $u \in U$ is called Pareto optimal iff there is no other solution $v \in U$ such that $u$ is dominated by $v$. Such solutions are called non-dominated solutions, and the set of all non-dominated solutions constitutes the Pareto-optimal set.
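The dominance test of Definition 1 can be sketched in a few lines (a minimal illustration of the definition, not the authors' code; the function name is ours):

```python
def dominates(u, v):
    """Return True iff objective vector u Pareto-dominates v (minimization)."""
    at_least_one_strict = False
    for ui, vi in zip(u, v):
        if ui > vi:          # u is worse in some objective: no dominance
            return False
        if ui < vi:          # u is strictly better in at least one objective
            at_least_one_strict = True
    return at_least_one_strict
```

Note that a vector never dominates itself, since the strict-inequality clause fails for identical vectors.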

OVERVIEW OF THE GA

The techniques of natural selection are the primary concept behind GAs. Each optimization variable, $x_n$, is encoded as a gene, e.g. a real number or a string of bits. The genes of all variables, $x_1, \ldots, x_n$, form a chromosome, which describes an individual. Depending on the specific problem, a chromosome could be an array of real numbers, a binary string, or a list of components in a database. Each possible solution is represented by an individual, and the set of individuals forms a population. From the population the fittest are chosen for mating. Mating is done by merging genes from distinct parents to generate a child, an operation called crossover. Finally, the children are added to the population and the process repeats, thus simulating an artificial Darwinian environment as illustrated in Figure 1. The optimization stops when the population has converged or the maximum number of generations has been reached.

Figure 1. GAs outline for optimization problems.
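The loop just described can be sketched as follows (a generic schematic under our own assumptions; `fitness`, `crossover` and `mutate` stand for problem-specific operators and are not the paper's):

```python
import random

def evolve(init_pop, fitness, crossover, mutate, max_gen=100):
    """Generic GA loop: keep the fittest, mate them, and iterate."""
    pop = list(init_pop)
    for _ in range(max_gen):
        pop.sort(key=fitness, reverse=True)       # fittest first
        parents = pop[: len(pop) // 2]            # truncation selection
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(len(pop) - len(parents))]
        pop = parents + children                  # next generation
    return max(pop, key=fitness)
```

For example, maximizing $-(x-3)^2$ with averaging crossover and Gaussian mutation drives the population towards $x = 3$.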

CLUSTERING ALGORITHM

Clustering is an approach for partitioning data into sets of similar objects. Each set, called a cluster, consists of mutually similar objects that are dissimilar to the objects of other clusters [22]. Clustering is a very important area of research with applications in several fields, such as psychiatry [23], market research [24], archaeology [25], pattern recognition [26], medicine [27] and engineering [28]. Various clustering algorithms have been proposed in the literature. Due to its simplicity and accuracy, K-means clustering is possibly the most commonly used clustering algorithm [29].

K-Means Clustering Technique

In 1967, MacQueen introduced the K-means clustering algorithm [30]. Owing to its simplicity, the K-means clustering algorithm is used in many fields. The K-means algorithm separates data into k sets, as it is a partitioning clustering approach [31]. The K-means clustering algorithm is prominent because it can cluster massive data rapidly and efficiently. The basic idea of the K-means algorithm is to classify the data D into k different clusters, where D is the data set and k is the number of desired clusters. More precisely, the main steps of the K-means clustering algorithm are as follows (Figure 2):

Step 1: Define the number of desired clusters, k.

Step 2: Choose the initial cluster centroids randomly; these represent the temporary means of the clusters.

Step 3: Compute the Euclidean distance from each object to each cluster centroid and assign each object to the closest cluster, i.e. the one with the smallest squared distance.

Step 4: For each cluster, compute the new centroid and replace each centroid value by the respective cluster centroid.

Step 5: Repeat steps 3 and 4 until no point changes its cluster.
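Steps 1-5 can be sketched in plain Python (a minimal illustration under our own naming, not the authors' implementation):

```python
import random

def kmeans(points, k, max_iter=100):
    """Plain K-means on a list of numeric tuples; returns (centroids, labels)."""
    dims = len(points[0])
    centroids = random.sample(points, k)          # Step 2: random initial centroids
    for _ in range(max_iter):
        # Step 3: assign each point to its nearest centroid (squared distance)
        labels = [min(range(k), key=lambda j: sum((p[d] - centroids[j][d]) ** 2
                                                  for d in range(dims)))
                  for p in points]
        # Step 4: recompute each centroid as the mean of its members
        new_centroids = []
        for j in range(k):
            members = [p for p, l in zip(points, labels) if l == j]
            new_centroids.append(tuple(sum(p[d] for p in members) / len(members)
                                       for d in range(dims)) if members else centroids[j])
        if new_centroids == centroids:            # Step 5: stop when nothing moves
            break
        centroids = new_centroids
    return centroids, labels
```

On two well-separated clumps of points, the assignments settle after a couple of iterations.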


Figure (2): K-means algorithm procedure (taken from [32]).

THE PROPOSED APPROACH

In this section, the proposed algorithm is informally described. It consists of two phases. In phase I, the K-means clustering technique is implemented to partition the population into a predetermined number of subpopulations with dynamic sizes. In phase II, the K-means algorithm is applied to make the method practical by allowing the decision maker to control the resolution of the Pareto-set approximation by choosing a suitable number of clusters.

Phase I

Step 1: Population initialization: The algorithm uses two disjoint sub-populations: the first is formed by individuals initialized randomly within the search space (the lower and upper bounds), while the second is formed by reference points satisfying all constraints (feasible points). We have concentrated on how elitism can be introduced to guarantee convergence to the true Pareto-optimal solutions. For that, an "archiving/selection" scheme is proposed that simultaneously ensures progress towards the Pareto-optimal set and coverage of the entire range of the non-dominated solutions. An external, limited-size archive $A^{(t)}$ of non-dominated solutions is maintained by the proposed algorithm and iteratively updated in the presence of new solutions, based on the concept of $\epsilon$-dominance.

Step 2: Repair algorithm: The main task of this algorithm is to separate the feasible individuals from the infeasible ones and to repair the latter. Via this algorithm, the infeasible individuals of a population are gradually improved until they become feasible. As explained in [10], the repair operation is done as follows. Let $S$ be the feasible domain and $s \notin S$ a search point (individual). The algorithm selects one of the reference points (a better reference point has a better chance of being selected), say $r \in S$, and creates random points $z = \mu s + (1-\mu) r$, $\mu \in [0,1]$, on the segment defined between $s$ and $r$; the segment may be extended equally [33, 34] on both sides, as determined by a user-specified parameter $\delta \in [0,1]$. In such a case, an updated feasible individual is represented as:

$$z_1 = \gamma \cdot s + (1-\gamma) \cdot r, \qquad z_2 = (1-\gamma) \cdot s + \gamma \cdot r, \qquad \gamma = (1+2\delta)\mu - \delta, \ \mu \in [0,1]. \qquad (3)$$
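A sketch of this repair scheme, under our reading of equation (3) (the function name, the feasibility callback and the retry loop are our assumptions, not the paper's):

```python
import random

def repair(s, reference_points, is_feasible, delta=0.1, max_tries=100):
    """Move an infeasible point s towards a feasible reference point r
    along the (possibly extended) segment between them, per eq. (3)."""
    for _ in range(max_tries):
        r = random.choice(reference_points)           # feasible anchor point
        mu = random.random()
        gamma = (1 + 2 * delta) * mu - delta          # blend factor on extended segment
        z = [gamma * si + (1 - gamma) * ri for si, ri in zip(s, r)]
        if is_feasible(z):
            return z
    return list(random.choice(reference_points))      # fall back to a feasible point
```

The fallback guarantees that the returned individual is always feasible, even if no blended point passes the check.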

Step 3: Classifying the population according to non-domination: In this step, the objective functions of each solution are evaluated and the non-dominated set of solutions is selected according to the non-domination concept, using the following algorithm [35-36]. Figure (3) illustrates the classification of a population according to non-domination.


Step 1: Start with $m = 1$.

Step 2: Compare solution $x_m$ with $x_n$ for domination, for all $n = 1, 2, \ldots, N_{POP}$, $m \ne n$.

Step 3: If $x_m$ is dominated by $x_n$ for any $n$, mark $x_m$ as 'dominated' (inefficient).

Step 4: Increase $m$ by one and return to Step 2. If all solutions in the population have been considered, go to Step 5.

Step 5: The solutions not marked 'dominated' are the non-dominated solutions.

Step 6: Save the non-dominated solutions in an external archive.

Figure (3): Classifying a population according to non-domination
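The classification above amounts to the following filter (a naive O(N²) sketch with our own helper names; minimization is assumed):

```python
def non_dominated(population, objectives):
    """Return the members of `population` whose objective vectors
    are not dominated by any other member (minimization)."""
    def dominates(u, v):
        return (all(a <= b for a, b in zip(u, v))
                and any(a < b for a, b in zip(u, v)))

    objs = [objectives(x) for x in population]
    return [x for x, fx in zip(population, objs)
            if not any(dominates(fy, fx) for fy in objs)]
```

A vector never dominates itself, so each solution is safely compared against the whole list, including its own objective vector.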

Step 4: Selection stage: A dynamic-weight approach [37] is used to aggregate the multiple objective functions, through a weighted combination, into a single combined fitness function. The main characteristic of the dynamic-weight approach is that the weights attached to the multiple objective functions are not fixed but randomly specified; consequently, the direction of the search is not fixed. The multiple objective functions are combined into a scalar fitness value as follows:

$$Z = w_1 f_1(x) + \cdots + w_i f_i(x) + \cdots + w_q f_q(x), \qquad (4)$$

where $x$ is an individual, $Z$ is the combined fitness function, $f_i(x)$ is the $i$-th objective function and $w$ is a weighting vector with $0 \le w_i$, $i = 1, \ldots, q$, and $\sum_{i=1}^{q} w_i = 1$. In general, the value of each weight can be randomly determined as follows:

$$w_i = \frac{random_i}{\sum_{j=1}^{q} random_j}, \qquad i = 1, 2, \ldots, q, \qquad (5)$$

where $random_i$ and $random_j$ are non-negative random real numbers.

After formulating the combined fitness function $Z$ for each chromosome by equation (4), the selection probability of each chromosome is defined by the following linear scaling function:

$$prob_i = \frac{Z_i - Z_{min}}{\sum_{j=1}^{N_{pop}} (Z_j - Z_{min})}, \qquad (6)$$

where $prob_i$ is the selection probability of a chromosome $i$ whose combined fitness is $Z_i$, and $Z_{min}$ is the worst combined fitness in the population.

Finally, a pair of individuals is chosen randomly from the current population, and the solution with the better selection probability is chosen and subsequently copied into the mating pool [30]. This step is repeated until the size of the mating pool equals the original population size.
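Equations (4)-(6) followed by the pairwise (binary tournament) selection can be sketched as follows (our own naming and a simple tie-breaking rule; the paper's implementation details may differ):

```python
import random

def select_mating_pool(population, objectives):
    """Random-weight aggregation (eqs. 4-5), linear scaling (eq. 6),
    then binary tournaments until the pool matches the population size."""
    q = len(objectives(population[0]))
    raw = [random.random() for _ in range(q)]
    w = [r / sum(raw) for r in raw]                        # eq. (5): random weights
    Z = [sum(wi * fi for wi, fi in zip(w, objectives(x)))  # eq. (4): combined fitness
         for x in population]
    z_min = min(Z)
    denom = sum(z - z_min for z in Z) or 1.0               # guard the all-equal case
    prob = [(z - z_min) / denom for z in Z]                # eq. (6): selection probability
    pool = []
    while len(pool) < len(population):
        a, b = random.sample(range(len(population)), 2)    # random pair of individuals
        pool.append(population[a] if prob[a] >= prob[b] else population[b])
    return pool
```

Because the weights are redrawn on each call, repeated invocations push the search in varying directions, which is the point of the dynamic-weight scheme.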

Step 5: K-means clustering technique: In this step, the K-means clustering technique [38] is used to split the population into K separate subpopulations with dynamic sizes, as shown in Figure (4). With this method, various GA operators can be applied to the subpopulations instead of applying a single GA operator to the whole population.

Figure 4: The splitting of the population into K separated subpopulations

Step 6: Crossover operator: The crossover operator generates new offspring by mating (combining) two chromosomes (parents). The idea behind crossover is that the offspring may inherit the best characteristics of both parents and thus be better than either. Crossover occurs during the evolution process according to a user-definable fixed probability. In this step, three different crossover operators are used: horizontal band crossover, uniform crossover and real part crossover, described as follows.

Horizontal band crossover: In this operator, two horizontal crossover sites are selected randomly. The information in the horizontal region determined by the two sites is exchanged between the two parents based on a fixed probability [39].

Uniform crossover: In uniform crossover [40], a random (0,1) mask is generated. In the mask, '0' denotes a bit left unchanged, while '1' denotes a bit to be swapped.

Real part crossover: This operator works on the real part $p_{it}$ of a chromosome, crossing the real variables of the chromosomes by exchanging information in the column vectors of the $p_{it}$ matrix [41]. The new power matrices of the offspring, $p_{it}(offspring_1)$ and $p_{it}(offspring_2)$, are created from the power matrices $p_{it}$ of the parent chromosomes as follows:


$$p_{it}(offspring_1) = \left( column_1^{parent_1}, \ldots, \lambda\, column_j^{parent_1} + (1-\lambda)\, column_j^{parent_2}, \ldots, column_T^{parent_1} \right),$$
$$p_{it}(offspring_2) = \left( column_1^{parent_2}, \ldots, \lambda\, column_j^{parent_2} + (1-\lambda)\, column_j^{parent_1}, \ldots, column_T^{parent_2} \right), \qquad (7)$$

where $\lambda$ is chosen randomly between 0 and 1, $column_j$ is the $j$-th column vector of the power matrix, and $j$ is a random positive integer in the range $[1, T]$.
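Equation (7) can be sketched as follows (our reading of the operator; we store each power matrix as a list of T column vectors, which is an assumption about the layout):

```python
import random

def real_part_crossover(p1, p2):
    """Blend one randomly chosen column of two parent power matrices (eq. 7).
    Each matrix is a list of T columns; each column is a list of unit outputs."""
    T = len(p1)
    j = random.randrange(T)                    # random column index
    lam = random.random()                      # blend factor lambda in [0, 1]
    off1 = [col[:] for col in p1]              # copy parents so they stay intact
    off2 = [col[:] for col in p2]
    off1[j] = [lam * a + (1 - lam) * b for a, b in zip(p1[j], p2[j])]
    off2[j] = [lam * b + (1 - lam) * a for a, b in zip(p1[j], p2[j])]
    return off1, off2
```

Note that the blended columns are complementary: entrywise, the two offspring columns sum to the corresponding parent columns, so the total "power" in column j is conserved.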

Step 7: Mutation operator: This operator explores points of the search space by randomly altering a few gene values in an individual. Mutation generally occurs according to a user-definable mutation probability, which is usually set to a low value. In this step, three different mutation methods are used: single point mutation, intelligent mutation and swap window mutation, described as follows.

Single point mutation: A single bit is chosen randomly from the binary part ($u_{it}$) of an offspring and its value is flipped from '0' to '1' or vice versa [40-41].

Intelligent mutation: This mutation [42] searches for '10' or '01' combinations in the commitment schedule and randomly changes them to '00' or '11'.

Swap window mutation: Swap window mutation works on the binary part ($u_{it}$) of a chromosome by selecting 1) two units at random, 2) a time window of width $w$ between 1 and $T$, and 3) the location of the window, then exchanging the entries of the two units within the window [43].

At any time instant, if the status of a unit changes from 0 to 1 due to the mutation operator, the corresponding output power of this unit is changed from 0 to a real value chosen at random from the range $[p_i^{min}, p_i^{max}]$.
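Two of the three operators can be sketched on a binary schedule matrix (rows = units, columns = time periods; this layout and the function names are our assumptions):

```python
import random

def single_point_mutation(u):
    """Flip one randomly chosen bit of the binary schedule u (list of rows)."""
    u = [row[:] for row in u]                     # work on a copy
    i = random.randrange(len(u))                  # random unit
    t = random.randrange(len(u[0]))               # random time period
    u[i][t] ^= 1                                  # flip 0 <-> 1
    return u

def swap_window_mutation(u):
    """Exchange a random time window between two randomly chosen units."""
    u = [row[:] for row in u]
    i, j = random.sample(range(len(u)), 2)        # two distinct units
    T = len(u[0])
    w = random.randint(1, T)                      # window width in [1, T]
    start = random.randrange(T - w + 1)           # window location
    for t in range(start, start + w):
        u[i][t], u[j][t] = u[j][t], u[i][t]
    return u
```

Single point mutation changes exactly one bit, while swap window mutation permutes bits within each affected column, leaving the per-period bit counts unchanged.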

Step 8: Combination stage: At this step [44,45], all subpopulations are merged again to form a new population, as shown in Figure (5).

Figure 5: Combination stage

Step 9: Update the archive of non-dominated solutions: The proposed approach keeps an external archive of non-dominated solutions, which is updated iteratively as new solutions are found, based on the concept of non-domination. The archive is updated in each iteration by copying the solutions of the current population $P_t$ to the archive $V_t$ and applying the dominance criteria to remove all dominated solutions; i.e., each solution of $P_t$ leads to one of three possibilities, as in the algorithm in Figure (6) [60].

Algorithm: Update the Archive
Input: $V_t$, $X \in P_t$
If $\exists Y \in V_t \mid Y \prec X$ then
    $V_t = V_t$ (X is dominated and discarded)
Else if $\exists Y \in V_t \mid X \prec Y$ then
    $V_t = V_t \cup \{X\} \setminus \{Y\}$ (X enters and every dominated $Y$ is removed)
Else
    $V_t = V_t \cup \{X\}$ (X is non-dominated with respect to $V_t$)
End
Output: $V_t$

Figure 6: Update the Archive of non-dominated solutions
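The update rule in Figure (6) can be sketched as follows (a minimal illustration assuming a `dominates` predicate over objective vectors; names are ours):

```python
def update_archive(archive, x, dominates):
    """Insert candidate x into the archive of non-dominated solutions:
    discard x if it is dominated, otherwise add it and evict everything it dominates."""
    if any(dominates(y, x) for y in archive):
        return archive                                   # case 1: x dominated, archive unchanged
    kept = [y for y in archive if not dominates(x, y)]   # case 2: evict solutions x dominates
    return kept + [x]                                    # cases 2-3: x joins the archive
```

Applying this rule to every member of the current population reproduces the per-iteration archive update of Step 9.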

Phase II: K-means Algorithm

Step 10: The centres $z_1, z_2, \ldots, z_K$ of the K initial clusters are randomly chosen from the n observations $x_1, x_2, \ldots, x_n$.

Step 11: A point $x_i$, $i = 1, 2, \ldots, n$, is assigned to cluster $C_j$, $j \in \{1, 2, \ldots, K\}$, if and only if

$$\| x_i - z_j \| \le \| x_i - z_p \|, \qquad p = 1, 2, \ldots, K, \ j \ne p. \qquad (8)$$

Step 12: The centres of the new clusters $z_1^*, z_2^*, \ldots, z_K^*$ are computed as follows:

$$z_j^* = \frac{1}{n_j} \sum_{x_i \in C_j} x_i, \qquad j = 1, 2, \ldots, K, \qquad (9)$$

where $n_j$ is the number of elements belonging to cluster $C_j$.

Step 13: If $z_i^* = z_i$, $i = 1, 2, \ldots, K$, then the algorithm terminates; otherwise go to Step 11.

After this phase, we get an initial centre for each predetermined cluster.


Identifying the centroids of the Pareto front

In order to identify a single point belonging to the Pareto front that represents a given cluster, the algorithm shown in Figure (7) is presented. Figure (7) gives an algorithm for identifying the centroid of each cluster in such a way that it lies on the Pareto front, while Figure (8) shows geometrically how the identification mechanism works.

INPUT: partition matrix $M = [m_{kj}]$, $k = 1, \ldots, K$, $j = 1, \ldots, n$, with $m_{kj} = 1$ if $x_j \in C_k$ and $m_{kj} = 0$ otherwise
INPUT: cluster centres $Z = (z_1, z_2, \ldots, z_K)$
For all $k \in \{1, \ldots, K\}$ do
    Find $z_k^* = x_i$ such that $\| x_i - z_k \| = \min \{ \| x_j - z_k \| : j = 1, \ldots, n, \ m_{kj} = 1 \}$
End for
OUTPUT: Pareto front centroids $Z^* = (z_1^*, z_2^*, \ldots, z_K^*)$

Figure 7: Algorithm for identifying the Pareto front centroids, taken from [17]

Figure 8: Geometric illustration of identifying the Pareto front centroids, taken from [17]
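The selection in Figure (7) reduces to snapping each cluster centre to its nearest cluster member, so the representative always lies on the front (a sketch with our own names; points are tuples of objective values):

```python
def pareto_front_centroids(points, labels, centres):
    """For each cluster k, return the member point closest to centre k."""
    def sqdist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

    reps = []
    for k, z in enumerate(centres):
        members = [p for p, l in zip(points, labels) if l == k]
        reps.append(min(members, key=lambda p: sqdist(p, z)))   # nearest member
    return reps
```

Unlike the raw cluster mean, the returned representative is always one of the clustered solutions, which is what places it on the Pareto front.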

RESULTS

The proposed algorithm is applied to various Pareto fronts of the MOO test instances of CEC09 [46], which exhibit different Pareto front characteristics, in order to demonstrate its ability to select the best compromise set of solutions. The problems cover different characteristics of MOOPs, namely convex Pareto fronts, nonconvex Pareto fronts, discrete Pareto fronts and non-uniformity of the solution distribution. Twenty problems from CEC2009 containing different shapes of Pareto front are selected and described in Table 1.

Table 1: MOO Test Instances for the CEC09


Figure 9: Pareto Front of problem UF2

Figure 10: Pareto Front of problem UF1

Figure 11: Pareto Front of problem UF4

Figure 12: Pareto Front of problem UF3

Figure 13: Pareto Front of problem UF6

Figure 14: Pareto Front of problem UF5

Figure 15: Pareto Front of problem UF8

Figure 16: Pareto Front of problem UF7

Figure 17: Pareto Front of problem UF10

Figure 18: Pareto Front of problem UF9

Figure 19: Pareto Front of problem CF2

Figure 20: Pareto Front of problem CF1


Figure 21: Pareto Front of problem CF4

Figure 22: Pareto Front of problem CF3

Figure 23: Pareto Front of problem CF6

Figure 24: Pareto Front of problem CF5

Figure 25: Pareto Front of problem CF8

Figure 26: Pareto Front of problem CF7

Figure 27: Pareto Front of problem CF10

Figure 28: Pareto Front of problem CF9

The algorithm (Phase I) was implemented for the 20 problems taken from CEC2009 with different Pareto front characteristics, as shown in Figures 9-28. Optimizing the above-formulated objective functions using multi-objective EAs yields a set of Pareto-optimal solutions rather than a single optimal solution. In practical applications, however, the DM needs to select a set of solutions of limited size. From the above results, the selected solutions have diversity characteristics, cover the entire Pareto front domain, and allow the DM to attain a reasonable representation of the Pareto-optimal front. In addition, the results guarantee that diversity among the selected solutions is achieved. Finally, the proposed algorithm has a diversity-preserving mechanism to overcome the curse of a huge Pareto front, as shown in Figures 29-48.
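As a rough illustration of the Phase I idea, the following sketch partitions a GA population in decision space with plain k-means and applies a distinct operator to each subpopulation. It is a minimal sketch only: the population is assumed to be a NumPy array, and `mutate_subpop` is a hypothetical per-cluster Gaussian mutation, not the exact GA operators used in the paper.

```python
import numpy as np

def partition_population(pop, k, iters=50, seed=1):
    """Plain Lloyd's k-means in decision space; subpopulation sizes are
    dynamic -- each individual joins whichever cluster it falls into."""
    rng = np.random.default_rng(seed)
    centroids = pop[rng.choice(len(pop), size=k, replace=False)]
    for _ in range(iters):
        dist = np.linalg.norm(pop[:, None, :] - centroids[None, :, :], axis=2)
        labels = dist.argmin(axis=1)
        centroids = np.array([pop[labels == j].mean(axis=0)
                              if np.any(labels == j) else centroids[j]
                              for j in range(k)])
    return [pop[labels == j] for j in range(k)]

def mutate_subpop(sub, rng):
    """Hypothetical per-cluster operator: Gaussian mutation whose step
    size scales with the spread of that subpopulation."""
    sigma = sub.std(axis=0) + 1e-12
    return sub + rng.normal(0.0, 0.1, size=sub.shape) * sigma

rng = np.random.default_rng(2)
population = rng.random((60, 4))          # 60 individuals, 4 decision variables
subpops = partition_population(population, k=3)
offspring = np.vstack([mutate_subpop(s, rng) for s in subpops])
```

Because cluster memberships change from generation to generation, the subpopulation sizes adapt automatically, which is the "dynamic size" property the two-phase scheme relies on.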


Figure 29: Pareto front centroids of problem UF1 for k=2, 5, 7, and 10 clusters

Figure 30: Pareto front centroids of problem UF2 for k=2, 5, 7, and 10 clusters


Figure 31: Pareto front centroids of problem UF3 for k=2, 5, 7, and 10 clusters

Figure 32: Pareto front centroids of problem UF4 for k=2, 5, 7, and 10 clusters


Figure 33: Pareto front centroids of problem UF5 for k=2, 5, 7, and 10 clusters

Figure 34: Pareto front centroids of problem UF6 for k=2, 5, 7, and 10 clusters


Figure 35: Pareto front centroids of problem UF7 for k=2, 5, 7, and 10 clusters

Figure 36: Pareto front centroids of problem UF8 for k=2, 5, 7, and 10 clusters


Figure 37: Pareto front centroids of problem UF9 for k=2, 5, 7, and 10 clusters

Figure 38: Pareto front centroids of problem UF10 for k=2, 5, 7, and 10 clusters


Figure 39: Pareto front centroids of problem CF1 for k=2, 5, 7, and 10 clusters

Figure 40: Pareto front centroids of problem CF2 for k=2, 5, 7, and 10 clusters


Figure 41: Pareto front centroids of problem CF3 for k=2, 5, 7, and 10 clusters

Figure 42: Pareto front centroids of problem CF4 for k=2, 5, 7, and 10 clusters


Figure 43: Pareto front centroids of problem CF5 for k=2, 5, 7, and 10 clusters

Figure 44: Pareto front centroids of problem CF6 for k=2, 5, 7, and 10 clusters


Figure 45: Pareto front centroids of problem CF7 for k=2, 5, 7, and 10 clusters

Figure 46: Pareto front centroids of problem CF8 for k=2, 5, 7, and 10 clusters


Figure 47: Pareto front centroids of problem CF9 for k=2, 5, 7, and 10 clusters

Figure 48: Pareto front centroids of problem CF10 for k=2, 5, 7, and 10 clusters

Solving MOOPs using multi-objective EAs does not give a single optimal solution (i.e., only one point), but a set of Pareto-optimal (nondominated) solutions, in which one objective value cannot be improved without sacrificing other objective values. For practical applications, however, we need to select one solution, or a set of a limited number of solutions, that preserves the diversity of the Pareto front. This paper presents a hybrid genetic K-means approach intended not only to make the Pareto front tractable but also to select the best compromise set of solutions depending on the DM's choice. The proposed methodology is tested on various MOO test instances from the Special Session and Competition 2009 (CEC09) to demonstrate its ability to select the best compromise set of solutions. Results of simulation experiments using several problems from CEC09 support our claim and show the ability of the proposed algorithm to solve the clustering problem for the Pareto front. From the above results, the selected solutions have diversity characteristics, cover the entire Pareto front domain, and allow the DM to attain a reasonable representation of the Pareto-optimal front. Finally, the proposed algorithm has a diversity-preserving mechanism to overcome the curse of a huge Pareto front.
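The Phase II selection step can be sketched as follows: cluster the nondominated objective vectors with k-means for a DM-chosen k, then return the k front members nearest the centroids. This is a minimal illustration under assumed names (`kmeans`, `representative_subset`) and a toy analytic front, not the paper's exact procedure.

```python
import numpy as np

def kmeans(points, k, iters=100, seed=0):
    """Plain Lloyd's k-means; returns the final centroids."""
    rng = np.random.default_rng(seed)
    centroids = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iters):
        dist = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        labels = dist.argmin(axis=1)
        new = np.array([points[labels == j].mean(axis=0)
                        if np.any(labels == j) else centroids[j]
                        for j in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    return centroids

def representative_subset(front, k):
    """Pick the k front members nearest the k-means centroids, so the DM
    sees k well-spread nondominated solutions instead of the whole front."""
    centroids = kmeans(front, k)
    dist = np.linalg.norm(front[:, None, :] - centroids[None, :, :], axis=2)
    return front[dist.argmin(axis=0)]

# toy convex bi-objective front: f2 = 1 - sqrt(f1)
f1 = np.linspace(0.0, 1.0, 200)
front = np.column_stack([f1, 1.0 - np.sqrt(f1)])
reps = representative_subset(front, k=5)   # 5 representative trade-offs
```

Returning actual front members (rather than the centroids themselves) guarantees that every reported solution is attainable, since a centroid of nondominated points need not itself lie on the front.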

CONCLUSION

Solving MOOPs using multi-objective EAs does not give a single optimal solution, but a set of Pareto-optimal solutions, in which one objective cannot be improved without sacrificing other objectives. For practical applications, however, we need to


select one solution, or a set of a limited number of solutions, that preserves the diversity of the Pareto front. In this paper, a new enhanced evolutionary algorithm enriched with a modified k-means clustering scheme is presented. In phase I, the K-means clustering technique partitions the population into a determined number of subpopulations with dynamic sizes. In this way, different genetic algorithm (GA) operators can be applied to each subpopulation, instead of one GA operator applied to the whole population. In phase II, the K-means algorithm is applied to make the algorithm practical by allowing the decision maker to control the precision of the Pareto-set approximation by choosing a suitable number of clusters. The proposed methodology is examined on various MOO test instances from the Special Session and Competition 2009 (CEC09) to demonstrate its ability to select the best compromise set of solutions. Results of simulation experiments using several problems from CEC09 support our claim and show the ability of the proposed algorithm to solve the clustering problem for the Pareto front.

The main conclusions of the research work presented in this project can be summarized as follows:

1. To improve the performance of GAs, a hybridization of GAs with k-means is presented. The algorithm consists of a classical genetic algorithm based on the ideas of co-evolution, coupled with k-means as a clustering technique, in order to improve the spread of the solutions found so far.

2. The numerical analysis shows that our hybrid algorithm is very efficient on benchmark problems.

3. The proposed approach was capable of eliminating the drawbacks of GAs.

4. It allows a decision maker to control the precision of the Pareto front by defining the desired number of clusters k.

5. Empirical results show that our approach is very efficient in detecting the compromise set of solutions from the Pareto set.

6. The proposed algorithm has been effectively applied to the MOP, with no limitation in solving higher-dimensional problems.

7. The proposed approach was capable of finding a well-distributed Pareto front.

8. The success of our approach on most of the test problems not only provides confidence but also stresses the importance of hybrid evolutionary algorithms in solving multiobjective optimization problems.

9. The practicality of using the proposed approach to handle complex problems of realistic dimensions has been demonstrated by the simplicity of the procedure.

Recommendations for Future Research

Several possible areas for future work concerning cluster analysis have arisen from this research. They are summarized as follows:

1. An implementation of the proposed algorithm for solving real-world problems.

2. The performance of the proposed algorithm can also be improved by further investigation of the various GA parameters.

3. Evolutionary algorithms, such as GAs, are a very important area of investigation; the complexity and diversity of their applications in the last few years have been enormous.

REFERENCES

[1] K. Miettinen, "Non-linear multiobjective optimization",

Dordrecht: Kluwer Academic Publisher (2002).

[2] A. A. Mousa, "A new Stopping and Evaluation Criteria

for Linear Multiobjective Heuristic Algorithms",

Journal of Global Research in Mathematical Archives ,

vol. 1( 8), August ( 2013).

[3] QiSen Cai, Defu Zhang, Bo Wu, Stehpen C.H. Leung ,

"A Novel Stock Fore-casting Model based on Fuzzy

Time Series and Genetic Algorithm", Procedia

Computer Science, vol. 18, pp. 1155-1162,( 2013).

[4] He Ni, Yongqiao Wang , "Stock index tracking by

Pareto efficient genetic algorithm", Applied Soft

Computing, Vol. 13(12), pp. 4519-4535, December

(2013).

[5] Janko Straßburg, Christian Gonzàlez-Martel, Vassil

Alexandrov, "Parallel genetic algorithms for stock

market trading rules", Procedia Computer Science,

Vol. 9, pp. 1306-1313, (2012).

[6] Xiao-Wei Xue, Min Yao, Zhaohui Wu, Jianhua Yang ,

"Genetic Ensemble of Extreme Learning Machine",

Neurocomputing, vol. 129, pp. 175- 184, (2014).

[7] Jiadong Yang, Hua Xu, Peifa Jia , "Effective search for

genetic-based machine learning systems via estimation

of distribution algorithms and embedded feature

reduction techniques", Neurocomputing, vol.113, pp.

105-121, August (2013).

[8] Hongming Yang, Jun Yi, Junhua Zhao, ZhaoYang

Dong , "Extreme learning machine based genetic

algorithm and its application in power system

economic dispatch ", Neurocomputing, vol. 102, pp.

154-162, February (2013).

[9] M.S.Osman , M.A.Abo-Sinna , and A.A. Mousa " A

Solution to the Optimal Power Flow Using Genetic

Algorithm ", Journal of Applied Mathematics &

Computation (AMC), vol .155(2), pp 391-405, August

(2004).

[10] M.S.Osman, M.A.Abo-Sinna, and A.A. Mousa

Page 20: K-means cluster algorithm-based evolutionary approach · PDF filex that optimize i fx. The evaluation ... The K-means clustering algorithm is more prominent because it is an intelligent

International Journal of Applied Engineering Research ISSN 0973-4562 Volume 13, Number 6 (2018) pp. 3789-3809

© Research India Publications. http://www.ripublication.com

3808

"Epsilon-Dominance based Multiobjective Genetic

Algorithm for Economic Emission Load Dispatch

Optimization Problem", Electric Power Systems

Research, vol. 79, pp.1561–1567,(2009).

[11] B. N. AL-Matrafi , A. A. Mousa , "Optimization

methodology based on Quantum computing applied to

Fuzzy practical unit commitment problem",

International Journal of Scientific & Engineering

Research, vol. 4(11), pp. 1138-1146, November(2013).

[12] Abd Allah A. Galal, Abd Allah A. Mousa, Bekheet N.

Al-Matrafi , "Hybrid Ant Optimization System for

Multiobjective Optimal Power Flow Problem Under

Fuzziness", Journal of Natural Sciences and

Mathematics, vol. 6( 2), pp.179-199, July (2013).

[13] C.M. Fonseca, P.J. Fleming, "Multiobjective optimization and multiple constraint handling with evolutionary algorithms, Part I: A unified formulation", IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans, vol. 28(1), pp. 26-37, (1998).
[14] K. Harada, J. Sakuma, I. Ono, S. Kobayashi, "Constraint-handling method for multiobjective function optimization: Pareto descent repair operator", in Proceedings of the Fourth International Conference on Evolutionary Multi-Criterion Optimization, pp. 156-170, (2007).

[15] Ahmed A. EL-Sawy, Mohamed A. Hussein, EL-Sayed M. Zaki, A.A. Mousa, "An Introduction to Genetic Algorithms: A Survey of Practical Issues", International Journal of Scientific & Engineering Research, vol. 5(1), pp. 252-262, January (2014).
[16] A.A. Mousa, M.A. El-Shorbagy, "Identifying a Satisfactory Operation Point for Fuzzy Multiobjective Environmental/Economic Dispatch Problem", American Journal of Mathematical and Computer Modelling, vol. 1(1), pp. 1-14, (2016).

[17] A. Al Malki, M.M. Rizk, M.A. El-Shorbagy, A.A. Mousa, "Identifying the Most Significant Solutions from Pareto Front Using Hybrid Genetic K-Means Approach", International Journal of Applied Engineering Research, vol. 11(14), pp. 8298-8311, (2016).
[18] A. Al Malki, M.M. Rizk, M.A. El-Shorbagy, and A.A. Mousa, "Hybrid Genetic Algorithm with K-Means for Clustering Problems", Open Journal of Optimization, vol. 5, pp. 71-83, (2016).
[19] A.A. Mousa, M.A. El-Shorbagy, and M.A. Farag, "K-means-Clustering Based Evolutionary Algorithm for Multi-objective Resource Allocation Problems", Appl. Math. Inf. Sci., vol. 11(6), pp. 1-12, (2017).

[20] M.S. Osman, M.A. Abo-Sinna, and A.A. Mousa, "IT-CEMOP: An Iterative Co-evolutionary Algorithm for Multiobjective Optimization Problem with Nonlinear Constraints", Journal of Applied Mathematics & Computation (AMC), vol. 183, pp. 373-389, (2006).
[21] Waiel F. Abd El-Wahed, A.A. Mousa, M.A. El-Shorbagy, "Integrating particle swarm optimization with genetic algorithms for solving nonlinear optimization problems", Journal of Computational and Applied Mathematics, vol. 235, pp. 1446-1453, (2011).
[22] M. Verma, M. Srivastava, N. Chack, A.K. Diswar, N. Gupta, "A Comparative Study of Various Clustering Algorithms in Data Mining", International Journal of Engineering Research and Applications (IJERA), vol. 2(3), pp. 1379-1384, (2012).

[23] S. Levine, I. Pilowski, D.M. Boulton, "The classification of depression by numerical taxonomy", The British Journal of Psychiatry, vol. 115, pp. 937-945, (1969).
[24] S. Dolnicar, "Using cluster analysis for market segmentation - typical misconceptions, established methodological weaknesses and some recommendations for improvement", University of Wollongong Research Online, (2003).
[25] F.R. Hodson, "Numerical typology and prehistoric archaeology", in F.R. Hodson, D.G. Kendall, and P.A. Tautu (eds.), Mathematics in the Archaeological and Historical Sciences, University Press, Edinburgh, (1971).
[26] D. Pratima, N. Nimmakanti, "Pattern Recognition Algorithms for Cluster Identification Problem", Special Issue of International Journal of Computer Science & Informatics (IJCSI), vol. II(1, 2), pp. 2231-5292, (2011).

[27] Doreswamy, K.S. Hemanth, "Similarity Based Cluster Analysis on Engineering Materials Data Sets", Advances in Intelligent Systems and Computing, vol. 167, pp. 161-168, (2012).
[28] D. Dilts, J. Khamalah, A. Plotkin, "Using Cluster Analysis for Medical Resource Decision Making", Cluster Analysis for Resource Decisions, vol. 15(4), pp. 333-347, (1995).

[29] A. Jain, "Data clustering: 50 years beyond k-means", Pattern Recognition Letters, vol. 31(8), pp. 651-666, (2010).
[30] J. MacQueen, "The classification problem", Western Management Science Institute Working Paper No. 5, (1962).
[31] L. Morissette, S. Chartier, "The k-means clustering technique: General considerations and implementation in Mathematica", Tutorials in Quantitative Methods for Psychology, vol. 9(1), pp. 15-24, (2013).
[32] A. Jain, "Data clustering: 50 years beyond k-means", Pattern Recognition Letters, vol. 31(8), pp. 651-666, (2010).
[33] Hongming Yang, Jun Yi, Junhua Zhao, ZhaoYang Dong, "Extreme learning machine based genetic algorithm and its application in power system economic dispatch", Neurocomputing, vol. 102, pp. 154-162, February (2013).

[34] Z. Michalewicz, "Genetic Algorithms + Data Structures = Evolution Programs", Springer-Verlag, 3rd edition, (1996).
[35] M.S. Osman, M.A. Abo-Sinna, and A.A. Mousa, "An Effective Genetic Algorithm Approach to Multiobjective Resource Allocation Problems (MORAPs)", Journal of Applied Mathematics & Computation, vol. 163, pp. 755-768, (2005).


[36] M.A. Farag, M.A. El-Shorbagy, I.M. El-Desoky, A.A. El-Sawy, "Genetic Algorithm Based on K-Means-Clustering Technique for Multi-Objective Resource Allocation Problems", British Journal of Applied Science & Technology, vol. 8(1), pp. 80-96, (2015).
[37] T. Murata, H. Ishibuchi, H. Tanaka, "Multiobjective Genetic Algorithm and its Application to Flowshop Scheduling", Computers and Industrial Engineering, vol. 30(4), pp. 957-968, (1996).

[38] K. Arai, X.Q. Bu, "ISODATA Clustering with Parameter (Threshold for Merge and Split) Estimation Based on GA: Genetic Algorithm", Reports of the Faculty of Science and Engineering, Saga University, vol. 36(1), pp. 17-23, (2007).
[39] C.A. Anderson, K.F. Jones, J. Ryan, "A Two-Dimensional Genetic Algorithm for the ISING Problem", Complex Systems, vol. 5, pp. 327-333, (1991).
[40] C.P. Cheng, C.W. Liu, "Unit Commitment by Annealing Genetic Algorithm", International Journal of Electrical Power & Energy Systems, vol. 24(2), pp. 149-158, (2002).

[41] L. Sun, Y. Zhang, C. Jiang, "A Matrix Real-Coded Genetic Algorithm to the Unit Commitment Problem", Electric Power Systems Research, vol. 76(9-10), pp. 716-728, (2006).
[42] T. Senjyu, H. Yamashiro, K. Uezato, T. Funabashi, "A Unit Commitment Problem by Using Genetic Algorithm Based on Unit Characteristic Classification", IEEE Power Engineering Society Winter Meeting, vol. 1, pp. 58-63, (2002).
[43] A. Trivedi, "Enhanced Multiobjective Evolutionary Algorithm based on Decomposition for Solving the Unit Commitment Problem", IEEE Transactions on Industrial Informatics, vol. 11(6), pp. 1346-1357, (2015).
[44] M.A. Farag, M.A. El-Shorbagy, I.M. El-Desoky, A.A. El-Sawy, A.A. Mousa, "Binary-Real Coded Genetic Algorithm Based K-Means Clustering for Unit Commitment Problem", Applied Mathematics, vol. 6, pp. 1873-1890, (2015).

[45] A.A. Mousa, M.A. El-Shorbagy, W.F. Abd-El-Wahed, "Local Search Based Hybrid Particle Swarm Optimization Algorithm for Multiobjective Optimization", Swarm and Evolutionary Computation, vol. 3, pp. 1-14, (2012).
[46] Q. Zhang, A. Zhou, S.Z. Zhao, P.N. Suganthan, W. Liu, S. Tiwari, "Multiobjective optimization test instances for the CEC09 special session and competition", Technical Report, University of Essex, Colchester, UK and Nanyang Technological University, Singapore, (2008).