Comparison of clustering algorithms for analog modulation classification
Hanifi Guldemir*, Abdulkadir Sengur
Department of Electronic and Computer Science, Technical Education Faculty, Firat University, 23119 Elazig, Turkey
Abstract
This study presents a comparative study of the implementation of clustering algorithms for classification of analog modulated
communication signals. A number of key features are used for characterizing the analog modulation types. Four different clustering algorithms
are used for classifying the analog signals. These most representative clustering techniques are K-means clustering, fuzzy C-means clustering,
mountain clustering and subtractive clustering. The performance of these clustering algorithms is compared and the advantages and disadvantages of the methods are examined. A validity analysis is performed. The study is supported with computer simulations.
© 2005 Elsevier Ltd. All rights reserved.
Keywords: Modulation recognition; Modulation classification; Clustering
1. Introduction
Recognition of the modulation type of an unknown
signal provides valuable insight into its structure, origin and
properties and is also crucial in order to retrieve the
information stored in the signal. In the past, modulation
recognition relied mostly on operators scanning the radio
frequency spectrum and checking it on the display. This
method is limited by the operator's skills and abilities. This
limitation has led to the development of automatic
modulation recognition systems. Modulation classification
algorithms have generally followed two main approaches:
the decision theoretic approach and the statistical pattern recognition approach.
Examples of the former are decisions based on signal
envelope characteristics, zero crossing, statistical moments
and phase-based classifiers (Aisbett, 1987; Dominiguez,
Borallo, & Garcia, 1991; Jondral, 1985). The pattern recognition
approach attempts to extract a feature vector for later use in
a statistical classifier (Polydoros & Kim, 1990; Soliman & Hsue, 1992). In the early 1990s, researchers became interested
in the use of artificial neural networks for automatic
modulation classification (Azzouz & Nandi, 1996; Ghani
& Lamontagne, 1993).
In this paper, a statistical pattern recognition based
modulation classification which uses clustering algorithms
is presented.
The main aim of most clustering techniques is to obtain
useful information by grouping data in clusters; within each
cluster the data exhibits similarity. Clustering is a popular
unsupervised pattern classification technique which par-
titions the input space into K regions based on some
similarity or dissimilarity metric (Jain & Dubes, 1988). The
number of clusters may or may not be known a priori.
Achieving such a partitioning requires a similarity measure
that takes two input vectors and returns a value reflecting
their similarity. Since most similarity metrics are sensitive
to the range of patterns in the input vectors, each of the input
variables must be normalized to within, say, the unit interval
[0,1]. On the other hand, clustering algorithms are used
extensively not only to organize and categorize data, but are
also useful for data compression and model construction.
Several algorithms for clustering data when the number of clusters is known a priori are available in the literature
(Kothari & Pitts, 1999; Maulik & Bandyopadhyay, 2000).
Signal processing systems for communications will
operate in open environments, where it is required that
signals of different sources be processed; these signals come
from different emitters, hence with different characteristics
and for different user requirements (Sebastiano, Serpico, &
Gianni, 1994). Communication signals traveling in space
with different modulation types and different frequencies
fall in a very wide band. These signals use a variety of
modulation techniques in order to send information from
Expert Systems with Applications 30 (2006) 642–649
www.elsevier.com/locate/eswa
0957-4174/$ - see front matter © 2005 Elsevier Ltd. All rights reserved.
doi:10.1016/j.eswa.2005.07.014
* Corresponding author. Tel.: +90 424 2370000x6542; fax: +90 424 2367064.
E-mail addresses: [email protected] (H. Guldemir), [email protected] (A. Sengur).
one location to another. Usually, it is required to identify
and monitor these signals for many applications, both
defense and civilian. Civilian applications may include
monitoring the non-licensed transmitters, while defense
applications may be electronic surveillance (Azzouz &
Nandi, 1996) or warfare purposes like threat detection
analysis and warning. Modulation recognition is extremely important in communication intelligence applications for
several reasons. Firstly, applying the signal to an improper
demodulator may partially or completely damage the signal
information content. Secondly, knowing the correct
modulation type helps to recognize the threat and to determine
a suitable jamming waveform. At the moment, the most
attractive application area is radio and other re-configurable
communication systems.
In this paper, we investigate how the conventional
clustering techniques perform on modulation
classification. For comparison, K-means clustering, fuzzy
C-means clustering, mountain clustering and subtractive
clustering techniques were selected and evaluated on a data
set obtained from analog modulated communication signals.
These modulations are amplitude modulated signals (AM),
double side band modulated signals (DSB), upper side band
signals (USB), lower side band signals (LSB) and frequency
modulated signals (FM). Two key features, the standard
deviation of the direct phase component of the intercepted
signal and the signal spectrum symmetry around the carrier,
are employed for forming the data points. A comparative
study is carried out based on computer simulations. The
analysis of modulation classification requires appropriate
definitions of similarity measures to characterize differences
between modulation types. However, such a comparative study
incorporating the characteristics of modulation types has not
been reported. The advantages and disadvantages of the
examined unsupervised clustering techniques, which are
K-means clustering, fuzzy C-means clustering, mountain
clustering and subtractive clustering, are investigated and
simulation results are given.
2. Clustering
Clustering in N-dimensional Euclidean space R^N is the
process of partitioning a given set of n points into a number,
say K, of groups or clusters in such a way that patterns in the
same cluster are similar in some sense and patterns in
different clusters are dissimilar in the same sense. Let the set
of n points {X_1, X_2, X_3, ..., X_n} be represented by the set S and
the K clusters be represented by C_1, C_2, ..., C_K. Then
C_i ≠ ∅ for i = 1, 2, ..., K; C_i ∩ C_j = ∅ for i = 1, 2, ..., K,
j = 1, 2, ..., K and i ≠ j; and ∪_{i=1}^{K} C_i = S. In this study, we examine four of
the most representative clustering techniques which are
frequently used in radial basis function networks and fuzzy
modeling (Jang & Sun, 1997). These are K-means
clustering, fuzzy C-means clustering, mountain clustering
method and subtractive clustering. More detailed discussions
of clustering techniques are presented in (Duda &
Hart, 1973; Schurmann, 1996).
2.1. K-means clustering
K-means clustering, also known as C-means clustering, has been applied to a variety of areas, including image
segmentation, speech data compression, data mining and so
on. The steps of the K-means algorithm are first
described in brief.
Step 1 Choose K initial cluster centers z_1, z_2, ..., z_K randomly from the n points {X_1, X_2, X_3, ..., X_n}.
Step 2 Assign point X_i, i = 1, 2, ..., n to the cluster C_j, j ∈ {1, 2, ..., K} if ||X_i − z_j|| < ||X_i − z_p||, p = 1, 2, ..., K and j ≠ p.
Step 3 Compute new cluster centers as follows

z_i^new = (1/n_i) Σ_{X_j ∈ C_i} X_j,  i = 1, 2, ..., K  (1)

where n_i is the number of elements belonging to the cluster C_i.
Step 4 If ||z_i^new − z_i|| < ε, i = 1, 2, ..., K, then terminate. Otherwise continue from step 2.
Note that if the process does not terminate normally at
step 4, it is executed for a maximum number of iterations.
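The four steps above can be sketched in a few lines of NumPy (a Python sketch rather than the authors' MATLAB code; the function name, the optional `init` argument and the tolerance defaults are illustrative assumptions):

```python
import numpy as np

def k_means(X, K, init=None, eps=1e-6, max_iter=100, seed=0):
    """K-means per steps 1-4: centers drawn from the data (step 1),
    nearest-center assignment (step 2), mean update, Eq. (1) (step 3),
    and an epsilon-based stopping test (step 4)."""
    rng = np.random.default_rng(seed)
    z = X[rng.choice(len(X), K, replace=False)] if init is None else np.asarray(init, float)
    for _ in range(max_iter):
        # step 2: assign each point to its nearest center
        d = np.linalg.norm(X[:, None, :] - z[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # step 3: recompute each center as the mean of its members, Eq. (1)
        z_new = np.array([X[labels == i].mean(axis=0) if np.any(labels == i) else z[i]
                          for i in range(K)])
        # step 4: stop when no center moves more than eps
        if np.all(np.linalg.norm(z_new - z, axis=1) < eps):
            return z_new, labels
        z = z_new
    return z, labels
```

As the paper notes, the result depends on the initial centers; the `init` argument makes that dependence easy to experiment with.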
2.2. Fuzzy C-means clustering
Fuzzy C-means clustering (FCM) is a data clustering algorithm
in which each data point belongs to a cluster to a degree
specified by a membership grade. Bezdek proposed this
algorithm in 1973 (Bezdek, 1973) as an improvement over
the K-means clustering described in the previous section.
FCM partitions a collection of n vectors X_i, i = 1, 2, ..., n into c
fuzzy groups, and finds a cluster center in each group such
that a cost function of a dissimilarity measure is minimized.
The steps of the FCM algorithm are described in brief.
Step 1 Choose the cluster centers c_i, i = 1, 2, ..., c randomly from the n points {X_1, X_2, X_3, ..., X_n}.
Step 2 Compute the membership matrix U using the following equation

μ_ij = 1 / Σ_{k=1}^{c} (d_ij / d_kj)^{2/(m−1)}  (2)

where d_ij = ||c_i − x_j|| is the Euclidean distance between the ith cluster center and the jth data point, and m is the fuzziness index.
Step 3 Compute the cost function according to the following equation. Stop the process if it is below a certain threshold

J(U, c_1, ..., c_c) = Σ_{i=1}^{c} J_i = Σ_{i=1}^{c} Σ_{j=1}^{n} μ_ij^m d_ij^2  (3)
Step 4 Compute new c fuzzy cluster centers c_i, i = 1, 2, ..., c using the following equation

c_i = Σ_{j=1}^{n} μ_ij^m X_j / Σ_{j=1}^{n} μ_ij^m  (4)

and go to step 2.
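The FCM loop can be sketched compactly in NumPy (illustrative; the fuzziness index and stopping tolerance defaults anticipate the settings reported in Section 6, and random center initialization is assumed):

```python
import numpy as np

def fcm(X, c, m=1.5, eps=1e-10, max_iter=500, seed=0):
    """Fuzzy C-means per steps 1-4: membership update Eq. (2),
    cost Eq. (3), center update Eq. (4); iterate until the cost
    change falls below eps."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), c, replace=False)]          # step 1
    J_prev = np.inf
    for _ in range(max_iter):
        d = np.linalg.norm(centers[:, None, :] - X[None, :, :], axis=2)
        d = np.fmax(d, 1e-12)                                  # guard against d = 0
        # step 2: mu[i, j] = 1 / sum_k (d_ij/d_kj)^(2/(m-1)), Eq. (2)
        u = 1.0 / ((d[:, None, :] / d[None, :, :]) ** (2.0 / (m - 1.0))).sum(axis=1)
        J = np.sum((u ** m) * d ** 2)                          # step 3, Eq. (3)
        um = u ** m
        centers = (um @ X) / um.sum(axis=1, keepdims=True)     # step 4, Eq. (4)
        if abs(J_prev - J) < eps:
            break
        J_prev = J
    return centers, u
```

Note that the memberships of each data point sum to one over the c clusters, which follows directly from Eq. (2).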
2.3. Mountain clustering
The mountain clustering method as proposed by Yager
and Filev (Yager & Filev, 1994) is a relatively simple and
effective approach to approximate estimation of cluster
centers on the basis of a density measure called mountain
function. The following is a brief description of the
mountain clustering algorithm.
Step 1 Initialize the cluster centers by forming a grid on the
data space, where the intersections of the grid lines
constitute the candidates for cluster centers,
denoted as a set C. A finer gridding increases the
number of potential cluster centers, but it also increases the computation required.
Step 2 Construct a mountain function that represents a density measure of the data set. The height of the mountain function at a point c ∈ C is computed as

m(c) = Σ_{i=1}^{N} exp(−||c − x_i||^2 / (2σ^2))  (5)

where x_i is the ith data point and σ is a design constant.
Step 3 Select the cluster centers by sequentially destructing the mountain function. First find the point in the candidate centers C that has the greatest value of the mountain function; this becomes the first cluster center c_1. Obtaining the next cluster center requires eliminating the effect of the just-identified center, which is typically surrounded by a number of grid points that also have high density scores. This is realized by revising the mountain function as follows

m_new(c) = m(c) − m(c_1) exp(−||c − c_1||^2 / (2β^2))  (6)

After the subtraction, the second cluster center is again selected as the point in C that has the largest value of the new mountain function. This process of revising the mountain function and finding the next cluster centers continues until a sufficient number of cluster centers is attained.
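The three steps above can be sketched in NumPy for low-dimensional data (the grid resolution and the σ and β defaults are illustrative assumptions):

```python
import numpy as np
from itertools import product

def mountain_clustering(X, n_centers, grid_pts=21, sigma=0.1, beta=0.1):
    """Mountain clustering: grid candidates (step 1), mountain
    function Eq. (5) (step 2), sequential destruction Eq. (6) (step 3)."""
    # step 1: candidate centers at the intersections of a grid over the data
    axes = [np.linspace(X[:, j].min(), X[:, j].max(), grid_pts)
            for j in range(X.shape[1])]
    C = np.array(list(product(*axes)))
    # step 2: mountain function, Eq. (5)
    dist2 = ((C[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)
    m = np.exp(-dist2 / (2.0 * sigma ** 2)).sum(axis=1)
    centers = []
    for _ in range(n_centers):
        # step 3: take the highest peak, then subtract its influence, Eq. (6)
        k = m.argmax()
        c1, m1 = C[k], m[k]
        centers.append(c1)
        m = m - m1 * np.exp(-((C - c1) ** 2).sum(axis=1) / (2.0 * beta ** 2))
    return np.array(centers)
```

The grid of `grid_pts` points per dimension makes the exponential cost in the dimension explicit: the candidate set C grows as grid_pts^M.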
2.4. Subtractive clustering
The mountain clustering method is simple and effective.
However, its computation grows exponentially with the
dimension of the problem. An alternative approach is
subtractive clustering, proposed by Chiu (Chiu, 1994), in
which the data points themselves are considered as the candidates
for cluster centers. The algorithm proceeds as follows.
Step 1 Consider a collection of n data points {X_1, X_2, X_3, ..., X_n} in an M-dimensional space. Since each data point is a candidate for a cluster center, a density measure at data point X_i is defined as

D_i = Σ_{j=1}^{n} exp(−||X_i − X_j||^2 / (r_a/2)^2)  (7)

where r_a is a positive constant. Hence, a data point will have a high density value if it has many neighboring data points. The radius r_a defines a neighborhood; data points outside this radius contribute only slightly to the density measure.
Step 2 After the density measure of each data point has been calculated, the data point with the highest density measure is selected as the first cluster center. Let X_c1 be the point selected and D_c1 its density measure. Next, the density measure of each data point X_i is revised as follows

D_i = D_i − D_c1 exp(−||X_i − X_c1||^2 / (r_b/2)^2)  (8)

where r_b is a positive constant.
Step 3 After the density calculation for each data point is
revised, the next cluster center Xc2 is selected and
all of the density calculations for data points are
revised again. This process is repeated until a
sufficient number of cluster centers are generated.
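A NumPy sketch of the procedure (the choice r_b = 1.5 r_a is a common recommendation from Chiu's formulation, taken here as an assumption; function and parameter names are illustrative):

```python
import numpy as np

def subtractive_clustering(X, n_centers, ra=0.5):
    """Subtractive clustering: every data point is a candidate center.
    Density Eq. (7) with radius ra; sequential revision Eq. (8) with rb."""
    rb = 1.5 * ra                      # common choice rb = 1.5*ra (assumption)
    dist2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)
    D = np.exp(-dist2 / (ra / 2.0) ** 2).sum(axis=1)          # Eq. (7)
    centers = []
    for _ in range(n_centers):
        k = D.argmax()                 # highest-density point becomes a center
        xc, Dc = X[k], D[k]
        centers.append(xc)
        # Eq. (8): subtract the influence of the newly selected center
        D = D - Dc * np.exp(-((X - xc) ** 2).sum(axis=1) / (rb / 2.0) ** 2)
    return np.array(centers)
```

Because the candidates are the n data points themselves, the cost is quadratic in n but independent of the dimension, which is exactly the advantage over mountain clustering discussed above.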
3. Feature clustering and classification
The first step in any classification system is to identify
the features that will be used to classify the data. Feature
extraction is a form of data reduction, and the choice of
feature set can affect the performance of the classification
system. Some classifications can be determined from a
single feature; however, most are confirmed by examining
several features at once (Sengur & Guldemir, 2003, 2005).
Algorithms that do this statistically are known as clustering
algorithms (Gerhard, 2000). Each piece of data, called a
case, corresponds to an observation of a modulated signal,
and the features extracted from that observation are called
parameters. Clustering algorithms work by examining a
large number of cases and finding groups of cases with
similar parameters. These groups are called clusters and are
considered to belong to the same category in the
classification.
4. Signal generation and implementation
In the modulation schemes, two types of signals are used.
These signals are a real voice signal and a simulated voice
signal, both band-limited to 4 kHz. The simulated voice
signal is produced by a first order autoregressive process of
the form (Dubuc, Boudreau, Patenaude, & Inkol, 1999)

y[k] = 0.95 y[k−1] + n[k]  (9)

where n[k] is a white Gaussian noise.
A modulated signal s(t) can be expressed by a function of the form

s(t) = a_c a(t) cos(2π f_c t + φ(t) + θ_0)  (10)

where a(t) is the signal envelope, f_c is the carrier frequency,
φ(t) is the phase, θ_0 is the initial phase and a_c controls the
carrier power. Particular modulation types are obtained by
encoding the baseband message into a(t) and φ(t). The
modulation types were restricted to the types commonly used in analog communication. AM, DSB, SSB and FM
signals are expressed as follows, respectively

s(t) = (1 + m x(t)) cos(2π f_c t)  (11)

s(t) = x(t) cos(2π f_c t)  (12)

s(t) = x(t) cos(2π f_c t) ∓ y(t) sin(2π f_c t)  (13)

s(t) = cos(2π f_c t + K_f ∫_{−∞}^{t} x(τ) dτ)  (14)

where m is the modulation index, x(t) is the modulating
signal, f_c is the carrier frequency, y(t) is the Hilbert
transform of x(t), and K_f is the frequency deviation coefficient of the FM
signal. In the expression given for SSB, the negative sign is
used for upper side-band (USB) signal generation and the positive sign is used for lower side-band (LSB) signal
generation.
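Eqs. (9)–(14) can be prototyped directly. The sketch below (Python/NumPy rather than the authors' MATLAB, with illustrative index values m = 0.3 and K_f = 5, and without the 4 kHz band-limiting of the message used in the paper) generates one signal of each type; the Hilbert transform y(t) needed for SSB is computed via the FFT:

```python
import numpy as np

fs, fc, N = 44_000, 15_000, 4096          # sampling rate, carrier (Hz), samples
t = np.arange(N) / fs

# Eq. (9): first order autoregressive simulated voice signal
rng = np.random.default_rng(0)
x = np.zeros(N)
for k in range(1, N):
    x[k] = 0.95 * x[k - 1] + rng.normal()
x /= np.abs(x).max()                      # normalize the message

# FFT-based Hilbert transform y(t) of x(t), needed for SSB in Eq. (13)
h = np.zeros(N)
h[0] = h[N // 2] = 1
h[1:N // 2] = 2
y = np.imag(np.fft.ifft(np.fft.fft(x) * h))

m, Kf = 0.3, 5.0                          # modulation indices (illustrative)
cos_c, sin_c = np.cos(2 * np.pi * fc * t), np.sin(2 * np.pi * fc * t)
am  = (1 + m * x) * cos_c                 # Eq. (11)
dsb = x * cos_c                           # Eq. (12)
usb = x * cos_c - y * sin_c               # Eq. (13), minus sign
lsb = x * cos_c + y * sin_c               # Eq. (13), plus sign
fm  = np.cos(2 * np.pi * fc * t + Kf * np.cumsum(x) / fs)   # Eq. (14), discrete integral
```

AWGN at a chosen SNR can then be added with `s + sigma * rng.normal(size=N)`, where `sigma` is set from the measured signal power.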
In order to increase the accuracy of the classification,
a number of simulations have been done with theoreti-
cally produced different modulated signals with different
parameters such as various signal-to-noise ratios and
modulation index. Sixty simulated signals of each of the
modulation types DSB, LSB and USB have been
generated. One hundred and twenty signals for AM with
modulation indices of 0.3 and 1, and 180 signals for FM
with frequency modulation indices of 1, 5, and 10 are
generated. In total, 480 modulated signals are used for the classification. These signals are generated and processed
using Matlab functions in the Communication Toolbox. An
additive white Gaussian noise with an SNR between 0
and 60 dB is used in the modeling of the theoretically
produced analog modulated signals.
In the simulations, a first order autoregressive 4 kHz
band-limited voice signal, sampled at 10 kHz,
resampled at 44 kHz and modulated onto a 15 kHz
sinusoidal carrier, is used. Fig. 1a shows the theoretically
produced signal. In order to incorporate the classification
system in real application, the system is also tested with the
real voice signal shown in Fig. 1b.
The experimental results comparing the examined
clustering algorithms are provided for a data set
generated from the five analog modulated signals.
This is a non-overlapping two-dimensional data set,
where the number of clusters is five. The data set
is generated as follows: the source signal is modulated
using the analog modulation schemes; an additive
white Gaussian noise (AWGN) is introduced to the
modulated signal such that the signal-to-noise ratio is
randomly distributed in the 0–60 dB range; and the
features are extracted from these modulated signals.
Fig. 1. (a) Theoretically produced first order autoregressive signal; (b) real voice signal.
Two key features are used in this study for generating the
data set. The first feature is the standard deviation of the
direct phase component of the intercepted signal and it is
calculated as follows (Azzouz & Nandi, 1996)

σ_dp = sqrt[ (1/C) Σ_{A_n(i) > t_a} φ_NL^2(i) − ( (1/C) Σ_{A_n(i) > t_a} φ_NL(i) )^2 ]  (15)

where C is the number of samples for which A_n(i) > t_a, A_n is
the instantaneous amplitude of the intercepted signal, φ_NL(i) is
the direct (non-linear) phase component, and t_a is a threshold
for the instantaneous amplitude below which the
instantaneous phase is very sensitive to noise. The second
key feature is the signal spectrum symmetry around the
carrier, which is given by

P = (P_L − P_U) / (P_U + P_L)  (16)

where

P_L = Σ_{i=1}^{f_cn} |X_c(i)|^2  (17)

P_U = Σ_{i=1}^{f_cn} |X_c(i + f_cn + 1)|^2  (18)

Here X_c(i) is the Fourier transform of the intercepted signal,
(f_cn + 1) is the sample number corresponding to the
carrier frequency f_c, and f_cn is defined as

f_cn = (f_c N_s / f_s) − 1  (19)
where it is assumed that the carrier frequency is known.
The P versus σ_dp features for the data sets from the
autoregressive voice signal and the real voice signal are shown
in Fig. 2.
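Both features can be sketched in NumPy (the analytic-signal construction, the amplitude normalization and the 0-based bin indexing are assumptions layered on top of Eqs. (15)–(19); the function name is illustrative):

```python
import numpy as np

def extract_features(s, fc, fs, ta=1.0):
    """Return (sigma_dp, P) for an intercepted real signal s.
    sigma_dp follows Eq. (15); P follows Eqs. (16)-(19)."""
    N = len(s)
    # analytic signal via an FFT-based Hilbert transform
    h = np.zeros(N)
    h[0] = h[N // 2] = 1
    h[1:N // 2] = 2
    z = np.fft.ifft(np.fft.fft(s) * h)
    An = np.abs(z) / np.abs(z).mean()            # normalized instantaneous amplitude
    # direct (non-linear) phase: unwrapped phase minus the carrier ramp
    phi_nl = np.unwrap(np.angle(z)) - 2 * np.pi * fc * np.arange(N) / fs
    phi_nl = phi_nl - phi_nl.mean()
    mask = An > ta                               # discard noise-sensitive samples
    C = mask.sum()
    sigma_dp = np.sqrt((phi_nl[mask] ** 2).sum() / C
                       - (phi_nl[mask].sum() / C) ** 2)        # Eq. (15)
    # spectrum symmetry around the carrier, Eqs. (16)-(19)
    kc = int(round(fc * N / fs))                 # carrier bin (0-based)
    fcn = kc - 1                                 # Eq. (19) with Ns = N
    Xc = np.fft.fft(s)
    PL = np.sum(np.abs(Xc[1:1 + fcn]) ** 2)             # fcn bins below fc, Eq. (17)
    PU = np.sum(np.abs(Xc[kc + 1:kc + 1 + fcn]) ** 2)   # fcn bins above fc, Eq. (18)
    return sigma_dp, (PL - PU) / (PU + PL)       # Eq. (16)
```

On a pure lower-sideband tone P approaches +1, on an upper-sideband tone it approaches −1, and for an AM signal σ_dp is near zero, which is what makes the two features discriminative.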
5. Cluster validity analysis
Clustering is a tool that attempts to assign patterns to
groups such that patterns within a group are more
similar to each other than patterns belonging to different
clusters. An intimately related issue is cluster
validity, which deals with the significance of the structure
imposed by a clustering method (Kothari & Pitts, 1999). In this
section, cluster validity indices which provide a measure
of the goodness of clustering on different partitions of a data set
are computed. Three well known cluster validity indices, the
Davies–Bouldin (DB) index (Davies & Bouldin, 1979), the Xie–Beni
(XB) index (Xie & Beni, 1991), and the PBM
index (Pakhira, Bandyopadhyay, & Maulik, 2004), are used for
the cluster validity. Below, only a brief description of
these indices is given; the detailed theory about these indices
can be found in the related references.
The Davies–Bouldin index is a function of the ratio of the
within-cluster scatter to the between-cluster scatter. The
objective is to minimize the DB index for constituting the
optimum clusters. The Xie–Beni index is the ratio of the fuzzy
within-cluster sum of squared distances to the product of the
number of elements and the minimum between-cluster
separation. The minimum value of this index indicates the
optimum number of clusters in the data set. The objective of the
PBM index is to maximize it in order to obtain the
actual number of clusters. The PBM index can be used to
associate a measure with different partitions of a data set;
its maximum value indicates the appropriate
partitioning. Hence, it is used for determining the
appropriate number of clusters in a data set. Table 1 presents
Fig. 2. P versus sdp (a) for first order autoregressive voice signal, (b) for real voice signal.
Table 1
Values of DB index, XB index and PBM index in the range [2–8]

Number of clusters   DB       XB       PBM
8                    0.4383   1.2710   246.6341
7                    0.3703   0.0502   313.6147
6                    0.2909   0.0560   334.1331
5                    0.2216   0.0220   360.3448
4                    0.2934   0.0313   102.5180
3                    0.3448   0.0823   30.2861
2                    0.2985   0.1484   23.5128
the variation of the DB index, XB index and PBM index
with the number of clusters in the range 2–8, when the
fuzzy c-means algorithm is used for clustering. The
optimum values of the indices are presented in boldface in
the table.
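As a concrete example, the DB index for a hard partition can be computed as follows (a NumPy sketch of the standard definition; names are illustrative, and sweeping the cluster count and taking the minimum mirrors the selection procedure behind Table 1):

```python
import numpy as np

def davies_bouldin(X, labels, centers):
    """Davies-Bouldin index: average over clusters of the worst-case
    ratio of summed within-cluster scatters to between-center distance.
    Lower values indicate a better partition."""
    K = len(centers)
    # S_i: mean distance of the points of cluster i to its center
    S = np.array([np.linalg.norm(X[labels == i] - centers[i], axis=1).mean()
                  for i in range(K)])
    db = 0.0
    for i in range(K):
        db += max((S[i] + S[j]) / np.linalg.norm(centers[i] - centers[j])
                  for j in range(K) if j != i)
    return db / K
```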
6. Results and discussion
In this study, the four most popular clustering algorithms
are examined and a comparative study on the analog
modulated communication signals is performed. The
performances of the clustering algorithms are tested
with MATLAB based computer simulations. The results
are shown in Figs. 3–7 for the real voice signal. The K-means
algorithm is widely used in pattern recognition applications,
but it may converge to values that are not optimal;
moreover, global solutions of large problems cannot be
found with a reasonable amount of computation. In this
study, the K-means algorithm converged to different values
depending on the initial cluster centers. In addition, the
number of clusters in the data set must be specified before
the process. The fuzzy C-means algorithm gave the best
results even when the initial cluster centers were changed,
but there is no guarantee that FCM always converges to an
optimal solution. Mountain clustering is based on what a
human does in visually forming clusters of a data set. Here,
the σ parameter affects the height as well as the smoothness
of the mountain function. The surface plot of the mountain
function with σ = 0.05 is shown in Fig. 4. The mountain
clustering results are satisfactory. However, its computation
grows exponentially with the dimension of the problem
because the method must evaluate the mountain function
over all grid points. The subtractive clustering method aims
to overcome this problem by
Fig. 3. K-means clustering.
Fig. 4. FCM clustering.
Fig. 5. Mountain clustering.
Fig. 6. Subtractive clustering with raZ0.5.
considering the data points themselves as the candidate
cluster centers. The mountain and subtractive methods do not
require the number of clusters in the data set to be specified
before the process. During the implementation of subtractive
clustering, the r_a parameter plays an important role in
forming the clusters. This is shown in Figs. 6 and 7.
For the FCM clustering technique, the fuzziness index
is taken as 1.5 and the number of iterations and the
desired error are 500 and 1×10^−10, respectively. Table 2
shows the performance of the fuzzy c-means algorithm.
As is clear from the table, 3 of the 60 DSB signals are
deduced as FM, 1 of the 60 LSB signals is deduced as
FM and 1 of the 60 USB signals is estimated as DSB.
In the k-means algorithm, only 64 of the 180 FM signals
are correctly classified; the remaining signals are estimated
as DSB (116) and LSB (6), as shown in Table 3.
If the peaks of the mountains in the mountain
clustering algorithm are given as initial cluster centers
to the fuzzy c-means (mountain fcm) and k-means
(mountain k-means) clustering algorithms, as proposed in
(Yager & Filev, 1994), the performances of these
techniques become much better, as shown in Tables
4 and 5. In this case, in the mountain fcm algorithm all of
the modulated signals except 1 of the USB signals are correctly
classified. Figs. 8 and 9 show the mountain fcm and
mountain k-means results.
Fig. 7. Subtractive clustering with raZ0.25.
Table 2
Performance of the fuzzy c-means algorithm

Actual modulation types    Estimated modulation types
(feature numbers)          AM (120)  DSB (58)  FM (184)  LSB (59)  USB (59)
AM (120)                   120       0         0         0         0
DSB (60)                   0         57        3         0         0
FM (180)                   0         0         180       0         0
LSB (60)                   0         0         1         59        0
USB (60)                   0         1         0         0         59
Table 5
Mountain fuzzy c-means

Actual modulation types    Estimated modulation types
(feature numbers)          AM (120)  DSB (61)  FM (180)  LSB (60)  USB (59)
AM (120)                   120       0         0         0         0
DSB (60)                   0         60        0         0         0
FM (180)                   0         0         180       0         0
LSB (60)                   0         0         0         60        0
USB (60)                   0         1         0         0         59
Table 3
Performance of the k-means algorithm

Actual modulation types    Estimated modulation types
(feature numbers)          AM (120)  DSB (177)  FM (70)  LSB (54)  USB (59)
AM (120)                   120       0          0        0         0
DSB (60)                   0         60         0        0         0
FM (180)                   0         116        64       6         0
LSB (60)                   0         0          6        54        0
USB (60)                   0         1          0        0         59
Table 4
Mountain K-means

Actual modulation types    Estimated modulation types
(feature numbers)          AM (120)  DSB (60)  FM (182)  LSB (59)  USB (59)
AM (120)                   120       0         0         0         0
DSB (60)                   0         59        1         0         0
FM (180)                   0         0         180       0         0
LSB (60)                   0         0         1         59        0
USB (60)                   0         1         0         0         59
Fig. 8. Mountain fuzzy c-means.
7. Conclusion
In this study, pattern recognition based modulation
classification using clustering techniques is presented. A
comparative study using four different clustering algorithms
is given. A basic introduction to automatic modulation
recognition was given, followed by a brief description of the
most representative clustering algorithms used in this paper.
Two key features are used in the classification. The presented
classification is not limited to any specific class of
modulations. A real voice signal and a first order
autoregressive voice signal are modulated using the analog
modulation schemes AM, FM, USB, LSB and DSB. An
additive white Gaussian noise is added to the modulated
signal in order to test the performance of the classification
in the presence of noise. The validity analysis is performed
and simulation results are given.
References
Aisbett, J. (1987). Automatic modulation recognition using time-domain parameters. Signal Processing, 13, 323–329.
Azzouz, E. E., & Nandi, A. K. (1996). Automatic modulation recognition of communication signals. Kluwer Academic Publishers.
Bezdek, J. C. (1973). Fuzzy mathematics in pattern classification. PhD thesis, Applied Math. Center, Cornell University, Ithaca.
Chiu, S. L. (1994). Fuzzy model identification based on cluster estimation. Journal of Intelligent and Fuzzy Systems, 2.
Davies, D. L., & Bouldin, D. W. (1979). A cluster separation measure. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1, 224–227.
Dominiguez, L., Borallo, J., & Garcia, J. (1991). A general approach to automatic classification of radio communication signals. Signal Processing, 22, 239–250.
Dubuc, C., Boudreau, D., Patenaude, F., & Inkol, R. (1999). An automatic modulation recognition algorithm for spectrum monitoring applications. IEEE International Conference on Communications (ICC'99).
Duda, R. O., & Hart, P. E. (1973). Pattern recognition and scene analysis. John Wiley and Sons.
Gerhard, D. (2000). Audio signal classification: An overview. Canadian Artificial Intelligence, 46.
Ghani, N., & Lamontagne, R. (1993). Neural networks applied to the classification of spectral features for automatic modulation recognition. Military Communications Conference (MILCOM '93), 111–115.
Jain, A. K., & Dubes, R. C. (1988). Algorithms for clustering data. Prentice-Hall, Englewood Cliffs, NJ.
Jang, J.-S. R., Sun, C.-T., & Mizutani, E. (1997). Neuro-fuzzy and soft computing. Prentice Hall.
Jondral, F. (1985). Automatic classification of high frequency signals. Signal Processing, 9, 177–190.
Kothari, R., & Pitts, D. (1999). On finding the number of clusters. Pattern Recognition Letters, 20, 405–416.
Maulik, U., & Bandyopadhyay, S. (2000). Genetic algorithm based clustering technique. Pattern Recognition, 33, 1455–1465.
Pakhira, M. K., Bandyopadhyay, S., & Maulik, U. (2004). Validity index for crisp and fuzzy clusters. Pattern Recognition, 37, 487–501.
Polydoros, A., & Kim, K. (1990). On the detection and classification of quadrature digital modulations in broad-band noise. IEEE Transactions on Communications, 38, 1199–1211.
Schurmann, J. (1996). Pattern classification. John Wiley and Sons.
Sebastiano, B., Serpico, F. R., & Gianni, V. (1994). Intelligent control of signal processing algorithms in communications. IEEE Journal on Selected Areas in Communications, 1553–1565.
Sengur, A., & Guldemir, H. (2003). Performance comparison of automatic analog modulation classifiers. 3rd International Advanced Technologies Symposium, 461–464.
Sengur, A., & Guldemir, H. (2005). An educational interface for automatic recognition of analog modulated signals. Journal of Applied Sciences, 5, 513–516.
Soliman, S. S., & Hsue, S. (1992). Signal classification using statistical moments. IEEE Transactions on Communications, 40, 908–915.
Xie, X. L., & Beni, G. A. (1991). A validity measure for fuzzy clustering. IEEE Transactions on Pattern Analysis and Machine Intelligence, 3, 841–846.
Yager, R. R., & Filev, D. P. (1994). Generation of fuzzy rules by mountain clustering. IEEE Transactions on Systems, Man and Cybernetics, 24, 209–219.
Yager, R. R., & Filev, D. P. (1994). Approximate clustering via the mountain method. IEEE Transactions on Systems, Man and Cybernetics, 24, 1279–1284.
Fig. 9. Mountain k-means.