IJSRD - International Journal for Scientific Research & Development| Vol. 3, Issue 10, 2015 | ISSN (online): 2321-0613
All rights reserved by www.ijsrd.com 57
Classification of Satellite Image and Validation Using Statistical Inference
M. Srinivasa Rao¹, Kartheek V. L.², Dr. T. Madhu³
¹,²Assistant Professor, ³Principal
¹Department of Computer Science Engineering
¹,²,³Swarnandhra College of Engineering and Technology
Abstract— Classification of Land Use/Land Cover (LULC) data from satellite images is extremely useful for designing thematic maps for the analysis of natural resources such as forest, agriculture, water bodies and urban areas. Satellite image classification involves grouping pixel values into meaningful categories and estimating areas by counting the pixels of each category. Manual classification by visual interpretation is accurate but time consuming and requires field experts. To overcome these difficulties, the present research work investigates efficient and effective automation of satellite image classification. Automated classification approaches are broadly divided into i) Supervised Classification, ii) Unsupervised Classification and iii) Object Based Classification. This paper presents the classification capabilities of the K-Means, Parallelepiped and Maximum Likelihood classifiers on multispectral spatial data (LISS-4). Using statistical inference, the classified results are validated against reference data collected from field experts. Among the three, the Maximum Likelihood classifier (MLC) gained significant credit by achieving the maximum overall accuracy and Kappa factor.
Key words: Land Use Land Cover, Pixel, Classification, LISS-4, Overall accuracy, Kappa Factor
I. INTRODUCTION
Satellite imagery is a source of large amounts of two-dimensional information recorded by satellite sensors. Satellite images are rich in content and play a crucial role in providing geographical information [1]. Satellite and remote sensing images provide quantitative and qualitative information that reduces the sophistication of field work and shortens study time [2]. Satellite remote sensing technologies collect temporal data in the form of images at regular intervals. The volume of data received at data centers is huge and is growing exponentially as the technology advances [3]. There is therefore a strong need for well-organized and constructive mechanisms to extract and interpret valuable information from massive collections of satellite images. Satellite image classification is a powerful technique to extract information from an enormous number of satellite images.
Satellite image classification is the process of grouping pixels into meaningful categories based on their numeric values [4]. It involves interpretation of remote sensing images and spatial data mining to study various natural resources such as forest, agriculture, water bodies and urban areas, and to determine the different land uses in an area [5].
This paper is structured as follows. Section II describes the hierarchy of satellite image classification techniques. Section III explains the various classification methods. Section IV describes the study areas and data sources. Section V presents the validation of results using statistical inference. Results and discussion are provided in Section VI. The final section presents the conclusion.
II. SATELLITE IMAGE CLASSIFICATION TECHNIQUES
Based on spatial resolution, satellite images are categorized into Low (coarser pixels), Medium (medium pixel size) and High (finer pixels) resolution satellite images (see Figure 1).
Fig. 1: Low, Medium And High Spatial Resolution Satellite Image.
There are several methods and techniques for satellite image classification (see Figure 2). These methods are generally classified into three categories [6]:
1) Manual classification
2) Automatic classification
3) Hybrid classification
Fig. 2: Hierarchy Of Satellite Image Classification Techniques
A. Manual Classification
Manual classification is a robust, efficient and effective technique because analysts perform the classification by visual interpretation based on the ground reality of the study area. However, this method consumes more time and requires field experts. The accuracy and efficiency of the classification depend on the analyst's knowledge of and familiarity with the field of study.
B. Automatic Classification
The performance of satellite image classification by visual interpretation depends on the analyst. To avoid this problem, classification can be done automatically by grouping pixels based on their similarity and dissimilarity. Based on the spatial resolution of the satellite image, automated classification methods are further divided into three categories: (a) Pixel Based Classification, (b) Sub-Pixel Based Classification and (c) Object Based Classification.
1) Pixel Based Classification:
As the typical remote sensing image classification technique, pixel based methods assume each pixel is pure and label it with a single land use and land cover type [7] [8]. Under this approach, remote sensing imagery is considered a collection of pixels with spectral information, and the spectral variables and their transformations are the input to a per-pixel classifier. In general, pixel based classification can be divided into two groups.
a) Unsupervised Classification:
With unsupervised classifiers, a remote sensing image is divided into a number of classes based on the natural groupings of image pixel values, without training data or prior knowledge of the study area [9][10]. The two most commonly used unsupervised classification algorithms are K-Means [11][12] and its variant, the Iterative Self-Organizing Data Analysis (ISODATA) technique. Recently, Support Vector Machine (SVM) and hierarchical clustering methods have also been developed for unsupervised classification [13]. The major drawbacks of unsupervised classification are that it is computationally intensive and its accuracy in producing the meaningful, required classes is often insufficient.
b) Supervised Classification:
In supervised classification (see Figure 3), satellite images are classified using a known input, called training data, provided by an analyst. The analyst selects representative sample sites with known class types (i.e. training samples or training signatures), compares the spectral properties of each pixel in the image with those of the training samples, and then labels the pixel with a class type according to decision rules [11]. A large number of supervised classification methods have been developed; the hierarchy of satellite image classification techniques is shown in Figure 2.
Fig. 3: Flow Chart For Supervised Classification Of Satellite Image
2) Sub-Pixel Based Classification:
Pixel based techniques assume that only one land use land cover type exists in each image pixel. Due to the heterogeneity of the landscape with respect to spatial resolution, this assumption does not hold for satellite images of medium and coarse spatial resolution [14]. Sub-pixel classification techniques are a better alternative because the spatial proportion of each land use land cover type can be accurately estimated [15]. Major sub-pixel techniques developed to address the mixed-pixel problem include fuzzy classification, neural networks [16] [17], regression modeling [18], regression tree analysis [19] and spectral mixture analysis [20].
3) Object Based Classification:
Compared to pure-pixel and sub-pixel methods, object based classification provides a new paradigm for classifying remote sensing imagery [21][22]. Instead of individual pixels, object based classifiers consider the geographical object as the basic unit of analysis. They generate image objects through image segmentation [23] and then classify the objects rather than the pixels. With image segmentation techniques, image objects are formed using spectral, spatial and contextual information. Object based approaches are considered more appropriate for Very High Resolution (VHR) remote sensing images, since they assume that geographic objects are formed by multiple image pixels. Many studies have shown that significantly higher accuracy is achieved with object based approaches [24][25][26].
Fig. 4: (A) Satellite Image (B) Object Based Classification
Of Satellite Image (C) Pixel Based Classification Of
Satellite Image
III. SATELLITE IMAGE CLASSIFICATION METHODS
This section describes a few up-to-date satellite image classification methods.
K-Means: In data mining, K-Means clustering [27] is an unsupervised classification (i.e. cluster) analysis method. It aims to partition n observations into k clusters such that each observation belongs to the cluster with the nearest mean. It is an iterative process. In the first step, an arbitrary initial cluster mean vector is assigned. In the second step, each pixel is assigned to the closest cluster. Finally, new cluster mean vectors are computed from all the pixels in each cluster. The second and final steps are repeated until the mean of each cluster no longer changes. The objective of the K-Means algorithm is to minimize the within-cluster variability.
The objective function is the sum of squared distances (see eq. 1) between each pixel and its assigned cluster center:
SS_dist = Σ_x ‖x − μ_C(x)‖²   (1)
where μ_C(x) is the mean of the cluster to which pixel x is assigned.
SS_dist can be minimized by minimizing the Mean Squared Error (MSE) (see eq. 2). Cluster variability can be measured using the MSE as
MSE = SS_dist / ((N − C) · b)   (2)
where N is the number of pixels, C is the number of clusters and b is the number of spectral bands.
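The iteration described above can be sketched in Python as follows. This is an illustrative, minimal implementation written for this paper summary, not the code used in the experiments; the synthetic two-cluster pixel data in the usage example is hypothetical.

```python
import numpy as np

def kmeans(pixels, k, max_iter=100):
    """Minimal K-Means sketch for multispectral pixels (one row per pixel,
    one column per band). Illustrative only; production work would use a
    tested library implementation."""
    # Step 1: arbitrary initial cluster mean vectors (an evenly spaced
    # sample of the input pixels is used here for determinism).
    means = pixels[np.linspace(0, len(pixels) - 1, k, dtype=int)].astype(float)
    labels = np.zeros(len(pixels), dtype=int)
    for _ in range(max_iter):
        # Step 2: assign each pixel to the closest cluster mean.
        dists = np.linalg.norm(pixels[:, None, :] - means[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Step 3: recompute each cluster mean; stop when no mean changes.
        new_means = means.copy()
        for c in range(k):
            members = pixels[labels == c]
            if len(members):
                new_means[c] = members.mean(axis=0)
        if np.allclose(new_means, means):
            break
        means = new_means
    ss_dist = ((pixels - means[labels]) ** 2).sum()  # the SS_dist objective of eq. (1)
    return labels, means, ss_dist

# Hypothetical usage: two well-separated groups of 3-band pixels.
pixels = np.vstack([np.zeros((20, 3)), np.full((20, 3), 10.0)])
labels, means, ss = kmeans(pixels, 2)
```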
A. Iterative Self-Organizing Data Analysis (ISODATA):
The ISODATA [28] algorithm allows the set of clusters to be adjusted automatically during the iterations by merging similar clusters and splitting clusters (see Figure 5). Two clusters are merged if either the number of pixels in a cluster is less than a certain threshold or the centers of the two clusters are closer than a certain threshold. A cluster is split if its standard deviation exceeds a threshold value and its number of pixels is at least twice the threshold on the minimum number of pixels.
Fig. 5: ISODATA Classification Pixels Using Cluster Means
The ISODATA algorithm is similar to the K-Means algorithm [29], with the difference that ISODATA allows a varying number of clusters, whereas K-Means assumes that the number of clusters is known in advance.
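The merge and split rules above can be sketched as a single adjustment pass in Python. This is an illustrative sketch only; the threshold parameters and the split heuristic (cutting the widest band at the cluster mean) are one common choice, not necessarily the paper's.

```python
import numpy as np

def isodata_adjust(clusters, min_pixels, min_center_dist, max_std):
    """One ISODATA merge/split pass (sketch). `clusters` is a list of 2-D
    arrays (the pixels of each cluster); the thresholds are the user-set
    ISODATA parameters."""
    # Split: a cluster whose per-band standard deviation exceeds max_std and
    # which holds at least twice the minimum pixel count is cut along its
    # widest band at the cluster mean.
    out = []
    for c in clusters:
        std = c.std(axis=0)
        if std.max() > max_std and len(c) > 2 * min_pixels:
            band = std.argmax()
            mid = c[:, band].mean()
            out += [c[c[:, band] <= mid], c[c[:, band] > mid]]
        else:
            out.append(c)
    # Merge: a cluster whose mean is closer than min_center_dist to another
    # (or that is smaller than min_pixels) is combined with its nearest one.
    merged = True
    while merged and len(out) > 1:
        merged = False
        means = np.array([c.mean(axis=0) for c in out])
        for i in range(len(out)):
            d = np.linalg.norm(means - means[i], axis=1)
            d[i] = np.inf
            j = d.argmin()
            if d[j] < min_center_dist or len(out[i]) < min_pixels:
                out[j] = np.vstack([out[i], out[j]])
                out.pop(i)
                merged = True
                break
    return out

# Hypothetical usage: one over-wide cluster is split into two.
wide = np.vstack([np.zeros((10, 2)), np.full((10, 2), 8.0)])
result = isodata_adjust([wide], min_pixels=3, min_center_dist=1.0, max_std=2.0)
```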
B. Support Vector Machine (SVM):
SVM [30] is a classification system derived from statistical learning theory. It separates the classes with a decision surface that maximizes the margin between them. The surface is often called the optimal hyperplane, and the data points closest to the hyperplane are called support vectors (see Figure 6). By maximizing the margin between the data points and the decision boundary, misclassification errors can be minimized [33].
Consider binary classification with N training samples, where each sample is a tuple (x_i, y_i) (i = 1, 2, ..., N), x_i = (x_i1, x_i2, ..., x_id)^T is the attribute set of the i-th sample and y_i denotes its class label.
Fig. 6: Margin Of Decision Boundary In Binary SVM Classifier
The decision boundary of a linear classifier can be written as
w · x + b = 0   (3)
If we label all the circles as class +1 and all the stars as class −1, then we can predict the class label y of any test sample z as y = +1 if w · z + b > 0 and y = −1 if w · z + b < 0.
The margin d of the decision boundary (see eq. 4) is given by the distance between the two bounding hyperplanes:
d = 2 / ‖w‖   (4)
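The decision rule of eq. (3) and the margin of eq. (4) can be illustrated directly. The weight vector and bias below are chosen by hand for the sketch, not learned by an SVM solver.

```python
import numpy as np

# Hand-picked linear decision boundary w·x + b = 0 (eq. 3); illustrative only.
w = np.array([1.0, 1.0])
b = -4.0

def predict(z):
    """Label +1 (the 'circles') if w·z + b > 0, else -1 (the 'stars')."""
    return 1 if w @ z + b > 0 else -1

# Margin of the decision boundary, d = 2 / ||w|| (eq. 4).
margin = 2.0 / np.linalg.norm(w)
```

A real SVM would choose w and b to maximize this margin over the training samples; here they only demonstrate how the two formulas are applied.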
C. Minimum Distance Classification:
The minimum distance to means approach [31] is a supervised classification approach in which the decision rule computes the spectral distance between the measurement vector of the candidate pixel and the mean vector of each signature (see Figure 7). This classifier is suitable when each class can be represented by a single mean vector [34].
Fig. 7: Calculation Of Minimum Distance Between Centers Of Three Classes And Candidate Pixel With Respect To Bands A&B
The distance (see eq. 5) is calculated and the candidate pixel is assigned to the class with the smallest spectral Euclidean distance (minimum distance) [32]:
D_ab = √( Σ_{i=1}^{n} (a_i − b_i)² )   (5)
where D_ab is the distance between class a and pixel b, a_i is the mean spectral value of class a in band i, b_i is the spectral value of pixel b in band i, and n is the number of spectral bands.
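Eq. (5) translates into a few lines of Python. The class names and mean spectral vectors below are hypothetical, standing in for signatures derived from training data.

```python
import numpy as np

def minimum_distance_classify(pixel, class_means):
    """Assign `pixel` (an n-band vector) to the class whose mean spectral
    vector is closest in Euclidean distance D_ab (eq. 5). `class_means`
    maps class name -> mean vector built from training signatures."""
    dists = {name: np.sqrt(((mean - pixel) ** 2).sum())
             for name, mean in class_means.items()}
    return min(dists, key=dists.get)

# Hypothetical 3-band class signatures and a candidate pixel.
means = {"water": np.array([20.0, 30.0, 10.0]),
         "urban": np.array([90.0, 85.0, 80.0]),
         "crop":  np.array([40.0, 70.0, 35.0])}
label = minimum_distance_classify(np.array([42.0, 68.0, 33.0]), means)
```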
D. Mahalanobis Distance Classifier:
The Mahalanobis distance classifier [34] [35] is similar to the minimum distance approach, but it uses the covariance matrix of each class for satellite image classification:
D_x = (X − M_i)^T Σ_i^{-1} (X − M_i)   (6)
where Σ_i is the pixel covariance matrix of class i (i = 1, 2, ..., n) and M_i is the mean vector of class i.
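A sketch of the classifier implied by eq. (6): compute D_x for each class and pick the minimum. The per-class means and covariance matrices below are hypothetical placeholders for statistics estimated from training pixels.

```python
import numpy as np

def mahalanobis_classify(pixel, class_stats):
    """Assign `pixel` to the class minimizing
    D_x = (X - M_i)^T Σ_i^{-1} (X - M_i) (eq. 6).
    `class_stats` maps class name -> (mean vector, covariance matrix)."""
    scores = {}
    for name, (mean, cov) in class_stats.items():
        diff = pixel - mean
        scores[name] = diff @ np.linalg.inv(cov) @ diff
    return min(scores, key=scores.get)

# Hypothetical two-class, two-band statistics.
stats = {"a": (np.zeros(2), np.eye(2)),
         "b": (np.full(2, 3.0), np.eye(2))}
```

With identity covariances the rule reduces to minimum (squared) Euclidean distance; the covariance term is what lets the classifier account for differently shaped class distributions.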
E. Parallelepiped Classification:
The parallelepiped classifier [39] divides each axis of the multispectral attribute space into decision regions, called classes, on the basis of the range (i.e. lowest and highest values) of the training pixels. The correctness of the classifier depends on how well these ranges reflect the population statistics of each class (see Figure 8).
Fig. 8: Classification Of Pixel Data Based On Lowest (µ_a1 − 2s), Highest (µ_a1 + 2s) And Mean (µ_a1) Values On Band A And Lowest (µ_b1 − 2s), Highest (µ_b1 + 2s) And Mean (µ_b1) Values On Band B Of Class-1 Using The Parallelepiped Approach.
In two-dimensional space this forms as many rectangular boxes as there are classes. All pixels that fall inside a box are labeled with that class. The parallelepiped classifier is computationally efficient, but overlaps between boxes may lead to misclassification.
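The box test can be sketched as follows. The per-band [low, high] ranges (e.g. mean ± 2·stddev, as in Figure 8) are hypothetical here; note that resolving overlapping boxes by first match is one source of the misclassification mentioned above.

```python
import numpy as np

def parallelepiped_classify(pixel, class_ranges):
    """Label a pixel with the first class whose per-band [low, high] box
    contains it; a pixel inside no box stays unclassified (None).
    `class_ranges` maps class name -> (low vector, high vector)."""
    for name, (low, high) in class_ranges.items():
        if np.all(pixel >= low) and np.all(pixel <= high):
            return name
    return None

# Hypothetical two-band ranges for two classes.
ranges = {"water": (np.array([10.0, 20.0]), np.array([30.0, 40.0])),
          "urban": (np.array([60.0, 60.0]), np.array([90.0, 95.0]))}
```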
F. Maximum Likelihood Classification (MLC):
MLC, also known as the Bayesian classifier, is a statistical method for supervised classification [32] in which each pixel is assigned to the class for which its likelihood is maximum. The likelihood L_k that a pixel X belongs to class k (see eq. 7) is measured in terms of its posterior probability [37]:
L_k = P(k) · P(X|k) / Σ_i P(i) · P(X|i)   (7)
where P(k) is the prior probability of class k and P(X|k) is the conditional probability (probability density function) of X given class k. Assuming equal priors, P(k) and the denominator are common to all classes, so L_k depends only on the probability density function.
Fig. 9: Concept Of Maximum Likelihood Classification Based On Probability Density With Respect To Each Class
The probability density function of the normal distribution used to calculate the likelihood (see eq. 8) can be expressed as follows [38]:
L_k(X) = 1 / ((2π)^{n/2} |Σ_k|^{1/2}) · exp(−½ (X − μ_k)^T Σ_k^{-1} (X − μ_k))   (8)
where L_k(X) is the likelihood that pixel X belongs to class k, n is the number of satellite image bands, μ_k is the mean vector of class k and Σ_k is the variance-covariance matrix of class k.
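Eq. (8), together with the equal-prior argument after eq. (7), gives the following sketch of an MLC decision rule. The class statistics in the usage example are hypothetical stand-ins for values estimated from training signatures.

```python
import numpy as np

def gaussian_likelihood(x, mean, cov):
    """Multivariate normal density L_k(X) of eq. (8) for one class."""
    n = len(x)
    diff = x - mean
    expo = -0.5 * diff @ np.linalg.inv(cov) @ diff
    return np.exp(expo) / np.sqrt(((2 * np.pi) ** n) * np.linalg.det(cov))

def mlc_classify(x, class_stats):
    """Assign x to the class with maximum likelihood (equal priors assumed).
    `class_stats` maps class name -> (mean vector, covariance matrix)."""
    likes = {name: gaussian_likelihood(x, m, c)
             for name, (m, c) in class_stats.items()}
    return max(likes, key=likes.get)

# Hypothetical two-band statistics for two classes.
stats = {"a": (np.zeros(2), np.eye(2)),
         "b": (np.full(2, 4.0), np.eye(2))}
```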
G. K-Nearest Neighbor Classifier:
KNN classification [36][37] is based on a majority vote of the K nearest neighbors, measured by Euclidean distance in feature space (see eq. 9), where K specifies the number of neighbors to be used. It does not require a separate training step.
Let D = {(x, y)} be the set of training examples, k the number of nearest neighbors, and z = (x', y') a test example. The Euclidean distance is
d(x', x) = √( Σ_j (x'_j − x_j)² )   (9)
Once the list of nearest neighbors D_z is obtained, the test example is classified by the majority class of its nearest neighbors (see eq. 10):
y' = argmax_v Σ_{(x_i, y_i) ∈ D_z} I(v = y_i)   (10)
where v is a class label, y_i is the class label of one of the nearest neighbors and I(·) is the indicator function.
A drawback of the KNN classifier is that some test records may not be classified correctly because they do not match any training example.
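Eqs. (9) and (10) can be combined into a short sketch. The training points and labels below are hypothetical.

```python
import numpy as np
from collections import Counter

def knn_classify(z, X, y, k=3):
    """Majority vote among the k training samples nearest to z in Euclidean
    distance (eqs. 9-10). X holds one training sample per row, y the labels."""
    d = np.linalg.norm(X - z, axis=1)      # eq. (9) for every training sample
    nearest = np.argsort(d)[:k]            # indices of the k nearest neighbors
    votes = Counter(y[i] for i in nearest) # eq. (10): count labels, take argmax
    return votes.most_common(1)[0][0]

# Hypothetical two-band training data with two classes.
X = np.array([[0, 0], [0, 1], [1, 0], [5, 5], [5, 6], [6, 5]], dtype=float)
y = np.array(["a", "a", "a", "b", "b", "b"])
```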
H. Seeded Region Growing (SRG):
In the SRG technique [40], the image is segmented into regions with respect to a set of g seeds. Given the seed set S = {s_1, s_2, ..., s_g}, at each step one additional pixel is included in one of the seed sets. The initial seeds are thereby replaced by the centroids of the generated homogeneous regions R = {R_1, R_2, ..., R_g} as further pixels are added gradually. Pixels in the same region are labeled as one class, pixels in different regions are labeled with different classes, and the remaining pixels are called unallocated pixels [41].
The set H of unallocated pixels that border at least one region is formulated as (see eq. 11)
H = { (x, y) ∉ ⋃_{i=1}^{g} R_i | N(x, y) ∩ ⋃_{i=1}^{g} R_i ≠ ∅ }   (11)
where N(x, y) denotes the set of immediate neighbors of pixel (x, y). The difference δ_i(x, y) between a testing pixel (x, y) and its adjacent labeled region R_i is defined as
δ_i(x, y) = ‖ C(x, y) − C̄_i ‖   (12)
where C(x, y) indicates the values of the three color components of the testing pixel and C̄_i represents the average of the three color components of the homogeneous region R_i, computed about the centroid of R_i.
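The growth loop can be sketched with a priority queue ordered by the difference δ of eq. (12). This illustrative version works on a single-band image (one intensity instead of three color components) and is not the paper's implementation.

```python
import numpy as np
import heapq

def seeded_region_growing(img, seeds):
    """SRG sketch on a single-band image. `seeds` is a list of (row, col)
    positions, one per region. At each step the unallocated pixel bordering
    a region with the smallest intensity difference to that region's mean
    (eq. 12, one band) is absorbed; region means update as regions grow."""
    labels = np.full(img.shape, -1, dtype=int)   # -1 = unallocated
    sums, counts = [], []
    heap = []

    def push_neighbours(i, j, r):
        # Enqueue the unallocated 4-neighbours of (i, j) as candidates for
        # region r, keyed by their current difference to the region mean.
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < img.shape[0] and 0 <= nj < img.shape[1] \
                    and labels[ni, nj] == -1:
                delta = abs(float(img[ni, nj]) - sums[r] / counts[r])
                heapq.heappush(heap, (delta, ni, nj, r))

    for r, (i, j) in enumerate(seeds):
        labels[i, j] = r
        sums.append(float(img[i, j]))
        counts.append(1)
    for r, (i, j) in enumerate(seeds):
        push_neighbours(i, j, r)

    while heap:
        _, i, j, r = heapq.heappop(heap)
        if labels[i, j] != -1:
            continue                 # already absorbed by some region
        labels[i, j] = r
        sums[r] += float(img[i, j])
        counts[r] += 1
        push_neighbours(i, j, r)
    return labels

# Hypothetical image: left half intensity 0, right half intensity 10.
img = np.array([[0, 0, 10, 10]] * 4, dtype=float)
labels = seeded_region_growing(img, [(0, 0), (0, 3)])
```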
IV. STUDY AREAS AND DATA SOURCES
A. Study Areas:
The LISS-IV satellite image is multispectral spatial data with three bands (B2, B3, B4) and 5.8 m spatial resolution. To demonstrate the capabilities of the different classification techniques, two study areas, each an approximately 3 × 1.5 km rectangle located in Eluru city, AP, India, were selected from a LISS-IV satellite image. These areas contain various land covers such as urban area, aqua ponds, agriculture and sandy area. Study Area-I, located between 16° 42' 33.10" N, 81° 05' 43.61" E and 16° 41' 30.74" N, 81° 08' 46.70" E (see Figure 10), covers the urban and its
surrounding areas in the middle of Eluru city, and Study Area-II, located between 16° 40' 38.10" N, 81° 08' 35.39" E and 16° 38' 57.03" N, 81° 09' 43.74" E (see Figure 11), covers the aqua and agriculture fields on the outskirts of Eluru city.
Fig. 10: Study Area-I Shown In Blue Window In Satellite
Image.
Fig. 11: Study Area-II Shown In Blue Window On Satellite
Image.
B. Reference Data:
In order to estimate the accuracy of the classification undertaken in this research, reference data was captured by digitizing different areas such as urban, aqua and agriculture using visual interpretation by a field expert [47]. To evaluate Study Area-I, five different classes (see Figure 12) were digitized, and for Study Area-II, six different classes (see Figure 13) were digitized in the form of polygons.
Fig. 12: Reference Data of Study Area-I
Fig. 13: Reference Data of Study Area-II
V. VALIDATION
Accuracy assessment [43] of satellite image classification techniques can be undertaken using the confusion matrix (see Figure 14) and Kappa statistics. The Kappa Index of Agreement (KIA) is a statistical measure adopted for accuracy assessment in land use and land cover analysis of satellite images. It is often used to check the accuracy of classified satellite images versus real ground truth data. All diagonal elements of the confusion matrix (see Figure 14) represent classified pixels that agree with the ground truth, and all non-diagonal elements represent errors of omission (exclusion) or errors of commission (inclusion) [44].
                          Reference Data (Ground truth)
                  C1    C2    C3    C4    C5    C6    Row Total
Classified  C1    N11   N12   N13   N14   N15   N16   N1+
data        C2    N21   N22   N23   N24   N25   N26   N2+
            C3    N31   N32   N33   N34   N35   N36   N3+
            C4    N41   N42   N43   N44   N45   N46   N4+
            C5    N51   N52   N53   N54   N55   N56   N5+
            C6    N61   N62   N63   N64   N65   N66   N6+
Column Total      N+1   N+2   N+3   N+4   N+5   N+6   N
Fig. 14: A Model Of Confusion Matrix For Six Classes
The error of omission is measured through the ratio between the number of correctly assigned pixels in each class and the number of reference pixels of that class; this ratio is called the producer accuracy P_Ci:
P_Ci = N_ii / N_+i   (13)
where N_ii is the number of pixels correctly classified to class C_i and N_+i is the number of pixels of class C_i in the reference data.
The error of commission is measured through the ratio between the number of correctly assigned pixels in each class and the total number of pixels assigned to that class; this ratio is called the user accuracy U_Ci:
U_Ci = N_ii / N_i+   (14)
where N_ii is the number of pixels correctly classified to class C_i and N_i+ is the number of pixels assigned to class C_i in the classified data.
Overall accuracy is the ratio between the total number of correctly classified pixels (the diagonal elements of the confusion matrix) and the total number of reference pixels:
O_p = (Σ_{i=1}^{k} N_ii) / N   (15)
The Kappa statistic measures the difference between the actual agreement of the reference data with an automated classifier and the chance agreement of the reference data with a random classifier:
K = (N · Σ_{i=1}^{k} N_ii − Σ_{i=1}^{k} N_i+ · N_+i) / (N² − Σ_{i=1}^{k} N_i+ · N_+i)   (16)
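Eqs. (15) and (16) can be checked numerically against the K-Means confusion matrix of Study Area-I reported in Table 1 (rows = classified data, columns = reference data); the sketch below reproduces the tabulated overall accuracy of 70% and Kappa of 0.621.

```python
import numpy as np

# Confusion matrix from Table 1 (Study Area-I, K-Means classifier).
cm = np.array([[11, 1, 0, 0, 0],
               [ 1, 7, 1, 0, 1],
               [ 2, 1, 4, 1, 1],
               [ 0, 2, 1, 4, 1],
               [ 0, 0, 1, 1, 9]])
N = cm.sum()
diag = np.trace(cm)
overall_accuracy = diag / N                        # eq. (15): Σ N_ii / N
chance = (cm.sum(axis=1) * cm.sum(axis=0)).sum()   # Σ N_i+ · N_+i
kappa = (N * diag - chance) / (N**2 - chance)      # eq. (16)
```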
The Kappa factor normally ranges from 0 to 1, and a value near zero represents a poor classification. If the chance agreement is large, the Kappa value can even be negative, indicating very poor classifier performance.
Based on its value, each pixel is labeled with a class name by the classification technique. The K-Means unsupervised classifier with k = 5 was applied to Study Area-I, and its classification results (see Figure 15) show Blue: Water Body, Green: Agriculture, Yellow with Dots: Urban Area, Red: Trees, Light Green: Grass Land.
For both supervised classification methods (i.e. the Parallelepiped and Maximum Likelihood classifiers), signature data was generated from the reference data with the specified number of classes.
Fig. 15: Classified Image Of Study Area-I Using K-Means Classifier
Using the pixel value ranges, the Parallelepiped classifier generated the classification results shown in Figure 16. For the Maximum Likelihood classifier, the likelihood of each pixel with respect to each class was calculated, and each pixel was assigned by maximum likelihood into one of the five classes (see Figure 17); the maximum likelihood is decided using the conditional probability. In both cases the classification results show Blue: Water Body, Green: Agriculture, Yellow with Dots: Urban Area, Red: Trees, Light Green: Grass Land.
Fig. 16: Classified Image Of Study Area-I Using Parallelepiped Classifier.
Fig. 17: Classified Image Of Study Area-I Using MLC Classifier
Validation of each class was done by comparing the classified data with the reference data (see Figure 12), which contains 50 sample pixels: 14 pixels represent water body, 11 urban area, 7 grass land, 6 trees and 12 agriculture. For all three classification methods the confusion matrix was formulated and the producer and user accuracy of each class computed. To evaluate the correctness of the classification, the overall accuracy and Kappa factor were computed for the K-Means (see Table 1), Parallelepiped (see Table 2) and Maximum Likelihood (see Table 3) classifiers. The diagonal elements of a confusion matrix represent the pixels that are correctly classified, and the non-diagonal elements are misclassified pixels with respect to the ground truth (i.e. reference data).
                          Reference Data (Ground truth)
                 Water Body  Urban Area  Grass Land  Trees  Agriculture  Row Total
Classified  Water Body    11     1      0      0      0      12
data        Urban Area     1     7      1      0      1      10
            Grass Land     2     1      4      1      1       9
            Trees          0     2      1      4      1       8
            Agriculture    0     0      1      1      9      11
Column Total              14    11      7      6     12      50

Class         Omission Error        Commission Error    Overall Accuracy
              (Producer Accuracy)   (User Accuracy)
Water Body    78.57%                91.66%              70%
Urban Area    66.66%                70.00%
Grass Land    57.14%                44.44%
Trees         66.66%                50.00%
Agriculture   75.00%                81.80%              Kappa = 0.621
Table 1: Confusion Matrix And Kappa Factor To Validate Classes Of Study Area-I Using K-Means Classifier.
                          Reference Data (Ground truth)
                 Water Body  Urban Area  Grass Land  Trees  Agriculture  Row Total
Classified  Water Body    11     1      0      0      0      12
data        Urban Area     2     8      2      0      0      12
            Grass Land     1     1      4      1      1       8
            Trees          0     1      1      5      1       8
            Agriculture    0     0      0      0     10      10
Column Total              14    11      7      6     12      50

Class         Omission Error        Commission Error    Overall Accuracy
              (Producer Accuracy)   (User Accuracy)
Water Body    78.57%                91.66%              76.6%
Urban Area    72.72%                66.66%
Grass Land    57.14%                50.00%
Trees         83.33%                62.50%
Agriculture   83.33%                100.00%             Kappa = 0.69
Table 2: Confusion Matrix And Kappa Factor To Validate Classes Of Study Area-I Using Parallelepiped Classifier.
                          Reference Data (Ground truth)
                 Water Body  Urban Area  Grass Land  Trees  Agriculture  Row Total
Classified  Water Body    12     2      0      0      0      14
data        Urban Area     1     8      1      0      0      10
            Grass Land     1     1      6      1      1      10
            Trees          0     0      0      5      1       6
            Agriculture    0     0      0      0     10      10
Column Total              14    11      7      6     12      50

Class         Omission Error        Commission Error    Overall Accuracy
              (Producer Accuracy)   (User Accuracy)
Water Body    85.71%                85.71%              82%
Urban Area    72.72%                80.00%
Grass Land    85.71%                66.66%
Trees         83.33%                83.33%
Agriculture   83.33%                100.00%             Kappa = 0.82
Table 3: Confusion Matrix And Kappa Factor To Validate Classes Of Study Area-I Using MLC Classifier.
To check their performance, the same three classifiers were applied to Study Area-II, which covers a different set of patterns. First, the K-Means algorithm was applied with k = 6, and the classification results (see Figure 18) show Yellow: Creek, Blue: Aqua Ponds, Pink with Dots: Sandy Area, Light Green: Grass Area, Red: Trees, Green: Agriculture.
Fig. 18: Classified Image Of Study Area-II Using K-Means Classifier
Second, the Parallelepiped classifier was applied to Study Area-II using the six-class signature data, and the classification results (see Figure 19) show Yellow: Creek, Blue: Aqua Ponds, Pink with Dots: Sandy Area, Light Green: Grass Area, Red: Trees, Green: Agriculture.
Fig. 19: Classified Image Of Study Area-II Using Parallelepiped Classifier.
Finally, the Maximum Likelihood classifier was applied to Study Area-II using the six-class signature data. The classification results (see Figure 20) show the same color scheme: Yellow: Creek, Blue: Aqua Ponds, Pink with Dots: Sandy Area, Light Green: Grass Area, Red: Trees, Green: Agriculture.
Fig. 20: Classified Image Of Study Area-II Using MLC Classifier.
Validation of each class was done by comparing the classified data with the reference data (see Figure 13), which contains 57 sample pixels: 9 pixels represent creek, 13 aqua ponds, 8 sandy land, 7 grass land, 5 trees and 15 agriculture. For all three classification methods the confusion matrix was formulated and the producer and user accuracy of each class computed. To evaluate the correctness of the classification, the overall accuracy (see eq. 15) and Kappa factor (see eq. 16) were computed for the K-Means (see Table 4), Parallelepiped (see Table 5) and Maximum Likelihood (see Table 6) classifiers.
                  Reference Data (Ground truth)
                 C     Q     S     G     T     A     Row Total
Classified  C    6     2     1     0     0     1     10
data        Q    1     9     2     0     0     1     13
            S    1     1     4     1     0     0      7
            G    1     1     1     5     1     1     10
            T    0     0     0     1     3     1      5
            A    0     0     0     0     1    11     15
Column Total     9    13     8     7     5    15     57

Class   Omission Error        Commission Error    Overall Accuracy
        (Producer Accuracy)   (User Accuracy)
C       66.66%                60%                 66.66%
Q       69.23%                69.23%
S       50%                   57.14%
G       71.4%                 50%
T       60%                   60%
A       73.33%                73.33%              Kappa = 0.585
C = Creek, Q = aQua Ponds, S = Sandy Land, G = Grass Land, T = Trees, A = Agriculture
Table 4: Confusion Matrix And Kappa Factor To Validate Classes Of Study Area-II Using K-Means Classifier
                  Reference Data (Ground truth)
                 C     Q     S     G     T     A     Row Total
Classified  C    6     0     0     0     0     1      7
data        Q    1    11     1     1     0     0     14
            S    1     1     6     0     0     1      9
            G    1     1     1     6     0     1     10
            T    0     0     0     0     4     0      4
            A    0     0     0     0     1    12     13
Column Total     9    13     8     7     5    15     57

Class   Omission Error        Commission Error    Overall Accuracy
        (Producer Accuracy)   (User Accuracy)
C       66.66%                85.7%               78.94%
Q       84.61%                78.57%
S       87.5%                 66.66%
G       85.7%                 60%
T       80%                   100%
A       80%                   92.3%               Kappa = 0.741
C = Creek, Q = aQua Ponds, S = Sandy Land, G = Grass Land, T = Trees, A = Agriculture
Table 5: Confusion Matrix And Kappa Factor To Validate Classes Of Study Area-II Using Parallelepiped Classifier
                  Reference Data (Ground truth)
                 C     Q     S     G     T     A     Row Total
Classified  C    7     1     0     0     0     1      9
data        Q    1    11     1     1     0     1     15
            S    1     0     6     0     0     1      8
            G    0     1     1     6     0     0      8
            T    0     0     0     0     5     0      5
            A    0     0     0     0     0    12     12
Column Total     9    13     8     7     5    15     57

Class   Omission Error        Commission Error    Overall Accuracy
        (Producer Accuracy)   (User Accuracy)
C       77.77%                77.77%              82.45%
Q       84.6%                 73.33%
S       75%                   75%
G       85.7%                 75%
T       100%                  100%
A       80%                   100%                Kappa = 0.784
C = Creek, Q = aQua Ponds, S = Sandy Land, G = Grass Land, T = Trees, A = Agriculture
Table 6: Confusion Matrix And Kappa Factor To Validate Classes Of Study Area-II Using MLC Classifier.
VI. RESULTS AND DISCUSSION
The LISS-4 satellite images were pre-processed and classified using three methods: the K-Means, Parallelepiped and Maximum Likelihood classifiers. The accuracies of the three methods were assessed with the accuracy measures and are summarized in Table 7 for Study Area-I and Study Area-II. In this study, the reference data (see Figures 12 & 13) was taken from field experts and the signature file was generated with 98% accuracy using visual interpretation.
Classification Technique                     Study Area-I           Study Area-II
                                          Overall    Kappa       Overall    Kappa
                                          Accuracy   Factor      Accuracy   Factor
K-Means                                   70%        0.62        66.66%     0.585
Parallelepiped classification             76.6%      0.69        78.94%     0.741
Maximum Likelihood classification (MLC)   82%        0.82        82.45%     0.784
Table 7: Overall Accuracy And Kappa Factor For Classified Study Area-I & II Using Three Classification Methods.
Fig. 21: Graphical Representation Of Overall Accuracy For
The Two Study Areas Using Three Classifiers
Judging by the overall accuracy (see Figure 21), it is evident that the Maximum Likelihood classifier (MLC) is superior to the Parallelepiped classifier and also to the K-Means unsupervised classifier in the classification of both study areas (overall accuracy 82% vs. {76.6%, 70%} for Study Area-I and 82.45% vs. {78.94%, 66.66%} for Study Area-II).
Fig. 22: Graphical Representation Of Kappa Factor For The
Two Study Areas Using Three Classifiers
In terms of the Kappa factor (see Figure 22), it is likewise clear that MLC is superior to the Parallelepiped classifier and the K-Means unsupervised classifier in both study areas (Kappa factor 0.82 vs. {0.69, 0.62} for Study Area-I and 0.78 vs. {0.74, 0.58} for Study Area-II). If the study area has heterogeneous land use and no signature is available, the K-Means unsupervised classifier is well suited. This research also found that the K-Means classifier is computationally expensive and needs the number of classes (K) as input; if the value of K is chosen poorly, it leads to misclassification. Compared to the supervised classifiers, the K-Means classifier shows the minimum overall accuracy, 70% and 66.66% for Study Area-I and Study Area-II respectively. Its Kappa factor, 0.62 and 0.58 for Study Area-I and Study Area-II respectively, is also comparatively low. In the case of the supervised classifiers (Parallelepiped and Maximum Likelihood), the correctness of classification depends on the signature and the reference data.
VII. CONCLUSION
Comparing the three classifiers across two diverse study areas, it can be concluded that the Maximum Likelihood classifier (MLC) is superior, yielding the maximum overall accuracy and Kappa factor. The results also make it evident that the accuracy of a supervised classifier depends directly on the reference data (ground truth). When the analyst lacks intimate familiarity with a large, complex, and diverse area, unsupervised classification (i.e., the K-Means classifier) has the potential to produce satisfactory results.
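For reference, the per-pixel Maximum Likelihood decision rule favoured here can be sketched under a diagonal-Gaussian class model. The class means and variances below are hypothetical stand-ins, not the actual LISS-4 training signatures:

```python
# Sketch of the Maximum Likelihood decision rule with a per-class
# diagonal Gaussian model. Means/variances are hypothetical.
import math

classes = {   # class -> ([mean per band], [variance per band])
    "water":      ([20.0, 25.0],   [16.0, 25.0]),
    "vegetation": ([80.0, 115.0],  [36.0, 49.0]),
    "urban":      ([200.0, 190.0], [100.0, 120.0]),
}

def log_likelihood(pixel, mean, var):
    """Log diagonal-Gaussian density (shared constants dropped)."""
    return sum(-0.5 * math.log(v) - (x - m) ** 2 / (2 * v)
               for x, m, v in zip(pixel, mean, var))

def classify_ml(pixel):
    """Assign the pixel to the class with the highest likelihood."""
    return max(classes, key=lambda c: log_likelihood(pixel, *classes[c]))

print(classify_ml((22, 28)))    # near the water mean -> "water"
print(classify_ml((195, 185)))  # near the urban mean -> "urban"
```

Unlike the parallelepiped boxes, this rule always produces a decision and weights each band by its class variance, which is one reason MLC tends to score higher on both metrics.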