COMPARISON OF CLASSIFICATION ACCURACY USING MULTI-TEMPORAL,
MULTI-POLARIZATION AND MULTI-INCIDENCE ENVISAT ASAR IMAGES
Richelle L. Franciscoa, Leo D. Ortegab
aResearch Laboratory for Applied Geodesy and Space Technology, Training Center for Applied Geodesy
and Photogrammetry, University of the Philippines, Diliman, Quezon City, Philippines; Tel: +63-2-9818500 ext. 3147; E-mail: [email protected]
bHydrometeorology Division, Weather and Flood Forecasting Center, Philippine Atmospheric, Geophysical
and Astronomical Services Administration, Diliman, Quezon City, Philippines; Tel: +63-2-9265060; Email: [email protected]
KEYWORDS: Land Cover Map, Mindanao, ENVISAT ASAR, Accuracy Assessment
ABSTRACT:
Optical remotely sensed imagery is difficult to use in the Philippines, where dense cloud cover persists throughout the year. Radar images such as the Envisat ASAR data used in this study are better suited, since radar signals penetrate clouds and can provide the multi-polarization, multi-temporal and multi-incidence-angle images valuable for land cover mapping studies such as this. The images used were six Alternating Polarization Precision (APP) scenes acquired from January to March 2012, in HV and HH polarization, covering Eastern Mindanao. The images underwent geometric correction and texture analysis. Positional accuracy was checked by selecting ground control points in Google Earth and cross-registering them to the radar images. Regions of interest (ROIs) for the six classes of water, dense vegetation, cropland, grassland, barren land and built-up areas were used as training and accuracy datasets for land cover classification using an Artificial Neural Network (ANN) and a Support Vector Machine (SVM) on the IS2 (19.2° to 26.7° θi), IS4 (30° to 35° θi) and combined IS2+IS4 images. Results showed that the IS2+IS4 classified images gained the highest accuracies with both classifiers: 79.1257% for ANN and 76.4671% for SVM. In conclusion, ANN performed better than SVM and is a superior classification method for land cover analysis.
1. INTRODUCTION
Land cover mapping studies are best done with the aid of radar images, which can penetrate cloud cover, unlike optical sensors with their limited capability in this regard. Since the Philippines is a tropical country, cloud cover is a persistent problem in remote sensing activities.

The current land cover map for the country was produced in 2003. Updating it is a good way to assess the vulnerability of an area to natural disasters. Land cover maps are valuable inputs to the comprehensive land use plans now required of every municipality. With changing climatological patterns, disaster risk reduction and management can only be implemented well when solid, up-to-date information on the land cover and land use of a specific area is available.
In 2002, ENVISAT was successfully launched with the Advanced SAR (ASAR) as one of its key instruments. Compared to the preceding SARs on ERS-1 and ERS-2, ASAR can be operated in many different modes with relevant additional functionality, including beam-steering capability; large and multi-incidence-angle data; multi-polarization capability; and wide-swath and global monitoring modes for large-area coverage at reduced spatial resolution (Wegmüller et al., 2003).
ASAR works at C-band (3.75 to 7.5 cm wavelength), within the microwave (radar) portion of the electromagnetic spectrum. C-band has been found useful in sea ice surveillance as well as in other applications. Imaging radars equipped with C-band are generally not hindered by atmospheric effects and are capable of 'seeing' through tropical clouds and rain showers. Their penetration of vegetation canopies or soils, however, is limited and restricted to the top layers.
The Alternating Polarization Mode (AP Mode) product contains two co-registered images. The possible polarization pairs are HH/VV, HH/HV, or VV/VH. The significant advantages of ASAR are the new beam-steering capability, which allows data takes at different incidence angles, and the Alternating Polarization Mode, which provides simultaneous dual-polarized images (DLR, 2006).
ASAR also has certain advantages over optical sensors. It can penetrate cloud cover, which is especially useful in studying polar regions, and can capture imagery at night. ASAR is the first permanent spaceborne radar to incorporate dual-polarization capabilities: the instrument can transmit and receive signals in either horizontal or vertical polarization (http://www.infoterra.co.uk/envisat).
This study would serve as a platform for understanding and preparing ahead of natural calamities through its result, a land cover map of Eastern Mindanao.
2. OBJECTIVES
This study is guided by the following objectives: (1) to compare the accuracy results of the IS2, IS4 and IS2+IS4 ASAR images; (2) to compare the ANN and SVM classification methods; and (3) to produce a general land cover map of Eastern Mindanao.
3. STUDY AREA
The selected study area is in Mindanao, one of the three major island groups of the Philippines. The availability of Envisat ASAR imagery was the main consideration in its selection.
Figure 1. ASAR and Google Earth images of the study area
4. MATERIALS USED
Six Alternating Polarization Precision images were used for each of the IS2 and IS4 swaths, acquired from January to March 2012.
Table 1. List of ASAR images used.
Alternating Polarization Precision (IS2)    Alternating Polarization Precision (IS4)
January 11, 2012 (HV)                       January 3, 2012 (HV)
January 11, 2012 (HH)                       January 3, 2012 (HH)
February 10, 2012 (HV)                      February 2, 2012 (HV)
February 10, 2012 (HH)                      February 2, 2012 (HH)
March 11, 2012 (HV)                         March 3, 2012 (HV)
March 11, 2012 (HH)                         March 3, 2012 (HH)
5. METHODOLOGY
In all image processing activities, the initial steps were calibration and terrain correction. In this study, the radar images were pre-processed using the Next ESA SAR Toolbox (NEST).

NEST is an open-source toolbox for reading, post-processing, analyzing and visualizing large archives of data (from Level 1) from ESA SAR missions and from third-party missions.
Figure 2. Image Processing and Classification Method Flowchart

4.1 Calibration
The objective of calibrating the image is to provide imagery in which the pixel values can be directly related to the radar backscatter of the scene. The output image is expressed in decibels.
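As a small illustrative sketch (not the NEST implementation itself), converting a calibrated linear backscatter coefficient to decibels is a simple logarithm:

```python
import numpy as np

def to_decibels(sigma0_linear):
    """Convert a linear backscatter coefficient (sigma0) to decibels."""
    return 10.0 * np.log10(sigma0_linear)

# A linear sigma0 of 1.0 corresponds to 0 dB; 0.1 corresponds to -10 dB.
```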
A radar image is usually displayed as a grey-scale image. The intensity of each pixel represents the proportion of microwave radiation backscattered from that spot on the ground, which depends on a variety of factors: the rock types, sizes, shapes and orientation of the scatterers in the target area; the moisture content of the target area; the frequency and polarization of the radar pulses; and the incidence angle of the radar beam. The pixel intensity values are often converted to a physical quantity called the backscattering coefficient, or normalized radar cross-section, measured in decibels (Duncan et al., 2010).

4.2 Terrain Correction
The Terrain Correction operator produces an orthorectified product in WGS 84 geographic coordinates. The Range Doppler orthorectification method is implemented for geocoding SAR images from a single 2D raster in radar geometry. It uses the available orbit state vector information in the metadata or an external precise orbit, the radar timing annotations, and the slant-to-ground-range conversion parameters, together with reference DEM data, to derive precise geolocation information. In our case, however, the acquired images were already in ground range.
4.3 Positional Accuracy
To check whether the rectified image is accurate, a positional accuracy assessment was done. Ground control points were identified in the georeferenced image as well as in Google Earth. The root mean square error was computed using the formula:
RMSE = √(ΔN² + ΔE²)    (1)
Table 2. Computation of RMSE

GCP   Google Earth N   Google Earth E   TC Envisat N   TC Envisat E   ΔN (m)   ΔE (m)   RMSE (m)
1     989987.38        779762.79        989987.53      779762.51      -0.15     0.28    0.31765
2     991600.25        786246.58        991599.39      786246.74       0.86    -0.16    0.87476
3     992488.45        764388.58        992487.25      764388.46       1.20     0.12    1.20599
4     1009713.53       779111.25        1009713.50     779111.25       0.03     0.00    0.03000
5     989669.53        779274.69        989669.47      779274.70       0.06    -0.01    0.06083
6     987210.22        779399.54        987210.19      779399.30       0.03     0.24    0.24187
TOTAL RMSE                                                                              0.45518
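The per-point and total RMSE values in Table 2 can be reproduced with a short script (a sketch; the residuals are taken directly from the table, and the total is the mean of the per-point values):

```python
import math

# (dN, dE) residuals in metres between Google Earth and the terrain-corrected image
deltas = [(-0.15, 0.28), (0.86, -0.16), (1.20, 0.12),
          (0.03, 0.00), (0.06, -0.01), (0.03, 0.24)]

per_point = [math.hypot(dn, de) for dn, de in deltas]  # sqrt(dN^2 + dE^2) per GCP
total_rmse = sum(per_point) / len(per_point)           # mean of the per-point values
```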
The resulting total RMSE is 0.45 meters. For the image to be acceptable, the RMSE should be less than or equal to 0.5 of the pixel size. Since Envisat ASAR has a pixel size of 12.5 meters, the rectified image is satisfactory.

4.4 Texture Analysis
In SAR images, texture is defined as the local scene heterogeneity, and this property is used for classification of land use categories such as water, agricultural areas, etc. (Tuceryan and Jain, 1998).

Co-occurrence texture analysis was then applied to the pre-processed image to produce mean and variance images using ENVI 4.8. The co-occurrence measure is based on a matrix of relative frequencies with which pixel values occur in two neighboring processing windows separated by a specified distance and direction; it records the number of occurrences of the relationship between a pixel and its specified neighbor.
A processing window size of 13x13 was used since the effectiveness of this window size is related to the variability and spacing of the ground features being observed, such as field size and building spacing (Haack and Bechdol, 1999; Haack and Bechdol, 2000).
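As a hedged sketch of windowed texture bands (simple local statistics rather than ENVI's full grey-level co-occurrence formulation), the mean and variance over a 13x13 window can be computed with NumPy/SciPy:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def texture_mean_variance(image, window=13):
    """Local mean and variance over a sliding window, as simple texture bands."""
    img = image.astype(float)
    mean = uniform_filter(img, size=window)
    mean_sq = uniform_filter(img * img, size=window)
    variance = mean_sq - mean * mean  # E[x^2] - E[x]^2 within the window
    return mean, variance
```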
4.5 Regions of Interest
After the ASAR images were corrected and calibrated, regions of interest (ROIs) for the six identified land cover classes were generated with the aid of Google Earth and georeferenced recent WorldView and QuickBird images (JPEG format). A number of training and accuracy ROIs were collected before proceeding with the classification procedures.
Table 3. ROI classifications

Land cover class                         Definition
Natural and Semi-Natural Vegetation      areas where the vegetative cover is in balance with the
(Dense Vegetation)                       abiotic and biotic forces of its biotope.
Cultivated Terrestrial (Cropland)        areas where the natural vegetation has been removed or
                                         modified and replaced by different types of vegetative
                                         cover resulting from anthropic activities.
Built-up Area                            areas which have an artificial cover as the result of
                                         human activities such as construction (cities, towns,
                                         transportation), extraction sites (open mines and
                                         quarries) and waste disposal sites.
Inland Water (Water)                     areas which are naturally covered by water, such as lakes
                                         and rivers.
Bare Soil (Barren Land)                  areas which do not have an artificial cover resulting
                                         from human activities.
(Grassland*)                             areas which are naturally covered by grass.

Source: FAO Land Cover Classification: A Dichotomous, Modular-Hierarchical Approach (1996)
* not included in the FAO classification
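The separability screening of ROI pairs used in this study relies on measures such as the Jeffries-Matusita distance. As an illustrative sketch only (assuming Gaussian class statistics estimated from ROI pixels; not the ENVI implementation), the JM distance between two classes can be computed from their means and covariances:

```python
import numpy as np

def jeffries_matusita(m1, c1, m2, c2):
    """JM distance between two Gaussian classes; ranges from 0 to 2."""
    m1, m2 = np.asarray(m1, float), np.asarray(m2, float)
    c1, c2 = np.asarray(c1, float), np.asarray(c2, float)
    c = (c1 + c2) / 2.0
    d = m1 - m2
    # Bhattacharyya distance between the two Gaussians
    b = (d @ np.linalg.solve(c, d)) / 8.0 + 0.5 * np.log(
        np.linalg.det(c) / np.sqrt(np.linalg.det(c1) * np.linalg.det(c2)))
    return 2.0 * (1.0 - np.exp(-b))
```

Identical classes give a JM distance of 0, while well-separated classes approach the upper bound of 2.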
The ROIs were then subjected to separability measures, with results ranging from 1 to 2. A value of 1.90 means that two land cover classes are highly separable in the image, while values less than 1 indicate poor separability. Acceptable Jeffries-Matusita and Transformed Divergence values should range from 1.50 to 2 to ensure a higher degree of accuracy later on. The training ROIs were then used in the ANN and SVM classification proper to produce the classified images.

4.6 Artificial Neural Network Classification
This classification method uses standard back-propagation for supervised learning. The number of hidden layers can be selected, and either a logistic or a hyperbolic activation function can be chosen. The error is back-propagated through the network, and weight adjustment is made using a recursive method.
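A minimal sketch of this kind of back-propagation classifier, using scikit-learn as an assumed stand-in for the ENVI tool actually used in the study, with hypothetical toy feature vectors in place of the real pixel bands:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))            # toy pixel feature vectors (e.g. HH, HV + textures)
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # toy two-class labels

# One hidden layer with a logistic activation, trained by back-propagation (SGD)
clf = MLPClassifier(hidden_layer_sizes=(16,), activation='logistic',
                    solver='sgd', learning_rate_init=0.1,
                    max_iter=2000, random_state=0)
clf.fit(X, y)
```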
The advantages of ANN are that it is non-parametric and makes no assumptions about the probabilistic models of the data, and that it is able to generalize in noisy environments and to learn complex patterns (Ji, 2000).

4.7 Support Vector Machine Classification
This classification method performs classification by constructing an N-dimensional hyperplane that optimally separates the data into two categories. SVM analysis finds the line (or, in general, the hyperplane) oriented so that the margin between the support vectors is maximized.

The kernel type used was the radial basis function (RBF). An RBF network is feed-forward but has only one hidden layer. RBF hidden-layer units have a receptive field with a center, that is, a particular input value at which they have a maximal output; their output tails off as the input moves away from this point.
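An RBF-kernel SVM of this kind can be sketched as follows (scikit-learn is an assumption here, and the feature vectors are hypothetical stand-ins for the real image bands):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 4))                       # toy pixel feature vectors
y = (np.hypot(X[:, 0], X[:, 1]) > 1.0).astype(int)  # non-linearly separable labels

# RBF kernel: each support vector acts as a center whose influence decays with distance
clf = SVC(kernel='rbf', gamma='scale', C=1.0)
clf.fit(X, y)
```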
SVM is one of the effective methods in pattern and texture classification; texture patterns are composed of many pixels and are used as input features for the SVM.

4.8 Accuracy Assessment
Having performed the classification procedures, accuracy assessment was done to check whether the land cover classification is correct. The accuracy ROIs, with a lowest separability value of 1.71, were used for this. A confusion matrix was generated to check the accuracy, and several statistical measures such as the user's and producer's accuracy, overall accuracy and Kappa statistic were analyzed.
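These measures follow directly from the confusion matrix; a self-contained sketch (class indices and sample arrays are hypothetical):

```python
import numpy as np

def accuracy_report(truth, pred, n_classes):
    """Confusion matrix, overall/producer's/user's accuracy and Kappa statistic."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(truth, pred):
        cm[t, p] += 1                         # rows: reference, columns: classified
    n = cm.sum()
    overall = np.trace(cm) / n
    producers = np.diag(cm) / cm.sum(axis=1)  # producer's accuracy (omission side)
    users = np.diag(cm) / cm.sum(axis=0)      # user's accuracy (commission side)
    chance = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2
    kappa = (overall - chance) / (1.0 - chance)
    return cm, overall, producers, users, kappa
```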
5. RESULTS AND ANALYSIS
In the collection of ROIs used for training and accuracy assessment, it was ensured that there was a representative sample across the study area, as shown in Figure 3.
Figure 3. Spatial distribution of ROI’s in the study area.
Based on the classification methods used, the classified images were derived with the corresponding measures of accuracy given in the succeeding tables.

Table 4. Results of classifications
Classification IS2 IS4 IS2+IS4
Artificial Neural Network
Support Vector Machine
Combined IS2 and IS4 ASAR images yielded the highest accuracy results both for ANN and SVM.
Table 5. ANN accuracy results for IS2

Class              Prod. Acc. (%)   User Acc. (%)
Built-up           96.24            92.92
Cropland           57.79            91.05
Water              92.50            94.97
Grassland          13.86            9.13
Barren Land        46.06            31.53
Dense vegetation   79.29            55.34
Table 6. ANN accuracy results for IS4

Class              Prod. Acc. (%)   User Acc. (%)
Built-up           91.79            72.65
Cropland           55.99            99.86
Water              95.71            91.48
Grassland          25.23            14.12
Barren Land        32.15            31.11
Dense vegetation   79.34            72.85
Table 7. ANN accuracy results for IS2 and IS4

Class              Prod. Acc. (%)   User Acc. (%)
Built-up           96.69            93.59
Cropland           69.04            91.78
Water              78.83            64.76
Grassland          19.76            11.00
Barren Land        39.00            36.58
Dense vegetation   94.54            94.07
Table 8. SVM accuracy results for IS2

Class              Prod. Acc. (%)   User Acc. (%)
Built-up           23.03            24.94
Cropland           27.14            17.42
Water              97.03            93.99
Grassland          57.01            88.35
Barren Land        69.29            43.03
Dense vegetation   94.62            94.46

Table 9. SVM accuracy results for IS4

Class              Prod. Acc. (%)   User Acc. (%)
Built-up           31.11            41.27
Cropland           39.82            16.90
Water              92.58            82.73
Grassland          61.98            92.29
Barren Land        62.69            54.04
Dense vegetation   95.79            93.96
Table 10. SVM accuracy results for IS2 and IS4

Class              Prod. Acc. (%)   User Acc. (%)
Built-up           95.86            94.54
Cropland           63.01            92.85
Water              39.23            19.03
Grassland          71.77            50.98
Barren Land        31.12            33.48
Dense vegetation   94.78            93.79
According to Wang et al. (2005), one SAR image at a given frequency, polarization and incidence angle is often insufficient to attain the required classification accuracy, and it is not possible to classify individual bright SAR objects from single-incidence-angle geometry. Improvements are expected from multi-temporal and/or multi-polarization and/or multi-angle SAR images.

Table 11. Summary of accuracy assessment
Images      SVM (%)   ANN (%)
IS2         73.7006   75.4731
IS4         74.1303   73.6999
IS2 + IS4   76.4671   79.1257
This shows that satellite SAR images acquired from several incidence angles can give valuable complementary information about hard targets and thereby a more complete picture of the number and type of man-made objects present in an area.
6. CONCLUSIONS
Based on the results of the study, it is concluded that the degree of accuracy is directly proportional to the degree of separability of the regions of interest: the more separable the identified land cover classes are, the higher the computed accuracy, both overall and per class.
The combined IS2 and IS4 ASAR images also proved superior to the individual ASAR images. This may be because the combined images compensate for each other's weaknesses, thereby producing a more accurate result.
Finally, since the Artificial Neural Network classification produced higher accuracy results than the Support Vector Machine, it is a good classification method to use in land cover mapping studies such as this.
7. RECOMMENDATIONS
Several things could be done to improve the results of the study or to reach a more acceptable degree of classification. One is to use radar images in other bands, which may generate better results: since the ASAR images used were C-band, other bands such as L- and P-band should also be considered for comparison.
Inclusion of other FAO-defined land cover classes may also be a useful addendum for more specific applications. Species-level classification within the vegetation class could also be made to further the classification effort.
Having done an initial land cover classification of the study area, it would be valuable to compare it with the existing land cover map of the area in order to detect any changes. A change detection study would be a good follow-up to the land cover classification done here.
8. REFERENCES
Ahmed K. 2006. ENVISAT ASAR for Land Cover Mapping and Change Detection. Royal Institute of Technology. Stockholm, Sweden.
Arrowsmith, R. and K. Whipple. 2008. Simple dislocation models for earthquake displacements and the earthquake cycle. Tectonic morphology. http://whipple_arrowsmith598.asu.edu/DislocationExercise/

Boersma, P. and D. Weenink. 2012. Principal Component Analysis. Praat: doing phonetics by computer. http://www.fon.hum.uva.nl/praat/manual/Principal_component_analysis.html
Chen, C. H. , L. F. Pau, P. S. P. Wang (eds.) 1998. Texture Analysis. The Handbook of Pattern Recognition and Computer Vision (2nd Edition), World Scientific Publishing Co., pp. 207-248
Clausi D. et al . 2004. Comparing Cooccurrence Probabilities and Markov Random Field for Texture Analysis of SAR Sea Ice Imagery. IEEE Transactions on Geoscience and Remote Sensing. Vol. 42, No. 1.pp. 215-228.
European Space Agency. NEST. http://nest.array.ca/web/nest
Gidudu, A., G. Hulley and T. Marwala. 2008. AN SVM MULTICLASSIFIER APPROACH TO LAND COVER MAPPING. ASPRS Annual Conference. Portland, Oregon. (April 28-May 2, 2008).
Haack B., and M. Bechdol. 1998. Integrating Multi Sensor Data and RADAR Texture Measures for Land Cover Mapping. Computers and Geosciences 26(2000) pp. 411-421.
Hodgson M. 1998. What Size Window for Image Classification? A Cognitive Perspective. Photogrammetric Engineering and Remote Sensing. Vol. 64, No. 8. pp. 797-807.
Ji. C.Y. 2000. Land Use Classification of Remotely Sensed Data Using Kohonen Self-Organizing Feature Map Neural Networks. Photogrammetric Engineering and Remote Sensing. Vol. 66, No. 12. pp. 1451-1460
Kapnias, D., P. Malenov, and S. Kay. 2003. Guidelines for Best Practice and Quality Checking of Ortho Imagery. JRC Scientific and Technical Reports. Italy. Issue 3.0.

Radar and SAR Glossary. Envisat-ESA. Earthnet Online. http://envisat.esa.int/handbooks/asar/CNTR5-2.htm
Sheoran, A., and B. Haack .2009. Quad Polarization PALSAR Radar, Radar Texture, and Optical Data for Land-Cover Classification. Alaskan Satellite Facility News and Notes. Fall 6(1): 3 - 4.
Sherrod, P. Support Vector Machines. DTREG. Software for Predictive Modeling and Forecasting. http://www.dtreg.com/svm.htm
Smith, L. 1996. An Introduction to Neural Networks. University of Stirling. Scotland. http://www.cs.stir.ac.uk/~lss/NNIntro/InvSlides.html

Tuceryan, M. and A.K. Jain. 1998. Texture Analysis.
Yoshioka, M. et al. 2007. SAR image classification by SVM.

Wang, X. et al. 2005. Envisat ASAR Data for Agriculture Mapping in Zhangzhou, Fujian Province, China. Proceedings 2005 Dragon Symposium "Mid-Term Results", Santorini, Greece.
Wegmüller, U., T. Strozzi , A. Wiesmann, and C. Werner, 2003. ENVISAT ASAR for Land Cover Information. Geoscience and Remote Sensing Symposium, 2003. IGARSS '03. Proceedings. 2003 IEEE International. vol.5: 2996- 2998.
Weydal, D.J. 2002. Backscatter changes of urban features using multiple incidence angle RADARSAT images. Canadian Journal of Remote Sensing, 28(6), pp. 782-793
Zhang, P. 2000. Neural Networks for Classification: A Survey. IEEE Transactions on Systems, Man, and Cybernetics. Part C: Applications and Reviews. Atlanta, Georgia. Vol. 30, No. 4. pp. 451-461. http://geol.hu/data/online_help
9. ACKNOWLEDGEMENT
The Envisat ASAR images were provided by the European Space Agency through the UP-SAGO Project.