
Geocarto International

ISSN: 1010-6049 (Print) 1752-0762 (Online) Journal homepage: http://www.tandfonline.com/loi/tgei20

Synergetic efficiency of Lidar and WorldView-2 for 3D urban cartography in Northeast Mexico

Fabiola D. Yépez Rincón & Diego F. Lozano García

To cite this article: Fabiola D. Yépez Rincón & Diego F. Lozano García (2017): Synergetic efficiency of Lidar and WorldView-2 for 3D urban cartography in Northeast Mexico, Geocarto International, DOI: 10.1080/10106049.2017.1377774

To link to this article: http://dx.doi.org/10.1080/10106049.2017.1377774

Accepted author version posted online: 12 Sep 2017. Published online: 21 Sep 2017.


Geocarto International, 2017
https://doi.org/10.1080/10106049.2017.1377774

Synergetic efficiency of Lidar and WorldView-2 for 3D urban cartography in Northeast Mexico

Fabiola D. Yépez Rincóna and Diego F. Lozano Garcíab

aInstitute of Civil Engineering, Universidad Autónoma de Nuevo León, San Nicolás de los Garza, Mexico; bInstituto Tecnológico y de Estudios Superiores de Monterrey, Monterrey, Mexico

ABSTRACT
Three-dimensional urban cartography is needed to assess changes in cities. The variety of studies using 3D calculations of urban elements grows each year. Building and vegetation volumes are necessary to assess and understand urban environments that change in space and time. However, there are technical questions as to which method can best improve 3D urban cartographic accuracy. The innovative part of the current study is the creation of a six-band hybrid image obtained from the synergy of LIDAR and WorldView-2. Two different enhancement algorithms revealed the most important spectral features for the urban development and vegetation classes. Results indicated an improvement in accuracy of up to 21.3%, according to the Kappa coefficient. The infrared band and the intensity band were the most significant, according to the principal components analysis. The synergy delimited classes and polygons, directly displayed information on the heights of elements, and improved the extraction of the road, building and vegetation classes.

1. Introduction

Remote sensing (RS) and GIS facilitate the mapping of urban areas and the monitoring of changes in land cover (Schneider et al. 2003; Peijun et al. 2014). Sensors vary in spatial, spectral and temporal resolution (Schowengerdt 2006; Schott 2007; Lillesand et al. 2008), and when two or more sensors combine their independent features to improve performance as a whole, this is called a synergy process. This type of 'synergistic' study emerged during the 1980s (Banner and Lynham 1981; Forster 1985; Lozano-García and Hoffer 1993; Corbane et al. 2008) as an option to increase the accuracy of the classification of features present in the imagery (Tarantino et al. 2011). It did in fact show an increase in classification accuracy from ≈60 to 90% (Hodgson et al. 2003; Herold and Roberts 2006).

1.1. LIDAR

Optical imagery and LIDAR (Light Detection and Ranging) technology are among the data sources used in the photogrammetry and RS communities (Adams and Chandler 2002; Zhang and Lin 2016).

© 2017 Informa UK Limited, trading as Taylor & Francis Group

KEYWORDS: Urban areas; soil coverage classification

ARTICLE HISTORY: Received 30 May 2017; Accepted 6 September 2017

CONTACT: Fabiola D. Yépez Rincón, [email protected]


Synergistic studies using the combination of these two data sources have made significant progress over the last two decades, particularly in local urban planning and monitoring (Baltsavias 1999; Sylos Labini et al. 2012). The variety of studies using these data-sets to produce 2D or 3D representations of urban elements grows each year (Ban et al. 2010; Grote et al. 2012; Sylos Labini et al. 2012). 3D data are used to obtain metrics of buildings and vegetation that help recognize spatio-temporal issues, which is necessary for understanding energy consumption, the reduction of air pollution and other environmental questions (Liang 2004; Casalengo et al. 2017).

1.2. WorldView-2

Very High Resolution (VHR) optical sensors such as WorldView-2 (WV2) allow accurate 2D cartography of land cover classes in urban neighbourhoods (Pacifici et al. 2009; Kumar et al. 2012). However, the traditional per-pixel approach to classification represents a challenge due to the high spectral similarity between various cover-type classes (Zhou 2007; Myint et al. 2011). Classification procedures for VHR imagery have shown substantial progress in the quantitative evaluation of urban land cover (Peijun et al. 2014). The procedures are organized into libraries that reduce human intervention. Nevertheless, the accuracy of these classifications usually depends on technical experience.

1.3. Pansharpening

Most earth resource satellites provide panchromatic (PAN) images with higher spatial resolution than their corresponding multispectral (MS) images. This limitation of the MS imagery can be overcome with enhancement techniques (pansharpening), in which the high-spatial-resolution (PAN) and high-spectral-resolution (MS) imagery are effectively combined into a new high-resolution multispectral image.
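The general idea can be sketched with a simple band-ratio (Brovey) fusion. This is only an illustrative stand-in for the pansharpening family of techniques, not the PANSHARP algorithm used later in the paper, and the toy arrays are invented:

```python
import numpy as np

def brovey_pansharpen(ms, pan):
    """Fuse a multispectral stack with a high-resolution panchromatic band
    using the Brovey (band-ratio) transform.

    ms  : (bands, H, W) multispectral array, already resampled to the PAN grid
    pan : (H, W) panchromatic array
    """
    ms = ms.astype(np.float64)
    total = ms.sum(axis=0)
    total[total == 0] = 1.0       # avoid division by zero on empty pixels
    return ms * (pan / total)     # each band scaled by PAN / sum(MS)

# Toy 2-band, 2x2 example (hypothetical digital numbers).
ms = np.array([[[1.0, 2.0], [3.0, 4.0]],
               [[1.0, 2.0], [1.0, 4.0]]])
pan = np.array([[4.0, 8.0], [8.0, 8.0]])
fused = brovey_pansharpen(ms, pan)
```

The fused bands keep the relative spectral proportions of the MS input while inheriting the brightness detail of the PAN band.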

1.4. Classification

The challenge of 3D urban cartography is related to the complexity of the urban environment, the constant changes that characterize the city, and the need to update these changes in a timely and efficient way. 3D urban cartography faces challenges mainly related to the classification of the urban environment, the automatic detection of changes, and the efficient updating of 3D cartographic elements. As pointed out by Xu and Coors (2012) and Shirowzhan and Trinder (2017), there are open technical questions as to the best process for classifying and quantifying urban elements.

Monitoring the trend of urban changes can be achieved by filtering classes such as buildings or vegetation to determine their heights above ground level (AGL) using LIDAR data (Shirowzhan and Trinder 2017). This type of spatio-temporal 3D assessment is required for residential developments in urban areas (Xu and Coors 2012). Another example in which 3D modelling of buildings is necessary, is building analysis for urban energy planning (Krüger and Kolbe 2012).

Classification of the 3D point cloud is carried out by filtering methods, resulting in the identification of distinctive classes of urban structure, e.g. buildings, houses, bridges, etc. In Texas, Meng et al. (2010) used data on intensity, returns, elevation and the geometric characteristics of surfaces for segmenting and identifying various classes. However, the challenge is the very high within-class spectral variation, since this promotes the mixing of 3D points from one object to another. This complexity increases as the data-set gets bigger, as in the case of a city.

1.5. PCA

Since the mid-1980s, image fusion researchers have published hundreds of documents with variations of image fusion techniques. Some of the most popular and effective are IHS (Intensity, Hue, Saturation), PCA (Principal Components Analysis) (Pacifici et al. 2009; Leichtle et al. 2016), arithmetic combination, wavelet-based fusion and the Pansharpening Techniques (PT).


The PANSHARP Technique (PT), according to Zhang (2004), uses (1) 'LST (Least Square Technique) to find the best fit between the grey values of the image bands being fused and to adjust the contribution of individual bands to the fusion result to reduce the colour distortion, and (2) it employs a set of statistical approaches to estimate the grey value relationship between all the input bands to eliminate the problem of data-set dependency (i.e. reduce the influence of data-set variation) and to automate the fusion process'.
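The least-squares step Zhang describes can be sketched as fitting the PAN grey values as a weighted sum of the MS bands. The data and weights below are synthetic; the real PANSHARP algorithm is proprietary and considerably more elaborate:

```python
import numpy as np

rng = np.random.default_rng(0)
ms = rng.random((4, 100))                # 4 MS bands, 100 pixels (flattened)
true_w = np.array([0.1, 0.3, 0.4, 0.2])  # hypothetical band contributions
pan = true_w @ ms                        # simulated PAN grey values

# Least-squares fit:  minimise || ms.T @ w - pan ||^2 over the weights w.
w, *_ = np.linalg.lstsq(ms.T, pan, rcond=None)
```

With noise-free synthetic data the fitted weights recover the simulated contributions exactly; on real imagery they quantify how much each MS band should contribute to the fusion, which is what reduces colour distortion.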

1.6. Objectives

The authors of this paper tested three methods for improving urban cartography. First, they used a combination of semi-automatic algorithms to classify a LIDAR 3D point cloud. Second, they classified an enhanced WV2 image (via a PT algorithm) using a Maximum Likelihood classifier. Third, they combined the LIDAR 3D point cloud and WV2 data into a hybrid six-band image, analysed it via standard image classification techniques to map the urban environment, and assessed the synergistic effects of the combined data-set.

2. Materials and methods

2.1. Area of study

The Monterrey Metropolitan Area (MMA) is located in north-east Mexico. It consists of 12 municipalities and covers an area of about 580 km². It has more than 5 million inhabitants and a growth rate of 1.9% per year (INEGI 2015). Founded in 1596 by Spanish settlers, Monterrey remained a relatively small city until the first part of the twentieth century, when an important industrialization process increased its economic importance; then, with the signing of NAFTA in the early 1990s, the city saw a major increase in population and area. It underwent two metropolitan expansion processes, from 1940 to 1980 and again from 1990 to the present (Aparicio Moreno et al. 2011; García Ortega et al. 2013). Aparicio Moreno et al. (2011) noted that the agents of socio-spatial segregation modified the morphology and geometry of the city; hence there is a mixture of buildings and commercial, residential and industrial developments (INEGI 2007) within the MMA.

2.2. Data specifications and preparation

2.2.1. LIDAR data
An aerial survey to scan the entire city was conducted during December 2010; the project was supported by the National Water Commission (CNA). An airplane carrying the airborne laser scanner (Leica ALS 50 Phase II) flew 1000 m AGL. The ALS produces a sinusoidal scan pattern in a plane nominally orthogonal to the longitudinal axis of the scanner and centred about nadir. Table 1 shows the technical specifications of the two sensors used in this study.

Table 1. Technical specifications of the data-sets used in this work.

Parameter          LIDAR               WV2
Resolution x, y    0.7 m               MS bands: 1.84 m; Panchromatic: 0.46 m
Resolution z       0.15 m              n/a
Spectral range     1084 nm             B1: 450–510 nm (Blue); B2: 510–580 nm (Green); B3: 630–690 nm (Red); B4: 760–895 nm (Infrared); Pan: 450–800 nm
Acquisition date   14 December 2010    10 December 2010


The total scanned area was 202,237.5 km². The cloud was orthorectified using 32 ground control points (GCPs) on rising ground (Figure 1). The raw point cloud, in LAS format, contains the following information: X, Y coordinates (UTM, WGS84 projection), elevation Z (m), return number (1, 2, 3, 4), intensity value (0–255), automatic gain control (AGC) value and flight line. The result was a total of 5392 files or 'tiles' (subdivisions of 500 × 750 m). Five areas of interest (AOIs) of 6.25 km² were randomly selected, and the corresponding tiles were selected and exported. Each AOI contains on average 28 million 3D points.

From the LIDAR elevation data, we generated three new data-sets: a Digital Surface Model (DSM), a Digital Terrain Model (DTM) and a Digital Height Model (DHM). The first represents the elevation values of the objects present in the area (trees, buildings, bridges, etc.), the second represents the elevation of the 'bare earth', and the third is the subtraction of the DTM from the DSM.
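The DHM derivation reduces to a per-cell raster subtraction; a minimal sketch with invented 2 × 2 rasters in metres:

```python
import numpy as np

# DSM: surface elevations (ground plus objects); DTM: bare-earth elevations.
dsm = np.array([[502.0, 510.5],
                [500.0, 498.2]])
dtm = np.array([[500.0, 500.5],
                [500.0, 498.2]])

# DHM: heights above ground level, obtained by subtracting the DTM from the DSM.
dhm = dsm - dtm
```

Cells where the two models coincide (bare ground) come out as zero height; cells under trees or roofs hold the object height AGL.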

2.2.2. WorldView-2 images
We used a WV2 satellite image with four MS bands, as well as the panchromatic image (Figure 1, Table 1). The imagery covered more than 50% of the study area (displayed in the section where the two data-sets overlap).

2.2.3. Orthorectification and Pansharpening in WV-2
The two WV2 ortho-images (panchromatic and multispectral) were generated using the mathematical method of Optical Satellite Modelling (OSM) plus Rational Polynomial Coefficients (RPC), with a total of 140 control points for the whole data-set (residual errors of 0.31 m in X and 0.63 m in Y). For the elevation correction we used a Digital Terrain Model (DTM) generated from the LIDAR point cloud, with 0.1 m elevation precision. The final fused image was a VHR multispectral image

Figure 1. Location of the study area with coverage of WV-2 and LIDAR data.


generated using PT to reduce spectral distortion (Li et al. 2017), eliminating the problem of data-set dependency and automating the fusion process.

2.2.4. Combination of the LIDAR and WV2 data-sets
A new composite image was created using six bands of information (Figure 2). The first four bands were extracted directly from the WV-2 image (Figure 2(A)–(D)), while the last two bands were generated from the LIDAR data: the intensity values were inserted into band 5 (Figure 2(E)) and the local height (DHM) into band 6 (Figure 2(F)). To preserve the data ranges, all six bands were coded in a 16-bit format.
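Assembling such a composite amounts to stacking co-registered rasters into one array. A minimal sketch with synthetic data; the DHM scale factor is a hypothetical choice, since the paper only states that all six bands were coded in 16 bits:

```python
import numpy as np

H, W = 4, 4
rng = np.random.default_rng(1)

wv2 = rng.integers(0, 2048, size=(4, H, W))    # 4 WV-2 bands (11-bit data)
intensity = rng.integers(0, 256, size=(H, W))  # LIDAR intensity, 0-255
dhm = rng.random((H, W)) * 60.0                # local height in metres

# Scale the DHM to centimetres so sub-metre detail survives integer coding
# (hypothetical convention), then stack everything as one 16-bit composite:
# bands 1-4 = WV-2, band 5 = intensity, band 6 = DHM.
bands = np.stack([*wv2, intensity, (dhm * 100).round()]).astype(np.uint16)
```

All layers must share the same grid and pixel size before stacking, which is why the WV-2 image was resampled to match the rasterized LIDAR.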

2.3. Urban coverage classification

2.3.1. Classification of LIDAR data
Preliminary tests consisted of trial and error based on the limited literature on LIDAR data filtering for the bare earth recommended by Baligh et al. (2008) and on the generation of DTMs by Jacobsen (2003), Kraus and Pfeifer (2001), Liu (2008), Shan and Sampath (2007) and Shan and Aparajithan (2005). The first set includes a seven-filter combination designed using a series of arrangements of algorithms, fed by a set of thresholds or ranges of values for each attribute (i.e. elevation, intensity, geometry) that defines each class (Figure 3). Filters were reset, reducing the ranges of intensity, planimetric dimensions and elevation differences corresponding to each class. This process was performed four times in order to refine the range of values per class (Figure 3). The filters used were Elevation, Return, Intensity, Building, Plane analysis, Height from surface and Reclassify (Table 2).

The developed filter combinations were tested by block design (according to the characteristics of the data) and, based on the classification accuracy achieved, only the best four semi-automatic filter combinations were selected. The filtering process and the statistical analysis of the data were performed by organizing a

Figure 2. The six bands used for synergy, four from WV-2 and two created with LIDAR. (A) Band 1 (WV-2 Blue), (B) Band 2 (WV-2 Green), (C) Band 3 (WV-2 Red), (D) Band 4 (WV-2 Near Infrared 1), (E) Band 5 (LIDAR intensity) and (F) Band 6 (Digital Height Model from LIDAR).


Figure 3. Schematic workflow showing the main steps of the synergistic model and of each individual database.


set of combinations of non-iterative and iterative algorithms in order to design an ad hoc filter suited to the conditions of the study area (Table 3). As seen in Figure 3, roof area has an important influence on the definition of planimetric dimensions. The first filter was applied to each AOI and its various iterations were considered (Filter 1-Filter 2, Filter 2-Filter 4, Filter 3-Filter 4). The end result (Filter 4) was analysed statistically, using a Tile Statistics tool and confusion (error) matrices. Segments with errors were reallocated as unclassified, and the filter was then reset and run again.
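The threshold-driven filtering described above can be sketched as boolean masks over point attributes. The point values and per-class intensity ranges below are invented ("By class" in Table 3 is not spelled out in the paper); only the elevation window and return logic follow the Run #3 settings:

```python
import numpy as np

# Toy point cloud: columns are elevation (m.a.s.l.), intensity, return number.
pts = np.array([
    [505.0,  40, 1],   # low, dark, first return   -> candidate ground
    [512.0, 200, 1],   # elevated, bright          -> candidate building/roof
    [509.0,  90, 2],   # intermediate, later return -> candidate vegetation
    [700.0,  10, 1],   # outside elevation window  -> rejected
])
elev, inten, ret = pts[:, 0], pts[:, 1], pts[:, 2]

# Elevation window from Table 3, Run #3: keep 500-640 m.a.s.l.
in_window = (elev >= 500) & (elev <= 640)

# Hypothetical per-class intensity/return rules standing in for "By class".
ground = in_window & (inten < 60) & (ret == 1)
building = in_window & (inten >= 150) & (ret == 1)
vegetation = in_window & (inten >= 60) & (inten < 150) & (ret > 1)
```

Refining a run then means tightening these ranges and re-evaluating the resulting class masks, which mirrors the four iterations reported in the paper.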

2.3.2. Classification of WV-2 and synergy image
The classification process for both the WV-2 and the new combined image followed the same methodology of unsupervised and supervised classification, with the same set of training polygons and the Maximum Likelihood classification algorithm.
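The Maximum Likelihood rule assigns each pixel to the class whose Gaussian model (mean vector and covariance estimated from the training polygons) gives it the highest likelihood. A minimal sketch with two invented classes in a two-band space, assuming equal class priors:

```python
import numpy as np

def ml_classify(pixels, means, covs):
    """Assign each pixel to the class with the highest Gaussian log-likelihood.

    pixels : (N, B) spectra; means : list of (B,); covs : list of (B, B).
    Equal class priors are assumed, so the prior term is omitted.
    """
    scores = []
    for mu, cov in zip(means, covs):
        inv = np.linalg.inv(cov)
        _, logdet = np.linalg.slogdet(cov)
        d = pixels - mu
        maha = np.einsum('nb,bc,nc->n', d, inv, d)  # Mahalanobis distances
        scores.append(-0.5 * (logdet + maha))       # log-likelihood up to a constant
    return np.argmax(np.stack(scores), axis=0)

# Two toy classes in a 2-band space (hypothetical training statistics).
means = [np.array([10.0, 10.0]), np.array([30.0, 30.0])]
covs = [np.eye(2), np.eye(2)]
px = np.array([[11.0, 9.0], [29.0, 31.0]])
labels = ml_classify(px, means, covs)
```

With full covariance matrices the decision boundaries become quadratic, which is what lets the classifier separate spectrally elongated classes better than a minimum-distance rule.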

2.4. Validation of desk-study

The desktop validation used the maximum number of classes that could be encountered in the AOIs, estimated via an unsupervised classification. The results showed a variation of 9–22 classes per AOI. The validation effort can be observed in Figure 4. The sampling effort was 10 validation sites per class, giving a total of 770 sites to validate. The sites were distributed using a random sample stratified by class for each area (Fitzpatrick-Lins 1981; Lu and Weng 2007).
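Stratified random sampling of this kind simply draws a fixed number of sites independently within each class stratum. A minimal sketch; the class names and candidate-site pools are hypothetical:

```python
import random

random.seed(42)

# Candidate site IDs per class from an unsupervised run (invented pools;
# the paper drew 10 stratified random sites per class per AOI).
candidates = {
    'building':   list(range(100)),
    'vegetation': list(range(100, 250)),
    'pavement':   list(range(250, 300)),
}

sites_per_class = 10
validation_sites = {
    cls: random.sample(pool, sites_per_class)
    for cls, pool in candidates.items()
}
```

Stratifying guarantees that rare classes (here the small 'pavement' pool) receive the same validation effort as abundant ones, instead of being under-sampled by a purely random draw.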

Using Google Earth (GE) and Google Street View (GSV), we obtained in a desk-study panoramic views at street level (360° of horizontal movement and 290° of vertical movement), with which we could validate each element and verify the classes to which those elements belonged. GSV coverage in the study area during the period under review varied depending on the availability of information in the virtual tours.

Table 2. Filter description.

Intensity: reclassifies the selected point cloud based on the range of intensity for each specific class.
Return: reclassifies points when they match the specified return value requirement, i.e. first return, last return.
Elevation: reclassifies the selected points based on their absolute elevation (Z) value.
Building: reclassifies the selected points looking for buildings in areas that are void of ground-classed points; it uses the surface and geometry of the roofs.
Plane analysis: designed to find and reclassify groups of points that form planar surfaces. It functions differently from the Building filter because it can specify different angles of planar surfaces, such as roofs, and does not require a reference surface to separate features.
Height from surface: reclassifies points that are at a specified elevation above or below a surface.
Reclassify: reclassifies all points from the source classification(s) to the target classification; there are no additional parameters with this filter.

Table 3. Filter combinations and ranges for the runs tested during the classification of the LIDAR data.

Filter type             Run #1       Run #2         Run #3                   Run #4
Intensity               0 to 250     By class       By class                 By class
Elevation (m.a.s.l.)    490 to 550   490 to 640     500 to 640               500 to 640
Return(s)               First        First & last   First & last             First, fourth & last
Building (sq. m)        20 to 500    20 to 500      Not used                 Not used
Plane analysis          Not used     Not used       By type                  By type
Height from surface     Not used     Not used       Not used                 1 to 3 units
Reclassify              Not used     Not used       Errors to unclassified   Errors to unclassified


2.4.1. 3D data
Using software for point cloud visualization, we validated the elements by class and then measured the height and surface coverage of buildings and trees using the Adjustable Profile Line (APL) tool. The height measurements were made using the front profile as a reference, and the area measurements were made using the planar information. These data were compared only with the information sampled in the field, in order to corroborate the accuracy of the point cloud in Z.

2.5. Validation in the field

A global positioning system (GPS), a laser hypsometer and a camera were used during field trips to those sites where doubts about the classifications could not be resolved through desktop validation. To evaluate the XY results we used a Trimble Juno SC or Magellan GPS, and for the Z value we used laser distance-measuring equipment (LaserAce® hypsometer). The light weight of the equipment allowed us to measure angle and distance at a range of 150 m, with an accuracy of up to 30 cm at ranges greater than 300 m, while the accuracy for angles is ±0.25°.

The fieldwork represents a smaller proportion of the validated data, about 5% of the total points, which were verified and updated to account for the gap (of about two years) between data acquisition and validation. Furthermore, the dimensions of some elements, such as buildings and trees, were measured in order to validate the 3D information.

2.6. Analysis of precision

2.6.1. Confusion or error matrix
An error matrix was built from the sites identified as errors of omission and commission after each type of classification. From the number of misclassified elements, we then calculated the overall values by class.
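The omission/commission bookkeeping can be sketched directly from a confusion matrix; the counts below are invented for illustration:

```python
import numpy as np

# Rows = reference (validation) class, columns = mapped class.
cm = np.array([
    [45,  3,  2],   # building
    [ 4, 40,  6],   # vegetation
    [ 1,  2, 47],   # pavement
])

# Producer's accuracy (per reference class): 1 - omission error.
producer_acc = cm.diagonal() / cm.sum(axis=1)
# User's accuracy (per mapped class): 1 - commission error.
user_acc = cm.diagonal() / cm.sum(axis=0)
# Overall accuracy: correctly classified sites over all sites.
overall = cm.diagonal().sum() / cm.sum()
```

Off-diagonal entries in a row are omissions (reference sites missed by the map); off-diagonal entries in a column are commissions (sites wrongly pulled into the mapped class).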

Figure 4. Effort of AOI sampling, field-study and desk-study.


2.6.2. Kappa coefficient
The Kappa coefficient was used to assess the accuracy of the classifications. It is reported in the literature as a statistical measure that adjusts for the effect of chance in the proportion of observed agreement in the assignment of classes and the evaluation of land cover on maps (Erener and Düzgün 2009; Torahi and Rai 2011; Qadri et al. 2017). The values for each class were obtained from the error matrix generated from the validation results, in order to determine the reliability of the sampling (Stehman 1997; Jensen 2005). The equation used for κ was:

κ = (Pr(a) − Pr(e)) / (1 − Pr(e))

where Pr(a) is the observed agreement between observers, and Pr(e) is the hypothetical probability of agreement by chance, computed from the observed data as the odds that each observer randomly assigns each category.

If the reviewers are in complete agreement, then κ = 1. If there is no more agreement among the reviewers than would be expected by chance (as defined by Pr(e)), then κ = 0.
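Both boundary cases fall out directly when κ is computed from a confusion matrix; a minimal sketch with invented matrices:

```python
import numpy as np

def kappa(cm):
    """Cohen's kappa from a confusion matrix (rows: reference, cols: mapped)."""
    cm = cm.astype(np.float64)
    n = cm.sum()
    pr_a = cm.diagonal().sum() / n                           # Pr(a): observed agreement
    pr_e = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n ** 2  # Pr(e): chance agreement
    return (pr_a - pr_e) / (1.0 - pr_e)

perfect = np.diag([10, 10, 10])        # complete agreement -> kappa = 1
chance = np.array([[5, 5], [5, 5]])    # agreement no better than chance -> kappa = 0
```

Pr(e) is the sum over classes of the products of the row and column marginal proportions, i.e. the agreement two independent random raters with those marginals would produce.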

2.7. Transformation of the principal components

Principal component analysis (PCA) was performed for the five AOIs. PCA is a mathematical procedure that transforms a number of (possibly) correlated variables into a (smaller) number of uncorrelated variables called principal components. The first principal component accounts for as much of the variability of the data as possible, and each succeeding component accounts for as much of the remaining variability as possible. The axes of the PCA for three subsets of variables that explain >95% of the variance (cumulative proportion of all components) were tested as explanatory variables for the classification.
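The >95% cumulative-variance selection can be sketched with an eigendecomposition of the band covariance matrix. The six-band array below is synthetic (a stand-in for the WV-2 + LIDAR stack), with one deliberately correlated band:

```python
import numpy as np

rng = np.random.default_rng(2)
# Synthetic 6-band image flattened to (pixels, bands).
X = rng.random((500, 6))
X[:, 4] = 2.0 * X[:, 0] + 0.1 * rng.random(500)   # make band 5 correlated with band 1

Xc = X - X.mean(axis=0)                 # centre each band
cov = np.cov(Xc, rowvar=False)          # 6 x 6 band covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)  # eigendecomposition (ascending order)
order = np.argsort(eigvals)[::-1]       # largest variance first

explained = eigvals[order] / eigvals.sum()
# Smallest number of components whose cumulative variance reaches 95%.
n_keep = int(np.searchsorted(np.cumsum(explained), 0.95) + 1)
scores = Xc @ eigvecs[:, order[:n_keep]]  # principal component images
```

Inspecting the eigenvector loadings then shows which input bands dominate each component, which is how the paper identifies the infrared and intensity bands as the most significant.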

3. Results and discussion

3.1. LIDAR classification

The point cloud was segmented using a filtering technique similar to that proposed by Meng et al. (2010). It took into account the intensity, return and elevation values, and the geometric characteristics of the surfaces, for segmenting and attributing classes. A cloud with more than 108 million points, with an average density of 3.46 points per m², was classified (Figure 5). Automatic classification (Figure 3, Filter Combination 1) was performed for 5.40% of the surface of the MMA, achieving 75.7% overall accuracy across the classes.

Adjusting the class thresholds within the algorithm improved the precision of the semi-automatic execution by 16.7% (Figure 3, Filter Combinations 2, 3 & 4). The adjusted algorithm, run as a semi-automatic process, permits identification of floors, pavements, buildings, vegetation, major infrastructure, shadows and water in the MMA, with a Kappa coefficient of 92.4% over all classes. Ranked from highest to lowest, the class accuracies were: vegetation (94.9%), buildings (94.5%), floors (88.5%) and soil (86.54%) (Table 4).

The percentage of coverage varied by AOI and class, as expected for the MMA, whose growth has been driven by trade, industry and housing development, making the study area a heterogeneous surface (INEGI 2015). The complex geometry represented a challenge for the Building and Plane Analysis algorithms, which had to be readjusted for each area in order to capture significant differences in the roof-surface size of industrial versus residential buildings, or in the planar angles of commercial versus industrial premises (Figure 6).



Another challenge for the filter was the identification of urban trees in backyards or where buildings are covered by the tree canopy. This building/tree overlap produced the class with the highest number of misclassified points, with an error rate of 8.56%.

One of the advantages of LIDAR filtering is that it can achieve better results when combined with manual filtering in the specific areas where errors are detected; hence it is necessary to evaluate efficiency in terms of time. Manual operation (semi-automatic classification) increased the accuracy and improved the working scale (in the areas where it was undertaken) because, by locating points with classification errors and then reclassifying the data, we obtained an improvement in precision of κ = 6.9% (Table 4). However, the processing times increased significantly (e.g. for AOI Nº 1, automatic classification took 15 ± 5 min and semi-automatic classification took 90 ± 20 min).

By running the filters automatically in urban areas and obtaining the best precision, we produced cartography at a scale of 1:10,000. This allowed us to generate a set of maps with better resolution and higher precision in the classes considered during the classification process.

3.2. WV-2 classification

The improved spatial resolution of the WV-2 image after the PT enhancement provided a much better image for the standard classification (Figure 7(A)–(E)). With the PT enhancement it was possible to standardize the cell size to 1 m, matching the pixel size of the rasterized LIDAR data. The improved resolution (pixel size from 1.69 to 1 m) is seen in Figure 7(F), which shows the central section of AOI Nº 7 with the WV-2 image, the orthophoto and the pansharpened image from left to right, and a polygon showing the differences in pixels.

Figure 5. Classification of LIDAR, improved precision with manual handling.

Table 4. Results of precision by type of classification.

Class            WV2NS   WV2S   LIDAR   LIDAR semiauto   Synergy
Building         87.5    90.9   82.3    94.5             97.5
Vegetation       92.3    95.1   87.6    94.9             96.8
Pavements        90.8    94.9   72.6    88.5             95.5
Ground           81.5    88.4   65.8    86.5             94.2
Shadows          92.3    95.8   78.6    96.4             98.4
Water            95.0    98.4   73.8    96.4             98.6
Infrastructure   87.8    93.0   69.2    87.6             98.3
Average          89.6    93.8   75.7    92.1             97.0
Time range       Low     Medium Low     High             Medium


Figure 6. Classification with LIDAR and WV-2 showing five AOIs.

Figure 7. Improved resolution of images using Pansharpening.


We were able to identify 22 classes in the first run. The classes were clustered to set up the validation methodology, aligning them with the classes used in the LIDAR classification. We classified and analysed urban coverage with WV-2, achieving precision levels of 89.6% (unsupervised) and 93.8% (supervised).

3.3. Classification of LIDAR-WV-2 synergy

Synergy improved the classification percentage to 95.8%, an improvement of up to 20.1% compared with the automatic classification of LIDAR alone and of up to 6.2% over the unsupervised classification of WV-2 (Figure 8). The LIDAR-WV2 synergy strengthened the response in those pixels containing urban vegetation.

Bands 5 and 6, generated from the LIDAR elevation data, improved the classification accuracy to 95.8%, similar to the results reported by Herold and Roberts (2006), who argue that building geometry aids the classification process. Comparing the efficiency of the three classification methods allows the valuation of each sensor: LIDAR technology is highly efficient at producing information and, although it cannot by itself replace photogrammetry, its utility improves in synergy with WV-2.
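The band-stacking idea, appending LIDAR-derived elevation layers to the spectral bands so the classifier can exploit building and tree geometry, can be sketched as follows. All arrays are synthetic placeholders, and the choice of a normalised DSM plus return intensity as the two extra layers is an assumption for illustration, not necessarily the study's bands 5 and 6:

```python
import numpy as np

# Synthetic stand-ins for the real rasters.
rng = np.random.default_rng(1)
wv2 = rng.uniform(0, 1, size=(4, 32, 32))        # four WV-2 spectral bands
dsm = rng.uniform(500, 530, size=(32, 32))       # first-return surface heights
dtm = dsm - rng.uniform(0, 15, size=(32, 32))    # bare-earth terrain model
intensity = rng.uniform(0, 1, size=(32, 32))     # LIDAR return intensity

ndsm = dsm - dtm                                 # object height above ground

# Append the LIDAR-derived layers as extra bands for the classifier input.
stack = np.concatenate([wv2, ndsm[None], intensity[None]], axis=0)
print(stack.shape)  # (6, 32, 32): 4 spectral + 2 LIDAR-derived bands
```

The classifier then sees height above ground as just another feature, which is what separates, for example, grass from tree canopy with similar spectra.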

3.4. Principal components

Knowing the contribution of each element of the synergy helps to optimize computing resources such as processing time, allowing a focus on the elements needed to achieve the best precision in less time and with fewer resources. The PCA showed that the WV-2 image bands contributed the highest percentage of variance in the data (Figure 9).
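The variance contribution of each band can be read off the eigenvalues of the band covariance matrix; a minimal PCA sketch on a synthetic, strongly correlated band stack (function name and test data are illustrative):

```python
import numpy as np

def explained_variance(bands):
    """PCA on a (n_bands, h, w) stack: return the fraction of total variance
    carried by each principal component, largest first."""
    x = bands.reshape(bands.shape[0], -1)          # pixels as observations
    x = x - x.mean(axis=1, keepdims=True)
    cov = np.cov(x)                                # band-by-band covariance
    eigvals = np.linalg.eigvalsh(cov)[::-1]        # eigenvalues, descending
    return eigvals / eigvals.sum()

# Three synthetic bands sharing one dominant signal plus small noise.
rng = np.random.default_rng(2)
base = rng.normal(size=(1, 64 * 64))
bands = (base * np.array([[3.0], [2.0], [0.5]])
         + 0.1 * rng.normal(size=(3, 64 * 64))).reshape(3, 64, 64)

ev = explained_variance(bands)
print(ev[0] > 0.9)  # first component dominates when bands are highly correlated
```

Ranking inputs by their loadings on the leading components is one common way to identify which layers (here, the WV-2 bands) carry most of the information.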

Figure 8. Comparison of the three classifications for AOI Nº 1.


4. Conclusions and recommendations

The synergistic LIDAR-WV-2 model improved on the performance of each sensor individually. Synergy generally increased accuracy by more than 20%, largely because the two bands in the NIR (near-infrared) range define urban vegetation most successfully (a 7.9% improvement).

The result is a model with the potential to generate products for the extraction of urban infrastructure and equipment, and subsequently to obtain metrics of urban trees such as crown width and height, which are useful data for managing urban forest inventories.

However, the synergy has disadvantages: the pixel-to-point conversion methodology loses some of the finer elements of infrastructure and equipment that LIDAR data alone can represent. Another disadvantage of the model is the processing time.

On the other hand, the synergy between sensors did not represent a real improvement in the classification of the town. LIDAR technology alone can provide sufficient and relevant data without the need to acquire expensive optical imagery.

Acknowledgements
The high-resolution LIDAR data were acquired through a project funded by Organismo de Cuenca Río Bravo (Rio Bravo Basin Agency) (CNA). The medium-resolution LIDAR data are part of the data collection of Instituto Nacional de Estadística y Geografía (National Institute of Statistics and Geography) (INEGI 2007). The authors recognize the Instituto Tecnológico y de Estudios Superiores de Monterrey (ITESM) and the Consejo Nacional de Ciencia y Tecnología (National Council for Science and Technology) (CONACYT) for their support during the project.

Disclosure statement
No potential conflict of interest was reported by the authors.

Figure 9. Images of the principal components of the synergy.


Funding
The high-resolution LIDAR data were acquired through a project funded by Organismo de Cuenca Río Bravo (Rio Bravo Basin Agency) (CNA). The medium-resolution LIDAR data are part of the data collection of Instituto Nacional de Estadística y Geografía (National Institute of Statistics and Geography) (INEGI 2007). The authors recognize the Instituto Tecnológico y de Estudios Superiores de Monterrey (ITESM) and the Consejo Nacional de Ciencia y Tecnología (National Council for Science and Technology) (CONACYT) for support during the project through Doctoral scholarship No. 12412.

ORCID
Fabiola D. Yépez Rincón http://orcid.org/0000-0001-5025-9967
Diego F. Lozano García http://orcid.org/0000-0001-6886-6458

References
Adams J, Chandler J. 2002. Evaluation of lidar and medium scale photogrammetry for detecting soft-cliff coastal change. Photogramm Rec. 17(99):405–418.
Aparicio Moreno CE, Ortega Rubí ME, Sandoval Hernández E. 2011. La segregación socio-espacial en Monterrey a lo largo de su proceso de metropolización. Reg Soc. 23(52):173–207.
Baligh A, Zoej MV, Mohammadzadeh A. 2008. Bare earth extraction from airborne lidar data using different filtering methods. Int Arch Photogramm Remote Sens Spatial Inform Sci. 37(Part B3b).
Baltsavias E. 1999. A comparison between photogrammetry and laser scanning. ISPRS J Photogramm Remote Sens. 54:83–94.
Ban Y, Hu H, Rangel IM. 2010. Fusion of QuickBird MS and RADARSAT SAR data for urban land-cover mapping: object-based and knowledge-based approach. Int J Remote Sens. 31(6):1391–1410.
Banner AV, Lynham T. 1981. Multitemporal analysis of Landsat data for forest cut over mapping – a trial of two procedures. In: Proceedings of the 7th Canadian Symposium on Remote Sensing; Winnipeg: Canadian Remote Sensing Society. p. 233–240.
Casalengo S, Anderson K, Cox DT, Hancock S, Gaston KJ. 2017. Ecological connectivity in the three-dimensional urban green volume using waveform airborne LIDAR. https://www.ncbi.nlm.nih.gov/m/pubmed/28382936/.
Corbane C, Raclot D, Jacob F, Albergel J, Andrieux P. 2008. Remote sensing of soil surface characteristics from a multiscale classification approach. Catena. 75(3):308–318.
Erener A, Düzgün HS. 2009. A methodology for land use change detection of high resolution pan images based on texture analysis. Ital J Remote Sens. 41(2):47–59.
Fitzpatrick-Lins K. 1981. Comparison of sampling procedures and data analysis for a land-use and land-cover map. Photogramm Eng Remote Sens. 47(3):343–351.
Forster BC. 1985. An examination of some problems and solutions in monitoring urban areas from satellite platforms. Int J Remote Sens. 6(1):139–151.
García Ortega RS, Solano Arzaluz, Firch Osuna JM. 2013. Procesos y tendencias de la urbanización en el noreste mexicano. Economía, Sociedad y Territorio. XI:253–264.
Grote A, Heipke C, Rottensteiner F. 2012. Road network extraction in suburban areas. Photogramm Rec. 27(137):8–28.
Herold M, Roberts DA. 2006. Multispectral satellites – imaging spectrometry – LIDAR: spatial-spectral tradeoffs in urban mapping. Int J Geomatics. 2(1):1–13.
Hodgson ME, Jensen JR, Tullis JA, Riordan KD, Archer CM. 2003. Synergistic use of LIDAR and color aerial photography for mapping urban parcel imperviousness. Photogramm Eng Remote Sens. https://www.isprs.org/proceedings/XXXV/congress/comm3/papers/288.pdf.
INEGI. 2007. Delimitación de las zonas metropolitanas de México 2005. 1st ed. Distrito Federal: Instituto Nacional de Estadística y Geografía. 184 p.
INEGI. 2015. Encuesta intercensal 2015. https://www.beta.inegi.org.mx/proyectos/enchogares/especiales/intercensal/
Jacobsen K. 2003. DEM generation from satellite data. EARSeL Ghent. 273276, 4 p. http://pdfs.semanticscholar.org/e066/f8a278f845bf2cb19c9b4e81ec4dde1e1131.pdf.
Jensen JR. 2005. Introductory digital image processing: a remote sensing perspective. New York, NY: Prentice Hall.
Kraus K, Pfeifer N. 2001. Advanced DTM generation from LiDAR data. Int Arch Photogramm Remote Sens Spatial Inform Sci. 34(3/W4):23–30.
Krüger A, Kolbe TH. 2012. Building analysis for urban energy planning using key indicators on virtual 3D city models – the Energy Atlas of Berlin. ISPRS – Int Arch Photogramm Remote Sens Spatial Inform Sci. XXXIX-B2:145–150.
Kumar A, Pandey AC, Jeyaseelan AT. 2012. Built-up and vegetation extraction and density mapping using WorldView-II. Geocarto Int. 27(7):557–568.


Leichtle T, Geiß C, Wurm M, Martin K, Lakes T, Taubenböck H. 2016. An unsupervised approach for building change detection in VHR remote sensing imagery.
Li H, Jing L, Tang Y. 2017. Assessment of pansharpening methods applied to WorldView-2 imagery fusion. Sensors. 17(1):89, 1–30.
Liang S. 2004. Quantitative remote sensing of land surfaces: state of the art. Chichester: Wiley.
Lillesand T, Kiefer R, Chipman J. 2008. Remote sensing and image interpretation. 6th ed. United States. 640 p.
Liu X. 2008. Airborne LiDAR for DEM generation: some critical issues. Prog Phys Geogr. 32(1):31–49.
Lozano-García DF, Hoffer RM. 1993. Synergistic effects of combined Landsat-TM and SIR-B data for forest resources assessment. Int J Remote Sens. 14(14):2677–2694.
Lu D, Weng Q. 2007. A survey of image classification methods and techniques for improving classification performance. Int J Remote Sens. 28(5):823–870.
Meng X, Currit N, Zhao K. 2010. Ground filtering algorithms for airborne LiDAR data: a review of critical issues. Remote Sens. 2:833–860.
Myint SW, Gober P, Brazel A, Grossman-Clarke S, Weng Q. 2011. Per-pixel vs. object-based classification of urban land cover extraction using high spatial resolution imagery. Remote Sens Environ. 115(5):1145–1161.
Pacifici F, Chini M, Emery WJ. 2009. A neural network approach using multi-scale textural metrics from very high-resolution panchromatic imagery for urban land-use classification. Remote Sens Environ. 113(6):1276–1292.
Peijun D, Lium P, Xia J, Feng L, Liu S, Tan K, Cheng L. 2014. Remote sensing image interpretation for urban environment analysis: methods, systems and examples. Remote Sens. 6(10):9458–9474.
Qadri S, Khan M, Qadri SF, Razzaq A, Ahmad N, Jamil M, Shah AN, Muhammad SS, Saleem K, Awan SA. 2017. Multisource data fusion framework for land use/land cover classification using machine vision. J Sens. 2017:1–8. Article ID 3515418.
Schneider A, Friedl MA, McIver DK, Woodcock CE. 2003. Mapping urban areas by fusing multiple sources of coarse resolution remotely sensed data. Photogramm Eng Remote Sens. 69(12):1377–1386.
Schott JR. 2007. Remote sensing: the image chain approach. 2nd ed. New York: Oxford University Press. 667 p.
Schowengerdt RA. 2006. Remote sensing: models and methods for image processing. 3rd ed. San Diego, CA: Academic Press. 560 p.
Shan J, Aparajithan S. 2005. Urban DEM generation from raw lidar data. Photogramm Eng Remote Sens. 71(2):217–226.
Shan J, Sampath A. 2007. Urban terrain and building extraction from airborne LIDAR data. Boca Raton, FL: CRC Press. p. 21–42.
Shirowzhan S, Trinder J. 2017. Building classification from lidar data for spatio-temporal assessment of 3D urban developments. Procedia Eng. 180:1453–1461.
Stehman SV. 1997. Selecting and interpreting measures of thematic classification accuracy. Remote Sens Environ. 62(1):77–89.
Sylos Labini D, Drimaco P, Manunta A, Agrimi, Pasquariello G. 2012. Synergy between GMES and regional innovation strategies: very high resolution images for local planning and monitoring. Eur J Remote Sens. 45:305–315.
Tarantino C, Lovergine FP, Adamo M, Pasquariello G. 2011. Contextual information for the classification of high resolution remotely sensed images. Ital J Remote Sens. 43(1):75–86.
Torahi AA, Rai SC. 2011. Land cover classification and forest change analysis, using satellite imagery – a case study in Dehdez area of Zagros Mountain in Iran. J Geogr Inform Syst. 3(1):1–11.
Xu Z, Coors V. 2012. Combining system dynamics model, GIS and 3D visualization in sustainability assessment of urban residential development. Build Environ. 47:272–287.
Zhang Y. 2004. Understanding image fusion. Photogramm Eng Remote Sens. 70:657–661.
Zhang J, Lin X. 2016. Advances in fusion of optical imagery and LIDAR point cloud applied to photogrammetry and remote sensing. Int J Image Data Fusion. 8(1):1–31.
Zhou G. 2007. Urban 3D building model from LIDAR data and digital aerial images. Chapter 13. In: Weng Q, editor. Remote sensing of impervious surfaces. Taylor and Francis Series in Remote Sensing Applications. 466 p.
