1. LAB 1: IMAGE ACQUISITION AND IMPORT

1.1 Problem

Land change in and around Las Cruces, NM. This is a problem for society because Las Cruces is located in a desert, and with increasing urbanization, water will become even scarcer in the area. Without a local water source, Las Cruces could dry up. Fixing this problem is not as simple as further engineering the Rio Grande, which is already heavily engineered, or Elephant Butte Reservoir. Nor can Las Cruces simply import the water it needs, and I do not think everyone realizes this. This is the intellectual gap people are facing.

1.2 Objectives

To map land cover in 1986 and 2009.

To characterize the biophysical properties of the landscape in 1986 and 2009.

To detect land changes between 1986 and 2009 in and around Las Cruces.

1.3 Significance

Through this research I hope to gain an understanding of how land cover in and around Las Cruces, NM has changed since 1986, and to show people how that change affects society. I believe people will see the adverse effects it can have, learn from them, and develop smarter ways of living.

1.4 Description of Study Area

32.3197° N, 106.7653° W. Located in the state of New Mexico, in the southwest region of the United States, just northwest of El Paso, TX. Area: 199.1 km². Population: 99,665 (2011).

I have chosen this area because it is where I live, and it has experienced major growth since 1986. Las Cruces is expected to continue to grow rapidly. Because the city lies in a desert, its water supply is already limited, and with this major growth it is safe to assume that supply will continue to deplete. The city surrounds the Rio Grande, with the majority of the city expanding toward the Organ Mountains. For this exercise I viewed the images as true color images. Landsat TM was the best choice for acquiring imagery because the program has operated for a long time, so imagery from 1986 was available, and because it offers good spatial and spectral resolutions. I chose imagery from 1986 and 2009 because Las Cruces experienced a lot of growth during that period, and the two scenes were acquired on nearly the same day of their respective years. Precipitation leading up to both acquisition dates was high for Las Cruces, which makes for healthier vegetation that is easier to distinguish in the imagery.

1.5 FIGURES

[Study area maps: an overview of New Mexico locating the study area relative to Albuquerque and Las Cruces, with major interstates (I-25, I-10), the Rio Grande, county boundaries, the Las Cruces metro area, major cities, and kilometer scale bars.]

R,G,B = Bands 4, 3, 2 of a Landsat Thematic Mapper image acquired over the study area on 11 June 2009.

1.6 TABLES

Table 1. Metadata for the Landsat TM imagery acquired on June 11, 2009, and June 12, 1986.

Dataset Attribute                    06-11-2009                 06-12-1986
Landsat Scene Identifier             LT50330382009162PAC02      LT50330381986163XXX03
Spacecraft Identifier                5                          5
Sensor Mode                          BUMPER
Station Identifier                   PAC                        XXX
Day / Night                          DAY                        DAY
WRS Path                             033                        033
WRS Row                              038                        038
WRS Type
Date Acquired                        2009/06/11                 1986/06/12
Start Time                           2009:162:17:27:43.13994    1986:163:17:03:17.75056
Stop Time                            2009:162:17:28:09.75306    1986:163:17:03:44.39488
Sensor Anomalies                     N                          N
Acquisition Quality                  9                          9
Quality Bands 1-7                    9 (each band)              9 (each band)
Cloud Cover                          0%                         0%
Cloud Cover Quadrant Upper Left      0%                         0%
Cloud Cover Quadrant Upper Right     0%                         0%
Cloud Cover Quadrant Lower Left      0%                         0%
Cloud Cover Quadrant Lower Right     0%                         0%
Sun Elevation                        66.50870877                61.43930331
Sun Azimuth                          105.37092284               99.74687658
Scene Center Latitude                31.74380 (31°44'37"N)      31.77550 (31°46'31"N)
Scene Center Longitude               -106.73214 (106°43'55"W)   -106.85757 (106°51'27"W)
Corner Upper Left Latitude           32.67923 (32°40'45"N)      32.70660 (32°42'23"N)
Corner Upper Left Longitude          -108.03506 (108°02'06"W)   -108.13196 (108°07'55"W)
Corner Upper Right Latitude          32.71508 (32°42'54"N)      32.74427 (32°44'39"N)
Corner Upper Right Longitude         -105.47271 (105°28'21"W)   -105.60735 (105°36'26"W)
Corner Lower Left Latitude           30.74123 (30°44'28"N)      30.78500 (30°47'06"N)
Corner Lower Left Longitude          -107.97252 (107°58'21"W)   -108.06787 (108°04'04"W)
Corner Lower Right Latitude          30.77447 (30°46'28"N)      30.81995 (30°49'11"N)
Corner Lower Right Longitude         -105.46295 (105°27'46"W)   -105.59490 (105°35'41"W)

1.7 LITERATURE CITED

Jensen, John. Introductory Digital Image Processing. Upper Saddle River, NJ: Pearson Prentice Hall, 2005.

Buenemann, Michaela, and Jack Wright. "Southwest Transformation: Eras of Growth and Land Change in Las Cruces, New Mexico." Southwestern Geographer 14 (2010): 57-87.

2. LAB 2: DATA VISUALIZATION AND EVALUATION

2.1 Problem

A necessary step in determining land change in Las Cruces is to evaluate the statistics from the two time periods and compare the two. These statistics will help the viewer understand what kind of changes might have occurred, and will also help reduce the amount of material to look at.

2.2 Objectives

Subset satellite imagery in ENVI.

Compute, analyze, and interpret the median, mode, and mean (i.e., measures of central tendency) as well as the range, variance, and standard deviation (i.e., measures of dispersion) of brightness values in bands of satellite imagery.

Visualize, analyze, and interpret histograms of brightness values for individual bands of satellite imagery.

Compute, analyze, and interpret the variance-covariance matrix for multispectral remote sensing data.

Compute, analyze, and interpret the correlation matrix for multispectral remote sensing data.

Visualize, analyze, and interpret a feature space plot for pairs of bands of satellite imagery.

Evaluate the quality of satellite imagery.

2.3 Methods

The first step was to subset the imagery. It was not necessary to subset the images spectrally, since all the bands need to be kept. I began by subsetting the 2009 image: I took the area-of-interest vector file, which was in shapefile format, converted it to ENVI's vector format, changed that file into a region of interest (ROI), and then used this ROI to subset the 2009 image. Once this was done, I could reuse the ROI for the 1986 image by reconciling it via map parameters. A sketch of the same operation outside ENVI is shown below.
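The lab performed this step through ENVI's GUI; as a point of comparison, here is a minimal sketch of the same spatial subset using the open-source rasterio and fiona libraries. The file names are hypothetical stand-ins for the area-of-interest shapefile and the Landsat scene.

```python
import fiona
import rasterio
from rasterio.mask import mask

# Read the area-of-interest polygon(s) from the shapefile (hypothetical path).
with fiona.open("aoi_las_cruces.shp") as shp:
    shapes = [feature["geometry"] for feature in shp]

# Clip the full scene to the AOI, the equivalent of ENVI's ROI-based subset.
with rasterio.open("lt5_2009_06_11.tif") as src:
    clipped, transform = mask(src, shapes, crop=True)
    meta = src.meta.copy()
    meta.update(height=clipped.shape[1], width=clipped.shape[2],
                transform=transform)

with rasterio.open("lt5_2009_06_11_subset.tif", "w", **meta) as dst:
    dst.write(clipped)
```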

Once the images were subset, I could compute the statistics. For this step, all I needed to do was use the Compute Statistics tool from the menu and obtain statistics for both the 2009 and the 1986 image. I saved these statistics in two formats, an ENVI statistics file and a text report file. For simplicity, I opened the text file in Excel. With this information I was able to tabulate the measures of central tendency and the measures of dispersion. The Excel sheet gave me all the data except for the range, median, and mode, which I had to compute myself; a sketch of these computations follows.
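For readers who want to reproduce these measures outside Excel, here is a minimal NumPy sketch. The random array is a stand-in for the subset scene; the per-digital-number counts from np.bincount are also the frequencies graphed in the histograms below.

```python
import numpy as np

# Stand-in for the six-band (1-5, 7) subset image of 8-bit brightness values.
image = np.random.randint(0, 256, size=(6, 400, 400), dtype=np.uint8)

for label, band in zip([1, 2, 3, 4, 5, 7], image):
    v = band.ravel().astype(np.int64)
    counts = np.bincount(v, minlength=256)  # frequency of each digital number
    print(f"Band {label}: mean={v.mean():.2f} median={np.median(v):.1f} "
          f"mode={counts.argmax()} range={v.max() - v.min()} "
          f"var={v.var(ddof=1):.2f} std={v.std(ddof=1):.2f}")
```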

Once I had the information from the previous step, I could create histograms of the brightness values and how frequently they occur in each band (Figures 2.5.1 and 2.5.2). To create the histograms I used Excel, inputting the frequency of brightness values for each digital number.

The next step was to compare the variance-covariance matrices of the 2009 and 1986 images. The Excel spreadsheet provides these automatically, because ENVI computes them automatically. From these tables, you can create a correlation matrix for each image, which is easier to read than the variance-covariance matrix. With the correlation matrix you can see which bands correlate with each other the most, which bands carry the most redundant information, and which bands carry the most unique information. A sketch of the computation follows.
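A minimal sketch of the same computation in NumPy, assuming the bands are flattened to one row per band; np.cov and np.corrcoef then treat each band as one variable, which is what ENVI reports. The random array is a stand-in for the subset scene.

```python
import numpy as np

image = np.random.randint(0, 256, size=(6, 400, 400))  # stand-in subset scene
labels = [1, 2, 3, 4, 5, 7]
pixels = image.reshape(image.shape[0], -1).astype(np.float64)

cov = np.cov(pixels)        # variance-covariance matrix (6 x 6)
corr = np.corrcoef(pixels)  # correlation matrix, cf. Tables 2.6.1 and 2.6.2

# The least-correlated band pair carries the most unique information.
masked = np.abs(corr) + np.eye(len(labels))  # push the diagonal out of play
i, j = np.unravel_index(masked.argmin(), masked.shape)
print(f"Most unique pair: bands {labels[i]} and {labels[j]} (r={corr[i, j]:.3f})")
```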

With the correlation matrix, you can then pick which bands to view in a feature space plot. To view a feature space plot of an image, you use the Tools menu and select the 2D scatter plot option, then choose which two bands to compare. The higher the correlation between the two bands, the more the plot collapses toward a straight line; if the correlation is low, the feature space plot takes on a more distinctive shape. These two cases are shown in Figures 2.5.3 and 2.5.4.
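ENVI's 2D scatter plot can be approximated outside ENVI with a 2-D histogram in matplotlib; a minimal sketch, again with a random stand-in array:

```python
import numpy as np
import matplotlib.pyplot as plt

image = np.random.randint(0, 256, size=(6, 400, 400))  # stand-in subset scene
b1, b4 = image[0].ravel(), image[3].ravel()            # bands 1 and 4

# Highly correlated bands collapse toward a line; weakly correlated bands
# (such as 1 and 4 in the real scene) spread into a broader cloud.
plt.hist2d(b1, b4, bins=128, cmap="gray_r")
plt.xlabel("Band 1 brightness value")
plt.ylabel("Band 4 brightness value")
plt.show()
```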

2.4 Results

These statistics show that there was change from 1986 to 2009. The univariate statistics change in almost every category. These changes are also displayed in the histograms, and there appears to be a major change in band 1. I believe this means that, upon further analysis of the study area, I will find the major changes in the amount of water present. The correlation matrix suggests that the most unique information will come from bands 1 and 4. Band 5 is going to be the brightest band, while band 2 will be the darkest.

2.5 FIGURES

Figure 2.5.1. Histograms of brightness values (frequency vs. brightness value) for Bands 1, 2, 3, 4, 5, and 7 of the 2009 image.

Figure 2.5.2. Histograms of brightness values (frequency vs. brightness value) for Bands 1, 2, 3, 4, 5, and 7 of the 1986 image.

Figure 2.5.3. Feature space plot of bands 1 and 2.

Figure 2.5.4. Feature space plot of bands 1 and 4.

2.6 TABLES

Table 2.6.1. Correlation matrix of the 1986 image.

Band         1          2          3          4          5          7
1        1.000000   0.926489   0.725653   0.034651   0.400406   0.47789
2        0.926489   1.000000   0.912865   0.193858   0.654282   0.716129
3        0.725653   0.912865   1.000000   0.230072   0.825776   0.884911
4        0.034651   0.193858   0.230072   1.000000   0.408623   0.263461
5        0.400406   0.654282   0.825776   0.408623   1.000000   0.960536
7        0.47789    0.716129   0.884911   0.263461   0.960536   1.000000


Table 2.6.2. Correlation matrix of the 2009 image.

Band      1      2      3      4      5      7
1       1.00   0.94   0.79   0.06   0.43   0.54
2       0.94   1.00   0.94   0.20   0.65   0.73
3       0.79   0.94   1.00   0.22   0.81   0.88
4       0.06   0.20   0.22   1.00   0.36   0.22
5       0.43   0.65   0.81   0.36   1.00   0.96
7       0.54   0.73   0.88   0.22   0.96   1.00

2.7 LITERATURE CITED

Jensen, John. Introductory Digital Image Processing. Upper Saddle River, NJ: Pearson Prentice Hall, 2005.

3. Lab 3: Radiometric Correction

3.1. Introduction

Radiometric correction is the process of improving the accuracy of surface spectral reflectance estimates. When the user has two images of the same area from different times, it may be necessary to calibrate one image to the other. This can be done through empirical line calibration, which is the process applied to the images in this section.

3.2. Methods

3.2.1. Performing an Absolute Radiometric Correction

The first step was to take the 2009 image and use ENVI's Landsat calibration tool. I set radiance as the calibration type in the calibration window and then edited the calibration parameters. Once I finished editing, I performed the conversion from digital numbers to at-sensor radiance. The sketch below shows the underlying conversion formula.
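ENVI's Landsat calibration tool applies the standard published conversion from digital number (DN) to at-sensor spectral radiance; a minimal sketch follows, with illustrative gain parameters only (use the LMIN/LMAX values published for your scene and processing date).

```python
import numpy as np

def dn_to_radiance(dn, lmin, lmax, qcalmin=1.0, qcalmax=255.0):
    """Standard Landsat conversion: L = gain * (DN - QCALMIN) + LMIN,
    with gain = (LMAX - LMIN) / (QCALMAX - QCALMIN), in W/(m^2 sr um)."""
    gain = (lmax - lmin) / (qcalmax - qcalmin)
    return gain * (dn.astype(np.float64) - qcalmin) + lmin

band3_dn = np.random.randint(1, 256, size=(400, 400))  # stand-in TM band 3 DNs
band3_rad = dn_to_radiance(band3_dn, lmin=-1.17, lmax=264.0)  # illustrative values
```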

After finishing the conversion for the 2009 image, the next step was to repeat the exact same steps on the 1986 image.

To get the images ready for use with FLAASH, I had to divide the radiance values by 10 using the band math function, so that they were in the units FLAASH expects (µW/(cm² sr nm)). This was performed on both the 2009 and the 1986 image.

Once the band math was complete, the next step was to convert the data from BSQ (band sequential) to BIL (band interleaved by line) format. For this I opened the convert-file input dialog, selected the 2009 image, and then selected BIL in the convert file parameters window. After this step was complete, I repeated the process on the 1986 image.

Now that the images had been prepped for FLAASH, I could perform the absolute atmospheric correction on the 2009 image. I went to the Basic Tools menu, selected FLAASH, and set a scale factor of 1. From there I had to specify some metadata fields that came with the imagery. Once this process was complete, I performed it again on the 1986 image, using the metadata that came with the 1986 scene instead.

3.2.2. Performing an Empirical Line Calibration.

To perform the empirical line calibration, I needed to collect pseudo-invariant features (PIFs) from both the 2009 and 1986 images.

The goal was to collect a total of 10 PIFs from each image: 5 dark PIFs and 5 bright PIFs. Once a PIF was taken in one image, the corresponding PIF in the other image had to be at the same location, so every PIF was matched up between the two images.

Once all the PIFs were gathered, it was time to output them to a spectral library. I needed to make two separate libraries, one for the master image and one for the slave image.

With the two spectral libraries in place, I could calibrate the images. First I needed to pair up the matching PIFs. Once all 10 pairs were complete, I could run the empirical line processor, which fits a linear gain and offset per band; a sketch follows.
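A minimal per-band sketch of what the empirical line fit does: regress the master-image reflectances on the slave-image values at the paired PIFs, then apply the resulting gain and offset to the whole slave band. The PIF values here are placeholders for the values extracted in ENVI.

```python
import numpy as np

# Mean values at the 10 paired PIFs (5 dark, 5 bright) for one band (placeholders).
slave_pifs = np.array([0.02, 0.03, 0.05, 0.06, 0.08, 0.31, 0.35, 0.38, 0.42, 0.47])
master_pifs = np.array([0.03, 0.04, 0.06, 0.07, 0.09, 0.33, 0.36, 0.40, 0.45, 0.50])

gain, offset = np.polyfit(slave_pifs, master_pifs, deg=1)  # least-squares line

slave_band = np.random.rand(400, 400)       # stand-in slave-image band
calibrated = gain * slave_band + offset     # band matched to the master image
```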

3.2.3. Assessing the Empirical Line Calibration Results.

To assess the results, it was necessary to output the plots to an ASCII file. It was then possible to view these plots in Excel.

To view them in graph format, I copied and pasted my information into an existing workbook. I could then view the master and slave image reflectance.

The next step was to import the ELC.cff file into Excel as well. I first opened the file in Word, then plugged the numbers into an existing workbook and viewed the graph.

3.3. Results

My end results include many graphs and two corrected images. The 2009 and 1986 images have now been radiometrically corrected, which will be beneficial in upcoming exercises. Accompanying these images are several graphs and tables giving information about them. All the selected PIFs have their coordinates recorded, so it is possible to see where they were taken. After performing the empirical line calibration, a graph was produced to show the results. There are now also spectral libraries of different areas around Las Cruces at different times, which will be helpful when trying to track land cover change.

3.4. Discussion

After seeing the results, I feel it may be beneficial to go back and select some different PIFs. Mine were not as well defined as the ones laid out in the example, and better PIFs could yield better images that would make tracking land cover change easier.

3.5. Conclusion

What I currently have seems like it will still do the job I need it to do, but for future imagery and exercises it would be a good idea to be very selective when choosing PIFs. The selection of PIFs is clearly the most critical step: it can make your empirical line calibration accurate, or leave you with poorly corrected images.

3.6. Tables and Figures

Figure 3.6.1. Spectral plot of the master image.

Figure 3.6.2. Spectral plot of the slave image.

Figure 3.6.3. Graph of the PIF regression.

Figure 3.6.4. Graph of the empirical line calibration factors.

3.7. Bibliography

Jensen, John. Introductory Digital Image Processing. Upper Saddle River, NJ: Pearson Prentice Hall, 2005.

Buenemann, Michaela. "Lab 3: Radiometric Correction." working paper., New Mexico State University, 2013. 


5. Lab 5: Image Derivatives

5.1. Introduction

Now that the images have been preprocessed, they are ready to be analyzed. The first bit of analysis that needs to be done is to view the spatial and spectral profiles. For right now, the main focus is simply on viewing these different profiles and analyzing them to extract some information about the imagery.

5.2. Methods

The first step in analyzing the imagery was to collect some spatial profiles (Images 5.3.1 through 5.3.6) and figure out what those spatial profiles are showing. These spatial profiles can then be exported to an Excel sheet and turned into graphs (Tables 5.4.1 through 5.4.6). I chose to collect spatial profiles over different sections of the Rio Grande, hoping to see changes in the width of the river in different areas and to compare the two images to see how the width has changed since 1986. To do this I picked three different locations along the Rio Grande in the 2009 image and then chose the same locations in the 1986 image. I then transferred that data into Excel and created the graphs, so that it would be easier to analyze the profiles and potentially see any differences. A sketch of the profile-extraction step follows.
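A spatial profile is just the pixel values read along a transect. Here is a minimal sketch of extracting one horizontal profile across the Rio Grande and exporting it for graphing in Excel; the row and column bounds are hypothetical, and the random array stands in for a TM band.

```python
import numpy as np

band4 = np.random.randint(0, 256, size=(400, 400))  # stand-in for TM band 4
row, col_start, col_stop = 210, 120, 180            # transect crossing the river

profile = band4[row, col_start:col_stop]            # values along the transect
np.savetxt("profile_2009_site2.csv", profile, fmt="%d", delimiter=",")
```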

5.3. Images

The first three images are from 2009, and the last three images are from 1986.

Image 5.3.1

Image 5.3.2

Image 5.3.3

Image 5.3.4

Image 5.3.5

Image 5.3.6

5.4 Tables

Again, the graphs coincide with the images: the first three graphs are for the 2009 imagery, in the same order as the images in the image section, and the last three are for the 1986 imagery.

Table 5.4.1

Table 5.4.2

Table 5.4.3

Table 5.4.4

Table 5.4.5

Table 5.4.6

5.5 Results

Based on the different graphs, it appears from the spatial profiles that the Rio Grande was wider in 1986. If this is true, then the graphs clearly show that the Rio Grande has shrunk since 1986. The best comparison is between the second profiles from the 2009 and 1986 imagery: the location of the Rio Grande is quite clear in both graphs, and it is also clear that this portion of the river was wider in 1986 than it is today. The graphs also say something about the area surrounding the Rio Grande. In some spots the graphs are very similar, which suggests that the agriculture in that area has stayed the same, but in some locations the signatures differ. It might be interesting to see what the agriculture was back then, and what it is now.

5.6 Discussion

With regard to land cover change between 1986 and 2009, I feel that these spatial profiles do show change in the Rio Grande over those years, as well as in some of the surrounding agriculture. The goal was just to examine the Rio Grande, but in doing so, some areas of agriculture around it may also have been identified as having undergone some sort of change.

5.7 Conclusion

This section was meant to gather spatial and spectral profiles and to analyze the information gathered. The spatial profiles were gathered from three different locations along the Rio Grande in the 2009 and 1986 imagery. These profiles were then exported to Excel to be examined. The resulting graphs revealed that the Rio Grande is not as wide in 2009 as it was in 1986. They may also have revealed some changes in the agriculture surrounding the river.

6. Lab 6: Image Derivatives II

6.1. Introduction

It is now time to start manipulating and analyzing the preprocessed imagery. For this lab I needed to perform band ratios, derive texture measures, and apply spatial filters to the 2009 imagery.

6.2. Methods

The first step was to apply some spatial filters. I decided to apply both high-pass and low-pass filters so that I could show the difference between the two. I used two different kernel sizes for both the low-pass and the high-pass filters, starting with a 3x3 kernel and then a 7x7 kernel. These four images are shown in Images 6.3.1 through 6.3.4. After applying the spatial filters, the next step was to derive texture measures. I chose to use band 1 for the texture measures and displayed the mean, skewness, variance, data range, and entropy (Images 6.3.5 through 6.3.9). After applying the spatial filters and texture measures, I needed to perform band ratios. The first band ratio was band 3 divided by band 4 (Image 6.3.10); the other was band 5 divided by band 7 (Image 6.3.11). These were all the band ratios applied to the 2009 imagery. A sketch of the filtering and ratioing steps follows.
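A minimal sketch of both operations, assuming a local-mean low pass and an "original minus low pass" high pass (one common construction; ENVI's kernels differ in detail), plus zero-safe band ratios. The random array is a stand-in for the corrected 2009 bands.

```python
import numpy as np
from scipy.ndimage import uniform_filter

image = np.random.rand(6, 400, 400)     # stand-in for bands 1-5 and 7

# Low pass: local mean over a 3x3 (or 7x7) window; high pass: the residual.
low3 = uniform_filter(image[0], size=3)
high3 = image[0] - low3
low7 = uniform_filter(image[0], size=7)
high7 = image[0] - low7

# Band ratios, guarding against division by zero.
red, nir = image[2], image[3]           # TM bands 3 and 4
ratio_3_4 = np.divide(red, nir, out=np.zeros_like(red), where=nir != 0)
swir1, swir2 = image[4], image[5]       # TM bands 5 and 7
ratio_5_7 = np.divide(swir1, swir2, out=np.zeros_like(swir1), where=swir2 != 0)
```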

6.3. Images

Image 6.3.1. Low-pass filter, 3x3 kernel size.

Image 6.3.2. High-pass filter, 3x3 kernel size.

Image 6.3.3. Low-pass filter, 7x7 kernel size.

Image 6.3.4. High-pass filter, 7x7 kernel size.

Image 6.3.5. Mean.

Image 6.3.6. Skewness.

Image 6.3.7. Variance.

Image 6.3.8. Data Range.

Image 6.3.9. Entropy.

Image 6.3.10. Band ratio 3/4.

Image 6.3.11. Band ratio 5/7.

6.4 Results

There is a clear difference in the output imagery after applying spatial filters. The high-pass filter has a cleaner, crisper look, while the low-pass filter is smoother. The low-pass filter seems good for distinguishing what everything on the ground actually is, while the high-pass filter appears good for detecting boundaries and borders. For detecting change around the Rio Grande, I would use both filters: the low-pass to tell what is agriculture and what is not, and the high-pass to see the boundaries of the fields. As for the band ratios, the 3/4 ratio helps in identifying vegetation and water, and the 5/7 ratio is also helpful in detecting vegetation, as well as clays, micas, carbonates, and sulfates.

6.5 Discussion

Since I am trying to analyze how the Rio Grande and the surrounding area have changed, it would be best to use both of the chosen band ratios, as well as both the high-pass and low-pass filters. Using both filters would help in viewing how the agriculture has changed: the low-pass filter for locating the agricultural spots, and the high-pass filter for the boundaries of the fields. The two band ratios would serve well in accentuating the vegetation present around the Rio Grande, and the 5/7 ratio might show some sort of change in the sediment present.

6.6 Conclusion

The band ratio and filter images will be used for further analysis of the Rio Grande. They accentuate what I wish to distinguish and give clearer boundaries for the agricultural fields.

6.7 References

Jensen, John. Introductory Digital Image Processing. Upper Saddle River, NJ: Pearson Prentice Hall, 2005.

U.S. Geological Survey, "Landsat Data." Last modified January 12, 2013. Accessed March 12, 2013. http://pubs.usgs.gov/of/2005/1371/html/landsat.htm.

7. Lab 7: Image Derivatives III

7.1. Introduction

An excellent way of viewing vegetation in imagery is to use the NDVI and the Tasseled Cap transformation. The NDVI is meant for viewing vegetation, and the Tasseled Cap is also great for displaying barren land, vegetation, and water. These two methods will be useful tools for viewing vegetation around the Rio Grande.

7.2. Methods

There were two separate steps in this lab. The first was to compute the NDVI, which is calculated by subtracting the red band from the NIR band and dividing that difference by the sum of the two: NDVI = (NIR - Red) / (NIR + Red). The output is shown in Image 7.3.1. The next step was the Tasseled Cap transformation, a linear transformation of the bands that produces four useful output images (Images 7.3.2 through 7.3.5). A sketch of the NDVI computation follows.
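A minimal NumPy sketch of the NDVI exactly as defined above; the random array stands in for the atmospherically corrected reflectance bands.

```python
import numpy as np

image = np.random.rand(6, 400, 400)   # stand-in reflectances, bands 1-5 and 7
red, nir = image[2], image[3]         # TM band 3 (red) and band 4 (NIR)

# NDVI = (NIR - Red) / (NIR + Red); high values indicate vegetation.
denom = nir + red
ndvi = np.divide(nir - red, denom, out=np.zeros_like(denom), where=denom != 0)
```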

7.3. Images

Image 7.3.1

Image 7.3.2

Image 7.3.3

Image 7.3.4

Image 7.3.5

7.4 Results

The first image is the NDVI: bright areas represent vegetation, and dark areas have little to no vegetation. The next image is the brightness band of the Tasseled Cap transformation, in which bright areas show barren land. The third image is the greenness band: bright areas show vegetation, and dark areas mean no vegetation. The fourth image is the wetness band, which shows wet areas; some bright areas that suggest water can be false with this band, because they may simply be cast shadows. The last image is an RGB composite: red areas are barren land, green/cyan areas are vegetated, and blue areas are water.

7.5 Discussion

The NDVI and Tasseled Cap clearly show the areas of vegetation, which will be crucial in determining land cover change in the Rio Grande area. There is a big patch of vegetation in the northern part of the imagery, and I believe that will be the main focus point: such a large portion of vegetation should make any change easy to detect. It will still be necessary to analyze the vegetation all along the Rio Grande.

7.6 Conclusion

The NDVI and Tasseled Cap will be great tools for this research. They are easy to compute in ENVI, so performing them on the 1986 imagery will not be a problem. This lab was rather short and straightforward, but its products are very useful: the NDVI and Tasseled Cap are important tools for analyzing imagery.

7.7 References

Jensen, John. Introductory Digital Image Processing. Upper Saddle River, NJ: Pearson Prentice Hall, 2005.

8. Lab 8: Image Classification

8.1. Introduction

8.1.1. Image classification is the overall goal of this project. There are many different types of image classification processes, and this lab will examine several of them. An unsupervised classification will be applied to the 2009 imagery, and several supervised classification schemes will be applied to the 2009 image; the most accurate one will then be applied to the 1986 imagery. This lab examines the different types of classification schemes and assesses their accuracy to determine which one is best to apply to the 1986 image.

8.2. Background

8.2.1. To understand what is going on when classifying an image, it is necessary to understand what happens within each classification system. This lab used one unsupervised classification process and four supervised classifications. ISODATA works by making a large number of passes through the dataset until it reaches the specified results: the first iteration assigns each pixel to the cluster whose mean is closest in Euclidean distance, and every iteration thereafter calculates new cluster means based on the spectral locations of the assigned pixels. Parallelepiped classification works by assigning a pixel to a class if it falls within that class's parallelepiped, a box defined from the training data's standard deviations (see the sketch below). Maximum likelihood uses probability to determine whether a pixel belongs to a predefined set of classes; whichever class the pixel has the highest probability of belonging to is the class it is assigned. Neural networks evaluate each pixel using stored weights in the hidden-layer neurons and then produce a predicted value at each output-layer neuron. Support vector machines assign a decision value to each pixel, and these values are used to estimate probability values, stored as rule images with a value ranging from 0 to 1 for each pixel; each pixel is then assigned to the class it has the highest probability of belonging to.
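To make the parallelepiped idea concrete, here is a minimal sketch: each class is a box of mean ± k standard deviations per band, computed from training pixels, and a pixel is assigned to the first box it falls inside. The class statistics and image array here are hypothetical stand-ins.

```python
import numpy as np

def parallelepiped(image, class_stats, k=2.0):
    """image: (bands, rows, cols); class_stats: {class_id: (mean, std)} with
    per-band vectors. Pixels outside every box stay 0 (unclassified)."""
    labels = np.zeros(image.shape[1:], dtype=np.uint8)
    for class_id, (mean, std) in class_stats.items():
        lo = (mean - k * std)[:, None, None]
        hi = (mean + k * std)[:, None, None]
        inside = np.all((image >= lo) & (image <= hi), axis=0)
        labels[inside & (labels == 0)] = class_id  # first matching box wins
    return labels

image = np.random.rand(6, 400, 400)                       # stand-in scene
class_stats = {1: (np.full(6, 0.30), np.full(6, 0.05)),   # e.g. agriculture
               2: (np.full(6, 0.60), np.full(6, 0.08))}   # e.g. barren land
land_cover = parallelepiped(image, class_stats)
```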

8.3. Methods

8.3.1. This lab involved many steps to achieve a classification scheme that yielded a high accuracy. The first thing I did to the 2009 image was perform an unsupervised classification using the ISODATA feature in ENVI. I used the default parameters, except that I used 100 iterations and set a minimum of 12 classes. I ran this process several times with different numbers of iterations so that I could compare the output images and pick the one I liked the most; the image with 100 iterations was the one I preferred. I then went through the different classes and labeled them for easier viewing. After completing this process, there were multiple classes for the same thing; for example, I had two urban classes but actually needed only one. To clean this up, I combined the multiple classes using the combine classes feature. After these steps the image was still somewhat rough, so the next step was to cluster-bust it. To do so, I created a series of masks; the two classes I wanted to cluster-bust were classes 4 and 5. After creating the mask for those two classes, I applied it to the master image and performed the unsupervised classification again. The result was a new image, shown below in the image portion of this write-up.

After performing the unsupervised classification, the next thing to do was to apply the different types of supervised classifications. Before this is possible, it is necessary to acquire training data, which is done by creating an ROI for every class to be mapped. An ROI was created for agriculture, rangeland, urban, water, and barren land, each built from individual points. Agriculture, urban, and rangeland had 60 points collected per class; the remaining classes had 30 points each. Collecting the ROIs involved searching the image for the different land cover types: when collecting urban points, for example, I searched the image for pixels containing only urban features, collected a pixel, and moved on to other portions of the map until I had 60 urban points. This process was used for all the classes. After collecting all of my training data, I tested the separability between each pair of ROIs, shown below in the image section. All of the ROI pairs had fairly high separability, with a couple of them above average. Once the separability was calculated, it was possible to view the spectral signatures of each class to compare how similar or different they are from one another; this graph is also shown in the image section.

Once I was satisfied with my training data, it was time to perform the different supervised classifications. The first was the parallelepiped. For this classifier I used all the default parameters; I experimented with some of them, but in the end I preferred the image produced by the defaults. I then ran all the other classifiers (maximum likelihood, neural network, and support vector machine) and ended up using the default parameters for these as well, for the same reason: the images simply came out better. Of all these classifications, the parallelepiped was the one whose look I preferred, so I performed some post-classification processing on that image. I applied the sieve feature first and did not care for the result, as I thought it generalized my image too much. I then clumped the image, and again did not like the results. After experimenting with clumping and sieving, I performed majority and minority analyses, hoping to reduce the speckle in my urban class, but I could not achieve the results I was hoping for with either. Once I finished with the post-classification processing, I applied some interactive class overlays, placing each class in a separate viewing window side by side, as demonstrated in the image for interactive class overlays in the image section.

Once I was done experimenting with my images, it was time to assess their accuracy. To do this, I first had to determine the spatial extent of each class, using the Compute Statistics feature in ENVI. This gave me statistics showing how many pixels each class occupies and what percent of the image is made up of that land cover. To determine my testing sample size, I took the example laid out for me and divided it in half to save time. To generate the samples I used the land change image and the generate sample size feature in ENVI; in the parameters I selected all my classes and assigned how many testing sites I wanted for each class, based on the per-class percentages computed earlier. ENVI then provided point ROIs for every testing site. From here I went through them one by one, viewed each ROI pixel, and labeled it with the class it was located in; this was done by reconciling the ROIs to the NAIP imagery to make it easier to see where the pixels were located. Once I had labeled all the ROIs, I could reconcile them to each supervised classification image. With all of the testing sites labeled, I could generate an error matrix: in ENVI this is done by selecting the error matrix option under post-classification in the classification menu. In the parameters box I specified the supervised classification scheme I wanted to test first, the parallelepiped, and followed the steps to the end result, a statistics window. (The overall accuracy for the neural network, for example, was about 79%.) I then followed these same steps for each supervised classification scheme until all of them had their accuracy assessed; a sketch of the error matrix computation appears below.

Once I had assessed the accuracy of each scheme, I had to perform the classification of the 1986 image. This was done using all of the same training and testing data, with the neural network as the classifier, so that I could compare the images accurately. The steps involved were the same as for the 2009 image. After all this was done, I could save my images as TIFF files and create maps of them in ArcMap.
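A minimal sketch of the error (confusion) matrix and the accuracy figures derived from it; the label arrays are random stand-ins for the photo-interpreted reference labels and the classified values at the testing sites.

```python
import numpy as np

classes = ["agriculture", "rangeland", "urban", "water", "barren"]
reference = np.random.randint(1, 6, size=200)  # stand-in reference labels (1-5)
mapped = np.random.randint(1, 6, size=200)     # stand-in classified labels (1-5)

# Rows = reference class, columns = mapped class.
n = len(classes)
matrix = np.zeros((n, n), dtype=int)
for ref, cls in zip(reference, mapped):
    matrix[ref - 1, cls - 1] += 1

overall = np.trace(matrix) / matrix.sum()         # the lab reported ~79% for the NN
producers = np.diag(matrix) / matrix.sum(axis=1)  # omission side
users = np.diag(matrix) / matrix.sum(axis=0)      # commission side
print(f"Overall accuracy: {overall:.1%}")
for name, p, u in zip(classes, producers, users):
    print(f"{name}: producer's={p:.1%}, user's={u:.1%}")
```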

8.4. Results

8.4.1. Accuracy Assessment

8.4.1.1. The first item I could test accuracy on was my training data. Good ROIs are key to accurate final products. I tested the separability of all my ROIs, shown in the table below. For the most part I had very high separability, with a couple of ROI pairs at 1.8; I kept these, since I was not too worried about them. After I had tested the accuracy of all the different classifications, I was surprised by the results. The lowest accuracy was 68%, for the maximum likelihood classification. The highest was 78.9%, for the neural network and the support vector machine.

8.4.2. Different Classifications

8.4.2.1. Each supervised classification I used had a unique result. I preferred the look of the parallelepiped, but its accuracy was among the lowest. The neural network and support vector machine had the highest accuracy, so I chose the neural network for the 1986 image. I found the unsupervised classification image confusing, so I chose not to discuss it much; I felt it was inaccurate, and the supervised classification images are more accurate, so they received more attention than the ISODATA result.

8.4.3. 1986 and 2009 Comparison

8.4.3.1. Once I had my 2009 and 1986 images classified, I was able to compare them. Based on the images I have, the viewer can see an expansion of urban land and a decrease in agricultural land. It is hard to tell exactly how much urban land has expanded, since my accuracy is not 100%, but the viewer can still see a clear expansion, and based on what I have observed growing up here, there has been a large expansion of urban land. The 1986 image also shows a lot of agriculture out in the rangeland areas. I do not know whether that is accurate, but if it is, then that agriculture had disappeared by 2009.

8.5. Conclusion

8.5.1. I was surprised by the level of accuracy I was able to obtain for my supervised classifications; I expected it to be rather low, but it ended up being better than I thought. It seems like some agricultural land is missing from the 2009 image. I thought there would be more, so that is something I would look into further if I were to redo this. I think it would be fun to redo this by actually going out into the field to collect training data, if I had a current image of Las Cruces. I was pretty satisfied with all my final results; I would only want to obtain better training data, to try to achieve better separability, which hopefully would lead to a more accurately classified image.

8.6. Images and Tables

8.6.1. ISODATA grouped classes image

8.6.2. ROI separability

8.6.3. Parallelepiped

8.6.4. Maximum Likelihood

8.6.5. Neural Network

8.6.6. Vector Machine

8.6.7. 1986 Neural Network

8.6.8. Superimposed classes on Band 4

8.6.9. Confusion Matrix Parallelepiped

8.6.10. Confusion Matrix Maximum Likelihood

8.6.11. Confusion Matrix Neural Network

8.6.12. Confusion Matrix Vector Machine

8.6.13. Final Land Cover Map

8.7. Bibliography

Jensen, John. Introductory Digital Image Processing. Upper Saddle River, NJ: Pearson Prentice Hall, 2005.

Buenemann, Michaela. "Image Classifiers." lecture., New Mexico State University, 2013. .

9. Lab 9: Change Detection

9.1. Introduction

9.1.1. The overall goal of this semester was to analyze two images of the Las Cruces area from two different times, to detect how the area changed from 1986 to 2009. The previous lab put it all together, finally viewing the area and mapping out how it has changed. This lab gives us another perspective on how the Las Cruces area has changed over the two decades.

9.2. Background

9.2.1. There is no absolute way of detecting change; no single method is 100% accurate. By looking at a number of different change techniques, the viewer can get a better understanding of the area. Certain changes might not show up using one technique but will show up using another. The techniques used in this lab are simply additional ways to view change in the study area.

9.3. Methods

9.3.1. The first technique was write function memory insertion. This was created simply by layer stacking, as in the previous lab: I stacked the 6 bands from the 2009 image with the 6 bands from the 1986 image, and once these layers were stacked, I viewed different composites by loading bands from both dates into the RGB color guns. After stacking all the bands, I could view a number of different combinations. Next, I performed the multi-date composite image change detection technique: using the same stacked image from the previous technique, I created a PCA of it. This provided statistics for the image and more images that I could use for visual interpretation. This section was kept fairly simple; I did not need to classify the result or label the change classes, since that was done previously, and for the purpose of this lab all that was required was to go through the steps. Another technique used was image differencing: I took the NDVI of both the 2009 and 1986 images and used band math to subtract the 2009 NDVI from the 1986 NDVI (a sketch of this step follows). The last part of this lab was post-classification comparison change detection. I did not include this part, because I did not do it: I performed all the previous steps in ENVI, but ERDAS was the program needed to complete this last part, and due to time constraints I did not go back to it and left it out.
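A minimal sketch of the image differencing step, assuming the two NDVI images are already co-registered; the ±1 standard deviation threshold is an illustrative choice, not the lab's setting, and the arrays are random stand-ins.

```python
import numpy as np

ndvi_2009 = np.random.rand(400, 400) * 2 - 1   # stand-ins for the two NDVI images
ndvi_1986 = np.random.rand(400, 400) * 2 - 1

diff = ndvi_1986 - ndvi_2009                   # the band-math step described above
t = diff.std()                                 # illustrative +/- 1 sigma threshold
change = np.select([diff > t, diff < -t], [1, 2], default=0)  # 1/2 = change classes
```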

9.4. Discussion

9.4.1. I personally did not find many of my images useful. I think that is because I was not entirely sure what I was looking at, or what to look for, when using these different change techniques. I felt like I was just going through the motions to learn how to perform them; the previous lab was where I feel I was really able to analyze the imagery. I would not use the techniques from this lab again unless I had a chance to go over them again and really analyze what these techniques are doing.

9.5. Results

Write function memory insertion using the infrared bands.

Multi-date composite image change detection, bands 3, 2, 1.

Bands 3, 2, 1.

Image differencing.