Remote Sensing Part 4: Classification & Vegetation Indices


Classification Introduction

• Humans are classifiers by nature: we're always putting things into categories
• To classify things, we use sets of criteria
• Examples:
– Classifying people by age, gender, race, job/career, etc.
– Criteria might include appearance, style of dress, pitch of voice, build, hair style, language/lexicon, etc.
– Ambiguity comes from:
• 1) our classification system (i.e., what classes we choose)
• 2) our criteria (some criteria don't differentiate people with complete accuracy)
• 3) our data (i.e., people who fit multiple categories and people who fit no categories)

Non-Remote Sensing Classification Example

• "Sorting incoming fish on a conveyor according to species using optical sensing"
• Species: sea bass and salmon

** The following data are just hypothetical

Methods

– Set up a camera and take some sample images to extract features:
• Length
• Lightness
• Width
• Number and shape of fins
• Position of the mouth, etc.

Scanning the Fish

• Classification #1
– Use the length of the fish as a possible feature for discrimination
– Fish length alone is a poor feature for classifying fish by type
– Using only length we would be correct 50-60% of the time
– That's not great, because random guessing (i.e., flipping a coin) would be right ~50% of the time if there are an equal number of each fish type

• Classification #2
– Use the lightness (i.e., color) of the fish as a possible feature for discrimination
– Fish lightness alone is a pretty good feature for classifying fish by type
– Using only lightness we would be correct ~80% of the time

• Classification #3
– Use the width & lightness (i.e., color) of the fish as possible features for discrimination
– Fish lightness AND fish width together do a very good job of classifying fish by type
– Using lightness AND width we would be correct ~90% of the time (a sketch of such a two-feature classifier follows below)
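To make the two-feature idea concrete, here is a minimal sketch of one simple decision rule, a nearest-centroid classifier on lightness and width. The measurements, units, and class centroids are all invented for illustration; the slides do not specify a particular algorithm.

```python
# Minimal sketch: nearest-centroid classification on two features.
# All numbers below are hypothetical, matching the slide's disclaimer.
import numpy as np

# Hypothetical training data: [lightness, width] per fish
salmon = np.array([[2.1, 3.0], [2.4, 3.2], [2.0, 2.9], [2.3, 3.1]])
sea_bass = np.array([[5.8, 4.1], [6.1, 4.4], [5.5, 4.0], [6.0, 4.3]])

# "Training" = computing the mean feature vector (centroid) of each class
centroids = {
    "salmon": salmon.mean(axis=0),
    "sea bass": sea_bass.mean(axis=0),
}

def classify(features):
    """Assign a fish to the class with the nearest centroid."""
    return min(centroids, key=lambda c: np.linalg.norm(features - centroids[c]))

print(classify(np.array([2.2, 3.0])))   # -> salmon
print(classify(np.array([5.9, 4.2])))   # -> sea bass
```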

How does this relate to remote sensing?

• Instead of fish types, we are typically interested in land cover
– For example: forests, crops, urban areas
• Instead of fish characteristics, we have reflectance in the spectral bands collected by the sensor
– For example: Landsat TM bands 1-6 instead of fish length, width, lightness, etc.

Imagery Classification

• Two main types of classification
– Unsupervised
• Classes based on statistics inherent in the remotely sensed data itself
• Classes do not necessarily correspond to real-world land cover types
– Supervised
• A classification algorithm is "trained" using ground truth data
• Classes correspond to real-world land cover types determined by the user

Notes

• For ease of display the following examples show just 2 bands:
– one band on the X-axis
– one band on the Y-axis
• In reality, computers use all bands when doing classifications
• These types of graphs are often called feature space
• The points displayed on the graphs correspond to pixels from an image
• The term "cloud" sometimes refers to the amorphous blob(s) of pixels in feature space

Unsupervised Classification

• Classes are created based on the locations of the pixel data in feature space

[Figure: pixels plotted in feature space; X-axis: red BVs (0-255), Y-axis: infrared BVs (0-255)]

A Computer Algorithm Finds Clusters

[Figure: the same feature space, with pixel clusters identified by the algorithm]

Unsupervised Classification

• Attribution phase: performed by a human

[Figure: feature space clusters labeled by the analyst as water, soil, agriculture, and forest; red BVs vs. infrared BVs, 0-255]

Problems with Unsupervised Classification

[Figure: feature space, red BVs vs. infrared BVs, 0-255]

• The computer may consider two real clusters (forest and agriculture) to be one cluster
• The computer may consider one real cluster (soil) to be two clusters
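Both problems above are artifacts of the clustering step. The slides do not name a specific algorithm; a common choice is k-means, sketched minimally below over two bands. The pixel values and the number of clusters are assumptions for illustration, and real analyses would use all bands, not just two.

```python
# Minimal k-means sketch of unsupervised classification in a
# two-band feature space. All pixel values are invented.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical red and near-infrared brightness values (0-255) per pixel
red = rng.integers(0, 256, size=1000).astype(float)
nir = rng.integers(0, 256, size=1000).astype(float)
pixels = np.column_stack([red, nir])   # points in feature space

def kmeans(X, k, n_iter=20):
    """Tiny k-means: alternate nearest-center assignment and centroid update."""
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # Assign each pixel to its nearest cluster center
        dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = np.argmin(dists, axis=1)
        # Move each center to the mean of its assigned pixels
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

labels, centers = kmeans(pixels, k=4)   # 4 spectral clusters
# The attribution phase (naming clusters water, soil, etc.) remains manual.
```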

Supervised Classification

• We "train" the computer program using ground truth data
• I.e., we tell the computer what our classes (e.g., trees, soil, agriculture, etc.) "look like"

[Photos: coniferous trees and deciduous trees]

Supervised Classification

[Figure: feature space (red BVs vs. infrared BVs, 0-255) showing user-selected sample (training) pixels among the other pixels]

Supervised Classification

• No attribution phase is necessary because we define the classes beforehand

[Figure: feature space with pre-defined classes water, soil, agriculture, and forest; red BVs vs. infrared BVs, 0-255]

Problems with Supervised Classification

[Figure: feature space with training classes forest, agriculture, water, and soil; one cluster of pixels falls outside every training class and is marked "What's this?"]

• Pixels that resemble none of the training classes may be forced into the wrong class or left unclassified

What is the computer actually doing?

• The classification generates statistics for the center, the size, and the shape of the sample pixel clouds
• The computer then classifies all the remaining pixels in the image using these statistical values (a sketch of one such approach follows)
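The slides do not name a specific algorithm, but one common method matching "center, size, and shape" statistics is the Gaussian maximum likelihood classifier: the mean vector is the cloud's center, and the covariance matrix captures its size and shape. A minimal sketch with invented two-band training data:

```python
# Minimal sketch of a Gaussian maximum likelihood classifier.
# Training pixels below are hypothetical two-band values.
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(1)

# Hypothetical training pixels (two bands) for two classes
training = {
    "water":  rng.normal(loc=[30, 20],  scale=5,  size=(50, 2)),
    "forest": rng.normal(loc=[40, 150], scale=10, size=(50, 2)),
}

# "Training" = estimating a mean (center) and covariance (size/shape) per class
stats = {c: (X.mean(axis=0), np.cov(X, rowvar=False)) for c, X in training.items()}

def classify(pixel):
    """Assign the pixel to the class under which it is most likely."""
    return max(stats, key=lambda c: multivariate_normal.pdf(pixel, *stats[c]))

print(classify([32, 22]))    # -> water
print(classify([45, 140]))   # -> forest
```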

Example: Remote Sensing of Clouds

Supervised Classification: Training Samples

• Users survey (using GPS) areas of "pure" land cover for all possible land cover types in an image
• OR users "heads-up" digitize "pure" areas using expert knowledge and/or higher spatial resolution imagery
• The rest of the image is classified based on the spectral characteristics of the training sites

Classification of Nang Rong Imagery

[Figure: Landsat MSS, TM, and ETM+ image classification results for (a) Nov 1979, (b) Nov 1992, and (c) Nov 2001; classes: upland agriculture, forest, rice, water, built-up]

Land Use/Cover Change in Nang Rong, Thailand

[Figure: comparison of 1954 and 1994]

Example Classification Results (Bangkok, Thailand)

Accuracy Assessments

• After classifying an image, we want to know how well the classification worked
• To find out, we must conduct an accuracy assessment

How are accuracy assessments done?

• Basically, we need to compare the classification results with real land cover
• As with training data, the real land cover data can be field data (best) or samples from higher spatial resolution imagery (easier)
• What points should we use for the accuracy assessment? Possible options (there are others; a sampling sketch follows below):
– Random points
– Stratified random points (each class represented with an equal number of points)
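As a sketch of the stratified option, the following draws an equal number of random sample locations per class from a classified map. The class map and the helper function are invented for illustration.

```python
# Minimal sketch: stratified random sampling from a classified map,
# with the same number of points per class. The map is hypothetical.
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical classified map: values 0-3 are class codes
class_map = rng.integers(0, 4, size=(100, 100))

def stratified_sample(classified, n_per_class):
    """Return (row, col) sample locations, n_per_class from each class."""
    samples = {}
    for c in np.unique(classified):
        rows, cols = np.nonzero(classified == c)
        idx = rng.choice(len(rows), size=n_per_class, replace=False)
        samples[int(c)] = list(zip(rows[idx], cols[idx]))
    return samples

points = stratified_sample(class_map, n_per_class=25)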

Classification Challenges

• What problem might occur when gathering points for an accuracy assessment (and to a lesser extent, training areas)?

• Can we use the same points for the accuracy assessment that we used to train the classification?

Ikonos Imagery: Glacier National Park

Classification Results

Accuracy Assessment Table

• Rows are the reference data; columns are the classified data
• Values on the diagonal are correctly classified
• The values in red are the producer's accuracy for each class
– Its complement is the error of omission
– E.g., "of the pixels that ARE water (13), how many are classified AS water (12)?"
• The values in blue are the user's accuracy for each class
– Its complement is the error of commission
– E.g., "of the pixels classified AS water (14), how many ARE water (12)?"
• Overall accuracy = # of correctly classified pixels / total # of pixels
• The Kappa statistic is basically the overall accuracy adjusted for how many pixels we would expect to classify correctly by chance alone (a worked computation follows below)
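Here is a minimal worked computation of these quantities from paired reference and classified labels. The label arrays are invented for illustration; rows of the matrix are reference data and columns are classified data, matching the convention above.

```python
# Minimal accuracy assessment: confusion matrix, producer's/user's
# accuracy, overall accuracy, and Kappa. Labels are hypothetical.
import numpy as np

reference  = np.array([0, 0, 0, 1, 1, 1, 2, 2, 2, 2])  # "truth"
classified = np.array([0, 0, 1, 1, 1, 1, 2, 2, 0, 2])  # map labels

n = 3  # number of classes
cm = np.zeros((n, n), dtype=int)       # rows = reference, cols = classified
for r, c in zip(reference, classified):
    cm[r, c] += 1

total = cm.sum()
overall = np.trace(cm) / total               # correct pixels / total pixels
producers = np.diag(cm) / cm.sum(axis=1)     # per class, along reference rows
users = np.diag(cm) / cm.sum(axis=0)         # per class, along classified cols

# Kappa: overall accuracy adjusted for expected chance agreement
expected = (cm.sum(axis=1) @ cm.sum(axis=0)) / total**2
kappa = (overall - expected) / (1 - expected)

print(cm, overall, producers, users, kappa, sep="\n")
```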

Vegetation Indices

• Normalized Difference Vegetation Index (NDVI)
• Takes advantage of the "red edge" of vegetation reflectance that occurs between red and near-infrared (NIR) reflectance
• NDVI = (NIR - Red) / (NIR + Red)
• Many more indices with many variants exist (lots of acronyms like SAVI, etc.)

Normalized Difference Vegetation Index (NDVI)

NDVI = (R_NIR - R_Red) / (R_NIR + R_Red), where R_NIR and R_Red are reflectance in the near-infrared and red bands

NDVI ranges over [-1.0, 1.0]

Often, the more leaf area present, the greater the contrast between red and near-infrared reflectance.

NDVI most closely approximates the Fraction of Absorbed Photosynthetically Active Radiation (FAPAR).
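A minimal sketch of computing NDVI per pixel from red and NIR reflectance arrays (the values are invented for illustration):

```python
# Minimal NDVI sketch over red and near-infrared reflectance arrays.
# The arrays are hypothetical; real inputs would come from sensor bands.
import numpy as np

red = np.array([[0.08, 0.10], [0.30, 0.25]])   # red reflectance
nir = np.array([[0.50, 0.45], [0.32, 0.28]])   # near-infrared reflectance

# NDVI = (NIR - Red) / (NIR + Red); in practice, mask pixels where
# the denominator is zero before dividing
ndvi = (nir - red) / (nir + red)

print(ndvi)   # values in [-1, 1]; higher = denser green vegetation
```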

NDVI from AVHRR

[Figure: AVHRR NDVI composites for Feb 27-Mar 12, Apr 24-May 7, Jun 19-Jul 2, Jul 17-Jul 30, Aug 14-Aug 27, and Nov 6-Nov 19]

NDVI and Precipitation Relationships

Expansion and contraction of the Sahara

[Figure: A: 12 Apr-2 May 1982; B: 5-25 Jul 1982; C: 22 Sep-17 Oct 1982; D: 10 Dec 1982-9 Jan 1983]

Monitoring forest fire

[Figures: pre-forest-fire and post-forest-fire imagery]

• Burned area identified from space using NDVI