Recognition
CS252A, Winter 2007, Computer Vision I (CSE252A Lecture 20)

Transcript of cseweb.ucsd.edu/classes/wi07/cse252a/lec20.pdf

Page 1:

CS252A, Winter 2007 Computer Vision I

Recognition

Computer Vision I, CSE252A, Lecture 20

Page 2:

Announcements
• HW 4 due tomorrow (Friday)
• Final exam: Thursday
• HW 3 to be returned at end of class
• Final exam discussion at end of class
• Stop me at 12:05!!

Page 3:

Dr. Kriegman,

In lec7.pdf on page 2, where the BRDF with an apple is shown, it is stated: "ratio of incident irradiance to emitted radiance"

The equation shows the ratio of emitted radiance to incident irradiance instead, so the wording above should be reversed.

-Fred

Page 4:

CSE 252A, Winter 2006

BRDF
With assumptions in previous slide
• Bi-directional Reflectance Distribution Function ρ(θin, φin ; θout, φout)
• Ratio of the emitted radiance to incident irradiance
• Function of
  – Incoming light direction: θin, φin
  – Outgoing light direction: θout, φout

[Figure: surface patch with normal n̂, incoming direction (θin, φin), outgoing direction (θout, φout)]

ρ(x; θin, φin ; θout, φout) = Lo(x; θout, φout) / (Li(x; θin, φin) cos θin dω)

Page 5:

Last lecture
• Motion wrap-up
  – Normal flow
  – Aperture problem
  – Lucas-Kanade
• Tracking

Page 6:

Recognition

Given a database of objects and an image, determine what, if any, of the objects are present in the image.

Page 7:

Recognition

Given a database of objects and an image, determine what, if any, of the objects are present in the image.

Page 8:

[Figure: labeled examples: Face, Camel, Bug, Human/Felix, Quadruped, Barbara Steele]

Problem:
• Recognizing instances
• Recognizing categories

Page 9:

Recognition

Given a database of objects and an image, determine what, if any, of the objects are present in the image.

Page 10:

Object Recognition: The Problem
Given: a database D of "known" objects and an image I:
1. Determine which (if any) objects in D appear in I
2. Determine the pose (rotation and translation) of the object

Segmentation (where is it, 2-D)
Recognition (what is it)
Pose estimation (where is it, 3-D)

WHAT AND WHERE!!!

Page 11:

Recognition Challenges
• Within-class variability
  – Different objects within the class have different shapes or different material characteristics
  – Deformable
  – Articulated
  – Compositional
• Pose variability:
  – 2-D image transformation (translation, rotation, scale)
  – 3-D pose variability (perspective, orthographic projection)
• Lighting
  – Direction (multiple sources & type)
  – Color
  – Shadows
• Occlusion (partial)
• Clutter in background -> false positives

Page 12:

Object Recognition Issues:
• How general is the problem?
  – 2-D vs. 3-D
  – range of viewing conditions
  – available context
  – segmentation cues
• What sort of data is best suited to the problem?
  – Whole images
  – Local 2-D features (color, texture, …)
  – 3-D (range) data
• What information do we have in the database?
  – Collection of images?
  – 3-D models?
  – Learned representation?
  – Learned classifiers?
• How many objects are involved?
  – small: brute force search
  – large: ??

Page 13:

A Rough Recognition Spectrum (increasing generality)
• Appearance-Based Recognition (Eigenface, Fisherface)
• Local Features + Spatial Relations
• 3-D Model-Based Recognition
• Geometric Invariants
• Image Abstractions / Volumetric Primitives
• Shape Contexts
• Function
• Aspect Graphs

Page 14:

Sketch of a Pattern Recognition Architecture

Image (window) → Feature Extraction → Feature Vector → Classification → Object Identity

Page 15:

Example: Face Detection
• Scan window over image.
• Classify window as either:
  – Face
  – Non-face

Window → Classifier → Face / Non-face

Page 16:

Pattern Classification Summary
• Supervised vs. Unsupervised: Do we have labels?
• Supervised
  – Nearest Neighbor
  – Bayesian
    • Plug-in classifier
    • Distribution-based
    • Projection methods (Fisher's, LDA)
  – Neural Network
  – Support Vector Machine
  – Kernel methods
• Unsupervised
  – Clustering
  – Reinforcement learning

Page 17:

Image as a Feature Vector
• Consider an n-pixel image to be a point in an n-dimensional space, x ∈ Rn.
• Each pixel value is a coordinate of x.

[Figure: a point in a space with axes x1, x2, x3]
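The image-as-a-point idea is easy to make concrete; a minimal NumPy sketch with a hypothetical 4x4 "image":

```python
import numpy as np

# A hypothetical 4x4 grayscale "image": a 2-D array of pixel values.
image = np.arange(16, dtype=np.float64).reshape(4, 4)

# Flatten it into a point x in R^n (here n = 16); each pixel value
# becomes one coordinate of the feature vector.
x = image.reshape(-1)

print(x.shape)  # (16,)
```

Any classifier that works on vectors can now be applied directly to whole image windows.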

Page 18:

Nearest Neighbor Classifier
{ Rj } is the set of training images.

[Figure: training images R1, R2 and test image I as points in the space with axes x1, x2, x3]

ID = argmin_j dist(Rj, I)
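The rule ID = argmin_j dist(Rj, I) can be sketched directly (toy 3-pixel "images", 0-indexed, Euclidean distance assumed):

```python
import numpy as np

def nearest_neighbor_id(R, I):
    """ID = argmin_j dist(R_j, I), with Euclidean distance."""
    dists = np.linalg.norm(R - I, axis=1)  # distance to every template
    return int(np.argmin(dists))

# Toy training "images" R_0..R_2 as 3-pixel feature vectors.
R = np.array([[0.0, 0.0, 0.0],
              [1.0, 1.0, 1.0],
              [5.0, 5.0, 5.0]])
I = np.array([0.9, 1.1, 1.0])  # test image

ID = nearest_neighbor_id(R, I)  # closest template is index 1
```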

Page 19:

Comments
• Sometimes called "Template Matching"
• Variations on distance function (e.g. L1, robust distances)
• Multiple templates per class: perhaps many training images per class.
• Expensive to compute k distances, especially when each image is big (N-dimensional).
• May not generalize well to unseen examples of a class.
• Some solutions:
  – Bayesian classification
  – Dimensionality reduction

Page 20:

Bayesian Classification

Blackboard

Page 21:

Some loss may be inevitable: the minimum risk (shaded area) is called the Bayes risk.

Page 22:

Finding a decision boundary is not the same as modeling a conditional density.

Page 23:

Plug-in classifiers
• Assume that distributions have some parametric form; now estimate the parameters from the data.
• Common:
  – assume a normal distribution with shared covariance, different means; use usual estimates
  – ditto, but different covariances; ditto
• Issue: parameter estimates that are "good" may not give optimal classifiers.
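A minimal sketch of the first plug-in case (shared covariance, different means), with synthetic data and equal priors as assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two classes drawn from normals with a shared covariance, different means.
X0 = rng.normal(size=(200, 2)) + np.array([0.0, 0.0])
X1 = rng.normal(size=(200, 2)) + np.array([3.0, 3.0])

# Plug-in step: estimate the parameters (class means, pooled covariance).
m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
centered = np.vstack([X0 - m0, X1 - m1])
cov = centered.T @ centered / (len(centered) - 2)
prec = np.linalg.inv(cov)

def classify(x):
    # Equal priors: choose the class with smaller Mahalanobis distance.
    d0 = (x - m0) @ prec @ (x - m0)
    d1 = (x - m1) @ prec @ (x - m1)
    return 0 if d0 < d1 else 1
```

With a shared covariance the resulting decision boundary is linear; allowing different covariances (the "ditto" case) makes it quadratic.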

Page 24:

Example: Face Detection
• Scan window over image.
• Classify window as either:
  – Face
  – Non-face

Page 25:

Example: Finding skin
Non-parametric representation of the class-conditional density
• Skin has a very small range of (intensity-independent) colors, and little texture
  – Compute an intensity-independent color measure, check if the color is in this range, check if there is little texture (median filter)
  – See this as a classifier: we can set up the tests by hand, or learn them.
  – Get class-conditional densities (histograms) and priors from data (counting)
• Classifier is
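The counting recipe above can be sketched with a hypothetical 1-D quantized color feature (a real skin detector, e.g. Jones & Rehg, histograms a full color space):

```python
import numpy as np

# Hypothetical training data: a 1-D "color" feature quantized into 8 bins.
skin = np.array([5, 5, 6, 6, 6, 7, 5, 6])              # colors of labeled skin pixels
background = np.array([0, 1, 1, 2, 3, 2, 1, 0, 4, 2])  # colors of background pixels

# Class-conditional densities from histograms, priors from counting.
p_color_given_skin = np.bincount(skin, minlength=8) / len(skin)
p_color_given_bg = np.bincount(background, minlength=8) / len(background)
prior_skin = len(skin) / (len(skin) + len(background))

def is_skin(color, theta=1.0):
    # Bayes classifier: label "skin" when the posterior ratio exceeds theta.
    skin_score = p_color_given_skin[color] * prior_skin
    bg_score = p_color_given_bg[color] * (1.0 - prior_skin)
    return bool(skin_score > theta * bg_score)
```

Varying theta trades false positives against false negatives, which is exactly what the ROC curve on the next pages summarizes.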

Page 26:

Figure from “Statistical color models with application to skin detection,” M.J. Jones and J. Rehg, Proc. Computer Vision and Pattern Recognition, 1999 copyright 1999, IEEE

Page 27:

Figure from “Statistical color models with application to skin detection,” M.J. Jones and J. Rehg, Proc. Computer Vision and Pattern Recognition, 1999 copyright 1999, IEEE

Receiver Operating Characteristic (ROC) Curve

Page 28:

Eigenfaces: linear projection

• An n-pixel image x∈Rn can be projected to a low-dimensional feature space y∈Rm by

y = WTx

where W is an n by m matrix.

• Recognition is performed using nearest neighbor in Rm.

• How do we choose a good W?

Page 29:

Eigenfaces: Principal Component Analysis (PCA)

Some details: use singular value decomposition; a "trick" described in the text computes the basis when n << d.

Page 30:

Singular Value Decomposition
[Repeated from Lecture 7]
• Any m by n matrix A may be factored such that
  A = UΣVT
  [m x n] = [m x m][m x n][n x n]
• U: m by m, orthogonal matrix
  – Columns of U are the eigenvectors of AAT
• V: n by n, orthogonal matrix
  – Columns are the eigenvectors of ATA
• Σ: m by n, diagonal with non-negative entries (σ1, σ2, …, σs), s = min(m,n), called the singular values
  – Singular values are the square roots of the eigenvalues of both AAT and ATA; columns of U are the corresponding eigenvectors
  – Result of SVD algorithm: σ1 ≥ σ2 ≥ … ≥ σs

Page 31:

SVD Properties
• In Matlab, [u s v] = svd(A), and you can verify that A = u*s*v'
• r = rank(A) = number of non-zero singular values
• U, V give us orthonormal bases for the subspaces of A:
  – 1st r columns of U: column space of A
  – Last m - r columns of U: left nullspace of A
  – 1st r columns of V: row space of A
  – Last n - r columns of V: nullspace of A
• For d <= r, the first d columns of U provide the best d-dimensional basis for the columns of A in the least-squares sense.
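These subspace claims can be checked numerically with NumPy's svd on a small rank-deficient matrix (a toy example, not from the slides):

```python
import numpy as np

# A rank-2 3x4 matrix: the third row is the sum of the first two.
A = np.array([[1., 0., 2., 0.],
              [0., 1., 0., 3.],
              [1., 1., 2., 3.]])

U, s, Vt = np.linalg.svd(A)      # full SVD, A = U S V^T
r = int(np.sum(s > 1e-10))       # rank = number of non-zero singular values

# Rebuild the m x n Sigma and verify the factorization.
S = np.zeros_like(A)
S[:len(s), :len(s)] = np.diag(s)
assert np.allclose(A, U @ S @ Vt)

# The last n - r columns of V span the nullspace of A.
null_basis = Vt[r:].T
assert np.allclose(A @ null_basis, 0.0)
```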

Page 32:

Thin SVD
• Any m by n matrix A may be factored such that
  A = UΣVT
  [m x n] = [m x m][m x n][n x n]
• If m > n, then one can view Σ as

  Σ = [ Σ' ]
      [ 0  ]

  where Σ' = diag(σ1, σ2, …, σn) and the lower block is an (m - n) by n matrix of zeros.
• Alternatively, you can write A = U'Σ'VT, where U' consists of the first n columns of U.
• In Matlab, the thin (economy-size) SVD is [U S V] = svd(A, 'econ'). (Note that svds(A) instead computes only a few of the largest singular values.)

Page 33:

Performing PCA with SVD
• Singular values of A are the square roots of the eigenvalues of both AAT and ATA, and columns of U are the corresponding eigenvectors
• And

  [a1 a2 … an][a1 a2 … an]T = Σ_{i=1}^{n} ai aiT = A AT

• Covariance matrix is:

  Σ = (1/n) Σ_{i=1}^{n} (xi − μ)(xi − μ)T

• So, ignoring the 1/n factor: subtract the mean image μ from each input image, create the data matrix, and perform thin SVD on the data matrix.
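A minimal NumPy sketch of this recipe on toy data (5-pixel "images" stored as columns; the 1/n factor is ignored, as the slide suggests):

```python
import numpy as np

rng = np.random.default_rng(1)

# 20 toy "images" of 5 pixels each, stored as the columns of X.
X = rng.normal(size=(5, 20))

# Subtract the mean image from every column to form the data matrix A.
mu = X.mean(axis=1, keepdims=True)
A = X - mu

# Thin SVD of A: the columns of U are the principal components
# (eigenvectors of the scatter matrix A A^T).
U, s, Vt = np.linalg.svd(A, full_matrices=False)

# Check: A A^T u_0 = sigma_0^2 u_0 (squared singular value = eigenvalue).
scatter = A @ A.T
assert np.allclose(scatter @ U[:, 0], s[0] ** 2 * U[:, 0])

# PCA coefficients: project the centered images onto the top k components.
k = 2
Y = U[:, :k].T @ A  # k x 20 coefficient matrix
```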

Page 34:

[Figure: data points with the mean and the first principal component, the direction of maximum variance]

Page 35:

Eigenfaces
• Modeling
  1. Given a collection of n labeled training images,
  2. Compute the mean image and covariance matrix.
  3. Compute the k eigenvectors of the covariance matrix corresponding to the k largest eigenvalues (note that these eigenvectors are images). (Or perform using SVD!!)
  4. Project the training images to the k-dimensional eigenspace.
• Recognition
  1. Given a test image, project to the eigenspace.
  2. Perform classification against the projected training images.

Page 36:

Eigenfaces: Training Images

[Turk & Pentland '91]

Page 37:

Eigenfaces

Mean Image Basis Images

Page 38:

Eigenfaces for a single individual, observed under variable lighting

Page 39:

Face detection using "distance to face space"
• Scan a window ω across the image, and classify the window as face/not face as follows:
  – Project the window to the subspace, and reconstruct as described earlier.
  – Compute the distance between ω and the reconstruction.
  – Local minima of the distance over all image locations that are less than some threshold are taken as locations of faces.
  – Repeat at different scales.
  – Possibly normalize the window's intensity so that ||ω|| = 1.
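The projection/reconstruction distance can be sketched with a hypothetical 4-pixel, one-eigenface subspace:

```python
import numpy as np

def distance_to_face_space(w, U, mu):
    """Distance between a window w and its reconstruction from the
    eigenface subspace spanned by the orthonormal columns of U."""
    coeffs = U.T @ (w - mu)   # project onto the subspace
    recon = mu + U @ coeffs   # reconstruct
    return np.linalg.norm(w - recon)

# Toy 4-pixel "face space": mean image plus one orthonormal basis image.
mu = np.array([1.0, 1.0, 1.0, 1.0])
U = np.array([[0.5], [0.5], [0.5], [0.5]])

face_like = mu + 2.0 * U[:, 0]                # lies exactly in the subspace
clutter = np.array([1.0, -1.0, 3.0, 0.0])     # far from the subspace

d_face = distance_to_face_space(face_like, U, mu)       # ~0
d_clutter = distance_to_face_space(clutter, U, mu)      # large
```

Windows near the subspace (small distance) are declared faces; this is exactly the thresholding step above.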

Page 40:

Underlying assumptions
• Background is not cluttered (or else we only look at the interior of the object)
• Lighting in the test image is similar to that in the training image.
• No occlusion
• Size of training image (window) same as window in test image.

Page 41:

Difficulties with PCA
• Projection may suppress important detail
  – the smallest-variance directions are not necessarily unimportant
• Method does not take the discriminative task into account
  – typically, we wish to compute features that allow good discrimination
  – not the same as maximizing variance or minimizing reconstruction error.

Page 42:

Page 43:

Fisherfaces: Class-specific linear projection
P. Belhumeur, J. Hespanha, D. Kriegman, "Eigenfaces vs. Fisherfaces: Recognition Using Class Specific Linear Projection," PAMI, July 1997, pp. 711-720.

• An n-pixel image x∈Rn can be projected to a low-dimensional feature space y∈Rm by

y = WTx

where W is an n by m matrix.

• Recognition is performed using nearest neighbor in Rm.

• How do we choose a good W?

Page 44:

PCA & Fisher's Linear Discriminant
• Between-class scatter

  SB = Σ_{i=1}^{c} |χi| (μi − μ)(μi − μ)T

• Within-class scatter

  SW = Σ_{i=1}^{c} Σ_{xk ∈ χi} (xk − μi)(xk − μi)T

• Total scatter

  ST = Σ_{i=1}^{c} Σ_{xk ∈ χi} (xk − μ)(xk − μ)T = SB + SW

• Where
  – c is the number of classes
  – μi is the mean of class χi
  – |χi| is the number of samples of χi
  – μ is the total mean

[Figure: two classes χ1, χ2 with class means μ1, μ2 and total mean μ]

• If the data points are projected by y = WTx and the scatter of the points is S, then the scatter of the projected points is WTSW

Page 45:

PCA & Fisher's Linear Discriminant
• PCA (Eigenfaces)

  WPCA = argmax_W |WT ST W|

  Maximizes projected total scatter

• Fisher's Linear Discriminant

  Wfld = argmax_W |WT SB W| / |WT SW W|

  Maximizes the ratio of projected between-class scatter to projected within-class scatter

[Figure: two classes χ1, χ2 with the PCA and FLD projection directions]

Page 46:

Computing the Fisher Projection Matrix

• The columns wi of W are generalized eigenvectors satisfying SB wi = λi SW wi
• The wi are orthonormal
• There are at most c-1 non-zero generalized eigenvalues, so m <= c-1
• Can be computed with eig in Matlab
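A minimal sketch of the generalized eigenproblem for c = 2 classes, on synthetic data (NumPy's eig on SW⁻¹SB stands in here for Matlab's eig):

```python
import numpy as np

rng = np.random.default_rng(2)

# Two toy classes in R^2: large spread along x, but the class means
# differ along y, so the discriminative direction is the y axis.
X1 = rng.normal(size=(100, 2)) * np.array([2.0, 0.3])
X2 = rng.normal(size=(100, 2)) * np.array([2.0, 0.3]) + np.array([0.0, 2.0])

m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
mu = (m1 + m2) / 2.0  # total mean (equal class sizes)

# Between-class and within-class scatter for c = 2 classes.
SB = (len(X1) * np.outer(m1 - mu, m1 - mu)
      + len(X2) * np.outer(m2 - mu, m2 - mu))
SW = (X1 - m1).T @ (X1 - m1) + (X2 - m2).T @ (X2 - m2)

# Generalized eigenproblem SB w = lambda SW w; at most c - 1 = 1
# non-zero eigenvalue here, so the Fisher space is 1-dimensional.
evals, evecs = np.linalg.eig(np.linalg.inv(SW) @ SB)
w = np.real(evecs[:, np.argmax(np.real(evals))])
w /= np.linalg.norm(w)

# FLD picks (roughly) the y axis, even though PCA would pick x.
```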

Page 47:

Fisherfaces

  WT = WfldT WPCAT, where

  WPCA = argmax_W |WT ST W|

  Wfld = argmax_W |WT WPCAT SB WPCA W| / |WT WPCAT SW WPCA W|

• Since SW is rank N-c, project the training set to the subspace spanned by the first N-c principal components of the training set.
• Apply FLD to the N-c dimensional subspace, yielding a c-1 dimensional feature space.

• Fisher's Linear Discriminant projects away the within-class variation (lighting, expressions) found in the training set.
• Fisher's Linear Discriminant preserves the separability of the classes.

Page 48:

PCA vs. FLD

Page 49:

Harvard Face Database

[Images grouped by light direction: 15°, 30°, 45°, 60°]

• 10 individuals
• 66 images per person
• Train on 6 images at 15°
• Test on remaining images

Page 50:

Recognition Results: Lighting Extrapolation

[Bar chart: error rate (0-45%) vs. light direction (0-15 degrees, 30 degrees, 45 degrees) for Correlation, Eigenfaces, Eigenfaces (w/o first 3 components), and Fisherface]

Page 51:

Support Vector Machines
• Bayes classifiers, and generative approaches in general, try to model the posterior, p(ω|x)
• Instead, try to obtain the decision boundary directly
  – potentially easier, because we need to encode only the geometry of the boundary, not any irrelevant wiggles in the posterior.
  – Not all points affect the decision boundary

[From Marc Pollefeys]

Page 52:

Support Vector Machines
• Set S of points xi ∈ Rn; each xi belongs to one of two classes, yi ∈ {-1,1}
• The goal is to find a hyperplane that divides S into these two classes
• S is separable if ∃ w ∈ Rn, b ∈ R such that

  yi (w·xi + b) ≥ 1

• Separating hyperplane: w·x + b = 0
• Distance of xi from the hyperplane: di = (w·xi + b) / ||w||
• So yi di ≥ 1 / ||w||, with equality (yi di = 1 / ||w||) for the closest points

[From Marc Pollefeys]

Page 53:

Support Vector Machines
• The optimal separating hyperplane (OSH) maximizes the margin 1/||w||

Problem 1:

  Minimize   (1/2) w·w
  Subject to yi (w·xi + b) ≥ 1,  i = 1, 2, …, N

[Figure: optimal separating hyperplane with margin 2/||w||; the points on the margin are the support vectors]

[From Marc Pollefeys]

Page 54:

Solve using Lagrange multipliers
• Lagrangian

  L(w, b, α) = (1/2) w·w − Σ_{i=1}^{N} αi { yi (w·xi + b) − 1 },  α ≥ 0

• At the solution

  ∂L/∂b = 0  ⟹  Σ_{i=1}^{N} αi yi = 0

  ∂L/∂w = w − Σ_{i=1}^{N} αi yi xi = 0

• Therefore (the dual problem)

  L(α) = Σ_{i=1}^{N} αi − (1/2) Σ_{i,j=1}^{N} αi αj yi yj xi·xj

[From Marc Pollefeys]

Page 55:

Decision function
• Once w and b have been computed, the classification decision for input x is given by

  f(x) = sign(w·x + b)

• Note that the globally optimal solution can always be obtained (convex problem)

[From Marc Pollefeys]

Page 56:

Non-linear SVMs
• Non-linear separation surfaces can be obtained by non-linearly mapping the data to a high-dimensional space and then applying the linear SVM technique
• Note that the data only appears through vector products
• The need for vector products in the high-dimensional space can be avoided by using Mercer kernels:

  K(xi, xj) = Φ(xi)·Φ(xj)

  K(x, y) = (x·y)^p                  (polynomial kernel)
  K(x, y) = exp(−||x − y||² / σ²)    (radial basis function)
  K(x, y) = tanh(κ x·y − δ)          (sigmoidal function)

  e.g. K(x, y) = (x·y)² = x1²y1² + 2 x1x2 y1y2 + x2²y2²
       Φ(x) = (x1², √2 x1x2, x2²)

[From Marc Pollefeys]
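The worked p = 2 example can be checked numerically (toy vectors; the √2 in Φ is what makes the identity exact):

```python
import numpy as np

def K(x, y):
    # Polynomial kernel with p = 2.
    return float(np.dot(x, y)) ** 2

def phi(x):
    # Explicit feature map for the p = 2 kernel on R^2.
    return np.array([x[0] ** 2, np.sqrt(2.0) * x[0] * x[1], x[1] ** 2])

x = np.array([1.0, 2.0])
y = np.array([3.0, -1.0])

# Mercer property: the kernel is a dot product in the mapped space,
# so it can be evaluated without ever forming phi(x).
assert np.isclose(K(x, y), np.dot(phi(x), phi(y)))
```

This is the "kernel trick": the SVM dual only touches the data through dot products, so replacing xi·xj with K(xi, xj) trains a linear classifier in the mapped space implicitly.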

Page 57:

Space in which the decision boundary is linear: a conic in the original space has the form

  (x, y) → (x², xy, y², x, y) = (u0, u1, u2, u3, u4)

  a u0 + b u1 + c u2 + d u3 + e u4 + f = 0

[From Marc Pollefeys]

Page 58:

Variability:
• Camera position
• Illumination
• Internal parameters
• Within-class variations

Page 59:

Appearance manifold approach (Nayar et al. '96)
• For every object:
  1. Sample the set of viewing conditions
  2. Crop & scale images to a standard size
  3. Use the image as a feature vector
• Apply a PCA over all the images; keep the dominant PCs
• The set of views for one object is represented as a manifold in the projected space
• Recognition: what is the nearest manifold for a given test image?

Page 60:

An example: input images

Page 61:

An example: basis images

Page 62:

An example: surfaces of first 3 coefficients

Page 63:

Parameterized Eigenspace

Page 64:

Recognition

Page 65:

Appearance-based vision for robot control

[ Nayar, Nene, Murase 1994 ]

Page 66:

Limitations of these approaches
• Object must be segmented from the background (how would one do this in non-trivial situations?)
• Occlusion?
• The variability (dimension) in images is large, so is sampling feasible?
• How can one generalize to classes of objects?

Page 67:

Appearance-Based Vision: Lessons

Strengths
• Posing the recognition metric in the image space rather than a derived representation is more powerful than expected.
• Modeling objects from many images is not unreasonable given hardware developments.
• The data (images) may provide a better representation than abstractions for many tasks.

Page 68:

Appearance-Based Vision: Lessons

Weaknesses
• Segmentation or object detection is still an issue.
• To train the method, objects have to be observed under a wide range of conditions (e.g. pose, lighting, shape deformation).
• Limited power to extrapolate or generalize (abstract) to novel conditions.

Page 69:

Model-Based Vision
• Given 3-D models of each object
• Detect image features (often edges, line segments, conic sections)
• Establish correspondence between model & image features
• Estimate pose
• Check consistency of the projected model with the image

Page 70:

A Rough Recognition Spectrum
• Appearance-Based Recognition (Eigenface, Fisherface)
• Local Features + Spatial Relations
• 3-D Model-Based Recognition
• Geometric Invariants
• Image Abstractions / Volumetric Primitives
• Shape Contexts
• Function
• Aspect Graphs

Page 71:

Recognition by Hypothesize and Test

• General idea
– Hypothesize object identity and pose
– Recover camera parameters (widely known as backprojection)
– Render object using camera parameters
– Compare to image

• Issues
– Where do the hypotheses come from?
– How do we compare to the image (verification)?

• Simplest approach
– Construct a correspondence for all object features to every correctly sized subset of image points
• These are the hypotheses
– Expensive search, which is also redundant.
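The hypothesize-and-test loop can be sketched concretely. This is a toy illustration under a deliberately simplified pose model (pure 2-D translation) with made-up point sets; the function name, tolerance, and scoring rule are illustrative choices, not the method of any particular system.

```python
# Toy hypothesize-and-test: recover a 2-D translation relating model
# points to image points, then verify by reprojection. The
# translation-only pose model is an illustrative simplification.

def hypothesize_and_test(model, image, tol=0.5):
    """Try every (model point, image point) pairing as a pose
    hypothesis and keep the one with the best verification score."""
    best_pose, best_score = None, -1
    for mx, my in model:
        for ix, iy in image:
            tx, ty = ix - mx, iy - my          # hypothesized pose
            # Verification: count model points that reproject near
            # some image feature under this pose.
            score = sum(
                1 for px, py in model
                if any(abs(px + tx - qx) <= tol and abs(py + ty - qy) <= tol
                       for qx, qy in image))
            if score > best_score:
                best_pose, best_score = (tx, ty), score
    return best_pose, best_score

model = [(0, 0), (1, 0), (0, 2)]
image = [(5, 3), (6, 3), (5, 5), (9, 9)]   # model shifted by (5, 3) + clutter
print(hypothesize_and_test(model, image))   # -> ((5, 3), 3)
```

Even in this toy, every model-to-image pairing is tried, which is exactly the expensive, redundant search the slide warns about.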

Page 72:

Pose consistency

• Correspondences between image features and model features are not independent.
• A small number of correspondences yields a camera matrix; the other correspondences must be consistent with it.
• Strategy:
– Generate hypotheses using small numbers of correspondences (e.g., triples of points for a calibrated perspective camera)
– Backproject and verify

Page 73:

Page 74:

Figure from “Object recognition using alignment,” D.P. Huttenlocher and S. Ullman, Proc. Int. Conf. Computer Vision, 1986, copyright IEEE, 1986

Page 75:

Voting on Pose

• Each model leads to many correct sets of correspondences, each of which has the same pose
– Vote on pose in an accumulator array
– This is a Hough transform, with all its issues.
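The accumulator idea can be shown in the same toy setting as before: instead of scoring each hypothesis separately, every candidate correspondence drops a vote into a quantized pose accumulator, Hough-style. The translation-only pose and the bin size are illustrative assumptions.

```python
# Sketch of pose voting: each candidate correspondence votes for a
# discretized pose; the accumulator peak is the consensus pose.
from collections import Counter

def vote_on_pose(model, image, bin_size=1.0):
    votes = Counter()
    for mx, my in model:
        for ix, iy in image:
            # Each pairing implies a translation; quantize and vote.
            bin_key = (round((ix - mx) / bin_size),
                       round((iy - my) / bin_size))
            votes[bin_key] += 1
    return votes.most_common(1)[0]          # (pose bin, vote count)

model = [(0, 0), (2, 0), (0, 1)]
image = [(4, 4), (6, 4), (4, 5), (0, 9)]    # model shifted by (4, 4) + clutter
print(vote_on_pose(model, image))            # -> ((4, 4), 3)
```

The usual Hough issues carry over: bin size trades off noise tolerance against pose resolution, and clutter still casts spurious votes.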

Page 76:

Page 77:

Figure from “The evolution and testing of a model-based object recognition system”, J.L. Mundy and A. Heller, Proc. Int. Conf. Computer Vision, 1990 copyright 1990 IEEE

Page 78:

Figure from “The evolution and testing of a model-based object recognition system”, J.L. Mundy and A. Heller, Proc. Int. Conf. Computer Vision, 1990 copyright 1990 IEEE

Page 79:

Figure from “The evolution and testing of a model-based object recognition system”, J.L. Mundy and A. Heller, Proc. Int. Conf. Computer Vision, 1990 copyright 1990 IEEE

Page 80:

Figure from “The evolution and testing of a model-based object recognition system”, J.L. Mundy and A. Heller, Proc. Int. Conf. Computer Vision, 1990 copyright 1990 IEEE

Page 81:

Invariance

• Properties or measures that are independent of some group of transformations (e.g., rigid, affine, projective)
• For example, under affine transformations:
– Collinearity
– Parallelism
– Intersection
– Distance ratio along a line
– Angle ratios of three intersecting lines
– Affine coordinates

Page 82:

Invariance - 1

• There are geometric properties that are invariant to camera transformations
• Easiest case: view a planar object in scaled orthography.
• Assume we have three base points P_i (i = 1..3) on the object; then any other point P_k on the object can be written as

P_k = P_1 + μ_ka (P_2 − P_1) + μ_kb (P_3 − P_1)

• Now image points are obtained by multiplying by a plane affine transformation A, so

p_k = A P_k
    = A (P_1 + μ_ka (P_2 − P_1) + μ_kb (P_3 − P_1))
    = p_1 + μ_ka (p_2 − p_1) + μ_kb (p_3 − p_1)

(Figure: a planar object with base points P1, P2, P3 and further points Pk, Pk+1.)
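The invariance of (μ_ka, μ_kb) can be checked numerically. The base points, the affine matrix A, and the offset t below are arbitrary example values; the affine coordinates computed before and after the map agree.

```python
# Check: the affine coordinates (mu_a, mu_b) of a point with respect
# to three base points are unchanged by any affine map.

def affine_coords(p1, p2, p3, pk):
    """Solve pk = p1 + a*(p2-p1) + b*(p3-p1) for (a, b) via Cramer's rule."""
    ux, uy = p2[0] - p1[0], p2[1] - p1[1]
    vx, vy = p3[0] - p1[0], p3[1] - p1[1]
    wx, wy = pk[0] - p1[0], pk[1] - p1[1]
    det = ux * vy - uy * vx
    return ((wx * vy - wy * vx) / det, (ux * wy - uy * wx) / det)

def affine_map(p, A=((2, 1), (0, 3)), t=(5, -2)):
    """Apply an arbitrary example affine transformation p -> A p + t."""
    return (A[0][0] * p[0] + A[0][1] * p[1] + t[0],
            A[1][0] * p[0] + A[1][1] * p[1] + t[1])

pts = [(0, 0), (1, 0), (0, 1), (0.25, 0.5)]       # base triple + one point
before = affine_coords(*pts)
after = affine_coords(*[affine_map(p) for p in pts])
print(before, after)   # -> (0.25, 0.5) (0.25, 0.5)
```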

Page 83:

Geometric hashing

• Vote on identity and correspondence using invariants
– Take hypotheses with large enough votes
• Building a table:
– Take all triplets of points in one model image to be base points P1, P2, P3
– Take every fourth point and compute the μ's
– Fill up a table, indexed by the μ's, with:
• the base points and fourth point that yield those μ's
• the object identity
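A minimal sketch of the table-building and voting steps, assuming a single fixed basis triple per model and coarse rounding as the hash key; a real implementation iterates over all basis triples and handles quantization more carefully. All names and point sets here are made up.

```python
# Minimal geometric hashing sketch: index each model by the affine
# coordinates of its remaining points relative to a basis triple,
# then look the entries up at recognition time and vote.
from collections import defaultdict

def mu_coords(p1, p2, p3, pk):
    """Affine coordinates of pk w.r.t. basis (p1, p2, p3), rounded
    for use as a hash key (a simplifying quantization choice)."""
    ux, uy = p2[0] - p1[0], p2[1] - p1[1]
    vx, vy = p3[0] - p1[0], p3[1] - p1[1]
    wx, wy = pk[0] - p1[0], pk[1] - p1[1]
    det = ux * vy - uy * vx
    return (round((wx * vy - wy * vx) / det, 3),
            round((ux * wy - uy * wx) / det, 3))

def build_table(models):
    table = defaultdict(list)
    for name, pts in models.items():
        p1, p2, p3 = pts[:3]                 # basis triple
        for pk in pts[3:]:                   # every fourth point
            table[mu_coords(p1, p2, p3, pk)].append(name)
    return table

def recognize(table, pts):
    votes = defaultdict(int)
    p1, p2, p3 = pts[:3]
    for pk in pts[3:]:
        for name in table.get(mu_coords(p1, p2, p3, pk), ()):
            votes[name] += 1
    return max(votes, key=votes.get) if votes else None

models = {"wrench": [(0, 0), (1, 0), (0, 1), (0.5, 0.5), (0.2, 0.8)],
          "hammer": [(0, 0), (1, 0), (0, 1), (0.9, 0.1), (0.1, 0.9)]}
table = build_table(models)
# An affine-transformed view of the wrench (scaled 2x in x):
view = [(0, 0), (2, 0), (0, 1), (1, 0.5), (0.4, 0.8)]
print(recognize(table, view))   # -> wrench
```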

Page 84:

Page 85:

Figure from “Efficient model library access by projectively invariant indexing functions,” by C.A. Rothwell et al., Proc. Computer Vision and Pattern Recognition, 1992, copyright 1992, IEEE

Page 86:

Verification

• Edge score
– Are there image edges near predicted object edges?
– Very unreliable; in texture, the answer is usually yes
• Oriented edge score
– Are there image edges near predicted object edges with the right orientation?
– Better, but still hard to do well (see next slide)
• Texture
– e.g., does the spanner have the same texture as the wood?

Page 87:

Application: Surgery

• To minimize damage by operation planning
• To reduce the number of operations by planning surgery
• To remove only affected tissue
• Problem
– Ensure that the model, with the operations planned on it and the information about the affected tissue, lines up with the patient
– Display model information superimposed on a view of the patient
– Big issue: coordinate alignment, as above

Page 88:

(Figure: medical image modalities - MRI, CT, nuclear medicine, ultrasound.)

Reprinted from Image and Vision Computing, v. 13, N. Ayache, “Medical computer vision, virtual reality and robotics”, Page 296, copyright, (1995), with permission from Elsevier Science

Page 89:

Figures by kind permission of Eric Grimson; further information can be obtained from his web site http://www.ai.mit.edu/people/welg/welg.html.

Page 90:

Figures by kind permission of Eric Grimson; further information can be obtained from his web site http://www.ai.mit.edu/people/welg/welg.html.

Page 91:

Figures by kind permission of Eric Grimson; further information can be obtained from his web site http://www.ai.mit.edu/people/welg/welg.html.

Page 92:

A Rough Recognition Spectrum

Appearance-Based Recognition (Eigenface, Fisherface)

Local Features + Spatial Relations

3-D Model-Based Recognition

Geometric Invariants

Image Abstractions / Volumetric Primitives

Shape Contexts

Aspect Graphs

Function

Page 93:

Matching using Local Image Features

Simple approach:
• Detect corners in the image (e.g., Harris corner detector).
• Represent the neighborhood of each corner by a feature vector (produced by Gabor filters, K-jets, affine-invariant features, etc.).
• Modeling: Given a training image of an object without clutter, detect corners, compute feature descriptors, and store them.
• Recognition time: Given a test image with possible clutter, detect corners and compute features. Find models with the same feature descriptors (hashing) and vote.
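The modeling and recognition steps above can be sketched with toy descriptors. A real system computes descriptors at detected corners; here the descriptors, the distance threshold, and the database contents are all invented for illustration.

```python
# Sketch of recognition by local-feature voting: match each test
# descriptor to its nearest stored model descriptor and vote for
# that model. Descriptors are toy 3-vectors standing in for
# Gabor/jet/affine-invariant features.

def nearest_model(db, desc, max_dist=0.5):
    """Return the model name of the nearest stored descriptor,
    or None if nothing is within max_dist (clutter rejection)."""
    best = None, float("inf")
    for name, d in db:
        dist = sum((a - b) ** 2 for a, b in zip(desc, d)) ** 0.5
        if dist < best[1]:
            best = name, dist
    return best[0] if best[1] <= max_dist else None

def classify(db, test_descs):
    votes = {}
    for desc in test_descs:
        name = nearest_model(db, desc)
        if name is not None:
            votes[name] = votes.get(name, 0) + 1
    return max(votes, key=votes.get) if votes else None

db = [("cup", (1.0, 0.0, 0.2)), ("cup", (0.8, 0.1, 0.4)),
      ("book", (0.0, 1.0, 0.9)), ("book", (0.1, 0.9, 1.0))]
test = [(1.0, 0.05, 0.25), (0.85, 0.1, 0.35), (5.0, 5.0, 5.0)]  # clutter last
print(classify(db, test))   # -> cup
```

The linear scan in `nearest_model` is where a hash table (as the slide suggests) would replace brute-force search in practice.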

Page 94:

Figure from “Local grayvalue invariants for image retrieval,” by C. Schmid and R. Mohr, IEEE Trans. Pattern Analysis and Machine Intelligence, 1997 copyright 1997, IEEE

Page 95:

Probabilistic interpretation

• Write

• Assume

• Likelihood of image given pattern

Page 96:

Employ spatial relations

Figure from “Local grayvalue invariants for image retrieval,” by C. Schmid and R. Mohr, IEEE Trans. Pattern Analysis and Machine Intelligence, 1997 copyright 1997, IEEE

Page 97:

Figure from “Local grayvalue invariants for image retrieval,” by C. Schmid and R. Mohr, IEEE Trans. Pattern Analysis and Machine Intelligence, 1997 copyright 1997, IEEE

Page 98:

Example

Training examples

Test image

Page 99:

Finding faces using relations

• Strategy:
– A face is eyes, nose, mouth, etc., with appropriate relations between them
– Build a specialised detector for each of these (template matching) and look for groups with the right internal structure
– Once we've found enough of a face, there is little uncertainty about where the other bits could be
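The last point can be illustrated numerically: given two detected eye locations, the search region for the mouth is tightly bounded. The geometric proportions below are made-up illustrative numbers, not measured face statistics.

```python
# Sketch of the constraint idea: once the eyes are localized, the
# expected mouth position is tightly bounded relative to them.

def predicted_mouth_region(left_eye, right_eye, slack=0.25):
    """Predict a search box (x0, y0, x1, y1) for the mouth from two
    eye locations; offsets are illustrative, not learned."""
    ex = (left_eye[0] + right_eye[0]) / 2      # eye midpoint
    ey = (left_eye[1] + right_eye[1]) / 2
    d = right_eye[0] - left_eye[0]             # inter-eye distance
    cx, cy = ex, ey + d                        # assumed mouth center
    r = slack * d                              # search radius
    return (cx - r, cy - r, cx + r, cy + r)

print(predicted_mouth_region((40, 50), (80, 50)))
# -> (50.0, 80.0, 70.0, 100.0)
```

The mouth detector then only has to search this small box instead of the whole image, which is what makes the grouping strategy efficient.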

Page 100:

Finding faces using relations

• Strategy: compare

Notice that once some facial features have been found, the position of the rest is quite strongly constrained.

Figure from, “Finding faces in cluttered scenes using random labelled graph matching,” by Leung, T. ;Burl, M and Perona, P., Proc. Int. Conf. on Computer Vision, 1995 copyright 1995, IEEE

Page 101:

Figure from, “Finding faces in cluttered scenes using random labelled graph matching,”by Leung, T. ;Burl, M and Perona, P., Proc. Int. Conf. on Computer Vision, 1995 copyright 1995, IEEE

Page 102:

Even without shading, shape reveals a lot - line drawings

Page 103:

Scene Interpretation

“The Swing”, Fragonard, 1766

Page 104:

Final Exam

• Closed book
• One cheat sheet
– Single piece of paper, handwritten, no photocopying, no physical cut & paste
• What to study
– Basically material presented in class, and supporting material from the text
– If it was in the text but NEVER mentioned in class, it is very unlikely to be on the exam
• Question style:
– Short answer
– Some longer problems to be worked out