Step 3: Classification
Transcript of Step 3: Classification
![Page 1: Step 3: Classification](https://reader031.fdocuments.us/reader031/viewer/2022013012/56814636550346895db3448c/html5/thumbnails/1.jpg)
Step 3: Classification
• Learn a decision rule (classifier) assigning bag-of-features representations of images to different classes
[Figure: zebra vs. non-zebra feature points separated by a decision boundary]
![Page 2: Step 3: Classification](https://reader031.fdocuments.us/reader031/viewer/2022013012/56814636550346895db3448c/html5/thumbnails/2.jpg)
Classification
• Assign input vector to one of two or more classes
• Any decision rule divides input space into decision regions separated by decision boundaries
![Page 3: Step 3: Classification](https://reader031.fdocuments.us/reader031/viewer/2022013012/56814636550346895db3448c/html5/thumbnails/3.jpg)
Nearest Neighbor Classifier
• Assign label of nearest training data point to each test data point
Voronoi partitioning of feature space for 2-category 2-D and 3-D data
from Duda et al.
Source: D. Lowe
![Page 4: Step 3: Classification](https://reader031.fdocuments.us/reader031/viewer/2022013012/56814636550346895db3448c/html5/thumbnails/4.jpg)
K-Nearest Neighbors
• For a new point, find the k closest points from the training data
• Labels of the k points “vote” to classify it
• Works well provided there is lots of data and the distance function is good
[Figure: k = 5 example]
Source: D. Lowe
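A minimal sketch of the k-NN voting rule described above, assuming the training vectors and labels are NumPy arrays and using Euclidean distance (any of the histogram distances on the next slide could be substituted):

```python
import numpy as np
from collections import Counter

def knn_classify(x, X_train, y_train, k=5):
    """Label a test point by majority vote among its k nearest training points."""
    dists = np.linalg.norm(X_train - x, axis=1)    # distance from x to every training point
    nearest = np.argsort(dists)[:k]                # indices of the k closest points
    votes = Counter(y_train[i] for i in nearest)   # each neighbor votes with its label
    return votes.most_common(1)[0][0]
```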
![Page 5: Step 3: Classification](https://reader031.fdocuments.us/reader031/viewer/2022013012/56814636550346895db3448c/html5/thumbnails/5.jpg)
Functions for comparing histograms
• L1 distance: $D(h_1, h_2) = \sum_{i=1}^{N} |h_1(i) - h_2(i)|$
• χ² distance: $D(h_1, h_2) = \sum_{i=1}^{N} \frac{(h_1(i) - h_2(i))^2}{h_1(i) + h_2(i)}$
• Quadratic distance (cross-bin): $D(h_1, h_2) = \sum_{i,j} A_{ij}\,(h_1(i) - h_2(j))^2$
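The three distances above written out as a sketch in NumPy (h1 and h2 are assumed to be nonnegative histograms of the same length N; A is a bin-similarity matrix for the cross-bin case):

```python
import numpy as np

def l1_distance(h1, h2):
    # D(h1, h2) = sum_i |h1(i) - h2(i)|
    return np.sum(np.abs(h1 - h2))

def chi2_distance(h1, h2, eps=1e-10):
    # D(h1, h2) = sum_i (h1(i) - h2(i))^2 / (h1(i) + h2(i)); eps guards against empty bins
    return np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

def quadratic_distance(h1, h2, A):
    # Cross-bin: D(h1, h2) = sum_{i,j} A_ij * (h1(i) - h2(j))^2
    diff = h1[:, None] - h2[None, :]
    return np.sum(A * diff ** 2)
```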
![Page 6: Step 3: Classification](https://reader031.fdocuments.us/reader031/viewer/2022013012/56814636550346895db3448c/html5/thumbnails/6.jpg)
Linear classifiers
• Find linear function (hyperplane) to separate positive and negative examples
$\mathbf{x}_i$ positive: $\mathbf{x}_i \cdot \mathbf{w} + b \ge 0$
$\mathbf{x}_i$ negative: $\mathbf{x}_i \cdot \mathbf{w} + b < 0$
Which hyperplane is best?
![Page 7: Step 3: Classification](https://reader031.fdocuments.us/reader031/viewer/2022013012/56814636550346895db3448c/html5/thumbnails/7.jpg)
![Page 8: Step 3: Classification](https://reader031.fdocuments.us/reader031/viewer/2022013012/56814636550346895db3448c/html5/thumbnails/8.jpg)
Linear classifiers - margin
Generalization is not good in this case:
Better if a margin is introduced:
[Figures: examples plotted with $x_1$ = roundness and $x_2$ = color; $b/\|\mathbf{w}\|$ is the distance from the origin to the separating hyperplane]
![Page 9: Step 3: Classification](https://reader031.fdocuments.us/reader031/viewer/2022013012/56814636550346895db3448c/html5/thumbnails/9.jpg)
![Page 10: Step 3: Classification](https://reader031.fdocuments.us/reader031/viewer/2022013012/56814636550346895db3448c/html5/thumbnails/10.jpg)
![Page 11: Step 3: Classification](https://reader031.fdocuments.us/reader031/viewer/2022013012/56814636550346895db3448c/html5/thumbnails/11.jpg)
Support vector machines
• Find hyperplane that maximizes the margin between the positive and negative examples
Positive examples ($y_i = 1$): $\mathbf{x}_i \cdot \mathbf{w} + b \ge 1$
Negative examples ($y_i = -1$): $\mathbf{x}_i \cdot \mathbf{w} + b \le -1$
For support vectors, $\mathbf{x}_i \cdot \mathbf{w} + b = \pm 1$
The margin is $2 / \|\mathbf{w}\|$
[Figure: margin and support vectors]
![Page 12: Step 3: Classification](https://reader031.fdocuments.us/reader031/viewer/2022013012/56814636550346895db3448c/html5/thumbnails/12.jpg)
Finding the maximum margin hyperplane
1. Maximize the margin $2/\|\mathbf{w}\|$
2. Correctly classify all training data:
   positive ($y_i = 1$): $\mathbf{x}_i \cdot \mathbf{w} + b \ge 1$
   negative ($y_i = -1$): $\mathbf{x}_i \cdot \mathbf{w} + b \le -1$
Quadratic optimization problem:
Minimize $\frac{1}{2}\mathbf{w}^T\mathbf{w}$ subject to $y_i(\mathbf{w} \cdot \mathbf{x}_i + b) \ge 1$
Solution based on Lagrange multipliers
![Page 13: Step 3: Classification](https://reader031.fdocuments.us/reader031/viewer/2022013012/56814636550346895db3448c/html5/thumbnails/13.jpg)
Finding the maximum margin hyperplane
• Solution: $\mathbf{w} = \sum_i \alpha_i y_i \mathbf{x}_i$
  (the learned weights $\alpha_i$ are nonzero only for the support vectors)
![Page 14: Step 3: Classification](https://reader031.fdocuments.us/reader031/viewer/2022013012/56814636550346895db3448c/html5/thumbnails/14.jpg)
Finding the maximum margin hyperplane
• Solution: $\mathbf{w} = \sum_i \alpha_i y_i \mathbf{x}_i$, and $b = y_i - \mathbf{w} \cdot \mathbf{x}_i$ for any support vector
• Classification function (decision boundary): $\mathbf{w} \cdot \mathbf{x} + b = \sum_i \alpha_i y_i\, \mathbf{x}_i \cdot \mathbf{x} + b$
• Notice that it relies on an inner product between the test point $\mathbf{x}$ and the support vectors $\mathbf{x}_i$
• Solving the optimization problem also involves computing the inner products $\mathbf{x}_i \cdot \mathbf{x}_j$ between all pairs of training points
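A sketch of this decision function exactly as written above, assuming the dual weights, labels, support vectors, and bias come from some external QP solver:

```python
import numpy as np

def linear_svm_decision(x, support_vectors, alphas, labels, b):
    """w·x + b = sum_i alpha_i * y_i * (x_i · x) + b, summed over the support vectors."""
    return np.sum(alphas * labels * (support_vectors @ x)) + b

def linear_svm_classify(x, support_vectors, alphas, labels, b):
    # The sign of the decision value gives the predicted class (+1 / -1)
    return 1 if linear_svm_decision(x, support_vectors, alphas, labels, b) >= 0 else -1
```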
![Page 15: Step 3: Classification](https://reader031.fdocuments.us/reader031/viewer/2022013012/56814636550346895db3448c/html5/thumbnails/15.jpg)
![Page 16: Step 3: Classification](https://reader031.fdocuments.us/reader031/viewer/2022013012/56814636550346895db3448c/html5/thumbnails/16.jpg)
![Page 17: Step 3: Classification](https://reader031.fdocuments.us/reader031/viewer/2022013012/56814636550346895db3448c/html5/thumbnails/17.jpg)
![Page 18: Step 3: Classification](https://reader031.fdocuments.us/reader031/viewer/2022013012/56814636550346895db3448c/html5/thumbnails/18.jpg)
• Datasets that are linearly separable work out great:
• But what if the dataset is just too hard?
• We can map it to a higher-dimensional space:
[Figures: 1-D data on the x axis; a dataset that is not separable in 1-D becomes linearly separable after mapping each point $x$ to $(x, x^2)$]
Nonlinear SVMs
Slide credit: Andrew Moore
![Page 19: Step 3: Classification](https://reader031.fdocuments.us/reader031/viewer/2022013012/56814636550346895db3448c/html5/thumbnails/19.jpg)
Φ: x → φ(x)
Nonlinear SVMs
• General idea: the original input space can always be mapped to some higher-dimensional feature space where the training set is separable:
Slide credit: Andrew Moore
![Page 20: Step 3: Classification](https://reader031.fdocuments.us/reader031/viewer/2022013012/56814636550346895db3448c/html5/thumbnails/20.jpg)
Nonlinear SVMs
• The kernel trick: instead of explicitly computing the lifting transformation φ(x), define a kernel function K such that
$K(\mathbf{x}_i, \mathbf{x}_j) = \varphi(\mathbf{x}_i) \cdot \varphi(\mathbf{x}_j)$
(to be valid, the kernel function must satisfy Mercer’s condition)
• This gives a nonlinear decision boundary in the original feature space:
$\sum_i \alpha_i y_i K(\mathbf{x}_i, \mathbf{x}) + b$
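The same decision function with the kernel trick: every inner product is replaced by a kernel evaluation. This sketch assumes a Gaussian (RBF) kernel purely as an example; any Mercer kernel could be plugged in:

```python
import numpy as np

def rbf_kernel(x1, x2, gamma=1.0):
    # K(x1, x2) = exp(-gamma * ||x1 - x2||^2), a standard Mercer kernel
    return np.exp(-gamma * np.sum((x1 - x2) ** 2))

def kernel_svm_decision(x, support_vectors, alphas, labels, b, kernel=rbf_kernel):
    # sum_i alpha_i * y_i * K(x_i, x) + b
    return sum(a * y * kernel(sv, x) for a, y, sv in zip(alphas, labels, support_vectors)) + b
```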
![Page 21: Step 3: Classification](https://reader031.fdocuments.us/reader031/viewer/2022013012/56814636550346895db3448c/html5/thumbnails/21.jpg)
![Page 22: Step 3: Classification](https://reader031.fdocuments.us/reader031/viewer/2022013012/56814636550346895db3448c/html5/thumbnails/22.jpg)
![Page 23: Step 3: Classification](https://reader031.fdocuments.us/reader031/viewer/2022013012/56814636550346895db3448c/html5/thumbnails/23.jpg)
![Page 24: Step 3: Classification](https://reader031.fdocuments.us/reader031/viewer/2022013012/56814636550346895db3448c/html5/thumbnails/24.jpg)
![Page 25: Step 3: Classification](https://reader031.fdocuments.us/reader031/viewer/2022013012/56814636550346895db3448c/html5/thumbnails/25.jpg)
Kernels for bags of features
• Histogram intersection kernel: $I(h_1, h_2) = \sum_{i=1}^{N} \min(h_1(i), h_2(i))$
• Generalized Gaussian kernel: $K(h_1, h_2) = \exp\left(-\frac{1}{A} D(h_1, h_2)^2\right)$
• D can be Euclidean distance, χ² distance, Earth Mover’s Distance, etc.
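Sketches of the two kernels above for histogram inputs (the χ² distance is repeated here so the snippet stands alone; A is a scaling constant):

```python
import numpy as np

def histogram_intersection_kernel(h1, h2):
    # I(h1, h2) = sum_i min(h1(i), h2(i))
    return np.sum(np.minimum(h1, h2))

def chi2_distance(h1, h2, eps=1e-10):
    return np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

def generalized_gaussian_kernel(h1, h2, A, dist=chi2_distance):
    # K(h1, h2) = exp(-(1/A) * D(h1, h2)^2); D can be Euclidean, chi-square, EMD, ...
    return np.exp(-dist(h1, h2) ** 2 / A)
```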
![Page 26: Step 3: Classification](https://reader031.fdocuments.us/reader031/viewer/2022013012/56814636550346895db3448c/html5/thumbnails/26.jpg)
SVM classifier
SVM with multi-channel chi-square kernel
$K(H_i, H_j) = \exp\left(-\sum_c \frac{1}{A_c} D_c(H_i, H_j)\right)$
● Channel c is a combination of detector and descriptor
● $D_c(H_i, H_j)$ is the chi-square distance between histograms: $D(H_1, H_2) = \frac{1}{2} \sum_{i=1}^{m} \frac{(h_{1i} - h_{2i})^2}{h_{1i} + h_{2i}}$
● $A_c$ is the mean value of the distances between all training samples
● Extension: learning of the weights, for example with MKL (multiple kernel learning)
J. Zhang, M. Marszalek, S. Lazebnik, and C. Schmid, Local Features and Kernels for Classification of Texture and Object Categories: A Comprehensive Study, IJCV 2007
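A sketch of the multi-channel χ² kernel under the assumption that each image is a dict mapping channel names (detector/descriptor combinations) to histograms, and that A_c has been precomputed as the mean training distance for each channel:

```python
import numpy as np

def chi2_dist(h1, h2, eps=1e-10):
    # D(H1, H2) = (1/2) * sum_i (h1i - h2i)^2 / (h1i + h2i)
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

def multichannel_chi2_kernel(Hi, Hj, A):
    """Hi, Hj: channel name -> histogram; A: channel name -> mean pairwise training distance."""
    return np.exp(-sum(chi2_dist(Hi[c], Hj[c]) / A[c] for c in Hi))
```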
![Page 27: Step 3: Classification](https://reader031.fdocuments.us/reader031/viewer/2022013012/56814636550346895db3448c/html5/thumbnails/27.jpg)
Pyramid match kernel
• Weighted sum of histogram intersections at multiple resolutions (linear in the number of features instead of cubic)
• Approximates the optimal partial matching between sets of features
K. Grauman and T. Darrell. The Pyramid Match Kernel: Discriminative Classification with Sets of Image Features, ICCV 2005.
![Page 28: Step 3: Classification](https://reader031.fdocuments.us/reader031/viewer/2022013012/56814636550346895db3448c/html5/thumbnails/28.jpg)
Pyramid Match
Histogram intersection
![Page 29: Step 3: Classification](https://reader031.fdocuments.us/reader031/viewer/2022013012/56814636550346895db3448c/html5/thumbnails/29.jpg)
• Difference in histogram intersections across levels counts the number of new pairs matched: (matches at this level) − (matches at the previous level)
Histogram intersection
Pyramid Match
![Page 30: Step 3: Classification](https://reader031.fdocuments.us/reader031/viewer/2022013012/56814636550346895db3448c/html5/thumbnails/30.jpg)
Pyramid match kernel
• Weights inversely proportional to bin size
• Normalize kernel values to avoid favoring large sets
• Kernel: $K(\Psi(X), \Psi(Y)) = \sum_i w_i N_i$, where $\Psi(X), \Psi(Y)$ are histogram pyramids, $N_i$ is the number of newly matched pairs at level $i$, and $w_i$ is a measure of the difficulty of a match at level $i$
![Page 31: Step 3: Classification](https://reader031.fdocuments.us/reader031/viewer/2022013012/56814636550346895db3448c/html5/thumbnails/31.jpg)
Example pyramid match, Level 0
![Page 32: Step 3: Classification](https://reader031.fdocuments.us/reader031/viewer/2022013012/56814636550346895db3448c/html5/thumbnails/32.jpg)
Example pyramid match, Level 1
![Page 33: Step 3: Classification](https://reader031.fdocuments.us/reader031/viewer/2022013012/56814636550346895db3448c/html5/thumbnails/33.jpg)
Example pyramid match, Level 2
![Page 34: Step 3: Classification](https://reader031.fdocuments.us/reader031/viewer/2022013012/56814636550346895db3448c/html5/thumbnails/34.jpg)
Example pyramid match
[Figure: pyramid match score compared to the optimal match]
![Page 35: Step 3: Classification](https://reader031.fdocuments.us/reader031/viewer/2022013012/56814636550346895db3448c/html5/thumbnails/35.jpg)
Summary: Pyramid match kernel
• Approximates the optimal partial matching between sets of features
• $N_i$: number of new matches at level $i$; $w_i$: difficulty of a match at level $i$
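A rough sketch of the pyramid match for 1-D feature values in [0, D), with the number of bins halved at each level and weights 1/2^i as described above (normalization by sqrt(K(X,X)·K(Y,Y)) is left out for brevity):

```python
import numpy as np

def pyramid_match(X, Y, D=256, levels=4):
    """X, Y: 1-D arrays of feature values in [0, D). Returns the unnormalized match score."""
    prev_intersection = 0.0
    score = 0.0
    for i in range(levels + 1):
        bins = max(1, D // (2 ** i))                       # bin size doubles at each level
        hx, _ = np.histogram(X, bins=bins, range=(0, D))
        hy, _ = np.histogram(Y, bins=bins, range=(0, D))
        intersection = np.sum(np.minimum(hx, hy))          # matches found at this level
        new_matches = intersection - prev_intersection     # newly matched pairs N_i
        score += new_matches / (2 ** i)                    # weight inversely proportional to bin size
        prev_intersection = intersection
    return score
```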
![Page 36: Step 3: Classification](https://reader031.fdocuments.us/reader031/viewer/2022013012/56814636550346895db3448c/html5/thumbnails/36.jpg)
Review: Discriminative methods
• Nearest-neighbor and k-nearest-neighbor classifiers
  • L1 distance, χ² distance, quadratic distance
• Support vector machines
  • Linear classifiers
  • Margin maximization
  • The kernel trick
  • Kernel functions: histogram intersection, generalized Gaussian, pyramid match
• Of course, there are many other classifiers out there
  • Neural networks, boosting, decision trees, …
![Page 37: Step 3: Classification](https://reader031.fdocuments.us/reader031/viewer/2022013012/56814636550346895db3448c/html5/thumbnails/37.jpg)
Summary: SVMs for image classification
1. Pick an image representation (in our case, bag of features)
2. Pick a kernel function for that representation
3. Compute the matrix of kernel values between every pair of training examples
4. Feed the kernel matrix into your favorite SVM solver to obtain support vectors and weights
5. At test time: compute kernel values for your test example and each support vector, and combine them with the learned weights to get the value of the decision function
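A sketch of these five steps with scikit-learn's SVC and a precomputed kernel matrix, using the histogram intersection kernel as an example (train_hists, train_labels, and test_hists are hypothetical arrays of bag-of-features histograms and class labels):

```python
import numpy as np
from sklearn.svm import SVC

def intersection_kernel_matrix(A, B):
    # Entry (i, j) = sum_k min(A[i, k], B[j, k])
    return np.array([[np.sum(np.minimum(a, b)) for b in B] for a in A])

# Steps 1-2: representation = bag-of-features histograms, kernel = histogram intersection
# Step 3: kernel matrix between every pair of training examples
K_train = intersection_kernel_matrix(train_hists, train_hists)

# Step 4: feed the kernel matrix to an SVM solver
clf = SVC(kernel="precomputed").fit(K_train, train_labels)

# Step 5: kernel values between each test example and the training examples, then predict
K_test = intersection_kernel_matrix(test_hists, train_hists)
predictions = clf.predict(K_test)
```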
![Page 38: Step 3: Classification](https://reader031.fdocuments.us/reader031/viewer/2022013012/56814636550346895db3448c/html5/thumbnails/38.jpg)
SVMs: Pros and cons
• Pros
  • Many publicly available SVM packages: http://www.kernel-machines.org/software
  • Kernel-based framework is very powerful and flexible
  • SVMs work very well in practice, even with very small training sample sizes
• Cons
  • No “direct” multi-class SVM; must combine two-class SVMs
  • Computation and memory
    – During training, must compute the matrix of kernel values for every pair of examples
    – Learning can take a very long time for large-scale problems
![Page 39: Step 3: Classification](https://reader031.fdocuments.us/reader031/viewer/2022013012/56814636550346895db3448c/html5/thumbnails/39.jpg)
Generative methods
• Model the probability distribution that produced a given bag of features
• We will cover two models, both inspired by text document analysis:
  • Naïve Bayes
  • Probabilistic Latent Semantic Analysis
![Page 40: Step 3: Classification](https://reader031.fdocuments.us/reader031/viewer/2022013012/56814636550346895db3448c/html5/thumbnails/40.jpg)
The Naïve Bayes model
• Assume that each feature is conditionally independent given the class:
$p(w_1, \ldots, w_N \mid c) = \prod_{i=1}^{N} p(w_i \mid c)$
Csurka et al. 2004
![Page 41: Step 3: Classification](https://reader031.fdocuments.us/reader031/viewer/2022013012/56814636550346895db3448c/html5/thumbnails/41.jpg)
The Naïve Bayes model
• Assume that each feature is conditionally independent given the class
• MAP decision: $c^* = \arg\max_c p(c) \prod_{i=1}^{N} p(w_i \mid c)$
  • $p(c)$: prior probability of the object classes
  • $p(w_i \mid c)$: likelihood of the $i$th visual word given the class, estimated by the empirical frequencies of visual words in images from the given class
Csurka et al. 2004
![Page 42: Step 3: Classification](https://reader031.fdocuments.us/reader031/viewer/2022013012/56814636550346895db3448c/html5/thumbnails/42.jpg)
The Naïve Bayes model
• Assume that each feature is conditionally independent given the class
• MAP decision: $c^* = \arg\max_c p(c) \prod_{i=1}^{N} p(w_i \mid c)$
• “Graphical model”: a class node $c$ with an arrow to the word node $w$, replicated $N$ times (plate notation)
Csurka et al. 2004
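A sketch of the Naïve Bayes MAP decision from visual-word counts, with the likelihoods estimated by (Laplace-smoothed) empirical word frequencies per class, as the slides describe:

```python
import numpy as np

def train_naive_bayes(counts, labels, n_classes):
    """counts: (n_images, vocab_size) array of visual-word counts; labels: class index per image."""
    labels = np.asarray(labels)
    log_prior = np.log(np.bincount(labels, minlength=n_classes) / len(labels))   # log p(c)
    log_lik = np.zeros((n_classes, counts.shape[1]))
    for c in range(n_classes):
        word_counts = counts[labels == c].sum(axis=0) + 1       # Laplace smoothing
        log_lik[c] = np.log(word_counts / word_counts.sum())    # log p(w_i | c)
    return log_prior, log_lik

def classify_naive_bayes(count_vec, log_prior, log_lik):
    # c* = argmax_c [ log p(c) + sum_i n_i * log p(w_i | c) ]
    return int(np.argmax(log_prior + log_lik @ count_vec))
```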
![Page 43: Step 3: Classification](https://reader031.fdocuments.us/reader031/viewer/2022013012/56814636550346895db3448c/html5/thumbnails/43.jpg)
Probabilistic Latent Semantic Analysis
T. Hofmann, Probabilistic Latent Semantic Analysis, UAI 1999
[Figure: an image decomposed into “visual topics” such as zebra, grass, and tree]
![Page 44: Step 3: Classification](https://reader031.fdocuments.us/reader031/viewer/2022013012/56814636550346895db3448c/html5/thumbnails/44.jpg)
Probabilistic Latent Semantic Analysis
T. Hofmann, Probabilistic Latent Semantic Analysis, UAI 1999
[Figure: a new image represented as a mixture of visual topics: New image = α₁ · zebra + α₂ · grass + α₃ · tree]
![Page 45: Step 3: Classification](https://reader031.fdocuments.us/reader031/viewer/2022013012/56814636550346895db3448c/html5/thumbnails/45.jpg)
Probabilistic Latent Semantic Analysis
• Unsupervised technique
• Two-level generative model: a document is a mixture of topics, and each topic has its own characteristic word distribution
[Figure: graphical model d → z → w, i.e., document → topic → word, with $P(z \mid d)$ and $P(w \mid z)$]
T. Hofmann, Probabilistic Latent Semantic Analysis, UAI 1999
![Page 46: Step 3: Classification](https://reader031.fdocuments.us/reader031/viewer/2022013012/56814636550346895db3448c/html5/thumbnails/46.jpg)
Probabilistic Latent Semantic Analysis
• Unsupervised technique
• Two-level generative model: a document is a mixture of topics, and each topic has its own characteristic word distribution
$p(w_i \mid d_j) = \sum_{k=1}^{K} p(w_i \mid z_k)\, p(z_k \mid d_j)$
T. Hofmann, Probabilistic Latent Semantic Analysis, UAI 1999
![Page 47: Step 3: Classification](https://reader031.fdocuments.us/reader031/viewer/2022013012/56814636550346895db3448c/html5/thumbnails/47.jpg)
pLSA for images
• Document = image, topic = class, word = quantized feature
[Figure: graphical model d → z → w applied to images; the example topic shown is “face”]
J. Sivic, B. Russell, A. Efros, A. Zisserman, B. Freeman, Discovering Objects and their Location in Images, ICCV 2005
![Page 48: Step 3: Classification](https://reader031.fdocuments.us/reader031/viewer/2022013012/56814636550346895db3448c/html5/thumbnails/48.jpg)
The pLSA model
$p(w_i \mid d_j) = \sum_{k=1}^{K} p(w_i \mid z_k)\, p(z_k \mid d_j)$
• $p(w_i \mid d_j)$: probability of word $i$ in document $j$ (known)
• $p(w_i \mid z_k)$: probability of word $i$ given topic $k$ (unknown)
• $p(z_k \mid d_j)$: probability of topic $k$ given document $j$ (unknown)
![Page 49: Step 3: Classification](https://reader031.fdocuments.us/reader031/viewer/2022013012/56814636550346895db3448c/html5/thumbnails/49.jpg)
The pLSA model
$p(w_i \mid d_j) = \sum_{k=1}^{K} p(w_i \mid z_k)\, p(z_k \mid d_j)$
In matrix form: the observed codeword distributions $p(w_i \mid d_j)$ (words × documents, M×N) factor into the codeword distributions per topic (class) $p(w_i \mid z_k)$ (words × topics, M×K) times the class distributions per image $p(z_k \mid d_j)$ (topics × documents, K×N)
![Page 50: Step 3: Classification](https://reader031.fdocuments.us/reader031/viewer/2022013012/56814636550346895db3448c/html5/thumbnails/50.jpg)
Learning pLSA parameters
• Maximize the likelihood of the data using EM
• Observed: counts $n(w_i, d_j)$ of word $i$ in document $j$, where M is the number of codewords and N is the number of images
Slide credit: Josef Sivic
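A minimal EM sketch for pLSA under these definitions: n is the M×N matrix of observed word counts, and the E- and M-steps follow the standard update rules for this model (the dense (M, K, N) posterior array is fine for a sketch but not for large vocabularies):

```python
import numpy as np

def plsa_em(n, K, iters=50, seed=0):
    """n: (M, N) word-count matrix (M codewords, N images). Returns p(w|z) (M, K) and p(z|d) (K, N)."""
    rng = np.random.default_rng(seed)
    M, N = n.shape
    p_w_z = rng.random((M, K)); p_w_z /= p_w_z.sum(axis=0)        # p(w_i | z_k)
    p_z_d = rng.random((K, N)); p_z_d /= p_z_d.sum(axis=0)        # p(z_k | d_j)
    for _ in range(iters):
        # E-step: posterior p(z_k | w_i, d_j) proportional to p(w_i | z_k) * p(z_k | d_j)
        post = p_w_z[:, :, None] * p_z_d[None, :, :]              # shape (M, K, N)
        post /= post.sum(axis=1, keepdims=True) + 1e-12
        # M-step: re-estimate both distributions from the expected counts
        expected = n[:, None, :] * post                           # shape (M, K, N)
        p_w_z = expected.sum(axis=2)
        p_w_z /= p_w_z.sum(axis=0, keepdims=True) + 1e-12
        p_z_d = expected.sum(axis=0)
        p_z_d /= p_z_d.sum(axis=0, keepdims=True) + 1e-12
    return p_w_z, p_z_d
```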
![Page 51: Step 3: Classification](https://reader031.fdocuments.us/reader031/viewer/2022013012/56814636550346895db3448c/html5/thumbnails/51.jpg)
Recognition
• Finding the most likely topic (class) for an image: $z^* = \arg\max_z p(z \mid d)$
![Page 52: Step 3: Classification](https://reader031.fdocuments.us/reader031/viewer/2022013012/56814636550346895db3448c/html5/thumbnails/52.jpg)
Recognition
• Finding the most likely topic (class) for an image: $z^* = \arg\max_z p(z \mid d)$
• Finding the most likely topic (class) for a visual word in a given image:
$z^* = \arg\max_z p(z \mid w, d) = \arg\max_z \frac{p(w \mid z)\, p(z \mid d)}{\sum_{z'} p(w \mid z')\, p(z' \mid d)}$
![Page 53: Step 3: Classification](https://reader031.fdocuments.us/reader031/viewer/2022013012/56814636550346895db3448c/html5/thumbnails/53.jpg)
Topic discovery in images
J. Sivic, B. Russell, A. Efros, A. Zisserman, B. Freeman, Discovering Objects and their Location in Images, ICCV 2005
![Page 54: Step 3: Classification](https://reader031.fdocuments.us/reader031/viewer/2022013012/56814636550346895db3448c/html5/thumbnails/54.jpg)
Summary: Generative models
• Naïve Bayes
  • Unigram models in document analysis
  • Assumes conditional independence of words given the class
  • Parameter estimation: frequency counting
• Probabilistic Latent Semantic Analysis
  • Unsupervised technique
  • Each document is a mixture of topics (an image is a mixture of classes)
  • Can be thought of as matrix decomposition
  • Parameter estimation: EM