Transcript of Part 1: Bag-of-words models by Li Fei-Fei (Princeton)

Page 1

Part 1: Bag-of-words models

by Li Fei-Fei (Princeton)

Page 2

Related works

• Early “bag of words” models: mostly texture recognition
  – Cula & Dana, 2001; Leung & Malik, 2001; Mori, Belongie & Malik, 2001; Schmid, 2001; Varma & Zisserman, 2002, 2003; Lazebnik, Schmid & Ponce, 2003
• Hierarchical Bayesian models for documents (pLSA, LDA, etc.)
  – Hofmann, 1999; Blei, Ng & Jordan, 2003; Teh, Jordan, Beal & Blei, 2004
• Object categorization
  – Csurka, Bray, Dance & Fan, 2004; Sivic, Russell, Efros, Freeman & Zisserman, 2005; Sudderth, Torralba, Freeman & Willsky, 2005
• Natural scene categorization
  – Vogel & Schiele, 2004; Fei-Fei & Perona, 2005; Bosch, Zisserman & Munoz, 2006

Page 3

Object → Bag of ‘words’

Page 4

Analogy to documents

Of all the sensory impressions proceeding to the brain, the visual experiences are the dominant ones. Our perception of the world around us is based essentially on the messages that reach the brain from our eyes. For a long time it was thought that the retinal image was transmitted point by point to visual centers in the brain; the cerebral cortex was a movie screen, so to speak, upon which the image in the eye was projected. Through the discoveries of Hubel and Wiesel we now know that behind the origin of the visual perception in the brain there is a considerably more complicated course of events. By following the visual impulses along their path to the various cell layers of the optical cortex, Hubel and Wiesel have been able to demonstrate that the message about the image falling on the retina undergoes a step-wise analysis in a system of nerve cells stored in columns. In this system each cell has its specific function and is responsible for a specific detail in the pattern of the retinal image.

sensory, brain, visual, perception, retinal, cerebral cortex, eye, cell, optical nerve, image, Hubel, Wiesel

China is forecasting a trade surplus of $90bn (£51bn) to $100bn this year, a threefold increase on 2004's $32bn. The Commerce Ministry said the surplus would be created by a predicted 30% jump in exports to $750bn, compared with an 18% rise in imports to $660bn. The figures are likely to further annoy the US, which has long argued that China's exports are unfairly helped by a deliberately undervalued yuan. Beijing agrees the surplus is too high, but says the yuan is only one factor. Bank of China governor Zhou Xiaochuan said the country also needed to do more to boost domestic demand so more goods stayed within the country. China increased the value of the yuan against the dollar by 2.1% in July and permitted it to trade within a narrow band, but the US wants the yuan to be allowed to trade freely. However, Beijing has made it clear that it will take its time and tread carefully before allowing the yuan to rise further in value.

China, trade, surplus, commerce, exports, imports, US, yuan, bank, domestic, foreign, increase, trade, value

Page 5

A clarification: definition of “BoW”

• Looser definition
  – Independent features

Page 6

A clarification: definition of “BoW”

• Looser definition
  – Independent features
• Stricter definition
  – Independent features
  – Histogram representation

Page 7

[Pipeline diagram: feature detection & representation → codewords dictionary → image representation → category models (and/or) classifiers → category decision; the learning path builds the models, the recognition path applies them]

Page 8

Representation

1. Feature detection & representation
2. Codewords dictionary
3. Image representation

Page 9

1. Feature detection and representation

Page 10

1. Feature detection and representation

• Regular grid
  – Vogel & Schiele, 2003
  – Fei-Fei & Perona, 2005

Page 11

1. Feature detection and representation

• Regular grid
  – Vogel & Schiele, 2003
  – Fei-Fei & Perona, 2005
• Interest point detector
  – Csurka, et al. 2004
  – Fei-Fei & Perona, 2005
  – Sivic, et al. 2005

Page 12

1. Feature detection and representation

• Regular grid (see the sketch below)
  – Vogel & Schiele, 2003
  – Fei-Fei & Perona, 2005
• Interest point detector
  – Csurka, Bray, Dance & Fan, 2004
  – Fei-Fei & Perona, 2005
  – Sivic, Russell, Efros, Freeman & Zisserman, 2005
• Other methods
  – Random sampling (Vidal-Naquet & Ullman, 2002)
  – Segmentation-based patches (Barnard, Duygulu, Forsyth, de Freitas, Blei & Jordan, 2003)
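To make the regular-grid option concrete, here is a minimal Python sketch of dense patch sampling (not from the slides; the 16-pixel patch size and 8-pixel stride are illustrative assumptions):

```python
import numpy as np

def grid_patches(img, patch_size=16, stride=8):
    """Sample fixed-size patches on a regular grid from a grayscale image."""
    H, W = img.shape
    patches = []
    for y in range(0, H - patch_size + 1, stride):
        for x in range(0, W - patch_size + 1, stride):
            patches.append(img[y:y + patch_size, x:x + patch_size].ravel())
    return np.array(patches)  # shape: (num_patches, patch_size * patch_size)
```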

Page 13

1. Feature detection and representation

Detect patches
[Mikolajczyk and Schmid ’02]
[Matas, Chum, Urban & Pajdla ’02]
[Sivic & Zisserman ’03]

Normalize patch

Compute SIFT descriptor [Lowe ’99]

Slide credit: Josef Sivic
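A hedged sketch of this detect–normalize–describe pipeline using OpenCV, assuming a build where cv2.SIFT_create is available (it is in recent opencv-python releases); the detector internally handles the scale and rotation normalization of each patch:

```python
import cv2

# Load a grayscale image and run an interest point detector + SIFT descriptor (Lowe '99).
img = cv2.imread("example.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical filename
sift = cv2.SIFT_create()
keypoints, descriptors = sift.detectAndCompute(img, None)
# descriptors has shape (num_keypoints, 128): one 128-D SIFT vector per detected patch.
```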

Page 14

1. Feature detection and representation

Page 15

2. Codewords dictionary formation

Page 16

2. Codewords dictionary formation

Vector quantization

Slide credit: Josef Sivic
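Vector quantization is typically implemented with k-means over descriptors pooled from the training set; the cluster centers become the codewords. A minimal sketch with scikit-learn (the library choice, the variable per_image_descriptors, and K=300 are assumptions, not values from the slides):

```python
import numpy as np
from sklearn.cluster import KMeans

# per_image_descriptors: a list of (n_i, 128) SIFT descriptor arrays, one per training image.
all_descriptors = np.vstack(per_image_descriptors)
kmeans = KMeans(n_clusters=300, n_init=10).fit(all_descriptors)
codebook = kmeans.cluster_centers_  # the codewords dictionary: (300, 128)
```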

Page 17

2. Codewords dictionary formation

Fei-Fei et al. 2005

Page 18

Image patch examples of codewords

Sivic et al. 2005

Page 19

3. Image representation

[Histogram: frequency of each codeword in the image; x-axis: codewords, y-axis: frequency]
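In code, the image representation is just a (normalized) count of how often each codeword occurs. A sketch continuing the k-means example above:

```python
import numpy as np

def bow_histogram(descriptors, kmeans):
    """Assign each patch descriptor to its nearest codeword and count frequencies."""
    assignments = kmeans.predict(descriptors)
    hist = np.bincount(assignments, minlength=kmeans.n_clusters).astype(float)
    return hist / hist.sum()  # normalize so images with different patch counts compare fairly
```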

Page 20

Representation

1. Feature detection & representation
2. Codewords dictionary
3. Image representation

Page 21

Learning and Recognition

[Pipeline: codewords dictionary → category models (and/or) classifiers → category decision]

Page 22

Learning and Recognition: category models (and/or) classifiers

1. Generative method: - graphical models

2. Discriminative method: - SVM

Page 23

2 generative models

1. Naïve Bayes classifier
   – Csurka, Bray, Dance & Fan, 2004
2. Hierarchical Bayesian text models (pLSA and LDA)
   – Background: Hofmann, 2001; Blei, Ng & Jordan, 2003
   – Object categorization: Sivic et al. 2005; Sudderth et al. 2005
   – Natural scene categorization: Fei-Fei et al. 2005

Page 24

First, some notation

• wn: each patch in an image
  – wn = [0, 0, …, 1, …, 0, 0]T
• w: a collection of all N patches in an image
  – w = [w1, w2, …, wN]
• dj: the jth image in an image collection
• c: category of the image
• z: theme or topic of the patch

Page 25

Case #1: the Naïve Bayes model

[Graphical model: c → w, plate over the N patches]

$$c^{*} = \arg\max_{c} p(c \mid \mathbf{w}) \propto p(c)\, p(\mathbf{w} \mid c) = p(c) \prod_{n=1}^{N} p(w_{n} \mid c)$$

Here p(c) is the prior probability of the object classes and p(w|c) is the image likelihood given the class; the object class decision is the argmax.

Csurka et al. 2004
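A minimal sketch of this classifier over codeword counts, assuming per-image count histograms and integer class labels; the Laplace smoothing constant alpha is an assumption added to keep zero-count codewords from zeroing out the product:

```python
import numpy as np

def train_naive_bayes(counts, labels, n_classes, alpha=1.0):
    """Estimate log p(c) and log p(w|c) from per-image codeword counts.

    counts: (n_images, M) array of codeword counts; labels: (n_images,) ints."""
    labels = np.asarray(labels)
    log_prior = np.log(np.bincount(labels, minlength=n_classes) / len(labels))
    word_counts = np.stack([counts[labels == c].sum(axis=0) + alpha
                            for c in range(n_classes)])  # Laplace smoothing
    log_likelihood = np.log(word_counts / word_counts.sum(axis=1, keepdims=True))
    return log_prior, log_likelihood

def classify(hist, log_prior, log_likelihood):
    # c* = argmax_c [ log p(c) + sum_n log p(w_n | c) ], computed in log space
    return int(np.argmax(log_prior + log_likelihood @ hist))
```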

Page 26

Csurka et al. 2004

Page 27

Csurka et al. 2004

Page 28

Case #2: Hierarchical Bayesian text models

Probabilistic Latent Semantic Analysis (pLSA)
[Graphical model: d → z → w, plate over N words, plate over D documents]
Hofmann, 2001

Latent Dirichlet Allocation (LDA)
[Graphical model: c → z → w, plate over N words, plate over D documents]
Blei et al., 2001

Page 29

Case #2: Hierarchical Bayesian text models

Probabilistic Latent Semantic Analysis (pLSA)

[Graphical model: d → z → w; example topic discovered: “face”]

Sivic et al. ICCV 2005

Page 30

Case #2: Hierarchical Bayesian text models

Latent Dirichlet Allocation (LDA)

[Graphical model: c → z → w; example scene category: “beach”]

Fei-Fei et al. ICCV 2005

Page 31

Case #2: the pLSA model

[Graphical model: d → z → w, plate over N words, plate over D documents]

Page 32

Case #2: the pLSA model

$$p(w_{i} \mid d_{j}) = \sum_{k=1}^{K} p(w_{i} \mid z_{k})\, p(z_{k} \mid d_{j})$$

The observed codeword distributions p(w|d) decompose into codeword distributions per theme (topic), p(w|z), weighted by theme distributions per image, p(z|d).

Slide credit: Josef Sivic

Page 33

Case #2: Recognition using pLSA

$$z^{*} = \arg\max_{z} p(z \mid d)$$

Slide credit: Josef Sivic
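A sketch of this recognition ("fold-in") step in Python, assuming the topic-conditional distributions p(w|z) have already been learned (see the EM sketch on the next page); p_w_z is a (K, M) array and hist holds the new image's codeword counts:

```python
import numpy as np

def infer_topics(hist, p_w_z, n_iter=50):
    """Estimate p(z|d) for a new image, holding the learned p(w|z) fixed."""
    K = p_w_z.shape[0]
    p_z = np.full(K, 1.0 / K)
    for _ in range(n_iter):
        joint = p_z[:, None] * p_w_z                               # (K, M)
        post = joint / (joint.sum(axis=0, keepdims=True) + 1e-12)  # p(z | d, w)
        p_z = (post * hist[None, :]).sum(axis=1)
        p_z /= p_z.sum()
    return p_z  # recognition: z* = argmax_z p(z | d)
```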

Page 34

Case #2: Learning the pLSA parameters

Maximize the likelihood of the data using EM.

n(w_i, d_j): observed counts of word i in document j
M … number of codewords
N … number of images

Slide credit: Josef Sivic
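A compact EM sketch for pLSA in NumPy (an illustration under stated assumptions, not the course demo code); counts is the (N, M) matrix of observed codeword counts n(w_i, d_j), and K is the chosen number of themes. The dense (N, K, M) intermediate keeps the sketch short; real implementations exploit sparsity:

```python
import numpy as np

def plsa_em(counts, K, n_iter=100, seed=0):
    """Fit pLSA by EM. counts: (N, M) observed codeword counts n(w_i, d_j).
    Returns p_w_z: (K, M) codeword distributions per theme,
            p_z_d: (N, K) theme distributions per image."""
    rng = np.random.default_rng(seed)
    N, M = counts.shape
    p_w_z = rng.random((K, M)); p_w_z /= p_w_z.sum(axis=1, keepdims=True)
    p_z_d = rng.random((N, K)); p_z_d /= p_z_d.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        # E-step: posterior p(z | d, w) for every (image, codeword) pair.
        joint = p_z_d[:, :, None] * p_w_z[None, :, :]        # (N, K, M)
        post = joint / (joint.sum(axis=1, keepdims=True) + 1e-12)
        # M-step: re-estimate both distributions from expected counts.
        expected = counts[:, None, :] * post                 # (N, K, M)
        p_w_z = expected.sum(axis=0)
        p_w_z /= p_w_z.sum(axis=1, keepdims=True)
        p_z_d = expected.sum(axis=2)
        p_z_d /= p_z_d.sum(axis=1, keepdims=True)
    return p_w_z, p_z_d
```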

Page 35

Demo

• Course website

Page 36

Task: face detection – no labeling

Page 37

Demo: feature detection

• Output of crude feature detector
  – Find edges
  – Draw points randomly from edge set
  – Draw from uniform distribution to get scale

Page 38

Demo: learnt parameters

• Learning the model: do_plsa(‘config_file_1’)
• Evaluate and visualize the model: do_plsa_evaluation(‘config_file_1’)

Codeword distributions per theme (topic): p(w|z)
Theme distributions per image: p(z|d)

Page 39

Demo: recognition examples

Page 40

Demo: categorization results

• Performance of each theme

Page 41

Learning and Recognition: category models (and/or) classifiers

1. Generative method: - graphical models

2. Discriminative method: - SVM

Page 42

Discriminative methods based on ‘bag of words’ representation

[Illustration: zebra vs. non-zebra feature points separated by a decision boundary]

Page 43

Discriminative methods based on ‘bag of words’ representation

• Grauman & Darrell, 2005, 2006:
  – SVM w/ Pyramid Match kernels (see the sketch after this list)
• Others
  – Csurka, Bray, Dance & Fan, 2004
  – Serre & Poggio, 2005
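For the discriminative route, a minimal scikit-learn sketch: train an SVM on BoW histograms directly, or pass a precomputed Gram matrix when using a custom kernel such as pyramid match. The library choice and the placeholder arrays (X_train, y_train, gram_train, gram_test) are assumptions, not from the slides:

```python
from sklearn.svm import SVC

# Option 1: a standard kernel on BoW histograms (X_train: (n_images, K) array).
clf = SVC(kernel="rbf").fit(X_train, y_train)

# Option 2: any custom kernel (e.g. pyramid match) via a precomputed Gram matrix,
# where gram_train[i, j] = kernel(features_i, features_j), shape (n_train, n_train).
clf_pm = SVC(kernel="precomputed").fit(gram_train, y_train)
predictions = clf_pm.predict(gram_test)  # gram_test: (n_test, n_train) kernel values
```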

Page 44

Summary: Pyramid match kernel

An approximation to the optimal partial matching between sets of features.

Grauman & Darrell, 2005. Slide credit: Kristen Grauman

Page 45

Pyramid Match (Grauman & Darrell 2005)

Histogram intersection

Slide credit: Kristen Grauman

Page 46

Histogram intersection at level i:

$$\mathcal{I}_i = \sum_{j} \min\big(H^{i}_{X}(j),\, H^{i}_{Y}(j)\big)$$

The difference in histogram intersections across levels counts the number of new pairs matched: N_i = I_i - I_{i-1}, i.e. matches at this level minus matches at the previous level.

Pyramid Match (Grauman & Darrell 2005)

Slide credit: Kristen Grauman

Page 47

Pyramid match kernel

$$K\big(\Psi(X),\, \Psi(Y)\big) = \sum_{i} w_i\, N_i$$

with histogram pyramids Ψ(X), Ψ(Y), the number of newly matched pairs N_i at level i, and weights w_i inversely proportional to bin size (a measure of the difficulty of a match at level i).

• Normalize kernel values to avoid favoring large sets

Slide credit: Kristen Grauman
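A self-contained sketch of this kernel for feature sets in [0, 1]^d, where the finest cell side (side0) and number of levels are illustrative assumptions; it computes K = Σ_i w_i N_i with w_i = 1/2^i and then applies the normalization mentioned above:

```python
import numpy as np
from collections import Counter

def _hist(points, level, side0=0.125):
    """Quantize points into a grid whose cell side doubles at each level."""
    cells = np.floor(points / (side0 * 2 ** level)).astype(int)
    return Counter(map(tuple, cells))

def _intersection(h1, h2):
    """Histogram intersection: sum over bins of the minimum count."""
    return sum(min(n, h2[b]) for b, n in h1.items())

def pyramid_match(X, Y, n_levels=5):
    """Unnormalized pyramid match: weighted sum of new matches per level."""
    score, prev = 0.0, 0.0
    for i in range(n_levels):
        inter = _intersection(_hist(X, i), _hist(Y, i))
        score += (inter - prev) / (2 ** i)   # N_i new matches, weight w_i = 1/2^i
        prev = inter
    return score

def pyramid_match_normalized(X, Y, n_levels=5):
    """Normalize so large feature sets are not automatically favored."""
    return pyramid_match(X, Y, n_levels) / np.sqrt(
        pyramid_match(X, X, n_levels) * pyramid_match(Y, Y, n_levels))
```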

Page 48

Example pyramid match: Level 0

Slide credit: Kristen Grauman

Page 49

Example pyramid match: Level 1

Slide credit: Kristen Grauman

Page 50

Example pyramid match: Level 2

Slide credit: Kristen Grauman

Page 51

Example pyramid match

[Plot: pyramid match score compared against the optimal match]

Slide credit: Kristen Grauman

Page 52

Summary: Pyramid match kernel

Approximates the optimal partial matching between sets of features: K = Σ_i w_i N_i, where N_i is the number of new matches at level i and w_i reflects the difficulty of a match at level i.

Slide credit: Kristen Grauman

Page 53

Object recognition results

• ETH-80 database, 8 object classes (Eichhorn and Chapelle 2004)
• Features:
  – Harris detector
  – PCA-SIFT descriptor, d=10

Kernel                                      Recognition rate
Match [Wallraven et al.]                    84%
Bhattacharyya affinity [Kondor & Jebara]    85%
Pyramid match                               84%

Slide credit: Kristen Grauman

Page 54

Object recognition results

• Caltech objects database, 101 object classes
• Features:
  – SIFT detector
  – PCA-SIFT descriptor, d=10
• 30 training images / class
• 43% recognition rate (1% chance performance)
• 0.002 seconds per match

Page 55

[Pipeline diagram: feature detection & representation → codewords dictionary → image representation → category models (and/or) classifiers → category decision; the learning path builds the models, the recognition path applies them]

Page 56

What about spatial info?

Page 57

What about spatial info?

• Feature level
  – Spatial influence through correlogram features: Savarese, Winn and Criminisi, CVPR 2006

Page 58

What about spatial info?

• Feature level
• Generative models
  – Sudderth, Torralba, Freeman & Willsky, 2005, 2006
  – Niebles & Fei-Fei, CVPR 2007

Page 59

What about spatial info?

• Feature level
• Generative models
  – Sudderth, Torralba, Freeman & Willsky, 2005, 2006
  – Niebles & Fei-Fei, CVPR 2007

[Graphical model: an image composed of parts P1–P4 plus background Bg, generating words w]

Page 60

What about spatial info?

• Feature level
• Generative models
• Discriminative methods
  – Lazebnik, Schmid & Ponce, 2006

Page 61

Invariance issues

• Scale and rotation
  – Implicit
  – Detectors and descriptors

Kadir and Brady, 2003

Page 62

Invariance issues

• Scale and rotation
• Occlusion
  – Implicit in the models
  – Codeword distribution: small variations
  – (In theory) Theme (z) distribution: different occlusion patterns

Page 63

Invariance issues

• Scale and rotation
• Occlusion
• Translation
  – Encode (relative) location information
    • Sudderth, Torralba, Freeman & Willsky, 2005, 2006
    • Niebles & Fei-Fei, 2007

Page 64

Invariance issues

• Scale and rotation
• Occlusion
• Translation
• View point (in theory)
  – Codewords: detector and descriptor
  – Theme distributions: different view points

Fergus, Fei-Fei, Perona & Zisserman, 2005

Page 65

Model properties

• Intuitive
  – Analogy to documents (the text example and its extracted keywords from Page 4 are shown again)

Page 66

Model properties

• Intuitive
• Generative models
  – Convenient for weakly- or un-supervised, incremental training
  – Prior information
  – Flexibility (e.g. HDP)

[Figure: incremental learning pipeline from dataset to model to classification]

Li, Wang & Fei-Fei, CVPR 2007
Sivic, Russell, Efros, Freeman & Zisserman, 2005

Page 67

Model properties

• Intuitive
• Generative models
• Discriminative method
  – Computationally efficient

Grauman et al. CVPR 2005

Page 68

Model properties

• Intuitive
• Generative models
• Discriminative method
• Learning and recognition relatively fast
  – Compared to other methods

Page 69

Weakness of the model

• No rigorous geometric information of the object components
• It is intuitive to most of us that objects are made of parts, yet the model carries no such information
• Not extensively tested yet for
  – View point invariance
  – Scale invariance
• Segmentation and localization unclear