Transcript of "Active Inference for Retrieval in Camera Networks" (Chen, Bilgic, Getoor, Jacobs, Mihalkova, Yeh)
- Slide 1
- Active Inference for Retrieval in Camera Networks
Daozheng Chen 1, Mustafa Bilgic 2, Lise Getoor 1, David Jacobs 1, Lilyana Mihalkova 1, Tom Yeh 1
1 Department of Computer Science, University of Maryland, College Park
2 Department of Computer Science, Illinois Institute of Technology
- Slide 2
- Problem Search camera network videos to retrieve frames
containing specified individuals.
- Slide 3
- Time query
- Slide 4
- Time query
- Slide 5
- Related Work
Person re-identification [Wang et al. 07]
Graphical models for camera networks [Loy et al. 09]
Tracking over camera networks [Song et al. 07]
Active learning [Settles 09]
- Slide 6
- Our Contributions
Map video frames in a camera network onto a graphical model and use a collective classification algorithm to predict frame states and perform frame retrieval.
Apply active inference to direct human attention to the portions of the videos that are most likely to yield the biggest performance improvement.
- Slide 7
- Graphical Structures
- Slide 8
- Collective Classification
- Slide 9
- Active Inference
- Slide 10
- Active Inference
- Slide 11
- Outline Graphical model construction Iterative classification
algorithm Active inference Experiment Conclusion
- Slide 12
- Graphical Model Construction
Temporal neighbors (TN): frames from the previous and next k time steps within the same camera.
Positively correlated spatial neighbors (PSN): the correlation between the labels of two cameras is greater than some threshold.
Negatively correlated spatial neighbors (NSN): the correlation between the labels of two cameras is less than some threshold.
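The three neighbor types above can be sketched as follows. This is a minimal illustration, not the authors' code; the symmetric negative threshold for NSN is an assumption (the slide only says "less than some threshold").

```python
def temporal_neighbors(camera, t, k, num_steps):
    """TN: frames from the previous and next k time steps, same camera."""
    return [(camera, s)
            for s in range(max(0, t - k), min(num_steps, t + k + 1))
            if s != t]

def spatial_neighbor_type(label_correlation, threshold):
    """Classify a camera pair by the correlation of their label sequences.

    Assumption: NSN uses the mirrored threshold -threshold.
    """
    if label_correlation > threshold:
        return "PSN"   # positively correlated spatial neighbors
    if label_correlation < -threshold:
        return "NSN"   # negatively correlated spatial neighbors
    return None        # pair not linked in the graph
```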
- Slide 13
- Temporal Neighbors (k = 1)
- Slide 14
- Positively correlated spatial neighbors
- Slide 15
- Negatively correlated spatial neighbors
- Slide 16
- Graphical Structures
- Slide 17
- Outline Graphical model construction Iterative Classification
Algorithm Active Inference Experiment Conclusion
- Slide 18
- The Iterative Classification Algorithm (ICA)
Local Models (LM): the label of a frame depends only on its own features.
Relational Models (RM): the label of a frame depends on its features and its neighbors' current labels.
First apply the local model for initialization, then apply the relational model iteratively until the predicted labels converge. [Sen et al. 08]
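The ICA loop just described can be sketched as below. The model objects are stand-ins with a scikit-learn-style `predict` interface, and `aggregate` is a placeholder for the slide's aggregation function; none of these names come from the talk.

```python
def ica(frames, features, neighbors, local_model, relational_model,
        aggregate, max_iters=50):
    # Bootstrap: the local model uses only each frame's own features.
    labels = {f: local_model.predict([features[f]])[0] for f in frames}
    for _ in range(max_iters):
        new_labels = {}
        for f in frames:
            # Relational features = own features plus an aggregate of the
            # neighbors' current predicted labels.
            rel = list(features[f]) + aggregate(labels, neighbors[f])
            new_labels[f] = relational_model.predict([rel])[0]
        if new_labels == labels:   # predicted labels converged
            break
        labels = new_labels
    return labels
```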
- Slide 19
- The Iterative Classification Algorithm (ICA)
Local Models (LM). Logistic regression as the classifier. Cosine similarity between signatures from a bag-of-features model as the feature:
F_q = [f_q1, f_q2, ..., f_qn], F = [f_1, f_2, ..., f_n], COS(F_q, F)
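COS(F_q, F) above is the standard cosine similarity between the query signature and a frame signature:

```python
import math

def cosine(fq, f):
    """COS(F_q, F): cosine similarity between two signature vectors."""
    dot = sum(a * b for a, b in zip(fq, f))
    norm = (math.sqrt(sum(a * a for a in fq))
            * math.sqrt(sum(b * b for b in f)))
    return dot / norm if norm else 0.0
```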
- Slide 20
- The Iterative Classification Algorithm (ICA)
Relational Models (RM). Logistic regression as the classifier. Use an aggregation function to construct a feature vector encoding the neighbors' information.
F = [f_21, f_22, ..., f_2n]
- Slide 21
- The Iterative Classification Algorithm (ICA)
Relational Models (RM). Logistic regression as the classifier. Use an aggregation function to construct a feature vector encoding the neighbors' information.
F = [f_21, f_22, ..., f_2n], F_TN = [f_TN1, f_TN2]
- Slide 22
- The Iterative Classification Algorithm (ICA)
Relational Models (RM). Logistic regression as the classifier. Use an aggregation function to construct a feature vector encoding the neighbors' information.
F = [f_21, f_22, ..., f_2n], F_TN = [f_TN1, f_TN2], F_PSN = [f_PSN1, f_PSN2]
- Slide 23
- The Iterative Classification Algorithm (ICA)
Relational Models (RM). Logistic regression as the classifier. Use an aggregation function to construct a feature vector encoding the neighbors' information.
F = [f_21, f_22, ..., f_2n], F_TN = [f_TN1, f_TN2], F_PSN = [f_PSN1, f_PSN2], F_NSN = [f_NSN1, f_NSN2]
- Slide 24
- The Iterative Classification Algorithm (ICA)
Relational Models (RM). Logistic regression as the classifier. Use an aggregation function to construct a feature vector encoding the neighbors' information.
F = [f_21, f_22, ..., f_2n], F_TN = [f_TN1, f_TN2], F_PSN = [f_PSN1, f_PSN2], F_NSN = [f_NSN1, f_NSN2], F_RM
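One plausible reading of the two-component vectors F_TN, F_PSN, F_NSN is a count of neighbors per predicted label, per neighbor type, concatenated with the frame's own features to form F_RM. That interpretation is an assumption; the talk does not specify the aggregation function.

```python
def aggregate_neighbors(labels, nbrs):
    """Assumed aggregation: [#neighbors predicted relevant, #not relevant]."""
    pos = sum(1 for n in nbrs if labels[n] == 1)
    return [pos, len(nbrs) - pos]

def relational_features(f_own, labels, tn, psn, nsn):
    """Build F_RM by concatenating own features with per-type aggregates."""
    return (list(f_own)
            + aggregate_neighbors(labels, tn)    # F_TN
            + aggregate_neighbors(labels, psn)   # F_PSN
            + aggregate_neighbors(labels, nsn))  # F_NSN
```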
- Slide 25
- Outline Graphical model construction Iterative Classification
Algorithm Active Inference Experiment Conclusion
- Slide 26
- Active Inference
The retrieval algorithm can request the correct labels for some frames at inference time [Rattigan et al. 07]. Subsequent inference using ICA is based on these corrected labels.
Common methods for selecting frames to label: Random (RND), Uniform (UNI), Most certain to be relevant (MR), Most uncertain (UNC), Reflect and Correct (RAC) [Bilgic and Getoor 09].
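As an illustration of one of these strategies, uncertainty sampling (UNC) asks for the labels of the frames whose predicted probability of relevance is closest to 0.5; the `probs`/`budget` names are stand-ins, not from the talk.

```python
def select_most_uncertain(probs, budget):
    """UNC: pick the `budget` frames whose P(relevant) is nearest 0.5.

    probs maps frame id -> predicted probability of relevance.
    """
    return sorted(probs, key=lambda f: abs(probs[f] - 0.5))[:budget]
```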
- Slide 27
- Reflect and Correct (RAC) [Bilgic and Getoor. TKDD09]
- Slide 28
- Adaptive RAC (MLI)
- Slide 29
- Outline Graphical model construction Iterative Classification
Algorithm Active Inference Experimental Evaluation Conclusion
- Slide 30
- Dataset [Ding et al. 10]
- Slide 31
- Queries
- Slide 32
- Regions of Interest
Background subtraction determines the region of interest in each frame.
Densely sample key points in the regions.
Use a color histogram in RGB space to describe the region spanned by each key point.
Quantize the descriptors according to 500 learned code words.
Produce a single signature per video frame.
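The quantize-and-pool steps above can be sketched as a standard bag-of-features pipeline; the nearest-neighbor assignment and L1 normalization here are common choices, not details confirmed by the talk (which only states a 500-word codebook).

```python
import numpy as np

def quantize(descriptors, codebook):
    """Assign each key-point descriptor to its nearest code word (Euclidean)."""
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return d2.argmin(axis=1)

def frame_signature(descriptors, codebook):
    """Histogram of code-word assignments: one signature per frame."""
    words = quantize(descriptors, codebook)
    sig = np.bincount(words, minlength=len(codebook)).astype(float)
    return sig / sig.sum()
```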
- Slide 33
- Spatial Topology
- Slide 34
- Methods for Comparison
Active inference based on LM using RND, UNI, MR, UNC, MLI.
Active inference based on RM using RND, UNI, MR, UNC, MLI.
Average accuracy and average 11-point average precision as measurements.
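For reference, 11-point average precision interpolates precision at the recall levels 0.0, 0.1, ..., 1.0, taking at each level the maximum precision achieved at or beyond that recall; a minimal sketch over a ranked relevance list:

```python
def eleven_point_ap(ranked):
    """11-point interpolated AP; ranked[i] is 1 if rank i+1 is relevant."""
    total_rel = sum(ranked)
    hits, recall_precision = 0, []
    for i, rel in enumerate(ranked, start=1):
        hits += rel
        recall_precision.append((hits / total_rel, hits / i))
    interp = []
    for level in (r / 10 for r in range(11)):
        # Interpolated precision: best precision at recall >= this level.
        ps = [p for r, p in recall_precision if r >= level]
        interp.append(max(ps) if ps else 0.0)
    return sum(interp) / 11
```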
- Slide 35
- Results
UNC-LM has the best performance among the LM-based methods.
RM always performs better than LM under the same sampling method.
UNC-RM and MLI always perform best; MLI never performs worse than UNC-RM does.
- Slide 36
- Outline Graphical model construction Iterative Classification
Algorithm Active Inference Experiment Conclusion
- Slide 37
- Using a graphical model provides significant performance improvements in frame retrieval.
A simple method that captures frame uncertainty has an advantage over the other baseline methods.
Our adaptation of RAC has the best overall performance.
- Slide 38
- Questions?