Boundary Preserving Dense Local Regions Jaechul Kim and Kristen Grauman Univ. of Texas at Austin.
Boundary Preserving Dense Local Regions
Jaechul Kim and Kristen Grauman, Univ. of Texas at Austin
Local feature detection
• A crucial building block for many applications
Examples: image retrieval, object recognition, image matching
Key issue: how to detect local regions for feature extraction?
Related work
Interest point detectors: e.g., Matas et al. (BMVC 02), Jurie and Schmid (CVPR 04), Mikolajczyk and Schmid (IJCV 04)
Dense sampling: e.g., Nowak et al. (ECCV 06)
Segmented regions and superpixels: e.g., Ren and Malik (ICCV 03), Gu et al. (CVPR 09), Todorovic and Ahuja (CVPR 08), Malisiewicz and Efros (BMVC 07), Levinshtein et al. (ICCV 09)
Hybrid: e.g., Tuytelaars (CVPR 10), Koniusz and Mikolajczyk (BMVC 09)
What makes a good local feature detector?
Desired properties:
- Repeatable
- Boundary-preserving
- Distinctively shaped
Existing methods lack one or more of these criteria: segments lack repeatability, while dense sampling and interest points lack distinctive shape and straddle object boundaries.
Our idea: Boundary Preserving Local Regions (BPLRs)
• Boundary preserving, dense extraction
• Segmentation-driven feature sampling and linking
Repeatable local features capturing objects’ local shapes
Approach: Overview
Three steps:
1. Sampling elements: initial elements for each segment are sampled based on the distance transform of the segment.
2. Linking elements: a minimum spanning tree links the sampled elements into a single graph structure reflecting main shapes and segment layout.
3. Grouping elements: neighboring elements are grouped into a BPLR.
Approach: Sampling
[Figure: for each segment, the distance transform sets element scales; element centers are sampled on a dense regular grid; elements from "all" segments of the input image are collected. In the zoomed-in view, an "element" is a circle centered at a grid point.]
Sampling Linking Grouping
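The sampling step can be sketched in a few lines of numpy. This is an illustrative toy, not the authors' implementation: the brute-force distance transform, grid step, and element representation (center, radius) are all simplifying assumptions.

```python
import numpy as np

def distance_transform(mask):
    """Brute-force Euclidean distance transform: for each pixel inside
    the segment mask, distance to the nearest background pixel.
    Fine for small toy masks; real code would use a fast EDT."""
    ys, xs = np.nonzero(~mask)
    outside = np.stack([ys, xs], axis=1)  # coordinates of background pixels
    dt = np.zeros(mask.shape)
    for y, x in zip(*np.nonzero(mask)):
        dt[y, x] = np.sqrt(((outside - [y, x]) ** 2).sum(axis=1)).min()
    return dt

def sample_elements(mask, grid_step=2):
    """Place elements on a dense regular grid inside the segment; each
    element is a circle whose radius is the distance-transform value at
    its center, so it stays within the segment boundary."""
    dt = distance_transform(mask)
    elements = []
    for y in range(0, mask.shape[0], grid_step):
        for x in range(0, mask.shape[1], grid_step):
            if mask[y, x]:
                elements.append(((y, x), dt[y, x]))  # (center, scale)
    return elements

# Toy segment: a filled square
mask = np.zeros((12, 12), dtype=bool)
mask[2:10, 2:10] = True
elems = sample_elements(mask)
```

Elements deep inside the segment get large radii while those near the boundary get small ones, which is what keeps the extracted regions boundary-preserving.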
Approach: Linking
A minimum spanning tree over the sampled elements' locations (i.e., elements' centers) gives a global linkage structure.
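The linking step reduces to a standard minimum spanning tree over the element centers. A small self-contained Prim's-algorithm sketch follows; using plain Euclidean distance between centers as the edge weight is an illustrative assumption.

```python
import numpy as np

def min_spanning_tree(points):
    """Prim's algorithm over the complete graph whose edge weights are
    Euclidean distances between element centers. Returns the tree as a
    list of (i, j) index edges."""
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    in_tree = {0}
    edges = []
    while len(in_tree) < n:
        best = None  # (distance, node in tree, node outside tree)
        for i in in_tree:
            for j in range(n):
                if j not in in_tree:
                    d = float(np.linalg.norm(pts[i] - pts[j]))
                    if best is None or d < best[0]:
                        best = (d, i, j)
        edges.append((best[1], best[2]))
        in_tree.add(best[2])
    return edges

# Four element centers: two close pairs joined by one longer edge
centers = [(0, 0), (0, 1), (0, 5), (1, 5)]
tree = min_spanning_tree(centers)
```

Because the tree prefers short edges, nearby elements (which, given the distance-transform sampling, tend to come from the same segment) end up directly linked.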
Role of spanning tree linkage: the minimum spanning tree prefers to link closer elements. Due to the distance transform-based sampling, elements from the same segment are more likely to be linked; due to multiple segmentations, elements in overlapping segments are more likely to be linked.
Approach: Grouping
Starting from a reference element's location, find its topological neighbors (elements nearby along the spanning tree) and its Euclidean neighbors (elements whose centers are nearby in the image). The intersection of the topological and Euclidean neighbor sets gives the grouped neighbor elements, which form the BPLR.
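The grouping rule, intersecting the spanning tree's topological neighbors with the Euclidean neighbors of a reference element, can be sketched as follows; the hop-count and radius thresholds are illustrative parameters, not the paper's.

```python
from collections import deque

import numpy as np

def group_bplr(centers, tree_edges, ref, max_hops, radius):
    """Group the elements that are BOTH topological neighbors of the
    reference (within max_hops edges on the spanning tree) AND Euclidean
    neighbors (within radius of its center); the intersection is the BPLR."""
    n = len(centers)
    adj = {i: [] for i in range(n)}
    for i, j in tree_edges:
        adj[i].append(j)
        adj[j].append(i)
    # Topological neighbors: BFS up to max_hops along the tree
    topo = set()
    queue = deque([(ref, 0)])
    seen = {ref}
    while queue:
        node, hops = queue.popleft()
        topo.add(node)
        if hops < max_hops:
            for nb in adj[node]:
                if nb not in seen:
                    seen.add(nb)
                    queue.append((nb, hops + 1))
    # Euclidean neighbors: centers within radius of the reference center
    pts = np.asarray(centers, dtype=float)
    eucl = {i for i in range(n) if np.linalg.norm(pts[i] - pts[ref]) <= radius}
    return topo & eucl

centers = [(0, 0), (0, 1), (0, 2), (0, 9)]
edges = [(0, 1), (1, 2), (2, 3)]
bplr = group_bplr(centers, edges, ref=0, max_hops=2, radius=3.0)
```

The topological constraint keeps the group from jumping across segment boundaries, while the Euclidean constraint keeps it spatially compact.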
Descriptor
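The experiments describe each region with a PHOG + gPb descriptor. Below is a minimal numpy sketch of a PHOG-style descriptor on a square patch; the pyramid depth, bin count, and plain image gradients (rather than gPb boundary responses) are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

def phog(patch, levels=2, bins=8):
    """PHOG-style descriptor: gradient-orientation histograms over a
    spatial pyramid (whole patch, 2x2 cells, 4x4 cells, ...), weighted by
    gradient magnitude, concatenated and L1-normalized."""
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ori = np.mod(np.arctan2(gy, gx), np.pi)  # unsigned orientation in [0, pi)
    feats = []
    for lvl in range(levels + 1):
        cells = 2 ** lvl
        row_blocks = np.array_split(np.arange(patch.shape[0]), cells)
        col_blocks = np.array_split(np.arange(patch.shape[1]), cells)
        for rows in row_blocks:
            for cols in col_blocks:
                m = mag[np.ix_(rows, cols)].ravel()
                o = ori[np.ix_(rows, cols)].ravel()
                hist, _ = np.histogram(o, bins=bins, range=(0, np.pi), weights=m)
                feats.append(hist)
    f = np.concatenate(feats)
    return f / (f.sum() + 1e-12)

patch = np.tile(np.arange(16.0), (16, 1))  # toy patch with a horizontal ramp
desc = phog(patch)
```

With levels=2 and 8 bins, the descriptor has (1 + 4 + 16) x 8 = 168 dimensions.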
Example detections of BPLRs(Subset shown for visibility)
Example matches of BPLRs
(Matches remain reliable even where segments leak across the object boundary.)
Experiments
• 20-200 segments, ~7000 BPLRs in a 400 x 300 image
• 2-5 seconds to extract BPLRs per image
• PHOG + gPb descriptor used
Tasks:
- Repeatability
- Localization
- Foreground segmentation
- Object classification
Baselines:
- Dense sampling (+ SIFT)
- MSER (+ SIFT) [1]
- Semi-local regions (+ SIFT) [2,3]
- Segmented regions (+ PHOG) [4]
- Superpixels [5]
[1] Matas et al., BMVC 02. [2] Quack et al., ICCV 07. [3] Lee and Grauman, IJCV 09. [4] Arbelaez et al., CVPR 09. [5] Ren and Malik, ICCV 03.
Example feature extractions
[Figure panels: proposed BPLRs (subset shown for visibility), segmented regions, superpixels, interest regions (MSERs), dense sampling]
Repeatability for object categories
Metric: Bounding Box Hit Rate vs. False Positive Rate [Quack et al. 2007]
[Figure: true matches vs. false positives between a test image and training images]
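One way to read the metric: a feature matched into the test image counts as a hit if it lands inside the ground-truth bounding box, and as a false positive otherwise. The toy sketch below uses that reading; the box convention and counting are illustrative assumptions, not the exact protocol of Quack et al.

```python
def bbox_hits(match_points, gt_box):
    """Split matched feature locations in a test image into hits (inside
    the ground-truth bounding box) and false positives (outside it).
    gt_box is (xmin, ymin, xmax, ymax); matches are (x, y) points."""
    hits = sum(1 for (x, y) in match_points
               if gt_box[0] <= x <= gt_box[2] and gt_box[1] <= y <= gt_box[3])
    return hits, len(match_points) - hits
```

Sweeping a match-score threshold and recomputing hit and false-positive counts traces out the hit-rate vs. false-positive-rate curve.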
Comparison to baseline region detectors on the ETHZ shape classes (Applelogo, Bottle, Giraffe, Mug, Swan)
Localization accuracy
Metric: Bounding Box Overlap Score vs. Recall. The overlap score is computed by projecting the training exemplar's bounding box into the test image.
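The overlap score between a projected exemplar box and the ground-truth box is commonly the PASCAL-style intersection-over-union; a short sketch under that assumption:

```python
def overlap_score(box_a, box_b):
    """PASCAL-style overlap: intersection area / union area.
    Boxes are (xmin, ymin, xmax, ymax)."""
    ix = max(0, min(box_a[2], box_b[2]) - max(box_a[0], box_b[0]))
    iy = max(0, min(box_a[3], box_b[3]) - max(box_a[1], box_b[1]))
    inter = ix * iy
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)
```

A localization is typically counted as correct when this score exceeds a fixed threshold (often 0.5).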
Comparison to baseline region detectors on the ETHZ shape classes (Applelogo, Bottle, Giraffe, Mug, Swan)
Localization accuracy
Test image Database images with best matches to test BPLRs
Foreground segmentation
Replacing superpixels with BPLRs in GrabCut segmentation (Caltech-28 dataset):

Approach                                              Accuracy (%)
BPLR + GrabCut (Ours)                                 85.6
Superpixel + GrabCut                                  81.5
Superpixel ClassCut (Alexe et al., ECCV 10)           83.6
Superpixel Spatial Topic Model (Cao et al., ICCV 07)  67.0
Object classification
Nearest-neighbor results on the Caltech-101 benchmark, comparing features under the same Naïve Bayes NN classifier [Boiman et al. 2008]:

Feature              Accuracy (%)
BPLR + PHOG (Ours)   61.1
Dense + SIFT         55.2
Segment + PHOG       37.6
Dense + PHOG         27.9
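The classifier used above is the Naïve Bayes nearest-neighbor (NBNN) scheme of Boiman et al.: sum each query descriptor's distance to its nearest neighbor in each class's descriptor pool, and pick the class with the smallest total. A minimal sketch of that decision rule, with toy descriptor pools standing in for real features:

```python
import numpy as np

def nbnn_classify(query_descs, class_pools):
    """Naive-Bayes nearest neighbor: for each query descriptor, take its
    squared distance to the nearest descriptor in each class's pool; the
    class minimizing the summed distances wins."""
    best_class, best_cost = None, np.inf
    for label, pool in class_pools.items():
        pool = np.asarray(pool, dtype=float)
        cost = 0.0
        for d in np.asarray(query_descs, dtype=float):
            cost += ((pool - d) ** 2).sum(axis=1).min()
        if cost < best_cost:
            best_class, best_cost = label, cost
    return best_class

# Toy pools: class "a" descriptors cluster near the query, "b" is far away
pools = {"a": [[0.0, 0.0], [1.0, 0.0]], "b": [[10.0, 10.0]]}
label = nbnn_classify([[0.1, 0.0], [0.9, 0.0]], pools)
```

Because NBNN has no per-class training, the comparison isolates the contribution of the features themselves.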
Conclusion
Dense local detector that preserves object boundaries:
- Captures objects' local shape in a repeatable manner
- Feature sampling and linking driven by segmentation
- Generic bottom-up extraction
Code available: http://vision.cs.utexas.edu/projects/bplr/bplr.html