Stereo Class 7
Read Chapter 7 of tutorial
http://cat.middlebury.edu/stereo/
Tsukuba dataset
Geometric Computer Vision course schedule (tentative)
Lecture / Exercise
Sept 16: Introduction / -
Sept 23: Geometry & Camera model / Camera calibration
Sept 30: Single View Metrology (Changchang Wu) / Measuring in images
Oct. 7: Feature Tracking/Matching / Correspondence computation
Oct. 14: Epipolar Geometry / F-matrix computation
Oct. 21: Shape-from-Silhouettes / Visual-hull computation
Oct. 28: Stereo matching / Papers
Nov. 4: Structure from motion and visual SLAM / Project proposals
Nov. 11: Multi-view geometry and self-calibration / Papers
Nov. 18: Shape-from-X / Papers
Nov. 25: Structured light and active range sensing / Papers
Dec. 2: 3D modeling, registration and range/depth fusion (Christopher Zach?) / Papers
Dec. 9: Appearance modeling and image-based rendering / Papers
Dec. 16: Final project presentations
Stereo
• Standard stereo geometry
• Stereo matching
• Aggregation
• Optimization (1D, 2D)
• General camera configuration
• Rectifications
• Plane-sweep
• Multi-view stereo
Challenges
• Ill-posed inverse problem: recover 3-D structure from 2-D information
• Difficulties: uniform regions, half-occluded pixels
Pixel Dissimilarity
• Absolute difference of intensities: c = |I1(x,y) - I2(x-d,y)|
• Interval matching [Birchfield 98]
  • Considers sensor integration
  • Represents pixels as intervals
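The absolute-difference cost above is straightforward to compute over a range of disparities. A minimal sketch (plain absolute difference, not the Birchfield-Tomasi interval variant; the border handling and the `max_disp` parameter are arbitrary choices):

```python
import numpy as np

def ad_cost_volume(left, right, max_disp):
    """Absolute-difference matching cost c(x, y, d) = |I1(x,y) - I2(x-d,y)|.

    left, right: 2-D float arrays (grayscale images).
    Returns an (H, W, max_disp+1) cost volume; pixels whose match would
    fall outside the right image get an infinite cost so they never win.
    """
    h, w = left.shape
    cost = np.full((h, w, max_disp + 1), np.inf)
    for d in range(max_disp + 1):
        # compare I1(x, y) with I2(x - d, y) for all valid x >= d
        cost[:, d:, d] = np.abs(left[:, d:] - right[:, :w - d])
    return cost
```

A winner-take-all disparity map is then just `np.argmin(cost, axis=2)`; the aggregation and optimization methods discussed below improve on that per-pixel minimum.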
Alternative Dissimilarity Measures
• Rank and Census transforms [Zabih ECCV94]
• Rank transform:
  • Define a window containing R pixels around each pixel
  • Count the number of pixels with lower intensities than the center pixel in the window
  • Replace intensity with rank (0..R-1)
  • Compute SAD on rank-transformed images
• Census transform:
  • Use a bit string defined by the neighbors, instead of a scalar rank
• Robust against illumination changes
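Both transforms can be sketched in a few lines. This is an illustrative implementation, not Zabih's original code; borders are handled by wrap-around (`np.roll`) purely for brevity, where a real implementation would pad or crop:

```python
import numpy as np

def rank_transform(img, r=1):
    """Rank transform: replace each pixel by the number of pixels in its
    (2r+1)x(2r+1) window that are darker than the center pixel."""
    out = np.zeros(img.shape, dtype=np.int32)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            if dy == 0 and dx == 0:
                continue
            shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            out += shifted < img     # count darker neighbours
    return out

def census_transform(img, r=1):
    """Census transform: a bit string recording, for each neighbour in the
    window, whether it is darker than the center pixel."""
    out = np.zeros(img.shape, dtype=np.int64)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            if dy == 0 and dx == 0:
                continue
            shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            out = (out << 1) | (shifted < img).astype(np.int64)
    return out
```

Because both transforms depend only on intensity *orderings*, any monotonic gain/bias change of the image leaves their output unchanged, which is exactly the robustness claim above.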
Rank and Census Transform Results
• Noise-free random dot stereograms
• Different gain and bias
Systematic Errors of Area-based Stereo
• Ambiguous matches in textureless regions
• Surface over-extension [Okutomi IJCV02]
Surface Over-extension
• Expected value of E[(x-y)²], for x in the left and y in the right image, is:
  Case A: σF² + σB² + (μF - μB)² for w/2 - λ pixels in each row
  Case B: 2σB² for w/2 + λ pixels in each row
[Figure: left and right images; disparity of the back surface]
Surface Over-extension
• Discontinuity perpendicular to epipolar lines
• Discontinuity parallel to epipolar lines
[Figure: left and right images; disparity of the back surface]
Over-extension and shrinkage
• Turns out that the surface over-extends for discontinuities perpendicular to epipolar lines
• And shrinks for discontinuities parallel to epipolar lines
Random Dot Stereogram Experiments
Offset Windows
• Equivalent to using the minimum nearby cost
• Result: loss of depth accuracy
Discontinuity Detection
• Use offset windows only where appropriate
• Bi-modal distribution of SSD
• Pixel of interest different than the mode within the window
Compact Windows
• [Veksler CVPR03]: Adapt window size based on:
  • Average matching error per pixel
  • Variance of matching error
  • Window size (to bias towards larger windows)
• Pick the window that minimizes the cost
Integral Image
Compute an integral image for pixel dissimilarity at each possible disparity.
[Figure: corner values A, B, C, D of the integral image around the shaded window]
Shaded area = A + D - B - C: constant time, independent of window size.
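The integral image (summed-area table) and the A + D - B - C window sum can be sketched as follows; the zero-padded first row and column make the four-corner lookup valid at the image border:

```python
import numpy as np

def integral_image(cost):
    """Summed-area table: s[i, j] = sum of cost[:i, :j]."""
    s = np.zeros((cost.shape[0] + 1, cost.shape[1] + 1))
    s[1:, 1:] = cost.cumsum(axis=0).cumsum(axis=1)
    return s

def window_sum(s, y0, x0, y1, x1):
    """Sum of cost[y0:y1, x0:x1] in O(1), using the four corner values
    of the integral image (A + D - B - C on the slide)."""
    return s[y1, x1] - s[y0, x1] - s[y1, x0] + s[y0, x0]
```

Building one table per disparity makes the aggregated cost of *any* rectangular window available in four lookups, which is what makes variable-window methods such as compact windows affordable.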
Results using Compact Windows
Rod-shaped Filters
• Instead of square windows, aggregate cost in rod-shaped shiftable windows [Kim CVPR05]
• Search for the one that minimizes the cost (assume that it is an iso-disparity curve)
• Typically use 36 orientations
Locally Adaptive Support
Apply weights to the contributions of neighboring pixels according to similarity and proximity [Yoon CVPR05].
Locally Adaptive Support
• Similarity: color difference in CIE Lab color space
• Proximity: Euclidean distance in the image plane
• Weights: combine the two terms
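The weight formula in Yoon and Kweon's paper combines the two terms as w(p, q) = exp(-(dc/γc + dg/γp)). A sketch for a single window, using grayscale intensity as a stand-in for the CIELab color distance; the γ values here are illustrative, not the paper's tuned settings:

```python
import numpy as np

def support_weights(patch, gamma_c=7.0, gamma_p=17.5):
    """Adaptive support weights for one (2r+1)x(2r+1) window.

    dc: intensity difference to the window center (similarity term,
        grayscale stand-in for the CIELab distance).
    dg: Euclidean distance to the center (proximity term).
    Returns w = exp(-(dc/gamma_c + dg/gamma_p)), same shape as patch.
    """
    r = patch.shape[0] // 2
    yy, xx = np.mgrid[-r:r + 1, -r:r + 1]
    dg = np.sqrt(yy**2 + xx**2)          # proximity: Euclidean distance
    dc = np.abs(patch - patch[r, r])     # similarity: difference to center
    return np.exp(-(dc / gamma_c + dg / gamma_p))
```

During matching, these weights (computed in both images) multiply the per-pixel dissimilarities before summing over the window, so pixels that likely lie on a different surface contribute little.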
Locally Adaptive Support: Results
Occlusions
(Slide from Pascal Fua)
Exploiting scene constraints
Ordering constraint
[Figure: matching pixels 1-6 along corresponding epipolar lines; surface slice and surface as a path, with occlusions in the left and right image]
Uniqueness constraint
• In an image pair, each pixel has at most one corresponding pixel
• In general, one corresponding pixel
• In case of occlusion there is none
Disparity constraint
[Figure: surface slice and surface as a path; bounding box and disparity band; constant-disparity surfaces]
Use reconstructed features to determine the bounding box.
Stereo matching
Similarity measure (SSD or NCC)
Constraints:
• epipolar
• ordering
• uniqueness
• disparity limit
Trade-off:
• Matching cost (data)
• Discontinuities (prior)
Consider all paths that satisfy the constraints and pick the best one using dynamic programming (optimal path).
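The dynamic-programming idea can be sketched for one scanline. This is a simplified version that trades data cost against a disparity-smoothness prior; it omits the explicit occlusion states and ordering constraint a full implementation would carry:

```python
import numpy as np

def scanline_dp(cost, p_smooth=1.0):
    """Optimal disparity path for one scanline via dynamic programming.

    Minimises sum_x cost[x, d_x] + p_smooth * |d_x - d_{x-1}|.
    cost: (W, D) matching costs for one row (e.g. SSD).
    Returns a length-W integer disparity array.
    """
    w, nd = cost.shape
    acc = np.zeros_like(cost)            # accumulated cost
    back = np.zeros((w, nd), dtype=np.int64)
    acc[0] = cost[0]
    d = np.arange(nd)
    penalty = p_smooth * np.abs(d[:, None] - d[None, :])  # [d, d_prev]
    for x in range(1, w):
        total = acc[x - 1][None, :] + penalty
        back[x] = total.argmin(axis=1)   # best predecessor per disparity
        acc[x] = cost[x] + total.min(axis=1)
    # backtrack the optimal path
    path = np.empty(w, dtype=np.int64)
    path[-1] = acc[-1].argmin()
    for x in range(w - 1, 0, -1):
        path[x - 1] = back[x, path[x]]
    return path
```

Running this independently per row gives the classic DP stereo result, including its characteristic horizontal streaks, which motivates the 2-D and semi-global formulations below.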
Hierarchical stereo matching
[Figure: downsampling (Gaussian pyramid) and disparity propagation between pyramid levels]
• Allows faster computation
• Deals with large disparity ranges
Disparity map
image I(x,y), image I'(x',y'), disparity map D(x,y):
(x',y') = (x + D(x,y), y)
Example: reconstruct an image from neighboring images.
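The reconstruction example above amounts to forward-warping the reference image with its disparity map. A minimal sketch assuming integer disparities and no occlusion handling (overlapping targets simply overwrite each other):

```python
import numpy as np

def warp_with_disparity(img, disp):
    """Predict the second view from I(x, y) and D(x, y) via
    (x', y') = (x + D(x, y), y).  disp: integer disparity map."""
    h, w = img.shape
    out = np.zeros_like(img)
    ys, xs = np.mgrid[0:h, 0:w]
    xp = xs + disp                       # x' = x + D(x, y)
    valid = (xp >= 0) & (xp < w)         # drop pixels warped off-image
    out[ys[valid], xp[valid]] = img[valid]
    return out
```

Comparing such a predicted image against the actually observed neighboring view is a common way to evaluate a disparity map.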
Semi-global optimization
• Optimize E = E_data + E(|Dp-Dq|=1) + E(|Dp-Dq|>1) [Hirschmüller CVPR05]
• Use mutual information as cost
• NP-hard using graph cuts or belief propagation (2-D optimization)
• Instead, do dynamic programming along many directions
  • Don't use visibility or ordering constraints
  • Enforce uniqueness
  • Add costs
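One of the many 1-D passes can be sketched as follows (here left-to-right; the P1/P2 penalties are illustrative, not Hirschmüller's tuned values). Summing such passes over 8-16 directions and taking the per-pixel argmin gives the semi-global result:

```python
import numpy as np

def sgm_aggregate_lr(cost, p1=1.0, p2=8.0):
    """Left-to-right SGM aggregation pass over a cost volume.

    L(x,d) = C(x,d) + min(L(x-1,d), L(x-1,d+-1)+P1, min_k L(x-1,k)+P2)
             - min_k L(x-1,k)
    cost: (H, W, D) matching costs.  Returns the aggregated volume.
    """
    h, w, nd = cost.shape
    L = cost.astype(float).copy()
    for x in range(1, w):
        prev = L[:, x - 1, :]                          # (H, D)
        mprev = prev.min(axis=1, keepdims=True)        # min_k L(x-1,k)
        up = np.full_like(prev, np.inf)
        dn = np.full_like(prev, np.inf)
        up[:, 1:] = prev[:, :-1] + p1                  # d-1 neighbour
        dn[:, :-1] = prev[:, 1:] + p1                  # d+1 neighbour
        best = np.minimum(np.minimum(prev, up),
                          np.minimum(dn, mprev + p2))
        L[:, x, :] = cost[:, x, :] + best - mprev
    return L
```

Subtracting min_k L(x-1,k) keeps the accumulated values bounded without changing the argmin, a detail from the original paper that matters for fixed-precision implementations.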
Results of Semi-global optimization
No. 1 overall in Middlebury evaluation(at 0.5 error threshold as of Sep. 2006)
Energy minimization
(Slide from Pascal Fua)
Graph Cut
(Slide from Pascal Fua)
(general formulation requires multi-way cut!)
(Boykov et al ICCV‘99)
(Roy and Cox ICCV‘98)
Simplified graph cut
Belief Propagation
Belief of one node about another gets propagated through messages (the full pdf, not just the most likely state).
[Figure: per-pixel messages from the left, right, up, and down neighbors; results after the first iteration and subsequent iterations 2, 3, 4, 5, ..., 20]
(adapted from J. Coughlan slides)
Stereo matching with general camera configuration
Image pair rectification
Planar rectification
• Bring two views to the standard stereo setup (calibrated)
• Moves the epipole to infinity (not possible when the epipole is in or close to the image)
• Distortion minimization (uncalibrated)
Polar rectification (Pollefeys et al. ICCV'99)
• Polar re-parameterization around the epipoles
• Requires only (oriented) epipolar geometry
• Preserves the length of epipolar lines
• Choose so that no pixels are compressed
• Works for all relative motions
• Guarantees minimal image size
[Figure: original image pair; planar and polar rectification]
Example: Béguinage of Leuven
Does not work with standard homography-based approaches.
Stereo camera configurations
(Slide from Pascal Fua)
Multi-camera configurations
Okutomi and Kanade
(illustration from Pascal Fua)
• Multi-baseline, multi-resolution
• At each depth, baseline and resolution are selected proportional to that depth
• Allows keeping depth accuracy constant!
Variable Baseline/Resolution Stereo (Gallup et al., CVPR08)
Variable Baseline/Resolution Stereo: comparison
Multi-view depth fusion
• Compute depth for every pixel of the reference image
  • Triangulation
  • Use multiple views
  • Up- and down-sequence
  • Use a Kalman filter
(Koch, Pollefeys and Van Gool, ECCV'98)
Allows computing robust texture.
Plane-sweep multi-view matching
• Simple algorithm for multiple cameras
  • No rectification necessary
  • Doesn't deal with occlusions
• Collins '96; Roy and Cox '98 (GC); Yang et al. '02/'03 (GPU)
Next class: structured light and active scanning