
  • Slide 1: Bayesian Modeling of Uncertainty in Low-Level Vision

    Paper by Richard Szeliski

    International Journal of Computer Vision (1990)

    Presentation by Michael Ross

  • Slide 2: Bayesian methods for intrinsic images

    Dense fields and inverse problems (depth maps, orientation, stereo matching).

    Traditionally solved by energy minimization or regularization methods.

    Replace these methods with Bayesian methods to get explicit modeling of uncertainty and a better understanding of the models.

  • Slide 3: Intrinsic images

    Finding dense information fields from sparse or uncertain data.

    Interpolation and accounting for uncertainty in measurement.

  • Slide 4: Energy methods

    E(u) = E_d(u, d) + \lambda E_p(u)

    E_d(u, d) = \frac{1}{2} \sum_i c_i \, [u(x_i, y_i) - d_i]^2

    E_p(u) = \frac{1}{2} \iint \left( u_x^2 + u_y^2 \right) dx \, dy

  • Slide 5: Discrete energy methods

    E(u) = \frac{1}{2} u^T A u - u^T b + c

    E_d(u, d) = \frac{1}{2} (u - d)^T A_d (u - d)

    E_p(u) = \frac{1}{2} u^T A_p u

    E(u) = \frac{1}{2} (u - u')^T A (u - u') + k
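
    The quadratic forms above can be assembled and minimized directly. Below is a minimal NumPy sketch for a 1-D signal, assuming a first-difference (membrane) prior and illustrative weights; the MAP estimate is then the solution of the linear system A u = b.

      import numpy as np

      # A minimal 1-D sketch of the discrete energy E(u) = E_d + E_p and its minimizer.
      # The membrane prior, grid size, and weights are illustrative choices,
      # not values from the paper.

      n = 8                                  # number of grid points
      d = np.zeros(n)                        # observations (only some points measured)
      c = np.zeros(n)                        # confidence weights c_i (0 = no data)
      d[1], c[1] = 1.0, 1.0
      d[6], c[6] = -0.5, 1.0

      A_d = np.diag(c)                       # data term:  E_d = 1/2 (u-d)^T A_d (u-d)

      D = np.diff(np.eye(n), axis=0)         # first-difference operator
      lam = 2.0                              # smoothness weight (illustrative)
      A_p = lam * D.T @ D                    # prior term:  E_p = 1/2 u^T A_p u

      A = A_d + A_p                          # combined quadratic form
      b = A_d @ d                            # linear term from the data energy

      u_map = np.linalg.solve(A, b)          # minimizing E(u) = solving A u = b
      E = 0.5 * u_map @ A @ u_map - u_map @ b + 0.5 * d @ A_d @ d
      print("MAP estimate:", np.round(u_map, 3))
      print("energy at the minimum:", round(E, 4))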

  • Slide 6: Energy implies probability

    Squared error terms imply Gaussian noise models.

    Smoothness (and discontinuity) can be modeled as a Markov random field.

    Using probability directly gives us access to information hidden by energy models (uncertainty modeling, model accuracy).

  • Slide 7: Markov random fields

    p(u_i \mid u) = p(u_i \mid \{u_j : j \in N_i\})

    A Markov chain generalized to two dimensions. Most commonly used to calculate a MAP estimate, the assignment which maximizes p(u | d).

  • Slide 8: Prior models

    Describes p(u), our model of the world given no data. In principle, we can calculate it from the MRF terms.

    Gibbs distributions and sampling make these calculations simple and tractable (Geman & Geman).

    p(u) = \frac{1}{Z_p} \exp\left( -E_p(u) / T_p \right), \qquad E_p(u) = \sum_{c \in C} E_c(u)

  • Slide 9: Graph cliques

  • Slide 10: Sampling from prior models

    p(u_i \mid u) = \frac{1}{Z_p} \exp\left( -E_p(u_i \mid u) / T_p \right)

    [Figures: thin plate fit to the data; a sample drawn from the thin-plate prior.]
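
    A sketch of how such a sample can be drawn with a single-site Gibbs sampler. It uses a first-order (membrane) prior on a small grid rather than the thin-plate prior shown above; the grid size, temperature, smoothness weight, and weak zero anchor are all illustrative choices.

      import numpy as np

      # Minimal Gibbs-sampler sketch for drawing a sample from a Gaussian MRF prior.
      rng = np.random.default_rng(0)
      H, W = 16, 16
      T_p = 1.0          # temperature
      lam = 4.0          # smoothness weight
      eps = 0.01         # weak pull toward zero so the prior is proper
      u = rng.normal(size=(H, W))

      def neighbor_vals(i, j):
          for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
              ni, nj = i + di, j + dj
              if 0 <= ni < H and 0 <= nj < W:
                  yield u[ni, nj]

      for sweep in range(200):                  # repeated raster sweeps
          for i in range(H):
              for j in range(W):
                  nbrs = list(neighbor_vals(i, j))
                  # Local energy (lam/2) sum_k (u_ij - u_k)^2 + (eps/2) u_ij^2 is
                  # quadratic in u_ij, so the conditional p(u_ij | rest) is Gaussian.
                  prec = (lam * len(nbrs) + eps) / T_p
                  mean = lam * sum(nbrs) / (lam * len(nbrs) + eps)
                  u[i, j] = rng.normal(mean, 1.0 / np.sqrt(prec))

      print("sample std after sweeps: %.3f" % u.std())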

  • Slide 11: Coarse to fine sampling

    [Figures: thin plate fit to the data; a coarse-to-fine thin-plate sample.]

    Why is this so much better?

  • Slide 12: Coarse to fine sampling

    Fourier analysis reveals that the surfaces defined by thin-plate models are inherently fractal (self-similar at different resolutions).

    Gibbs sampling at a single resolution is not a practical way to propagate the low-frequency components of the random field.

    Coarse to fine sampling is faster and produces more representative samples.
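
    A rough sketch of the coarse-to-fine idea: draw a sample on a coarse grid (which mixes quickly and captures the low frequencies), upsample it, then refine with only a few sweeps at the fine resolution. The membrane prior, grid sizes, and sweep counts are illustrative rather than the paper's thin-plate settings.

      import numpy as np

      rng = np.random.default_rng(1)

      def gibbs_sweeps(u, n_sweeps, lam=4.0, eps=0.01, T_p=1.0):
          """In-place single-site Gibbs sweeps under a membrane prior."""
          H, W = u.shape
          for _ in range(n_sweeps):
              for i in range(H):
                  for j in range(W):
                      nbrs = [u[i + di, j + dj]
                              for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1))
                              if 0 <= i + di < H and 0 <= j + dj < W]
                      prec = (lam * len(nbrs) + eps) / T_p
                      mean = lam * sum(nbrs) / (lam * len(nbrs) + eps)
                      u[i, j] = rng.normal(mean, 1.0 / np.sqrt(prec))
          return u

      # Coarse level: a small grid mixes quickly and captures the low frequencies.
      coarse = gibbs_sweeps(rng.normal(size=(8, 8)), n_sweeps=100)

      # Upsample by pixel replication (nearest neighbour) to the fine grid.
      fine = np.kron(coarse, np.ones((4, 4)))

      # Fine level: a handful of sweeps adds the high-frequency detail.
      fine = gibbs_sweeps(fine, n_sweeps=10)

      print("fine sample shape:", fine.shape, "std: %.3f" % fine.std())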

  • Slide 13: Sensor models

    Directly incorporated and easy to swap.

    p(d \mid u) = \frac{1}{Z_d} \exp\left( -E_d(u, d) \right), \qquad E_d(u, d) = \sum_i E_{d_i}(u_i, d_i)

    Typical Gaussian sensor model (assuming uncorrelated errors):

    p(d_i \mid u) = \frac{1}{\sqrt{2\pi}\,\sigma_i} \exp\left( -\frac{(u_i - d_i)^2}{2\sigma_i^2} \right)

  • Slide 14: Contaminated Gaussians

    p(d_i \mid u) = \frac{1 - \epsilon}{\sqrt{2\pi}\,\sigma_1} \exp\left( -\frac{(u_i - d_i)^2}{2\sigma_1^2} \right) + \frac{\epsilon}{\sqrt{2\pi}\,\sigma_2} \exp\left( -\frac{(u_i - d_i)^2}{2\sigma_2^2} \right)
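
    A small sketch comparing the Gaussian and contaminated-Gaussian likelihoods for a single residual; the mixing fraction and the two standard deviations are illustrative values, not the paper's.

      import numpy as np

      # Gaussian vs. contaminated-Gaussian sensor likelihoods for a residual r = u_i - d_i.
      def gaussian(r, sigma):
          return np.exp(-r**2 / (2 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma)

      def contaminated(r, sigma1=1.0, sigma2=10.0, eps=0.05):
          # A narrow "inlier" Gaussian mixed with a broad "outlier" Gaussian
          # (illustrative parameter values).
          return (1 - eps) * gaussian(r, sigma1) + eps * gaussian(r, sigma2)

      for r in (0.0, 1.0, 5.0):
          print(f"r={r:4.1f}  gaussian={gaussian(r, 1.0):.2e}  "
                f"contaminated={contaminated(r):.2e}")
      # For large residuals the contaminated model assigns far more probability,
      # so isolated outliers do not dominate the posterior.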

  • Slide 15: Virtual sensors

    Use a lower-level vision algorithm (optical flow, for example) as a dense data source.

    Anandan and Weiss found that optical flow errors depend on the type of local data available (none, line, or corner).

    This can be used to construct a higher-level model that produces better flow estimates by explicitly modeling these dependencies in the sensor distribution.

  • Slide 16: Posterior models

    The output (intrinsic image) given the data; used to compute the MAP estimate of p(u | d).

    Described by a Gibbs distribution (MRF) with energy

    E(u) = E_p(u) + E_d(u, d)

    where E_p(u) is the prior model and E_d(u, d) is the sensor model.

  • Slide 17: Loss functions

    Generic: choose the estimate u' that minimizes the expected loss \int L(u, u') \, p(u \mid d) \, du

    MAP: the loss L(u, u') = -\delta(u - u') makes the expected loss -p(u' \mid d), so minimizing it maximizes p(u \mid d)

    Loss functions allow for top-down influence.
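
    A toy sketch of how the choice of loss function changes the estimate drawn from one and the same posterior: squared loss yields the posterior mean, while 0-1 loss yields the mode (the MAP estimate). The bimodal posterior below is purely illustrative.

      import numpy as np

      rng = np.random.default_rng(2)

      # Toy posterior: 70% of the mass near u=0, 30% near u=4 (illustrative only).
      samples = np.where(rng.random(100_000) < 0.7,
                         rng.normal(0.0, 0.5, 100_000),
                         rng.normal(4.0, 0.5, 100_000))

      posterior_mean = samples.mean()                     # minimizes squared loss
      hist, edges = np.histogram(samples, bins=200)
      posterior_mode = 0.5 * (edges[hist.argmax()] + edges[hist.argmax() + 1])

      print(f"squared loss -> mean  {posterior_mean:.2f}")   # lands between the modes
      print(f"0-1 loss     -> mode  {posterior_mode:.2f}")   # lands at the dominant mode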

  • Slide 18: Uncertainty estimation

    We can compute the covariance of the posterior models.

    In the Gaussian sensor model case:

    E(u) = \frac{1}{2} (u - u')^T A (u - u') + k

    \operatorname{cov}(u) = A^{-1}
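
    A short sketch of this direct computation, reusing the illustrative 1-D membrane-plus-data setup from the discrete-energy sketch: the marginal standard deviations are the square roots of the diagonal of A^{-1}.

      import numpy as np

      # Posterior covariance as the inverse information matrix, cov(u) = A^{-1}.
      n = 8
      c = np.zeros(n); c[1] = c[6] = 1.0      # data only at two sites
      A_d = np.diag(c)
      D = np.diff(np.eye(n), axis=0)
      A_p = 2.0 * D.T @ D                     # illustrative smoothness weight
      A = A_d + A_p

      cov = np.linalg.inv(A)                  # posterior covariance
      marginal_std = np.sqrt(np.diag(cov))    # per-site uncertainty
      print("per-site std:", np.round(marginal_std, 2))
      # The standard deviation is smallest at the measured sites (1 and 6)
      # and grows with distance from the data.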

  • Slide 19: Stochastic variance estimation

    Run the Gibbs sampler and estimate the variance with a Monte Carlo algorithm.

    Time averages are ergodic only over long time frames, unless we use multiresolution sampling.

    [Figures: variance estimates after 1000 iterations and after 100 iterations.]
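
    A sketch of the Monte Carlo alternative on the same illustrative 1-D model: Gibbs-sample the posterior at a single resolution (hence the many sweeps) and compare the sample variances with the exact diagonal of A^{-1}.

      import numpy as np

      rng = np.random.default_rng(3)
      n, lam = 8, 2.0
      c = np.zeros(n); d = np.zeros(n)
      c[1], d[1] = 1.0, 1.0
      c[6], d[6] = 1.0, -0.5
      D = np.diff(np.eye(n), axis=0)
      A = np.diag(c) + lam * D.T @ D          # posterior information matrix
      b = np.diag(c) @ d

      u = np.zeros(n)
      samples = []
      for sweep in range(5000):               # single resolution: needs many sweeps
          for i in range(n):
              # Gaussian conditional of site i given all the others.
              prec = A[i, i]
              mean = (b[i] - A[i] @ u + A[i, i] * u[i]) / prec
              u[i] = rng.normal(mean, 1.0 / np.sqrt(prec))
          if sweep >= 500:                     # discard burn-in
              samples.append(u.copy())

      mc_var = np.var(np.array(samples), axis=0)
      exact_var = np.diag(np.linalg.inv(A))
      print("Monte Carlo:", np.round(mc_var, 2))
      print("exact      :", np.round(exact_var, 2))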

  • Slide 20: Dynamic models

    Kalman filtering is a natural part of the MRF framework.

    Prior model:   u \sim N(u_0, P_0)

    Sensor model:  d_k = H_k u_k + r_k

    System model:  u_k = F_k u_{k-1} + q_k
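
    A minimal covariance-form Kalman filter sketch following the three models above; the scalar state, noise levels, and measurement sequence are illustrative.

      import numpy as np

      F = np.array([[1.0]])        # system model: state carried over unchanged
      Q = np.array([[0.01]])       # process noise covariance of q_k
      H = np.array([[1.0]])        # sensor model: we observe the state directly
      R = np.array([[0.25]])       # measurement noise covariance of r_k

      u = np.array([0.0])          # prior mean u_0
      P = np.array([[1.0]])        # prior covariance P_0

      for d_k in [0.9, 1.1, 1.0, 0.8]:
          # Predict with the system model.
          u = F @ u
          P = F @ P @ F.T + Q
          # Update with the sensor model.
          S = H @ P @ H.T + R                     # innovation covariance
          K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
          u = u + K @ (np.array([d_k]) - H @ u)
          P = (np.eye(1) - K @ H) @ P
          print(f"estimate {u[0]:.3f}  variance {P[0, 0]:.3f}")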

  • Slide 21: Dynamic models

    This work applies Kalman filtering to dense information, by using the sparse information matrices rather than the dense covariance matrices.

    A_k'' = A_k' + H_k^T R_k^{-1} H_k

    b_k'' = b_k' + H_k^T R_k^{-1} d_k

    u_k = A_k^{-1} b_k
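
    A sketch of this measurement update in information form with sparse matrices, assuming a diagonal R and a sparse H; SciPy's sparse solver stands in for whatever solver the original implementation used, and the 1-D field and measurement pattern are illustrative.

      import numpy as np
      import scipy.sparse as sp
      import scipy.sparse.linalg as spla

      # Information-form update on sparse matrices:
      #   A'' = A' + H^T R^{-1} H,   b'' = b' + H^T R^{-1} d,   u = solve(A'', b'').
      n = 100
      D = sp.diags([-np.ones(n - 1), np.ones(n - 1)], [0, 1], shape=(n - 1, n))
      A_prior = 2.0 * (D.T @ D) + 1e-3 * sp.eye(n)   # sparse prior information A'
      b_prior = np.zeros(n)                          # prior information vector b'

      # Sparse observations: every 10th site measured directly, uncorrelated noise.
      idx = np.arange(0, n, 10)
      H = sp.csr_matrix((np.ones(idx.size), (np.arange(idx.size), idx)),
                        shape=(idx.size, n))
      R_inv = sp.eye(idx.size) / 0.25                # diagonal R, so R^{-1} is diagonal
      d = np.sin(idx / 15.0)                         # synthetic measurements

      A_post = A_prior + H.T @ R_inv @ H             # still sparse
      b_post = b_prior + H.T @ R_inv @ d
      u = spla.spsolve(A_post.tocsc(), b_post)       # state estimate

      print("nonzeros in A_post:", A_post.nnz, "of", n * n)
      print("estimate at measured sites:", np.round(u[idx][:5], 2))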

  • Slide 22: Dynamic models

    Why are the information matrices sparse?

    In the prior model, we have local smoothness, so any column of the information matrix only has non-zero values for the locally neighboring points.

    In the sensor model, we usually assume that the sensor errors are uncorrelated, so the sensor information matrix is diagonal.

  • Slide 23: Applications

    Incremental depth from motion, using the Kalman filtering approach.

    Motion estimation without correspondence (discover the estimate that maximizes the likelihood that two sets of points are drawn from the same smooth surface).

    Maximum likelihood estimation for the regularization parameter.

  • Slide 24: Incremental depth from motion

    Compute a displacement estimate using correlation between frames.

    Transform to a disparity map using the camera motion, and integrate it with the predicted map (Kalman filtering).

    Regularize the map.

    Predict the next map using the known camera motion.
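
    A purely structural sketch of the integrate/regularize/predict cycle for one scan line, with inverse-variance (scalar Kalman) fusion per pixel; the disparity values, noise levels, and neighbour-averaging regularizer are illustrative stand-ins, and the correlation and camera-motion warping steps are omitted.

      import numpy as np

      rng = np.random.default_rng(4)
      n = 64                                    # one scan line of the disparity map
      true_disp = np.linspace(0.5, 1.5, n)      # ground truth (for the demo only)

      disp = np.zeros(n)                        # current disparity estimate
      var = np.full(n, 1e3)                     # start with very uncertain estimates
      meas_var = 0.05                           # variance of each new measurement
      process_var = 0.001                       # inflation during prediction

      for frame in range(10):
          d_k = true_disp + rng.normal(0.0, np.sqrt(meas_var), n)   # new measurement
          # Integrate: inverse-variance weighted fusion (scalar Kalman update per pixel).
          k = var / (var + meas_var)
          disp = disp + k * (d_k - disp)
          var = (1.0 - k) * var
          # Regularize: simple neighbour averaging stands in for the MRF smoothing.
          disp[1:-1] = 0.25 * disp[:-2] + 0.5 * disp[1:-1] + 0.25 * disp[2:]
          # Predict: carry the map to the next frame and inflate the uncertainty.
          var = var + process_var

      print("mean abs error after 10 frames: %.3f" % np.abs(disp - true_disp).mean())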

  • Slide 25: Motion without correspondence

    [Figures: surface through point set #1; surface through point set #2; surface through both sets.]

    Minimize the distance between point set #2 and the surface through set #1 and the surface through both sets.

  • Slide 26: ML parameter estimation

    p(u \mid d) \propto \exp\left( -\left( E_d(u, d) + E_p(u)/\sigma_p^2 \right) \right)

    p(d) = \left| 2\pi \left( H P_0 H^T + R \right) \right|^{-1/2} \exp\left( -\tfrac{1}{2}\, d^T \left( H P_0 H^T + R \right)^{-1} d \right)

    P_0^{-1} = \sigma_p^{-2} A_p

    Here \sigma_p^2 plays the role of the regularization parameter to be estimated.

    Take the log and maximize...
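
    A sketch of this maximization for a small 1-D model: evaluate log p(d) on a grid of candidate \sigma_p^2 values and take the best one. The membrane prior, noise level, and synthetic data are illustrative, and a real implementation would use a proper optimizer rather than a grid search.

      import numpy as np

      # Maximum-likelihood estimation of sigma_p^2 by maximizing
      #   log p(d) = -1/2 [ log|2*pi*(H P0 H^T + R)| + d^T (H P0 H^T + R)^{-1} d ].
      rng = np.random.default_rng(5)
      n = 40
      D = np.diff(np.eye(n), axis=0)
      A_p = D.T @ D + 1e-6 * np.eye(n)          # prior information (made invertible)
      H = np.eye(n)                             # every site observed
      R = 0.1 * np.eye(n)                       # sensor noise covariance

      # Synthetic data drawn from the model with a "true" sigma_p^2 of 2.0.
      true_s2 = 2.0
      u_true = rng.multivariate_normal(np.zeros(n), true_s2 * np.linalg.inv(A_p))
      d = H @ u_true + rng.multivariate_normal(np.zeros(n), R)

      def log_marginal(s2):
          P0 = s2 * np.linalg.inv(A_p)          # prior covariance, P0^{-1} = A_p / s2
          S = H @ P0 @ H.T + R
          sign, logdet = np.linalg.slogdet(2 * np.pi * S)
          return -0.5 * (logdet + d @ np.linalg.solve(S, d))

      grid = np.linspace(0.2, 6.0, 30)
      scores = [log_marginal(s2) for s2 in grid]
      print("ML estimate of sigma_p^2: %.2f (true %.1f)"
            % (grid[int(np.argmax(scores))], true_s2))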

  • Slide 27: Conclusion

    Probability models encompass energy models.

    We can draw samples to check their validity.

    We can model sensor behavior easily.

    Flexibility in choosing loss functions.

    Direct calculation of error.