
An Efficient Approach to Learning Inhomogeneous Gibbs Models

Ziqiang Liu, Hong Chen, Heung-Yeung Shum
Microsoft Research Asia, CVPR 2003

Presented by Derek Hoiem

Overview

- Build histograms of 1-D projections of the data
- Feature selection: maximize KL divergence between the estimated and true distributions
- 1-D histograms for each feature are computed from training data and by MCMC sampling
- Fast solution via a good starting point and importance sampling (the overall loop is sketched below)
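The steps above form one greedy feature-pursuit loop. A minimal Python sketch of that loop, where `gain` and `fit_lambda` are hypothetical stand-ins for the gain computation and importance-sampling fit detailed on later slides:

```python
def learn_igm(X, candidate_dirs, n_features, gain, fit_lambda):
    """Greedy feature pursuit (sketch; helper functions are hypothetical)."""
    chosen, lam = [], None
    for _ in range(n_features):
        # pick the projection whose 1-D histogram yields the largest
        # approximate information gain under the current model
        w = max(candidate_dirs, key=lambda d: gain(X, chosen, lam, d))
        chosen.append(w)
        # refit Lambda by importance sampling, warm-started from the
        # previous parameters (the "good starting point")
        lam = fit_lambda(X, chosen, lam)
    return chosen, lam
```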

Maximum Entropy Principle

The model p(x) should match the data distribution f(x) on the statistics of the observed features, but should be as random (maximum-entropy) as possible over all other dimensions.
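In symbols, a standard statement of the principle (f is the data distribution and the φ_j are the selected feature statistics):

$$
p^{*}=\arg\max_{p}\; -\!\int p(x)\log p(x)\,dx
\quad \text{s.t.} \quad
\mathbb{E}_{p}\big[\phi_j(x)\big]=\mathbb{E}_{f}\big[\phi_j(x)\big],\quad j=1,\dots,K.
$$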

Gibbs Distribution and KL-Divergence

The solution is a Gibbs distribution, with parameters Λ chosen to minimize the KL divergence to the data distribution:
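Reconstructed in the standard form (the λ_j are vectors here because the features are histograms):

$$
p(x;\Lambda)=\frac{1}{Z(\Lambda)}\,\exp\!\Big(\sum_{j=1}^{K}\big\langle\lambda_j,\;\phi_j(x)\big\rangle\Big),
\qquad
\Lambda^{*}=\arg\min_{\Lambda}\ \mathrm{KL}\big(f(x)\,\big\|\,p(x;\Lambda)\big).
$$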

Inhomogeneous Gibbs Model

- Gaussian and mixture-of-Gaussians models deemed inadequate
- Use vector-valued features (1-D histograms), as in the sketch below
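A minimal sketch of one such vector-valued feature, assuming a fixed projection direction and bin range (both hypothetical choices, not from the slides):

```python
import numpy as np

def histogram_feature(X, w, n_bins=32, lo=-4.0, hi=4.0):
    """Vector-valued feature: histogram of the 1-D projection of X onto w.

    X : (n, d) data matrix; w : (d,) projection direction.
    Returns normalized bin counts, i.e. the empirical 1-D marginal.
    """
    z = X @ w                                      # 1-D projections
    counts, _ = np.histogram(z, bins=n_bins, range=(lo, hi))
    return counts / max(counts.sum(), 1)           # guard against empty histograms
```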

Approximate Information Gain and KL-Divergence

The effectiveness of a feature is defined by the reduction in KL divergence it achieves.

The approximate information gain is computed holding the old parameters constant.

Extending this gain to vector-valued features is the key contribution (one standard form is given after this slide).

[Figure: the gain's maximizing parameter provides a good starting point for Λ]
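One standard form of this approximation, as used in maximum-entropy feature induction (for a vector-valued feature the weight λ is itself a vector):

$$
\mathrm{Gain}(\phi)\;=\;\mathrm{KL}\big(f\,\|\,p_{\text{old}}\big)-\mathrm{KL}\big(f\,\|\,p_{\text{new}}\big)
\;\approx\;\max_{\lambda}\Big[\big\langle\lambda,\;\mathbb{E}_{f}[\phi]\big\rangle-\log\,\mathbb{E}_{p_{\text{old}}}\!\big[e^{\langle\lambda,\,\phi(x)\rangle}\big]\Big],
$$

and the maximizing λ doubles as the starting point for the new parameter.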

Estimating Λ: Importance Sampling

Obtain reference samples x_ref by MCMC from the starting point, then update Λ by reweighting those samples (a sketch of one update follows the figure below):

[Figure: convergence from a bad vs. a good starting point]
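A minimal sketch of one reweighted update, assuming the feature values of the reference samples are precomputed (names and step size are hypothetical):

```python
import numpy as np

def update_lambda(lam, lam_ref, phi_ref, phi_obs, lr=0.1):
    """One gradient step on Lambda via importance-sampled expectations.

    phi_ref : (n, k) features of MCMC reference samples drawn at lam_ref
    phi_obs : (k,)   observed statistics E_f[phi] from training data
    """
    # importance weights tilt the reference samples toward the current
    # Lambda; subtract the max log-weight for numerical stability
    logw = phi_ref @ (lam - lam_ref)
    w = np.exp(logw - logw.max())
    w /= w.sum()
    phi_model = w @ phi_ref            # self-normalized estimate of E_p[phi]
    # move Lambda to better match the observed statistics
    return lam + lr * (phi_obs - phi_model)
```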

A Toy Success Story

[Figure: true distribution vs. reference (initial) samples and the optimized estimate]

Caricature Generation: Representation

- Learn a mapping from photo to caricature using active appearance models
- Photos: shape + texture (44-D after PCA)
- Caricatures: shape (25-D after PCA)

Caricature Generation: Learning

- Gain(1) = 0.447, Gain(17) = 0.196
- 100,000 reference samples
- 8 hours on a 1.4 GHz, 256 MB machine vs. 24 hours on a 667 MHz machine (18-D)

[Equations: the estimate, the distribution sampled from, and the approximation to it]

Caricature Generation: Results

[Figures: example photo-to-caricature results]

Comments

Claims a 100x speedup from the efficiency analysis, but only about a 33% speedup is realized in practice: scaling the 24 hours at 667 MHz to a 1.4 GHz CPU gives roughly 11.4 hours, against the 8 hours reported.