Transcript of Loss-based Visual Learning with Weak Supervision
M. Pawan Kumar, joint work with Pierre-Yves Baudin, Danny Goodman, Puneet Kumar, Nikos Paragios, Noura Azzabou, Pierre Carlier

Page 1:

Loss-based Visual Learning with Weak Supervision

M. Pawan Kumar

Joint work with Pierre-Yves Baudin, Danny Goodman,

Puneet Kumar, Nikos Paragios, Noura Azzabou, Pierre Carlier

Page 2:

SPLENDID

Nikos Paragios, Equipe Galen, INRIA Saclay

Daphne Koller, DAGS, Stanford

Machine Learning with Weak and Noisy Annotations

Applications: Computer Vision and Medical Imaging

Self-Paced Learning for Exploiting Noisy, Diverse or Incomplete Data

2 Visits from INRIA to Stanford

1 Visit from Stanford to INRIA

2012: ICML

3 Visits Planned

2013: MICCAI

Page 3:

Medical Image Segmentation

MRI Acquisitions of the thigh

Page 4:

Medical Image Segmentation

MRI Acquisitions of the thigh

Segments correspond to muscle groups

Page 5:

Random Walks Segmentation

Probabilistic segmentation algorithm

Computationally efficient

Interactive segmentation (L. Grady, 2006)

Automated shape prior driven segmentation (L. Grady, 2005; Baudin et al., 2012)

Page 6:

Random Walks Segmentation

y(i,s): Probability that voxel ‘i’ belongs to segment ‘s’

x: Medical acquisition

min_y E(x,y) = y^T L(x) y + w_shape ||y - y_0||^2

L(x): positive semi-definite Laplacian matrix

y_0: shape prior on the segmentation

w_shape: parameter of the RW algorithm, hand-tuned

The objective is convex in y
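To make the minimization concrete, here is a minimal sketch for a single segment, assuming no user-provided seeds; the inputs `laplacian` (the sparse graph Laplacian L(x)) and `shape_prior` (the prior y_0 flattened to a vector) are hypothetical. Setting the gradient of the convex objective to zero reduces it to the sparse linear system (L + w_shape I) y = w_shape y_0.

```python
# Minimal sketch of the Random Walks energy minimization for one segment,
# assuming no seed constraints:
#   E(y) = y^T L y + w_shape * ||y - y0||^2
# Setting the gradient to zero gives (L + w_shape * I) y = w_shape * y0.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def random_walks_soft_segmentation(laplacian, shape_prior, w_shape=1.0):
    """Return the unconstrained minimizer of the RW energy, clipped to [0, 1]."""
    n = laplacian.shape[0]
    A = laplacian + w_shape * sp.identity(n, format="csr")  # positive definite for w_shape > 0
    b = w_shape * shape_prior
    y = spla.spsolve(A, b)
    return np.clip(y, 0.0, 1.0)  # per-voxel probability for this segment
```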

Page 7:

Random Walks Segmentation

Several Laplacians

L(x) = Σ_α w_α L_α(x)

Several shape and appearance priors

Σ_β w_β ||y - y_β||^2

Hand-tuning a large number of parameters is onerous

Page 8:

Parameter Estimation

Learn the best parameters from training data

E(x,y) = Σ_α w_α y^T L_α(x) y + Σ_β w_β ||y - y_β||^2

Page 9:

Parameter Estimation

Learn the best parameters from training data

E(x,y) = w^T Ψ(x,y)

w is the set of all parameters

Ψ(x,y) is the joint feature vector of input and output
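As an illustration of why the energy becomes linear in the parameters, here is a minimal sketch of the joint feature vector, assuming the energy decomposes into the Laplacian and prior terms above; `laplacians` and `priors` are hypothetical lists holding the L_α matrices and y_β vectors.

```python
# Minimal sketch: stacking the individual energy terms makes the energy linear
# in the parameters, i.e. E(x, y) = w^T Psi(x, y).
import numpy as np

def joint_feature_vector(laplacians, priors, y):
    quadratic_terms = [y @ (L @ y) for L in laplacians]        # y^T L_a y
    prior_terms = [np.sum((y - y_b) ** 2) for y_b in priors]   # ||y - y_b||^2
    return np.array(quadratic_terms + prior_terms)

def energy(w, laplacians, priors, y):
    return float(w @ joint_feature_vector(laplacians, priors, y))
```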

Page 10:

Outline

• Parameter Estimation
  – Supervised Learning
  – Hard vs. Soft Segmentation
  – Mathematical Formulation

• Optimization

• Experiments

• Related and Future Work in SPLENDID

Page 11:

Supervised Learning

Dataset of segmented MRIs

For sample x_k and voxel i:

z_k(i,s) = 1 if s is the ground-truth segment, 0 otherwise

But how does this hard ground truth relate to the probabilistic (soft) segmentation produced by the RW algorithm?

Page 12:

Supervised Learning

min_w Σ_k ξ_k + λ||w||^2

s.t. w^T Ψ(x_k, ŷ) - w^T Ψ(x_k, z_k) ≥ Δ(ŷ, z_k) - ξ_k, for all ŷ

(the energy of any segmentation ŷ must exceed the energy of the ground truth z_k by at least the loss, up to slack ξ_k)

Δ(ŷ, z_k) = fraction of incorrectly labeled voxels

Taskar et al., 2003; Tsochantardis et al., 2004

Structured-output Support Vector Machine
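A minimal sketch of the structured-output SVM objective over a hypothetical finite pool of candidate segmentations per sample (in practice the most violated constraint is found by loss-augmented inference); all names are illustrative.

```python
# Minimal sketch of the SSVM objective: the slack for each sample is the most
# violated margin constraint, xi_k = max(0, max_yhat [Delta - (w.Psi(yhat) - w.Psi(z))]).
import numpy as np

def hamming_loss(y_hat, z):
    """Delta(y_hat, z): fraction of incorrectly labeled voxels."""
    return np.mean(y_hat != z)

def ssvm_objective(w, samples, lam=1e-2):
    """samples: list of (psi_ground_truth, candidates), where candidates is a
    list of (psi_candidate, y_hat, z) tuples."""
    total_slack = 0.0
    for psi_gt, candidates in samples:
        slacks = [hamming_loss(y_hat, z) - (w @ psi_cand - w @ psi_gt)
                  for psi_cand, y_hat, z in candidates]
        total_slack += max(0.0, max(slacks))  # most violated constraint
    return total_slack + lam * np.dot(w, w)
```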

Page 13:

Supervised Learning

Convex with several efficient algorithms

No setting of the parameters produces a 'hard' 0-1 segmentation

We only need a correct ‘soft’ probabilistic segmentation

Page 14:

Outline

• Parameter Estimation
  – Supervised Learning
  – Hard vs. Soft Segmentation
  – Mathematical Formulation

• Optimization

• Experiments

• Related and Future Work in SPLENDID

Page 15:

Hard vs. Soft Segmentation

Hard segmentation z_k

Don’t require 0-1 probabilities

Page 16:

Hard vs. Soft Segmentation

Soft segmentation y_k

Compatible with zk

Binarizing y_k (taking the most probable segment per voxel) gives z_k, as sketched below
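A minimal sketch of this compatibility condition, assuming y_k is stored as a (voxels × segments) probability array and z_k as an integer label per voxel; the names are illustrative.

```python
# Minimal sketch of the compatibility check between a soft segmentation y and
# a hard segmentation z: binarizing y must recover z.
import numpy as np

def is_compatible(y, z):
    """True if y is in C(z), i.e. argmax_s y(i, s) == z(i) for every voxel i."""
    return bool(np.all(np.argmax(y, axis=1) == z))
```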

Page 17:

Hard vs. Soft Segmentation

y_k ∈ C(z_k): the set of soft segmentations compatible with z_k

Which y_k should we use?

The y_k provided by the best parameters, which are unknown

Page 18:

Outline

• Parameter Estimation
  – Supervised Learning
  – Hard vs. Soft Segmentation
  – Mathematical Formulation

• Optimization

• Experiments

• Related and Future Work in SPLENDID

Page 19:

Learning with Hard Segmentation

min_w Σ_k ξ_k + λ||w||^2

s.t. w^T Ψ(x_k, ŷ) - w^T Ψ(x_k, z_k) ≥ Δ(ŷ, z_k) - ξ_k, for all ŷ

Page 20:

Learning with Soft Segmentation

min_w Σ_k ξ_k + λ||w||^2

s.t. w^T Ψ(x_k, ŷ) - w^T Ψ(x_k, y_k) ≥ Δ(ŷ, z_k) - ξ_k, for all ŷ

Page 21:

Learning with Soft Segmentation

min_w Σ_k ξ_k + λ||w||^2

s.t. w^T Ψ(x_k, ŷ) - min_{y_k ∈ C(z_k)} w^T Ψ(x_k, y_k) ≥ Δ(ŷ, z_k) - ξ_k, for all ŷ

Latent Support Vector Machine

Smola et al., 2005; Felzenszwalb et al., 2008; Yu et al., 2009

Page 22:

Outline

• Parameter Estimation

• Optimization

• Experiments

• Related and Future Work in SPLENDID

Page 23:

Latent SVM

Difference-of-convex problem

min_w Σ_k ξ_k + λ||w||^2

s.t. w^T Ψ(x_k, ŷ) - min_{y_k ∈ C(z_k)} w^T Ψ(x_k, y_k) ≥ Δ(ŷ, z_k) - ξ_k, for all ŷ

Concave-Convex Procedure (CCCP)

Page 24:

CCCP

Repeat until convergence:

1. Estimate the soft segmentations (efficient optimization using dual decomposition):
   y_k* = argmin_{y_k ∈ C(z_k)} w^T Ψ(x_k, y_k)

2. Update the parameters (convex optimization):
   min_w Σ_k ξ_k + λ||w||^2
   s.t. w^T Ψ(x_k, ŷ) - w^T Ψ(x_k, y_k*) ≥ Δ(ŷ, z_k) - ξ_k, for all ŷ
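A minimal sketch of the CCCP loop described above, with the two inner solvers left as hypothetical placeholders: `impute_soft_segmentation` stands in for the step that estimates y_k* (solved with dual decomposition in the talk), and `solve_ssvm` for the convex parameter update.

```python
# Minimal sketch of CCCP: alternate between imputing the latent soft
# segmentations and solving the resulting convex SSVM problem.
import numpy as np

def cccp(samples, impute_soft_segmentation, solve_ssvm, w_init, max_iters=50, tol=1e-4):
    """samples: list of (x_k, z_k) pairs; the two callables are placeholders."""
    w = np.array(w_init, dtype=float)
    for _ in range(max_iters):
        # Step 1: estimate the soft segmentations y_k* with the current parameters.
        imputed = [impute_soft_segmentation(w, x_k, z_k) for x_k, z_k in samples]
        # Step 2: update the parameters, treating the imputed y_k* as ground truth.
        w_new = solve_ssvm(samples, imputed)
        if np.linalg.norm(w_new - w) < tol:
            return w_new
        w = w_new
    return w
```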

Page 25:

Outline

• Parameter Estimation

• Optimization

• Experiments

• Related and Future Work in SPLENDID

Page 26:

Dataset

30 MRI volumes of the thigh

Dimensions: 224 x 224 x 100

4 muscle groups + background

80% for training, 20% for testing

Page 27:

Parameters

4 Laplacians

2 shape priors

1 appearance prior

Grady, 2005; Baudin et al., 2012

Page 28:

Baselines

• Hand-tuned parameters

• Structured-output SVM
  – Hard segmentation
  – Soft segmentation based on the signed distance transform (one plausible construction is sketched below)
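The slides do not spell out how the signed-distance-transform soft segmentation is built, so the following is only a plausible sketch: per-segment signed distances mapped to probabilities with a softmax; `temperature` is a hypothetical smoothing parameter.

```python
# Sketch of one way to turn a hard segmentation into a soft one using signed
# distance transforms (an assumption, not necessarily the baseline's exact recipe).
import numpy as np
from scipy.ndimage import distance_transform_edt

def soft_from_hard(z, num_segments, temperature=5.0):
    """z: integer label volume; returns an array of shape z.shape + (num_segments,)."""
    scores = []
    for s in range(num_segments):
        mask = (z == s)
        # Positive inside the segment, negative outside.
        signed_dist = distance_transform_edt(mask) - distance_transform_edt(~mask)
        scores.append(signed_dist / temperature)
    scores = np.stack(scores, axis=-1)
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    probs = np.exp(scores)
    return probs / probs.sum(axis=-1, keepdims=True)
```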

Page 29:

Results

Small but statistically significant improvement

Page 30:

Outline

• Parameter Estimation

• Optimization

• Experiments

• Related and Future Work in SPLENDID

Page 31:

Loss-based Learning

x: Input

a: Annotation

Page 32:

Loss-based Learning

x: Input

a: Annotation

h: Hidden information

Example: a = "jumping", h = "soft-segmentation"

Page 33:

Loss-based Learning

Annotation Mismatch: min Σ_k Δ(correct a_k, predicted a_k)

x: Input

a: Annotation

h: Hidden information

Example: a = "jumping", h = "soft-segmentation"

Page 34:

Loss-based Learning

Annotation Mismatch: min Σ_k Δ(correct a_k, predicted a_k)

Small improvement using small medical dataset

Page 35:

Loss-based Learning

Annotation Mismatch: min Σ_k Δ(correct a_k, predicted a_k)

Large improvement using large vision dataset

Page 36:

Loss-based Learning

Output Mismatch: min Σ_k Δ(correct {a_k, h_k}, predicted {a_k, h_k})

The correct hidden information is modeled using a distribution

Kumar, Packer and Koller, ICML 2012

Inexpensive annotation

No experts required

Richer models can be learnt
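A minimal sketch contrasting the two criteria, assuming the correct hidden information is modeled with a discrete distribution as in the output-mismatch setting; the loss functions and the distribution format are illustrative placeholders, not the exact choices of Kumar, Packer and Koller (2012).

```python
# Minimal sketch: annotation-mismatch loss vs. output-mismatch loss, where the
# latter takes an expectation over a distribution modeling the hidden information.
def annotation_mismatch(a_true, a_pred, delta_a):
    """Loss that only compares annotations."""
    return delta_a(a_true, a_pred)

def output_mismatch(a_true, h_dist_true, a_pred, h_pred, delta_ah):
    """Expected loss over the distribution on the 'correct' hidden information.
    h_dist_true: list of (h, probability) pairs."""
    return sum(p * delta_ah((a_true, h), (a_pred, h_pred)) for h, p in h_dist_true)
```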

Page 37:

Questions?