Page 1:

School of Computer Science

Probabilistic Graphical Models

Max-margin learning of GM

Eric Xing, Lecture 23, Apr 8, 2015

Reading:

[Figure: the handwritten word "brace", the running OCR example of this lecture]

Page 2:

Classical Predictive Models

• Input and output space: x ∈ X ⊆ R^d, y ∈ Y = {+1, −1}
• Predictive function: h(x) = sign(f(x; w)), e.g., a linear model f(x; w) = w^T x
• Examples: logistic regression, support vector machines
• Learning:

  min_w Σ_i ℓ(y_i, f(x_i; w)) + R(w)

  where ℓ represents a convex loss and R(w) is a regularizer preventing overfitting

– Logistic Regression
  • Max-likelihood (or MAP) estimation
  • Corresponds to a log loss, ℓ(y, f) = log(1 + e^{−y f}), with an L2 regularizer

– Support Vector Machines (SVM)
  • Max-margin learning
  • Corresponds to a hinge loss, ℓ(y, f) = max(0, 1 − y f), with an L2 regularizer
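To make the two losses concrete, here is a minimal runnable sketch (mine, not from the slides; the toy data and names are illustrative) that evaluates the regularized empirical risk of a linear model under both the log loss and the hinge loss:

    import numpy as np

    def log_loss(w, x, y):
        # Logistic-regression loss: log(1 + exp(-y * w^T x))
        return np.log1p(np.exp(-y * (w @ x)))

    def hinge_loss(w, x, y):
        # SVM loss: max(0, 1 - y * w^T x)
        return max(0.0, 1.0 - y * (w @ x))

    def objective(w, X, Y, loss, lam=0.1):
        # Regularized empirical risk: sum of per-example losses + L2 penalty
        return sum(loss(w, x, y) for x, y in zip(X, Y)) + lam * (w @ w)

    X = np.array([[1.0, 2.0], [-1.5, 0.5], [2.0, -1.0]])
    Y = np.array([1, -1, 1])
    w = np.array([0.5, -0.25])
    print(objective(w, X, Y, log_loss), objective(w, X, Y, hinge_loss))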

Page 3:

Classical Predictive Models (continued)

• Input and output space: x ∈ X, y ∈ Y
• Learning: min_w Σ_i ℓ(y_i, f(x_i; w)) + R(w), where ℓ represents a convex loss and R(w) is a regularizer preventing overfitting

– Logistic Regression
  • Max-likelihood (or MAP) estimation
  • Corresponds to a log loss with an L2 regularizer
  Advantages:
  1. Full probabilistic semantics
  2. Straightforward Bayesian or direct regularization
  3. Hidden structures or generative hierarchy

– Support Vector Machines (SVM)
  • Max-margin learning
  • Corresponds to a hinge loss with an L2 regularizer
  Advantages:
  1. Dual sparsity: few support vectors
  2. Kernel tricks
  3. Strong empirical results

Page 4:

Structured Prediction Problems

Unstructured prediction: predict a single label for each input.

Structured prediction: predict a joint, structured output, for example:
• Part-of-speech tagging: "Do you want sugar in it?" → <verb pron verb noun prep pron>
• Image segmentation

Page 5:

OCR example

[Figure: the handwritten word "brace" as input x; the output y is a chain of letter labels, each ranging over a–z, with sequential (chain) structure linking neighboring letters.]

Page 6:

Image Segmentation

Jointly segmenting/annotating images

Image-image matching, image-text matching

Problem: given the structure (features), learn the weights; beyond that, learn sparse, interpretable, predictive structures/features.

  p(y | x) = (1 / Z(x)) exp{ Σ_c w_c f_c(x, y_c) }

Page 7:

Dependency Parsing of Sentences

Challenge: structured outputs, globally constrained to form a valid tree.

Page 8:

Structured Prediction Graphical Models

Conditional Random Fields (CRFs) (Lafferty et al., 2001)
– Based on a logistic loss (LR)
– Max-likelihood estimation (point estimate)

  L(D; w) ≜ log Σ_{y'} exp(w^T f(x, y')) − w^T f(x, y)

Max-margin Markov Networks (M3Ns) (Taskar et al., 2003)
– Based on a hinge loss (SVM)
– Max-margin learning (point estimate)

  L(D; w) ≜ max_{y'} [ w^T f(x, y') + ℓ(y', y) ] − w^T f(x, y)

• Markov properties are encoded in the feature functions f(x, y)
• Input and output space: structured objects x and y
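As a concrete illustration of the two structured losses, the following sketch (an assumption-laden toy, not the lecture's code) evaluates both by brute-force enumeration over all label sequences; the feature map f and the Hamming loss here are stand-ins chosen for illustration:

    import itertools
    import numpy as np

    LABELS = ["A", "B"]

    def f(x, y):
        # Toy joint feature map for a chain: per-label sums of the inputs
        # plus counts of each label-to-label transition.
        n = len(LABELS)
        node = np.zeros(n)
        edge = np.zeros((n, n))
        for t, lab in enumerate(y):
            node[LABELS.index(lab)] += x[t]
        for a, b in zip(y, y[1:]):
            edge[LABELS.index(a), LABELS.index(b)] += 1
        return np.concatenate([node, edge.ravel()])

    def hamming(y_other, y):
        return sum(a != b for a, b in zip(y_other, y))

    def crf_loss(w, x, y):
        # log sum_{y'} exp(w^T f(x, y')) - w^T f(x, y)
        scores = [w @ f(x, yp) for yp in itertools.product(LABELS, repeat=len(x))]
        return np.logaddexp.reduce(scores) - w @ f(x, y)

    def m3n_loss(w, x, y):
        # max_{y'} [w^T f(x, y') + l(y', y)] - w^T f(x, y)
        aug = [w @ f(x, yp) + hamming(yp, y)
               for yp in itertools.product(LABELS, repeat=len(x))]
        return max(aug) - w @ f(x, y)

    x, y = [0.3, -1.0, 0.8], ("A", "B", "A")
    w = np.zeros(6)
    print(crf_loss(w, x, y), m3n_loss(w, x, y))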

Page 9:

Structured Prediction Graphical Models

Challenges:
• SPARSE, "interpretable" prediction models
• prior information about structures
• latent structures/variables
• time series and non-stationarity
• scalability to large-scale problems (e.g., 10^4 input/output dimensions)

The same two losses as before, now with an explicit regularizer R(w):

  L(D; w) ≜ log Σ_{y'} exp(w^T f(x, y')) − w^T f(x, y) + R(w)   (CRFs: max-likelihood, point estimate)

  L(D; w) ≜ max_{y'} [ w^T f(x, y') + ℓ(y', y) ] − w^T f(x, y) + R(w)   (M3Ns: max-margin, point estimate)

Page 10:

Comparing to unstructured predictive models

• Input and output space: x ∈ X, y ∈ Y
• Learning: min_w Σ_i ℓ(y_i, f(x_i; w)) + R(w), where ℓ represents a convex loss and R(w) is a regularizer preventing overfitting

– Logistic Regression: max-likelihood (or MAP) estimation; a log loss with an L2 regularizer
– Support Vector Machines (SVM): max-margin learning; a hinge loss with an L2 regularizer

Page 11:

Structured Models

  h(x) = argmax_{y ∈ Y(x)} s(x, y)

where Y(x) is the space of feasible outputs and s(x, y) is a scoring function.

Assumptions:
• linear combination of features: s(x, y) = w^T f(x, y)
• sum of part scores: s(x, y) = Σ_p w^T f_p(x, y_p), where index p represents a part in the structure (see the sketch below)
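A minimal sketch of these assumptions for a linear chain (names, shapes, and the random data are illustrative): the score decomposes into node parts, each scoring one (x_t, y_t), and edge parts, each scoring one transition (y_t, y_{t+1}):

    import numpy as np

    def score(w_node, w_edge, x, y):
        # s(x, y) = sum over parts p of w^T f_p(x, y_p):
        # node parts score each (x_t, y_t); edge parts score (y_t, y_{t+1}).
        s = sum(w_node[y_t] @ x_t for x_t, y_t in zip(x, y))
        s += sum(w_edge[a, b] for a, b in zip(y, y[1:]))
        return s

    rng = np.random.default_rng(0)
    L, D, T = 3, 4, 5                  # labels, feature dim, chain length
    w_node = rng.normal(size=(L, D))   # per-label weights for node features
    w_edge = rng.normal(size=(L, L))   # transition scores
    x = rng.normal(size=(T, D))
    y = [0, 2, 1, 1, 0]
    print(score(w_node, w_edge, x, y))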

Page 12:

Large Margin Estimation

Given a training example (x, y*), we want:

  w^T f(x, y*) > w^T f(x, y)   for all y ≠ y*

Maximize the margin γ:

  w^T f(x, y*) ≥ w^T f(x, y) + γ

Mistake-weighted margin: scale the required margin by ℓ(y*, y), the number of mistakes in y:

  w^T f(x, y*) ≥ w^T f(x, y) + γ ℓ(y*, y)

*Taskar et al., 03

Page 13:

Large Margin Estimation

Recall from SVMs: maximizing the margin is equivalent to minimizing the square of the L2-norm of the weight vector w.

New objective function:

  min_w ½ ‖w‖²   s.t.   w^T f(x, y*) ≥ w^T f(x, y) + ℓ(y*, y)   for all y
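For intuition, this objective can be solved directly when the competing outputs are few enough to enumerate. A toy sketch using the cvxpy modeling library (the feature vectors and loss values below are made up; no slack variables, so it assumes the constraints are satisfiable):

    import cvxpy as cp
    import numpy as np

    # Toy instance: feature vector of the true output and two competitors,
    # each with a mistake count l(y*, y) (all values are illustrative).
    f_true = np.array([1.0, 0.0, 1.0])
    competitors = [(np.array([0.0, 1.0, 1.0]), 2.0),
                   (np.array([1.0, 1.0, 0.0]), 1.0)]

    w = cp.Variable(3)
    constraints = [w @ f_true >= w @ f_y + loss   # margin scaled by # mistakes
                   for f_y, loss in competitors]
    prob = cp.Problem(cp.Minimize(0.5 * cp.sum_squares(w)), constraints)
    prob.solve()
    print(w.value)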

Page 14:

OCR Example

We want:  argmax_word w^T f(x, word) = "brace"   (x is the image of the handwritten word)

Equivalently:
  w^T f(x, "brace") > w^T f(x, "aaaaa")
  w^T f(x, "brace") > w^T f(x, "aaaab")
  …
  w^T f(x, "brace") > w^T f(x, "zzzzz")

… that's a lot of constraints!

Page 15:

Min-max Formulation

Brute-force enumeration of constraints: the number of constraints is exponential in the size of the structure.

Alternatives:
• add only the most violated constraint at each step (constraint generation); handles more general loss functions
• min-max formulation: only a polynomial number of constraints is needed
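Finding the most violated constraint is exactly loss-augmented decoding: maximize score plus loss over y. For a linear chain with Hamming loss this is a Viterbi pass over loss-augmented node scores. A sketch (array shapes and names are mine, not from the slides):

    import numpy as np

    def loss_augmented_viterbi(node_scores, edge_scores, y_true):
        # argmax_y [ score(y) + hamming(y, y_true) ] by dynamic programming.
        # node_scores: (T, L) array; edge_scores: (L, L) array.
        T, L = node_scores.shape
        # Add 1 to every (position, label) pair that disagrees with y_true.
        aug = node_scores + (np.arange(L)[None, :] != np.array(y_true)[:, None])
        best = aug[0].copy()
        back = np.zeros((T, L), dtype=int)
        for t in range(1, T):
            cand = best[:, None] + edge_scores + aug[t][None, :]
            back[t] = cand.argmax(axis=0)
            best = cand.max(axis=0)
        y = [int(best.argmax())]
        for t in range(T - 1, 0, -1):       # backtrack
            y.append(int(back[t][y[-1]]))
        return y[::-1], float(best.max())

    rng = np.random.default_rng(0)
    y_hat, val = loss_augmented_viterbi(rng.normal(size=(4, 3)),
                                        rng.normal(size=(3, 3)), [0, 1, 2, 0])
    print(y_hat, val)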

Page 16:

Min-max Formulation

Key step: convert the maximization in the constraint from a discrete optimization into a continuous one. This enables us to plug it into a QP.

How to do this conversion? A linear-chain example follows on the next slides.

  discrete optimization over y  →  continuous optimization over indicator variables z

Page 17:

The y → z Map for Linear-chain Structures

[Figure: a label sequence over the alphabet {A, B, …} is encoded by 0/1 indicator variables, one indicator per node label and one per edge transition (e.g., A→B), shown as indicator vectors and matrices.]

Page 18:

The y → z Map for Linear-chain Structures (continued)

[Figure: worked example of the encoding; the indicators must satisfy normalization constraints (each position selects exactly one label) and agreement constraints (each edge indicator is consistent with the node indicators at its endpoints).]

Rewriting the maximization in the constraint in terms of the indicator variables z turns the discrete maximization over y into a continuous optimization over z.
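A small sketch of the encoding (notation mine): node indicators z_t(a) and edge indicators z_{t,t+1}(a, b), with the normalization and agreement constraints checked explicitly:

    import numpy as np

    def encode(y, L):
        # Map a label sequence y to node and edge indicator variables.
        T = len(y)
        z_node = np.zeros((T, L))
        z_node[np.arange(T), y] = 1
        z_edge = np.zeros((T - 1, L, L))
        for t in range(T - 1):
            z_edge[t, y[t], y[t + 1]] = 1
        return z_node, z_edge

    z_node, z_edge = encode([0, 1, 1, 0], L=2)
    # Normalization: each node picks exactly one label.
    assert np.allclose(z_node.sum(axis=1), 1)
    # Agreement: edge indicators are consistent with node indicators.
    assert np.allclose(z_edge.sum(axis=2), z_node[:-1])  # sum over b gives z_t(a)
    assert np.allclose(z_edge.sum(axis=1), z_node[1:])   # sum over a gives z_{t+1}(b)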

Page 19:

Min-max Formulation

Original problem: maximize over discrete structures y.

Transformed problem: maximize over continuous indicator variables z subject to the linear normalization and agreement constraints, i.e., a linear program.

• The LP has integral solutions z for chains and trees.
• Solutions can be fractional for untriangulated networks.
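To see the integrality claim on a tiny case, one can hand the LP to a generic solver. A sketch using scipy.optimize.linprog for a two-node chain with two labels (the scores are arbitrary, and the maximization is written as minimizing the negated objective):

    import numpy as np
    from scipy.optimize import linprog

    # Variables: z = [z1(0), z1(1), z2(0), z2(1),
    #                 z12(00), z12(01), z12(10), z12(11)]
    theta = np.array([0.2, 1.0, 0.5, 0.1, 0.0, 2.0, 0.0, 0.0])

    A_eq = np.array([
        [1, 1, 0, 0, 0, 0, 0, 0],    # normalization of z1
        [0, 0, 1, 1, 0, 0, 0, 0],    # normalization of z2
        [1, 0, 0, 0, -1, -1, 0, 0],  # agreement: z1(0) = z12(00) + z12(01)
        [0, 1, 0, 0, 0, 0, -1, -1],  # agreement: z1(1) = z12(10) + z12(11)
        [0, 0, 1, 0, -1, 0, -1, 0],  # agreement: z2(0) = z12(00) + z12(10)
        [0, 0, 0, 1, 0, -1, 0, -1],  # agreement: z2(1) = z12(01) + z12(11)
    ])
    b_eq = np.array([1, 1, 0, 0, 0, 0])

    res = linprog(-theta, A_eq=A_eq, b_eq=b_eq, bounds=[(0, 1)] * 8)
    print(np.round(res.x, 3))  # comes out 0/1: the LP is tight for chains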

Page 20:

Min-max Formulation

Using strong Lagrangian duality (beyond the scope of this lecture), replace the inner maximization over z by its dual minimization.

Use the result above to minimize jointly over w and the dual variables of the inner LP: the whole problem becomes a single compact QP.

Page 21:

Min-max Formulation

The formulation produces a compact QP for:
• low-treewidth Markov networks
• associative Markov networks
• context-free grammars
• bipartite matchings
• any problem with compact LP inference

Page 22:

Results: Handwriting Recognition

Data: handwritten words of length ~8 characters; each letter is a 16×8 pixel image; 10-fold train/test split with 5,000/50,000 letters (600/6,000 words).

Models: multiclass SVMs (Crammer & Singer, 01), CRFs, M3 nets.

[Bar chart: average per-character test error (%) for CRFs, multiclass SVMs, and M^3 nets, using raw pixels, a quadratic kernel, and a cubic kernel; lower is better.]

• 45% error reduction over linear CRFs
• 33% error reduction over multiclass SVMs

Page 23:

Results: Hypertext Classification

[Bar chart: test error (%) for SVMs, RMNs, and M^3Ns on WebKB; lower is better.]

WebKB dataset:
• four CS department websites: 1,300 pages / 3,500 links
• classify each page as faculty, course, student, project, or other
• train on three universities, test on the fourth

• 53% error reduction over SVMs
• 38% error reduction over RMNs

*Taskar et al., 02

Page 24:

MLE versus Max-margin Learning

• Likelihood-based estimation
  – Probabilistic (joint/conditional likelihood model)
  – Easy to perform Bayesian learning and to incorporate prior knowledge, latent structures, missing data
  – Bayesian or direct regularization
  – Hidden structures or generative hierarchy

• Max-margin learning
  – Non-probabilistic (concentrates on the input-output mapping)
  – Not obvious how to perform Bayesian learning or to incorporate priors and missing data
  – Support vector property; sound theoretical guarantees with limited samples
  – Kernel tricks

• Maximum Entropy Discrimination (MED) (Jaakkola et al., 1999)
  – Model averaging: learn a distribution p(w) over parameters and predict with its average
  – The optimization problem (binary classification):

      min_{p(w)} KL(p(w) ‖ p₀(w))   s.t.   ∫ p(w) [ y_i F(x_i; w) − ξ_i ] dw ≥ 0,  ∀i

    (with the slacks ξ handled via an additional prior or penalty)

Page 25:

Maximum Entropy Discrimination Markov Networks

Structured MaxEnt Discrimination (SMED):

  min_{p(w)} KL(p(w) ‖ p₀(w)) + U(ξ)   s.t.   p(w) ∈ F₁

where U(ξ) is a slack penalty. Feasible subspace of weight distributions:

  F₁ = { p(w) : ∫ p(w) [ ΔF_i(y; w) − Δℓ_i(y) ] dw ≥ −ξ_i,  ∀i, ∀y }

with ΔF_i(y; w) = F(x_i, y_i; w) − F(x_i, y; w) and Δℓ_i(y) = ℓ(y, y_i).

Prediction averages over the distribution of M3Ns:

  h(x) = argmax_y ∫ p(w) F(x, y; w) dw

Page 26:

Solution to MaxEnDNet

Theorem:
– Posterior distribution:

    p(w) = (1 / Z(α)) p₀(w) exp{ Σ_{i,y} α_i(y) [ ΔF_i(y; w) − Δℓ_i(y) ] }

– Dual optimization problem: maximize a concave function of the dual variables α_i(y) ≥ 0; its exact form depends on the prior p₀(w) and the slack penalty U.

Page 27:

Gaussian MaxEnDNet (reduction to M3N)

Theorem:
– Assume a standard normal prior, p₀(w) = N(0, I). Then:
  • Posterior distribution: p(w) = N(μ, I), with μ = Σ_{i,y} α_i(y) Δf_i(y)
  • Dual optimization: exactly the dual of the M3N QP
  • Predictive rule: h(x) = argmax_y μ^T f(x, y), i.e., the M3N predictor

Thus, MaxEnDNet subsumes M3Ns and admits all the merits of max-margin learning.

Furthermore, MaxEnDNet has at least three advantages …

Page 28:

Three Advantages

1. An averaging model: PAC-Bayesian prediction error guarantee (Theorem 3).

2. Entropy regularization: introducing useful biases.
   • Standard normal prior ⇒ reduction to the standard M3N (we've seen it).
   • Laplace prior ⇒ posterior shrinkage effects (a sparse M3N).

3. Integrating generative and discriminative principles (next class).
   • Incorporate latent variables and structures (PoMEN).
   • Semi-supervised learning (with partially labeled data).

Page 29:

Laplace MaxEnDNet (primal-sparse M3N)

Laplace prior:

  p₀(w) ∝ exp(−λ Σ_k |w_k|)

Corollary 4: under a Laplace MaxEnDNet, the posterior mean of the parameter vector w exhibits a shrinkage effect: components with weak support in the data are shrunk toward zero.

The Gaussian MaxEnDNet and the regular M3N have no such shrinkage; there, we have μ = Σ_{i,y} α_i(y) Δf_i(y).

(Zhu and Xing, ICML 2009)
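The shrinkage effect is easy to see numerically in one dimension. The sketch below (an illustrative analogue of my own, not Corollary 4's exact formula) compares posterior means under a Gaussian and a Laplace prior for a single noisy observation, computed on a grid:

    import numpy as np

    def posterior_mean(log_prior, obs, sigma=1.0):
        # 1-D illustration: observe obs = w + Gaussian noise; compute
        # E[w | obs] on a uniform grid (so plain sums suffice).
        w = np.linspace(-10, 10, 20001)
        log_post = -0.5 * ((obs - w) / sigma) ** 2 + log_prior(w)
        p = np.exp(log_post - log_post.max())
        return (w * p).sum() / p.sum()

    gauss = lambda w: -0.5 * w ** 2        # standard normal prior
    laplace = lambda w: -2.0 * np.abs(w)   # Laplace prior (rate 2)

    for obs in [0.25, 1.0, 4.0]:
        print(obs, round(posterior_mean(gauss, obs), 3),
              round(posterior_mean(laplace, obs), 3))
    # The Laplace prior shrinks small estimates strongly toward zero while
    # leaving large ones nearly untouched; the Gaussian prior shrinks
    # everything by the same constant factor.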

Page 30:

LapMEDN vs. L1 and L2 Regularization

[Figure: contours of the L1 and L2 norms alongside the KL norm, which interpolates between the two.]

Corollary 5: LapMEDN corresponds to solving a primal optimization problem in which the regularizer is a KL norm on the weight distribution, rather than a plain L1 or L2 norm.

Page 31:

Recall: Primal and Dual Problems of M3Ns

Primal problem:

  min_{w,ξ} ½ ‖w‖² + C Σ_i ξ_i   s.t.   w^T Δf_i(y) ≥ Δℓ_i(y) − ξ_i,  ∀i, ∀y

Algorithms: cutting plane, sub-gradient, …

Dual problem:

  max_α Σ_{i,y} α_i(y) Δℓ_i(y) − ½ ‖ Σ_{i,y} α_i(y) Δf_i(y) ‖²   s.t.   Σ_y α_i(y) = C,  α_i(y) ≥ 0

Algorithms: SMO, exponentiated gradient, …

• Most dual variables are zero at the optimum. So, M3N is dual sparse!
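A minimal sketch of the primal sub-gradient algorithm (my simplification: brute-force loss-augmented decoding, a fixed step size, and per-example stochastic steps; f is any joint feature map, e.g., the toy one sketched earlier):

    import itertools
    import numpy as np

    def subgradient_m3n(data, f, labels, dim, C=1.0, eta=0.05, epochs=100):
        # Minimize (1/2)||w||^2 + C * sum_i { max_y [w.f(x,y) + l(y,y_i)]
        #                                     - w.f(x,y_i) }
        w = np.zeros(dim)
        for _ in range(epochs):
            for x, y_true in data:
                # Most violated output: loss-augmented decoding by enumeration
                # (fine only for tiny output spaces).
                y_star = max(itertools.product(labels, repeat=len(x)),
                             key=lambda yp: w @ f(x, yp)
                             + sum(a != b for a, b in zip(yp, y_true)))
                # Stochastic subgradient of the regularized hinge term;
                # it vanishes when y_star = y_true and the hinge is inactive.
                w -= eta * (w + C * (f(x, y_star) - f(x, y_true)))
        return w

    # Usage with the toy f and LABELS from the earlier sketch:
    # w = subgradient_m3n([(x, y)], f, LABELS, dim=6)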

Page 32:

Variational Learning of LapMEDN

The exact primal or dual function is hard to optimize.

Using the hierarchical representation of the Laplace prior (a Gaussian scale mixture), we obtain a tractable bound, and we optimize that upper bound instead.

Why is it easier? Alternating minimization leads to nicer optimization problems:
• Keep the scale variables fixed: the effective prior is normal, so the subproblem is an M3N-style problem.
• Keep the weight distribution fixed: the scale variables and their expectations have closed-form solutions.
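The same alternating idea can be seen on a simpler problem. Using the identity |w| = min_{tau>0} (w²/tau + tau)/2, a Laplace (L1) penalty becomes a Gaussian penalty once the scales tau are fixed, giving the classic reweighting scheme below (an illustrative regression analogue of my own, not the LapMEDN updates themselves):

    import numpy as np

    def reweighted_l1(X, y, lam=1.0, iters=30, eps=1e-8):
        # |w_k| = min_{tau>0} (w_k^2/tau + tau)/2, so alternate:
        #   fix tau -> a Gaussian (ridge-like) problem, solved in closed form;
        #   fix w   -> tau_k = |w_k| in closed form.
        n, d = X.shape
        w, tau = np.zeros(d), np.ones(d)
        for _ in range(iters):
            A = X.T @ X + lam * np.diag(1.0 / (tau + eps))  # effective Gaussian prior
            w = np.linalg.solve(A, X.T @ y)
            tau = np.abs(w)                                 # closed-form update
        return w

    rng = np.random.default_rng(1)
    X = rng.normal(size=(40, 10))
    w_true = np.zeros(10); w_true[[2, 7]] = [3.0, -2.0]
    y = X @ w_true + 0.1 * rng.normal(size=40)
    print(np.round(reweighted_l1(X, y, lam=2.0), 2))  # near-sparse estimate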

Page 33:

Algorithmic Issues of Solving M3Ns

Primal problem: cutting plane, sub-gradient, …
Dual problem: SMO, exponentiated gradient, …

Nonlinear features with kernels:
• generative entropic kernels [Martins et al., JMLR 2009]
• nonparametric RKHS embedding of rich distributions [ongoing]

Approximate decoders for global features:
• LP-relaxed inference (polyhedral outer approximation) [Martins et al., ICML 09, ACL 09]
• balancing accuracy and runtime: loss-augmented inference

Page 34:

Experimental results on OCR datasets

[Figure: the handwritten word "brace" as input x, with structured output y, a chain of letter labels over a–z (the same OCR setup as page 5).]

Page 35:

Experimental Results on OCR Datasets

We randomly construct OCR100, OCR150, OCR200, and OCR250 for 10-fold cross-validation.

Page 36:

Feature Selection

Page 37:

Sensitivity to Regularization Constants

• L1-CRFs are much more sensitive to the regularization constant than the other models, which are more stable.
• LapM3N is the most stable of all.

Regularization constants tried:
• L1-CRF and L2-CRF: 0.001, 0.01, 0.1, 1, 4, 9, 16
• M3N and LapM3N: 1, 4, 9, 16, 25, 36, 49, 64, 81

Page 38:

Summary: Margin-based Learning Paradigms

[Diagram: margin-based learning paradigms organized along two axes, structured prediction and Bayes learning.]

Page 39:

Open Problems

Unsupervised CRF and max-margin learning:
• Only X, but not Y (or sometimes only part of Y), is available.
• We want to recognize a pattern that is maximally different from the rest!

  p(y | x) = (1 / Z(x)) exp{ Σ_c w_c f_c(x, y_c) }

  margin_n = w^T [ F(x_n, y_n) − F(x_n, y′_n) ]

• What do margin or conditional likelihood mean in these cases? Given only {x_n}, how can we define the cost function?
• Algorithmic challenges.

Page 40:

Remember: Elements of Learning

Here are some important elements to consider before you start:

• Task: embedding? classification? clustering? topic extraction? …
• Data and other info: input and output types (e.g., continuous, binary, counts, …); supervised, unsupervised, or a blend of everything? prior knowledge? bias?
• Models and paradigms: BN? MRF? regression? SVM? Bayesian/frequentist? parametric/nonparametric?
• Objective/loss function: MLE? MCLE? max margin? log loss, hinge loss, square loss? …
• Tractability and exactness trade-offs: exact inference? MCMC? variational? gradient? greedy search? online? batch? distributed?
• Evaluation: visualization? human interpretability? perplexity? predictive accuracy?

It is better to consider one element at a time!