Transcript of "Feature-aware Label Space Dimension Reduction for Multi-label Classification" (htlin/talk/doc/cplst.istw.handout.pdf)

Page 1:

Feature-aware Label Space Dimension Reduction for Multi-label Classification

Hsuan-Tien Lin

National Taiwan University

11/6/2012

first part: with Farbound Tai, in Neural Computation 2012
second part: with Yao-Nan Chen, to appear in NIPS 2012

Page 2:

A Short Introduction

Hsuan-Tien Lin

Associate Professor (2012–present; Assistant Professor 2008–2012), Dept. of CSIE, National Taiwan University

Leader of the Computational Learning Laboratory

Co-author of the new introductory ML textbook “Learning from Data: A Short Course”

goal: make machine learning more realistic

multi-class cost-sensitive classification: in ICML ’10 (Haifa), BIBM ’11, KDD ’12, etc.
online/active learning: in ACML ’11, ICML ’12, ACML ’12
video search: in CVPR ’11
multi-label classification: in ACML ’11, NIPS ’12, etc.
large-scale data mining (w/ Profs. S.-D. Lin & C.-J. Lin & students): third place of KDDCup ’09, champions of ’10, ’11 (×2), ’12

Page 3:

Which Fruits?

?: {orange, strawberry, kiwi}

apple orange strawberry kiwi

multi-label classification: classify input to multiple (or no) categories

Page 4:

Binary Relevance: Multi-label Classification via Yes/No

Binary Classification: {yes, no}

Multi-label w/ L classes: L yes/no questions

apple (N), orange (Y), strawberry (Y), kiwi (Y)

Binary Relevance approach: transformation to multiple isolated binary classification

disadvantages:
isolation—hidden relations (if any) between labels not exploited
unbalanced—few yes, many no
scalability—training time O(L)

Binary Relevance: simple (& good) benchmark with known disadvantages
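To make the transformation concrete, a minimal Python sketch of Binary Relevance, assuming scikit-learn's LogisticRegression as the per-label binary learner (any base binary classifier would do; the names binary_relevance_train/predict are just for illustration). The L isolated fits expose both the isolation and the O(L) training time:

import numpy as np
from sklearn.linear_model import LogisticRegression

def binary_relevance_train(X, Y):
    # one independent binary classifier per label (column of the N-by-L matrix Y);
    # assumes every column contains both a "yes" and a "no" example
    return [LogisticRegression().fit(X, Y[:, l]) for l in range(Y.shape[1])]

def binary_relevance_predict(classifiers, X):
    # answer the L yes/no questions in isolation; no label relations exploited
    return np.column_stack([clf.predict(X) for clf in classifiers])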

Page 5:

Multi-label Classification Setup

Input

N examples (input x_n, label-set Y_n) ∈ X × 2^{1,2,··· ,L}

fruits: X = encoding(pictures), Y_n ⊆ {1, 2, · · · , 4}

Output

a multi-label classifier g(x) that closely predicts the label-set Y associated with some unseen inputs x (by exploiting hidden relations between labels)

Hamming loss: normalized symmetric difference (1/L) |g(x) △ Y|
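With label-sets encoded as vectors in {0,1}^L (see the next page), the Hamming loss is just the fraction of mismatched bits. A quick numeric check in Python for the fruit example, with the hypothetical label order (apple, orange, strawberry, kiwi):

import numpy as np

def hamming_loss(y_true, y_pred):
    # (1/L) |g(x) symmetric-difference Y| = fraction of mismatched labels
    return np.mean(y_true != y_pred)

y_true = np.array([0, 1, 1, 1])      # {orange, strawberry, kiwi}
y_pred = np.array([1, 1, 1, 0])      # prediction adds apple, misses kiwi
print(hamming_loss(y_true, y_pred))  # 2 mismatches out of L=4 -> 0.5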

multi-label classification: hot and important

Page 6:

Geometric View of Multi-label Classification

label set       binary code
Y_1 = {o}       y_1 = [0, 1, 0]
Y_2 = {a, o}    y_2 = [1, 1, 0]
Y_3 = {a, s}    y_3 = [1, 0, 1]
Y_4 = {o}       y_4 = [0, 1, 0]
Y_5 = {}        y_5 = [0, 0, 0]

[figure: y_1, ..., y_5 plotted as vertices of the cube {0,1}^3]

subset Y of 2^{1,2,··· ,L} ⇔ vertex of hypercube {0,1}^L

Page 7:

Geometric Interpretation of Binary Relevance

[figure: the hypercube vertices y_1, ..., y_5 and their projections onto the three natural axes, giving the three per-label yes/no problems]

Binary Relevance: project to L natural axes & classify—can we project to other M (≪ L) directions & learn?

Page 8:

A Label Space Dimension Reduction Approach

General Compressive Sensing

sparse (few 1, many 0) binary vectors y can be robustly compressed by projecting to M ≪ L random directions {p_1, p_2, · · · , p_M}

Compressive Sensing for Multi-label Classification (Hsu et al., 2009)

1 compress: transform {(x_n, y_n)} to {(x_n, c_n)} by c_n = P y_n with some M by L random matrix P = [p_1, p_2, · · · , p_M]^T

2 learn: get regression function r(x) from x_n to c_n

3 decode: g(x) = sparse binary vector that P-projects closest to r(x)

Compressive Sensing:
efficient in training: random projection w/ M ≪ L
inefficient in testing: time-consuming decoding
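A rough Python sketch of the three steps, under stated assumptions: Ridge regression stands in for r, and scikit-learn's OrthogonalMatchingPursuit stands in for the sparse-recovery decoder (Hsu et al. consider several reconstruction algorithms). The per-example recovery loop is exactly the time-consuming part:

import numpy as np
from sklearn.linear_model import Ridge, OrthogonalMatchingPursuit

def cs_train(X, Y, M, seed=0):
    L = Y.shape[1]
    P = np.random.RandomState(seed).randn(M, L) / np.sqrt(M)  # random directions p_1..p_M
    r = Ridge().fit(X, Y @ P.T)   # compress c_n = P y_n, then learn x_n -> c_n
    return P, r

def cs_decode(P, r, X, sparsity):
    # slow step: per example, sparse-recover a y with P y close to r(x)
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=sparsity, fit_intercept=False)
    C_hat = r.predict(X)
    Y_hat = np.array([omp.fit(P, c).coef_ for c in C_hat])
    return (Y_hat > 0.5).astype(int)  # binarize the recovered real-valued coefficients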

Page 9:

Geometric Interpretation of Compressive Sensing

[figure: y_1, ..., y_5 and a random flat (linear subspace) through the hypercube]

Compressive Sensing:

project to random flat (linear subspace)

learn “on” the flat; decode to closest sparse vertex

other (better) flat? other (faster) decoding?

Page 10:

Our Contributions

Two Novel Approaches for Label Space Dimension Reduction
algorithmic: scheme for fast decoding
theoretical: justification for best projection, one feature-unaware and the other feature-aware
practical: significantly better performance than compressive sensing (& binary relevance)

will now introduce the key ideas behind the approaches

Page 11:

Faster Decoding: Round-based

Compressive Sensing Revisited

decode: g(x) = sparse binary vector that P-projects closest to r(x)

For any given “prediction on subspace” r(x):
find the sparse binary vector that P-projects closest to r(x): slow—optimization of an ℓ1-regularized objective
find any binary vector that P-projects closest to r(x): fast

g(x) = round(P^T r(x)) for orthogonal P

round-based decoding: simple & faster alternative
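In Python, round-based decoding is a one-liner; for {0,1} targets, thresholding the back-projection at 0.5 equals rounding (P assumed to have orthonormal rows):

import numpy as np

def round_decode(P, c_hat):
    # g(x) = round(P^T r(x)): back-project the predicted codeword c_hat = r(x)
    # onto the label space, then snap each coordinate to the nearest of {0, 1}
    return (P.T @ c_hat > 0.5).astype(int)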

Page 12:

Better Projection: Principal Directions

Compressive Sensing Revisited

compress: transform {(x_n, y_n)} to {(x_n, c_n)} by c_n = P y_n with some M by L random matrix P

random projection: arbitrary directions
best projection: principal directions

principal directions: best approximation to desired output y_n during dimension reduction (why?)

Page 13:

Novel Theoretical Guarantee

Linear Transform + Learn + Round-based Decoding

Theorem (Tai and Lin, 2012)

If g(x) = round(P^T r(x)) (& p_m orthogonal to each other), then with codeword c = Py:

(1/L) |g(x) △ Y|  [Hamming loss]  ≤  const · ( ‖r(x) − c‖²  [learn]  +  ‖y − P^T c‖²  [compress] )

‖r(x) − c‖²: prediction error from input to codeword
‖y − P^T c‖²: encoding error from desired output to codeword

principal directions: best approximation to desired output y_n during dimension reduction (indeed)

Page 14:

Proposed Approach: Principal Label Space Transform

From Compressive Sensing to PLST

1 compress: transform {(x_n, y_n)} to {(x_n, c_n)} by c_n = P y_n with the M by L principal matrix P

2 learn: get regression function r(x) from x_n to c_n

3 decode: g(x) = round(P^T r(x))

principal directions: via Principal Component Analysis on {y_n}, n = 1, . . . , N
—BTW, improvements when shifting y_n by its estimated mean
physical meaning behind p_m: key (linear) label correlations

PLST: improving CS by projecting to key correlations
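A compact Python sketch of PLST under the same conventions as before (Ridge as a stand-in regressor); the SVD of the mean-shifted label matrix gives a principal matrix P with orthonormal rows, as the theorem requires:

import numpy as np
from sklearn.linear_model import Ridge

def plst_train(X, Y, M):
    y_mean = Y.mean(axis=0)                    # shift y_n by its estimated mean
    _, _, Vt = np.linalg.svd(Y - y_mean, full_matrices=False)
    P = Vt[:M]                                 # principal M-by-L matrix (PCA on {y_n})
    r = Ridge().fit(X, (Y - y_mean) @ P.T)     # learn the M-dimensional codewords
    return P, y_mean, r

def plst_predict(P, y_mean, r, X):
    # round-based decoding: g(x) = round(P^T r(x) + mean)
    return ((r.predict(X) @ P + y_mean) > 0.5).astype(int)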

Page 15:

Hamming Loss Comparison: Full-BR, PLST & CS

[figure: Hamming loss vs. codeword dimension (0–100) on mediamill, left panel with Linear Regression and right panel with Decision Tree as the regressor; curves for Full-BR (no reduction), CS, and PLST]

PLST better than Full-BR: fewer dimensions, similar (or better) performance

PLST better than CS: faster, better performance

similar findings across data sets and regression algorithms (can we do even better?)

Page 16:

Theoretical Guarantee of PLST Revisited

Linear Transform + Learn + Round-based Decoding

Theorem (Tai and Lin, 2012)

If g(x) = round(P^T r(x)), then with codeword c = Py:

(1/L) |g(x) △ Y|  [Hamming loss]  ≤  const · ( ‖r(x) − c‖²  [learn]  +  ‖y − P^T c‖²  [compress] )

‖y − P^T c‖²: encoding error, minimized during encoding
‖r(x) − c‖²: prediction error, minimized during learning
but good encoding may not be easy to learn; vice versa

PLST: minimize two errors separately (sub-optimal)
(can we do even better by minimizing jointly?)

Page 17:

The In-Sample Optimization Problem

min over r, P:   ‖r(X) − PY‖²  [learn]  +  ‖Y − P^T PY‖²  [compress]

start from a well-known tool, linear regression, as r

r(X) = XW

for fixed P: a closed-form solution for learn is

W∗ = X†PY

substitute W∗ into the objective function, then ...

optimal P:
for learn: top eigenvectors of Y^T (I − XX†) Y
for compress: top eigenvectors of Y^T Y
for both: top eigenvectors of Y^T XX† Y
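To fill in the omitted step, a sketch of the substitution in the rows-as-examples convention (Y is N by L, stacked codewords YP^T, hat matrix H = XX†, which is symmetric and idempotent), assuming P has orthonormal rows (PP^T = I):

\begin{aligned}
\|XW^{*} - YP^{T}\|^{2} &= \operatorname{tr}\bigl(P\,Y^{T}(I-H)\,Y\,P^{T}\bigr) && \text{(learn, using } (I-H)^{T}(I-H)=I-H\text{)}\\
\|Y - YP^{T}P\|^{2} &= \operatorname{tr}(Y^{T}Y) - \operatorname{tr}\bigl(P\,Y^{T}Y\,P^{T}\bigr) && \text{(compress)}\\
\text{sum} &= \operatorname{tr}(Y^{T}Y) - \operatorname{tr}\bigl(P\,Y^{T}XX^{\dagger}Y\,P^{T}\bigr)
\end{aligned}

Minimizing the sum is therefore maximizing tr(P Y^T XX† Y P^T), achieved by taking the rows of P to be the top eigenvectors of Y^T XX† Y, i.e., the "for both" row above.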

Page 18:

Proposed Approach: Conditional Principal Label Space Transform

From PLST to CPLST

1 compress: transform {(x_n, y_n)} to {(x_n, c_n)} by c_n = P y_n with the M by L conditional principal matrix P

2 learn: get regression function r(x) from x_n to c_n, ideally using linear regression

3 decode: g(x) = round(P^T r(x))

conditional principal directions: top eigenvectors of Y^T XX† Y
physical meaning behind p_m: key (linear) label correlations that are “easy to learn” subject to the features (feature-aware)

CPLST: feature-aware label space dimension reduction—can also pair with kernel regression (non-linear)
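A Python sketch of CPLST mirroring the PLST sketch above (mean-shifted labels, plain linear regression as the slide suggests); only the source of P changes:

import numpy as np
from sklearn.linear_model import LinearRegression

def cplst_train(X, Y, M):
    y_mean = Y.mean(axis=0)
    Yc = Y - y_mean
    HY = X @ (np.linalg.pinv(X) @ Yc)   # XX†Y without forming the N-by-N hat matrix
    A = Yc.T @ HY                       # Y^T XX† Y: L-by-L, symmetric PSD
    _, evecs = np.linalg.eigh(A)        # eigenvalues in ascending order
    P = evecs[:, -M:].T                 # conditional principal M-by-L matrix
    r = LinearRegression().fit(X, Yc @ P.T)
    return P, y_mean, r

def cplst_predict(P, y_mean, r, X):
    # same round-based decoding as PLST
    return ((r.predict(X) @ P + y_mean) > 0.5).astype(int)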

Page 19:

Hamming Loss Comparison: PLST & CPLST

[figure: Hamming loss vs. # of dimensions (0–15) on yeast with Linear Regression; curves for pbr, cpa, plst, and cplst]

CPLST better than PLST: better performance across all dimensions

similar findings across data sets and regression algorithms (even decision trees)

Page 20:

Conclusion

PLST

transformation to multi-output regression

project to principal directions and capture key correlations

efficient learning (after label space dimension reduction)

efficient decoding (round)

sound theoretical guarantee

good practical performance (better than CS & BR)

CPLST

project to conditional (feature-aware) principal directions and capture key learnable correlations

can be kernelized for exploiting feature power

sound theoretical guarantee (via PLST)

even better practical performance (than PLST)

Thank you! Questions?
