(Kernels +) Support Vector Machines
Machine Learning, Torsten Möller
© Mori/Möller
vda.univie.ac.at/Teaching/ML/14s/LectureNotes/05_SVM.pdf

Transcript of "(Kernels +) Support Vector Machines" (49 slides)

Page 1:

(Kernels +) Support Vector Machines
Machine Learning
Torsten Möller

Page 2:

Reading

• Chapter 5 of “Machine Learning—An Algorithmic Perspective” by Marsland
• Chapters 6+7 of “Pattern Recognition and Machine Learning” by Bishop
• Chapter 12 of “The Elements of Statistical Learning” by Hastie, Tibshirani, Friedman

Page 3:

Today

• Motivation
• Kernels
• Support Vectors
  – the Idea!
  – Lagrange Multipliers
  – Min-max vs. Max-min
  – solving the constrained problem
  – non-separable data

Page 4:

Generalized linear model

• This is called a generalized linear model:

  y(x) = f(w^T x + w_0)

• f(·) is a fixed non-linear function, e.g.

  f(u) = 1 if u ≥ 0, 0 otherwise

• Decision boundary between classes will be a linear function of x
• Can also apply a non-linearity to x
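As a concrete illustration, here is a minimal NumPy sketch of this decision rule (the weights and data are made-up values, not from the slides):

```python
import numpy as np

def step(u):
    # f(u) = 1 if u >= 0, 0 otherwise
    return (u >= 0).astype(int)

def glm_predict(X, w, w0):
    # y(x) = f(w^T x + w0), applied to each row of X (shape: n_samples x n_features)
    return step(X @ w + w0)

# toy example: the linear decision boundary x1 + x2 - 1 = 0
X = np.array([[0.0, 0.0], [1.0, 1.0], [0.2, 0.5]])
print(glm_predict(X, w=np.array([1.0, 1.0]), w0=-1.0))  # -> [0 1 0]
```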

Page 5:

Perceptron learning illustration

[Figure: four panels over the region [−1, 1] × [−1, 1], showing the perceptron's decision boundary being updated over successive steps]

Page 6:

Limitations of Perceptrons

• Perceptrons can only solve linearly separable problems in feature space (same as the other models in this chapter)
• The canonical example of a non-separable problem is XOR (real datasets can look like this too)

Expressiveness of perceptrons

Consider a perceptron with g = step function (Rosenblatt, 1957, 1960)
Can represent AND, OR, NOT, majority, etc.
Represents a linear separator in input space:

  Σ_j W_j x_j > 0, or W · x > 0

[Figure, panels (a)-(c): I_1 AND I_2 and I_1 OR I_2 admit a linear separator in the (I_1, I_2) plane; I_1 XOR I_2 does not]

Page 7:

Non-linear decision boundaries

• it's not linear in x anymore
• separation may be easier in higher dim

  y(x) = f(w^T φ(x) + b)

Page 8:

Today

• Motivation
• Kernels
• Support Vectors
  – the Idea!
  – Lagrange Multipliers
  – Min-max vs. Max-min
  – solving the constrained problem
  – non-separable data

Page 9:

Non-linear mappings

• Last week, for logistic regression (classification), we looked at models with w^T φ(x)
• The feature space φ(x) could be high-dimensional
• This is good because if data aren't separable in the original input space (x), they may be in feature space φ(x)

Page 10:

Non-linear mappings

• We'd like to avoid computing high-dimensional φ(x)
• We'd like to work with x which doesn't have a natural vector-space representation
  – e.g. graphs, sets, strings

Page 11:

Kernel trick

• Before, we would explicitly compute φ(x_i) for each datapoint
  – Run algorithm in feature space
• For some feature spaces, can compute dot product φ(x_i)^T φ(x_j) efficiently
• Efficient method is computation of a kernel function k(x_i, x_j) = φ(x_i)^T φ(x_j)
• The kernel trick is to rewrite an algorithm to only have x enter in the form of dot products
• The menu:
  – Kernel trick examples
  – Kernel functions

Page 12:

A kernel trick

• Let's look at the nearest-neighbour classification algorithm
• For input point x_i, find the point x_j with smallest distance:

  ||x_i − x_j||^2 = (x_i − x_j)^T (x_i − x_j)
                  = x_i^T x_i − 2 x_i^T x_j + x_j^T x_j

• If we used a non-linear feature space φ(·):

  ||φ(x_i) − φ(x_j)||^2 = φ(x_i)^T φ(x_i) − 2 φ(x_i)^T φ(x_j) + φ(x_j)^T φ(x_j)
                        = k(x_i, x_i) − 2 k(x_i, x_j) + k(x_j, x_j)

• So nearest-neighbour can be done in a high-dimensional feature space without actually moving to it (sketch below)
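A minimal sketch of this kernelized nearest-neighbour rule in NumPy; the kernel choice and the data are illustrative assumptions, not part of the slides:

```python
import numpy as np

def poly_kernel(x, z, degree=2):
    # k(x, z) = (1 + x^T z)^degree  (a valid kernel; see the next slide)
    return (1.0 + x @ z) ** degree

def nn_classify(x_query, X_train, t_train, k=poly_kernel):
    # squared feature-space distance via the kernel trick:
    #   ||phi(x) - phi(x_j)||^2 = k(x, x) - 2 k(x, x_j) + k(x_j, x_j)
    d2 = [k(x_query, x_query) - 2.0 * k(x_query, xj) + k(xj, xj) for xj in X_train]
    return t_train[int(np.argmin(d2))]

X = np.array([[0.0, 1.0], [2.0, 2.0], [-1.0, 0.5]])
t = np.array([0, 1, 0])
print(nn_classify(np.array([1.8, 1.9]), X, t))  # nearest neighbour in feature space -> 1
```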

Page 13:

A Kernel Function

• Consider the kernel function

  k(x, z) = (1 + x^T z)^2

• we find (letting x and z be 2D vectors):

  k(x, z) = (1 + x_1 z_1 + x_2 z_2)^2
          = 1 + 2 x_1 z_1 + 2 x_2 z_2 + x_1^2 z_1^2 + 2 x_1 z_1 x_2 z_2 + x_2^2 z_2^2
          = (1, √2 x_1, √2 x_2, x_1^2, √2 x_1 x_2, x_2^2)(1, √2 z_1, √2 z_2, z_1^2, √2 z_1 z_2, z_2^2)^T
          = φ(x)^T φ(z)

• So this particular kernel function does correspond to a dot product in a feature space (it is valid)
• Computing k(x, z) is faster than explicitly computing φ(x)^T φ(z)
  – In higher dimensions, with a larger exponent, much faster
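The expansion above can be checked numerically; a short sketch (the test vectors are arbitrary):

```python
import numpy as np

def phi(v):
    # explicit feature map for k(x, z) = (1 + x^T z)^2 with 2D inputs
    x1, x2 = v
    return np.array([1.0, np.sqrt(2) * x1, np.sqrt(2) * x2,
                     x1**2, np.sqrt(2) * x1 * x2, x2**2])

x = np.array([0.3, -1.2])
z = np.array([2.0, 0.5])
lhs = (1.0 + x @ z) ** 2     # kernel evaluation, O(d) work in the input dimension d
rhs = phi(x) @ phi(z)        # explicit dot product in the 6-dimensional feature space
print(np.isclose(lhs, rhs))  # True
```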

Page 14:

Why kernels?

• Why bother with kernels?
  – Often easier to specify how similar two things are (dot product) than to construct an explicit feature space φ
  – There are high-dimensional (even infinite) spaces that have efficient-to-compute kernels
• Separability
• So you want to use kernels
  – Need to know when a kernel function is valid, so we can apply the kernel trick

Page 15:

Valid kernels

• Given some arbitrary function k(x_i, x_j), how do we know if it corresponds to a dot product in some space?
• Valid kernels: if k(·, ·) satisfies
  – Symmetric: k(x_i, x_j) = k(x_j, x_i)
  – Positive definite: for any x_1, ..., x_N, the Gram matrix K (with entries K_nm = k(x_n, x_m)) must be positive semi-definite
    • Positive semi-definite means x^T K x ≥ 0 for all x
  then k(·, ·) corresponds to a dot product in some space φ
• a.k.a. Mercer kernel, admissible kernel, reproducing kernel
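These two conditions can be sanity-checked numerically (this is evidence, not a proof); a sketch assuming a Gaussian kernel and random data:

```python
import numpy as np

def gram_matrix(X, k):
    # K[n, m] = k(x_n, x_m) for all pairs of data points
    n = len(X)
    return np.array([[k(X[i], X[j]) for j in range(n)] for i in range(n)])

rbf = lambda x, z: np.exp(-np.sum((x - z) ** 2) / 2.0)   # sigma = 1
X = np.random.default_rng(0).normal(size=(5, 3))
K = gram_matrix(X, rbf)

print(np.allclose(K, K.T))                       # symmetric
print(np.all(np.linalg.eigvalsh(K) >= -1e-10))   # all eigenvalues >= 0 (up to round-off), i.e. PSD
```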

Page 16:

Examples of kernels

• Linear kernel: k(x_1, x_2) = x_1^T x_2
  – φ(x) = x
• Polynomial kernel: k(x_1, x_2) = (1 + x_1^T x_2)^d
  – Contains all polynomial terms up to degree d
• Gaussian (radial) kernel: k(x_1, x_2) = exp(−||x_1 − x_2||^2 / 2σ^2)
  – Infinite-dimensional feature space
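Expressed directly in code (a sketch; the parameter names d and sigma are assumptions):

```python
import numpy as np

def linear_kernel(x1, x2):
    return x1 @ x2                                   # phi(x) = x

def poly_kernel(x1, x2, d=3):
    return (1.0 + x1 @ x2) ** d                      # all polynomial terms up to degree d

def gaussian_kernel(x1, x2, sigma=1.0):
    # corresponds to an infinite-dimensional feature space
    return np.exp(-np.sum((x1 - x2) ** 2) / (2.0 * sigma ** 2))
```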

Page 17:

Constructing kernels

• Can build new valid kernels from existing valid ones:
  – k(x_1, x_2) = c k_1(x_1, x_2), c > 0
  – k(x_1, x_2) = k_1(x_1, x_2) + k_2(x_1, x_2)
  – k(x_1, x_2) = k_1(x_1, x_2) k_2(x_1, x_2)
  – k(x_1, x_2) = exp(k_1(x_1, x_2))
• Table on p. 296 of Bishop gives many such rules
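The closure rules on this slide translate directly into code; a sketch built from two assumed base kernels:

```python
import numpy as np

k1 = lambda x, z: x @ z                      # linear kernel (valid)
k2 = lambda x, z: (1.0 + x @ z) ** 2         # polynomial kernel (valid)

k_scaled  = lambda x, z: 3.0 * k1(x, z)      # c * k1 with c > 0
k_sum     = lambda x, z: k1(x, z) + k2(x, z)
k_product = lambda x, z: k1(x, z) * k2(x, z)
k_exp     = lambda x, z: np.exp(k1(x, z))    # exp of a valid kernel

# any of these can be fed to the Gram-matrix check from the "Valid kernels" slide
```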

Page 18:

More kernels

• Stationary kernels are only a function of the difference between arguments: k(x_1, x_2) = k(x_1 − x_2)
  – Translation invariant in input space: k(x_1, x_2) = k(x_1 + c, x_2 + c)
• Homogeneous kernels, a.k.a. radial basis functions, are only a function of the magnitude of the difference: k(x_1, x_2) = k(||x_1 − x_2||)
• Set subsets: k(A_1, A_2) = 2^|A_1 ∩ A_2|, where |A| denotes the number of elements in A
• Domain-specific: think hard about your problem, figure out what it means to be similar, define it as k(·, ·), prove it is positive definite (the "Feynman algorithm")

Page 19:

Today

• Motivation
• Kernels
• Support Vectors
  – the Idea!
  – Lagrange Multipliers
  – Min-max vs. Max-min
  – solving the constrained problem
  – non-separable data

Page 20:

Non-linear decision boundaries

• consider two-class classification
• let's assume
  – we moved the training data into a high-dimensional feature space
  – the data are (indeed) linearly separable in that space

  y(x) = f(w^T φ(x) + b)

• we could now apply a (simple, linear) classifier, BUT ...

Page 21:

... there are many decision boundaries!

• which one to pick?

Page 22:

Maximum margin

• We can define the margin of a classifier as the minimum distance to any example
• In support vector machines the decision boundary which maximizes the margin is chosen

[Figure: decision boundary y = 0 with the margin boundaries y = 1 and y = −1; the margin is the distance from the boundary to the closest examples]

Page 23:

Marginal geometry

• Recall from lecture 3: y(x) = w^T x + b
• The projection of x in the w direction is w^T x / ||w||
• y(x) = 0 when w^T x = −b, or w^T x / ||w|| = −b / ||w||
• So

  w^T x / ||w|| − (−b / ||w||) = y(x) / ||w||

  is the signed distance to the decision boundary

[Figure: geometry of the linear decision boundary y = 0 separating regions R_1 (y > 0) and R_2 (y < 0); w is normal to the boundary and a point x lies at signed distance y(x)/||w|| from it, with x⊥ its projection onto the boundary]
Page 24:

Support Vectors

• Assuming the data are separated by the hyperplane, the distance to the decision boundary is

  t_n y(x_n) / ||w||

• The maximum margin criterion chooses w, b by:

  argmax_{w,b} { (1 / ||w||) min_n [ t_n (w^T φ(x_n) + b) ] }

• Points with this minimum value are known as support vectors

[Figure: maximum-margin boundary y = 0 with margins y = 1 and y = −1; the support vectors lie on the margins]

Page 25:

Canonical representation

• This optimization problem is complex:

  argmax_{w,b} { (1 / ||w||) min_n [ t_n (w^T φ(x_n) + b) ] }

• Note that rescaling w → κw and b → κb does not change the distance t_n y(x_n) / ||w|| (many equivalent answers)
• So for the point x* closest to the surface, we can set:

  t* (w^T φ(x*) + b) = 1

• All other points are at least this far away:

  ∀n: t_n (w^T φ(x_n) + b) ≥ 1

Page 26:

Canonical representation

• This optimization problem is complex:

  argmax_{w,b} { (1 / ||w||) min_n [ t_n (w^T φ(x_n) + b) ] }

• Under these constraints, the optimization becomes:

  argmax_{w,b} 1 / ||w||  =  argmin_{w,b} (1/2) ||w||^2

• Can be formulated as a constrained optimization problem

Page 27:

Canonical representation

• So the optimization problem is now a constrained optimization problem:

  argmin_{w,b} (1/2) ||w||^2
  s.t. ∀n: t_n (w^T φ(x_n) + b) ≥ 1

• To solve this, we need to take a detour into Lagrange multipliers

Page 28:

Today

• Motivation
• Kernels
• Support Vectors
  – the Idea!
  – Lagrange Multipliers
  – Min-max vs. Max-min
  – solving the constrained problem
  – non-separable data

Page 29:

Lagrange Multipliers

• Consider the problem:

  max_x f(x)
  s.t. g(x) = 0

• Points on g(x) = 0 must have ∇g(x) normal to the surface
• A stationary point must have no change in f in the direction of the surface, so ∇f(x) must also be in this same direction
• So there must be some λ s.t. ∇f(x) + λ∇g(x) = 0

[Figure: the constraint surface g(x) = 0 with ∇f(x) and ∇g(x) drawn at a point x_A on the surface]

Page 30:

Lagrange Multipliers

• Consider the problem:

  max_x f(x)
  s.t. g(x) = 0

• So there must be some λ s.t. ∇f(x) + λ∇g(x) = 0
• Define the Lagrangian:

  L(x, λ) = f(x) + λ g(x)

• Stationary points of L(x, λ) have
  ∇_x L(x, λ) = ∇f(x) + λ∇g(x) = 0 and ∇_λ L(x, λ) = g(x) = 0
• So they are stationary points of the constrained problem!

Page 31:

Lagrange Multipliers Example

• Consider the problem

  max_x f(x_1, x_2) = 1 − x_1^2 − x_2^2
  s.t. g(x_1, x_2) = x_1 + x_2 − 1 = 0

• Lagrangian:

  L(x, λ) = 1 − x_1^2 − x_2^2 + λ(x_1 + x_2 − 1)

• Stationary points require:

  ∂L/∂x_1 = −2x_1 + λ = 0
  ∂L/∂x_2 = −2x_2 + λ = 0
  ∂L/∂λ = x_1 + x_2 − 1 = 0

• So the stationary point is

  (x_1*, x_2*) = (1/2, 1/2), λ = 1

[Figure: the constraint line g(x_1, x_2) = 0 in the (x_1, x_2) plane, with the constrained maximum at (x_1*, x_2*)]
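The worked example can be reproduced symbolically; a sketch using SymPy (an assumed tool, not one mentioned in the slides):

```python
from sympy import symbols, diff, solve

x1, x2, lam = symbols('x1 x2 lambda')
L = (1 - x1**2 - x2**2) + lam * (x1 + x2 - 1)   # L(x, lambda) = f(x) + lambda * g(x)

# set all partial derivatives of the Lagrangian to zero
stationary = solve([diff(L, v) for v in (x1, x2, lam)], (x1, x2, lam), dict=True)
print(stationary)   # [{x1: 1/2, x2: 1/2, lambda: 1}]
```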

Page 32:

Lagrange Multipliers - Inequality Constraints

• Consider the problem:

  max_x f(x)
  s.t. g(x) ≥ 0

• Optimization over a region: solutions are either at stationary points (gradient 0) in the region or on the boundary

  L(x, λ) = f(x) + λ g(x)

• Solutions have either:
  – ∇f(x) = 0 and λ = 0 (in the region), or
  – ∇f(x) = −λ∇g(x) and λ > 0 (on the boundary; > 0 for maximizing f)
  – For both, λ g(x) = 0
• Solutions have g(x) ≥ 0, λ ≥ 0, λ g(x) = 0

[Figure: the region g(x) > 0 and its boundary g(x) = 0, with an interior stationary point x_B and a boundary solution x_A where ∇f(x) and ∇g(x) are drawn]

Page 33:

Lagrange Multipliers - Inequality Constraints

• Consider the problem:

  max_x f(x)
  s.t. g(x) ≥ 0

• Exactly how does the Lagrangian relate to the optimization problem in this case?

  L(x, λ) = f(x) + λ g(x)

• It turns out that the solution to the optimization problem is:

  max_x min_{λ≥0} L(x, λ)

Page 34:

Max-min

• Lagrangian: L(x, λ) = f(x) + λ g(x)
• Consider the following: min_{λ≥0} L(x, λ)
• If the constraint g(x) ≥ 0 is not satisfied, i.e. g(x) < 0
  – λ can be made ∞, and min_{λ≥0} L(x, λ) = −∞
• Otherwise, min_{λ≥0} L(x, λ) = f(x) (with λ = 0)

  min_{λ≥0} L(x, λ) = { −∞    if the constraint is not satisfied
                      { f(x)  otherwise

Page 35:

Min-max (Dual form)

• So the solution to the optimization problem is (called the primal problem):

  L_P(x) = max_x min_{λ≥0} L(x, λ)

• The dual problem is obtained when one switches the order of the max and min:

  L_D(λ) = min_{λ≥0} max_x L(x, λ)

Page 36:

Min-max (Dual form)

• These are not the same, but it is always the case that the dual is a bound for the primal (in the SVM case, with minimization, L_D(λ) ≤ L_P(x))
• Slater's theorem gives conditions for these two problems to be equivalent, with L_D(λ) = L_P(x)
• Slater's theorem applies to the SVM optimization problem, and solving the dual leads to kernelization and can be easier than solving the primal

  L_P(x) = max_x min_{λ≥0} L(x, λ)        L_D(λ) = min_{λ≥0} max_x L(x, λ)

Page 37:

Today

• Motivation
• Kernels
• Support Vectors
  – the Idea!
  – Lagrange Multipliers
  – Min-max vs. Max-min
  – solving the constrained problem
  – non-separable data

Page 38:

Now where were we?

• So the optimization problem is now a constrained optimization problem:

  argmin_{w,b} (1/2) ||w||^2
  s.t. ∀n: t_n (w^T φ(x_n) + b) ≥ 1

• For this problem, the Lagrangian (with N multipliers a_n) is:

  L(w, b, a) = ||w||^2 / 2 − Σ_{n=1}^N a_n [ t_n (w^T φ(x_n) + b) − 1 ]

Page 39:

Now where were we?

• The Lagrangian (with N multipliers a_n) is:

  L(w, b, a) = ||w||^2 / 2 − Σ_{n=1}^N a_n [ t_n (w^T φ(x_n) + b) − 1 ]

• We can find the derivatives of L w.r.t. w, b and set them to 0:

  w = Σ_{n=1}^N a_n t_n φ(x_n)
  0 = Σ_{n=1}^N a_n t_n

Page 40:

Dual form

• Plugging those equations into L removes w and b and results in a version of L where ∇_{w,b} L = 0:

  L(a) = Σ_{n=1}^N a_n − (1/2) Σ_{n=1}^N Σ_{m=1}^N a_n a_m t_n t_m φ(x_n)^T φ(x_m)

• this new L is the dual representation of the problem (maximize with constraints)
  – Note that it is kernelized
  – It is quadratic, convex in a
  – Bounded above since K is positive semi-definite
  – The optimal a can be found
  – With large datasets, descent strategies are employed
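The dual objective is easy to write down once the Gram matrix K is available; a NumPy sketch (the actual maximization, subject to a_n ≥ 0 and Σ_n a_n t_n = 0, is normally left to a QP or SMO solver):

```python
import numpy as np

def dual_objective(a, t, K):
    # L(a) = sum_n a_n - 1/2 * sum_n sum_m a_n a_m t_n t_m k(x_n, x_m)
    # a: multipliers (N,), t: labels in {-1, +1} (N,), K: Gram matrix (N, N)
    at = a * t
    return a.sum() - 0.5 * at @ K @ at
```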

Page 41:

Examples

• SVM trained using a Gaussian kernel
• Support vectors circled
• Note the non-linear decision boundary in x space
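A figure like this can be reproduced with an off-the-shelf solver; a sketch using scikit-learn's SVC on synthetic data (an assumption about tooling, not what the slides used):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 2))
t = (X[:, 0] ** 2 + X[:, 1] ** 2 > 1.0).astype(int)    # circular (non-linear) class boundary

clf = SVC(kernel='rbf', C=1.0, gamma=1.0).fit(X, t)    # Gaussian (RBF) kernel
print(len(clf.support_vectors_), 'support vectors out of', len(X), 'points')
```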

Page 42:

Examples

• From Burges, "A Tutorial on Support Vector Machines for Pattern Recognition" (1998)
• SVM trained using a cubic polynomial kernel

  k(x_1, x_2) = (x_1^T x_2 + 1)^3

• Left is linearly separable
  – Note the decision boundary is almost linear, even using a cubic polynomial kernel
• Right is not linearly separable
  – But it is separable using the polynomial kernel

[Figure 9 from Burges (1998): degree-3 polynomial kernel; the background colour shows the shape of the decision surface]

Page 43:

Today

• Motivation
• Kernels
• Support Vectors
  – the Idea!
  – Lagrange Multipliers
  – Min-max vs. Max-min
  – solving the constrained problem
  – non-separable data

Page 44:

Non-separable data (soft-margin classifier)

• For most problems, data will not be linearly separable (even in feature space φ)
• Can relax the constraints from t_n y(x_n) ≥ 1 to t_n y(x_n) ≥ 1 − ξ_n
• The ξ_n ≥ 0 are called slack variables (see the sketch below)
  – ξ_n = 0: satisfies the original constraint, so x_n is on the margin or on the correct side of the margin
  – 0 < ξ_n < 1: inside the margin, but still correctly classified
  – ξ_n > 1: misclassified

[Figure: soft-margin boundary y = 0 with margins y = 1 and y = −1; points with ξ = 0 lie on or outside the margin, points with ξ < 1 lie inside the margin, and points with ξ > 1 are on the wrong side of the boundary]
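A small sketch of the slack values themselves, for hypothetical labels t_n and classifier outputs y(x_n):

```python
import numpy as np

def slack(t, y):
    # xi_n = max(0, 1 - t_n * y(x_n)),  t_n in {-1, +1}
    return np.maximum(0.0, 1.0 - t * y)

t = np.array([ 1,   1,   -1,   -1])
y = np.array([ 2.3, 0.4, -1.0,  0.2])   # hypothetical y(x_n) values
print(slack(t, y))   # [0.  0.6 0.  1.2]
# xi = 0      -> on the margin or on the correct side of it
# 0 < xi < 1  -> inside the margin, still correctly classified
# xi > 1      -> misclassified
```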

Page 45:

Loss function for non-separable data

• Non-zero slack variables are bad; penalize them while maximizing the margin:

  min C Σ_{n=1}^N ξ_n + (1/2) ||w||^2

• The constant C > 0 controls the importance of a large margin versus incorrect (non-zero slack) points
  – Set using cross-validation
• Optimization is the same quadratic, with different constraints; still convex

Page 46:

SVM Loss function

• The SVM for the separable case solved the problem:

  argmin_w (1/2) ||w||^2
  s.t. ∀n: t_n y_n ≥ 1

• Can write this as:

  argmin_w Σ_{n=1}^N E_∞(t_n y_n − 1) + λ||w||^2

• where E_∞(z) = 0 if z ≥ 0, ∞ otherwise

Page 47:

SVM Loss function

• The SVM for the separable case solved the problem:

  argmin_w (1/2) ||w||^2
  s.t. ∀n: t_n y_n ≥ 1

• The non-separable case relaxes this to be:

  argmin_w Σ_{n=1}^N E_SV(t_n y_n − 1) + λ||w||^2

• where E_SV(t_n y_n − 1) = [1 − t_n y_n]_+ is the hinge loss
  – [u]_+ = u if u ≥ 0, and 0 otherwise

Page 48:

Loss functions

• Linear classifiers: compare the loss functions used for learning
  – Black is the misclassification error
  – Simple linear classifier, squared error: (y_n − t_n)^2
  – Logistic regression, cross-entropy error: t_n ln y_n
  – SVM, hinge loss: ξ_n = [1 − t_n y_n]_+

[Figure: the loss functions plotted against z = t_n y(x_n) for z in [−2, 2], with E(z) on the vertical axis]
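The curves in this comparison can be regenerated in a few lines; a sketch plotting each loss as a function of z = t_n y(x_n) (matplotlib is an assumed dependency):

```python
import numpy as np
import matplotlib.pyplot as plt

z = np.linspace(-2, 2, 401)                    # z = t_n * y(x_n)

misclassification = (z < 0).astype(float)
squared_error     = (z - 1) ** 2               # least-squares error with targets in {-1, +1}
hinge             = np.maximum(0.0, 1 - z)     # SVM hinge loss [1 - t_n y_n]_+
# (logistic regression's cross-entropy error behaves roughly like log(1 + exp(-z)), up to scaling)

for curve, label in [(misclassification, 'misclassification'),
                     (squared_error, 'squared error'),
                     (hinge, 'hinge')]:
    plt.plot(z, curve, label=label)
plt.xlabel('z = t y(x)'); plt.ylabel('E(z)'); plt.legend(); plt.show()
```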

Page 49:

Summary

• Kernels
  – high-dimensional spaces are good for separation, bad for computation!
  – many algorithms can be re-written with only dot products of features
  – NN, perceptron, regression, PCA, SVMs
• SVMs
  – Maximum margin criterion for deciding on the decision boundary
  – Linearly separable data
  – Relax with slack variables for the non-separable case
  – Global optimization is possible: convex problem (no local optima)