
A Short Introduction to the Theory of Statistical Learning

Kee Siong Ng, [email protected]



Outline

• Introduction

1. What is machine learning?

2. When is learning (not) possible?

• Bayesian Probability Theory

• Sequence Prediction and Data Compression

• Bayesian Networks



What is Machine Learning?

From Wikipedia:

Machine learning is the subfield of AI that is concerned with the design and development of algorithms that allow computers to improve their performance over time based on data. A major focus of machine learning research is to automatically produce (induce) models, such as rules and patterns, from data.

In summary: It is about collecting data, analysing data, interpreting data, and acting on insights so obtained to achieve goals.



Inductive Learning from Observations

• Machine learning research covers a wide spectrum of activities.

• Focus on classification/prediction problems in this course.

• Informally, given a sequence of data

[(x1,y1),(x2,y2), . . . ,(xn,yn)], xi ∈ X, yi ∈ Y,

generated by an (unknown) process, learn a function f : X → Y that, given any x ∈ X, predicts the corresponding y value accurately.



Example Problems

• Spam filtering

Data: [(“Congratulations, you have won the jackpot! ...”,+),

(“Bankwest is conducting an account audit ...”,+),

(“Your COMP3620 assignment is late ... ”,−), . . .]

Is “Congratulations on your COMP3620 assignment ...” spam?

• Hand-written character recognition

Data: [(⟨digit image⟩,3),(⟨digit image⟩,1),(⟨digit image⟩,3),(⟨digit image⟩,4),(⟨digit image⟩,7), . . .]

What’s the label of ⟨new digit image⟩?



Philosophical Concerns about Induction

• Hume (1748): How do you know the future will be like the past?

– Inductive processes are inherently unjustifiable.

• Popper (1935): Scientific theories/conjectures are only falsifiable; they can never be confirmed for sure.

– Competing theories cannot be compared.

• Need to define learning problems carefully.



Learning (More Formally)

Given

• a sequence of data

D = [(x1,y1),(x2,y2), . . . ,(xn,yn)], xi ∈ X, yi ∈ Y,

where each (xi,yi) is drawn independently at random from an unknown probability distribution P on X × Y;

• a set F of functions from X to Y;

Learn an f ∈ F that minimises the expected error

E_(x,y)∼P I(y ≠ f(x))    ( = Σ_(x,y) P(x,y) I(y ≠ f(x)), where I is the indicator function ).
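As a concrete illustration (a minimal sketch, not from the slides), the snippet below evaluates the expected error by summing P(x,y)·I(y ≠ f(x)) over an assumed toy discrete distribution; the distribution and the candidate function f are made up for illustration only.

```python
# Expected error of a fixed function f under a known discrete distribution P,
# computed as the sum over (x, y) of P(x, y) * I(y != f(x)).
# The distribution P and the function f below are made-up examples.

P = {
    (0, 0): 0.4,   # P(x=0, y=0)
    (0, 1): 0.1,   # P(x=0, y=1)
    (1, 0): 0.2,   # P(x=1, y=0)
    (1, 1): 0.3,   # P(x=1, y=1)
}

def f(x):
    """A candidate classifier: predict y = x."""
    return x

expected_error = sum(p for (x, y), p in P.items() if y != f(x))
print(expected_error)  # 0.1 + 0.2 ≈ 0.3 under this toy distribution
```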




Algorithmic Strategy

• We want an f ∈ F that minimises E_(x,y)∼P I(y ≠ f(x)).

• But we don’t know P!

• Idea: Use data D as a proxy for P.

• Instead of seeking argmin_{f ∈ F} E_(x,y)∼P I(y ≠ f(x)), we seek the f that minimises the empirical error

(1/n) Σ_{(x,y) ∈ D} I(y ≠ f(x)).    (1)

• Intuitive justification: Law of large numbers

Fix f: (1) → E_(x,y)∼P I(y ≠ f(x)) as |D| → ∞
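The following sketch illustrates the strategy under assumptions of our own (a made-up distribution P and a small finite class F of threshold classifiers, neither of which comes from the slides): it computes the empirical error (1) for each function on a sample D and picks the minimiser.

```python
import random

random.seed(0)

def sample(n):
    """Draw n examples (x, y) from an assumed P: x uniform on [0, 1],
    y = 1 if x >= 0.5 else 0, with the label flipped with probability 0.1."""
    data = []
    for _ in range(n):
        x = random.random()
        y = 1 if x >= 0.5 else 0
        if random.random() < 0.1:
            y = 1 - y
        data.append((x, y))
    return data

# A small finite function class F: threshold classifiers at a few fixed cut-points.
F = {t: (lambda x, t=t: 1 if x >= t else 0) for t in (0.1, 0.3, 0.5, 0.7, 0.9)}

def empirical_error(f, data):
    """Empirical error (1): fraction of examples the function gets wrong."""
    return sum(1 for x, y in data if f(x) != y) / len(data)

D = sample(1000)
errors = {t: empirical_error(f, D) for t, f in F.items()}
best_t = min(errors, key=errors.get)
print(errors)
print("empirical error minimiser: threshold", best_t)  # expect the 0.5 threshold
```

Rerunning with larger samples shows each function's empirical error stabilising around its expected error, which is the law-of-large-numbers justification at work.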



Laws of Large Numbers

• A Bernoulli trial is an experiment that returns 1 with probability p and 0 with probability 1 − p.

• If we do n such experiments, how many times k are we likely to get the result 1?

• Coin experiment (see the simulation sketch after this list)



• (Bernoulli’s Theorem) For any small error ε and any small difference δ, there is a number of trials N such that for any n > N,

Pr[(p − ε) ≤ k/n ≤ (p + ε)] > 1 − δ.
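The coin experiment mentioned above is easy to simulate. The sketch below (illustrative only, with an assumed bias p = 0.3) prints k/n for increasing n; it settles near p, as Bernoulli’s theorem asserts.

```python
import random

random.seed(1)
p = 0.3  # assumed success probability of the biased coin

for n in (10, 100, 1000, 10000, 100000):
    # k = number of 1s observed in n Bernoulli trials
    k = sum(1 for _ in range(n) if random.random() < p)
    print(f"n = {n:6d}   k/n = {k / n:.4f}")
# k/n drifts towards p = 0.3 as n increases.
```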




Laws of Large Numbers

• Bernoulli’s theorem doesn’t tell us how fast k/n converges to p.

• Let b(k; n, p) be the probability of getting k occurrences of 1 in n Bernoulli trials.

• (Abraham De Moivre) A binomial distribution b(k; n, p) is approximated by a normal distribution with mean µ = pn and standard deviation σ = √(p(1 − p)n).

• Thus, e.g., the probability that k is within 2σ of pn is about 0.95:

Pr[pn − 2√(p(1 − p)n) ≤ k ≤ pn + 2√(p(1 − p)n)]
= Pr[p − 2√(p(1 − p)/n) ≤ k/n ≤ p + 2√(p(1 − p)/n)] ≈ 0.95.
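A rough empirical check of the normal approximation (a sketch with assumed values n = 1000 and p = 0.3, not from the slides): estimate how often k lands within 2σ of pn over many simulated runs; the frequency should come out close to 0.95.

```python
import random

random.seed(2)
n, p = 1000, 0.3
mu = p * n
sigma = (p * (1 - p) * n) ** 0.5  # De Moivre: binomial roughly normal(mu, sigma)

runs, inside = 2000, 0
for _ in range(runs):
    k = sum(1 for _ in range(n) if random.random() < p)  # one run of n trials
    if mu - 2 * sigma <= k <= mu + 2 * sigma:
        inside += 1

print(inside / runs)  # should be close to 0.95
```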



Minimisation Angst

• We still have a problem if the function class F is too large.

• There can be many functions that all minimise the empirical error.

• Consider the underlying distribution P defined as follows:

P : density on [0,1] × {0,1}

P(x,y) = 1 if (x ≥ 0.5 and y = 1) or (x < 0.5 and y = 0), and 0 otherwise.

• We can’t learn if F is the set of all functions.

• Functions that remember the data achieve zero empirical error but incur large expected error!
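The following sketch (illustrative, using the distribution P above) makes the point concrete: a function that memorises the training data and a simple threshold rule both have zero empirical error, but fresh samples reveal an expected error of about 0.5 for the memoriser.

```python
import random

random.seed(3)

def sample(n):
    """Draw n examples from P: x uniform on [0, 1], y = 1 iff x >= 0.5."""
    return [(x, 1 if x >= 0.5 else 0) for x in (random.random() for _ in range(n))]

train = sample(100)

memory = dict(train)
def memoriser(x):
    """Zero empirical error: recalls training labels, answers 0 on unseen x."""
    return memory.get(x, 0)

def threshold(x):
    """The true rule underlying P."""
    return 1 if x >= 0.5 else 0

def error(f, data):
    return sum(1 for x, y in data if f(x) != y) / len(data)

test = sample(10000)  # fresh draws approximate the expected error
print("empirical error:", error(memoriser, train), error(threshold, train))   # both 0.0
print("estimated expected error:", error(memoriser, test), error(threshold, test))  # ~0.5 vs 0.0
```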



A Sufficient Condition for Successful Learning

• A sufficient (but far from necessary) condition for successful learning is |F| < ∞ and |F| ≪ |X|.

• Let’s look at a simple proof.

• Further assumptions:

– There is an unknown distribution Q on X.

– There is an underlying true function t* ∈ F.

– The distribution P on X × Y is defined by

P(x,y) = Q(x) if y = t*(x), and 0 otherwise.



Learnability Result

Define

Fbad = {h ∈ F : E_(x,y)∼P I(h(x) ≠ y) − E_(x,y)∼P I(t*(x) ≠ y) ≥ ε} = {h ∈ F : E_(x,y)∼P I(h(x) ≠ y) ≥ ε}.

Given n randomly drawn examples D = {(xi,yi) : i = 1, . . . ,n} from P,

Pr(∃h ∈ Fbad consistent with D)

= Pr(h1 consistent with D ∨ · · · ∨ h|Fbad| consistent with D)

≤ Σ_{i=1,…,|Fbad|} Pr(hi consistent with D)    (union bound)

≤ |Fbad|(1 − ε)ⁿ ≤ |F|(1 − ε)ⁿ,

since each h ∈ Fbad has error at least ε and therefore agrees with a single random example with probability at most 1 − ε.



Learnability Result

We have

Pr(∃h ∈ Fbad consistent with D) ≤ |F|(1 − ε)ⁿ.

We want |F|(1 − ε)ⁿ ≤ δ for some small δ.

Solving for n yields

n ≥ (1/ε)(ln(1/δ) + ln |F|).    (2)

This means that if we have seen n examples satisfying (2) and pick a hypothesis h* ∈ F that is consistent with the examples, then with probability 1 − δ,

E_(x,y)∼P I(h*(x) ≠ y) < ε.
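Plugging assumed values into bound (2) (a sketch only; the ε, δ and |F| below are illustrative) shows how many examples suffice, and that |F|(1 − ε)ⁿ indeed drops below δ at that n.

```python
import math

# Assumed accuracy, confidence and function-class size (illustrative values only).
epsilon, delta, size_F = 0.05, 0.01, 10**6

# Bound (2): n >= (1/epsilon) (ln(1/delta) + ln|F|)
n = math.ceil((math.log(1 / delta) + math.log(size_F)) / epsilon)
print("examples required:", n)  # 369 with these values

print("|F|(1 - eps)^n =", size_F * (1 - epsilon) ** n)  # about 0.006, below delta = 0.01
```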



Learnability Result

• If we drop the assumption that t* ∈ F, picking the h* ∈ F that minimises error on the data is still a good strategy.

• Examples needed:

n ≥ (1/(2ε²))(ln(1/δ) + ln |F|).

• Given data D = {(xi,yi) : i = 1, . . . ,n}, with probability 1 − δ,

E_(x,y)∼P I(h*(x) ≠ y) < (1/|D|) Σ_{i=1,…,n} I(h*(xi) ≠ yi) + ε.
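For comparison (a sketch with the same assumed ε, δ and |F| as before), dropping the assumption t* ∈ F raises the requirement from roughly 1/ε to 1/(2ε²) examples:

```python
import math

epsilon, delta, size_F = 0.05, 0.01, 10**6  # illustrative values, as in the previous sketch

realizable = math.ceil((math.log(1 / delta) + math.log(size_F)) / epsilon)
agnostic = math.ceil((math.log(1 / delta) + math.log(size_F)) / (2 * epsilon ** 2))
print("realizable bound:", realizable)  # 369
print("agnostic bound:  ", agnostic)    # 3685
```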



Problems in Practice

• We may not be able to get the number of examples required.

• We may not be able to efficiently find the function that minimises empirical error.

• There could be multiple empirical-error minimising functions.

• The first and third problems can be solved using Bayesian probability theory, which we look at next.



Session Review

• Learning is not always possible

• When learning is possible, minimising empirical error is a good strategy, justifiable using laws of large numbers

• Questions?



Review of Session 1

• Learning is not always possible

• When learning is possible, minimising empirical error is a good strategy, justifiable using laws of large numbers

• Problems in practice

– We may not be able to get the number of examples required.

– We may not be able to efficiently find the function that minimises empirical error.

– There could be multiple empirical-error minimising functions.

Copyright 2015 Kee Siong Ng. All rights reserved.