Page 1:

Ensembles—Combining Multiple Learners For Better Accuracy

Reading Material on Ensembles:
1. http://www.scholarpedia.org/article/Ensemble_learning (optional)
2. http://en.wikipedia.org/wiki/Ensemble_learning
3. http://en.wikipedia.org/wiki/Adaboost
4. Rudin video (optional): http://videolectures.net/mlss05us_rudin_da/

You do not need to read the textbook, as long as you understand the transparencies!

Eick: Ensemble Learning

Page 2:

Lecture Notes for E. Alpaydın 2004, Introduction to Machine Learning, © The MIT Press (V1.1)

Rationale

No Free Lunch Theorem: there is no algorithm that is always the most accurate.
http://en.wikipedia.org/wiki/No_free_lunch_in_search_and_optimization

Generate a group of base-learners which, when combined, has higher accuracy.

Each algorithm makes assumptions that might or might not be valid for the problem at hand.

Different learners may differ in:
- Algorithms
- Hyperparameters
- Representations (modalities)
- Training sets
- Subproblems
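For instance, a minimal sketch (assuming scikit-learn; the data set here is synthetic) of combining three different algorithms by majority vote:

```python
# Three different learning algorithms trained on the same data, combined by majority vote.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import VotingClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

ensemble = VotingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1000)),
                ("tree", DecisionTreeClassifier(max_depth=5)),
                ("nb", GaussianNB())],
    voting="hard")                      # hard voting = majority vote over the three learners
ensemble.fit(X_tr, y_tr)
print("ensemble accuracy:", ensemble.score(X_te, y_te))
```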

Page 3:

General Idea

Step 1: Create multiple data sets D1, D2, ..., Dt from the original training data D.
Step 2: Build multiple classifiers C1, C2, ..., Ct, one for each data set.
Step 3: Combine the classifiers into a single ensemble classifier C*.
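A minimal sketch of these three steps (assuming scikit-learn and NumPy, integer class labels, bootstrap resampling as the data-set creation step, and majority vote as the combiner):

```python
# Generic ensemble loop: resample, train, combine by majority vote.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def ensemble_predict(X_train, y_train, X_test, T=25, seed=0):
    rng = np.random.default_rng(seed)
    n = len(X_train)
    votes = []
    for _ in range(T):
        idx = rng.integers(0, n, size=n)                 # Step 1: create data set D_t
        clf = DecisionTreeClassifier().fit(X_train[idx], y_train[idx])  # Step 2: build C_t
        votes.append(clf.predict(X_test))
    votes = np.array(votes)
    # Step 3: combine -- majority vote over the T classifiers (assumes integer labels)
    return np.apply_along_axis(lambda v: np.bincount(v.astype(int)).argmax(), 0, votes)
```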


Page 4:

Fixed Combination Rules
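Typical fixed rules combine the base learners' per-class outputs d_ji with a simple function such as the average, median, minimum, maximum, or product, without any trained combiner. A minimal sketch, assuming each base learner returns a vector of class probabilities:

```python
# Fixed combination rules over base-learner outputs.
# probs has shape (L, n_classes): row j holds base learner j's class probabilities.
import numpy as np

def combine(probs, rule="average"):
    rules = {
        "average": probs.mean(axis=0),
        "median":  np.median(probs, axis=0),
        "min":     probs.min(axis=0),
        "max":     probs.max(axis=0),
        "product": probs.prod(axis=0),
    }
    return rules[rule].argmax()          # predicted class under the chosen rule

print(combine(np.array([[0.7, 0.3], [0.6, 0.4], [0.4, 0.6]])))  # -> 0 under the average rule
```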


Page 5:

Why does it work?

Suppose there are 25 base classifiers, each with error rate $\varepsilon = 0.35$, and assume the classifiers are independent. The majority-vote ensemble is wrong only if 13 or more of the base classifiers are wrong, so the probability that the ensemble classifier makes a wrong prediction is

$$\sum_{i=13}^{25} \binom{25}{i}\,\varepsilon^{i}\,(1-\varepsilon)^{25-i} \approx 0.06$$
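A quick check of the 0.06 figure (assuming only Python's standard library):

```python
# Majority vote of 25 independent classifiers, each with error rate 0.35,
# is wrong only when 13 or more of them are wrong.
from math import comb

eps, L = 0.35, 25
p_wrong = sum(comb(L, i) * eps**i * (1 - eps)**(L - i) for i in range(13, L + 1))
print(round(p_wrong, 3))   # ~0.06
```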


Page 6:

Why are Ensembles Successful?

Bayesian perspective: averaging over all models,
$$P(C_i \mid x) = \sum_{\text{all models } \mathcal{M}_j} P(C_i \mid x, \mathcal{M}_j)\, P(\mathcal{M}_j)$$

Let the ensemble output be the average of the base learners' outputs, $y = \frac{1}{L}\sum_{j} d_j$.

If the $d_j$ are independent:
$$\operatorname{Var}(y) = \operatorname{Var}\!\left(\frac{1}{L}\sum_j d_j\right) = \frac{1}{L^2}\operatorname{Var}\!\left(\sum_j d_j\right) = \frac{1}{L^2}\, L \operatorname{Var}(d_j) = \frac{1}{L}\operatorname{Var}(d_j)$$
The bias does not change, and the variance decreases by a factor of $L$.

If the $d_j$ are dependent:
$$\operatorname{Var}(y) = \frac{1}{L^2}\operatorname{Var}\!\left(\sum_j d_j\right) = \frac{1}{L^2}\left[\sum_j \operatorname{Var}(d_j) + 2\sum_j \sum_{i<j} \operatorname{Cov}(d_i, d_j)\right]$$
so the error increases with positive correlation between the base learners.
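A quick numerical check of the variance argument (assuming NumPy; the unit-variance Gaussian outputs are illustrative):

```python
# Averaging L independent, identically distributed outputs shrinks the variance
# by roughly a factor of L.
import numpy as np

rng = np.random.default_rng(0)
L, trials = 10, 100_000
d = rng.normal(loc=0.0, scale=1.0, size=(trials, L))   # Var(d_j) = 1
y = d.mean(axis=1)                                      # ensemble output per trial
print(y.var())                                          # ~1/L = 0.1
```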

Page 7:

What is the Main Challenge for Developing Ensemble Models?

The main challenge is not to obtain highly accurate base models, but rather to obtain base models which make different kinds of errors.

For example, when ensembles are used for classification, high accuracy can be achieved even if the base classifiers are individually weak, as long as different base models misclassify different training examples. The independence of two base classifiers A and B can be assessed in this case by measuring the overlap of the sets of training examples they misclassify, |A∩B|/|A∪B|; more overlap means less independence between the two models.
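A minimal sketch of this overlap measure (the index sets below are hypothetical):

```python
# A and B are the sets of training-example indices misclassified by two base classifiers.
def misclassification_overlap(A, B):
    A, B = set(A), set(B)
    return len(A & B) / len(A | B)     # |A ∩ B| / |A ∪ B|

# The two classifiers share only one mistake -> low overlap, high diversity.
print(misclassification_overlap({1, 4, 7}, {4, 9, 12}))   # 0.2
```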


Page 8:


Bagging

Use bootstrapping to generate L training sets and train one base-learner with each (Breiman, 1996)

Combine by voting (average or median for regression). Unstable algorithms, i.e., those whose models change a lot with small changes in the training set (e.g., decision trees), profit most from bagging.


Page 9:

Bagging

Sampling with replacement

Build classifier on each bootstrap sample

Each training example has probability 1 − (1 − 1/n)^n ≈ 0.632 (for large n) of appearing in a given bootstrap sample of size n

Original data:       1  2  3  4  5  6  7  8  9  10
Bagging (Round 1):   7  8  10 8  2  5  10 10 5  9
Bagging (Round 2):   1  4  9  1  2  3  2  7  3  2
Bagging (Round 3):   1  8  5  10 5  5  9  6  3  7
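A minimal bagging sketch (assuming scikit-learn, whose BaggingClassifier uses decision trees as the default base learner; the data set is synthetic):

```python
# Bootstrap samples are drawn with replacement, one base learner is trained per
# sample, and prediction is by voting.
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=1000, random_state=0)
bag = BaggingClassifier(n_estimators=50, bootstrap=True, random_state=0)
print(cross_val_score(bag, X, y, cv=5).mean())
```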


Page 10:

Boosting

An iterative procedure that adaptively changes the distribution of the training data by focusing more on previously misclassified records.
Initially, all N records are assigned equal weights.
Unlike bagging, the weights may change at the end of each boosting round.


Page 11:

Boosting

Records that are wrongly classified will have their weights increased

Records that are classified correctly will have their weights decreased

Original data:        1  2  3  4  5  6  7  8  9  10
Boosting (Round 1):   7  3  2  8  7  9  4  10 6  3
Boosting (Round 2):   5  4  9  4  2  5  1  7  4  2
Boosting (Round 3):   4  4  8  10 4  5  4  6  3  4

• Example 4 is hard to classify

• Its weight is increased, therefore it is more likely to be chosen again in subsequent rounds
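A minimal sketch of this resampling step (assuming NumPy; the weight values are illustrative):

```python
# Records with larger weights are more likely to be drawn into the next round's training set.
import numpy as np

rng = np.random.default_rng(0)
weights = np.full(10, 0.1)
weights[3] *= 4                       # example 4 (index 3) was misclassified, weight raised
weights /= weights.sum()              # renormalize so the weights sum to 1
round_sample = rng.choice(np.arange(1, 11), size=10, replace=True, p=weights)
print(round_sample)                   # example 4 tends to appear several times
```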


Page 12:

Basic AdaBoost Loop

D1 = initial dataset with equal weights
FOR i = 1 to k DO
    Learn new classifier Ci;
    Compute αi (the classifier's importance);
    Update example weights;
    Create new training set Di+1 (using weighted sampling)
END FOR
Construct an ensemble that uses Ci weighted by αi (i = 1, ..., k)
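A minimal sketch of this loop (assuming scikit-learn, labels in {-1, +1}, and decision stumps as the base classifiers Ci; for simplicity it re-weights examples via sample_weight instead of drawing a new training set Di+1):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def adaboost_train(X, y, k=20):
    n = len(y)
    w = np.full(n, 1.0 / n)                      # D1: equal weights
    classifiers, alphas = [], []
    for _ in range(k):
        clf = DecisionTreeClassifier(max_depth=1)
        clf.fit(X, y, sample_weight=w)           # learn C_i on the weighted data
        pred = clf.predict(X)
        eps = np.sum(w * (pred != y))            # weighted error rate
        if eps >= 0.5:                           # revert to equal weights and try again
            w = np.full(n, 1.0 / n)
            continue
        alpha = 0.5 * np.log((1 - eps) / max(eps, 1e-10))   # importance alpha_i
        w *= np.exp(-alpha * y * pred)           # up-weight mistakes, down-weight hits
        w /= w.sum()                             # normalize (divide by Z)
        classifiers.append(clf)
        alphas.append(alpha)
    return classifiers, alphas

def adaboost_predict(classifiers, alphas, X):
    # weighted vote of the C_i, each weighted by its alpha_i
    score = sum(a * clf.predict(X) for clf, a in zip(classifiers, alphas))
    return np.sign(score)
```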


Page 13:

Example: AdaBoost

Base classifiers: C1, C2, ..., CT

Error rate of classifier $C_i$ (the example weights $w_j$ add up to 1):
$$\varepsilon_i = \sum_{j=1}^{N} w_j\,\delta\big(C_i(x_j) \neq y_j\big)$$

Importance of a classifier:
$$\alpha_i = \frac{1}{2}\ln\!\left(\frac{1-\varepsilon_i}{\varepsilon_i}\right)$$
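A quick look at how the importance behaves (the error-rate values below are illustrative):

```python
# alpha_i grows as the error rate epsilon_i shrinks,
# and drops to 0 at epsilon_i = 0.5 (no better than chance).
import math

for eps in (0.05, 0.20, 0.35, 0.50):
    alpha = 0.5 * math.log((1 - eps) / eps)
    print(f"eps={eps:.2f} -> alpha={alpha:.3f}")
# eps=0.05 -> alpha=1.472, ..., eps=0.50 -> alpha=0.000
```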


Page 14:

Example: AdaBoost

Weight update:
$$w_i^{(j+1)} = \frac{w_i^{(j)}}{Z_j} \times \begin{cases} \exp(-\alpha_j) & \text{if } C_j(x_i) = y_i \\ \exp(\alpha_j) & \text{if } C_j(x_i) \neq y_i \end{cases}$$
where $Z_j$ is the normalization factor. The weight of an example is increased if it is misclassified, and the increase grows with the classifier's importance $\alpha_j$.

If any intermediate round produces an error rate higher than 50%, the weights are reverted back to 1/n and the re-sampling procedure is repeated.

Classification ($\alpha_j$ is classifier $C_j$'s importance for the whole dataset):
$$C^{*}(x) = \arg\max_{y} \sum_{j=1}^{T} \alpha_j\,\delta\big(C_j(x) = y\big)$$

http://en.wikipedia.org/wiki/Adaboost
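A minimal sketch of one round of this weight update (assuming NumPy and labels in {-1, +1}; the predictions and the α value are illustrative):

```python
# Misclassified examples are scaled by exp(+alpha), correct ones by exp(-alpha),
# and Z renormalizes the weights to sum to 1.
import numpy as np

y     = np.array([ 1,  1, -1, -1, -1])
pred  = np.array([ 1, -1, -1, -1,  1])       # classifier C_j's predictions
w     = np.full(5, 0.2)
alpha = 0.8                                   # importance of C_j (illustrative value)

w_new = w * np.exp(-alpha * y * pred)         # exp(-alpha) if correct, exp(+alpha) if wrong
w_new /= w_new.sum()                          # divide by the normalization factor Z_j
print(w_new.round(3))                         # the misclassified examples 2 and 5 gained weight
```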


Page 15:

Illustrating AdaBoost

[Figure: ten training data points with class labels + + + - - - - - + +, each given an initial weight of 0.1. After boosting round 1, base classifier B1 labels the points + + + - - - - - - -, the point weights become uneven (the values shown are 0.0094, 0.0094, and 0.4623), and B1's importance is α = 1.9459.]

Video Containing Introduction to AdaBoost: http://videolectures.net/mlss05us_rudin_da/


Page 16:

Illustrating AdaBoost

[Figure: predictions and point weights over three boosting rounds on the same ten training points.]

Boosting round 1 (B1):  + + + - - - - - - -   weights shown: 0.0094 0.0094 0.4623   α = 1.9459
Boosting round 2 (B2):  - - - - - - - - + +   weights shown: 0.3037 0.0009 0.0422   α = 2.9323
Boosting round 3 (B3):  + + + + + + + + + +   weights shown: 0.0276 0.1819 0.0038   α = 3.8744
Overall:                + + + - - - - - + +   (the weighted combination matches the original labels)


Page 17:


Mixture of Experts

Voting where the weights are input-dependent (gating):
$$y = \sum_{j=1}^{L} w_j\, d_j$$
(Jacobs et al., 1991). Experts or gating can be nonlinear.
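A minimal mixture-of-experts sketch (assuming NumPy; the linear experts, softmax gating, and parameter values are illustrative and untrained):

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def mixture_predict(x, expert_params, gate_params):
    d = expert_params @ x                  # d_j: output of expert j for input x
    w = softmax(gate_params @ x)           # w_j(x): gating weights, depend on the input
    return np.sum(w * d)                   # y = sum_j w_j(x) * d_j

x = np.array([1.0, 2.0])
experts = np.array([[0.5, -0.2], [1.0, 0.3]])   # two linear experts (L = 2)
gate    = np.array([[0.1,  0.4], [0.3, -0.2]])  # gating network parameters
print(mixture_predict(x, experts, gate))
```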

Page 18:


Stacking

The combiner f() is itself another learner, trained on the base learners' outputs (Wolpert, 1992)
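A minimal stacking sketch (assuming scikit-learn; the base learners and the synthetic data set are illustrative):

```python
# The combiner f() is a logistic regression trained on the base learners' predictions.
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=1000, random_state=0)
stack = StackingClassifier(
    estimators=[("tree", DecisionTreeClassifier(max_depth=3)), ("nb", GaussianNB())],
    final_estimator=LogisticRegression())       # f(): the trained combiner
print(cross_val_score(stack, X, y, cv=5).mean())
```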

Page 19:


Cascading

Use dj only if preceding ones are not confident

Cascade learners in order of complexity
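A minimal cascading sketch (assuming scikit-learn classifiers that are already fitted on the same label set; the 0.9 confidence threshold is an illustrative choice):

```python
# A cheap classifier handles the easy cases; the more complex one is consulted
# only when the first is not confident enough. X is assumed to be a NumPy array.
import numpy as np

def cascade_predict(simple_clf, complex_clf, X, threshold=0.9):
    proba = simple_clf.predict_proba(X)
    pred = simple_clf.classes_[proba.argmax(axis=1)]
    unsure = proba.max(axis=1) < threshold             # not confident enough
    if unsure.any():
        pred[unsure] = complex_clf.predict(X[unsure])  # defer to the next learner
    return pred
```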

Page 20:


Summary: Ensemble Learning

Ensemble approaches use multiple models in their decision making. They frequently accomplish high accuracies, are less likely to over-fit, and exhibit low variance. They have been used successfully in the Netflix contest and for other tasks. However, some research suggests that they are sensitive to noise (http://www.phillong.info/publications/LS10_potential.pdf).

The key to designing ensembles is diversity, not necessarily high accuracy of the base classifiers: members of the ensemble should vary in the examples they misclassify. Therefore, most ensemble approaches, such as AdaBoost, seek to promote diversity among the models they combine.

The trained ensemble represents a single hypothesis. This hypothesis, however, is not necessarily contained within the hypothesis space of the models from which it is built. Thus, ensembles can be shown to have more flexibility in the functions they can represent. Example: http://www.scholarpedia.org/article/Ensemble_learning

Current research on ensembles centers on: more complex ways to combine models, understanding the convergence behavior of ensemble learning algorithms, parameter learning, understanding over-fitting in ensemble learning, characterization of ensemble models, sensitivity to noise.
