
COMBINING BASE LEARNERS

[Figure: base learners Algo 1, Algo 2, Algo 3, …, Algo N feeding into a Meta-Learning Algo]


Lecture Notes for E Alpaydın 2004 Introduction to Machine Learning © The MIT Press (V1.1)


RATIONALE

No Free Lunch theorem: There is no algorithm that is always the most accurate.

Instead, generate a group of algorithms which, when combined, achieves higher accuracy than any single one.


DATA AND ALGO VARIATION

Different Algorithms

Different Datasets


Bagging

• Bagging (bootstrap aggregation) trains each base learner on a bootstrap sample, i.e., a sample of the training set drawn with replacement, and combines the predictions by voting.

• Bagging can easily be extended to regression, where predictions are averaged instead of voted.

• Bagging is most efficient when the base learner is unstable, i.e., when small changes in the training set cause large changes in the learned model (e.g., decision trees).

• Bagging typically increases accuracy, but the interpretability of a single model is lost. A minimal sketch follows this list.
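A minimal sketch of bagging for classification, assuming integer class labels and scikit-learn decision trees as the (unstable) base learner; the function names bagging_fit and bagging_predict are illustrative, not from the slides.

import numpy as np
from sklearn.tree import DecisionTreeClassifier

def bagging_fit(X, y, n_learners=25, seed=0):
    # Train each base learner on a bootstrap sample (drawn with replacement).
    rng = np.random.default_rng(seed)
    n = len(X)
    learners = []
    for _ in range(n_learners):
        idx = rng.integers(0, n, size=n)   # bootstrap: n draws with replacement
        learners.append(DecisionTreeClassifier().fit(X[idx], y[idx]))
    return learners

def bagging_predict(learners, X):
    # Combine the base learners' predictions by majority vote.
    votes = np.stack([m.predict(X) for m in learners])        # shape (L, n_samples)
    return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)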


“Breiman's work bridged the gap between statisticians and computer scientists, particularly in the field of machine learning. Perhaps his most important contributions were his work on classification and regression trees and ensembles of trees fit to bootstrap samples. Bootstrap aggregation was given the name bagging by Breiman. Another of Breiman's ensemble approaches is the random forest.” (Extracted from Wikipedia)


Boosting

• Boosting tries to combine weak learners (each only slightly better than chance) into a strong learner.

• Initially, all examples have the same weight; in subsequent iterations, the weights of wrongly classified examples are increased, so later learners concentrate on the hard cases.

• Boosting can be applied to any learner.


Boosting (AdaBoost)

Initialize all weights w_i to 1/N (N: number of examples)
Repeat (until error > 0.5 or max. iterations reached):
    Train a classifier on the weighted examples and get hypothesis h_t(x)
    Compute the error as the sum of the weights of misclassified examples:
        error = Σ_i w_i for each x_i that h_t classifies incorrectly
    Set α_t = log((1 - error) / error)
    Update the weights: w_i ← w_i exp(-α_t y_i h_t(x_i)) / Z, where Z renormalizes the weights to sum to 1
Output f(x) = sign(Σ_t α_t h_t(x))
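A runnable sketch of this loop, assuming labels in {-1, +1} and decision stumps as the weak learner (an assumption; any learner that accepts per-example weights would do):

import numpy as np
from sklearn.tree import DecisionTreeClassifier

def adaboost_fit(X, y, n_rounds=50):
    # y must be in {-1, +1}. Returns the hypotheses and their weights alpha_t.
    n = len(X)
    w = np.full(n, 1.0 / n)                 # initialize all weights to 1/N
    hypotheses, alphas = [], []
    for _ in range(n_rounds):
        h = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=w)
        pred = h.predict(X)
        error = w[pred != y].sum()          # sum of weights of misclassified examples
        if error >= 0.5 or error == 0:      # no better than chance, or perfect: stop
            break
        alpha = np.log((1 - error) / error)
        w *= np.exp(-alpha * y * pred)      # misclassified examples get larger weights
        w /= w.sum()                        # Z: renormalize weights to sum to 1
        hypotheses.append(h)
        alphas.append(alpha)
    return hypotheses, alphas

def adaboost_predict(hypotheses, alphas, X):
    # f(x) = sign(sum_t alpha_t h_t(x))
    scores = sum(a * h.predict(X) for h, a in zip(hypotheses, alphas))
    return np.sign(scores)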

[Figure: after each boosting round, the weights of misclassified examples are increased]


DATA AND ALGO VARIATION

Different Algorithms

Different Datasets


MIXTURE OF EXPERTS

Voting where the weights are input-dependent (gating) (Jacobs et al., 1991).

Experts or gating can be nonlinear.
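A forward-pass sketch of the combination rule f(x) = Σ_i g_i(x) f_i(x), assuming linear experts and a softmax gating network; the parameter names W_experts and W_gate are illustrative.

import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def mixture_of_experts(x, W_experts, W_gate):
    # f(x) = sum_i g_i(x) * f_i(x), with gating weights that depend on the input x.
    expert_outputs = W_experts @ x          # one output per (linear) expert
    g = softmax(W_gate @ x)                 # input-dependent gating weights
    return g @ expert_outputs               # convex combination of expert outputs

# Illustrative shapes: 3 experts, 4-dimensional input
rng = np.random.default_rng(0)
x = rng.normal(size=4)
print(mixture_of_experts(x, rng.normal(size=(3, 4)), rng.normal(size=(3, 4))))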


[Photos: Robert Jacobs, University of Rochester; Michael Jordan, UC Berkeley]


Stacking

• Variation is among the learners: different algorithms are trained on the same data.

• The predictions of the base learners form a new meta-dataset, on which a meta-learner is trained.

• A testing example is first transformed into a new meta-example (the vector of base-learner predictions) and then classified by the meta-learner.

• Several variations have been proposed around stacking. A minimal sketch follows this list.
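A minimal stacking sketch with scikit-learn learners. Building the meta-dataset from cross-validated predictions (so the meta-learner never sees resubstitution predictions) is standard practice; the slides do not specify this detail, so treat it as an assumption.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

def stacking_fit(X, y):
    base = [DecisionTreeClassifier(), GaussianNB()]
    # Meta-dataset: each column holds one base learner's cross-validated predictions.
    meta_X = np.column_stack([cross_val_predict(b, X, y, cv=5) for b in base])
    for b in base:
        b.fit(X, y)                          # refit base learners on all the data
    meta = LogisticRegression().fit(meta_X, y)
    return base, meta

def stacking_predict(base, meta, X):
    # Transform test examples into meta-examples, then classify with the meta-learner.
    meta_X = np.column_stack([b.predict(X) for b in base])
    return meta.predict(meta_X)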


Cascade Generalization

• Variation is among the learners, as in stacking.

• Classifiers are used in sequence rather than in parallel as in stacking.

• The prediction of the first classifier is appended to the example's feature vector to form an extended dataset, on which the next classifier is trained.

• The process can go on through many iterations. A two-stage sketch follows this list.
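A two-stage sketch of cascade generalization with scikit-learn learners; extending the feature vector with predicted class probabilities, rather than a hard label, is one common choice and is an assumption here.

import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

def cascade_fit(X, y):
    level1 = GaussianNB().fit(X, y)
    # Extend each feature vector with the first classifier's prediction.
    X_ext = np.column_stack([X, level1.predict_proba(X)])
    level2 = DecisionTreeClassifier().fit(X_ext, y)
    return level1, level2

def cascade_predict(level1, level2, X):
    X_ext = np.column_stack([X, level1.predict_proba(X)])
    return level2.predict(X_ext)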


Cascading

• As in boosting, the distribution of training examples changes from one stage to the next.

• But unlike boosting, the classifiers themselves vary across stages (typically ordered from simple to complex).

• Classification is based on prediction confidence: an example is passed on to the next classifier only when the current one is not confident enough.

• Cascading creates rules that account for most instances early, catching the exceptions at the final step. A sketch follows this list.
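A minimal sketch of confidence-based cascading, assuming probabilistic scikit-learn classifiers; the threshold theta = 0.9 is illustrative.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB

def cascade_classify(x, stages, theta=0.9):
    # Try classifiers in order; fall through while confidence < theta.
    x = x.reshape(1, -1)
    for clf in stages[:-1]:
        proba = clf.predict_proba(x)[0]
        if proba.max() >= theta:             # confident enough: stop here
            return clf.classes_[proba.argmax()]
    return stages[-1].predict(x)[0]          # the final stage catches the exceptions

# Usage (assuming training data X, y):
# stages = [GaussianNB().fit(X, y), RandomForestClassifier().fit(X, y)]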


ERROR-CORRECTING OUTPUT CODES

K classes, L problems (Dietterich and Bakiri, 1995): the code matrix W codes classes in terms of learners. (Matrices shown for K = 4.)

One per class, L = K:

W = [ +1  -1  -1  -1
      -1  +1  -1  -1
      -1  -1  +1  -1
      -1  -1  -1  +1 ]

Pairwise, L = K(K-1)/2:

W = [ +1  +1  +1   0   0   0
      -1   0   0  +1  +1   0
       0  -1   0  -1   0  +1
       0   0  -1   0  -1  -1 ]


Full code, L = 2^(K-1) - 1:

W = [ -1  -1  -1  -1  -1  -1  -1
      -1  -1  -1  +1  +1  +1  +1
      -1  +1  +1  -1  -1  +1  +1
      +1  -1  +1  -1  +1  -1  +1 ]

With a reasonable L, find W such that the Hamming distance between rows and between columns is maximized.
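A decoding sketch for ECOC: each column of W defines one binary problem, and a test example is assigned the class whose code row best matches the learner outputs. For ±1 codes, maximizing the dot product with each row is equivalent to minimizing Hamming distance. The matrix below is the one-per-class example above; the learner outputs are illustrative.

import numpy as np

# One-per-class code matrix for K = 4 classes (rows: classes, columns: learners)
W = np.array([[+1, -1, -1, -1],
              [-1, +1, -1, -1],
              [-1, -1, +1, -1],
              [-1, -1, -1, +1]])

def ecoc_decode(learner_outputs, W):
    # Pick the class whose code row is closest to the L learner outputs.
    return int(np.argmax(W @ learner_outputs))

# Outputs of the L = 4 binary learners on one test example:
print(ecoc_decode(np.array([-1, +1, -1, -1]), W))   # -> class 1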