Page 1: Learning with Decision Trees Artificial Intelligence CMSC 25000 February 18, 2003.

Learning with Decision Trees

Artificial Intelligence
CMSC 25000

February 18, 2003

Page 2: Learning with Decision Trees Artificial Intelligence CMSC 25000 February 18, 2003.

Agenda

• Learning from examples
  – Machine learning overview
  – Identification Trees:
    • Basic characteristics
    • Sunburn example
    • From trees to rules
    • Learning by minimizing heterogeneity
    • Analysis: Pros & Cons

Page 3: Learning with Decision Trees Artificial Intelligence CMSC 25000 February 18, 2003.

Machine Learning

• Learning: Acquiring a function from inputs to output values, based on past inputs and values.

• Learn concepts, classifications, values
  – Identify regularities in data

Page 4: Learning with Decision Trees Artificial Intelligence CMSC 25000 February 18, 2003.

Machine Learning Examples

• Pronunciation:
  – Spelling of word => sounds

• Speech recognition:
  – Acoustic signals => sentences

• Robot arm manipulation:
  – Target => torques

• Credit rating:
  – Financial data => loan qualification

Page 5: Learning with Decision Trees Artificial Intelligence CMSC 25000 February 18, 2003.

Machine Learning Characterization

• Distinctions:
  – Are output values known for any inputs?
    • Supervised vs unsupervised learning
      – Supervised: training consists of inputs + true output value
        » E.g. letters + pronunciation
      – Unsupervised: training consists only of inputs
        » E.g. letters only

• Course studies supervised methods

Page 6: Learning with Decision Trees Artificial Intelligence CMSC 25000 February 18, 2003.

Machine Learning Characterization

• Distinctions:
  – Are output values discrete or continuous?
    • Discrete: “Classification”
      – E.g. Qualified/Unqualified for a loan application
    • Continuous: “Regression”
      – E.g. Torques for robot arm motion

• Characteristic of task

Page 7: Learning with Decision Trees Artificial Intelligence CMSC 25000 February 18, 2003.

Machine Learning Characterization

• Distinctions:
  – What form of function is learned?
    • Also called “inductive bias”
    • Graphically, the decision boundary
    • E.g. Single, linear separator
      – Rectangular boundaries - ID trees
      – Voronoi spaces…etc…

[Figure: a single linear boundary separating + examples from - examples]

Page 8: Learning with Decision Trees Artificial Intelligence CMSC 25000 February 18, 2003.

Machine Learning Functions

• Problem: Can the representation effectively model the class to be learned?

• Motivates selection of learning algorithm

[Figure: + examples and - examples separable by a single line]

For this function, a linear discriminant is GREAT! Rectangular boundaries (e.g. ID trees) are TERRIBLE!

Pick the right representation!

Page 9: Learning with Decision Trees Artificial Intelligence CMSC 25000 February 18, 2003.

Machine Learning Features

• Inputs:
  – E.g. words, acoustic measurements, financial data
  – Vectors of features:
    • E.g. word: letters – ‘cat’: L1 = c; L2 = a; L3 = t
    • Financial data: F1 = # late payments/yr : Integer
    • F2 = Ratio of income to expense: Real
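As a rough illustration of what a feature vector might look like in code (a sketch only; the variable names are mine, not from the lecture):

```python
# Sketch: raw inputs represented as named feature vectors.

# Word as letter features: 'cat' -> L1 = 'c', L2 = 'a', L3 = 't'
word_features = {"L1": "c", "L2": "a", "L3": "t"}

# Financial data: F1 = # late payments/yr (integer), F2 = income/expense ratio (real)
financial_features = {"F1": 2, "F2": 1.8}

print(word_features, financial_features)
```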

Page 10: Learning with Decision Trees Artificial Intelligence CMSC 25000 February 18, 2003.

Machine Learning Features

• Question:
  – Which features should be used?
  – How should they relate to each other?

• Issue 1: How do we define relations in feature space if features have different scales?
  – Solution: Scaling/normalization

• Issue 2: Which ones are important?
  – If examples differ only in an irrelevant feature, it should be ignored
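A minimal sketch of the scaling/normalization idea (min-max scaling shown here; the slides do not prescribe a particular scheme, and the function name is mine):

```python
def min_max_scale(values):
    """Rescale numbers to [0, 1] so that features measured on very
    different scales become comparable."""
    lo, hi = min(values), max(values)
    if hi == lo:                      # constant feature: map everything to 0
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

# Incomes in dollars and late payments per year live on different scales.
print(min_max_scale([30000, 45000, 120000]))   # [0.0, 0.166..., 1.0]
print(min_max_scale([0, 2, 6]))                # [0.0, 0.333..., 1.0]
```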

Page 11: Learning with Decision Trees Artificial Intelligence CMSC 25000 February 18, 2003.

Complexity & Generalization

• Goal: Predict values accurately on new inputs

• Problem:
  – Train on sample data
  – Can make arbitrarily complex model to fit
  – BUT, will probably perform badly on NEW data

• Strategy:
  – Limit complexity of model (e.g. degree of equ’n)
  – Split training and validation sets
    • Hold out data to check for overfitting
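A minimal sketch of the hold-out idea (the split proportion and function name are illustrative assumptions, not part of the lecture):

```python
import random

def train_validation_split(examples, validation_fraction=0.25, seed=0):
    """Hold out part of the data: train on one part, then check prediction
    quality on the held-out part to detect overfitting."""
    rng = random.Random(seed)
    shuffled = examples[:]
    rng.shuffle(shuffled)
    n_val = int(len(shuffled) * validation_fraction)
    return shuffled[n_val:], shuffled[:n_val]   # (training set, validation set)

train, validation = train_validation_split(list(range(20)))
print(len(train), len(validation))   # 15 5
```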

Page 12: Learning with Decision Trees Artificial Intelligence CMSC 25000 February 18, 2003.

Learning: Identification Trees

• (aka Decision Trees)
• Supervised learning
• Primarily classification
• Rectangular decision boundaries
  – More restrictive than nearest neighbor
• Robust to irrelevant attributes, noise
• Fast prediction

Page 13: Learning with Decision Trees Artificial Intelligence CMSC 25000 February 18, 2003.

Sunburn Example

Name Hair Height Weight Lotion Result

Sarah Blonde Average Light No Burn

Dana Blonde Tall Average Yes None

Alex Brown Short Average Yes None

Annie Blonde Short Average No Burn

Emily Red Average Heavy No Burn

Pete Brown Tall Heavy No None

John Brown Average Heavy No None

Katie Blonde Short Light Yes None

Page 14: Learning with Decision Trees Artificial Intelligence CMSC 25000 February 18, 2003.

Learning about Sunburn

• Goal:
  – Train on labeled examples
  – Predict Burn/None for new instances

• Solution??
  – Exact match: same features, same output
    • Problem: 2*3^3 = 54 feature combinations
      – Could be much worse
  – Nearest Neighbor style
    • Problem: What’s close? Which features matter?
      – Many match on two features but differ on result

Page 15: Learning with Decision Trees Artificial Intelligence CMSC 25000 February 18, 2003.

Learning about Sunburn

• Better Solution: Identification tree
  – Training:
    • Divide examples into subsets based on feature tests
    • Sets of samples at leaves define classification
  – Prediction:
    • Route NEW instance through tree to leaf based on feature tests
    • Assign same value as samples at leaf
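A minimal sketch of prediction by routing an instance through feature tests. The tree below is a hand-built copy of the sunburn tree on the next slide; the nested-dict representation and names are my own assumptions, not the lecture's notation:

```python
# Internal nodes test a feature; leaves hold the label of their samples.
TREE = {
    "feature": "hair",
    "branches": {
        "blonde": {
            "feature": "lotion",
            "branches": {"no": {"label": "burn"}, "yes": {"label": "none"}},
        },
        "red": {"label": "burn"},
        "brown": {"label": "none"},
    },
}

def predict(tree, instance):
    """Route an instance through the tree and return the leaf label."""
    while "label" not in tree:
        value = instance[tree["feature"]]
        tree = tree["branches"][value]
    return tree["label"]

print(predict(TREE, {"hair": "blonde", "lotion": "no"}))   # burn
print(predict(TREE, {"hair": "brown", "lotion": "no"}))    # none
```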

Page 16: Learning with Decision Trees Artificial Intelligence CMSC 25000 February 18, 2003.

Sunburn Identification Tree

Hair Color
  Blonde -> Lotion Used
    No  -> Sarah: Burn, Annie: Burn
    Yes -> Dana: None, Katie: None
  Red   -> Emily: Burn
  Brown -> Alex: None, John: None, Pete: None

Page 17: Learning with Decision Trees Artificial Intelligence CMSC 25000 February 18, 2003.

Simplicity

• Occam’s Razor:
  – Simplest explanation that covers the data is best

• Occam’s Razor for ID trees:
  – Smallest tree consistent with samples will be best predictor for new data

• Problem:
  – Finding all trees & finding smallest: Expensive!

• Solution:
  – Greedily build a small tree

Page 18: Learning with Decision Trees Artificial Intelligence CMSC 25000 February 18, 2003.

Building ID Trees

• Goal: Build a small tree such that all samples at leaves have same class

• Greedy solution:
  – At each node, pick test such that branches are closest to having same class
    • Split into subsets with least “disorder”
      – (Disorder ~ Entropy)
  – Find test that minimizes disorder

Page 19: Learning with Decision Trees Artificial Intelligence CMSC 25000 February 18, 2003.

Minimizing Disorder

Candidate tests at the root:

Hair Color:  Blonde -> Sarah:B, Dana:N, Annie:B, Katie:N | Red -> Emily:B | Brown -> Alex:N, Pete:N, John:N
Height:      Short -> Alex:N, Annie:B, Katie:N | Average -> Sarah:B, Emily:B, John:N | Tall -> Dana:N, Pete:N
Weight:      Light -> Sarah:B, Katie:N | Average -> Dana:N, Alex:N, Annie:B | Heavy -> Emily:B, Pete:N, John:N
Lotion:      No -> Sarah:B, Annie:B, Emily:B, Pete:N, John:N | Yes -> Dana:N, Alex:N, Katie:N

Page 20: Learning with Decision Trees Artificial Intelligence CMSC 25000 February 18, 2003.

Minimizing Disorder

Candidate tests within the Blonde branch (Sarah, Dana, Annie, Katie):

Height:  Short -> Annie:B, Katie:N | Average -> Sarah:B | Tall -> Dana:N
Weight:  Light -> Sarah:B, Katie:N | Average -> Dana:N, Annie:B | Heavy -> (none)
Lotion:  No -> Sarah:B, Annie:B | Yes -> Dana:N, Katie:N

Page 21: Learning with Decision Trees Artificial Intelligence CMSC 25000 February 18, 2003.

Measuring Disorder

• Problem:
  – In general, tests on large DBs don’t yield homogeneous subsets

• Solution:
  – General information-theoretic measure of disorder
  – Desired features:
    • Homogeneous set: least disorder = 0
    • Even split: most disorder = 1

Page 22: Learning with Decision Trees Artificial Intelligence CMSC 25000 February 18, 2003.

Measuring Entropy

• If we split m objects into 2 bins of size m1 & m2, what is the entropy?

$$-\frac{m_1}{m}\log_2\frac{m_1}{m} - \frac{m_2}{m}\log_2\frac{m_2}{m} \;=\; -\sum_i \frac{m_i}{m}\log_2\frac{m_i}{m}$$

[Plot: Disorder as a function of m1/m; it is 0 at m1/m = 0 and at m1/m = 1, and peaks at 1 when m1/m = 0.5]

Page 23: Learning with Decision Trees Artificial Intelligence CMSC 25000 February 18, 2003.

Measuring Disorder: Entropy

Entropy (disorder) of a split:

$$\text{Entropy} = -\sum_i p_i \log_2 p_i$$

where $p_i = m_i / m$ is the probability of being in bin $i$, $\sum_i p_i = 1$, and $0 \le p_i \le 1$. Assume $0 \log_2 0 = 0$.

p1    p2    Entropy
1/2   1/2   -1/2 log2(1/2) - 1/2 log2(1/2) = 1/2 + 1/2 = 1
1/4   3/4   -1/4 log2(1/4) - 3/4 log2(3/4) = 0.5 + 0.311 = 0.811
1     0     -1 log2(1) - 0 log2(0) = 0 - 0 = 0
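A short check of the table above, as a sketch in Python (not course-provided code):

```python
from math import log2

def entropy(probabilities):
    """Entropy of a distribution, with the convention 0 * log2(0) = 0."""
    return -sum(p * log2(p) for p in probabilities if p > 0)

print(entropy([0.5, 0.5]))    # 1.0
print(entropy([0.25, 0.75]))  # ~0.811
print(entropy([1.0, 0.0]))    # 0.0
```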

Page 24: Learning with Decision Trees Artificial Intelligence CMSC 25000 February 18, 2003.

Computing Disorder

$$\text{AvgDisorder} = \sum_{i=1}^{k} \frac{n_i}{n_t} \left( -\sum_{c \in \text{classes}} \frac{n_{i,c}}{n_i} \log_2 \frac{n_{i,c}}{n_i} \right)$$

where n_i / n_t is the fraction of samples down branch i, and the inner sum is the disorder of the class distribution on branch i.

[Diagram: N instances split by a test into Branch 1 and Branch 2, each branch holding its counts of the two classes]
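A minimal sketch of the weighted-average-disorder formula (function and variable names are mine):

```python
from math import log2

def entropy(labels):
    """Entropy of the class labels in one branch (0 * log2(0) treated as 0)."""
    total = len(labels)
    counts = {}
    for label in labels:
        counts[label] = counts.get(label, 0) + 1
    return -sum((c / total) * log2(c / total) for c in counts.values())

def avg_disorder(branches):
    """Disorder of a split: entropy of each branch, weighted by the
    fraction of samples sent down that branch."""
    n_total = sum(len(labels) for labels in branches)
    return sum(len(labels) / n_total * entropy(labels) for labels in branches)

# One mixed branch (2 burn, 2 none) and two pure branches:
print(avg_disorder([["burn", "burn", "none", "none"], ["burn"], ["none", "none", "none"]]))
# -> 0.5, matching the hair-color split on the next slide
```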

Page 25: Learning with Decision Trees Artificial Intelligence CMSC 25000 February 18, 2003.

Entropy in Sunburn Example

$$\text{AvgDisorder} = \sum_{i=1}^{k} \frac{n_i}{n_t} \left( -\sum_{c \in \text{classes}} \frac{n_{i,c}}{n_i} \log_2 \frac{n_{i,c}}{n_i} \right)$$

Hair color = 4/8 (-2/4 log2(2/4) - 2/4 log2(2/4)) + 1/8 * 0 + 3/8 * 0 = 0.5
Height = 0.69
Weight = 0.94
Lotion = 0.61
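These numbers can be reproduced from the sunburn table (Page 13). The sketch below is mine, not course code; it prints the average disorder of each candidate root test:

```python
from math import log2

# (hair, height, weight, lotion, result) for the eight examples on Page 13
DATA = [
    ("blonde", "average", "light",   "no",  "burn"),
    ("blonde", "tall",    "average", "yes", "none"),
    ("brown",  "short",   "average", "yes", "none"),
    ("blonde", "short",   "average", "no",  "burn"),
    ("red",    "average", "heavy",   "no",  "burn"),
    ("brown",  "tall",    "heavy",   "no",  "none"),
    ("brown",  "average", "heavy",   "no",  "none"),
    ("blonde", "short",   "light",   "yes", "none"),
]
FEATURES = {"hair": 0, "height": 1, "weight": 2, "lotion": 3}

def entropy(labels):
    total = len(labels)
    counts = {}
    for label in labels:
        counts[label] = counts.get(label, 0) + 1
    return -sum((c / total) * log2(c / total) for c in counts.values())

def avg_disorder(examples, feature_index):
    """Weighted average entropy of the subsets produced by testing one feature."""
    branches = {}
    for ex in examples:
        branches.setdefault(ex[feature_index], []).append(ex[-1])
    n = len(examples)
    return sum(len(lbls) / n * entropy(lbls) for lbls in branches.values())

for name, idx in FEATURES.items():
    print(name, round(avg_disorder(DATA, idx), 2))
# hair 0.5, height 0.69, weight 0.94, lotion 0.61 -> hair color is chosen first
```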

Page 26: Learning with Decision Trees Artificial Intelligence CMSC 25000 February 18, 2003.

Entropy in Sunburn Example

$$\text{AvgDisorder} = \sum_{i=1}^{k} \frac{n_i}{n_t} \left( -\sum_{c \in \text{classes}} \frac{n_{i,c}}{n_i} \log_2 \frac{n_{i,c}}{n_i} \right)$$

Within the Blonde branch:

Height = 2/4 (-1/2 log2(1/2) - 1/2 log2(1/2)) + 1/4 * 0 + 1/4 * 0 = 0.5
Weight = 2/4 (-1/2 log2(1/2) - 1/2 log2(1/2)) + 2/4 (-1/2 log2(1/2) - 1/2 log2(1/2)) = 1
Lotion = 0

Page 27: Learning with Decision Trees Artificial Intelligence CMSC 25000 February 18, 2003.

Building ID Trees with Disorder

• Until each leaf is as homogeneous as possible:
  – Select an inhomogeneous leaf node
  – Replace that leaf node by a test node creating subsets with least average disorder

• Effectively creates set of rectangular regions
  – Repeatedly draws lines in different axes
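A compact sketch of the greedy procedure (written recursively rather than as the slide's "replace an inhomogeneous leaf" loop, but equivalent in effect; all names are mine):

```python
from math import log2

def entropy(labels):
    n = len(labels)
    counts = {}
    for lab in labels:
        counts[lab] = counts.get(lab, 0) + 1
    return -sum((c / n) * log2(c / n) for c in counts.values())

def avg_disorder(examples, feature):
    branches = {}
    for ex in examples:
        branches.setdefault(ex[feature], []).append(ex["label"])
    n = len(examples)
    return sum(len(lbls) / n * entropy(lbls) for lbls in branches.values())

def build_tree(examples, features):
    labels = [ex["label"] for ex in examples]
    if len(set(labels)) == 1 or not features:        # homogeneous leaf (or no tests left)
        return {"label": max(set(labels), key=labels.count)}
    best = min(features, key=lambda f: avg_disorder(examples, f))   # least disorder
    branches = {}
    for ex in examples:
        branches.setdefault(ex[best], []).append(ex)
    remaining = [f for f in features if f != best]
    return {"feature": best,
            "branches": {v: build_tree(sub, remaining) for v, sub in branches.items()}}

EXAMPLES = [  # the sunburn data again, as dicts
    {"hair": "blonde", "height": "average", "weight": "light",   "lotion": "no",  "label": "burn"},
    {"hair": "blonde", "height": "tall",    "weight": "average", "lotion": "yes", "label": "none"},
    {"hair": "brown",  "height": "short",   "weight": "average", "lotion": "yes", "label": "none"},
    {"hair": "blonde", "height": "short",   "weight": "average", "lotion": "no",  "label": "burn"},
    {"hair": "red",    "height": "average", "weight": "heavy",   "lotion": "no",  "label": "burn"},
    {"hair": "brown",  "height": "tall",    "weight": "heavy",   "lotion": "no",  "label": "none"},
    {"hair": "brown",  "height": "average", "weight": "heavy",   "lotion": "no",  "label": "none"},
    {"hair": "blonde", "height": "short",   "weight": "light",   "lotion": "yes", "label": "none"},
]
print(build_tree(EXAMPLES, ["hair", "height", "weight", "lotion"]))
# Root test: hair; the blonde subtree tests lotion -- the tree shown on Page 16.
```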

Page 28: Learning with Decision Trees Artificial Intelligence CMSC 25000 February 18, 2003.

Features in ID Trees: Pros

• Feature selection:
  – Tests features that yield low disorder
    • E.g. selects features that are important!
  – Ignores irrelevant features

• Feature type handling:
  – Discrete type: 1 branch per value
  – Continuous type: Branch on >= value
    • Need to search to find best breakpoint

• Absent features: Distribute uniformly
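For continuous features, the ">= value" test needs a breakpoint. A minimal sketch of searching candidate thresholds (midpoints between consecutive sorted values; the helper and its data are my own illustration):

```python
from math import log2

def entropy(labels):
    n = len(labels)
    counts = {}
    for lab in labels:
        counts[lab] = counts.get(lab, 0) + 1
    return -sum((c / n) * log2(c / n) for c in counts.values())

def best_threshold(values, labels):
    """Try midpoints between consecutive sorted values; return the threshold
    whose >=-split has the least weighted disorder."""
    pairs = sorted(zip(values, labels))
    n = len(pairs)
    best = (float("inf"), None)
    for i in range(1, n):
        if pairs[i - 1][0] == pairs[i][0]:
            continue
        thr = (pairs[i - 1][0] + pairs[i][0]) / 2
        left = [lab for v, lab in pairs if v < thr]
        right = [lab for v, lab in pairs if v >= thr]
        disorder = len(left) / n * entropy(left) + len(right) / n * entropy(right)
        best = min(best, (disorder, thr))
    return best[1]

# Income-to-expense ratios labeled qualified/unqualified (made-up numbers):
print(best_threshold([0.8, 1.1, 1.9, 2.5, 3.0],
                     ["unqualified", "unqualified", "qualified", "qualified", "qualified"]))
# -> 1.5
```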

Page 29: Learning with Decision Trees Artificial Intelligence CMSC 25000 February 18, 2003.

Features in ID Trees: Cons

• Features
  – Assumed independent
  – If want group effect, must model explicitly
    • E.g. make new feature AorB

• Feature tests conjunctive

Page 30: Learning with Decision Trees Artificial Intelligence CMSC 25000 February 18, 2003.

From Trees to Rules

• Tree:
  – Branches from root to leaves = tests => classifications
  – Tests = if antecedents; leaf labels = consequents
  – All ID trees -> rules; not all rules can be expressed as trees

Page 31: Learning with Decision Trees Artificial Intelligence CMSC 25000 February 18, 2003.

From ID Trees to Rules

Hair Color
  Blonde -> Lotion Used
    No  -> Sarah: Burn, Annie: Burn
    Yes -> Dana: None, Katie: None
  Red   -> Emily: Burn
  Brown -> Alex: None, John: None, Pete: None

(if (equal haircolor blonde) (equal lotionused yes) (then None))
(if (equal haircolor blonde) (equal lotionused no) (then Burn))
(if (equal haircolor red) (then Burn))
(if (equal haircolor brown) (then None))
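A small sketch of reading rules off a tree by collecting root-to-leaf paths (the dict tree and helper are mine; the printed format loosely imitates the rule syntax above):

```python
TREE = {
    "feature": "haircolor",
    "branches": {
        "blonde": {"feature": "lotionused",
                   "branches": {"no": {"label": "Burn"}, "yes": {"label": "None"}}},
        "red": {"label": "Burn"},
        "brown": {"label": "None"},
    },
}

def tree_to_rules(tree, conditions=()):
    """Each root-to-leaf path becomes one rule: the tests along the path are
    the antecedents, the leaf label is the consequent."""
    if "label" in tree:
        tests = " ".join(f"(equal {f} {v})" for f, v in conditions)
        return [f"(if {tests} (then {tree['label']}))"]
    rules = []
    for value, subtree in tree["branches"].items():
        rules += tree_to_rules(subtree, conditions + ((tree["feature"], value),))
    return rules

for rule in tree_to_rules(TREE):
    print(rule)
# (if (equal haircolor blonde) (equal lotionused no) (then Burn))
# ... one rule per leaf
```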

Page 32: Learning with Decision Trees Artificial Intelligence CMSC 25000 February 18, 2003.

Identification Trees

• Train:
  – Build tree by forming subsets of least disorder

• Predict:
  – Traverse tree based on feature tests
  – Assign leaf node sample label

• Pros: Robust to irrelevant features and some noise; fast prediction; perspicuous rule reading

• Cons: Poor handling of feature combinations and dependencies; building the optimal tree is intractable

Page 33: Learning with Decision Trees Artificial Intelligence CMSC 25000 February 18, 2003.

Machine Learning: Review

• Learning:
  – Automatically acquire a function from inputs to output values, based on previously seen inputs and output values.
  – Input: Vector of feature values
  – Output: Value

• Examples: Word pronunciation, robot motion, speech recognition

Page 34: Learning with Decision Trees Artificial Intelligence CMSC 25000 February 18, 2003.

Machine Learning: Review

• Key contrasts:
  – Supervised versus Unsupervised
    • With or without labeled examples (known outputs)
  – Classification versus Regression
    • Output values: Discrete versus continuous-valued
  – Types of functions learned
    • aka “Inductive Bias”
    • Learning algorithm restricts things that can be learned

Page 35: Learning with Decision Trees Artificial Intelligence CMSC 25000 February 18, 2003.

Machine Learning: Review

• Key issues:
  – Feature selection:
    • What features should be used?
    • How do they relate to each other?
    • How sensitive is the technique to feature selection?
      – Irrelevant, noisy, absent features; feature types
  – Complexity & Generalization
    • Tension between
      – Matching training data
      – Performing well on NEW UNSEEN inputs