
Transcript of Cogs 118A: Natural Computation I Angela Yu ajyu@ucsd January 6, 2008

Page 1

Cogs 118A: Natural Computation I

Angela Yu

ajyu@ucsd

January 6, 2008

http://www.cogsci.ucsd.edu/~ajyu/Teaching/Cogs118A_wi09/cogs118A.html

Page 2

Course Overview

Topics:

• Statistical methods

• Neural networks

• Advanced topics

• Applications to cognitive science

Assessment:

• Assignments

• Midterm

• Final project

Page 3

Three Types of Learning

Imagine an agent observes a sequence of inputs: x1, x2, x3, …

Supervised learning

The agent also observes a sequence of labels or outputs y1, y2, …, and the goal is to learn the mapping f: x → y.

Unsupervised learning

The agent’s goal is to learn a statistical model for the inputs, p(x), to be used for prediction, compact representation, decisions, …

Reinforcement learning

The agent can perform actions a1, a2, a3, …, which affect the state of the world, and receives rewards r1, r2, r3, … Its goal is to learn a policy π mapping the input history {x1, x2, …, xt} to an action at that maximizes rewards.
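The three settings differ mainly in what data the agent sees. A minimal Python sketch of the three interfaces (the toy data and the policy function are illustrative assumptions, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(0)

# Supervised: inputs x_i come paired with labels y_i; learn f: x -> y.
X = rng.normal(size=(100, 2))             # inputs x1, ..., x100
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # labels y1, ..., y100

# Unsupervised: inputs only; model p(x), e.g. by a Gaussian fit
# summarized by a sample mean and covariance.
mu, cov = X.mean(axis=0), np.cov(X.T)

# Reinforcement: at each step choose an action a_t given the input
# history, then observe a reward r_t; learn a policy over histories.
def policy(history):
    return 0 if sum(history) < 0 else 1   # toy deterministic policy
```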

Page 4

Supervised Learning

Classification: the outputs y1, y2, … are discrete labels.

[Images: example photos labeled “Dog” and “Cat”]

Regression: the outputs y1, y2, … are continuously valued.

[Plot: continuously valued y as a function of x]
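A hedged sketch of the two supervised settings using scikit-learn (the data and model choices are illustrative assumptions, not from the slides):

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(50, 1))

# Classification: discrete labels (0 = "cat", 1 = "dog").
y_class = (X[:, 0] > 0).astype(int)
clf = LogisticRegression().fit(X, y_class)
print(clf.predict([[0.3]]))     # discrete output, e.g. array([1])

# Regression: continuously valued outputs.
y_reg = 2.0 * X[:, 0] + rng.normal(scale=0.1, size=50)
reg = LinearRegression().fit(X, y_reg)
print(reg.predict([[0.3]]))     # continuous output, roughly 0.6
```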

Page 5

Applications: Cognitive Science

Object Categorization

Speech Recognition (“WET” vs. “VET”)

Face Recognition

Motor Learning

Page 6

Challenges

• Noisy sensory inputs

• Incomplete information

• Changing environment

• Many variables (mostly irrelevant)

• No (one) right answer

• Prior knowledge

• Inductive inference (open-ended)

• Uncertainty, not deterministic

Page 7

An Example: Curve-Fitting

[Plot: data to be fit, y vs. x]

Page 8

Polynomial Curve-Fitting

Linear model:

$$y(x; \mathbf{w}) = w_0 + w_1 x + w_2 x^2 + \cdots + w_n x^n = \sum_{j=0}^{n} w_j x^j$$

Error function (root-mean-square):

$$E(\mathbf{w}) = \sqrt{\frac{1}{N} \sum_{i=1}^{N} \bigl( y(x_i; \mathbf{w}) - y_i \bigr)^2}$$
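A minimal numpy sketch of the fitting procedure (the toy quadratic data set is an assumption; the slides’ actual data are not given):

```python
import numpy as np

# Toy data: noisy samples of an underlying quadratic.
rng = np.random.default_rng(2)
x = np.linspace(0, 1, 12)
y = 1.0 - 3.0 * x + 2.5 * x**2 + rng.normal(scale=0.1, size=x.size)

def rms_error(w, x, y):
    """Root-mean-square error of the polynomial with coefficients w."""
    return np.sqrt(np.mean((np.polyval(w, x) - y) ** 2))

for n in (1, 2, 5, 10):
    w = np.polyfit(x, y, deg=n)       # least-squares fit of degree n
    print(n, rms_error(w, x, y))      # training error shrinks as n grows
```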

Page 9

Linear Fit?

[Plot: linear (1st-order) fit to the data, y vs. x]

Page 10

Quadratic Fit?

[Plot: quadratic fit to the data, y vs. x]

Page 11

5th-Order Polynomial Fit?

[Plot: 5th-order polynomial fit to the data, y vs. x]

Page 12

10th-Order Polynomial Fit?

[Plot: 10th-order polynomial fit to the data, y vs. x]

Page 13

And the Answer is…

Quadratic

[Plot: the quadratic curve shown against the data, y vs. x]

Page 14

Training Error vs. Test Error

[Plots: 1st-order, 2nd-order, 5th-order, and 10th-order polynomial fits, y vs. x]
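A sketch of the training-vs-test comparison, under the same assumed quadratic data as above (illustrative only):

```python
import numpy as np

rng = np.random.default_rng(3)

def make_data(n):
    x = rng.uniform(0, 1, size=n)
    return x, 1.0 - 3.0 * x + 2.5 * x**2 + rng.normal(scale=0.1, size=n)

x_train, y_train = make_data(12)
x_test, y_test = make_data(100)    # held-out data, never used for fitting

def rms(w, x, y):
    return np.sqrt(np.mean((np.polyval(w, x) - y) ** 2))

for n in (1, 2, 5, 10):
    w = np.polyfit(x_train, y_train, deg=n)
    # Training error keeps falling as n grows; test error bottoms out
    # near the true model order (quadratic) and then rises again.
    print(n, rms(w, x_train, y_train), rms(w, x_test, y_test))
```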

Page 15

What Did We Learn?

• Model complexity is important

• Minimizing training error → overtraining

• A better error metric: test error

• But test error costs extra data (precious)

• Fix 1: regularization (3.1)

• Fix 2: Bayesian model comparison (3.3)
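For the regularization fix (section 3.1 of the course text), a minimal ridge-style sketch; the penalty form, λ value, and data here are illustrative assumptions, not the slides’ method:

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.uniform(0, 1, size=12)
y = 1.0 - 3.0 * x + 2.5 * x**2 + rng.normal(scale=0.1, size=12)

n, lam = 10, 1e-4                    # high-order model, small weight penalty
Phi = np.vander(x, n + 1)            # design matrix of powers of x

# Regularized least squares: minimize ||Phi w - y||^2 + lam * ||w||^2,
# solved in closed form: w = (Phi^T Phi + lam I)^(-1) Phi^T y.
w = np.linalg.solve(Phi.T @ Phi + lam * np.eye(n + 1), Phi.T @ y)
print(np.sqrt(np.mean((Phi @ w - y) ** 2)))   # training RMS error
```

The penalty keeps the high-order coefficients small, so a 10th-order model behaves much like a low-order one instead of chasing the noise.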