Plasticity and learning


Page 1: Plasticity and learning

Plasticity and learning

Dayan and Abbott

Chapter 8

Page 2: Plasticity and learning

Introduction

• Learning occurs through synaptic plasticity

• Hebb (1949): If neuron A often contributes to the firing of neuron B, then the synapse from A to B should be strengthened

– Stimulus-response learning (Pavlov)

– Converse: If neuron A does not contribute to the firing of B, the synapse is weakened

– Hippocampus, neocortex, cerebellum

Page 3: Plasticity and learning

LTP and LTD at Schaffer collateral inputs to the CA1 region of a rat hippocampal slice. High-frequency stimulation yields LTP; low-frequency stimulation yields LTD. Note that no stimulation yields no LTD: depression requires presynaptic activity.

Page 4: Plasticity and learning

Function of learning

• Unsupervised learning (ch 10)

– Feature selection, receptive fields, density estimation

• Supervised learning (ch 7)

– Input-output mapping, feedback as teacher signal

• Reinforcement learning (ch 9)

– Feedback in terms of reward, similar to control theory

• Hebbian learning (ch 8)

– Biologically plausible, combined with normalization

– Covariance rule for (un)supervised learning

– Ocular dominance, cortical maps

Page 5: Plasticity and learning

Rate model with fast time scale

• Neural activity as continuous rate, not spike train

$v$ is the rate of the output neuron, $\mathbf{u}$ is the vector of input rates, and $\mathbf{w}$ is the vector of synaptic weights: $\tau_r \, dv/dt = -v + \mathbf{w} \cdot \mathbf{u}$. If $\tau_r$ is small with respect to the learning time scale, the rate relaxes to $v = \mathbf{w} \cdot \mathbf{u}$.

Page 6: Plasticity and learning

Basic Hebb rule

• The basic Hebb rule is $\tau_w \, d\mathbf{w}/dt = v\mathbf{u}$. Since $v$ and $\mathbf{u}$ are functions of time, the dynamics are hard to solve directly. An alternative is to assume that $v$ and $\mathbf{u}$ are drawn from a distribution $p(v, \mathbf{u})$, and that $p$ is time independent.

• Using $v = \mathbf{w} \cdot \mathbf{u}$ and averaging over inputs we get $\tau_w \, d\mathbf{w}/dt = Q\mathbf{w}$, where $Q = \langle \mathbf{u}\mathbf{u}^\top \rangle$ is the input correlation matrix (Eq. 8.5).

Page 7: Plasticity and learning

Basic Hebb rule

• The Hebb rule is unstable, because the weight norm always increases: $d|\mathbf{w}|^2/dt = 2v^2/\tau_w \ge 0$

• The continuous differential equation can be simulated using an Euler scheme, as in the sketch below
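A minimal sketch of such a simulation (NumPy; the two-dimensional input distribution and all parameter values are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
tau_w, dt = 100.0, 0.1             # learning time constant, Euler step
w = 0.1 * rng.standard_normal(2)   # initial weights

# Inputs with correlation matrix Q = <u u^T>
Q = np.array([[1.0, 0.7], [0.7, 1.0]])
L = np.linalg.cholesky(Q)

for _ in range(20000):
    u = L @ rng.standard_normal(2)   # sample input with correlation Q
    v = w @ u                        # linear rate model: v = w . u
    w += (dt / tau_w) * v * u        # basic Hebb rule, Euler step

_, evecs = np.linalg.eigh(Q)
print(np.linalg.norm(w))                    # norm grows without bound
print(w / np.linalg.norm(w), evecs[:, -1])  # direction approaches e1
```

The weight norm diverges, illustrating the instability noted above, while the weight direction aligns with the principal eigenvector of $Q$.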

Page 8: Plasticity and learning

Covariance rule

• The basic Hebb rule describes only LTP, since $\mathbf{u}$ and $v$ are non-negative rates

• LTD occurs when presynaptic activity co-occurs with low postsynaptic activity

• The covariance rule uses a postsynaptic threshold: $\tau_w \, d\mathbf{w}/dt = (v - \theta_v)\mathbf{u}$. Alternatively, a presynaptic threshold: $\tau_w \, d\mathbf{w}/dt = v(\mathbf{u} - \boldsymbol{\theta}_u)$

Page 9: Plasticity and learning

Covariance rule

• When the thresholds are the mean rates, $\theta_v = \langle v \rangle$ or $\boldsymbol{\theta}_u = \langle \mathbf{u} \rangle$, either rule produces, after averaging, $\tau_w \, d\mathbf{w}/dt = C\mathbf{w}$, with $C = \langle (\mathbf{u} - \langle \mathbf{u} \rangle)(\mathbf{u} - \langle \mathbf{u} \rangle)^\top \rangle$ the input covariance matrix

• The covariance rule is still unstable: $C$ is positive semi-definite, so $d|\mathbf{w}|^2/dt \ge 0$ and the norm keeps growing

Page 10: Plasticity and learning

BCM rule

• Bienenstock, Cooper, and Munro (1982): $\tau_w \, d\mathbf{w}/dt = v\mathbf{u}(v - \theta_v)$

• Requires both pre- and postsynaptic activity for learning

• For a fixed threshold $\theta_v$, the BCM rule is also unstable

Page 11: Plasticity and learning

BCM rule

• The BCM rule can be made stable by threshold dynamics: $\tau_\theta \, d\theta_v/dt = v^2 - \theta_v$

• $\tau_\theta$ is smaller than $\tau_w$, so the threshold adapts faster than the weights

• The BCM rule implements competition between synapses: strengthening one synapse increases the output rate and hence the threshold, which makes strengthening of other synapses more difficult (see the sketch after this list)

• Such competition can also be implemented by normalization
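A minimal sketch of the sliding-threshold BCM rule (NumPy; the two input patterns, the slight initial asymmetry, and all parameter values are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
tau_w, tau_theta, dt = 500.0, 10.0, 0.1   # tau_theta << tau_w
w = np.array([0.51, 0.49])                # slight asymmetry breaks the tie
theta = 0.0                               # sliding LTP/LTD threshold

patterns = np.array([[2.0, 0.0], [0.0, 2.0]])  # two input patterns

for _ in range(200000):
    u = patterns[rng.integers(2)]
    v = max(w @ u, 0.0)                          # rectified output rate
    w += (dt / tau_w) * v * u * (v - theta)      # BCM rule
    theta += (dt / tau_theta) * (v**2 - theta)   # threshold dynamics
    w = np.clip(w, 0.0, None)                    # keep weights non-negative

print(w, theta)  # the neuron becomes selective for one pattern
```

The sliding threshold stabilizes learning and produces competition: the synapse driven by the favored pattern strengthens while the other weakens.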

Page 12: Plasticity and learning

Synaptic normalization

• Limit the sum of weights or sum of squared weights

• Impose this constraint rigidly, or dynamically

• Two examples:

– Rigid scheme for the sum-of-weights constraint (subtractive normalization)

– Dynamic scheme for the sum of squared weights (multiplicative normalization)

Page 13: Plasticity and learning

Subtractive normalization

• Subtractive normalization, $\tau_w \, d\mathbf{w}/dt = v\mathbf{u} - v(\mathbf{n} \cdot \mathbf{u})\mathbf{n}/N_u$ with $\mathbf{n} = (1, \ldots, 1)$, ensures that $\sum_b w_b$ does not change

It is not clear how to implement this rule biophysically: it is non-local, since each synapse needs the summed input $\mathbf{n} \cdot \mathbf{u}$ over all synapses.

We must add a constraint that the weights are non-negative, as in the sketch below.
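A minimal sketch of Hebbian learning with subtractive normalization and saturation bounds (NumPy; the input statistics, the [0, 1] bounds, and all parameters are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
tau_w, dt, N_u = 100.0, 0.1, 2
w = np.array([0.6, 0.4])
n = np.ones(N_u)

Q = np.array([[1.0, 0.4], [0.4, 1.0]])   # within- and between-input correlations
L = np.linalg.cholesky(Q)

for _ in range(50000):
    u = L @ rng.standard_normal(N_u)
    v = w @ u
    dw = v * u - v * (n @ u) * n / N_u            # subtractive normalization
    w = np.clip(w + (dt / tau_w) * dw, 0.0, 1.0)  # non-negative, saturating

print(w)  # one weight saturates while the other decays
```

While both weights stay inside the bounds, $\mathbf{n} \cdot d\mathbf{w}/dt = 0$, so their sum is conserved exactly; the clipping then freezes the weights once they reach the bounds.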

Page 14: Plasticity and learning

Multiplicative normalization

• Oja rule (1982): $\tau_w \, d\mathbf{w}/dt = v\mathbf{u} - \alpha v^2 \mathbf{w}$

• The rule implements the constraint dynamically: $d|\mathbf{w}|^2/dt \propto v^2(1 - \alpha|\mathbf{w}|^2)$, so $|\mathbf{w}|^2 \to 1/\alpha$ (see the sketch below)
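A minimal sketch of the Oja rule (NumPy; the input distribution and parameters are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)
tau_w, dt, alpha = 100.0, 0.1, 1.0
w = 0.1 * rng.standard_normal(2)

Q = np.array([[1.0, 0.7], [0.7, 1.0]])
L = np.linalg.cholesky(Q)

for _ in range(50000):
    u = L @ rng.standard_normal(2)
    v = w @ u
    w += (dt / tau_w) * (v * u - alpha * v**2 * w)  # Oja rule

_, evecs = np.linalg.eigh(Q)
print(np.linalg.norm(w))                    # -> 1/sqrt(alpha) = 1
print(w / np.linalg.norm(w), evecs[:, -1])  # aligned with e1 (up to sign)
```

Unlike subtractive normalization, this rule is local: each synapse only needs $v$, its own input $u_b$, and its own weight $w_b$.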

Page 15: Plasticity and learning

Unsupervised learning

• Adapting the network for a set of tasks

– Neural selectivity, receptive fields

– Cortical maps

• The process depends partly on neural activity and partly on activity-independent mechanisms (e.g. axon growth)

• Ocular dominance

– Adult neurons favor one eye over the other (layer-4 input from the LGN)

– Neurons are clustered in bands or stripes

Page 16: Plasticity and learning

Single post-synaptic neuron

• We analyze Eq. 8.5, $\tau_w \, d\mathbf{w}/dt = Q\mathbf{w}$, for a single postsynaptic neuron

Page 17: Plasticity and learning

Single post-synaptic neuron

• Solution in terms of the eigenvectors $\mathbf{e}_\mu$ and eigenvalues $\lambda_\mu$ of $Q$: $\mathbf{w}(t) = \sum_\mu c_\mu \exp(\lambda_\mu t/\tau_w)\, \mathbf{e}_\mu$

• The eigenvalues of a correlation matrix are positive, so the solution explodes

• Asymptotically $\mathbf{w}(t) \propto \mathbf{e}_1$

– $\mathbf{e}_1$ is the principal eigendirection

– The neuron projects the input onto this direction: $v \propto \mathbf{e}_1 \cdot \mathbf{u}$

Page 18: Plasticity and learning

Single post-synaptic neuron

• Example with two weights

• The weights grow indefinitely, one positive and one negative; the choice depends on the initial conditions

• Limiting the weights to [0, 1] yields different solutions depending on the initial value

Page 19: Plasticity and learning

Single post-synaptic neuron

• Subtractive normalization. Averaging over inputs: $\tau_w \, d\mathbf{w}/dt = Q\mathbf{w} - (\mathbf{n} \cdot Q\mathbf{w})\,\mathbf{n}/N_u$

• Analysis in terms of eigenvectors:

– In the ocular dominance case $\mathbf{e}_1 = \mathbf{n}/\sqrt{N_u}$. For $\mathbf{w}$ in the direction of $\mathbf{e}_1$ the right-hand side is zero, i.e. this component of $\mathbf{w}$ is unaltered

– In the other eigendirections, which are orthogonal to $\mathbf{n}$, the normalizing term is zero, so they grow as under the basic Hebb rule

– $\mathbf{w}$ is asymptotically dominated by the second eigenvector $\mathbf{e}_2$

Page 20: Plasticity and learning

Hebbian development of ocular dominance

Plain Hebbian growth follows $\mathbf{e}_1 \propto \mathbf{n} = (1, 1)$, which strengthens both eyes together. Subtractive normalization may solve this: the component along $\mathbf{e}_1$ is frozen, so the weight grows proportional to $\mathbf{e}_2 \propto (1, -1)$ and one eye comes to dominate.
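The two-input analysis behind this slide, writing $q_s$ for the same-eye and $q_d$ for the between-eye correlation (a standard reconstruction in the book's notation):

$$
Q = \begin{pmatrix} q_s & q_d \\ q_d & q_s \end{pmatrix}, \qquad
\mathbf{e}_1 = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 \\ 1 \end{pmatrix},\;
\lambda_1 = q_s + q_d, \qquad
\mathbf{e}_2 = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 \\ -1 \end{pmatrix},\;
\lambda_2 = q_s - q_d .
$$

With $q_d > 0$ we have $\lambda_1 > \lambda_2$, so unnormalized Hebbian learning amplifies the sum mode $\mathbf{e}_1$; subtractive normalization removes it, leaving the difference mode $\mathbf{e}_2$ to grow.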

Page 21: Plasticity and learning

Single post-synaptic neuron

• Using the Oja rule, averaged over inputs: $\tau_w \, d\mathbf{w}/dt = Q\mathbf{w} - \alpha\,(\mathbf{w} \cdot Q\mathbf{w})\,\mathbf{w}$

• Show that each normalized eigenvector of $Q$ is a solution (see the check below)

• One can show that only the principal eigenvector is stable
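A short version of the check the slide asks for: substitute $\mathbf{w} = \mathbf{e}_\mu/\sqrt{\alpha}$, so that $Q\mathbf{w} = \lambda_\mu \mathbf{w}$ and $|\mathbf{w}|^2 = 1/\alpha$:

$$
\tau_w \frac{d\mathbf{w}}{dt}
  = Q\mathbf{w} - \alpha\,(\mathbf{w}^\top Q \mathbf{w})\,\mathbf{w}
  = \lambda_\mu \mathbf{w} - \alpha \lambda_\mu |\mathbf{w}|^2\, \mathbf{w}
  = \lambda_\mu \mathbf{w}\,\big(1 - \alpha |\mathbf{w}|^2\big) = 0 .
$$

Perturbing such a fixed point along another eigenvector $\mathbf{e}_\nu$ shows the perturbation grows whenever $\lambda_\nu > \lambda_\mu$, which is why only $\mathbf{e}_1$ is stable.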

Page 22: Plasticity and learning

Single post-synaptic neuron

• A: Behavior of

– unnormalized Hebbian learning

– multiplicative normalization (Oja rule)

Both give $\mathbf{w} \propto \mathbf{e}_1$. This is similar to principal component analysis (PCA).

• B: Shifting the mean of $\mathbf{u}$ may yield a different solution

• C: Covariance-based learning corrects for the mean (see the sketch below)

• Saturation constraints may alter this conclusion
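A small numerical illustration of points B and C (NumPy; the mean and covariance values are illustrative assumptions). Hebbian learning follows the correlation matrix $Q = C + \langle \mathbf{u} \rangle\langle \mathbf{u} \rangle^\top$, which a large mean can dominate, while covariance-based learning follows $C$ itself:

```python
import numpy as np

mean = np.array([2.0, 0.0])              # non-zero mean input
C = np.array([[0.2, 0.0], [0.0, 1.0]])   # covariance: principal axis (0, 1)
Q = C + np.outer(mean, mean)             # correlation: dominated by the mean

for M in (Q, C):
    _, evecs = np.linalg.eigh(M)
    print(evecs[:, -1])  # Hebb -> (1, 0) from the mean; covariance -> (0, 1)
```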

Page 23: Plasticity and learning

Hebbian development of ocular dominance

• Model a layer-4 cell with input from two LGN cells, each associated with a different eye

Page 24: Plasticity and learning

Hebbian development of orientation selectivity

Cortical receptive fields from the LGN. ON-center (white) and OFF-center (black) cells excite the cortical neuron.

Spectral analysis is also applicable to non-linear systems. The dominant eigenvector is uniform; non-uniform receptive fields result from a sub-dominant eigenvector.

Page 25: Plasticity and learning

Multiple postsynaptic neurons

Page 26: Plasticity and learning

Hebbian development of ocular dominance stripes

• A: model in which right- and left-eye inputs drive an array of cortical neurons

• B: ocular dominance maps. Top: light and dark areas show the ocular dominance of cortical neurons in cat primary visual cortex. Bottom: a model of 512 neurons with Hebbian learning

Page 27: Plasticity and learning

Hebbian development of ocular dominance stripes

• Use Eq. 8.31 with $W = (\mathbf{w}_+, \mathbf{w}_-)$, the $n \times 2$ matrix of right- and left-eye weights. See the book.

• Subtractive normalization keeps the sum $\mathbf{w}_+ + \mathbf{w}_-$ fixed, so the dynamics act on the difference $\mathbf{w}_+ - \mathbf{w}_-$

• The ocular dominance pattern is given by the principal eigenvector of K

Page 28: Plasticity and learning

Hebbian development of ocular dominance stripes

• Suppose K is translation invariant

– Periodic boundary conditions simulate a patch of cortex, ignoring boundary effects

– The eigenvectors are sine and cosine modes (see the diagonalization below)

– The eigenvalues are the corresponding Fourier components of K

– The solution of the learning dynamics is spatially periodic, i.e. ocular dominance stripes (viz. fig 8.7)
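The standard diagonalization behind these statements, with $K_{ab} = K(a - b)$ and indices taken modulo $N$ (a reconstruction in the usual notation):

$$
e_{\mu,a} = \frac{1}{\sqrt{N}}\, e^{2\pi i \mu a / N}, \qquad
(K \mathbf{e}_\mu)_a = \sum_b K(a - b)\, e_{\mu,b} = \tilde{K}_\mu\, e_{\mu,a}, \qquad
\tilde{K}_\mu = \sum_c K(c)\, e^{-2\pi i \mu c / N} .
$$

The mode $\mu$ with the largest Fourier component $\tilde{K}_\mu$ grows fastest, and its wavelength sets the spacing of the ocular dominance stripes.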

Page 29: Plasticity and learning

Feature based models

• Multi-dimensional input (retinal location, ocular dominance, orientation preference, ...)

• Replace input neurons by input features; $W_{ab}$ is the selectivity of neuron $a$ to feature $b$

– Feature $u_1$ is the location of the stimulus on the retina, in retinal coordinates

– Feature $u_2$ is ocularity (how much the stimulus favors the left eye over the right), a single number

• The couplings to neuron $a$ describe its preferred stimulus

• The activity of output $a$ is a function of the match between the stimulus and this preferred stimulus

Page 30: Plasticity and learning

Feature based models

• The output is a soft-max over the feature match (a reconstruction is given after this list)

• Combined with lateral averaging:

– Self-organizing map (SOM)

– Elastic net
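The soft-max rule referred to above, in a plausible reconstruction (the slide's formula is missing; this is the standard choice, with $\mathbf{W}_a$ the preferred feature vector of neuron $a$ and $\sigma$ an assumed width parameter):

$$
v_a = \frac{\exp\!\big(-|\mathbf{u} - \mathbf{W}_a|^2 / 2\sigma^2\big)}
           {\sum_{a'} \exp\!\big(-|\mathbf{u} - \mathbf{W}_{a'}|^2 / 2\sigma^2\big)} .
$$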

Page 31: Plasticity and learning

Feature based models

Optical imaging shows ocularity and orientation selectivity in macaque primary visual cortex. Dark lines are ocular dominance boundaries; light lines are iso-orientation contours. Note the pinwheel singularities and linear zones.

Page 32: Plasticity and learning

Feature based models

• Elastic net output

• SOM and competitive Hebbian rules can produce similar output

Page 33: Plasticity and learning

Anti-hebbian modification

• Another way to make different outputs specialize is adaptive anti-Hebbian modification

• Consider the Oja rule with multiple outputs: every output $a$ will become identical

• Anti-Hebbian modification has been observed at synapses from parallel fibers to Purkinje cells in the cerebellum

• Combining Hebbian feedforward learning with anti-Hebbian lateral connections yields different eigenvectors as outputs, as in the sketch below
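A minimal sketch of one such combination, in the style of Földiák's decorrelating network (NumPy; the architecture details, input statistics, and parameters are illustrative assumptions, not the book's specific model):

```python
import numpy as np

rng = np.random.default_rng(4)
tau_w, tau_m, dt, alpha = 200.0, 50.0, 0.1, 1.0
n_in, n_out = 3, 2
W = 0.1 * rng.standard_normal((n_out, n_in))  # feedforward, Oja rule
M = np.zeros((n_out, n_out))                  # lateral, anti-Hebbian

Q = np.diag([3.0, 2.0, 0.5])                  # inputs with distinct components
L = np.linalg.cholesky(Q)

for _ in range(100000):
    u = L @ rng.standard_normal(n_in)
    v = np.linalg.solve(np.eye(n_out) - M, W @ u)  # recurrent steady state
    W += (dt / tau_w) * (np.outer(v, u) - alpha * (v**2)[:, None] * W)
    M -= (dt / tau_m) * np.outer(v, v)             # anti-Hebbian lateral rule
    np.fill_diagonal(M, 0.0)                       # no self-connections

print(W)  # rows tend toward distinct eigenvectors of Q (up to sign)
```

The anti-Hebbian lateral weights become more inhibitory while the outputs are correlated and stop changing once the outputs decorrelate, pushing the two units onto different eigenvectors.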

Page 34: Plasticity and learning

Timing based rules

• Left: in vitro cortical slice. Right: in vivo Xenopus tadpoles

• LTP when pre-synaptic spike precedes post-synaptic spike

• LTD when pre-synaptic spike follows post-synaptic spike

Page 35: Plasticity and learning

Timing based rules

• Simulating spike-time plasticity requires spiking neurons

• An approximate description uses firing rates, as in the equation below

• The window function $H(\tau)$ is positive for $\tau > 0$ (pre before post) and negative for $\tau < 0$ (post before pre)
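A reconstruction of the rate-based timing rule along the lines of Dayan and Abbott (the slide's formula is missing; $H(\tau)$ is the temporal window above):

$$
\tau_w \frac{d\mathbf{w}}{dt}
  = \int_0^\infty d\tau \,\Big( H(\tau)\, v(t)\, \mathbf{u}(t - \tau)
                              + H(-\tau)\, v(t - \tau)\, \mathbf{u}(t) \Big) .
$$

The first term pairs earlier presynaptic with current postsynaptic activity (LTP); the second pairs earlier postsynaptic with current presynaptic activity (LTD).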

Page 36: Plasticity and learning

Timing based plasticity and prediction

• Consider an array of neurons labeled by $a$, with receptive fields $f_a(s)$ (dashed and solid curves)

• Timing-based learning rule. The stimulus $s$ moves from left to right.

Page 37: Plasticity and learning

Timing based plasticity and prediction

• If $a$ is left of $b$, then the link from $a$ to $b$ is strengthened and the link from $b$ to $a$ is weakened. The receptive field of neuron $a$ is asymmetrically deformed (solid bold line in A)

• Prediction: the next presentation of $s(t)$ will activate $a$ earlier

• This agrees with the shift of the place-field mean when rats repeatedly run around a track (B)

Page 38: Plasticity and learning

Supervised Hebbian learning

• Supervised Hebbian learning with weight decay: $\tau_w \, d\mathbf{w}/dt = \langle v\mathbf{u} \rangle - \alpha\mathbf{w}$, where $v$ is clamped to the desired output

• The asymptotic solution is $\mathbf{w} = \langle v\mathbf{u} \rangle / \alpha$, the input-output cross-correlation

Page 39: Plasticity and learning

Classification and the Perceptron

• If the output values are $\pm 1$, the model implements a classifier, called the perceptron: $v = \mathrm{sign}(\mathbf{w} \cdot \mathbf{u} - \gamma)$

– The weight vector defines a separating hyperplane: $\mathbf{w} \cdot \mathbf{u} = \gamma$

– The perceptron can solve problems that are 'linearly separable' (see the sketch below)
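A minimal perceptron sketch (NumPy; the random data, learning rate, and the classic mistake-driven update are illustrative assumptions, not from the slide):

```python
import numpy as np

rng = np.random.default_rng(5)

# Linearly separable data, labeled by a hidden hyperplane
u = rng.standard_normal((200, 2))
labels = np.where(u @ np.array([1.0, 2.0]) - 0.5 >= 0, 1, -1)

w, gamma, lr = np.zeros(2), 0.0, 0.1
for _ in range(100):                          # perceptron learning rule
    for x, t in zip(u, labels):
        v = 1 if w @ x - gamma >= 0 else -1   # v = sign(w . u - gamma)
        if v != t:                            # update only on mistakes
            w += lr * t * x
            gamma -= lr * t

pred = np.where(u @ w - gamma >= 0, 1, -1)
print((pred == labels).mean())                # 1.0 once converged
```

For linearly separable data, the perceptron convergence theorem guarantees this loop stops making mistakes after finitely many updates.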
