Computational principles of synaptic memory - UHzpkilpat/icmns2017/FusiTutorialSyn... · A. Roxin,...
Long-term memory

Declarative (Explicit): consciously accessible memories
- Episodic: memories of specific events + the time and situation in which they occurred (e.g. autobiographical memories)
- Semantic: information about general knowledge (facts or concepts), e.g. the capital of Italy

Non-declarative (Implicit, Procedural)
- Skill learning: e.g. riding a bicycle
- Priming: previous exposure to a sensory stimulus affects our reaction times to a later stimulation
- Conditioning: e.g. salivating when you see your favorite food
[Figure: a stream of memories ξ1, ξ2, ξ3, ξ4, ξ5 stored one after another over time]
Formalizing the memory problem
x = w_ij (the weight of the synapse connecting neuron j to neuron i)
… toward more realistic synapses
- Unbounded
- Bounded (binary), offline learning
- Bounded (binary), online learning
Sompolinsky 1986
A learning rule for binary synapses

pre   post   Δw
+1    +1     +1
-1    +1     -1
+1    -1     -1
-1    -1     +1

(Δw = pre × post; each candidate change is applied with probability q)
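A minimal Python sketch of this rule (the function name and the vectorization are mine; the rule itself is the candidate update Δw = pre × post, applied with probability q, with weights kept binary):

```python
import numpy as np

rng = np.random.default_rng(0)

def store_memory(w, xi_pre, xi_post, q):
    """Stochastic Hebbian rule for binary synapses.

    Candidate update: Delta w = pre * post (+1 when pre and post agree,
    -1 when they disagree). Each candidate is applied with probability q,
    and weights stay in {-1, +1} (bounded synapses: no accumulation).
    """
    candidate = np.outer(xi_post, xi_pre)   # candidate[i, j] for synapse j -> i
    apply_mask = rng.random(w.shape) < q    # stochastic selection of synapses
    w[apply_mask] = candidate[apply_mask]
    return w

# Example: 100 neurons, fully connected binary weights at equilibrium
n = 100
w = rng.choice([-1, 1], size=(n, n))
xi = rng.choice([-1, 1], size=n)            # one memory pattern
w = store_memory(w, xi, xi, q=0.5)
```

With q = 1 every synapse is overwritten and w becomes exactly the outer product of the pattern with itself; with q < 1 only a random fraction of the synapses is updated, which is what protects older memories.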
Initial signal (starting from equilibrium):

Pr(wij = Δwij1) = 1/2 + (1/2) q

1/2: probability that Δwij1 = wij by chance
(1/2) q: probability that initially Δwij1 ≠ wij (by chance, 1/2) and the weight is flipped by learning (q)

So the initial signal per synapse is of order q.
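This probability is easy to check numerically. A quick Monte Carlo sketch (trial count and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
q = 0.3
n_trials = 200_000

w = rng.choice([-1, 1], size=n_trials)       # equilibrium weights
target = rng.choice([-1, 1], size=n_trials)  # desired values Delta w_ij^1
flip = rng.random(n_trials) < q              # learning applied with probability q
w_after = np.where(flip, target, w)

p_match = np.mean(w_after == target)
print(p_match)  # close to 1/2 + q/2 = 0.65
```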
[Figure: initial SNR(0) and number of storable memories as a function of the learning rate q. FAST learning (q ~ 1): large SNR(0), few memories. SLOW learning (q → 0): small SNR(0), many memories.]
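The trade-off in this figure can be made concrete with the standard estimate for the balanced binary model, SNR(t) ≈ q √N (1-q)^t: the initial signal grows with q, but each new memory erases a fraction q of the old trace. A sketch (the values of q and N are illustrative):

```python
import numpy as np

def snr(t, q, n_syn):
    # Initial signal ~ q, noise ~ 1/sqrt(N); each subsequent memory
    # overwrites a fraction q of the trace: SNR(t) ~ q*sqrt(N)*(1-q)^t
    return q * np.sqrt(n_syn) * (1 - q) ** t

def lifetime(q, n_syn, t_max=10_000):
    # Number of later memories after which the trace drops below SNR = 1
    t = np.arange(t_max)
    return int(np.argmax(snr(t, q, n_syn) < 1))

n_syn = 1e6
for q in (0.9, 0.01):                        # FAST vs SLOW learning
    print(q, snr(0, q, n_syn), lifetime(q, n_syn))
```

Fast learning (q ≈ 1) gives a very strong initial trace that survives only a handful of new memories; slow learning (q → 0) trades initial strength for a lifetime of order (1/q) log(q √N).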
MULTI-STATE SYNAPSE (BALANCED LTP/LTD)
MULTI-STATE SYNAPSE (IMBALANCED LTP/LTD)
Sparse memories

ξ = 1 with probability f
ξ = 0 with probability 1-f

Learning rule:
Δw = +1 (LTP) with probability q+
Δw = -1 (LTD) with probability q-

Tsodyks Feigelman 1989, Amit Fusi 1994

If q+ = q-, there is a strong imbalance: for sparse patterns (small f) LTP candidates (pre and post both active) are much rarer than LTD candidates, so depression dominates.
Restoring the balance: choose

q+ = q
q- = q f

so that, on average, potentiation and depression cancel for sparse patterns.

Tsodyks Feigelman 1989, Amit Fusi 1994
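The imbalance and its cure can be checked with the per-memory drift of the mean weight. Here I assume LTP candidates occur when pre and post are both active (probability f²) and LTD candidates when pre is active and post is not (probability f(1-f)); under that convention the exact balance is q- = q f/(1-f), of which the slide's q- = q f is the small-f approximation:

```python
f = 0.05   # coding level: Pr(xi = 1)
q = 0.1

def mean_drift(q_plus, q_minus):
    # Expected weight change per synapse per stored memory:
    #   LTP: candidate with probability f**2, applied with prob q_plus
    #   LTD: candidate with probability f*(1-f), applied with prob q_minus
    return q_plus * f**2 - q_minus * f * (1 - f)

print(mean_drift(q, q))                 # q+ = q-: depression dominates (negative)
print(mean_drift(q, q * f / (1 - f)))  # balanced choice: zero drift
```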
A significant improvement, but…
1) Not robust to noise (ξ = 0 must be exactly 0). In the presence of noise: p ~ √Nsyn
2) The amount of information per memory is significantly smaller (it scales like f)
3) Not scalable (for large Nsyn it is very difficult to read out the relevant information)
Ben Dayan Rubin, Fusi, Frontiers in Comp. Neuroscience 2007
[Figure: the SNR(0) vs number-of-memories trade-off across learning rates, from FAST to SLOW. A HETEROGENEOUS population mixing fast and slow synapses achieves both a large initial SNR and a long memory lifetime.]
Fusi, Abbott, Neuron 2005; Roxin, Fusi, PLoS Comp. Biol. 2013
The cascade model

Signal/Noise ~ √N t^(-1)

Fusi, Drew, Abbott, Neuron (2005)

[Figure: a cascade of synaptic states for strong and weak weights; transitions down the cascade (METAPLASTICITY) make a synapse progressively harder to modify, while transitions between strong and weak (PLASTICITY) change the weight itself.]
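A Monte Carlo sketch of a cascade-style population (my own simplified variant, not the exact model of the paper): each synapse has a binary weight and a depth k; events congruent with the current weight push the synapse one level deeper with probability x^k (metaplasticity), while incongruent events flip the weight with probability x^(k-1) and reset it to the most plastic level:

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_cascade(n_syn=200_000, n_levels=5, n_memories=200, x=0.5):
    w = rng.choice([-1, 1], size=n_syn)      # binary weights
    k = np.ones(n_syn, dtype=int)            # cascade depth, 1 = most plastic
    first = None
    signal = []
    for _ in range(n_memories):
        s = rng.choice([-1, 1], size=n_syn)  # desired update for this memory
        if first is None:
            first = s                        # track the very first memory
        u = rng.random(n_syn)
        congruent = s == w
        # metaplasticity: congruent events entrench the synapse one level deeper
        deepen = congruent & (u < x ** k) & (k < n_levels)
        k[deepen] += 1
        # plasticity: incongruent events flip the weight, less likely at depth
        flip = ~congruent & (u < x ** (k - 1))
        w[flip] = s[flip]
        k[flip] = 1
        signal.append(np.mean(w == first))   # fraction still matching memory 1
    return np.array(signal)

sig = simulate_cascade()
```

The fraction matching the first memory starts at 1 and relaxes toward the chance level 1/2, and the mix of plasticity timescales makes the decay much slower than the single exponential of a one-state synapse.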
[Figure: SNR(0), number of memories and learning rate compared for HETEROGENEOUS synapses, HETEROGENEOUS synapses with MEMORY TRANSFER, and VARIABLE synapses (metaplasticity).]
Tomorrow… p ~ Nsyn
Conclusions

Synapses that are bounded and can be modified only with limited precision require special machinery to prevent catastrophic forgetting. Two important principles improve performance:
1) Heterogeneity (multiple timescales)
2) Efficient memory transfer