Slide 1
Lecture 2-3-4
ASSOCIATIONS, RULES, AND MACHINES
CONCEPT OF AN E-MACHINE: simulating symbolic read/write memory by changing dynamical attributes of data in a long-term memory
Victor Eliashberg
Consulting Professor, Stanford University, Department of Electrical Engineering
[Figure: external world W; sensorimotor devices D; computing system B simulating the work of the human nervous system; (W,D) = external system; (D,B) = human-like robot]
Slide 2
"When you have eliminated the impossible, whatever remains, however improbable, must be the truth." (Sherlock Holmes)
SCIENTIFIC ENGINEERING APPROACH
ZERO-APPROXIMATION MODEL
Slide 3
[Figure: the external system's state transition s(ν) → s(ν+1)]
BIOLOGICAL INTERPRETATION
Slide 4
Working memory, episodic memory, and mental imagery
[Figure: systems AS and AM; motor control]
PROBLEM 1: LEARNING TO SIMULATE THE TEACHER. This problem is simple: system AM needs to learn a manageable number of fixed rules.
Slide 5
[Figure: Teacher coupled with system AM; inputs: symbol read (x11, x12, …); outputs: move/type symbol y; current state of mind → next state of mind; selector sel; blocks NM1, …, NMy]
PROBLEM 2: LEARNING TO SIMULATE THE EXTERNAL SYSTEM. This problem is hard: the number of fixed rules needed to represent a RAM with n locations explodes exponentially with n.
Slide 6
[Figure: external system with memory locations 1, 2, …, N_S and output y]
NOTE: System (W,D) shown in slide 3 has the properties of a random access memory (RAM).
Programmable logic array (PLA): a logic implementation of a local associative memory (solves Problem 1 from slide 5)
Slide 7
BASIC CONCEPTS FROM THE AREA OF ARTIFICIAL NEURAL NETWORKS
Slide 8
Typical neuron
A neuron is a very specialized cell. There are several types of neurons, with different shapes and different types of membrane proteins. A biological neuron is a complex functional unit. However, it is helpful to start with a simple artificial neuron (next slide).
Slide 9
The neuron as a first-order linear threshold element
[Figure: neuron with inputs x_1, …, x_k, …, x_m, synaptic gains g_1, …, g_k, …, g_m, postsynaptic potential u, and output y]

Inputs x_k ∈ R′; output y ∈ R′; parameters g_1, …, g_m ∈ R′, where R′ is the set of non-negative real numbers.

Equations:

  τ du/dt + u = Σ_{k=1..m} g_k x_k,  u ≥ 0   (1)
  y = L(u)   (2)
  L(u) = u if u > 0, and 0 otherwise   (3)

A more convenient notation: s = Σ_{k=1..m} g_k x_k, where x_k is the k-th component of the input vector, g_k is the gain (weight) of the k-th synapse, s is the total postsynaptic current, u is the postsynaptic potential, y is the neuron output, and τ is the time constant of the neuron.
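Equations (1)-(3) are easy to sanity-check numerically. The following Python fragment is a minimal sketch (the function name, step size, and parameter values are illustrative assumptions, not part of the slides):

```python
import numpy as np

def neuron_output(x, g, tau=10.0, dt=0.1, steps=500):
    """First-order linear threshold element:
    tau*du/dt + u = sum_k g_k*x_k, y = L(u)."""
    s = float(np.dot(g, x))         # total postsynaptic current
    u = 0.0                         # postsynaptic potential
    for _ in range(steps):
        u += (dt / tau) * (s - u)   # forward-Euler relaxation toward s
    return max(u, 0.0)              # L(u): threshold nonlinearity

# With constant inputs the output settles near s = g . x:
print(neuron_output(x=[1.0, 0.5], g=[0.2, 0.8]))   # ~0.6
```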
Slide 10
Input synaptic matrix, input long-term memory (ILTM), and DECODING
DECODING (computing similarity):

  s_i = Σ_{k=1..m} g^x_{ik} x_k,  i = 1, …, n   (1)

An abstract representation of (1):

  f_dec : X × G^x → S   (2)

[Figure: input signals x_1, …, x_k, …, x_m feed through the ILTM synaptic matrix (g^x_{ik}) to the similarity outputs s_1, …, s_i, …, s_n]

Notation: x = (x_1, …, x_m) are the signals from input neurons (not shown); g^x = (g^x_{ik}), i = 1, …, n, k = 1, …, m, is the matrix of synaptic gains -- we postulate that this matrix represents input long-term memory (ILTM); s = (s_1, …, s_n) is the similarity function.
Slide 11
Layer with inhibitory connections as the mechanism of the winner-take-all (WTA) choice
Slide 12
Note: Small white and black circles represent excitatory and inhibitory synapses, respectively.
[Figure: WTA layer with inputs s_1, …, s_i, …, s_n and outputs d_1, …, d_i, …, d_n; each unit has an excitatory synapse α and an inhibitory synapse β, a postsynaptic potential u_i with time constant τ, and shares the inhibitory signal x_inh produced by interneuron q; dynamical equations (1)-(3)]

Procedural representation (RANDOM CHOICE):

  i_win ∈ { i : s_i = max_j s_j > 0 }, chosen at random with equal probability   (4)
  if (i == i_win) d_i = 1; else d_i = 0   (5)
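A minimal sketch of the procedural representation (4)-(5) in Python (the helper name and the use of numpy are assumptions):

```python
import numpy as np

def wta_choice(s, rng=np.random.default_rng()):
    """Winner-take-all with random tie-breaking, per (4)-(5):
    d[i_win] = 1 for one maximal s_i > 0; all other d[i] = 0."""
    s = np.asarray(s, dtype=float)
    d = np.zeros_like(s)
    if s.max() > 0:
        winners = np.flatnonzero(s == s.max())   # all tying maxima
        d[rng.choice(winners)] = 1.0             # equally probable pick
    return d

print(wta_choice([0.0, 2.0, 2.0, 1.0]))   # one of the two maxima wins
```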
Output synaptic matrix, output long-term memory (OLTM), and ENCODING

ENCODING (data retrieval):

  y_k = Σ_{i=1..n} g^y_{ki} d_i,  k = 1, …, p   (1)

An abstract representation of (1):

  f_enc : D × G^y → Y   (2)

[Figure: WTA signals d_1, …, d_i, …, d_n feed through the OLTM synaptic matrix (g^y_{ki}) to the output neurons y_1, …, y_k, …, y_p]

NOTATION: d = (d_1, …, d_n) are the signals from the WTA layer (see previous slide); g^y = (g^y_{ki}), k = 1, …, p, i = 1, …, n, is the matrix of synaptic gains -- we postulate that this matrix represents output long-term memory (OLTM); y = (y_1, …, y_p) is the output vector.
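Putting the three stages together, one recall pass through the local associative memory can be sketched as follows (a numpy illustration under the shapes defined above; the function name is an assumption):

```python
import numpy as np

def recall(x, Gx, Gy, rng=np.random.default_rng()):
    """DECODING -> RANDOM CHOICE -> ENCODING for one input vector x.
    Gx is the n-by-m ILTM matrix, Gy the p-by-n OLTM matrix."""
    s = Gx @ x                       # decoding: similarity s_i
    d = np.zeros(Gx.shape[0])        # WTA layer output
    if s.max() > 0:
        d[rng.choice(np.flatnonzero(s == s.max()))] = 1.0
    return Gy @ d                    # encoding: y_k = sum_i gy_ki d_i

# Two stored associations x -> y as rows of Gx and columns of Gy:
Gx = np.array([[1.0, 0.0], [0.0, 1.0]])     # ILTM: stored inputs
Gy = np.array([[0.2, 0.9]])                 # OLTM: stored outputs
print(recall(np.array([0.0, 1.0]), Gx, Gy)) # [0.9]
```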
Slide 13
A neural implementation of a local associative memory (solves Problem 1 from slide 5) (WTA.EXE)
Slide 14
A functional model of the previous network (WTA.EXE) [7],[8],[11]:

[Figure: DECODING (addressing by content) through input long-term memory (ILTM) → RANDOM CHOICE → ENCODING (retrieval) through output long-term memory (OLTM); signals S21(i,j) and N1(j); procedures (1)-(5)]
Slide 15
Slide 16
HOW CAN WE SOLVE THE HARD PROBLEM 2 (from slide 6)?
Slide 17
External system as a generalized RAM
Slide 18
Concept of a generalized RAM (GRAM)
Slide 19
Slide 20
Representation of local associative memory in terms of three "one-step" procedures: DECODING, CHOICE, ENCODING
INTERPRETATION PROCEDURE
Slide 21
At the stage of training sel = 1; at the stage of examination sel = 0. System AS simply "tape-records" its experience (x1, x2, x, y)(0..ν).
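The training stage can be sketched as an append-only log (a Python illustration; the names observe and ltm are assumptions):

```python
ltm = []   # long-term memory as an append-only "tape"

def observe(x1, x2, x, y, sel):
    """During training (sel=1) the experience tuple is recorded verbatim;
    during examination (sel=0) LTM is left unchanged."""
    if sel == 1:
        ltm.append((x1, x2, x, y))
```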
Slide 22
NOTE: System (W,D) shown in slide 3 has the properties of a random access memory (RAM).
[Figure: external system modeled as a GRAM with locations 1, 2, …, N_S and output y]
EXPERIMENT 1: Fixed rules and variable rules
Slide 23
EXPERIMENT 1 (continued 1)
Slide 24
EXPERIMENT 1 (continued 2)
Slide 25
A COMPLETE MEMORY MACHINE (CMM) SOLVES PROBLEM 2, but this solution can be easily falsified
Slide 26
Slide 27
GRAM as a state machine: combinatorial explosion of the number of fixed rules
Slide 28
Concept of a primitive E-machine
Slide 29
s(i) > c,  (α < 5)
Effect of a RAM without a RAM buffer
[Figure: G-state vs. E-state. Four LTM locations 1-4 each store the full symbol set {a, b, c} (the G-state); the E-state activates one symbol per location, so the same LTM reads out as different sequences, e.g. (b, a, c, b), (c, b, a, c), or (a, c, b, a)]
Slide 30
EFFECT OF "MANY MACHINES IN ONE"
Slide 31
[Table: the G-state of an LTM with n = 8 locations; the E-state selects which locations are active]

  i:     1  2  3  4  5  6  7  8
  X(1):  0  1  0  1  0  1  0  1
  X(2):  0  0  1  1  0  0  1  1
  y(1):  0  0  0  0  1  1  1  1

By activating one location per input combination, the same table can act as AND, OR, XOR, NAND, NOR, and so on. A table with n = 2^(m+1) locations represents N = 2^(2^m) different m-input, 1-output Boolean functions. Let m = 10. Then n = 2048 and N = 2^1024.
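The counting argument is easy to verify for small m (a Python sketch; the table layout mirrors the slide, with the y = 0 locations first):

```python
from itertools import product

m = 2
# G-state: one location per (input combination, output value) pair
table = [(bits, y) for y in (0, 1) for bits in product((0, 1), repeat=m)]
assert len(table) == 2 ** (m + 1)   # n = 8 locations for m = 2

# An E-state picks one location per input combination, so the table
# represents every m-input 1-output Boolean function:
print(2 ** (2 ** m))                # N = 16 for m = 2; 2**1024 for m = 10
```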
Simulation of GRAM with A = {1, 2} and D = {a, b, ε}
Slide 32
[Table: LTM locations i = 1, …, 4 with rows addr, din, dout, annotated with the values of s(i), e(i), and se(i) at ν = 5]

  se(i) = s(i) (1 + α e(i)),  (α < 5)
  if s(i) > e(i): e(i)(ν+1) = s(i)(ν); else e(i)(ν+1) = c e(i)(ν),  τ = 1/(1 - c)

s(i) is the number of matches in the first two rows. Input (addr, din) = (1, ε) produces s(i) = 1 for i = 1 and i = 2. dout = b is read from i = 2, which has se(i) = max(se).
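The time constant τ = 1/(1 - c) attached to the decay branch can be checked with a line of algebra (a standard approximation, not from the slides): repeated multiplication by c is a sampled exponential, and for c close to 1,

```latex
e(\nu + k) = c^{k}\, e(\nu) = e^{k \ln c}\, e(\nu)
           \approx e^{-k(1-c)}\, e(\nu) = e^{-k/\tau}\, e(\nu),
\qquad \tau = \frac{1}{1-c},
```

using ln c ≈ -(1 - c).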
Slide 33
[Table: the same LTM as in slide 32 -- rows addr, din, dout for locations i = 1, …, 4, together with the ILTM rows g^x(1, 1..4), g^x(2, 1..4) and the OLTM row g^y(1, 1..4) -- annotated with s(i), e(i), and se(i) at ν = 5]

  se(i) = s(i) (1 + α e(i)),  (α < 5)
  if s(i) > e(i): e(i)(ν+1) = s(i)(ν); else e(i)(ν+1) = c e(i)(ν),  τ = 1/(1 - c)
  i_win ∈ { i : se(i) = max(se) > 0 }, chosen at random with equal probability
  y = g^y(i_win)

s(i) is the number of matches in the first two rows. Input (addr, din) = (1, ε) produces s(i) = 1 for i = 1 and i = 2.

Assume that the E-machine starts with the state of LTM shown in the table and doesn't learn any more, so this state remains the same. What changes is the E-state e(1), …, e(4). Assume that at ν = 1, e(1) = … = e(4) = 0. Let us send the input sequence (addr, din)(1..5) = (1,a), (1,b), (2,a), (2,b), (1,ε). As can be verified, at ν = 5 the E-state e(i) and the functions s(i) and se(i) for i = 1, …, 4 are as shown in the table. Accordingly, i_win = 2 and dout = b.
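The worked example can be replayed in a few lines. The sketch below is illustrative: the slides only constrain the parameters, so α = 1 and c = 0.5 are assumed values, and each location's dout is assumed to echo its recorded din:

```python
import numpy as np

ALPHA, C = 1.0, 0.5                      # assumed; slides require alpha < 5
ltm = [("1","a"), ("1","b"), ("2","a"), ("2","b")]  # fixed (addr, din) rows
dout = [din for _, din in ltm]           # dout assumed to echo din
e = np.zeros(len(ltm))                   # E-state: e(i) = 0 at nu = 1

def step(addr, din, rng=np.random.default_rng(0)):
    global e
    # s(i): number of matches in the first two rows ('ε' matches nothing)
    s = np.array([(addr == a) + (din == d) for a, d in ltm], dtype=float)
    se = s * (1.0 + ALPHA * e)           # se(i) = s(i)(1 + alpha*e(i))
    e = np.where(s > e, s, C * e)        # refresh or decay the E-state
    if se.max() > 0:
        i_win = rng.choice(np.flatnonzero(se == se.max()))
        return dout[i_win]
    return None

for addr, din in [("1","a"), ("1","b"), ("2","a"), ("2","b"), ("1","ε")]:
    out = step(addr, din)
print(out)   # 'b': the most recent write to address 1 wins, as on the slide
```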
What can be efficiently computed in this "nonclassical" symbolic/dynamical computational paradigm (call it the E-machine paradigm)?
What computational resources are available in the brain -- especially in the neocortex -- for the implementation of this paradigm?
How can dynamical equations (such as the last equation in slide 29) be efficiently implemented in biologically plausible neural network models?
Slide 34
- Slide 29
- Slide 30
- Slide 31
- Slide 32
- Slide 33
- Slide 34
-
![Page 10: Slide 1](https://reader034.fdocuments.us/reader034/viewer/2022042901/56814e7b550346895dbc1803/html5/thumbnails/10.jpg)
Neuron as the first-order linear threshold element
x1
xk xmgkg1 gm
y
u
Inputs xk RrsquoOutput y Rrsquo
Parameters g1hellip gm Rrsquo
y=L( u )
(1)
(2)
L( u) = u if u gt 00 otherwise
(3)
Equations
dudt + u =
mΣ gkxk k=1
τ
where
u0
y=L( u )
Rrsquo is the set of real non-negative numbers
u
x1xk
xm
g1
gk
gm
y
s
A more convenient notation
xk is the k-th component of input vectorgk is the gain (weight) of the k-th synapses =
Σ gkxk
m
k=1
is the total postsynaptic current
u is the postsynaptic potentialy is the neuron outputτ is the time constant of the neuron
τ
Slide 10
Input synaptic matrix input long-term memory (ILTM) and DECODING
si =
Σ gxikxk
m
k=1
i=1hellipn
(1) fdec X times Gx S (2)
An abstract representation of (1)
x1xkxm
s1 si sn
gx1
k DECODING (computing similarity)
x
s1 si sn
ILTMgxik gx
nk
Notation
x=(x1 xm) are the signals from input neurons (not shown)
gx = (gxik) i=1hellipn k=1hellipm is the matrix of synaptic gains -- we
postulate that this matrix represents input long-term memory (ILTM)s=(s1 sn) is the similarity function
Slide 11
Layer with inhibitory connections as the mechanism of the winner-take-all (WTA) choice
Slide 12
Note Small white and black circles represent excitatory and inhibitory synapses respectively
s1
d1
α
β
si
di
α
β
sn
dn
α
β
uiu1un
xinhq
τττ Equations
(1)
(2)
(3)
iwin i si=max sj gt 0
( j )
if (i == iwin) di=1 else di=0
(4)
(5)
Procedural representationRANDOM CHOICE
s1 si sn
iwin
ldquo ldquo denotes random equally probable choice
Output synaptic matrix output long-term memory (OLTM) and ENCODING
y1ykyp gy
ki
d1 di dn
gykngy
k
1
NOTATION
d=(d1 dm) signals from the WTA layer (see previous slide) gy = (gy
ki) i=1hellipn k=1hellipm is the matrix of synaptic gains -- we postulate that this matrix represents output long-term memory (OLTM)y=(y1 yp) output vector
OLTM
ENCODING (data retrieval)
y
d1 di dn
yk =
Σ gykidi
i=1k=1hellipp (1) fenc D times Gy Y (2)
An abstract representation of (1)n
Slide 13
A neural implementation of a local associative memory (solves problem 1 from slide 5) (WTAEXE)
Slide 14
DECODING
ENCODING
RANDOM CHOICE
Input long-term memory (ILTM)
Output long-term memory (OLTM)
addressing by content
retrieval
S21(Ij)
N1(j)
S21(ij)
A functional model of the previous network [7][8][11]
(WTAEXE)
(1)
(2)
(3)
(4)
(5)
Slide 15
Slide 16
HOW CAN WE SOLVE THE HARD PROBLEM 2 from slide 6
Slide 17
External system as a generalized RAM
Slide 18
Concept of a generalized RAM (GRAM)
Slide 18Slide 19
Slide 20
Representation of local associative memory in terms of three ldquoone-steprdquo procedures DECODING CHOICE ENCODING
INTERPRETATION PROCEDURE
Slide 21
At the stage of training sel=1 at the stage of examination sel=0System AS simply ldquotape-recordsrdquo its experience (x1x2xy)(0ν)
Slide 22
NOTE System (WD) shown in slide 3 has the properties of a random access memory (RAM)
y
1
2
NS
GRAM
EXPERIMENT 1 Fixed rules and variable rules
Slide 23
EXPERIMENT 1 (continued 1)
Slide 24
EXPERIMENT 1 (continued 2)
Slide 25
A COMPLETE MEMORY MACHINE (CMM) SOLVES PROBLEM 2 but this solution can be easily falsified
Slide 26
Slide 27
GRAM as a state machine combinatorial explosion of the number of fixed rules
Slide 28
Concept of a primitive E-machine
Slide 29
s(i) gt c
(αlt 5)
Effect of a RAM wo a RAM buffer
abc
abc
abc
abc
1 2 3 4
b a c b
1 2 3 4
abc
abc
abc
abc
1 2 3 4
c b a c
1 2 3 4
a b c a b c a b c a b c
1 2 3 4
G-state
E-state
abc
abc
abc
abc
1 2 3 4
a c b a
1 2 3 4
Slide 30
EFFECT OF ldquoMANY MACHINES IN ONErdquo
Slide 31
0
00
1
00
0
10
1
10
1 2 3 4
0
01
1
01
0
11
1
11
5 6 7 8
X(1)
X(2)
y(1)
AND
OR
XOR
NAND
NOR
N=2m2
A table with n=2 m+1
represents
different m-input 1-output Boolean functions
Let m=10 Then n=2048
and N=21024
G-state
E-state
n=8 locations of LTM
Simulation of GRAM with A=12 and D=abε
Slide 32
a
a1
b
b1
b
b2
a
a2
1 2 3 4
b
1
a
2
5 6 7
addr
din
dout
i
s(i)
ν = 5
e(i)
se(i)se(i) = s(i) ( 1+a e(i) ) (alt5)
if ( s(i)gte(i) ) e(i)(ν+1) = s(i)(ν)
else e(i)(ν+1) = c e(i)(ν) τ=1(1-c)
s (i) is the number of matches in the first two rows Input (addrdin) = (1ε) produces s(i)=1 for i=1 and i=2
dout=b is read from i=2 that has se(i)=max(se)
Slide 33
i
s(i)
e(i)
se(i)
a
a1
b
b1
b
b2
a
a2
1 2 3 4
addr
din
dout
ν = 5se(i) = s(i) ( 1+a e(i) ) (alt5)
if ( s(i)gte(i) ) e(i)(ν+1) = s(i)(ν)
else e(i)(ν+1) = c e(i)(ν) τ=1(1-c)
s (i) is the number of matches in the first two rows Input (addrdin) = (1ε) produces s(i)=1 for i=1 and i=2
iwin i se(i)=max(se)gt0
Assume that the E-machine starts with the state of LTM shown in the table and doesnrsquot learn more so this state remains the same What changes is the E-state e(1)hellipe(4) Assume that at ν=1 e(1)=e(4)=0 Let us send the input sequence (addrdin)(15) = (1a) (1b)(2a)(2b)(1ε) As can be verified at ν = 5 the state e(i) and functions s(i) and se(i) for i=14 are as shown below Accordingly iwin=2 and dout=b
y =gy(iwin) (alt5)
gx(114)
gx(214)
gy(114)
What can be efficiently computed in this ldquononclassicalrdquo symbolicdynamical computational paradigm (call it the E-machine paradigm)
What computational resources are available in the brain -- especially in the neocortex -- for the implementation of this paradigm
How can dynamical equations (such as the last equation in slide 29) be efficiently implemented in biologically plausible neural network models
Slide 34
- Slide 1
- Slide 2
- Slide 3
- Slide 4
- Slide 5
- Slide 6
- Slide 7
- Slide 8
- Slide 9
- Slide 10
- Slide 11
- Slide 12
- Slide 13
- Slide 14
- Slide 15
- Slide 16
- Slide 17
- Slide 18
- Slide 19
- Slide 20
- Slide 21
- Slide 22
- Slide 23
- Slide 24
- Slide 25
- Slide 26
- Slide 27
- Slide 28
- Slide 29
- Slide 30
- Slide 31
- Slide 32
- Slide 33
- Slide 34
-
![Page 11: Slide 1](https://reader034.fdocuments.us/reader034/viewer/2022042901/56814e7b550346895dbc1803/html5/thumbnails/11.jpg)
Input synaptic matrix input long-term memory (ILTM) and DECODING
si =
Σ gxikxk
m
k=1
i=1hellipn
(1) fdec X times Gx S (2)
An abstract representation of (1)
x1xkxm
s1 si sn
gx1
k DECODING (computing similarity)
x
s1 si sn
ILTMgxik gx
nk
Notation
x=(x1 xm) are the signals from input neurons (not shown)
gx = (gxik) i=1hellipn k=1hellipm is the matrix of synaptic gains -- we
postulate that this matrix represents input long-term memory (ILTM)s=(s1 sn) is the similarity function
Slide 11
Layer with inhibitory connections as the mechanism of the winner-take-all (WTA) choice
Slide 12
Note Small white and black circles represent excitatory and inhibitory synapses respectively
s1
d1
α
β
si
di
α
β
sn
dn
α
β
uiu1un
xinhq
τττ Equations
(1)
(2)
(3)
iwin i si=max sj gt 0
( j )
if (i == iwin) di=1 else di=0
(4)
(5)
Procedural representationRANDOM CHOICE
s1 si sn
iwin
ldquo ldquo denotes random equally probable choice
Output synaptic matrix output long-term memory (OLTM) and ENCODING
y1ykyp gy
ki
d1 di dn
gykngy
k
1
NOTATION
d=(d1 dm) signals from the WTA layer (see previous slide) gy = (gy
ki) i=1hellipn k=1hellipm is the matrix of synaptic gains -- we postulate that this matrix represents output long-term memory (OLTM)y=(y1 yp) output vector
OLTM
ENCODING (data retrieval)
y
d1 di dn
yk =
Σ gykidi
i=1k=1hellipp (1) fenc D times Gy Y (2)
An abstract representation of (1)n
Slide 13
A neural implementation of a local associative memory (solves problem 1 from slide 5) (WTAEXE)
Slide 14
DECODING
ENCODING
RANDOM CHOICE
Input long-term memory (ILTM)
Output long-term memory (OLTM)
addressing by content
retrieval
S21(Ij)
N1(j)
S21(ij)
A functional model of the previous network [7][8][11]
(WTAEXE)
(1)
(2)
(3)
(4)
(5)
Slide 15
Slide 16
HOW CAN WE SOLVE THE HARD PROBLEM 2 from slide 6
Slide 17
External system as a generalized RAM
Slide 18
Concept of a generalized RAM (GRAM)
Slide 18Slide 19
Slide 20
Representation of local associative memory in terms of three ldquoone-steprdquo procedures DECODING CHOICE ENCODING
INTERPRETATION PROCEDURE
Slide 21
At the stage of training sel=1 at the stage of examination sel=0System AS simply ldquotape-recordsrdquo its experience (x1x2xy)(0ν)
Slide 22
NOTE System (WD) shown in slide 3 has the properties of a random access memory (RAM)
y
1
2
NS
GRAM
EXPERIMENT 1 Fixed rules and variable rules
Slide 23
EXPERIMENT 1 (continued 1)
Slide 24
EXPERIMENT 1 (continued 2)
Slide 25
A COMPLETE MEMORY MACHINE (CMM) SOLVES PROBLEM 2 but this solution can be easily falsified
Slide 26
Slide 27
GRAM as a state machine combinatorial explosion of the number of fixed rules
Slide 28
Concept of a primitive E-machine
Slide 29
s(i) gt c
(αlt 5)
Effect of a RAM wo a RAM buffer
abc
abc
abc
abc
1 2 3 4
b a c b
1 2 3 4
abc
abc
abc
abc
1 2 3 4
c b a c
1 2 3 4
a b c a b c a b c a b c
1 2 3 4
G-state
E-state
abc
abc
abc
abc
1 2 3 4
a c b a
1 2 3 4
Slide 30
EFFECT OF ldquoMANY MACHINES IN ONErdquo
Slide 31
0
00
1
00
0
10
1
10
1 2 3 4
0
01
1
01
0
11
1
11
5 6 7 8
X(1)
X(2)
y(1)
AND
OR
XOR
NAND
NOR
N=2m2
A table with n=2 m+1
represents
different m-input 1-output Boolean functions
Let m=10 Then n=2048
and N=21024
G-state
E-state
n=8 locations of LTM
Simulation of GRAM with A=12 and D=abε
Slide 32
a
a1
b
b1
b
b2
a
a2
1 2 3 4
b
1
a
2
5 6 7
addr
din
dout
i
s(i)
ν = 5
e(i)
se(i)se(i) = s(i) ( 1+a e(i) ) (alt5)
if ( s(i)gte(i) ) e(i)(ν+1) = s(i)(ν)
else e(i)(ν+1) = c e(i)(ν) τ=1(1-c)
s (i) is the number of matches in the first two rows Input (addrdin) = (1ε) produces s(i)=1 for i=1 and i=2
dout=b is read from i=2 that has se(i)=max(se)
Slide 33
i
s(i)
e(i)
se(i)
a
a1
b
b1
b
b2
a
a2
1 2 3 4
addr
din
dout
ν = 5se(i) = s(i) ( 1+a e(i) ) (alt5)
if ( s(i)gte(i) ) e(i)(ν+1) = s(i)(ν)
else e(i)(ν+1) = c e(i)(ν) τ=1(1-c)
s (i) is the number of matches in the first two rows Input (addrdin) = (1ε) produces s(i)=1 for i=1 and i=2
iwin i se(i)=max(se)gt0
Assume that the E-machine starts with the state of LTM shown in the table and doesnrsquot learn more so this state remains the same What changes is the E-state e(1)hellipe(4) Assume that at ν=1 e(1)=e(4)=0 Let us send the input sequence (addrdin)(15) = (1a) (1b)(2a)(2b)(1ε) As can be verified at ν = 5 the state e(i) and functions s(i) and se(i) for i=14 are as shown below Accordingly iwin=2 and dout=b
y =gy(iwin) (alt5)
gx(114)
gx(214)
gy(114)
What can be efficiently computed in this ldquononclassicalrdquo symbolicdynamical computational paradigm (call it the E-machine paradigm)
What computational resources are available in the brain -- especially in the neocortex -- for the implementation of this paradigm
How can dynamical equations (such as the last equation in slide 29) be efficiently implemented in biologically plausible neural network models
Slide 34
- Slide 1
- Slide 2
- Slide 3
- Slide 4
- Slide 5
- Slide 6
- Slide 7
- Slide 8
- Slide 9
- Slide 10
- Slide 11
- Slide 12
- Slide 13
- Slide 14
- Slide 15
- Slide 16
- Slide 17
- Slide 18
- Slide 19
- Slide 20
- Slide 21
- Slide 22
- Slide 23
- Slide 24
- Slide 25
- Slide 26
- Slide 27
- Slide 28
- Slide 29
- Slide 30
- Slide 31
- Slide 32
- Slide 33
- Slide 34
-
![Page 12: Slide 1](https://reader034.fdocuments.us/reader034/viewer/2022042901/56814e7b550346895dbc1803/html5/thumbnails/12.jpg)
Layer with inhibitory connections as the mechanism of the winner-take-all (WTA) choice
Slide 12
Note Small white and black circles represent excitatory and inhibitory synapses respectively
s1
d1
α
β
si
di
α
β
sn
dn
α
β
uiu1un
xinhq
τττ Equations
(1)
(2)
(3)
iwin i si=max sj gt 0
( j )
if (i == iwin) di=1 else di=0
(4)
(5)
Procedural representationRANDOM CHOICE
s1 si sn
iwin
ldquo ldquo denotes random equally probable choice
Output synaptic matrix output long-term memory (OLTM) and ENCODING
y1ykyp gy
ki
d1 di dn
gykngy
k
1
NOTATION
d=(d1 dm) signals from the WTA layer (see previous slide) gy = (gy
ki) i=1hellipn k=1hellipm is the matrix of synaptic gains -- we postulate that this matrix represents output long-term memory (OLTM)y=(y1 yp) output vector
OLTM
ENCODING (data retrieval)
y
d1 di dn
yk =
Σ gykidi
i=1k=1hellipp (1) fenc D times Gy Y (2)
An abstract representation of (1)n
Slide 13
A neural implementation of a local associative memory (solves problem 1 from slide 5) (WTAEXE)
Slide 14
DECODING
ENCODING
RANDOM CHOICE
Input long-term memory (ILTM)
Output long-term memory (OLTM)
addressing by content
retrieval
S21(Ij)
N1(j)
S21(ij)
A functional model of the previous network [7][8][11]
(WTAEXE)
(1)
(2)
(3)
(4)
(5)
Slide 15
Slide 16
HOW CAN WE SOLVE THE HARD PROBLEM 2 from slide 6
Slide 17
External system as a generalized RAM
Slide 18
Concept of a generalized RAM (GRAM)
Slide 18Slide 19
Slide 20
Representation of local associative memory in terms of three ldquoone-steprdquo procedures DECODING CHOICE ENCODING
INTERPRETATION PROCEDURE
Slide 21
At the stage of training sel=1 at the stage of examination sel=0System AS simply ldquotape-recordsrdquo its experience (x1x2xy)(0ν)
Slide 22
NOTE System (WD) shown in slide 3 has the properties of a random access memory (RAM)
y
1
2
NS
GRAM
EXPERIMENT 1 Fixed rules and variable rules
Slide 23
EXPERIMENT 1 (continued 1)
Slide 24
EXPERIMENT 1 (continued 2)
Slide 25
A COMPLETE MEMORY MACHINE (CMM) SOLVES PROBLEM 2 but this solution can be easily falsified
Slide 26
Slide 27
GRAM as a state machine combinatorial explosion of the number of fixed rules
Slide 28
Concept of a primitive E-machine
Slide 29
s(i) gt c
(αlt 5)
Effect of a RAM wo a RAM buffer
abc
abc
abc
abc
1 2 3 4
b a c b
1 2 3 4
abc
abc
abc
abc
1 2 3 4
c b a c
1 2 3 4
a b c a b c a b c a b c
1 2 3 4
G-state
E-state
abc
abc
abc
abc
1 2 3 4
a c b a
1 2 3 4
Slide 30
EFFECT OF ldquoMANY MACHINES IN ONErdquo
Slide 31
0
00
1
00
0
10
1
10
1 2 3 4
0
01
1
01
0
11
1
11
5 6 7 8
X(1)
X(2)
y(1)
AND
OR
XOR
NAND
NOR
N=2m2
A table with n=2 m+1
represents
different m-input 1-output Boolean functions
Let m=10 Then n=2048
and N=21024
G-state
E-state
n=8 locations of LTM
Simulation of GRAM with A=12 and D=abε
Slide 32
a
a1
b
b1
b
b2
a
a2
1 2 3 4
b
1
a
2
5 6 7
addr
din
dout
i
s(i)
ν = 5
e(i)
se(i)se(i) = s(i) ( 1+a e(i) ) (alt5)
if ( s(i)gte(i) ) e(i)(ν+1) = s(i)(ν)
else e(i)(ν+1) = c e(i)(ν) τ=1(1-c)
s (i) is the number of matches in the first two rows Input (addrdin) = (1ε) produces s(i)=1 for i=1 and i=2
dout=b is read from i=2 that has se(i)=max(se)
Slide 33
i
s(i)
e(i)
se(i)
a
a1
b
b1
b
b2
a
a2
1 2 3 4
addr
din
dout
ν = 5se(i) = s(i) ( 1+a e(i) ) (alt5)
if ( s(i)gte(i) ) e(i)(ν+1) = s(i)(ν)
else e(i)(ν+1) = c e(i)(ν) τ=1(1-c)
s (i) is the number of matches in the first two rows Input (addrdin) = (1ε) produces s(i)=1 for i=1 and i=2
iwin i se(i)=max(se)gt0
Assume that the E-machine starts with the state of LTM shown in the table and doesnrsquot learn more so this state remains the same What changes is the E-state e(1)hellipe(4) Assume that at ν=1 e(1)=e(4)=0 Let us send the input sequence (addrdin)(15) = (1a) (1b)(2a)(2b)(1ε) As can be verified at ν = 5 the state e(i) and functions s(i) and se(i) for i=14 are as shown below Accordingly iwin=2 and dout=b
y =gy(iwin) (alt5)
gx(114)
gx(214)
gy(114)
What can be efficiently computed in this ldquononclassicalrdquo symbolicdynamical computational paradigm (call it the E-machine paradigm)
What computational resources are available in the brain -- especially in the neocortex -- for the implementation of this paradigm
How can dynamical equations (such as the last equation in slide 29) be efficiently implemented in biologically plausible neural network models
Slide 34
- Slide 1
- Slide 2
- Slide 3
- Slide 4
- Slide 5
- Slide 6
- Slide 7
- Slide 8
- Slide 9
- Slide 10
- Slide 11
- Slide 12
- Slide 13
- Slide 14
- Slide 15
- Slide 16
- Slide 17
- Slide 18
- Slide 19
- Slide 20
- Slide 21
- Slide 22
- Slide 23
- Slide 24
- Slide 25
- Slide 26
- Slide 27
- Slide 28
- Slide 29
- Slide 30
- Slide 31
- Slide 32
- Slide 33
- Slide 34
-
![Page 13: Slide 1](https://reader034.fdocuments.us/reader034/viewer/2022042901/56814e7b550346895dbc1803/html5/thumbnails/13.jpg)
Output synaptic matrix output long-term memory (OLTM) and ENCODING
y1ykyp gy
ki
d1 di dn
gykngy
k
1
NOTATION
d=(d1 dm) signals from the WTA layer (see previous slide) gy = (gy
ki) i=1hellipn k=1hellipm is the matrix of synaptic gains -- we postulate that this matrix represents output long-term memory (OLTM)y=(y1 yp) output vector
OLTM
ENCODING (data retrieval)
y
d1 di dn
yk =
Σ gykidi
i=1k=1hellipp (1) fenc D times Gy Y (2)
An abstract representation of (1)n
Slide 13
A neural implementation of a local associative memory (solves problem 1 from slide 5) (WTAEXE)
Slide 14
DECODING
ENCODING
RANDOM CHOICE
Input long-term memory (ILTM)
Output long-term memory (OLTM)
addressing by content
retrieval
S21(Ij)
N1(j)
S21(ij)
A functional model of the previous network [7][8][11]
(WTAEXE)
(1)
(2)
(3)
(4)
(5)
Slide 15
Slide 16
HOW CAN WE SOLVE THE HARD PROBLEM 2 from slide 6
Slide 17
External system as a generalized RAM
Slide 18
Concept of a generalized RAM (GRAM)
Slide 18Slide 19
Slide 20
Representation of local associative memory in terms of three ldquoone-steprdquo procedures DECODING CHOICE ENCODING
INTERPRETATION PROCEDURE
Slide 21
At the stage of training sel=1 at the stage of examination sel=0System AS simply ldquotape-recordsrdquo its experience (x1x2xy)(0ν)
Slide 22
NOTE System (WD) shown in slide 3 has the properties of a random access memory (RAM)
y
1
2
NS
GRAM
EXPERIMENT 1 Fixed rules and variable rules
Slide 23
EXPERIMENT 1 (continued 1)
Slide 24
EXPERIMENT 1 (continued 2)
Slide 25
A COMPLETE MEMORY MACHINE (CMM) SOLVES PROBLEM 2 but this solution can be easily falsified
Slide 26
Slide 27
GRAM as a state machine combinatorial explosion of the number of fixed rules
Slide 28
Concept of a primitive E-machine
Slide 29
s(i) gt c
(αlt 5)
Effect of a RAM wo a RAM buffer
abc
abc
abc
abc
1 2 3 4
b a c b
1 2 3 4
abc
abc
abc
abc
1 2 3 4
c b a c
1 2 3 4
a b c a b c a b c a b c
1 2 3 4
G-state
E-state
abc
abc
abc
abc
1 2 3 4
a c b a
1 2 3 4
Slide 30
EFFECT OF ldquoMANY MACHINES IN ONErdquo
Slide 31
0
00
1
00
0
10
1
10
1 2 3 4
0
01
1
01
0
11
1
11
5 6 7 8
X(1)
X(2)
y(1)
AND
OR
XOR
NAND
NOR
N=2m2
A table with n=2 m+1
represents
different m-input 1-output Boolean functions
Let m=10 Then n=2048
and N=21024
G-state
E-state
n=8 locations of LTM
Simulation of GRAM with A=12 and D=abε
Slide 32
a
a1
b
b1
b
b2
a
a2
1 2 3 4
b
1
a
2
5 6 7
addr
din
dout
i
s(i)
ν = 5
e(i)
se(i)se(i) = s(i) ( 1+a e(i) ) (alt5)
if ( s(i)gte(i) ) e(i)(ν+1) = s(i)(ν)
else e(i)(ν+1) = c e(i)(ν) τ=1(1-c)
s (i) is the number of matches in the first two rows Input (addrdin) = (1ε) produces s(i)=1 for i=1 and i=2
dout=b is read from i=2 that has se(i)=max(se)
Slide 33
i
s(i)
e(i)
se(i)
a
a1
b
b1
b
b2
a
a2
1 2 3 4
addr
din
dout
ν = 5se(i) = s(i) ( 1+a e(i) ) (alt5)
if ( s(i)gte(i) ) e(i)(ν+1) = s(i)(ν)
else e(i)(ν+1) = c e(i)(ν) τ=1(1-c)
s (i) is the number of matches in the first two rows Input (addrdin) = (1ε) produces s(i)=1 for i=1 and i=2
iwin i se(i)=max(se)gt0
Assume that the E-machine starts with the state of LTM shown in the table and doesnrsquot learn more so this state remains the same What changes is the E-state e(1)hellipe(4) Assume that at ν=1 e(1)=e(4)=0 Let us send the input sequence (addrdin)(15) = (1a) (1b)(2a)(2b)(1ε) As can be verified at ν = 5 the state e(i) and functions s(i) and se(i) for i=14 are as shown below Accordingly iwin=2 and dout=b
y =gy(iwin) (alt5)
gx(114)
gx(214)
gy(114)
What can be efficiently computed in this ldquononclassicalrdquo symbolicdynamical computational paradigm (call it the E-machine paradigm)
What computational resources are available in the brain -- especially in the neocortex -- for the implementation of this paradigm
How can dynamical equations (such as the last equation in slide 29) be efficiently implemented in biologically plausible neural network models
Slide 34
- Slide 1
- Slide 2
- Slide 3
- Slide 4
- Slide 5
- Slide 6
- Slide 7
- Slide 8
- Slide 9
- Slide 10
- Slide 11
- Slide 12
- Slide 13
- Slide 14
- Slide 15
- Slide 16
- Slide 17
- Slide 18
- Slide 19
- Slide 20
- Slide 21
- Slide 22
- Slide 23
- Slide 24
- Slide 25
- Slide 26
- Slide 27
- Slide 28
- Slide 29
- Slide 30
- Slide 31
- Slide 32
- Slide 33
- Slide 34
-
![Page 14: Slide 1](https://reader034.fdocuments.us/reader034/viewer/2022042901/56814e7b550346895dbc1803/html5/thumbnails/14.jpg)
A neural implementation of a local associative memory (solves problem 1 from slide 5) (WTAEXE)
Slide 14
DECODING
ENCODING
RANDOM CHOICE
Input long-term memory (ILTM)
Output long-term memory (OLTM)
addressing by content
retrieval
S21(Ij)
N1(j)
S21(ij)
A functional model of the previous network [7][8][11]
(WTAEXE)
(1)
(2)
(3)
(4)
(5)
Slide 15
Slide 16
HOW CAN WE SOLVE THE HARD PROBLEM 2 from slide 6
Slide 17
External system as a generalized RAM
Slide 18
Concept of a generalized RAM (GRAM)
Slide 18Slide 19
Slide 20
Representation of local associative memory in terms of three ldquoone-steprdquo procedures DECODING CHOICE ENCODING
INTERPRETATION PROCEDURE
Slide 21
At the stage of training sel=1 at the stage of examination sel=0System AS simply ldquotape-recordsrdquo its experience (x1x2xy)(0ν)
Slide 22
NOTE System (WD) shown in slide 3 has the properties of a random access memory (RAM)
y
1
2
NS
GRAM
EXPERIMENT 1 Fixed rules and variable rules
Slide 23
EXPERIMENT 1 (continued 1)
Slide 24
EXPERIMENT 1 (continued 2)
Slide 25
A COMPLETE MEMORY MACHINE (CMM) SOLVES PROBLEM 2 but this solution can be easily falsified
Slide 26
Slide 27
GRAM as a state machine combinatorial explosion of the number of fixed rules
Slide 28
Concept of a primitive E-machine
Slide 29
s(i) gt c
(αlt 5)
Effect of a RAM wo a RAM buffer
abc
abc
abc
abc
1 2 3 4
b a c b
1 2 3 4
abc
abc
abc
abc
1 2 3 4
c b a c
1 2 3 4
a b c a b c a b c a b c
1 2 3 4
G-state
E-state
abc
abc
abc
abc
1 2 3 4
a c b a
1 2 3 4
Slide 30
EFFECT OF ldquoMANY MACHINES IN ONErdquo
Slide 31
0
00
1
00
0
10
1
10
1 2 3 4
0
01
1
01
0
11
1
11
5 6 7 8
X(1)
X(2)
y(1)
AND
OR
XOR
NAND
NOR
N=2m2
A table with n=2 m+1
represents
different m-input 1-output Boolean functions
Let m=10 Then n=2048
and N=21024
G-state
E-state
n=8 locations of LTM
Simulation of GRAM with A=12 and D=abε
Slide 32
a
a1
b
b1
b
b2
a
a2
1 2 3 4
b
1
a
2
5 6 7
addr
din
dout
i
s(i)
ν = 5
e(i)
se(i)se(i) = s(i) ( 1+a e(i) ) (alt5)
if ( s(i)gte(i) ) e(i)(ν+1) = s(i)(ν)
else e(i)(ν+1) = c e(i)(ν) τ=1(1-c)
s (i) is the number of matches in the first two rows Input (addrdin) = (1ε) produces s(i)=1 for i=1 and i=2
dout=b is read from i=2 that has se(i)=max(se)
Slide 33
i
s(i)
e(i)
se(i)
a
a1
b
b1
b
b2
a
a2
1 2 3 4
addr
din
dout
ν = 5se(i) = s(i) ( 1+a e(i) ) (alt5)
if ( s(i)gte(i) ) e(i)(ν+1) = s(i)(ν)
else e(i)(ν+1) = c e(i)(ν) τ=1(1-c)
s (i) is the number of matches in the first two rows Input (addrdin) = (1ε) produces s(i)=1 for i=1 and i=2
iwin i se(i)=max(se)gt0
Assume that the E-machine starts with the state of LTM shown in the table and doesnrsquot learn more so this state remains the same What changes is the E-state e(1)hellipe(4) Assume that at ν=1 e(1)=e(4)=0 Let us send the input sequence (addrdin)(15) = (1a) (1b)(2a)(2b)(1ε) As can be verified at ν = 5 the state e(i) and functions s(i) and se(i) for i=14 are as shown below Accordingly iwin=2 and dout=b
y =gy(iwin) (alt5)
gx(114)
gx(214)
gy(114)
What can be efficiently computed in this ldquononclassicalrdquo symbolicdynamical computational paradigm (call it the E-machine paradigm)
What computational resources are available in the brain -- especially in the neocortex -- for the implementation of this paradigm
How can dynamical equations (such as the last equation in slide 29) be efficiently implemented in biologically plausible neural network models
Slide 34
- Slide 1
- Slide 2
- Slide 3
- Slide 4
- Slide 5
- Slide 6
- Slide 7
- Slide 8
- Slide 9
- Slide 10
- Slide 11
- Slide 12
- Slide 13
- Slide 14
- Slide 15
- Slide 16
- Slide 17
- Slide 18
- Slide 19
- Slide 20
- Slide 21
- Slide 22
- Slide 23
- Slide 24
- Slide 25
- Slide 26
- Slide 27
- Slide 28
- Slide 29
- Slide 30
- Slide 31
- Slide 32
- Slide 33
- Slide 34
-
![Page 15: Slide 1](https://reader034.fdocuments.us/reader034/viewer/2022042901/56814e7b550346895dbc1803/html5/thumbnails/15.jpg)
A functional model of the previous network [7][8][11]
(WTAEXE)
(1)
(2)
(3)
(4)
(5)
Slide 15
Slide 16
HOW CAN WE SOLVE THE HARD PROBLEM 2 from slide 6
Slide 17
External system as a generalized RAM
Slide 18
Concept of a generalized RAM (GRAM)
Slide 18Slide 19
Slide 20
Representation of local associative memory in terms of three ldquoone-steprdquo procedures DECODING CHOICE ENCODING
INTERPRETATION PROCEDURE
Slide 21
At the stage of training sel=1 at the stage of examination sel=0System AS simply ldquotape-recordsrdquo its experience (x1x2xy)(0ν)
Slide 22
NOTE System (WD) shown in slide 3 has the properties of a random access memory (RAM)
y
1
2
NS
GRAM
EXPERIMENT 1 Fixed rules and variable rules
Slide 23
EXPERIMENT 1 (continued 1)
Slide 24
EXPERIMENT 1 (continued 2)
Slide 25
A COMPLETE MEMORY MACHINE (CMM) SOLVES PROBLEM 2 but this solution can be easily falsified
Slide 26
Slide 27
GRAM as a state machine combinatorial explosion of the number of fixed rules
Slide 28
Concept of a primitive E-machine
Slide 29
s(i) gt c
(αlt 5)
Effect of a RAM wo a RAM buffer
abc
abc
abc
abc
1 2 3 4
b a c b
1 2 3 4
abc
abc
abc
abc
1 2 3 4
c b a c
1 2 3 4
a b c a b c a b c a b c
1 2 3 4
G-state
E-state
abc
abc
abc
abc
1 2 3 4
a c b a
1 2 3 4
Slide 30
EFFECT OF ldquoMANY MACHINES IN ONErdquo
Slide 31
0
00
1
00
0
10
1
10
1 2 3 4
0
01
1
01
0
11
1
11
5 6 7 8
X(1)
X(2)
y(1)
AND
OR
XOR
NAND
NOR
N=2m2
A table with n=2 m+1
represents
different m-input 1-output Boolean functions
Let m=10 Then n=2048
and N=21024
G-state
E-state
n=8 locations of LTM
Simulation of GRAM with A=12 and D=abε
Slide 32
a
a1
b
b1
b
b2
a
a2
1 2 3 4
b
1
a
2
5 6 7
addr
din
dout
i
s(i)
ν = 5
e(i)
se(i)se(i) = s(i) ( 1+a e(i) ) (alt5)
if ( s(i)gte(i) ) e(i)(ν+1) = s(i)(ν)
else e(i)(ν+1) = c e(i)(ν) τ=1(1-c)
s (i) is the number of matches in the first two rows Input (addrdin) = (1ε) produces s(i)=1 for i=1 and i=2
dout=b is read from i=2 that has se(i)=max(se)
Slide 33
i
s(i)
e(i)
se(i)
a
a1
b
b1
b
b2
a
a2
1 2 3 4
addr
din
dout
ν = 5se(i) = s(i) ( 1+a e(i) ) (alt5)
if ( s(i)gte(i) ) e(i)(ν+1) = s(i)(ν)
else e(i)(ν+1) = c e(i)(ν) τ=1(1-c)
s (i) is the number of matches in the first two rows Input (addrdin) = (1ε) produces s(i)=1 for i=1 and i=2
iwin i se(i)=max(se)gt0
Assume that the E-machine starts with the state of LTM shown in the table and doesnrsquot learn more so this state remains the same What changes is the E-state e(1)hellipe(4) Assume that at ν=1 e(1)=e(4)=0 Let us send the input sequence (addrdin)(15) = (1a) (1b)(2a)(2b)(1ε) As can be verified at ν = 5 the state e(i) and functions s(i) and se(i) for i=14 are as shown below Accordingly iwin=2 and dout=b
y =gy(iwin) (alt5)
gx(114)
gx(214)
gy(114)
What can be efficiently computed in this ldquononclassicalrdquo symbolicdynamical computational paradigm (call it the E-machine paradigm)
What computational resources are available in the brain -- especially in the neocortex -- for the implementation of this paradigm
How can dynamical equations (such as the last equation in slide 29) be efficiently implemented in biologically plausible neural network models
Slide 34
- Slide 1
- Slide 2
- Slide 3
- Slide 4
- Slide 5
- Slide 6
- Slide 7
- Slide 8
- Slide 9
- Slide 10
- Slide 11
- Slide 12
- Slide 13
- Slide 14
- Slide 15
- Slide 16
- Slide 17
- Slide 18
- Slide 19
- Slide 20
- Slide 21
- Slide 22
- Slide 23
- Slide 24
- Slide 25
- Slide 26
- Slide 27
- Slide 28
- Slide 29
- Slide 30
- Slide 31
- Slide 32
- Slide 33
- Slide 34
-
![Page 16: Slide 1](https://reader034.fdocuments.us/reader034/viewer/2022042901/56814e7b550346895dbc1803/html5/thumbnails/16.jpg)
Slide 16
HOW CAN WE SOLVE THE HARD PROBLEM 2 from slide 6
Slide 17
External system as a generalized RAM
Slide 18
Concept of a generalized RAM (GRAM)
Slide 18Slide 19
Slide 20
Representation of local associative memory in terms of three ldquoone-steprdquo procedures DECODING CHOICE ENCODING
INTERPRETATION PROCEDURE
Slide 21
At the stage of training sel=1 at the stage of examination sel=0System AS simply ldquotape-recordsrdquo its experience (x1x2xy)(0ν)
Slide 22
NOTE System (WD) shown in slide 3 has the properties of a random access memory (RAM)
y
1
2
NS
GRAM
EXPERIMENT 1 Fixed rules and variable rules
Slide 23
EXPERIMENT 1 (continued 1)
Slide 24
EXPERIMENT 1 (continued 2)
Slide 25
A COMPLETE MEMORY MACHINE (CMM) SOLVES PROBLEM 2 but this solution can be easily falsified
Slide 26
Slide 27
GRAM as a state machine combinatorial explosion of the number of fixed rules
Slide 28
Concept of a primitive E-machine
Slide 29
s(i) gt c
(αlt 5)
Effect of a RAM wo a RAM buffer
abc
abc
abc
abc
1 2 3 4
b a c b
1 2 3 4
abc
abc
abc
abc
1 2 3 4
c b a c
1 2 3 4
a b c a b c a b c a b c
1 2 3 4
G-state
E-state
abc
abc
abc
abc
1 2 3 4
a c b a
1 2 3 4
Slide 30
EFFECT OF ldquoMANY MACHINES IN ONErdquo
Slide 31
0
00
1
00
0
10
1
10
1 2 3 4
0
01
1
01
0
11
1
11
5 6 7 8
X(1)
X(2)
y(1)
AND
OR
XOR
NAND
NOR
N=2m2
A table with n=2 m+1
represents
different m-input 1-output Boolean functions
Let m=10 Then n=2048
and N=21024
G-state
E-state
n=8 locations of LTM
Simulation of GRAM with A=12 and D=abε
Slide 32
a
a1
b
b1
b
b2
a
a2
1 2 3 4
b
1
a
2
5 6 7
addr
din
dout
i
s(i)
ν = 5
e(i)
se(i)se(i) = s(i) ( 1+a e(i) ) (alt5)
if ( s(i)gte(i) ) e(i)(ν+1) = s(i)(ν)
else e(i)(ν+1) = c e(i)(ν) τ=1(1-c)
s (i) is the number of matches in the first two rows Input (addrdin) = (1ε) produces s(i)=1 for i=1 and i=2
dout=b is read from i=2 that has se(i)=max(se)
Slide 33
i
s(i)
e(i)
se(i)
a
a1
b
b1
b
b2
a
a2
1 2 3 4
addr
din
dout
ν = 5se(i) = s(i) ( 1+a e(i) ) (alt5)
if ( s(i)gte(i) ) e(i)(ν+1) = s(i)(ν)
else e(i)(ν+1) = c e(i)(ν) τ=1(1-c)
s (i) is the number of matches in the first two rows Input (addrdin) = (1ε) produces s(i)=1 for i=1 and i=2
iwin i se(i)=max(se)gt0
Assume that the E-machine starts with the state of LTM shown in the table and doesnrsquot learn more so this state remains the same What changes is the E-state e(1)hellipe(4) Assume that at ν=1 e(1)=e(4)=0 Let us send the input sequence (addrdin)(15) = (1a) (1b)(2a)(2b)(1ε) As can be verified at ν = 5 the state e(i) and functions s(i) and se(i) for i=14 are as shown below Accordingly iwin=2 and dout=b
y =gy(iwin) (alt5)
gx(114)
gx(214)
gy(114)
What can be efficiently computed in this ldquononclassicalrdquo symbolicdynamical computational paradigm (call it the E-machine paradigm)
What computational resources are available in the brain -- especially in the neocortex -- for the implementation of this paradigm
How can dynamical equations (such as the last equation in slide 29) be efficiently implemented in biologically plausible neural network models
Slide 34
- Slide 1
- Slide 2
- Slide 3
- Slide 4
- Slide 5
- Slide 6
- Slide 7
- Slide 8
- Slide 9
- Slide 10
- Slide 11
- Slide 12
- Slide 13
- Slide 14
- Slide 15
- Slide 16
- Slide 17
- Slide 18
- Slide 19
- Slide 20
- Slide 21
- Slide 22
- Slide 23
- Slide 24
- Slide 25
- Slide 26
- Slide 27
- Slide 28
- Slide 29
- Slide 30
- Slide 31
- Slide 32
- Slide 33
- Slide 34
-
![Page 17: Slide 1](https://reader034.fdocuments.us/reader034/viewer/2022042901/56814e7b550346895dbc1803/html5/thumbnails/17.jpg)
Slide 17
External system as a generalized RAM
Slide 18
Concept of a generalized RAM (GRAM)
Slide 18Slide 19
Slide 20
Representation of local associative memory in terms of three ldquoone-steprdquo procedures DECODING CHOICE ENCODING
INTERPRETATION PROCEDURE
Slide 21
At the stage of training sel=1 at the stage of examination sel=0System AS simply ldquotape-recordsrdquo its experience (x1x2xy)(0ν)
Slide 22
NOTE System (WD) shown in slide 3 has the properties of a random access memory (RAM)
y
1
2
NS
GRAM
EXPERIMENT 1 Fixed rules and variable rules
Slide 23
EXPERIMENT 1 (continued 1)
Slide 24
EXPERIMENT 1 (continued 2)
Slide 25
A COMPLETE MEMORY MACHINE (CMM) SOLVES PROBLEM 2 but this solution can be easily falsified
Slide 26
Slide 27
GRAM as a state machine combinatorial explosion of the number of fixed rules
Slide 28
Concept of a primitive E-machine
Slide 29
s(i) gt c
(αlt 5)
Effect of a RAM wo a RAM buffer
abc
abc
abc
abc
1 2 3 4
b a c b
1 2 3 4
abc
abc
abc
abc
1 2 3 4
c b a c
1 2 3 4
a b c a b c a b c a b c
1 2 3 4
G-state
E-state
abc
abc
abc
abc
1 2 3 4
a c b a
1 2 3 4
Slide 30
EFFECT OF ldquoMANY MACHINES IN ONErdquo
Slide 31
0
00
1
00
0
10
1
10
1 2 3 4
0
01
1
01
0
11
1
11
5 6 7 8
X(1)
X(2)
y(1)
AND
OR
XOR
NAND
NOR
N=2m2
A table with n=2 m+1
represents
different m-input 1-output Boolean functions
Let m=10 Then n=2048
and N=21024
G-state
E-state
n=8 locations of LTM
Simulation of GRAM with A=12 and D=abε
Slide 32
a
a1
b
b1
b
b2
a
a2
1 2 3 4
b
1
a
2
5 6 7
addr
din
dout
i
s(i)
ν = 5
e(i)
se(i)se(i) = s(i) ( 1+a e(i) ) (alt5)
if ( s(i)gte(i) ) e(i)(ν+1) = s(i)(ν)
else e(i)(ν+1) = c e(i)(ν) τ=1(1-c)
s (i) is the number of matches in the first two rows Input (addrdin) = (1ε) produces s(i)=1 for i=1 and i=2
dout=b is read from i=2 that has se(i)=max(se)
Slide 33
i
s(i)
e(i)
se(i)
a
a1
b
b1
b
b2
a
a2
1 2 3 4
addr
din
dout
ν = 5se(i) = s(i) ( 1+a e(i) ) (alt5)
if ( s(i)gte(i) ) e(i)(ν+1) = s(i)(ν)
else e(i)(ν+1) = c e(i)(ν) τ=1(1-c)
s (i) is the number of matches in the first two rows Input (addrdin) = (1ε) produces s(i)=1 for i=1 and i=2
iwin i se(i)=max(se)gt0
Assume that the E-machine starts with the state of LTM shown in the table and doesnrsquot learn more so this state remains the same What changes is the E-state e(1)hellipe(4) Assume that at ν=1 e(1)=e(4)=0 Let us send the input sequence (addrdin)(15) = (1a) (1b)(2a)(2b)(1ε) As can be verified at ν = 5 the state e(i) and functions s(i) and se(i) for i=14 are as shown below Accordingly iwin=2 and dout=b
y =gy(iwin) (alt5)
gx(114)
gx(214)
gy(114)
What can be efficiently computed in this ldquononclassicalrdquo symbolicdynamical computational paradigm (call it the E-machine paradigm)
What computational resources are available in the brain -- especially in the neocortex -- for the implementation of this paradigm
How can dynamical equations (such as the last equation in slide 29) be efficiently implemented in biologically plausible neural network models
Slide 34
- Slide 1
- Slide 2
- Slide 3
- Slide 4
- Slide 5
- Slide 6
- Slide 7
- Slide 8
- Slide 9
- Slide 10
- Slide 11
- Slide 12
- Slide 13
- Slide 14
- Slide 15
- Slide 16
- Slide 17
- Slide 18
- Slide 19
- Slide 20
- Slide 21
- Slide 22
- Slide 23
- Slide 24
- Slide 25
- Slide 26
- Slide 27
- Slide 28
- Slide 29
- Slide 30
- Slide 31
- Slide 32
- Slide 33
- Slide 34
-
![Page 18: Slide 1](https://reader034.fdocuments.us/reader034/viewer/2022042901/56814e7b550346895dbc1803/html5/thumbnails/18.jpg)
Slide 18
Concept of a generalized RAM (GRAM)
Slide 18Slide 19
Slide 20
Representation of local associative memory in terms of three ldquoone-steprdquo procedures DECODING CHOICE ENCODING
INTERPRETATION PROCEDURE
Slide 21
At the stage of training sel=1 at the stage of examination sel=0System AS simply ldquotape-recordsrdquo its experience (x1x2xy)(0ν)
Slide 22
NOTE System (WD) shown in slide 3 has the properties of a random access memory (RAM)
y
1
2
NS
GRAM
EXPERIMENT 1 Fixed rules and variable rules
Slide 23
EXPERIMENT 1 (continued 1)
Slide 24
EXPERIMENT 1 (continued 2)
Slide 25
A COMPLETE MEMORY MACHINE (CMM) SOLVES PROBLEM 2 but this solution can be easily falsified
Slide 26
Slide 27
GRAM as a state machine combinatorial explosion of the number of fixed rules
Slide 28
Concept of a primitive E-machine
Slide 29
s(i) gt c
(αlt 5)
Effect of a RAM wo a RAM buffer
abc
abc
abc
abc
1 2 3 4
b a c b
1 2 3 4
abc
abc
abc
abc
1 2 3 4
c b a c
1 2 3 4
a b c a b c a b c a b c
1 2 3 4
G-state
E-state
abc
abc
abc
abc
1 2 3 4
a c b a
1 2 3 4
Slide 30
EFFECT OF ldquoMANY MACHINES IN ONErdquo
Slide 31
0
00
1
00
0
10
1
10
1 2 3 4
0
01
1
01
0
11
1
11
5 6 7 8
X(1)
X(2)
y(1)
AND
OR
XOR
NAND
NOR
N=2m2
A table with n=2 m+1
represents
different m-input 1-output Boolean functions
Let m=10 Then n=2048
and N=21024
G-state
E-state
n=8 locations of LTM
Simulation of GRAM with A=12 and D=abε
Slide 32
a
a1
b
b1
b
b2
a
a2
1 2 3 4
b
1
a
2
5 6 7
addr
din
dout
i
s(i)
ν = 5
e(i)
se(i)se(i) = s(i) ( 1+a e(i) ) (alt5)
if ( s(i)gte(i) ) e(i)(ν+1) = s(i)(ν)
else e(i)(ν+1) = c e(i)(ν) τ=1(1-c)
s (i) is the number of matches in the first two rows Input (addrdin) = (1ε) produces s(i)=1 for i=1 and i=2
dout=b is read from i=2 that has se(i)=max(se)
Slide 33
i
s(i)
e(i)
se(i)
a
a1
b
b1
b
b2
a
a2
1 2 3 4
addr
din
dout
ν = 5se(i) = s(i) ( 1+a e(i) ) (alt5)
if ( s(i)gte(i) ) e(i)(ν+1) = s(i)(ν)
else e(i)(ν+1) = c e(i)(ν) τ=1(1-c)
s (i) is the number of matches in the first two rows Input (addrdin) = (1ε) produces s(i)=1 for i=1 and i=2
iwin i se(i)=max(se)gt0
Assume that the E-machine starts with the state of LTM shown in the table and doesnrsquot learn more so this state remains the same What changes is the E-state e(1)hellipe(4) Assume that at ν=1 e(1)=e(4)=0 Let us send the input sequence (addrdin)(15) = (1a) (1b)(2a)(2b)(1ε) As can be verified at ν = 5 the state e(i) and functions s(i) and se(i) for i=14 are as shown below Accordingly iwin=2 and dout=b
y =gy(iwin) (alt5)
gx(114)
gx(214)
gy(114)
What can be efficiently computed in this ldquononclassicalrdquo symbolicdynamical computational paradigm (call it the E-machine paradigm)
What computational resources are available in the brain -- especially in the neocortex -- for the implementation of this paradigm
How can dynamical equations (such as the last equation in slide 29) be efficiently implemented in biologically plausible neural network models
Slide 34
- Slide 1
- Slide 2
- Slide 3
- Slide 4
- Slide 5
- Slide 6
- Slide 7
- Slide 8
- Slide 9
- Slide 10
- Slide 11
- Slide 12
- Slide 13
- Slide 14
- Slide 15
- Slide 16
- Slide 17
- Slide 18
- Slide 19
- Slide 20
- Slide 21
- Slide 22
- Slide 23
- Slide 24
- Slide 25
- Slide 26
- Slide 27
- Slide 28
- Slide 29
- Slide 30
- Slide 31
- Slide 32
- Slide 33
- Slide 34
-
![Page 19: Slide 1](https://reader034.fdocuments.us/reader034/viewer/2022042901/56814e7b550346895dbc1803/html5/thumbnails/19.jpg)
Slide 18Slide 19
Slide 20
Representation of local associative memory in terms of three ldquoone-steprdquo procedures DECODING CHOICE ENCODING
INTERPRETATION PROCEDURE
Slide 21
At the stage of training sel=1 at the stage of examination sel=0System AS simply ldquotape-recordsrdquo its experience (x1x2xy)(0ν)
Slide 22
NOTE System (WD) shown in slide 3 has the properties of a random access memory (RAM)
y
1
2
NS
GRAM
EXPERIMENT 1 Fixed rules and variable rules
Slide 23
EXPERIMENT 1 (continued 1)
Slide 24
EXPERIMENT 1 (continued 2)
Slide 25
A COMPLETE MEMORY MACHINE (CMM) SOLVES PROBLEM 2 but this solution can be easily falsified
Slide 26
Slide 27
GRAM as a state machine combinatorial explosion of the number of fixed rules
Slide 28
Concept of a primitive E-machine
Slide 29
s(i) gt c
(αlt 5)
Effect of a RAM wo a RAM buffer
abc
abc
abc
abc
1 2 3 4
b a c b
1 2 3 4
abc
abc
abc
abc
1 2 3 4
c b a c
1 2 3 4
a b c a b c a b c a b c
1 2 3 4
G-state
E-state
abc
abc
abc
abc
1 2 3 4
a c b a
1 2 3 4
Slide 30
EFFECT OF ldquoMANY MACHINES IN ONErdquo
Slide 31
0
00
1
00
0
10
1
10
1 2 3 4
0
01
1
01
0
11
1
11
5 6 7 8
X(1)
X(2)
y(1)
AND
OR
XOR
NAND
NOR
N=2m2
A table with n=2 m+1
represents
different m-input 1-output Boolean functions
Let m=10 Then n=2048
and N=21024
G-state
E-state
n=8 locations of LTM
Simulation of GRAM with A=12 and D=abε
Slide 32
a
a1
b
b1
b
b2
a
a2
1 2 3 4
b
1
a
2
5 6 7
addr
din
dout
i
s(i)
ν = 5
e(i)
se(i)se(i) = s(i) ( 1+a e(i) ) (alt5)
if ( s(i)gte(i) ) e(i)(ν+1) = s(i)(ν)
else e(i)(ν+1) = c e(i)(ν) τ=1(1-c)
s (i) is the number of matches in the first two rows Input (addrdin) = (1ε) produces s(i)=1 for i=1 and i=2
dout=b is read from i=2 that has se(i)=max(se)
Slide 33
i
s(i)
e(i)
se(i)
a
a1
b
b1
b
b2
a
a2
1 2 3 4
addr
din
dout
ν = 5se(i) = s(i) ( 1+a e(i) ) (alt5)
if ( s(i)gte(i) ) e(i)(ν+1) = s(i)(ν)
else e(i)(ν+1) = c e(i)(ν) τ=1(1-c)
s (i) is the number of matches in the first two rows Input (addrdin) = (1ε) produces s(i)=1 for i=1 and i=2
iwin i se(i)=max(se)gt0
Assume that the E-machine starts with the state of LTM shown in the table and doesnrsquot learn more so this state remains the same What changes is the E-state e(1)hellipe(4) Assume that at ν=1 e(1)=e(4)=0 Let us send the input sequence (addrdin)(15) = (1a) (1b)(2a)(2b)(1ε) As can be verified at ν = 5 the state e(i) and functions s(i) and se(i) for i=14 are as shown below Accordingly iwin=2 and dout=b
y =gy(iwin) (alt5)
gx(114)
gx(214)
gy(114)
What can be efficiently computed in this ldquononclassicalrdquo symbolicdynamical computational paradigm (call it the E-machine paradigm)
What computational resources are available in the brain -- especially in the neocortex -- for the implementation of this paradigm
How can dynamical equations (such as the last equation in slide 29) be efficiently implemented in biologically plausible neural network models
Slide 34
- Slide 1
- Slide 2
- Slide 3
- Slide 4
- Slide 5
- Slide 6
- Slide 7
- Slide 8
- Slide 9
- Slide 10
- Slide 11
- Slide 12
- Slide 13
- Slide 14
- Slide 15
- Slide 16
- Slide 17
- Slide 18
- Slide 19
- Slide 20
- Slide 21
- Slide 22
- Slide 23
- Slide 24
- Slide 25
- Slide 26
- Slide 27
- Slide 28
- Slide 29
- Slide 30
- Slide 31
- Slide 32
- Slide 33
- Slide 34
-
![Page 20: Slide 1](https://reader034.fdocuments.us/reader034/viewer/2022042901/56814e7b550346895dbc1803/html5/thumbnails/20.jpg)
Slide 20
Representation of local associative memory in terms of three ldquoone-steprdquo procedures DECODING CHOICE ENCODING
INTERPRETATION PROCEDURE
Slide 21
At the stage of training sel=1 at the stage of examination sel=0System AS simply ldquotape-recordsrdquo its experience (x1x2xy)(0ν)
Slide 22
NOTE System (WD) shown in slide 3 has the properties of a random access memory (RAM)
y
1
2
NS
GRAM
EXPERIMENT 1 Fixed rules and variable rules
Slide 23
EXPERIMENT 1 (continued 1)
Slide 24
EXPERIMENT 1 (continued 2)
Slide 25
A COMPLETE MEMORY MACHINE (CMM) SOLVES PROBLEM 2 but this solution can be easily falsified
Slide 26
Slide 27
GRAM as a state machine combinatorial explosion of the number of fixed rules
Slide 28
Concept of a primitive E-machine
Slide 29
s(i) gt c
(αlt 5)
Effect of a RAM wo a RAM buffer
abc
abc
abc
abc
1 2 3 4
b a c b
1 2 3 4
abc
abc
abc
abc
1 2 3 4
c b a c
1 2 3 4
a b c a b c a b c a b c
1 2 3 4
G-state
E-state
abc
abc
abc
abc
1 2 3 4
a c b a
1 2 3 4
Slide 30
EFFECT OF ldquoMANY MACHINES IN ONErdquo
Slide 31
0
00
1
00
0
10
1
10
1 2 3 4
0
01
1
01
0
11
1
11
5 6 7 8
X(1)
X(2)
y(1)
AND
OR
XOR
NAND
NOR
N=2m2
A table with n=2 m+1
represents
different m-input 1-output Boolean functions
Let m=10 Then n=2048
and N=21024
G-state
E-state
n=8 locations of LTM
Simulation of GRAM with A=12 and D=abε
Slide 32
a
a1
b
b1
b
b2
a
a2
1 2 3 4
b
1
a
2
5 6 7
addr
din
dout
i
s(i)
ν = 5
e(i)
se(i)se(i) = s(i) ( 1+a e(i) ) (alt5)
if ( s(i)gte(i) ) e(i)(ν+1) = s(i)(ν)
else e(i)(ν+1) = c e(i)(ν) τ=1(1-c)
s (i) is the number of matches in the first two rows Input (addrdin) = (1ε) produces s(i)=1 for i=1 and i=2
dout=b is read from i=2 that has se(i)=max(se)
Slide 33
i
s(i)
e(i)
se(i)
a
a1
b
b1
b
b2
a
a2
1 2 3 4
addr
din
dout
ν = 5se(i) = s(i) ( 1+a e(i) ) (alt5)
if ( s(i)gte(i) ) e(i)(ν+1) = s(i)(ν)
else e(i)(ν+1) = c e(i)(ν) τ=1(1-c)
s (i) is the number of matches in the first two rows Input (addrdin) = (1ε) produces s(i)=1 for i=1 and i=2
iwin i se(i)=max(se)gt0
Assume that the E-machine starts with the state of LTM shown in the table and doesnrsquot learn more so this state remains the same What changes is the E-state e(1)hellipe(4) Assume that at ν=1 e(1)=e(4)=0 Let us send the input sequence (addrdin)(15) = (1a) (1b)(2a)(2b)(1ε) As can be verified at ν = 5 the state e(i) and functions s(i) and se(i) for i=14 are as shown below Accordingly iwin=2 and dout=b
y =gy(iwin) (alt5)
gx(114)
gx(214)
gy(114)
What can be efficiently computed in this ldquononclassicalrdquo symbolicdynamical computational paradigm (call it the E-machine paradigm)
What computational resources are available in the brain -- especially in the neocortex -- for the implementation of this paradigm
How can dynamical equations (such as the last equation in slide 29) be efficiently implemented in biologically plausible neural network models
Slide 34
- Slide 1
- Slide 2
- Slide 3
- Slide 4
- Slide 5
- Slide 6
- Slide 7
- Slide 8
- Slide 9
- Slide 10
- Slide 11
- Slide 12
- Slide 13
- Slide 14
- Slide 15
- Slide 16
- Slide 17
- Slide 18
- Slide 19
- Slide 20
- Slide 21
- Slide 22
- Slide 23
- Slide 24
- Slide 25
- Slide 26
- Slide 27
- Slide 28
- Slide 29
- Slide 30
- Slide 31
- Slide 32
- Slide 33
- Slide 34
-
![Page 21: Slide 1](https://reader034.fdocuments.us/reader034/viewer/2022042901/56814e7b550346895dbc1803/html5/thumbnails/21.jpg)
INTERPRETATION PROCEDURE
Slide 21
At the stage of training sel=1 at the stage of examination sel=0System AS simply ldquotape-recordsrdquo its experience (x1x2xy)(0ν)
Slide 22
NOTE System (WD) shown in slide 3 has the properties of a random access memory (RAM)
y
1
2
NS
GRAM
EXPERIMENT 1 Fixed rules and variable rules
Slide 23
EXPERIMENT 1 (continued 1)
Slide 24
EXPERIMENT 1 (continued 2)
Slide 25
A COMPLETE MEMORY MACHINE (CMM) SOLVES PROBLEM 2 but this solution can be easily falsified
Slide 26
Slide 27
GRAM as a state machine combinatorial explosion of the number of fixed rules
Slide 28
Concept of a primitive E-machine
Slide 29
s(i) gt c
(αlt 5)
Effect of a RAM wo a RAM buffer
abc
abc
abc
abc
1 2 3 4
b a c b
1 2 3 4
abc
abc
abc
abc
1 2 3 4
c b a c
1 2 3 4
a b c a b c a b c a b c
1 2 3 4
G-state
E-state
abc
abc
abc
abc
1 2 3 4
a c b a
1 2 3 4
Slide 30
EFFECT OF ldquoMANY MACHINES IN ONErdquo
Slide 31
0
00
1
00
0
10
1
10
1 2 3 4
0
01
1
01
0
11
1
11
5 6 7 8
X(1)
X(2)
y(1)
AND
OR
XOR
NAND
NOR
N=2m2
A table with n=2 m+1
represents
different m-input 1-output Boolean functions
Let m=10 Then n=2048
and N=21024
G-state
E-state
n=8 locations of LTM
Simulation of GRAM with A=12 and D=abε
Slide 32
a
a1
b
b1
b
b2
a
a2
1 2 3 4
b
1
a
2
5 6 7
addr
din
dout
i
s(i)
ν = 5
e(i)
se(i)se(i) = s(i) ( 1+a e(i) ) (alt5)
if ( s(i)gte(i) ) e(i)(ν+1) = s(i)(ν)
else e(i)(ν+1) = c e(i)(ν) τ=1(1-c)
s (i) is the number of matches in the first two rows Input (addrdin) = (1ε) produces s(i)=1 for i=1 and i=2
dout=b is read from i=2 that has se(i)=max(se)
Slide 33
i
s(i)
e(i)
se(i)
a
a1
b
b1
b
b2
a
a2
1 2 3 4
addr
din
dout
ν = 5se(i) = s(i) ( 1+a e(i) ) (alt5)
if ( s(i)gte(i) ) e(i)(ν+1) = s(i)(ν)
else e(i)(ν+1) = c e(i)(ν) τ=1(1-c)
s (i) is the number of matches in the first two rows Input (addrdin) = (1ε) produces s(i)=1 for i=1 and i=2
iwin i se(i)=max(se)gt0
Assume that the E-machine starts with the state of LTM shown in the table and doesnrsquot learn more so this state remains the same What changes is the E-state e(1)hellipe(4) Assume that at ν=1 e(1)=e(4)=0 Let us send the input sequence (addrdin)(15) = (1a) (1b)(2a)(2b)(1ε) As can be verified at ν = 5 the state e(i) and functions s(i) and se(i) for i=14 are as shown below Accordingly iwin=2 and dout=b
y =gy(iwin) (alt5)
gx(114)
gx(214)
gy(114)
What can be efficiently computed in this ldquononclassicalrdquo symbolicdynamical computational paradigm (call it the E-machine paradigm)
What computational resources are available in the brain -- especially in the neocortex -- for the implementation of this paradigm
How can dynamical equations (such as the last equation in slide 29) be efficiently implemented in biologically plausible neural network models
Slide 34
- Slide 1
- Slide 2
- Slide 3
- Slide 4
- Slide 5
- Slide 6
- Slide 7
- Slide 8
- Slide 9
- Slide 10
- Slide 11
- Slide 12
- Slide 13
- Slide 14
- Slide 15
- Slide 16
- Slide 17
- Slide 18
- Slide 19
- Slide 20
- Slide 21
- Slide 22
- Slide 23
- Slide 24
- Slide 25
- Slide 26
- Slide 27
- Slide 28
- Slide 29
- Slide 30
- Slide 31
- Slide 32
- Slide 33
- Slide 34
-
![Page 22: Slide 1](https://reader034.fdocuments.us/reader034/viewer/2022042901/56814e7b550346895dbc1803/html5/thumbnails/22.jpg)
At the stage of training sel=1 at the stage of examination sel=0System AS simply ldquotape-recordsrdquo its experience (x1x2xy)(0ν)
Slide 22
NOTE System (WD) shown in slide 3 has the properties of a random access memory (RAM)
y
1
2
NS
GRAM
EXPERIMENT 1 Fixed rules and variable rules
Slide 23
EXPERIMENT 1 (continued 1)
Slide 24
EXPERIMENT 1 (continued 2)
Slide 25
A COMPLETE MEMORY MACHINE (CMM) SOLVES PROBLEM 2 but this solution can be easily falsified
Slide 26
Slide 27
GRAM as a state machine combinatorial explosion of the number of fixed rules
Slide 28
Concept of a primitive E-machine
Slide 29
s(i) gt c
(αlt 5)
Effect of a RAM wo a RAM buffer
abc
abc
abc
abc
1 2 3 4
b a c b
1 2 3 4
abc
abc
abc
abc
1 2 3 4
c b a c
1 2 3 4
a b c a b c a b c a b c
1 2 3 4
G-state
E-state
abc
abc
abc
abc
1 2 3 4
a c b a
1 2 3 4
Slide 30
EFFECT OF ldquoMANY MACHINES IN ONErdquo
Slide 31
0
00
1
00
0
10
1
10
1 2 3 4
0
01
1
01
0
11
1
11
5 6 7 8
X(1)
X(2)
y(1)
AND
OR
XOR
NAND
NOR
N=2m2
A table with n=2 m+1
represents
different m-input 1-output Boolean functions
Let m=10 Then n=2048
and N=21024
G-state
E-state
n=8 locations of LTM
Simulation of GRAM with A=12 and D=abε
Slide 32
a
a1
b
b1
b
b2
a
a2
1 2 3 4
b
1
a
2
5 6 7
addr
din
dout
i
s(i)
ν = 5
e(i)
se(i)se(i) = s(i) ( 1+a e(i) ) (alt5)
if ( s(i)gte(i) ) e(i)(ν+1) = s(i)(ν)
else e(i)(ν+1) = c e(i)(ν) τ=1(1-c)
s (i) is the number of matches in the first two rows Input (addrdin) = (1ε) produces s(i)=1 for i=1 and i=2
dout=b is read from i=2 that has se(i)=max(se)
Slide 33
i
s(i)
e(i)
se(i)
a
a1
b
b1
b
b2
a
a2
1 2 3 4
addr
din
dout
ν = 5se(i) = s(i) ( 1+a e(i) ) (alt5)
if ( s(i)gte(i) ) e(i)(ν+1) = s(i)(ν)
else e(i)(ν+1) = c e(i)(ν) τ=1(1-c)
s (i) is the number of matches in the first two rows Input (addrdin) = (1ε) produces s(i)=1 for i=1 and i=2
iwin i se(i)=max(se)gt0
Assume that the E-machine starts with the state of LTM shown in the table and doesnrsquot learn more so this state remains the same What changes is the E-state e(1)hellipe(4) Assume that at ν=1 e(1)=e(4)=0 Let us send the input sequence (addrdin)(15) = (1a) (1b)(2a)(2b)(1ε) As can be verified at ν = 5 the state e(i) and functions s(i) and se(i) for i=14 are as shown below Accordingly iwin=2 and dout=b
y =gy(iwin) (alt5)
gx(114)
gx(214)
gy(114)
What can be efficiently computed in this ldquononclassicalrdquo symbolicdynamical computational paradigm (call it the E-machine paradigm)
What computational resources are available in the brain -- especially in the neocortex -- for the implementation of this paradigm
How can dynamical equations (such as the last equation in slide 29) be efficiently implemented in biologically plausible neural network models
Slide 34
- Slide 1
- Slide 2
- Slide 3
- Slide 4
- Slide 5
- Slide 6
- Slide 7
- Slide 8
- Slide 9
- Slide 10
- Slide 11
- Slide 12
- Slide 13
- Slide 14
- Slide 15
- Slide 16
- Slide 17
- Slide 18
- Slide 19
- Slide 20
- Slide 21
- Slide 22
- Slide 23
- Slide 24
- Slide 25
- Slide 26
- Slide 27
- Slide 28
- Slide 29
- Slide 30
- Slide 31
- Slide 32
- Slide 33
- Slide 34
-
![Page 23: Slide 1](https://reader034.fdocuments.us/reader034/viewer/2022042901/56814e7b550346895dbc1803/html5/thumbnails/23.jpg)
EXPERIMENT 1 Fixed rules and variable rules
Slide 23
![Page 24: Slide 1](https://reader034.fdocuments.us/reader034/viewer/2022042901/56814e7b550346895dbc1803/html5/thumbnails/24.jpg)
Slide 24
EXPERIMENT 1 (continued 1)
![Page 25: Slide 1](https://reader034.fdocuments.us/reader034/viewer/2022042901/56814e7b550346895dbc1803/html5/thumbnails/25.jpg)
Slide 25
EXPERIMENT 1 (continued 2)
![Page 26: Slide 1](https://reader034.fdocuments.us/reader034/viewer/2022042901/56814e7b550346895dbc1803/html5/thumbnails/26.jpg)
Slide 26
A COMPLETE MEMORY MACHINE (CMM) SOLVES PROBLEM 2, but this solution can be easily falsified
![Page 27: Slide 1](https://reader034.fdocuments.us/reader034/viewer/2022042901/56814e7b550346895dbc1803/html5/thumbnails/27.jpg)
Slide 27
GRAM as a state machine: combinatorial explosion of the number of fixed rules
![Page 28: Slide 1](https://reader034.fdocuments.us/reader034/viewer/2022042901/56814e7b550346895dbc1803/html5/thumbnails/28.jpg)
Slide 28
Concept of a primitive E-machine
![Page 29: Slide 1](https://reader034.fdocuments.us/reader034/viewer/2022042901/56814e7b550346895dbc1803/html5/thumbnails/29.jpg)
Slide 29
[Figure: visible labels s(i) > c and (α < 5)]
![Page 30: Slide 1](https://reader034.fdocuments.us/reader034/viewer/2022042901/56814e7b550346895dbc1803/html5/thumbnails/30.jpg)
Slide 30
Effect of a RAM w/o a RAM buffer
[Figure: one and the same G-state (each of locations 1–4 stores the symbols a, b, c) shown with different E-states; the E-state determines which symbol is read from each location, e.g. the readouts b a c b, c b a c, and a c b a.]
![Page 31: Slide 1](https://reader034.fdocuments.us/reader034/viewer/2022042901/56814e7b550346895dbc1803/html5/thumbnails/31.jpg)
Slide 31
EFFECT OF "MANY MACHINES IN ONE"

n = 8 locations of LTM (the G-state), for m = 2 inputs:

i:          1    2    3    4    5    6    7    8
x(1) x(2):  00   00   10   10   01   01   11   11
y(1):       0    1    0    1    0    1    0    1

Every input pattern is stored twice, once with y(1)=0 and once with y(1)=1. The E-state selects which copy is active at each input, so one and the same LTM can behave as AND, OR, XOR, NAND, NOR, or any other 2-input Boolean function.

A table with n = 2^(m+1) locations represents N = 2^(2^m) different m-input, 1-output Boolean functions. Let m = 10. Then n = 2048 and N = 2^1024.
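This count is easy to check by brute force for m = 2. The sketch below (Python; the names and the encoding are my own, not from the slides) builds the 2^(m+1)-location table and enumerates the E-state choices, confirming that all 16 = 2^(2^2) two-input Boolean functions are realized:

```python
from itertools import product

m = 2
patterns = list(product((0, 1), repeat=m))        # the 2^m input patterns
# G-state: each pattern stored twice, once with y=0 and once with y=1,
# giving n = 2^(m+1) locations
ltm = [(x, y) for x in patterns for y in (0, 1)]

def readout(e_state, x):
    """y(1) read from the active location whose input pattern matches x."""
    for active, (pat, y) in zip(e_state, ltm):
        if active and pat == x:
            return y
    raise ValueError("no active location matches the input")

# An E-state that keeps exactly one copy of each pattern active picks one
# Boolean function; enumerate all 2^(2^m) such E-states.
tables = set()
for bits in product((0, 1), repeat=len(patterns)):
    e_state = []
    for b in bits:                    # b chooses the y=0 or the y=1 copy
        e_state += [1 - b, b]
    tables.add(tuple(readout(e_state, x) for x in patterns))

print(len(tables))                    # -> 16, i.e. N = 2^(2^m) for m = 2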
![Page 32: Slide 1](https://reader034.fdocuments.us/reader034/viewer/2022042901/56814e7b550346895dbc1803/html5/thumbnails/32.jpg)
Slide 32
Simulation of GRAM with A = {1,2} and D = {a,b,ε}

[Figure: LTM table with locations i = 1…7 and rows addr, din, dout; locations 1–4 hold the write rules for addresses 1 and 2 with data a and b. Below it, the values of i, s(i), e(i), and se(i) at ν = 5.]

se(i) = s(i)·(1 + a·e(i)),   (a < 5)
if s(i) ≥ e(i):  e(i)(ν+1) = s(i)(ν)
else:  e(i)(ν+1) = c·e(i)(ν),   τ = 1/(1 − c)

s(i) is the number of matches in the first two rows (addr and din). Input (addr,din) = (1,ε) produces s(i)=1 for i=1 and i=2. dout=b is read from i=2, the location with se(i) = max(se).
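Read as pseudocode, the three update rules above fit in a few lines. The sketch below (Python; the function and variable names are mine, and the parameter values a = 1, c = 0.5 are arbitrary choices within the stated ranges) implements one cycle of the primitive E-machine over an LTM whose locations store (addr, din, dout) triples:

```python
def similarity(ltm, addr, din):
    """s(i): number of matches in the first two rows (addr and din)."""
    return [int(addr == r_addr) + int(din == r_din)
            for (r_addr, r_din, _r_dout) in ltm]

def step(ltm, e, addr, din, a=1.0, c=0.5):
    """One E-machine cycle: match, E-state modulation, WTA choice, readout."""
    s = similarity(ltm, addr, din)
    se = [si * (1.0 + a * ei) for si, ei in zip(s, e)]  # se(i) = s(i)(1 + a e(i))
    # WTA: iwin is an i with se(i) = max(se) > 0 (no winner if all zero)
    iwin = max(range(len(ltm)), key=se.__getitem__)
    if se[iwin] <= 0:
        iwin = None
    dout = ltm[iwin][2] if iwin is not None else None   # y = gy(iwin)
    # E-state: jump to s(i) if s(i) >= e(i), otherwise decay with factor c
    e_next = [si if si >= ei else c * ei for si, ei in zip(s, e)]
    return e_next, iwin, dout
```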
![Page 33: Slide 1](https://reader034.fdocuments.us/reader034/viewer/2022042901/56814e7b550346895dbc1803/html5/thumbnails/33.jpg)
Slide 33

[Figure: the LTM table (locations i = 1…4; rows addr, din, dout) together with the values of s(i), e(i), and se(i) at ν = 5. The addr and din rows form ILTM (gx(1,1:4), gx(2,1:4)); the dout row forms OLTM (gy(1,1:4)).]

se(i) = s(i)·(1 + a·e(i)),   (a < 5)
if s(i) ≥ e(i):  e(i)(ν+1) = s(i)(ν)
else:  e(i)(ν+1) = c·e(i)(ν),   τ = 1/(1 − c)

s(i) is the number of matches in the first two rows (addr and din). Input (addr,din) = (1,ε) produces s(i)=1 for i=1 and i=2.

iwin : an i such that se(i) = max(se) > 0
y = gy(iwin)

Assume that the E-machine starts with the state of LTM shown in the table and doesn't learn anything more, so this state remains the same. What changes is the E-state e(1),…,e(4). Assume that at ν=1, e(1)=…=e(4)=0. Let us send the input sequence (addr,din)(1:5) = (1,a),(1,b),(2,a),(2,b),(1,ε). As can be verified, at ν = 5 the state e(i) and the functions s(i) and se(i) for i=1…4 are as shown in the table; see the trace sketched below. Accordingly, iwin=2 and dout=b.
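Running step() from the sketch after slide 32 over this input sequence reproduces the worked example. The LTM encoding below is my reading of the table (locations 1–4 as (addr, din, dout) triples; ε is represented by None, which matches nothing), so treat it as an illustration rather than the author's code:

```python
ltm = [("1", "a", "a"), ("1", "b", "b"), ("2", "b", "b"), ("2", "a", "a")]
e = [0.0] * len(ltm)                     # e(1) = ... = e(4) = 0 at nu = 1
seq = [("1", "a"), ("1", "b"), ("2", "a"), ("2", "b"), ("1", None)]
for nu, (addr, din) in enumerate(seq, start=1):
    e, iwin, dout = step(ltm, e, addr, din)
    print(f"nu={nu}: iwin={None if iwin is None else iwin + 1}, dout={dout}")
# The last cycle, the read (1, eps), prints iwin=2 and dout=b: the E-state
# of location 2 (the most recent write to address 1) has decayed least, so
# the WTA choice retrieves the latest datum, as claimed on the slide.
```

The decay factor c only has to satisfy 0 < c < 1 for this to work: whatever the exact E-state values at ν = 5, location 2 was refreshed more recently than location 1, so se(2) > se(1).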
![Page 34: Slide 1](https://reader034.fdocuments.us/reader034/viewer/2022042901/56814e7b550346895dbc1803/html5/thumbnails/34.jpg)
Slide 34

What can be efficiently computed in this "nonclassical" symbolic/dynamical computational paradigm (call it the E-machine paradigm)?
What computational resources are available in the brain -- especially in the neocortex -- for the implementation of this paradigm?
How can dynamical equations (such as the last equation in slide 29) be efficiently implemented in biologically plausible neural network models?