Post on 30-Sep-2020
Nagging Questions
Dynamic Response Adaptation
Dynamic Pattern Recognition
Dynamic Pattern Storage and Retrieval
Feedforward vs Recurrent
A Paradigmatic Change
Online Generation, Recognition, Storage and Retrieval of Dynamic Patterns using Self-active Recurrent Neural Networks
[Figure: training input (activity vs. time, s); training output (target and output vs. time, RMS error = 0.0048); weight update rate |w| vs. time, s]
[Figure: testing input (activity vs. time, s); testing output (target and output vs. time, RMS error = 0.042)]
[Figure: morphing input (morph parameter vs. time); morphing output (activity vs. time)]
The network is trained to generate two distinct periodic functions that are selected by constant input values. The network is presented with the input/target pairs only twice, and only for a few cycles each. Afterwards, the network is able to faithfully recall each pattern when the corresponding input is given. A morph between two trained input values makes the network generate intermediate patterns (green curve = analytic morph between the target functions).
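The "analytic morph" used as a reference curve can be written as a simple linear blend of the two trained target functions. A minimal sketch in Python, where the concrete functions `f1`, `f2` and the blend formula are illustrative assumptions, not the poster's actual targets:

```python
import numpy as np

t = np.linspace(0.0, 2.0, 500)        # two seconds of time samples
f1 = np.sin(2 * np.pi * t)            # pattern selected by input value c = 0 (illustrative)
f2 = 0.8 * np.sin(4 * np.pi * t)      # pattern selected by input value c = 1 (illustrative)

def analytic_morph(c):
    """Linear blend of the two trained targets, serving as the
    reference (green curve) for intermediate, untrained inputs."""
    return (1.0 - c) * f1 + c * f2

intermediate = analytic_morph(0.5)    # pattern expected for a half-way input
```

At c = 0 and c = 1 the morph reproduces the trained patterns exactly; intermediate values of c yield the intermediate reference curves against which the network's output is compared.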
[Figure: training input (activity vs. time, s); training output (target and output vs. time, RMS error = 0.043); weight update rate |w| vs. time, s]
[Figure: testing input (activity vs. time, s); testing output (target and output vs. time, RMS error = 0.054)]
[Figure: morphing input (morph parameter vs. time); morphing output (activity vs. time)]
[Figure: morphing input (morph parameter vs. time); morphing output (activity vs. time)]
[Figure: training input (activity vs. time, s); training output (target and output vs. time, RMS error = 0.0085); weight update rate |w| vs. time, s]
[Figure: testing input (activity vs. time, s); testing output (target and output vs. time, RMS error = 0.065)]
Input → output device: empty input yields empty output; "invalid" input induces machine failure.
Open dynamic system: does a water surface "calculate" the ripples? Can there be "syntactic errors"?
Neural Network
The network is trained to generate periodic functions when fed with other periodic functions. Here, two "artificial" functions act as input, and two experimentally determined muscular activation ("biological") functions act as training targets. After two lessons, the network responds to random sequences of input functions with the correct output function. A morph between two trained input functions generates intermediate output functions.
What happens when a piano player plays a piece by heart? How can such complex movement patterns be generated, memorized and recalled?
Until about ten years ago, recurrent neural networks required enormous amounts of computational power without providing significant advantages over feedforward networks. Thus, some 95% of the literature and nearly all applications were concerned with feedforward networks; small recurrent networks (~30 neurons) were implemented and studied mainly for academic purposes.
[Diagram: feedforward network (Layer 1 → Layer 2; input → output, output compared with target) vs. recurrent network (input and output with feedback)]
But then
▪ Herbert Jaeger (2001): Echo-State Networks
▪ Maass et al. (2002): Liquid State Machines
revolutionized the way recurrent networks are constructed and construed. The common basic idea is that a recurrent network is not just a sophisticated input-output device but rather a dynamic system (termed a "reservoir") interacting with the environment. These ideas were carried further by
▪ Sussillo and Abbott (2009): FORCE Learning
Feedforward networks are input-output devices with a modular internal structure. Incoming information is processed by successive "hidden layers", and the final result is issued at the output layer. The entire process is analogous to an algorithmic calculation: if there is no input, there will be no output.
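The layered pass described here can be written out directly; a toy sketch (layer sizes and weights are arbitrary illustrations):

```python
import numpy as np

rng = np.random.default_rng(2)
W1, b1 = rng.normal(size=(5, 3)), np.zeros(5)   # layer 1: 3 inputs -> 5 hidden units
W2, b2 = rng.normal(size=(2, 5)), np.zeros(2)   # layer 2: 5 hidden -> 2 outputs

def forward(u):
    h = np.tanh(W1 @ u + b1)   # hidden layer processes the input
    return W2 @ h + b2         # output layer issues the final result

y = forward(np.array([0.3, -0.1, 0.7]))
# With zero biases, zero input yields zero output:
# the network only computes when it is driven.
```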
Recurrent networks generally have neither a modular structure nor a dedicated input or output layer; any neuron can act as input and as output. Self-activity is achieved when the synaptic weights globally exceed a critical threshold, so that the network sustains spontaneous activity. Thus, even in the absence of input there will be output. Echo-State Networks and Liquid State Machines operate on the edge of self-activity; FORCE networks operate beyond it. In reservoir computing, the network remains randomly connected, and only the output weights are modified during training. For the training to succeed, the network must be large enough (~1000 neurons).
FORCE = First-Order Reduced and Controlled Error. The input function and a global feedback signal are injected during training. Weights are modified online, so the output immediately approaches the target function; training has converged when the weight-update rate vanishes.
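The FORCE procedure sketched above — inject input and feedback, modify the readout online, watch the weight-update rate |w| vanish — can be illustrated roughly as follows. Network size, gain, target function, and the recursive-least-squares update are illustrative assumptions in the spirit of Sussillo and Abbott (2009), not the poster's exact setup:

```python
import numpy as np

rng = np.random.default_rng(0)
N, g, dt = 300, 1.5, 0.01                           # size, gain (g > 1: self-active), time step
steps = int(20.0 / dt)                              # 20 s of training
J = g * rng.normal(0.0, 1.0 / np.sqrt(N), (N, N))   # fixed random recurrent weights
w_fb = rng.uniform(-1.0, 1.0, N)                    # feedback weights
w = np.zeros(N)                                     # readout weights (the only trained part)
P = np.eye(N)                                       # inverse correlation estimate for RLS

t = np.arange(steps) * dt
target = np.sin(2 * np.pi * t) + 0.5 * np.sin(4 * np.pi * t)  # example periodic target

x = 0.5 * rng.normal(size=N)
update_rate = []
for i in range(steps):
    r = np.tanh(x)
    z = w @ r                                 # network output
    x += dt * (-x + J @ r + w_fb * z)         # dynamics with global feedback of the output
    # Recursive-least-squares (FORCE) update of the readout weights
    Pr = P @ r
    k = Pr / (1.0 + r @ Pr)
    P -= np.outer(k, Pr)
    e = z - target[i]                         # instantaneous output error
    dw = -e * k
    w += dw
    update_rate.append(np.linalg.norm(dw))    # weight-update rate, vanishes at convergence
```

The output tracks the target almost from the first step, while `update_rate` decays toward zero — the convergence criterion mentioned above.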
How come we recognize complex movement patterns almost immediately and effortlessly?
How come we quickly adapt our movements to our perceptions without even thinking about it?
Why can we adequately deal with situations that we've never faced before?
Can artificial systems mimic (and maybe explain) such capabilities?
The network is trained to generate constant output when fed with periodic functions. After two "lessons" the network is able to respond to random sequences of trained functions with the correct output. A morph between two trained input functions generates a stochastic flip between the corresponding output values. This resembles the phenomenon of bistable perception reported for human and animal subjects.
Kim Joris Boström • Heiko Wagner
Dynamic information is stored in static synaptic weights!
Active network state = neuronal activity (visible to imaging techniques)
Passive network state = synaptic weights (invisible to imaging techniques)
Although the network has never learned the intermediate inputs, it responds to them adequately.
The network recognizes dynamic patterns and displays bistability-like behavior.
The network adapts its output dynamically to the input and responds adequately to intermediate, untrained input.
Motion Science Münster