Page 1:

On the capacity of unsupervised recursive neural networks for symbol processing

29. 8. 2006

Prof. Dr. Barbara Hammer
Computational Intelligence Group
Institute of Computer Science
Clausthal University of Technology

Nicolas Neubauer, M.Sc.
Neural Information Processing Group
Department of Electrical Engineering & Computer Science
Technische Universität Berlin

Page 2:

Overview

Introduction: GSOMSD models and capacities

Main part: Implementing deterministic push-down automata in RecSOM

Conclusion

Page 3:

Unsupervised neural networks

Clustering algorithms
– Each neuron/prototype i has a weight vector wi
– Inputs x are mapped to a winning neuron i such that d(x,wi) is minimal
– Training: adapt weights to minimize some error function

Self-Organizing Maps (SOMs): neurons arranged on a lattice
– During training, adapt neighbours also
– After training, similar inputs map to neighbouring neurons
– Variants without a fixed grid (e.g. neural gas)
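As an illustration of the winner computation and neighbourhood adaptation just listed, a minimal sketch (the 1-D lattice, Euclidean distance and Gaussian neighbourhood are assumptions for the example, not the talk's specification):

```python
import numpy as np

def som_step(weights, x, lr=0.1, sigma=1.0):
    """One SOM training step on a 1-D lattice of N prototypes.

    weights: (N, n) array, one weight vector w_i per neuron
    x:       (n,) input vector
    """
    # Winner: neuron i minimizing d(x, w_i)
    dists = np.linalg.norm(weights - x, axis=1)
    winner = int(np.argmin(dists))

    # Neurons near the winner on the lattice are adapted too,
    # with strength decaying in lattice distance.
    lattice_dist = np.abs(np.arange(len(weights)) - winner)
    h = np.exp(-(lattice_dist ** 2) / (2 * sigma ** 2))

    # Move all weights toward the input, scaled by neighbourhood strength.
    weights += lr * h[:, None] * (x - weights)
    return winner
```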

Defined for finite-dimensional input vectors
– Question: how to adapt these algorithms for inputs of non-fixed length, like time series?
– E.g. time windowing, statistical analysis, ...

Rn → R?

Page 4:

Recursive processing of time series

• Process each input of time series separately

• Along with a representation C of the map’s response to the previous input (the context)
– function rep: RN → Rr (N = number of neurons)

– Ct = rep(d1(t-1), ..., dN(t-1))

• Neurons respond not only to the input, but also to the context
– Neurons require additional context weights c
– Distance of neuron i at timestep t:

di(t) = αd(xt,wi) + βdr(Ct,ci)

(Diagram: the inputs x1, x2, ..., xt are processed one at a time, each together with a context Cinit, C2, ..., Ct computed by rep from the previous map response.)
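A sketch of one recursive step in Python (the L1 distances and the α = 1, β = ¼ defaults anticipate choices made later in the talk; rep is left abstract here):

```python
import numpy as np

def gsomsd_step(x, C, W, Cw, rep, alpha=1.0, beta=0.25):
    """One recursive step: di(t) = alpha*d(xt, wi) + beta*dr(Ct, ci).

    W:  (N, n) input weights      Cw: (N, r) context weights
    x:  (n,) current input        C:  (r,) current context
    rep: maps the N distances to the next context C_{t+1}
    """
    d = alpha * np.abs(W - x).sum(axis=1) + beta * np.abs(Cw - C).sum(axis=1)
    return d, rep(d)  # distances at time t, and the next context
```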

Page 5:

Generalized SOM for Structured Data (GSOMSD) [Hammer, Micheli, Sperduti, Strickert, 04]

Unsupervised recursive algorithms: Instances of GSOMSD

Varying in
– context function rep and distances d, dr
– and lattice (metric, hyperbolic, neural gas, ...)

Example
– Recursive SOM (RecSOM)
  context stores all neurons’ activations: r = N, rep(d1,...,dN) = (exp(-d1),...,exp(-dN))
  each neuron needs N context weights! (memory ~ N^2)
– other models store
  • properties of the winning neuron
  • previous activations only for single neurons
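In the notation of the previous sketch, RecSOM's rep is then simply (a transcription of the formula above):

```python
import numpy as np

def recsom_rep(d):
    """RecSOM context: rep(d1,...,dN) = (exp(-d1),...,exp(-dN)).
    One entry per neuron (r = N), so every neuron carries N context
    weights -- hence the ~N^2 memory cost."""
    return np.exp(-d)
```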

Page 6:

Computational capacity

0111…1 != 1111…1

• Ability to keep state information: Equivalence to Finite State Automata (FSA) / regular languages

• Decaying context will eventually "forget" the leading 0

([(…)]) != ([(…)]]

• Ability to keep stack information: Equivalence to Pushdown Automata (PDA) / context-free languages

• Finite context cannot store potentially infinite stack

• … Ability to store at least two binary stacks: Turing Machine Equivalence

Connecting context models to the Chomsky hierarchy

Page 7:

Why capacity matters

• Explore dynamics of algorithms in detail

• Distinguish the power of different models
– between different contexts within GSOMSD
  e.g., justify the huge memory cost of RecSOM compared to other models
– against other approaches
  e.g., supervised recurrent networks

• Supervised recurrent networks: Turing machine equivalence
– in exponential time for sigmoidal activation functions [Kilian/Siegelmann ’96]
– in polynomial time for semilinear activation functions [Siegelmann/Sontag ’95]

Page 8:

Various recursive models and their capacity

Model                                  Context         Encoding      Lattice    Capacity
TKM [Chappell/Taylor,93]               neuron itself   input space   all        < FSA
RSOM [Koskela/Varsta/Heikkonen,98]     neuron itself   input space   all        < FSA
MSOM [Hammer/Strickert,04]             winner content  input space   all        FSA
SOMSD [Hagenbuchner/Sperduti/Tsoi,03]  winner index    index space   SOM/HSOM   FSA
RecSOM [Voegtlin,02]                   exp(all act.)   activ. space  all        ≥ PDA*

* for WTA semilinear context [Strickert/Hammer,05]


Page 9:

Overview

Introduction: GSOMSD models and capacities

Main part: Implementing deterministic push-down automata in RecSOM

Conclusion

Page 10:

Goal: Build a Deterministic PDA

Using
• the GSOMSD recursive equation di(t) = αd(xt,wi) + βdr(Ct,ci)

• L1 distances for d, dr (i.e., d(a,b) = |a-b|)

• parameters
– α = 1
– β = ¼

• modified RecSOM context:
– instead of the original exp(-di): max(1-di, 0)
• similar overall shape
• easier to handle analytically
– additionally, winner-takes-all
• required at one place in the proof...
• makes life easier overall, however

(Plot: semilinear vs. exponential activation function; max(1-x, 0) and exp(-x) plotted against x over [0, 1.4], activation on the y-axis.)
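A sketch of the modified context used from here on; the winner-takes-all step is my reading of the slide (only the best-matching neuron contributes to the next context), so treat its details as an assumption:

```python
import numpy as np

def semilinear_wta_rep(d):
    """Modified RecSOM context: semilinear activation max(1-d, 0)
    instead of exp(-d), combined with winner-takes-all."""
    act = np.maximum(1.0 - d, 0.0)
    C = np.zeros_like(act)
    winner = int(np.argmin(d))
    C[winner] = act[winner]  # WTA: every other entry is zeroed
    return C
```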

Page 11:

Three levels of abstraction

1. Layers: temporal ("vertical") grouping → feed-forward architecture

2. Operators: functional grouping → defined operations

3. Phases: combining operators → automaton simulations

(Diagram: a two-neuron example with weights near 0 and 1; the grouping refers to weight space, not the lattice.)

Page 12:

First level of abstraction: Feed-forward

• One-dimensional input weights w

• Encoding function enc: Σ → Nl
– input symbol σi → series of inputs (i, ε, 2ε, 3ε, …, (l-1)ε), with ε >> max(i) = |Σ|

• Each neuron is active for at most one component of enc
– resulting in l layers of neurons
• In layer k, we know that only neurons from layer k-1 have been active, i.e. are represented > 0 in the context
– pass activity on from layer to layer
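A sketch of this encoding as a plain function (the helper name and the concrete ε are mine; ε only has to exceed every symbol index):

```python
def enc(i, l, epsilon):
    """Encode input symbol sigma_i as the length-l input series
    (i, eps, 2*eps, ..., (l-1)*eps). Since eps exceeds every symbol
    index, neurons tuned to different components never overlap."""
    return [i] + [k * epsilon for k in range(1, l)]

# e.g. with |Sigma| = 2 symbols, l = 4 layers, epsilon = 10:
# enc(1, 4, 10) == [1, 10, 20, 30]
```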

Page 13:

Feed-forward: Generic Architecture

Sample architecture
• 2 states: S = {a, b}
• 2 inputs: Σ = {σ0, σ1}

Input layer: one neuron per input × state combination (σ0,a; σ1,a; σ0,b; σ1,b)
– gets the input via its input weight
– gets the state via its context weights

“Hidden” layers: arbitrary intermediate computations
(auxiliary neurons for the remaining components ε, 2ε, ..., (l-1)ε of enc)

Output layer: represents the resulting state
– network represents state a: neuron a active, C = (0,0,...,1,0)
– network represents state b: neuron b active, C = (0,0,...,0,1)

(Diagram omitted; each neuron was labelled w|c, i.e. input weight | context weights.)

Simulation of a state transition function δ: S × Σ → S
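For reference, the flat function that the layered network simulates; the concrete δ entries below are invented for illustration:

```python
# Hypothetical delta: S x Sigma -> S for S = {a, b}, Sigma = {s0, s1}
# (writing s0, s1 for the slide's sigma_0, sigma_1).
delta = {('a', 's0'): 'b', ('a', 's1'): 'a',
         ('b', 's0'): 'a', ('b', 's1'): 'b'}

def run_fsa(symbols, state='a'):
    """Iterate the transition function, as the map does layer by layer."""
    for sym in symbols:
        state = delta[(state, sym)]
    return state

# run_fsa(['s0', 's0', 's1']) == 'a'
```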

Page 14:

Second Layer of Abstraction: Operators

We might be finished
– In fact, we are, for the FSA case

However, what about the stack?
– looks like γ0 γ1 γ1 ...
– how to store potentially infinite symbol sequences?

General idea:
1. Encode the stack in the winner neuron’s activation
2. Then build ‘operators’ to
• read
• modify, or
• copy
the stack by changing the winner’s activation

Page 15:


Encoding the stack in neurons’ activations

To save a sequence of stack symbols within the map,

• turn γ0 γ1 γ1 into the binary sequence α = 011
• encode α by the fractal mapping f4:
– f4(□) = 0
– f4(0) = ¼
– f4(1) = ¾
– f4(01) = ¼ + 3/16
– f4(011) = ¼ + 3/16 + 3/64
• stack operations on the encoding s:
– push(s, γ1) = ¼s + ¾
– pop(s, γ0) = (s - ¼)·4
• Encode the stack in a neuron’s activation a = 1 - ¼s:
– push(a, γ1) = 13/16 - (1/16)s
– pop(a, γ0) = 5/4 - s

(Number line from 0 to 1 in steps of 1/8, illustrating where encoded stacks land.)
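A sketch of the encoding as code (a direct transcription of the formulas above for the stack value s; top returns the most significant digit, which is the symbol pushed last):

```python
EMPTY = 0.0  # f4 of the empty stack

def push(s, bit):
    """push(s, gamma0) = s/4 + 1/4, push(s, gamma1) = s/4 + 3/4."""
    return s / 4 + (0.75 if bit else 0.25)

def top(s):
    """Encodings with top gamma0 lie in [1/4, 1/2), with top gamma1
    in [3/4, 1). Assumes a nonempty stack."""
    return 1 if s >= 0.75 else 0

def pop(s):
    """Inverse of push: pop(s, gamma0) = (s - 1/4)*4, etc."""
    return (s - (0.75 if top(s) else 0.25)) * 4

# f4(011) = 1/4 + 3/16 + 3/64; the leftmost (top) digit is pushed last:
s = push(push(push(EMPTY, 1), 1), 0)
assert abs(s - (0.25 + 3/16 + 3/64)) < 1e-12
assert top(s) == 0 and abs(pop(s) - push(push(EMPTY, 1), 1)) < 1e-12
```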

Page 16:

Operators

COPY
– copy the activation into the next layer

TOP
– identify the top stack symbol

OR
– get the activation of active neurons (if any)

PUSH
– modify the activation for a push: push(a, γ1) = 13/16 - (1/16)s

POP
– modify the activation for a pop: pop(a, γ0) = 5/4 - s


Page 17:

Third abstraction: Phases

Set  Content         Generic element  Examples
S    states          s                a, s(uccess), f(ail)
Σ    input alphabet  σ                ( ) [ ] ε
Γ    stack alphabet  γ                ( [ □
U    stack actions                    push( push[ pop( pop) do nothing

Phase     Task                                            Input      Output     Required operators
Finalize  collect all results leading to the same state   U × S      S          OR, COPY
Execute   manipulate the stack where needed               U × S      U × S      PUSH,COPY / POP,COPY
Merge     collect all states with a common stack & state  S × Σ × Γ  U × S      OR, COPY
Separate  read the top stack symbol                       S × Σ      S × Σ × Γ  TOP

Chaining the outputs to inputs, one automaton step runs Separate → Merge → Execute → Finalize (see the sketch below).
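To make the data flow concrete, one automaton step written as the four phases chained in plain Python (using push/pop/top from the stack-encoding sketch; the bracket-matching transition table is invented, and empty-stack handling is omitted):

```python
# (state, input, top) -> (stack action, next state); stack bits: 0 ~ '(', 1 ~ '['
delta = {
    ('s', '(', 0): ('push0', 's'), ('s', '(', 1): ('push0', 's'),
    ('s', '[', 0): ('push1', 's'), ('s', '[', 1): ('push1', 's'),
    ('s', ')', 0): ('pop', 's'),   ('s', ')', 1): ('pop', 'f'),
    ('s', ']', 1): ('pop', 's'),   ('s', ']', 0): ('pop', 'f'),
}

def automaton_step(state, sym, s):
    gamma = top(s)                              # Separate: read top stack symbol
    action, state = delta[(state, sym, gamma)]  # Merge: (S x Sigma x Gamma) -> (U x S)
    if action.startswith('push'):               # Execute: manipulate the stack
        s = push(s, int(action[-1]))
    elif action == 'pop':
        s = pop(s)
    return state, s                             # Finalize: pass the resulting state on

state, s = 's', EMPTY
for ch in '([])':  # balanced, so this ends in state 's' with s == 0.0
    state, s = automaton_step(state, ch, s)
```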


Page 18:

The final architecture


Page 19:

Overview

Introduction: GSOMSD models and capacities

Main part: Implementing deterministic push-down automata in RecSOM

Conclusion

Page 20:

Conclusions

• RecSOM: stronger computational capacity than SOMSD/MSOM

• Does this mean it’s worth the cost?
– Simulations are not learnable with Hebbian learning:
  practical relevance questionable
– Anyway: an elaborate (costly) context rather hinders simulations
  too much context results in a lot of noise
  maybe better: simpler models, slightly enhanced
  for example: MSOM or SOMSD with a context variable indicating the last winner’s activation

• Turing machines also possible?
– Storing two stacks in one real number is possible
– Reconstructing two stacks from a real number is hard,
  particularly when using only differences
– We may have to leave constant-size simulations
– Other representations of the stacks may be required

Page 21:

Thanks

Page 22:

Aux slide: PDA definition

Page 23:

Aux slide: PDA definition suitable for map construction