Large scale analysis for spiking data

Monte Carlo-based method for large scale network analysis with Gibbs distribution
Hassan Nasser & Bruno Cessac, Neuromathcomp team – INRIA Sophia-Antipolis


Page 1: Large scale analysis for spiking data

Monte Carlo-based method for large scale network analysis with Gibbs distribution

Hassan Nasser & Bruno Cessac, Neuromathcomp team – INRIA Sophia-Antipolis

Page 2: Large scale analysis for spiking data

Spike train dictionary

Page 3: Large scale analysis for spiking data

Observable

● Observable: a function that associates a real number to a spike train.

● Ex: Firing rate, pairwise correlation.

Firing rate

Neuron k1 fires at time n1 while neuron k2 fires at time n2.

Neuron k1 fires at time n1 while neuron k2 is silent at time n2.
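For concreteness, here is a minimal Python sketch of such observables acting on a binary raster (my own illustration, not from the slides; the array layout and function names are assumptions):

```python
import numpy as np

# A spike train (raster) is a binary array omega[k, n]:
# omega[k, n] = 1 if neuron k emits a spike in time bin n, and 0 otherwise.
omega = (np.random.rand(5, 100) < 0.2).astype(int)   # toy raster: 5 neurons, 100 time bins

def single_event(omega, k, n):
    """Observable whose time average gives the firing rate: 1 if neuron k fires at time n."""
    return omega[k, n]

def pairwise_event(omega, k1, n1, k2, n2):
    """Observable: 1 if neuron k1 fires at time n1 and neuron k2 fires at time n2."""
    return omega[k1, n1] * omega[k2, n2]

def fire_and_silence(omega, k1, n1, k2, n2):
    """Observable: 1 if neuron k1 fires at time n1 while neuron k2 is silent at time n2."""
    return omega[k1, n1] * (1 - omega[k2, n2])

print(single_event(omega, 0, 10), pairwise_event(omega, 0, 10, 1, 12))
```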

Page 4: Large scale analysis for spiking data

Empirical average

● The empirical average of an observable is its frequency in the spike train (its time average over the raster).
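A hedged sketch of this time average in Python, assuming the same binary raster layout as above (the names are illustrative, not from the slides):

```python
import numpy as np

def empirical_average(omega, observable, depth=0):
    """Empirical average: the frequency of the observable in the raster,
    i.e. (1/T') * sum_n observable(omega, n) over the admissible time indices."""
    N, T = omega.shape
    times = range(depth, T)
    return sum(observable(omega, n) for n in times) / len(times)

# Toy raster and two empirical averages: a firing rate and a pairwise correlation term.
omega = (np.random.rand(10, 1000) < 0.2).astype(int)
rate_0 = empirical_average(omega, lambda w, n: w[0, n])
pair_01 = empirical_average(omega, lambda w, n: w[0, n] * w[1, n])
print(rate_0, pair_01)
```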

Page 5: Large scale analysis for spiking data

Gibbs potential

● Gibbs potential: a model of the spike train written as a weighted sum of observables; the weights (one per observable) are the parameters of the model.

[Slide diagram: the potential annotated with its parameters (weights) and observables]
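Written out as a formula (a standard form for such potentials; the notation is mine, not taken verbatim from the slide):

```latex
% Gibbs potential: a weighted sum of observables O_l,
% with one parameter (weight) lambda_l per observable.
\mathcal{H}_{\lambda}(\omega) \;=\; \sum_{l} \lambda_{l}\, O_{l}(\omega)
```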

Page 6: Large scale analysis for spiking data

Modeling a spike train with a Gibbs potential

● Given a spike train (and its empirical averages = empirical probability distribution).

● Given a Gibbs potential and the associated Gibbs probability distribution.

● Our aim is to find parameters such that the Kullback-Leibler (KL) distance between the empirical distribution and the theoretical Gibbs distribution is minimal --> maximum entropy.
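In symbols (my notation; this is the standard maximum entropy formulation the slide refers to):

```latex
% Fit the parameters by minimizing the Kullback-Leibler divergence between the
% empirical distribution pi (from the data) and the Gibbs distribution mu_lambda:
\lambda^{*} \;=\; \arg\min_{\lambda}\; d_{KL}\!\left(\pi \,\middle\|\, \mu_{\lambda}\right).
% At the optimum the model matches the empirical averages of the chosen observables,
% and mu_{lambda^*} is the maximum-entropy distribution under those constraints:
\mu_{\lambda^{*}}\!\left[O_{l}\right] \;=\; \pi\!\left[O_{l}\right]
\quad \text{for every observable } O_{l}.
```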

Page 7: Large scale analysis for spiking data

Relevant previous works

● Vasquez et al. 2012 showed that Gibbs potential models with memory reproduce the statistical distribution of a spike train more precisely --> small size networks.

● Tkacik et al. 2008 presented a Monte Carlo-based method to reproduce the statistics of Ising models --> large scale networks.

Page 8: Large scale analysis for spiking data

Maximum entropy vs Monte Carlo

● Maximum entropy:

● Precise (solving with the transfer matrix).

● Computation time grows exponentially with the network and memory size (computation of the transfer matrix).

● Monte Carlo:

● Not precise (the error depends on the chain length).

● Fast (linear growth with network and memory size).
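To make the scaling concrete, a rough comparison, under the assumption (mine, not stated on the slide) that the transfer matrix is indexed by spike blocks of N neurons over R time steps:

```latex
% Rough cost comparison (illustrative): the transfer matrix has one state per
% spike block of N neurons over R time steps, hence 2^{NR} states, whereas one
% Monte Carlo run touches each raster entry a bounded number of times.
\text{transfer matrix size} \;\sim\; 2^{NR} \times 2^{NR}
\qquad \text{vs.} \qquad
\text{Monte Carlo cost} \;\sim\; N_{\mathrm{flip}}, \text{ growing linearly with } N \text{ and } R .
```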

Page 9: Large scale analysis for spiking data

My work

● Reproducing the statistics for large networks using Monte Carlo.

● I began by implementing the classical Monte Carlo algorithm in order to reproduce the statistics, but the problem didn't converge.

Page 10: Large scale analysis for spiking data

● The situation is different in the spatio-temporal case, since:

● The normalization factor changes in the spatio-temporal case. However, the classical Monte Carlo works for Ising and memory = 1 models (taking into account a detailed balance assumption).
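Schematically, and in my own notation (a sketch of the point, not a formula from the slides):

```latex
% Ising-like case (memory <= 1): a single global normalization constant
\mu_{\lambda}(\omega) \;=\; \frac{e^{\mathcal{H}_{\lambda}(\omega)}}{Z(\lambda)}
% Memory R > 1: the Gibbs distribution is a Markov chain of order R; its conditional
% probabilities derive from a normalized potential, and the normalization depends on
% the past block omega(n-R), ..., omega(n-1) rather than being a single constant,
% which is why the detailed-balance argument behind classical Metropolis no longer
% applies directly.
```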


Page 13: Large scale analysis for spiking data

Then

Page 14: Large scale analysis for spiking data

How does it work?

● Given a real spike train.

● We generate a random spike train of length (Ntimes) and a random set of parameters.

● We choose random events in the raster, flip the event (0 --> 1 or 1 --> 0), and compute the difference of energy.

● If the energy increases, we accept the new state. Otherwise, we reject the flip.

● We do this flipping (Nflip) times (a code sketch follows below).
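A minimal Python sketch of this flipping loop (my own illustration, not the authors' code; the toy potential, parameter layout, and names such as `metropolis_sampler` are assumptions). The slide says "otherwise, we reject"; the sketch uses the standard Metropolis rule, which also accepts a decrease with probability exp(ΔH), since here the probability is proportional to exp(H):

```python
import numpy as np

def potential(omega, params):
    """Toy Gibbs potential H(omega): weighted firing rates plus instantaneous pairwise terms.
    Stands in for the real potential; params = (h, J), h of shape (N,), J of shape (N, N)."""
    h, J = params
    rates = omega.sum(axis=1)                          # spike counts per neuron
    pair = np.einsum('ij,it,jt->', J, omega, omega)    # pairwise terms at equal times
    return float(h @ rates + pair)

def metropolis_sampler(N, Ntimes, params, Nflip, rng=np.random.default_rng(0)):
    """Sample a raster approximately distributed as exp(H)/Z by random event flips."""
    omega = rng.integers(0, 2, size=(N, Ntimes))       # random initial spike train
    H = potential(omega, params)
    for _ in range(Nflip):
        k, n = rng.integers(N), rng.integers(Ntimes)   # pick a random event (neuron, time)
        omega[k, n] ^= 1                               # flip 0 -> 1 or 1 -> 0
        # Recomputing the full potential keeps the sketch short; in practice only the
        # local difference of energy caused by the flipped event is evaluated.
        H_new = potential(omega, params)
        delta = H_new - H
        # Metropolis rule: accept if the potential increases,
        # otherwise accept with probability exp(delta).
        if delta >= 0 or rng.random() < np.exp(delta):
            H = H_new
        else:
            omega[k, n] ^= 1                           # reject: undo the flip
    return omega
```

For example, `metropolis_sampler(N=10, Ntimes=500, params=(np.zeros(10), np.zeros((10, 10))), Nflip=100000)` samples from the trivial Gibbs distribution with all weights zero, for which every flip is accepted.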

Page 15: Large scale analysis for spiking data

Reproducing the statistics

● With a simple raster drawn from a known distribution, where the empirical probabilities are known, we could generate another raster with the same probability distribution.
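One simple numerical check (my own sketch; the slide does this comparison graphically) is to compare the empirical probabilities of instantaneous spike patterns in the two rasters:

```python
import numpy as np
from collections import Counter

def pattern_probabilities(omega):
    """Empirical probability of each instantaneous spike pattern (one column of the raster)."""
    N, T = omega.shape
    counts = Counter(tuple(omega[:, n]) for n in range(T))
    return {pattern: c / T for pattern, c in counts.items()}

def compare(original, generated):
    """Print, pattern by pattern, the probability in the original and the generated raster;
    the two columns should line up, as in the scatter plot on this slide."""
    p_orig = pattern_probabilities(original)
    p_gen = pattern_probabilities(generated)
    for pattern in sorted(set(p_orig) | set(p_gen)):
        print(pattern, p_orig.get(pattern, 0.0), p_gen.get(pattern, 0.0))
```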

[Figure: scatter plot of theoretical probability vs. empirical probability]

Page 16: Large scale analysis for spiking data

How fast is Monte Carlo?

[Figure: computation time curves labelled 'Linear' and 'Exponential']

Page 17: Large scale analysis for spiking data

CPU time vs Ntimes

Page 18: Large scale analysis for spiking data

Influence of Ntimes and Nflip on the error

[Figure: error as a function of Ntimes and Nflip; panels for 3 neurons and 7 neurons]

Page 19: Large scale analysis for spiking data

● What we presented was the application of Monte Carlo to small size networks.

● Why? Because with small size networks, we can compute the error committed on the observable computation.

● Why? Because the maximum entropy method can compute observable averages only for small size networks.

Page 20: Large scale analysis for spiking data

Bruno invented a “particular potential”

● A potential where observable averages could be computed analytically even for large networks.

● This potential is built from a set of pairs of events (neuron, time).

This is the average estimated with Monte Carlo. We can compute the error by comparing this estimated average with the empirical average given by the real raster.
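As an illustration of this error computation, a short sketch assuming pairwise-event observables given as (neuron, time lag) pairs (the function names and the exact form of the observables are my assumptions):

```python
import numpy as np

def pair_event_average(omega, pairs):
    """Empirical average of 'both events in the pair occur',
    where each pair is ((k1, lag1), (k2, lag2)) of (neuron, time-lag) events."""
    N, T = omega.shape
    max_lag = max(lag for pair in pairs for (_, lag) in pair)
    values = []
    for (k1, lag1), (k2, lag2) in pairs:
        # Align both event sequences on the same window starts.
        a = omega[k1, lag1:T - max_lag + lag1]
        b = omega[k2, lag2:T - max_lag + lag2]
        values.append(np.mean(a * b))
    return np.array(values)

def estimation_error(real_raster, mc_raster, pairs):
    """Error between the Monte Carlo-estimated averages and the empirical averages of the real raster."""
    return np.abs(pair_event_average(mc_raster, pairs) - pair_event_average(real_raster, pairs))
```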

Page 21: Large scale analysis for spiking data

Error and CPU time

Page 22: Large scale analysis for spiking data

What we did - what we haven't done yet.

● Implemented a Monte Carlo-based algorithm that estimates spike train statistics under a Gibbs distribution, for large networks and for models with memory.

● Now we can deal with large networks.

● We still want to compute the parameters.

● We still want to apply our method to real data.