Laura Sacerdote

Lunch
January 26, 2017
When:
September 28, 2018 @ 12:30 pm – 1:30 pm

A consistency problem in neural modelling. Coherence between input and output can be obtained using heavy-tailed distributions.

Coherence between the input and the output of single units is a sometimes underestimated problem in network modeling.

An example in this direction is given by Integrate and Fire models, used to describe the membrane potential dynamics of a neuron in a network. These models focus on describing the inter-times between events (the InterSpike Intervals, ISIs), i.e. the output of the neuron. The membrane potential evolution is described through a suitable stochastic process, and the output of the neuron corresponds to the First Passage Time of that process through a boundary. However, the input mechanism determining the membrane potential dynamics often disregards the fact that this input originates as the output of other units.
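As a minimal numerical sketch of this scheme (all parameter values here are illustrative, not taken from the talk), the membrane potential can be modeled as a drifted Wiener process, and each ISI read off as the first passage time through the threshold \(S\):

```python
import numpy as np

rng = np.random.default_rng(0)

def wiener_isis(mu=1.0, sigma=0.5, S=1.0, dt=1e-3, n_trials=500):
    """Draw ISIs as first passage times of a drifted Wiener process
    through the threshold S (Euler scheme; illustrative parameters)."""
    sqrt_dt = np.sqrt(dt)
    isis = np.empty(n_trials)
    for i in range(n_trials):
        v, t = 0.0, 0.0
        while v < S:  # integrate until the potential first reaches S
            v += mu * dt + sigma * sqrt_dt * rng.standard_normal()
            t += dt
        isis[i] = t   # a spike is released; the process restarts anew
    return isis

isis = wiener_isis()
```

By Wald's identity the mean first passage time is \(S/\mu\); the full distribution is the Inverse Gaussian mentioned below.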

The seminal idea for these models goes back to 1964, when Gerstein and Mandelbrot proposed the Integrate and Fire model to account for the observed stable behavior of the InterSpike Interval distribution. They suggested modeling the membrane potential dynamics through a Wiener process in order to obtain the Inverse Gaussian distribution for the inter-times between successive spikes of the neuron, i.e. its output. The use of the Wiener process was first motivated by its being the continuous limit of a random walk; later, the randomized random walk was proposed to account for the continuous time characterizing the membrane potential dynamics. In this model, the arrival of inputs from the network determines jumps of fixed size in the membrane potential value. When the membrane potential attains a threshold value \(S\), the neuron releases a spike and the process restarts anew. Furthermore, the inter-times between jumps are exponentially distributed.
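The randomized random walk variant can be sketched in the same spirit (again with made-up parameters): excitatory and inhibitory inputs arrive as pooled Poisson streams, so inter-jump times are exponential, and each jump moves the potential by a fixed amount \( \pm a \):

```python
import numpy as np

rng = np.random.default_rng(1)

def walk_isis(a=0.02, rate_exc=60.0, rate_inh=40.0, S=1.0, n_trials=1000):
    """Randomized random walk: Poisson input streams of rates rate_exc and
    rate_inh produce fixed jumps of +a and -a; the ISI is the first time
    the membrane potential reaches the threshold S."""
    total = rate_exc + rate_inh   # pooled Poisson rate
    p_up = rate_exc / total       # probability that a jump is excitatory
    isis = np.empty(n_trials)
    for i in range(n_trials):
        v, t = 0.0, 0.0
        while v < S:
            t += rng.exponential(1.0 / total)      # exponential inter-jump time
            v += a if rng.random() < p_up else -a  # fixed-size jump
        isis[i] = t
    return isis

isis = walk_isis()
```

With these numbers, Wald's identity gives a mean first passage time of \((S/a)/(p_{up} - p_{down}) \cdot 1/\text{total} = 2.5\), and in the diffusion limit the ISI distribution approaches the Inverse Gaussian.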

Unfortunately, this last hypothesis contradicts the heavy-tailed distribution of the output, since the incoming inputs are the outputs of other neurons. Many variants of the original model later appeared in the literature. They aimed to improve the realism of the model but unfortunately lost sight of its initial motivation, the heavy tails of the observed output distribution.

However, Integrate and Fire models (and their variants) are generally recognized as a good compromise between realism and ease of use, and they have been proposed to model large networks. These facts motivate us to rethink the model, allowing heavy-tailed distributions both for the ISIs of the neurons surrounding the modeled neuron and for its output.

Here, we propose to start the model formulation from this main property, i.e. the heavy tails exhibited by the ISIs. This approach allows us to propose an Integrate and Fire model coherent with these features. The ideal framework for this rethinking involves regularly varying random variables and vectors.
We assume that each input to a unit corresponds to the output of one of \(N < \infty \) neurons of the network. The inter-times between spikes of the same neuron are independent random variables with regularly varying distributions. Different neurons of the network are not independent, due to the network connections. The only hypothesis we introduce to account for this dependence is very general: the ISIs determine a regularly varying vector. Under these hypotheses we prove that the output inter-times of the considered neuron, described through an Integrate and Fire model, are regularly varying random variables.
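A toy version of this statement can be checked numerically. The sketch below uses illustrative numbers and the special case of a single, serially independent input stream, rather than the dependent regularly varying vector treated in the talk: feed the Integrate and Fire unit Pareto-distributed input inter-times with tail index \( \alpha = 1.5 \), so that the output ISI is a sum of a fixed number of regularly varying inter-times and inherits the heavy tail.

```python
import numpy as np

rng = np.random.default_rng(2)

def output_isis(n_trials=10_000, alpha=1.5, x_min=0.05, a=0.1, S=1.0):
    """Single input stream with Pareto(alpha) inter-times (a regularly
    varying distribution); each input spike adds a to the potential, so
    the output ISI is the sum of ceil(S/a) input inter-times."""
    k = int(np.ceil(S / a))  # number of input jumps needed to reach S
    x = x_min * (1.0 + rng.pareto(alpha, size=(n_trials, k)))
    return x.sum(axis=1)     # one output ISI per trial

isis = output_isis()

# Hill estimator of the output tail index from the top 200 order statistics:
tail = np.sort(isis)[-201:]
alpha_hat = 1.0 / np.mean(np.log(tail[1:] / tail[0]))
```

For sums of independent regularly varying terms the tail index is preserved (see Jessen and Mikosch [5]), so the Hill estimate should sit near \( \alpha = 1.5 \), and extreme output ISIs occur orders of magnitude more often than an exponential model with the same mean would allow.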

The next step of this modeling procedure requires a suitable rescaling of the obtained process, leading to a time-fractional limit for the process describing the membrane potential evolution. We already have some results in this direction, allowing us to write down the Laplace transform of the first passage time of the rescaled process through the threshold \(S\), but some mathematical steps remain to be improved to account for the dependence between jump times in the limiting process as well.

References
[1] Bingham, N. H., Goldie, C. M. and Teugels, J. L. Regular Variation, volume 27 of Encyclopedia of Mathematics and its Applications. Cambridge University Press, Cambridge, 1989. ISBN 0-521-37943-1.
[2] Gal, A. and Marom, S. Entrainment of the intrinsic dynamics of single isolated neurons by natural-like input. The Journal of Neuroscience 33(18), pp. 7912–7918, 2013.
[3] Gerstein, G. L. and Mandelbrot, B. Random walk models for the activity of a single neuron. Biophys. J. 4, pp. 41–68, 1964.
[4] Holden, A. V. A note on convolution and stable distributions in the nervous system. Biol. Cybern. 20, pp. 171–173, 1975.
[5] Jessen, A. H. and Mikosch, T. Regularly varying functions. Publ. Inst. Math. (Beograd) (N.S.), 80(94), pp. 171–192, 2006. ISSN 0350-1302.
[6] Lindner, B. Superposition of many independent spike trains is generally not a Poisson process. Physical Review E 73, 022901, 2006.
[7] Kyprianou, A. Fluctuations of Lévy Processes with Applications. Springer-Verlag, 2014.
[8] Persi, E., Horn, D., Volman, V., Segev, R. and Ben-Jacob, E. Modeling of synchronized bursting events: the importance of inhomogeneity. Neural Computation 16, pp. 2577–2595, 2004.
[9] Tsubo, Y., Isomura, Y. and Fukai, T. Power-law inter-spike interval distributions infer a conditional maximization of entropy in cortical neurons. PLOS Computational Biology 8(4), e1002461, 2012.
