Partner: Maria Sanchez-Vives

ICREA-IDIBAPS (ES)

Recent publications
1. Arnold M.M., Szczepański J., Montejo N., Amigó J.M., Wajnryb E., Sanchez-Vives M.V., Information content in cortical spike trains during brain state transitions, JOURNAL OF SLEEP RESEARCH, ISSN: 0962-1105, DOI: 10.1111/j.1365-2869.2012.01031.x, Vol.22, pp.13-21, 2013
Abstract:

Even in the absence of external stimuli there is ongoing activity in the cerebral cortex as a result of recurrent connectivity. This paper attempts to characterize one aspect of this ongoing activity by examining how the information content carried by specific neurons varies as a function of brain state. We recorded from rats chronically implanted with tetrodes in the primary visual cortex during awake and sleep periods. Electroencephalogram and spike trains were recorded during 30-min periods, and 2–4 neuronal spikes were isolated per tetrode off-line. All the activity included in the analysis was spontaneous, being recorded from the visual cortex in the absence of visual stimuli. The brain state was determined through a combination of behavior evaluation, electroencephalogram and electromyogram analysis. Information in the spike trains was determined by using Lempel–Ziv complexity. Complexity was used to estimate the entropy of neural discharges and thus the information content (Amigó et al., Neural Comput., 2004, 16: 717–736). The information content in spike trains (range 4–70 bits/s) was evaluated during different brain states and particularly during the transition periods. Transitions toward states of deeper sleep coincided with a decrease in information, while transitions to the awake state resulted in an increase in information. Changes in both directions were of the same magnitude, about 30%. Information in spike trains showed a high temporal correlation between neurons, reinforcing the idea of the impact of the brain state on the information content of spike trains.

Keywords:

awake, brain states, entropy, firing rate, information, sleep, spike train

Affiliations:
Arnold M.M.-Universidad Miguel Hernández-CSIC (ES)
Szczepański J.-IPPT PAN
Montejo N.-Universidad Miguel Hernández-CSIC (ES)
Amigó J.M.-Universidad Miguel Hernández-CSIC (ES)
Wajnryb E.-IPPT PAN
Sanchez-Vives M.V.-ICREA-IDIBAPS (ES)
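
Illustrative note on publication 1: the information estimates described in the abstract operate on spike trains that have first been discretized. The sketch below shows one minimal way to bin spike times into a 0/1 sequence before applying a Lempel-Ziv-based estimator; it is not code from the study, and the function name, bin width and synthetic spike times are assumptions made for the example.

```python
# Minimal preprocessing sketch (not the authors' code): binarize a spike train
# so that sequence-based information estimators can be applied per epoch.
import numpy as np

def bin_spike_train(spike_times_s, t_start_s, t_stop_s, bin_ms=1.0):
    """Return a 0/1 array with a 1 wherever at least one spike fell in the bin."""
    bin_s = bin_ms / 1000.0
    edges = np.arange(t_start_s, t_stop_s + bin_s, bin_s)
    counts, _ = np.histogram(spike_times_s, bins=edges)
    return (counts > 0).astype(np.uint8)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    spikes = np.sort(rng.uniform(0.0, 30.0, size=300))   # fake 30 s of spontaneous spikes
    word = bin_spike_train(spikes, 0.0, 30.0, bin_ms=2.0)
    print(len(word), "bins,", int(word.sum()), "occupied")
```
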
2. Szczepański J., Arnold M., Wajnryb E., Amigó J.M., Sanchez-Vives M.V., Mutual information and redundancy in spontaneous communication between cortical neurons, BIOLOGICAL CYBERNETICS, ISSN: 0340-1200, DOI: 10.1007/s00422-011-0425-y, Vol.104, pp.161-174, 2011
Abstract:

An important question in neural information processing is how neurons cooperate to transmit information. To study this question, we resort to the concept of redundancy in the information transmitted by a group of neurons and, at the same time, we introduce a novel concept for measuring cooperation between pairs of neurons called relative mutual information (RMI). Specifically, we studied these two parameters for spike trains generated by neighboring neurons from the primary visual cortex in the awake, freely moving rat. The spike trains studied here were spontaneously generated in the cortical network, in the absence of visual stimulation. Under these conditions, our analysis revealed that while the value of RMI oscillated slightly around an average value, the redundancy exhibited markedly higher variability. We conjecture that this combination of approximately constant RMI and more variable redundancy makes information transmission more resistant to noise disturbances. Furthermore, the redundancy values suggest that neurons can cooperate in a flexible way during information transmission. This mostly occurs via a leading neuron with a higher transmission rate or, less frequently, through the information rate of the whole group being higher than the sum of the individual information rates, in other words in a synergetic manner. The proposed method applies not only to stationary but also to locally stationary neural signals.

Keywords:

Neurons, Shannon information, Entropy, Mutual information, Redundancy, Visual cortex, Spike trains, Spontaneous activity

Affiliations:
Szczepański J.-IPPT PAN
Arnold M.-Universidad Miguel Hernández-CSIC (ES)
Wajnryb E.-IPPT PAN
Amigó J.M.-Universidad Miguel Hernández-CSIC (ES)
Sanchez-Vives M.V.-ICREA-IDIBAPS (ES)
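
Illustrative note on publication 2: the quantities in the abstract are built from Shannon entropies of simultaneously recorded, binned spike trains. Below is a minimal plug-in sketch of the mutual information between two 0/1 trains on synthetic data; the paper's relative mutual information (RMI) and group redundancy apply normalizations defined in the article and not reproduced here, and all names and parameters in the sketch are assumptions.

```python
# Minimal sketch (not the paper's implementation): plug-in mutual information
# between two simultaneously binned binary spike trains.
import numpy as np

def plugin_entropy(symbols):
    """Shannon entropy (bits) of a 1-D array of discrete symbols."""
    _, counts = np.unique(symbols, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def mutual_information(x, y):
    """I(X;Y) = H(X) + H(Y) - H(X,Y) for two equal-length binary arrays."""
    joint = x.astype(np.int64) * 2 + y.astype(np.int64)   # encode each (x,y) pair as one symbol
    return plugin_entropy(x) + plugin_entropy(y) - plugin_entropy(joint)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    x = (rng.random(20000) < 0.1).astype(np.uint8)                      # neuron A, ~10% occupied bins
    noise = rng.integers(0, 2, size=20000).astype(np.uint8)
    y = np.where(rng.random(20000) < 0.7, x, noise)                     # neuron B, correlated with A
    print(f"I(X;Y) ~ {mutual_information(x, y):.4f} bits/bin")
```
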
3. Szczepański J., Wajnryb E., Amigó J.M., Sanchez-Vives M.V., Slater M., Biometric random number generators, COMPUTERS AND SECURITY, ISSN: 0167-4048, DOI: 10.1016/S0167-4048(04)00064-1, Vol.23, No.1, pp.77-84, 2004
Abstract:

Up to now, biometric methods have been used in cryptography for authentication purposes. In this paper we propose to use biological data for generating sequences of random bits. We point out that this new approach could be particularly useful to generate seeds for pseudo-random number generators and so-called “key sessions”. Our method is very simple and is based on the observation that, for typical biometric readings, the last binary digits fluctuate “randomly”. We apply our method to two data sets, the first based on animal neurophysiological brain responses and the second on human galvanic skin response. For comparison we also test our approach on numerical samplings of the Ornstein–Uhlenbeck stochastic process. To verify the randomness of the sequences generated, we apply the standard suite of statistical tests (FIPS 140-2) recommended by the National Institute of Standards and Technology for studying the quality of physical random number generators, especially those implemented in cryptographic modules. Additionally, to confirm the high cryptographic quality of the biometric generators, we also use the often-recommended Maurer's universal test and the Lempel–Ziv complexity test, which estimate the entropy of the source. The results of all these verifications show that, after appropriate choice of encoding and experimental parameters, the sequences obtained exhibit excellent statistical properties, which opens the possibility of a new design technology for true random number generators. It remains a challenge to find appropriate biological phenomena characterized by easy accessibility, fast sampling rate, high accuracy of measurement and variability of sampling rate.

Affiliations:
Szczepański J.-IPPT PAN
Wajnryb E.-IPPT PAN
Amigó J.M.-Universidad Miguel Hernández-CSIC (ES)
Sanchez-Vives M.V.-ICREA-IDIBAPS (ES)
Slater M.-other affiliation
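
Illustrative note on publication 3: the abstract builds candidate random bits from the least significant binary digits of biometric readings and screens them with the FIPS 140-2 battery. The sketch below illustrates that idea using a synthetic Ornstein-Uhlenbeck-like signal in place of real recordings and only the monobit test of the battery; the quantization scale, number of retained bits and all names are assumptions.

```python
# Minimal sketch (assumptions noted above): keep the last k binary digits of
# each quantized reading, then apply the FIPS 140-2 monobit test
# (20,000 bits; pass if 9725 < number of ones < 10275).
import numpy as np

def last_bits(samples, k=2, scale=1000):
    """Quantize each sample and keep its k least-significant bits."""
    q = np.abs((np.asarray(samples) * scale).astype(np.int64))
    bits = []
    for v in q:
        bits.extend((v >> i) & 1 for i in range(k))
    return np.array(bits, dtype=np.uint8)

def fips_monobit(bits):
    """FIPS 140-2 monobit test on the first 20,000 bits."""
    ones = int(bits[:20000].sum())
    return 9725 < ones < 10275, ones

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    x = np.zeros(12000)
    for t in range(1, len(x)):               # simple Ornstein-Uhlenbeck-style discretization
        x[t] = 0.95 * x[t - 1] + rng.normal(0.0, 0.1)
    bits = last_bits(x, k=2)
    ok, ones = fips_monobit(bits)
    print(f"{len(bits)} bits extracted, ones in first 20000 = {ones}, monobit pass = {ok}")
```
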
4. Szczepański J., Amigó J.M., Wajnryb E., Sanchez-Vives M.V., Characterizing spike trains with Lempel-Ziv complexity, NEUROCOMPUTING, ISSN: 0925-2312, DOI: 10.1016/j.neucom.2004.01.026, Vol.58-60, pp.79-84, 2004
Abstract:

We review several applications of Lempel–Ziv complexity to the characterization of neural responses. In particular, Lempel–Ziv complexity allows one to estimate the entropy of binned spike trains in an alternative way to the usual method based on the relative frequencies of words, with the definitive advantage of not requiring very long registers. We also use complexity to discriminate neural responses to different kinds of stimuli and to evaluate the number of states of neuronal sources.

Keywords:

Lempel–Ziv complexity, Entropy, Spike trains, Neuronal sources

Affiliations:
Szczepański J.-IPPT PAN
Amigó J.M.-Universidad Miguel Hernández-CSIC (ES)
Wajnryb E.-IPPT PAN
Sanchez-Vives M.V.-ICREA-IDIBAPS (ES)
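
Illustrative note on publication 4: the abstract characterizes binned spike trains through their Lempel-Ziv complexity. The sketch below shows the phrase-counting step in the Kaspar-Schuster style commonly used for the 1976 definition; it is generic code, not the authors' implementation, and the two synthetic sequences only illustrate that a periodic train yields far fewer phrases than an irregular one of the same rate.

```python
# Minimal sketch: LZ76 phrase count of a 0/1 sequence such as a binned spike train.
import numpy as np

def lz76_phrase_count(seq):
    """Number of LZ76 phrases in the 0/1 sequence seq (Kaspar-Schuster scheme)."""
    s = list(seq)
    n = len(s)
    if n <= 1:
        return n
    c, l, i, k, k_max = 1, 1, 0, 1, 1
    while True:
        if s[i + k - 1] != s[l + k - 1]:
            if k > k_max:
                k_max = k
            i += 1
            if i == l:                     # current phrase is complete
                c += 1
                l += k_max
                if l + 1 > n:
                    break
                i, k, k_max = 0, 1, 1
            else:
                k = 1
        else:
            k += 1
            if l + k > n:                  # last (possibly incomplete) phrase
                c += 1
                break
    return c

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    regular = np.tile([1, 0, 0, 0], 2500)                 # periodic "spike train"
    irregular = (rng.random(10000) < 0.25).astype(int)    # irregular train, same rate
    print("periodic:", lz76_phrase_count(regular), "phrases;",
          "irregular:", lz76_phrase_count(irregular), "phrases")
```
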
5. Amigó J.M., Szczepański J., Wajnryb E., Sanchez-Vives M.V., Estimating the Entropy Rate of Spike Trains via Lempel-Ziv Complexity, Neural Computation, ISSN: 0899-7667, DOI: 10.1162/089976604322860677, Vol.16, No.4, pp.717-736, 2004
Abstract:

Normalized Lempel-Ziv complexity, which measures the generation rate of new patterns along a digital sequence, is closely related to such important source properties as entropy and compression ratio, but, in contrast to these, it is a property of individual sequences. In this article, we propose to exploit this concept to estimate (or, at least, to bound from below) the entropy of neural discharges (spike trains). The main advantages of this method include fast convergence of the estimator (as supported by numerical simulation) and the fact that there is no need to know the probability law of the process generating the signal. Furthermore, we present numerical and experimental comparisons of the new method against the standard method based on word frequencies, providing evidence that this new approach is an alternative entropy estimator for binned spike trains.

Affiliations:
Amigó J.M.-Universidad Miguel Hernández-CSIC (ES)
Szczepański J.-IPPT PAN
Wajnryb E.-IPPT PAN
Sanchez-Vives M.V.-ICREA-IDIBAPS (ES)
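
Illustrative note on publication 5: a compact restatement of the estimator discussed in the abstract, written under the usual LZ76 conventions (the paper gives the precise definitions and conditions). Here c(n) denotes the LZ76 phrase count of a binary sequence of length n.

```latex
% Normalized LZ76 complexity of a binary sequence of length n with phrase count c(n):
\[
  \hat{H}_{\mathrm{LZ}}(n) \;=\; \frac{c(n)}{\,n/\log_2 n\,}
                           \;=\; \frac{c(n)\,\log_2 n}{n}
  \qquad \text{[bits per symbol]}.
\]
% For a stationary ergodic binary source this quantity converges to the entropy
% rate H, which is the property the estimator relies on; finite-length values
% serve as the estimate (or lower bound) mentioned in the abstract, and dividing
% by the bin width converts bits per symbol to bits per second.
```
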
6. Amigó J.M., Szczepański J., Wajnryb E., Sanchez-Vives M.V., On the number of states of the neuronal sources, BIOSYSTEMS, ISSN: 0303-2647, DOI: 10.1016/S0303-2647(02)00156-9, Vol.68, No.1, pp.57-66, 2003
Abstract:

In a previous paper (Proceedings of the World Congress on Neuroinformatics (2001)) the authors applied the so-called Lempel–Ziv complexity to study neural discharges (spike trains) from an information-theoretical point of view. Along with other results, it is shown there that this concept of complexity allows one to characterize the responses of primary visual cortical neurons to both random and periodic stimuli. To this end, we modeled the neurons as information sources and the spike trains as messages generated by them. In this paper, we study further consequences of this mathematical approach, this time concerning the number of states of such neuronal information sources. In this context, the state of an information source means an internal degree of freedom (or parameter) which allows outputs with more general stochastic properties, since symbol generation probabilities at every time step may additionally depend on the value of the current state of the neuron. Furthermore, if the source is ergodic and Markovian, the number of states is directly related to the stochastic dependence lag of the source and provides a measure of the autocorrelation of its messages. Here, we find that the number of states of the neurons depends on the kind of stimulus and the type of preparation (in vivo versus in vitro recordings), thus providing another way of differentiating neuronal responses. In particular, we observed that (for the encoding methods considered) in vitro sources have a higher lag than in vivo sources for periodic stimuli. This supports the conclusion put forward in the paper mentioned above that, for the same kind of stimulus, in vivo responses are more random (hence, more difficult to compress) than in vitro responses and, consequently, the former transmit more information than the latter.

Keywords:

Spike trains, Encoding, Lempel–Ziv complexity, Entropy, Internal states, Numerical invariants for neuronal responses

Affiliations:
Amigó J.M.-Universidad Miguel Hernández-CSIC (ES)
Szczepański J.-IPPT PAN
Wajnryb E.-IPPT PAN
Sanchez-Vives M.V.-ICREA-IDIBAPS (ES)
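
Illustrative note on publication 6: the abstract relates the number of source states to the stochastic dependence lag of the discharge. As one generic way to probe such a lag (not the procedure used in the paper), the sketch below tracks the plug-in conditional entropy H(X_t | X_{t-1}, ..., X_{t-k}) of a discretized sequence as the history length k grows; the value of k beyond which it stops decreasing suggests the Markov order. The toy order-1 chain and all names are assumptions.

```python
# Generic sketch: conditional entropy vs. history length for a symbol sequence.
import numpy as np
from collections import Counter

def conditional_entropy(seq, k):
    """Plug-in estimate of H(X_t | X_{t-1},...,X_{t-k}) in bits."""
    joint, past = Counter(), Counter()
    for t in range(k, len(seq)):
        ctx = tuple(seq[t - k:t])
        joint[ctx + (seq[t],)] += 1
        past[ctx] += 1
    n = sum(joint.values())
    h = 0.0
    for key, c in joint.items():
        h -= (c / n) * np.log2(c / past[key[:k]])
    return h

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    x = [0]                                    # toy order-1 binary Markov chain
    for _ in range(50000):
        p_one = 0.1 if x[-1] == 0 else 0.6     # assumed transition probabilities
        x.append(int(rng.random() < p_one))
    x = np.asarray(x, dtype=np.uint8)
    for k in range(1, 5):                      # the values plateau from k = 1 onward
        print(f"k={k}: H(X_t | past) ~ {conditional_entropy(x, k):.4f} bits")
```
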
7. Szczepański J., Amigó J.M., Wajnryb E., Sanchez-Vives M.V., Application of Lempel–Ziv complexity to the analysis of neural discharges, Network: Computation in Neural Systems, ISSN: 0954-898X, DOI: 10.1088/0954-898X_14_2_309, Vol.14, No.2, pp.335-350, 2003
Abstract:

Pattern matching is a simple method for studying the properties of information sources based on individual sequences (Wyner et al 1998 IEEE Trans. Inf. Theory 44 2045–56). In particular, the normalized Lempel–Ziv complexity (Lempel and Ziv 1976 IEEE Trans. Inf. Theory 22 75–88), which measures the rate of generation of new patterns along a sequence, is closely related to such important source properties as entropy and information compression ratio. We make use of this concept to characterize the responses of neurons of the primary visual cortex to different kinds of stimulus, including visual stimulation (sinusoidal drifting gratings) and intracellular current injections (sinusoidal and random currents), under two conditions (in vivo and in vitro preparations). Specifically, we digitize the neuronal discharges with several encoding techniques and employ the complexity curves of the resulting discrete signals as fingerprints of the stimulus ensembles. Our results show, for example, that if the neural discharges are encoded with a particular one-parameter method (‘interspike time coding’), the normalized complexity remains constant within some classes of stimuli for a wide range of the parameter. Such constant values of the normalized complexity then allow the differentiation of the stimulus classes. With other encodings (e.g. ‘bin coding’), the whole complexity curve is needed to achieve this goal. In any case, it turns out that the normalized complexity of the neural discharges in vivo is higher (and hence carries more information in the sense of Shannon) than in vitro for the same kind of stimulus.

Affiliations:
Szczepański J.-IPPT PAN
Amigó J.M.-Universidad Miguel Hernández-CSIC (ES)
Wajnryb E.-IPPT PAN
Sanchez-Vives M.V.-ICREA-IDIBAPS (ES)
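
Illustrative note on publication 7: the abstract digitizes neural discharges with several encodings, including a one-parameter ‘interspike time coding’. The sketch below is one plausible reading of such an encoding (quantizing interspike intervals at a resolution dt), not necessarily the scheme used in the paper; the number of symbols, the parameter values and the synthetic spike times are assumptions.

```python
# Hypothetical interspike-interval encoding: one symbol per spike, controlled
# by a single resolution parameter dt, so that complexity can then be computed
# on the resulting discrete sequence for each dt (a "complexity curve").
import numpy as np

def isi_encode(spike_times_s, dt_s, n_symbols=4):
    """Map each interspike interval to one of n_symbols quantization levels."""
    isis = np.diff(np.sort(np.asarray(spike_times_s)))
    levels = np.minimum((isis / dt_s).astype(int), n_symbols - 1)
    return levels.astype(np.uint8)

if __name__ == "__main__":
    spikes = np.sort(np.random.default_rng(4).uniform(0.0, 10.0, size=200))
    for dt in (0.005, 0.02, 0.08):             # sweep the single coding parameter
        seq = isi_encode(spikes, dt)
        print(f"dt={dt:.3f}s -> {len(seq)} symbols, levels used: {sorted(set(seq.tolist()))}")
```
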

Conference abstracts
1. Szczepański J., Sanchez-Vives M.V., Arnold M.M., Montejo N., Paprocki B., Pręgowska A., Amigó J.M., Wajnryb E., Analyzing Neuroscience Signals using Information Theory and Complexity Shannon Communication Approach, 12th INCF Workshop on Node Communication and Collaborative Neuroinformatics, 2015-04-16/04-17, Warszawa (PL), pp.1-32, 2015