Fitting neuron models to spike trains
Directory of Open Access Journals (Sweden)
Cyrille Rossant
2011-02-01
Full Text Available Computational modeling is increasingly used to understand the function of neural circuits in systems neuroscience. These studies require models of individual neurons with realistic input-output properties. Recently, it was found that spiking models, if properly fitted, can accurately predict the precisely timed spike trains produced by cortical neurons in response to somatically injected currents. This requires fitting techniques that are efficient and flexible enough to easily test different candidate models. We present a generic solution based on the Brian simulator (a neural network simulator in Python), which allows the user to define and fit arbitrary neuron models to electrophysiological recordings. It relies on vectorization and parallel computing techniques to achieve efficiency. We demonstrate its use on neural recordings in the barrel cortex and in the auditory brainstem, and confirm that simple adaptive spiking models can accurately predict the response of cortical neurons. Finally, we show how a complex multicompartmental model can be reduced to a simple effective spiking model.
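A minimal sketch of the fitting idea in plain NumPy (this is not Brian's actual API; the LIF equations, the parameter names, and the brute-force search over a single threshold are illustrative assumptions): simulate a candidate model for each parameter value and keep the one whose output best matches the recorded train.

```python
import numpy as np

def simulate_lif(I, dt=0.1, tau=10.0, v_th=1.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron driven by current I; returns spike times (ms)."""
    v, spikes = 0.0, []
    for i, inp in enumerate(I):
        v += dt * (-v + inp) / tau          # Euler step of the membrane equation
        if v >= v_th:                       # threshold crossing: spike, then reset
            spikes.append(i * dt)
            v = v_reset
    return np.array(spikes)

# "Recorded" train produced with a true threshold we pretend not to know.
rng = np.random.default_rng(0)
I = 1.5 + 0.5 * rng.standard_normal(5000)
target = simulate_lif(I, v_th=1.2)

# Brute-force fit over candidate thresholds, scored by spike-count mismatch.
candidates = np.linspace(0.8, 1.6, 17)
errors = [abs(len(simulate_lif(I, v_th=c)) - len(target)) for c in candidates]
best = candidates[int(np.argmin(errors))]
```

Real fitting toolchains replace the crude spike-count error with a coincidence-based criterion and vectorize the simulation over many parameter sets at once, which is where the efficiency gains described above come from.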
Building functional networks of spiking model neurons.
Abbott, L F; DePasquale, Brian; Memmesheimer, Raoul-Martin
2016-03-01
Most of the networks used by computer scientists and many of those studied by modelers in neuroscience represent unit activities as continuous variables. Neurons, however, communicate primarily through discontinuous spiking. We review methods for transferring our ability to construct interesting networks that perform relevant tasks from the artificial continuous domain to more realistic spiking network models. These methods raise a number of issues that warrant further theoretical and experimental study.
Automatic fitting of spiking neuron models to electrophysiological recordings
Directory of Open Access Journals (Sweden)
Cyrille Rossant
2010-03-01
Full Text Available Spiking models can accurately predict the spike trains produced by cortical neurons in response to somatically injected currents. Since the specific characteristics of the model depend on the neuron, a computational method is required to fit models to electrophysiological recordings. The fitting procedure can be very time consuming both in terms of computer simulations and in terms of code writing. We present algorithms to fit spiking models to electrophysiological data (time-varying input and spike trains) that can run in parallel on graphics processing units (GPUs). The model fitting library is interfaced with Brian, a neural network simulator in Python. If a GPU is present it uses just-in-time compilation to translate model equations into optimized code. Arbitrary models can then be defined at script level and run on the graphics card. This tool can be used to obtain empirically validated spiking models of neurons in various systems. We demonstrate its use on public data from the INCF Quantitative Single-Neuron Modeling 2009 competition by comparing the performance of a number of neuron spiking models.
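Such competitions score models with a coincidence-based measure. The sketch below uses one common normalization (treat the exact constants as assumptions, since conventions vary): count model spikes near data spikes, subtract the number expected by chance, and scale so identical trains score 1.

```python
import numpy as np

def gamma_factor(model, data, duration, window=2.0):
    """Coincidence factor between model and data spike trains (times in ms).
    Counts model spikes within +/-window of a data spike, subtracts the
    coincidences expected from a Poisson train at the model's rate, and
    normalizes so that identical trains score 1."""
    n_coinc = sum(1 for t in model if np.any(np.abs(data - t) <= window))
    rate = len(model) / duration
    expected = 2 * window * rate * len(data)        # chance coincidences
    norm = 1 - 2 * rate * window
    return (n_coinc - expected) / (0.5 * (len(model) + len(data))) / norm

data = np.array([10.0, 30.0, 55.0, 80.0])
perfect = gamma_factor(data, data, duration=100.0)        # identical trains
shifted = gamma_factor(data + 5.0, data, duration=100.0)  # every spike 5 ms off
```

A model whose spikes all miss the 2 ms coincidence window scores near (or below) zero, which is what makes the measure a useful fitness function for the GPU-parallel search described above.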
Stochastic models for spike trains of single neurons
Sampath, G
1977-01-01
1 Some basic neurophysiology: 1.1 The neuron; 1.1.1 The axon; 1.1.2 The synapse; 1.1.3 The soma; 1.1.4 The dendrites; 1.2 Types of neurons. 2 Signals in the nervous system: 2.1 Action potentials as point events - point processes in the nervous system; 2.2 Spontaneous activity in neurons. 3 Stochastic modelling of single neuron spike trains: 3.1 Characteristics of a neuron spike train; 3.2 The mathematical neuron. 4 Superposition models: 4.1 Superposition of renewal processes; 4.2 Superposition of stationary point processes - limiting behaviour; 4.2.1 Palm functions; 4.2.2 Asymptotic behaviour of n stationary point processes superposed; 4.3 Superposition models of neuron spike trains; 4.3.1 Model 4.1; 4.3.2 Model 4.2 - A superposition model with two input channels; 4.3.3 Model 4.3; 4.4 Discussion. 5 Deletion models: 5.1 Deletion models with independent interaction of excitatory and inhibitory sequences; 5.1.1 Model 5.1 The basic de...
From spiking neuron models to linear-nonlinear models.
Ostojic, Srdjan; Brunel, Nicolas
2011-01-20
Neurons transform time-varying inputs into action potentials emitted stochastically at a time-dependent rate. The mapping from current input to output firing rate is often represented with the help of phenomenological models such as the linear-nonlinear (LN) cascade, in which the output firing rate is estimated by applying to the input successively a linear temporal filter and a static non-linear transformation. These simplified models leave out the biophysical details of action potential generation. It is not a priori clear to what extent the input-output mapping of biophysically more realistic, spiking neuron models can be reduced to a simple linear-nonlinear cascade. Here we investigate this question for the leaky integrate-and-fire (LIF), exponential integrate-and-fire (EIF) and conductance-based Wang-Buzsáki models in the presence of background synaptic activity. We exploit available analytic results for these models to determine the corresponding linear filter and static non-linearity in a parameter-free form. We show that the obtained functions are identical to the linear filter and static non-linearity determined using standard reverse correlation analysis. We then quantitatively compare the output of the corresponding linear-nonlinear cascade with numerical simulations of spiking neurons, systematically varying the parameters of input signal and background noise. We find that the LN cascade provides accurate estimates of the firing rates of spiking neurons in most of parameter space. For the EIF and Wang-Buzsáki models, we show that the LN cascade can be reduced to a firing rate model, the timescale of which we determine analytically. Finally we introduce an adaptive timescale rate model in which the timescale of the linear filter depends on the instantaneous firing rate. This model leads to highly accurate estimates of instantaneous firing rates.
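A minimal LN cascade makes the two-stage structure concrete (the exponential filter timescale, gain, and threshold below are arbitrary illustrative values, not ones derived from the models above): convolve the stimulus with a linear temporal filter, then pass the result through a static rectifying nonlinearity to obtain a firing rate.

```python
import numpy as np

dt = 1.0                       # ms per sample
t = np.arange(0.0, 100.0, dt)
filt = np.exp(-t / 20.0)       # assumed exponential filter, 20 ms timescale
filt /= filt.sum() * dt        # normalize the filter to unit area

def ln_rate(stimulus, threshold=0.2, gain=50.0):
    """Linear-nonlinear cascade: linear filtering, then a static rectifier."""
    drive = np.convolve(stimulus, filt, mode="full")[:len(stimulus)] * dt
    return gain * np.maximum(drive - threshold, 0.0)

stim = np.where(np.arange(200) >= 50, 1.0, 0.0)   # step stimulus at t = 50 ms
rate = ln_rate(stim)           # rises smoothly toward gain * (1 - threshold)
```

In the parameter-free construction described above, the filter and the nonlinearity are not chosen by hand like this but derived analytically from the spiking model itself.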
Cochlear spike synchronization and neuron coincidence detection model
Bader, Rolf
2018-02-01
Coincidence detection of a spike pattern fed from the cochlea into a single neuron is investigated using a physical Finite-Difference model of the cochlea and a physiologically motivated neuron model. Previous studies have shown experimental evidence of increased spike synchronization in the nucleus cochlearis and the trapezoid body [Joris et al., J. Neurophysiol. 71(3), 1022-1036 and 1037-1051 (1994)] and models show tone partial phase synchronization at the transition from mechanical waves on the basilar membrane into spike patterns [Ch. F. Babbs, J. Biophys. 2011, 435135]. Still, the traveling speed of waves on the basilar membrane causes a frequency-dependent time delay of simultaneously incoming sound wavefronts of up to 10 ms. The present model shows nearly perfect synchronization of multiple spike inputs as neuron outputs with interspike intervals (ISI) at the periodicity of the incoming sound for frequencies from about 30 to 300 Hz, for two different numbers of afferent nerve fiber inputs. Coincidence detection serves here as a fusion of multiple inputs into one single event, enhancing pitch periodicity detection for low frequencies, impulse detection, or increased sound or speech intelligibility due to dereverberation.
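The fusion of several afferent inputs into one output event can be sketched with a windowed threshold rule (the window width, the required input count, and the skip-ahead "refractory" step are illustrative assumptions, not the paper's neuron model):

```python
import numpy as np

def coincidence_detector(input_trains, window=1.0, min_inputs=3):
    """Emit one output event whenever at least `min_inputs` afferent spikes
    arrive within `window` ms of each other (fusion of many inputs into one)."""
    all_spikes = np.sort(np.concatenate(input_trains))
    output, i = [], 0
    while i < len(all_spikes):
        in_window = all_spikes[(all_spikes >= all_spikes[i]) &
                               (all_spikes < all_spikes[i] + window)]
        if len(in_window) >= min_inputs:
            output.append(all_spikes[i])   # one fused output event
            i += len(in_window)            # skip the group already explained
        else:
            i += 1
    return np.array(output)

# Three afferents, nearly synchronous at t = 10 and t = 20 ms; stragglers elsewhere.
trains = [np.array([10.0, 20.0, 35.0]),
          np.array([10.2, 20.1]),
          np.array([10.4, 20.3, 50.0])]
out = coincidence_detector(trains)         # fires only at the coincidences
```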
Colored noise and memory effects on formal spiking neuron models
da Silva, L. A.; Vilela, R. D.
2015-06-01
Simplified neuronal models capture the essence of the electrical activity of a generic neuron, while being more attractive from a computational point of view than higher-dimensional models such as the Hodgkin-Huxley model. In this work, we propose a generalized resonate-and-fire model described by a generalized Langevin equation that takes into account memory effects and colored noise. We perform a comprehensive numerical analysis to study the dynamics and the point process statistics of the proposed model, highlighting interesting new features such as (i) nonmonotonic behavior (emergence of peak structures, enhanced by the choice of colored noise characteristic time scale) of the coefficient of variation (CV) as a function of memory characteristic time scale, (ii) colored noise-induced shift in the CV, and (iii) emergence and suppression of multimodality in the interspike interval (ISI) distribution due to memory-induced subthreshold oscillations. Moreover, in the noise-induced spike regime, we study how memory and colored noise affect the coherence resonance (CR) phenomenon. We found that for sufficiently long memory, not only is CR suppressed but also the minimum of the CV-versus-noise intensity curve that characterizes the presence of CR may be replaced by a maximum. The aforementioned features allow one to interpret the interplay between memory and colored noise as an effective control mechanism for neuronal variability. Since both variability and nontrivial temporal patterns in the ISI distribution are ubiquitous in biological cells, we hope the present model can be useful in modeling real aspects of neurons.
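Colored noise of the kind used here can be generated as an Ornstein-Uhlenbeck process. The sketch below feeds it to a plain LIF neuron (a deliberate simplification of the paper's generalized Langevin resonate-and-fire model, with arbitrary parameter values) and computes the CV of the interspike intervals, the central statistic discussed above.

```python
import numpy as np

def ou_noise(n, dt=0.1, tau_c=5.0, sigma=1.0, seed=1):
    """Ornstein-Uhlenbeck process: Gaussian colored noise, correlation time tau_c (ms)."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n)
    for i in range(1, n):
        x[i] = x[i-1] - dt * x[i-1] / tau_c \
               + sigma * np.sqrt(2 * dt / tau_c) * rng.standard_normal()
    return x

# Noisy LIF driven by colored noise; the ISI CV quantifies spiking variability.
dt, tau, v_th = 0.1, 10.0, 1.0
noise = ou_noise(100000, dt=dt)
v, spikes = 0.0, []
for i in range(len(noise)):
    v += dt * (-v + 1.1 + noise[i]) / tau
    if v >= v_th:
        spikes.append(i * dt)
        v = 0.0
isi = np.diff(spikes)
cv = isi.std() / isi.mean()    # coefficient of variation of the ISIs
```

Sweeping tau_c (and, in the full model, the memory kernel timescale) while recording this CV is exactly the kind of numerical experiment the abstract describes.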
A model-based spike sorting algorithm for removing correlation artifacts in multi-neuron recordings.
Pillow, Jonathan W; Shlens, Jonathon; Chichilnisky, E J; Simoncelli, Eero P
2013-01-01
We examine the problem of estimating the spike trains of multiple neurons from voltage traces recorded on one or more extracellular electrodes. Traditional spike-sorting methods rely on thresholding or clustering of recorded signals to identify spikes. While these methods can detect a large fraction of the spikes from a recording, they generally fail to identify synchronous or near-synchronous spikes: cases in which multiple spikes overlap. Here we investigate the geometry of failures in traditional sorting algorithms, and document the prevalence of such errors in multi-electrode recordings from primate retina. We then develop a method for multi-neuron spike sorting using a model that explicitly accounts for the superposition of spike waveforms. We model the recorded voltage traces as a linear combination of spike waveforms plus a stochastic background component of correlated Gaussian noise. Combining this measurement model with a Bernoulli prior over binary spike trains yields a posterior distribution for spikes given the recorded data. We introduce a greedy algorithm to maximize this posterior that we call "binary pursuit". The algorithm allows modest variability in spike waveforms and recovers spike times with higher precision than the voltage sampling rate. This method substantially corrects cross-correlation artifacts that arise with conventional methods, and substantially outperforms clustering methods on both real and simulated data. Finally, we develop diagnostic tools that can be used to assess errors in spike sorting in the absence of ground truth.
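The superposition idea can be illustrated with greedy template matching, a simplified stand-in for the paper's Bayesian "binary pursuit" (the correlation threshold and single fixed template are ad hoc assumptions): find the best template placement, record a spike, subtract the waveform, and repeat, so that overlapping spikes can still be recovered.

```python
import numpy as np

def greedy_spike_sort(trace, template, threshold=0.8):
    """Repeatedly find the best template placement, record a spike there,
    and subtract the waveform so overlapping spikes remain detectable."""
    residual = trace.astype(float).copy()
    spikes = []
    while True:
        corr = np.correlate(residual, template, mode="valid")
        i = int(np.argmax(corr))
        if corr[i] < threshold * np.dot(template, template):
            break                                   # nothing template-like remains
        spikes.append(i)
        residual[i:i+len(template)] -= template     # explain away this spike
    return sorted(spikes), residual

template = np.array([0.0, 1.0, 2.0, 1.0, 0.0])
trace = np.zeros(40)
for t0 in (5, 8, 20):          # the spikes at 5 and 8 overlap by two samples
    trace[t0:t0+len(template)] += template
found, residual = greedy_spike_sort(trace, template)
```

A clustering-based sorter would see the summed waveform around samples 5-12 as one unfamiliar shape; subtraction recovers both events, which is the failure mode the full probabilistic method corrects.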
Routes to Chaos Induced by a Discontinuous Resetting Process in a Hybrid Spiking Neuron Model.
Nobukawa, Sou; Nishimura, Haruhiko; Yamanishi, Teruya
2018-01-10
Several hybrid spiking neuron models combining continuous spike generation mechanisms and discontinuous resetting processes following spiking have been proposed. The Izhikevich neuron model, for example, can reproduce many spiking patterns. This model clearly possesses various types of bifurcations and routes to chaos under the effect of a state-dependent jump in the resetting process. In this study, we focus further on the relation between chaotic behaviour and the state-dependent jump, approaching the subject by comparing spiking neuron model versions with and without the resetting process. We first adopt a continuous two-dimensional spiking neuron model in which the orbit in the spiking state does not exhibit divergent behaviour. We then insert the resetting process into the model. An evaluation using the Lyapunov exponent with a saltation matrix and a characteristic multiplier of the Poincaré map reveals that two types of chaotic behaviour (i.e. bursting chaotic spikes and near-period-two chaotic spikes) are induced by the resetting process. In addition, we confirm that this chaotic bursting state is generated from the periodic spiking state because of the slow- and fast-scale dynamics that arise when jumping to the hyperpolarization and depolarization regions, respectively.
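The standard Izhikevich formulation makes the hybrid structure concrete: continuous two-dimensional dynamics plus a discontinuous state-dependent reset (the parameters below are the commonly used regular-spiking values; the forward-Euler step and drive current are illustrative choices).

```python
def izhikevich(I, a=0.02, b=0.2, c=-65.0, d=8.0, dt=0.5, T=1000.0):
    """Izhikevich neuron: continuous quadratic dynamics plus the
    discontinuous reset v -> c, u -> u + d when v crosses 30 mV."""
    v, u, spikes = c, b * c, []
    for i in range(int(T / dt)):
        v += dt * (0.04 * v * v + 5 * v + 140 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:              # the state-dependent resetting process
            spikes.append(i * dt)
            v, u = c, u + d
    return spikes

spikes = izhikevich(I=10.0)        # regular-spiking parameters, tonic firing
```

Removing the `if v >= 30.0` branch yields the continuous system the study starts from; the comparison above is precisely between these two versions.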
Pecevski, Dejan; Buesing, Lars; Maass, Wolfgang
2011-12-01
An important open problem of computational neuroscience is the generic organization of computations in networks of neurons in the brain. We show here through rigorous theoretical analysis that inherent stochastic features of spiking neurons, in combination with simple nonlinear computational operations in specific network motifs and dendritic arbors, enable networks of spiking neurons to carry out probabilistic inference through sampling in general graphical models. In particular, it enables them to carry out probabilistic inference in Bayesian networks with converging arrows ("explaining away") and with undirected loops, that occur in many real-world tasks. Ubiquitous stochastic features of networks of spiking neurons, such as trial-to-trial variability and spontaneous activity, are necessary ingredients of the underlying computational organization. We demonstrate through computer simulations that this approach can be scaled up to neural emulations of probabilistic inference in fairly large graphical models, yielding some of the most complex computations that have been carried out so far in networks of spiking neurons.
Buesing, Lars; Bill, Johannes; Nessler, Bernhard; Maass, Wolfgang
2011-11-01
The organization of computations in networks of spiking neurons in the brain is still largely unknown, in particular in view of the inherently stochastic features of their firing activity and the experimentally observed trial-to-trial variability of neural systems in the brain. In principle there exists a powerful computational framework for stochastic computations, probabilistic inference by sampling, which can explain a large number of macroscopic experimental data in neuroscience and cognitive science. But it has turned out to be surprisingly difficult to create a link between these abstract models for stochastic computations and more detailed models of the dynamics of networks of spiking neurons. Here we create such a link and show that under some conditions the stochastic firing activity of networks of spiking neurons can be interpreted as probabilistic inference via Markov chain Monte Carlo (MCMC) sampling. Since common methods for MCMC sampling in distributed systems, such as Gibbs sampling, are inconsistent with the dynamics of spiking neurons, we introduce a different approach based on non-reversible Markov chains that is able to reflect inherent temporal processes of spiking neuronal activity through a suitable choice of random variables. We propose a neural network model and show by a rigorous theoretical analysis that its neural activity implements MCMC sampling of a given distribution, both for the case of discrete and continuous time. This provides a step towards closing the gap between abstract functional models of cortical computation and more detailed models of networks of spiking neurons.
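The inference-by-sampling idea can be illustrated with ordinary Gibbs sampling from a small Boltzmann distribution, where each binary unit plays the role of a neuron's on/off state. Note the hedge: the paper argues that Gibbs sampling itself is inconsistent with spiking dynamics and introduces non-reversible chains instead, so this is only a sketch of the target computation, with arbitrary weights and biases.

```python
import numpy as np

def sample_boltzmann(W, b, n_steps=20000, seed=0):
    """Gibbs sampling from p(z) proportional to exp(z.b + z.W.z/2) over
    binary states; returns the estimated marginal activation of each unit."""
    rng = np.random.default_rng(seed)
    n = len(b)
    z = np.zeros(n)
    samples = np.zeros(n)
    for t in range(n_steps):
        i = t % n                                   # sweep the units in order
        field = b[i] + W[i] @ z - W[i, i] * z[i]    # local input to unit i
        z[i] = float(rng.random() < 1 / (1 + np.exp(-field)))
        samples += z
    return samples / n_steps                        # marginal "firing rates"

# Two units with a negative weight: joint activity is suppressed, so the
# marginals fall below what the biases alone would predict.
W = np.array([[0.0, -2.0], [-2.0, 0.0]])
b = np.array([1.0, 1.0])
marginals = sample_boltzmann(W, b)
```

For this W and b the exact marginals are 0.5 per unit, below the bias-only value sigmoid(1) of roughly 0.73; the sampler's long-run firing rates recover this, which is the sense in which stochastic activity "is" the inference.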
DEFF Research Database (Denmark)
Østergaard, Jacob; Kramer, Mark A.; Eden, Uri T.
2018-01-01
...are separately applied; understanding the relationships between these modeling approaches remains an area of active research. In this letter, we examine this relationship using simulation. To do so, we first generate spike train data from a well-known dynamical model, the Izhikevich neuron, with a noisy input current. We then fit these spike train data with a statistical model (a generalized linear model, GLM, with multiplicative influences of past spiking). For different levels of noise, we show how the GLM captures both the deterministic features of the Izhikevich neuron and the variability driven by the noise. We conclude that the GLM captures essential features of the simulated spike trains, but for near-deterministic spike trains, goodness-of-fit analyses reveal that the model does not fit very well in a statistical sense; the essential random part of the GLM is not captured.
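A GLM with multiplicative spike-history influence can be sketched as Bernoulli logistic regression on binned spikes (the synthetic "neuron", the five history lags, and the gradient-ascent settings below are illustrative assumptions, not the letter's setup):

```python
import numpy as np

def fit_glm(spikes, stim, n_hist=5, lr=1.0, n_iter=3000):
    """Fit a Bernoulli GLM, p(spike_t) = sigmoid(b + w_s*stim_t + w_h . history_t),
    i.e. a multiplicative (log-linear) influence of past spiking, by gradient ascent."""
    T = len(spikes)
    X = np.zeros((T, n_hist + 2))
    X[:, 0] = 1.0                         # bias
    X[:, 1] = stim                        # stimulus covariate
    for k in range(n_hist):               # spike-history covariates (lag k+1)
        X[k+1:, k+2] = spikes[:T-k-1]
    w = np.zeros(n_hist + 2)
    for _ in range(n_iter):
        p = 1 / (1 + np.exp(-X @ w))
        w += lr * X.T @ (spikes - p) / T  # Bernoulli log-likelihood gradient
    return w

# Synthetic ground truth: stimulus drive plus a refractory (negative) history term.
rng = np.random.default_rng(2)
stim = rng.standard_normal(2000)
true_w = np.array([-2.0, 1.5, -3.0, -1.0, 0.0, 0.0, 0.0])
spikes = np.zeros(2000)
for t in range(2000):
    x = np.concatenate(([1.0, stim[t]],
                        spikes[max(0, t-5):t][::-1],
                        np.zeros(max(0, 5-t))))
    spikes[t] = float(rng.random() < 1 / (1 + np.exp(-x @ true_w)))
w_hat = fit_glm(spikes, stim)             # recovers the signs and rough scale
```

Fitting the same GLM to spikes from a near-deterministic simulation is where, as the abstract notes, the goodness-of-fit breaks down: the model's essential Bernoulli randomness has nothing to explain.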
Tuckwell, Henry C
2013-06-01
Serotonergic neurons of the dorsal raphe nuclei, with their extensive innervation of nearly the whole brain, have important modulatory effects on many cognitive and physiological processes. They play important roles in clinical depression and other psychiatric disorders. In order to quantify the effects of serotonergic transmission on target cells it is desirable to construct computational models, and to this end it is necessary to have details of the biophysical and spike properties of the serotonergic neurons. Here several basic properties are reviewed with data from several studies from the 1960s to the present. The quantities included are input resistance, resting membrane potential, membrane time constant, firing rate, spike duration, spike and afterhyperpolarization (AHP) amplitude, spike threshold, cell capacitance, and soma and somadendritic areas. The action potentials of these cells are normally triggered by a combination of sodium and calcium currents which may result in autonomous pacemaker activity. We here analyse the mechanisms of high-threshold calcium spikes which have been demonstrated in these cells in the presence of TTX (tetrodotoxin). The parameters for calcium dynamics required to give calcium spikes are quite different from those for regular spiking, which suggests the involvement of restricted parts of the soma-dendritic surface, as has been found, for example, in hippocampal neurons. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Computing with Spiking Neuron Networks
H. Paugam-Moisy; S.M. Bohte (Sander); G. Rozenberg; T.H.W. Baeck (Thomas); J.N. Kok (Joost)
2012-01-01
Spiking Neuron Networks (SNNs) are often referred to as the 3rd generation of neural networks. Highly inspired by natural computing in the brain and recent advances in neuroscience, they derive their strength and interest from an accurate modeling of synaptic interactions.
Minimal time spiking in various ChR2-controlled neuron models.
Renault, Vincent; Thieullen, Michèle; Trélat, Emmanuel
2018-02-01
We use conductance-based neuron models and the mathematical modeling of optogenetics to define controlled neuron models, and we address the minimal-time control of these affine systems for the first spike from equilibrium. We apply tools of geometric optimal control theory to study singular extremals, and we implement a direct method to compute optimal controls. When the system is too large to theoretically investigate the existence of singular optimal controls, we observe numerically that the optimal controls are bang-bang.
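For intuition, consider the simplest possible case rather than the paper's conductance-based ChR2 models: a one-dimensional LIF neuron with a bounded input. There the minimal-time control to the first spike is trivially bang-bang, and the optimal time has a closed form that a forward simulation confirms (parameter values below are arbitrary).

```python
import numpy as np

# LIF neuron dv/dt = (-v + u)/tau with control 0 <= u <= u_max: the
# minimal-time control from rest to threshold is bang-bang (u = u_max
# throughout), giving t* = tau * ln(u_max / (u_max - v_th)).
tau, v_th, u_max = 10.0, 1.0, 2.0
t_star = tau * np.log(u_max / (u_max - v_th))

# Check the closed form by forward simulation under the bang control.
dt, v, t = 1e-4, 0.0, 0.0
while v < v_th:
    v += dt * (-v + u_max) / tau
    t += dt
```

The interesting questions in the paper begin where this picture ends: in higher-dimensional affine systems, singular arcs may compete with bang-bang arcs, which is what the geometric analysis addresses.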
Information transmission with spiking Bayesian neurons
International Nuclear Information System (INIS)
Lochmann, Timm; Deneve, Sophie
2008-01-01
Spike trains of cortical neurons resulting from repeated presentations of a stimulus are variable and exhibit Poisson-like statistics. Many models of neural coding therefore assumed that sensory information is contained in instantaneous firing rates, not spike times. Here, we ask how much information about time-varying stimuli can be transmitted by spiking neurons with such input and output variability. In particular, does this variability imply spike generation to be intrinsically stochastic? We consider a model neuron that estimates optimally the current state of a time-varying binary variable (e.g. presence of a stimulus) by integrating incoming spikes. The unit signals its current estimate to other units with spikes whenever the estimate increased by a fixed amount. As shown previously, this computation results in integrate and fire dynamics with Poisson-like output spike trains. This output variability is entirely due to the stochastic input rather than noisy spike generation. As a result such a deterministic neuron can transmit most of the information about the time varying stimulus. This contrasts with a standard model of sensory neurons, the linear-nonlinear Poisson (LNP) model, which assumes that most variability in output spike trains is due to stochastic spike generation. Although it yields the same firing statistics, we found that such noisy firing results in the loss of most information. Finally, we use this framework to compare potential effects of top-down attention versus bottom-up saliency on information transfer with spiking neurons.
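The "spike when the estimate has increased by a fixed amount" rule can be sketched as follows. This is a loose simplification: the drift term, the saturation floor, and all constants are assumptions for illustration, not the paper's exact log-odds dynamics.

```python
import numpy as np

def bayesian_neuron(input_spikes, dt=0.001, r_on=50.0, r_off=10.0, theta=0.5):
    """Track the log-odds L that a hidden binary stimulus is 'on': each input
    spike adds log(r_on/r_off); between spikes L drifts down by (r_on-r_off)*dt.
    Emit an output spike whenever L has risen by theta since the last one."""
    L, last, out = 0.0, 0.0, []
    bump = np.log(r_on / r_off)
    for i, s in enumerate(input_spikes):
        L -= dt * (r_on - r_off)       # evidence decays while no spike arrives
        if s:
            L += bump                  # each input spike is evidence for 'on'
        L = max(L, -10.0)              # saturation floor keeps the sketch bounded
        if L - last >= theta:          # signal the increase with one spike
            out.append(i * dt)
            last = L
    return out

rng = np.random.default_rng(3)
on_train = rng.random(2000) < 50.0 * 0.001    # 1-ms bins, stimulus on (50 Hz)
off_train = rng.random(2000) < 10.0 * 0.001   # stimulus off (10 Hz)
n_on = len(bayesian_neuron(on_train))
n_off = len(bayesian_neuron(off_train))
```

Although the unit is fully deterministic, its output inherits the Poisson-like irregularity of its input, which is the abstract's central point.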
Pecevski, Dejan; Buesing, Lars; Maass, Wolfgang
2011-01-01
An important open problem of computational neuroscience is the generic organization of computations in networks of neurons in the brain. We show here through rigorous theoretical analysis that inherent stochastic features of spiking neurons, in combination with simple nonlinear computational operations in specific network motifs and dendritic arbors, enable networks of spiking neurons to carry out probabilistic inference through sampling in general graphical models. In particular, it enables them to carry out probabilistic inference in Bayesian networks with converging arrows (“explaining away”) and with undirected loops, that occur in many real-world tasks. Ubiquitous stochastic features of networks of spiking neurons, such as trial-to-trial variability and spontaneous activity, are necessary ingredients of the underlying computational organization. We demonstrate through computer simulations that this approach can be scaled up to neural emulations of probabilistic inference in fairly large graphical models, yielding some of the most complex computations that have been carried out so far in networks of spiking neurons. PMID:22219717
Directory of Open Access Journals (Sweden)
Dejan Pecevski
2011-12-01
Full Text Available An important open problem of computational neuroscience is the generic organization of computations in networks of neurons in the brain. We show here through rigorous theoretical analysis that inherent stochastic features of spiking neurons, in combination with simple nonlinear computational operations in specific network motifs and dendritic arbors, enable networks of spiking neurons to carry out probabilistic inference through sampling in general graphical models. In particular, it enables them to carry out probabilistic inference in Bayesian networks with converging arrows ("explaining away") and with undirected loops, that occur in many real-world tasks. Ubiquitous stochastic features of networks of spiking neurons, such as trial-to-trial variability and spontaneous activity, are necessary ingredients of the underlying computational organization. We demonstrate through computer simulations that this approach can be scaled up to neural emulations of probabilistic inference in fairly large graphical models, yielding some of the most complex computations that have been carried out so far in networks of spiking neurons.
Decoding spikes in a spiking neuronal network
Energy Technology Data Exchange (ETDEWEB)
Feng Jianfeng [Department of Informatics, University of Sussex, Brighton BN1 9QH (United Kingdom); Ding, Mingzhou [Department of Mathematics, Florida Atlantic University, Boca Raton, FL 33431 (United States)
2004-06-04
We investigate how to reliably decode the input information from the output of a spiking neuronal network. A maximum likelihood estimator of the input signal, together with its Fisher information, is rigorously calculated. The advantage of the maximum likelihood estimation over the 'brute-force rate coding' estimate is clearly demonstrated. It is pointed out that the ergodic assumption in neuroscience, i.e. that a temporal average is equivalent to an ensemble average, is in general not true. Averaging over an ensemble of neurons usually gives a biased estimate of the input information. A method to compensate for the bias is proposed. Reconstruction of dynamical input signals with a group of spiking neurons is extensively studied and our results show that less than a spike is sufficient to accurately decode dynamical inputs.
Decoding spikes in a spiking neuronal network
International Nuclear Information System (INIS)
Feng Jianfeng; Ding, Mingzhou
2004-01-01
We investigate how to reliably decode the input information from the output of a spiking neuronal network. A maximum likelihood estimator of the input signal, together with its Fisher information, is rigorously calculated. The advantage of the maximum likelihood estimation over the 'brute-force rate coding' estimate is clearly demonstrated. It is pointed out that the ergodic assumption in neuroscience, i.e. that a temporal average is equivalent to an ensemble average, is in general not true. Averaging over an ensemble of neurons usually gives a biased estimate of the input information. A method to compensate for the bias is proposed. Reconstruction of dynamical input signals with a group of spiking neurons is extensively studied and our results show that less than a spike is sufficient to accurately decode dynamical inputs
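The relation between a maximum likelihood estimator and its Fisher information can be illustrated in the simplest spiking setting (a homogeneous Poisson neuron with an arbitrary rate, not the network model of the paper): the MLE of the rate is the spike count over the observation time, and its variance approaches the inverse Fisher information.

```python
import numpy as np

# For a Poisson neuron with rate lambda observed for time T, the MLE of the
# rate is n/T and the Fisher information is T/lambda, so Var(MLE) ~ lambda/T.
rng = np.random.default_rng(4)
lam, T, n_trials = 20.0, 10.0, 5000      # true rate (Hz), window (s), trials
counts = rng.poisson(lam * T, size=n_trials)
mle = counts / T                          # per-trial maximum likelihood estimate
fisher = T / lam                          # Fisher information of one observation
# Empirically, mle.mean() is close to lam and mle.var() is close to 1/fisher.
```

In the ensemble setting criticized above, one would instead average counts across different neurons; when their rates differ, that average is biased, which is the point of the proposed correction.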
Directory of Open Access Journals (Sweden)
Kyriaki Sidiropoulou
Full Text Available Proper functioning of working memory involves the expression of stimulus-selective persistent activity in pyramidal neurons of the prefrontal cortex (PFC), which refers to neural activity that persists for seconds beyond the end of the stimulus. The mechanisms which PFC pyramidal neurons use to discriminate between preferred vs. neutral inputs at the cellular level are largely unknown. Moreover, the presence of pyramidal cell subtypes with different firing patterns, such as regular spiking and intrinsic bursting, raises the question as to what their distinct role might be in persistent firing in the PFC. Here, we use a compartmental modeling approach to search for discriminatory features in the properties of incoming stimuli to a PFC pyramidal neuron and/or its response that signal which of these stimuli will result in persistent activity emergence. Furthermore, we use our modeling approach to study cell-type specific differences in persistent activity properties, via implementing a regular spiking (RS) and an intrinsic bursting (IB) model neuron. We identify synaptic location within the basal dendrites as a feature of stimulus selectivity. Specifically, persistent activity-inducing stimuli consist of activated synapses that are located more distally from the soma compared to non-inducing stimuli, in both model cells. In addition, the action potential (AP) latency and the first few inter-spike-intervals of the neuronal response can be used to reliably detect inducing vs. non-inducing inputs, suggesting a potential mechanism by which downstream neurons can rapidly decode the upcoming emergence of persistent activity. While the two model neurons did not differ in the coding features of persistent activity emergence, the properties of persistent activity, such as the firing pattern and the duration of temporally-restricted persistent activity, were distinct. Collectively, our results pinpoint specific features of the neuronal response to a given
The Effects of Guanfacine and Phenylephrine on a Spiking Neuron Model of Working Memory.
Duggins, Peter; Stewart, Terrence C; Choo, Xuan; Eliasmith, Chris
2017-01-01
We use a spiking neural network model of working memory (WM) capable of performing the spatial delayed response task (DRT) to investigate two drugs that affect WM: guanfacine (GFC) and phenylephrine (PHE). In this model, the loss of information over time results from changes in the spiking neural activity through recurrent connections. We reproduce the standard forgetting curve and then show that this curve changes in the presence of GFC and PHE, whose application is simulated by manipulating functional, neural, and biophysical properties of the model. In particular, applying GFC causes increased activity in neurons that are sensitive to the information currently being remembered, while applying PHE leads to decreased activity in these same neurons. Interestingly, these differential effects emerge from network-level interactions because GFC and PHE affect all neurons equally. We compare our model to both electrophysiological data from neurons in monkey dorsolateral prefrontal cortex and to behavioral evidence from monkeys performing the DRT. Copyright © 2016 Cognitive Science Society, Inc.
Spiking Neurons for Analysis of Patterns
Huntsberger, Terrance
2008-01-01
Artificial neural networks comprising spiking neurons of a novel type have been conceived as improved pattern-analysis and pattern-recognition computational systems. These neurons are represented by a mathematical model denoted the state-variable model (SVM), which among other things, exploits a computational parallelism inherent in spiking-neuron geometry. Networks of SVM neurons offer advantages of speed and computational efficiency, relative to traditional artificial neural networks. The SVM also overcomes some of the limitations of prior spiking-neuron models. There are numerous potential pattern-recognition, tracking, and data-reduction (data preprocessing) applications for these SVM neural networks on Earth and in exploration of remote planets. Spiking neurons imitate biological neurons more closely than do the neurons of traditional artificial neural networks. A spiking neuron includes a central cell body (soma) surrounded by a tree-like interconnection network (dendrites). Spiking neurons are so named because they generate trains of output pulses (spikes) in response to inputs received from sensors or from other neurons. They gain their speed advantage over traditional neural networks by using the timing of individual spikes for computation, whereas traditional artificial neurons use averages of activity levels over time. Moreover, spiking neurons use the delays inherent in dendritic processing in order to efficiently encode the information content of incoming signals. Because traditional artificial neurons fail to capture this encoding, they have less processing capability, and so it is necessary to use more gates when implementing traditional artificial neurons in electronic circuitry. Such higher-order functions as dynamic tasking are effected by use of pools (collections) of spiking neurons interconnected by spike-transmitting fibers. The SVM includes adaptive thresholds and submodels of transport of ions (in imitation of such transport in biological
Memristors Empower Spiking Neurons With Stochasticity
Al-Shedivat, Maruan
2015-06-01
Recent theoretical studies have shown that probabilistic spiking can be interpreted as learning and inference in cortical microcircuits. This interpretation creates new opportunities for building neuromorphic systems driven by probabilistic learning algorithms. However, such systems must have two crucial features: 1) the neurons should follow a specific behavioral model, and 2) stochastic spiking should be implemented efficiently for it to be scalable. This paper proposes a memristor-based stochastically spiking neuron that fulfills these requirements. First, the analytical model of the memristor is enhanced so it can capture the behavioral stochasticity consistent with experimentally observed phenomena. The switching behavior of the memristor model is demonstrated to be akin to the firing of the stochastic spike response neuron model, the primary building block for probabilistic algorithms in spiking neural networks. Furthermore, the paper proposes a neural soma circuit that utilizes the intrinsic nondeterminism of memristive switching for efficient spike generation. The simulations and analysis of the behavior of a single stochastic neuron and a winner-take-all network built of such neurons and trained on handwritten digits confirm that the circuit can be used for building probabilistic sampling and pattern adaptation machinery in spiking networks. The findings constitute an important step towards scalable and efficient probabilistic neuromorphic platforms. © 2011 IEEE.
Bayesian population decoding of spiking neurons.
Gerwinn, Sebastian; Macke, Jakob; Bethge, Matthias
2009-01-01
The timing of action potentials in spiking neurons depends on the temporal dynamics of their inputs and contains information about temporal fluctuations in the stimulus. Leaky integrate-and-fire neurons constitute a popular class of encoding models, in which spike times depend directly on the temporal structure of the inputs. However, optimal decoding rules for these models have only been studied explicitly in the noiseless case. Here, we study decoding rules for probabilistic inference of a continuous stimulus from the spike times of a population of leaky integrate-and-fire neurons with threshold noise. We derive three algorithms for approximating the posterior distribution over stimuli as a function of the observed spike trains. In addition to a reconstruction of the stimulus we thus obtain an estimate of the uncertainty as well. Furthermore, we derive a 'spike-by-spike' online decoding scheme that recursively updates the posterior with the arrival of each new spike. We use these decoding rules to reconstruct time-varying stimuli represented by a Gaussian process from spike trains of single neurons as well as neural populations.
Bayesian population decoding of spiking neurons
Directory of Open Access Journals (Sweden)
Sebastian Gerwinn
2009-10-01
Full Text Available The timing of action potentials in spiking neurons depends on the temporal dynamics of their inputs and contains information about temporal fluctuations in the stimulus. Leaky integrate-and-fire neurons constitute a popular class of encoding models, in which spike times depend directly on the temporal structure of the inputs. However, optimal decoding rules for these models have only been studied explicitly in the noiseless case. Here, we study decoding rules for probabilistic inference of a continuous stimulus from the spike times of a population of leaky integrate-and-fire neurons with threshold noise. We derive three algorithms for approximating the posterior distribution over stimuli as a function of the observed spike trains. In addition to a reconstruction of the stimulus we thus obtain an estimate of the uncertainty as well. Furthermore, we derive a `spike-by-spike' online decoding scheme that recursively updates the posterior with the arrival of each new spike. We use these decoding rules to reconstruct time-varying stimuli represented by a Gaussian process from spike trains of single neurons as well as neural populations.
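A minimal sketch of the encoding side, a leaky integrate-and-fire neuron with threshold noise whose spike times track the stimulus, may help fix ideas. The Gaussian threshold jitter and all parameter values below are assumptions for illustration, not the paper's fitted model:

```python
import random

def lif_encode(stimulus, dt=0.1, tau=10.0, v_th=1.0, noise=0.1, seed=1):
    """Leaky integrate-and-fire encoder; after each spike the threshold is
    redrawn with Gaussian jitter around v_th (an assumed noise model)."""
    rng = random.Random(seed)
    v, th, spikes = 0.0, v_th, []
    for i, s in enumerate(stimulus):
        v += (dt / tau) * (s - v)      # leaky integration of the stimulus current
        if v >= th:
            spikes.append(i * dt)      # spike time in the same units as dt
            v = 0.0                    # membrane reset
            th = v_th + noise * rng.gauss(0.0, 1.0)  # fresh noisy threshold
    return spikes
```

A suprathreshold stimulus yields an ordered train of spike times; a subthreshold one yields none, so the spike train carries information about the stimulus that a decoder can invert.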
Antunes, Gabriela; Faria da Silva, Samuel F; Simoes de Souza, Fabio M
2018-06-01
Mirror neurons fire action potentials both when the agent performs a certain behavior and when it watches someone performing a similar action. Here, we present an original mirror neuron model based on spike-timing-dependent plasticity (STDP) between two morpho-electrical models of neocortical pyramidal neurons. Both neurons fired spontaneously with a basal firing rate that followed a Poisson distribution, and the STDP between them was modeled by the triplet algorithm. Our simulation results demonstrated that STDP is sufficient for the rise of mirror neuron function between the pairs of neocortical neurons. This is a proof of concept that pairs of neocortical neurons associating sensory inputs to motor outputs could operate like mirror neurons. In addition, we used the mirror neuron model to investigate whether channelopathies associated with autism spectrum disorder could impair the modeled mirror function. Our simulation results showed that impaired hyperpolarization-activated cationic currents (Ih) affected the mirror function between the pairs of neocortical neurons coupled by STDP.
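For orientation, the simpler pair-based STDP rule (the paper itself uses the triplet variant) can be sketched as follows; the amplitudes and time constants are illustrative assumptions:

```python
import math

def stdp_weight_change(pre_times, post_times, a_plus=0.01, a_minus=0.012,
                       tau_plus=20.0, tau_minus=20.0):
    """Pair-based STDP: potentiation when pre precedes post, depression otherwise.
    Times are in ms; amplitudes and time constants are illustrative."""
    dw = 0.0
    for t_pre in pre_times:
        for t_post in post_times:
            delta = t_post - t_pre
            if delta > 0:        # causal pairing -> potentiation
                dw += a_plus * math.exp(-delta / tau_plus)
            elif delta < 0:      # anti-causal pairing -> depression
                dw -= a_minus * math.exp(delta / tau_minus)
    return dw
```

Causal pre-before-post pairings strengthen the synapse and anti-causal pairings weaken it, which is the asymmetry that lets correlated sensory-motor firing wire up a mirror-like association.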
Spike timing precision of neuronal circuits.
Kilinc, Deniz; Demir, Alper
2018-04-17
Spike timing is believed to be a key factor in sensory information encoding and computations performed by the neurons and neuronal circuits. However, the considerable noise and variability, arising from the inherently stochastic mechanisms that exist in the neurons and the synapses, degrade spike timing precision. Computational modeling can help decipher the mechanisms utilized by the neuronal circuits in order to regulate timing precision. In this paper, we utilize semi-analytical techniques, which were adapted from previously developed methods for electronic circuits, for the stochastic characterization of neuronal circuits. These techniques, which are orders of magnitude faster than traditional Monte Carlo type simulations, can be used to directly compute the spike timing jitter variance, power spectral densities, correlation functions, and other stochastic characterizations of neuronal circuit operation. We consider three distinct neuronal circuit motifs: Feedback inhibition, synaptic integration, and synaptic coupling. First, we show that both the spike timing precision and the energy efficiency of a spiking neuron are improved with feedback inhibition. We unveil the underlying mechanism through which this is achieved. Then, we demonstrate that a neuron can improve on the timing precision of its synaptic inputs, coming from multiple sources, via synaptic integration: The phase of the output spikes of the integrator neuron has the same variance as that of the sample average of the phases of its inputs. Finally, we reveal that weak synaptic coupling among neurons, in a fully connected network, enables them to behave like a single neuron with a larger membrane area, resulting in an improvement in the timing precision through cooperation.
Sundt, Danielle; Gamper, Nikita; Jaffe, David B
2015-12-01
Unmyelinated C-fibers are a major type of sensory neurons conveying pain information. Action potential conduction is regulated by the bifurcation (T-junction) of sensory neuron axons within the dorsal root ganglia (DRG). Understanding how C-fiber signaling is influenced by the morphology of the T-junction and the local expression of ion channels is important for understanding pain signaling. In this study we used biophysical computer modeling to investigate the influence of axon morphology within the DRG and various membrane conductances on the reliability of spike propagation. As expected, calculated input impedance and the amplitude of propagating action potentials were both lowest at the T-junction. Propagation reliability for single spikes was highly sensitive to the diameter of the stem axon and the density of voltage-gated Na(+) channels. A model containing only fast voltage-gated Na(+) and delayed-rectifier K(+) channels conducted trains of spikes up to frequencies of 110 Hz. The addition of slowly activating KCNQ channels (i.e., KV7 or M-channels) to the model reduced the following frequency to 30 Hz. Hyperpolarization produced by addition of a much slower conductance, such as a Ca(2+)-dependent K(+) current, was needed to reduce the following frequency to 6 Hz. Attenuation of driving force due to ion accumulation or hyperpolarization produced by a Na(+)-K(+) pump had no effect on following frequency but could influence the reliability of spike propagation together with the voltage shift generated by a Ca(2+)-dependent K(+) current. These simulations suggest how specific ion channels within the DRG may contribute toward therapeutic treatments for chronic pain. Copyright © 2015 the American Physiological Society.
Directory of Open Access Journals (Sweden)
Marc Ebner
2011-01-01
Full Text Available Cognitive brain functions, for example, sensory perception, motor control and learning, are understood as computation by axonal-dendritic chemical synapses in networks of integrate-and-fire neurons. Cognitive brain functions may occur either consciously or nonconsciously (on “autopilot”). Conscious cognition is marked by gamma synchrony EEG, mediated largely by dendritic-dendritic gap junctions, sideways connections in input/integration layers. Gap-junction-connected neurons define a sub-network within a larger neural network. A theoretical model (the “conscious pilot”) suggests that as gap junctions open and close, a gamma-synchronized sub-network, or zone, moves through the brain as an executive agent, converting nonconscious “auto-pilot” cognition to consciousness, and enhancing computation by coherent processing and collective integration. In this study we implemented sideways “gap junctions” in a single-layer artificial neural network to perform figure/ground separation. The set of neurons connected through gap junctions forms a reconfigurable resistive grid or sub-network zone. In the model, outgoing spikes are temporally integrated and spatially averaged using the fixed resistive grid set up by neurons of similar function which are connected through gap junctions. This spatial average, essentially a feedback signal from the neuron's output, determines whether particular gap junctions between neurons will open or close. Neurons connected through open gap junctions synchronize their output spikes. We have tested our gap-junction-defined sub-network in a one-layer neural network on artificial retinal inputs using real-world images. Our system is able to perform figure/ground separation where the laterally connected sub-network of neurons represents a perceived object. Even though we only show results for visual stimuli, our approach should generalize to other modalities. The system demonstrates a moving sub-network zone of
Directory of Open Access Journals (Sweden)
Sushmita Lakshmi Allam
2012-10-01
Full Text Available Over the past decades, our view of astrocytes has switched from passive support cells to active processing elements in the brain. The current view is that astrocytes shape neuronal communication and also play an important role in many neurodegenerative diseases. Despite the growing awareness of the importance of astrocytes, the exact mechanisms underlying neuron-astrocyte communication and the physiological consequences of astrocytic-neuronal interactions remain largely unclear. In this work, we define a modeling framework that will permit us to address unanswered questions regarding the role of astrocytes. Our computational model of a detailed glutamatergic synapse facilitates the analysis of neural system responses to various stimuli and conditions that are otherwise difficult to obtain experimentally, in particular the readouts at the sub-cellular level. In this paper, we extend a detailed glutamatergic synaptic model to include astrocytic glutamate transporters. We demonstrate how these glial transporters, responsible for the majority of glutamate uptake, modulate synaptic transmission mediated by ionotropic AMPA and NMDA receptors at glutamatergic synapses. Furthermore, we investigate how these local signaling effects at the synaptic level are translated into varying spatio-temporal patterns of neuron firing. Paired pulse stimulation results reveal that the effect of astrocytic glutamate uptake is more apparent when the input inter-spike interval is sufficiently long to allow the receptors to recover from desensitization. These results suggest an important functional role of astrocytes in spike timing dependent processes and demand further investigation of the molecular basis of certain neurological diseases specifically related to alterations in astrocytic glutamate uptake, such as epilepsy.
Memory-induced resonancelike suppression of spike generation in a resonate-and-fire neuron model
Mankin, Romi; Paekivi, Sander
2018-01-01
The behavior of a stochastic resonate-and-fire neuron model based on a reduction of a fractional noise-driven generalized Langevin equation (GLE) with a power-law memory kernel is considered. The effect of temporally correlated random activity of synaptic inputs, which arise from other neurons forming local and distant networks, is modeled as an additive fractional Gaussian noise in the GLE. Using a first-passage-time formulation, in certain system parameter domains exact expressions for the output interspike interval (ISI) density and for the survival probability (the probability that a spike is not generated) are derived and their dependence on input parameters, especially on the memory exponent, is analyzed. In the case of external white noise, it is shown that at intermediate values of the memory exponent the survival probability is significantly enhanced in comparison with the cases of strong and weak memory, which causes a resonancelike suppression of the probability of spike generation as a function of the memory exponent. Moreover, an examination of the dependence of multimodality in the ISI distribution on input parameters shows that there exists a critical memory exponent αc ≈ 0.402, which marks a dynamical transition in the behavior of the system. That phenomenon is illustrated by a phase diagram describing the emergence of three qualitatively different structures of the ISI distribution. Similarities and differences between the behavior of the model at internal and external noises are also discussed.
Reinforcement learning of targeted movement in a spiking neuronal model of motor cortex.
Directory of Open Access Journals (Sweden)
George L Chadderdon
Full Text Available Sensorimotor control has traditionally been considered from a control theory perspective, without relation to neurobiology. In contrast, here we utilized a spiking-neuron model of motor cortex and trained it to perform a simple movement task, which consisted of rotating a single-joint "forearm" to a target. Learning was based on a reinforcement mechanism analogous to that of the dopamine system. This provided a global reward or punishment signal in response to decreasing or increasing distance from hand to target, respectively. Output was partially driven by Poisson motor babbling, creating stochastic movements that could then be shaped by learning. The virtual forearm consisted of a single segment rotated around an elbow joint, controlled by flexor and extensor muscles. The model consisted of 144 excitatory and 64 inhibitory event-based neurons, each with AMPA, NMDA, and GABA synapses. Proprioceptive cell input to this model encoded the 2 muscle lengths. Plasticity was only enabled in feedforward connections between input and output excitatory units, using spike-timing-dependent eligibility traces for synaptic credit or blame assignment. Learning resulted from a global 3-valued signal: reward (+1), no learning (0), or punishment (-1), corresponding to phasic increases, lack of change, or phasic decreases of dopaminergic cell firing, respectively. Successful learning only occurred when both reward and punishment were enabled. In this case, 5 target angles were learned successfully within 180 s of simulation time, with a median error of 8 degrees. Motor babbling allowed exploratory learning, but decreased the stability of the learned behavior, since the hand continued moving after reaching the target. Our model demonstrated that a global reinforcement signal, coupled with eligibility traces for synaptic plasticity, can train a spiking sensorimotor network to perform goal-directed motor behavior.
Reinforcement learning of targeted movement in a spiking neuronal model of motor cortex.
Chadderdon, George L; Neymotin, Samuel A; Kerr, Cliff C; Lytton, William W
2012-01-01
Sensorimotor control has traditionally been considered from a control theory perspective, without relation to neurobiology. In contrast, here we utilized a spiking-neuron model of motor cortex and trained it to perform a simple movement task, which consisted of rotating a single-joint "forearm" to a target. Learning was based on a reinforcement mechanism analogous to that of the dopamine system. This provided a global reward or punishment signal in response to decreasing or increasing distance from hand to target, respectively. Output was partially driven by Poisson motor babbling, creating stochastic movements that could then be shaped by learning. The virtual forearm consisted of a single segment rotated around an elbow joint, controlled by flexor and extensor muscles. The model consisted of 144 excitatory and 64 inhibitory event-based neurons, each with AMPA, NMDA, and GABA synapses. Proprioceptive cell input to this model encoded the 2 muscle lengths. Plasticity was only enabled in feedforward connections between input and output excitatory units, using spike-timing-dependent eligibility traces for synaptic credit or blame assignment. Learning resulted from a global 3-valued signal: reward (+1), no learning (0), or punishment (-1), corresponding to phasic increases, lack of change, or phasic decreases of dopaminergic cell firing, respectively. Successful learning only occurred when both reward and punishment were enabled. In this case, 5 target angles were learned successfully within 180 s of simulation time, with a median error of 8 degrees. Motor babbling allowed exploratory learning, but decreased the stability of the learned behavior, since the hand continued moving after reaching the target. Our model demonstrated that a global reinforcement signal, coupled with eligibility traces for synaptic plasticity, can train a spiking sensorimotor network to perform goal-directed motor behavior.
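The core update, a global three-valued reinforcement signal gating per-synapse eligibility traces, can be sketched in a few lines. The dictionary layout, learning rate, and decay factor below are illustrative assumptions rather than the model's actual parameters:

```python
def reinforce_step(weights, eligibility, reward, lr=0.1, decay=0.9):
    """One global reinforcement step: reward (+1), no learning (0), or punishment
    (-1) gates the per-synapse eligibility traces, which then decay."""
    for k in eligibility:
        weights[k] += lr * reward * eligibility[k]  # credit/blame eligible synapses
        eligibility[k] *= decay                     # traces fade over time
    return weights
```

Only synapses with a nonzero trace are changed, and a zero reward signal leaves all weights untouched, mirroring the phasic-increase/no-change/phasic-decrease interpretation of dopaminergic firing.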
Evoking prescribed spike times in stochastic neurons
Doose, Jens; Lindner, Benjamin
2017-09-01
Single cell stimulation in vivo is a powerful tool to investigate the properties of single neurons and their functionality in neural networks. We present a method to determine a cell-specific stimulus that reliably evokes a prescribed spike train with high temporal precision of action potentials. We test the performance of this stimulus in simulations for two different stochastic neuron models. For a broad range of parameters, and for a neuron firing at intermediate rates (20-40 Hz), the reliability of evoking the prescribed spike train is close to its theoretical maximum, which is mainly determined by the level of intrinsic noise.
Cao, Yongqiang; Grossberg, Stephen
2012-02-01
A laminar cortical model of stereopsis and 3D surface perception is developed and simulated. The model shows how spiking neurons that interact in hierarchically organized laminar circuits of the visual cortex can generate analog properties of 3D visual percepts. The model describes how monocular and binocular oriented filtering interact with later stages of 3D boundary formation and surface filling-in in the LGN and cortical areas V1, V2, and V4. It proposes how interactions between layers 4, 3B, and 2/3 in V1 and V2 contribute to stereopsis, and how binocular and monocular information combine to form 3D boundary and surface representations. The model suggests how surface-to-boundary feedback from V2 thin stripes to pale stripes helps to explain how computationally complementary boundary and surface formation properties lead to a single consistent percept, eliminate redundant 3D boundaries, and trigger figure-ground perception. The model also shows how false binocular boundary matches may be eliminated by Gestalt grouping properties. In particular, the disparity filter, which helps to solve the correspondence problem by eliminating false matches, is realized using inhibitory interneurons as part of the perceptual grouping process by horizontal connections in layer 2/3 of cortical area V2. The 3D sLAMINART model simulates 3D surface percepts that are consciously seen in 18 psychophysical experiments. These percepts include contrast variations of dichoptic masking and the correspondence problem, the effect of interocular contrast differences on stereoacuity, Panum's limiting case, the Venetian blind illusion, stereopsis with polarity-reversed stereograms, da Vinci stereopsis, and perceptual closure. The model hereby illustrates a general method of unlumping rate-based models that use the membrane equations of neurophysiology into models that use spiking neurons, and which may be embodied in VLSI chips that use spiking neurons to minimize heat production.
Directory of Open Access Journals (Sweden)
Wassim M. Haddad
2014-07-01
Full Text Available Advances in neuroscience have been closely linked to mathematical modeling beginning with the integrate-and-fire model of Lapicque and proceeding through the modeling of the action potential by Hodgkin and Huxley to the current era. The fundamental building block of the central nervous system, the neuron, may be thought of as a dynamic element that is “excitable”, and can generate a pulse or spike whenever the electrochemical potential across the cell membrane of the neuron exceeds a threshold. A key application of nonlinear dynamical systems theory to the neurosciences is to study phenomena of the central nervous system that exhibit nearly discontinuous transitions between macroscopic states. A very challenging and clinically important problem exhibiting this phenomenon is the induction of general anesthesia. In any specific patient, the transition from consciousness to unconsciousness as the concentration of anesthetic drugs increases is very sharp, resembling a thermodynamic phase transition. This paper focuses on multistability theory for continuous and discontinuous dynamical systems having a set of multiple isolated equilibria and/or a continuum of equilibria. Multistability is the property whereby the solutions of a dynamical system can alternate between two or more mutually exclusive Lyapunov stable and convergent equilibrium states under asymptotically slowly changing inputs or system parameters. In this paper, we extend the theory of multistability to continuous, discontinuous, and stochastic nonlinear dynamical systems. In particular, Lyapunov-based tests for multistability and synchronization of dynamical systems with continuously differentiable and absolutely continuous flows are established. The results are then applied to excitatory and inhibitory biological neuronal networks to explain the underlying mechanism of action for anesthesia and consciousness from a multistable dynamical system perspective, thereby providing a
The transfer function of neuron spike.
Palmieri, Igor; Monteiro, Luiz H A; Miranda, Maria D
2015-08-01
The mathematical modeling of neuronal signals is a relevant problem in neuroscience. The complexity of the neuron behavior, however, makes this problem a particularly difficult task. Here, we propose a discrete-time linear time-invariant (LTI) model with a rational function in order to represent the neuronal spike detected by an electrode located in the surroundings of the nerve cell. The model is presented as a cascade association of two subsystems: one that generates an action potential from an input stimulus, and one that represents the medium between the cell and the electrode. The suggested approach employs system identification and signal processing concepts, and is dissociated from any considerations about the biophysical processes of the neuronal cell, providing a low-complexity alternative to model the neuronal spike. The model is validated by using in vivo experimental readings of intracellular and extracellular signals. A computational simulation of the model is presented in order to assess its proximity to the neuronal signal and to observe the variability of the estimated parameters. The implications of the results are discussed in the context of spike sorting. Copyright © 2015 Elsevier Ltd. All rights reserved.
Paraskevov, A V; Zendrikov, D K
2017-03-23
We show that in model neuronal cultures, where the probability of interneuronal connection formation decreases exponentially with increasing distance between the neurons, there exists a small number of spatial nucleation centers of a network spike, from where the synchronous spiking activity starts propagating in the network, typically in the form of circular traveling waves. The number of nucleation centers and their spatial locations are unique and unchanged for a given realization of a neuronal network but differ between networks. In contrast, if the probability of interneuronal connection formation is independent of the distance between neurons, then the nucleation centers do not arise and the synchronization of spiking activity during a network spike occurs spatially uniformly throughout the network. Therefore, one can conclude that spatial proximity of connections between neurons is important for the formation of nucleation centers. It is also shown that fluctuations of the spatial density of neurons under the random homogeneous distribution typical of in vitro experiments do not determine the locations of the nucleation centers. The simulation results are qualitatively consistent with the experimental observations.
Modeling spiking behavior of neurons with time-dependent Poisson processes.
Shinomoto, S; Tsubo, Y
2001-10-01
Three kinds of interval statistics, as represented by the coefficient of variation, the skewness coefficient, and the correlation coefficient of consecutive intervals, are evaluated for three kinds of time-dependent Poisson processes: pulse regulated, sinusoidally regulated, and doubly stochastic. Among these three processes, the sinusoidally regulated and doubly stochastic Poisson processes, in the case when the spike rate varies slowly compared with the mean interval between spikes, are found to be consistent with the three statistical coefficients exhibited by data recorded from neurons in the prefrontal cortex of monkeys.
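These interval statistics can be estimated numerically for, e.g., a sinusoidally regulated Poisson process generated by per-step thinning; the rate parameters below are illustrative, not those of the recorded data:

```python
import math
import random
import statistics

def sinusoidal_poisson_spikes(t_max, r0=20.0, r1=10.0, f=1.0, dt=0.001, seed=2):
    """Spike times of a sinusoidally regulated Poisson process (per-step thinning)."""
    rng = random.Random(seed)
    spikes, t = [], 0.0
    while t < t_max:
        rate = r0 + r1 * math.sin(2 * math.pi * f * t)   # slowly varying rate (Hz)
        if rng.random() < rate * dt:                     # valid while rate*dt << 1
            spikes.append(t)
        t += dt
    return spikes

def isi_stats(spikes):
    """Coefficient of variation, skewness, and lag-1 serial correlation of ISIs."""
    isi = [b - a for a, b in zip(spikes, spikes[1:])]
    mu, sd = statistics.mean(isi), statistics.pstdev(isi)
    cv = sd / mu
    skew = statistics.mean([((x - mu) / sd) ** 3 for x in isi])
    x, y = isi[:-1], isi[1:]
    mx, my = statistics.mean(x), statistics.mean(y)
    cov = statistics.mean([(a - mx) * (b - my) for a, b in zip(x, y)])
    rho = cov / (statistics.pstdev(x) * statistics.pstdev(y))
    return cv, skew, rho
```

For slow rate modulation the coefficient of variation stays near (slightly above) the Poisson value of 1, with positively skewed intervals, which is the regime the paper finds consistent with the prefrontal data.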
Gerhard, Felipe; Deger, Moritz; Truccolo, Wilson
2017-02-01
Point process generalized linear models (PP-GLMs) provide an important statistical framework for modeling spiking activity in single neurons and neuronal networks. Stochastic stability is essential when sampling from these models, as done in computational neuroscience to analyze statistical properties of neuronal dynamics and in neuro-engineering to implement closed-loop applications. Here we show, however, that despite passing common goodness-of-fit tests, PP-GLMs estimated from data are often unstable, leading to divergent firing rates. The inclusion of absolute refractory periods is not a satisfactory solution since the activity then typically settles into unphysiological rates. To address these issues, we derive a framework for determining the existence and stability of fixed points of the expected conditional intensity function (CIF) for general PP-GLMs. Specifically, in nonlinear Hawkes PP-GLMs, the CIF is expressed as a function of the previous spike history and exogenous inputs. We use a mean-field quasi-renewal (QR) approximation that decomposes spike history effects into the contribution of the last spike and an average of the CIF over all spike histories prior to the last spike. Fixed points for stationary rates are derived as self-consistent solutions of integral equations. Bifurcation analysis and the number of fixed points predict that the original models can show stable, divergent, and metastable (fragile) dynamics. For fragile models, fluctuations of the single-neuron dynamics predict expected divergence times after which rates approach unphysiologically high values. This metric can be used to estimate the probability of rates to remain physiological for given time periods, e.g., for simulation purposes. We demonstrate the use of the stability framework using simulated single-neuron examples and neurophysiological recordings. Finally, we show how to adapt PP-GLM estimation procedures to guarantee model stability. Overall, our results provide a
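The divergence phenomenon is easy to reproduce in a toy nonlinear Hawkes model with an exponential inverse link and a single exponential history filter; all constants here are illustrative assumptions, not a fitted PP-GLM:

```python
import math
import random

def simulate_hawkes_glm(weight, b=1.0, tau=0.1, dt=0.001, t_max=5.0,
                        seed=4, rate_cap=1e4):
    """Discrete-time nonlinear (exponential-link) Hawkes neuron. Each spike
    increments a history filter h; positive self-excitation can make the
    conditional intensity run away, which we flag once it exceeds rate_cap."""
    rng = random.Random(seed)
    h, n = 0.0, 0
    for _ in range(int(t_max / dt)):
        rate = math.exp(b + weight * h)   # conditional intensity (spikes/s)
        if rate > rate_cap:
            return n, True                # diverged: unphysiological rate
        if rng.random() < rate * dt:
            n += 1
            h += 1.0                      # spike feeds back into the history
        h *= math.exp(-dt / tau)          # exponential decay of spike history
    return n, False
```

With zero self-excitation the rate stays at its baseline, while strong positive feedback drives it past any physiological bound within a few spikes, the runaway the stability framework is designed to detect.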
International Nuclear Information System (INIS)
Hasegawa, Hideo
2003-01-01
A dynamical mean-field approximation (DMA) previously proposed by the present author [H. Hasegawa, Phys. Rev. E 67, 041903 (2003)] has been extended to ensembles described by a general noisy spiking neuron model. Ensembles of N-unit neurons, each of which is expressed by coupled K-dimensional differential equations (DEs), are assumed to be subject to spatially correlated white noises. The original KN-dimensional stochastic DEs have been replaced by K(K+2)-dimensional deterministic DEs expressed in terms of means and the second-order moments of local and global variables: the fourth-order contributions are taken into account by the Gaussian decoupling approximation. Our DMA has been applied to an ensemble of Hodgkin-Huxley (HH) neurons (K=4), for which effects of the noise, the coupling strength, and the ensemble size on the response to a single-spike input have been investigated. Numerical results calculated by the DMA theory are in good agreement with those obtained by direct simulations, although the former computation is about a thousand times faster than the latter for a typical HH neuron ensemble with N=100.
Spiking neuron devices consisting of single-flux-quantum circuits
International Nuclear Information System (INIS)
Hirose, Tetsuya; Asai, Tetsuya; Amemiya, Yoshihito
2006-01-01
Single-flux-quantum (SFQ) circuits can be used for making spiking neuron devices, which are useful elements for constructing intelligent, brain-like computers. The device we propose is based on the leaky integrate-and-fire neuron (IFN) model and uses an SFQ pulse as an action signal or a spike of neurons. The operation of the neuron device is confirmed by computer simulation. It can operate with a short delay of 100 ps or less and is the highest-speed neuron device ever reported.
Implementing Signature Neural Networks with Spiking Neurons.
Carrillo-Medina, José Luis; Latorre, Roberto
2016-01-01
Spiking Neural Networks constitute the most promising approach to develop realistic Artificial Neural Networks (ANNs). Unlike traditional firing rate-based paradigms, information coding in spiking models is based on the precise timing of individual spikes. It has been demonstrated that spiking ANNs can be successfully and efficiently applied to multiple realistic problems solvable with traditional strategies (e.g., data classification or pattern recognition). In recent years, major breakthroughs in neuroscience research have discovered new relevant computational principles in different living neural systems. Could ANNs benefit from some of these recent findings providing novel elements of inspiration? This is an intriguing question for the research community and the development of spiking ANNs including novel bio-inspired information coding and processing strategies is gaining attention. From this perspective, in this work, we adapt the core concepts of the recently proposed Signature Neural Network paradigm, i.e., neural signatures to identify each unit in the network, local information contextualization during the processing, and multicoding strategies for information propagation regarding the origin and the content of the data, to be employed in a spiking neural network. To the best of our knowledge, none of these mechanisms have been used yet in the context of ANNs of spiking neurons. This paper provides a proof-of-concept for their applicability in such networks. Computer simulations show that a simple network model like the one discussed here exhibits complex self-organizing properties. The combination of multiple simultaneous encoding schemes allows the network to generate coexisting spatio-temporal patterns of activity encoding information in different spatio-temporal spaces. As a function of the network and/or intra-unit parameters shaping the corresponding encoding modality, different forms of competition among the evoked patterns can emerge even in the absence
Spiking Activity of a LIF Neuron in Distributed Delay Framework
Directory of Open Access Journals (Sweden)
Saket Kumar Choudhary
2016-06-01
Full Text Available Evolution of membrane potential and spiking activity for a single leaky integrate-and-fire (LIF) neuron in the distributed delay framework (DDF) is investigated. DDF provides a mechanism to incorporate a memory element, in terms of a delay (kernel) function, into single-neuron models. This investigation includes the LIF neuron model with two different kinds of delay kernel functions, namely, the gamma distributed delay kernel function and the hypo-exponential distributed delay kernel function. Evolution of membrane potential for the considered models is studied in terms of the stationary state probability distribution (SPD). The stationary state probability distribution of membrane potential (SPDV) for the considered neuron models is found to be asymptotically similar, namely Gaussian distributed. In order to investigate the effect of membrane potential delay, the rate code scheme for neuronal information processing is applied. The firing rate and Fano factor for the considered neuron models are calculated, and the standard LIF model is used for comparative study. It is noticed that distributed delay increases the spiking activity of a neuron. The increase in spiking activity in DDF is larger for the hypo-exponential distributed delay function than for the gamma distributed delay function. Moreover, in the case of the hypo-exponential delay function, a LIF neuron generates spikes with a Fano factor less than 1.
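The Fano factor used in such comparisons is the variance-to-mean ratio of spike counts across trials; a sketch against a memoryless (binomial, Poisson-like) reference spiker, with illustrative parameters, is:

```python
import random
import statistics

def fano_factor(counts):
    """Fano factor: variance-to-mean ratio of spike counts across trials."""
    return statistics.pvariance(counts) / statistics.mean(counts)

def bernoulli_spike_counts(n_trials, n_bins=1000, p=0.02, seed=3):
    """Per-trial spike counts of a memoryless spiker (one Bernoulli draw per bin);
    for small p the Fano factor is close to 1, the Poisson reference value."""
    rng = random.Random(seed)
    return [sum(rng.random() < p for _ in range(n_bins)) for _ in range(n_trials)]
```

A Fano factor near 1 marks Poisson-like variability; values below 1, as reported for the hypo-exponential delay kernel, indicate more regular spiking than a Poisson process.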
Liu, Jianbo; Khalil, Hassan K.; Oweiss, Karim G.
2011-08-01
Controlling the spatiotemporal firing pattern of an intricately connected network of neurons through microstimulation is highly desirable in many applications. We investigated in this paper the feasibility of using a model-based approach to the analysis and control of a basal ganglia (BG) network model of Hodgkin-Huxley (HH) spiking neurons through microstimulation. Detailed analysis of this network model suggests that it can reproduce the experimentally observed characteristics of BG neurons under a normal and a pathological Parkinsonian state. A simplified neuronal firing rate model, identified from the detailed HH network model, is shown to capture the essential network dynamics. Mathematical analysis of the simplified model reveals the presence of a systematic relationship between the network's structure and its dynamic response to spatiotemporally patterned microstimulation. We show that both the network synaptic organization and the local mechanism of microstimulation can impose tight constraints on the possible spatiotemporal firing patterns that can be generated by the microstimulated network, which may hinder the effectiveness of microstimulation to achieve a desired objective under certain conditions. Finally, we demonstrate that the feedback control design aided by the mathematical analysis of the simplified model is indeed effective in driving the BG network in the normal and Parkinsonian states to follow a prescribed spatiotemporal firing pattern. We further show that the rhythmic/oscillatory patterns that characterize a dopamine-depleted BG network can be suppressed as a direct consequence of controlling the spatiotemporal pattern of a subpopulation of the output Globus Pallidus internalis (GPi) neurons in the network. This work may provide plausible explanations for the mechanisms underlying the therapeutic effects of deep brain stimulation (DBS) in Parkinson's disease and pave the way towards a model-based, network-level analysis and closed
Directory of Open Access Journals (Sweden)
K. Usha
2016-09-01
Full Text Available This paper evaluates the change in metabolic energy required to maintain the signalling activity of neurons in the presence of an external electric field. We have analysed the Hodgkin–Huxley-type conductance-based fast-spiking neuron model as an electrical circuit while changing the frequency and amplitude of the applied electric field. The study has shown that the presence of an electric field increases the membrane potential, the electrical energy supply and the metabolic energy consumption. As the amplitude of the applied electric field increases at constant frequency, the membrane potential increases, and consequently the electrical energy supply and metabolic energy consumption increase. On increasing the frequency of the applied field, the peak value of the membrane potential after depolarization gradually decreases; as a result, the electrical energy supply decreases, which leads to a lower rate of hydrolysis of ATP molecules.
Constructing Precisely Computing Networks with Biophysical Spiking Neurons.
Schwemmer, Michael A; Fairhall, Adrienne L; Denéve, Sophie; Shea-Brown, Eric T
2015-07-15
While spike timing has been shown to carry detailed stimulus information at the sensory periphery, its possible role in network computation is less clear. Most models of computation by neural networks are based on population firing rates. In equivalent spiking implementations, firing is assumed to be random such that averaging across populations of neurons recovers the rate-based approach. Recently, however, Denéve and colleagues have suggested that the spiking behavior of neurons may be fundamental to how neuronal networks compute, with precise spike timing determined by each neuron's contribution to producing the desired output (Boerlin and Denéve, 2011; Boerlin et al., 2013). By postulating that each neuron fires to reduce the error in the network's output, it was demonstrated that linear computations can be performed by networks of integrate-and-fire neurons that communicate through instantaneous synapses. This left open, however, the possibility that realistic networks, with conductance-based neurons with subthreshold nonlinearity and the slower timescales of biophysical synapses, may not fit into this framework. Here, we show how the spike-based approach can be extended to biophysically plausible networks. We then show that our network reproduces a number of key features of cortical networks including irregular and Poisson-like spike times and a tight balance between excitation and inhibition. Lastly, we discuss how the behavior of our model scales with network size or with the number of neurons "recorded" from a larger computing network. These results significantly increase the biological plausibility of the spike-based approach to network computation. We derive a network of neurons with standard spike-generating currents and synapses with realistic timescales that computes based upon the principle that the precise timing of each spike is important for the computation. We then show that our network reproduces a number of key features of cortical networks
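The core principle above (a neuron fires exactly when its spike would reduce the readout error) can be illustrated in one dimension. The following is a toy sketch of the Boerlin-Denève idea with an instantaneous synapse, not the biophysically detailed extension developed in the paper; the leaky readout, the kick size, and the test signal are all illustrative assumptions.

```python
import numpy as np

def spike_coding_1d(signal, dt, tau=0.1, kick=0.1):
    """Toy spike-based coder: a leaky readout x_hat tracks the signal,
    and the neuron fires exactly when a spike (a jump of size `kick`)
    would bring the readout closer to the signal."""
    x_hat, readout, spike_times = 0.0, [], []
    for i, x in enumerate(signal):
        x_hat -= dt * x_hat / tau            # readout decays between spikes
        if abs(x - (x_hat + kick)) < abs(x - x_hat):
            x_hat += kick                    # spike only if it reduces error
            spike_times.append(i * dt)
        readout.append(x_hat)
    return np.array(readout), spike_times

dt = 1e-3
ts = np.arange(0, 2, dt)
sig = 1.0 + 0.5 * np.sin(2 * np.pi * ts)
readout, spikes = spike_coding_1d(sig, dt)
print(np.abs(sig - readout).mean(), len(spikes))
```

The readout stays within about half a kick of the signal, so the tracking error is bounded by the spike "quantum" rather than by noise, which is the essence of the precise-timing argument.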
Neuronal coding and spiking randomness
Czech Academy of Sciences Publication Activity Database
Košťál, Lubomír; Lánský, Petr; Rospars, J. P.
2007-01-01
Vol. 26, No. 10 (2007), pp. 2693-2988 ISSN 0953-816X R&D Projects: GA MŠk(CZ) LC554; GA AV ČR(CZ) 1ET400110401; GA AV ČR(CZ) KJB100110701 Grant - others:ECO-NET(FR) 112644PF Institutional research plan: CEZ:AV0Z50110509 Keywords: spike train * variability * neuroscience Subject RIV: FH - Neurology Impact factor: 3.673, year: 2007
Directory of Open Access Journals (Sweden)
Jörg Encke
2018-03-01
Full Text Available The mammalian auditory system is able to extract temporal and spectral features from sound signals at the two ears. One important cue for the localization of low-frequency sound sources in the horizontal plane is the inter-aural time difference (ITD), which is first analyzed in the medial superior olive (MSO) in the brainstem. Neural recordings of ITD tuning curves at various stages along the auditory pathway suggest that ITDs in the mammalian brainstem are not represented in the form of a Jeffress-type place code. An alternative is the hemispheric opponent-channel code, according to which ITDs are encoded as the difference between the responses of the MSO nuclei in the two hemispheres. In this study, we present a physiologically plausible, spiking neuron network model of the mammalian MSO circuit and apply two different methods of extracting ITDs from arbitrary sound signals. The network model is driven by a functional model of the auditory periphery and physiological models of the cochlear nucleus and the MSO. Using a linear opponent-channel decoder, we show that the network is able to detect changes in ITD with a precision down to 10 μs and that the sensitivity of the decoder depends on the slope of the ITD-rate functions. A second approach uses an artificial neuronal network to predict ITDs directly from the spiking output of the MSO and auditory nerve fiber (ANF) models. Using this predictor, we show that the MSO network is able to reliably encode static and time-dependent ITDs over a large frequency range, also for complex signals like speech.
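The opponent-channel decoding step reduces to inverting the difference between the two hemispheric rates. A minimal sketch, assuming sigmoidal ITD-rate functions (the actual model drives the decoder from spiking MSO output; the slope, rate, and grid values here are illustrative):

```python
import numpy as np

def mso_rate(itd_us, side, slope=0.01, r_max=100.0):
    """Assumed sigmoidal ITD-rate function of one hemisphere's MSO."""
    sign = 1.0 if side == "left" else -1.0
    return r_max / (1.0 + np.exp(-sign * slope * itd_us))

def decode_itd(r_left, r_right, slope=0.01, r_max=100.0):
    """Opponent-channel decoder: invert the (monotonic) rate difference."""
    grid = np.linspace(-500.0, 500.0, 10001)    # candidate ITDs, microseconds
    diff = (mso_rate(grid, "left", slope, r_max)
            - mso_rate(grid, "right", slope, r_max))
    return grid[np.argmin(np.abs(diff - (r_left - r_right)))]

true_itd = 120.0  # microseconds
est = decode_itd(mso_rate(true_itd, "left"), mso_rate(true_itd, "right"))
print(est)
```

Because the rate difference is monotonic in ITD, inversion is unambiguous, and the steeper the ITD-rate slope the larger the rate change per microsecond, which is the sensitivity dependence noted in the abstract.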
Communication through resonance in spiking neuronal networks.
Hahn, Gerald; Bujan, Alejandro F; Frégnac, Yves; Aertsen, Ad; Kumar, Arvind
2014-08-01
The cortex processes stimuli through a distributed network of specialized brain areas. This processing requires mechanisms that can route neuronal activity across weakly connected cortical regions. Routing models proposed thus far are either limited to propagation of spiking activity across strongly connected networks or require distinct mechanisms that create local oscillations and establish their coherence between distant cortical areas. Here, we propose a novel mechanism which explains how synchronous spiking activity propagates across weakly connected brain areas supported by oscillations. In our model, oscillatory activity unleashes network resonance that amplifies feeble synchronous signals and promotes their propagation along weak connections ("communication through resonance"). The emergence of coherent oscillations is a natural consequence of synchronous activity propagation and therefore the assumption of different mechanisms that create oscillations and provide coherence is not necessary. Moreover, the phase-locking of oscillations is a side effect of communication rather than its requirement. Finally, we show how the state of ongoing activity could affect the communication through resonance and propose that modulations of the ongoing activity state could influence information processing in distributed cortical networks.
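The amplification step can be caricatured with a damped harmonic oscillator standing in for a network resonance: the same weak drive evokes a much larger response at the resonant frequency than off resonance. This is only an illustrative linear analogue, not the spiking model of the paper; all parameters are assumptions.

```python
import numpy as np

def resonance_peak(f_drive, f0=10.0, zeta=0.1, dt=1e-4, t_end=2.0):
    """Steady-state peak response of a damped oscillator (a linear
    stand-in for network resonance) to a unit sinusoidal drive."""
    w0 = 2.0 * np.pi * f0
    x, v, peak = 0.0, 0.0, 0.0
    for i in range(int(t_end / dt)):
        t = i * dt
        a = np.sin(2.0 * np.pi * f_drive * t) - 2.0 * zeta * w0 * v - w0 * w0 * x
        v += dt * a
        x += dt * v            # semi-implicit Euler keeps the oscillator stable
        if t > 1.0:            # skip the transient, measure steady state
            peak = max(peak, abs(x))
    return peak

on = resonance_peak(10.0)   # drive at the resonant frequency
off = resonance_peak(3.0)   # same weak drive, off resonance
print(on / off)
```

In the paper's mechanism the oscillatory network state plays the role of the resonator, so a feeble synchronous volley arriving at the right frequency is amplified enough to propagate across a weak connection.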
Bistability induces episodic spike communication by inhibitory neurons in neuronal networks.
Kazantsev, V B; Asatryan, S Yu
2011-09-01
Bistability is one of the important features of nonlinear dynamical systems. In neurodynamics, bistability has been found in basic Hodgkin-Huxley equations describing the cell membrane dynamics. When the neuron is clamped near its threshold, the stable rest potential may coexist with the stable limit cycle describing periodic spiking. However, this effect is often neglected in network computations where the neurons are typically reduced to threshold firing units (e.g., integrate-and-fire models). We found that the bistability may induce spike communication by inhibitory coupled neurons in the spiking network. The communication is realized in the form of episodic discharges with synchronous (correlated) spikes during the episodes. A spiking phase map is constructed to describe the synchronization and to estimate basic spike phase locking modes.
Integrated workflows for spiking neuronal network simulations
Directory of Open Access Journals (Sweden)
Ján eAntolík
2013-12-01
Full Text Available The increasing availability of computational resources is enabling more detailed, realistic modelling in computational neuroscience, resulting in a shift towards more heterogeneous models of neuronal circuits, and employment of complex experimental protocols. This poses a challenge for existing tool chains, as the set of tools involved in a typical modeller's workflow is expanding concomitantly, with growing complexity in the metadata flowing between them. For many parts of the workflow, a range of tools is available; however, numerous areas lack dedicated tools, while integration of existing tools is limited. This forces modellers to either handle the workflow manually, leading to errors, or to write substantial amounts of code to automate parts of the workflow, in both cases reducing their productivity. To address these issues, we have developed Mozaik: a workflow system for spiking neuronal network simulations written in Python. Mozaik integrates model, experiment and stimulation specification, simulation execution, data storage, data analysis and visualisation into a single automated workflow, ensuring that all relevant metadata are available to all workflow components. It is based on several existing tools, including PyNN, Neo and Matplotlib. It offers a declarative way to specify models and recording configurations using hierarchically organised configuration files. Mozaik automatically records all data together with all relevant metadata about the experimental context, allowing automation of the analysis and visualisation stages. Mozaik has a modular architecture, and the existing modules are designed to be extensible with minimal programming effort. Mozaik increases the productivity of running virtual experiments on highly structured neuronal networks by automating the entire experimental cycle, while increasing the reliability of modelling studies by relieving the user from manual handling of the flow of metadata between the individual
Spike Code Flow in Cultured Neuronal Networks.
Tamura, Shinichi; Nishitani, Yoshi; Hosokawa, Chie; Miyoshi, Tomomitsu; Sawai, Hajime; Kamimura, Takuya; Yagi, Yasushi; Mizuno-Matsumoto, Yuko; Chen, Yen-Wei
2016-01-01
We observed spike trains produced by one-shot electrical stimulation with 8 × 8 multielectrodes in cultured neuronal networks. Each electrode accepted spikes from several neurons. We extracted short codes from the spike trains and obtained a code spectrum with a nominal time accuracy of 1%. We then constructed code flow maps as movies of the electrode array to observe the flow of the codes "1101" and "1011," which are typical pseudorandom sequences such as those we often encountered in the literature and in our experiments. They seemed to flow from one electrode to a neighboring one and maintained their shape to some extent. To quantify the flow, we calculated the "maximum cross-correlations" among neighboring electrodes to find the direction of maximum flow of the codes with lengths less than 8. Normalized maximum cross-correlations were almost constant irrespective of the code. Furthermore, if the spike trains were shuffled in interval order or across electrodes, the correlations became significantly smaller. Thus, the analysis suggested that local codes of approximately constant shape propagated and conveyed information across the network. Hence, the codes can serve as visible and trackable marks of propagating spike waves as well as a means of evaluating information flow in the neuronal network.
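The "maximum cross-correlation" analysis can be sketched as follows: compute the normalised cross-correlation between two electrodes' binned spike trains over a range of lags and take the maximum. A propagated copy of a code scores high, while a shuffled control scores low. The surrogate data, bin counts, and lag range below are illustrative assumptions, not the paper's recordings.

```python
import numpy as np

def max_norm_xcorr(a, b, max_lag=20):
    """Maximum normalised cross-correlation between two binned spike
    trains over integer lags in [-max_lag, max_lag]."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a**2).sum() * (b**2).sum())
    best = -1.0
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            c = np.dot(a[lag:], b[:len(b) - lag])
        else:
            c = np.dot(a[:lag], b[-lag:])
        best = max(best, c / denom)
    return best

rng = np.random.default_rng(0)
src = (rng.random(1000) < 0.1).astype(float)   # source electrode, 10% spike bins
propagated = np.roll(src, 5)                    # same code, 5-bin conduction delay
shuffled = rng.permutation(src)                 # shuffled control train
print(max_norm_xcorr(src, propagated), max_norm_xcorr(src, shuffled))
```

The lag at which the maximum occurs also gives the direction and delay of the flow, which is how the flow maps between neighboring electrodes can be oriented.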
Noise-enhanced coding in phasic neuron spike trains.
Ly, Cheng; Doiron, Brent
2017-01-01
The stochastic nature of neuronal response has led to conjectures about the impact of input fluctuations on neural coding. For the most part, low-pass membrane integration and spike threshold dynamics have been the primary features assumed in the transfer from synaptic input to output spiking. Phasic neurons are a common, but understudied, neuron class characterized by a subthreshold negative feedback that suppresses spike train responses to low-frequency signals. Past work has shown that when a low-frequency signal is accompanied by moderate-intensity broadband noise, phasic neuron spike trains are well locked to the signal. We extend these results with a simple, reduced model of phasic activity demonstrating that the non-Markovian spike train structure caused by the negative feedback produces noise-enhanced coding. Further, this enhancement is sensitive to the timescales, as opposed to the intensity, of the driving signal. Reduced hazard-function models show that noise-enhanced phasic codes are both novel and distinct from the classical stochastic resonance reported in non-phasic neurons. The general features of our theory suggest that noise-enhanced codes in excitable systems with subthreshold negative feedback are a particularly rich framework to study.
Wang, Yangyang; Rubin, Jonathan E
2017-12-01
Neural networks generate a variety of rhythmic activity patterns, often involving different timescales. One example arises in the respiratory network in the pre-Bötzinger complex of the mammalian brainstem, which can generate the eupneic rhythm associated with normal respiration as well as recurrent low-frequency, large-amplitude bursts associated with sighing. Two competing hypotheses have been proposed to explain sigh generation: the recruitment of a neuronal population distinct from the eupneic rhythm-generating subpopulation or the reconfiguration of activity within a single population. Here, we consider two recent computational models, one of which represents each of the hypotheses. We use methods of dynamical systems theory, such as fast-slow decomposition, averaging, and bifurcation analysis, to understand the multiple-timescale mechanisms underlying sigh generation in each model. In the course of our analysis, we discover that a third timescale is required to generate sighs in both models. Furthermore, we identify the similarities of the underlying mechanisms in the two models and the aspects in which they differ.
Stochastic optimal control of single neuron spike trains
DEFF Research Database (Denmark)
Iolov, Alexandre; Ditlevsen, Susanne; Longtin, André
2014-01-01
stimulation of a neuron to achieve a target spike train under the physiological constraint to not damage tissue. Approach. We pose a stochastic optimal control problem to precisely specify the spike times in a leaky integrate-and-fire (LIF) model of a neuron with noise assumed to be of intrinsic or synaptic...... origin. In particular, we allow for the noise to be of arbitrary intensity. The optimal control problem is solved using dynamic programming when the controller has access to the voltage (closed-loop control), and using a maximum principle for the transition density when the controller only has access...... to the spike times (open-loop control). Main results. We have developed a stochastic optimal control algorithm to obtain precise spike times. It is applicable in both the supra-threshold and sub-threshold regimes, under open-loop and closed-loop conditions and with an arbitrary noise intensity; the accuracy...
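A deterministic caricature of the closed-loop case can convey the idea without the dynamic-programming or maximum-principle machinery of the paper: the controller observes the voltage and injects a current that tracks a reference ramp crossing threshold at each target spike time. The gain, overshoot, and LIF parameters below are illustrative assumptions, and the noise is omitted entirely.

```python
def track_spike_times(targets, tau=0.02, v_th=1.0, dt=1e-4,
                      t_end=0.5, gain=500.0, overshoot=1.05):
    """Closed-loop sketch: drive a noiseless LIF voltage along a ramp
    that reaches (slightly above) threshold at each target spike time."""
    v, t_prev, k, spikes = 0.0, 0.0, 0, []
    for i in range(int(t_end / dt)):
        t = i * dt
        if k < len(targets):
            frac = (t - t_prev) / (targets[k] - t_prev)
            v_ref = overshoot * v_th * min(1.0, max(0.0, frac))
        else:
            v_ref = 0.0                      # no more targets: hold at rest
        # feedback current cancels the leak and tracks the reference ramp
        current = v / tau + gain * (v_ref - v)
        v += dt * (-v / tau + current)
        if v >= v_th:
            spikes.append(t)
            v, t_prev, k = 0.0, t, k + 1
    return spikes

print(track_spike_times([0.1, 0.25, 0.4]))
```

With voltage access the control problem is easy, as here; the hard cases treated in the paper are intrinsic noise of arbitrary intensity and the open-loop setting, where only past spike times are observable.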
Inherently stochastic spiking neurons for probabilistic neural computation
Al-Shedivat, Maruan
2015-04-01
Neuromorphic engineering aims to design hardware that efficiently mimics neural circuitry and provides the means for emulating and studying neural systems. In this paper, we propose a new memristor-based neuron circuit that uniquely complements the scope of neuron implementations and follows the stochastic spike response model (SRM), which plays a cornerstone role in spike-based probabilistic algorithms. We demonstrate that the switching of the memristor is akin to the stochastic firing of the SRM. Our analysis and simulations show that the proposed neuron circuit satisfies a neural computability condition that enables probabilistic neural sampling and spike-based Bayesian learning and inference. Our findings constitute an important step towards memristive, scalable and efficient stochastic neuromorphic platforms. © 2015 IEEE.
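The stochastic SRM firing that the memristor switching is likened to can be sketched in software with an exponential escape rate: instead of a hard threshold, the instantaneous firing hazard grows with membrane potential. This is a generic SRM caricature with assumed parameters, not a model of the proposed memristor circuit.

```python
import numpy as np

def srm_firing_rate(i_const, t_end=50.0, dt=1e-3, tau=0.02,
                    rho0=5.0, delta=0.1, v_th=1.0, seed=0):
    """Stochastic spike response model with an exponential escape rate:
    firing is probabilistic, with hazard rho0*exp((v - v_th)/delta)."""
    rng = np.random.default_rng(seed)
    v, count = 0.0, 0
    for _ in range(int(t_end / dt)):
        v += dt * (-v + i_const) / tau
        hazard = rho0 * np.exp((v - v_th) / delta)
        if rng.random() < 1.0 - np.exp(-hazard * dt):
            count += 1
            v = 0.0                      # reset kernel after a spike
    return count / t_end

low, high = srm_firing_rate(0.8), srm_firing_rate(1.2)
print(low, high)
```

Even a nominally subthreshold input (0.8) produces occasional spikes, while a suprathreshold input fires much faster; this graded, probabilistic input-output relation is what makes the SRM suitable for spike-based Bayesian sampling.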
Cannon, Jonathan
2017-01-01
Mutual information is a commonly used measure of communication between neurons, but little theory exists describing the relationship between mutual information and the parameters of the underlying neuronal interaction. Such a theory could help us understand how specific physiological changes affect the capacity of neurons to synaptically communicate, and, in particular, it could help us characterize the mechanisms by which neuronal dynamics gate the flow of information in the brain. Here we study a pair of linear-nonlinear-Poisson neurons coupled by a weak synapse. We derive an analytical expression describing the mutual information between their spike trains in terms of synapse strength, neuronal activation function, the time course of postsynaptic currents, and the time course of the background input received by the two neurons. This expression allows mutual information calculations that would otherwise be computationally intractable. We use this expression to analytically explore the interaction of excitation, information transmission, and the convexity of the activation function. Then, using this expression to quantify mutual information in simulations, we illustrate the information-gating effects of neural oscillations and oscillatory coherence, which may either increase or decrease the mutual information across the synapse depending on parameters. Finally, we show analytically that our results can quantitatively describe the selection of one information pathway over another when multiple sending neurons project weakly to a single receiving neuron.
Spikes matter for phase-locked bursting in inhibitory neurons
Jalil, Sajiya; Belykh, Igor; Shilnikov, Andrey
2012-03-01
We show that inhibitory networks composed of two endogenously bursting neurons can robustly display several coexistent phase-locked states in addition to stable antiphase and in-phase bursting. This work complements and enhances our recent result [Jalil, Belykh, and Shilnikov, Phys. Rev. E 81, 045201(R) (2010)] that fast reciprocal inhibition can synchronize bursting neurons due to spike interactions. We reveal the role of spikes in generating multiple phase-locked states and demonstrate that this multistability is generic by analyzing diverse models of bursting networks with various fast inhibitory synapses; the individual cell models include the reduced leech heart interneuron, the Sherman model for pancreatic beta cells, and the Purkinje neuron model.
A memristive spiking neuron with firing rate coding
Directory of Open Access Journals (Sweden)
Marina eIgnatov
2015-10-01
Full Text Available Perception, decisions, and sensations are all encoded into trains of action potentials in the brain. The relation between stimulus strength and the all-or-nothing spiking of neurons is widely believed to be the basis of this coding. This initiated the development of spiking neuron models, one of today's most powerful conceptual tools for the analysis and emulation of neural dynamics. The success of electronic circuit models and their physical realization within silicon field-effect transistor circuits led to elegant technical approaches. Recently, the spectrum of electronic devices for neural computing has been extended by memristive devices, mainly used to emulate static synaptic functionality. Their capability to emulate neural activity was recently demonstrated using a memristive neuristor circuit, while a memristive neuron circuit has so far been elusive. Here, a spiking neuron model is experimentally realized in a compact circuit comprising memristive and memcapacitive devices based on the strongly correlated electron material vanadium dioxide (VO2) and on the chemical electromigration cell Ag/TiO2-x/Al. The circuit can emulate dynamical spiking patterns in response to an external stimulus, including adaptation, which is at the heart of firing rate coding as first observed by E.D. Adrian in 1926.
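The Adrian-style spike-frequency adaptation that the circuit emulates can be sketched in software with an adaptive LIF neuron: each spike increments a slow adaptation current, so interspike intervals lengthen under constant stimulation. This is a generic software caricature with assumed parameters, not a model of the VO2/Ag-cell device.

```python
def adaptive_lif(i_in=2.0, tau=0.02, tau_w=0.2, b=0.1, v_th=1.0,
                 dt=1e-4, t_end=1.0):
    """LIF neuron with a slow adaptation current w: every spike adds b
    to w, which decays with time constant tau_w and opposes the input."""
    v, w, spikes = 0.0, 0.0, []
    for i in range(int(t_end / dt)):
        v += dt * (-v + i_in - w) / tau
        w -= dt * w / tau_w
        if v >= v_th:
            spikes.append(i * dt)
            v = 0.0
            w += b
    return spikes

spikes = adaptive_lif()
isis = [t2 - t1 for t1, t2 in zip(spikes, spikes[1:])]
print(isis[0], isis[-1])   # interspike intervals lengthen: adaptation
</```

Under a constant input the firing rate settles where the per-spike increment of w balances its decay between spikes, so steady-state rate encodes stimulus strength while the initial transient encodes stimulus onset.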
Directory of Open Access Journals (Sweden)
Praveen K Pilly
Full Text Available Medial entorhinal grid cells and hippocampal place cells provide neural correlates of spatial representation in the brain. A place cell typically fires whenever an animal is present in one or more spatial regions, or places, of an environment. A grid cell typically fires in multiple spatial regions that form a regular hexagonal grid structure extending throughout the environment. Different grid and place cells prefer spatially offset regions, with their firing fields increasing in size along the dorsoventral axes of the medial entorhinal cortex and hippocampus. The spacing between neighboring fields for a grid cell also increases along the dorsoventral axis. This article presents a neural model whose spiking neurons operate in a hierarchy of self-organizing maps, each obeying the same laws. This spiking GridPlaceMap model simulates how grid cells and place cells may develop. It responds to realistic rat navigational trajectories by learning grid cells with hexagonal grid firing fields of multiple spatial scales and place cells with one or more firing fields that match neurophysiological data about these cells and their development in juvenile rats. The place cells represent much larger spaces than the grid cells, which enables them to support navigational behaviors. Both self-organizing maps amplify and learn to categorize the most frequent and energetic co-occurrences of their inputs. The current results build upon a previous rate-based model of grid and place cell learning, and thus illustrate a general method for converting rate-based adaptive neural models, without the loss of any of their analog properties, into models whose cells obey spiking dynamics. New properties of the spiking GridPlaceMap model include the appearance of theta band modulation. The spiking model also opens a path for implementation in brain-emulating nanochips composed of networks of noisy spiking neurons with multiple-level adaptive weights for controlling autonomous
Emergent properties of interacting populations of spiking neurons.
Cardanobile, Stefano; Rotter, Stefan
2011-01-01
Dynamic neuronal networks are a key paradigm of increasing importance in brain research, concerned with the functional analysis of biological neuronal networks and, at the same time, with the synthesis of artificial brain-like systems. In this context, neuronal network models serve as mathematical tools to understand the function of brains, but they might as well develop into future tools for enhancing certain functions of our nervous system. Here, we present and discuss our recent achievements in developing multiplicative point processes into a viable mathematical framework for spiking network modeling. The perspective is that the dynamic behavior of these neuronal networks is faithfully reflected by a set of non-linear rate equations, describing all interactions on the population level. These equations are similar in structure to Lotka-Volterra equations, well known for their use in modeling predator-prey relations in population biology, but abundant applications to economic theory have also been described. We present a number of biologically relevant examples for spiking network function, which can be studied with the help of the aforementioned correspondence between spike trains and specific systems of non-linear coupled ordinary differential equations. We claim that, enabled by the use of multiplicative point processes, we can make essential contributions to a more thorough understanding of the dynamical properties of interacting neuronal populations.
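The correspondence to Lotka-Volterra dynamics can be sketched directly: treating the excitatory population rate as "prey" and the inhibitory rate as "predator" yields coupled multiplicative rate equations whose solutions oscillate around a fixed point. The parameters and initial conditions below are illustrative, not taken from the paper.

```python
import numpy as np

def ei_rate_dynamics(a=1.0, b=0.5, c=1.0, d=0.5, r_e0=1.0, r_i0=0.5,
                     dt=1e-3, t_end=10.0):
    """Lotka-Volterra-style rate equations: excitation grows and is
    suppressed by inhibition; inhibition is driven by excitation."""
    n = int(t_end / dt)
    r_e = np.empty(n)
    r_i = np.empty(n)
    r_e[0], r_i[0] = r_e0, r_i0
    for k in range(1, n):
        e, i = r_e[k - 1], r_i[k - 1]
        r_e[k] = e + dt * e * (a - b * i)    # prey-like excitatory rate
        r_i[k] = i + dt * i * (d * e - c)    # predator-like inhibitory rate
    return r_e, r_i

r_e, r_i = ei_rate_dynamics()
print(r_e.min(), r_e.max(), r_i.max())
```

The multiplicative structure guarantees that rates stay non-negative, one of the properties that makes this class of equations a natural population-level description of spiking interactions.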
First-spike latency in Hodgkin's three classes of neurons.
Wang, Hengtong; Chen, Yueling; Chen, Yong
2013-07-07
We study the first-spike latency (FSL) in Hodgkin's three classes of neurons with the Morris-Lecar neuron model. It is found that all three classes of neurons can encode an external stimulus into FSLs. With DC inputs, the FSLs of all of the neurons decrease with input intensity. As the input current decreases to the threshold, class 1 neurons show an arbitrarily long FSL, whereas class 2 and 3 neurons exhibit finite limiting FSLs. When the input current is sinusoidal, the amplitude, frequency and initial phase can be encoded by all three classes of neurons. The FSLs of all of the neurons decrease with the input amplitude and frequency. When the input frequency is too high, all of the neurons respond with infinite FSLs. When the initial phase increases, the FSL decreases, then jumps to a maximal value, and finally decreases linearly. With changes in the input parameters, the FSLs of class 1 and 2 neurons exhibit similar properties. However, the FSLs of class 3 neurons become slightly longer, and these neurons respond only within a narrow range of initial phases when input frequencies are low. Moreover, our results also show that the FSL and firing rate responses are mutually independent processes and that neurons can encode an external stimulus into different FSLs and firing rates simultaneously. This finding is consistent with the current theory of dual or multiple complementary coding mechanisms. Copyright © 2013 Elsevier Ltd. All rights reserved.
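The monotone decrease of FSL with DC input intensity holds already for the simplest integrator, where the latency is available in closed form. As a sanity check, here is a LIF surrogate (not the Morris-Lecar model used in the paper); parameters are illustrative.

```python
import math

def lif_first_spike_latency(i_dc, tau=0.01, v_th=1.0):
    """Closed-form FSL of a LIF neuron for a DC step from rest:
    v(t) = i_dc * (1 - exp(-t / tau)); infinite below threshold."""
    if i_dc <= v_th:
        return math.inf                 # subthreshold input: never spikes
    return tau * math.log(i_dc / (i_dc - v_th))

latencies = [lif_first_spike_latency(i) for i in (1.1, 1.5, 3.0)]
print(latencies)
```

Note that as the input approaches threshold from above, the latency diverges, mimicking the arbitrarily long FSL of class 1 neurons; the finite limiting FSLs of class 2 and 3 neurons require the richer Morris-Lecar dynamics.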
A Simple Deep Learning Method for Neuronal Spike Sorting
Yang, Kai; Wu, Haifeng; Zeng, Yu
2017-10-01
Spike sorting is one of the key techniques for understanding brain activity. With the development of modern electrophysiology technology, recent multi-electrode technologies have been able to record thousands of neuronal spikes simultaneously. Spike sorting in this setting increases the computational complexity of conventional sorting algorithms. In this paper, we focus on how to reduce this complexity, and introduce a deep learning algorithm, the principal component analysis network (PCANet), to spike sorting. The introduced method starts from a conventional model and establishes a Toeplitz matrix. From the column vectors of the matrix, we train a PCANet, from which eigenvectors of the spikes can be extracted. Finally, a support vector machine (SVM) is used to sort the spikes. In experiments, we choose two groups of simulated data from publicly available databases and compare the introduced method with conventional methods. The results indicate that the introduced method indeed has lower complexity, with the same sorting errors as the conventional methods.
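The feature-extraction-plus-classifier pipeline can be sketched with plain single-stage PCA in place of the PCANet, and a nearest-centroid classifier standing in for the SVM to stay dependency-free. The synthetic waveforms, noise level, and sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)
t = np.linspace(0, 1, 32)
# two assumed spike waveform prototypes (two putative neurons)
proto = [np.exp(-((t - 0.3) / 0.08) ** 2),          # narrow spike
         0.8 * np.exp(-((t - 0.5) / 0.2) ** 2)]     # broad spike
labels = rng.integers(0, 2, 200)
X = np.array([proto[k] + 0.05 * rng.standard_normal(32) for k in labels])

# PCA via SVD on the centred waveforms (stand-in for the PCANet stage)
Xc = X - X.mean(axis=0)
_, _, vt = np.linalg.svd(Xc, full_matrices=False)
feats = Xc @ vt[:2].T   # project onto the first two principal components

# nearest-centroid classification in PC space (stand-in for the SVM)
c0 = feats[labels == 0].mean(axis=0)
c1 = feats[labels == 1].mean(axis=0)
pred = (np.linalg.norm(feats - c1, axis=1)
        < np.linalg.norm(feats - c0, axis=1)).astype(int)
accuracy = (pred == labels).mean()
print(accuracy)
```

Projecting 32-sample waveforms onto two principal components preserves the class separation while shrinking the classifier's input, which is the complexity-reduction idea the paper pursues with a deeper PCA cascade.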
Real-time computing platform for spiking neurons (RT-spike).
Ros, Eduardo; Ortigosa, Eva M; Agís, Rodrigo; Carrillo, Richard; Arnold, Michael
2006-07-01
A computing platform is described for simulating arbitrary networks of spiking neurons in real time. A hybrid computing scheme is adopted that uses both software and hardware components to manage the tradeoff between flexibility and computational power; the neuron model is implemented in hardware and the network model and the learning are implemented in software. The incremental transition of the software components into hardware is supported. We focus on a spike response model (SRM) for a neuron where the synapses are modeled as input-driven conductances. The temporal dynamics of the synaptic integration process are modeled with a synaptic time constant that results in a gradual injection of charge. This type of model is computationally expensive and is not easily amenable to existing software-based event-driven approaches. As an alternative we have designed an efficient time-based computing architecture in hardware, where the different stages of the neuron model are processed in parallel. Further improvements occur by computing multiple neurons in parallel using multiple processing units. This design is tested using reconfigurable hardware and its scalability and performance evaluated. Our overall goal is to investigate biologically realistic models for the real-time control of robots operating within closed action-perception loops, and so we evaluate the performance of the system on simulating a model of the cerebellum where the emulation of the temporal dynamics of the synaptic integration process is important.
Emergent properties of interacting populations of spiking neurons
Directory of Open Access Journals (Sweden)
Stefano eCardanobile
2011-12-01
Full Text Available Dynamic neuronal networks are a key paradigm of increasing importance in brain research, concerned with the functional analysis of biological neuronal networks and, at the same time, with the synthesis of artificial brain-like systems. In this context, neuronal network models serve as mathematical tools to understand the function of brains, but they might as well develop into future tools for enhancing certain functions of our nervous system. Here, we discuss our recent achievements in developing multiplicative point processes into a viable mathematical framework for spiking network modeling. The perspective is that the dynamic behavior of these neuronal networks on the population level is faithfully reflected by a set of non-linear rate equations, describing all interactions on this level. These equations, in turn, are similar in structure to the Lotka-Volterra equations, well known for their use in modeling predator-prey relationships in population biology, but abundant applications to economic theory have also been described. We present a number of biologically relevant examples for spiking network function, which can be studied with the help of the aforementioned correspondence between spike trains and specific systems of non-linear coupled ordinary differential equations. We claim that, enabled by the use of multiplicative point processes, we can make essential contributions to a more thorough understanding of the dynamical properties of neural populations.
Asynchronous Rate Chaos in Spiking Neuronal Circuits.
Directory of Open Access Journals (Sweden)
Omri Harish
2015-07-01
Full Text Available The brain exhibits temporally complex patterns of activity with features similar to those of chaotic systems. Theoretical studies over the last twenty years have described various computational advantages for such regimes in neuronal systems. Nevertheless, it still remains unclear whether chaos requires specific cellular properties or network architectures, or whether it is a generic property of neuronal circuits. We investigate the dynamics of networks of excitatory-inhibitory (EI) spiking neurons with random sparse connectivity operating in the regime of balance of excitation and inhibition. Combining Dynamical Mean-Field Theory with numerical simulations, we show that chaotic, asynchronous firing rate fluctuations emerge generically for sufficiently strong synapses. Two different mechanisms can lead to these chaotic fluctuations. One mechanism relies on slow I-I inhibition which gives rise to slow subthreshold voltage and rate fluctuations. The decorrelation time of these fluctuations is proportional to the time constant of the inhibition. The second mechanism relies on the recurrent E-I-E feedback loop. It requires slow excitation but the inhibition can be fast. In the corresponding dynamical regime all neurons exhibit rate fluctuations on the time scale of the excitation. Another feature of this regime is that the population-averaged firing rate is substantially smaller in the excitatory population than in the inhibitory population. This is not necessarily the case in the I-I mechanism. Finally, we discuss the neurophysiological and computational significance of our results.
Asynchronous Rate Chaos in Spiking Neuronal Circuits
Harish, Omri; Hansel, David
2015-01-01
The brain exhibits temporally complex patterns of activity with features similar to those of chaotic systems. Theoretical studies over the last twenty years have described various computational advantages for such regimes in neuronal systems. Nevertheless, it still remains unclear whether chaos requires specific cellular properties or network architectures, or whether it is a generic property of neuronal circuits. We investigate the dynamics of networks of excitatory-inhibitory (EI) spiking neurons with random sparse connectivity operating in the regime of balance of excitation and inhibition. Combining Dynamical Mean-Field Theory with numerical simulations, we show that chaotic, asynchronous firing rate fluctuations emerge generically for sufficiently strong synapses. Two different mechanisms can lead to these chaotic fluctuations. One mechanism relies on slow I-I inhibition which gives rise to slow subthreshold voltage and rate fluctuations. The decorrelation time of these fluctuations is proportional to the time constant of the inhibition. The second mechanism relies on the recurrent E-I-E feedback loop. It requires slow excitation but the inhibition can be fast. In the corresponding dynamical regime all neurons exhibit rate fluctuations on the time scale of the excitation. Another feature of this regime is that the population-averaged firing rate is substantially smaller in the excitatory population than in the inhibitory population. This is not necessarily the case in the I-I mechanism. Finally, we discuss the neurophysiological and computational significance of our results. PMID:26230679
A new supervised learning algorithm for spiking neurons.
Xu, Yan; Zeng, Xiaoqin; Zhong, Shuiming
2013-06-01
The purpose of supervised learning with temporal encoding for spiking neurons is to make the neurons emit a specific spike train encoded by the precise firing times of spikes. If only running time is considered, supervised learning for a spiking neuron is equivalent to distinguishing the times of desired output spikes from all other times during the running of the neuron by adjusting synaptic weights, which can be regarded as a classification problem. Based on this idea, this letter proposes a new supervised learning method for spiking neurons with temporal encoding; it first transforms the supervised learning task into a classification problem and then solves the problem using the perceptron learning rule. The experimental results show that the proposed method achieves higher learning accuracy and efficiency than existing learning methods, making it more powerful for solving complex and real-time problems.
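As a rough illustration of recasting spike-timing learning as classification, the Python sketch below treats every discrete time step as a binary class (fire or stay silent) and adjusts weights with the classic perceptron rule. The exponential PSP kernel, the batch update, and all parameter values are assumptions for illustration, not the letter's exact formulation.

```python
import numpy as np

def psp_kernel(t, tau=5.0):
    """Simplified exponential post-synaptic potential kernel
    (an assumed kernel shape, not the one used in the letter)."""
    return np.where(t >= 0, np.exp(-np.maximum(t, 0) / tau), 0.0)

def perceptron_spike_learning(input_spikes, desired_times, T=20, lr=0.1,
                              threshold=1.0, epochs=500):
    """Treat each time step as a binary classification problem: the neuron
    should fire (summed PSP >= threshold) exactly at the desired times and
    stay silent otherwise; weights follow the perceptron rule."""
    t_grid = np.arange(T)
    # Feature matrix: PSP contribution of each synapse at each time step.
    X = np.array([[psp_kernel(t - np.array(s)).sum() for s in input_spikes]
                  for t in t_grid])
    y = np.array([1 if t in desired_times else -1 for t in t_grid])
    w = np.zeros(len(input_spikes))
    for _ in range(epochs):
        pred = np.where(X @ w >= threshold, 1, -1)
        errors = y != pred
        if not errors.any():
            break
        w += lr * (y[errors] @ X[errors])  # batch perceptron update
    spikes = np.flatnonzero(X @ w >= threshold)
    return w, spikes
```

With one input spike at t = 2 and a single desired output at t = 2, the rule drives the corresponding weight up just past threshold so the neuron fires only at the target time.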
Solving constraint satisfaction problems with networks of spiking neurons
Directory of Open Access Journals (Sweden)
Zeno Jonke
2016-03-01
Full Text Available Networks of neurons in the brain apply, unlike processors in our current generation of computer hardware, an event-based processing strategy, where short pulses (spikes) are emitted sparsely by neurons to signal the occurrence of an event at a particular point in time. Such spike-based computations promise to be substantially more power-efficient than traditional clocked processing schemes. However, it has turned out to be surprisingly difficult to design networks of spiking neurons that can solve difficult computational problems on the level of single spikes (rather than rates of spikes). We present here a new method for designing networks of spiking neurons via an energy function. Furthermore, we show how the energy function of a network of stochastically firing neurons can be shaped in a quite transparent manner by composing the networks of simple stereotypical network motifs. We show that this design approach enables networks of spiking neurons to produce approximate solutions to difficult (NP-hard) constraint satisfaction problems from the domains of planning/optimization and verification/logical inference. The resulting networks employ noise as a computational resource. Nevertheless, the timing of spikes (rather than just spike rates) plays an essential role in their computations. Furthermore, networks of spiking neurons carry out, for the Traveling Salesman Problem, a more efficient stochastic search for good solutions compared with stochastic artificial neural networks (Boltzmann machines) and Gibbs sampling.
How adaptation shapes spike rate oscillations in recurrent neuronal networks
Directory of Open Access Journals (Sweden)
Moritz Augustin
2013-02-01
Full Text Available Neural mass signals from in-vivo recordings often show oscillations with frequencies ranging from <1 Hz to 100 Hz. Fast rhythmic activity in the beta and gamma range can be generated by network based mechanisms such as recurrent synaptic excitation-inhibition loops. Slower oscillations might instead depend on neuronal adaptation currents whose timescales range from tens of milliseconds to seconds. Here we investigate how the dynamics of such adaptation currents contribute to spike rate oscillations and resonance properties in recurrent networks of excitatory and inhibitory neurons. Based on a network of sparsely coupled spiking model neurons with two types of adaptation current and conductance based synapses with heterogeneous strengths and delays we use a mean-field approach to analyze oscillatory network activity. For constant external input, we find that spike-triggered adaptation currents provide a mechanism to generate slow oscillations over a wide range of adaptation timescales as long as recurrent synaptic excitation is sufficiently strong. Faster rhythms occur when recurrent inhibition is slower than excitation and oscillation frequency increases with the strength of inhibition. Adaptation facilitates such network based oscillations for fast synaptic inhibition and leads to decreased frequencies. For oscillatory external input, adaptation currents amplify a narrow band of frequencies and cause phase advances for low frequencies in addition to phase delays at higher frequencies. Our results therefore identify the different key roles of neuronal adaptation dynamics for rhythmogenesis and selective signal propagation in recurrent networks.
Inverse stochastic resonance in networks of spiking neurons.
Uzuntarla, Muhammet; Barreto, Ernest; Torres, Joaquin J
2017-07-01
Inverse Stochastic Resonance (ISR) is a phenomenon in which the average spiking rate of a neuron exhibits a minimum with respect to noise. ISR has been studied in individual neurons, but here, we investigate ISR in scale-free networks, where the average spiking rate is calculated over the neuronal population. We use Hodgkin-Huxley model neurons with channel noise (i.e., stochastic gating variable dynamics), and the network connectivity is implemented via electrical or chemical connections (i.e., gap junctions or excitatory/inhibitory synapses). We find that the emergence of ISR depends on the interplay between each neuron's intrinsic dynamical structure, channel noise, and network inputs, where the latter in turn depend on network structure parameters. We observe that with weak gap junction or excitatory synaptic coupling, network heterogeneity and sparseness tend to favor the emergence of ISR. With inhibitory coupling, ISR is quite robust. We also identify dynamical mechanisms that underlie various features of this ISR behavior. Our results suggest possible ways of experimentally observing ISR in actual neuronal systems.
Channel noise effects on first spike latency of a stochastic Hodgkin-Huxley neuron
Maisel, Brenton; Lindenberg, Katja
2017-02-01
While it is widely accepted that information is encoded in neurons via action potentials or spikes, it is far less understood what specific features of spiking contain encoded information. Experimental evidence has suggested that the timing of the first spike may be an energy-efficient coding mechanism that contains more neural information than subsequent spikes. Therefore, the biophysical features of neurons that underlie response latency are of considerable interest. Here we examine the effects of channel noise on the first spike latency of a Hodgkin-Huxley neuron receiving random input from many other neurons. Because the principal feature of a Hodgkin-Huxley neuron is the stochastic opening and closing of channels, the fluctuations in the number of open channels lead to fluctuations in the membrane voltage and modify the timing of the first spike. Our results show that when a neuron has a larger number of channels, (i) the occurrence of the first spike is delayed and (ii) the variation in the first spike timing is greater. We also show that the mean, median, and interquartile range of first spike latency can be accurately predicted from a simple linear regression by knowing only the number of channels in the neuron and the rate at which presynaptic neurons fire, but the standard deviation (i.e., neuronal jitter) cannot be predicted using only this information. We then compare our results to another commonly used stochastic Hodgkin-Huxley model and show that the more commonly used model overstates the first spike latency but can predict the standard deviation of first spike latencies accurately. We end by suggesting a more suitable definition for the neuronal jitter based upon our simulations and comparison of the two models.
Error-backpropagation in temporally encoded networks of spiking neurons
S.M. Bohte (Sander); J.A. La Poutré (Han); J.N. Kok (Joost)
2000-01-01
For a network of spiking neurons that encodes information in the timing of individual spikes, we derive a supervised learning rule, SpikeProp, akin to traditional error-backpropagation, and show how to overcome the discontinuities introduced by thresholding. With this algorithm,
Clustering predicts memory performance in networks of spiking and non-spiking neurons
Directory of Open Access Journals (Sweden)
Weiliang Chen
2011-03-01
Full Text Available The problem we address in this paper is that of finding effective and parsimonious patterns of connectivity in sparse associative memories. This problem must be addressed in real neuronal systems, so results in artificial systems could throw light on real ones. We show that there are efficient patterns of connectivity and that these patterns are effective in models with either spiking or non-spiking neurons. This suggests that there may be some underlying general principles governing good connectivity in such networks. We also show that the clustering of the network, measured by the clustering coefficient, has a strong linear correlation with the performance of associative memory. This result is important since a purely static measure of network connectivity appears to determine an important dynamic property of the network.
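The clustering coefficient referred to above can be computed directly from an adjacency matrix. The minimal Python sketch below implements the standard local clustering coefficient (triangle density among each node's neighbours), averaged over all nodes:

```python
import numpy as np

def clustering_coefficient(adj):
    """Average local clustering coefficient of an undirected graph given
    as a 0/1 adjacency matrix with no self-loops."""
    A = np.asarray(adj)
    n = A.shape[0]
    coeffs = []
    for i in range(n):
        nbrs = np.flatnonzero(A[i])
        k = len(nbrs)
        if k < 2:
            coeffs.append(0.0)  # convention: undefined -> 0
            continue
        # Edges actually present among the neighbours of i,
        # divided by the k*(k-1)/2 possible edges.
        links = A[np.ix_(nbrs, nbrs)].sum() / 2
        coeffs.append(2.0 * links / (k * (k - 1)))
    return sum(coeffs) / n
```

A triangle gives a coefficient of 1.0, while a three-node path gives 0.0, since the two endpoints of the path are never connected to each other.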
Joint Probability-Based Neuronal Spike Train Classification
Directory of Open Access Journals (Sweden)
Yan Chen
2009-01-01
Full Text Available Neuronal spike trains are used by the nervous system to encode and transmit information. Euclidean distance-based methods (EDBMs) have been applied to quantify the similarity between temporally-discretized spike trains and model responses. In this study, using the same discretization procedure, we developed and applied a joint probability-based method (JPBM) to classify individual spike trains of slowly adapting pulmonary stretch receptors (SARs). The activity of individual SARs was recorded in anaesthetized, paralysed adult male rabbits, which were artificially ventilated at a constant rate and at one of three different volumes. Two-thirds of the responses to the 600 stimuli presented at each volume were used to construct three response models (one for each stimulus volume) consisting of a series of time bins, each with spike probabilities. The remaining one-third of the responses were used as test responses to be classified into one of the three model responses. This was done by computing the joint probability of observing the same series of events (spikes or no spikes), as dictated by the test response, in a given model, and determining which of the three probabilities was highest. The JPBM generally produced better classification accuracy than the EDBM, and both performed well above chance. Both methods were similarly affected by variations in discretization parameters, response epoch duration, and two different response alignment strategies. Increasing bin widths increased classification accuracy, which also improved with increased observation time, but primarily during periods of increasing lung inflation. Thus, the JPBM is a simple and effective method for performing spike train classification.
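Under the binarization described above, the JPBM amounts to scoring the observed spike/no-spike sequence against each model's per-bin spike probabilities and choosing the best-scoring model. A minimal Python sketch, assuming independent bins and using a small clipping constant to avoid taking the log of zero (both assumptions for illustration):

```python
import numpy as np

def classify_spike_train(test_bins, models, eps=1e-6):
    """Joint probability-based classification: `test_bins` is a 0/1 vector
    (spike / no spike per time bin); each model is a vector of per-bin
    spike probabilities estimated from training responses.  Returns the
    index of the model with the highest joint (log-)probability."""
    test = np.asarray(test_bins)
    scores = []
    for p in models:
        p = np.clip(np.asarray(p), eps, 1 - eps)  # avoid log(0)
        # log P(response | model), treating bins as independent events
        ll = np.sum(test * np.log(p) + (1 - test) * np.log(1 - p))
        scores.append(ll)
    return int(np.argmax(scores))
```

Working in log-probabilities keeps the product of many per-bin terms numerically stable, which matters once the number of bins grows large.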
Solving Constraint Satisfaction Problems with Networks of Spiking Neurons.
Jonke, Zeno; Habenschuss, Stefan; Maass, Wolfgang
2016-01-01
Networks of neurons in the brain apply, unlike processors in our current generation of computer hardware, an event-based processing strategy, where short pulses (spikes) are emitted sparsely by neurons to signal the occurrence of an event at a particular point in time. Such spike-based computations promise to be substantially more power-efficient than traditional clocked processing schemes. However, it turns out to be surprisingly difficult to design networks of spiking neurons that can solve difficult computational problems on the level of single spikes, rather than rates of spikes. We present here a new method for designing networks of spiking neurons via an energy function. Furthermore, we show how the energy function of a network of stochastically firing neurons can be shaped in a transparent manner by composing the networks of simple stereotypical network motifs. We show that this design approach enables networks of spiking neurons to produce approximate solutions to difficult (NP-hard) constraint satisfaction problems from the domains of planning/optimization and verification/logical inference. The resulting networks employ noise as a computational resource. Nevertheless, the timing of spikes plays an essential role in their computations. Furthermore, networks of spiking neurons carry out, for the Traveling Salesman Problem, a more efficient stochastic search for good solutions compared with stochastic artificial neural networks (Boltzmann machines) and Gibbs sampling.
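The core idea, shaping an energy function whose low-energy states encode solutions and letting noise drive the search, can be caricatured with stochastically flipping binary units. The Python sketch below is emphatically not the authors' spiking-network construction; it is a Glauber-dynamics toy on a tiny clause-satisfaction instance, with all parameters assumed for illustration:

```python
import math
import random

def stochastic_csp_search(n_vars, clauses, beta=2.0, steps=2000, seed=0):
    """Noise-driven search over the energy landscape defined by the number
    of violated clauses: binary 'neurons' flip stochastically (Glauber
    dynamics).  A clause is a list of (variable index, required value)
    literals and is satisfied if any literal holds."""
    rng = random.Random(seed)
    state = [rng.choice([0, 1]) for _ in range(n_vars)]

    def energy(s):
        # Energy = number of clauses in which every literal fails.
        return sum(all(s[i] != want for i, want in cl) for cl in clauses)

    best, best_e = state[:], energy(state)
    for _ in range(steps):
        i = rng.randrange(n_vars)
        flipped = state[:]
        flipped[i] ^= 1
        d_e = energy(flipped) - energy(state)
        # Stochastic acceptance: noise lets the search escape local minima.
        if rng.random() < 1.0 / (1.0 + math.exp(beta * d_e)):
            state = flipped
        if energy(state) < best_e:
            best, best_e = state[:], energy(state)
    return best, best_e
```

On an instance whose unit clauses force x0 = 1 and x1 = 0, the search reliably reaches the zero-energy assignment within a few hundred flips.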
Coherent and intermittent ensemble oscillations emerge from networks of irregular spiking neurons.
Hoseini, Mahmood S; Wessel, Ralf
2016-01-01
Local field potential (LFP) recordings from spatially distant cortical circuits reveal episodes of coherent gamma oscillations that are intermittent, and of variable peak frequency and duration. Concurrently, single neuron spiking remains largely irregular and of low rate. The underlying potential mechanisms of this emergent network activity have long been debated. Here we reproduce such intermittent ensemble oscillations in a model network, consisting of excitatory and inhibitory model neurons with the characteristics of regular-spiking (RS) pyramidal neurons, and fast-spiking (FS) and low-threshold spiking (LTS) interneurons. We find that fluctuations in the external inputs trigger reciprocally connected and irregularly spiking RS and FS neurons in episodes of ensemble oscillations, which are terminated by the recruitment of the LTS population with concurrent accumulation of inhibitory conductance in both RS and FS neurons. The model qualitatively reproduces experimentally observed phase drift, oscillation episode duration distributions, variation in the peak frequency, and the concurrent irregular single-neuron spiking at low rate. Furthermore, consistent with previous experimental studies using optogenetic manipulation, periodic activation of FS, but not RS, model neurons causes enhancement of gamma oscillations. In addition, increasing the coupling between two model networks from low to high reveals a transition from independent intermittent oscillations to coherent intermittent oscillations. In conclusion, the model network suggests biologically plausible mechanisms for the generation of episodes of coherent intermittent ensemble oscillations with irregular spiking neurons in cortical circuits. Copyright © 2016 the American Physiological Society.
A spiking neuron circuit based on a carbon nanotube transistor
International Nuclear Information System (INIS)
Chen, C-L; Kim, K; Truong, Q; Shen, A; Li, Z; Chen, Y
2012-01-01
A spiking neuron circuit based on a carbon nanotube (CNT) transistor is presented in this paper. The spiking neuron circuit has a crossbar architecture in which the transistor gates are connected to its row electrodes and the transistor sources are connected to its column electrodes. An electrochemical cell is incorporated in the gate of the transistor by sandwiching a hydrogen-doped poly(ethylene glycol)methyl ether (PEG) electrolyte between the CNT channel and the top gate electrode. An input spike applied to the gate triggers a dynamic drift of the hydrogen ions in the PEG electrolyte, resulting in a post-synaptic current (PSC) through the CNT channel. Spikes input into the rows trigger PSCs through multiple CNT transistors; the PSCs accumulate in the columns and integrate into a ‘soma’ circuit to trigger output spikes based on an integrate-and-fire mechanism. The spiking neuron circuit can potentially emulate biological neuron networks and their intelligent functions. (paper)
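The integrate-and-fire crossbar scheme can be caricatured in software: row spikes drive exponentially decaying PSCs through a conductance matrix, and each column's 'soma' leakily integrates the summed current until it crosses threshold. All parameter values below are illustrative, not device values from the paper:

```python
import numpy as np

def crossbar_integrate_and_fire(spike_rows, g, tau_syn=5.0, tau_m=20.0,
                                threshold=1.0, dt=1.0, T=100):
    """Toy crossbar: `spike_rows` lists input spike times per row; `g` is
    the (rows x columns) conductance matrix.  Each column integrates its
    summed post-synaptic current in a leaky 'soma' and fires (with reset)
    at `threshold`.  Returns output spike times per column."""
    n_rows, n_cols = g.shape
    syn = np.zeros(n_rows)   # per-row synaptic activation
    v = np.zeros(n_cols)     # per-column membrane potential
    out = [[] for _ in range(n_cols)]
    for step in range(T):
        t = step * dt
        syn *= np.exp(-dt / tau_syn)      # PSC decay
        for row, times in enumerate(spike_rows):
            if t in times:
                syn[row] += 1.0           # incoming spike injects charge
        i_syn = syn @ g                   # column-wise summed PSCs
        v += dt * (-v / tau_m + i_syn)    # leaky integration
        fired = v >= threshold
        v[fired] = 0.0                    # integrate-and-fire reset
        for c in np.flatnonzero(fired):
            out[c].append(t)
    return out
```

The synaptic time constant makes the charge injection gradual, so the output spike lags the input spike by a few time steps rather than firing instantaneously.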
Note on the coefficient of variations of neuronal spike trains.
Lengler, Johannes; Steger, Angelika
2017-08-01
It is known that many neurons in the brain show spike trains with a coefficient of variation (CV) of the interspike times of approximately 1, thus resembling the properties of Poisson spike trains. Computational studies have been able to reproduce this phenomenon. However, the underlying models were too complex to be examined analytically. In this paper, we offer a simple model that shows the same effect but is accessible to an analytic treatment. The model is a random walk model with a reflecting barrier; we give explicit formulas for the CV in the regime of excess inhibition. We also analyze the effect of probabilistic synapses in our model and show that it resembles previous findings that were obtained by simulation.
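A minimal version of such a random walk model with a reflecting barrier is easy to simulate. In the excess-inhibition regime (net downward drift) the interspike-interval CV comes out close to 1, in line with the Poisson-like behaviour the paper analyses; the parameter values here are assumptions for illustration:

```python
import random
import statistics

def random_walk_cv(p_exc=0.45, threshold=10, n_spikes=1000, seed=1):
    """Random-walk neuron with a reflecting barrier at 0: each time step
    the potential moves +1 with probability p_exc (excitation) and -1
    otherwise (inhibition), cannot go below 0, and a spike is emitted
    (with reset) on reaching `threshold`.  Returns the coefficient of
    variation of the interspike intervals."""
    rng = random.Random(seed)
    v, isi, intervals = 0, 0, []
    while len(intervals) < n_spikes:
        isi += 1
        v += 1 if rng.random() < p_exc else -1
        v = max(v, 0)                    # reflecting barrier
        if v >= threshold:
            intervals.append(isi)
            v, isi = 0, 0
    return statistics.stdev(intervals) / statistics.mean(intervals)
```

With p_exc < 0.5, reaching the threshold is a rare event against the drift, so the interspike intervals are nearly memoryless and the CV sits near 1.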
Efficient computation in networks of spiking neurons: simulations and theory
International Nuclear Information System (INIS)
Natschlaeger, T.
1999-01-01
One of the most prominent features of biological neural systems is that individual neurons communicate via short electrical pulses, the so-called action potentials or spikes. In this thesis we investigate possible mechanisms which can in principle explain how complex computations in spiking neural networks (SNN) can be performed very fast, i.e. within a few tens of milliseconds. Some of these models are based on the assumption that relevant information is encoded by the timing of individual spikes (temporal coding). We will also discuss a model which is based on a population code and still is able to perform fast complex computations. In their natural environment biological neural systems have to process signals with a rich temporal structure. Hence it is an interesting question how neural systems process time series. In this context we explore possible links between biophysical characteristics of single neurons (refractory behavior, connectivity, time course of postsynaptic potentials) and synapses (unreliability, dynamics) on the one hand and possible computations on time series on the other. Furthermore we describe a general model of computation that exploits dynamic synapses. This model provides a general framework for understanding how neural systems process time-varying signals. (author)
A self-resetting spiking phase-change neuron
Cobley, R. A.; Hayat, H.; Wright, C. D.
2018-05-01
Neuromorphic, or brain-inspired, computing applications of phase-change devices have to date concentrated primarily on the implementation of phase-change synapses. However, the so-called accumulation mode of operation inherent in phase-change materials and devices can also be used to mimic the integrative properties of a biological neuron. Here we demonstrate, using physical modelling of nanoscale devices and SPICE modelling of associated circuits, that a single phase-change memory cell integrated into a comparator type circuit can deliver a basic hardware mimic of an integrate-and-fire spiking neuron with self-resetting capabilities. Such phase-change neurons, in combination with phase-change synapses, can potentially open a new route for the realisation of all-phase-change neuromorphic computing.
A self-resetting spiking phase-change neuron.
Cobley, R A; Hayat, H; Wright, C D
2018-05-11
Neuromorphic, or brain-inspired, computing applications of phase-change devices have to date concentrated primarily on the implementation of phase-change synapses. However, the so-called accumulation mode of operation inherent in phase-change materials and devices can also be used to mimic the integrative properties of a biological neuron. Here we demonstrate, using physical modelling of nanoscale devices and SPICE modelling of associated circuits, that a single phase-change memory cell integrated into a comparator type circuit can deliver a basic hardware mimic of an integrate-and-fire spiking neuron with self-resetting capabilities. Such phase-change neurons, in combination with phase-change synapses, can potentially open a new route for the realisation of all-phase-change neuromorphic computing.
The chronotron: a neuron that learns to fire temporally precise spike patterns.
Directory of Open Access Journals (Sweden)
Răzvan V Florian
Full Text Available In many cases, neurons process information carried by the precise timings of spikes. Here we show how neurons can learn to generate specific temporally precise output spikes in response to input patterns of spikes having precise timings, thus processing and memorizing information that is entirely temporally coded, both as input and as output. We introduce two new supervised learning rules for spiking neurons with temporal coding of information (chronotrons): one that provides high memory capacity (E-learning), and one that has a higher biological plausibility (I-learning). With I-learning, the neuron learns to fire the target spike trains through synaptic changes that are proportional to the synaptic currents at the timings of real and target output spikes. We study these learning rules in computer simulations where we train integrate-and-fire neurons. Both learning rules allow neurons to fire at the desired timings, with sub-millisecond precision. We show how chronotrons can learn to classify their inputs, by firing identical, temporally precise spike trains for different inputs belonging to the same class. When the input is noisy, the classification also leads to noise reduction. We compute lower bounds for the memory capacity of chronotrons and explore the influence of various parameters on chronotrons' performance. The chronotrons can model neurons that encode information in the time of the first spike relative to the onset of salient stimuli or neurons in oscillatory networks that encode information in the phases of spikes relative to the background oscillation. Our results show that firing one spike per cycle optimizes memory capacity in neurons encoding information in the phase of firing relative to a background rhythm.
Inherently stochastic spiking neurons for probabilistic neural computation
Al-Shedivat, Maruan; Naous, Rawan; Neftci, Emre; Cauwenberghs, Gert; Salama, Khaled N.
2015-01-01
… Our analysis and simulations show that the proposed neuron circuit satisfies a neural computability condition that enables probabilistic neural sampling and spike-based Bayesian learning and inference. Our findings constitute an important step towards
Spiking and bursting patterns of fractional-order Izhikevich model
Teka, Wondimu W.; Upadhyay, Ranjit Kumar; Mondal, Argha
2018-03-01
Bursting and spiking oscillations play major roles in processing and transmitting information in the brain through cortical neurons that respond differently to the same signal. These oscillations display complex dynamics that might be produced by using neuronal models and varying many model parameters. Recent studies have shown that models with fractional order can produce several types of history-dependent neuronal activities without the adjustment of several parameters. We studied the fractional-order Izhikevich model and analyzed different kinds of oscillations that emerge from the fractional dynamics. The model produces a wide range of neuronal spike responses, including regular spiking, fast spiking, intrinsic bursting, mixed mode oscillations, regular bursting and chattering, by adjusting only the fractional order. Both the active and silent phase of the burst increase when the fractional-order model further deviates from the classical model. For smaller fractional order, the model produces memory dependent spiking activity after the pulse signal turned off. This special spiking activity and other properties of the fractional-order model are caused by the memory trace that emerges from the fractional-order dynamics and integrates all the past activities of the neuron. On the network level, the response of the neuronal network shifts from random to scale-free spiking. Our results suggest that the complex dynamics of spiking and bursting can be the result of the long-term dependence and interaction of intracellular and extracellular ionic currents.
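One common way to realize such memory-dependent dynamics numerically is a Grünwald-Letnikov discretization, in which binomial-weighted history terms implement the memory trace that integrates the neuron's past activity. The sketch below applies this to the Izhikevich equations with the classic regular-spiking parameter set; the discretization scheme, step size, and parameter values are assumptions for illustration, not necessarily the authors' method:

```python
import numpy as np

def fractional_izhikevich(alpha=0.9, T=500.0, dt=0.5, I=10.0,
                          a=0.02, b=0.2, c=-65.0, d=8.0):
    """Fractional-order Izhikevich neuron via a Grünwald-Letnikov
    discretization of the voltage equation: the fractional order `alpha`
    enters through binomial-weighted history terms (the memory trace).
    Returns the spike times over a simulation of length T (ms)."""
    n = int(T / dt)
    # Grünwald-Letnikov coefficients: gl[k] = gl[k-1] * (1 - (1+alpha)/k)
    gl = np.zeros(n)
    gl[0] = 1.0
    for k in range(1, n):
        gl[k] = gl[k - 1] * (1.0 - (1.0 + alpha) / k)
    v_hist = np.zeros(n)
    v, u = c, b * c
    v_hist[0] = v
    spikes = []
    for i in range(1, n):
        # Memory trace: weighted sum over the entire voltage history.
        memory = np.dot(gl[1:i + 1], v_hist[i - 1::-1])
        dv = 0.04 * v * v + 5.0 * v + 140.0 - u + I
        v = dt ** alpha * dv - memory
        u += dt * a * (b * v - u)
        if v >= 30.0:                 # spike cutoff and reset
            spikes.append(i * dt)
            v = c
            u += d
        v_hist[i] = v
    return spikes
```

For alpha = 1 the coefficients collapse to gl[1] = -1 and zero thereafter, so the update reduces exactly to the classical forward-Euler Izhikevich model; smaller orders weight the whole voltage history.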
Spectral components of cytosolic [Ca2+] spiking in neurons
DEFF Research Database (Denmark)
Kardos, J; Szilágyi, N; Juhász, G
1998-01-01
… Delayed complex responses of large [Ca2+]c spiking observed in cells from a different set of cultures were synthesized by a set of frequencies within the range 0.018-0.117 Hz. Differential frequency patterns are suggested as characteristics of the [Ca2+]c spiking responses of neurons under different...
Extracting functionally feedforward networks from a population of spiking neurons.
Vincent, Kathleen; Tauskela, Joseph S; Thivierge, Jean-Philippe
2012-01-01
Neuronal avalanches are a ubiquitous form of activity characterized by spontaneous bursts whose size distribution follows a power-law. Recent theoretical models have replicated power-law avalanches by assuming the presence of functionally feedforward connections (FFCs) in the underlying dynamics of the system. Accordingly, avalanches are generated by a feedforward chain of activation that persists despite being embedded in a larger, massively recurrent circuit. However, it is unclear to what extent networks of living neurons that exhibit power-law avalanches rely on FFCs. Here, we employed a computational approach to reconstruct the functional connectivity of cultured cortical neurons plated on multielectrode arrays (MEAs) and investigated whether pharmacologically induced alterations in avalanche dynamics are accompanied by changes in FFCs. This approach begins by extracting a functional network of directed links between pairs of neurons, and then evaluates the strength of FFCs using Schur decomposition. In a first step, we examined the ability of this approach to extract FFCs from simulated spiking neurons. The strength of FFCs obtained in strictly feedforward networks diminished monotonically as links were gradually rewired at random. Next, we estimated the FFCs of spontaneously active cortical neuron cultures in the presence of either a control medium, a GABA(A) receptor antagonist (PTX), or an AMPA receptor antagonist combined with an NMDA receptor antagonist (APV/DNQX). The distribution of avalanche sizes in these cultures was modulated by this pharmacology, with a shallower power-law under PTX (due to the prominence of larger avalanches) and a steeper power-law under APV/DNQX (due to avalanches recruiting fewer neurons) relative to control cultures. The strength of FFCs increased in networks after application of PTX, consistent with an amplification of feedforward activity during avalanches. Conversely, FFCs decreased after application of APV
Input-output relation and energy efficiency in the neuron with different spike threshold dynamics.
Yi, Guo-Sheng; Wang, Jiang; Tsang, Kai-Ming; Wei, Xi-Le; Deng, Bin
2015-01-01
Neurons encode and transmit information by generating sequences of output spikes, which is a highly energy-consuming process. The spike is initiated when membrane depolarization reaches a threshold voltage. In many neurons, the threshold is dynamic and depends on the rate of membrane depolarization (dV/dt) preceding a spike. Identifying the metabolic energy involved in neural coding and its relationship to threshold dynamics is critical to understanding neuronal function and evolution. Here, we use a modified Morris-Lecar model to investigate neuronal input-output properties and the energy efficiency associated with different spike threshold dynamics. We find that neurons with a dynamic threshold sensitive to dV/dt generate a discontinuous frequency-current curve and a type II phase response curve (PRC) through a Hopf bifurcation, and weak noise can prohibit spiking when the bifurcation has just occurred. A threshold that is insensitive to dV/dt, instead, results in a continuous frequency-current curve, a type I PRC and a saddle-node on invariant circle bifurcation, and in this case weak noise cannot inhibit spiking. It is also shown that the bifurcation, frequency-current curve and PRC type associated with different threshold dynamics arise from the distinct subthreshold interactions of membrane currents. Further, we observe that the energy consumption of the neuron is related to its firing characteristics. The depolarization of spike threshold improves neuronal energy efficiency by reducing the overlap of Na(+) and K(+) currents during an action potential. The highest energy efficiency is achieved at a more depolarized spike threshold and high stimulus current. These results provide a fundamental biophysical connection that links spike threshold dynamics, input-output relation, energetics and spike initiation, which could contribute to uncovering the neural encoding mechanism.
Predicting Spike Occurrence and Neuronal Responsiveness from LFPs in Primary Somatosensory Cortex
Storchi, Riccardo; Zippo, Antonio G.; Caramenti, Gian Carlo; Valente, Maurizio; Biella, Gabriele E. M.
2012-01-01
Local Field Potentials (LFPs) integrate multiple neuronal events like synaptic inputs and intracellular potentials. LFP spatiotemporal features are particularly relevant in view of their applications both in research (e.g. for understanding brain rhythms, inter-areal neural communication and neuronal coding) and in the clinic (e.g. for improving invasive Brain-Machine Interface devices). However, the relation between LFPs and spikes is complex and not fully understood. As spikes represent the fundamental currency of neuronal communication, this gap in knowledge strongly limits our comprehension of neuronal phenomena underlying LFPs. We investigated the LFP-spike relation during tactile stimulation in primary somatosensory (S-I) cortex in the rat. First we quantified how reliably LFPs and spikes code for a stimulus occurrence. Then we used the information obtained from our analyses to design a predictive model for spike occurrence based on LFP inputs. The model was endowed with a flexible meta-structure whose exact form, both in parameters and structure, was estimated by using a multi-objective optimization strategy. Our method provided a set of nonlinear simple equations that maximized the match between models and true neurons in terms of spike timings and Peri Stimulus Time Histograms. We found that both LFPs and spikes can code for stimulus occurrence with millisecond precision, showing, however, high variability. Spike patterns were predicted significantly above chance for 75% of the neurons analysed. Crucially, the level of prediction accuracy depended on the reliability in coding for the stimulus occurrence. The best predictions were obtained when both spikes and LFPs were highly responsive to the stimuli. Spike reliability is known to depend on neuron intrinsic properties (i.e. on channel noise) and on spontaneous local network fluctuations. Our results suggest that the latter, measured through the LFP response variability, play a dominant role. PMID:22586452
Emergent dynamics of spiking neurons with fluctuating threshold
Bhattacharjee, Anindita; Das, M. K.
2017-05-01
The role of a fluctuating threshold in neuronal dynamics is investigated. The threshold is assumed to follow a normal probability distribution. The standard deviation of the inter-spike intervals of the response is computed as an indicator of irregularity in spike emission. We observe that spiking irregularity increases with threshold variability. A significant change in the modal characteristics of the inter-spike intervals (ISIs) occurs as a function of the fluctuation parameter. The investigation is further carried out for coupled systems of neurons. Cooperative dynamics of coupled neurons are discussed in view of synchronization. Total and partial synchronization regimes are depicted with the help of contour plots of a synchrony measure under various conditions. Results of this investigation may provide a basis for exploring the complexities of neural communication and brain functioning.
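The central observation, that a wider threshold distribution yields more irregular spiking, can be reproduced with a minimal leaky integrate-and-fire sketch in which the threshold is redrawn from a normal distribution after every spike. All parameters are illustrative assumptions, not taken from the paper:

```python
import math
import random
import statistics

def isi_std(theta_sigma, n_spikes=2000, seed=1):
    """Std. dev. of inter-spike intervals for a leaky integrate-and-fire
    neuron whose threshold is drawn from N(theta0, theta_sigma) per spike."""
    rng = random.Random(seed)
    tau_m = 10.0                 # membrane time constant (ms)
    v_reset, v_inf = 0.0, 30.0   # reset and asymptotic voltage (mV)
    theta0 = 20.0                # mean threshold (mV)
    isis = []
    for _ in range(n_spikes):
        theta = rng.gauss(theta0, theta_sigma)
        theta = min(max(theta, v_reset + 1.0), v_inf - 1.0)  # keep reachable
        # Closed-form time for V(t) = v_inf + (v_reset - v_inf)*exp(-t/tau_m)
        # to reach theta:
        isis.append(tau_m * math.log((v_inf - v_reset) / (v_inf - theta)))
    return statistics.pstdev(isis)

print(isi_std(0.0), isi_std(3.0))  # irregularity grows with threshold spread
```

With a fixed threshold the ISI standard deviation is zero; widening the threshold distribution directly widens the ISI distribution, mirroring the paper's indicator of irregularity.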
Memristors Empower Spiking Neurons With Stochasticity
Al-Shedivat, Maruan; Naous, Rawan; Cauwenberghs, Gert; Salama, Khaled N.
2015-01-01
Recent theoretical studies have shown that probabilistic spiking can be interpreted as learning and inference in cortical microcircuits. This interpretation creates new opportunities for building neuromorphic systems driven by probabilistic learning
Grewe, Jan; Kruscha, Alexandra; Lindner, Benjamin; Benda, Jan
2017-03-07
Synchronous activity in populations of neurons can potentially encode special stimulus features. Selective readout of either synchronous or asynchronous activity allows the formation of two streams of information processing. Theoretical work predicts that such a synchrony code is a fundamental feature of populations of spiking neurons if they operate in specific noise and stimulus regimes. Here we experimentally test the theoretical predictions by quantifying and comparing neuronal response properties in tuberous and ampullary electroreceptor afferents of the weakly electric fish Apteronotus leptorhynchus. These related systems show similar levels of synchronous activity, but a synchrony code is established only in the more irregularly firing tuberous afferents, not in the more regularly firing ampullary afferents. The mere existence of synchronous activity is thus not sufficient for a synchrony code. Single-cell features such as the irregularity of spiking and the frequency dependence of the neuron's transfer function determine whether synchronous spikes possess a distinct meaning for the encoding of time-dependent signals.
Spiking irregularity and frequency modulate the behavioral report of single-neuron stimulation.
Doron, Guy; von Heimendahl, Moritz; Schlattmann, Peter; Houweling, Arthur R; Brecht, Michael
2014-02-05
The action potential activity of single cortical neurons can evoke measurable sensory effects, but it is not known how spiking parameters and neuronal subtypes affect the evoked sensations. Here, we examined the effects of spike train irregularity, spike frequency, and spike number on the detectability of single-neuron stimulation in rat somatosensory cortex. For regular-spiking, putative excitatory neurons, detectability increased with spike train irregularity and decreasing spike frequencies but was not affected by spike number. Stimulation of single, fast-spiking, putative inhibitory neurons led to a larger sensory effect compared to regular-spiking neurons, and the effect size depended only on spike irregularity. An ideal-observer analysis suggests that, under our experimental conditions, rats were using integration windows of a few hundred milliseconds or more. Our data imply that the behaving animal is sensitive to single neurons' spikes and even to their temporal patterning. Copyright © 2014 Elsevier Inc. All rights reserved.
Time Resolution Dependence of Information Measures for Spiking Neurons: Scaling and Universality
Directory of Open Access Journals (Sweden)
James P Crutchfield
2015-08-01
Full Text Available The mutual information between stimulus and spike-train response is commonly used to monitor neural coding efficiency, but neuronal computation broadly conceived requires more refined and targeted information measures of input-output joint processes. A first step towards that larger goal is to develop information measures for individual output processes, including information generation (entropy rate), stored information (statistical complexity), predictable information (excess entropy), and active information accumulation (bound information rate). We calculate these for spike trains generated by a variety of noise-driven integrate-and-fire neurons as a function of time resolution and for alternating renewal processes. We show that their time-resolution dependence reveals coarse-grained structural properties of interspike interval statistics; e.g., τ-entropy rates that diverge less quickly than the firing rate indicate interspike interval correlations. We also find evidence that the excess entropy and regularized statistical complexity of different types of integrate-and-fire neurons are universal in the continuous-time limit in the sense that they do not depend on mechanism details. This suggests a surprising simplicity in the spike trains generated by these model neurons. Interestingly, neurons with gamma-distributed ISIs and neurons whose spike trains are alternating renewal processes do not fall into the same universality class. These results lead to two conclusions. First, the dependence of information measures on time resolution reveals mechanistic details about spike train generation. Second, information measures can be used as model selection tools for analyzing spike train processes.
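The notion of a resolution-dependent entropy rate can be illustrated for the simplest case, a Poisson spike train: binning the train at resolution τ gives, to first order, a Bernoulli process whose entropy rate per unit time grows as τ shrinks. The sketch below (firing rate and bin sizes are arbitrary choices, not the paper's) compares an empirical estimate against the binary-entropy closed form:

```python
import math
import random

def h2(p):
    """Binary entropy in bits."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

rng = random.Random(0)
rate = 20.0          # spikes/s
T = 200.0            # seconds of simulated Poisson spiking
spikes, t = [], 0.0
while True:
    t += rng.expovariate(rate)
    if t > T:
        break
    spikes.append(t)

tau_entropy = {}
for tau in (0.008, 0.004, 0.002, 0.001):   # bin sizes in seconds
    n_bins = int(T / tau)
    occupied = len({int(s / tau) for s in spikes})
    p = occupied / n_bins                  # empirical spike-in-bin probability
    tau_entropy[tau] = h2(p) / tau         # bits per second at resolution tau

# The tau-entropy rate increases monotonically as the resolution gets finer.
print(tau_entropy)
```

For a Poisson process this divergence as τ → 0 is the baseline against which the abstract's statement about slower-than-firing-rate divergence (indicating ISI correlations) is read.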
Neuronal spike sorting based on radial basis function neural networks
Directory of Open Access Journals (Sweden)
Taghavi Kani M
2011-02-01
Full Text Available Background: Studying the behavior of a society of neurons, extracting the communication mechanisms of the brain with other tissues, finding treatments for some nervous system diseases and designing neuroprosthetic devices all require an algorithm to sort neural spikes automatically. However, sorting neural spikes is a challenging task because of their low signal-to-noise ratio (SNR). The main purpose of this study was to design an automatic algorithm for classifying neuronal spikes that are emitted from a specific region of the nervous system. Methods: The spike sorting process usually consists of three stages: detection, feature extraction and sorting. We initially used signal statistics to detect neural spikes. Then, we chose a limited number of typical spikes as features and finally used them to train a radial basis function (RBF) neural network to sort the spikes. In most spike sorting devices, these signals are not linearly discriminable; the RBF neural network was used to solve this problem. Results: After the learning process, our proposed algorithm classified any arbitrary spike. The obtained results showed that even though the proposed Radial Basis Spike Sorter (RBSS) reached the same error rate as previous methods, its computational costs were much lower. Moreover, the competitive points of the proposed algorithm were its good speed and low computational complexity. Conclusion: Regarding the results of this study, the proposed algorithm seems to serve the purpose of procedures that require real-time processing and spike sorting.
Guo, Lilin; Wang, Zhenzhong; Cabrerizo, Mercedes; Adjouadi, Malek
2017-05-01
This study introduces a novel learning algorithm for spiking neurons, called CCDS, which is able to learn and reproduce arbitrary spike patterns in a supervised fashion, allowing the processing of spatiotemporal information encoded in the precise timing of spikes. Unlike the Remote Supervised Method (ReSuMe), synaptic delays and axonal delays in CCDS are variable and are modulated together with the weights during learning. The CCDS rule is both biologically plausible and computationally efficient. The properties of this learning rule are investigated extensively through experimental evaluations in terms of reliability, adaptive learning performance, generality to different neuron models, learning in the presence of noise, effects of its learning parameters and classification performance. Results presented show that the CCDS learning method achieves learning accuracy and learning speed comparable with ReSuMe, but improves classification accuracy when compared to both the Spike Pattern Association Neuron (SPAN) learning rule and the Tempotron learning rule. The merit of the CCDS rule is further validated on a practical example involving the automated detection of interictal spikes in EEG records of patients with epilepsy. Results again show that with proper encoding, the CCDS rule achieves good recognition performance.
J. Luthman (Johannes); F.E. Hoebeek (Freek); R. Maex (Reinoud); N. Davey (Neil); R. Adams (Rod); C.I. de Zeeuw (Chris); V. Steuber (Volker)
2011-01-01
Neurons in the cerebellar nuclei (CN) receive inhibitory inputs from Purkinje cells in the cerebellar cortex and provide the major output from the cerebellum, but their computational function is not well understood. It has recently been shown that the spike activity of Purkinje cells is
Supervised learning with decision margins in pools of spiking neurons.
Le Mouel, Charlotte; Harris, Kenneth D; Yger, Pierre
2014-10-01
Learning to categorise sensory inputs by generalising from a few examples whose category is precisely known is a crucial step for the brain to produce appropriate behavioural responses. At the neuronal level, this may be performed by adaptation of synaptic weights under the influence of a training signal, in order to group spiking patterns impinging on the neuron. Here we describe a framework that allows spiking neurons to perform such "supervised learning", using principles similar to the Support Vector Machine, a well-established and robust classifier. Using a hinge-loss error function, we show that requesting a margin similar to that of the SVM improves performance on linearly non-separable problems. Moreover, we show that using pools of neurons to discriminate categories can also increase the performance by sharing the load among neurons.
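The SVM-flavoured idea, training with a hinge loss that also penalizes correct but low-margin responses, can be sketched at the level of a single linear readout. The spiking machinery of the paper is abstracted away here, and the data and parameters are made up for illustration:

```python
import random

def train_margin_perceptron(data, margin=1.0, lr=0.1, epochs=50, seed=0):
    """Hinge-loss training: update whenever y * (w.x + b) < margin,
    i.e. also on correct answers that fall inside the decision margin."""
    rng = random.Random(seed)
    dim = len(data[0][0])
    w = [0.0] * dim
    b = 0.0
    for _ in range(epochs):
        rng.shuffle(data)
        for x, y in data:
            score = sum(wi * xi for wi, xi in zip(w, x)) + b
            if y * score < margin:          # hinge loss is non-zero
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

# Two toy input patterns per class (e.g. summarised synaptic activations).
data = [([2.0, 1.0], +1), ([1.5, 2.0], +1),
        ([-1.0, -2.0], -1), ([-2.0, -0.5], -1)]
w, b = train_margin_perceptron(list(data))
preds = [1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1
         for x, _ in data]
print(preds)   # matches the labels on this separable toy set
```

Requiring `y * score >= margin` rather than mere correctness is the ingredient the abstract credits for better performance on hard, non-separable problems.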
Span: spike pattern association neuron for learning spatio-temporal spike patterns.
Mohemmed, Ammar; Schliebs, Stefan; Matsuda, Satoshi; Kasabov, Nikola
2012-08-01
Spiking Neural Networks (SNN) were shown to be suitable tools for the processing of spatio-temporal information. However, due to their inherent complexity, the formulation of efficient supervised learning algorithms for SNN is difficult and remains an important problem in the research area. This article presents SPAN - a spiking neuron that is able to learn associations of arbitrary spike trains in a supervised fashion allowing the processing of spatio-temporal information encoded in the precise timing of spikes. The idea of the proposed algorithm is to transform spike trains during the learning phase into analog signals so that common mathematical operations can be performed on them. Using this conversion, it is possible to apply the well-known Widrow-Hoff rule directly to the transformed spike trains in order to adjust the synaptic weights and to achieve a desired input/output spike behavior of the neuron. In the presented experimental analysis, the proposed learning algorithm is evaluated regarding its learning capabilities, its memory capacity, its robustness to noisy stimuli and its classification performance. Differences and similarities of SPAN regarding two related algorithms, ReSuMe and Chronotron, are discussed.
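The core trick described here, kernel-convolving spike trains so that the Widrow-Hoff (delta) rule can be applied, can be sketched with an exponential kernel and a purely linear readout. This is a simplification of SPAN's spiking neuron; the kernel, spike times and learning rate below are arbitrary choices:

```python
import math

def convolve_spikes(spike_times, t_grid, tau=5.0):
    """Turn a spike train into an analog trace: a sum of causal
    exponential kernels exp(-(t - t_s)/tau)."""
    return [sum(math.exp(-(t - s) / tau) for s in spike_times if s <= t)
            for t in t_grid]

t_grid = [0.5 * k for k in range(200)]          # 0-100 ms in 0.5 ms steps
inputs = [[10.0, 40.0, 70.0], [20.0, 50.0], [5.0, 55.0, 80.0]]
target_spikes = [30.0, 60.0]

x = [convolve_spikes(s, t_grid) for s in inputs]    # analog input traces
y_d = convolve_spikes(target_spikes, t_grid)        # analog target trace

w = [0.0] * len(inputs)
lr = 0.1
errors = []
for epoch in range(100):
    # Linear readout of the convolved inputs.
    y_o = [sum(w[i] * x[i][k] for i in range(len(w)))
           for k in range(len(t_grid))]
    # Widrow-Hoff: dw_i proportional to sum_t x_i(t) * (y_d(t) - y_o(t)).
    for i in range(len(w)):
        w[i] += lr * sum(x[i][k] * (y_d[k] - y_o[k])
                         for k in range(len(t_grid))) / len(t_grid)
    errors.append(sum((a - b) ** 2 for a, b in zip(y_d, y_o)))

print(errors[0] > errors[-1])   # the delta rule reduces the matching error
```

Once spike trains live in this analog domain, the whole toolbox of LMS-style learning applies, which is exactly the conversion the SPAN abstract describes.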
Response of spiking neurons to correlated inputs
International Nuclear Information System (INIS)
Moreno, Ruben; Rocha, Jaime de la; Renart, Alfonso; Parga, Nestor
2002-01-01
The effect of a temporally correlated afferent current on the firing rate of a leaky integrate-and-fire neuron is studied. This current is characterized in terms of rates, autocorrelations, cross-correlations, and the correlation time scale τ_c of excitatory and inhibitory inputs. The output rate ν_out is calculated in the Fokker-Planck formalism in the limits of both small and large τ_c compared to the membrane time constant τ of the neuron. By simulations we check the analytical results, provide an interpolation valid for all τ_c, and study the neuron's response to rapid changes in the correlation magnitude.
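A minimal numerical counterpart of this setup, not the Fokker-Planck analysis of the paper, is to drive a leaky integrate-and-fire neuron with an Ornstein-Uhlenbeck current of correlation time τ_c and check the input statistics and output rate directly. All parameters are illustrative assumptions:

```python
import math
import random

rng = random.Random(7)
dt, tau_m, tau_c = 0.1, 10.0, 5.0     # ms
v_rest, v_thr, v_reset = 0.0, 20.0, 0.0
mu, sigma = 25.0, 8.0                 # mean drive and noise amplitude

V, I = v_rest, 0.0
samples, spikes = [], 0
T = 20000.0                           # total simulated time (ms)
t = 0.0
while t < T:
    # Ornstein-Uhlenbeck input: exponentially correlated, time scale tau_c.
    I += (-I / tau_c) * dt + sigma * math.sqrt(2.0 * dt / tau_c) * rng.gauss(0, 1)
    samples.append(I)
    V += dt * (-(V - v_rest) + mu + I) / tau_m
    if V >= v_thr:
        V = v_reset
        spikes += 1
    t += dt

# Lag-1 autocorrelation of the input should be close to exp(-dt/tau_c).
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
ac1 = sum((samples[k] - mean) * (samples[k + 1] - mean)
          for k in range(len(samples) - 1)) / (len(samples) * var)
rate = 1000.0 * spikes / T            # output rate in spikes/s
print(ac1, rate)
```

Sweeping τ_c in such a simulation is the numerical check the abstract refers to for validating the small- and large-τ_c analytical limits.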
International Nuclear Information System (INIS)
Shimazaki, Hideaki
2013-01-01
Neurons in cortical circuits exhibit coordinated spiking activity, and can produce correlated synchronous spikes during behavior and cognition. We recently developed a method for estimating the dynamics of correlated ensemble activity by combining a model of simultaneous neuronal interactions (e.g., a spin-glass model) with a state-space method (Shimazaki et al. 2012 PLoS Comput Biol 8 e1002385). This method allows us to estimate stimulus-evoked dynamics of neuronal interactions that are reproducible in repeated trials under identical experimental conditions. However, the method may not be suitable for detecting stimulus responses if the neuronal dynamics exhibits significant variability across trials. In addition, the previous model does not include effects of past spiking activity of the neurons on the current state of ensemble activity. In this study, we develop a parametric method for simultaneously estimating the stimulus and spike-history effects on the ensemble activity from single-trial data, even if the neurons exhibit dynamics that is largely unrelated to these effects. To this end, we model ensemble neuronal activity as a latent process and include the stimulus and spike-history effects as exogenous inputs to the latent process. We develop an expectation-maximization algorithm that simultaneously achieves estimation of the latent process, stimulus responses, and spike-history effects. The proposed method is useful for analyzing interactions of internal cortical states and sensory-evoked activity.
Engelken, Rainer; Farkhooi, Farzad; Hansel, David; van Vreeswijk, Carl; Wolf, Fred
2016-01-01
Neuronal activity in the central nervous system varies strongly in time and across neuronal populations. It is a longstanding proposal that such fluctuations generically arise from chaotic network dynamics. Various theoretical studies predict that the rich dynamics of rate models operating in the chaotic regime can subserve circuit computation and learning. Neurons in the brain, however, communicate via spikes and it is a theoretical challenge to obtain similar rate fluctuations in networks of spiking neuron models. A recent study investigated spiking balanced networks of leaky integrate and fire (LIF) neurons and compared their dynamics to a matched rate network with identical topology, where single unit input-output functions were chosen from isolated LIF neurons receiving Gaussian white noise input. A mathematical analogy between the chaotic instability in networks of rate units and the spiking network dynamics was proposed. Here we revisit the behavior of the spiking LIF networks and these matched rate networks. We find expected hallmarks of a chaotic instability in the rate network: For supercritical coupling strength near the transition point, the autocorrelation time diverges. For subcritical coupling strengths, we observe critical slowing down in response to small external perturbations. In the spiking network, we found in contrast that the timescale of the autocorrelations is insensitive to the coupling strength and that rate deviations resulting from small input perturbations rapidly decay. The decay speed even accelerates for increasing coupling strength. In conclusion, our reanalysis demonstrates fundamental differences between the behavior of pulse-coupled spiking LIF networks and rate networks with matched topology and input-output function. In particular there is no indication of a corresponding chaotic instability in the spiking network.
Superficial dorsal horn neurons with double spike activity in the rat.
Rojas-Piloni, Gerardo; Dickenson, Anthony H; Condés-Lara, Miguel
2007-05-29
Superficial dorsal horn neurons promote the transfer of nociceptive information from the periphery to supraspinal structures. The membrane and discharge properties of spinal cord neurons can alter the reliability of peripheral signals. In this paper, we analyze the location and response properties of a particular class of dorsal horn neurons that exhibits double spike discharge with a very short interspike interval (2.01+/-0.11 ms). These neurons receive nociceptive C-fiber input and are located in laminae I-II. Double spikes are generated spontaneously or by depolarizing current injection (interval of 2.37+/-0.22 ms). Cells presenting double spikes (interval 2.28+/-0.11 ms) increased their firing rate upon noxious electrical stimulation, as well as in the first minutes after carrageenan injection into their receptive field. Carrageenan is a water-soluble polysaccharide used to produce an experimental model of semi-chronic pain. In the present study, carrageenan also produced an increase in the interval between double spikes and then reduced their occurrence after 5-10 min. The results suggest that double spikes are due to intrinsic membrane properties and that their frequency is related to C-fiber nociceptive activity. The present work provides evidence that double spikes in superficial spinal cord neurons are related to nociceptive stimulation and are possibly part of an acute pain-control mechanism.
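Doublets of the kind reported here (inter-spike intervals around 2 ms) are straightforward to flag in a recorded spike train by scanning successive intervals against a cutoff. The sketch below uses made-up spike times and a 3 ms cutoff chosen for illustration:

```python
def find_double_spikes(spike_times_ms, max_isi=3.0):
    """Return (first spike time, interval) for each pair of consecutive
    spikes whose interval does not exceed max_isi."""
    doublets = []
    for a, b in zip(spike_times_ms, spike_times_ms[1:]):
        if b - a <= max_isi:
            doublets.append((a, b - a))
    return doublets

spikes = [5.0, 7.0, 20.0, 40.0, 42.5, 60.0]
print(find_double_spikes(spikes))   # -> [(5.0, 2.0), (40.0, 2.5)]
```

Tracking how the detected intervals shift over time would be the analysis step matching the carrageenan observations in the abstract.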
Nicotine-Mediated ADP to Spike Transition: Double Spiking in Septal Neurons.
Kodirov, Sodikdjon A; Wehrmeister, Michael; Colom, Luis
2016-04-01
The majority of neurons in the lateral septum (LS) are electrically silent at resting membrane potential. Nicotine transiently excites a subset of neurons and occasionally leads to long-lasting bursting activity upon longer applications. We observed simultaneous changes in the frequencies and amplitudes of spontaneous action potentials (APs) in the presence of nicotine. During prolonged exposure, nicotine increased the number of spikes within a burst. One of the hallmarks of nicotine's effects was the occurrence of double spikes (also known as bursting). Alignment of 51 spontaneous spikes, triggered upon continuous application of nicotine, revealed that the slope of the after-depolarizing potential gradually increased (1.4 vs. 3 mV/ms) and the neuron fired a second AP, termed double spiking. A transition from a single AP to double spikes increased the amplitude of the after-hyperpolarizing potential. The amplitude of the second (premature) AP was smaller than that of the first, and the same held for its duration (half-width). To our knowledge, similar bursting activity in the presence of nicotine has not been reported previously in the septal structure in general, or in the LS in particular.
Detection of bursts in neuronal spike trains by the mean inter-spike interval method
Institute of Scientific and Technical Information of China (English)
Lin Chen; Yong Deng; Weihua Luo; Zhen Wang; Shaoqun Zeng
2009-01-01
Bursts are electrical spikes fired at high frequency, and they are among the most important properties in synaptic plasticity and information processing in the central nervous system. However, bursts are difficult to identify because bursting activities or patterns vary with physiological conditions or external stimuli. In this paper, a simple method to automatically detect bursts in spike trains is described. This method auto-adaptively sets a parameter (the mean inter-spike interval) according to intrinsic properties of the detected burst spike trains, without any arbitrary choices or operator judgment. When the mean value of several successive inter-spike intervals is not larger than this parameter, a burst is identified. With this method, bursts can be automatically extracted from different bursting patterns of cultured neurons on multi-electrode arrays, as accurately as by visual inspection. Furthermore, significant changes of burst variables caused by electrical stimuli have been found in the spontaneous activity of a neuronal network. These results suggest that the mean inter-spike interval method is robust for detecting changes in burst patterns and characteristics induced by environmental alterations.
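A minimal reading of the method, with the global mean ISI as the auto-adaptive cutoff and a burst defined as a run of successive ISIs not larger than it, can be sketched as follows. The exact windowing of the original method may differ, and the spike train here is invented:

```python
def detect_bursts(spike_times, min_spikes=3):
    """Group spikes into bursts: maximal runs of consecutive inter-spike
    intervals that do not exceed the train's mean ISI."""
    isis = [b - a for a, b in zip(spike_times, spike_times[1:])]
    mean_isi = sum(isis) / len(isis)       # auto-adaptive threshold
    bursts, current = [], [spike_times[0]]
    for isi, nxt in zip(isis, spike_times[1:]):
        if isi <= mean_isi:
            current.append(nxt)
        else:
            if len(current) >= min_spikes:
                bursts.append(current)
            current = [nxt]
    if len(current) >= min_spikes:
        bursts.append(current)
    return bursts

# Three bursts (1 ms spacing) separated by 50 ms silent gaps.
train = [0, 1, 2, 3, 53, 54, 55, 105, 106, 107, 108]
print([len(b) for b in detect_bursts(train)])   # -> [4, 3, 4]
```

Because the threshold is derived from the train itself, the same code adapts to slow and fast bursting patterns without manual tuning, which is the point the abstract emphasizes.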
Afshar, Saeed; George, Libin; Tapson, Jonathan; van Schaik, André; Hamilton, Tara J
2014-01-01
This paper describes the Synapto-dendritic Kernel Adapting Neuron (SKAN), a simple spiking neuron model that performs statistical inference and unsupervised learning of spatiotemporal spike patterns. SKAN is the first proposed neuron model to investigate the effects of dynamic synapto-dendritic kernels and demonstrate their computational power even at the single-neuron scale. The rule-set defining the neuron is simple: there are no complex mathematical operations such as normalization, exponentiation or even multiplication. The functionalities of SKAN emerge from the real-time interaction of simple additive and binary processes. Like a biological neuron, SKAN is robust to signal and parameter noise, and can utilize both in its operations. At the network scale, neurons are locked in a race with each other, with the fastest neuron to spike effectively "hiding" its learnt pattern from its neighbors. The robustness to noise, high speed, and simple building blocks not only make SKAN an interesting neuron model in computational neuroscience, but also make it ideal for implementation in digital and analog neuromorphic systems, as demonstrated through an implementation in a Field Programmable Gate Array (FPGA). Matlab, Python, and Verilog implementations of SKAN are available at: http://www.uws.edu.au/bioelectronics_neuroscience/bens/reproducible_research.
Deep Learning with Dynamic Spiking Neurons and Fixed Feedback Weights.
Samadi, Arash; Lillicrap, Timothy P; Tweed, Douglas B
2017-03-01
Recent work in computer science has shown the power of deep learning driven by the backpropagation algorithm in networks of artificial neurons. But real neurons in the brain are different from most of these artificial ones in at least three crucial ways: they emit spikes rather than graded outputs, their inputs and outputs are related dynamically rather than by piecewise-smooth functions, and they have no known way to coordinate arrays of synapses in separate forward and feedback pathways so that they change simultaneously and identically, as they do in backpropagation. Given these differences, it is unlikely that current deep learning algorithms can operate in the brain, but we show that these problems can be solved by two simple devices: learning rules can approximate dynamic input-output relations with piecewise-smooth functions, and a variation on the feedback alignment algorithm can train deep networks without having to coordinate forward and feedback synapses. Our results also show that deep spiking networks learn much better if each neuron computes an intracellular teaching signal that reflects that cell's nonlinearity. With this mechanism, networks of spiking neurons show useful learning in synapses at least nine layers upstream from the output cells and perform well compared to other spiking networks in the literature on the MNIST digit recognition task.
Consensus-Based Sorting of Neuronal Spike Waveforms.
Fournier, Julien; Mueller, Christian M; Shein-Idelson, Mark; Hemberger, Mike; Laurent, Gilles
2016-01-01
Optimizing spike-sorting algorithms is difficult because sorted clusters can rarely be checked against independently obtained "ground truth" data. In most spike-sorting algorithms in use today, the optimality of a clustering solution is assessed relative to some assumption on the distribution of the spike shapes associated with a particular single unit (e.g., Gaussianity) and by visual inspection of the clustering solution followed by manual validation. When the spatiotemporal waveforms of spikes from different cells overlap, the decision as to whether two spikes should be assigned to the same source can be quite subjective, if it is not based on reliable quantitative measures. We propose a new approach, whereby spike clusters are identified from the most consensual partition across an ensemble of clustering solutions. Using the variability of the clustering solutions across successive iterations of the same clustering algorithm (template matching based on K-means clusters), we estimate the probability of spikes being clustered together and identify groups of spikes that are not statistically distinguishable from one another. Thus, we identify spikes that are most likely to be clustered together and therefore correspond to consistent spike clusters. This method has the potential advantage that it does not rely on any model of the spike shapes. It also provides estimates of the proportion of misclassified spikes for each of the identified clusters. We tested our algorithm on several datasets for which there exists a ground truth (simultaneous intracellular data), and show that it performs close to the optimum reached by a support vector machine trained on the ground truth. We also show that the estimated rate of misclassification matches the proportion of misclassified spikes measured from the ground truth data.
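The consensus step itself, turning an ensemble of clustering solutions into co-assignment probabilities and grouping spikes that almost always end up together, can be sketched independently of the clustering algorithm. Here the repeated clustering runs are stubbed with hand-made labelings, and the 0.9 threshold is an arbitrary illustrative choice:

```python
def consensus_groups(labelings, threshold=0.9):
    """Group items whose probability of being co-clustered across runs
    is at least `threshold` (greedy union-find over high-probability pairs)."""
    n = len(labelings[0])

    def p(i, j):
        # Co-assignment probability: fraction of runs with matching labels.
        return sum(run[i] == run[j] for run in labelings) / len(labelings)

    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if p(i, j) >= threshold:
                parent[find(j)] = find(i)
    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return sorted(groups.values())

# Five clustering runs over six spikes; label values are arbitrary per run.
runs = [[0, 0, 0, 1, 1, 0],
        [0, 0, 0, 1, 1, 1],
        [1, 1, 1, 0, 0, 1],
        [0, 0, 0, 1, 1, 0],
        [0, 0, 0, 1, 1, 1]]
print(consensus_groups(runs))   # -> [[0, 1, 2], [3, 4], [5]]
```

Spike 5 flips between clusters across runs, so it falls out of both consensus groups, illustrating how the method exposes spikes whose assignment is not statistically stable.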
Inference of neuronal network spike dynamics and topology from calcium imaging data
Directory of Open Access Journals (Sweden)
Henry eLütcke
2013-12-01
Full Text Available Two-photon calcium imaging enables functional analysis of neuronal circuits by inferring action potential (AP) occurrence ('spike trains') from cellular fluorescence signals. It remains unclear how experimental parameters such as signal-to-noise ratio (SNR) and acquisition rate affect spike inference and whether additional information about network structure can be extracted. Here we present a simulation framework for quantitatively assessing how well spike dynamics and network topology can be inferred from noisy calcium imaging data. For simulated AP-evoked calcium transients in neocortical pyramidal cells, we analyzed the quality of spike inference as a function of SNR and data acquisition rate using a recently introduced peeling algorithm. Given experimentally attainable values of SNR and acquisition rate, neural spike trains could be reconstructed accurately and with up to millisecond precision. We then applied statistical neuronal network models to explore how remaining uncertainties in spike inference affect estimates of network connectivity and topological features of network organization. We define the experimental conditions suitable for inferring whether the network has a scale-free structure and determine how well hub neurons can be identified. Our findings provide a benchmark for future calcium imaging studies that aim to reliably infer neuronal network properties.
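The peeling idea, find the earliest transient, subtract a calcium-kernel template, and repeat on the residual, can be sketched in a noise-free toy setting. A single-exponential kernel with made-up amplitude and decay is assumed; the published algorithm additionally handles noise and more realistic indicator dynamics:

```python
import math

AMP, TAU = 1.0, 30.0     # transient amplitude and decay (in frames)

def transient(onset, n_frames):
    """Calcium transient template: instantaneous rise, exponential decay."""
    return [AMP * math.exp(-(t - onset) / TAU) if t >= onset else 0.0
            for t in range(n_frames)]

def peel(fluorescence, threshold=0.5):
    """Iteratively detect the earliest threshold crossing, subtract the
    template anchored there, and record it as an inferred spike."""
    residual = list(fluorescence)
    inferred = []
    while True:
        onset = next((t for t, f in enumerate(residual) if f >= threshold),
                     None)
        if onset is None:
            return inferred
        inferred.append(onset)
        for t, v in enumerate(transient(onset, len(residual))):
            residual[t] -= v

n = 500
true_spikes = [100, 300, 320]       # note the overlapping pair
trace = [0.0] * n
for s in true_spikes:
    for t, v in enumerate(transient(s, n)):
        trace[t] += v

print(peel(trace))    # -> [100, 300, 320]
```

Subtracting the first transient is what lets the algorithm resolve the second spike at frame 320 riding on the tail of the one at 300, the overlap case where simple thresholding of the raw trace fails.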
Rule, Michael E.; Vargas-Irwin, Carlos; Donoghue, John P.; Truccolo, Wilson
2015-01-01
Understanding the sources of variability in single-neuron spiking responses is an important open problem for the theory of neural coding. This variability is thought to result primarily from spontaneous collective dynamics in neuronal networks. Here, we investigate how well collective dynamics reflected in motor cortex local field potentials (LFPs) can account for spiking variability during motor behavior. Neural activity was recorded via microelectrode arrays implanted in ventral and dorsal premotor and primary motor cortices of non-human primates performing naturalistic 3-D reaching and grasping actions. Point process models were used to quantify how well LFP features accounted for spiking variability not explained by the measured 3-D reach and grasp kinematics. LFP features included the instantaneous magnitude, phase and analytic-signal components of narrow band-pass filtered (δ, θ, α, β) LFPs, and analytic-signal and amplitude-envelope features in higher-frequency bands. Multiband LFP features predicted single-neuron spiking (1 ms resolution) with substantial accuracy as assessed via ROC analysis. Notably, however, models including both LFP and kinematics features displayed only marginal improvement over kinematics-only models. Furthermore, the small predictive information added by LFP features to kinematic models was redundant to information available in fast-timescale spiking history. Overall, information in multiband LFP features, although predictive of single-neuron spiking during movement execution, was redundant to information available in movement parameters and spiking history. Our findings suggest that, during movement execution, collective dynamics reflected in motor cortex LFPs primarily relate to sensorimotor processes directly controlling movement output, adding little explanatory power to variability not accounted for by movement parameters. PMID:26157365
Simulating large-scale spiking neuronal networks with NEST
Schücker, Jannis; Eppler, Jochen Martin
2014-01-01
The Neural Simulation Tool NEST [1, www.nest-simulator.org] is the simulator for spiking neural network models of the HBP that focuses on the dynamics, size and structure of neural systems rather than on the exact morphology of individual neurons. Its simulation kernel is written in C++ and it runs on computing hardware ranging from simple laptops to clusters and supercomputers with thousands of processor cores. The development of NEST is coordinated by the NEST Initiative [www.nest-initiative.or...
From spiking neurons to brain waves
Visser, S.
2013-01-01
No single model would be able to capture all processes in the brain at once, since its interactions are too numerous and too complex. Therefore, it is common practice to simplify the parts of the system. Typically, the goal is to describe the collective action of many underlying processes, without
Schwalger, Tilo; Deger, Moritz; Gerstner, Wulfram
2017-04-01
Neural population equations such as neural mass or field models are widely used to study brain activity on a large scale. However, the relation of these models to the properties of single neurons is unclear. Here we derive an equation for several interacting populations at the mesoscopic scale starting from a microscopic model of randomly connected generalized integrate-and-fire neuron models. Each population consists of 50–2000 neurons of the same type but different populations account for different neuron types. The stochastic population equations that we find reveal how spike-history effects in single-neuron dynamics such as refractoriness and adaptation interact with finite-size fluctuations on the population level. Efficient integration of the stochastic mesoscopic equations reproduces the statistical behavior of the population activities obtained from microscopic simulations of a full spiking neural network model. The theory describes nonlinear emergent dynamics such as finite-size-induced stochastic transitions in multistable networks and synchronization in balanced networks of excitatory and inhibitory neurons. The mesoscopic equations are employed to rapidly integrate a model of a cortical microcircuit consisting of eight neuron types, which allows us to predict spontaneous population activities as well as evoked responses to thalamic input. Our theory establishes a general framework for modeling finite-size neural population dynamics based on single cell and synapse parameters and offers an efficient approach to analyzing cortical circuits and computations. PMID:28422957
Goal-Directed Decision Making with Spiking Neurons.
Friedrich, Johannes; Lengyel, Máté
2016-02-03
Behavioral and neuroscientific data on reward-based decision making point to a fundamental distinction between habitual and goal-directed action selection. The formation of habits, which requires simple updating of cached values, has been studied in great detail, and the reward prediction error theory of dopamine function has enjoyed prominent success in accounting for its neural bases. In contrast, the neural circuit mechanisms of goal-directed decision making, requiring extended iterative computations to estimate values online, are still unknown. Here we present a spiking neural network that provably solves the difficult online value estimation problem underlying goal-directed decision making in a near-optimal way and reproduces behavioral as well as neurophysiological experimental data on tasks ranging from simple binary choice to sequential decision making. Our model uses local plasticity rules to learn the synaptic weights of a simple neural network to achieve optimal performance and solves one-step decision-making tasks, commonly considered in neuroeconomics, as well as more challenging sequential decision-making tasks within 1 s. These decision times, and their parametric dependence on task parameters, as well as the final choice probabilities match behavioral data, whereas the evolution of neural activities in the network closely mimics neural responses recorded in frontal cortices during the execution of such tasks. Our theory provides a principled framework to understand the neural underpinning of goal-directed decision making and makes novel predictions for sequential decision-making tasks with multiple rewards. Goal-directed actions requiring prospective planning pervade decision making, but their circuit-level mechanisms remain elusive. We show how a model circuit of biologically realistic spiking neurons can solve this computationally challenging problem in a novel way. The synaptic weights of our network can be learned using local plasticity rules
Coincidence Detection Using Spiking Neurons with Application to Face Recognition
Directory of Open Access Journals (Sweden)
Fadhlan Kamaruzaman
2015-01-01
Full Text Available We elucidate the practical implementation of a Spiking Neural Network (SNN) as local ensembles of classifiers. The synaptic time constant τs is used as a learning parameter representing the variations learned from a set of training data at the classifier level. This classifier uses a coincidence detection (CD) strategy trained in a supervised manner using a novel supervised learning method called τs Prediction, which adjusts the precise timing of output spikes towards the desired spike timing through iterative adaptation of τs. This paper also discusses the approximation of spike timing in the Spike Response Model (SRM) for the purpose of coincidence detection. This process significantly speeds up the whole process of learning and classification. Performance evaluations with face datasets such as the AR, FERET, JAFFE, and CK+ datasets show that the proposed method delivers better face classification performance than a network trained with Supervised Synaptic-Time Dependent Plasticity (STDP). We also found that the proposed method delivers better classification accuracy than k-nearest neighbor, ensembles of kNN, and Support Vector Machines. Evaluation on several types of spike codings also reveals that latency coding delivers the best result for face classification as well as for classification of other multivariate datasets.
Kayama, Tasuku; Suzuki, Ikuro; Odawara, Aoi; Sasaki, Takuya; Ikegaya, Yuji
2018-01-01
In culture conditions, human induced pluripotent stem cell (hiPSC)-derived neurons form synaptic connections with other cells and establish neuronal networks, which are expected to be an in vitro model system for drug discovery screening and toxicity testing. While early studies demonstrated effects of co-culture of hiPSC-derived neurons with astroglial cells on survival and maturation of hiPSC-derived neurons, the population spiking patterns of such hiPSC-derived neurons have not been fully characterized. In this study, we analyzed temporal spiking patterns of hiPSC-derived neurons recorded by a multi-electrode array system. We discovered that specific sets of hiPSC-derived neurons co-cultured with astrocytes showed more frequent and highly coherent non-random synchronized spike trains and more dynamic changes in overall spike patterns over time. These temporally coordinated spiking patterns are physiological signs of organized circuits of hiPSC-derived neurons and suggest benefits of co-culture of hiPSC-derived neurons with astrocytes. Copyright © 2017 Elsevier Inc. All rights reserved.
Spiking, Bursting, and Population Dynamics in a Network of Growth Transform Neurons.
Gangopadhyay, Ahana; Chakrabartty, Shantanu
2017-04-27
This paper investigates the dynamical properties of a network of neurons, each of which implements an asynchronous mapping based on polynomial growth transforms. In the first part of this paper, we present a geometric approach for visualizing the dynamics of the network where each of the neurons traverses a trajectory in a dual optimization space, whereas the network itself traverses a trajectory in an equivalent primal optimization space. We show that as the network learns to solve basic classification tasks, different choices of primal-dual mapping produce unique but interpretable neural dynamics like noise shaping, spiking, and bursting. While the proposed framework is general enough, in this paper, we demonstrate its use for designing support vector machines (SVMs) that exhibit noise-shaping properties similar to those of ΣΔ modulators, and for designing SVMs that learn to encode information using spikes and bursts. It is demonstrated that the emergent switching, spiking, and burst dynamics produced by each neuron encodes its respective margin of separation from a classification hyperplane whose parameters are encoded by the network population dynamics. We believe that the proposed growth transform neuron model and the underlying geometric framework could serve as an important tool to connect well-established machine learning algorithms like SVMs to neuromorphic principles like spiking, bursting, population encoding, and noise shaping.
Does spike-timing-dependent synaptic plasticity couple or decouple neurons firing in synchrony?
Directory of Open Access Journals (Sweden)
Andreas Knoblauch
2012-08-01
Full Text Available Spike synchronization is thought to have a constructive role for feature integration, attention, associative learning, and the formation of bidirectionally connected Hebbian cell assemblies. By contrast, theoretical studies on spike-timing-dependent plasticity (STDP) report an inherently decoupling influence of spike synchronization on synaptic connections of coactivated neurons. For example, bidirectional synaptic connections as found in cortical areas could be reproduced only by assuming realistic models of STDP and rate coding. We resolve this conflict by theoretical analysis and simulation of various simple and realistic STDP models that provide a more complete characterization of conditions when STDP leads to either coupling or decoupling of neurons firing in synchrony. In particular, we show that STDP consistently couples synchronized neurons if key model parameters are matched to physiological data: First, synaptic potentiation must be significantly stronger than synaptic depression for small (positive or negative) time lags between presynaptic and postsynaptic spikes. Second, spike synchronization must be sufficiently imprecise, for example, within a time window of 5-10 msec instead of 1 msec. Third, axonal propagation delays should not be much larger than dendritic delays. Under these assumptions synchronized neurons will be strongly coupled, leading to a dominance of bidirectional synaptic connections even for simple STDP models and low mean firing rates at the level of spontaneous activity.
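The parameter regime described above — potentiation stronger than depression for small time lags — can be illustrated with a generic pair-wise additive STDP window (the amplitudes and time constants here are invented for the sketch, not taken from the paper):

```python
import math

# Pair-wise additive STDP window. Positive lag (post after pre) potentiates,
# negative lag depresses; amplitudes/time constants are illustrative only.
A_PLUS, A_MINUS = 0.012, 0.010     # potentiation stronger than depression
TAU_PLUS, TAU_MINUS = 20.0, 20.0   # window time constants (ms)

def stdp(dt_ms):
    """Weight change for one spike pair, dt_ms = t_post - t_pre."""
    if dt_ms >= 0:
        return A_PLUS * math.exp(-dt_ms / TAU_PLUS)
    return -A_MINUS * math.exp(dt_ms / TAU_MINUS)

# Imprecise synchrony: spike pairs spread over +/-8 ms. Because potentiation
# dominates at small lags, the net change summed over symmetric lags is
# positive, i.e. synchronously firing neurons couple rather than decouple.
net = sum(stdp(dt) + stdp(-dt) for dt in range(1, 9))
print(net > 0)  # True
```

Setting A_MINUS above A_PLUS flips the sign of the net drift, reproducing the decoupling regime the abstract contrasts against.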
Spike-Timing of Orbitofrontal Neurons Is Synchronized With Breathing.
Kőszeghy, Áron; Lasztóczi, Bálint; Forro, Thomas; Klausberger, Thomas
2018-01-01
The orbitofrontal cortex (OFC) has been implicated in a multiplicity of complex brain functions, including representations of expected outcome properties, post-decision confidence, momentary food-reward values, complex flavors and odors. As breathing rhythm has an influence on odor processing at primary olfactory areas, we tested the hypothesis that it may also influence neuronal activity in the OFC, a prefrontal area involved also in higher order processing of odors. We recorded spike timing of orbitofrontal neurons as well as local field potentials (LFPs) in awake, head-fixed mice, together with the breathing rhythm. We observed that a large majority of orbitofrontal neurons showed robust phase-coupling to breathing during immobility and running. The phase coupling of action potentials to breathing was significantly stronger in orbitofrontal neurons compared to cells in the medial prefrontal cortex. The characteristic synchronization of orbitofrontal neurons with breathing might provide a temporal framework for multi-variable processing of olfactory, gustatory and reward-value relationships.
Directory of Open Access Journals (Sweden)
Jan Hahne
2015-09-01
Full Text Available Contemporary simulators for networks of point and few-compartment model neurons come with a plethora of ready-to-use neuron and synapse models and support complex network topologies. Recent technological advancements have broadened the spectrum of application further to the efficient simulation of brain-scale networks on supercomputers. In distributed network simulations the amount of spike data that accrues per millisecond and process is typically low, such that a common optimization strategy is to communicate spikes at relatively long intervals, where the upper limit is given by the shortest synaptic transmission delay in the network. This approach is well-suited for simulations that employ only chemical synapses but it has so far impeded the incorporation of gap-junction models, which require instantaneous neuronal interactions. Here, we present a numerical algorithm based on a waveform-relaxation technique which allows for network simulations with gap junctions in a way that is compatible with the delayed communication strategy. Using a reference implementation in the NEST simulator, we demonstrate that the algorithm and the required data structures can be smoothly integrated with existing code such that they complement the infrastructure for spiking connections. To show that the unified framework for gap-junction and spiking interactions achieves high performance and delivers high accuracy...
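A minimal sketch of the waveform-relaxation idea, applied to a toy pair of linearly coupled leaky voltages rather than the paper's NEST implementation (all parameters are invented): each sweep re-integrates one neuron over the whole interval using the other neuron's waveform from the previous sweep, which is what makes the scheme compatible with communicating at long intervals.

```python
# Jacobi-style waveform relaxation for two gap-junction-coupled leaky
# voltages (toy stand-in for the paper's algorithm; parameters invented):
#   dv1/dt = -v1 + g*(v2 - v1),   dv2/dt = -v2 + g*(v1 - v2)

DT, STEPS, G = 0.01, 200, 0.5

def integrate(v0, other_wave):
    """Forward-Euler solution of dv/dt = -v + G*(other - v) over the interval,
    treating the other neuron's waveform as a known input."""
    wave = [v0]
    for k in range(STEPS):
        v = wave[-1]
        wave.append(v + DT * (-v + G * (other_wave[k] - v)))
    return wave

# Reference: directly (instantaneously) coupled Euler integration on the same grid.
ref1, ref2 = [1.0], [0.0]
for k in range(STEPS):
    a, b = ref1[-1], ref2[-1]
    ref1.append(a + DT * (-a + G * (b - a)))
    ref2.append(b + DT * (-b + G * (a - b)))

# Waveform relaxation: start from constant guesses, sweep until converged.
w1, w2 = [1.0] * (STEPS + 1), [0.0] * (STEPS + 1)
for _ in range(30):
    w1, w2 = integrate(1.0, w2), integrate(0.0, w1)

err = max(abs(x - y) for x, y in zip(w1, ref1))
print(err < 1e-9)  # sweeps converge to the directly coupled solution
```

The fixed point of the sweeps is exactly the directly coupled scheme, so the exchange of whole waveforms at long intervals loses no accuracy once the iteration has converged.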
Detecting dependencies between spike trains of pairs of neurons through copulas
DEFF Research Database (Denmark)
Sacerdote, Laura; Tamborrino, Massimiliano; Zucca, Cristina
2011-01-01
The dynamics of a neuron are influenced by the connections with the network where it lies. Recorded spike trains exhibit patterns due to the interactions between neurons. However, the structure of the network is not known. A challenging task is to investigate it from the analysis of simultaneously...... the two neurons. Furthermore, the method recognizes the presence of delays in the spike propagation....
Ponzi, Adam; Wickens, Jeff
2010-04-28
The striatum is composed of GABAergic medium spiny neurons with inhibitory collaterals forming a sparse random asymmetric network and receiving an excitatory glutamatergic cortical projection. Because the inhibitory collaterals are sparse and weak, their role in striatal network dynamics is puzzling. However, here we show by simulation of a striatal inhibitory network model composed of spiking neurons that cells form assemblies that fire in sequential coherent episodes and display complex identity-temporal spiking patterns even when cortical excitation is simply constant or fluctuating noisily. Strongly correlated large-scale firing rate fluctuations on slow behaviorally relevant timescales of hundreds of milliseconds are shown by members of the same assembly whereas members of different assemblies show strong negative correlation, and we show how randomly connected spiking networks can generate this activity. Cells display highly irregular spiking with high coefficients of variation, broadly distributed low firing rates, and interspike interval distributions that are consistent with exponentially tailed power laws. Although firing rates vary coherently on slow timescales, precise spiking synchronization is absent in general. Our model only requires the minimal but striatally realistic assumptions of sparse to intermediate random connectivity, weak inhibitory synapses, and sufficient cortical excitation so that some cells are depolarized above the firing threshold during up states. Our results are in good qualitative agreement with experimental studies, consistent with recently determined striatal anatomy and physiology, and support a new view of endogenously generated metastable state switching dynamics of the striatal network underlying its information processing operations.
Directory of Open Access Journals (Sweden)
Henry C Tuckwell
2010-05-01
Full Text Available Many neurons have epochs in which they fire action potentials in an approximately periodic fashion. To see what effects noise of relatively small amplitude has on such repetitive activity, we recently examined the response of the Hodgkin-Huxley (HH) space-clamped system to such noise as the mean and variance of the applied current vary, near the bifurcation to periodic firing. This article is concerned with a more realistic neuron model which includes spatial extent. Employing the Hodgkin-Huxley partial differential equation system, the deterministic component of the input current is restricted to a small segment whereas the stochastic component extends over a region which may or may not overlap the deterministic component. For mean values below, near and above the critical values for repetitive spiking, the effects of weak noise of increasing strength are ascertained by simulation. As in the point model, small amplitude noise near the critical value dampens the spiking activity and leads to a minimum as noise level increases. This was the case for both additive noise and conductance-based noise. Uniform noise along the whole neuron is only marginally more effective in silencing the cell than noise which occurs near the region of excitation. In fact it is found that if signal and noise overlap in spatial extent, then weak noise may inhibit spiking. If, however, signal and noise are applied on disjoint intervals, then the noise has no effect on the spiking activity, no matter how large its region of application, though the trajectories are naturally altered slightly by noise. Such effects could not be discerned in a point model and are important for real neuron behavior. Interference with the spike train does nevertheless occur when the noise amplitude is larger, even when noise and signal do not overlap, being due to the instigation of secondary noise-induced wave phenomena rather than switching the system from one attractor (firing regularly to
Spike Neural Models Part II: Abstract Neural Models
Directory of Open Access Journals (Sweden)
Johnson, Melissa G.
2018-02-01
Full Text Available Neurons are complex cells that require a lot of time and resources to model completely. In spiking neural networks (SNN), though, not all that complexity is required. Therefore simple, abstract models are often used. These models save time, use fewer computer resources, and are easier to understand. This tutorial presents two such models: Izhikevich's model, which is biologically realistic in the resulting spike trains but not in the parameters, and the Leaky Integrate and Fire (LIF) model, which is not biologically realistic but does quickly and easily integrate input to produce spikes. Izhikevich's model is based on the Hodgkin-Huxley model but simplified such that it uses only two differential equations and four parameters to produce various realistic spike patterns. LIF is based on a standard electrical circuit and contains one equation. Either of these two models, or any of the many other models in the literature, can be used in a SNN. Choosing a neural model is an important task that depends on the goal of the research and the resources available. Once a model is chosen, network decisions such as connectivity, delay, and sparseness need to be made. Understanding neural models and how they are incorporated into the network is the first step in creating a SNN.
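A minimal leaky integrate-and-fire implementation along the lines the tutorial describes (the parameter values here are illustrative assumptions, not the tutorial's):

```python
# Minimal leaky integrate-and-fire (LIF) neuron, forward-Euler integration.
# All parameter values are illustrative placeholders.

def simulate_lif(i_ext, dt=0.1, tau_m=10.0, v_rest=-65.0, v_thresh=-50.0,
                 v_reset=-70.0, r_m=10.0):
    """Return spike times (ms) over 100 ms for a constant input current i_ext (nA)."""
    v, spikes = v_rest, []
    for step in range(int(100.0 / dt)):
        # membrane equation: dV/dt = (-(V - V_rest) + R_m * I) / tau_m
        v += dt * (-(v - v_rest) + r_m * i_ext) / tau_m
        if v >= v_thresh:        # threshold crossing -> emit spike
            spikes.append(step * dt)
            v = v_reset          # reset membrane potential
    return spikes

print(len(simulate_lif(2.0)) > 0)  # suprathreshold input produces spikes
print(simulate_lif(1.0))           # subthreshold: R*I = 10 mV < 15 mV gap -> []
```

With these numbers the membrane asymptote for i_ext = 1.0 nA is -55 mV, below the -50 mV threshold, so the neuron integrates but never fires; doubling the current pushes the asymptote above threshold and produces a regular spike train.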
Mapping cortical mesoscopic networks of single spiking cortical or sub-cortical neurons.
Xiao, Dongsheng; Vanni, Matthieu P; Mitelut, Catalin C; Chan, Allen W; LeDue, Jeffrey M; Xie, Yicheng; Chen, Andrew Cn; Swindale, Nicholas V; Murphy, Timothy H
2017-02-04
Understanding the basis of brain function requires knowledge of cortical operations over wide-spatial scales, but also within the context of single neurons. In vivo, wide-field GCaMP imaging and sub-cortical/cortical cellular electrophysiology were used in mice to investigate relationships between spontaneous single neuron spiking and mesoscopic cortical activity. We make use of a rich set of cortical activity motifs that are present in spontaneous activity in anesthetized and awake animals. A mesoscale spike-triggered averaging procedure allowed the identification of motifs that are preferentially linked to individual spiking neurons by employing genetically targeted indicators of neuronal activity. Thalamic neurons predicted and reported specific cycles of wide-scale cortical inhibition/excitation. In contrast, spike-triggered maps derived from single cortical neurons yielded spatio-temporal maps expected for regional cortical consensus function. This approach can define network relationships between any point source of neuronal spiking and mesoscale cortical maps.
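The spike-triggered averaging procedure at the heart of this approach can be sketched generically as follows (synthetic one-dimensional data stand in for the mesoscale imaging signal; none of this is the study's code):

```python
# Spike-triggered average (STA): average a recorded signal in windows
# centered on each spike time. Data below are synthetic, for illustration.

def spike_triggered_average(signal, spike_bins, window):
    """Mean of signal[t - window : t + window + 1] over all spikes
    whose full window lies inside the recording."""
    snippets = [signal[t - window: t + window + 1]
                for t in spike_bins
                if t - window >= 0 and t + window < len(signal)]
    if not snippets:
        raise ValueError("no spike has a complete window")
    n = len(snippets)
    return [sum(vals) / n for vals in zip(*snippets)]

# Synthetic signal: a bump of activity around each "spike" at bins 10 and 30.
signal = [0.0] * 40
for t in (10, 30):
    for k, amp in ((-1, 0.5), (0, 1.0), (1, 0.5)):
        signal[t + k] += amp

sta = spike_triggered_average(signal, [10, 30], window=2)
print(sta)  # [0.0, 0.5, 1.0, 0.5, 0.0]
```

In the mesoscale setting each "snippet" is an image sequence rather than a scalar trace, but the averaging logic — align on spike times, average the aligned windows — is the same.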
Directory of Open Access Journals (Sweden)
Zedong Bi
2016-02-01
Full Text Available In neural systems, synaptic plasticity is usually driven by spike trains. Due to the inherent noises of neurons and synapses as well as the randomness of connection details, spike trains typically exhibit variability such as spatial randomness and temporal stochasticity, resulting in variability of synaptic changes under plasticity, which we call efficacy variability. How the variability of spike trains influences the efficacy variability of synapses remains unclear. In this paper, we try to understand this influence under pair-wise additive spike-timing dependent plasticity (STDP) when the mean strength of plastic synapses into a neuron is bounded (synaptic homeostasis). Specifically, we systematically study, analytically and numerically, how four aspects of statistical features, i.e. synchronous firing, burstiness/regularity, heterogeneity of rates and heterogeneity of cross-correlations, as well as their interactions influence the efficacy variability in converging motifs (simple networks in which one neuron receives from many other neurons). Neurons (including the post-synaptic neuron) in a converging motif generate spikes according to statistical models with tunable parameters. In this way, we can explicitly control the statistics of the spike patterns, and investigate their influence onto the efficacy variability, without worrying about the feedback from synaptic changes onto the dynamics of the post-synaptic neuron. We separate efficacy variability into two parts: the drift part (DriftV) induced by the heterogeneity of change rates of different synapses, and the diffusion part (DiffV) induced by weight diffusion caused by stochasticity of spike trains. Our main findings are: (1) synchronous firing and burstiness tend to increase DiffV, (2) heterogeneity of rates induces DriftV when potentiation and depression in STDP are not balanced, and (3) heterogeneity of cross-correlations induces DriftV together with heterogeneity of rates. We anticipate our
2012-01-01
Action potentials at the neurons and graded signals at the synapses are primary codes in the brain. In terms of their functional interaction, the studies were focused on the influence of presynaptic spike patterns on synaptic activities. How the synapse dynamics quantitatively regulates the encoding of postsynaptic digital spikes remains unclear. We investigated this question at unitary glutamatergic synapses on cortical GABAergic neurons, especially the quantitative influences of release probability on synapse dynamics and neuronal encoding. Glutamate release probability and synaptic strength are proportionally upregulated by presynaptic sequential spikes. The upregulation of release probability and the efficiency of probability-driven synaptic facilitation are strengthened by elevating presynaptic spike frequency and Ca2+. The upregulation of release probability improves spike capacity and timing precision at postsynaptic neuron. These results suggest that the upregulation of presynaptic glutamate release facilitates a conversion of synaptic analogue signals into digital spikes in postsynaptic neurons, i.e., a functional compatibility between presynaptic and postsynaptic partners. PMID:22852823
Symbol manipulation and rule learning in spiking neuronal networks.
Fernando, Chrisantha
2011-04-21
It has been claimed that the productivity, systematicity and compositionality of human language and thought necessitate the existence of a physical symbol system (PSS) in the brain. Recent discoveries about temporal coding suggest a novel type of neuronal implementation of a physical symbol system. Furthermore, learning classifier systems provide a plausible algorithmic basis by which symbol re-write rules could be trained to undertake behaviors exhibiting systematicity and compositionality, using a kind of natural selection of re-write rules in the brain. We show how the core operation of a learning classifier system, namely, the replication with variation of symbol re-write rules, can be implemented using spike-time dependent plasticity based supervised learning. As a whole, the aim of this paper is to integrate an algorithmic and an implementation level description of a neuronal symbol system capable of sustaining systematic and compositional behaviors. Previously proposed neuronal implementations of symbolic representations are compared with this new proposal. Copyright © 2011 Elsevier Ltd. All rights reserved.
Motif statistics and spike correlations in neuronal networks
International Nuclear Information System (INIS)
Hu, Yu; Shea-Brown, Eric; Trousdale, James; Josić, Krešimir
2013-01-01
Motifs are patterns of subgraphs of complex networks. We studied the impact of such patterns of connectivity on the level of correlated, or synchronized, spiking activity among pairs of cells in a recurrent network of integrate and fire neurons. For a range of network architectures, we find that the pairwise correlation coefficients, averaged across the network, can be closely approximated using only three statistics of network connectivity. These are the overall network connection probability and the frequencies of two second order motifs: diverging motifs, in which one cell provides input to two others, and chain motifs, in which two cells are connected via a third intermediary cell. Specifically, the prevalence of diverging and chain motifs tends to increase correlation. Our method is based on linear response theory, which enables us to express spiking statistics using linear algebra, and a resumming technique, which extrapolates from second order motifs to predict the overall effect of coupling on network correlation. Our motif-based results seek to isolate the effect of network architecture perturbatively from a known network state. (paper)
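The two second-order motif frequencies used in this approximation — diverging and chain motifs — can be counted directly from an adjacency matrix (a toy example for illustration, not the paper's linear-response or resumming method):

```python
# Counting the second-order motifs named in the abstract: diverging motifs
# (one cell drives two others) and chain motifs (two cells connected
# through an intermediary). The network below is a toy example.

def motif_counts(adj):
    """adj[i][j] == 1 means a connection from neuron i to neuron j."""
    n = len(adj)
    diverging = chains = 0
    for k in range(n):
        out_k = sum(adj[k])                        # number of targets of k
        in_k = sum(adj[i][k] for i in range(n))    # number of sources into k
        diverging += out_k * (out_k - 1) // 2      # unordered pairs of targets
        chains += in_k * out_k                     # i -> k -> j paths through k
    return diverging, chains

# Toy network: 0 -> 1, 0 -> 2, 1 -> 2
adj = [[0, 1, 1],
       [0, 0, 1],
       [0, 0, 0]]
print(motif_counts(adj))  # (1, 1): one diverging pair, one chain 0->1->2
```

Together with the overall connection probability, these two counts are the three connectivity statistics the abstract says suffice to approximate the network-averaged pairwise correlation.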
Directory of Open Access Journals (Sweden)
Robert R Kerr
Full Text Available Learning rules, such as spike-timing-dependent plasticity (STDP), change the structure of networks of neurons based on the firing activity. A network level understanding of these mechanisms can help infer how the brain learns patterns and processes information. Previous studies have shown that STDP selectively potentiates feed-forward connections that have specific axonal delays, and that this underlies behavioral functions such as sound localization in the auditory brainstem of the barn owl. In this study, we investigate how STDP leads to the selective potentiation of recurrent connections with different axonal and dendritic delays during oscillatory activity. We develop analytical models of learning with additive STDP in recurrent networks driven by oscillatory inputs, and support the results using simulations with leaky integrate-and-fire neurons. Our results show selective potentiation of connections with specific axonal delays, which depended on the input frequency. In addition, we demonstrate how this can lead to a network becoming selective in the amplitude of its oscillatory response to this frequency. We extend this model of axonal delay selection within a single recurrent network in two ways. First, we show the selective potentiation of connections with a range of both axonal and dendritic delays. Second, we show axonal delay selection between multiple groups receiving out-of-phase, oscillatory inputs. We discuss the application of these models to the formation and activation of neuronal ensembles or cell assemblies in the cortex, and also to missing fundamental pitch perception in the auditory brainstem.
An online supervised learning method based on gradient descent for spiking neurons.
Xu, Yan; Yang, Jing; Zhong, Shuiming
2017-09-01
The purpose of supervised learning with temporal encoding for spiking neurons is to make the neurons emit a specific spike train encoded by precise firing times of spikes. The gradient-descent-based (GDB) learning methods are widely used and verified in the current research. Although the existing GDB multi-spike learning (or spike sequence learning) methods have good performance, they work in an offline manner and still have some limitations. This paper proposes an online GDB spike sequence learning method for spiking neurons that is based on the online adjustment mechanism of real biological neuron synapses. The method constructs an error function and calculates the adjustment of synaptic weights as soon as the neurons emit a spike during their running process. We analyze and synthesize desired and actual output spikes to select appropriate input spikes in the calculation of weight adjustment in this paper. The experimental results show that our method obviously improves learning performance compared with offline learning and has a certain advantage in learning accuracy compared with other learning methods. This stronger learning ability gives the method a large pattern storage capacity. Copyright © 2017 Elsevier Ltd. All rights reserved.
Dynamic Control of Synchronous Activity in Networks of Spiking Neurons.
Directory of Open Access Journals (Sweden)
Axel Hutt
Full Text Available Oscillatory brain activity is believed to play a central role in neural coding. Accumulating evidence shows that features of these oscillations are highly dynamic: power, frequency and phase fluctuate alongside changes in behavior and task demands. The role and mechanisms supporting this variability are, however, poorly understood. Here we analyze a network of recurrently connected spiking neurons with time delay displaying stable synchronous dynamics. Using mean-field and stability analyses, we investigate the influence of dynamic inputs on the frequency of firing rate oscillations. We show that afferent noise, mimicking inputs to the neurons, causes smoothing of the system's response function, displacing equilibria and altering the stability of oscillatory states. Our analysis further shows that these noise-induced changes cause a shift of the peak frequency of synchronous oscillations that scales with input intensity, leading the network towards critical states. Lastly, we discuss the extension of these principles to periodic stimulation, in which externally applied driving signals can trigger analogous phenomena. Our results reveal one possible mechanism involved in shaping oscillatory activity in the brain and associated control principles.
Development of on-off spiking in superior paraolivary nucleus neurons of the mouse
Felix, Richard A.; Vonderschen, Katrin; Berrebi, Albert S.
2013-01-01
The superior paraolivary nucleus (SPON) is a prominent cell group in the auditory brain stem that has been increasingly implicated in representing temporal sound structure. Although SPON neurons selectively respond to acoustic signals important for sound periodicity, the underlying physiological specializations enabling these responses are poorly understood. We used in vitro and in vivo recordings to investigate how SPON neurons develop intrinsic cellular properties that make them well suited for encoding temporal sound features. In addition to their hallmark rebound spiking at the stimulus offset, SPON neurons were characterized by spiking patterns termed onset, adapting, and burst in response to depolarizing stimuli in vitro. Cells with burst spiking had some morphological differences compared with other SPON neurons and were localized to the dorsolateral region of the nucleus. Both membrane and spiking properties underwent strong developmental regulation, becoming more temporally precise with age for both onset and offset spiking. Single-unit recordings obtained in young mice demonstrated that SPON neurons respond with temporally precise onset spiking upon tone stimulation in vivo, in addition to the typical offset spiking. Taken together, the results of the present study demonstrate that SPON neurons develop sharp on-off spiking, which may confer sensitivity to sound amplitude modulations or abrupt sound transients. These findings are consistent with the proposed involvement of the SPON in the processing of temporal sound structure, relevant for encoding communication cues. PMID:23515791
Directory of Open Access Journals (Sweden)
Risako Kato
2016-11-01
Full Text Available Pentobarbital potentiates γ-aminobutyric acid (GABA)-mediated inhibitory synaptic transmission by prolonging the open time of GABAA receptors. However, it is unknown how pentobarbital regulates cortical neuronal activities via local circuits in vivo. To examine this question, we performed extracellular unit recording in rat insular cortex under awake and anesthetized conditions. More than a few studies apply the time-rescaling theorem to detect the features of repetitive spike firing. Similar to these methods, we define an average spike interval locally in time using random matrix theory (RMT), which enables us to compare different activity states on a universal scale. Neurons with high spontaneous firing frequency (> 5 Hz) and bursting were classified as HFB neurons (n = 10), and those with low spontaneous firing frequency (< 10 Hz) and without bursting were classified as non-HFB neurons (n = 48). Pentobarbital injection (30 mg/kg) reduced the firing frequency in all HFB neurons and in 78% of non-HFB neurons. RMT analysis demonstrated that pentobarbital increased the number of neurons with repulsion among both HFB and non-HFB neurons, suggesting that there is a correlation between spikes within a short interspike interval. Under awake conditions, in 50% of HFB and 40% of non-HFB neurons, the decay phase of normalized histograms of spontaneous firing was fitted to an exponential function, which indicated that the first spike had no correlation with subsequent spikes. In contrast, under pentobarbital-induced anesthesia, the number of non-HFB neurons fitted to an exponential function increased to 80%, but almost no change was observed in HFB neurons. These results suggest that under both awake and pentobarbital-anesthetized conditions, spike firing in HFB neurons is more robustly regulated by preceding spikes than that in non-HFB neurons, which may reflect the GABAA receptor-mediated regulation of cortical activities. Whole-cell patch
Directory of Open Access Journals (Sweden)
Frank Rattay
Full Text Available Our knowledge about the neural code in the auditory nerve is based to a large extent on experiments on cats. Several anatomical differences between auditory neurons in human and cat are expected to lead to functional differences in the speed and safety of spike conduction. Confocal microscopy was used to systematically evaluate peripheral and central process diameters, commonness of myelination, and morphology of spiral ganglion neurons (SGNs) along the cochlea of three humans and three cats. Based on these morphometric data, model analysis reveals that spike conduction in SGNs is characterized by four phases: a postsynaptic delay, constant velocity in the peripheral process, a presomatic delay, and constant velocity in the central process. The majority of SGNs are type I, connecting the inner hair cells with the brainstem. In contrast to those of humans, type I neurons of the cat are entirely myelinated. Biophysical model evaluation showed delayed and weak spikes in the human soma region as a consequence of the lack of myelin. The simulated spike conduction times are in accordance with normal interwave latencies from auditory brainstem response recordings from man and cat. Simulated 400 pA postsynaptic currents from inner hair cell ribbon synapses were 15 times above threshold. They enforced quick and synchronous spiking. Neither of these properties was present in type II cells, as they receive fewer and much weaker (∼26 pA) synaptic stimuli. Wasting synaptic energy boosts spike initiation, which guarantees the rapid transmission of the temporal fine structure of auditory signals. However, the lack of myelin in the soma regions of human type I neurons causes a large delay in spike conduction in comparison with cat neurons. The absent myelin, in combination with a longer peripheral process, causes quantitative differences in temporal parameters in the electrically stimulated human cochlea compared to the cat cochlea.
Dynamics of Competition between Subnetworks of Spiking Neuronal Networks in the Balanced State.
Lagzi, Fereshteh; Rotter, Stefan
2015-01-01
We explore and analyze the nonlinear switching dynamics of neuronal networks with non-homogeneous connectivity. The general significance of such transient dynamics for brain function is unclear; however, decision-making processes in perception and cognition, for instance, have been implicated in it. The network under study here is comprised of three subnetworks of either excitatory or inhibitory leaky integrate-and-fire neurons, of which two are of the same type. The synaptic weights are arranged to establish and maintain a balance between excitation and inhibition in case of a constant external drive. Each subnetwork is randomly connected, where all neurons belonging to a particular population have the same in-degree and the same out-degree. Neurons in different subnetworks are also randomly connected with the same probability; however, depending on the type of the pre-synaptic neuron, the synaptic weight is scaled by a factor. We observed that for a certain range of the "within" versus "between" connection weights (the bifurcation parameter), the network activation spontaneously switches between the two subnetworks of the same type. This kind of dynamics has been termed "winnerless competition", which also has a random component here. In our model, this phenomenon is well described by a set of coupled stochastic differential equations of Lotka-Volterra type that imply a competition between the subnetworks. The associated mean-field model shows the same dynamical behavior as observed in simulations of large networks comprising thousands of spiking neurons. The deterministic phase portrait is characterized by two attractors and a saddle node; its stochastic component is essentially given by the multiplicative inherent noise of the system. We find that the dwell time distribution of the active states is exponential, indicating that the noise drives the system randomly from one attractor to the other. A similar model for a larger number of populations might suggest a
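The stochastic Lotka-Volterra competition picture described in this abstract can be sketched with a short Euler-Maruyama simulation. All parameter values below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def simulate_lv(steps=2000, dt=0.01, r=1.0, alpha=1.2, sigma=0.05, seed=0):
    """Euler-Maruyama integration of two competing Lotka-Volterra rates:

        dx_i = x_i * (r - x_i - alpha * x_j) dt + sigma * x_i dW_i

    With cross-inhibition stronger than self-inhibition (alpha > 1), the
    deterministic system is bistable, and the multiplicative noise can
    drive switching between the two attractors.
    """
    rng = np.random.default_rng(seed)
    x = np.array([0.6, 0.5])            # activity rates of the two subnetworks
    traj = np.empty((steps, 2))
    for t in range(steps):
        drift = x * (r - x - alpha * x[::-1])   # self- and cross-inhibition
        x = x + drift * dt + sigma * x * rng.normal(size=2) * np.sqrt(dt)
        x = np.clip(x, 1e-6, None)              # keep rates non-negative
        traj[t] = x
    return traj
```

Long simulations of this kind exhibit the noise-driven attractor switching ("winnerless competition") that the mean-field analysis describes.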
Spike neural models (part I: The Hodgkin-Huxley model
Directory of Open Access Journals (Sweden)
Johnson, Melissa G.
2017-05-01
Full Text Available Artificial neural networks, or ANNs, have evolved considerably since their inception in the 1940s. But no matter the changes, one of the most important components of neural networks is still the node, which represents the neuron. Within spiking neural networks, the node is especially important because it contains the functions and properties of neurons that are necessary for their network. One important aspect of neurons is the ionic flow which produces action potentials, or spikes. Forces of diffusion and electrostatic pressure work together with the physical properties of the cell to move ions around, changing the cell membrane potential, which ultimately produces the action potential. This tutorial reviews the Hodgkin-Huxley model and shows how it simulates the ionic flow of the giant squid axon via four differential equations. The model is implemented in Matlab using Euler's method to approximate the differential equations. Using Euler's method introduces an extra parameter, the time step, which needs to be carefully chosen or the results of the node may be inaccurate.
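A minimal Python counterpart of the tutorial's Matlab implementation might look like the sketch below, using the classic squid-axon parameters and a plain Euler update for the four differential equations (V, m, h, n):

```python
import numpy as np

def hh_euler(I_ext=10.0, T=50.0, dt=0.01):
    """Hodgkin-Huxley membrane integrated with Euler's method.

    Classic squid-axon parameters; dt (ms) is the extra parameter that
    Euler's method introduces -- too coarse a step makes the solution
    inaccurate or unstable. Returns the membrane potential trace (mV).
    """
    C, gNa, gK, gL = 1.0, 120.0, 36.0, 0.3
    ENa, EK, EL = 50.0, -77.0, -54.4
    V, m, h, n = -65.0, 0.05, 0.6, 0.32
    steps = int(T / dt)
    Vs = np.empty(steps)
    for i in range(steps):
        # Voltage-dependent rate functions for the gating variables
        am = 0.1 * (V + 40) / (1 - np.exp(-(V + 40) / 10))
        bm = 4.0 * np.exp(-(V + 65) / 18)
        ah = 0.07 * np.exp(-(V + 65) / 20)
        bh = 1.0 / (1 + np.exp(-(V + 35) / 10))
        an = 0.01 * (V + 55) / (1 - np.exp(-(V + 55) / 10))
        bn = 0.125 * np.exp(-(V + 65) / 80)
        # Euler updates for the four differential equations
        m += dt * (am * (1 - m) - bm * m)
        h += dt * (ah * (1 - h) - bh * h)
        n += dt * (an * (1 - n) - bn * n)
        I_ion = gNa * m**3 * h * (V - ENa) + gK * n**4 * (V - EK) + gL * (V - EL)
        V += dt * (I_ext - I_ion) / C
        Vs[i] = V
    return Vs
```

With a constant current of 10 µA/cm², the trace fires repetitive action potentials that overshoot 0 mV; increasing dt much beyond ~0.1 ms degrades the solution, which is the time-step caveat the tutorial raises.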
A Neuron Model Based Ultralow Current Sensor System for Bioapplications
Directory of Open Access Journals (Sweden)
A. K. M. Arifuzzman
2016-01-01
Full Text Available An ultralow current sensor system based on the Izhikevich neuron model is presented in this paper. The Izhikevich neuron model has been used for its superior computational efficiency and greater biological plausibility compared with other well-known spiking neuron models. Of the many biological neuron spiking features, regular spiking, chattering, and neostriatal spiny projection spiking have been reproduced by adjusting the parameters associated with the model at hand. This paper also presents a modified interpretation of the regular spiking feature, in which the firing pattern is similar to that of regular spiking but offers an improved dynamic range. The sensor current ranges between 2 pA and 8 nA and exhibits linearity in the range of 0.9665 to 0.9989 for the different spiking features. The efficacy of the sensor system in detecting low amounts of current, along with its high linearity, makes it very suitable for biomedical applications.
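The Izhikevich model underlying this sensor reproduces different spiking features simply by changing four parameters (a, b, c, d). A minimal sketch, using Izhikevich's published parameter sets rather than anything specific to this paper:

```python
def izhikevich(a, b, c, d, I=10.0, T=200.0, dt=0.25):
    """Izhikevich model, Euler-integrated; returns spike times in ms.

    The tuple (a, b, c, d) selects the firing pattern, e.g. regular
    spiking (0.02, 0.2, -65, 8) or chattering (0.02, 0.2, -50, 2).
    The input current I and duration T are illustrative values.
    """
    v, u = -65.0, b * -65.0          # membrane potential and recovery variable
    spikes = []
    for i in range(int(T / dt)):
        v += dt * (0.04 * v**2 + 5 * v + 140 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:                # spike cutoff and reset
            spikes.append(i * dt)
            v, u = c, u + d
    return spikes

rs = izhikevich(0.02, 0.2, -65.0, 8.0)   # regular spiking
ch = izhikevich(0.02, 0.2, -50.0, 2.0)   # chattering (burst) pattern
```

The model's computational cheapness (one quadratic update per step) is exactly what makes it attractive for compact sensor hardware of the kind described above.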
Reinforcement learning using a continuous time actor-critic framework with spiking neurons.
Directory of Open Access Journals (Sweden)
Nicolas Frémaux
2013-04-01
Full Text Available Animals repeat rewarded behaviors, but the physiological basis of reward-based learning has only been partially elucidated. On one hand, experimental evidence shows that the neuromodulator dopamine carries information about rewards and affects synaptic plasticity. On the other hand, the theory of reinforcement learning provides a framework for reward-based learning. Recent models of reward-modulated spike-timing-dependent plasticity have made first steps towards bridging the gap between the two approaches, but faced two problems. First, reinforcement learning is typically formulated in a discrete framework, ill-adapted to the description of natural situations. Second, biologically plausible models of reward-modulated spike-timing-dependent plasticity require precise calculation of the reward prediction error, yet it remains to be shown how this can be computed by neurons. Here we propose a solution to these problems by extending the continuous temporal difference (TD) learning of Doya (2000) to the case of spiking neurons in an actor-critic network operating in continuous time, with continuous state and action representations. In our model, the critic learns to predict expected future rewards in real time. Its activity, together with actual rewards, conditions the delivery of a neuromodulatory TD signal to itself and to the actor, which is responsible for action choice. In simulations, we show that such an architecture can solve a Morris water-maze-like navigation task in a number of trials consistent with reported animal performance. We also use our model to solve the acrobot and the cartpole problems, two complex motor control tasks. Our model provides a plausible way of computing reward prediction error in the brain. Moreover, the analytically derived learning rule is consistent with experimental evidence for dopamine-modulated spike-timing-dependent plasticity.
Training Spiking Neural Models Using Artificial Bee Colony
Vazquez, Roberto A.; Garro, Beatriz A.
2015-01-01
Spiking neurons are models designed to simulate, in a realistic manner, the behavior of biological neurons. Recently, it has been proven that this type of neuron can be applied to solve pattern recognition problems with great efficiency. However, the lack of learning strategies for training these models does not allow them to be used in several pattern recognition problems. On the other hand, several bioinspired algorithms have been proposed in recent years for solving a broad range of optimization problems, including those related to the field of artificial neural networks (ANNs). Artificial bee colony (ABC) is a novel algorithm based on the behavior of bees in the task of exploring their environment to find a food source. In this paper, we describe how the ABC algorithm can be used as a learning strategy to train a spiking neuron aiming to solve pattern recognition problems. Finally, the proposed approach is tested on several pattern recognition problems. It is important to remark that, to demonstrate the power of this type of model, only one neuron is used. In addition, we analyze how the performance of these models is improved using this kind of learning strategy. PMID:25709644
DL-ReSuMe: A Delay Learning-Based Remote Supervised Method for Spiking Neurons.
Taherkhani, Aboozar; Belatreche, Ammar; Li, Yuhua; Maguire, Liam P
2015-12-01
Recent research has shown the potential capability of spiking neural networks (SNNs) to model complex information processing in the brain. There is biological evidence to prove the use of the precise timing of spikes for information coding. However, the exact learning mechanism in which the neuron is trained to fire at precise times remains an open problem. The majority of the existing learning methods for SNNs are based on weight adjustment. However, there is also biological evidence that the synaptic delay is not constant. In this paper, a learning method for spiking neurons, called delay learning remote supervised method (DL-ReSuMe), is proposed to merge the delay shift approach and ReSuMe-based weight adjustment to enhance the learning performance. DL-ReSuMe uses more biologically plausible properties, such as delay learning, and needs less weight adjustment than ReSuMe. Simulation results have shown that the proposed DL-ReSuMe approach achieves learning accuracy and learning speed improvements compared with ReSuMe.
Self-Organization of Microcircuits in Networks of Spiking Neurons with Plastic Synapses.
Directory of Open Access Journals (Sweden)
Gabriel Koch Ocker
2015-08-01
Full Text Available The synaptic connectivity of cortical networks features an overrepresentation of certain wiring motifs compared to simple random-network models. This structure is shaped, in part, by synaptic plasticity that promotes or suppresses connections between neurons depending on their joint spiking activity. Frequently, theoretical studies focus on how feedforward inputs drive plasticity to create this network structure. We study the complementary scenario of self-organized structure in a recurrent network, with spike timing-dependent plasticity driven by spontaneous dynamics. We develop a self-consistent theory for the evolution of network structure by combining fast spiking covariance with a slow evolution of synaptic weights. Through a finite-size expansion of network dynamics we obtain a low-dimensional set of nonlinear differential equations for the evolution of two-synapse connectivity motifs. With this theory in hand, we explore how the form of the plasticity rule drives the evolution of microcircuits in cortical networks. When potentiation and depression are in approximate balance, synaptic dynamics depend on weighted divergent, convergent, and chain motifs. For additive, Hebbian STDP these motif interactions create instabilities in synaptic dynamics that either promote or suppress the initial network structure. Our work provides a consistent theoretical framework for studying how spiking activity in recurrent networks interacts with synaptic plasticity to determine network structure.
Li, Huiyan; Sun, Xiaojuan; Xiao, Jinghua
2015-01-01
In this paper, we investigate how clustering factors influence the spiking regularity of neuronal networks composed of subnetworks. To do so, we fix the averaged coupling probability and the averaged coupling strength, and take the cluster number M, the ratio of intra-connection probability to inter-connection probability R, and the ratio of intra-coupling strength to inter-coupling strength S as control parameters. From the simulation results, we find that the spiking regularity of the neuronal networks varies little with R and S when M is fixed. However, the cluster number M can reduce the spiking regularity to a low level when the uniform neuronal network's spiking regularity is at a high level. Taken together, these results show that clustering factors have little influence on the spiking regularity when the entire energy is fixed, which can be controlled by the averaged coupling strength and the averaged connection probability.
SPAN: spike pattern association neuron for learning spatio-temporal sequences
Mohemmed, A; Schliebs, S; Matsuda, S; Kasabov, N
2012-01-01
Spiking Neural Networks (SNN) were shown to be suitable tools for the processing of spatio-temporal information. However, due to their inherent complexity, the formulation of efficient supervised learning algorithms for SNN is difficult and remains an important problem in the research area. This article presents SPAN — a spiking neuron that is able to learn associations of arbitrary spike trains in a supervised fashion allowing the processing of spatio-temporal information encoded in the prec...
Simulation of a spiking neuron circuit using carbon nanotube transistors
International Nuclear Information System (INIS)
Najari, Montassar; El-Grour, Tarek; Jelliti, Sami; Hakami, Othman Mousa
2016-01-01
Neuromorphic engineering is based on the analogies that exist between physical semiconductor VLSI (Very Large Scale Integration) and biophysics. Neuromorphic systems aim to reproduce the structure and function of biological neural systems in order to transfer their computational capacity onto silicon. Since the pioneering research of Carver Mead, neuromorphic engineering has continued to produce remarkable implementations of biological systems. This work presents a simulation of an elementary neuron cell with a carbon nanotube transistor (CNTFET) based technology. The neuron cell model that was simulated is the integrate-and-fire (I&F) model, first introduced by G. Indiveri in 2009. The circuit was simulated with CNTFET technology in the ADS environment to verify its neuromorphic behavior in terms of membrane potential. This work demonstrates the efficiency of this emergent device, i.e., the CNTFET, for the design of such architectures in terms of power consumption and technology integration density.
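For reference, the integrate-and-fire dynamics that such a circuit implements in hardware can be stated in a few lines of software. This is a generic leaky I&F sketch in dimensionless units, not a model of the CNTFET circuit itself; all parameter values are illustrative:

```python
def lif(I=1.5, T=100.0, dt=0.1, tau=10.0, v_th=1.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron:

        tau * dv/dt = -v + I

    The membrane potential v integrates the input, and a spike is
    emitted (and v reset) when v crosses the threshold v_th.
    Returns spike times in ms. Units and values are illustrative.
    """
    v, spikes = 0.0, []
    for i in range(int(T / dt)):
        v += dt * (-v + I) / tau       # leaky integration of the input
        if v >= v_th:                  # threshold crossing: spike and reset
            spikes.append(i * dt)
            v = v_reset
    return spikes
```

With a suprathreshold constant input (I > v_th), the neuron fires regularly with an interspike interval of roughly tau * ln(I / (I - v_th)), the membrane-potential behavior the circuit simulation verifies.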
Adaptive coupling optimized spiking coherence and synchronization in Newman-Watts neuronal networks.
Gong, Yubing; Xu, Bo; Wu, Ya'nan
2013-09-01
In this paper, we have numerically studied the effect of adaptive coupling on the temporal coherence and synchronization of spiking activity in Newman-Watts Hodgkin-Huxley neuronal networks. It is found that random shortcuts can enhance spiking synchronization more rapidly when the increment speed of the adaptive coupling is increased, and can optimize the temporal coherence of spikes only when the increment speed of the adaptive coupling is appropriate. It is also found that the adaptive coupling strength can enhance the synchronization of spikes and can optimize their temporal coherence when the random shortcuts are appropriate. These results show that adaptive coupling strongly influences the spiking activity mediated by random shortcuts and can enhance and optimize the temporal coherence and synchronization of the network's spiking activity. These findings can help better understand the role of adaptive coupling in improving information processing and transmission in neural systems.
Qiao, Ning; Mostafa, Hesham; Corradi, Federico; Osswald, Marc; Stefanini, Fabio; Sumislawska, Dora; Indiveri, Giacomo
2015-01-01
Implementing compact, low-power artificial neural processing systems with real-time on-line learning abilities is still an open challenge. In this paper we present a full-custom mixed-signal VLSI device with neuromorphic learning circuits that emulate the biophysics of real spiking neurons and dynamic synapses for exploring the properties of computational neuroscience models and for building brain-inspired computing systems. The proposed architecture allows the on-chip configuration of a wide range of network connectivities, including recurrent and deep networks, with short-term and long-term plasticity. The device comprises 128K analog synapse and 256 neuron circuits with biologically plausible dynamics and bi-stable spike-based plasticity mechanisms that endow it with on-line learning abilities. In addition to the analog circuits, the device also comprises asynchronous digital logic circuits for setting different synapse and neuron properties as well as different network configurations. This prototype device, fabricated using a 180 nm 1P6M CMOS process, occupies an area of 51.4 mm² and consumes approximately 4 mW for typical experiments, for example those involving attractor networks. Here we describe the details of the overall architecture and of the individual circuits, and present experimental results that showcase its potential. By supporting a wide range of cortical-like computational modules comprising plasticity mechanisms, this device will enable the realization of intelligent autonomous systems with on-line learning capabilities.
An FPGA Platform for Real-Time Simulation of Spiking Neuronal Networks.
Pani, Danilo; Meloni, Paolo; Tuveri, Giuseppe; Palumbo, Francesca; Massobrio, Paolo; Raffo, Luigi
2017-01-01
In recent years, the idea of dynamically interfacing biological neurons with artificial ones has become more and more pressing. This is essentially due to the design of innovative neuroprostheses in which biological cell assemblies of the brain can be substituted by artificial ones. For closed-loop experiments with biological neuronal networks interfaced with in silico modeled networks, several technological challenges need to be faced, from the low-level interfacing between the living tissue and the computational model to the implementation of the latter in a suitable form for real-time processing. Field programmable gate arrays (FPGAs) can improve flexibility when simple neuronal models are required, obtaining good accuracy and real-time performance, and making it possible to create a hybrid system without any custom hardware, simply by programming the hardware to achieve the required functionality. In this paper, this possibility is explored by presenting a modular and efficient FPGA design of an in silico spiking neural network based on the Izhikevich model. The proposed system, prototypically implemented on a Xilinx Virtex 6 device, is able to simulate a fully connected network of up to 1,440 neurons, in real time, at a sampling rate of 10 kHz, which is reasonable for small- to medium-scale extracellular closed-loop experiments.
A Reward-Maximizing Spiking Neuron as a Bounded Rational Decision Maker.
Leibfried, Felix; Braun, Daniel A
2015-08-01
Rate distortion theory describes how to communicate relevant information most efficiently over a channel with limited capacity. One of the many applications of rate distortion theory is bounded rational decision making, where decision makers are modeled as information channels that transform sensory input into motor output under the constraint that their channel capacity is limited. Such a bounded rational decision maker can be thought to optimize an objective function that trades off the decision maker's utility or cumulative reward against the information processing cost measured by the mutual information between sensory input and motor output. In this study, we interpret a spiking neuron as a bounded rational decision maker that aims to maximize its expected reward under the computational constraint that the mutual information between the neuron's input and output is upper bounded. This abstract computational constraint translates into a penalization of the deviation between the neuron's instantaneous and average firing behavior. We derive a synaptic weight update rule for such a rate distortion optimizing neuron and show in simulations that the neuron efficiently extracts reward-relevant information from the input by trading off its synaptic strengths against the collected reward.
Schmitt, Michael
2004-09-01
We study networks of spiking neurons that use the timing of pulses to encode information. Nonlinear interactions model the spatial groupings of synapses on the neural dendrites and describe the computations performed at local branches. Within a theoretical framework of learning, we analyze the question of how many training examples these networks must receive to be able to generalize well. Bounds for this sample complexity of learning can be obtained in terms of a combinatorial parameter known as the pseudodimension. This dimension characterizes the computational richness of a neural network and is given in terms of the number of network parameters. Two types of feedforward architectures are considered: constant-depth networks and networks of unconstrained depth. We derive asymptotically tight bounds for each of these network types. Constant-depth networks are shown to have an almost linear pseudodimension, whereas the pseudodimension of general networks is quadratic. Networks of spiking neurons that use temporal coding are becoming increasingly important in practical tasks such as computer vision, speech recognition, and motor control. The question of how well these networks generalize from a given set of training examples is a central issue for their successful application as adaptive systems. The results show that, although coding and computation in these networks are quite different and in many cases more powerful, their generalization capabilities are at least as good as those of traditional neural network models.
Directory of Open Access Journals (Sweden)
Huu Hoang
2015-05-01
Full Text Available The inverse problem of estimating model parameters from brain spike data is ill-posed because of a huge mismatch in system complexity between the model and the brain, as well as the brain's non-stationary dynamics, and it needs a stochastic approach that finds the most likely solution among many possible solutions. In the present study, we developed a segmental Bayesian method to estimate the two parameters of interest, the gap-junctional (gc) and inhibitory (gi) conductances, from inferior olive spike data. Feature vectors were estimated for the spike data in a segment-wise fashion to compensate for the non-stationary firing dynamics. Hierarchical Bayesian estimation was conducted to estimate gc and gi for every spike segment, using a forward model constructed in the principal component analysis (PCA) space of the feature vectors, and to merge the segmental estimates into single estimates for every neuron. The segmental Bayesian estimation gave smaller fitting errors than the conventional Bayesian inference, which finds the estimates once across the entire spike data, or the minimum error method, which directly finds the closest match in the PCA space. The segmental Bayesian inference has the potential to overcome the problem of non-stationary dynamics and to resolve the ill-posedness of the inverse problem caused by the mismatch between the model and the brain, and it is a useful tool for evaluating parameters of interest for neuroscience from experimental spike train data.
Axonal propagation of simple and complex spikes in cerebellar Purkinje neurons.
Khaliq, Zayd M; Raman, Indira M
2005-01-12
In cerebellar Purkinje neurons, the reliability of propagation of high-frequency simple spikes and spikelets of complex spikes is likely to regulate inhibition of Purkinje target neurons. To test the extent to which a one-to-one correspondence exists between somatic and axonal spikes, we made dual somatic and axonal recordings from Purkinje neurons in mouse cerebellar slices. Somatic action potentials were recorded with a whole-cell pipette, and the corresponding axonal signals were recorded extracellularly with a loose-patch pipette. Propagation of spontaneous and evoked simple spikes was highly reliable, even at somatic firing rates of approximately 200 spikes/sec. In contrast, the maximal rate of successfully propagating spikelets of complex spikes ranged from approximately 375 Hz during somatic hyperpolarizations that silenced spontaneous firing to approximately 150 Hz during spontaneous activity. The probability of propagation of individual spikelets could be described quantitatively as a saturating function of spikelet amplitude, rate of rise, or preceding interspike interval. The results suggest that ion channels of Purkinje axons are adapted to produce extremely short refractory periods and that brief bursts of forward-propagating action potentials generated by complex spikes may contribute transiently to inhibition of postsynaptic neurons.
Operant conditioning of synaptic and spiking activity patterns in single hippocampal neurons.
Ishikawa, Daisuke; Matsumoto, Nobuyoshi; Sakaguchi, Tetsuya; Matsuki, Norio; Ikegaya, Yuji
2014-04-02
Learning is a process of plastic adaptation through which a neural circuit generates a more preferable outcome; however, at a microscopic level, little is known about how synaptic activity is patterned into a desired configuration. Here, we report that animals can generate a specific form of synaptic activity in a given neuron in the hippocampus. In awake, head-restrained mice, we applied electrical stimulation to the lateral hypothalamus, a reward-associated brain region, when whole-cell patch-clamped CA1 neurons exhibited spontaneous synaptic activity that met preset criteria. Within 15 min, the mice learned to frequently generate the excitatory synaptic input pattern that satisfied the criteria. This reinforcement learning of synaptic activity was not observed for inhibitory input patterns. When a burst unit activity pattern was conditioned in paired and nonpaired paradigms, the frequency of burst-spiking events increased and decreased, respectively. The burst reinforcement occurred in the conditioned neuron but not in other adjacent neurons; however, ripple field oscillations were concomitantly reinforced. Neural conditioning depended on activation of NMDA receptors and dopamine D1 receptors. Acutely stressed mice and depression model mice that were subjected to forced swimming failed to exhibit the neural conditioning. This learning deficit was rescued by repetitive treatment with fluoxetine, an antidepressant. Therefore, internally motivated animals are capable of routing an ongoing action potential series into a specific neural pathway of the hippocampal network.
Dynamics of Competition between Subnetworks of Spiking Neuronal Networks in the Balanced State
Lagzi, Fereshteh; Rotter, Stefan
2015-01-01
We explore and analyze the nonlinear switching dynamics of neuronal networks with non-homogeneous connectivity. The general significance of such transient dynamics for brain function is unclear; however, they have been implicated, for instance, in decision-making processes in perception and cognition. The network under study here is comprised of three subnetworks of either excitatory or inhibitory leaky integrate-and-fire neurons, of which two are of the same type. The synaptic weights are arranged to establish and maintain a balance between excitation and inhibition in the case of a constant external drive. Each subnetwork is randomly connected, where all neurons belonging to a particular population have the same in-degree and the same out-degree. Neurons in different subnetworks are also randomly connected with the same probability; however, depending on the type of the pre-synaptic neuron, the synaptic weight is scaled by a factor. We observed that for a certain range of the “within” versus “between” connection weights (the bifurcation parameter), the network activation spontaneously switches between the two subnetworks of the same type. This kind of dynamics has been termed “winnerless competition”, which also has a random component here. In our model, this phenomenon is well described by a set of coupled stochastic differential equations of Lotka-Volterra type that imply a competition between the subnetworks. The associated mean-field model shows the same dynamical behavior as observed in simulations of large networks comprising thousands of spiking neurons. The deterministic phase portrait is characterized by two attractors and a saddle node; its stochastic component is essentially given by the multiplicative inherent noise of the system. We find that the dwell time distribution of the active states is exponential, indicating that the noise drives the system randomly from one attractor to the other. A similar model for a larger number of populations might
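The reduced description above, stochastic Lotka-Volterra competition with multiplicative noise, can be sketched with a simple Euler-Maruyama integration. The competition coefficient, noise amplitude, and initial conditions below are illustrative assumptions, not values from the study.

```python
import math
import random

def simulate_lv(alpha=1.2, sigma=0.2, dt=0.01, steps=20000, seed=1):
    """Euler-Maruyama integration of two competing populations obeying
    stochastic Lotka-Volterra dynamics with multiplicative noise.
    Parameters and initial conditions are illustrative."""
    rng = random.Random(seed)
    x, y = 0.6, 0.5
    winner = []                    # index of the currently dominant population
    for _ in range(steps):
        dwx = rng.gauss(0.0, math.sqrt(dt))
        dwy = rng.gauss(0.0, math.sqrt(dt))
        x += x * (1.0 - x - alpha * y) * dt + sigma * x * dwx
        y += y * (1.0 - y - alpha * x) * dt + sigma * y * dwy
        x, y = max(x, 1e-6), max(y, 1e-6)   # keep rates non-negative
        winner.append(0 if x > y else 1)
    return winner

states = simulate_lv()
print(states.count(0), states.count(1))
```

With alpha > 1 the deterministic system has two stable attractors (one population dominant) separated by a saddle, so the noise term is what drives the occasional switches between them.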
Neuronal spike-train responses in the presence of threshold noise.
Coombes, S; Thul, R; Laudanski, J; Palmer, A R; Sumner, C J
2011-03-01
The variability of neuronal firing has been an intense topic of study for many years. From a modelling perspective it has often been studied in conductance based spiking models with the use of additive or multiplicative noise terms to represent channel fluctuations or the stochastic nature of neurotransmitter release. Here we propose an alternative approach using a simple leaky integrate-and-fire model with a noisy threshold. Initially, we develop a mathematical treatment of the neuronal response to periodic forcing using tools from linear response theory and use this to highlight how a noisy threshold can enhance downstream signal reconstruction. We further develop a more general framework for understanding the responses to large amplitude forcing based on a calculation of first passage times. This is ideally suited to understanding stochastic mode-locking, for which we numerically determine the Arnol'd tongue structure. An examination of data from regularly firing stellate neurons within the ventral cochlear nucleus, responding to sinusoidally amplitude modulated pure tones, shows tongue structures consistent with these predictions and highlights that stochastic, as opposed to deterministic, mode-locking is utilised at the level of the single stellate cell to faithfully encode periodic stimuli.
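A minimal version of a leaky integrate-and-fire neuron with a noisy threshold can be sketched as follows. Redrawing the threshold from a Gaussian after each spike is one simple implementation choice, and all parameter values are illustrative rather than taken from the paper.

```python
import random

def lif_noisy_threshold(I=1.5, tau=10.0, theta0=1.0, theta_sd=0.1,
                        dt=0.1, T=1000.0, seed=0):
    """Leaky integrate-and-fire neuron whose firing threshold is redrawn
    from a Gaussian after every spike. All parameters are illustrative."""
    rng = random.Random(seed)
    v, theta = 0.0, theta0
    spike_times = []
    for step in range(int(T / dt)):
        v += dt * (-v + I) / tau                 # leaky integration toward I
        if v >= theta:                           # noisy threshold crossing
            spike_times.append(step * dt)
            v = 0.0                              # reset the membrane
            theta = rng.gauss(theta0, theta_sd)  # draw the next threshold
    return spike_times

spikes = lif_noisy_threshold()
isis = [b - a for a, b in zip(spikes, spikes[1:])]
print(len(spikes), min(isis), max(isis))
```

Even with a constant suprathreshold drive, the threshold jitter alone produces variable interspike intervals, which is the basic mechanism the paper analyzes with linear response theory and first-passage-time calculations.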
Spiking synchronization regulated by noise in three types of Hodgkin-Huxley neuronal networks
International Nuclear Information System (INIS)
Zhang Zheng-Zhen; Zeng Shang-You; Tang Wen-Yan; Hu Jin-Lin; Zeng Shao-Wen; Ning Wei-Lian; Qiu Yi; Wu Hui-Si
2012-01-01
In this paper, we study spiking synchronization in three different types of Hodgkin-Huxley neuronal networks: small-world, regular, and random. All the neurons are subjected to subthreshold stimulus and external noise. It is found that in each of the neuronal networks there is an optimal strength of noise that induces maximal spiking synchronization. We further demonstrate that in each of the neuronal networks there is a range of synaptic conductance that induces the effect whereby an optimal strength of noise maximizes the spiking synchronization. Only when the magnitude of the synaptic conductance is moderate will the effect be considerable; if the synaptic conductance is small or large, the effect vanishes. As the connections between neurons increase, the synaptic conductance that maximizes the effect decreases. Therefore, we show quantitatively that noise-induced maximal synchronization in the Hodgkin-Huxley neuronal network is a general effect, regardless of the specific type of neuronal network.
SpikingLab: modelling agents controlled by Spiking Neural Networks in Netlogo.
Jimenez-Romero, Cristian; Johnson, Jeffrey
2017-01-01
The scientific interest attracted by Spiking Neural Networks (SNN) has led to the development of tools for the simulation and study of neuronal dynamics, ranging from phenomenological models to the more sophisticated and biologically accurate Hodgkin-and-Huxley-based and multi-compartmental models. However, despite the multiple features offered by neural modelling tools, their integration with environments for the simulation of robots and agents can be challenging and time-consuming. The implementation of artificial neural circuits to control robots generally involves the following tasks: (1) understanding the simulation tools, (2) creating the neural circuit in the neural simulator, (3) linking the simulated neural circuit with the environment of the agent and (4) programming the appropriate interface in the robot or agent to use the neural controller. The accomplishment of the above-mentioned tasks can be challenging, especially for undergraduate students or novice researchers. This paper presents an alternative tool which facilitates the simulation of simple SNN circuits using the multi-agent simulation and programming environment Netlogo (educational software that simplifies the study and experimentation of complex systems). The engine proposed and implemented in Netlogo for the simulation of a functional model of SNN is a simplification of integrate-and-fire (I&F) models. The characteristics of the engine (including neuronal dynamics, STDP learning and synaptic delay) are demonstrated through the implementation of an agent representing an artificial insect controlled by a simple neural circuit. The setup of the experiment and its outcomes are described in this work.
How many neurons can we see with current spike sorting algorithms?
Pedreira, Carlos; Martinez, Juan; Ison, Matias J; Quian Quiroga, Rodrigo
2012-10-15
Recent studies highlighted the disagreement between the typical number of neurons observed with extracellular recordings and the number to be expected based on anatomical and physiological considerations. This disagreement has been mainly attributed to the presence of sparsely firing neurons. However, it is also possible that it is due to limitations of the spike sorting algorithms used to process the data. To address this issue, we used realistic simulations of extracellular recordings and found a relatively poor spike sorting performance for simulations containing a large number of neurons. In fact, the number of correctly identified neurons for single-channel recordings showed an asymptotic behavior, saturating at about 8-10 units when up to 20 units were present in the data. This performance was significantly poorer for neurons with low firing rates, as these units were twice as likely to be missed as the ones with high firing rates in simulations containing many neurons. These results uncover one of the main reasons for the relatively low number of neurons found in extracellular recordings and also stress the importance of further development of spike sorting algorithms.
Versatile Networks of Simulated Spiking Neurons Displaying Winner-Take-All Behavior
Directory of Open Access Journals (Sweden)
Yanqing Chen
2013-03-01
Full Text Available We describe simulations of large-scale networks of excitatory and inhibitory spiking neurons that can generate dynamically stable winner-take-all (WTA) behavior. The network connectivity is a variant of center-surround architecture that we call center-annular-surround (CAS). In this architecture each neuron is excited by nearby neighbors and inhibited by more distant neighbors in an annular-surround region. The neural units of these networks simulate conductance-based spiking neurons that interact via mechanisms susceptible to both short-term synaptic plasticity and STDP. We show that such CAS networks display robust WTA behavior unlike the center-surround networks and other control architectures that we have studied. We find that a large-scale network of spiking neurons with separate populations of excitatory and inhibitory neurons can give rise to smooth maps of sensory input. In addition, we show that a humanoid Brain-Based-Device (BBD) under the control of a spiking WTA neural network can learn to reach to target positions in its visual field, thus demonstrating the acquisition of sensorimotor coordination.
Versatile networks of simulated spiking neurons displaying winner-take-all behavior.
Chen, Yanqing; McKinstry, Jeffrey L; Edelman, Gerald M
2013-01-01
We describe simulations of large-scale networks of excitatory and inhibitory spiking neurons that can generate dynamically stable winner-take-all (WTA) behavior. The network connectivity is a variant of center-surround architecture that we call center-annular-surround (CAS). In this architecture each neuron is excited by nearby neighbors and inhibited by more distant neighbors in an annular-surround region. The neural units of these networks simulate conductance-based spiking neurons that interact via mechanisms susceptible to both short-term synaptic plasticity and STDP. We show that such CAS networks display robust WTA behavior unlike the center-surround networks and other control architectures that we have studied. We find that a large-scale network of spiking neurons with separate populations of excitatory and inhibitory neurons can give rise to smooth maps of sensory input. In addition, we show that a humanoid brain-based-device (BBD) under the control of a spiking WTA neural network can learn to reach to target positions in its visual field, thus demonstrating the acquisition of sensorimotor coordination.
Directory of Open Access Journals (Sweden)
Guo-Sheng Yi
Full Text Available Dynamic spike threshold plays a critical role in neuronal input-output relations. In many neurons, the threshold potential depends on the rate of membrane potential depolarization (dV/dt) preceding a spike. There are two basic classes of neural excitability, i.e., Type I and Type II, according to input-output properties. Although the dynamical and biophysical basis of their spike initiation has been established, the spike threshold dynamic of each cell type has not been well described. Here, we use a biophysical model to investigate how spike threshold depends on dV/dt in the two types of neuron. It is observed that the Type II spike threshold is more depolarized and more sensitive to dV/dt than that of Type I. With phase plane analysis, we show that each threshold dynamic arises from a different separatrix and different K+ current kinetics. By analyzing subthreshold properties of membrane currents, we find that the activation of hyperpolarizing current prior to spike initiation is a major factor that regulates the threshold dynamics. The outward K+ current in the Type I neuron does not activate at perithreshold potentials, which makes its spike threshold insensitive to dV/dt. The Type II K+ current activates prior to spike initiation, and there is a large net hyperpolarizing current at perithreshold potentials, which results in a depolarized threshold as well as a pronounced threshold dynamic. These predictions are further tested in several other functionally equivalent cases of neural excitability. Our study provides a fundamental description of how intrinsic biophysical properties contribute to the threshold dynamics in Type I and Type II neurons, which could decipher their significant functions in neural coding.
Synaptic Plasticity and Spike Synchronisation in Neuronal Networks
Borges, Rafael R.; Borges, Fernando S.; Lameu, Ewandson L.; Protachevicz, Paulo R.; Iarosz, Kelly C.; Caldas, Iberê L.; Viana, Ricardo L.; Macau, Elbert E. N.; Baptista, Murilo S.; Grebogi, Celso; Batista, Antonio M.
2017-12-01
Brain plasticity, also known as neuroplasticity, is a fundamental mechanism of neuronal adaptation in response to changes in the environment or due to brain injury. In this review, we show our results on the effects of synaptic plasticity on neuronal networks composed of Hodgkin-Huxley neurons. We show that the final topology of the evolved network depends crucially on the ratio between the strengths of the inhibitory and excitatory synapses. Excitation of the same order as inhibition reveals an evolved network that presents the rich-club phenomenon, well known to exist in the brain. For initial networks with considerably larger inhibitory strengths, we observe the emergence of a complex evolved topology, where neurons are sparsely connected to other neurons, also a typical topology of the brain. The presence of noise enhances the strength of both types of synapses, but only if the initial network has synapses of both natures with similar strengths. Finally, we show how the synchronous behaviour of the evolved network reflects its evolved topology.
Pulsed neural networks consisting of single-flux-quantum spiking neurons
International Nuclear Information System (INIS)
Hirose, T.; Asai, T.; Amemiya, Y.
2007-01-01
An inhibitory pulsed neural network was developed for brain-like information processing, by using single-flux-quantum (SFQ) circuits. It consists of spiking neuron devices that are coupled to each other through all-to-all inhibitory connections. The network selects neural activity. The operation of the neural network was confirmed by computer simulation. SFQ neuron devices can imitate the operation of the inhibition phenomenon of neural networks.
Interplay between Graph Topology and Correlations of Third Order in Spiking Neuronal Networks.
Directory of Open Access Journals (Sweden)
Stojan Jovanović
2016-06-01
Full Text Available The study of processes evolving on networks has recently become a very popular research field, not only because of the rich mathematical theory that underpins it, but also because of its many possible applications, a number of them in the field of biology. Indeed, molecular signaling pathways, gene regulation, predator-prey interactions and the communication between neurons in the brain can be seen as examples of networks with complex dynamics. The properties of such dynamics depend largely on the topology of the underlying network graph. In this work, we want to answer the following question: Knowing network connectivity, what can be said about the level of third-order correlations that will characterize the network dynamics? We consider a linear point process as a model for pulse-coded, or spiking, activity in a neuronal network. Using recent results from the theory of such processes, we study third-order correlations between spike trains in such a system and explain which features of the network graph (i.e., which topological motifs) are responsible for their emergence. Comparing two different models of network topology-random networks of Erdős-Rényi type and networks with highly interconnected hubs-we find that, in random networks, the average measure of third-order correlations does not depend on the local connectivity properties, but rather on global parameters, such as the connection probability. This, however, ceases to be the case in networks with a geometric out-degree distribution, where topological specificities have a strong impact on average correlations.
Interplay between Graph Topology and Correlations of Third Order in Spiking Neuronal Networks.
Jovanović, Stojan; Rotter, Stefan
2016-06-01
The study of processes evolving on networks has recently become a very popular research field, not only because of the rich mathematical theory that underpins it, but also because of its many possible applications, a number of them in the field of biology. Indeed, molecular signaling pathways, gene regulation, predator-prey interactions and the communication between neurons in the brain can be seen as examples of networks with complex dynamics. The properties of such dynamics depend largely on the topology of the underlying network graph. In this work, we want to answer the following question: Knowing network connectivity, what can be said about the level of third-order correlations that will characterize the network dynamics? We consider a linear point process as a model for pulse-coded, or spiking activity in a neuronal network. Using recent results from theory of such processes, we study third-order correlations between spike trains in such a system and explain which features of the network graph (i.e. which topological motifs) are responsible for their emergence. Comparing two different models of network topology-random networks of Erdős-Rényi type and networks with highly interconnected hubs-we find that, in random networks, the average measure of third-order correlations does not depend on the local connectivity properties, but rather on global parameters, such as the connection probability. This, however, ceases to be the case in networks with a geometric out-degree distribution, where topological specificities have a strong impact on average correlations.
International Nuclear Information System (INIS)
Gu Hua-Guang; Chen Sheng-Gen; Li Yu-Ye
2015-01-01
We investigated the synchronization dynamics of a coupled neuronal system composed of two identical Chay model neurons. The Chay model showed coexisting period-1 and period-2 bursting patterns as a parameter and the initial values were varied. We simulated multiple periodic and chaotic bursting patterns with non-synchronization (NS), burst-phase synchronization (BS), spike-phase synchronization (SS), complete synchronization (CS), and lag-synchronization states. When the coexisting behavior is near period-2 bursting, the synchronization states of the coupled system follow a complex sequence of transitions that begins with transitions between BS and SS, moves to transitions between CS and SS, and ends at CS. Most initial values lead to the CS state of period-2 bursting, while only a few lead to the CS state of period-1 bursting. When the coexisting behavior is near period-1 bursting, the transitions begin with NS, move to transitions between SS and BS, then to transitions between SS and CS, and finally to CS. Most initial values lead to the CS state of period-1 bursting, but a few lead to the CS state of period-2 bursting. The BS was identified as chaos synchronization. The patterns for NS and for transitions between BS and SS are insensitive to initial values; the patterns for transitions between CS and SS and for the CS state are sensitive to them. The number of spikes per burst of non-CS bursting increases with increasing coupling strength. These results not only reveal the initial-value- and parameter-dependent synchronization transitions of coupled systems with coexisting behaviors, but also facilitate the interpretation of the various bursting patterns and synchronization transitions generated in the nervous system with weak coupling strength.
Modulation of the spike activity of neocortex neurons during a conditioned reflex.
Storozhuk, V M; Sanzharovskii, A V; Sachenko, V V; Busel, B I
2000-01-01
Experiments were conducted on cats to study the effects of iontophoretic application of glutamate and a number of modulators on the spike activity of neurons in the sensorimotor cortex during a conditioned reflex. These studies showed that glutamate, as well as exerting a direct influence on neuron spike activity, also had a delayed facilitatory action lasting 10-20 min after iontophoresis was finished. Adrenomimetics were found to have a double modulatory effect on intracortical glutamate connections: inhibitory and facilitatory effects were mediated by beta1 and beta2 adrenoceptors respectively. Although dopamine, like glutamate, facilitated neuron spike activity during the period of application, the simultaneous facilitatory actions of glutamate and L-DOPA were accompanied by occlusion of spike activity, and simultaneous application of glutamate and haloperidol suppressed spike activity associated with the conditioned reflex response. Facilitation thus appears to show a significant level of dependence on metabotropic glutamate receptors which, like dopamine receptors, are linked to the intracellular medium via Gi proteins.
Code-specific learning rules improve action selection by populations of spiking neurons.
Friedrich, Johannes; Urbanczik, Robert; Senn, Walter
2014-08-01
Population coding is widely regarded as a key mechanism for achieving reliable behavioral decisions. We previously introduced reinforcement learning for population-based decision making by spiking neurons. Here we generalize population reinforcement learning to spike-based plasticity rules that take account of the postsynaptic neural code. We consider spike/no-spike, spike count and spike latency codes. The multi-valued and continuous-valued features in the postsynaptic code allow for a generalization of binary decision making to multi-valued decision making and continuous-valued action selection. We show that code-specific learning rules speed up learning both for the discrete classification and the continuous regression tasks. The suggested learning rules also speed up learning with increasing population size, in contrast to standard reinforcement learning rules. Continuous action selection is further shown to explain realistic learning speeds in the Morris water maze. Finally, we introduce the concept of action perturbation, as opposed to the classical weight- or node-perturbation, as an exploration mechanism underlying reinforcement learning. Exploration in the action space greatly increases the speed of learning as compared to exploration in the neuron or weight space.
Directory of Open Access Journals (Sweden)
Alex Roxin
2011-03-01
Full Text Available Neuronal network models often assume a fixed probability of connection between neurons. This assumption leads to random networks with binomial in-degree and out-degree distributions, which are relatively narrow. Here I study the effect of broad degree distributions on network dynamics by interpolating between a binomial and a truncated power-law distribution for the in-degree and out-degree independently. This is done both for an inhibitory network (I network) as well as for the recurrent excitatory connections in a network of excitatory and inhibitory neurons (EI network). In both cases, increasing the width of the in-degree distribution affects the global state of the network by driving transitions between asynchronous behavior and oscillations. This effect is reproduced in a simplified rate model which includes the heterogeneity in neuronal input due to the in-degree of cells. On the other hand, broadening the out-degree distribution is shown to increase the fraction of common inputs to pairs of neurons. This leads to increases in the amplitude of the cross-correlation (CC) of synaptic currents. In the case of the I network, despite strong oscillatory CCs in the currents, CCs of the membrane potential are low due to filtering and reset effects, leading to very weak CCs of the spike count. In the asynchronous regime of the EI network, broadening the out-degree increases the amplitude of CCs in the recurrent excitatory currents, while the CC of the total current is essentially unaffected, as are pairwise spiking correlations. This is due to a dynamic balance between excitatory and inhibitory synaptic currents. In the oscillatory regime, changes in the out-degree can have a large effect on spiking correlations and even on the qualitative dynamical state of the network.
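The contrast between a narrow binomial in-degree distribution and a broad truncated power law can be sketched numerically. The sampling scheme, exponent, and cutoff below are illustrative assumptions, not the interpolation procedure used in the paper.

```python
import random

def sample_in_degrees(n, k_mean, gamma=None, k_max=None, seed=0):
    """Sample an in-degree sequence for n neurons: binomial when gamma is
    None, otherwise a truncated power law p(k) ~ k**(-gamma) on [1, k_max]
    rescaled to the target mean degree. Illustrative construction only."""
    rng = random.Random(seed)
    if gamma is None:
        # binomial in-degree: each of n potential inputs connects with prob p
        p = k_mean / n
        return [sum(rng.random() < p for _ in range(n)) for _ in range(n)]
    weights = [k ** (-gamma) for k in range(1, k_max + 1)]
    ks = rng.choices(range(1, k_max + 1), weights=weights, k=n)
    scale = k_mean / (sum(ks) / n)        # rescale to the target mean degree
    return [max(1, round(k * scale)) for k in ks]

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

binom = sample_in_degrees(2000, 100)
plaw = sample_in_degrees(2000, 100, gamma=2.0, k_max=1000)
print(variance(binom), variance(plaw))
```

Both sequences have the same mean in-degree, but the power-law sequence is far wider, which is the kind of heterogeneity in neuronal input that the rate model in the abstract accounts for.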
International Nuclear Information System (INIS)
Moreno-Bote, Ruben; Parga, Nestor
2006-01-01
An analytical description of the response properties of simple but realistic neuron models in the presence of noise is still lacking. We determine, completely up to second order, the firing statistics of single and pairs of leaky integrate-and-fire neurons receiving common, slowly filtered white noise. In particular, the auto- and cross-correlation functions of the output spike trains of pairs of cells are obtained from an improvement of the adiabatic approximation introduced previously by Moreno-Bote and Parga [Phys. Rev. Lett. 92, 028102 (2004)]. These two functions define the firing variability and firing synchronization between neurons, and are of great importance for understanding neuronal communication.
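The setup analyzed here can be reproduced numerically in a few lines. A minimal sketch, assuming illustrative parameter values (not those of the paper): two leaky integrate-and-fire neurons driven by partially shared Gaussian noise, from which spike coincidences can be counted:

```python
import math
import random

def lif_pair(n_steps=200000, dt=0.1, tau=20.0, v_th=1.0, v_reset=0.0,
             mu=0.055, sigma=0.1, c=0.5, seed=0):
    """Euler-Maruyama simulation of two leaky integrate-and-fire neurons
    sharing a fraction c of their input noise (all parameter values are
    assumptions for the sketch, not taken from the paper)."""
    rng = random.Random(seed)
    v = [0.0, 0.0]
    spikes = ([], [])
    sq = math.sqrt(dt)
    for step in range(n_steps):
        z_common = rng.gauss(0.0, 1.0)
        for i in (0, 1):
            z = math.sqrt(c) * z_common + math.sqrt(1.0 - c) * rng.gauss(0.0, 1.0)
            v[i] += dt * (mu - v[i] / tau) + sigma * sq * z
            if v[i] >= v_th:          # threshold crossing: spike and reset
                v[i] = v_reset
                spikes[i].append(step * dt)
    return spikes

def coincidences(a, b, window=1.0):
    """Count spikes of train a that have a partner in b within +/- window ms."""
    j, n = 0, 0
    for t in a:
        while j < len(b) and b[j] < t - window:
            j += 1
        if j < len(b) and abs(b[j] - t) <= window:
            n += 1
    return n

spikes = lif_pair()
```

From such simulated trains, the empirical auto- and cross-correlation functions can be estimated and compared against the second-order analytical results.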
Fluctuations and information filtering in coupled populations of spiking neurons with adaptation.
Deger, Moritz; Schwalger, Tilo; Naud, Richard; Gerstner, Wulfram
2014-12-01
Finite-sized populations of spiking elements are fundamental to brain function but also are used in many areas of physics. Here we present a theory of the dynamics of finite-sized populations of spiking units, based on a quasirenewal description of neurons with adaptation. We derive an integral equation with colored noise that governs the stochastic dynamics of the population activity in response to time-dependent stimulation and calculate the spectral density in the asynchronous state. We show that systems of coupled populations with adaptation can generate a frequency band in which sensory information is preferentially encoded. The theory is applicable to fully as well as randomly connected networks and to leaky integrate-and-fire as well as to generalized spiking neurons with adaptation on multiple time scales.
A complex-valued firing-rate model that approximates the dynamics of spiking networks.
Directory of Open Access Journals (Sweden)
Evan S Schaffer
2013-10-01
Full Text Available Firing-rate models provide an attractive approach for studying large neural networks because they can be simulated rapidly and are amenable to mathematical analysis. Traditional firing-rate models assume a simple form in which the dynamics are governed by a single time constant. These models fail to replicate certain dynamic features of populations of spiking neurons, especially those involving synchronization. We present a complex-valued firing-rate model derived from an eigenfunction expansion of the Fokker-Planck equation and apply it to the linear, quadratic and exponential integrate-and-fire models. Despite being almost as simple as a traditional firing-rate description, this model can reproduce firing-rate dynamics due to partial synchronization of the action potentials in a spiking model, and it successfully predicts the transition to spike synchronization in networks of coupled excitatory and inhibitory neurons.
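The qualitative advantage of a complex-valued rate variable can be seen in a toy caricature (my own illustration, not the eigenfunction-derived model of the paper): giving the rate a complex decay constant produces damped ringing in the response magnitude, the kind of transient synchronization dynamics that a single-real-time-constant model cannot express.

```python
import math

def complex_rate_step_response(tau=10.0, omega=2 * math.pi * 0.05,
                               drive=1.0, dt=0.01, t_end=200.0):
    """Toy complex-valued rate unit  dr/dt = (-1/tau + i*omega) * r + drive.
    After a step input, |r| shows damped ringing at frequency omega.
    (A didactic caricature with assumed parameters, not the paper's model.)"""
    lam = complex(-1.0 / tau, omega)
    r = 0j
    trace = []
    for _ in range(int(t_end / dt)):
        r += dt * (lam * r + drive)      # forward-Euler integration
        trace.append(abs(r))
    return trace

trace = complex_rate_step_response()
steady = trace[-1]                       # settles at |drive / (1/tau - i*omega)|
overshoot = max(trace)                   # ringing peak exceeds the steady state
```

A real-valued rate model with one time constant would relax monotonically; here the magnitude overshoots and rings before settling, mimicking partial spike synchronization.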
Reconstructing stimuli from the spike-times of leaky integrate and fire neurons
Directory of Open Access Journals (Sweden)
Sebastian eGerwinn
2011-02-01
Full Text Available Reconstructing stimuli from the spike-trains of neurons is an important approach for understanding the neural code. One of the difficulties associated with this task is that signals which are varying continuously in time are encoded into sequences of discrete events or spikes. An important problem is to determine how much information about the continuously varying stimulus can be extracted from the time-points at which spikes were observed, especially if these time-points are subject to some sort of randomness. For the special case of spike trains generated by leaky integrate and fire neurons, noise can be introduced by allowing variations in the threshold every time a spike is released. A simple decoding algorithm previously derived for the noiseless case can be extended to the stochastic case, but turns out to be biased. Here, we review a solution to this problem, by presenting a simple yet efficient algorithm which greatly reduces the bias, and therefore leads to better decoding performance in the stochastic case.
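For the noiseless case, the decoding idea can be sketched very simply with a perfect (non-leaky) integrate-and-fire encoder, where the stimulus integral between consecutive spikes always equals the threshold. This is a sketch of the principle only; the paper treats the leaky neuron with a stochastic threshold:

```python
def encode_if(stimulus, dt, theta):
    """Perfect (non-leaky) integrate-and-fire encoder: spike and reset
    whenever the running integral of the stimulus reaches threshold theta."""
    acc, spikes = 0.0, []
    for i, s in enumerate(stimulus):
        acc += s * dt
        if acc >= theta - 1e-12:      # small guard against float rounding
            acc -= theta
            spikes.append((i + 1) * dt)
    return spikes

def decode(spikes, theta):
    """Between consecutive spikes the stimulus integral equals theta, so the
    average stimulus on that interval is theta / ISI (piecewise-constant)."""
    return [(t0, t1, theta / (t1 - t0)) for t0, t1 in zip(spikes, spikes[1:])]

dt, theta = 0.001, 0.05
stim = [1.0] * 1000 + [2.0] * 1000    # step stimulus
spikes = encode_if(stim, dt, theta)
est = decode(spikes, theta)
```

With threshold noise, exactly this estimator becomes biased, which is the problem the abstract's bias-reducing algorithm addresses.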
Sadeh, Sadra; Rotter, Stefan
2014-01-01
Neurons in the primary visual cortex are more or less selective for the orientation of a light bar used for stimulation. A broad distribution of individual grades of orientation selectivity has in fact been reported in all species. A possible reason for emergence of broad distributions is the recurrent network within which the stimulus is being processed. Here we compute the distribution of orientation selectivity in randomly connected model networks that are equipped with different spatial patterns of connectivity. We show that, for a wide variety of connectivity patterns, a linear theory based on firing rates accurately approximates the outcome of direct numerical simulations of networks of spiking neurons. Distance dependent connectivity in networks with a more biologically realistic structure does not compromise our linear analysis, as long as the linearized dynamics, and hence the uniform asynchronous irregular activity state, remain stable. We conclude that linear mechanisms of stimulus processing are indeed responsible for the emergence of orientation selectivity and its distribution in recurrent networks with functionally heterogeneous synaptic connectivity.
International Nuclear Information System (INIS)
Yoshioka, Masahiko
2002-01-01
We study associative memory neural networks of the Hodgkin-Huxley type of spiking neurons, in which multiple periodic spatiotemporal patterns of spike timing are memorized as limit-cycle-type attractors. In encoding the spatiotemporal patterns, we assume spike-timing-dependent synaptic plasticity with an asymmetric time window. Analysis of the periodic solution of the retrieval state reveals that if the area of the negative part of the time window is equal to that of the positive part, then crosstalk among encoded patterns vanishes. A phase transition due to the loss of stability of the periodic solution is observed when we assume a fast α function for direct interaction among neurons. In order to evaluate the critical point of this phase transition, we employ Floquet theory, in which the stability problem of an infinite number of spiking neurons interacting through α functions is reduced to an eigenvalue problem for a finite-size matrix. Numerical integration of the single-body dynamics yields the explicit value of the matrix, which enables us to determine the critical point of the phase transition with a high degree of precision.
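The balance condition highlighted above, that crosstalk vanishes when the depression and potentiation lobes of the STDP window have equal area, is easy to check numerically for the common asymmetric exponential window (all parameter values here are hypothetical):

```python
import math

A_PLUS, TAU_PLUS = 1.0, 20.0            # potentiation lobe (pre before post)
A_MINUS, TAU_MINUS = 0.6, 100.0 / 3.0   # depression lobe, tuned so areas match

def stdp_window(dt):
    """Asymmetric exponential STDP window W(dt), with dt = t_post - t_pre."""
    if dt > 0:
        return A_PLUS * math.exp(-dt / TAU_PLUS)
    return -A_MINUS * math.exp(dt / TAU_MINUS)

# analytic lobe areas: the integral of A*exp(-|t|/tau) over a half-line is A*tau
pot_area = A_PLUS * TAU_PLUS
dep_area = A_MINUS * TAU_MINUS

# numerical cross-check of the same two areas by Riemann summation
h = 0.01
num_pot = sum(stdp_window(k * h) for k in range(1, 100000)) * h
num_dep = -sum(stdp_window(-k * h) for k in range(1, 100000)) * h
```

With `A_MINUS * TAU_MINUS == A_PLUS * TAU_PLUS` the window is area-balanced, which is the regime in which the analysis above predicts vanishing crosstalk.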
Phasic spike patterning in rat supraoptic neurones in vivo and in vitro
Sabatier, Nancy; Brown, Colin H; Ludwig, Mike; Leng, Gareth
2004-01-01
In vivo, most vasopressin cells of the hypothalamic supraoptic nucleus fire action potentials in a ‘phasic’ pattern when the systemic osmotic pressure is elevated, while most oxytocin cells fire continuously. The phasic firing pattern is believed to arise as a consequence of intrinsic activity-dependent changes in membrane potential, and these have been extensively studied in vitro. Here we analysed the discharge patterning of supraoptic nucleus neurones in vivo, to infer the characteristics of the post-spike sequence of hyperpolarization and depolarization from the observed spike patterning. We then compared patterning in phasic cells in vivo and in vitro, and we found systematic differences in the interspike interval distributions, and in other statistical parameters that characterized activity patterns within bursts. Analysis of hazard functions (probability of spike initiation as a function of time since the preceding spike) revealed that phasic firing in vitro appears consistent with a regenerative process arising from a relatively slow, late depolarizing afterpotential that approaches or exceeds spike threshold. By contrast, in vivo activity appears to be dominated by stochastic rather than deterministic mechanisms, and appears consistent with a relatively early and fast depolarizing afterpotential that modulates the probability that random synaptic input exceeds spike threshold. Despite superficial similarities in the phasic firing patterns observed in vivo and in vitro, there are thus fundamental differences in the underlying mechanisms. PMID:15146047
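The hazard-function analysis used in this study is straightforward to reproduce from interspike intervals. A sketch, with arbitrary bin width and rate: for a purely stochastic, Poisson-like process the estimated hazard is flat, whereas a deterministic regenerative process would show a sharp peak near its preferred interval, which is the distinction the abstract draws between in vivo and in vitro firing.

```python
import random

def hazard_function(isis, bin_width, n_bins):
    """Estimate the hazard h(t): the probability of a spike in
    [t, t + bin_width) given no spike up to t, from a list of
    interspike intervals."""
    counts = [0] * n_bins
    for isi in isis:
        b = int(isi / bin_width)
        if b < n_bins:
            counts[b] += 1
    hazard, survivors = [], len(isis)
    for c in counts:
        hazard.append(c / survivors if survivors > 0 else 0.0)
        survivors -= c
    return hazard

# Poisson control: exponential ISIs at 10 spikes/s give a flat hazard
rng = random.Random(1)
isis = [rng.expovariate(10.0) for _ in range(200000)]
haz = hazard_function(isis, bin_width=0.005, n_bins=20)
```

For exponential ISIs at rate r, every bin of the estimated hazard should sit near 1 - exp(-r * bin_width).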
Spiking Regularity and Coherence in Complex Hodgkin–Huxley Neuron Networks
International Nuclear Information System (INIS)
Zhi-Qiang, Sun; Ping, Xie; Wei, Li; Peng-Ye, Wang
2010-01-01
We study the effects of the strength of coupling between neurons on the spiking regularity and coherence in a complex network of randomly connected Hodgkin–Huxley neurons driven by colored noise. It is found that for a given topology realization and colored noise correlation time, there exists an optimal strength of coupling at which the spiking regularity of the network reaches its best level. Moreover, when the temporal regularity reaches its best level, the spatial coherence of the system has already increased to a relatively high level. In addition, for a given number of neurons and noise correlation time, the values of average regularity and spatial coherence at the optimal strength of coupling are nearly independent of the topology realization. Furthermore, there exists an optimal value of colored noise correlation time at which the spiking regularity can reach its best level. These results may be helpful for understanding real neural systems.
Fractal characterization of acupuncture-induced spike trains of rat WDR neurons
International Nuclear Information System (INIS)
Chen, Yingyuan; Guo, Yi; Wang, Jiang; Hong, Shouhai; Wei, Xile; Yu, Haitao; Deng, Bin
2015-01-01
Highlights: •Fractal analysis is a valuable tool for measuring MA-induced neural activities. •In the course of the experiments, the spike trains display different fractal properties. •The fractal properties reflect the long-term modulation of MA on WDR neurons. •The results may explain the long-lasting effects induced by acupuncture. -- Abstract: Experimental and clinical studies have shown that manual acupuncture (MA) can evoke multiple responses in various neural regions. Characterising the neuronal activities in these regions may provide deeper insights into acupuncture mechanisms. This paper used fractal analysis to investigate MA-induced spike trains of Wide Dynamic Range (WDR) neurons in the rat spinal dorsal horn, an important relay station and integral component in the processing of acupuncture information. The Allan factor and the Fano factor were utilized to test whether the spike trains were fractal, and the Allan factor was used to evaluate the scaling exponents and Hurst exponents. It was found that these two fractal exponents before and during MA differed significantly. During MA, the scaling exponents of WDR neurons were regulated within a small range, indicating a special fractal pattern. The neuronal activities were long-range correlated over multiple time scales. The scaling exponents during and after MA were similar, suggesting that the long-range correlations not only appeared during MA but also persisted after withdrawal of the needle. Our results show that fractal analysis is a useful tool for measuring acupuncture effects. MA can modulate neuronal activities whose fractal properties change as time proceeds. This evolution of fractal dynamics over the course of MA experiments may explain, at the level of the neuron, why the effects of MA observed experimentally and clinically are complex, time-evolving, and long-lasting, even persisting for some time after stimulation.
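The Fano-factor side of such an analysis can be sketched as follows (a minimal implementation of my own; window sizes are arbitrary choices). For a Poisson train the Fano factor stays near 1 at every counting window, whereas fractal spike trains like those reported here show F(T) growing as a power of T:

```python
import random

def fano_factors(spike_times, t_total, window_sizes):
    """Fano factor F(T) = variance/mean of spike counts in windows of
    length T. F(T) stays near 1 for a Poisson train at every T, while
    fractal (long-range correlated) trains show F(T) growing with T."""
    out = []
    for T in window_sizes:
        n_win = int(t_total / T)
        counts = [0] * n_win
        for t in spike_times:
            w = int(t / T)
            if w < n_win:
                counts[w] += 1
        m = sum(counts) / n_win
        v = sum((c - m) ** 2 for c in counts) / n_win
        out.append(v / m)
    return out

# Poisson control train at 20 spikes/s for 1000 s
rng = random.Random(7)
t, times = 0.0, []
while t < 1000.0:
    t += rng.expovariate(20.0)
    times.append(t)
f = fano_factors(times, 1000.0, [0.1, 1.0, 10.0])
```

Applying the same estimator to recorded WDR spike trains and fitting log F(T) against log T yields the scaling exponent discussed in the abstract.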
Spikes and matter inhomogeneities in massless scalar field models
International Nuclear Information System (INIS)
Coley, A A; Lim, W C
2016-01-01
We shall discuss the general relativistic generation of spikes in a massless scalar field or stiff perfect fluid model. We first investigate orthogonally transitive (OT) G2 stiff fluid spike models both heuristically and numerically, and give a new exact OT G2 stiff fluid spike solution. We then present a new two-parameter family of non-OT G2 stiff fluid spike solutions, obtained by generalizing non-OT G2 vacuum spike solutions to the stiff fluid case by applying Geroch's transformation on a Jacobs seed. The dynamics of these new stiff fluid spike solutions is qualitatively different from that of the vacuum spike solutions in that the matter (stiff fluid) feels the spike directly and the stiff fluid spike solution can end up with a permanent spike. We then derive the evolution equations of non-OT G2 stiff fluid models, including a second perfect fluid, in full generality, and briefly discuss some of their qualitative properties and their potential numerical analysis. Finally, we discuss how a fluid, and especially a stiff fluid or massless scalar field, affects the physics of the generation of spikes.
A network of spiking neurons that can represent interval timing: mean field analysis.
Gavornik, Jeffrey P; Shouval, Harel Z
2011-04-01
Despite the vital importance of our ability to accurately process and encode temporal information, the underlying neural mechanisms are largely unknown. We have previously described a theoretical framework that explains how temporal representations, similar to those reported in the visual cortex, can form in locally recurrent cortical networks as a function of reward modulated synaptic plasticity. This framework allows networks of both linear and spiking neurons to learn the temporal interval between a stimulus and paired reward signal presented during training. Here we use a mean field approach to analyze the dynamics of non-linear stochastic spiking neurons in a network trained to encode specific time intervals. This analysis explains how recurrent excitatory feedback allows a network structure to encode temporal representations.
Transmitter modulation of spike-evoked calcium transients in arousal related neurons
DEFF Research Database (Denmark)
Kohlmeier, Kristi Anne; Leonard, Christopher S
2006-01-01
Nitric oxide synthase (NOS)-containing cholinergic neurons in the laterodorsal tegmentum (LDT) influence behavioral and motivational states through their projections to the thalamus, ventral tegmental area and a brainstem 'rapid eye movement (REM)-induction' site. Action potential-evoked intracellular calcium transients dampen excitability and stimulate NO production in these neurons. In this study, we investigated the action of several arousal-related neurotransmitters and the role of specific calcium channels in these LDT Ca(2+)-transients by simultaneous whole-cell recording and calcium … of cholinergic LDT neurons, and that inhibition of spike-evoked Ca(2+)-transients is a common action of neurotransmitters that also activate GIRK channels in these neurons. Because spike-evoked calcium influx dampens excitability, our findings suggest that these 'inhibitory' transmitters could boost firing rate …
Mechanisms of Winner-Take-All and Group Selection in Neuronal Spiking Networks.
Chen, Yanqing
2017-01-01
A major function of central nervous systems is to discriminate different categories or types of sensory input. Neuronal networks accomplish such tasks by learning different sensory maps at several stages of the neural hierarchy, such that different neurons fire selectively to reflect different internal or external patterns and states. The exact mechanisms of such map formation processes in the brain are not completely understood. Here we study the mechanism by which a simple recurrent/reentrant neuronal network accomplishes group selection and discrimination of different inputs in order to generate sensory maps. We describe the conditions and mechanism of the transition from a rhythmic epileptic state (in which all neurons fire synchronously and indiscriminately to any input) to a winner-take-all state in which only a subset of neurons fires for a specific input. We prove an analytic condition under which a stable bump solution and a winner-take-all state can emerge from the local recurrent excitation-inhibition interactions in a three-layer spiking network with distinct excitatory and inhibitory populations, and demonstrate the importance of the surround inhibitory connection topology for the stability of dynamic patterns in the spiking neural network.
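The winner-take-all regime described above can be sketched with a minimal rectified-linear rate model with purely lateral inhibition. The weights and inputs below are hypothetical, and the rate description is a stand-in for the paper's three-layer spiking network:

```python
def winner_take_all(inputs, w_inh=2.0, dt=0.05, steps=1000):
    """Rectified-linear rate units coupled by lateral inhibition only
    (assumed weights; the paper analyzes a spiking network). Difference
    modes grow until every unit except the most strongly driven one is
    pushed below threshold, leaving a single active winner."""
    r = [0.0] * len(inputs)
    for _ in range(steps):
        total = sum(r)
        r = [ri + dt * (-ri + max(0.0, h - w_inh * (total - ri)))
             for ri, h in zip(r, inputs)]
    return r

rates = winner_take_all([1.0, 1.5, 0.9])
```

If the inhibition is too weak, several units remain active and the network fails to discriminate; if all units are driven identically, the synchronized, indiscriminate regime of the abstract is the analogue of the epileptic state.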
The design of a new spiking neuron using dual work function silicon nanowire transistors
International Nuclear Information System (INIS)
Bindal, Ahmet; Hamedi-Hagh, Sotoudeh
2007-01-01
A new spiking neuron cell is designed using vertically grown, undoped silicon nanowire transistors. This study presents an entire design cycle, from designing and optimizing vertical nanowire transistors for minimal power dissipation to realizing a neuron cell and measuring its dynamic power consumption, performance and layout area. The design cycle starts with determining individual metal gate work functions for NMOS and PMOS transistors as a function of wire radius to produce a 300 mV threshold voltage. The wire radius and effective channel length are subsequently varied to find a common body geometry for both transistors that yields an OFF current smaller than 1 pA while producing maximum drive currents. A spiking neuron cell is subsequently built using these transistors to measure its transient performance, power dissipation and layout area. Post-layout simulation results indicate that the neuron consumes 0.397 μW to generate a +1 V output pulse and 1.12 μW to generate a -1 V output pulse for a fan-out of five synapses at 500 MHz; the power dissipation increases by approximately 3 nW for each additional synapse at the output for generating either pulse. The neuron circuit occupies approximately 0.27 μm².
Stochastic resonance of ensemble neurons for transient spike trains: Wavelet analysis
International Nuclear Information System (INIS)
Hasegawa, Hideo
2002-01-01
By using the wavelet transformation (WT), I have analyzed the response of an ensemble of N (=1, 10, 100, and 500) Hodgkin-Huxley neurons to transient M-pulse spike trains (M=1 to 3) with independent Gaussian noises. The cross-correlation between the input and output signals is expressed in terms of the WT expansion coefficients. The signal-to-noise ratio (SNR) is evaluated by using the denoising method within the WT, by which the noise contribution is extracted from the output signals. Although the response of a single (N=1) neuron to subthreshold transient signals with noises is quite unreliable, the transmission fidelity assessed by the cross-correlation and SNR is shown to be much improved by increasing the value of N: a population of neurons plays an indispensable role in stochastic resonance (SR) for transient spike inputs. It is also shown that in a large-scale ensemble, the transmission fidelity for suprathreshold transient spikes is not significantly degraded by the weak noise that is responsible for SR at subthreshold inputs.
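The core ensemble effect, that averaging over N neurons rescues the transmission of a transient signal, can be sketched without the full Hodgkin-Huxley model. The toy below uses linear units with additive Gaussian noise, and its SNR is simply signal power over residual noise power rather than the paper's wavelet-denoising-based measure:

```python
import random

def ensemble_response(signal, n_units, sigma, rng):
    """Average response of n_units noisy units to a common transient input;
    a toy linear stand-in for the Hodgkin-Huxley ensemble of the abstract."""
    n = len(signal)
    avg = [0.0] * n
    for _ in range(n_units):
        for i, s in enumerate(signal):
            avg[i] += (s + rng.gauss(0.0, sigma)) / n_units
    return avg

def snr(signal, response):
    """Signal power over residual (noise) power; a crude proxy for the
    paper's wavelet-based SNR."""
    ps = sum(s * s for s in signal) / len(signal)
    pn = sum((r - s) ** 2 for s, r in zip(signal, response)) / len(signal)
    return ps / pn

rng = random.Random(3)
pulse = [1.0 if 40 <= i < 60 else 0.0 for i in range(200)]  # transient pulse
snr1 = snr(pulse, ensemble_response(pulse, 1, 1.0, rng))
snr100 = snr(pulse, ensemble_response(pulse, 100, 1.0, rng))
```

Averaging over independent noise realizations scales the residual noise power down roughly by 1/N, so the ensemble SNR grows approximately linearly with N, mirroring the N-dependence reported in the abstract.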
Kriener, Birgit; Helias, Moritz; Rotter, Stefan; Diesmann, Markus; Einevoll, Gaute T
2013-01-01
Pattern formation, i.e., the generation of an inhomogeneous spatial activity distribution in a dynamical system with translation invariant structure, is a well-studied phenomenon in neuronal network dynamics, specifically in neural field models. These are population models to describe the spatio-temporal dynamics of large groups of neurons in terms of macroscopic variables such as population firing rates. Though neural field models are often deduced from and equipped with biophysically meaningful properties, a direct mapping to simulations of individual spiking neuron populations is rarely considered. Neurons have a distinct identity defined by their action on their postsynaptic targets. In its simplest form they act either excitatorily or inhibitorily. When the distribution of neuron identities is assumed to be periodic, pattern formation can be observed, given the coupling strength is supracritical, i.e., larger than a critical weight. We find that this critical weight is strongly dependent on the characteristics of the neuronal input, i.e., depends on whether neurons are mean- or fluctuation driven, and different limits in linearizing the full non-linear system apply in order to assess stability. In particular, if neurons are mean-driven, the linearization has a very simple form and becomes independent of both the fixed point firing rate and the variance of the input current, while in the very strongly fluctuation-driven regime the fixed point rate, as well as the input mean and variance are important parameters in the determination of the critical weight. We demonstrate that interestingly even in "intermediate" regimes, when the system is technically fluctuation-driven, the simple linearization neglecting the variance of the input can yield the better prediction of the critical coupling strength. We moreover analyze the effects of structural randomness by rewiring individual synapses or redistributing weights, as well as coarse-graining on the formation of
A computational model of motor neuron degeneration
Le Masson, Gwendal; Przedborski, Serge; Abbott, L.F.
2014-01-01
To explore the link between bioenergetics and motor neuron degeneration, we used a computational model in which detailed morphology and ion conductance are paired with intracellular ATP production and consumption. We found that reduced ATP availability increases the metabolic cost of a single action potential and disrupts K+/Na+ homeostasis, resulting in a chronic depolarization. The magnitude of the ATP shortage at which this ionic instability occurs depends on the morphology and intrinsic conductance characteristic of the neuron. If ATP shortage is confined to the distal part of the axon, the ensuing local ionic instability eventually spreads to the whole neuron and involves fasciculation-like spiking events. A shortage of ATP also causes a rise in intracellular calcium. Our modeling work supports the notion that mitochondrial dysfunction can account for salient features of the paralytic disorder amyotrophic lateral sclerosis, including motor neuron hyperexcitability, fasciculation, and differential vulnerability of motor neuron subpopulations. PMID:25088365
Mirrored STDP Implements Autoencoder Learning in a Network of Spiking Neurons.
Burbank, Kendra S
2015-12-01
The autoencoder algorithm is a simple but powerful unsupervised method for training neural networks. Autoencoder networks can learn sparse distributed codes similar to those seen in cortical sensory areas such as visual area V1, but they can also be stacked to learn increasingly abstract representations. Several computational neuroscience models of sensory areas, including Olshausen & Field's Sparse Coding algorithm, can be seen as autoencoder variants, and autoencoders have seen extensive use in the machine learning community. Despite their power and versatility, autoencoders have been difficult to implement in a biologically realistic fashion. The challenges include their need to calculate differences between two neuronal activities and their requirement for learning rules which lead to identical changes at feedforward and feedback connections. Here, we study a biologically realistic network of integrate-and-fire neurons with anatomical connectivity and synaptic plasticity that closely matches that observed in cortical sensory areas. Our choice of synaptic plasticity rules is inspired by recent experimental and theoretical results suggesting that learning at feedback connections may have a different form from learning at feedforward connections, and our results depend critically on this novel choice of plasticity rules. Specifically, we propose that plasticity rules at feedforward versus feedback connections are temporally opposed versions of spike-timing dependent plasticity (STDP), leading to a symmetric combined rule we call Mirrored STDP (mSTDP). We show that with mSTDP, our network follows a learning rule that approximately minimizes an autoencoder loss function. When trained with whitened natural image patches, the learned synaptic weights resemble the receptive fields seen in V1. Our results use realistic synaptic plasticity rules to show that the powerful autoencoder learning algorithm could be within the reach of real biological networks.
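The key property of Mirrored STDP, that one spike pair produces identical weight changes at the feedforward synapse and at the reciprocal feedback synapse, can be stated in a few lines (the window shape and constants below are illustrative, not the paper's fitted values):

```python
import math

def stdp_ff(dt):
    """Standard antisymmetric STDP at a feedforward synapse:
    potentiate if pre precedes post (dt = t_post - t_pre > 0)."""
    a, tau = 1.0, 20.0
    return a * math.exp(-abs(dt) / tau) * (1.0 if dt > 0 else -1.0)

def stdp_fb(dt):
    """Mirrored STDP (mSTDP): the temporally reversed window, used at the
    reciprocal feedback synapse (sketch of the rule named in the abstract)."""
    return stdp_ff(-dt)

# One spike pair: the input neuron fires at t_a, the hidden neuron at t_b.
# The timing difference seen by the feedforward synapse is t_b - t_a, and
# by the feedback synapse t_a - t_b, so mirrored windows yield identical
# weight changes at both connections, as autoencoder learning requires.
t_a, t_b = 12.0, 20.0
dw_ff = stdp_ff(t_b - t_a)
dw_fb = stdp_fb(t_a - t_b)
```

Because the pre and post roles swap on the reciprocal connection, mirroring the window is exactly what makes the feedforward and feedback weights track each other, the symmetry constraint the abstract identifies as hard to satisfy biologically.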
Directory of Open Access Journals (Sweden)
Ning eQiao
2015-04-01
Full Text Available Implementing compact, low-power artificial neural processing systems with real-time on-line learning abilities is still an open challenge. In this paper we present a full-custom mixed-signal VLSI device with neuromorphic learning circuits that emulate the biophysics of real spiking neurons and dynamic synapses for exploring the properties of computational neuroscience models and for building brain-inspired computing systems. The proposed architecture allows the on-chip configuration of a wide range of network connectivities, including recurrent and deep networks with short-term and long-term plasticity. The device comprises 128 K analog synapse and 256 neuron circuits with biologically plausible dynamics and bi-stable spike-based plasticity mechanisms that endow it with on-line learning abilities. In addition to the analog circuits, the device comprises also asynchronous digital logic circuits for setting different synapse and neuron properties as well as different network configurations. This prototype device, fabricated using a 180 nm 1P6M CMOS process, occupies an area of 51.4 mm 2 , and consumes approximately 4 mW for typical experiments, for example involving attractor networks. Here we describe the details of the overall architecture and of the individual circuits and present experimental results that showcase its potential. By supporting a wide range of cortical-like computational modules comprising plasticity mechanisms, this device will enable the realization of intelligent autonomous systems with on-line learning capabilities.
Synchronization in a non-uniform network of excitatory spiking neurons
Echeveste, Rodrigo; Gros, Claudius
Spontaneous synchronization of pulse coupled elements is ubiquitous in nature and seems to be of vital importance for life. Networks of pacemaker cells in the heart, extended populations of southeast asian fireflies, and neuronal oscillations in cortical networks, are examples of this. In the present work, a rich repertoire of dynamical states with different degrees of synchronization are found in a network of excitatory-only spiking neurons connected in a non-uniform fashion. In particular, uncorrelated and partially correlated states are found without the need for inhibitory neurons or external currents. The phase transitions between these states, as well the robustness, stability, and response of the network to external stimulus are studied.
Spike Timing Matters in Novel Neuronal Code Involved in Vibrotactile Frequency Perception.
Birznieks, Ingvars; Vickery, Richard M
2017-05-22
Skin vibrations sensed by tactile receptors contribute significantly to the perception of object properties during tactile exploration [1-4] and to sensorimotor control during object manipulation [5]. Sustained low-frequency skin vibration (…) perception of frequency is still unknown. Measures based on mean spike rates of neurons in the primary somatosensory cortex are sufficient to explain performance in some frequency discrimination tasks [7-11]; however, there is emerging evidence that stimuli can also be distinguished based on temporal features of neural activity [12, 13]. Our study's advance is to demonstrate that temporal features are fundamental for vibrotactile frequency perception. Pulsatile mechanical stimuli were used to elicit specified temporal spike train patterns in tactile afferents, and psychophysical methods were subsequently employed to characterize human frequency perception. Remarkably, the most salient temporal feature determining vibrotactile frequency was not the underlying periodicity but, rather, the duration of the silent gap between successive bursts of neural activity. This burst gap code for frequency represents a previously unknown form of neural coding in the tactile sensory system, which parallels auditory pitch perception mechanisms based on purely temporal information, where longer inter-pulse intervals receive higher perceptual weights than short intervals [14]. Our study also demonstrates that human perception of stimuli can be determined exclusively by temporal features of spike trains, independent of the mean spike rate and without contribution from population response factors.
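A sketch of the burst gap code reported here (the burst-segmentation threshold is my assumption): the decoded frequency follows the silent gap between bursts rather than the overall stimulus periodicity.

```python
def burst_gap_frequency(spike_times, intra=0.01):
    """Decode frequency as 1 / (mean silent gap between bursts), where a
    burst is a run of spikes separated by less than `intra` seconds.
    (Illustrative reconstruction; the threshold value is an assumption.)"""
    gaps = [t1 - t0 for t0, t1 in zip(spike_times, spike_times[1:])
            if t1 - t0 >= intra]
    return 1.0 / (sum(gaps) / len(gaps)) if gaps else 0.0

# two spikes per burst, one burst every 50 ms: the periodicity is 20 Hz,
# but the silent gap between bursts is only 45 ms
train = []
for k in range(20):
    t = 0.05 * k
    train += [t, t + 0.005]
f = burst_gap_frequency(train)
```

The decoded value (1/0.045 ≈ 22.2 Hz) differs from the 20 Hz burst periodicity, illustrating the abstract's point that the gap duration, not the period, carries the percept.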
A Frank mixture copula family for modeling higher-order correlations of neural spike counts
International Nuclear Information System (INIS)
Onken, Arno; Obermayer, Klaus
2009-01-01
In order to evaluate the importance of higher-order correlations in neural spike count codes, flexible statistical models of dependent multivariate spike counts are required. Copula families, parametric multivariate distributions that represent dependencies, can be applied to construct such models. We introduce the Frank mixture family as a new copula family that has separate parameters for all pairwise and higher-order correlations. In contrast to the Farlie-Gumbel-Morgenstern copula family that shares this property, the Frank mixture copula can model strong correlations. We apply spike count models based on the Frank mixture copula to data generated by a network of leaky integrate-and-fire neurons and compare the goodness of fit to distributions based on the Farlie-Gumbel-Morgenstern family. Finally, we evaluate the importance of using proper single neuron spike count distributions on the Shannon information. We find notable deviations in the entropy that increase with decreasing firing rates. Moreover, we find that the Frank mixture family increases the log likelihood of the fit significantly compared to the Farlie-Gumbel-Morgenstern family. This shows that the Frank mixture copula is a useful tool to assess the importance of higher-order correlations in spike count codes.
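Sampling dependent spike counts from the plain bivariate Frank copula, the building block of the Frank mixture family introduced here, takes only conditional inversion plus a marginal quantile transform. The Poisson marginals and θ = 5 below are arbitrary choices for the sketch:

```python
import math
import random

def frank_pair(theta, rng):
    """Draw (u, v) from a bivariate Frank copula by conditional inversion
    (the plain Frank copula, not the full mixture construction)."""
    u, w = rng.random(), rng.random()
    a = math.exp(-theta * u)
    x = w * (math.exp(-theta) - 1.0) / (a - w * (a - 1.0))
    return u, -math.log1p(x) / theta

def poisson_quantile(u, lam):
    """Inverse CDF of Poisson(lam) by direct summation (marginal model)."""
    k, p = 0, math.exp(-lam)
    cdf = p
    while cdf < u and k < 500:   # cap guards against float stagnation
        k += 1
        p *= lam / k
        cdf += p
    return k

# dependent spike-count pairs: Frank copula (theta = 5), Poisson(4) marginals
rng = random.Random(11)
counts = [tuple(poisson_quantile(z, 4.0) for z in frank_pair(5.0, rng))
          for _ in range(20000)]
```

Positive θ induces positive dependence between the two counts while leaving each marginal Poisson, exactly the separation of marginals from dependence structure that makes copula models useful for spike count data.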
Mandelblat-Cerf, Yael; Ramesh, Rohan N; Burgess, Christian R; Patella, Paola; Yang, Zongfang; Lowell, Bradford B; Andermann, Mark L
2015-01-01
Agouti-related-peptide (AgRP) neurons—interoceptive neurons in the arcuate nucleus of the hypothalamus (ARC)—are both necessary and sufficient for driving feeding behavior. To better understand the functional roles of AgRP neurons, we performed optetrode electrophysiological recordings from AgRP neurons in awake, behaving AgRP-IRES-Cre mice. In free-feeding mice, we observed a fivefold increase in AgRP neuron firing with mounting caloric deficit in afternoon vs morning recordings. In food-restricted mice, as food became available, AgRP neuron firing dropped, yet remained elevated as compared to firing in sated mice. The rapid drop in spiking activity of AgRP neurons at meal onset may reflect a termination of the drive to find food, while residual, persistent spiking may reflect a sustained drive to consume food. Moreover, nearby neurons inhibited by AgRP neuron photostimulation, likely including satiety-promoting pro-opiomelanocortin (POMC) neurons, demonstrated opposite changes in spiking. Finally, firing of ARC neurons was also rapidly modulated within seconds of individual licks for liquid food. These findings suggest novel roles for antagonistic AgRP and POMC neurons in the regulation of feeding behaviors across multiple timescales. DOI: http://dx.doi.org/10.7554/eLife.07122.001 PMID:26159614
Pillow, Jonathan W; Ahmadian, Yashar; Paninski, Liam
2011-01-01
One of the central problems in systems neuroscience is to understand how neural spike trains convey sensory information. Decoding methods, which provide an explicit means for reading out the information contained in neural spike responses, offer a powerful set of tools for studying the neural coding problem. Here we develop several decoding methods based on point-process neural encoding models, or forward models that predict spike responses to stimuli. These models have concave log-likelihood functions, which allow efficient maximum-likelihood model fitting and stimulus decoding. We present several applications of the encoding model framework to the problem of decoding stimulus information from population spike responses: (1) a tractable algorithm for computing the maximum a posteriori (MAP) estimate of the stimulus, the most probable stimulus to have generated an observed single- or multiple-neuron spike train response, given some prior distribution over the stimulus; (2) a gaussian approximation to the posterior stimulus distribution that can be used to quantify the fidelity with which various stimulus features are encoded; (3) an efficient method for estimating the mutual information between the stimulus and the spike trains emitted by a neural population; and (4) a framework for the detection of change-point times (the time at which the stimulus undergoes a change in mean or variance) by marginalizing over the posterior stimulus distribution. We provide several examples illustrating the performance of these estimators with simulated and real neural data.
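The concavity that makes maximum a posteriori (MAP) decoding tractable can be illustrated in a deliberately reduced setting: a scalar stimulus in one time bin, a Poisson encoder with exponential nonlinearity, and an independent Gaussian prior. The entry treats full population responses and richer priors; everything below is an illustrative sketch. Because the log-posterior is concave in the stimulus, Newton's method converges reliably:

```python
import math

def map_decode_bin(y, b, k, sigma2, n_iter=50):
    """MAP estimate of a scalar stimulus x for one time bin under a Poisson
    encoding model lambda = exp(b + k*x) with prior x ~ N(0, sigma2).
    The log-posterior is concave in x, so Newton's method converges."""
    x = 0.0
    for _ in range(n_iter):
        lam = math.exp(b + k * x)
        grad = y * k - k * lam - x / sigma2   # d/dx of log posterior
        hess = -k * k * lam - 1.0 / sigma2    # always negative (concave)
        x -= grad / hess
    return x
```

With a high spike count the MAP estimate is pulled above the prior mean, and with zero spikes it is pulled below it, exactly as the encoding model predicts.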
Efficient Spike-Coding with Multiplicative Adaptation in a Spike Response Model
S.M. Bohte (Sander)
2012-01-01
Neural adaptation underlies the ability of neurons to maximize encoded information over a wide dynamic range of input stimuli. While adaptation is an intrinsic feature of neuronal models like the Hodgkin-Huxley model, the challenge is to integrate adaptation in models of neural...
Impact of sub and supra-threshold adaptation currents in networks of spiking neurons.
Colliaux, David; Yger, Pierre; Kaneko, Kunihiko
2015-12-01
Neuronal adaptation is the intrinsic capacity of the brain to change, by various mechanisms, its dynamical responses as a function of the context. Such a phenomenon, widely observed in vivo and in vitro, is known to be crucial for homeostatic regulation of activity and gain control. The effects of adaptation have already been studied at the single-cell level, resulting from either voltage- or calcium-gated channels, both activated by spiking activity and modulating the dynamical responses of the neurons. In this study, by disentangling those effects into a linear (sub-threshold) and a non-linear (supra-threshold) part, we focus on the functional roles of these two distinct components of adaptation in neuronal activity at various scales, from single-cell responses up to recurrent network dynamics, under stationary or non-stationary stimulation. The effects of slow currents on collective dynamics, such as modulation of population oscillations and reliability of spike patterns, are quantified for various types of adaptation in sparse recurrent networks.
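The sub- vs supra-threshold split discussed above can be sketched with a leaky integrate-and-fire neuron carrying an adaptation current w: parameter a couples w linearly to the subthreshold voltage, and b is the spike-triggered (supra-threshold) increment. Units and parameter values are illustrative, not taken from the paper.

```python
def simulate_adaptive_lif(a, b, t_max=500.0, dt=0.1):
    """Leaky integrate-and-fire neuron with adaptation current w.
    'a': sub-threshold coupling of w to the voltage; 'b': spike-triggered
    increment. Returns the list of spike times (ms)."""
    tau_m, tau_w = 10.0, 100.0        # membrane / adaptation time constants (ms)
    v_th, v_reset, i_ext = 1.0, 0.0, 2.0
    v, w, t, spikes = 0.0, 0.0, 0.0, []
    while t < t_max:
        v += dt * (-v - w + i_ext) / tau_m   # adaptation current opposes drive
        w += dt * (a * v - w) / tau_w
        if v >= v_th:
            spikes.append(t)
            v = v_reset
            w += b                            # supra-threshold adaptation
        t += dt
    return spikes
```

With a pure spike-triggered component (a = 0, b > 0) the accumulated adaptation current progressively lengthens the inter-spike intervals, i.e. classic spike-frequency adaptation.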
Detection of M-Sequences from Spike Sequence in Neuronal Networks
Directory of Open Access Journals (Sweden)
Yoshi Nishitani
2012-01-01
Full Text Available In circuit theory, it is well known that a linear feedback shift register (LFSR) circuit generates pseudorandom bit sequences (PRBS), including M-sequences with the maximal period length. In this study, we tried to detect M-sequences, known as pseudorandom sequences generated by LFSR circuits, in time series patterns of stimulated action potentials. Stimulated action potentials were recorded from dissociated cultures of hippocampal neurons grown on a multielectrode array. We could find several M-sequences from a 3-stage LFSR circuit (M3). These results show the possibility of assembling LFSR circuits, or their equivalents, in a neuronal network. However, since the M3 pattern was composed of only four spike intervals, the possibility of accidental detection was not zero. We therefore detected M-sequences in random spike sequences that were not generated by an LFSR circuit and compared the result with the number of M-sequences in the originally observed raster data. A significant difference was confirmed: a greater number of "0-1"-reversed 3-stage M-sequences occurred than would have been detected by chance. This result suggests that some LFSR-equivalent circuits are assembled in neuronal networks.
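The 3-stage (M3) reference pattern searched for above can be generated directly: a Fibonacci LFSR over GF(2) with a primitive feedback polynomial, here x^3 + x + 1 (one of the two primitive choices; the tap selection is an illustrative assumption), produces an M-sequence of maximal period 2^3 - 1 = 7.

```python
def lfsr_m_sequence(seed=(1, 0, 0), n_steps=7):
    """3-stage Fibonacci LFSR with feedback polynomial x^3 + x + 1,
    generating an M-sequence of maximal period 2^3 - 1 = 7."""
    state = list(seed)
    out = []
    for _ in range(n_steps):
        out.append(state[2])               # output the last stage
        fb = state[0] ^ state[2]           # XOR of tapped stages
        state = [fb, state[0], state[1]]   # shift register
    return out
```

Every nonzero 3-bit window appears exactly once per period, which is the defining M-sequence property exploited when matching spike-interval patterns against the reference sequence.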
Comparing Realistic Subthalamic Nucleus Neuron Models
Njap, Felix; Claussen, Jens C.; Moser, Andreas; Hofmann, Ulrich G.
2011-06-01
The mechanism of action of clinically effective electrical high frequency stimulation is still under debate. However, recent evidence points at the specific activation of GABA-ergic ion channels. Using a computational approach, we analyze temporal properties of the spike trains emitted by biologically realistic neurons of the subthalamic nucleus (STN) as a function of GABA-ergic synaptic input conductances. Our contribution is based on a model proposed by Rubin and Terman, which exhibits a wide variety of firing patterns: silent, low, moderate, and intense spiking activity. We observed that most of the cells in our network turn to silent mode when we increase the GABAA input conductance above a threshold of 3.75 mS/cm2. On the other hand, insignificant changes in firing activity are observed when the input conductance is low or close to zero. We thus reproduce Rubin's model with vanishing synaptic conductances. To quantitatively compare spike trains from the original model with the modified model at different conductance levels, we apply four different (dis)similarity measures between them. We observe that the Mahalanobis distance, the Victor-Purpura metric, and the interspike interval distribution are sensitive to different firing regimes, whereas mutual information seems undiscriminative for these functional changes.
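Of the (dis)similarity measures mentioned, the Victor-Purpura metric has a compact dynamic-programming form: the minimal cost of editing one spike train into the other, with unit cost for inserting or deleting a spike and cost q times the time shift for moving one. A minimal sketch (spike times in arbitrary units):

```python
def victor_purpura(t1, t2, q):
    """Victor-Purpura spike train distance via dynamic programming:
    insertions/deletions cost 1, shifting a spike by dt costs q*|dt|."""
    n, m = len(t1), len(t2)
    d = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        d[i][0] = float(i)
    for j in range(1, m + 1):
        d[0][j] = float(j)
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d[i][j] = min(d[i - 1][j] + 1.0,               # delete spike i
                          d[i][j - 1] + 1.0,               # insert spike j
                          d[i - 1][j - 1]                  # shift i onto j
                          + q * abs(t1[i - 1] - t2[j - 1]))
    return d[n][m]
```

The cost parameter q sets the temporal precision of the comparison: at q = 0 the metric reduces to the difference in spike counts, while large q makes any shift more expensive than deletion plus insertion.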
Stochastic modeling for neural spiking events based on fractional superstatistical Poisson process
Directory of Open Access Journals (Sweden)
Hidetoshi Konno
2018-01-01
Full Text Available In neural spike counting experiments, it is known that there are two main features: (i) the counting number has a fractional power-law growth with time and (ii) the waiting-time (i.e., inter-spike-interval) distribution has a heavy tail. The method of superstatistical Poisson processes (SSPPs) is examined to determine whether these main features are properly modeled. Although various mixed/compound Poisson processes can be generated by selecting a suitable distribution for the birth rate of spiking neurons, only the second feature (ii) can be modeled by SSPPs. Namely, the first one (i), associated with the effect of long memory, cannot be modeled properly. It is then shown that the two main features can be modeled successfully by a class of fractional SSPPs (FSSPPs).
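The superstatistical (mixed/compound) Poisson idea can be sketched by drawing each trial's firing rate from a gamma distribution before generating a Poisson count. The mixture is overdispersed (Fano factor above 1) relative to a plain Poisson process, although, as the entry notes, this alone does not capture the fractional long-memory growth. Parameters are illustrative:

```python
import random

def sspp_counts(shape, scale, n_trials, rng):
    """Spike counts from a superstatistical Poisson process: on each trial
    the rate is gamma-distributed, then the count is Poisson with that rate
    (a mixed Poisson, i.e. negative binomial overall)."""
    counts = []
    for _ in range(n_trials):
        lam = rng.gammavariate(shape, scale)
        # sample Poisson(lam) by counting unit-rate exponential waits in [0, lam]
        n, t = 0, rng.expovariate(1.0)
        while t < lam:
            n += 1
            t += rng.expovariate(1.0)
        counts.append(n)
    return counts
```

For a gamma mixing density with shape k and scale s, the counts have mean ks and Fano factor 1 + s, so the overdispersion is directly controlled by the rate variability across trials.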
Directory of Open Access Journals (Sweden)
Johannes Bill
Full Text Available During the last decade, Bayesian probability theory has emerged as a framework in cognitive science and neuroscience for describing perception, reasoning and learning of mammals. However, our understanding of how probabilistic computations could be organized in the brain, and how the observed connectivity structure of cortical microcircuits supports these calculations, is rudimentary at best. In this study, we investigate statistical inference and self-organized learning in a spatially extended spiking network model that accommodates both local competitive and large-scale associative aspects of neural information processing, under a unified Bayesian account. Specifically, we show how the spiking dynamics of a recurrent network with lateral excitation and local inhibition in response to distributed spiking input can be understood as sampling from a variational posterior distribution of a well-defined implicit probabilistic model. This interpretation further permits a rigorous analytical treatment of experience-dependent plasticity on the network level. Using machine learning theory, we derive update rules for neuron and synapse parameters which equate with Hebbian synaptic and homeostatic intrinsic plasticity rules in a neural implementation. In computer simulations, we demonstrate that the interplay of these plasticity rules leads to the emergence of probabilistic local experts that form distributed assemblies of similarly tuned cells communicating through lateral excitatory connections. The resulting sparse distributed spike code of a well-adapted network carries compressed information on salient input features combined with prior experience on correlations among them. Our theory predicts that the emergence of such efficient representations benefits from network architectures in which the range of local inhibition matches the spatial extent of pyramidal cells that share common afferent input.
Zhang, Xu; Foderaro, Greg; Henriquez, Craig; Ferrari, Silvia
2018-03-01
Recent developments in neural stimulation and recording technologies are providing scientists with the ability to record and control the activity of individual neurons in vitro or in vivo, with very high spatial and temporal resolution. Tools such as optogenetics, for example, are having a significant impact in the neuroscience field by delivering optical firing control with the precision and spatiotemporal resolution required for investigating information processing and plasticity in biological brains. While a number of training algorithms have been developed to date for spiking neural network (SNN) models of biological neuronal circuits, existing methods rely on learning rules that adjust the synaptic strengths (or weights) directly, in order to obtain the desired network-level (or functional-level) performance. As such, they are not applicable to modifying plasticity in biological neuronal circuits, in which synaptic strengths only change as a result of pre- and post-synaptic neuron firings or biological mechanisms beyond our control. This paper presents a weight-free training algorithm that relies solely on adjusting the spatiotemporal delivery of neuron firings in order to optimize the network performance. The proposed weight-free algorithm does not require any knowledge of the SNN model or its plasticity mechanisms. As a result, this training approach is potentially realizable in vitro or in vivo via neural stimulation and recording technologies, such as optogenetics and multielectrode arrays, and could be utilized to control plasticity at multiple scales of biological neuronal circuits. The approach is demonstrated by training SNNs with hundreds of units to control a virtual insect navigating in an unknown environment.
Efficient transmission of subthreshold signals in complex networks of spiking neurons.
Torres, Joaquin J; Elices, Irene; Marro, J
2015-01-01
We investigate the efficient transmission and processing of weak, subthreshold signals in a realistic neural medium in the presence of different levels of underlying noise. Assuming Hebbian weights for maximal synaptic conductances, which naturally balances the network with excitatory and inhibitory synapses, and considering short-term synaptic plasticity affecting such conductances, we found different dynamic phases in the system. These include a memory phase where populations of neurons remain synchronized, an oscillatory phase where transitions between different synchronized populations of neurons appear, and an asynchronous or noisy phase. When a weak stimulus is applied to each neuron, increasing the level of noise in the medium yields efficient transmission of such stimuli around the transition and critical points separating the different phases, for well-defined levels of stochasticity in the system. We prove that this intriguing phenomenon is quite robust, as it occurs in different situations including several types of synaptic plasticity, different types and numbers of stored patterns, and diverse network topologies, namely diluted networks and complex topologies such as scale-free and small-world networks. We conclude that the robustness of the phenomenon in different realistic scenarios, including spiking neurons, short-term synaptic plasticity and complex network topologies, makes it very likely that it also occurs in actual neural systems, as recent psychophysical experiments suggest.
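The core effect, noise enabling transmission of a subthreshold signal, can be sketched with a single leaky integrate-and-fire neuron driven by a weak sinusoidal current plus Gaussian noise. This is a stochastic-resonance-style toy model, not the recurrent network of the entry; all parameters are illustrative.

```python
import math, random

def count_spikes(noise_sigma, seed=1, t_max=2000.0, dt=0.1):
    """LIF neuron driven by a weak (subthreshold) sinusoidal signal plus
    Gaussian current noise. Without noise the drive never reaches threshold;
    moderate noise lets the signal be transmitted as spikes."""
    rng = random.Random(seed)
    tau, v_th, v_reset = 10.0, 1.0, 0.0
    i_dc, i_sig, freq = 0.7, 0.2, 0.005    # peak drive 0.9 < threshold 1.0
    v, t, n_spikes = 0.0, 0.0, 0
    while t < t_max:
        i_t = i_dc + i_sig * math.sin(2.0 * math.pi * freq * t)
        noise = noise_sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        v += dt * (-v + i_t) / tau + noise / tau
        if v >= v_th:
            n_spikes += 1
            v = v_reset
        t += dt
    return n_spikes
```

In the full stochastic-resonance picture the signal-to-noise ratio of the output is maximal at an intermediate noise level and degrades again when the noise dominates the signal.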
Spiking Neural P Systems with Communication on Request.
Pan, Linqiang; Păun, Gheorghe; Zhang, Gexiang; Neri, Ferrante
2017-12-01
Spiking Neural [Formula: see text] Systems are Neural System models characterized by the fact that each neuron mimics a biological cell and the communication between neurons is based on spikes. In the Spiking Neural [Formula: see text] systems investigated so far, the application of evolution rules depends on the contents of a neuron (checked by means of a regular expression). In these [Formula: see text] systems, a specified number of spikes are consumed and a specified number of spikes are produced, and then sent to each of the neurons linked by a synapse to the evolving neuron. [Formula: see text]In the present work, a novel communication strategy among neurons of Spiking Neural [Formula: see text] Systems is proposed. In the resulting models, called Spiking Neural [Formula: see text] Systems with Communication on Request, the spikes are requested from neighboring neurons, depending on the contents of the neuron (still checked by means of a regular expression). Unlike the traditional Spiking Neural [Formula: see text] systems, no spikes are consumed or created: the spikes are only moved along synapses and replicated (when two or more neurons request the contents of the same neuron). [Formula: see text]The Spiking Neural [Formula: see text] Systems with Communication on Request are proved to be computationally universal, that is, equivalent with Turing machines as long as two types of spikes are used. Following this work, further research questions are listed to be open problems.
Greenwood, Priscilla E
2016-01-01
This book describes a large number of open problems in the theory of stochastic neural systems, with the aim of enticing probabilists to work on them. This includes problems arising from stochastic models of individual neurons as well as those arising from stochastic models of the activities of small and large networks of interconnected neurons. The necessary neuroscience background to these problems is outlined within the text, so readers can grasp the context in which they arise. This book will be useful for graduate students and instructors providing material and references for applying probability to stochastic neuron modeling. Methods and results are presented, but the emphasis is on questions where additional stochastic analysis may contribute neuroscience insight. An extensive bibliography is included. Dr. Priscilla E. Greenwood is a Professor Emerita in the Department of Mathematics at the University of British Columbia. Dr. Lawrence M. Ward is a Professor in the Department of Psychology and the Brain...
Kwon, Min-Woo; Baek, Myung-Hyun; Hwang, Sungmin; Kim, Sungjun; Park, Byung-Gook
2018-09-01
We designed a CMOS analog integrate-and-fire (I&F) neuron circuit that can drive resistive synaptic devices. The neuron circuit consists of a current mirror for spatial integration, a capacitor for temporal integration, an asymmetric negative and positive pulse generation part, a refractory part, and finally a back-propagation pulse generation part for learning of the synaptic devices. The resistive synaptic devices were fabricated using an HfOx switching layer deposited by atomic layer deposition (ALD). The resistive synaptic devices had gradual set and reset characteristics, and their conductance was adjusted by the spike-timing-dependent-plasticity (STDP) learning rule. We carried out circuit simulations of the synaptic devices and the CMOS neuron circuit, and we developed an unsupervised spiking neural network (SNN) for 5 × 5 pattern recognition and classification using the neuron circuit and synaptic devices. The hardware-based SNN can autonomously and efficiently control the weight updates of the synapses between neurons, without the aid of software calculations.
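The STDP rule used to adjust the synaptic-device conductance is commonly written as an exponential timing window. A minimal sketch with illustrative amplitudes and time constant (the fabricated devices realize such a window only approximately, through their pulse scheme):

```python
import math

def stdp_dw(delta_t, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Exponential STDP window (illustrative parameters).
    delta_t = t_post - t_pre: pre-before-post (delta_t > 0) potentiates,
    post-before-pre (delta_t < 0) depresses; effect decays with |delta_t|."""
    if delta_t > 0:
        return a_plus * math.exp(-delta_t / tau)
    elif delta_t < 0:
        return -a_minus * math.exp(delta_t / tau)
    return 0.0
```

Making the depression amplitude slightly larger than the potentiation amplitude, as here, is a common choice that keeps uncorrelated pre/post activity from driving weights upward on average.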
Xu, Tao; Xiao, Na; Zhai, Xiaolong; Chan, Pak Kwan; Tin, Chung
2018-02-01
Objective. Damage to the brain, as a result of various medical conditions, impacts the everyday life of patients and there is still no complete cure to neurological disorders. Neuroprostheses that can functionally replace the damaged neural circuit have recently emerged as a possible solution to these problems. Here we describe the development of a real-time cerebellar neuroprosthetic system to substitute neural function in cerebellar circuitry for learning delay eyeblink conditioning (DEC). Approach. The system was empowered by a biologically realistic spiking neural network (SNN) model of the cerebellar neural circuit, which considers the neuronal population and anatomical connectivity of the network. The model simulated synaptic plasticity critical for learning DEC. This SNN model was carefully implemented on a field programmable gate array (FPGA) platform for real-time simulation. This hardware system was interfaced in in vivo experiments with anesthetized rats and it used neural spikes recorded online from the animal to learn and trigger conditioned eyeblink in the animal during training. Main results. This rat-FPGA hybrid system was able to process neuronal spikes in real-time with an embedded cerebellum model of ~10 000 neurons and reproduce learning of DEC with different inter-stimulus intervals. Our results validated that the system performance is physiologically relevant at both the neural (firing pattern) and behavioral (eyeblink pattern) levels. Significance. This integrated system provides the sufficient computation power for mimicking the cerebellar circuit in real-time. The system interacts with the biological system naturally at the spike level and can be generalized for including other neural components (neuron types and plasticity) and neural functions for potential neuroprosthetic applications.
Mean-field analysis of orientation selectivity in inhibition-dominated networks of spiking neurons.
Sadeh, Sadra; Cardanobile, Stefano; Rotter, Stefan
2014-01-01
Mechanisms underlying the emergence of orientation selectivity in the primary visual cortex are highly debated. Here we study the contribution of inhibition-dominated random recurrent networks to orientation selectivity, and more generally to sensory processing. By simulating and analyzing large-scale networks of spiking neurons, we investigate tuning amplification and contrast invariance of orientation selectivity in these networks. In particular, we show how selective attenuation of the common mode and amplification of the modulation component take place in these networks. Selective attenuation of the baseline, which is governed by the exceptional eigenvalue of the connectivity matrix, removes the unspecific, redundant signal component and ensures the invariance of selectivity across different contrasts. Selective amplification of modulation, which is governed by the operating regime of the network and depends on the strength of coupling, amplifies the informative signal component and thus increases the signal-to-noise ratio. Here, we perform a mean-field analysis which accounts for this process.
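The selective attenuation of the common mode can be made explicit in a reduced linear rate model with uniform coupling W = (w/n) * ones(n, n): the uniform ("exceptional") eigenvector has eigenvalue w, so a common input is scaled by 1/(1 - w), while any zero-sum modulation passes with gain 1. A sketch using the Sherman-Morrison closed form for r = (I - W)^{-1} h (this linearization is a simplification of the spiking network in the entry):

```python
def linear_network_response(h, w, n):
    """Steady-state rates r solving r = h + W r, with W = (w/n) * ones(n, n).
    Sherman-Morrison gives r = h + (w/n) * sum(h) / (1 - w) * ones(n):
    uniform (common) input is scaled by 1/(1 - w); zero-sum (modulation)
    input passes unchanged."""
    s = sum(h)
    correction = (w / n) * s / (1.0 - w)
    return [hi + correction for hi in h]
```

With strongly negative w (inhibition-dominated coupling) the common-mode gain 1/(1 - w) is tiny while the modulation gain stays at 1, which is the baseline-removal mechanism described above.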
Merolla, Paul A; Arthur, John V; Alvarez-Icaza, Rodrigo; Cassidy, Andrew S; Sawada, Jun; Akopyan, Filipp; Jackson, Bryan L; Imam, Nabil; Guo, Chen; Nakamura, Yutaka; Brezzo, Bernard; Vo, Ivan; Esser, Steven K; Appuswamy, Rathinakumar; Taba, Brian; Amir, Arnon; Flickner, Myron D; Risk, William P; Manohar, Rajit; Modha, Dharmendra S
2014-08-08
Inspired by the brain's structure, we have developed an efficient, scalable, and flexible non-von Neumann architecture that leverages contemporary silicon technology. To demonstrate, we built a 5.4-billion-transistor chip with 4096 neurosynaptic cores interconnected via an intrachip network that integrates 1 million programmable spiking neurons and 256 million configurable synapses. Chips can be tiled in two dimensions via an interchip communication interface, seamlessly scaling the architecture to a cortexlike sheet of arbitrary size. The architecture is well suited to many applications that use complex neural networks in real time, for example, multiobject detection and classification. With 400-pixel-by-240-pixel video input at 30 frames per second, the chip consumes 63 milliwatts. Copyright © 2014, American Association for the Advancement of Science.
DEFF Research Database (Denmark)
Joshi, Suyash Narendra; Dau, Torsten; Epp, Bastian
2017-01-01
A computational model of cat auditory nerve fiber (ANF) responses to electrical stimulation is presented. The model assumes that (1) there exist at least two sites of spike generation along the ANF and (2) both an anodic (positive) and a cathodic (negative) charge in isolation can evoke a spike. A single ANF is modeled as a network of two exponential integrate-and-fire point-neuron models, referred to as peripheral and central axons of the ANF. The peripheral axon is excited by the cathodic charge, inhibited by the anodic charge, and exhibits longer spike latencies than the central axon; the central axon is excited by the anodic charge, inhibited by the cathodic charge, and exhibits shorter spike latencies than the peripheral axon. The model also includes subthreshold and suprathreshold adaptive feedback loops which continuously modify the membrane potential and can account for effects...
Spike-timing computation properties of a feed-forward neural network model
Directory of Open Access Journals (Sweden)
Drew Benjamin Sinha
2014-01-01
Full Text Available Brain function is characterized by dynamical interactions among networks of neurons. These interactions are mediated by network topology at many scales, ranging from microcircuits to brain areas. Understanding how networks operate can be aided by understanding how the transformation of inputs depends upon network connectivity patterns, e.g. serial and parallel pathways. To tractably determine how single synapses or groups of synapses in such pathways shape transformations, we modeled feed-forward networks of 7-22 neurons in which synaptic strength changed according to a spike-timing dependent plasticity rule. We investigated how activity varied when dynamics were perturbed by an activity-dependent electrical stimulation protocol (spike-triggered stimulation; STS) in networks of different topologies and background input correlations. STS can successfully reorganize functional brain networks in vivo, but with a variability in effectiveness that may derive partially from the underlying network topology. In a simulated network with a single disynaptic pathway driven by uncorrelated background activity, structured spike-timing relationships between polysynaptically connected neurons were not observed. When background activity was correlated or parallel disynaptic pathways were added, however, robust polysynaptic spike-timing relationships were observed, and application of STS yielded predictable changes in synaptic strengths and spike-timing relationships. These observations suggest that precise input-related or topologically induced temporal relationships in network activity are necessary for polysynaptic signal propagation. Such constraints for polysynaptic computation suggest potential roles for higher-order topological structure in network organization, such as maintaining polysynaptic correlation in the face of relatively weak synapses.
Directory of Open Access Journals (Sweden)
Yasuhiro Tsubo
Full Text Available The brain is considered to use a relatively small amount of energy for its efficient information processing. Under a severe restriction on energy consumption, the maximization of mutual information (MMI), which is adequate for designing artificial processing machines, may not suit the brain. The MMI attempts to send information as accurately as possible, and this usually requires a sufficient energy supply for establishing clearly discretized communication bands. Here, we derive an alternative hypothesis for the neural code from neuronal activities recorded juxtacellularly in the sensorimotor cortex of behaving rats. Our hypothesis states that in vivo cortical neurons maximize the entropy of neuronal firing under two constraints, one limiting the energy consumption (as assumed previously) and one restricting the uncertainty in output spike sequences at a given firing rate. Thus, the conditional maximization of firing-rate entropy (CMFE) solves a tradeoff between the energy cost and noise in neuronal responses. In short, the CMFE sends a rich variety of information through broader communication bands (i.e., widely distributed firing rates) at the cost of accuracy. We demonstrate that the CMFE is reflected in the long-tailed, typically power-law, distributions of inter-spike intervals obtained for the majority of recorded neurons. In other words, the power-law tails are more consistent with the CMFE than with the MMI. Thus, we propose a mathematical principle by which cortical neurons may represent information about synaptic input in their output spike trains.
Nicola, Wilten; Tripp, Bryan; Scott, Matthew
2016-01-01
A fundamental question in computational neuroscience is how to connect a network of spiking neurons to produce desired macroscopic or mean field dynamics. One possible approach is through the Neural Engineering Framework (NEF). The NEF approach requires quantities called decoders which are solved through an optimization problem requiring large matrix inversion. Here, we show how a decoder can be obtained analytically for type I and certain type II firing rates as a function of the heterogeneity of its associated neuron. These decoders generate approximants for functions that converge to the desired function in mean-squared error like 1/N, where N is the number of neurons in the network. We refer to these decoders as scale-invariant decoders due to their structure. These decoders generate weights for a network of neurons through the NEF formula for weights. These weights force the spiking network to have arbitrary and prescribed mean field dynamics. The weights generated with scale-invariant decoders all lie on low dimensional hypersurfaces asymptotically. We demonstrate the applicability of these scale-invariant decoders and weight surfaces by constructing networks of spiking theta neurons that replicate the dynamics of various well known dynamical systems such as the neural integrator, Van der Pol system and the Lorenz system. As these decoders are analytically determined and non-unique, the weights are also analytically determined and non-unique. We discuss the implications for measured weights of neuronal networks.
Directory of Open Access Journals (Sweden)
Wilten eNicola
2016-02-01
Full Text Available A fundamental question in computational neuroscience is how to connect a network of spiking neurons to produce desired macroscopic or mean field dynamics. One possible approach is through the Neural Engineering Framework (NEF). The NEF approach requires quantities called decoders which are solved through an optimization problem requiring large matrix inversion. Here, we show how a decoder can be obtained analytically for type I and certain type II firing rates as a function of the heterogeneity of its associated neuron. These decoders generate approximants for functions that converge to the desired function in mean-squared error like 1/N, where N is the number of neurons in the network. We refer to these decoders as scale-invariant decoders due to their structure. These decoders generate weights for a network of neurons through the NEF formula for weights. These weights force the spiking network to have arbitrary and prescribed mean field dynamics. The weights generated with scale-invariant decoders all lie on low dimensional hypersurfaces asymptotically. We demonstrate the applicability of these scale-invariant decoders and weight surfaces by constructing networks of spiking theta neurons that replicate the dynamics of various well known dynamical systems such as the neural integrator, Van der Pol system and the Lorenz system. As these decoders are analytically determined and non-unique, the weights are also analytically determined and non-unique. We discuss the implications for measured weights of neuronal networks.
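The decoder idea common to both copies of this record can be sketched numerically: rectified-linear "tuning curves" with random gains, biases, and encoder signs stand in for heterogeneous type I rates, and the decoders are found by regularized least squares, i.e. the matrix-inversion route that the paper's analytical, scale-invariant decoders avoid. All parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200                              # number of neurons
x = np.linspace(-1.0, 1.0, 101)      # represented variable

# Heterogeneous rectified-linear tuning curves (a type I caricature)
gains = rng.uniform(0.5, 2.0, N)
biases = rng.uniform(-1.0, 1.0, N)
encoders = rng.choice([-1.0, 1.0], N)
A = np.maximum(0.0, gains[:, None] * encoders[:, None] * x[None, :]
               + biases[:, None])    # (N, len(x)) firing rates

# Regularized least-squares decoders for the target f(x) = x^2
f = x ** 2
reg = 1e-3 * N
d = np.linalg.solve(A @ A.T + reg * np.eye(N), A @ f)
f_hat = A.T @ d                      # decoded estimate of f
rmse = np.sqrt(np.mean((f_hat - f) ** 2))
```

Increasing N shrinks the error roughly like 1/N in mean-squared terms, matching the convergence rate the abstract states for the analytical decoders.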
A Hybrid Setarx Model for Spikes in Tight Electricity Markets
Directory of Open Access Journals (Sweden)
Carlo Lucheroni
2012-01-01
Full Text Available The paper discusses a simple-looking but highly nonlinear regime-switching, self-excited threshold model for hourly electricity prices in continuous and discrete time. The regime structure of the model is linked to organizational features of the market. In continuous time, the model can include spikes without using jumps, by defining stochastic orbits. In passing from continuous time to discrete time, the stochastic orbits survive discretization and can be identified again as spikes. A calibration technique suitable for the discrete version of this model, which does not need deseasonalization or spike filtering, is developed, tested and applied to market data. The discussion of the properties of the model uses phase-space analysis, an approach uncommon in econometrics.
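As a toy illustration of regime switching producing spikes without jumps, the sketch below simulates a two-regime self-excited threshold AR(1): mean-reverting below the threshold, locally explosive above it, with an ad hoc collapse standing in for the model's stochastic orbits. The parameters are invented for illustration and are not calibrated to any market; the paper's SETARX model is richer than this.

```python
import numpy as np

def simulate_setar(n=2000, thresh=1.0, a_low=0.7, a_high=1.3, cap=5.0, seed=1):
    """Two-regime self-excited threshold AR(1): the lagged value selects
    the regime. The upper regime is explosive, so excursions above the
    threshold shoot up until they collapse, producing price spikes."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n)
    for t in range(1, n):
        a = a_high if x[t - 1] > thresh else a_low
        x[t] = a * x[t - 1] + 0.3 * rng.standard_normal()
        if x[t] > cap:          # crude spike termination (regime collapse)
            x[t] = 0.0
    return x

prices = simulate_setar()
```

The resulting series spends most of its time mean-reverting around zero, punctuated by rare, sharp spike excursions, the qualitative signature of tight electricity markets.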
Directory of Open Access Journals (Sweden)
Vaughan G Macefield
2011-12-01
Full Text Available Postganglionic sympathetic axons in awake healthy human subjects, regardless of their identity as muscle vasoconstrictor, cutaneous vasoconstrictor or sudomotor neurones, discharge with a low firing probability (~30%), generate low firing rates (~0.5 Hz) and typically fire only once per cardiac interval. The purpose of the present study was to use modelling of spike trains in an attempt to define the number of preganglionic neurones that drive an individual postganglionic neurone. Artificial spike trains were generated in 1-3 preganglionic neurones converging onto a single postganglionic neurone. Each preganglionic input fired with a mean interval distribution of either 1000, 1500, 2000, 2500 or 3000 ms and the standard deviation varied between 0.5, 1.0 and 2.0 × the mean interval; the discharge frequency of each preganglionic neurone exhibited positive skewness and kurtosis. Of the 45 patterns examined, the mean discharge properties of the postganglionic neurone could only be explained by it being driven by, on average, two preganglionic neurones firing with a mean interspike interval of 2500 ms and SD of 5000 ms. The mean firing rate resulting from this pattern was 0.22 Hz, comparable to that of spontaneously active muscle vasoconstrictor neurones in healthy subjects (0.40 Hz). Likewise, the distribution of the number of spikes per cardiac interval was similar between the modelled and actual data: 0 spikes (69.5 vs 66.6%), 1 spike (25.6 vs 21.2%), 2 spikes (4.3 vs 6.4%), 3 spikes (0.5 vs 1.7%) and 4 spikes (0.1 vs 0.7%). Although some features of the firing patterns could be explained by the postganglionic neurone being driven by a single preganglionic neurone, none of the emulated firing patterns generated by the firing of three preganglionic neurones matched the discharge of the real neurones. These modelling data indicate that, on average, human postganglionic sympathetic neurones are driven by two preganglionic inputs.
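The kind of spike-train emulation described above can be sketched as follows: lognormal interspike intervals give the positive skewness and kurtosis the abstract mentions, and two such preganglionic trains are merged under the crude assumption that every input spike is relayed by the postganglionic neurone. This ignores ganglionic integration, so it reproduces only the qualitative shape of the spikes-per-cardiac-interval distribution, not the paper's exact percentages; the cardiac interval is fixed at a hypothetical 1000 ms.

```python
import numpy as np

rng = np.random.default_rng(2)

def skewed_train(mean_isi=2500.0, sd_isi=5000.0, duration=3.6e6):
    """Spike times (ms) over one hour, with lognormal ISIs matched to a
    target mean and SD (positively skewed and kurtotic, as in the study)."""
    sigma2 = np.log(1.0 + (sd_isi / mean_isi) ** 2)
    mu = np.log(mean_isi) - sigma2 / 2.0
    isis = rng.lognormal(mu, np.sqrt(sigma2), size=int(3 * duration / mean_isi))
    t = np.cumsum(isis)
    return t[t < duration]

# Two preganglionic inputs converging on one postganglionic neurone:
merged = np.sort(np.concatenate([skewed_train(), skewed_train()]))

# Spike counts per (assumed) 1000-ms cardiac interval
per_beat = np.bincount((merged // 1000.0).astype(int), minlength=3600)
dist = np.bincount(per_beat)        # dist[k] = number of beats with k spikes
rate_hz = merged.size / 3600.0
```

As in the recorded data, most cardiac intervals contain no spike, single spikes dominate the rest, and multi-spike intervals are rare.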
International Nuclear Information System (INIS)
Ozer, Mahmut; Uzuntarla, Muhammet; Agaoglu, Sukriye Nihal
2006-01-01
We first investigate the amplitude effect of the subthreshold periodic forcing on the regularity of the spiking events by using the coefficient of variation of interspike intervals. We show that the resonance effect in the coefficient of variation, which is dependent on the driving frequency for larger membrane patch sizes, disappears when the amplitude of the subthreshold forcing is decreased. Then, we demonstrate that the timings of the spiking events of a noisy and periodically driven neuron concentrate on a specific phase of the stimulus. We also show that increasing the intensity of the noise causes the phase probability density of the spiking events to take smaller values, and eliminates differences in the phase-locking behavior of the neuron for different patch sizes.
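The regularity measure used above is easy to state: the coefficient of variation of interspike intervals, CV = std(ISI)/mean(ISI), approaches 0 for clock-like firing and is about 1 for a Poisson process. A minimal self-check:

```python
import numpy as np

def cv_isi(spike_times):
    """Coefficient of variation of interspike intervals:
    CV = std(ISI) / mean(ISI)."""
    isi = np.diff(np.sort(np.asarray(spike_times, dtype=float)))
    return isi.std() / isi.mean()

rng = np.random.default_rng(3)
clock = np.arange(0.0, 100.0, 1.0)                    # perfectly regular
poisson = np.cumsum(rng.exponential(1.0, size=5000))  # Poisson-like train
```

In the study above, a dip in this CV as a function of the driving frequency is the signature of the resonance effect.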
Unsupervised clustering with spiking neurons by sparse temporal coding and multi-layer RBF networks
S.M. Bohte (Sander); J.A. La Poutré (Han); J.N. Kok (Joost)
2000-01-01
We demonstrate that spiking neural networks encoding information in spike times are capable of computing and learning clusters from realistic data. We show how a spiking neural network based on spike-time coding and Hebbian learning can successfully perform unsupervised clustering on
Short-term memory and critical clusterization in brain neurons spike series
Bershadskii, A.; Dremencov, E.; Yadid, G.
2003-06-01
A new phenomenon, critical clusterization, is observed in the neuron firing of a genetically defined rat model of depression. The critical clusterization is studied using a multiscaling analysis of data obtained from neurons belonging to the Red Nucleus area of the depressive brains. It is suggested that this critical phenomenon can be partially responsible for the observed impairments of the depressive brains: loss of short-term motor memory and slowed motor reaction.
Spiking computation and stochastic amplification in a neuron-like semiconductor microstructure
International Nuclear Information System (INIS)
Samardak, A. S.; Nogaret, A.; Janson, N. B.; Balanov, A.; Farrer, I.; Ritchie, D. A.
2011-01-01
We have demonstrated the proof of principle of a semiconductor neuron, which has dendrites, axon, and a soma and computes information encoded in electrical pulses in the same way as biological neurons. Electrical impulses applied to dendrites diffuse along microwires to the soma. The soma is the active part of the neuron, which regenerates input pulses above a voltage threshold and transmits them into the axon. Our concept of neuron is a major step forward because its spatial structure controls the timing of pulses, which arrive at the soma. Dendrites and axon act as transmission delay lines, which modify the information, coded in the timing of pulses. We have finally shown that noise enhances the detection sensitivity of the neuron by helping the transmission of weak periodic signals. A maximum enhancement of signal transmission was observed at an optimum noise level known as stochastic resonance. The experimental results are in excellent agreement with simulations of the FitzHugh-Nagumo model. Our neuron is therefore extremely well suited to providing feedback on the various mathematical approximations of neurons and building functional networks.
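The FitzHugh-Nagumo comparison in the record above can be reproduced qualitatively in a few lines. The Euler-Maruyama sketch below uses the textbook parameters (a = 0.7, b = 0.8, ε = 0.08), which are standard choices rather than values fitted to the device. With zero noise and a suprathreshold drive the model fires tonically; with a subthreshold drive only noise can elicit spikes, which is the regime in which stochastic resonance appears.

```python
import numpy as np

def fhn_spike_count(I=0.5, sigma=0.0, T=400.0, dt=0.05, seed=4):
    """Euler-Maruyama integration of the FitzHugh-Nagumo model
        v' = v - v^3/3 - w + I + noise,  w' = 0.08 (v + 0.7 - 0.8 w).
    Returns the number of upward crossings of v = 1 (spikes)."""
    rng = np.random.default_rng(seed)
    v, w, spikes, above = -1.0, -0.5, 0, False
    for _ in range(int(T / dt)):
        dv = v - v ** 3 / 3 - w + I
        dw = 0.08 * (v + 0.7 - 0.8 * w)
        v += dt * dv + sigma * np.sqrt(dt) * rng.standard_normal()
        w += dt * dw
        if v > 1.0 and not above:       # count each excursion once
            spikes, above = spikes + 1, True
        elif v < 0.0:                   # excursion over, re-arm detector
            above = False
    return spikes
```

Sweeping `sigma` at subthreshold `I` and plotting some signal-transmission measure against noise intensity would reveal the optimum that the authors identify as stochastic resonance.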
Sadeh, Sadra; Rotter, Stefan
2015-01-01
The neuronal mechanisms underlying the emergence of orientation selectivity in the primary visual cortex of mammals are still elusive. In rodents, visual neurons show highly selective responses to oriented stimuli, but neighboring neurons do not necessarily have similar preferences. Instead of a smooth map, one observes a salt-and-pepper organization of orientation selectivity. Modeling studies have recently confirmed that balanced random networks are indeed capable of amplifying weakly tuned inputs and generating highly selective output responses, even in absence of feature-selective recurrent connectivity. Here we seek to elucidate the neuronal mechanisms underlying this phenomenon by resorting to networks of integrate-and-fire neurons, which are amenable to analytic treatment. Specifically, in networks of perfect integrate-and-fire neurons, we observe that highly selective and contrast invariant output responses emerge, very similar to networks of leaky integrate-and-fire neurons. We then demonstrate that a theory based on mean firing rates and the detailed network topology predicts the output responses, and explains the mechanisms underlying the suppression of the common-mode, amplification of modulation, and contrast invariance. Increasing inhibition dominance in our networks makes the rectifying nonlinearity more prominent, which in turn adds some distortions to the otherwise essentially linear prediction. An extension of the linear theory can account for all the distortions, enabling us to compute the exact shape of every individual tuning curve in our networks. We show that this simple form of nonlinearity adds two important properties to orientation selectivity in the network, namely sharpening of tuning curves and extra suppression of the modulation. The theory can be further extended to account for the nonlinearity of the leaky model by replacing the rectifier by the appropriate smooth input-output transfer function. These results are robust and do not
Directory of Open Access Journals (Sweden)
Sadra Sadeh
2015-01-01
Full Text Available The neuronal mechanisms underlying the emergence of orientation selectivity in the primary visual cortex of mammals are still elusive. In rodents, visual neurons show highly selective responses to oriented stimuli, but neighboring neurons do not necessarily have similar preferences. Instead of a smooth map, one observes a salt-and-pepper organization of orientation selectivity. Modeling studies have recently confirmed that balanced random networks are indeed capable of amplifying weakly tuned inputs and generating highly selective output responses, even in absence of feature-selective recurrent connectivity. Here we seek to elucidate the neuronal mechanisms underlying this phenomenon by resorting to networks of integrate-and-fire neurons, which are amenable to analytic treatment. Specifically, in networks of perfect integrate-and-fire neurons, we observe that highly selective and contrast invariant output responses emerge, very similar to networks of leaky integrate-and-fire neurons. We then demonstrate that a theory based on mean firing rates and the detailed network topology predicts the output responses, and explains the mechanisms underlying the suppression of the common-mode, amplification of modulation, and contrast invariance. Increasing inhibition dominance in our networks makes the rectifying nonlinearity more prominent, which in turn adds some distortions to the otherwise essentially linear prediction. An extension of the linear theory can account for all the distortions, enabling us to compute the exact shape of every individual tuning curve in our networks. We show that this simple form of nonlinearity adds two important properties to orientation selectivity in the network, namely sharpening of tuning curves and extra suppression of the modulation. The theory can be further extended to account for the nonlinearity of the leaky model by replacing the rectifier by the appropriate smooth input-output transfer function. These results are
Directory of Open Access Journals (Sweden)
Farida Veliev
2017-08-01
Full Text Available The emergence of nanoelectronics applied to neural interfaces began a few decades ago, and aims to provide new tools for replacing or restoring disabled functions of the nervous system as well as for furthering our understanding of such complex organization. At the same time, graphene and other 2D materials have offered new possibilities for integrating micro- and nano-devices on flexible, transparent, and biocompatible substrates, promising for bio- and neuro-electronics. In addition to the many bio-suitable features of the graphene interface, such as chemical inertness and anti-corrosive properties, its optical transparency enables a multimodal approach to neuron-based systems, the electrical layer being compatible with additional microfluidic and optical manipulation ports. The convergence of these fields will provide a next generation of neural interfaces for the reliable detection of single spikes and high-fidelity recording of the activity patterns of neural networks. Here, we report on the fabrication of graphene field-effect transistors (G-FETs) on various substrates (silicon, sapphire, glass coverslips, and polyimide deposited onto Si/SiO2 substrates), exhibiting high sensitivity (4 mS/V, close to the Dirac point at VLG < VD) and a low noise level (10⁻²² A²/Hz at VLG = 0 V). We demonstrate the in vitro detection of the spontaneous activity of hippocampal neurons grown in situ on top of the graphene sensors during several weeks in a millimeter-size PDMS fluidic chamber (8 mm wide). These results provide an advance toward the realization of biocompatible devices for reliable and high spatio-temporal sensing of neuronal activity for both in vitro and in vivo applications.
Directory of Open Access Journals (Sweden)
Michael Doron
2017-11-01
Full Text Available The NMDA spike is a long-lasting nonlinear phenomenon initiated locally in the dendritic branches of a variety of cortical neurons. It plays a key role in synaptic plasticity and in single-neuron computations. Combining dynamic system theory and computational approaches, we now explore how the timing of synaptic inhibition affects the NMDA spike and its associated membrane current. When impinging on its early phase, individual inhibitory synapses strongly, but transiently, dampen the NMDA spike; later inhibition prematurely terminates it. A single inhibitory synapse reduces the NMDA-mediated Ca2+ current, a key player in plasticity, by up to 45%. NMDA spikes in distal dendritic branches/spines are longer-lasting and more resilient to inhibition, enhancing synaptic plasticity at these branches. We conclude that NMDA spikes are highly sensitive to dendritic inhibition; sparse weak inhibition can finely tune synaptic plasticity both locally at the dendritic branch level and globally at the level of the neuron’s output.
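The nonlinearity behind the NMDA spike comes from the voltage-dependent Mg2+ block of the NMDA receptor, which makes the conductance regenerative: depolarization relieves the block, admitting more depolarizing current. The sketch below uses the widely cited Jahr-Stevens (1990) unblock function with its common default constants; these are generic values, not parameters from the study above.

```python
import numpy as np

def nmda_mg_unblock(v_mv, mg_mm=1.0):
    """Fraction of NMDA conductance unblocked by Mg2+ at membrane
    voltage v_mv (mV), in the Jahr & Stevens (1990) form with the
    commonly used constants 3.57 mM and 0.062 /mV."""
    return 1.0 / (1.0 + (mg_mm / 3.57) * np.exp(-0.062 * v_mv))
```

Near rest (about -70 mV) the conductance is mostly blocked; near 0 mV most of it is available. Hyperpolarizing inhibition on the dendritic branch re-imposes the block, which is one route by which well-timed inhibition can dampen or prematurely terminate the NMDA spike, as the study quantifies.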
A compound memristive synapse model for statistical learning through STDP in spiking neural networks
Directory of Open Access Journals (Sweden)
Johannes eBill
2014-12-01
Full Text Available Memristors have recently emerged as promising circuit elements to mimic the function of biological synapses in neuromorphic computing. The fabrication of reliable nanoscale memristive synapses that feature continuous conductance changes based on the timing of pre- and postsynaptic spikes has however turned out to be challenging. In this article, we propose an alternative approach, the compound memristive synapse, that circumvents this problem by the use of memristors with binary memristive states. A compound memristive synapse employs multiple bistable memristors in parallel to jointly form one synapse, thereby providing a spectrum of synaptic efficacies. We investigate the computational implications of synaptic plasticity in the compound synapse by integrating the recently observed phenomenon of stochastic filament formation into an abstract model of stochastic switching. Using this abstract model, we first show how standard pulsing schemes give rise to spike-timing dependent plasticity (STDP) with a stabilizing weight dependence in compound synapses. In a next step, we study unsupervised learning with compound synapses in networks of spiking neurons organized in a winner-take-all architecture. Our theoretical analysis reveals that compound-synapse STDP implements generalized Expectation-Maximization in the spiking network. Specifically, the emergent synapse configuration represents the most salient features of the input distribution in a Mixture-of-Gaussians generative model. Furthermore, the network’s spike response to spiking input streams approximates a well-defined Bayesian posterior distribution. We show in computer simulations how such networks learn to represent high-dimensional distributions over images of handwritten digits with high fidelity even in presence of substantial device variations and under severe noise conditions. Therefore, the compound memristive synapse may provide a synaptic design principle for future neuromorphic architectures.
Bill, Johannes; Legenstein, Robert
2014-01-01
Memristors have recently emerged as promising circuit elements to mimic the function of biological synapses in neuromorphic computing. The fabrication of reliable nanoscale memristive synapses that feature continuous conductance changes based on the timing of pre- and postsynaptic spikes has however turned out to be challenging. In this article, we propose an alternative approach, the compound memristive synapse, that circumvents this problem by the use of memristors with binary memristive states. A compound memristive synapse employs multiple bistable memristors in parallel to jointly form one synapse, thereby providing a spectrum of synaptic efficacies. We investigate the computational implications of synaptic plasticity in the compound synapse by integrating the recently observed phenomenon of stochastic filament formation into an abstract model of stochastic switching. Using this abstract model, we first show how standard pulsing schemes give rise to spike-timing dependent plasticity (STDP) with a stabilizing weight dependence in compound synapses. In a next step, we study unsupervised learning with compound synapses in networks of spiking neurons organized in a winner-take-all architecture. Our theoretical analysis reveals that compound-synapse STDP implements generalized Expectation-Maximization in the spiking network. Specifically, the emergent synapse configuration represents the most salient features of the input distribution in a Mixture-of-Gaussians generative model. Furthermore, the network's spike response to spiking input streams approximates a well-defined Bayesian posterior distribution. We show in computer simulations how such networks learn to represent high-dimensional distributions over images of handwritten digits with high fidelity even in presence of substantial device variations and under severe noise conditions. Therefore, the compound memristive synapse may provide a synaptic design principle for future neuromorphic architectures.
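The compound-synapse idea from the two records above can be sketched abstractly: several bistable devices in parallel, each switching stochastically on a plasticity pulse, yield a graded effective weight. The expected update shrinks as fewer devices remain switchable, which is the stabilizing weight dependence the authors describe. The switching probabilities below are illustrative, not fitted device parameters.

```python
import numpy as np

rng = np.random.default_rng(5)

class CompoundSynapse:
    """M binary memristors in parallel jointly forming one synapse."""
    def __init__(self, m=20, p_set=0.1, p_reset=0.1):
        self.state = np.zeros(m, dtype=bool)   # each device: ON or OFF
        self.p_set, self.p_reset = p_set, p_reset

    def potentiate(self):
        # each OFF device switches ON with probability p_set
        self.state |= rng.random(self.state.size) < self.p_set

    def depress(self):
        # each ON device switches OFF with probability p_reset
        self.state &= ~(rng.random(self.state.size) < self.p_reset)

    @property
    def weight(self):
        return self.state.mean()               # graded efficacy in [0, 1]

syn = CompoundSynapse()
for _ in range(100):
    syn.potentiate()
w_up = syn.weight
for _ in range(100):
    syn.depress()
w_down = syn.weight
```

Repeated potentiation pulses saturate the weight near 1 and repeated depression pulses drive it back toward 0, with the per-pulse step size shrinking near the bounds.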
Reconstruction of neuronal input through modeling single-neuron dynamics and computations
International Nuclear Information System (INIS)
Qin, Qing; Wang, Jiang; Yu, Haitao; Deng, Bin; Chan, Wai-lok
2016-01-01
Mathematical models provide a description of neuron activity that helps us better understand and quantify the neural computations and corresponding biophysical mechanisms evoked by a stimulus. In this paper, based on the output spike train evoked by an acupuncture mechanical stimulus, we present two different levels of models describing the input-output system, to achieve the reconstruction of the neuronal input. The reconstruction process is divided into two steps. First, the neuronal spiking events are treated as a Gamma stochastic process; the scale parameter and the shape parameter of the Gamma process are defined as two spiking characteristics, which are estimated by a state-space method. Then, a leaky integrate-and-fire (LIF) model is used to mimic the response system, and the estimated spiking characteristics are transformed into two temporal input parameters of the LIF model through two conversion formulas. We test this reconstruction method on three different groups of simulated data. All three groups of estimates reconstruct the input parameters with fairly high accuracy. We then use this reconstruction method to estimate the non-measurable acupuncture input parameters. Results show that under three different frequencies of acupuncture stimulation, the estimated input parameters differ markedly. The higher the frequency of the acupuncture stimulus, the higher the accuracy of the reconstruction.
Reconstruction of neuronal input through modeling single-neuron dynamics and computations
Energy Technology Data Exchange (ETDEWEB)
Qin, Qing; Wang, Jiang; Yu, Haitao; Deng, Bin, E-mail: dengbin@tju.edu.cn; Chan, Wai-lok [School of Electrical Engineering and Automation, Tianjin University, Tianjin 300072 (China)
2016-06-15
Mathematical models provide a description of neuron activity that helps us better understand and quantify the neural computations and corresponding biophysical mechanisms evoked by a stimulus. In this paper, based on the output spike train evoked by an acupuncture mechanical stimulus, we present two different levels of models describing the input-output system, to achieve the reconstruction of the neuronal input. The reconstruction process is divided into two steps. First, the neuronal spiking events are treated as a Gamma stochastic process; the scale parameter and the shape parameter of the Gamma process are defined as two spiking characteristics, which are estimated by a state-space method. Then, a leaky integrate-and-fire (LIF) model is used to mimic the response system, and the estimated spiking characteristics are transformed into two temporal input parameters of the LIF model through two conversion formulas. We test this reconstruction method on three different groups of simulated data. All three groups of estimates reconstruct the input parameters with fairly high accuracy. We then use this reconstruction method to estimate the non-measurable acupuncture input parameters. Results show that under three different frequencies of acupuncture stimulation, the estimated input parameters differ markedly. The higher the frequency of the acupuncture stimulus, the higher the accuracy of the reconstruction.
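The Gamma description of spiking used in both copies of this record can be illustrated with a much simpler estimator than the paper's state-space method: moment matching gives shape k = mean²/var and scale θ = var/mean from the observed interspike intervals. This is a sketch of the parameterization only, not the authors' estimation procedure.

```python
import numpy as np

def fit_gamma_isi(isis):
    """Moment-matching fit of a Gamma ISI distribution:
    shape k = m^2 / v, scale theta = v / m, from sample mean m and
    variance v of the interspike intervals."""
    isis = np.asarray(isis, dtype=float)
    m, v = isis.mean(), isis.var()
    return m * m / v, v / m

rng = np.random.default_rng(6)
isis = rng.gamma(shape=2.5, scale=40.0, size=20000)   # synthetic ISIs (ms)
k_hat, theta_hat = fit_gamma_isi(isis)
```

On synthetic Gamma ISIs the moment estimates recover the generating parameters closely; on real data, the shape parameter summarizes regularity (k = 1 is Poisson-like, k > 1 more regular) while the scale sets the time base.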
Detection of bursts in extracellular spike trains using hidden semi-Markov point process models.
Tokdar, Surya; Xi, Peiyi; Kelly, Ryan C; Kass, Robert E
2010-08-01
Neurons in vitro and in vivo have epochs of bursting or "up state" activity during which firing rates are dramatically elevated. Various methods of detecting bursts in extracellular spike trains have appeared in the literature, the most widely used apparently being Poisson Surprise (PS). A natural description of the phenomenon assumes (1) there are two hidden states, which we label "burst" and "non-burst," (2) the neuron evolves stochastically, switching at random between these two states, and (3) within each state the spike train follows a time-homogeneous point process. If in (2) the transitions from non-burst to burst and burst to non-burst states are memoryless, this becomes a hidden Markov model (HMM). For HMMs, the state transitions follow exponential distributions, and are highly irregular. Because observed bursting may in some cases be fairly regular, exhibiting inter-burst intervals with small variation, we relaxed this assumption. When more general probability distributions are used to describe the state transitions the two-state point process model becomes a hidden semi-Markov model (HSMM). We developed an efficient Bayesian computational scheme to fit HSMMs to spike train data. Numerical simulations indicate the method can perform well, sometimes yielding very different results than those based on PS.
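Poisson Surprise, the baseline the authors compare against, is simple to state: S = −ln P(at least n spikes in a window of length T, under a homogeneous Poisson process at the cell's mean rate), so large S flags windows that are implausibly dense. A minimal implementation (the HSMM itself is considerably more involved):

```python
import math

def poisson_surprise(n, t, rate):
    """Poisson Surprise: S = -ln P(N >= n) for N ~ Poisson(rate * t).
    Large S marks a candidate burst of n spikes in a window of length t."""
    lam = rate * t
    p_lt = sum(math.exp(-lam) * lam ** k / math.factorial(k)
               for k in range(n))          # P(N < n)
    return -math.log(max(1.0 - p_lt, 1e-300))
```

For example, 10 spikes in 100 ms from a cell averaging 10 Hz (expected count 1) yields S around 16, whereas 2 spikes in the same window is barely surprising; burst detectors threshold S at some fixed value.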
Marukame, Takao; Nishi, Yoshifumi; Yasuda, Shin-ichi; Tanamoto, Tetsufumi
2018-04-01
The use of memristive devices for creating artificial neurons is promising for brain-inspired computing from the viewpoints of computation architecture and learning protocol. We present an energy-efficient multiplier accumulator based on a memristive array architecture incorporating both analog and digital circuitry. The analog circuitry is used to full advantage for neural networks, as demonstrated by spike-timing-dependent plasticity (STDP) in fabricated AlOx/TiOx-based metal-oxide memristive devices. STDP protocols for controlling periodic analog resistance with long-range stability were experimentally verified using a variety of voltage amplitudes and spike timings.
Li, Xiumin; Wang, Wei; Xue, Fangzheng; Song, Yongduan
2018-02-01
Recently there has been continually increasing interest in building computational models of spiking neural networks (SNN), such as the Liquid State Machine (LSM). Biologically inspired self-organized neural networks with neural plasticity can enhance computational performance, with the characteristic features of dynamical memory and recurrent connection cycles which distinguish them from the more widely used feedforward neural networks. Although a variety of computational models for brain-like learning and information processing have been proposed, the modeling of self-organized neural networks with multiple forms of neural plasticity is still an important open challenge. The main difficulties lie in the interplay among different neural plasticity rules and in understanding how the structure and dynamics of neural networks shape computational performance. In this paper, we propose a novel approach to developing models of the LSM with a biologically inspired self-organizing network based on two neural plasticity learning rules. The connectivity among excitatory neurons is adapted by spike-timing-dependent plasticity (STDP) learning; meanwhile, the degrees of neuronal excitability are regulated to maintain a moderate average activity level by another learning rule: intrinsic plasticity (IP). Our study shows that the LSM with STDP+IP performs better than an LSM with a random SNN or an SNN obtained by STDP alone. The noticeable improvement with the proposed method is due to the better-reflected competition among different neurons in the developed SNN model, as well as the more effectively encoded and processed relevant dynamic information under its learning and self-organizing mechanism. This result gives insight into the optimization of computational models of spiking neural networks with neural plasticity.
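Of the two plasticity rules combined above, intrinsic plasticity is the less commonly illustrated. The toy rule below nudges an excitability threshold so that a unit's average activity settles at a target level; the sigmoidal activation, Gaussian drive, and learning rate are invented for illustration and are not the paper's IP rule.

```python
import numpy as np

def run_ip(target_rate=0.1, eta=0.05, steps=5000, seed=7):
    """Toy intrinsic plasticity: raise the excitability threshold when
    activity exceeds the target and lower it otherwise, so the unit's
    average activity homeostatically approaches target_rate."""
    rng = np.random.default_rng(seed)
    theta = 0.0
    rate_avg = 0.0
    for _ in range(steps):
        x = rng.normal(0.0, 1.0)                  # fluctuating input drive
        r = 1.0 / (1.0 + np.exp(-(x - theta)))    # instantaneous activity
        theta += eta * (r - target_rate)          # IP: regulate excitability
        rate_avg = 0.999 * rate_avg + 0.001 * r   # running average activity
    return theta, rate_avg

theta, rate_avg = run_ip()
```

Because every unit is pulled toward the same moderate activity level, no single neuron can dominate the reservoir, which is one intuition for why IP sharpens the competition that STDP alone sets up.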
O'Connor, P.; Welling, M.
2016-01-01
We introduce an algorithm to do backpropagation on a spiking network. Our network is "spiking" in the sense that our neurons accumulate their activation into a potential over time, and only send out a signal (a "spike") when this potential crosses a threshold and the neuron is reset. Neurons only
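The accumulate-and-fire behavior described in this (truncated) abstract can be sketched as a layer whose potentials integrate weighted input each step, spiking and resetting by subtraction on threshold crossing, so spike counts approximate weighted sums over time. This is a generic sketch of such a layer, not the authors' backpropagation algorithm.

```python
import numpy as np

def spiking_layer(inputs, w, thresh=1.0):
    """Run a layer of accumulate-and-fire units over a sequence of input
    vectors. Each unit integrates its weighted drive into a potential,
    emits a spike when the potential crosses thresh, and is reset by
    subtraction (carrying over any residual)."""
    v = np.zeros(w.shape[1])
    spikes = []
    for x in inputs:
        v += x @ w                       # accumulate weighted drive
        s = (v >= thresh).astype(float)  # threshold crossing -> spike
        v -= thresh * s                  # reset by subtraction
        spikes.append(s)
    return np.array(spikes)

# Constant drive of 0.25 per step for 10 steps: total drive 2.5,
# so with thresh = 1.0 the unit emits 2 spikes (residual 0.5 remains).
out = spiking_layer(np.ones((10, 1)), np.array([[0.25]]))
```

Reset-by-subtraction (rather than reset-to-zero) is what makes the long-run spike count track the accumulated drive divided by the threshold, i.e. a firing-rate code for the weighted sum.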
Visual language recognition with a feed-forward network of spiking neurons
Energy Technology Data Exchange (ETDEWEB)
Rasmussen, Craig E [Los Alamos National Laboratory]; Garrett, Kenyan [Los Alamos National Laboratory]; Sottile, Matthew [GALOIS]; Shreyas, Ns [INDIANA UNIV.]
2010-01-01
An analogy is made and exploited between the recognition of visual objects and language parsing. A subset of regular languages is used to define a one-dimensional 'visual' language, in which the words are translational and scale invariant. This allows an exploration of the viewpoint invariant languages that can be solved by a network of concurrent, hierarchically connected processors. A language family is defined that is hierarchically tiling system recognizable (HREC). As inspired by nature, an algorithm is presented that constructs a cellular automaton that recognizes strings from a language in the HREC family. It is demonstrated how a language recognizer can be implemented from the cellular automaton using a feed-forward network of spiking neurons. This parser recognizes fixed-length strings from the language in parallel and as the computation is pipelined, a new string can be parsed in each new interval of time. The analogy with formal language theory allows inferences to be drawn regarding what class of objects can be recognized by visual cortex operating in purely feed-forward fashion and what class of objects requires a more complicated network architecture.
Directory of Open Access Journals (Sweden)
William eLennon
2014-12-01
Full Text Available While the anatomy of the cerebellar microcircuit is well studied, how it implements cerebellar function is not understood. A number of models have been proposed to describe this mechanism but few emphasize the role of the vast network Purkinje cells (PKJs) form with the molecular layer interneurons (MLIs) – the stellate and basket cells. We propose a model of the MLI-PKJ network composed of simple spiking neurons incorporating the major anatomical and physiological features. In computer simulations, the model reproduces the irregular firing patterns observed in PKJs and MLIs in vitro and a shift toward faster, more regular firing patterns when inhibitory synaptic currents are blocked. In the model, the time between PKJ spikes is shown to be proportional, on average, to the amount of feedforward inhibition from an MLI. The two key elements of the model are: (1) spontaneously active PKJs and MLIs due to an endogenous depolarizing current, and (2) adherence to known anatomical connectivity along a parasagittal strip of cerebellar cortex. We propose this model to extend previous spiking network models of the cerebellum and for further computational investigation into the role of irregular firing and MLIs in cerebellar learning and function.
McKinstry, Jeffrey L; Edelman, Gerald M
2013-01-01
Animal behavior often involves a temporally ordered sequence of actions learned from experience. Here we describe simulations of interconnected networks of spiking neurons that learn to generate patterns of activity in correct temporal order. The simulation consists of large-scale networks of thousands of excitatory and inhibitory neurons that exhibit short-term synaptic plasticity and spike-timing dependent synaptic plasticity. The neural architecture within each area is arranged to evoke winner-take-all (WTA) patterns of neural activity that persist for tens of milliseconds. In order to generate and switch between consecutive firing patterns in correct temporal order, a reentrant exchange of signals between these areas was necessary. To demonstrate the capacity of this arrangement, we used the simulation to train a brain-based device responding to visual input by autonomously generating temporal sequences of motor actions.
Multistability in a neuron model with extracellular potassium dynamics
Wu, Xing-Xing; Shuai, J. W.
2012-06-01
Experiments show a primary role of extracellular potassium concentrations in neuronal hyperexcitability and in the generation of epileptiform bursting and depolarization blocks without synaptic mechanisms. We adopt a physiologically relevant hippocampal CA1 neuron model in a zero-calcium condition to better understand the function of extracellular potassium in neuronal seizurelike activities. The model neuron is surrounded by interstitial space in which potassium ions are able to accumulate. Potassium currents, Na+-K+ pumps, glial buffering, and ion diffusion are regulatory mechanisms of extracellular potassium. We also consider a reduced model with a fixed potassium concentration. The bifurcation structure and spiking frequency of the two models are studied. We show that, besides hyperexcitability and bursting pattern modulation, the potassium dynamics can induce not only bistability but also tristability of different firing patterns. Our results reveal the emergence of the complex behavior of multistability due to the dynamical [K+]o modulation on neuronal activities.
Point process modeling and estimation: Advances in the analysis of dynamic neural spiking data
Deng, Xinyi
2016-08-01
A common interest of scientists in many fields is to understand the relationship between the dynamics of a physical system and the occurrences of discrete events within that system. Seismologists study the connection between mechanical vibrations of the Earth and the occurrences of earthquakes so that future earthquakes can be better predicted. Astrophysicists study the association between the oscillating energy of celestial regions and the emission of photons to learn about the Universe's various objects and their interactions. Neuroscientists study the link between behavior and the millisecond-timescale spike patterns of neurons to understand higher brain functions. Such relationships can often be formulated within the framework of state-space models with point process observations. The basic idea is that the dynamics of the physical system are driven by the dynamics of some stochastic state variables, and the discrete events we observe in an interval are noisy observations with distributions determined by the state variables. This thesis proposes several new methodological developments that advance the framework of state-space models with point process observations at the intersection of statistics and neuroscience. In particular, we develop new methods 1) to characterize rhythmic spiking activity using history-dependent structure, 2) to model population spike activity using marked point process models, 3) to allow for real-time decision making, and 4) to take into account the need for dimensionality reduction for high-dimensional state and observation processes. We applied these methods to a novel problem of tracking rhythmic dynamics in the spiking of neurons in the subthalamic nucleus of Parkinson's patients, with the goal of optimizing placement of deep brain stimulation electrodes. We developed a decoding algorithm that can make decisions in real time (for example, to stimulate the neurons or not) based on various sources of information present in
Directory of Open Access Journals (Sweden)
Ping Zhong
2011-02-01
Serotonin exerts a powerful influence on neuronal excitability. In this study, we investigated the effects of serotonin on different neuronal populations in prefrontal cortex (PFC), a major area controlling emotion and cognition. Using whole-cell recordings in PFC slices, we found that bath application of 5-HT dose-dependently increased the firing of fast-spiking (FS) interneurons and decreased the firing of pyramidal neurons. The enhancing effect of 5-HT in FS interneurons was mediated by 5-HT₂ receptors, while the reducing effect of 5-HT in pyramidal neurons was mediated by 5-HT₁ receptors. Fluoxetine, a selective serotonin reuptake inhibitor, also induced a concentration-dependent increase in the excitability of FS interneurons, but had little effect on pyramidal neurons. In rats with chronic fluoxetine treatment, the excitability of FS interneurons was significantly increased, while pyramidal neurons remained unchanged. Fluoxetine injection largely occluded the enhancing effect of 5-HT in FS interneurons, but did not alter the reducing effect of 5-HT in pyramidal neurons. These data suggest that the excitability of PFC interneurons and pyramidal neurons is regulated by exogenous 5-HT in an opposing manner, and that FS interneurons are the major target of fluoxetine. This provides a framework for understanding the action of 5-HT and antidepressants in altering PFC network activity.
Statistical properties of superimposed stationary spike trains.
Deger, Moritz; Helias, Moritz; Boucsein, Clemens; Rotter, Stefan
2012-06-01
The Poisson process is an often employed model for the activity of neuronal populations. It is known, though, that superpositions of realistic, non-Poisson spike trains are not in general Poisson processes, not even for large numbers of superimposed processes. Here we construct superimposed spike trains from intracellular in vivo recordings from rat neocortex neurons and compare their statistics to specific point process models. The constructed superimposed spike trains reveal strong deviations from the Poisson model. We find that superpositions of model spike trains that take the effective refractoriness of the neurons into account yield a much better description. A minimal model of this kind is the Poisson process with dead-time (PPD). For this process, and for superpositions thereof, we obtain analytical expressions for several second-order statistical quantities, such as the count variability, inter-spike interval (ISI) variability, and ISI correlations, and demonstrate the match with the in vivo data. We conclude that effective refractoriness is the key property that shapes the statistical properties of the superposition spike trains. We present new, efficient algorithms to generate superpositions of PPDs and of gamma processes that can be used to provide more realistic background input in simulations of networks of spiking neurons. Using these generators, we show in simulations that neurons which receive superimposed spike trains as input are highly sensitive to the statistical effects induced by neuronal refractoriness.
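As a sketch of how such superposition inputs can be generated, the following draws PPD spike trains, whose inter-spike intervals are the dead-time plus an exponential waiting time, and superimposes them. Rate and dead-time values here are made up for illustration; the paper's own generators (e.g., for gamma processes) are more general.

```python
import numpy as np

def ppd_train(rate, dead_time, t_max, rng):
    """Poisson process with dead-time (PPD): after each spike the train is
    silent for dead_time, then fires with constant hazard `rate`, so every
    inter-spike interval is dead_time plus an exponential waiting time."""
    spikes, t = [], rng.exponential(1.0 / rate)
    while t < t_max:
        spikes.append(t)
        t += dead_time + rng.exponential(1.0 / rate)
    return np.array(spikes)

rng = np.random.default_rng(0)
trains = [ppd_train(rate=20.0, dead_time=0.002, t_max=10.0, rng=rng)
          for _ in range(10)]
merged = np.sort(np.concatenate(trains))   # superposition of the ten trains
# Every component train respects the dead-time, but the superposition does
# not, which is why it is neither a PPD nor a Poisson process.
```

Each component train has no ISI shorter than the dead-time, while the merged train does, illustrating why the superposition deviates from a PPD.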
The dynamic brain: from spiking neurons to neural masses and cortical fields.
Directory of Open Access Journals (Sweden)
Gustavo Deco
2008-08-01
The cortex is a complex system, characterized by its dynamics and architecture, which underlie many functions such as action, perception, learning, language, and cognition. Its structural architecture has been studied for more than a hundred years; however, its dynamics have been addressed much less thoroughly. In this paper, we review and integrate, in a unifying framework, a variety of computational approaches that have been used to characterize the dynamics of the cortex, as evidenced at different levels of measurement. Computational models at different space-time scales help us understand the fundamental mechanisms that underpin neural processes and relate these processes to neuroscience data. Modeling at the single neuron level is necessary because this is the level at which information is exchanged between the computing elements of the brain: the neurons. Mesoscopic models tell us how neural elements interact to yield emergent behavior at the level of microcolumns and cortical columns. Macroscopic models can inform us about whole brain dynamics and interactions between large-scale neural systems such as cortical regions, the thalamus, and brain stem. Each level of description relates uniquely to neuroscience data, from single-unit recordings, through local field potentials, to functional magnetic resonance imaging (fMRI), electroencephalography (EEG), and magnetoencephalography (MEG). Models of the cortex can establish which types of large-scale neuronal networks can perform computations and characterize their emergent properties. Mean-field and related formulations of dynamics also play an essential and complementary role as forward models that can be inverted given empirical data. This makes dynamic models critical in integrating theory and experiments. We argue that elaborating principled and informed models is a prerequisite for grounding empirical neuroscience in a cogent theoretical framework, commensurate with the achievements in the
Neuronal Networks in Children with Continuous Spikes and Waves during Slow Sleep
Siniatchkin, Michael; Groening, Kristina; Moehring, Jan; Moeller, Friederike; Boor, Rainer; Brodbeck, Verena; Michel, Christoph M.; Rodionov, Roman; Lemieux, Louis; Stephani, Ulrich
2010-01-01
Epileptic encephalopathy with continuous spikes and waves during slow sleep is an age-related disorder characterized by the presence of interictal epileptiform discharges during at least 85% of sleep and by cognitive deficits associated with this electroencephalography pattern. The pathophysiological mechanisms of continuous spikes and…
Fast Na+ spike generation in dendrites of guinea-pig substantia nigra pars compacta neurons
DEFF Research Database (Denmark)
Nedergaard, S; Hounsgaard, Jørn Dybkjær
1996-01-01
were not inhibited in the presence of glutamate receptor antagonists or during Ca2+ channel blockade. Blockers of gap junctional conductance (sodium propionate, octanol and halothane) did not affect the field-induced spikes. The spike generation was highly sensitive to changes in membrane conductance...
Three-dimensional chimera patterns in networks of spiking neuron oscillators
Kasimatis, T.; Hizanidis, J.; Provata, A.
2018-05-01
We study the stable spatiotemporal patterns that arise in a three-dimensional (3D) network of neuron oscillators, whose dynamics is described by the leaky integrate-and-fire (LIF) model. More specifically, we investigate the form of the chimera states induced by a 3D coupling matrix with nonlocal topology. The observed patterns are in many cases direct generalizations of the corresponding two-dimensional (2D) patterns, e.g., spheres, layers, and cylinder grids. We also find cylindrical and "cross-layered" chimeras that do not have an equivalent in 2D systems. Quantitative measures are calculated, such as the ratio of synchronized and unsynchronized neurons as a function of the coupling range, the mean phase velocities, and the distribution of neurons in mean phase velocities. Based on these measures, the chimeras are categorized in two families. The first family of patterns is observed for weaker coupling and exhibits higher mean phase velocities for the unsynchronized areas of the network. The opposite holds for the second family, where the unsynchronized areas have lower mean phase velocities. The various measures demonstrate discontinuities, indicating criticality as the parameters cross from the first family of patterns to the second.
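A minimal 1D sketch of this model class can show the nonlocally coupled LIF dynamics in action. The following simulates a ring (not the paper's 3D lattice) with illustrative parameters; it demonstrates the coupling scheme, not the chimera analysis itself.

```python
import numpy as np

def lif_ring(n=100, r=30, sigma=0.1, mu=1.0, u_th=0.9,
             dt=0.01, steps=5000, seed=1):
    """Ring of n LIF units with nonlocal diffusive coupling to the 2r
    nearest neighbors (a 1D analogue of the networks studied above;
    parameters are illustrative, not taken from the paper):
        du_i/dt = mu - u_i + (sigma / 2r) * sum_{0 < |j-i| <= r} (u_j - u_i),
    with reset u_i -> 0 whenever u_i reaches the threshold u_th."""
    rng = np.random.default_rng(seed)
    idx = np.arange(n)
    # Circular distance between unit indices on the ring.
    dist = np.abs((idx[:, None] - idx[None, :] + n // 2) % n - n // 2)
    neigh = (dist > 0) & (dist <= r)       # nonlocal coupling mask
    deg = neigh.sum(1)                     # 2r neighbors per unit
    u = rng.uniform(0.0, u_th, n)          # random initial conditions
    counts = np.zeros(n, dtype=int)
    for _ in range(steps):
        coupling = (sigma / (2 * r)) * (neigh @ u - deg * u)
        u = u + dt * (mu - u + coupling)
        fired = u >= u_th
        counts += fired
        u[fired] = 0.0
    return counts

counts = lif_ring()   # per-unit spike counts over the simulated window
```

With the suprathreshold drive mu above u_th, every unit fires repeatedly; diagnostics such as mean phase velocities would be computed from the resulting spike times.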
Pastore, Vito Paolo; Godjoski, Aleksandar; Martinoia, Sergio; Massobrio, Paolo
2018-01-01
We implemented an automated and efficient open-source software package for the analysis of multi-site neuronal spike signals. The package, named SPICODYN, has been developed as a standalone Windows GUI application, written in C# with Microsoft Visual Studio on the .NET Framework 4.5. Accepted input data formats are HDF5, level 5 MAT, and text files containing recorded or generated spike-train time series. SPICODYN processes such electrophysiological signals with a focus on spiking and bursting dynamics and functional-effective connectivity analysis. In particular, for inferring network connectivity, a new implementation of the transfer entropy method is presented that deals with multiple time delays (temporal extension) and with multiple binary patterns (high-order extension). SPICODYN is specifically tailored to process data coming from different Multi-Electrode Array setups, guaranteeing automated processing in those specific cases. The optimized implementation of the Delayed Transfer Entropy and High-Order Transfer Entropy algorithms allows accurate and rapid analysis of multiple spike trains from thousands of electrodes.
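SPICODYN's transfer entropy implementation is only summarized above; as a minimal sketch of the underlying idea, the following computes first-order delayed transfer entropy on binary spike trains (a single delay, no high-order patterns), using synthetic data in which one train copies the other at a known lag.

```python
import numpy as np
from collections import Counter

def transfer_entropy(x, y, delay):
    """First-order delayed transfer entropy (in bits) from binary train x
    to binary train y:
    TE = sum p(y_t, y_{t-1}, x_{t-d}) * log2[p(y_t|y_{t-1}, x_{t-d}) / p(y_t|y_{t-1})]."""
    t0 = max(1, delay)
    a = y[t0:].tolist()                        # y_t
    b = y[t0 - 1:len(y) - 1].tolist()          # y_{t-1}
    c = x[t0 - delay:len(x) - delay].tolist()  # x_{t-delay}
    n = len(a)
    abc, ab = Counter(zip(a, b, c)), Counter(zip(a, b))
    bc, b_cnt = Counter(zip(b, c)), Counter(b)
    te = 0.0
    for (ai, bi, ci), n_abc in abc.items():
        # Counts substitute for probabilities; the 1/n factors cancel.
        te += (n_abc / n) * np.log2(n_abc * b_cnt[bi] / (bc[(bi, ci)] * ab[(ai, bi)]))
    return te

rng = np.random.default_rng(42)
T = 20000
x = rng.integers(0, 2, T)
y = np.zeros(T, dtype=int)
y[3:] = x[:-3]                                 # y copies x with a 3-step lag
y = y ^ (rng.random(T) < 0.05).astype(int)     # flip 5% of bits as noise
```

Scanning `transfer_entropy(x, y, d)` over delays recovers the true coupling lag: the value at d = 3 is large, while at mismatched delays it is near zero.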
Directory of Open Access Journals (Sweden)
Risako Kato
2018-03-01
General anesthetics decrease the frequency and density of spike firing. This effect makes it difficult to detect spike regularity. To overcome this problem, we developed a method based on the unfolding transformation, which is used to analyze energy-level statistics in random matrix theory. Treating spike times in analogy to energy levels, we analyzed time series of cortical neural firing in vivo. The unfolding transformation detected regularities in neural firing despite the changes in firing density induced by pentobarbital. We found that the unfolding transformation enables firing regularity to be compared between awake and anesthetized conditions on a universal scale. Keywords: Unfolding transformation, Spike-timing, Regularity
Billock, Vincent A
2018-04-01
Neural spike rate data are more restricted in range than related psychophysical data. For example, several studies suggest a compressive (roughly cube root) nonlinear relationship between wavelength-opponent spike rates in primate midbrain and color appearance in humans, two rather widely separated domains. This presents an opportunity to partially bridge a chasm between these two domains and to probe the putative nonlinearity with other psychophysical data. Here, neural wavelength-opponent data are used to create cortical competition models for hue opponency. This effort led to the creation of useful models of spiking-neuron winner-take-all (WTA) competition and MAX selection. When fed with actual primate data, the spiking WTA models generate reasonable wavelength-opponent spike rate behaviors. An average psychophysical observer for red-green and blue-yellow opponency is curated from eight applicable studies in the refereed and dissertation literatures, with cancellation data roughly every 10 nm in 18 subjects for yellow-blue opponency and 15 subjects for red-green opponency. A direct mapping between spiking neurons with broadband wavelength sensitivity and human psychophysical luminance yields a power law exponent of 0.27, similar to the cube root nonlinearity. Similarly, direct mapping between the WTA model opponent spike rates and psychophysical opponent data suggests power law relationships with exponents between 0.24 and 0.41.
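The exponent-estimation step can be illustrated with synthetic data (all values below are made up; the paper fits actual primate and human data): a compressive power law R = k·S^p is linear in log-log coordinates, so p is recovered as the slope of a least-squares line.

```python
import numpy as np

rng = np.random.default_rng(7)
spike_rate = np.linspace(5.0, 200.0, 40)        # arbitrary units
true_p, true_k = 0.27, 2.0                      # cube-root-like compression
# Synthetic psychophysical responses with small multiplicative noise.
response = true_k * spike_rate ** true_p * np.exp(rng.normal(0.0, 0.02, 40))

# log R = log k + p * log S: fit a line in log-log coordinates.
slope, intercept = np.polyfit(np.log(spike_rate), np.log(response), 1)
# slope estimates the exponent p; exp(intercept) estimates the gain k.
```

The fitted slope recovers the compressive exponent despite the noise, which is the same logic used to compare spike-rate and psychophysical scales.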
Hyperbolic Plykin attractor can exist in neuron models
DEFF Research Database (Denmark)
Belykh, V.; Belykh, I.; Mosekilde, Erik
2005-01-01
Strange hyperbolic attractors are hard to find in real physical systems. This paper provides the first example of a realistic system, a canonical three-dimensional (3D) model of bursting neurons, that is likely to have a strange hyperbolic attractor. Using a geometrical approach to the study...... of the neuron model, we derive a flow-defined Poincare map giving an accurate account of the system's dynamics. In a parameter region where the neuron system undergoes bifurcations causing transitions between tonic spiking and bursting, this two-dimensional map becomes a map of a disk with several periodic...... holes. A particular case is the map of a disk with three holes, matching the Plykin example of a planar hyperbolic attractor. The corresponding attractor of the 3D neuron model appears to be hyperbolic (this property is not verified in the present paper) and arises as a result of a two-loop (secondary...
Directory of Open Access Journals (Sweden)
Luis I Angel-Chavez
In signal transduction research, natural or synthetic molecules are commonly used to target a great variety of signaling proteins. For instance, forskolin, a diterpene activator of adenylate cyclase, has been widely used in cellular preparations to increase the intracellular cAMP level. However, it has been shown that forskolin directly inhibits some cloned K+ channels, which in excitable cells set the resting membrane potential, shape the action potential, and regulate repetitive firing. Despite the growing evidence that K+ channels are blocked by forskolin, no studies have yet assessed the impact of this mechanism of action on neuronal excitability and firing patterns. In sympathetic neurons, we find that forskolin and its derivative 1,9-dideoxyforskolin reversibly suppress the delayed rectifier K+ current (IKV). In addition, forskolin reduced the spike afterhyperpolarization and enhanced the spike frequency-dependent adaptation. Given that IKV is mostly generated by Kv2.1 channels, HEK-293 cells were transfected with cDNA encoding the Kv2.1 α subunit to characterize the mechanism of forskolin action. Both drugs reversibly suppressed the Kv2.1-mediated K+ currents. Forskolin inhibited Kv2.1 currents and IKV with an IC50 of ~32 μM and ~24 μM, respectively. The drug also induced an apparent current inactivation and slowed down current deactivation. We suggest that forskolin reduces the excitability of sympathetic neurons by enhancing the spike frequency-dependent adaptation, partially through a direct block of their native Kv2.1 channels.
Angel-Chavez, Luis I; Acosta-Gómez, Eduardo I; Morales-Avalos, Mario; Castro, Elena; Cruzblanca, Humberto
2015-01-01
In signal transduction research, natural or synthetic molecules are commonly used to target a great variety of signaling proteins. For instance, forskolin, a diterpene activator of adenylate cyclase, has been widely used in cellular preparations to increase the intracellular cAMP level. However, it has been shown that forskolin directly inhibits some cloned K+ channels, which in excitable cells set the resting membrane potential, shape the action potential, and regulate repetitive firing. Despite the growing evidence that K+ channels are blocked by forskolin, no studies have yet assessed the impact of this mechanism of action on neuronal excitability and firing patterns. In sympathetic neurons, we find that forskolin and its derivative 1,9-dideoxyforskolin reversibly suppress the delayed rectifier K+ current (IKV). In addition, forskolin reduced the spike afterhyperpolarization and enhanced the spike frequency-dependent adaptation. Given that IKV is mostly generated by Kv2.1 channels, HEK-293 cells were transfected with cDNA encoding the Kv2.1 α subunit to characterize the mechanism of forskolin action. Both drugs reversibly suppressed the Kv2.1-mediated K+ currents. Forskolin inhibited Kv2.1 currents and IKV with an IC50 of ~32 μM and ~24 μM, respectively. The drug also induced an apparent current inactivation and slowed down current deactivation. We suggest that forskolin reduces the excitability of sympathetic neurons by enhancing the spike frequency-dependent adaptation, partially through a direct block of their native Kv2.1 channels.
Dierck, Carina
2018-01-01
CSWS is an age-related epileptic encephalopathy consisting of the triad of seizures, neuropsychological impairment, and a specific EEG pattern. This EEG pattern is characterized by spike-and-wave discharges that are strongly activated during non-REM sleep. Until now, little has been known about the pathophysiological processes. So far, research approaches to the underlying neuronal network have been based on techniques with good spatial but poor temporal resolution, such as fMRI and FDG-PET. In this study the se...
Stimulus Sensitivity of a Spiking Neural Network Model
Chevallier, Julien
2018-02-01
Some recent papers relate the criticality of complex systems to their maximal capacity of information processing. In the present paper, we consider high-dimensional point processes, known as age-dependent Hawkes processes, which have been used to model spiking neural networks. Using a mean-field approximation, the response of the network to a stimulus is computed, and we provide a notion of stimulus sensitivity. It appears that the maximal sensitivity is achieved in the sub-critical regime, yet close to criticality, for a range of biologically relevant parameters.
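The age-dependent Hawkes processes of the paper are not reproduced here; as a minimal related sketch, a classic linear Hawkes process with an exponential self-excitation kernel can be simulated by Ogata's thinning algorithm (all parameter values are illustrative).

```python
import numpy as np

def hawkes_thinning(mu, alpha, beta, t_max, rng):
    """Simulate a linear Hawkes process with conditional intensity
        lambda(t) = mu + alpha * sum_{t_i < t} exp(-beta * (t - t_i))
    by Ogata's thinning algorithm (subcritical when alpha / beta < 1)."""
    events, t = [], 0.0
    while t < t_max:
        # The intensity only decays until the next event, so its current
        # value is a valid upper bound for the thinning step.
        lam_bar = mu + alpha * sum(np.exp(-beta * (t - s)) for s in events)
        t += rng.exponential(1.0 / lam_bar)
        if t >= t_max:
            break
        lam_t = mu + alpha * sum(np.exp(-beta * (t - s)) for s in events)
        if rng.random() * lam_bar <= lam_t:   # accept with prob lam_t / lam_bar
            events.append(t)
    return np.array(events)

rng = np.random.default_rng(3)
spikes = hawkes_thinning(mu=0.5, alpha=0.8, beta=1.2, t_max=200.0, rng=rng)
# Stationary mean count is mu * t_max / (1 - alpha/beta), i.e. ~300 here.
```

The branching ratio alpha/beta plays the role of the criticality parameter: as it approaches 1, the process approaches the critical regime discussed above.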
Bifurcation software in Matlab with applications in neuronal modeling.
Govaerts, Willy; Sautois, Bart
2005-02-01
Many biological phenomena, notably in neuroscience, can be modeled by dynamical systems. We describe a recent improvement of a Matlab software package for dynamical systems with applications to modeling single neurons and all-to-all connected networks of neurons. The new software features consist of an object-oriented approach to bifurcation computations and the partial inclusion of C-code to speed up the computation. As an application, we study the origin of the spiking behaviour of neurons when the equilibrium state is destabilized by an incoming current. We show that Class II behaviour, i.e. firing with a finite frequency, is possible even if the destabilization occurs through a saddle-node bifurcation. Furthermore, we show that synchronization of an all-to-all connected network of such neurons with only excitatory connections is also possible in this case.
Comparison of neuronal spike exchange methods on a Blue Gene/P supercomputer
Directory of Open Access Journals (Sweden)
Michael Hines
2011-11-01
The performance of several spike exchange methods using a Blue Gene/P supercomputer has been tested with 8K to 128K cores, using randomly connected networks of up to 32M cells with 1k connections per cell and 4M cells with 10k connections per cell. The spike exchange methods used are the standard Message Passing Interface collective, MPI_Allgather, and several variants of the non-blocking multisend method, either implemented via non-blocking MPI_Isend or exploiting the very low overhead direct memory access communication available on the Blue Gene/P. In all cases the worst performing method was that using MPI_Isend, due to the high overhead of initiating a spike communication. The two best performing methods had similar performance, with very low overhead for the initiation of spike communication: the persistent multisend method using the Record-Replay feature of the Deep Computing Messaging Framework (DCMF_Multicast), and a two-phase multisend in which a DCMF_Multicast first sends to a subset of phase-1 destination cores, which then pass the spike on to their subsets of phase-2 destination cores. Departure from ideal scaling for the multisend methods is almost completely due to load imbalance caused by the large variation in the number of cells that fire on each processor in the interval between synchronizations. Spike exchange time itself is negligible, since transmission overlaps with computation and is handled by a direct memory access controller. We conclude that ideal performance scaling will ultimately be limited by the imbalance in the number of spikes arriving at each processor between synchronization intervals. Thus, counterintuitively, maximizing load balance requires that the distribution of cells across processors not reflect the neural-net architecture: cells that burst-fire together should be placed on different processors, with their targets spread over as large a set of processors as possible.
Haegens, Saskia; Nácher, Verónica; Luna, Rogelio; Romo, Ranulfo; Jensen, Ole
2011-11-29
Extensive work in humans using magneto- and electroencephalography strongly suggests that decreased oscillatory α-activity (8-14 Hz) facilitates processing in a given region, whereas increased α-activity serves to actively suppress irrelevant or interfering processing. However, little work has been done to understand how α-activity is linked to neuronal firing. Here, we simultaneously recorded local field potentials and spikes from somatosensory, premotor, and motor regions while a trained monkey performed a vibrotactile discrimination task. In the local field potentials we observed strong activity in the α-band, which decreased in the sensorimotor regions during the discrimination task. This α-power decrease predicted better discrimination performance. Furthermore, the α-oscillations demonstrated a rhythmic relation with the spiking, such that firing was highest at the trough of the α-cycle. Firing rates increased with a decrease in α-power. These findings suggest that α-oscillations exercise a strong inhibitory influence on both spike timing and firing rate. Thus, the pulsed inhibition by α-oscillations plays an important functional role in the extended sensorimotor system.
A network of spiking neurons for computing sparse representations in an energy-efficient way.
Hu, Tao; Genkin, Alexander; Chklovskii, Dmitri B
2012-11-01
Computing sparse redundant representations is an important problem in both applied mathematics and neuroscience. In many applications, this problem must be solved in an energy-efficient way. Here, we propose a hybrid distributed algorithm (HDA), which solves this problem on a network of simple nodes communicating by low-bandwidth channels. HDA nodes perform both gradient-descent-like steps on analog internal variables and coordinate-descent-like steps via quantized external variables communicated to each other. Interestingly, the operation is equivalent to a network of integrate-and-fire neurons, suggesting that HDA may serve as a model of neural computation. We show that the numerical performance of HDA is on par with existing algorithms. In the asymptotic regime, the representation error of HDA decays with time, t, as 1/t. HDA is stable against time-varying noise; specifically, the representation error decays as 1/√t for Gaussian white noise.
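The HDA itself is only described at a high level above; the following is a sketch of the closely related locally competitive dynamics for sparse coding (the continuous analogue of the gradient-like steps described), with a made-up dictionary and signal: analog internal variables evolve under feedforward drive and lateral inhibition, and a thresholded output carries the sparse code.

```python
import numpy as np

rng = np.random.default_rng(5)
m, k = 20, 50                              # signal dimension, dictionary size
Phi = rng.normal(size=(m, k))
Phi /= np.linalg.norm(Phi, axis=0)         # unit-norm dictionary atoms
truth = np.zeros(k)
truth[[3, 17, 41]] = [1.0, -0.8, 1.4]      # sparse ground truth
x = Phi @ truth                            # signal to be represented

lam, dt = 0.02, 0.05                       # sparsity threshold, step size
b = Phi.T @ x                              # feedforward drive
G = Phi.T @ Phi - np.eye(k)                # lateral inhibition between atoms
u = np.zeros(k)                            # analog internal variables
for _ in range(2000):
    a = np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)  # thresholded output
    u += dt * (b - u - G @ a)              # gradient-like dynamics

recon_err = np.linalg.norm(x - Phi @ a) / np.linalg.norm(x)
```

The output a converges to a sparse code with low reconstruction error; in the HDA, the thresholded output is additionally quantized into spikes exchanged over low-bandwidth channels.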
Connelly, William M; Crunelli, Vincenzo; Errington, Adam C
2017-05-24
Backpropagating action potentials (bAPs) are indispensable in dendritic signaling. Conflicting Ca2+ imaging data and an absence of dendritic recording data mean that the extent of backpropagation in thalamocortical (TC) and thalamic reticular nucleus (TRN) neurons remains unknown. Because TRN neurons signal electrically through dendrodendritic gap junctions and possibly via chemical dendritic GABAergic synapses, as well as classical axonal GABA release, this lack of knowledge is problematic. To address this issue, we made two-photon targeted patch-clamp recordings from rat TC and TRN neuron dendrites to measure bAPs directly. These recordings reveal that "tonic" and low-threshold-spike (LTS) "burst" APs in both cell types are always recorded first at the soma before backpropagating into the dendrites while undergoing substantial distance-dependent dendritic amplitude attenuation. In TC neurons, bAP attenuation strength varies according to firing mode. During LTS bursts, somatic AP half-width increases progressively with increasing spike number, allowing late-burst spikes to propagate more efficiently into the dendritic tree compared with spikes occurring at burst onset. Tonic spikes have similar somatic half-widths to late-burst spikes and undergo similar dendritic attenuation. In contrast, in TRN neurons, AP properties are unchanged between LTS bursts and tonic firing and, as a result, distance-dependent dendritic attenuation remains consistent across different firing modes. Therefore, unlike LTS-associated global electrical and calcium signals, the spatial influence of bAP signaling in TC and TRN neurons is more restricted, with potentially important behavioral-state-dependent consequences for synaptic integration and plasticity in thalamic neurons. SIGNIFICANCE STATEMENT In most neurons, action potentials (APs) initiate in the axosomatic region and propagate into the dendritic tree to provide a retrograde signal that conveys information about the level of
International Nuclear Information System (INIS)
Yu, Haitao; Guo, Xinmeng; Wang, Jiang; Deng, Bin; Wei, Xile
2014-01-01
The phenomenon of stochastic resonance in Newman-Watts small-world neuronal networks is investigated when the strength of synaptic connections between neurons is adaptively adjusted by spike-time-dependent plasticity (STDP). It is shown that, irrespective of whether the synaptic connectivity is fixed or adaptive, the phenomenon of stochastic resonance occurs. The efficiency of network stochastic resonance can be largely enhanced by STDP in the coupling process. In particular, the resonance for adaptive coupling can reach a much larger value than that for fixed coupling when the noise intensity is small or intermediate. STDP with dominant depression and a small temporal window ratio is more efficient for the transmission of a weak external signal in small-world neuronal networks. In addition, we demonstrate that the effect of stochastic resonance can be further improved via fine-tuning of the average coupling strength of the adaptive network. Furthermore, the small-world topology can significantly affect stochastic resonance in excitable neuronal networks. It is found that there exists an optimal probability of adding links at which the noise-induced transmission of the weak periodic signal peaks.
Energy Technology Data Exchange (ETDEWEB)
Yu, Haitao; Guo, Xinmeng; Wang, Jiang, E-mail: jiangwang@tju.edu.cn; Deng, Bin; Wei, Xile [School of Electrical Engineering and Automation, Tianjin University, Tianjin 300072 (China)
2014-09-01
The phenomenon of stochastic resonance in Newman-Watts small-world neuronal networks is investigated when the strength of synaptic connections between neurons is adaptively adjusted by spike-time-dependent plasticity (STDP). It is shown that, irrespective of whether the synaptic connectivity is fixed or adaptive, the phenomenon of stochastic resonance occurs. The efficiency of network stochastic resonance can be largely enhanced by STDP in the coupling process. In particular, the resonance for adaptive coupling can reach a much larger value than that for fixed coupling when the noise intensity is small or intermediate. STDP with dominant depression and a small temporal window ratio is more efficient for the transmission of a weak external signal in small-world neuronal networks. In addition, we demonstrate that the effect of stochastic resonance can be further improved via fine-tuning of the average coupling strength of the adaptive network. Furthermore, the small-world topology can significantly affect stochastic resonance in excitable neuronal networks. It is found that there exists an optimal probability of adding links at which the noise-induced transmission of the weak periodic signal peaks.
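A minimal sketch of the pair-based STDP window used in studies of this kind (parameter values below are illustrative, not the paper's exact rule); choosing a_minus > a_plus gives the depression-dominated regime mentioned in the abstract.

```python
import numpy as np

def stdp_dw(delta_t, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pair-based STDP window, delta_t = t_post - t_pre in ms:
    exponential potentiation when the presynaptic spike precedes the
    postsynaptic one (delta_t > 0), exponential depression otherwise.
    a_minus > a_plus makes depression dominant over the whole window."""
    delta_t = np.asarray(delta_t, dtype=float)
    return np.where(delta_t > 0,
                    a_plus * np.exp(-delta_t / tau),
                    -a_minus * np.exp(delta_t / tau))

# Applying the window to one synapse, with the weight clipped to [0, w_max]:
w_max = 1.0
w = 0.5
w = float(np.clip(w + stdp_dw(5.0), 0.0, w_max))   # pre fires 5 ms before post
```

In an adaptive network, this update would be applied to every pre/post spike pairing, letting the coupling strengths self-organize during the resonance experiments described above.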
International Nuclear Information System (INIS)
Kanamura, Takashi
2007-01-01
This paper proposes a new model for electricity prices based on demand and supply, which we call a structural model. We show that the structural model can generate price spikes that fit the observed data better than those generated by preceding models such as the jump diffusion model and the Box-Cox transformation model. We apply the structural model to obtain the optimal operation policy for a pumped-storage hydropower generator, and show that the structural model can provide more realistic optimal policies than the jump diffusion model. (author)
Energy Technology Data Exchange (ETDEWEB)
Kanamura, Takashi [Hitotsubashi University, Tokyo (Japan). Graduate School of International Corporate Strategy; Ohashi, Azuhiko [J-Power, Tokyo (Japan)
2007-09-15
This paper proposes a new model for electricity prices based on demand and supply, which we call a structural model. We show that the structural model can generate price spikes that fit the observed data better than those generated by preceding models such as the jump diffusion model and the Box-Cox transformation model. We apply the structural model to obtain the optimal operation policy for a pumped-storage hydropower generator, and show that the structural model can provide more realistic optimal policies than the jump diffusion model. (author)
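The mechanism behind such structural models can be sketched in a few lines: demand follows a mean-reverting process, supply capacity is fixed, and the price is read off a steep inverse-supply (merit-order) curve, so ordinary demand fluctuations near capacity map into occasional price spikes. The functional forms and parameter values below are illustrative assumptions, not those of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters: mean-reverting demand D_t, fixed supply capacity C.
kappa, mu, sigma, C = 5.0, 0.9, 0.15, 1.0
dt, n = 1.0 / 365, 365

D = np.empty(n)
D[0] = mu
for t in range(1, n):
    D[t] = D[t - 1] + kappa * (mu - D[t - 1]) * dt \
           + sigma * np.sqrt(dt) * rng.standard_normal()
D = np.clip(D, 0.0, 0.99 * C)   # demand cannot exceed installed capacity

# Steep inverse-supply curve: price explodes as demand approaches capacity,
# which is what produces spikes without any exogenous jump process.
P = 1.0 / (C - D)
```

A jump diffusion model, by contrast, must inject spikes through an exogenous jump term rather than deriving them from the demand/supply mechanism.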
Modeling task-specific neuronal ensembles improves decoding of grasp
Smith, Ryan J.; Soares, Alcimar B.; Rouse, Adam G.; Schieber, Marc H.; Thakor, Nitish V.
2018-06-01
Objective. Dexterous movement involves the activation and coordination of networks of neuronal populations across multiple cortical regions. Attempts to model firing of individual neurons commonly treat the firing rate as directly modulating with motor behavior. However, motor behavior may additionally be associated with modulations in the activity and functional connectivity of neurons in a broader ensemble. Accounting for variations in neural ensemble connectivity may provide additional information about the behavior being performed. Approach. In this study, we examined neural ensemble activity in primary motor cortex (M1) and premotor cortex (PM) of two male rhesus monkeys during performance of a center-out reach, grasp and manipulate task. We constructed point process encoding models of neuronal firing that incorporated task-specific variations in the baseline firing rate as well as variations in functional connectivity with the neural ensemble. Models were evaluated both in terms of their encoding capabilities and their ability to properly classify the grasp being performed. Main results. Task-specific ensemble models correctly predicted the performed grasp with over 95% accuracy and were shown to outperform models of neuronal activity that assume only a variable baseline firing rate. Task-specific ensemble models exhibited superior decoding performance in 82% of units in both monkeys (p < 0.01). Inclusion of ensemble activity also broadly improved the ability of models to describe observed spiking. Encoding performance of task-specific ensemble models, measured by spike timing predictability, improved upon baseline models in 62% of units. Significance. These results suggest that additional discriminative information about motor behavior found in the variations in functional connectivity of neuronal ensembles located in motor-related cortical regions is relevant to decode complex tasks such as grasping objects, and may serve as the basis for more
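A minimal sketch of such a point-process encoding model, under the common log-linear (Poisson GLM) assumption: each unit's conditional intensity combines a grasp-specific baseline with coupling to ensemble spike counts, and competing grasp-specific models can be compared by log-likelihood to classify the grasp. All coefficient values and helper names here are hypothetical, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def conditional_intensity(beta0, beta_ens, ensemble_counts):
    """Log-linear point-process rate: task-specific baseline plus
    functional coupling to the ensemble's spike counts."""
    return np.exp(beta0 + ensemble_counts @ beta_ens)

def poisson_loglik(y, lam, dt=0.01):
    """Poisson log-likelihood of observed counts y given rates lam;
    comparing this across grasp-specific models yields a classifier."""
    return float(np.sum(y * np.log(lam * dt) - lam * dt))

# Hypothetical fitted coefficients for one unit under one grasp type.
beta0 = np.log(20.0)                        # baseline of ~20 spikes/s
beta_ens = np.array([0.05, -0.03, 0.02])    # coupling to 3 ensemble units
counts = rng.poisson(2.0, size=(100, 3))    # ensemble counts per time bin
lam = conditional_intensity(beta0, beta_ens, counts)
y = rng.poisson(lam * 0.01)                 # simulated observed spike counts
ll = poisson_loglik(y, lam)
```

Decoding then amounts to evaluating `poisson_loglik` under each grasp's fitted model and picking the maximum.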
Directory of Open Access Journals (Sweden)
Paul eChorley
2011-05-01
Full Text Available Dopaminergic neurons in the mammalian substantia nigra display characteristic phasic responses to stimuli which reliably predict the receipt of primary rewards. These responses have been suggested to encode reward prediction errors similar to those used in reinforcement learning. Here, we propose a model of dopaminergic activity in which prediction error signals are generated by the joint action of short-latency excitation and long-latency inhibition, in a network undergoing dopaminergic neuromodulation of both spike-timing-dependent synaptic plasticity and neuronal excitability. In contrast to previous models, sensitivity to recent events is maintained by the selective modification of specific striatal synapses, efferent to cortical neurons exhibiting stimulus-specific, temporally extended activity patterns. Our model shows, in the presence of significant background activity, (i) a shift in dopaminergic response from reward to reward-predicting stimuli, (ii) preservation of a response to unexpected rewards, and (iii) a precisely timed below-baseline dip in activity observed when expected rewards are omitted.
Regalia, Giulia; Coelli, Stefania; Biffi, Emilia; Ferrigno, Giancarlo; Pedrocchi, Alessandra
2016-01-01
Neuronal spike sorting algorithms are designed to retrieve neuronal network activity on a single-cell level from extracellular multiunit recordings with Microelectrode Arrays (MEAs). In typical analysis of MEA data, one spike sorting algorithm is applied indiscriminately to all electrode signals. However, this approach neglects the dependency of algorithms' performances on the neuronal signals properties at each channel, which require data-centric methods. Moreover, sorting is commonly performed off-line, which is time and memory consuming and prevents researchers from having an immediate glance at ongoing experiments. The aim of this work is to provide a versatile framework to support the evaluation and comparison of different spike classification algorithms suitable for both off-line and on-line analysis. We incorporated different spike sorting "building blocks" into a Matlab-based software, including 4 feature extraction methods, 3 feature clustering methods, and 1 template matching classifier. The framework was validated by applying different algorithms on simulated and real signals from neuronal cultures coupled to MEAs. Moreover, the system has been proven effective in running on-line analysis on a standard desktop computer, after the selection of the most suitable sorting methods. This work provides a useful and versatile instrument for a supported comparison of different options for spike sorting towards more accurate off-line and on-line MEA data analysis.
Energy Model of Neuron Activation.
Romanyshyn, Yuriy; Smerdov, Andriy; Petrytska, Svitlana
2017-02-01
On the basis of the neurophysiological strength-duration (amplitude-duration) curve of neuron activation, which relates the threshold amplitude of a rectangular current pulse to the pulse duration, and using an activation energy constraint (the threshold curve corresponds to the energy threshold of neuron activation by a rectangular current pulse), an energy model of neuron activation by a single current pulse has been constructed. The constructed activation model, which determines its spectral properties, is a bandpass filter. Assuming the neuron activation model is minimum-phase, we consider the possibility of calculating its phase-frequency response from its amplitude-frequency response on the basis of the Hilbert transform. Approximating the amplitude-frequency response by that of a first-order Butterworth filter, and obtaining the pulse response corresponding to this approximation, makes it possible to analyze the efficiency of activating current pulses of various shapes, including analysis under the energy constraint.
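The strength-duration relationship the model starts from can be illustrated with the classical Lapicque form, used here as a hedged stand-in for the paper's neurophysiological curve; `i_rheobase` and `chronaxie` are illustrative parameters. The energy of a rectangular pulse of amplitude I and duration d (into a unit load) is I²·d, which makes the energy-threshold reading of the curve explicit.

```python
import numpy as np

def threshold_current(d, i_rheobase=1.0, chronaxie=1.0):
    """Lapicque strength-duration curve: threshold amplitude of a
    rectangular current pulse of duration d."""
    return i_rheobase * (1.0 + chronaxie / d)

durations = np.array([0.1, 0.5, 1.0, 5.0, 10.0])
i_th = threshold_current(durations)        # threshold amplitude per duration
energy = i_th ** 2 * durations             # pulse energy at threshold (unit load)
```

For this form the threshold energy d·(1 + chronaxie/d)² is minimized when d equals the chronaxie, one way such a curve pins down an energy constraint on activating pulses.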
Stochastic resonance in models of neuronal ensembles
International Nuclear Information System (INIS)
Chialvo, D.R.; Longtin, A.; Mueller-Gerkin, J.
1997-01-01
Two recently suggested mechanisms for the neuronal encoding of sensory information involving the effect of stochastic resonance with aperiodic time-varying inputs are considered. It is shown, using theoretical arguments and numerical simulations, that the nonmonotonic behavior with increasing noise of the correlation measures used for the so-called aperiodic stochastic resonance (ASR) scenario does not rely on the cooperative effect typical of stochastic resonance in bistable and excitable systems. Rather, ASR with slowly varying signals is more properly interpreted as linearization by noise. Consequently, the broadening of the "resonance curve" in the multineuron stochastic resonance without tuning scenario can also be explained by this linearization. Computation of the input-output correlation as a function of both signal frequency and noise for the model system further reveals conditions where noise-induced firing with aperiodic inputs will benefit from stochastic resonance rather than linearization by noise. Thus, our study clarifies the tuning requirements for the optimal transduction of subthreshold aperiodic signals. It also shows that a single deterministic neuron can perform as well as a network when biased into a suprathreshold regime. Finally, we show that the inclusion of a refractory period in the spike-detection scheme produces a better correlation between instantaneous firing rate and input signal. copyright 1997 The American Physical Society
Distributed Cerebellar Motor Learning; a Spike-Timing-Dependent Plasticity Model
Directory of Open Access Journals (Sweden)
Niceto Rafael Luque
2016-03-01
Full Text Available Deep cerebellar nuclei neurons receive both inhibitory (GABAergic) synaptic currents from Purkinje cells (within the cerebellar cortex) and excitatory (glutamatergic) synaptic currents from mossy fibres. These two deep cerebellar nucleus inputs are thought to be also adaptive, embedding interesting properties in the framework of accurate movements. We show that distributed spike-timing-dependent plasticity (STDP) mechanisms located at different cerebellar sites (parallel fibres to Purkinje cells, mossy fibres to deep cerebellar nucleus cells, and Purkinje cells to deep cerebellar nucleus cells) in closed-loop simulations provide an explanation for the complex learning properties of the cerebellum in motor learning. Concretely, we propose a new mechanistic cerebellar spiking model. In this new model, deep cerebellar nuclei embed a dual functionality: acting as a gain adaptation mechanism and as a facilitator for slow memory consolidation at mossy fibre to deep cerebellar nucleus synapses. Equipping the cerebellum with excitatory (e-STDP) and inhibitory (i-STDP) mechanisms at deep cerebellar nuclei afferents allows the accommodation of synaptic memories that were formed at parallel fibre to Purkinje cell synapses and then transferred to mossy fibre to deep cerebellar nucleus synapses. These adaptive mechanisms also contribute to modulating the deep-cerebellar-nucleus output firing rate (output gain modulation) towards optimising its working range.
Kim, Suhwan; Baek, Juyeong; Jung, Unsang; Lee, Sangwon; Jung, Woonggyu; Kim, Jeehyun; Kang, Shinwon
2013-05-01
Recently, mouse neuroblastoma cells have been considered an attractive model for the study of human neurological and prion diseases, and they have been intensively used as a model system in different areas. Among those areas, the differentiation of neuro2a (N2A) cells, receptor-mediated ion currents, and glutamate-induced physiological responses are actively investigated. The interest in mouse neuroblastoma N2A cells stems from their faster growth rate compared with other cells of neural origin, along with a few other advantages. This study evaluated the calcium oscillations and neural spike recordings of mouse neuroblastoma N2A cells in an epileptic condition. Based on our observation of neural spikes in mouse N2A cells with our proposed imaging modality, we report that mouse neuroblastoma N2A cells can be an important model for epileptic activity studies. It is concluded that mouse neuroblastoma N2A cells produce epileptic spikes in vitro in the same way as those produced by neurons or astrocytes. This evidence supports a strong increase in neurotransmitter release through the enhancement of free calcium by 4-aminopyridine, which causes the mouse neuroblastoma N2A cells to produce epileptic spikes and calcium oscillations.
Kim, Suhwan; Jung, Unsang; Baek, Juyoung; Lee, Sangwon; Jung, Woonggyu; Kim, Jeehyun; Kang, Shinwon
2013-01-01
Recently, mouse neuroblastoma cells have been considered as an attractive model for the study of human neurological and prion diseases, and they have been intensively used as a model system in different areas. For example, the differentiation of neuro2a (N2A) cells, receptor-mediated ion current, and glutamate-induced physiological responses have been actively investigated with these cells. These mouse neuroblastoma N2A cells are of interest because they grow faster than other cells of neural origin and have a number of other advantages. The calcium oscillations and neural spikes of mouse neuroblastoma N2A cells in epileptic conditions are evaluated. Based on our observations of neural spikes in these cells with our proposed imaging modality, we reported that they can be an important model in epileptic activity studies. We concluded that mouse neuroblastoma N2A cells produce epileptic spikes in vitro in the same way as those produced by neurons or astrocytes. This evidence suggests that increased levels of neurotransmitter release due to the enhancement of free calcium from 4-aminopyridine cause the mouse neuroblastoma N2A cells to produce epileptic spikes and calcium oscillations.
Efficient spiking neural network model of pattern motion selectivity in visual cortex.
Beyeler, Michael; Richert, Micah; Dutt, Nikil D; Krichmar, Jeffrey L
2014-07-01
Simulating large-scale models of biological motion perception is challenging, due to the required memory to store the network structure and the computational power needed to quickly solve the neuronal dynamics. A low-cost yet high-performance approach to simulating large-scale neural network models in real-time is to leverage the parallel processing capability of graphics processing units (GPUs). Based on this approach, we present a two-stage model of visual area MT that we believe to be the first large-scale spiking network to demonstrate pattern direction selectivity. In this model, component-direction-selective (CDS) cells in MT linearly combine inputs from V1 cells that have spatiotemporal receptive fields according to the motion energy model of Simoncelli and Heeger. Pattern-direction-selective (PDS) cells in MT are constructed by pooling over MT CDS cells with a wide range of preferred directions. Responses of our model neurons are comparable to electrophysiological results for grating and plaid stimuli as well as speed tuning. The behavioral response of the network in a motion discrimination task is in agreement with psychophysical data. Moreover, our implementation outperforms a previous implementation of the motion energy model by orders of magnitude in terms of computational speed and memory usage. The full network, which comprises 153,216 neurons and approximately 40 million synapses, processes 20 frames per second of a 40 × 40 input video in real-time using a single off-the-shelf GPU. To promote the use of this algorithm among neuroscientists and computer vision researchers, the source code for the simulator, the network, and analysis scripts are publicly available.
Directory of Open Access Journals (Sweden)
Jason eJerome
2011-08-01
Full Text Available Neurons in the mammalian neocortex receive inputs from and communicate back to thousands of other neurons, creating complex spatiotemporal activity patterns. The experimental investigation of these parallel dynamic interactions has been limited due to the technical challenges of monitoring or manipulating neuronal activity at that level of complexity. Here we describe a new massively parallel photostimulation system that can be used to control action potential firing in in vitro brain slices with high spatial and temporal resolution while performing extracellular or intracellular electrophysiological measurements. The system uses Digital Light Processing (DLP) technology to generate 2-dimensional (2D) stimulus patterns with >780,000 independently controlled photostimulation sites that operate at high spatial (5.4 µm) and temporal (>13 kHz) resolution. Light is projected through the quartz-glass bottom of the perfusion chamber, providing access to a large area (2.76 × 2.07 mm²) of the slice preparation. This system has the unique capability to induce temporally precise action potential firing in large groups of neurons distributed over a wide area covering several cortical columns. Parallel photostimulation opens up new opportunities for the in vitro experimental investigation of spatiotemporal neuronal interactions at a broad range of anatomical scales.
Shiau, LieJune; Schwalger, Tilo; Lindner, Benjamin
2015-06-01
We study the spike statistics of an adaptive exponential integrate-and-fire neuron stimulated by white Gaussian current noise. We derive analytical approximations for the coefficient of variation and the serial correlation coefficient of the interspike interval assuming that the neuron operates in the mean-driven tonic firing regime and that the stochastic input is weak. Our result for the serial correlation coefficient has the form of a geometric sequence and is confirmed by the comparison to numerical simulations. The theory predicts various patterns of interval correlations (positive or negative at lag one, monotonically decreasing or oscillating) depending on the strength of the spike-triggered and subthreshold components of the adaptation current. In particular, for pure subthreshold adaptation we find strong positive ISI correlations that are usually ascribed to positive correlations in the input current. Our results i) provide an alternative explanation for interspike-interval correlations observed in vivo, ii) may be useful in fitting point neuron models to experimental data, and iii) may be instrumental in exploring the role of adaptation currents for signal detection and signal transmission in single neurons.
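A small simulation in this spirit, with illustrative dimensionless parameters (not those of the paper): an adaptive exponential integrate-and-fire neuron driven by a constant current plus weak white noise, in the mean-driven regime with a pure spike-triggered adaptation current, from which the ISI coefficient of variation and lag-1 serial correlation coefficient are estimated.

```python
import numpy as np

rng = np.random.default_rng(3)

# AdEx parameters (dimensionless, illustrative): mean-driven, weak noise.
C, gL, EL, DT, VT = 1.0, 1.0, -65.0, 2.0, -50.0
tau_w, a, b = 20.0, 0.0, 0.5      # pure spike-triggered adaptation (a = 0)
I, D, dt = 20.0, 0.1, 0.01
Vr, Vpeak = -58.0, -30.0

V, w, spikes = EL, 0.0, []
for step in range(int(1000 / dt)):
    dV = (-gL * (V - EL) + gL * DT * np.exp((V - VT) / DT) - w + I) / C
    V += dV * dt + np.sqrt(2.0 * D * dt) * rng.standard_normal()
    w += (a * (V - EL) - w) / tau_w * dt
    if V >= Vpeak:                 # spike: reset and increment adaptation
        V, w = Vr, w + b
        spikes.append(step * dt)

isi = np.diff(spikes)
cv = isi.std() / isi.mean()                   # coefficient of variation
rho1 = np.corrcoef(isi[:-1], isi[1:])[0, 1]   # lag-1 serial correlation
```

Sweeping `a` (subthreshold) against `b` (spike-triggered) in such a script reproduces the qualitative pattern families the analysis predicts.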
Franosch, Jan-Moritz P; Urban, Sebastian; van Hemmen, J Leo
2013-12-01
How can an animal learn from experience? How can it train sensors, such as the auditory or tactile system, based on other sensory input such as the visual system? Supervised spike-timing-dependent plasticity (supervised STDP) is a possible answer. Supervised STDP trains one modality using input from another one as "supervisor." Quite complex time-dependent relationships between the senses can be learned. Here we prove that under very general conditions, supervised STDP converges to a stable configuration of synaptic weights leading to a reconstruction of primary sensory input.
Directory of Open Access Journals (Sweden)
André eCyr
2014-07-01
Full Text Available We demonstrate the operant conditioning (OC) learning process within a basic bio-inspired robot controller paradigm, using an artificial spiking neural network (ASNN) with minimal component count as the artificial brain. In biological agents, OC results in behavioral changes that are learned from the consequences of previous actions, using progressive prediction adjustment triggered by reinforcers. In a robotics context, virtual and physical robots may benefit from a similar learning skill when facing unknown environments with no supervision. In this work, we demonstrate that a simple ASNN can efficiently realise many OC scenarios. The elementary learning kernel that we describe relies on a few critical neurons, synaptic links, and the integration of habituation and spike-timing-dependent plasticity (STDP) as learning rules. Using four tasks of incremental complexity, our experimental results show that such a minimal neural component set may be sufficient to implement many OC procedures. Hence, with the described bio-inspired module, OC can be implemented in a wide range of robot controllers, including those with limited computational resources.
Thalamic neuron models encode stimulus information by burst-size modulation
Directory of Open Access Journals (Sweden)
Daniel Henry Elijah
2015-09-01
Full Text Available Thalamic neurons have been long assumed to fire in tonic mode during perceptive states, and in burst mode during sleep and unconsciousness. However, recent evidence suggests that bursts may also be relevant in the encoding of sensory information. Here we explore the neural code of such thalamic bursts. In order to assess whether the burst code is generic or whether it depends on the detailed properties of each bursting neuron, we analyzed two neuron models incorporating different levels of biological detail. One of the models contained no information of the biophysical processes entailed in spike generation, and described neuron activity at a phenomenological level. The second model represented the evolution of the individual ionic conductances involved in spiking and bursting, and required a large number of parameters. We analyzed the models' input selectivity using reverse correlation methods and information theory. We found that n-spike bursts from both models transmit information by modulating their spike count in response to changes to instantaneous input features, such as slope, phase, amplitude, etc. The stimulus feature that is most efficiently encoded by bursts, however, need not coincide with one of such classical features. We therefore searched for the optimal feature among all those that could be expressed as a linear transformation of the time-dependent input current. We found that bursting neurons transmitted 6 times more information about such more general features. The relevant events in the stimulus were located in a time window spanning ~100 ms before and ~20 ms after burst onset. Most importantly, the neural code employed by the simple and the biologically realistic models was largely the same, implying that the simple thalamic neuron model contains the essential ingredients that account for the computational properties of the thalamic burst code. Thus, our results suggest the n-spike burst code is a general property of thalamic neurons.
Thalamic neuron models encode stimulus information by burst-size modulation.
Elijah, Daniel H; Samengo, Inés; Montemurro, Marcelo A
2015-01-01
Thalamic neurons have been long assumed to fire in tonic mode during perceptive states, and in burst mode during sleep and unconsciousness. However, recent evidence suggests that bursts may also be relevant in the encoding of sensory information. Here, we explore the neural code of such thalamic bursts. In order to assess whether the burst code is generic or whether it depends on the detailed properties of each bursting neuron, we analyzed two neuron models incorporating different levels of biological detail. One of the models contained no information of the biophysical processes entailed in spike generation, and described neuron activity at a phenomenological level. The second model represented the evolution of the individual ionic conductances involved in spiking and bursting, and required a large number of parameters. We analyzed the models' input selectivity using reverse correlation methods and information theory. We found that n-spike bursts from both models transmit information by modulating their spike count in response to changes to instantaneous input features, such as slope, phase, amplitude, etc. The stimulus feature that is most efficiently encoded by bursts, however, need not coincide with one of such classical features. We therefore searched for the optimal feature among all those that could be expressed as a linear transformation of the time-dependent input current. We found that bursting neurons transmitted 6 times more information about such more general features. The relevant events in the stimulus were located in a time window spanning ~100 ms before and ~20 ms after burst onset. Most importantly, the neural code employed by the simple and the biologically realistic models was largely the same, implying that the simple thalamic neuron model contains the essential ingredients that account for the computational properties of the thalamic burst code. Thus, our results suggest the n-spike burst code is a general property of thalamic neurons.
Robust emergence of small-world structure in networks of spiking neurons.
Kwok, Hoi Fei; Jurica, Peter; Raffone, Antonino; van Leeuwen, Cees
2007-03-01
Spontaneous activity in biological neural networks shows patterns of dynamic synchronization. We propose that these patterns support the formation of a small-world structure: network connectivity optimal for distributed information processing. We present numerical simulations with connected Hindmarsh-Rose neurons in which, starting from random connection distributions, small-world networks evolve as a result of applying an adaptive rewiring rule. The rule connects pairs of neurons that tend to fire in synchrony, and disconnects ones that fail to synchronize. Repeated application of the rule leads to small-world structures. This mechanism is robustly observed for bursting and irregular firing regimes.
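The adaptive rewiring rule can be sketched in a few lines. Since simulating Hindmarsh-Rose dynamics is beyond a short example, pairwise synchrony is replaced here by a toy proxy (similarity of assumed intrinsic frequencies); the rule itself (disconnect the least-synchronized connected pair, connect the most-synchronized unconnected pair) follows the abstract.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 60

# Toy stand-in for the paper's spike-synchrony measure: neurons with
# similar intrinsic frequencies are taken to synchronize more readily.
freq = rng.uniform(0.0, 1.0, n)
sync = 1.0 - np.abs(freq[:, None] - freq[None, :])

# Random initial connectivity (undirected, ~10% density).
A = np.triu((rng.uniform(size=(n, n)) < 0.1).astype(float), 1)
A = A + A.T

def mean_edge_sync(A):
    e = np.argwhere(np.triu(A, 1) > 0)
    return sync[e[:, 0], e[:, 1]].mean()

m0, s0 = int(A.sum()) // 2, mean_edge_sync(A)

# Adaptive rewiring: repeatedly swap the worst edge for the best non-edge.
for _ in range(400):
    edges = np.argwhere(np.triu(A, 1) > 0)
    holes = np.argwhere((np.triu(np.ones_like(A), 1) > 0) & (A == 0))
    i, j = edges[np.argmin(sync[edges[:, 0], edges[:, 1]])]
    k, l = holes[np.argmax(sync[holes[:, 0], holes[:, 1]])]
    A[i, j] = A[j, i] = 0.0
    A[k, l] = A[l, k] = 1.0

s1 = mean_edge_sync(A)
```

The rewiring preserves the edge count while raising the mean synchrony over edges; in the full model this drives clustering up while path lengths stay short, yielding small-world structure.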
Multiplicative multifractal modeling and discrimination of human neuronal activity
International Nuclear Information System (INIS)
Zheng Yi; Gao Jianbo; Sanchez, Justin C.; Principe, Jose C.; Okun, Michael S.
2005-01-01
Understanding neuronal firing patterns is one of the most important problems in theoretical neuroscience. It is also very important for clinical neurosurgery. In this Letter, we introduce a computational procedure to examine whether neuronal firing recordings could be characterized by cascade multiplicative multifractals. By analyzing raw recording data as well as generated spike train data from 3 patients collected in two brain areas, the globus pallidus externa (GPe) and the globus pallidus interna (GPi), we show that the neural firings are consistent with a multifractal process over a certain time scale range (t_1, t_2), where t_1 is argued to be not smaller than the mean inter-spike interval of neuronal firings, while t_2 may be related to the time that neuronal signals propagate in the major neural branching structures pertinent to GPi and GPe. The generalized dimension spectrum D_q effectively differentiates the two brain areas, both intra- and inter-patient. For distinguishing between GPe and GPi, it is further shown that the cascade model is more effective than the methods recently examined by Schiff et al. as well as the Fano factor analysis. Therefore, the methodology may be useful in developing computer-aided tools to help clinicians perform precision neurosurgery in the operating room.
Phase transitions and self-organized criticality in networks of stochastic spiking neurons.
Brochini, Ludmila; de Andrade Costa, Ariadne; Abadi, Miguel; Roque, Antônio C; Stolfi, Jorge; Kinouchi, Osame
2016-11-07
Phase transitions and critical behavior are crucial issues both in theoretical and experimental neuroscience. We report analytic and computational results about phase transitions and self-organized criticality (SOC) in networks with general stochastic neurons. The stochastic neuron has a firing probability given by a smooth monotonic function Φ(V) of the membrane potential V, rather than a sharp firing threshold. We find that such networks can operate in several dynamic regimes (phases) depending on the average synaptic weight and the shape of the firing function Φ. In particular, we encounter both continuous and discontinuous phase transitions to absorbing states. At the continuous transition critical boundary, neuronal avalanches occur whose distributions of size and duration are given by power laws, as observed in biological neural networks. We also propose and test a new mechanism to produce SOC: the use of dynamic neuronal gains, a form of short-term plasticity probably located at the axon initial segment (AIS), instead of depressing synapses at the dendrites (as previously studied in the literature). The new self-organization mechanism produces a slightly supercritical state, which we call SOSC, in accord with some intuitions of Alan Turing.
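A stripped-down version of such a network of stochastic neurons exhibits the absorbing-state transition. The firing function here is a simple saturating-linear Φ(V) = min(1, V) and the update scheme is a mean-field simplification with full leakage, both assumptions made for brevity rather than the paper's exact model; below the critical average weight, activity dies out, while above it a self-sustained active phase appears.

```python
import numpy as np

rng = np.random.default_rng(5)

def simulate(W, n=500, steps=300):
    """Discrete-time stochastic neurons with firing probability
    Phi(V) = min(1, V): fired units reset to zero, the rest receive
    the mean-field synaptic input W * (fraction fired)."""
    V = rng.uniform(0.0, 1.0, n)
    rho = 0.0
    for _ in range(steps):
        fired = rng.uniform(size=n) < np.clip(V, 0.0, 1.0)
        rho = fired.mean()                   # fraction of firing neurons
        V = np.where(fired, 0.0, W * rho)    # reset or integrate input
    return rho

low, high = simulate(0.5), simulate(2.0)     # subcritical vs supercritical
```

For this Φ the mean-field map is ρ(t+1) = (1 − ρ(t))·min(1, W·ρ(t)), whose nonzero fixed point ρ* = 1 − 1/W appears only for W > 1, the absorbing-state transition the abstract describes.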
Introduction to spiking neural networks: Information processing, learning and applications.
Ponulak, Filip; Kasinski, Andrzej
2011-01-01
The concept that neural information is encoded in the firing rate of neurons has been the dominant paradigm in neurobiology for many years. This paradigm has also been adopted by the theory of artificial neural networks. Recent physiological experiments demonstrate, however, that in many parts of the nervous system, neural code is founded on the timing of individual action potentials. This finding has given rise to the emergence of a new class of neural models, called spiking neural networks. In this paper we summarize basic properties of spiking neurons and spiking networks. Our focus is, specifically, on models of spike-based information coding, synaptic plasticity and learning. We also survey real-life applications of spiking models. The paper is meant to be an introduction to spiking neural networks for scientists from various disciplines interested in spike-based neural processing.
Stochastic Variational Learning in Recurrent Spiking Networks
Directory of Open Access Journals (Sweden)
Danilo eJimenez Rezende
2014-04-01
Full Text Available The ability to learn and perform statistical inference with biologically plausible recurrent networks of spiking neurons is an important step towards understanding perception and reasoning. Here we derive and investigate a new learning rule for recurrent spiking networks with hidden neurons, combining principles from variational learning and reinforcement learning. Our network defines a generative model over spike train histories and the derived learning rule has the form of a local Spike Timing Dependent Plasticity rule modulated by global factors (neuromodulators) conveying information about "novelty" on a statistically rigorous ground. Simulations show that our model is able to learn both stationary and non-stationary patterns of spike trains. We also propose one experiment that could potentially be performed with animals in order to test the dynamics of the predicted novelty signal.
Stochastic variational learning in recurrent spiking networks.
Jimenez Rezende, Danilo; Gerstner, Wulfram
2014-01-01
The ability to learn and perform statistical inference with biologically plausible recurrent networks of spiking neurons is an important step toward understanding perception and reasoning. Here we derive and investigate a new learning rule for recurrent spiking networks with hidden neurons, combining principles from variational learning and reinforcement learning. Our network defines a generative model over spike train histories and the derived learning rule has the form of a local Spike Timing Dependent Plasticity rule modulated by global factors (neuromodulators) conveying information about "novelty" on a statistically rigorous ground. Simulations show that our model is able to learn both stationary and non-stationary patterns of spike trains. We also propose one experiment that could potentially be performed with animals in order to test the dynamics of the predicted novelty signal.
Wenger Combremont, Anne-Laure; Bayer, Laurence; Dupré, Anouk; Mühlethaler, Michel; Serafin, Mauro
2016-08-01
Fast spiking (FS) GABAergic neurons are thought to be involved in the generation of high-frequency cortical rhythms during the waking state. We previously showed that cortical layer 6b (L6b) was a specific target for the wake-promoting transmitter, hypocretin/orexin (hcrt/orx). Here, we have investigated whether L6b FS cells were sensitive to hcrt/orx and other transmitters associated with cortical activation. Recordings were thus made from L6b FS cells in either wild-type mice or in transgenic mice in which GFP-positive GABAergic cells are parvalbumin positive. Whereas in a control condition hcrt/orx induced a strong increase in the frequency, but not the amplitude, of spontaneous synaptic currents, in the presence of TTX it had no effect at all on miniature synaptic currents. The hcrt/orx effect was thus presynaptic, although mediated not by an action on glutamatergic terminals but rather by an action on neighboring cells. In contrast, noradrenaline and acetylcholine depolarized and excited these cells through a direct postsynaptic action. Neurotensin, which is colocalized in hcrt/orx neurons, also depolarized and excited these cells, but the effect was indirect. Morphologically, these cells exhibited basket-like features. These results suggest that hcrt/orx, noradrenaline, acetylcholine, and neurotensin could contribute to high-frequency cortical activity through an action on L6b GABAergic FS cells. © The Author 2016. Published by Oxford University Press.
Temporal Information Processing and Stability Analysis of the MHSN Neuron Model in DDF
Directory of Open Access Journals (Sweden)
Saket Kumar Choudhary
2016-12-01
Full Text Available Implementing a neuron-like information-processing structure at the hardware level is a pressing research problem. In this article, we analyze the modified hybrid spiking neuron (MHSN) model in the distributed delay framework (DDF) from a hardware-implementation point of view. We investigate its temporal information processing capability in terms of the inter-spike-interval (ISI) distribution. We also perform a stability analysis of the MHSN model, in which we compute the nullclines, the steady-state solution, and the eigenvalues corresponding to the MHSN model. During phase-plane analysis, we observe that the MHSN model generates limit-cycle oscillations, an important phenomenon in many biological processes. The qualitative behavior of these limit cycles does not change with variation in the applied input stimulus; however, the delay affects the spiking activity and alters the duration of the cycle.
Spike-based population coding and working memory.
Directory of Open Access Journals (Sweden)
Martin Boerlin
2011-02-01
Full Text Available Compelling behavioral evidence suggests that humans can make optimal decisions despite the uncertainty inherent in perceptual or motor tasks. A key question in neuroscience is how populations of spiking neurons can implement such probabilistic computations. In this article, we develop a comprehensive framework for optimal, spike-based sensory integration and working memory in a dynamic environment. We propose that probability distributions are inferred spike-per-spike in recurrently connected networks of integrate-and-fire neurons. As a result, these networks can combine sensory cues optimally, track the state of a time-varying stimulus and memorize accumulated evidence over periods much longer than the time constant of single neurons. Importantly, we propose that population responses and persistent working memory states represent entire probability distributions and not only single stimulus values. These memories are reflected by sustained, asynchronous patterns of activity which make relevant information available to downstream neurons within their short time window of integration. Model neurons act as predictive encoders, only firing spikes which account for new information that has not yet been signaled. Thus, spike times signal deterministically a prediction error, contrary to rate codes in which spike times are considered to be random samples of an underlying firing rate. As a consequence of this coding scheme, a multitude of spike patterns can reliably encode the same information. This results in weakly correlated, Poisson-like spike trains that are sensitive to initial conditions but robust to even high levels of external neural noise. This spike train variability reproduces the one observed in cortical sensory spike trains, but cannot be equated to noise. On the contrary, it is a consequence of optimal spike-based inference. In contrast, we show that rate-based models perform poorly when implemented with stochastically spiking neurons.
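The predictive-encoder idea described above can be caricatured in a few lines: a unit emits a spike only when doing so reduces the error between the signal and the decoded estimate, so spike timing is deterministic given the input. This is a hypothetical one-neuron sketch with made-up parameters (`tau`, `w`), not the article's recurrent integrate-and-fire network.

```python
def predictive_encode(signal, dt=0.01, tau=1.0, w=0.1):
    """Greedy predictive coding: spike only if the spike cuts the decoding error.

    signal: sequence of samples; tau: leak time constant of the decoded
    estimate; w: fixed contribution of one spike to the estimate.
    """
    x_hat, spikes = 0.0, []
    for i, x in enumerate(signal):
        x_hat += -x_hat * dt / tau                 # decoded estimate leaks
        if abs(x - (x_hat + w)) < abs(x - x_hat):  # spiking would reduce error
            x_hat += w
            spikes.append(i * dt)                  # deterministic spike time
    return spikes, x_hat

# Fed a constant signal, the unit fires a burst until the estimate catches
# up, then only sparse spikes that compensate the leak.
spikes, x_hat = predictive_encode([1.0] * 1000)
```

This makes the abstract's point concrete: the spike train is fully determined by the prediction error, yet small perturbations of the initial estimate shift individual spike times while the decoded estimate stays near the signal.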
Extremely Scalable Spiking Neuronal Network Simulation Code: From Laptops to Exascale Computers
Directory of Open Access Journals (Sweden)
Jakob Jordan
2018-02-01
Full Text Available State-of-the-art software tools for neuronal network simulations scale to the largest computing systems available today and enable investigations of large-scale networks of up to 10 % of the human cortex at a resolution of individual neurons and synapses. Due to an upper limit on the number of incoming connections of a single neuron, network connectivity becomes extremely sparse at this scale. To manage computational costs, simulation software ultimately targeting the brain scale needs to fully exploit this sparsity. Here we present a two-tier connection infrastructure and a framework for directed communication among compute nodes accounting for the sparsity of brain-scale networks. We demonstrate the feasibility of this approach by implementing the technology in the NEST simulation code and we investigate its performance in different scaling scenarios of typical network simulations. Our results show that the new data structures and communication scheme prepare the simulation kernel for post-petascale high-performance computing facilities without sacrificing performance in smaller systems.
Extremely Scalable Spiking Neuronal Network Simulation Code: From Laptops to Exascale Computers.
Jordan, Jakob; Ippen, Tammo; Helias, Moritz; Kitayama, Itaru; Sato, Mitsuhisa; Igarashi, Jun; Diesmann, Markus; Kunkel, Susanne
2018-01-01
State-of-the-art software tools for neuronal network simulations scale to the largest computing systems available today and enable investigations of large-scale networks of up to 10 % of the human cortex at a resolution of individual neurons and synapses. Due to an upper limit on the number of incoming connections of a single neuron, network connectivity becomes extremely sparse at this scale. To manage computational costs, simulation software ultimately targeting the brain scale needs to fully exploit this sparsity. Here we present a two-tier connection infrastructure and a framework for directed communication among compute nodes accounting for the sparsity of brain-scale networks. We demonstrate the feasibility of this approach by implementing the technology in the NEST simulation code and we investigate its performance in different scaling scenarios of typical network simulations. Our results show that the new data structures and communication scheme prepare the simulation kernel for post-petascale high-performance computing facilities without sacrificing performance in smaller systems.
Auditory information coding by modeled cochlear nucleus neurons.
Wang, Huan; Isik, Michael; Borst, Alexander; Hemmert, Werner
2011-06-01
In this paper we use information theory to quantify the information in the output spike trains of modeled cochlear nucleus globular bushy cells (GBCs). GBCs are part of the sound localization pathway. They are known for their precise temporal processing, and they code amplitude modulations with high fidelity. Here we investigated the information transmission for a natural sound, a recorded vowel. We conclude that the maximum information transmission rate for a single neuron was close to 1,050 bits/s, which corresponds to approximately 5.8 bits per spike. For quasi-periodic signals like voiced speech, the transmitted information saturated as word duration increased. In general, approximately 80% of the available information from the spike trains was transmitted within about 20 ms. Transmitted information for speech signals concentrated around formant frequency regions. The efficiency of neural coding was above 60% up to the highest temporal resolution we investigated (20 μs). The increase in transmitted information up to that precision indicates that these neurons code information with extremely high fidelity, as required for sound localization. On the other hand, only 20% of the information was captured when the temporal resolution was reduced to 4 ms. As the temporal resolution of most speech recognition systems is limited to less than 10 ms, this massive information loss might be one of the reasons for the lack of noise robustness of these systems.
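The bits-per-spike figure quoted above is simply the information rate divided by the firing rate. A hedged illustration of this bookkeeping, using a naive plug-in entropy estimate over binary spike words on a surrogate Poisson train (the binning, word length, and estimator here are my own choices, far coarser than the 20 μs resolution of the study, and the plug-in estimator is biased for short recordings):

```python
import numpy as np
from collections import Counter

def entropy_rate(spikes, word_len, dt):
    """Naive plug-in entropy of binary spike words, in bits per second."""
    n = len(spikes) - word_len + 1
    counts = Counter(tuple(spikes[i:i + word_len]) for i in range(n))
    p = np.array(list(counts.values()), dtype=float)
    p /= p.sum()
    h_word = -(p * np.log2(p)).sum()   # bits per word
    return h_word / (word_len * dt)    # bits per second

rng = np.random.default_rng(0)
dt = 0.001                                        # 1 ms bins (illustrative)
spikes = (rng.random(20000) < 0.18).astype(int)   # ~180 spikes/s Poisson surrogate
rate_bits = entropy_rate(spikes.tolist(), word_len=8, dt=dt)
firing_rate = spikes.mean() / dt                  # spikes per second
bits_per_spike = rate_bits / firing_rate          # rough bits available per spike
```

With the paper's numbers, 1,050 bits/s at roughly 180 spikes/s gives the quoted ~5.8 bits per spike.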
Spiking patterns of a hippocampus model in electric fields
International Nuclear Information System (INIS)
Men Cong; Wang Jiang; Qin Ying-Mei; Wei Xi-Le; Deng Bin; Che Yan-Qiu
2011-01-01
We develop a model of CA3 neurons embedded in a resistive array to mimic the effects of electric fields from a new perspective. Effects of DC and sinusoidal electric fields on firing patterns in CA3 neurons are investigated in this study. The firing patterns can switch from quiescence to bursting, or from bursting to fast periodic firing, as the DC electric field intensity increases. It is also found that the firing activities are sensitive to the frequency and amplitude of the sinusoidal electric field. Different phase-locking states and chaotic firing regions are observed in the parameter space of frequency and amplitude. These findings are qualitatively in accordance with the results of relevant experimental and numerical studies. This implies that external or endogenous electric fields can modulate the neural code in the brain. Furthermore, it may help in developing electric-field-based control strategies for neural diseases such as epilepsy. (interdisciplinary physics and related areas of science and technology)
The local field potential reflects surplus spike synchrony
DEFF Research Database (Denmark)
Denker, Michael; Roux, Sébastien; Lindén, Henrik
2011-01-01
While oscillations of the local field potential (LFP) are commonly attributed to the synchronization of neuronal firing rate on the same time scale, their relationship to coincident spiking in the millisecond range is unknown. Here, we present experimental evidence to reconcile the notions of synchrony at the level of spiking and at the mesoscopic scale. We demonstrate that only in time intervals of significant spike synchrony that cannot be explained on the basis of firing rates, coincident spikes are better phase locked to the LFP than predicted by the locking of the individual spikes. This effect is enhanced in periods of large LFP amplitudes. A quantitative model explains the LFP dynamics by the orchestrated spiking activity in neuronal groups that contribute the observed surplus synchrony. From the correlation analysis, we infer that neurons participate in different constellations …
Matsubara, Takashi
2017-01-01
Precise spike timing is considered to play a fundamental role in communications and signal processing in biological neural networks. Understanding the mechanism of spike timing adjustment would deepen our understanding of biological systems and enable advanced engineering applications such as efficient computational architectures. However, the biological mechanisms that adjust and maintain spike timing remain unclear. Existing algorithms adopt a supervised approach, which adjusts the axonal conduction delay and synaptic efficacy until the spike timings approximate the desired timings. This study proposes a spike timing-dependent learning model that adjusts the axonal conduction delay and synaptic efficacy in both unsupervised and supervised manners. The proposed learning algorithm approximates the Expectation-Maximization algorithm, and classifies the input data encoded into spatio-temporal spike patterns. Even in the supervised classification, the algorithm requires no external spikes indicating the desired spike timings unlike existing algorithms. Furthermore, because the algorithm is consistent with biological models and hypotheses found in existing biological studies, it could capture the mechanism underlying biological delay learning.
Synchronizations in small-world networks of spiking neurons: Diffusive versus sigmoid couplings
International Nuclear Information System (INIS)
Hasegawa, Hideo
2005-01-01
By using a semianalytical dynamical mean-field approximation previously proposed by the author [H. Hasegawa, Phys. Rev. E 70, 066107 (2004)], we have studied the synchronization of stochastic, small-world (SW) networks of FitzHugh-Nagumo neurons with diffusive couplings. The differences and similarities between the results for diffusive and sigmoid couplings are discussed. It is shown that when weak heterogeneity is introduced into regular networks, the synchronization may be slightly increased for diffusive couplings, while it is decreased for sigmoid couplings. This increase in synchronization for diffusive couplings is shown to be due to their local, negative-feedback contributions, not to the short average distance in SW networks. Synchronization of SW networks thus depends not only on their structure but also on the type of coupling.
Brink, S; Nease, S; Hasler, P
2013-09-01
Results are presented from several spiking network experiments performed on a novel neuromorphic integrated circuit. The networks are discussed in terms of their computational significance, which includes applications such as arbitrary spatiotemporal pattern generation and recognition, winner-take-all competition, stable generation of rhythmic outputs, and volatile memory. Analogies to the behavior of real biological neural systems are also noted. The alternatives for implementing the same computations are discussed and compared from a computational efficiency standpoint, with the conclusion that implementing neural networks on neuromorphic hardware is significantly more power efficient than numerical integration of model equations on traditional digital hardware. Copyright © 2013 Elsevier Ltd. All rights reserved.
Insel, Nathan; Barnes, Carol A.
2015-01-01
The medial prefrontal cortex is thought to be important for guiding behavior according to an animal's expectations. Efforts to decode the region have focused not only on the question of what information it computes, but also how distinct circuit components become engaged during behavior. We find that the activity of regular-firing, putative projection neurons contains rich information about behavioral context and firing fields cluster around reward sites, while activity among putative inhibitory and fast-spiking neurons is most associated with movement and accompanying sensory stimulation. These dissociations were observed even between adjacent neurons with apparently reciprocal, inhibitory–excitatory connections. A smaller population of projection neurons with burst-firing patterns did not show clustered firing fields around rewards; these neurons, although heterogeneous, were generally less selective for behavioral context than regular-firing cells. The data suggest a network that tracks an animal's behavioral situation while, at the same time, regulating excitation levels to emphasize high valued positions. In this scenario, the function of fast-spiking inhibitory neurons is to constrain network output relative to incoming sensory flow. This scheme could serve as a bridge between abstract sensorimotor information and single-dimensional codes for value, providing a neural framework to generate expectations from behavioral state.
Simple cortical and thalamic neuron models for digital arithmetic circuit implementation
Directory of Open Access Journals (Sweden)
Takuya Nanami
2016-05-01
Full Text Available The trade-off between reproducibility of neuronal activities and computational efficiency is one of the crucial subjects in computational neuroscience and neuromorphic engineering. A wide variety of neuronal models have been studied from different viewpoints. The digital spiking silicon neuron (DSSN) model is a qualitative model that focuses on efficient implementation in digital arithmetic circuits. We expanded the DSSN model and found appropriate parameter sets with which it reproduces the dynamical behaviors of the ionic-conductance models of four classes of cortical and thalamic neurons. We first developed a 4-variable model by reducing the number of variables in the ionic-conductance models and elucidated its mathematical structures using bifurcation analysis. Then, expanded DSSN models were constructed that reproduce these mathematical structures and capture the characteristic behavior of each neuron class. We confirmed that statistics of the neuronal spike sequences are similar in the DSSN and the ionic-conductance models. The computational cost of the DSSN model is larger than that of recent sophisticated Integrate-and-Fire-based models, but smaller than that of the ionic-conductance models. This model is intended to provide another meeting point for the above trade-off, satisfying the demand for large-scale neuronal network simulation with closer-to-biology models.
Lagorce, Xavier; Benosman, Ryad
2015-11-01
There has been significant research over the past two decades in developing new platforms for spiking neural computation. Current neural computers are primarily developed to mimic biology. They use neural networks, which can be trained to perform specific tasks to mainly solve pattern recognition problems. These machines can do more than simulate biology; they allow us to rethink our current paradigm of computation. The ultimate goal is to develop brain-inspired general purpose computation architectures that can breach the current bottleneck introduced by the von Neumann architecture. This work proposes a new framework for such a machine. We show that the use of neuron-like units with precise timing representation, synaptic diversity, and temporal delays allows us to set a complete, scalable compact computation framework. The framework provides both linear and nonlinear operations, allowing us to represent and solve any function. We show usability in solving real use cases from simple differential equations to sets of nonlinear differential equations leading to chaotic attractors.
Noise and Synchronization Analysis of the Cold-Receptor Neuronal Network Model
Directory of Open Access Journals (Sweden)
Ying Du
2014-01-01
Full Text Available This paper analyzes the dynamics of a cold-receptor neural network model. First, it examines noise effects on neuronal stimulus in the model. From ISI plots, it is shown that there are considerable differences between purely deterministic and noisy simulations. The ISI-distance is used to quantify the noise effects on spike trains. It is found that spike trains in the model can be more strongly affected by noise at some temperatures than at others; moreover, spike-train variability grows as the noise intensity increases. The synchronization of neuronal networks with different connectivity patterns is also studied. It is shown that complete synchronization is harder to achieve for chaotic and high-period patterns than for single-spike and low-period patterns. The neuronal network exhibits various patterns of firing synchronization as key parameters such as the coupling strength are varied. Different types of firing synchronization are diagnosed with a correlation coefficient and the ISI-distance method. The simulations show that the synchronization status of neurons is related to the network connectivity patterns.
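The ISI-distance used here (due to Kreuz and colleagues) compares, at each sampling time, the interspike intervals the two trains are currently in: a ratio near one means locally similar firing. The following is a simplified sketch of that measure; the sampling grid and edge handling are my own simplifications, not the exact published algorithm.

```python
import numpy as np

def current_isi(spike_times, t_grid):
    """Interspike interval containing each grid time point (NaN outside the train)."""
    isis = np.empty_like(t_grid)
    for k, t in enumerate(t_grid):
        i = np.searchsorted(spike_times, t)
        if i == 0 or i == len(spike_times):
            isis[k] = np.nan          # before first / after last spike; ignored below
        else:
            isis[k] = spike_times[i] - spike_times[i - 1]
    return isis

def isi_distance(train_a, train_b, t_grid):
    """0 for identical trains; grows as the local firing rates diverge."""
    ia, ib = current_isi(train_a, t_grid), current_isi(train_b, t_grid)
    r = np.where(ia <= ib, ia / ib - 1.0, -(ib / ia - 1.0))
    return np.nanmean(np.abs(r))

t = np.linspace(0.5, 9.5, 1000)
a = np.arange(0.0, 10.0, 1.0)         # regular 1 Hz train
print(isi_distance(a, a, t))          # -> 0.0
```

Doubling the rate of one train (e.g. 0.5 s intervals against 1 s intervals) yields a distance of 0.5, matching the intuition that the measure reports relative ISI mismatch.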
One-dimensional map-based neuron model: A logistic modification
International Nuclear Information System (INIS)
Mesbah, Samineh; Moghtadaei, Motahareh; Hashemi Golpayegani, Mohammad Reza; Towhidkhah, Farzad
2014-01-01
A one-dimensional map is proposed for modeling some neuronal activities, including different spiking and bursting behaviors. The model is obtained by applying some modifications to the well-known Logistic map and is named the Modified and Confined Logistic (MCL) model. Map-based neuron models are known as phenomenological models and have recently been widely applied in modeling tasks due to their computational efficiency. Most discrete map-based models involve two variables representing the slow-fast prototype. There are also some one-dimensional maps that can replicate some neuronal activities. However, the four bifurcation parameters of the MCL model allow it to reproduce spiking behavior with control over the spike frequency, and to imitate chaotic and regular bursting responses concurrently. It is also shown that the proposed model has the potential to reproduce more realistic bursting activity by adding a second variable. Moreover, the MCL model is able to replicate a considerable number of the experimentally observed neuronal responses introduced in Izhikevich (2004) [23]. Some analytical and numerical analyses of the MCL model dynamics are presented to explain the emergence of complex dynamics from this one-dimensional map.
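For readers unfamiliar with map-based models, the flavor of such dynamics is visible in the plain Logistic map itself, x_{n+1} = r·x_n·(1−x_n). The MCL modifications are not reproduced here; this sketch only shows how a single bifurcation parameter moves the orbit between periodic ("spiking-like") and chaotic ("bursting-like") regimes.

```python
def logistic_orbit(r, x0=0.2, n_transient=500, n_keep=64):
    """Iterate the plain logistic map past its transient and return the orbit."""
    x = x0
    for _ in range(n_transient):          # discard the transient
        x = r * x * (1.0 - x)
    orbit = []
    for _ in range(n_keep):               # record the attractor
        x = r * x * (1.0 - x)
        orbit.append(round(x, 6))         # round so cycles collapse to few values
    return orbit

print(len(set(logistic_orbit(3.2))))      # -> 2   (stable period-2 cycle)
print(len(set(logistic_orbit(3.9))))      # many distinct values: chaotic regime
```

In map-based neuron models, a period-n attractor plays the role of regular spiking while chaotic bands mimic irregular bursting; the MCL model adds confinement and extra parameters on top of this basic mechanism.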
Spiking neural network for recognizing spatiotemporal sequences of spikes
International Nuclear Information System (INIS)
Jin, Dezhe Z.
2004-01-01
Sensory neurons in many brain areas spike with precise timing to stimuli with temporal structures, and encode temporally complex stimuli into spatiotemporal spikes. How downstream neurons read out such a neural code is an important unsolved problem. In this paper, we describe a decoding scheme using a spiking recurrent neural network. The network consists of excitatory neurons that form a synfire chain, and two globally inhibitory interneurons of different types that provide delayed feedforward and fast feedback inhibition, respectively. The network signals recognition of a specific spatiotemporal sequence when the last excitatory neuron down the synfire chain spikes, which happens if and only if that sequence was present in the input spike stream. The recognition scheme is invariant to variations in the intervals between input spikes within some range. The computation of the network can be mapped onto that of a finite state machine. Our network provides a simple way to decode spatiotemporal spikes with diverse types of neurons.
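The finite-state-machine reading of the recognizer can be sketched directly: the chain advances one state per correctly ordered input spike arriving within an interval tolerance, and anything else resets it. The names, tolerance window, and reset rule below are illustrative, not the paper's exact circuit.

```python
def recognize(spikes, target, t_min=0.5, t_max=2.0):
    """spikes: iterable of (time, neuron_id); target: expected neuron_id sequence.

    Returns True iff target occurs in order with every inter-spike interval
    inside [t_min, t_max] -- the analogue of the last chain neuron firing.
    """
    state, t_prev = 0, None
    for t, nid in sorted(spikes):
        if nid == target[state] and (t_prev is None or t_min <= t - t_prev <= t_max):
            state, t_prev = state + 1, t
            if state == len(target):
                return True               # last "neuron" in the chain fired
        else:
            state, t_prev = 0, None       # inhibition resets the chain
    return False

stream = [(0.0, "a"), (1.0, "b"), (2.0, "c")]
print(recognize(stream, ["a", "b", "c"]))   # -> True
print(recognize(stream, ["a", "c", "b"]))   # -> False
```

The interval bounds give the invariance to timing jitter mentioned in the abstract: any spacing between `t_min` and `t_max` advances the chain equally well.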
Directory of Open Access Journals (Sweden)
Attila Szücs
Full Text Available Neurons display a high degree of variability and diversity in the expression and regulation of their voltage-dependent ionic channels. Under a low level of synaptic background, a number of physiologically distinct cell types can be identified in most brain areas that display different responses to standard forms of intracellular current stimulation. Nevertheless, it is not well understood how biophysically different neurons process synaptic inputs in natural conditions, i.e., when experiencing intense synaptic bombardment in vivo. While distinct cell types might process synaptic inputs into different patterns of action potentials representing specific "motifs" of network activity, standard methods of electrophysiology are not well suited to resolve such questions. In the current paper, we performed dynamic clamp experiments with simulated synaptic inputs that were presented to three types of neurons in the juxtacapsular bed nucleus of stria terminalis (jcBNST) of the rat. Our analysis of the temporal structure of firing showed that the three types of jcBNST neurons did not produce qualitatively different spike responses under identical patterns of input. However, we observed consistent, cell-type-dependent variations in the fine structure of firing, at the level of single spikes. At the millisecond-resolution structure of firing, we found a high degree of diversity across the entire spectrum of neurons irrespective of their type. Additionally, we identified a new cell type with intrinsic oscillatory properties that produced rhythmic and regular firing under synaptic stimulation, distinguishing it from the previously described jcBNST cell types. Our findings suggest a sophisticated, cell-type-dependent regulation of the spike dynamics of neurons experiencing a complex synaptic background. The high degree of their dynamical diversity has implications for their cooperative dynamics and synchronization.
Generalized activity equations for spiking neural network dynamics
Directory of Open Access Journals (Sweden)
Michael A Buice
2013-11-01
Full Text Available Much progress has been made in uncovering the computational capabilities of spiking neural networks. However, spiking neurons will always be more expensive to simulate compared to rate neurons because of the inherent disparity in time scales - the spike duration time is much shorter than the inter-spike time, which is much shorter than any learning time scale. In numerical analysis, this is a classic stiff problem. Spiking neurons are also much more difficult to study analytically. One possible approach to making spiking networks more tractable is to augment mean field activity models with some information about spiking correlations. For example, such a generalized activity model could carry information about spiking rates and correlations between spikes self-consistently. Here, we will show how this can be accomplished by constructing a complete formal probabilistic description of the network and then expanding around a small parameter such as the inverse of the number of neurons in the network. The mean field theory of the system gives a rate-like description. The first order terms in the perturbation expansion keep track of covariances.
Surfing a spike wave down the ventral stream.
VanRullen, Rufin; Thorpe, Simon J
2002-10-01
Numerous theories of neural processing, often motivated by experimental observations, have explored the computational properties of neural codes based on the absolute or relative timing of spikes in spike trains. Spiking neuron models and theories, however, as well as their experimental counterparts, have generally been limited to the simulation or observation of isolated neurons, isolated spike trains, or reduced neural populations. Such theories would therefore seem inappropriate to capture the properties of a neural code relying on temporal spike patterns distributed across large neuronal populations. Here we report a range of computer simulations and theoretical considerations that were designed to explore the possibilities of one such code and its relevance for visual processing. In a unified framework where the relation between stimulus saliency and relative spike timing plays the central role, we describe how the ventral stream of the visual system could process natural input scenes and extract meaningful information, both rapidly and reliably. The first wave of spikes generated in the retina in response to a visual stimulation carries information explicitly in its spatio-temporal structure: the most salient information is represented by the first spikes over the population. This spike wave, propagating through a hierarchy of visual areas, is regenerated at each processing stage, where its temporal structure can be modified by (i) the selectivity of the cortical neurons, (ii) lateral interactions, and (iii) top-down attentional influences from higher order cortical areas. The resulting model could account for the remarkable efficiency and rapidity of processing observed in the primate visual system.
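The claim that "the most salient information is represented by the first spikes" amounts to a latency code. A toy version, with an assumed inverse mapping from saliency to latency (the model's actual transfer function is not reproduced here):

```python
def latency_code(saliencies, t_max=100.0):
    """Map unit saliencies to first-spike latencies: stronger drive fires earlier.

    The inverse mapping t = t_max / s is an assumption for illustration;
    units with zero saliency never fire.
    """
    return {unit: t_max / s for unit, s in saliencies.items() if s > 0}

saliencies = {"edge": 8.0, "texture": 2.0, "flat": 0.0}
spikes = latency_code(saliencies)
order = sorted(spikes, key=spikes.get)   # read-out by spike arrival time
print(order)                             # -> ['edge', 'texture']
```

Reading spikes in arrival order thus recovers a saliency ranking without ever measuring a firing rate, which is why a single propagating wave can carry usable information.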
Parametric model to estimate containment loads following an ex-vessel steam spike
International Nuclear Information System (INIS)
Lopez, R.; Hernandez, J.; Huerta, A.
1998-01-01
This paper describes the use of a relatively simple parametric model to estimate containment loads following an ex-vessel steam spike. The study was motivated because several PSAs have identified containment loads accompanying reactor vessel failures as a major contributor to early containment failure. The paper includes a detailed description of the simple but physically sound parametric model which was adopted to estimate containment loads following a steam spike into the reactor cavity. (author)
Electricity market price spike analysis by a hybrid data model and feature selection technique
International Nuclear Information System (INIS)
Amjady, Nima; Keynia, Farshid
2010-01-01
In a competitive electricity market, energy price forecasting is an important activity for both suppliers and consumers. For this reason, many techniques have been proposed to predict electricity market prices in recent years. However, the electricity price is a complex volatile signal containing many spikes. Most electricity price forecasting techniques focus on normal price prediction, whereas price spike forecasting is a different and more complex prediction process. Price spike forecasting has two main aspects: prediction of price spike occurrence and of its value. In this paper, a novel technique for price spike occurrence prediction is presented, composed of a new hybrid data model, a novel feature selection technique, and an efficient forecast engine. The hybrid data model includes both wavelet- and time-domain variables as well as calendar indicators, comprising a large candidate input set. The set is refined by the proposed feature selection technique, which evaluates both the relevancy and the redundancy of the candidate inputs. The forecast engine is a probabilistic neural network, which is fed by the selected candidate inputs of the feature selection technique and predicts price spike occurrence. The efficiency of the whole proposed method for price spike occurrence forecasting is evaluated by means of real data from the Queensland and PJM electricity markets. (author)
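The relevancy/redundancy idea behind the feature selection step can be sketched with a plug-in mutual-information filter: keep a candidate only if it is informative about spike occurrence and not redundant with features already chosen. The estimator, thresholds, and greedy loop below are illustrative assumptions; the paper's actual criterion is not reproduced here.

```python
import random
from collections import Counter
from math import log2

def mutual_info(a, b):
    """Plug-in mutual information between two discrete sequences, in bits."""
    n = len(a)
    pa, pb, pab = Counter(a), Counter(b), Counter(zip(a, b))
    return sum(c / n * log2((c / n) / ((pa[x] / n) * (pb[y] / n)))
               for (x, y), c in pab.items())

def select(features, label, relevancy=0.1, redundancy=0.5):
    """Greedy filter: relevant to the label, non-redundant with the chosen set."""
    chosen = []
    for name, f in features.items():
        if mutual_info(f, label) >= relevancy and all(
                mutual_info(f, features[c]) <= redundancy for c in chosen):
            chosen.append(name)
    return chosen

random.seed(0)
label = [random.randint(0, 1) for _ in range(2000)]   # spike / no-spike labels
features = {"copy": list(label),
            "noise": [random.randint(0, 1) for _ in range(2000)]}
print(select(features, label))                        # -> ['copy']
```

A perfectly informative candidate passes the relevancy bar while an unrelated one is dropped; in practice the plug-in estimator is biased upward for small samples, so thresholds must be tuned.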
Geminiani, Alice; Casellato, Claudia; Antonietti, Alberto; D'Angelo, Egidio; Pedrocchi, Alessandra
2018-06-01
The cerebellum plays a crucial role in sensorimotor control, and cerebellar disorders compromise adaptation and learning of motor responses. However, the link between alterations at the network level and cerebellar dysfunction is still unclear. In principle, this understanding would benefit from the development of an artificial system embedding the salient neuronal and plastic properties of the cerebellum and operating in closed loop. To this aim, we have exploited a realistic spiking computational model of the cerebellum to analyze the network correlates of cerebellar impairment. The model was modified to reproduce three different damages of the cerebellar cortex: (i) a loss of the main output neurons (Purkinje Cells), (ii) a lesion to the main cerebellar afferents (Mossy Fibers), and (iii) a damage to a major mechanism of synaptic plasticity (Long Term Depression). The modified network models were challenged with an Eye-Blink Classical Conditioning test, a standard learning paradigm used to evaluate cerebellar impairment, and the outcome was compared to reference results obtained in human or animal experiments. In all cases, the model reproduced the partial and delayed conditioning typical of the pathologies, indicating that intact cerebellar cortex functionality is required to accelerate learning by transferring acquired information to the cerebellar nuclei. Interestingly, depending on the type of lesion, the redistribution of synaptic plasticity and response timing varied greatly, generating specific adaptation patterns. Thus, the present work not only extends the generalization capabilities of the cerebellar spiking model to pathological cases, but also predicts how changes at the neuronal level are distributed across the network, making it usable to infer cerebellar circuit alterations occurring in cerebellar pathologies.
Liao, Yuxi; Li, Hongbao; Zhang, Qiaosheng; Fan, Gong; Wang, Yiwen; Zheng, Xiaoxiang
2014-01-01
Decoding algorithms in motor Brain Machine Interfaces translate neural signals into movement parameters. They usually assume that the connection between neural firing and movement is stationary, which recent studies contradict by observing time-varying neuron tuning properties. This non-stationarity results from neural plasticity, motor learning, and related processes, and it degrades decoding performance when the model is fixed. To track the non-stationary neuron tuning during decoding, we propose a dual-model approach based on Monte Carlo point-process filtering that also estimates the dynamic tuning parameters. When applied to both simulated neural signals and in vivo BMI data, the proposed adaptive method performs better than one with static tuning parameters, suggesting a promising way to design a long-term-performing decoder for Brain Machine Interfaces.
Memory effects on a resonate-and-fire neuron model subjected to Ornstein-Uhlenbeck noise
Paekivi, S.; Mankin, R.; Rekker, A.
2017-10-01
We consider a generalized Langevin equation with an exponentially decaying memory kernel as a model for the firing process of a resonate-and-fire neuron. The effect of temporally correlated random neuronal input is modeled as Ornstein-Uhlenbeck noise. In the noise-induced spiking regime of the neuron, we derive exact analytical formulas for the dependence of some statistical characteristics of the output spike train, such as the probability distribution of the interspike intervals (ISIs) and the survival probability, on the parameters of the input stimulus. In particular, on the basis of these exact expressions, we establish sufficient conditions for the occurrence of memory-time-induced transitions between unimodal and multimodal structures of the ISI density, as well as a critical damping coefficient that marks a dynamical transition in the behavior of the system.
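The noise-induced spiking regime can be illustrated with a short Euler simulation of a resonate-and-fire neuron (in Izhikevich's complex-variable form) driven by discretized Ornstein-Uhlenbeck noise. This is an illustrative stand-in only: it omits the paper's exponentially decaying memory kernel, and all parameter values are assumptions chosen to produce noise-induced spiking.

```python
import numpy as np

rng = np.random.default_rng(0)

dt, T = 1e-4, 10.0            # time step and total time (s)
b, w0 = -1.0, 2 * np.pi * 5   # damping and resonant frequency (rad/s)
tau_ou, sigma = 0.05, 30.0    # OU correlation time and amplitude
thresh = 1.0                  # spike when Im(z) reaches threshold

z, eta = 0.0 + 0.0j, 0.0
spikes = []
for k in range(int(T / dt)):
    # Ornstein-Uhlenbeck input, Euler-Maruyama discretization
    eta += -eta / tau_ou * dt + sigma * np.sqrt(2 * dt / tau_ou) * rng.standard_normal()
    z += ((b + 1j * w0) * z + eta) * dt
    if z.imag >= thresh:
        spikes.append(k * dt)
        z = 0.0 + 0.0j        # reset after the spike

isis = np.diff(spikes)
print(len(spikes))
```

The histogram of `isis` is the quantity whose unimodal versus multimodal structure the paper characterizes analytically.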
Directory of Open Access Journals (Sweden)
Viswanathan Arunachalam
2013-01-01
Classical single-neuron models, such as the Hodgkin-Huxley point neuron or the leaky integrate-and-fire neuron, assume that the influence of postsynaptic potentials lasts until the neuron fires. Vidybida (2008), in a refreshing departure, has proposed models for binding neurons in which the trace of an input is remembered only for a finite fixed period of time, after which it is forgotten. Binding neurons conform to the behaviour of real neurons and are applicable in constructing fast recurrent networks for computer modeling. This paper develops explicitly several useful results for a binding neuron, such as the firing time distribution and other statistical characteristics. We also discuss the applicability of the developed results in constructing a modified hourglass network model in which interconnected neurons receive excitatory as well as inhibitory inputs. Limited simulation results of the hourglass network are presented.
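The forgetting mechanism of a binding neuron is simple enough to state as code. The sketch below is a hypothetical minimal implementation, assuming a fixed memory time `tau` and a firing `threshold`; the parameter values and the Poisson input stream are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def binding_neuron(input_times, tau=5.0, threshold=3):
    # each impulse is stored for tau time units, then forgotten;
    # the neuron fires when `threshold` impulses coexist in memory
    memory, output = [], []
    for t in sorted(input_times):
        memory = [s for s in memory if t - s < tau]
        memory.append(t)
        if len(memory) >= threshold:
            output.append(t)      # fire and clear the memory trace
            memory = []
    return output

# Poisson input stream at unit rate
times = np.cumsum(rng.exponential(1.0, size=200))
out = binding_neuron(times)
print(len(out))
```

Three impulses within the memory window trigger a spike (`binding_neuron([0, 1, 2])` fires at t=2), while impulses spaced wider than `tau` never accumulate.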
Kim, Elmer K; Wellnitz, Scott A; Bourdon, Sarah M; Lumpkin, Ellen A; Gerling, Gregory J
2012-07-23
The next generation of prosthetic limbs will restore sensory feedback to the nervous system by mimicking how skin mechanoreceptors, innervated by afferents, produce trains of action potentials in response to compressive stimuli. Prior work has addressed building sensors within skin substitutes for robotics, modeling skin mechanics and neural dynamics of mechanotransduction, and predicting response timing of action potentials for vibration. The effort here is unique because it accounts for skin elasticity by measuring force within simulated skin, utilizes few free model parameters for parsimony, and separates parameter fitting and model validation. Additionally, the ramp-and-hold, sustained stimuli used in this work capture the essential features of the everyday task of contacting and holding an object. This systems integration effort computationally replicates the neural firing behavior for a slowly adapting type I (SAI) afferent in its temporally varying response to both intensity and rate of indentation force by combining a physical force sensor, housed in a skin-like substrate, with a mathematical model of neuronal spiking, the leaky integrate-and-fire. Comparison experiments were then conducted using ramp-and-hold stimuli on both the spiking-sensor model and mouse SAI afferents. The model parameters were iteratively fit against recorded SAI interspike intervals (ISI) before validating the model to assess its performance. Model-predicted spike firing compares favorably with that observed for single SAI afferents. As indentation magnitude increases (1.2, 1.3, to 1.4 mm), mean ISI decreases from 98.81 ± 24.73, 54.52 ± 6.94, to 41.11 ± 6.11 ms. Moreover, as rate of ramp-up increases, ISI during ramp-up decreases from 21.85 ± 5.33, 19.98 ± 3.10, to 15.42 ± 2.41 ms. Considering first spikes, the predicted latencies exhibited a decreasing trend as stimulus rate increased, as is observed in afferent recordings. Finally, the SAI afferent's characteristic response
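The spiking half of such a sensor-model pipeline can be illustrated with a plain leaky integrate-and-fire neuron driven by a ramp-and-hold stimulus. This sketch is a simplified stand-in for the authors' fitted model: the membrane parameters and the force-to-current mapping are illustrative assumptions, but it reproduces the qualitative trend that mean ISI decreases as stimulus magnitude increases.

```python
import numpy as np

def lif_spikes(force, dt=1e-4, tau=0.02, gain=60.0, v_th=1.0):
    # leaky integrate-and-fire driven by a force-proportional current
    v, spikes = 0.0, []
    for k, f in enumerate(force):
        v += dt * (-v / tau + gain * f)
        if v >= v_th:
            spikes.append(k * dt)
            v = 0.0               # reset after each spike
    return np.array(spikes)

dt = 1e-4
ramp = np.linspace(0.0, 1.0, int(0.1 / dt))   # 100 ms ramp-up
hold = np.ones(int(0.4 / dt))                 # 400 ms hold

def stimulus(amplitude):
    return amplitude * np.concatenate([ramp, hold])

for amp in (1.2, 1.3, 1.4):
    isi = np.diff(lif_spikes(stimulus(amp)))
    print(amp, round(isi.mean() * 1000, 1), "ms mean ISI")
```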
Directory of Open Access Journals (Sweden)
Pierre Berthet
2016-07-01
The brain enables animals to adapt behaviourally in order to survive in a complex and dynamic environment, but how reward-oriented behaviours are achieved and computed by the underlying neural circuitry is an open question. To address this concern, we have developed a spiking model of the basal ganglia (BG) that learns to disinhibit the action leading to a reward despite ongoing changes in the reward schedule. The architecture of the network features the two pathways commonly described in the BG, the direct (denoted D1) and the indirect (denoted D2) pathway, as well as a loop involving the striatum and the dopaminergic system. The activity of these dopaminergic neurons conveys the reward prediction error (RPE), which determines the magnitude of synaptic plasticity within the different pathways. All plastic connections implement a versatile four-factor learning rule derived from Bayesian inference that depends upon pre- and postsynaptic activity, receptor type and dopamine level. Synaptic weight updates occur in the D1 or D2 pathway depending on the sign of the RPE, and an efference copy informs upstream nuclei about the action selected. We demonstrate successful performance of the system in a multiple-choice learning task with a transiently changing reward schedule. We simulate lesioning of the various pathways and show that a condition without the D2 pathway fares worse than one without D1. Additionally, we simulate the degeneration observed in Parkinson's disease (PD) by decreasing the number of dopaminergic neurons during learning. The results suggest that D1-pathway impairment in PD might have been overlooked. Furthermore, an analysis of the alterations in the synaptic weights shows that using the absolute reward value instead of the RPE leads to a larger change in D1.
Leijon, Sara; Magnusson, Anna K.
2014-01-01
The functional role of efferent innervation of the vestibular end-organs in the inner ear remains elusive. This study provides the first physiological characterization of the cholinergic vestibular efferent (VE) neurons in the brainstem by utilizing a transgenic mouse model, expressing eGFP under a choline-acetyltransferase (ChAT)-locus spanning promoter, in combination with targeted patch clamp recordings. The intrinsic electrical properties of the eGFP-positive VE neurons were compared to the properties of the lateral olivocochlear (LOC) brainstem neurons, which give rise to efferent innervation of the cochlea. Both VE and LOC neurons were marked by their negative resting membrane potential, but the two populations differed significantly in the depolarizing range. When injected with positive currents, VE neurons fired action potentials faithfully at the onset of depolarization, followed by sparse firing with long inter-spike intervals. This response gave rise to a low response gain. The LOC neurons, conversely, responded with a characteristic delayed tonic firing upon depolarizing stimuli, giving rise to a higher response gain than the VE neurons. Depolarization triggered large TEA-insensitive outward currents with fast inactivation kinetics, indicating A-type potassium currents, in both of the inner ear-projecting neuronal types. Immunohistochemistry confirmed expression of Kv4.3 and Kv4.2 ion channel subunits in both the VE and LOC neurons. The difference in spiking responses to depolarization is related to a two-fold greater impact of these transient outward currents on somatic integration in the LOC neurons compared to the VE neurons. It is speculated that the physiological properties of the VE neurons might be compatible with widespread control over motion and gravity sensation in the inner ear, likely providing feedback amplification of abrupt and strong phasic signals from the semicircular canals and of tonic signals from the gravito-sensitive macular organs. PMID:24867596
C. elegans model of neuronal aging
Peng, Chiu-Ying; Chen, Chun-Hao; Hsu, Jiun-Min; Pan, Chun-Liang
2011-01-01
Aging of the nervous system underlies the behavioral and cognitive decline associated with senescence. Understanding the molecular and cellular basis of neuronal aging will therefore contribute to the development of effective treatments for aging and age-associated neurodegenerative disorders. Despite this pressing need, there are surprisingly few animal models that aim at recapitulating neuronal aging in a physiological context. We recently developed a C. elegans model of neuronal aging, and...
A Communication Theoretical Modeling of Axonal Propagation in Hippocampal Pyramidal Neurons.
Ramezani, Hamideh; Akan, Ozgur B
2017-06-01
Understanding the fundamentals of communication among neurons, known as neuro-spike communication, paves the way toward bio-inspired nanoscale communication paradigms. In this paper, we focus on one part of neuro-spike communication, known as axonal transmission, and propose a realistic model for it. The shape of the spike during axonal transmission varies according to the stimulations previously applied to the neuron, and these variations affect the amount of information communicated between neurons. Hence, an accurate model of neuro-spike communication must account for the memory of the axon and its effect on axonal transmission, which has not been studied in the existing literature. In this paper, we extract the important factors affecting the memory of the axon and define memory states based on these factors. We also describe the transitions among these states and the properties of axonal transmission in each of them. Finally, we demonstrate that the proposed model properly follows changes in axonal functionality by simulating it and reporting the root mean square error between simulation results and experimental data.
Parametric models to relate spike train and LFP dynamics with neural information processing.
Banerjee, Arpan; Dean, Heather L; Pesaran, Bijan
2012-01-01
Spike trains and local field potentials (LFPs) resulting from extracellular current flows provide a substrate for neural information processing. Understanding the neural code from simultaneous spike-field recordings and subsequent decoding of information processing events will have widespread applications. One way to demonstrate an understanding of the neural code, with particular advantages for the development of applications, is to formulate a parametric statistical model of neural activity and its covariates. Here, we propose a set of parametric spike-field models (unified models) that can be used with existing decoding algorithms to reveal the timing of task or stimulus specific processing. Our proposed unified modeling framework captures the effects of two important features of information processing: time-varying stimulus-driven inputs and ongoing background activity that occurs even in the absence of environmental inputs. We have applied this framework for decoding neural latencies in simulated and experimentally recorded spike-field sessions obtained from the lateral intraparietal area (LIP) of awake, behaving monkeys performing cued look-and-reach movements to spatial targets. Using both simulated and experimental data, we find that estimates of trial-by-trial parameters are not significantly affected by the presence of ongoing background activity. However, including background activity in the unified model improves goodness of fit for predicting individual spiking events. Uncovering the relationship between the model parameters and the timing of movements offers new ways to test hypotheses about the relationship between neural activity and behavior. We obtained significant spike-field onset time correlations from single trials using a previously published data set where significantly strong correlation was only obtained through trial averaging. We also found that unified models extracted a stronger relationship between neural response latency and trial
International Nuclear Information System (INIS)
Cyr, André; Boukadoum, Mounir
2013-01-01
This paper presents a novel bio-inspired habituation function for robots under control by an artificial spiking neural network. This non-associative learning rule is modelled at the synaptic level and validated through robotic behaviours in reaction to different stimuli patterns in a dynamical virtual 3D world. Habituation is minimally represented to show an attenuated response after exposure to and perception of persistent external stimuli. Based on current neurosciences research, the originality of this rule includes modulated response to variable frequencies of the captured stimuli. Filtering out repetitive data from the natural habituation mechanism has been demonstrated to be a key factor in the attention phenomenon, and inserting such a rule operating at multiple temporal dimensions of stimuli increases a robot's adaptive behaviours by ignoring broader contextual irrelevant information. (paper)
Li, Y-L; Fu, Z-Y; Yang, M-J; Wang, J; Peng, K; Yang, L-J; Tang, J; Chen, Q-C
2015-03-19
To probe the mechanism underlying the auditory behavior-related response patterns of inferior collicular neurons to constant frequency-frequency modulation (CF-FM) stimulus in Hipposideros pratti, we studied the role of post-spike hyperpolarization (PSH) in the formation of response patterns. Neurons obtained by in vivo extracellular (N=145) and intracellular (N=171) recordings could be consistently classified into single-on (SO) and double-on (DO) neurons. Using intracellular recording, we found that both SO and DO neurons have a PSH with different durations. Statistical analysis showed that most SO neurons had a longer PSH duration than DO neurons (p<0.01). These data suggested that the PSH directly participated in the formation of SO and DO neurons, and the PSH elicited by the CF component was the main synaptic mechanism underlying the SO and DO response patterns. The possible biological significance of these findings relevant to bat echolocation is discussed. Copyright © 2015 IBRO. Published by Elsevier Ltd. All rights reserved.
Directory of Open Access Journals (Sweden)
Lorenzo Cangiano
Hyperpolarization-activated cyclic nucleotide-sensitive (HCN) channels mediate the If current in the heart and Ih throughout the nervous system. In spiking neurons Ih participates primarily in different forms of rhythmic activity. Little is known, however, about its role in neurons operating with graded potentials, as in the retina, where all four channel isoforms are expressed. Intriguing evidence for an involvement of Ih in early visual processing are the side effects reported, in dim light or darkness, by cardiac patients treated with HCN inhibitors. Moreover, electroretinographic recordings indicate that these drugs affect temporal processing in the outer retina. Here we analyzed the functional role of HCN channels in rod bipolar cells (RBCs) of the mouse. Perforated-patch recordings in the dark-adapted slice found that RBCs exhibit Ih, and that this is sensitive to the specific blocker ZD7288. RBC input impedance, explored with sinusoidal frequency-modulated current stimuli (0.1-30 Hz), displays band-pass behavior in the range of Ih activation. Theoretical modeling and pharmacological blockade demonstrate that high-pass filtering of input signals by Ih, in combination with low-pass filtering by passive properties, fully accounts for this frequency tuning. Correcting for the depolarization introduced by shunting through the pipette-membrane seal leads to the prediction that in darkness Ih is tonically active in RBCs and quickens their responses to dim light stimuli. Immunohistochemistry targeting candidate subunit isoforms HCN1-2, in combination with markers of RBCs (PKC) and rod-RBC synaptic contacts (bassoon, mGluR6, Kv1.3), suggests that RBCs express HCN2 on the tips of their dendrites. The functional properties conferred by Ih onto RBCs may contribute to shaping the retina's light response and explain the visual side effects of HCN inhibitors.
Directory of Open Access Journals (Sweden)
Sun Qian-Quan
2009-10-01
Background: Little is known about the roles of dendritic gap junctions (GJs) of inhibitory interneurons in modulating temporal properties of sensory-induced responses in sensory cortices. Electrophysiological dual patch-clamp recording and computational simulation methods were used in combination to examine a novel role of GJs in sensory-mediated feed-forward inhibitory responses in barrel cortex layer IV and its underlying mechanisms. Results: Under physiological conditions, excitatory post-junctional potentials (EPJPs) interact with thalamocortical (TC) inputs within an unprecedented few milliseconds (i.e. over 200 Hz) to enhance the firing probability and synchrony of coupled fast-spiking (FS) cells. Dendritic GJ coupling allows a fourfold increase in synchrony and a significant enhancement in spike transmission efficacy in excitatory spiny stellate cells. The model revealed the following novel mechanisms: (1) rapid capacitive current (Icap) underlies the activation of voltage-gated sodium channels; (2) the window in which the Icap underlying the TC input and the EPJP could couple effectively was less than 2 milliseconds; (3) cells with dendritic GJs had larger input conductance and smaller membrane responses to weaker inputs; (4) synchrony in inhibitory networks by GJ coupling leads to reduced sporadic lateral inhibition and increased TC transmission efficacy. Conclusion: Dendritic GJs of neocortical inhibitory networks can have very powerful effects in modulating the strength and the temporal properties of sensory-induced feed-forward inhibitory and excitatory responses at a very high frequency band (>200 Hz). Rapid capacitive currents are identified as the main mechanism underlying the interaction between two transient synaptic conductances.
Serial Spike Time Correlations Affect Probability Distribution of Joint Spike Events.
Shahi, Mina; van Vreeswijk, Carl; Pipa, Gordon
2016-01-01
Detecting the existence of temporally coordinated spiking activity, and its role in information processing in the cortex, has remained a major challenge for neuroscience research. Different methods and approaches have been suggested to test whether observed synchronized events are significantly different from those expected by chance. To analyze simultaneous spike trains for precise spike correlation, these methods typically model the spike trains as Poisson processes, implying that the generation of each spike is independent of all the other spikes. However, studies have shown that neural spike trains exhibit dependence among spike sequences, such as the absolute and relative refractory periods, which govern the spike probability of the oncoming action potential based on the time of the last spike, or bursting behavior, which is characterized by short epochs of rapid action potentials followed by longer episodes of silence. Here we investigate non-renewal processes with an inter-spike interval distribution model that incorporates the spike-history dependence of individual neurons. For that, we use the Monte Carlo method to estimate the full shape of the coincidence count distribution and to generate false positives for coincidence detection. The results show that, compared to distributions based on homogeneous Poisson processes, and also non-Poisson processes, the width of the distribution of joint spike events changes. Non-renewal processes can lead to either heavy-tailed or narrow coincidence distributions. We conclude that small differences in the exact autostructure of the point process can cause large differences in the width of a coincidence distribution. Therefore, manipulations of the autostructure for the estimation of the significance of joint spike events seem to be inadequate.
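The Monte Carlo estimation of a coincidence-count distribution can be sketched for two parallel spike trains. As a simplification, the sketch compares Poisson trains with more regular gamma-ISI trains of the same rate, a renewal stand-in for the paper's non-renewal, history-dependent processes; rates, window, and trial counts are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def spike_train(rate, T, shape, rng):
    # gamma-distributed ISIs with mean 1/rate; shape=1 gives a Poisson process
    isis = rng.gamma(shape, 1.0 / (shape * rate), size=int(3 * rate * T))
    t = np.cumsum(isis)
    return t[t < T]

def coincidences(a, b, win=0.005):
    # count spikes in `a` with a partner in `b` within +/- win seconds
    idx = np.searchsorted(b, a)
    near = np.zeros(len(a), bool)
    for off in (-1, 0):
        j = np.clip(idx + off, 0, len(b) - 1)
        near |= np.abs(b[j] - a) <= win
    return int(near.sum())

def count_dist(shape, trials=500, rate=20.0, T=5.0):
    return np.array([coincidences(spike_train(rate, T, shape, rng),
                                  spike_train(rate, T, shape, rng))
                     for _ in range(trials)])

poisson, regular = count_dist(1.0), count_dist(8.0)
print(poisson.std(), regular.std())
```

Comparing the two empirical widths (standard deviations) is the kind of autostructure effect the paper quantifies.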
Ulmer, Christopher J.; Motta, Arthur T.
2017-11-01
The development of TEM-visible damage in materials under irradiation at cryogenic temperatures cannot be explained using classical rate theory modeling with thermally activated reactions, since at low temperatures thermal reaction rates are too low. Although point defect mobility approaches zero at low temperature, the thermal spikes induced by displacement cascades enable some atomic mobility as each spike cools. In this work a model is developed to calculate "athermal" reaction rates from the atomic mobility within the irradiation-induced thermal spikes, including both displacement cascades and electronic stopping. The athermal reaction rates are added to a simple rate theory cluster dynamics model to allow for the simulation of microstructure evolution during irradiation at cryogenic temperatures. The rate theory model is applied to in-situ irradiation of ZrC and agrees well with observations at cryogenic temperatures. The results show that the addition of the thermal spike model makes it possible to rationalize microstructure evolution in the low-temperature regime.
International Nuclear Information System (INIS)
Ho, J.C.
2004-01-01
Among the theories proposed to interpret PWR iodine spiking behavior, the most accepted concept is based on steam formation and condensation in damaged fuel rods. Due to the complex nature of the phenomenon, a comprehensive model of iodine behavior has not yet been successfully developed. In 1992 a new empirical model was introduced to establish a correlation with the operating parameters. The iodine-131 equivalent activity it predicted deviated from the operating radiochemistry database by 23%. This paper presents an improved model. Although it is still an empirical model, which likewise gives a first-order estimate of the peak iodine spiking magnitude, the deviation between prediction and measurement was reduced to ∼7%. It is believed that this improved model can be used for better prediction and control of the iodine spiking magnitude resulting from failed fuel rods during power transients or plant shutdown. (author)
Matsubara, Takashi; Torikai, Hiroyuki
2016-04-01
Modeling and implementation approaches for the reproduction of input-output relationships in biological nervous tissues contribute to the development of engineering and clinical applications. However, because of high nonlinearity, the traditional modeling and implementation approaches encounter difficulties in terms of generalization ability (i.e., performance when reproducing an unknown data set) and computational resources (i.e., computation time and circuit elements). To overcome these difficulties, asynchronous cellular automaton-based neuron (ACAN) models, which are described as special kinds of cellular automata that can be implemented as small asynchronous sequential logic circuits have been proposed. This paper presents a novel type of such ACAN and a theoretical analysis of its excitability. This paper also presents a novel network of such neurons, which can mimic input-output relationships of biological and nonlinear ordinary differential equation model neural networks. Numerical analyses confirm that the presented network has a higher generalization ability than other major modeling and implementation approaches. In addition, Field-Programmable Gate Array-implementations confirm that the presented network requires lower computational resources.
Timing intervals using population synchrony and spike timing dependent plasticity
Directory of Open Access Journals (Sweden)
Wei Xu
2016-12-01
We present a computational model by which ensembles of regularly spiking neurons can encode different time intervals through synchronous firing. We show that a neuron responding to a large population of convergent inputs has the potential to learn to produce an appropriately-timed output via spike-time dependent plasticity. We explain why temporal variability of this population synchrony increases with increasing time intervals. We also show that the scalar property of timing and its violation at short intervals can be explained by the spike-wise accumulation of jitter in the inter-spike intervals of timing neurons. We explore how the challenge of encoding longer time intervals can be overcome and conclude that this may involve a switch to a different population of neurons with lower firing rate, with the added effect of producing an earlier bias in response. Experimental data on human timing performance show features in agreement with the model’s output.
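The jitter-accumulation argument can be illustrated directly: if an interval is timed by counting n spikes whose ISIs carry independent jitter, the standard deviation of the produced interval grows like sqrt(n), so temporal variability increases for longer intervals. The sketch below makes an independent-Gaussian-ISI assumption for simplicity; all values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

def timed_interval(n_spikes, mean_isi=0.05, jitter=0.005, trials=2000):
    # interval produced by accumulating n_spikes jittered ISIs
    isis = rng.normal(mean_isi, jitter, size=(trials, n_spikes))
    totals = isis.sum(axis=1)
    return totals.mean(), totals.std()

for n in (10, 40, 160):
    m, s = timed_interval(n)
    print(f"target {n * 0.05:.2f} s: sd {s:.4f}, cv {s / m:.4f}")
```

Note that with fully independent jitter the coefficient of variation falls as 1/sqrt(n), so additional ingredients (as in the full model) are needed to recover the scalar property over a wide range.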
Ishibashi, Masaru; Gumenchuk, Iryna; Miyazaki, Kenichi; Inoue, Takafumi; Ross, William N; Leonard, Christopher S
2016-09-28
Orexins (hypocretins) are neuropeptides that regulate multiple homeostatic processes, including reward and arousal, in part by exciting serotonergic dorsal raphe neurons, the major source of forebrain serotonin. Here, using mouse brain slices, we found that, instead of simply depolarizing these neurons, orexin-A altered the spike encoding process by increasing the postspike afterhyperpolarization (AHP) via two distinct mechanisms. This orexin-enhanced AHP (oeAHP) was mediated by both OX1 and OX2 receptors, required Ca(2+) influx, reversed near EK, and decayed with two components, the faster of which resulted from enhanced SK channel activation, whereas the slower component decayed like a slow AHP (sAHP), but was not blocked by UCL2077, an antagonist of sAHPs in some neurons. Intracellular phospholipase C inhibition (U73122) blocked the entire oeAHP, but neither component was sensitive to PKC inhibition or altered PKA signaling, unlike classical sAHPs. The enhanced SK current did not depend on IP3-mediated Ca(2+) release but resulted from A-current inhibition and the resultant spike broadening, which increased Ca(2+) influx and Ca(2+)-induced-Ca(2+) release, whereas the slower component was insensitive to these factors. Functionally, the oeAHP slowed and stabilized orexin-induced firing compared with firing produced by a virtual orexin conductance lacking the oeAHP. The oeAHP also reduced steady-state firing rate and firing fidelity in response to stimulation, without affecting the initial rate or fidelity. Collectively, these findings reveal a new orexin action in serotonergic raphe neurons and suggest that, when orexin is released during arousal and reward, it enhances the spike encoding of phasic over tonic inputs, such as those related to sensory, motor, and reward events. Orexin peptides are known to excite neurons via slow postsynaptic depolarizations. Here we elucidate a significant new orexin action that increases and prolongs the postspike
Training spiking neural networks to associate spatio-temporal input-output spike patterns
Mohemmed, A; Schliebs, S; Matsuda, S; Kasabov, N
2013-01-01
In a previous work (Mohemmed et al., Method for training a spiking neuron to associate input–output spike trains) [1] we have proposed a supervised learning algorithm based on temporal coding to train a spiking neuron to associate input spatiotemporal spike patterns to desired output spike patterns. The algorithm is based on the conversion of spike trains into analogue signals and the application of the Widrow–Hoff learning rule. In this paper we present a mathematical formulation of the prop...
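The core idea, converting spike trains to analogue traces with a kernel and then applying the Widrow-Hoff rule, can be sketched with a linear readout standing in for the spiking neuron. Kernel shape, learning rate, and input-train statistics are illustrative assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(4)

dt, T, tau = 1e-3, 0.5, 0.01
t = np.arange(0.0, T, dt)
kernel = (t / tau) * np.exp(1 - t / tau)        # alpha kernel, peak 1 at t = tau

def to_analogue(spike_times):
    # place unit impulses at the spike times and convolve with the kernel
    s = np.zeros_like(t)
    s[np.rint(np.asarray(spike_times) / dt).astype(int)] = 1.0
    return np.convolve(s, kernel)[: len(t)]

n_in = 30
inputs = np.stack([to_analogue(np.sort(rng.uniform(0, T - dt, 5)))
                   for _ in range(n_in)])
target = to_analogue([0.1, 0.25, 0.4])          # desired output spike times

w, lr = np.zeros(n_in), 0.01
for _ in range(200):
    out = w @ inputs
    w += lr * dt * inputs @ (target - out)      # Widrow-Hoff (delta) rule

err = np.mean((target - w @ inputs) ** 2)
print(round(err, 4))
```

Because the analogue traces make the problem linear in the weights, the delta rule descends the squared-error surface between the desired and actual output traces.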
Statistics of a neuron model driven by asymmetric colored noise.
Müller-Hansen, Finn; Droste, Felix; Lindner, Benjamin
2015-02-01
Irregular firing of neurons can be modeled as a stochastic process. Here we study the perfect integrate-and-fire neuron driven by dichotomous noise, a Markovian process that jumps between two states (i.e., possesses a non-Gaussian statistics) and exhibits nonvanishing temporal correlations (i.e., represents a colored noise). Specifically, we consider asymmetric dichotomous noise with two different transition rates. Using a first-passage-time formulation, we derive exact expressions for the probability density and the serial correlation coefficient of the interspike interval (time interval between two subsequent neural action potentials) and the power spectrum of the spike train. Furthermore, we extend the model by including additional Gaussian white noise, and we give approximations for the interspike interval (ISI) statistics in this case. Numerical simulations are used to validate the exact analytical results for pure dichotomous noise, and to test the approximations of the ISI statistics when Gaussian white noise is included. The results may help to understand how correlations and asymmetry of noise and signals in nerve cells shape neuronal firing statistics.
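A direct simulation of the model is a useful complement to the exact formulas: the sketch below integrates a perfect integrate-and-fire neuron driven by asymmetric two-state (telegraph) noise and collects the ISI sequence. Parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

dt, T = 1e-3, 100.0
mu, v_th = 1.0, 1.0            # mean drive and threshold
eta_p, eta_m = 0.8, -0.5       # the two noise levels (asymmetric)
k_p, k_m = 5.0, 15.0           # switching rates out of the +/- states

v, state = 0.0, eta_p
spikes = []
for k in range(int(T / dt)):
    rate = k_p if state == eta_p else k_m
    if rng.random() < rate * dt:          # Markov state switching
        state = eta_m if state == eta_p else eta_p
    v += (mu + state) * dt                # perfect (non-leaky) integrator
    if v >= v_th:
        spikes.append(k * dt)
        v -= v_th                         # reset by subtraction

isis = np.diff(spikes)
print(len(isis), round(isis.mean(), 3))
```

From `isis` one can estimate the ISI density, the serial correlation coefficient, and (via the spike times) the spike-train power spectrum treated analytically in the paper.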
Tang, Alexander D; Hong, Ivan; Boddington, Laura J; Garrett, Andrew R; Etherington, Sarah; Reynolds, John N J; Rodger, Jennifer
2016-10-29
Repetitive transcranial magnetic stimulation (rTMS) has become a popular method of modulating neural plasticity in humans. Clinically, rTMS is delivered at high intensities to modulate neuronal excitability. While the high-intensity magnetic field can be targeted to stimulate specific cortical regions, areas adjacent to the targeted area receive stimulation at a lower intensity and may contribute to the overall plasticity induced by rTMS. We have previously shown that low-intensity rTMS induces molecular and structural plasticity in vivo, but the effects on membrane properties and neural excitability have not been investigated. Here we investigated the acute effect of low-intensity repetitive magnetic stimulation (LI-rMS) on neuronal excitability and potential changes on the passive and active electrophysiological properties of layer 5 pyramidal neurons in vitro. Whole-cell current clamp recordings were made at baseline prior to subthreshold LI-rMS (600 pulses of iTBS, n=9 cells from 7 animals) or sham (n=10 cells from 9 animals), immediately after stimulation, as well as 10 and 20 min post-stimulation. Our results show that LI-rMS does not alter passive membrane properties (resting membrane potential and input resistance) but hyperpolarises action potential threshold and increases evoked spike-firing frequency. Increases in spike firing frequency were present throughout the 20 min post-stimulation, whereas action potential (AP) threshold hyperpolarization was present immediately after stimulation and at 20 min post-stimulation. These results provide evidence that LI-rMS alters neuronal excitability of excitatory neurons. We suggest that regions outside the targeted region of high-intensity rTMS are susceptible to neuromodulation and may contribute to rTMS-induced plasticity. Copyright © 2016 IBRO. All rights reserved.
Predictive coding of dynamical variables in balanced spiking networks.
Boerlin, Martin; Machens, Christian K; Denève, Sophie
2013-01-01
Two observations about the cortex have puzzled neuroscientists for a long time. First, neural responses are highly variable. Second, the level of excitation and inhibition received by each neuron is tightly balanced at all times. Here, we demonstrate that both properties are necessary consequences of neural networks that represent information efficiently in their spikes. We illustrate this insight with spiking networks that represent dynamical variables. Our approach is based on two assumptions: we assume that information about dynamical variables can be read out linearly from neural spike trains, and we assume that neurons only fire a spike if that improves the representation of the dynamical variables. Based on these assumptions, we derive a network of leaky integrate-and-fire neurons that is able to implement arbitrary linear dynamical systems. We show that the membrane voltage of the neurons is equivalent to a prediction error about a common population-level signal. Among other things, our approach allows us to construct an integrator network of spiking neurons that is robust against many perturbations. Most importantly, neural variability in our networks cannot be equated with noise. Despite exhibiting the same single-unit properties as widely used population code models (e.g., tuning curves, Poisson-distributed spike trains), balanced networks are orders of magnitude more reliable. Our approach suggests that spikes do matter when considering how the brain computes, and that the reliability of cortical representations could have been strongly underestimated.
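The second assumption, that a neuron fires only when its spike improves the representation, can be sketched in one dimension. This toy illustration uses invented weights, time constants and drive (it is not the paper's full derivation): a spike of neuron i adds its decoding weight to the readout, and it is emitted exactly when that addition reduces the decoding error.

```python
import numpy as np

# One-dimensional sketch of the greedy firing rule: a neuron spikes only if
# adding its decoding weight to the readout reduces the decoding error.
# Weights, time constants and the input drive are invented for illustration.
dt, tau = 1e-3, 0.1
n_steps, N = 3000, 20
gamma = np.where(np.arange(N) < N // 2, 0.1, -0.1)   # decoding weights

x, x_hat = 0.0, 0.0
err = []
for k in range(n_steps):
    c = np.sin(2 * np.pi * k * dt)       # input driving the target variable
    x += dt * (-x / tau + c)             # target linear dynamical system
    x_hat += dt * (-x_hat / tau)         # readout decays with the same leak
    # A spike of neuron i changes the squared error by -2*g_i*(x - x_hat) + g_i**2,
    # so it helps exactly when g_i*(x - x_hat) > g_i**2 / 2.
    gains = gamma * (x - x_hat) - gamma**2 / 2
    i = int(np.argmax(gains))
    if gains[i] > 0:
        x_hat += gamma[i]                # the spike updates the readout
    err.append(abs(x - x_hat))

print(max(err))   # tracking error stays below the decoder weight magnitude
```

The quantity gamma * (x - x_hat) plays the role of the membrane voltage in the paper: it is literally a prediction error, and the firing condition is a threshold on it.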
Kember, G C; Fenton, G A; Armour, J A; Kalyaniwalla, N
2001-04-01
Regional cardiac control depends upon feedback of the status of the heart from afferent neurons responding to chemical and mechanical stimuli as transduced by an array of sensory neurites. Emerging experimental evidence shows that neural control in the heart may be partially exerted using subthreshold inputs that are amplified by noisy mechanical fluctuations. This amplification is known as aperiodic stochastic resonance (ASR). Neural control in the noisy, subthreshold regime is difficult to detect, since there is almost no correlation between the input and the output, the latter being the average firing (spiking) rate of the neuron. This lack of correlation is unresolved by traditional energy models of ASR, since these models are unsuitable for identifying "cause and effect" between such inputs and outputs. In this paper, the "competition between averages" model is used to determine what portion of a noisy, subthreshold input is responsible, on average, for the output of sensory neurons as represented by the FitzHugh-Nagumo equations. A physiologically relevant conclusion of this analysis is that a nearly constant amount of input is responsible for a spike, on average, and this amount is approximately independent of the firing rate. Hence, correlation measures are generally reduced as the firing rate is lowered even though neural control under this model is actually unaffected.
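The noise-amplified subthreshold regime described above can be illustrated with a FitzHugh-Nagumo simulation: a periodic input too weak to trigger spikes on its own evokes firing once noise is added. The parameter values below are generic textbook choices, not those of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Generic FitzHugh-Nagumo parameters (textbook values, not the paper's)
a, b, eps = 0.7, 0.8, 0.08
dt, n_steps = 0.05, 100_000

def count_spikes(noise_sd):
    v, w = -1.0, -0.4                  # start near the resting state
    xi = noise_sd * np.sqrt(dt) * rng.standard_normal(n_steps)
    spikes, above = 0, False
    for k in range(n_steps):
        I = 0.25 + 0.05 * np.sin(2 * np.pi * k * dt / 50.0)  # subthreshold drive
        v += dt * (v - v**3 / 3 - w + I) + xi[k]
        w += dt * eps * (v + a - b * w)
        if v > 1.0 and not above:      # upward crossing of v = 1 counts as a spike
            spikes, above = spikes + 1, True
        elif v < 0.0:                  # membrane recovered: re-arm the detector
            above = False
    return spikes

quiet_spikes = count_spikes(0.0)       # subthreshold input alone: few or no spikes
noisy_spikes = count_spikes(0.3)       # noise lifts the input over threshold
print(quiet_spikes, noisy_spikes)
```

The contrast between the two spike counts is the stochastic-resonance effect in its simplest form; the paper's analysis then asks which part of the noisy input is actually responsible for each spike.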
Directory of Open Access Journals (Sweden)
Sebastien Naze
2015-05-01
Epileptic seizure dynamics span multiple scales in space and time. Understanding seizure mechanisms requires identifying the relations between seizure components within and across these scales, together with the analysis of their dynamical repertoire. Mathematical models have been developed to reproduce seizure dynamics across scales ranging from the single neuron to the neural population. In this study, we develop a network model of spiking neurons and systematically investigate the conditions under which the network displays the emergent dynamic behaviors known from the Epileptor, a well-investigated abstract model of epileptic neural activity. This approach allows us to study the biophysical parameters and variables leading to epileptiform discharges at the cellular and network levels. Our network model is composed of two neuronal populations, characterized by fast excitatory bursting neurons and regular-spiking inhibitory neurons, embedded in a common extracellular environment represented by a slow variable. By systematically analyzing the parameter landscape offered by the simulation framework, we reproduce typical sequences of neural activity observed during status epilepticus. We find that exogenous fluctuations from the extracellular environment and electrotonic couplings play a major role in the progression of the seizure, which supports previous studies and further validates our model. We also investigate the influence of chemical synaptic coupling on the generation of spontaneous seizure-like events. Our results point to a temporal shift of typical spike waves with fast discharges as synaptic strengths are varied. We demonstrate that spike waves, including interictal spikes, are generated primarily by inhibitory neurons, whereas fast discharges during the wave part are due to excitatory neurons. Simulated traces are compared with in vivo experimental data from rodents at different stages of the disorder. We draw the conclusion
Computer Modelling of Functional Aspects of Noise in Endogenously Oscillating Neurons
Huber, M. T.; Dewald, M.; Voigt, K.; Braun, H. A.; Moss, F.
1998-03-01
Membrane potential oscillations are a widespread feature of neuronal activity. When such oscillations operate close to the spike-triggering threshold, noise can become an essential property of spike generation. Accordingly, we developed a minimal Hodgkin-Huxley-type computer model which includes a noise term. This model accounts for experimental data from quite different cells, ranging from mammalian cortical neurons to fish electroreceptors. With slight modifications of the parameters, the model's behavior can be tuned to bursting activity, which additionally allows it to mimic temperature encoding in peripheral cold receptors, including transitions to apparently chaotic dynamics as indicated by methods for the detection of unstable periodic orbits. Under all conditions, cooperative effects between noise and nonlinear dynamics can be shown which, beyond stochastic resonance, might be of functional significance for stimulus encoding and neuromodulation.
Six types of multistability in a neuronal model based on slow calcium current.
Directory of Open Access Journals (Sweden)
Tatiana Malashchenko
BACKGROUND: Multistability of oscillatory and silent regimes is a ubiquitous phenomenon exhibited by excitable systems such as neurons and cardiac cells. Multistability can play functional roles in short-term memory and in maintaining posture. It appears to be an evolutionary advantage for neurons that are part of multifunctional Central Pattern Generators to possess multistability. The mechanisms supporting multistability of bursting regimes are not well understood or classified. METHODOLOGY/PRINCIPAL FINDINGS: Our study is focused on determining the biophysical mechanisms underlying different types of co-existence of the oscillatory and silent regimes observed in a neuronal model. We develop a low-dimensional model typifying the dynamics of a single leech heart interneuron. We carry out a bifurcation analysis of the model and show that it possesses six different types of multistability of dynamical regimes. These types are the co-existence of (1) bursting and silence, (2) tonic spiking and silence, (3) tonic spiking and subthreshold oscillations, (4) bursting and subthreshold oscillations, (5) bursting, subthreshold oscillations and silence, and (6) bursting and tonic spiking. The first five types of multistability occur due to the presence of a separating regime that is either a saddle periodic orbit or a saddle equilibrium. We found that the parameter range wherein multistability is observed is limited by the parameter values at which the separating regimes emerge and terminate. CONCLUSIONS: We developed a neuronal model which exhibits a rich variety of types of multistability. We described a novel mechanism supporting the bistability of bursting and silence. This neuronal model provides a unique opportunity to study the dynamics of networks with neurons possessing different types of multistability.
Complete Neuron-Astrocyte Interaction Model: Digital Multiplierless Design and Networking Mechanism.
Haghiri, Saeed; Ahmadi, Arash; Saif, Mehrdad
2017-02-01
Glial cells, also known as neuroglia or glia, are non-neuronal cells that provide support and protection for neurons in the central nervous system (CNS). Among the variety of glial cells, the star-shaped glial cells, i.e., astrocytes, are the largest cell population in the brain. The important roles of astrocytes, such as neuronal synchronization, synaptic information regulation, feedback to neural activity and extracellular regulation, give them a vital part in brain disease. This paper presents a modified complete neuron-astrocyte interaction model that is more suitable for efficient and large-scale biological neural network realization on digital platforms. Simulation results show that the modified complete interaction model can reproduce biological-like behavior of the original neuron-astrocyte mechanism. The modified interaction model is investigated in terms of digital realization feasibility and cost, targeting a low-cost hardware implementation. The networking behavior of this interaction is investigated and compared between two cases: (i) the neuron spiking mechanism without astrocyte effects, and (ii) the effect of the astrocyte in regulating neuronal behavior and synaptic transmission via control of the LTP and LTD processes. Hardware implementation on FPGA shows that the modified model mimics the main mechanism of neuron-astrocyte communication with higher performance and considerably lower hardware overhead cost compared with the original interaction model.
Noisy Spiking in Visual Area V2 of Amblyopic Monkeys.
Wang, Ye; Zhang, Bin; Tao, Xiaofeng; Wensveen, Janice M; Smith, Earl L; Chino, Yuzo M
2017-01-25
being noisy by perceptual and modeling studies, the exact nature or origin of this elevated perceptual noise is not known. We show that elevated and noisy spontaneous activity and contrast-dependent noisy spiking (spiking irregularity and trial-to-trial fluctuations in spiking) in neurons of visual area V2 could limit the visual performance of amblyopic primates. Moreover, we discovered that the noisy spiking is linked to a high level of binocular suppression in visual cortex during development. Copyright © 2017 the authors.
MaBouDi, HaDi; Shimazaki, Hideaki; Giurfa, Martin; Chittka, Lars
2017-06-01
The honeybee olfactory system is a well-established model for understanding functional mechanisms of learning and memory. Olfactory stimuli are first processed in the antennal lobe, and then transferred to the mushroom body and lateral horn through dual pathways termed the medial and lateral antennal lobe tracts (m-ALT and l-ALT). Recent studies reported that honeybees can perform elemental learning by associating an odour with a reward signal even after lesions of the m-ALT or blocking of the mushroom bodies. To test the hypothesis that the lateral pathway (l-ALT) is sufficient for elemental learning, we modelled local computation within glomeruli in the antennal lobes, with axons of projection neurons connecting to a decision neuron (LHN) in the lateral horn. We show that inhibitory spike-timing-dependent plasticity (modelling non-associative plasticity by exposure to different stimuli) in the synapses from local neurons to projection neurons decorrelates the projection neurons' outputs. The strength of the decorrelation is regulated by global inhibitory feedback within the antennal lobes to the projection neurons. By additionally modelling octopaminergic modification of synaptic plasticity among local neurons in the antennal lobes and in projection neuron-to-LHN connections, the model can discriminate and generalize olfactory stimuli. Although positive patterning can be accounted for by the l-ALT model, negative patterning requires further processing and mushroom body circuits. Thus, our model explains several, but not all, types of associative olfactory learning and generalization by a few neural layers of odour processing in the l-ALT. As an outcome of the combination of non-associative and associative learning, the modelling approach allows us to link changes in the structural organization of honeybees' antennal lobes with their behavioural performance over the course of their life.
Mirror neurons: functions, mechanisms and models.
Oztop, Erhan; Kawato, Mitsuo; Arbib, Michael A
2013-04-12
Mirror neurons for manipulation fire both when the animal manipulates an object in a specific way and when it sees another animal (or the experimenter) perform an action that is more or less similar. Such neurons were originally found in macaque monkeys, in the ventral premotor cortex, area F5, and later also in the inferior parietal lobule. Recent neuroimaging data indicate that the adult human brain is endowed with a "mirror neuron system," putatively containing mirror neurons and other neurons, for matching the observation and execution of actions. Mirror neurons may serve action recognition in monkeys as well as humans, whereas their putative role in imitation and language may be realized in humans but not in monkeys. This article shows the important role of computational models in providing sufficient and causal explanations for the observed phenomena involving mirror systems and the learning processes which form them, and underlines the need for additional circuitry beyond the monkey mirror neuron circuit to sustain the cognitive functions attributed to the human mirror neuron system. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
Brookings, Ted; Goeritz, Marie L; Marder, Eve
2014-11-01
We describe a new technique to fit conductance-based neuron models to intracellular voltage traces from isolated biological neurons. The biological neurons are recorded in current clamp with pink (1/f) noise injected to perturb the activity of the neuron. The new algorithm finds a set of parameters that allows a multicompartmental model neuron to match the recorded voltage trace. Attempting to match a recorded voltage trace directly has a well-known problem: mismatch in the timing of action potentials between the biological and model neurons is inevitable and results in a poor phenomenological match between the model and the data. Our approach avoids this by applying a weak control adjustment to the model to promote alignment during the fitting procedure. This approach is closely related to the control-theoretic concept of a Luenberger observer. We tested this approach on synthetic data and on data recorded from an anterior gastric receptor neuron from the stomatogastric ganglion of the crab Cancer borealis. To test the flexibility of this approach, the synthetic data were constructed with conductance models that were different from the ones used in the fitting model. For both synthetic and biological data, the resultant models had good spike-timing accuracy. Copyright © 2014 the American Physiological Society.
Witton, Jonathan; Staniaszek, Lydia E.; Bartsch, Ullrich; Randall, Andrew D.; Jones, Matthew W.
2015-01-01
Key points: High-frequency (100–250 Hz) neuronal oscillations in the hippocampus, known as sharp-wave ripples (SWRs), synchronise the firing behaviour of groups of neurons and play a key role in memory consolidation. Learning and memory are severely compromised in dementias such as Alzheimer's disease; however, the effects of dementia-related pathology on SWRs are unknown. The frequency and temporal structure of SWRs were disrupted in a transgenic mouse model of tauopathy (one of the major hallmarks of several dementias). Excitatory pyramidal neurons were more likely to fire action potentials in a phase-locked manner during SWRs in the mouse model of tauopathy; conversely, inhibitory interneurons were less likely to fire phase-locked spikes during SWRs. These findings indicate that there is reduced inhibitory control of hippocampal network events and point to a novel mechanism which may underlie the cognitive impairments in this model of dementia. Abstract: Neurons within the CA1 region of the hippocampus are co-activated during high-frequency (100–250 Hz) sharp-wave ripple (SWR) activity in a manner that probably drives synaptic plasticity and promotes memory consolidation. In this study we have used a transgenic mouse model of dementia (rTg4510 mice), which overexpresses a mutant form of tau protein, to examine the effects of tauopathy on hippocampal SWRs and associated neuronal firing. Tetrodes were used to record simultaneous extracellular action potentials and local field potentials from the dorsal CA1 pyramidal cell layer of 7- to 8-month-old wild-type and rTg4510 mice at rest in their home cage. At this age these mice exhibit neurofibrillary tangles, neurodegeneration and cognitive deficits. Epochs of sleep or quiet restfulness were characterised by minimal locomotor activity and a low theta/delta ratio in the local field potential power spectrum. SWRs detected off-line were significantly lower in amplitude and had an altered temporal
SNAVA: a real-time multi-FPGA multi-model spiking neural network simulation architecture.
Sripad, Athul; Sanchez, Giovanny; Zapata, Mireya; Pirrone, Vito; Dorta, Taho; Cambria, Salvatore; Marti, Albert; Krishnamourthy, Karthikeyan; Madrenas, Jordi
2018-01-01
The Spiking Neural Networks (SNN) for Versatile Applications (SNAVA) simulation platform is a scalable and programmable parallel architecture that supports real-time, large-scale, multi-model SNN computation. The architecture is implemented on modern Field-Programmable Gate Array (FPGA) devices to provide high-performance execution and the flexibility to support large-scale SNN models. Flexibility is defined in terms of programmability, which allows easy synapse and neuron implementation. This has been achieved by using special-purpose Processing Elements (PEs) for computing SNNs, and by analyzing and customizing the instruction set according to the processing needs to achieve maximum performance with minimum resources. The parallel architecture is interfaced with customized Graphical User Interfaces (GUIs) to configure the SNN's connectivity, compile the neuron-synapse model and monitor the SNN's activity. Our contribution is a tool that allows prototyping SNNs faster than on CPU/GPU architectures but significantly more cheaply than fabricating a customized neuromorphic chip. This could be valuable to the computational neuroscience and neuromorphic engineering communities. Copyright © 2017 Elsevier Ltd. All rights reserved.
Attention deficit associated with early life interictal spikes in a rat model is improved with ACTH.
Directory of Open Access Journals (Sweden)
Amanda E Hernan
Children with epilepsy often present with pervasive cognitive and behavioral comorbidities, including working memory impairments, attention deficit hyperactivity disorder (ADHD) and autism spectrum disorder. These non-seizure characteristics are severely detrimental to overall quality of life. Some of these children, particularly those with epilepsies classified as Landau-Kleffner Syndrome or continuous spike and wave during sleep, have infrequent seizure activity but frequent focal epileptiform activity. This frequent epileptiform activity is thought to be detrimental to cognitive development; however, it is also possible that these interictal spike (IIS) events initiate pathophysiological pathways in the developing brain that may be independently associated with cognitive deficits. These hypotheses have been difficult to address due to the previous lack of an appropriate animal model. To this end, we have recently developed a rat model to test the role of frequent focal epileptiform activity in the prefrontal cortex. Using microinjections of a GABA(A) antagonist (bicuculline methiodide) delivered multiple times per day from postnatal day (p)21 to p25, we showed that rat pups experiencing frequent, focal, recurrent epileptiform activity in the form of interictal spikes during neurodevelopment have significant long-term deficits in attention and sociability that persist into adulthood. To determine whether treatment with ACTH, a drug widely used to treat early-life seizures, altered outcome, we administered ACTH subcutaneously once per day during the period of induced interictal spike activity. We show a modest amelioration of the attention deficit in animals with a history of early-life interictal spikes treated with ACTH, in the absence of any alteration of interictal spike activity. These results suggest that pharmacological intervention that is not targeted at the interictal spike activity is worthy of future study, as it may be beneficial for preventing or ameliorating adverse
Learning to Recognize Actions From Limited Training Examples Using a Recurrent Spiking Neural Model
Panda, Priyadarshini; Srinivasa, Narayan
2018-01-01
A fundamental challenge in machine learning today is to build a model that can learn from few examples. Here, we describe a reservoir-based spiking neural model for learning to recognize actions with a limited number of labeled videos. First, we propose a novel encoding, inspired by how microsaccades influence visual perception, to extract spike information from raw video data while preserving the temporal correlation across different frames. Using this encoding, we show that the reservoir generalizes its rich dynamical activity toward signature actions/movements, enabling it to learn from few training examples. We evaluate our approach on the UCF-101 dataset. Our experiments demonstrate that our proposed reservoir achieves 81.3/87% Top-1/Top-5 accuracy, respectively, on the 101-class data while requiring just 8 video examples per class for training. Our results establish a new benchmark for action recognition from limited video examples for spiking neural models while yielding competitive accuracy with respect to state-of-the-art non-spiking neural models. PMID:29551962
A new elastic model for ground coupling of geophones with spikes
Drijkoningen, G.G.; Rademakers, F.; Slob, E.C.; Fokkema, J.T.
2006-01-01
Ground coupling is the term describing the transfer of seismic ground motion to the motion of a geophone. In previous models, ground coupling was mainly treated as a disk lying on top of a half-space, ignoring the fact that in current practice geophones are spiked and are buried for
D’Ascenzo, Marcello; Podda, Maria Vittoria; Fellin, Tommaso; Azzena, Gian Battista; Haydon, Philip; Grassi, Claudio
2009-01-01
The involvement of metabotropic glutamate receptors type 5 (mGluR5) in drug-induced behaviours is well established, but limited information is available on their functional roles in addiction-relevant brain areas like the nucleus accumbens (NAc). This study demonstrates that pharmacological and synaptic activation of mGluR5 increases the spike discharge of medium spiny neurons (MSNs) in the NAc. This effect was associated with the appearance of a slow afterdepolarization (ADP) which, in voltage-clamp experiments, was recorded as a slowly inactivating inward current. Pharmacological studies showed that the ADP was elicited by mGluR5 stimulation via G-protein-dependent activation of phospholipase C and elevation of intracellular Ca2+ levels. Both the ADP and spike aftercurrents were significantly inhibited by the Na+ channel blocker tetrodotoxin (TTX). Moreover, selective blockade of persistent Na+ currents (INaP), achieved by pre-incubating NAc slices with 20 nM TTX or 10 μM riluzole, significantly reduced the ADP amplitude, indicating that this type of Na+ current is responsible for the mGluR5-dependent ADP. mGluR5 activation also produced significant increases in INaP, and pharmacological blockade of this current prevented the mGluR5-induced enhancement of spike discharge. Collectively, these data suggest that mGluR5 activation upregulates INaP in MSNs of the NAc, thereby inducing an ADP that results in enhanced MSN excitability. Activation of mGluR5 will significantly alter spike firing in MSNs in vivo, and this effect could be an important mechanism by which these receptors mediate certain aspects of drug-induced behaviours. PMID:19433572
Population density models of integrate-and-fire neurons with jumps: well-posedness.
Dumont, Grégory; Henry, Jacques
2013-09-01
In this paper we study the well-posedness of different models of populations of leaky integrate-and-fire neurons using a population density approach. The synaptic interaction between neurons is modeled by a potential jump at the reception of a spike. We study populations that are self-excitatory or self-inhibitory. We distinguish the case where this interaction is instantaneous from the one where there is a distribution of conduction delays. In the case of a bounded density of delays, both the excitatory and inhibitory population models are shown to be well-posed. Without conduction delays, however, the solution of the self-excitatory model may blow up. We analyze the different behaviours of the model with jumps compared to its diffusion approximation.
A Markovian event-based framework for stochastic spiking neural networks.
Touboul, Jonathan D; Faugeras, Olivier D
2011-11-01
In spiking neural networks, information is conveyed by the spike times, which depend on the intrinsic dynamics of each neuron, the input it receives and the connections between neurons. In this article we study the Markovian nature of the sequence of spike times in stochastic neural networks, and in particular the ability to deduce the next spike time from a spike train, and therefore to describe the network activity from the spike times alone, regardless of the membrane potential process. To study this question rigorously, we introduce and study an event-based description of networks of noisy integrate-and-fire neurons, i.e., one based on the computation of spike times. We show that the firing times of the neurons in the network constitute a Markov chain, whose transition probability is related to the probability distribution of the interspike intervals of the neurons in the network. In the cases where the Markovian model can be developed, the transition probability is derived explicitly for such classical neural network models as linear integrate-and-fire neurons with excitatory and inhibitory interactions, for different types of synapses, possibly featuring noisy synaptic integration, transmission delays and absolute and relative refractory periods. This covers most of the cases that have been investigated in the event-based description of spiking deterministic neural networks.
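The event-based idea, advancing the network from spike time to spike time rather than stepping the membrane potentials, can be sketched for deterministic leaky integrate-and-fire neurons, whose next firing time has a closed form. All parameters below are invented for illustration; the paper's stochastic setting replaces this deterministic passage time with an interspike-interval distribution.

```python
import math

# Event-driven sketch: three deterministic LIF neurons with instantaneous
# synaptic jumps. Parameters are illustrative, not taken from the paper.
N, tau, v_th = 3, 10.0, 1.0
mu = [1.5, 1.3, 1.2]                   # suprathreshold constant drives
W = [[0.0, 0.1, -0.2],                 # W[i][j]: jump of v[i] when j fires
     [0.1, 0.0, 0.1],
     [-0.2, 0.1, 0.0]]
v = [0.0] * N
t, t_end = 0.0, 200.0
spikes = []

def time_to_spike(vi, mui):
    # First-passage time of dv/dt = (mu - v)/tau from vi to v_th
    if mui <= v_th:
        return math.inf
    return tau * math.log((mui - vi) / (mui - v_th))

while t < t_end:
    waits = [time_to_spike(v[i], mu[i]) for i in range(N)]
    j = min(range(N), key=lambda i: waits[i])
    dt = waits[j]
    t += dt
    for i in range(N):                 # evolve every membrane analytically
        v[i] = mu[i] + (v[i] - mu[i]) * math.exp(-dt / tau)
    v[j] = 0.0                         # reset the neuron that fired
    spikes.append((t, j))
    for i in range(N):                 # apply jumps, kept just subthreshold here
        if i != j:
            v[i] = min(v[i] + W[i][j], 0.999)

print(len(spikes))
```

Note that the simulation only ever touches the state at spike times; between events the trajectory is known in closed form, which is what makes the spike-time sequence a self-contained (Markovian) description.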
International Nuclear Information System (INIS)
Janczura, Joanna; Trück, Stefan; Weron, Rafał; Wolff, Rodney C.
2013-01-01
An important issue in fitting stochastic models to electricity spot prices is the estimation of a component to deal with trends and seasonality in the data. Unfortunately, estimation routines for the long-term and short-term seasonal patterns are usually quite sensitive to extreme observations, known as electricity price spikes. Improved robustness of the model can be achieved by (a) filtering the data with some reasonable procedure for outlier detection, and then (b) using estimation and testing procedures on the filtered data. In this paper we examine the effects of different treatments of extreme observations on model estimation and on determining the number of spikes (outliers). In particular we compare results for the estimation of the seasonal and stochastic components of electricity spot prices using either the original or the filtered data. We find significant evidence for a superior estimation of both the short-term and long-term seasonal components when the data have been treated carefully for outliers. Overall, our findings point out the substantial impact the treatment of extreme observations may have on these issues and, therefore, also on the pricing of electricity derivatives like futures and option contracts. An added value of our study is the ranking of different filtering techniques used in the energy economics literature, suggesting which methods could be and which should not be used for spike identification.
Highlights:
• First comprehensive study on the impact of spikes on seasonal pattern estimation.
• The effects of different treatments of spikes on model estimation are examined.
• Cleaning spot prices for outliers yields superior estimates of the seasonal pattern.
• Removing outliers provides better parameter estimates for the stochastic process.
• Rankings of filtering techniques suggested in the literature are provided.
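The two-step procedure, (a) filter outliers and then (b) estimate seasonality on the filtered data, can be illustrated on synthetic prices. The iterative k-sigma rule below is one crude stand-in for the filtering techniques the study compares, with invented numbers throughout.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic daily spot prices: a weekly seasonal pattern, Gaussian noise,
# and rare upward price spikes (all numbers invented for illustration).
true_pattern = np.array([40.0, 42.0, 43.0, 43.0, 41.0, 35.0, 33.0])
n_weeks = 200
prices = np.tile(true_pattern, n_weeks) + rng.normal(0, 2, 7 * n_weeks)
spike_idx = rng.choice(prices.size, 20, replace=False)
prices[spike_idx] += rng.uniform(100, 200, 20)       # electricity price spikes

def filter_spikes(x, k=3.0):
    """Iterative k-sigma rule: flag outliers, replace them with the mean."""
    y = x.copy()
    for _ in range(5):
        m, s = y.mean(), y.std()
        y[np.abs(y - m) > k * s] = m                 # crude replacement
    return y

def weekly_pattern(x):
    return x.reshape(-1, 7).mean(axis=0)             # day-of-week averages

err_raw = np.abs(weekly_pattern(prices) - true_pattern).max()
err_filt = np.abs(weekly_pattern(filter_spikes(prices)) - true_pattern).max()
print(err_raw, err_filt)    # filtered data should recover the pattern better
```

Even this simple filter illustrates the paper's point: estimating the seasonal component on the raw series lets a handful of spikes bias the day-of-week averages, while filtering first restores the pattern.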
Context-aware modeling of neuronal morphologies
Directory of Open Access Journals (Sweden)
Benjamin eTorben-Nielsen
2014-09-01
Neuronal morphologies are pivotal for brain functioning: physical overlap between dendrites and axons constrains the circuit topology, and the precise shape and composition of dendrites determine the integration of inputs to produce an output signal. At the same time, morphologies are highly diverse and variable. The variance presumably originates from neurons developing in a densely packed brain substrate where they interact (e.g., through repulsion or attraction) with other actors in this substrate. However, when neurons are studied, their context is never part of the analysis and they are treated as if they existed in isolation. Here we argue that to fully understand neuronal morphology and its variance it is important to consider neurons in relation to each other and to other actors in the surrounding brain substrate, i.e., their context. We propose a context-aware computational framework, NeuroMaC, in which large numbers of neurons can be grown simultaneously according to growth rules expressed in terms of interactions between the developing neuron and the surrounding brain substrate. As a proof of principle, we demonstrate that with NeuroMaC we can generate accurate virtual morphologies of distinct classes, both in isolation and as part of neuronal forests. Accuracy is validated against population statistics of experimentally reconstructed morphologies. We show that context-aware generation of neurons can explain characteristics of variation. Indeed, plausible variation is an inherent property of the morphologies generated by context-aware rules. We speculate about the applicability of this framework for investigating morphologies and circuits, classifying healthy and pathological morphologies, and generating large quantities of morphologies for large-scale modeling.
Large-scale modelling of neuronal systems
International Nuclear Information System (INIS)
Castellani, G.; Verondini, E.; Giampieri, E.; Bersani, F.; Remondini, D.; Milanesi, L.; Zironi, I.
2009-01-01
The brain is, without any doubt, the most complex system of the human body. Its complexity is due in part to the extremely high number of neurons, as well as the huge number of synapses connecting them. Each neuron is capable of performing complex tasks, like learning and memorizing a large class of patterns. The simulation of large neuronal systems is challenging for both technological and computational reasons, and can open new perspectives on the comprehension of brain functioning. A well-known and widely accepted model of bidirectional synaptic plasticity, the BCM model, is stated through a differential equation approach based on bistability and selectivity properties. We have modified the BCM model, extending it from a single-neuron to a whole-network model. This new model is capable of generating interesting network topologies starting from a small number of local parameters describing the interaction between incoming and outgoing links of each neuron. We have characterized this model in terms of complex network theory, showing how this learning rule can support network generation.
Hui, Ada; Lam, Xanthe M; Kuehl, Christopher; Grauschopf, Ulla; Wang, Y John
2015-01-01
When isolator technology is applied to the biotechnology drug product fill-finish process, hydrogen peroxide (H2O2) spiking studies for the determination of the sensitivity of protein to residual peroxide in the isolator can be useful for assessing a maximum vapor phase hydrogen peroxide (VPHP) level. When monoclonal antibody (mAb) drug products were spiked with H2O2, an increase in methionine (Met 252 and Met 428) oxidation in the Fc region of the mAbs, accompanied by a decrease in H2O2 concentration, was observed for various levels of spiked-in peroxide. The reaction between Fc-Met and H2O2 was stoichiometric (i.e., 1:1 molar ratio), and the reaction rate was dependent on the concentrations of mAb and H2O2. The consumption of H2O2 by Fc-Met oxidation in the mAb followed pseudo first-order kinetics, and the rate was proportional to mAb concentration. The extent of Met 428 oxidation was half of that of Met 252, supporting that Met 252 is twice as reactive as Met 428. Similar results were observed for free L-methionine when spiked with H2O2. However, mAb formulation excipients may affect the rate of H2O2 consumption. mAb formulations containing trehalose or sucrose had faster H2O2 consumption rates than formulations without the sugars, which could be the result of impurities (e.g., metal ions) present in the excipients that may act as catalysts. Based on the H2O2 spiking study results, we can predict the amount of Fc-Met oxidation for a given protein concentration and H2O2 level. Our kinetic modeling of the reaction between Fc-Met oxidation and H2O2 provides an outline to design a H2O2 spiking study to support the use of a VPHP isolator for antibody drug product manufacture. Isolator technology is increasingly used in drug product manufacturing of biotherapeutics. In order to understand the impact of residual vapor phase hydrogen peroxide (VPHP) levels on protein product quality, hydrogen peroxide (H2O2) spiking studies may be performed to determine the sensitivity of monoclonal antibody
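The pseudo first-order kinetics described above can be sketched numerically. The rate constant and concentrations here are hypothetical placeholders, not the paper's measured values; the point is only the functional form.

```python
import numpy as np

# Pseudo first-order consumption of H2O2 by Fc-Met oxidation:
# d[H2O2]/dt = -k * [mAb] * [H2O2], so [H2O2](t) = [H2O2]_0 * exp(-k * [mAb] * t).
k = 0.05          # second-order rate constant, 1/(mM*h) (hypothetical)
mab = 1.0         # mAb concentration, mM (hypothetical)
h2o2_0 = 0.2      # initial spiked H2O2, mM (hypothetical)

t = np.linspace(0, 48, 100)             # hours
h2o2 = h2o2_0 * np.exp(-k * mab * t)    # exponential decay at fixed [mAb]

# Doubling the protein concentration doubles the observed decay rate,
# consistent with the rate being proportional to mAb concentration.
h2o2_2x = h2o2_0 * np.exp(-k * 2 * mab * t)
```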
An introduction to modeling neuronal dynamics
Börgers, Christoph
2017-01-01
This book is intended as a text for a one-semester course on Mathematical and Computational Neuroscience for upper-level undergraduate and beginning graduate students of mathematics, the natural sciences, engineering, or computer science. An undergraduate introduction to differential equations is more than enough mathematical background. Only a slim, high school-level background in physics is assumed, and none in biology. Topics include models of individual nerve cells and their dynamics, models of networks of neurons coupled by synapses and gap junctions, origins and functions of population rhythms in neuronal networks, and models of synaptic plasticity. An extensive online collection of Matlab programs generating the figures accompanies the book.
Barnett, William H; Cymbalyuk, Gennady S
2014-01-01
The dynamics of individual neurons are crucial for producing functional activity in neuronal networks. An open question is how temporal characteristics can be controlled in bursting activity and in transient neuronal responses to synaptic input. Bifurcation theory provides a framework to discover generic mechanisms addressing this question. We present a family of mechanisms organized around a global codimension-2 bifurcation. The cornerstone bifurcation is located at the intersection of the border between bursting and spiking and the border between bursting and silence. These borders correspond to the blue sky catastrophe and the saddle-node on an invariant circle (SNIC) bifurcation curves, respectively. The cornerstone bifurcation satisfies the conditions for both the blue sky catastrophe and the SNIC. The burst duration and interburst interval increase as the inverse of the square root of the difference between the corresponding bifurcation parameter and its bifurcation value. For a given set of burst duration and interburst interval, one can find the parameter values supporting these temporal characteristics. The cornerstone bifurcation also determines the responses of silent and spiking neurons. In a silent neuron with parameters close to the SNIC, a pulse of current triggers a single burst. In a spiking neuron with parameters close to the blue sky catastrophe, a pulse of current temporarily silences the neuron. These responses are stereotypical: the durations of the transient intervals (the duration of the burst and the latency to spiking) are governed by the inverse-square-root laws. The mechanisms described here could be used to coordinate neuromuscular control in central pattern generators. As proof of principle, we construct small networks that control the metachronal-wave motor pattern exhibited in locomotion. This pattern is determined by the phase relations of bursting neurons in a simple central pattern generator modeled by a chain of
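The inverse-square-root scaling law stated above, and its inversion to pick a parameter value for a desired burst duration, can be written down directly. The prefactor and critical parameter value below are illustrative stand-ins, not values from the paper.

```python
import numpy as np

# Near the cornerstone bifurcation, the burst duration scales as
# T ~ C / sqrt(mu - mu_c), where mu is the bifurcation parameter and
# mu_c its critical value. C and mu_c are illustrative.
C, mu_c = 1.0, 0.5

def burst_duration(mu):
    return C / np.sqrt(mu - mu_c)

# Inverting the law selects the parameter value for a target duration,
# as the abstract describes for matching temporal characteristics.
def parameter_for_duration(T):
    return mu_c + (C / T) ** 2
```

Moving the parameter away from the bifurcation shortens the burst; approaching it makes the duration diverge, which is the signature behavior of these global bifurcations.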
Inferring oscillatory modulation in neural spike trains.
Arai, Kensuke; Kass, Robert E
2017-10-01
Oscillations are observed at various frequency bands in continuous-valued neural recordings like the electroencephalogram (EEG) and local field potential (LFP) in bulk brain matter, and analysis of spike-field coherence reveals that spiking of single neurons often occurs at certain phases of the global oscillation. Oscillatory modulation has been examined in relation to continuous-valued oscillatory signals, and independently from the spike train alone, but behavior- or stimulus-triggered firing-rate modulation, spiking sparseness, the presence of slow modulation not locked to stimuli, and irregular oscillations with large variability in oscillatory periods present challenges to searching for temporal structure in the spike train. In order to study oscillatory modulation in real data collected under a variety of experimental conditions, we describe a flexible point-process framework, the Latent Oscillatory Spike Train (LOST) model, which decomposes the instantaneous firing rate into biologically and behaviorally relevant factors: spiking refractoriness, event-locked firing-rate non-stationarity, and trial-to-trial variability accounted for by a baseline offset and a stochastic oscillatory modulation. We also extend the LOST model to accommodate changes in the modulatory structure over the duration of the experiment, and thereby discover trial-to-trial variability in the spike-field coherence of a rat primary motor cortical neuron to the LFP theta rhythm. Because LOST incorporates a latent stochastic auto-regressive term, LOST is able to detect oscillations when the firing rate is low, the modulation is weak, and when the modulating oscillation has a broad spectral peak.
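The generative idea behind a latent-oscillation point process can be sketched as follows. This toy version combines a baseline rate with a damped AR(2) quasi-oscillation and draws spikes bin-by-bin; the frequency, damping, noise level, and the bounded tanh modulation are all illustrative simplifications, not the LOST model's actual link function or estimates.

```python
import numpy as np

rng = np.random.default_rng(4)

# Latent damped AR(2) process whose complex roots give a noisy ~8 Hz rhythm,
# mimicking an irregular theta-like modulation with a broad spectral peak.
dt, steps = 0.001, 5000                 # 1 ms bins, 5 s
f, damp = 8.0, 0.995                    # oscillation frequency and damping
a1 = 2 * damp * np.cos(2 * np.pi * f * dt)
a2 = -damp**2

z = np.zeros(steps)
for t in range(2, steps):
    z[t] = a1 * z[t - 1] + a2 * z[t - 2] + 0.01 * rng.normal()

# Firing rate = baseline modulated by the latent oscillation (bounded here
# for simplicity); spikes are a Bernoulli approximation to an inhomogeneous
# Poisson process.
rate = 20.0 * (1.0 + 0.5 * np.tanh(z))  # Hz
spikes = rng.random(steps) < rate * dt
```

Because the AR(2) phase drifts from trial to trial, spike timing is only loosely locked to any fixed clock, which is exactly the situation that makes such oscillations hard to detect from the spike train alone.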
Fernandez, Fernando R.; Broicher, Tilman; Truong, Alan; White, John A.
2011-01-01
Modulating the gain of the input-output function of neurons is critical for processing of stimuli and network dynamics. Previously proposed gain control mechanisms suggest that voltage fluctuations play a key role in determining neuronal gain in vivo. Here we show that, under increased membrane conductance, voltage fluctuations restore Na+ current and reduce spike frequency adaptation in rat hippocampal CA1 pyramidal neurons in vitro. As a consequence, membrane voltage fluctuations produce a leftward shift in the f-I relationship without a change in gain, relative to an increase in conductance alone. Furthermore, we show that these changes have important implications for the integration of inhibitory inputs. Due to the ability to restore Na+ current, hyperpolarizing membrane voltage fluctuations mediated by GABAA-like inputs can increase firing rate in a high conductance state. Finally, our data show that the effects on gain and synaptic integration are mediated by voltage fluctuations within a physiologically relevant range of frequencies (10–40 Hz). PMID:21389243
Directory of Open Access Journals (Sweden)
Dorea Vierling-Claassen
2010-11-01
Full Text Available Selective optogenetic drive of fast-spiking (FS) interneurons leads to enhanced local field potential (LFP) power across the traditional gamma frequency band (20-80 Hz; Cardin et al., 2009). In contrast, drive to regular-spiking (RS) pyramidal cells enhances power at lower frequencies, with a peak at 8 Hz. The first result is consistent with previous computational studies emphasizing the role of FS and the time constant of GABAA synaptic inhibition in gamma rhythmicity. However, the same theoretical models do not typically predict low-frequency LFP enhancement with RS drive. To develop hypotheses as to how the same network can support these contrasting behaviors, we constructed a biophysically principled network model of primary somatosensory neocortex containing FS, RS and low-threshold-spiking (LTS) interneurons. Cells were modeled with detailed cell anatomy and physiology, multiple dendritic compartments, and included active somatic and dendritic ionic currents. Consistent with prior studies, the model demonstrated gamma resonance during FS drive, dependent on the time constant of GABAA inhibition induced by synchronous FS activity. Lower frequency enhancement during RS drive was replicated only on inclusion of an inhibitory LTS population, whose activation was critically dependent on RS synchrony and evoked longer-lasting inhibition. Our results predict that differential recruitment of FS and LTS inhibitory populations is essential to the observed cortical dynamics and may provide a means for amplifying the natural expression of distinct oscillations in normal cortical processing.
Chimera-like states in a neuronal network model of the cat brain
Santos, M. S.; Szezech, J. D.; Borges, F. S.; Iarosz, K. C.; Caldas, I. L.; Batista, A. M.; Viana, R. L.; Kurths, J.
2017-08-01
Neuronal systems have been modeled by complex networks at different description levels. Recently, it has been verified that networks can simultaneously exhibit one coherent and one incoherent domain, known as chimera states. In this work, we study the existence of chimera states in a network whose connectivity matrix is based on the cat cerebral cortex. The cerebral cortex of the cat can be separated into 65 cortical areas organised into four cognitive regions: visual, auditory, somatosensory-motor and frontolimbic. We consider a network where the local dynamics is given by the Hindmarsh-Rose model. The Hindmarsh-Rose equations are a well-known model of neuronal activity that has been used to simulate the membrane potential of neurons. Here, we analyse under which conditions chimera states are present, as well as the effects induced on them by the coupling intensity. We observe the existence of chimera states in which the incoherent structure can be composed of desynchronised spikes or desynchronised bursts. Moreover, we find that chimera states with desynchronised bursts are more robust to neuronal noise than those with desynchronised spikes.
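The Hindmarsh-Rose equations used as the local dynamics above are compact enough to integrate directly. This sketch uses the standard parameter set and simple Euler integration for a single uncoupled neuron; the cat-cortex coupling itself is not reproduced here, and the time step and initial state are illustrative.

```python
def hindmarsh_rose(x, y, z, I=3.25, a=1.0, b=3.0, c=1.0, d=5.0,
                   r=0.006, s=4.0, xr=-1.6):
    """Right-hand side of the Hindmarsh-Rose neuron (standard parameters)."""
    dx = y - a * x**3 + b * x**2 - z + I   # membrane potential
    dy = c - d * x**2 - y                  # fast recovery variable
    dz = r * (s * (x - xr) - z)            # slow adaptation current
    return dx, dy, dz

# Euler integration; with I = 3.25 the standard model bursts.
dt, steps = 0.01, 100000
x, y, z = -1.0, 0.0, 2.0
trace = []
for _ in range(steps):
    dx, dy, dz = hindmarsh_rose(x, y, z)
    x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
    trace.append(x)
```

The slow variable z (time scale set by r) is what groups spikes into bursts, and it is the spike/burst distinction that shapes the two kinds of incoherent domains the abstract reports.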
Observability and synchronization of neuron models
Aguirre, Luis A.; Portes, Leonardo L.; Letellier, Christophe
2017-10-01
Observability is the property that enables recovering the state of a dynamical system from a reduced number of measured variables. In high-dimensional systems, it is therefore important to make sure that the variable recorded to perform the analysis conveys good observability of the system dynamics. The observability of a network of neuron models depends nontrivially on the observability of the node dynamics and on the topology of the network. The aim of this paper is twofold. First, to perform a study of observability using four well-known neuron models by computing three different observability coefficients. This not only clarifies observability properties of the models but also shows the limitations of applicability of each type of coefficient in the context of such models. Second, to study the emergence of phase synchronization in networks composed of neuron models. This is done by performing multivariate singular spectrum analysis which, to the best of the authors' knowledge, has not been used in the context of networks of neuron models. It is shown that it is possible to detect phase synchronization: (i) without having to measure all the state variables, but only one (that provides greatest observability) from each node; and (ii) without having to estimate the phase.
Towards reproducible descriptions of neuronal network models.
Directory of Open Access Journals (Sweden)
Eilen Nordlie
2009-08-01
Full Text Available Progress in science depends on the effective exchange of ideas among scientists. New ideas can be assessed and criticized in a meaningful manner only if they are formulated precisely. This applies to simulation studies as well as to experiments and theories. But after more than 50 years of neuronal network simulations, we still lack a clear and common understanding of the role of computational models in neuroscience as well as established practices for describing network models in publications. This hinders the critical evaluation of network models as well as their re-use. We analyze here 14 research papers proposing neuronal network models of different complexity and find widely varying approaches to model descriptions, with regard to both the means of description and the ordering and placement of material. We further observe great variation in the graphical representation of networks and the notation used in equations. Based on our observations, we propose a good model description practice, composed of guidelines for the organization of publications, a checklist for model descriptions, templates for tables presenting model structure, and guidelines for diagrams of networks. The main purpose of this good practice is to trigger a debate about the communication of neuronal network models in a manner comprehensible to humans, as opposed to machine-readable model description languages. We believe that the good model description practice proposed here, together with a number of other recent initiatives on data-, model-, and software-sharing, may lead to a deeper and more fruitful exchange of ideas among computational neuroscientists in years to come. We further hope that work on standardized ways of describing--and thinking about--complex neuronal networks will lead the scientific community to a clearer understanding of high-level concepts in network dynamics, and will thus lead to deeper insights into the function of the brain.
Directory of Open Access Journals (Sweden)
John J Wade
Full Text Available In recent years, research has suggested that astrocyte networks, in addition to nutrient and waste processing functions, regulate both structural and synaptic plasticity. To understand the biological mechanisms that underpin such plasticity requires the development of cell-level models that capture the mutual interaction between astrocytes and neurons. This paper presents a detailed model of bidirectional signaling between astrocytes and neurons (the astrocyte-neuron, or AN, model), which yields new insights into the computational role of astrocyte-neuronal coupling. From a set of modeling studies we demonstrate two significant findings. Firstly, that spatial signaling via astrocytes can relay a "learning signal" to remote synaptic sites. Results show that slow inward currents cause synchronized postsynaptic activity in remote neurons and subsequently allow Spike-Timing-Dependent Plasticity based learning to occur at the associated synapses. Secondly, that bidirectional communication between neurons and astrocytes underpins dynamic coordination between neuron clusters. Although our composite AN model is presently applied to simplified neural structures and limited to coordination between localized neurons, the principle (which embodies structural, functional and dynamic complexity) and the modeling strategy may be extended to coordination among remote neuron clusters.
International Nuclear Information System (INIS)
Higgs, Helen; Worthington, Andrew
2008-01-01
It is commonly known that wholesale spot electricity markets exhibit high price volatility, strong mean-reversion and frequent extreme price spikes. This paper employs a basic stochastic model, a mean-reverting model and a regime-switching model to capture these features in the Australian national electricity market (NEM), comprising the interconnected markets of New South Wales, Queensland, South Australia and Victoria. Daily spot prices from 1 January 1999 to 31 December 2004 are employed. The results show that the regime-switching model outperforms the basic stochastic and mean-reverting models. Electricity prices are also found to exhibit stronger mean-reversion after a price spike than in the normal period, and price volatility is more than fourteen times higher in spike periods than in normal periods. The probability of a spike on any given day ranges between 5.16% in NSW and 9.44% in Victoria.
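The stylized facts above (mean-reversion plus rare spikes) can be sketched with a mean-reverting log-price carrying a Poisson jump component. This is a simplified stand-in for the paper's regime-switching model, and every parameter value below is illustrative rather than estimated from NEM data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ornstein-Uhlenbeck log-price with occasional upward jumps ("spikes").
kappa, mu, sigma = 0.3, np.log(30.0), 0.15   # reversion speed, long-run level, volatility
p_spike, spike_size = 0.07, 1.5              # daily spike probability and max jump size
days = 2192                                  # ~6 years of daily prices

x = np.empty(days)
x[0] = mu
for t in range(1, days):
    jump = spike_size * rng.random() if rng.random() < p_spike else 0.0
    x[t] = x[t - 1] + kappa * (mu - x[t - 1]) + sigma * rng.normal() + jump

prices = np.exp(x)   # spot prices, e.g. $/MWh
```

Because kappa pulls the log-price back toward mu, a jump decays over the following days, reproducing the strong post-spike mean-reversion the paper reports.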
International Nuclear Information System (INIS)
Rodrigues, Serafim; Terry, John R.; Breakspear, Michael
2006-01-01
In this Letter, the genesis of spike-wave activity - a hallmark of many generalized epileptic seizures - is investigated in a reduced mean-field model of human neural activity. Drawing upon brain modelling and dynamical systems theory, we demonstrate that the thalamic circuitry of the system is crucial for the generation of these abnormal rhythms, observing that inhibition from reticular nuclei and excitation from the cortical signal interact to generate the spike-wave oscillation. The mechanism revealed provides an explanation of why approaches based on linear stability and Heaviside approximations to the activation function have failed to explain the phenomenon of spike-wave behaviour in mean-field models. A mathematical understanding of this transition is a crucial step towards relating spiking network models and mean-field approaches to human brain modelling.
International Nuclear Information System (INIS)
Jin, Dezhe Z
2008-01-01
Temporally complex stimuli are encoded into spatiotemporal spike sequences of neurons in many sensory areas. Here, we describe how downstream neurons with dendritic bistable plateau potentials can be connected to decode such spike sequences. Driven by feedforward inputs from the sensory neurons and controlled by feedforward inhibition and lateral excitation, the neurons transit between UP and DOWN states of the membrane potentials. The neurons spike only in the UP states. A decoding neuron spikes at the end of an input to signal the recognition of specific spike sequences. The transition dynamics is equivalent to that of a finite state automaton. A connection rule for the networks guarantees that any finite state automaton can be mapped into the transition dynamics, demonstrating the equivalence in computational power between the networks and finite state automata. The decoding mechanism is capable of recognizing an arbitrary number of spatiotemporal spike sequences, and is insensitive to variations in the spike timings within the sequences.
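The automaton equivalence can be illustrated with a minimal recognizer whose states count how much of a target spike sequence has been seen; reaching the final state corresponds to the decoding neuron firing. The neuron labels and target sequence are hypothetical examples, and the simple restart-on-mismatch rule below stands in for the paper's full network connection rule rather than reproducing it.

```python
def make_recognizer(target):
    """Build a deterministic finite state automaton for one spike sequence."""
    transitions = {(i, sym): i + 1 for i, sym in enumerate(target)}
    accept = len(target)

    def recognize(sequence):
        state = 0
        for sym in sequence:
            if (state, sym) in transitions:
                state = transitions[(state, sym)]     # expected spike: advance
            else:
                state = transitions.get((0, sym), 0)  # unexpected spike: restart
        return state == accept
    return recognize

recognize = make_recognizer(['A', 'B', 'C'])
```

Each automaton state plays the role of a pattern of UP/DOWN membrane states in the network, and the transition table plays the role of the feedforward and lateral connections.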
Spatiotemporal Dynamics and Reliable Computations in Recurrent Spiking Neural Networks
Pyle, Ryan; Rosenbaum, Robert
2017-01-01
Randomly connected networks of excitatory and inhibitory spiking neurons provide a parsimonious model of neural variability, but are notoriously unreliable for performing computations. We show that this difficulty is overcome by incorporating the well-documented dependence of connection probability on distance. Spatially extended spiking networks exhibit symmetry-breaking bifurcations and generate spatiotemporal patterns that can be trained to perform dynamical computations under a reservoir computing framework.
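The key ingredient named above, connection probability that decays with distance, can be sketched as a sampled adjacency matrix on a ring of neurons. The network size, peak probability, and Gaussian decay width below are illustrative choices, not the paper's parameters.

```python
import numpy as np

rng = np.random.default_rng(3)

n, p_max, width = 200, 0.5, 0.1   # neurons, peak probability, decay width (ring fraction)

idx = np.arange(n)
d = np.abs(idx[:, None] - idx[None, :])
d = np.minimum(d, n - d) / n                  # wrap-around (ring) distance
p = p_max * np.exp(-(d / width) ** 2)         # Gaussian decay with distance
np.fill_diagonal(p, 0.0)                      # no self-connections
A = (rng.random((n, n)) < p).astype(int)      # sampled adjacency matrix
```

Nearby pairs end up densely connected while distant pairs are almost never connected, which is the spatial structure that supports the symmetry-breaking patterns described in the abstract.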
Comparison of Langevin and Markov channel noise models for neuronal signal generation.
Sengupta, B; Laughlin, S B; Niven, J E
2010-01-01
The stochastic opening and closing of voltage-gated ion channels produce noise in neurons. The effect of this noise on neuronal performance has been modeled using either an approximate Langevin model, based on stochastic differential equations, or an exact model based on a Markov process model of channel gating. Yet whether the Langevin model accurately reproduces the channel noise produced by the Markov model remains unclear. Here we present a comparison between Langevin and Markov models of channel noise in neurons using single-compartment Hodgkin-Huxley models containing either Na+ and K+, or only K+, voltage-gated ion channels. The performance of the Langevin and Markov models was quantified over a range of stimulus statistics, membrane areas, and channel numbers. We find that in comparison to the Markov model, the Langevin model underestimates the noise contributed by voltage-gated ion channels, overestimating information rates for both spiking and nonspiking membranes. Even with increasing numbers of channels, the difference between the two models persists. This suggests that the Langevin model may not be suitable for accurately simulating channel noise in neurons, even in simulations with large numbers of ion channels.
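The Langevin approach being evaluated above can be sketched for a population of K+ channels at a clamped voltage: the deterministic gating equation gains a diffusion term whose magnitude shrinks as 1/sqrt(N). The rate constants follow the standard Hodgkin-Huxley form; the clamped voltage, channel count, and time step are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

V = -50.0   # mV, voltage clamp
alpha = 0.01 * (V + 55) / (1 - np.exp(-(V + 55) / 10))   # opening rate, 1/ms
beta = 0.125 * np.exp(-(V + 65) / 80)                    # closing rate, 1/ms

N, dt, steps = 1000, 0.01, 50000
n = alpha / (alpha + beta)     # start at the deterministic steady state
trace = np.empty(steps)
for i in range(steps):
    drift = alpha * (1 - n) - beta * n
    diffusion = np.sqrt((alpha * (1 - n) + beta * n) / N)
    n = n + drift * dt + diffusion * np.sqrt(dt) * rng.normal()
    n = min(max(n, 0.0), 1.0)  # keep the open fraction in [0, 1]
    trace[i] = n
```

The exact Markov alternative would instead track N discrete channels with per-channel transition probabilities; the paper's finding is that the diffusion approximation above systematically understates the resulting noise.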
Kruijne, Wouter; Van der Stigchel, Stefan; Meeter, Martijn
2014-03-01
The trajectory of saccades to a target is often affected whenever there is a distractor in the visual field. Distractors can cause a saccade to deviate towards their location or away from it. The oculomotor mechanisms that produce deviation towards distractors have been thoroughly explored in behavioral, neurophysiological and computational studies. The mechanisms underlying deviation away, on the other hand, remain unclear. Behavioral findings suggest a mechanism of spatially focused, top-down inhibition in a saccade map, and deviation away has become a tool to investigate such inhibition. However, this inhibition hypothesis has little neuroanatomical or neurophysiological support, and recent findings go against it. Here, we propose that deviation away results from an unbalanced saccade drive from the brainstem, caused by spike rate adaptation in brainstem long-lead burst neurons. Adaptation to stimulation in the direction of the distractor results in an unbalanced drive away from it. An existing model of the saccade system was extended with this theory. The resulting model simulates a wide range of findings on saccade trajectories, including findings that have classically been interpreted to support inhibition views. Furthermore, the model replicated the effect of saccade latency on deviation away, but predicted this effect would be absent with large (400 ms) distractor-target onset asynchrony. This prediction was confirmed in an experiment, which demonstrates that the theory both explains classical findings on saccade trajectories and predicts new findings. Copyright © 2014 Elsevier Inc. All rights reserved.
International Nuclear Information System (INIS)
Xie, Huijuan; Gong, Yubing
2017-01-01
In this paper, we numerically study the effect of spike-timing-dependent plasticity (STDP) on multiple coherence resonances (MCR) and synchronization transitions (ST) induced by time delay in adaptive scale-free Hodgkin–Huxley neuronal networks. It is found that STDP has a big influence on the MCR and ST induced by time delay, and on the effect of the network average degree on the MCR and ST. MCR is enhanced or suppressed as the adjusting rate A_p of STDP decreases or increases, and there is an optimal A_p at which ST becomes strongest. As the network average degree 〈k〉 increases, ST is enhanced and there is an optimal 〈k〉 at which MCR becomes strongest. Moreover, for a larger A_p value, ST is enhanced more rapidly with increasing 〈k〉 and the optimal 〈k〉 for MCR increases. These results show that STDP can either enhance or suppress MCR, and there is an optimal STDP that can most strongly enhance ST induced by time delay in adaptive neuronal networks. These findings could have potential implications for information processing and transmission in neural systems.
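The STDP update underlying an adjusting rate like A_p can be sketched with the classic additive window: potentiation when the presynaptic spike precedes the postsynaptic one, depression otherwise. The amplitude, depression/potentiation ratio, and time constant below are common illustrative values, not the paper's settings.

```python
import numpy as np

def stdp_dw(delta_t, A_p=0.01, ratio=1.05, tau=20.0):
    """Weight change for spike-timing difference delta_t = t_post - t_pre (ms)."""
    if delta_t > 0:
        return A_p * np.exp(-delta_t / tau)          # pre before post: potentiate
    return -A_p * ratio * np.exp(delta_t / tau)      # post before pre: depress
```

Scaling A_p up or down scales every weight update, which is why the adjusting rate controls how quickly the network's topology adapts and, through that, the resonance and synchronization behavior.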
Novel model of neuronal bioenergetics
DEFF Research Database (Denmark)
Bak, Lasse Kristoffer; Obel, Linea Lykke Frimodt; Walls, Anne B
2012-01-01
-methyl-d-aspartate)-induced synaptic activity and that lactate alone is not able to support neurotransmitter glutamate homoeostasis. Subsequently, a model was proposed to explain these results at the cellular level. In brief, the intermittent rises in intracellular Ca2+ during activation cause influx of Ca2+ into the mitochondrial...
Stela Jokić; B. Nagy; K. Aladić; B. Simándi
2013-01-01
The extraction of soybean oil from the surface of spiked quartz sand using supercritical CO2 was investigated. Sand was used as the solid; since it is not a porous material, internal diffusion does not occur and all the soluble material is on the surface of the particles. Sovová's model has been used in order to obtain an analytical solution to develop the required extraction yield curves. The model simplifies when the internal diffusion can be neglected. The external mass transfer coefficient was det...
Is the thermal-spike model consistent with experimentally determined electron temperature?
International Nuclear Information System (INIS)
Ajryan, Eh.A.; Fedorov, A.V.; Kostenko, B.F.
2000-01-01
Carbon K-Auger electron spectra from amorphous carbon foils induced by fast heavy ions are theoretically investigated. The high-energy tail of the Auger structure, showing a clear projectile charge dependence, is analyzed within the thermal-spike model framework as well as within another model taking into account some kinetic features of the process. The poor agreement between theoretically and experimentally determined temperatures is suggested to be due to an improper account of double electron excitations, or due to shake-up processes which leave the system in a more energetic initial state than a statically screened core hole.
Zhan, Feibiao; Liu, Shenquan
2017-01-01
Electrical activities are ubiquitous neuronal bioelectric phenomena, which have many different modes to encode the expression of biological information and constitute the whole process of signal propagation between neurons. We therefore focus on the electrical activities of neurons, which are also attracting widespread attention among neuroscientists. In this paper, we mainly investigate the electrical activities of the Morris-Lecar (M-L) model with electromagnetic radiation or Gaussian white noise, which brings the model closer to the behavior of neurons in realistic neural networks. First, we explore the dynamical response of the whole system with electromagnetic induction (EMI) and Gaussian white noise. We find that there are slight differences in the discharge behaviors when comparing the response of the original system with that of the improved system, and that electromagnetic induction can transform bursting or spiking states to the quiescent state and vice versa. Furthermore, we study the bursting transition modes and the corresponding periodic solution mechanism for the isolated neuron model with electromagnetic induction using one-parameter and two-parameter bifurcation analysis. Finally, we analyze the effects of Gaussian white noise on the original system and the coupled system, which is conducive to understanding the actual discharge properties of realistic neurons.
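The deterministic Morris-Lecar equations at the core of the study can be sketched as follows. This omits the electromagnetic-induction and noise terms the paper adds; the parameters are a standard published set, and the injected current I = 100 uA/cm^2 is an illustrative value that puts this set in a spiking regime.

```python
import numpy as np

C, gL, VL = 20.0, 2.0, -60.0
gCa, VCa = 4.4, 120.0
gK, VK = 8.0, -84.0
V1, V2, V3, V4, phi = -1.2, 18.0, 2.0, 30.0, 0.04
I = 100.0

def ml_rhs(V, w):
    """Right-hand side of the two-variable Morris-Lecar model."""
    m_inf = 0.5 * (1 + np.tanh((V - V1) / V2))   # instantaneous Ca activation
    w_inf = 0.5 * (1 + np.tanh((V - V3) / V4))   # K activation steady state
    tau_w = 1.0 / np.cosh((V - V3) / (2 * V4))   # K activation time scale
    dV = (I - gL * (V - VL) - gCa * m_inf * (V - VCa) - gK * w * (V - VK)) / C
    dw = phi * (w_inf - w) / tau_w
    return dV, dw

dt, steps = 0.05, 40000
V, w = -40.0, 0.0
trace = np.empty(steps)
for i in range(steps):
    dV, dw = ml_rhs(V, w)
    V, w = V + dt * dV, w + dt * dw
    trace[i] = V
```

Sweeping I (or the EMI feedback strength, in the paper's extended system) through its bifurcation values is what moves the model between quiescent, spiking, and bursting behavior.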
Scaling up spike-and-slab models for unsupervised feature learning.
Goodfellow, Ian J; Courville, Aaron; Bengio, Yoshua
2013-08-01
We describe the use of two spike-and-slab models for modeling real-valued data, with an emphasis on their applications to object recognition. The first model, which we call spike-and-slab sparse coding (S3C), is a preexisting model for which we introduce a faster approximate inference algorithm. We introduce a deep variant of S3C, which we call the partially directed deep Boltzmann machine (PD-DBM) and extend our S3C inference algorithm for use on this model. We describe learning procedures for each. We demonstrate that our inference procedure for S3C enables scaling the model to unprecedented large problem sizes, and demonstrate that using S3C as a feature extractor results in very good object recognition performance, particularly when the number of labeled examples is low. We show that the PD-DBM generates better samples than its shallow counterpart, and that unlike DBMs or DBNs, the PD-DBM may be trained successfully without greedy layerwise training.
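The prior at the heart of both models above can be sketched by direct sampling: each coefficient is exactly zero with probability 1 - pi (the "spike") and Gaussian otherwise (the "slab"). This illustrates only the prior, not the S3C or PD-DBM inference procedures; the mixing probability and slab scale are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

def sample_spike_and_slab(n, pi=0.1, slab_std=1.0):
    """Draw n coefficients from a spike-and-slab prior."""
    active = rng.random(n) < pi                  # which units escape the spike
    slab = rng.normal(0.0, slab_std, size=n)     # slab (Gaussian) values
    return np.where(active, slab, 0.0)

h = sample_spike_and_slab(10000)
```

Roughly a fraction pi of the draws are nonzero, which is what makes the representation sparse and, in the feature-learning setting, concentrates modeling capacity on a few active units per input.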
Diffusion approximation of neuronal models revisited
Czech Academy of Sciences Publication Activity Database
Čupera, Jakub
2014-01-01
Roč. 11, č. 1 (2014), s. 11-25 ISSN 1547-1063. [International Workshop on Neural Coding (NC) /10./. Praha, 02.09.2012-07.09.2012] R&D Projects: GA ČR(CZ) GAP103/11/0282 Institutional support: RVO:67985823 Keywords : stochastic model * neuronal activity * first-passage time Subject RIV: JD - Computer Applications, Robotics Impact factor: 0.840, year: 2014
The Spike-and-Slab Lasso Generalized Linear Models for Prediction and Associated Genes Detection.
Tang, Zaixiang; Shen, Yueping; Zhang, Xinyan; Yi, Nengjun
2017-01-01
Large-scale "omics" data have been increasingly used as an important resource for prognostic prediction of diseases and detection of associated genes. However, there are considerable challenges in analyzing high-dimensional molecular data, including the large number of potential molecular predictors, limited number of samples, and small effect of each predictor. We propose new Bayesian hierarchical generalized linear models, called spike-and-slab lasso GLMs, for prognostic prediction and detection of associated genes using large-scale molecular data. The proposed model employs a spike-and-slab mixture double-exponential prior for coefficients that can induce weak shrinkage on large coefficients, and strong shrinkage on irrelevant coefficients. We have developed a fast and stable algorithm to fit large-scale hierarchical GLMs by incorporating expectation-maximization (EM) steps into the fast cyclic coordinate descent algorithm. The proposed approach integrates nice features of two popular methods, i.e., penalized lasso and Bayesian spike-and-slab variable selection. The performance of the proposed method is assessed via extensive simulation studies. The results show that the proposed approach can provide not only more accurate estimates of the parameters, but also better prediction. We demonstrate the proposed procedure on two cancer data sets: a well-known breast cancer data set consisting of 295 tumors, and expression data of 4919 genes; and the ovarian cancer data set from TCGA with 362 tumors, and expression data of 5336 genes. Our analyses show that the proposed procedure can generate powerful models for predicting outcomes and detecting associated genes. The methods have been implemented in a freely available R package BhGLM (http://www.ssg.uab.edu/bhglm/). Copyright © 2017 by the Genetics Society of America.
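The mixture double-exponential prior described above can be sketched as a density: a sharp Laplace "spike" mixed with a wide Laplace "slab", which shrinks small coefficients strongly while leaving large ones nearly untouched. The scales and mixing weight here are illustrative, not the package defaults.

```python
import numpy as np

def ss_lasso_density(beta, gamma=0.1, s0=0.05, s1=1.0):
    """Spike-and-slab mixture double-exponential (Laplace) prior density.

    gamma is the prior probability of the slab; s0 << s1 are the spike
    and slab scales (illustrative values).
    """
    spike = np.exp(-np.abs(beta) / s0) / (2 * s0)
    slab = np.exp(-np.abs(beta) / s1) / (2 * s1)
    return (1 - gamma) * spike + gamma * slab
```

Near zero the spike dominates and the implied penalty behaves like a strong lasso; far from zero the slab dominates and the penalty flattens, giving the selective shrinkage the abstract describes.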
A Dynamic Bayesian Model for Characterizing Cross-Neuronal Interactions During Decision-Making.
Zhou, Bo; Moorman, David E; Behseta, Sam; Ombao, Hernando; Shahbaba, Babak
2016-01-01
The goal of this paper is to develop a novel statistical model for studying cross-neuronal spike train interactions during decision making. For an individual to successfully complete the task of decision-making, a number of temporally-organized events must occur: stimuli must be detected, potential outcomes must be evaluated, behaviors must be executed or inhibited, and outcomes (such as reward or no-reward) must be experienced. Due to the complexity of this process, it is likely that decision-making is encoded by the temporally-precise interactions between large populations of neurons. Most existing statistical models, however, are inadequate for analyzing such a phenomenon because they provide only an aggregated measure of interactions over time. To address this considerable limitation, we propose a dynamic Bayesian model which captures the time-varying nature of neuronal activity (such as the time-varying strength of the interactions between neurons). The proposed method yielded results that reveal new insight into the dynamic nature of population coding in the prefrontal cortex during decision making. In our analysis, we note that while some neurons in the prefrontal cortex do not synchronize their firing activity until the presence of a reward, a different set of neurons synchronize their activity shortly after stimulus onset. These differentially synchronizing sub-populations of neurons suggest a continuum of population representation of the reward-seeking task. Our analyses also suggest that the degree of synchronization differs between the rewarded and non-rewarded conditions. Moreover, the proposed model is scalable to handle data on many simultaneously-recorded neurons and is applicable to analyzing other types of multivariate time series data with latent structure. Supplementary materials (including computer codes) for our paper are available online.
Directory of Open Access Journals (Sweden)
Quan Wang
2017-08-01
Full Text Available The ability to learn sequential behaviors is a fundamental property of our brains. Yet a long stream of studies, including recent experiments investigating motor sequence learning in adult human subjects, has produced a number of puzzling and seemingly contradictory results. In particular, when subjects have to learn multiple action sequences, learning is sometimes impaired by proactive and retroactive interference effects. In other situations, however, learning is accelerated, as reflected in facilitation and transfer effects. At present it is unclear what the underlying neural mechanisms are that give rise to these diverse findings. Here we show that a recently developed recurrent neural network model readily reproduces this diverse set of findings. The self-organizing recurrent neural network (SORN) model is a network of recurrently connected threshold units that combines a simplified form of spike-timing dependent plasticity (STDP) with homeostatic plasticity mechanisms ensuring network stability, namely intrinsic plasticity (IP) and synaptic normalization (SN). When trained on sequence learning tasks modeled after recent experiments, we find that it reproduces the full range of interference, facilitation, and transfer effects. We show how these effects are rooted in the network's changing internal representation of the different sequences across learning and how they depend on an interaction of training schedule and task similarity. Furthermore, since learning in the model is based on fundamental neuronal plasticity mechanisms, the model reveals how these plasticity mechanisms are ultimately responsible for the network's sequence learning abilities. In particular, we find that all three plasticity mechanisms are essential for the network to learn effective internal models of the different training sequences. This ability to form effective internal models is also the basis for the observed interference and facilitation effects. This suggests that
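The three plasticity mechanisms named above (STDP, IP, SN) can be combined in a minimal binary-threshold-unit sketch. All numbers below (network size, sparsity, learning rates, and the noise term standing in for task input) are assumed values, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
N, target_rate = 50, 0.1          # network size and target rate (assumed)
eta_stdp, eta_ip = 0.001, 0.001   # learning rates (assumed)

W = rng.random((N, N)) * (rng.random((N, N)) < 0.1)   # sparse recurrent weights
np.fill_diagonal(W, 0.0)
T = rng.random(N) * 0.5                               # per-unit thresholds
x = (rng.random(N) < target_rate).astype(float)       # binary unit states

for step in range(1000):
    # threshold units driven by recurrence plus a small noise drive
    # (a stand-in for the sequence input used in the paper)
    x_new = (W @ x + 0.05 * rng.standard_normal(N) - T > 0).astype(float)
    # STDP: strengthen pre(t) -> post(t+1), weaken post(t) -> pre(t+1)
    W += eta_stdp * (np.outer(x_new, x) - np.outer(x, x_new))
    W = np.clip(W, 0.0, None)
    np.fill_diagonal(W, 0.0)
    # synaptic normalization (SN): incoming weights of each unit sum to one
    W /= np.maximum(W.sum(axis=1, keepdims=True), 1e-12)
    # intrinsic plasticity (IP): thresholds drift to hold the target rate
    T += eta_ip * (x_new - target_rate)
    x = x_new
```

SN keeps the STDP updates from running away, and IP keeps units near the target firing rate, which is the stabilizing interplay the abstract describes.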
A feed-forward spiking model of shape-coding by IT cells
Directory of Open Access Journals (Sweden)
August eRomeo
2014-05-01
Full Text Available The ability to recognize a shape is linked to figure-ground organization. Cell preferences appear to be correlated across contrast-polarity reversals and mirror reversals of polygon displays, but not so much across figure-ground (FG) reversals. Here we present a network structure which explains both shape-coding by IT cells and the suppression of responses to figure-ground reversed stimuli. In the model, figure-ground discrimination is achieved well before shape discrimination, which is itself evidenced by the difference in spiking onsets between a pair of cells selective for two image categories.
Macro- and micro-chaotic structures in the Hindmarsh-Rose model of bursting neurons
Energy Technology Data Exchange (ETDEWEB)
Barrio, Roberto, E-mail: rbarrio@unizar.es; Serrano, Sergio [Computational Dynamics Group, Departamento de Matemática Aplicada, GME and IUMA, Universidad de Zaragoza, E-50009 Zaragoza (Spain); Angeles Martínez, M. [Computational Dynamics Group, GME, Universidad de Zaragoza, E-50009 Zaragoza (Spain); Shilnikov, Andrey [Neuroscience Institute and Department of Mathematics and Statistics, Georgia State University, Atlanta, Georgia 30078 (United States); Department of Computational Mathematics and Cybernetics, Lobachevsky State University of Nizhni Novgorod, 603950 Nizhni Novgorod (Russian Federation)
2014-06-01
We study a plethora of chaotic phenomena in the Hindmarsh-Rose neuron model with the use of several computational techniques including the bifurcation parameter continuation, spike-quantification, and evaluation of Lyapunov exponents in bi-parameter diagrams. Such an aggregated approach allows for detecting regions of simple and chaotic dynamics, and demarcating borderlines—exact bifurcation curves. We demonstrate how the organizing centers—points corresponding to codimension-two homoclinic bifurcations—along with fold and period-doubling bifurcation curves structure the biparametric plane, thus forming macro-chaotic regions of onion bulb shapes and revealing spike-adding cascades that generate micro-chaotic structures due to the hysteresis.
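For reference, the Hindmarsh-Rose system analysed above is small enough to integrate directly. A forward-Euler sketch with common textbook parameter values (not the values scanned in the paper's bi-parameter diagrams):

```python
import numpy as np

def hindmarsh_rose(I=2.0, r=0.005, s=4.0, x_rest=-1.6, dt=0.01, steps=200_000):
    """Forward-Euler integration of the Hindmarsh-Rose model; returns the
    membrane-variable trace x(t)."""
    x, y, z = -1.6, 0.0, 0.0
    trace = np.empty(steps)
    for n in range(steps):
        dx = y + 3.0 * x**2 - x**3 - z + I      # fast membrane variable
        dy = 1.0 - 5.0 * x**2 - y               # fast recovery variable
        dz = r * (s * (x - x_rest) - z)         # slow adaptation variable
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        trace[n] = x
    return trace

trace = hindmarsh_rose()
# count spikes as upward crossings of x = 1.0
n_spikes = int(np.sum((trace[1:] > 1.0) & (trace[:-1] <= 1.0)))
```

Sweeping I and r over a grid and counting spikes per burst in this way is the "spike-quantification" ingredient of the bi-parameter diagrams mentioned in the abstract.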
Macro- and micro-chaotic structures in the Hindmarsh-Rose model of bursting neurons
International Nuclear Information System (INIS)
Barrio, Roberto; Serrano, Sergio; Angeles Martínez, M.; Shilnikov, Andrey
2014-01-01
We study a plethora of chaotic phenomena in the Hindmarsh-Rose neuron model with the use of several computational techniques including the bifurcation parameter continuation, spike-quantification, and evaluation of Lyapunov exponents in bi-parameter diagrams. Such an aggregated approach allows for detecting regions of simple and chaotic dynamics, and demarcating borderlines—exact bifurcation curves. We demonstrate how the organizing centers—points corresponding to codimension-two homoclinic bifurcations—along with fold and period-doubling bifurcation curves structure the biparametric plane, thus forming macro-chaotic regions of onion bulb shapes and revealing spike-adding cascades that generate micro-chaotic structures due to the hysteresis.
Hardware implementation of stochastic spiking neural networks.
Rosselló, Josep L; Canals, Vincent; Morro, Antoni; Oliver, Antoni
2012-08-01
Spiking Neural Networks, the latest generation of Artificial Neural Networks, are characterized by their bio-inspired nature and by a higher computational capacity with respect to other neural models. In real biological neurons, stochastic processes represent an important mechanism of neural behavior and are responsible for their special arithmetic capabilities. In this work we present a simple hardware implementation of spiking neurons that considers this probabilistic nature. The advantage of the proposed implementation is that it is fully digital and therefore can be massively implemented in Field Programmable Gate Arrays. The high computational capabilities of the proposed model are demonstrated by the study of both feed-forward and recurrent networks that are able to implement high-speed signal filtering and to solve complex systems of linear equations.
Biophysically realistic minimal model of dopamine neuron
Oprisan, Sorinel
2008-03-01
We proposed and studied a new biophysically relevant computational model of dopaminergic neurons. Midbrain dopamine neurons are involved in motivation and the control of movement, and have been implicated in various pathologies such as Parkinson's disease, schizophrenia, and drug abuse. The model we developed is a single-compartment Hodgkin-Huxley (HH)-type parallel conductance membrane model. The model captures the essential mechanisms underlying the slow oscillatory potentials and plateau potential oscillations. The main currents involved are: 1) a voltage-dependent fast calcium current, 2) a small conductance potassium current that is modulated by the cytosolic concentration of calcium, and 3) a slow voltage-activated potassium current. We developed multidimensional bifurcation diagrams and extracted the effective domains of sustained oscillations. The model includes a calcium balance due to the fundamental importance of calcium influx, as demonstrated by simultaneous electrophysiological and calcium imaging procedures. Although there is significant evidence to suggest a partially electrogenic calcium pump, all previous models considered only electrogenic pumps. We investigated the effect of the electrogenic calcium pump on the bifurcation diagram of the model and compared our findings against the experimental results.
The thermal-spike model description of the ion-irradiated polyimide
International Nuclear Information System (INIS)
Sun Youmei; Zhang Chonghong; Zhu Zhiyong; Wang Zhiguang; Jin Yunfan; Liu Jie; Wang Ying
2004-01-01
To describe the role of the electronic energy loss (dE/dx)_e in the chemical modification of polyimide (PI), multi-layer stacks (corresponding to different values of (dE/dx)_e) were irradiated with different swift heavy ions (1.158 GeV 56Fe and 1.755 GeV 136Xe) under vacuum and at room temperature. Chemical changes of the modified PI films were studied by Fourier transform infrared (FTIR) spectroscopy. The chain disruption rate of PI was investigated in the fluence range from 1 x 10^11 to 6 x 10^12 ions/cm^2 and over a wide electronic stopping power range (2.2-5.1 keV/nm for the Fe ions and 8.6-11.5 keV/nm for the Xe ions). Alkyne formation was observed over the electronic energy loss range of interest. By applying the saturated-track model assumption (the damage process occurs only in a cylinder of cross-section σ), the mean degradation and alkyne-formation radii in tracks were deduced for the Fe and Xe ion irradiations, respectively. The results were validated by the thermal-spike model. The analysis of the irradiated PI films shows that the predictions of the thermal-spike model of Szenes are in qualitative agreement with the shape of the experimental curves.
Directory of Open Access Journals (Sweden)
Liang Meng
Full Text Available A fundamental issue in neuroscience is how to identify the multiple biophysical mechanisms through which neurons generate observed patterns of spiking activity. In previous work, we proposed a method for linking observed patterns of spiking activity to specific biophysical mechanisms based on a state space modeling framework and a sequential Monte Carlo, or particle filter, estimation algorithm. We have shown, in simulation, that this approach is able to identify a space of simple biophysical models that were consistent with observed spiking data (and included the model that generated the data), but have yet to demonstrate the application of the method to identify realistic currents from real spike train data. Here, we apply the particle filter to spiking data recorded from rat layer V cortical neurons, and correctly identify the dynamics of a slow, intrinsic current. The underlying intrinsic current is successfully identified in four distinct neurons, even though the cells exhibit two distinct classes of spiking activity: regular spiking and bursting. This approach--linking statistical, computational, and experimental neuroscience--provides an effective technique to constrain detailed biophysical models to specific mechanisms consistent with observed spike train data.
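The estimation machinery (a state-space model plus a bootstrap particle filter) can be illustrated on a toy model: a one-dimensional AR(1) latent state observed through Bernoulli spikes. The model, parameters, and sigmoid link below are stand-ins chosen only to show the filter loop, not the biophysical models of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_filter(spikes, n_particles=1000, a=0.95, q=0.1, base=-2.0):
    """Bootstrap particle filter for a toy state-space spiking model:
    latent x_t = a*x_{t-1} + noise, spike probability sigmoid(base + x_t)."""
    particles = rng.standard_normal(n_particles)
    estimates = []
    for s in spikes:
        # propagate particles through the state equation
        particles = a * particles + np.sqrt(q) * rng.standard_normal(n_particles)
        p_spike = 1.0 / (1.0 + np.exp(-(base + particles)))
        w = p_spike if s else 1.0 - p_spike       # Bernoulli likelihood
        w = w / w.sum()
        estimates.append(np.dot(w, particles))    # posterior mean of the state
        # resample to avoid weight degeneracy
        idx = rng.choice(n_particles, n_particles, p=w)
        particles = particles[idx]
    return np.array(estimates)
```

In the paper the latent state is a vector of biophysical variables and the likelihood comes from the candidate current models; the propagate-weight-resample loop is the same.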
Simulating synchronization in neuronal networks
Fink, Christian G.
2016-06-01
We discuss several techniques used in simulating neuronal networks by exploring how a network's connectivity structure affects its propensity for synchronous spiking. Network connectivity is generated using the Watts-Strogatz small-world algorithm, and two key measures of network structure are described. These measures quantify structural characteristics that influence collective neuronal spiking, which is simulated using the leaky integrate-and-fire model. Simulations show that adding a small number of random connections to an otherwise lattice-like connectivity structure leads to a dramatic increase in neuronal synchronization.
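The pipeline described above (Watts-Strogatz connectivity feeding a leaky integrate-and-fire network) can be sketched as follows; all parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)

def watts_strogatz(n, k, p):
    """Ring lattice with k neighbours on each side; every edge is rewired
    to a random target with probability p."""
    A = np.zeros((n, n), dtype=bool)
    for i in range(n):
        for j in range(1, k + 1):
            t = (i + j) % n
            if rng.random() < p:            # rewire this edge
                t = int(rng.integers(n))
                while t == i or A[i, t]:
                    t = int(rng.integers(n))
            A[i, t] = A[t, i] = True
    return A

def simulate_lif(A, steps=2000, dt=0.1, tau=10.0, v_th=1.0, drive=1.2, g=0.05):
    """Leaky integrate-and-fire network on adjacency matrix A: a constant
    suprathreshold drive makes every unit fire intrinsically, and weak
    excitatory coupling g pulls units toward synchrony."""
    Af = A.astype(float)
    n = A.shape[0]
    v = rng.random(n) * v_th                # random initial potentials
    spikes = np.zeros((steps, n), dtype=bool)
    for t in range(steps):
        syn = g * (Af @ spikes[t - 1]) if t > 0 else 0.0
        v += dt / tau * (drive - v) + syn
        fired = v >= v_th
        v[fired] = 0.0
        spikes[t] = fired
    return spikes
```

Raising the rewiring probability p from 0 adds the "small number of random connections" whose effect on synchrony the abstract describes; a population synchrony measure can then be computed from the returned spike raster.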
Stochastic synchronization in finite size spiking networks
Doiron, Brent; Rinzel, John; Reyes, Alex
2006-09-01
We study stochastic synchronization of spiking activity in feedforward networks of integrate-and-fire model neurons. A stochastic mean field analysis shows that synchronization occurs only when the network size is sufficiently small. This gives evidence that the dynamics, and hence processing, of finite size populations can be drastically different from that observed in the infinite size limit. Our results agree with experimentally observed synchrony in cortical networks, and further strengthen the link between synchrony and propagation in cortical systems.
Spike-threshold adaptation predicted by membrane potential dynamics in vivo.
Directory of Open Access Journals (Sweden)
Bertrand Fontaine
2014-04-01
Full Text Available Neurons encode information in sequences of spikes, which are triggered when their membrane potential crosses a threshold. In vivo, the spiking threshold displays large variability, suggesting that threshold dynamics have a profound influence on how the combined input to a neuron is encoded in its spiking. Threshold variability could be explained by adaptation to the membrane potential. However, it could also be the case that most threshold variability reflects noise and processes other than threshold adaptation. Here, we investigated threshold variation in auditory neuron responses recorded in vivo in barn owls. We found that spike threshold is quantitatively predicted by a model in which the threshold adapts, tracking the membrane potential at a short timescale. As a result, in these neurons, slow voltage fluctuations do not contribute to spiking because they are filtered by threshold adaptation. More importantly, these neurons can only respond to input spikes arriving together on a millisecond timescale. These results demonstrate that fast adaptation to the membrane potential captures spike threshold variability in vivo.
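The adaptation model described above (a threshold that tracks the membrane potential at a short timescale, so only fast depolarizations trigger spikes) can be sketched like this; the time constants and voltage values are assumed illustrative numbers:

```python
import numpy as np

def adaptive_threshold(v, dt=0.1, tau_theta=5.0, alpha=0.8,
                       v_rest=-65.0, theta0=-45.0):
    """Spike threshold that tracks the membrane potential on a short
    timescale; returns the times at which v crosses the moving threshold."""
    theta = theta0
    spike_times = []
    for i, vm in enumerate(v):
        if vm >= theta:
            spike_times.append(i * dt)
        # threshold relaxes toward a level set by the current depolarization
        theta_inf = theta0 + alpha * (vm - v_rest)
        theta += dt / tau_theta * (theta_inf - theta)
    return np.array(spike_times)
```

A slow voltage ramp never crosses the threshold because theta rides up with it, while a fast step from rest does cross it, reproducing the coincidence-detection behaviour the abstract describes.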
A neuron-astrocyte transistor-like model for neuromorphic dressed neurons.
Valenza, G; Pioggia, G; Armato, A; Ferro, M; Scilingo, E P; De Rossi, D
2011-09-01
Experimental evidence on the role of synaptic glia as an active partner, together with the synapse, in neuronal signaling and the dynamics of neural tissue strongly suggests investigating more realistic neuron-glia models for a better understanding of human brain processing. Among the glial cells, astrocytes play a crucial role in the tripartite synapse, i.e., the dressed neuron. A well-known two-way astrocyte-neuron interaction can be found in the literature, completely revising the purely supportive role of the glia. The aim of this study is to provide a computationally efficient model for neuron-glia interaction. The neuron-glia interactions were simulated by implementing the Li-Rinzel model for an astrocyte and the Izhikevich model for a neuron. Assuming that the dressed-neuron dynamics are similar to the nonlinear input-output characteristics of a bipolar junction transistor, we derived our computationally efficient model. This model may represent the fundamental computational unit for the development of real-time artificial neuron-glia networks, opening new perspectives in pattern recognition systems and in brain neurophysiology. Copyright © 2011 Elsevier Ltd. All rights reserved.
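The neuronal half of the model above is the Izhikevich neuron, which is compact enough to sketch. The parameters a, b, c, d below are Izhikevich's standard regular-spiking values; the Li-Rinzel astrocyte coupling is omitted:

```python
import numpy as np

def izhikevich(I, dt=0.5, a=0.02, b=0.2, c=-65.0, d=8.0):
    """Izhikevich neuron with regular-spiking parameters; I is the input
    current per time step. Returns the clipped voltage trace and spike times."""
    v, u = -65.0, b * (-65.0)
    vs, spike_times = [], []
    for n, i_ext in enumerate(I):
        if v >= 30.0:                      # spike: reset v, bump recovery u
            spike_times.append(n * dt)
            v, u = c, u + d
        v += dt * (0.04 * v**2 + 5.0 * v + 140.0 - u + i_ext)
        u += dt * a * (b * v - u)
        vs.append(min(v, 30.0))
    return np.array(vs), np.array(spike_times)
```

In the paper's scheme, the astrocytic calcium dynamics would modulate the input current I, which is what the transistor-like abstraction summarizes.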
Numerical simulation of coherent resonance in a model network of Rulkov neurons
Andreev, Andrey V.; Runnova, Anastasia E.; Pisarchik, Alexander N.
2018-04-01
In this paper we study the spiking behaviour of a neuronal network consisting of Rulkov elements. We find that the regularity of this behaviour is maximized at a certain level of environmental noise. This effect, referred to as coherence resonance, is demonstrated in a random complex network of Rulkov neurons. An external stimulus added to some of the neurons excites them, and then activates other neurons in the network. The network coherence is also maximized at a certain stimulus amplitude.
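A single Rulkov map neuron, the element of the network above, can be iterated in a few lines. The parameter values (alpha = 4.5, mu = 0.001, sigma = 0.14) are typical bursting-regime choices from the map literature, not necessarily those of the paper:

```python
import numpy as np

def rulkov(alpha=4.5, mu=0.001, sigma=0.14, steps=50_000):
    """Iterate a single chaotic Rulkov map neuron: a fast variable x
    (membrane potential) and a slow variable y driving burst onset/offset."""
    x, y = -1.0, -3.0
    xs = np.empty(steps)
    for n in range(steps):
        # fast map plus slow feedback; old x is used in both updates
        x, y = alpha / (1.0 + x * x) + y, y - mu * (x + 1.0) + mu * sigma
        xs[n] = x
    return xs

xs = rulkov()
```

The map is orders of magnitude cheaper than an ODE neuron, which is why it is a popular element for large network studies such as the one above; the noise and stimulus terms of the paper would enter as additive perturbations of x.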
Automatic spike sorting using tuning information.
Ventura, Valérie
2009-09-01
Current spike sorting methods focus on clustering neurons' characteristic spike waveforms. The resulting spike-sorted data are typically used to estimate how covariates of interest modulate the firing rates of neurons. However, when these covariates do modulate the firing rates, they provide information about spikes' identities, which thus far have been ignored for the purpose of spike sorting. This letter describes a novel approach to spike sorting, which incorporates both waveform information and tuning information obtained from the modulation of firing rates. Because it efficiently uses all the available information, this spike sorter yields lower spike misclassification rates than traditional automatic spike sorters. This theoretical result is verified empirically on several examples. The proposed method does not require additional assumptions; only its implementation is different. It essentially consists of performing spike sorting and tuning estimation simultaneously rather than sequentially, as is currently done. We used an expectation-maximization maximum likelihood algorithm to implement the new spike sorter. We present the general form of this algorithm and provide a detailed implementable version under the assumptions that neurons are independent and spike according to Poisson processes. Finally, we uncover a systematic flaw of spike sorting based on waveform information only.
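The core idea above (responsibilities that combine waveform likelihood with tuning likelihood) can be sketched with a stripped-down EM under strong assumptions: unit-variance spherical Gaussian waveform clusters and known relative firing rates per covariate value. In the letter the tuning curves are Poisson models estimated jointly with the sorting:

```python
import numpy as np

def sort_spikes(waveforms, rates_given_cov, n_iter=50):
    """EM sketch of tuning-informed spike sorting. waveforms: (n, d) spike
    features; rates_given_cov: (n, k) relative firing rates of the k
    candidate neurons at each spike's covariate value (assumed known)."""
    n, _ = waveforms.shape
    k = rates_given_cov.shape[1]
    mu = waveforms[:: max(n // k, 1)][:k].astype(float).copy()  # init means
    for _ in range(n_iter):
        # E-step: waveform Gaussian times tuning likelihood, normalized
        d2 = ((waveforms[:, None, :] - mu[None, :, :]) ** 2).sum(axis=2)
        r = np.exp(-0.5 * d2) * rates_given_cov
        r /= r.sum(axis=1, keepdims=True)
        # M-step: responsibility-weighted waveform means
        mu = (r.T @ waveforms) / r.sum(axis=0)[:, None]
    return r.argmax(axis=1), mu
```

With uniform rates this reduces to ordinary waveform-only clustering; informative rates tilt ambiguous spikes toward the neuron most likely to be firing at that covariate value, which is the source of the lower misclassification rates reported.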
International Nuclear Information System (INIS)
Hasegawa, Hideo
2004-01-01
By extending a dynamical mean-field approximation previously proposed by the author [H. Hasegawa, Phys. Rev. E 67, 041903 (2003)], we have developed a semianalytical theory which takes into account a wide range of couplings in a small-world network. Our network consists of noisy N-unit FitzHugh-Nagumo neurons with couplings whose average coordination number Z may change from local (Z << N) to global couplings (Z = N-1) and/or whose concentration of random couplings p is allowed to vary from regular (p = 0) to completely random (p = 1). We have taken into account three kinds of spatial correlations: the on-site correlation, the correlation for a coupled pair, and that for a pair without direct couplings. The original 2N-dimensional stochastic differential equations are transformed to 13-dimensional deterministic differential equations expressed in terms of means, variances, and covariances of state variables. The synchronization ratio and the firing-time precision for a single applied spike have been discussed as functions of Z and p. Our calculations have shown that with increasing p, the synchronization is worse because of increased heterogeneous couplings, although the average network distance becomes shorter. Results calculated by our theory are in good agreement with those by direct simulations.
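The single-unit dynamics underlying the network above are the noisy FitzHugh-Nagumo equations, sketched here with standard textbook parameters (the couplings and the mean-field machinery itself are beyond a few lines):

```python
import numpy as np

rng = np.random.default_rng(7)

def fitzhugh_nagumo(I=0.5, eps=0.08, a=0.7, b=0.8, noise=0.05,
                    dt=0.01, steps=100_000):
    """Euler-Maruyama integration of one noisy FitzHugh-Nagumo unit;
    I = 0.5 puts the deterministic system in its oscillatory regime."""
    v, w = -1.0, -0.5
    vs = np.empty(steps)
    sq = noise * np.sqrt(dt)
    for n in range(steps):
        v += dt * (v - v**3 / 3.0 - w + I) + sq * rng.standard_normal()
        w += dt * eps * (v + a - b * w)
        vs[n] = v
    return vs

vs = fitzhugh_nagumo()
```

The mean-field theory of the paper replaces 2N such stochastic equations (plus coupling terms) with 13 deterministic equations for the means, variances, and covariances of v and w.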
Liberti, M; Paffi, A; Maggio, F; De Angelis, A; Apollonio, F; d'Inzeo, G
2009-01-01
A number of experimental investigations have evidenced the extraordinary sensitivity of neuronal cells to weak input stimulations, including electromagnetic (EM) fields. Moreover, it has been shown that biological noise, due to random channel gating, acts as a tuning factor in neuronal processing, according to the stochastic resonance (SR) paradigm. In this work the attention is focused on noise arising from the stochastic gating of ionic channels in a model of the Ranvier node of acoustic fibers. The small number of channels gives rise to a high noise level, which is able to cause spike train generation even in the absence of stimulation. An SR behavior has been observed in the model for the detection of sinusoidal signals at frequencies typical of speech.
Directory of Open Access Journals (Sweden)
Qiang Yu
Full Text Available A new learning rule (Precise-Spike-Driven (PSD) Synaptic Plasticity) is proposed for processing and memorizing spatiotemporal patterns. PSD is a supervised learning rule that is analytically derived from the traditional Widrow-Hoff rule and can be used to train neurons to associate an input spatiotemporal spike pattern with a desired spike train. Synaptic adaptation is driven by the error between the desired and the actual output spikes, with positive errors causing long-term potentiation and negative errors causing long-term depression. The amount of modification is proportional to an eligibility trace that is triggered by afferent spikes. The PSD rule is both computationally efficient and biologically plausible. The properties of this learning rule are investigated extensively through experimental simulations, including its learning performance, its generality to different neuron models, its robustness against noisy conditions, its memory capacity, and the effects of its learning parameters. Experimental results show that the PSD rule is capable of spatiotemporal pattern classification, and can even outperform a well-studied benchmark algorithm with the proposed relative confidence criterion. The PSD rule is further validated on a practical example of an optical character recognition problem. The results again show that it can achieve a good recognition performance with a proper encoding. Finally, a detailed discussion is provided about the PSD rule and several related algorithms including tempotron, SPAN, Chronotron and ReSuMe.
Yu, Qiang; Tang, Huajin; Tan, Kay Chen; Li, Haizhou
2013-01-01
A new learning rule (Precise-Spike-Driven (PSD) Synaptic Plasticity) is proposed for processing and memorizing spatiotemporal patterns. PSD is a supervised learning rule that is analytically derived from the traditional Widrow-Hoff rule and can be used to train neurons to associate an input spatiotemporal spike pattern with a desired spike train. Synaptic adaptation is driven by the error between the desired and the actual output spikes, with positive errors causing long-term potentiation and negative errors causing long-term depression. The amount of modification is proportional to an eligibility trace that is triggered by afferent spikes. The PSD rule is both computationally efficient and biologically plausible. The properties of this learning rule are investigated extensively through experimental simulations, including its learning performance, its generality to different neuron models, its robustness against noisy conditions, its memory capacity, and the effects of its learning parameters. Experimental results show that the PSD rule is capable of spatiotemporal pattern classification, and can even outperform a well studied benchmark algorithm with the proposed relative confidence criterion. The PSD rule is further validated on a practical example of an optical character recognition problem. The results again show that it can achieve a good recognition performance with a proper encoding. Finally, a detailed discussion is provided about the PSD rule and several related algorithms including tempotron, SPAN, Chronotron and ReSuMe.
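The weight update described above (desired-minus-actual output-spike error, gated by an eligibility trace of afferent spikes) can be sketched in discrete time. This is a simplified stand-in, not the published PSD derivation: the LIF-like dynamics, time step, and learning constants are all assumed:

```python
import numpy as np

rng = np.random.default_rng(5)

def psd_train(pre_spikes, desired, n_epochs=200, eta=0.05, tau=5.0,
              v_th=1.0, dt=1.0):
    """PSD-style supervised rule: positive errors give LTP and negative
    errors give LTD, scaled by each afferent's eligibility trace."""
    n_steps, n_in = pre_spikes.shape
    w = rng.random(n_in) * 0.1
    decay = np.exp(-dt / tau)
    for _ in range(n_epochs):
        v, trace = 0.0, np.zeros(n_in)
        for t in range(n_steps):
            trace = trace * decay + pre_spikes[t]     # eligibility traces
            v = v * decay + w @ pre_spikes[t]         # leaky membrane
            actual = 1.0 if v >= v_th else 0.0
            if actual:
                v = 0.0                               # reset after a spike
            w += eta * (desired[t] - actual) * trace
    return w
```

Because the error is zero whenever the output matches the target, the update stops exactly when the neuron emits the desired spike train, which is the fixed-point property the rule inherits from Widrow-Hoff.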
A Spiking Working Memory Model Based on Hebbian Short-Term Potentiation
Fiebig, Florian
2017-01-01
A dominant theory of working memory (WM), referred to as the persistent activity hypothesis, holds that recurrently connected neural networks, presumably located in the prefrontal cortex, encode and maintain WM memory items through sustained elevated activity. Reexamination of experimental data has shown that prefrontal cortex activity in single units during delay periods is much more variable than predicted by such a theory and associated computational models. Alternative models of WM maintenance based on synaptic plasticity, such as short-term nonassociative (non-Hebbian) synaptic facilitation, have been suggested but cannot account for encoding of novel associations. Here we test the hypothesis that a recently identified fast-expressing form of Hebbian synaptic plasticity (associative short-term potentiation) is a possible mechanism for WM encoding and maintenance. Our simulations using a spiking neural network model of cortex reproduce a range of cognitive memory effects in the classical multi-item WM task of encoding and immediate free recall of word lists. Memory reactivation in the model occurs in discrete oscillatory bursts rather than as sustained activity. We relate dynamic network activity as well as key synaptic characteristics to electrophysiological measurements. Our findings support the hypothesis that fast Hebbian short-term potentiation is a key WM mechanism. SIGNIFICANCE STATEMENT Working memory (WM) is a key component of cognition. Hypotheses about the neural mechanism behind WM are currently under revision. Reflecting recent findings of fast Hebbian synaptic plasticity in cortex, we test whether a cortical spiking neural network model with such a mechanism can learn a multi-item WM task (word list learning). We show that our model can reproduce human cognitive phenomena and achieve comparable memory performance in both free and cued recall while being simultaneously compatible with experimental data on structure, connectivity, and
A Spiking Working Memory Model Based on Hebbian Short-Term Potentiation.
Fiebig, Florian; Lansner, Anders
2017-01-04
A dominant theory of working memory (WM), referred to as the persistent activity hypothesis, holds that recurrently connected neural networks, presumably located in the prefrontal cortex, encode and maintain WM memory items through sustained elevated activity. Reexamination of experimental data has shown that prefrontal cortex activity in single units during delay periods is much more variable than predicted by such a theory and associated computational models. Alternative models of WM maintenance based on synaptic plasticity, such as short-term nonassociative (non-Hebbian) synaptic facilitation, have been suggested but cannot account for encoding of novel associations. Here we test the hypothesis that a recently identified fast-expressing form of Hebbian synaptic plasticity (associative short-term potentiation) is a possible mechanism for WM encoding and maintenance. Our simulations using a spiking neural network model of cortex reproduce a range of cognitive memory effects in the classical multi-item WM task of encoding and immediate free recall of word lists. Memory reactivation in the model occurs in discrete oscillatory bursts rather than as sustained activity. We relate dynamic network activity as well as key synaptic characteristics to electrophysiological measurements. Our findings support the hypothesis that fast Hebbian short-term potentiation is a key WM mechanism. Working memory (WM) is a key component of cognition. Hypotheses about the neural mechanism behind WM are currently under revision. Reflecting recent findings of fast Hebbian synaptic plasticity in cortex, we test whether a cortical spiking neural network model with such a mechanism can learn a multi-item WM task (word list learning). We show that our model can reproduce human cognitive phenomena and achieve comparable memory performance in both free and cued recall while being simultaneously compatible with experimental data on structure, connectivity, and neurophysiology of the underlying
Jurkojć, Jacek; Michnik, Robert; Czapla, Krzysztof
2017-06-01
This article deals with the kinematic and kinetic conditions of the volleyball attack and identifies loads in the shoulder joint. Joint angles and velocities of individual segments of the upper limb were measured with the motion capture system XSENS. Muscle forces and loads in the skeletal system were calculated by means of a mathematical model elaborated in the AnyBody system. Spikes performed by players in the best and worst way were compared with each other. Relationships were found between reactions in the shoulder joint and the flexion/extension, abduction/adduction and rotation angles in the same joint, and flexion/extension in the elbow joint. Reactions in the shoulder joint varied from 591 N to 2001 N (83-328% in relation to body weight [BW]). The analysis proved that hand velocity at the moment of the ball hit (which varied between 6.8 and 13.3 m/s) influences the value of the reaction in the joints, but the positions of individual segments relative to each other are also crucial. It was also shown objectively that the position of the upper limb during a spike can be more or less harmful, assuming that a bigger reaction increases the possibility of injury, which can be an indication for trainers and physiotherapists on how to improve injury prevention.
Spatio-Temporal Modeling of Neuron Fields
DEFF Research Database (Denmark)
Lund, Adam
The starting point and focal point for this thesis was stochastic dynamical modelling of neuronal imaging data, with the declared objective of drawing inference, within this model framework, in a large-scale (high-dimensional) data setting. Implicitly this objective entails carrying out three... be achieved if the scale of the data is taken into consideration throughout i)-iii). The strategy in this project was, relying on a space- and time-continuous stochastic modelling approach, to obtain a stochastic functional differential equation on a Hilbert space. By decomposing the drift operator... of this SFDE such that each component is essentially represented by a smooth function of time and space, and expanding these component functions in a tensor product basis, we implicitly reduce the number of model parameters. In addition, the component-wise tensor representation induces a corresponding component...
Cerebellar Nuclear Neurons Use Time and Rate Coding to Transmit Purkinje Neuron Pauses.
Sudhakar, Shyam Kumar; Torben-Nielsen, Benjamin; De Schutter, Erik
2015-12-01
Neurons of the cerebellar nuclei convey the final output of the cerebellum to their targets in various parts of the brain. Within the cerebellum their direct upstream connections originate from inhibitory Purkinje neurons. Purkinje neurons have a complex firing pattern of regular spikes interrupted by intermittent pauses of variable length. How can the cerebellar nucleus process this complex input pattern? In this modeling study, we investigate different forms of Purkinje neuron simple spike pause synchrony and its influence on candidate coding strategies in the cerebellar nuclei. That is, we investigate how different alignments of synchronous pauses in synthetic Purkinje neuron spike trains affect either time-locking or rate-changes in the downstream nuclei. We find that Purkinje neuron synchrony is mainly represented by changes in the firing rate of cerebellar nuclei neurons. Pause beginning synchronization produced a unique effect on nuclei neuron firing, while the effect of pause ending and pause overlapping synchronization could not be distinguished from each other. Pause beginning synchronization produced better time-locking of nuclear neurons for short length pauses. We also characterize the effect of pause length and spike jitter on the nuclear neuron firing. Additionally, we find that the rate of rebound responses in nuclear neurons after a synchronous pause is controlled by the firing rate of Purkinje neurons preceding it.
The Second Spiking Threshold: Dynamics of Laminar Network Spiking in the Visual Cortex
DEFF Research Database (Denmark)
Forsberg, Lars E.; Bonde, Lars H.; Harvey, Michael A.
2016-01-01
Most neurons have a threshold separating the silent non-spiking state and the state of producing temporal sequences of spikes. But neurons in vivo also have a second threshold, found recently in granular layer neurons of the primary visual cortex, separating spontaneous ongoing spiking from visually evoked spiking driven by sharp transients. Here we examine whether this second threshold exists outside the granular layer and examine details of transitions between spiking states in ferrets exposed to moving objects. We found the second threshold, separating spiking states evoked by stationary and moving visual stimuli from the spontaneous ongoing spiking state, in all layers and zones of areas 17 and 18, indicating that the second threshold is a property of the network. Spontaneous and evoked spiking can thus easily be distinguished. In addition, the trajectories of spontaneous ongoing states...
Spike-timing-based computation in sound localization.
Directory of Open Access Journals (Sweden)
Dan F M Goodman
2010-11-01
Full Text Available Spike timing is precise in the auditory system and it has been argued that it conveys information about auditory stimuli, in particular about the location of a sound source. However, beyond simple time differences, the way in which neurons might extract this information is unclear and the potential computational advantages are unknown. The computational difficulty of this task for an animal is to locate the source of an unexpected sound from two monaural signals that are highly dependent on the unknown source signal. In neuron models consisting of spectro-temporal filtering and spiking nonlinearity, we found that the binaural structure induced by spatialized sounds is mapped to synchrony patterns that depend on source location rather than on source signal. Location-specific synchrony patterns would then result in the activation of location-specific assemblies of postsynaptic neurons. We designed a spiking neuron model which exploited this principle to locate a variety of sound sources in a virtual acoustic environment using measured human head-related transfer functions. The model was able to accurately estimate the location of previously unknown sounds in both azimuth and elevation (including front/back discrimination) in a known acoustic environment. We found that multiple representations of different acoustic environments could coexist as sets of overlapping neural assemblies which could be associated with spatial locations by Hebbian learning. The model demonstrates the computational relevance of relative spike timing to extract spatial information about sources independently of the source signal.
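The location-from-timing idea can be caricatured without a spiking network: the interaural lag that best aligns the two ear signals identifies the source direction. A hedged sketch, assuming plain cross-correlation in place of the paper's neural assemblies and HRTF filtering; the function name `best_itd` and all values are illustrative.

```python
import numpy as np

def best_itd(left, right, fs, max_itd=0.8e-3):
    """Estimate the interaural time difference by cross-correlation.

    A stand-in for the location-specific synchrony idea: the lag that best
    aligns the two ear signals identifies the source direction."""
    max_lag = int(max_itd * fs)
    lags = np.arange(-max_lag, max_lag + 1)
    scores = [np.dot(left[max_lag:-max_lag],
                     np.roll(right, -l)[max_lag:-max_lag]) for l in lags]
    return lags[int(np.argmax(scores))] / fs

fs = 44100
rng = np.random.default_rng(1)
src = rng.standard_normal(fs // 10)   # 100 ms of broadband "source" noise
delay = 10                            # samples (~0.23 ms): right ear lags
left, right = src, np.roll(src, delay)
assert abs(best_itd(left, right, fs) * fs - delay) <= 1
```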
Connecting mirror neurons and forward models.
Miall, R C
2003-12-02
Two recent developments in motor neuroscience are promising the extension of theoretical concepts from motor control towards cognitive processes, including human social interactions and understanding the intentions of others. The first of these is the discovery of what are now called mirror neurons, which code for both observed and executed actions. The second is the concept of internal models, and in particular recent proposals that forward and inverse models operate in paired modules. These two ideas will be briefly introduced, and a recent suggestion linking between the two processes of mirroring and modelling will be described which may underlie our abilities for imitating actions, for cooperation between two actors, and possibly for communication via gesture and language.
Multineuronal Spike Sequences Repeat with Millisecond Precision
Directory of Open Access Journals (Sweden)
Koki Matsumoto
2013-06-01
Full Text Available Cortical microcircuits are nonrandomly wired by neurons. As a natural consequence, spikes emitted by microcircuits are also nonrandomly patterned in time and space. One of the prominent spike organizations is a repetition of fixed patterns of spike series across multiple neurons. However, several questions remain unsolved, including how precisely spike sequences repeat, how the sequences are spatially organized, how many neurons participate in sequences, and how different sequences are functionally linked. To address these questions, we monitored spontaneous spikes of hippocampal CA3 neurons ex vivo using a high-speed functional multineuron calcium imaging technique that allowed us to monitor spikes with millisecond resolution and to record the location of spiking and nonspiking neurons. Multineuronal spike sequences were overrepresented in spontaneous activity compared to the statistical chance level. Approximately 75% of neurons participated in at least one sequence during our observation period. The participants were sparsely dispersed and did not show specific spatial organization. The number of sequences relative to the chance level decreased when larger time frames were used to detect sequences. Thus, sequences were precise at the millisecond level. Sequences often shared common spikes with other sequences; parts of sequences were subsequently relayed by following sequences, generating complex chains of multiple sequences.
Shehzad, Danish; Bozkuş, Zeki
2016-01-01
Increase in complexity of neuronal network models has escalated efforts to make the NEURON simulation environment efficient. Computational neuroscientists divide the equations into subnets distributed amongst multiple processors to achieve better hardware performance. On parallel machines, interprocessor spike exchange consumes a large fraction of the overall simulation time for neuronal networks. In NEURON, the Message Passing Interface (MPI) is used for communication between processors. MPI_Allgather collecti...
Neural Spike-Train Analyses of the Speech-Based Envelope Power Spectrum Model
Rallapalli, Varsha H.
2016-01-01
Diagnosing and treating hearing impairment is challenging because people with similar degrees of sensorineural hearing loss (SNHL) often have different speech-recognition abilities. The speech-based envelope power spectrum model (sEPSM) has demonstrated that the signal-to-noise ratio (SNRENV) from a modulation filter bank provides a robust speech-intelligibility measure across a wider range of degraded conditions than many long-standing models. In the sEPSM, noise (N) is assumed to: (a) reduce S + N envelope power by filling in dips within clean speech (S) and (b) introduce an envelope noise floor from intrinsic fluctuations in the noise itself. While the promise of SNRENV has been demonstrated for normal-hearing listeners, it has not been thoroughly extended to hearing-impaired listeners because of limited physiological knowledge of how SNHL affects speech-in-noise envelope coding relative to noise alone. Here, envelope coding to speech-in-noise stimuli was quantified from auditory-nerve model spike trains using shuffled correlograms, which were analyzed in the modulation-frequency domain to compute modulation-band estimates of neural SNRENV. Preliminary spike-train analyses show strong similarities to the sEPSM, demonstrating feasibility of neural SNRENV computations. Results suggest that individual differences can occur based on differential degrees of outer- and inner-hair-cell dysfunction in listeners currently diagnosed into the single audiological SNHL category. The predicted acoustic-SNR dependence in individual differences suggests that the SNR-dependent rate of susceptibility could be an important metric in diagnosing individual differences. Future measurements of the neural SNRENV in animal studies with various forms of SNHL will provide valuable insight for understanding individual differences in speech-in-noise intelligibility.
Samura, Toshikazu; Hayashi, Hatsuo
2012-09-01
It has been demonstrated that theta rhythm propagates along the septotemporal axis of the hippocampal CA1 of the rat running on a track, and it has been suggested that directional spike propagation in the hippocampal CA3 is reflected in CA1. In this paper, we show that directional spike propagation occurs in a recurrent network model in which neurons are connected locally and connection weights are modified through STDP. The recurrent network model consists of excitatory and inhibitory neurons, which are intrinsic bursting and fast spiking neurons developed by Izhikevich, respectively. The maximum length of connections from excitatory neurons is shorter in the horizontal direction than the vertical direction. Connections from inhibitory neurons have the same maximum length in both directions, and the maximum length of inhibitory connections is the same as that of excitatory connections in the vertical direction. When connection weights between excitatory neurons (E→E) were modified through STDP and those from excitatory neurons to inhibitory neurons (E→I) were constant, spikes propagated in the vertical direction as expected from the network structure. However, when E→I connection weights were modified through STDP, as well as E→E connection weights, spikes propagated in the horizontal direction against the above expectation. This paradoxical propagation was produced by strengthened E→I connections which shifted the timing of inhibition forward. When E→I connections are enhanced, the direction of effective inhibition changes from horizontal to vertical, as if a gate for spike propagation is opened in the horizontal direction and firewalls come out in the vertical direction. These results suggest that the advance of timing of inhibition caused by potentiation of E→I connections is influential in network activity and is an important element in determining the direction of spike propagation. Copyright © 2012 Elsevier Ltd. All rights reserved.
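The pair-wise additive STDP rule used for the E→E and E→I connections above has a standard exponential-window form. A minimal sketch under illustrative parameters (the amplitudes and the 20 ms time constant are assumptions, not values from the paper):

```python
import numpy as np

def stdp_dw(pre, post, a_plus=0.01, a_minus=0.012, tau=0.02):
    """Pair-wise additive STDP: sum the exponential window over all
    pre/post spike pairs. Amplitudes and the 20 ms time constant are
    illustrative, not the values used in the paper."""
    dt = post[:, None] - pre[None, :]            # t_post - t_pre, all pairs
    ltp = a_plus * np.exp(-dt[dt > 0] / tau)     # pre-before-post: potentiate
    ltd = -a_minus * np.exp(dt[dt <= 0] / tau)   # post-before-pre: depress
    return ltp.sum() + ltd.sum()

pre = np.array([0.010, 0.050])         # spike times in seconds
post = np.array([0.012, 0.052])        # post follows pre by 2 ms each time
assert stdp_dw(pre, post) > 0          # causal pairing -> net potentiation
```

Reversing the roles of the two trains turns the causal pairs acausal and yields net depression, which is the asymmetry that shifts the timing of inhibition in the model.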
Impact of spike train autostructure on probability distribution of joint spike events.
Pipa, Gordon; Grün, Sonja; van Vreeswijk, Carl
2013-05-01
The discussion whether temporally coordinated spiking activity really exists and whether it is relevant has been heated over the past few years. To investigate this issue, several approaches have been taken to determine whether synchronized events occur significantly above chance, that is, whether they occur more often than expected if the neurons fire independently. Most investigations ignore or destroy the autostructure of the spiking activity of individual cells or assume Poissonian spiking as a model. Such methods that ignore the autostructure can significantly bias the coincidence statistics. Here, we study the influence of the autostructure on the probability distribution of coincident spiking events between tuples of mutually independent non-Poisson renewal processes. In particular, we consider two types of renewal processes that were suggested as appropriate models of experimental spike trains: a gamma and a log-normal process. For a gamma process, we characterize the shape of the distribution analytically with the Fano factor (FFc). In addition, we perform Monte Carlo estimations to derive the full shape of the distribution and the probability for false positives if a different process type is assumed as was actually present. We also determine how manipulations of such spike trains, here dithering, used for the generation of surrogate data change the distribution of coincident events and influence the significance estimation. We find, first, that the width of the coincidence count distribution and its FFc depend critically and in a nontrivial way on the detailed properties of the structure of the spike trains as characterized by the coefficient of variation CV. Second, the dependence of the FFc on the CV is complex and mostly nonmonotonic. Third, spike dithering, even if as small as a fraction of the interspike interval, can falsify the inference on coordinated firing.
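A Monte Carlo estimate in the spirit of this study can be sketched in a few lines: simulate independent gamma renewal processes, count coincident events, and form the Fano factor of the coincidence-count distribution. The rates, gamma order, and coincidence window below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def gamma_train(rate, order, t_max, rng):
    """Renewal spike train with gamma ISIs (mean ISI 1/rate, given order)."""
    isi = rng.gamma(order, 1.0 / (rate * order), size=int(2 * rate * t_max))
    t = np.cumsum(isi)
    return t[t < t_max]

def coincidences(a, b, width=0.005):
    """Spikes of `a` with a partner in `b` within +/- width seconds."""
    return int(np.sum(np.min(np.abs(a[:, None] - b[None, :]), axis=1) <= width))

# coincidence-count distribution between two independent gamma processes
counts = [coincidences(gamma_train(20.0, 4.0, 10.0, rng),
                       gamma_train(20.0, 4.0, 10.0, rng)) for _ in range(200)]
ff = np.var(counts) / np.mean(counts)   # Fano factor of coincidence counts
assert np.mean(counts) > 0 and ff > 0
```

Repeating this with different orders (regularity) changes the width of the count distribution, which is the paper's central point about the autostructure biasing coincidence statistics.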
Zhang, Xuming; Li, Liu; Zhu, Fei; Hou, Wenguang; Chen, Xinjian
2014-06-01
Optical coherence tomography (OCT) images are usually degraded by significant speckle noise, which will strongly hamper their quantitative analysis. However, speckle noise reduction in OCT images is particularly challenging because of the difficulty in differentiating between noise and the information components of the speckle pattern. To address this problem, the spiking cortical model (SCM)-based nonlocal means method is presented. The proposed method explores self-similarities of OCT images based on rotation-invariant features of image patches extracted by SCM and then restores the speckled images by averaging the similar patches. This method can provide sufficient speckle reduction while preserving image details very well due to its effectiveness in finding reliable similar patches under high speckle noise contamination. When applied to the retinal OCT image, this method provides signal-to-noise ratio improvements of >16 dB with a small 5.4% loss of similarity.
A hidden Markov model approach to neuron firing patterns.
Camproux, A C; Saunier, F; Chouvet, G; Thalabard, J C; Thomas, G
1996-11-01
Analysis and characterization of neuronal discharge patterns are of interest to neurophysiologists and neuropharmacologists. In this paper we present a hidden Markov model approach to modeling single neuron electrical activity. Basically the model assumes that each interspike interval corresponds to one of several possible states of the neuron. Fitting the model to experimental series of interspike intervals by maximum likelihood allows estimation of the number of possible underlying neuron states, the probability density functions of interspike intervals corresponding to each state, and the transition probabilities between states. We present an application to the analysis of recordings of a locus coeruleus neuron under three pharmacological conditions. The model distinguishes two states during halothane anesthesia and during recovery from halothane anesthesia, and four states after administration of clonidine. The transition probabilities yield additional insights into the mechanisms of neuron firing.
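The generative side of such a hidden Markov model is easy to sketch: each hidden state emits interspike intervals from its own distribution, and the state sequence follows a Markov chain. Gamma emission densities and all parameter values below are illustrative assumptions, not the paper's estimates.

```python
import numpy as np

def hmm_isis(n, trans, means, orders, rng):
    """Generate interspike intervals from a hidden Markov model: each hidden
    state emits gamma-distributed ISIs with its own mean and order."""
    means, orders = np.asarray(means), np.asarray(orders)
    states = np.empty(n, dtype=int)
    s = 0
    for i in range(n):
        states[i] = s
        s = rng.choice(len(means), p=trans[s])   # Markov state transition
    return rng.gamma(orders[states], means[states] / orders[states]), states

rng = np.random.default_rng(3)
trans = np.array([[0.95, 0.05],    # state 0: tonic firing, short regular ISIs
                  [0.10, 0.90]])   # state 1: pause-like, long irregular ISIs
isis, states = hmm_isis(2000, trans, means=[0.02, 0.20], orders=[10.0, 2.0],
                        rng=rng)
assert isis[states == 1].mean() > isis[states == 0].mean()
```

Maximum-likelihood fitting, as in the paper, would then recover the transition matrix and the per-state ISI densities from such a sequence.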
Directory of Open Access Journals (Sweden)
Zedong Bi
2016-08-01
Full Text Available Synapses may undergo variable changes during plasticity because of the variability of spike patterns, such as temporal stochasticity and spatial randomness. Here, we refer to the variability of synaptic weight changes during plasticity as efficacy variability. In this paper, we investigate how four aspects of spike pattern statistics (i.e., synchronous firing, burstiness/regularity, heterogeneity of rates, and heterogeneity of cross-correlations) influence the efficacy variability under pair-wise additive spike-timing-dependent plasticity (STDP) and synaptic homeostasis (the mean strength of plastic synapses into a neuron is bounded), by implementing spike shuffling methods on spike patterns self-organized by a network of excitatory and inhibitory leaky integrate-and-fire (LIF) neurons. With the increase of the decay time scale of the inhibitory synaptic currents, the LIF network undergoes a transition from an asynchronous state to a weakly synchronous state and then to a synchronous bursting state. We first shuffle these spike patterns using a variety of methods, each designed to evidently change a specific pattern statistic; we then investigate the change of efficacy variability of the synapses under STDP and synaptic homeostasis when the neurons in the network fire according to the spike patterns before and after being treated by a shuffling method. In this way, we can understand how the change of pattern statistics may cause the change of efficacy variability. Our results are consistent with those of our previous study, which implements spike-generating models on converging motifs. We also find that burstiness/regularity is important in determining the efficacy variability under asynchronous states, while heterogeneity of cross-correlations is the main factor causing efficacy variability when the network moves into synchronous bursting states (the states observed in epilepsy).
Models of the stochastic activity of neurones
Holden, Arun Vivian
1976-01-01
These notes have grown from a series of seminars given at Leeds between 1972 and 1975. They represent an attempt to gather together the different kinds of model which have been proposed to account for the stochastic activity of neurones, and to provide an introduction to this area of mathematical biology. A striking feature of the electrical activity of the nervous system is that it appears stochastic: this is apparent at all levels of recording, ranging from intracellular recordings to the electroencephalogram. The chapters start with fluctuations in membrane potential, proceed through single unit and synaptic activity and end with the behaviour of large aggregates of neurones: I have chosen this sequence to suggest that the interesting behaviour of the nervous system - its individuality, variability and dynamic forms - may in part result from the stochastic behaviour of its components. I would like to thank Dr. Julio Rubio for reading and commenting on the drafts, Mrs. Doris Beighton for producing the fin...
Experimental study and mathematical model on remediation of Cd spiked kaolinite by electrokinetics
International Nuclear Information System (INIS)
Mascia, Michele; Palmas, Simonetta; Polcaro, Anna Maria; Vacca, Annalisa; Muntoni, Aldo
2007-01-01
An experimental study on the electrokinetic removal of cadmium from kaolinitic clays is presented in this work, which aims to investigate the effect of surface reactions on the electrokinetic process. Enhanced electrokinetic tests were performed in which the pH of the compartments was controlled. Cadmium-spiked kaolin was adopted in the experimental runs. On the basis of the experimental results, a numerical model was formulated to simulate cadmium (Cd) transport under an electric field by combining a one-dimensional diffusion-advection model with a geochemical model: the combined model describes the contaminant transport driven by chemical and electrical gradients, as well as the effect of the surface reactions. The geochemical model utilized parameters derived from the literature, and it was validated by experimental data obtained from sorption and titration experiments. Electrokinetic tests were utilized to validate the results of the proposed model. A good prediction of the behaviour of the soil/cadmium ion system under the electrical field was obtained: the differences between experimental and model-predicted profiles for the species considered were less than 5% in all the examined conditions.
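The transport half of such a combined model can be sketched as an explicit finite-difference step for the one-dimensional diffusion-advection equation; the geochemical/surface-reaction coupling is omitted here, and every coefficient below is an illustrative assumption, not a fitted value from the study.

```python
import numpy as np

def transport_step(c, dx, dt, D, v):
    """One explicit finite-difference step of the 1-D diffusion-advection
    equation dc/dt = D*d2c/dx2 - v*dc/dx (upwind advection for v > 0).
    The surface-reaction/geochemical coupling of the paper is omitted."""
    diff = D * (np.roll(c, -1) - 2 * c + np.roll(c, 1)) / dx**2
    adv = -v * (c - np.roll(c, 1)) / dx          # upwind difference
    c_new = c + dt * (diff + adv)
    c_new[0], c_new[-1] = c_new[1], c_new[-2]    # crude boundary handling
    return c_new

# illustrative coefficients (stable: D*dt/dx^2 and v*dt/dx are small)
dx, dt, D, v = 0.01, 0.5, 1e-6, 5e-5
c = np.zeros(100)
c[10] = 1.0                                      # initial cadmium pulse
for _ in range(2000):
    c = transport_step(c, dx, dt, D, v)
assert c.argmax() > 10                           # the pulse drifts downfield
```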
Functionalized anatomical models for EM-neuron Interaction modeling
Neufeld, Esra; Cassará, Antonino Mario; Montanaro, Hazael; Kuster, Niels; Kainz, Wolfgang
2016-06-01
The understanding of interactions between electromagnetic (EM) fields and nerves is crucial in contexts ranging from therapeutic neurostimulation to low-frequency EM exposure safety. To properly consider the impact of in vivo induced field inhomogeneity on non-linear neuronal dynamics, coupled EM-neuronal dynamics modeling is required. For that purpose, novel functionalized computable human phantoms have been developed. Their implementation and the systematic verification of the integrated anisotropic quasi-static EM solver and neuronal dynamics modeling functionality, based on the method of manufactured solutions and numerical reference data, is described. Electric and magnetic stimulation of the ulnar and sciatic nerve were modeled to help understand a range of controversial issues related to the magnitude and optimal determination of strength-duration (SD) time constants. The results indicate the importance of considering the stimulation-specific inhomogeneous field distributions (especially at tissue interfaces), realistic models of non-linear neuronal dynamics, very short pulses, and suitable SD extrapolation models. These results and the functionalized computable phantom will influence and support the development of safe and effective neuroprosthetic devices and novel electroceuticals. Furthermore, they will assist the evaluation of existing low-frequency exposure standards for the entire population under all exposure conditions.
SuperSpike: Supervised Learning in Multilayer Spiking Neural Networks.
Zenke, Friedemann; Ganguli, Surya
2018-04-13
A vast majority of computation in the brain is performed by spiking neural networks. Despite the ubiquity of such spiking, we currently lack an understanding of how biological spiking neural circuits learn and compute in vivo, as well as how we can instantiate such capabilities in artificial spiking circuits in silico. Here we revisit the problem of supervised learning in temporally coding multilayer spiking neural networks. First, by using a surrogate gradient approach, we derive SuperSpike, a nonlinear voltage-based three-factor learning rule capable of training multilayer networks of deterministic integrate-and-fire neurons to perform nonlinear computations on spatiotemporal spike patterns. Second, inspired by recent results on feedback alignment, we compare the performance of our learning rule under different credit assignment strategies for propagating output errors to hidden units. Specifically, we test uniform, symmetric, and random feedback, finding that simpler tasks can be solved with any type of feedback, while more complex tasks require symmetric feedback. In summary, our results open the door to obtaining a better scientific understanding of learning and computation in spiking neural networks by advancing our ability to train them to solve nonlinear problems involving transformations between different spatiotemporal spike time patterns.
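The surrogate gradient at the heart of SuperSpike replaces the zero-almost-everywhere derivative of the hard spike threshold with the derivative of a fast sigmoid. A minimal sketch; `beta` is an illustrative steepness parameter, not the paper's setting.

```python
import numpy as np

def surrogate_grad(v, beta=10.0):
    """SuperSpike-style surrogate derivative of the spike nonlinearity:
    the derivative of a fast sigmoid, 1 / (beta*|v| + 1)^2, evaluated at
    the membrane potential v relative to threshold (beta is illustrative)."""
    return 1.0 / (beta * np.abs(v) + 1.0) ** 2

# the step-function spike nonlinearity has zero derivative almost everywhere,
# so gradient-based learning uses this smooth stand-in instead
v = np.linspace(-1.0, 1.0, 5)
g = surrogate_grad(v)
assert g.argmax() == 2 and g[2] == 1.0   # largest at the threshold (v = 0)
```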
Karmeshu; Gupta, Varun; Kadambari, K V
2011-06-01
A single neuronal model incorporating distributed delay (memory)is proposed. The stochastic model has been formulated as a Stochastic Integro-Differential Equation (SIDE) which results in the underlying process being non-Markovian. A detailed analysis of the model when the distributed delay kernel has exponential form (weak delay) has been carried out. The selection of exponential kernel has enabled the transformation of the non-Markovian model to a Markovian model in an extended state space. For the study of First Passage Time (FPT) with exponential delay kernel, the model has been transformed to a system of coupled Stochastic Differential Equations (SDEs) in two-dimensional state space. Simulation studies of the SDEs provide insight into the effect of weak delay kernel on the Inter-Spike Interval(ISI) distribution. A measure based on Jensen-Shannon divergence is proposed which can be used to make a choice between two competing models viz. distributed delay model vis-á-vis LIF model. An interesting feature of the model is that the behavior of (CV(t))((ISI)) (Coefficient of Variation) of the ISI distribution with respect to memory kernel time constant parameter η reveals that neuron can switch from a bursting state to non-bursting state as the noise intensity parameter changes. The membrane potential exhibits decaying auto-correlation structure with or without damped oscillatory behavior depending on the choice of parameters. This behavior is in agreement with empirically observed pattern of spike count in a fixed time window. The power spectral density derived from the auto-correlation function is found to exhibit single and double peaks. The model is also examined for the case of strong delay with memory kernel having the form of Gamma distribution. In contrast to fast decay of damped oscillations of the ISI distribution for the model with weak delay kernel, the decay of damped oscillations is found to be slower for the model with strong delay kernel.
A minimal model for a slow pacemaking neuron
International Nuclear Information System (INIS)
Zakharov, D.G.; Kuznetsov, A.
2012-01-01
Highlights: ► We have constructed a phenomenological model for slow pacemaking neurons. ► The model implements a nonlinearity introduced by an ion-dependent current. ► The new nonlinear dependence allows for differentiating responses to various stimuli. ► We discuss implications of our results for a broad class of neurons. - Abstract: We have constructed a phenomenological model for slow pacemaking neurons. These are neurons that generate very regular periodic oscillations of the membrane potential. Many of these neurons also respond differentially to various types of stimulation. The model is based on the FitzHugh–Nagumo (FHN) oscillator and implements a nonlinearity introduced by a current that depends on an ion concentration. The comparison with the original FHN oscillator has shown that the new nonlinear dependence allows for differentiating responses to various stimuli. We discuss implications of our results for a broad class of neurons.
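For reference, the baseline FitzHugh-Nagumo oscillator that the paper modifies can be integrated in a few lines. Textbook parameter values are used here, not the paper's, and the ion-concentration-dependent current is omitted.

```python
import numpy as np

def fhn(T=200.0, dt=0.01, a=0.7, b=0.8, tau=12.5, I=0.5):
    """Plain FitzHugh-Nagumo oscillator (Euler steps), the baseline the
    paper extends with an ion-concentration-dependent current:
        dv/dt = v - v^3/3 - w + I
        dw/dt = (v + a - b*w) / tau
    Parameters are textbook values, not the paper's."""
    v, w = -1.0, 1.0
    vs = np.empty(int(T / dt))
    for i in range(len(vs)):
        v += dt * (v - v**3 / 3 - w + I)
        w += dt * (v + a - b * w) / tau
        vs[i] = v
    return vs

vs = fhn()
# relaxation oscillations: the trace repeatedly crosses v = 0 upward
crossings = np.sum((vs[:-1] < 0) & (vs[1:] >= 0))
assert crossings >= 3
```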
A Neuronal Network Model for Pitch Selectivity and Representation
Huang, Chengcheng; Rinzel, John
2016-01-01
Pitch is a perceptual correlate of periodicity. Sounds with distinct spectra can elicit the same pitch. Despite the importance of pitch perception, understanding the cellular mechanism of pitch perception is still a major challenge and a mechanistic model of pitch is lacking. A multi-stage neuronal network model is developed for pitch frequency estimation using biophysically-based, high-resolution coincidence detector neurons. The neuronal units respond only to highly coincident input among c...
Merrison-Hort, Robert; Soffe, Stephen R; Borisyuk, Roman
2018-01-01
Although, in most animals, brain connectivity varies between individuals, behaviour is often similar across a species. What fundamental structural properties are shared across individual networks that define this behaviour? We describe a probabilistic model of connectivity in the hatchling Xenopus tadpole spinal cord which, when combined with a spiking model, reliably produces rhythmic activity corresponding to swimming. The probabilistic model allows calculation of structural characteristics that reflect common network properties, independent of individual network realisations. We use the structural characteristics to study examples of neuronal dynamics, in the complete network and various sub-networks, and this allows us to explain the basis for key experimental findings, and make predictions for experiments. We also study how structural and functional features differ between detailed anatomical connectomes and those generated by our new, simpler, model (meta-model). PMID:29589828
Directory of Open Access Journals (Sweden)
Loreen Hertäg
2012-09-01
Full Text Available For large-scale network simulations, it is often desirable to have computationally tractable, yet in a defined sense still physiologically valid neuron models. In particular, these models should be able to reproduce physiological measurements, ideally in a predictive sense, and under different input regimes in which neurons may operate in vivo. Here we present an approach to parameter estimation for a simple spiking neuron model mainly based on standard f-I curves obtained from in vitro recordings. Such recordings are routinely obtained in standard protocols and assess a neuron's response under a wide range of mean input currents. Our fitting procedure makes use of closed-form expressions for the firing rate derived from an approximation to the adaptive exponential integrate-and-fire (AdEx) model. The resulting fitting process is simple and about two orders of magnitude faster compared to methods based on numerical integration of the differential equations. We probe this method on different cell types recorded from rodent prefrontal cortex. After fitting to the f-I current-clamp data, the model cells are tested on completely different sets of recordings obtained with fluctuating ('in-vivo-like') input currents. For a wide range of different input regimes, cell types, and cortical layers, the model could predict spike times on these test traces quite accurately within the bounds of physiological reliability, although no information from these distinct test sets was used for model fitting. Further analyses delineated some of the empirical factors constraining model fitting and the model's generalization performance. An even simpler adaptive LIF neuron was also examined in this context. Hence, we have developed a 'high-throughput' model fitting procedure which is simple and fast, with good prediction performance, and which relies only on firing rate information and standard physiological data widely and easily available.
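The idea of fitting a closed-form firing-rate expression directly to f-I data can be illustrated with the simpler LIF rate formula in place of the paper's AdEx-based expressions. All units and parameter values below are illustrative assumptions; the fit is a plain grid search on synthetic data.

```python
import numpy as np

def lif_rate(I, tau, R=0.1, v_th=0.015, t_ref=0.002):
    """Closed-form LIF f-I curve, a simpler stand-in for the paper's
    AdEx-based expressions. All units and values here are illustrative."""
    I = np.asarray(I, dtype=float)
    drive = I * R
    f = np.zeros_like(I)                 # subthreshold inputs -> zero rate
    supra = drive > v_th
    f[supra] = 1.0 / (t_ref + tau * np.log(drive[supra] / (drive[supra] - v_th)))
    return f

# synthetic "recorded" f-I data from a ground-truth tau, then a grid-search fit
I = np.linspace(0.0, 1.0, 40)
target = lif_rate(I, tau=0.020)
taus = np.linspace(0.005, 0.050, 91)
errs = [np.sum((lif_rate(I, t) - target) ** 2) for t in taus]
tau_fit = taus[int(np.argmin(errs))]
assert abs(tau_fit - 0.020) < 1e-3
```

Because the rate is available in closed form, each candidate parameter set costs one vectorized evaluation rather than a full numerical integration, which is the source of the speed-up the paper reports.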
Neural Spike Train Synchronisation Indices: Definitions, Interpretations and Applications.
Halliday, D M; Rosenberg, J R
2017-04-24
A comparison of previously defined spike train synchronization indices is undertaken within a stochastic point process framework. The second order cumulant density (covariance density) is shown to be common to all the indices. Simulation studies were used to investigate the sampling variability of a single index based on the second order cumulant. The simulations used a paired motoneurone model and a paired regular spiking cortical neurone model. The sampling variability of spike trains generated under identical conditions from the paired motoneurone model varied from 50% to 160% of the estimated value. On theoretical grounds, and on the basis of simulated data, a rate dependence is present in all synchronization indices. The application of coherence and pooled coherence estimates to the issue of synchronization indices is considered. This alternative frequency domain approach allows an arbitrary number of spike train pairs to be evaluated for statistically significant differences, and combined into a single population measure. The pooled coherence framework allows pooled time domain measures to be derived; application of this to the simulated data is illustrated. Data from the cortical neurone model is generated over a wide range of firing rates (1-250 spikes/sec). The pooled coherence framework correctly characterizes the sampling variability as not significant over this wide operating range. The broader applicability of this approach to multi-electrode array data is briefly discussed.
International Nuclear Information System (INIS)
Cofré, Rodrigo; Cessac, Bruno
2013-01-01
We investigate the effect of electric synapses (gap junctions) on collective neuronal dynamics and spike statistics in a conductance-based integrate-and-fire neural network, driven by Brownian noise, where conductances depend upon spike history. We compute explicitly the time evolution operator and show that, given the spike-history of the network and the membrane potentials at a given time, the further dynamical evolution can be written in a closed form. We show that spike train statistics is described by a Gibbs distribution whose potential can be approximated with an explicit formula, when the noise is weak. This potential form encompasses existing models for spike train statistics analysis such as maximum entropy models or generalized linear models (GLM). We also discuss the different types of correlations: those induced by a shared stimulus and those induced by neuron interactions.
Event-Driven Contrastive Divergence for Spiking Neuromorphic Systems
Directory of Open Access Journals (Sweden)
Emre eNeftci
2014-01-01
Full Text Available Restricted Boltzmann Machines (RBMs) and Deep Belief Networks have been demonstrated to perform efficiently in a variety of applications, such as dimensionality reduction, feature learning, and classification. Their implementation on neuromorphic hardware platforms emulating large-scale networks of spiking neurons can have significant advantages from the perspectives of scalability, power dissipation and real-time interfacing with the environment. However, the traditional RBM architecture and the commonly used training algorithm known as Contrastive Divergence (CD) are based on discrete updates and exact arithmetic which do not directly map onto a dynamical neural substrate. Here, we present an event-driven variation of CD to train an RBM constructed with Integrate & Fire (I&F) neurons, constrained by the limitations of existing and near-future neuromorphic hardware platforms. Our strategy is based on neural sampling, which allows us to synthesize a spiking neural network that samples from a target Boltzmann distribution. The reverberating activity of the network replaces the discrete steps of the CD algorithm, while Spike Time Dependent Plasticity (STDP) carries out the weight updates in an online, asynchronous fashion. We demonstrate our approach by training an RBM composed of leaky I&F neurons with STDP synapses to learn a generative model of the MNIST hand-written digit dataset, and by testing it in recognition, generation and cue integration tasks. Our results contribute to a machine learning-driven approach for synthesizing networks of spiking neurons capable of carrying out practical, high-level functionality.
Fast computation with spikes in a recurrent neural network
International Nuclear Information System (INIS)
Jin, Dezhe Z.; Seung, H. Sebastian
2002-01-01
Neural networks with recurrent connections are sometimes regarded as too slow at computation to serve as models of the brain. Here we analytically study a counterexample, a network consisting of N integrate-and-fire neurons with self-excitation, all-to-all inhibition, instantaneous synaptic coupling, and constant external driving inputs. When the inhibition and/or excitation are large enough, the network performs a winner-take-all computation for all possible external inputs and initial states of the network. The computation is done very quickly: As soon as the winner spikes once, the computation is completed since no other neurons will spike. For some initial states, the winner is the first neuron to spike, and the computation is done at the first spike of the network. In general, there are M potential winners, corresponding to the top M external inputs. When the external inputs are close in magnitude, M tends to be larger. If M>1, the selection of the actual winner is strongly influenced by the initial states. If a special relation between the excitation and inhibition is satisfied, the network always selects the neuron with the maximum external input as the winner.
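A toy Euler-discretized version of such a winner-take-all network can make the mechanism concrete. The paper's analysis is exact and event-based; here time is discretized, and the coupling strengths, time constants, and reset scheme are illustrative assumptions chosen so that inhibition dominates.

```python
def winner_take_all(inputs, v_th=1.0, tau=0.02, dt=1e-4, t_max=0.5,
                    inhibition=2.0, self_exc=0.6):
    """Simulate N leaky integrate-and-fire neurons with all-to-all
    instantaneous inhibition and self-excitation. Returns the index of
    the first neuron to spike and how many distinct neurons ever spiked."""
    n = len(inputs)
    v = [0.0] * n
    spiked = set()
    first = None
    t = 0.0
    while t < t_max:
        # forward-Euler membrane update: dv/dt = -v/tau + I
        for i in range(n):
            v[i] += dt * (-v[i] / tau + inputs[i])
        # spike detection (at most one spike per step, for simplicity)
        for i in range(n):
            if v[i] >= v_th:
                if first is None:
                    first = i
                spiked.add(i)
                v[i] = self_exc            # reset boosted by self-excitation
                for j in range(n):
                    if j != i:
                        v[j] -= inhibition  # strong instantaneous inhibition
                break
        t += dt
    return first, len(spiked)
```

With inhibition this strong, the neuron receiving the largest input reaches threshold first and thereafter keeps every competitor suppressed, so the computation is effectively finished at the first spike, as the abstract describes.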
Event-driven contrastive divergence for spiking neuromorphic systems.
Neftci, Emre; Das, Srinjoy; Pedroni, Bruno; Kreutz-Delgado, Kenneth; Cauwenberghs, Gert
2013-01-01
Restricted Boltzmann Machines (RBMs) and Deep Belief Networks have been demonstrated to perform efficiently in a variety of applications, such as dimensionality reduction, feature learning, and classification. Their implementation on neuromorphic hardware platforms emulating large-scale networks of spiking neurons can have significant advantages from the perspectives of scalability, power dissipation and real-time interfacing with the environment. However, the traditional RBM architecture and the commonly used training algorithm known as Contrastive Divergence (CD) are based on discrete updates and exact arithmetic which do not directly map onto a dynamical neural substrate. Here, we present an event-driven variation of CD to train an RBM constructed with Integrate & Fire (I&F) neurons, constrained by the limitations of existing and near-future neuromorphic hardware platforms. Our strategy is based on neural sampling, which allows us to synthesize a spiking neural network that samples from a target Boltzmann distribution. The recurrent activity of the network replaces the discrete steps of the CD algorithm, while Spike Time Dependent Plasticity (STDP) carries out the weight updates in an online, asynchronous fashion. We demonstrate our approach by training an RBM composed of leaky I&F neurons with STDP synapses to learn a generative model of the MNIST hand-written digit dataset, and by testing it in recognition, generation and cue integration tasks. Our results contribute to a machine learning-driven approach for synthesizing networks of spiking neurons capable of carrying out practical, high-level functionality.
Directory of Open Access Journals (Sweden)
Janelle Drouin-Ouellet
2017-09-01
Full Text Available Direct neuronal reprogramming, by which a neuron is formed via direct conversion from a somatic cell without going through a pluripotent intermediate stage, allows for the possibility of generating patient-derived neurons. A unique feature of these so-called induced neurons (iNs) is the potential to maintain aging and epigenetic signatures of the donor, which is critical given that many diseases of the CNS are age related. Here, we review the published literature on the work that has been undertaken using iNs to model human brain disorders. Furthermore, as disease-modeling studies using this direct neuronal reprogramming approach are becoming more widely adopted, it is important to assess the criteria that are used to characterize the iNs, especially in relation to the extent to which they are mature adult neurons. In particular: (i) what constitutes an iN cell; (ii) which stages of conversion offer the earliest/optimal time to assess features that are specific to neurons and/or a disorder; and (iii) whether generating subtype-specific iNs is critical to the disease-related features that iNs express. Finally, we discuss the range of potential biomedical applications that can be explored using patient-specific models of neurological disorders with iNs, and the challenges that will need to be overcome in order to realize these applications.
Design of memristive interface between electronic neurons
Gerasimova, S. A.; Mikhaylov, A. N.; Belov, A. I.; Korolev, D. S.; Guseinov, D. V.; Lebedeva, A. V.; Gorshkov, O. N.; Kazantsev, V. B.
2018-05-01
Nonlinear dynamics of two electronic oscillators coupled via a memristive device has been investigated. Such a model mimics the interaction between synaptically coupled brain neurons, with the memristive device imitating the neuron's axon. The synaptic connection is provided by the adaptive behavior of the memristive device, which changes its resistance under the action of spike-like activity. A mathematical model of this memristive interface has been developed to describe and predict the experimentally observed regularities of forced synchronization of neuron-like oscillators.
Simple networks for spike-timing-based computation, with application to olfactory processing.
Brody, Carlos D; Hopfield, J J
2003-03-06
Spike synchronization across neurons can be selective for the situation where neurons are driven at similar firing rates, a "many are equal" computation. This can be achieved in the absence of synaptic interactions between neurons, through phase locking to a common underlying oscillatory potential. Based on this principle, we instantiate an algorithm for robust odor recognition into a model network of spiking neurons whose main features are taken from known properties of biological olfactory systems. Here, recognition of odors is signaled by spike synchronization of specific subsets of "mitral cells." This synchronization is highly odor selective and invariant to a wide range of odor concentrations. It is also robust to the presence of strong distractor odors, thus allowing odor segmentation within complex olfactory scenes. Information about odors is encoded in both the identity of glomeruli activated above threshold (1 bit of information per glomerulus) and in the analog degree of activation of the glomeruli (approximately 3 bits per glomerulus).
Transitions to Synchrony in Coupled Bursting Neurons
Dhamala, Mukeshwar; Jirsa, Viktor K.; Ding, Mingzhou
2004-01-01
Certain cells in the brain, for example, thalamic neurons during sleep, show spike-burst activity. We study such spike-burst neural activity and the transitions to a synchronized state using a model of coupled bursting neurons. In an electrically coupled network, we show that the increase of coupling strength increases incoherence first and then induces two different transitions to synchronized states, one associated with bursts and the other with spikes. These sequential transitions to synchronized states are determined by the zero crossings of the maximum transverse Lyapunov exponents. These results suggest that synchronization of spike-burst activity is a multi-time-scale phenomenon and burst synchrony is a precursor to spike synchrony.
Transitions to synchrony in coupled bursting neurons
International Nuclear Information System (INIS)
Dhamala, Mukeshwar; Jirsa, Viktor K.; Ding, Mingzhou
2004-01-01
Certain cells in the brain, for example, thalamic neurons during sleep, show spike-burst activity. We study such spike-burst neural activity and the transitions to a synchronized state using a model of coupled bursting neurons. In an electrically coupled network, we show that the increase of coupling strength increases incoherence first and then induces two different transitions to synchronized states, one associated with bursts and the other with spikes. These sequential transitions to synchronized states are determined by the zero crossings of the maximum transverse Lyapunov exponents. These results suggest that synchronization of spike-burst activity is a multi-time-scale phenomenon and burst synchrony is a precursor to spike synchrony.
A decision-making model based on a spiking neural circuit and synaptic plasticity.
Wei, Hui; Bu, Yijie; Dai, Dawei
2017-10-01
To adapt to the environment and survive, most animals can control their behaviors by making decisions. The process of decision-making and responding according to cues in the environment is stable, sustainable, and learnable. Understanding how behaviors are regulated by neural circuits and the encoding and decoding mechanisms from stimuli to responses are important goals in neuroscience. From results observed in Drosophila experiments, the underlying decision-making process is discussed, and a neural circuit that implements a two-choice decision-making model is proposed to explain and reproduce the observations. Compared with previous two-choice decision-making models, our model uses synaptic plasticity to explain changes in decision output given the same environment. Moreover, biological meanings of parameters of our decision-making model are discussed. In this paper, we explain at the micro-level (i.e., neurons and synapses) how observable decision-making behavior at the macro-level is acquired and achieved.
Bursts generate a non-reducible spike-pattern code
Directory of Open Access Journals (Sweden)
Hugo G Eyherabide
2009-05-01
Full Text Available On the single-neuron level, precisely timed spikes can either constitute firing-rate codes or spike-pattern codes that utilize the relative timing between consecutive spikes. There has been little experimental support for the hypothesis that such temporal patterns contribute substantially to information transmission. Using grasshopper auditory receptors as a model system, we show that correlations between spikes can be used to represent behaviorally relevant stimuli. The correlations reflect the inner structure of the spike train: a succession of burst-like patterns. We demonstrate that bursts with different spike counts encode different stimulus features, such that about 20% of the transmitted information corresponds to discriminating between different features, and the remaining 80% is used to allocate these features in time. In this spike-pattern code, the "what" and the "when" of the stimuli are encoded in the duration of each burst and the time of burst onset, respectively. Given the ubiquity of burst firing, we expect similar findings also for other neural systems.
Gu, Yameng; Zhang, Xuming
2017-05-01
Optical coherence tomography (OCT) images are severely degraded by speckle noise. Existing methods for despeckling multiframe OCT data cannot deliver sufficient speckle suppression while preserving image details well. To address this problem, the spiking cortical model (SCM) based non-local means (NLM) method has been proposed in this letter. In the proposed method, the considered frame and two neighboring frames are input into three SCMs to generate the temporal series of pulse outputs. The normalized moment of inertia (NMI) of the considered patches in the pulse outputs is extracted to represent the rotational and scaling invariant features of the corresponding patches in each frame. The pixel similarity is computed based on the Euclidean distance between the NMI features and used as the weight. Each pixel in the considered frame is restored by the weighted averaging of all pixels in the pre-defined search window in the three frames. Experiments on the real multiframe OCT data of the pig eye demonstrate the advantage of the proposed method over the frame averaging method, the multiscale sparsity based tomographic denoising method, the wavelet-based method and the traditional NLM method in terms of visual inspection and objective metrics such as signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), equivalent number of looks (ENL) and cross-correlation (XCOR).
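The core of the method above is non-local means: restoring each pixel as a similarity-weighted average over a search window. The sketch below is a plain single-frame NLM in pure Python, using raw patch Euclidean distance where the paper uses rotation- and scale-invariant NMI features from spiking-cortical-model pulse outputs, and omitting the three-frame averaging; patch size, search radius, and the filtering parameter h are illustrative.

```python
import math

def nlm_denoise(img, patch=1, search=3, h=0.3):
    """Single-frame non-local means: each pixel becomes a weighted
    average of pixels in a search window, with weights decaying in the
    Euclidean distance between the surrounding patches."""
    rows, cols = len(img), len(img[0])
    out = [[0.0] * cols for _ in range(rows)]

    def patch_at(r, c):
        # flattened (2*patch+1)^2 patch, clamping indices at the borders
        return [img[min(max(r + dr, 0), rows - 1)][min(max(c + dc, 0), cols - 1)]
                for dr in range(-patch, patch + 1)
                for dc in range(-patch, patch + 1)]

    for r in range(rows):
        for c in range(cols):
            p0 = patch_at(r, c)
            wsum = vsum = 0.0
            for dr in range(-search, search + 1):
                for dc in range(-search, search + 1):
                    r2 = min(max(r + dr, 0), rows - 1)
                    c2 = min(max(c + dc, 0), cols - 1)
                    p1 = patch_at(r2, c2)
                    d2 = sum((a - b) ** 2 for a, b in zip(p0, p1)) / len(p0)
                    w = math.exp(-d2 / (h * h))
                    wsum += w
                    vsum += w * img[r2][c2]
            out[r][c] = vsum / wsum
    return out
```

Swapping the raw patch distance for a feature distance, as the paper does with NMI, only changes how `d2` is computed; the weighted-averaging skeleton stays the same.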
Nomura, Toshihiro; Musial, Timothy F; Marshall, John J; Zhu, Yiwen; Remmers, Christine L; Xu, Jian; Nicholson, Daniel A; Contractor, Anis
2017-11-22
Fragile X syndrome (FXS) is a neurodevelopmental disorder that is a leading cause of inherited intellectual disability, and the most common known cause of autism spectrum disorder. FXS is broadly characterized by sensory hypersensitivity and several developmental alterations in synaptic and circuit function have been uncovered in the sensory cortex of the mouse model of FXS (Fmr1 KO). GABA-mediated neurotransmission and fast-spiking (FS) GABAergic interneurons are central to cortical circuit development in the neonate. Here we demonstrate that there is a delay in the maturation of the intrinsic properties of FS interneurons in the sensory cortex, and a deficit in the formation of excitatory synaptic inputs on to these neurons in neonatal Fmr1 KO mice. Both these delays in neuronal and synaptic maturation were rectified by chronic administration of a TrkB receptor agonist. These results demonstrate that the maturation of the GABAergic circuit in the sensory cortex is altered during a critical developmental period due in part to a perturbation in BDNF-TrkB signaling, and could contribute to the alterations in cortical development underlying the sensory pathophysiology of FXS. SIGNIFICANCE STATEMENT Fragile X (FXS) individuals have a range of sensory related phenotypes, and there is growing evidence of alterations in neuronal circuits in the sensory cortex of the mouse model of FXS (Fmr1 KO). GABAergic interneurons are central to the correct formation of circuits during cortical critical periods. Here we demonstrate a delay in the maturation of the properties and synaptic connectivity of interneurons in Fmr1 KO mice during a critical period of cortical development. The delays both in cellular and synaptic maturation were rectified by administration of a TrkB receptor agonist, suggesting reduced BDNF-TrkB signaling as a contributing factor. These results provide evidence that the function of fast-spiking interneurons is disrupted due to a deficiency in neurotrophin
The role of dendritic non-linearities in single neuron computation
Directory of Open Access Journals (Sweden)
Boris Gutkin
2014-05-01
Full Text Available Experiments have demonstrated that summation of excitatory post-synaptic potentials (EPSPs) in dendrites is non-linear. The sum of multiple EPSPs can be larger than their arithmetic sum, a superlinear summation due to the opening of voltage-gated channels and similar to somatic spiking: the so-called dendritic spike. The sum of multiple EPSPs can also be smaller than their arithmetic sum, because the synaptic current necessarily saturates at some point. While these observations are well explained by biophysical models, the impact of dendritic spikes on computation remains a matter of debate. One reason is that dendritic spikes may fail to make the neuron spike; similarly, dendritic saturations are sometimes presented as a glitch which should be corrected by dendritic spikes. We provide solid arguments against this claim and show that dendritic saturations as well as dendritic spikes enhance single neuron computation, even when they cannot directly make the neuron fire. To explore the computational impact of dendritic spikes and saturations, we use a binary neuron model in conjunction with Boolean algebra. We demonstrate using these tools that a single dendritic non-linearity, either spiking or saturating, combined with the somatic non-linearity, enables a neuron to compute linearly non-separable Boolean functions (lnBfs). These functions are impossible to compute when summation is linear, and the exclusive OR is a famous example of lnBfs. Importantly, the implementation of these functions does not require the dendritic non-linearity to make the neuron spike. Next, we show that reduced and realistic biophysical models of the neuron are capable of computing lnBfs. Within these models, and contrary to the binary model, the dendritic and somatic non-linearities are tightly coupled. Yet we show that these neuron models are capable of linearly non-separable computations.
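The XOR example can be made concrete with a binary neuron of this kind: one saturating dendritic subunit feeding a thresholding soma is enough to compute a linearly non-separable function. The wiring and weights below are an illustrative choice, not the paper's specific circuit.

```python
def sat(x, cap=1.0):
    """Saturating dendritic non-linearity: synaptic drive cannot
    exceed a fixed ceiling (e.g. the synaptic reversal potential)."""
    return min(x, cap)

def xor_neuron(x1, x2):
    """A single binary neuron computing XOR via one saturating dendritic
    subunit plus direct (net inhibitory) somatic input. With linear
    summation alone, no threshold on x1 + x2 can separate XOR."""
    dendrite = sat(x1 + x2)            # saturates once either input is active
    soma = 2.0 * dendrite - (x1 + x2)  # direct inputs arrive with negative weight
    return 1 if soma >= 0.5 else 0
```

Note that the dendritic subunit never needs to "spike" here: the saturation alone breaks linearity, which is exactly the point the abstract makes about saturating non-linearities enhancing computation.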
How neurons migrate: a dynamic in-silico model of neuronal migration in the developing cortex
LENUS (Irish Health Repository)
Setty, Yaki
2011-09-30
Abstract Background Neuronal migration, the process by which neurons migrate from their place of origin to their final position in the brain, is a central process for normal brain development and function. Advances in experimental techniques have revealed much about many of the molecular components involved in this process. Notwithstanding these advances, how the molecular machinery works together to govern the migration process has yet to be fully understood. Here we present a computational model of neuronal migration, in which four key molecular entities, Lis1, DCX, Reelin and GABA, form a molecular program that mediates the migration process. Results The model simulated the dynamic migration process, consistent with in-vivo observations of morphological, cellular and population-level phenomena. Specifically, the model reproduced migration phases, cellular dynamics and population distributions that concur with experimental observations in normal neuronal development. We tested the model under reduced activity of Lis1 and DCX and found an aberrant development similar to observations in Lis1 and DCX silencing expression experiments. Analysis of the model gave rise to unforeseen insights that could guide future experimental study. Specifically: (1) the model revealed the possibility that under conditions of Lis1 reduced expression, neurons experience an oscillatory neuron-glial association prior to the multipolar stage; and (2) we hypothesized that observed morphology variations in rats and mice may be explained by a single difference in the way that Lis1 and DCX stimulate bipolar motility. From this we make the following predictions: (1) under reduced Lis1 and enhanced DCX expression, we predict a reduced bipolar migration in rats, and (2) under enhanced DCX expression in mice we predict a normal or a higher bipolar migration. Conclusions We present here a system-wide computational model of neuronal migration that integrates theory and data within a precise
How neurons migrate: a dynamic in-silico model of neuronal migration in the developing cortex
Directory of Open Access Journals (Sweden)
Skoblov Nikita
2011-09-01
Full Text Available Abstract Background Neuronal migration, the process by which neurons migrate from their place of origin to their final position in the brain, is a central process for normal brain development and function. Advances in experimental techniques have revealed much about many of the molecular components involved in this process. Notwithstanding these advances, how the molecular machinery works together to govern the migration process has yet to be fully understood. Here we present a computational model of neuronal migration, in which four key molecular entities, Lis1, DCX, Reelin and GABA, form a molecular program that mediates the migration process. Results The model simulated the dynamic migration process, consistent with in-vivo observations of morphological, cellular and population-level phenomena. Specifically, the model reproduced migration phases, cellular dynamics and population distributions that concur with experimental observations in normal neuronal development. We tested the model under reduced activity of Lis1 and DCX and found an aberrant development similar to observations in Lis1 and DCX silencing expression experiments. Analysis of the model gave rise to unforeseen insights that could guide future experimental study. Specifically: (1) the model revealed the possibility that under conditions of Lis1 reduced expression, neurons experience an oscillatory neuron-glial association prior to the multipolar stage; and (2) we hypothesized that observed morphology variations in rats and mice may be explained by a single difference in the way that Lis1 and DCX stimulate bipolar motility. From this we make the following predictions: (1) under reduced Lis1 and enhanced DCX expression, we predict a reduced bipolar migration in rats, and (2) under enhanced DCX expression in mice we predict a normal or a higher bipolar migration. Conclusions We present here a system-wide computational model of neuronal migration that integrates theory and data within a
Segundo, J P; Vibert, J F; Stiber, M
1998-11-01
Codings involving spike trains at synapses with inhibitory postsynaptic potentials on pacemakers were examined in crayfish stretch receptor organs by modulating presynaptic instantaneous rates periodically (triangles or sines; frequencies, slopes and depths under, respectively, 5.0 Hz, 40.0/s/s and 25.0/s). Timings were described by interspike and cross-intervals ("phases"); patterns (dispersions, sequences) and forms (timing classes) were identified using pooled graphs (instant along the cycle when a spike occurs vs preceding interval) and return maps (plots of successive intervals). A remarkable heterogeneity of postsynaptic intervals and phases characterizes each modulation. All cycles separate into the same portions: each contains a particular form and switches abruptly to the next. Forms differ in irregularity and predictability: they are (see text) "p:q alternations", "intermittent", "phase walk-throughs", "messy erratic" and "messy stammering". Postsynaptic cycles are asymmetric (hysteresis). This contrasts with the presynaptic homogeneity, smoothness and symmetry. All control parameters are, individually and jointly, strongly influential. Presynaptic slopes, say, act through a postsynaptic sensitivity to their magnitude and sign; when increasing, hysteresis augments and forms change or disappear. Appropriate noise attenuates between-train contrasts, providing modulations are under 0.5 Hz. Postsynaptic natural intervals impose critical time bases, separating presynaptic intervals (around, above or below them) with dissimilar consequences. Coding rules are numerous and have restricted domains; generalizations are misleading. Modulation-driven forms are trendy pacemaker-driven forms. However, dissimilarities, slight when patterns are almost pacemaker, increase as inhibition departs from pacemaker and incorporate unpredictable features. Physiological significance: (1) Pacemaker-driven forms, simple and ubiquitous, appear to be elementary building blocks of
Interictal spike frequency varies with ovarian cycle stage in a rat model of epilepsy.
D'Amour, James; Magagna-Poveda, Alejandra; Moretto, Jillian; Friedman, Daniel; LaFrancois, John J; Pearce, Patrice; Fenton, Andre A; MacLusky, Neil J; Scharfman, Helen E
2015-07-01
In catamenial epilepsy, seizures exhibit a cyclic pattern that parallels the menstrual cycle. Many studies suggest that catamenial seizures are caused by fluctuations in gonadal hormones during the menstrual cycle, but this has been difficult to study in rodent models of epilepsy because the ovarian cycle in rodents, called the estrous cycle, is disrupted by severe seizures. Thus, when epilepsy is severe, estrous cycles become irregular or stop. Therefore, we modified kainic acid (KA)- and pilocarpine-induced status epilepticus (SE) models of epilepsy so that seizures were rare for the first months after SE, and conducted video-EEG during this time. The results showed that interictal spikes (IIS) occurred intermittently. All rats with regular 4-day estrous cycles had IIS that waxed and waned with the estrous cycle. The association between the estrous cycle and IIS was strong: if the estrous cycles became irregular transiently, IIS frequency also became irregular, and when the estrous cycle resumed its 4-day pattern, IIS frequency did also. Furthermore, when rats were ovariectomized, or males were recorded, IIS frequency did not show a 4-day pattern. Systemic administration of an estrogen receptor antagonist stopped the estrous cycle transiently, accompanied by transient irregularity of the IIS pattern. Eventually all animals developed severe, frequent seizures and at that time both the estrous cycle and the IIS became irregular. We conclude that the estrous cycle entrains IIS in the modified KA and pilocarpine SE models of epilepsy. The data suggest that the ovarian cycle influences more aspects of epilepsy than seizure susceptibility. Copyright © 2015 Elsevier Inc. All rights reserved.
Integration of Continuous-Time Dynamics in a Spiking Neural Network Simulator
Directory of Open Access Journals (Sweden)
Jan Hahne
2017-05-01
Full Text Available Contemporary modeling approaches to the dynamics of neural networks include two important classes of models: biologically grounded spiking neuron models and functionally inspired rate-based units. We present a unified simulation framework that supports the combination of the two for multi-scale modeling, enables the quantitative validation of mean-field approaches by spiking network simulations, and provides an increase in reliability by usage of the same simulation code and the same network model specifications for both model classes. While most spiking simulations rely on the communication of discrete events, rate models require time-continuous interactions between neurons. Exploiting the conceptual similarity to the inclusion of gap junctions in spiking network simulations, we arrive at a reference implementation of instantaneous and delayed interactions between rate-based models in a spiking network simulator. The separation of rate dynamics from the general connection and communication infrastructure ensures flexibility of the framework. In addition to the standard implementation, we present an iterative approach based on waveform-relaxation techniques to reduce communication and increase performance for large-scale simulations of rate-based models.
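The delayed time-continuous interactions described above can be sketched minimally: linear-threshold rate units obeying tau * dr_i/dt = -r_i + [sum_j w_ij * r_j(t - d) + ext_i]_+, integrated by forward Euler with a rate history buffer standing in for the per-connection delay buffers a real simulator would use. Unit choice, the rectification, and all parameters are illustrative assumptions, not the framework's actual implementation.

```python
def simulate_rate_network(w, delay_steps, t_steps, tau=10.0, dt=0.1, ext=None):
    """Integrate a network of delayed linear-threshold rate units.
    w[i][j] couples the rate of unit j (delay_steps steps ago) into
    unit i; returns the full rate history, one row per time step."""
    n = len(w)
    ext = ext if ext is not None else [0.0] * n
    hist = [[0.0] * n]                 # rate history; row k = rates at step k
    for _ in range(t_steps):
        past = hist[max(0, len(hist) - 1 - delay_steps)]  # delayed rates
        cur = hist[-1]
        nxt = []
        for i in range(n):
            drive = ext[i] + sum(w[i][j] * past[j] for j in range(n))
            nxt.append(cur[i] + dt / tau * (-cur[i] + max(0.0, drive)))
        hist.append(nxt)
    return hist
```

Keeping the whole history is wasteful but makes the delay explicit; a production implementation would keep only a ring buffer of length `delay_steps`, which is close in spirit to how event-based simulators buffer delayed spikes.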