WorldWideScience

Sample records for model spiking neurons

  1. Automatic fitting of spiking neuron models to electrophysiological recordings

    Directory of Open Access Journals (Sweden)

    Cyrille Rossant

    2010-03-01

    Full Text Available Spiking models can accurately predict the spike trains produced by cortical neurons in response to somatically injected currents. Since the specific characteristics of the model depend on the neuron, a computational method is required to fit models to electrophysiological recordings. The fitting procedure can be very time consuming both in terms of computer simulations and in terms of code writing. We present algorithms to fit spiking models to electrophysiological data (time-varying input and spike trains) that can run in parallel on graphics processing units (GPUs). The model fitting library is interfaced with Brian, a neural network simulator in Python. If a GPU is present, it uses just-in-time compilation to translate model equations into optimized code. Arbitrary models can then be defined at script level and run on the graphics card. This tool can be used to obtain empirically validated spiking models of neurons in various systems. We demonstrate its use on public data from the INCF Quantitative Single-Neuron Modeling 2009 competition by comparing the performance of a number of neuron spiking models.
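The fitting workflow this abstract describes — simulate a candidate model, compare its output to the recorded spike train, adjust parameters — can be sketched in plain NumPy. The paper itself uses Brian with GPU code generation and proper spike-train distance metrics; everything below (parameter values, the spike-count error) is an illustrative stand-in, not the paper's method:

```python
import numpy as np

def simulate_lif(I, dt=0.1, tau=10.0, v_rest=-65.0, v_thresh=-50.0,
                 v_reset=-65.0, R=10.0):
    """Leaky integrate-and-fire neuron driven by injected current I
    (one value per time step of dt ms). Euler integration; returns
    spike times in ms."""
    v = v_rest
    spikes = []
    for i, current in enumerate(I):
        v += dt / tau * (-(v - v_rest) + R * current)
        if v >= v_thresh:
            spikes.append(i * dt)
            v = v_reset
    return np.array(spikes)

# Crude fitting loop: grid-search the membrane resistance R that best
# matches a target spike count (a stand-in for a real spike-train metric).
I = np.full(5000, 2.0)            # constant 2 nA for 500 ms
target = simulate_lif(I, R=10.0)  # "recorded" data from a known ground truth
best_R, best_err = None, np.inf
for R in np.linspace(5.0, 15.0, 21):
    err = abs(len(simulate_lif(I, R=R)) - len(target))
    if err < best_err:
        best_R, best_err = R, err
```

In the paper this inner loop — one simulation per candidate parameter set — is exactly what is parallelized across GPU threads.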

  2. Stochastic resonance in noisy spiking retinal and sensory neuron models.

    Science.gov (United States)

    Patel, Ashok; Kosko, Bart

    2005-01-01

    Two new theorems show that small amounts of additive white noise can improve the bit count or mutual information of several popular models of spiking retinal neurons and spiking sensory neurons. The first theorem gives necessary and sufficient conditions for this noise benefit or stochastic resonance (SR) effect for subthreshold signals in a standard family of Poisson spiking models of retinal neurons. The result holds for all types of finite-variance noise and for all types of infinite-variance stable noise: SR occurs if and only if a sum of noise means or location parameters falls outside a 'forbidden interval' of values. The second theorem gives a similar forbidden-interval sufficient condition for the SR effect for several types of spiking sensory neurons that include the FitzHugh-Nagumo neuron, the leaky integrate-and-fire neuron, and the reduced Type I neuron model if the additive noise is Gaussian white noise. Simulations show that neither the forbidden-interval condition nor Gaussianity is necessary for the SR effect.
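The forbidden-interval theorems concern subthreshold signals that noise pushes across threshold. A minimal, hypothetical demonstration of the underlying SR effect (a sinusoid below a hard threshold, plus additive Gaussian white noise — the signal levels, noise scale, and crossing-count detector here are illustrative choices, not the paper's models):

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(0.0, 2.0, 0.001)                 # 2 s at 1 kHz
signal = 0.8 * np.sin(2 * np.pi * 5.0 * t)     # subthreshold 5 Hz signal
threshold = 1.0                                # the signal alone never crosses

def threshold_spikes(x, theta):
    """Count upward threshold crossings of x."""
    above = x >= theta
    return int(np.sum(above[1:] & ~above[:-1]) + above[0])

spikes_silent = threshold_spikes(signal, threshold)   # 0: subthreshold
noisy = signal + 0.3 * rng.standard_normal(t.size)
spikes_noisy = threshold_spikes(noisy, threshold)     # noise creates crossings
```

Without noise the detector is silent; with moderate noise, crossings cluster near the signal peaks, which is the mechanism by which noise carries subthreshold information through a threshold device.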

  3. Stochastic models for spike trains of single neurons

    CERN Document Server

    Sampath, G

    1977-01-01

    1 Some basic neurophysiology 4; 1.1 The neuron 4; 1.1.1 The axon 7; 1.1.2 The synapse 9; 1.1.3 The soma 12; 1.1.4 The dendrites 13; 1.2 Types of neurons 13; 2 Signals in the nervous system 14; 2.1 Action potentials as point events - point processes in the nervous system 15; 2.2 Spontaneous activity in neurons 18; 3 Stochastic modelling of single neuron spike trains 19; 3.1 Characteristics of a neuron spike train 19; 3.2 The mathematical neuron 23; 4 Superposition models 26; 4.1 Superposition of renewal processes 26; 4.2 Superposition of stationary point processes - limiting behaviour 34; 4.2.1 Palm functions 35; 4.2.2 Asymptotic behaviour of n stationary point processes superposed 36; 4.3 Superposition models of neuron spike trains 37; 4.3.1 Model 4.1 39; 4.3.2 Model 4.2 - A superposition model with two input channels 40; 4.3.3 Model 4.3 40; 4.4 Discussion 41; 5 Deletion models 43; 5.1 Deletion models with independent interaction of excitatory and inhibitory sequences 44; 5.1.1 Model 5.1 The basic de...

  4. From spiking neuron models to linear-nonlinear models.

    Directory of Open Access Journals (Sweden)

    Srdjan Ostojic

    Full Text Available Neurons transform time-varying inputs into action potentials emitted stochastically at a time dependent rate. The mapping from current input to output firing rate is often represented with the help of phenomenological models such as the linear-nonlinear (LN cascade, in which the output firing rate is estimated by applying to the input successively a linear temporal filter and a static non-linear transformation. These simplified models leave out the biophysical details of action potential generation. It is not a priori clear to what extent the input-output mapping of biophysically more realistic, spiking neuron models can be reduced to a simple linear-nonlinear cascade. Here we investigate this question for the leaky integrate-and-fire (LIF, exponential integrate-and-fire (EIF and conductance-based Wang-Buzsáki models in the presence of background synaptic activity. We exploit available analytic results for these models to determine the corresponding linear filter and static non-linearity in a parameter-free form. We show that the obtained functions are identical to the linear filter and static non-linearity determined using standard reverse correlation analysis. We then quantitatively compare the output of the corresponding linear-nonlinear cascade with numerical simulations of spiking neurons, systematically varying the parameters of input signal and background noise. We find that the LN cascade provides accurate estimates of the firing rates of spiking neurons in most of parameter space. For the EIF and Wang-Buzsáki models, we show that the LN cascade can be reduced to a firing rate model, the timescale of which we determine analytically. Finally, we introduce an adaptive timescale rate model in which the timescale of the linear filter depends on the instantaneous firing rate. This model leads to highly accurate estimates of instantaneous firing rates.
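The LN cascade itself is simple to state in code: convolve the stimulus with a linear temporal filter, then pass the result through a static nonlinearity to obtain a rate. The exponential filter shape, rectifying nonlinearity, and all parameter values below are illustrative assumptions (the paper derives the filter and nonlinearity analytically from the spiking models, which is not done here):

```python
import numpy as np

def ln_rate(stimulus, dt=0.001, tau=0.02, gain=50.0, threshold=0.5):
    """Linear-nonlinear cascade: causal exponential filter followed by a
    static rectifying nonlinearity. Returns a firing rate in Hz per bin."""
    t = np.arange(0.0, 5 * tau, dt)
    k = np.exp(-t / tau)
    k /= k.sum() * dt                     # normalize the filter to unit area
    filtered = np.convolve(stimulus, k, mode="full")[: len(stimulus)] * dt
    return gain * np.maximum(filtered - threshold, 0.0)

stim = np.concatenate([np.zeros(200), np.ones(300), np.zeros(200)])
rate = ln_rate(stim)   # zero before the step, a smooth plateau during it
```

The paper's question is then whether the rate predicted this way matches the trial-averaged firing rate of the full spiking model driven by the same stimulus.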

  5. Cochlear spike synchronization and neuron coincidence detection model

    Science.gov (United States)

    Bader, Rolf

    2018-02-01

    Coincidence detection of a spike pattern fed from the cochlea into a single neuron is investigated using a physical Finite-Difference model of the cochlea and a physiologically motivated neuron model. Previous studies have shown experimental evidence of increased spike synchronization in the nucleus cochlearis and the trapezoid body [Joris et al., J. Neurophysiol. 71(3), 1022-1036 and 1037-1051 (1994)] and models show tone partial phase synchronization at the transition from mechanical waves on the basilar membrane into spike patterns [Ch. F. Babbs, J. Biophys. 2011, 435135]. Still, the traveling speed of waves on the basilar membrane causes a frequency-dependent time delay of simultaneously incoming sound wavefronts of up to 10 ms. The present model shows nearly perfect synchronization of multiple spike inputs as neuron outputs with interspike intervals (ISI) at the periodicity of the incoming sound for frequencies from about 30 to 300 Hz for two different numbers of afferent nerve fiber inputs. Coincidence detection serves here as a fusion of multiple inputs into one single event enhancing pitch periodicity detection for low frequencies, impulse detection, or increased sound or speech intelligibility due to dereverberation.
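The fusion of several delayed afferent trains into one output event can be sketched with a toy coincidence detector. The 2 ms window, the three-input threshold, and the delays standing in for basilar-membrane travel time are all illustrative assumptions, not the paper's cochlea or neuron model:

```python
import numpy as np

def coincidence_detector(input_trains, window=0.002, min_inputs=3):
    """Fire an output spike whenever at least `min_inputs` afferent spikes
    fall within a `window`-wide interval (times in seconds), with a simple
    refractory guard so one volley yields one output event."""
    events = np.sort(np.concatenate(input_trains))
    out, last = [], -np.inf
    for t in events:
        n = np.sum((events >= t) & (events < t + window))
        if n >= min_inputs and t > last + window:
            out.append(t)
            last = t
    return np.array(out)

# Three afferents carrying a 100 Hz periodicity with small relative delays
# (a toy analogue of frequency-dependent travel time on the basilar membrane).
period = 0.01
base = np.arange(0.0, 0.1, period)
trains = [base, base + 0.0005, base + 0.001]
out_spikes = coincidence_detector(trains)   # one output per input volley
```

The output ISIs reproduce the 10 ms periodicity of the input, which is the synchronization-as-fusion effect the abstract describes.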

  6. Colored noise and memory effects on formal spiking neuron models

    Science.gov (United States)

    da Silva, L. A.; Vilela, R. D.

    2015-06-01

    Simplified neuronal models capture the essence of the electrical activity of a generic neuron, besides being more interesting from the computational point of view when compared to higher-dimensional models such as the Hodgkin-Huxley one. In this work, we propose a generalized resonate-and-fire model described by a generalized Langevin equation that takes into account memory effects and colored noise. We perform a comprehensive numerical analysis to study the dynamics and the point process statistics of the proposed model, highlighting interesting new features such as (i) nonmonotonic behavior (emergence of peak structures, enhanced by the choice of colored noise characteristic time scale) of the coefficient of variation (CV) as a function of memory characteristic time scale, (ii) colored noise-induced shift in the CV, and (iii) emergence and suppression of multimodality in the interspike interval (ISI) distribution due to memory-induced subthreshold oscillations. Moreover, in the noise-induced spike regime, we study how memory and colored noise affect the coherence resonance (CR) phenomenon. We found that for sufficiently long memory, not only is CR suppressed but also the minimum of the CV-versus-noise intensity curve that characterizes the presence of CR may be replaced by a maximum. The aforementioned features allow one to interpret the interplay between memory and colored noise as an effective control mechanism for neuronal variability. Since both variability and nontrivial temporal patterns in the ISI distribution are ubiquitous in biological cells, we hope the present model can be useful in modeling real aspects of neurons.
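Two ingredients of this study are easy to make concrete: generating colored noise and computing the CV of interspike intervals. A common sketch of colored noise is the Ornstein-Uhlenbeck process (one possible choice of Gaussian colored noise with an exponential correlation; the generalized Langevin model of the paper is richer than this):

```python
import numpy as np

def ou_noise(n, dt=0.001, tau_c=0.05, sigma=1.0, seed=1):
    """Ornstein-Uhlenbeck process: Gaussian colored noise with correlation
    time tau_c and stationary standard deviation sigma (exact discrete
    update, so the marginal statistics hold at any dt)."""
    rng = np.random.default_rng(seed)
    x = np.empty(n)
    x[0] = sigma * rng.standard_normal()
    a = np.exp(-dt / tau_c)
    b = sigma * np.sqrt(1.0 - a * a)
    for i in range(1, n):
        x[i] = a * x[i - 1] + b * rng.standard_normal()
    return x

def cv(isis):
    """Coefficient of variation of interspike intervals:
    std/mean, 0 for a perfectly regular train, 1 for Poisson."""
    return np.std(isis) / np.mean(isis)

noise = ou_noise(20000)   # 20 s of colored noise at 1 kHz
```

Feeding such noise into a spiking model and plotting `cv` against `tau_c` is the kind of experiment behind the paper's CV-versus-timescale curves.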

  7. Routes to Chaos Induced by a Discontinuous Resetting Process in a Hybrid Spiking Neuron Model.

    Science.gov (United States)

    Nobukawa, Sou; Nishimura, Haruhiko; Yamanishi, Teruya

    2018-01-10

    Several hybrid spiking neuron models combining continuous spike generation mechanisms and discontinuous resetting processes following spiking have been proposed. The Izhikevich neuron model, for example, can reproduce many spiking patterns. This model clearly possesses various types of bifurcations and routes to chaos under the effect of a state-dependent jump in the resetting process. In this study, we focus further on the relation between chaotic behaviour and the state-dependent jump, approaching the subject by comparing spiking neuron model versions with and without the resetting process. We first adopt a continuous two-dimensional spiking neuron model in which the orbit in the spiking state does not exhibit divergent behaviour. We then insert the resetting process into the model. An evaluation using the Lyapunov exponent with a saltation matrix and a characteristic multiplier of the Poincaré map reveals that two types of chaotic behaviour (i.e. bursting chaotic spikes and near-period-two chaotic spikes) are induced by the resetting process. In addition, we confirm that this chaotic bursting state is generated from the periodic spiking state because of the slow- and fast-scale dynamics that arise when jumping to the hyperpolarization and depolarization regions, respectively.
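The hybrid structure in question — continuous subthreshold flow plus a discontinuous state-dependent reset — is exactly the Izhikevich model's design. A minimal sketch with the standard regular-spiking parameters (Euler integration at 0.5 ms; the chaos analysis with saltation matrices in the paper is of course far beyond this):

```python
def izhikevich(I, dt=0.5, a=0.02, b=0.2, c=-65.0, d=8.0, T=1000.0):
    """Izhikevich neuron, regular-spiking parameters. The continuous part
    is the (v, u) flow; the discontinuous resetting process is the jump
    v -> c, u -> u + d fired at the 30 mV cutoff. Returns spike times (ms)."""
    v, u = -65.0, b * -65.0
    spikes = []
    for i in range(int(T / dt)):
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:              # spike cutoff, then the resetting jump
            spikes.append(i * dt)
            v, u = c, u + d
    return spikes

spks = izhikevich(I=10.0)   # tonic spiking under constant drive
```

Deleting the two reset lines leaves the continuous model the paper uses as its comparison point; the `u + d` jump is the state-dependent discontinuity that the study identifies as the route into chaotic bursting.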

  8. Neural dynamics as sampling: a model for stochastic computation in recurrent networks of spiking neurons.

    Directory of Open Access Journals (Sweden)

    Lars Buesing

    2011-11-01

    Full Text Available The organization of computations in networks of spiking neurons in the brain is still largely unknown, in particular in view of the inherently stochastic features of their firing activity and the experimentally observed trial-to-trial variability of neural systems in the brain. In principle there exists a powerful computational framework for stochastic computations, probabilistic inference by sampling, which can explain a large number of macroscopic experimental data in neuroscience and cognitive science. But it has turned out to be surprisingly difficult to create a link between these abstract models for stochastic computations and more detailed models of the dynamics of networks of spiking neurons. Here we create such a link and show that under some conditions the stochastic firing activity of networks of spiking neurons can be interpreted as probabilistic inference via Markov chain Monte Carlo (MCMC sampling. Since common methods for MCMC sampling in distributed systems, such as Gibbs sampling, are inconsistent with the dynamics of spiking neurons, we introduce a different approach based on non-reversible Markov chains that is able to reflect inherent temporal processes of spiking neuronal activity through a suitable choice of random variables. We propose a neural network model and show by a rigorous theoretical analysis that its neural activity implements MCMC sampling of a given distribution, both for the case of discrete and continuous time. This provides a step towards closing the gap between abstract functional models of cortical computation and more detailed models of networks of spiking neurons.
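The target computation here is MCMC sampling from a distribution over binary variables. The paper's contribution is a non-reversible, spiking-compatible chain; as a minimal point of reference, here is the ordinary Gibbs sampler over a small Boltzmann distribution that such neural dynamics is shown to emulate (weights, biases, and the two-unit network are illustrative assumptions):

```python
import numpy as np

def gibbs_sample(W, b, n_steps, seed=0):
    """Gibbs sampling from the Boltzmann distribution
    p(z) ∝ exp(z·W·z/2 + b·z) over binary states z in {0,1}^K.
    Each unit's update is a stochastic 'firing decision' driven by its
    local input u_k = W_k·z + b_k, a membrane-potential analogue."""
    rng = np.random.default_rng(seed)
    K = len(b)
    z = rng.integers(0, 2, size=K).astype(float)
    samples = np.empty((n_steps, K))
    for t in range(n_steps):
        for k in range(K):
            u = W[k] @ z - W[k, k] * z[k] + b[k]   # exclude self-coupling
            z[k] = float(rng.random() < 1.0 / (1.0 + np.exp(-u)))
        samples[t] = z
    return samples

W = np.array([[0.0, 1.0], [1.0, 0.0]])   # two mutually excitatory units
b = np.array([-0.5, -0.5])
samples = gibbs_sample(W, b, 5000)        # empirical marginals ≈ p(z_k = 1)
```

The paper's point is that this schedule-based, reversible update is inconsistent with asynchronous spiking dynamics, which motivates its non-reversible construction with refractory auxiliary variables.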

  9. Novel Spiking Neuron-Astrocyte Networks based on nonlinear transistor-like models of tripartite synapses.

    Science.gov (United States)

    Valenza, Gaetano; Tedesco, Luciano; Lanata, Antonio; De Rossi, Danilo; Scilingo, Enzo Pasquale

    2013-01-01

    In this paper a novel and efficient computational implementation of a Spiking Neuron-Astrocyte Network (SNAN) is reported. Neurons are modeled according to the Izhikevich formulation and the neuron-astrocyte interactions are treated as tripartite synapses and modeled with the previously proposed nonlinear transistor-like model. Concerning the learning rules, the original spike-timing dependent plasticity is used for the neural part of the SNAN whereas an ad-hoc rule is proposed for the astrocyte part. SNAN performance is compared with that of a standard spiking neural network (SNN) and evaluated using the polychronization concept, i.e., the number of co-existing groups that spontaneously generate patterns of polychronous activity. The astrocyte-neuron ratio is the biologically inspired value of 1.5. The proposed SNAN shows a higher number of polychronous groups than the SNN, remarkably maintained for the whole duration of the simulation (24 hours).

  10. Biophysical properties and computational modeling of calcium spikes in serotonergic neurons of the dorsal raphe nucleus.

    Science.gov (United States)

    Tuckwell, Henry C

    2013-06-01

    Serotonergic neurons of the dorsal raphe nuclei, with their extensive innervation of nearly the whole brain, have important modulatory effects on many cognitive and physiological processes. They play important roles in clinical depression and other psychiatric disorders. In order to quantify the effects of serotonergic transmission on target cells it is desirable to construct computational models, and to this end it is necessary to have details of the biophysical and spike properties of the serotonergic neurons. Here several basic properties are reviewed with data from several studies since the 1960s to the present. The quantities included are input resistance, resting membrane potential, membrane time constant, firing rate, spike duration, spike and afterhyperpolarization (AHP) amplitude, spike threshold, cell capacitance, soma and somadendritic areas. The action potentials of these cells are normally triggered by a combination of sodium and calcium currents which may result in autonomous pacemaker activity. We here analyse the mechanisms of high-threshold calcium spikes which have been demonstrated in these cells in the presence of TTX (tetrodotoxin). The parameters for calcium dynamics required to give calcium spikes are quite different from those for regular spiking, which suggests the involvement of restricted parts of the soma-dendritic surface as has been found, for example, in hippocampal neurons. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  11. Information transmission with spiking Bayesian neurons

    International Nuclear Information System (INIS)

    Lochmann, Timm; Deneve, Sophie

    2008-01-01

    Spike trains of cortical neurons resulting from repeated presentations of a stimulus are variable and exhibit Poisson-like statistics. Many models of neural coding have therefore assumed that sensory information is contained in instantaneous firing rates, not spike times. Here, we ask how much information about time-varying stimuli can be transmitted by spiking neurons with such input and output variability. In particular, does this variability imply spike generation to be intrinsically stochastic? We consider a model neuron that estimates optimally the current state of a time-varying binary variable (e.g. presence of a stimulus) by integrating incoming spikes. The unit signals its current estimate to other units with spikes whenever the estimate increased by a fixed amount. As shown previously, this computation results in integrate and fire dynamics with Poisson-like output spike trains. This output variability is entirely due to the stochastic input rather than noisy spike generation. As a result such a deterministic neuron can transmit most of the information about the time varying stimulus. This contrasts with a standard model of sensory neurons, the linear-nonlinear Poisson (LNP) model, which assumes that most variability in output spike trains is due to stochastic spike generation. Although it yields the same firing statistics, we found that such noisy firing results in the loss of most information. Finally, we use this framework to compare potential effects of top-down attention versus bottom-up saliency on information transfer with spiking neurons.
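The deterministic spiking rule described here — track an optimal estimate, spike whenever it exceeds the last transmitted value by a fixed amount — can be sketched as follows. The log-odds update, transition rates, spike weight, and input statistics are all illustrative assumptions patterned after this class of models, not the paper's exact equations:

```python
import numpy as np

def bayesian_neuron(input_spikes, dt=0.001, w=1.0,
                    r_on=0.005, r_off=0.005, eta=1.0):
    """Toy neuron tracking the log-odds L that a hidden binary state is 'on',
    from input spikes of weight w. It emits a spike (deterministically)
    whenever L exceeds what it last signalled, G, by the threshold eta."""
    L, G, out = 0.0, 0.0, []
    for i, s in enumerate(input_spikes):
        # drift from the hidden state's transition dynamics, plus evidence
        L += r_on * (1 + np.exp(-L)) - r_off * (1 + np.exp(L))
        L += w * s
        if L - G > eta:            # deterministic integrate-and-fire rule
            out.append(i * dt)
            G += eta
    return out

rng = np.random.default_rng(2)
inputs = (rng.random(2000) < 0.05).astype(float)   # Poisson-like input train
out_spikes = bayesian_neuron(inputs)
```

All output variability here is inherited from the stochastic input: rerunning with the same input spikes gives the same output spikes, which is the abstract's point about deterministic spike generation.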

  12. Probabilistic Inference in General Graphical Models through Sampling in Stochastic Networks of Spiking Neurons

    Science.gov (United States)

    Pecevski, Dejan; Buesing, Lars; Maass, Wolfgang

    2011-01-01

    An important open problem of computational neuroscience is the generic organization of computations in networks of neurons in the brain. We show here through rigorous theoretical analysis that inherent stochastic features of spiking neurons, in combination with simple nonlinear computational operations in specific network motifs and dendritic arbors, enable networks of spiking neurons to carry out probabilistic inference through sampling in general graphical models. In particular, it enables them to carry out probabilistic inference in Bayesian networks with converging arrows (“explaining away”) and with undirected loops, that occur in many real-world tasks. Ubiquitous stochastic features of networks of spiking neurons, such as trial-to-trial variability and spontaneous activity, are necessary ingredients of the underlying computational organization. We demonstrate through computer simulations that this approach can be scaled up to neural emulations of probabilistic inference in fairly large graphical models, yielding some of the most complex computations that have been carried out so far in networks of spiking neurons. PMID:22219717

  13. Capturing Spike Variability in Noisy Izhikevich Neurons Using Point Process Generalized Linear Models.

    Science.gov (United States)

    Østergaard, Jacob; Kramer, Mark A; Eden, Uri T

    2018-01-01

    To understand neural activity, two broad categories of models exist: statistical and dynamical. While statistical models possess rigorous methods for parameter estimation and goodness-of-fit assessment, dynamical models provide mechanistic insight. In general, these two categories of models are separately applied; understanding the relationships between these modeling approaches remains an area of active research. In this letter, we examine this relationship using simulation. To do so, we first generate spike train data from a well-known dynamical model, the Izhikevich neuron, with a noisy input current. We then fit these spike train data with a statistical model (a generalized linear model, GLM, with multiplicative influences of past spiking). For different levels of noise, we show how the GLM captures both the deterministic features of the Izhikevich neuron and the variability driven by the noise. We conclude that the GLM captures essential features of the simulated spike trains, but for near-deterministic spike trains, goodness-of-fit analyses reveal that the model does not fit very well in a statistical sense; the essential random part of the GLM is not captured.
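The statistical half of this comparison — a Poisson GLM with a log link and spike-history terms — can be sketched and fit by gradient ascent on the log-likelihood. The simulated data, learning rate, and history length are illustrative assumptions (the paper fits to Izhikevich-generated spikes and uses proper goodness-of-fit analysis):

```python
import numpy as np

def fit_glm(spikes, stim, n_hist=5, lr=0.2, n_iter=500):
    """Poisson GLM with log link:
    rate_t = exp(b + k * stim_t + sum_j h_j * spikes_{t-j}).
    Fits intercept b, stimulus weight k, and spike-history filter h by
    gradient ascent on the Poisson log-likelihood."""
    T = len(spikes)
    X = np.ones((T - n_hist, 2 + n_hist))
    X[:, 1] = stim[n_hist:]
    for j in range(n_hist):
        X[:, 2 + j] = spikes[n_hist - 1 - j : T - 1 - j]   # lag j+1
    y = spikes[n_hist:]
    theta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        rate = np.exp(np.clip(X @ theta, -10, 5))          # clip for safety
        theta += lr * X.T @ (y - rate) / len(y)            # d logL / d theta
    return theta

rng = np.random.default_rng(3)
stim = rng.standard_normal(2000)
spikes = (rng.random(2000) < np.exp(-2.5 + stim)).astype(float)  # per-bin spikes
theta = fit_glm(spikes, stim)    # theta[0] ≈ intercept, theta[1] ≈ stim weight
```

With near-deterministic spike trains the fitted history terms become extreme, which is the regime where the paper's goodness-of-fit analysis flags the GLM's missing randomness.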

  14. Predictive features of persistent activity emergence in regular spiking and intrinsic bursting model neurons.

    Directory of Open Access Journals (Sweden)

    Kyriaki Sidiropoulou

    Full Text Available Proper functioning of working memory involves the expression of stimulus-selective persistent activity in pyramidal neurons of the prefrontal cortex (PFC, which refers to neural activity that persists for seconds beyond the end of the stimulus. The mechanisms which PFC pyramidal neurons use to discriminate between preferred vs. neutral inputs at the cellular level are largely unknown. Moreover, the presence of pyramidal cell subtypes with different firing patterns, such as regular spiking and intrinsic bursting, raises the question as to what their distinct role might be in persistent firing in the PFC. Here, we use a compartmental modeling approach to search for discriminatory features in the properties of incoming stimuli to a PFC pyramidal neuron and/or its response that signal which of these stimuli will result in persistent activity emergence. Furthermore, we use our modeling approach to study cell-type specific differences in persistent activity properties, via implementing a regular spiking (RS and an intrinsic bursting (IB model neuron. We identify synaptic location within the basal dendrites as a feature of stimulus selectivity. Specifically, persistent activity-inducing stimuli consist of activated synapses that are located more distally from the soma compared to non-inducing stimuli, in both model cells. In addition, the action potential (AP latency and the first few inter-spike-intervals of the neuronal response can be used to reliably detect inducing vs. non-inducing inputs, suggesting a potential mechanism by which downstream neurons can rapidly decode the upcoming emergence of persistent activity. While the two model neurons did not differ in the coding features of persistent activity emergence, the properties of persistent activity, such as the firing pattern and the duration of temporally-restricted persistent activity were distinct. Collectively, our results pinpoint to specific features of the neuronal response to a given

  15. Mean-field approximations for coupled populations of generalized linear model spiking neurons with Markov refractoriness.

    Science.gov (United States)

    Toyoizumi, Taro; Rad, Kamiar Rahnama; Paninski, Liam

    2009-05-01

    There has recently been a great deal of interest in inferring network connectivity from the spike trains in populations of neurons. One class of useful models that can be fit easily to spiking data is based on generalized linear point process models from statistics. Once the parameters for these models are fit, the analyst is left with a nonlinear spiking network model with delays, which in general may be very difficult to understand analytically. Here we develop mean-field methods for approximating the stimulus-driven firing rates (in both the time-varying and steady-state cases), auto- and cross-correlations, and stimulus-dependent filtering properties of these networks. These approximations are valid when the contributions of individual network coupling terms are small and, hence, the total input to a neuron is approximately Gaussian. These approximations lead to deterministic ordinary differential equations that are much easier to solve and analyze than direct Monte Carlo simulation of the network activity. These approximations also provide an analytical way to evaluate the linear input-output filter of neurons and how the filters are modulated by network interactions and some stimulus feature. Finally, in the case of strong refractory effects, the mean-field approximations in the generalized linear model become inaccurate; therefore, we introduce a model that captures strong refractoriness, retains all of the easy fitting properties of the standard generalized linear model, and leads to much more accurate approximations of mean firing rates and cross-correlations that retain fine temporal behaviors.
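The payoff of a mean-field approximation is replacing Monte Carlo simulation with a deterministic ODE for the population rate. A minimal illustration with a self-consistent rate equation (the sigmoidal gain function, coupling, and parameters are illustrative assumptions, much simpler than the paper's GLM-specific derivation):

```python
import numpy as np

def mean_field_rate(w, I, tau=0.01, dt=0.001, T=1.0):
    """Integrate the deterministic mean-field ODE
        tau * dr/dt = -r + f(w * r + I)
    to its steady state; f is a sigmoidal gain function (an illustrative
    choice) with a 100 Hz maximum rate. Returns the final rate in Hz."""
    f = lambda u: 100.0 / (1.0 + np.exp(-u))
    r = 0.0
    for _ in range(int(T / dt)):       # simple Euler integration
        r += dt / tau * (-r + f(w * r + I))
    return r

r_ss = mean_field_rate(w=0.01, I=-1.0)   # self-consistent steady-state rate
```

One Euler integration like this stands in for thousands of stochastic network trials, which is exactly why such approximations are attractive for analyzing fitted GLM networks.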

  16. Spiking Neuron Network Helmholtz Machine

    Directory of Open Access Journals (Sweden)

    Pavel Sountsov

    2015-04-01

    Full Text Available An increasing amount of behavioral and neurophysiological data suggests that the brain performs optimal (or near-optimal) probabilistic inference and learning during perception and other tasks. Although many machine learning algorithms exist that perform inference and learning in an optimal way, the complete description of how one of those algorithms (or a novel algorithm) can be implemented in the brain is currently incomplete. There have been many proposed solutions that address how neurons can perform optimal inference, but the question of how synaptic plasticity can implement optimal learning is rarely addressed. This paper aims to unify the two fields of probabilistic inference and synaptic plasticity by using a neuronal network of realistic model spiking neurons to implement a well-studied computational model called the Helmholtz Machine. The Helmholtz Machine is amenable to neural implementation as the algorithm it uses to learn its parameters, called the wake-sleep algorithm, uses a local delta learning rule. Our spiking-neuron network implements both the delta rule and a small example of a Helmholtz machine. This neuronal network can learn an internal model of continuous-valued training data sets without supervision. The network can also perform inference on the learned internal models. We show how various biophysical features of the neural implementation constrain the parameters of the wake-sleep algorithm, such as the duration of the wake and sleep phases of learning and the minimal sample duration. We examine the deviations from optimal performance and tie them to the properties of the synaptic plasticity rule.

  17. The Effects of Guanfacine and Phenylephrine on a Spiking Neuron Model of Working Memory.

    Science.gov (United States)

    Duggins, Peter; Stewart, Terrence C; Choo, Xuan; Eliasmith, Chris

    2017-01-01

    We use a spiking neural network model of working memory (WM) capable of performing the spatial delayed response task (DRT) to investigate two drugs that affect WM: guanfacine (GFC) and phenylephrine (PHE). In this model, the loss of information over time results from changes in the spiking neural activity through recurrent connections. We reproduce the standard forgetting curve and then show that this curve changes in the presence of GFC and PHE, whose application is simulated by manipulating functional, neural, and biophysical properties of the model. In particular, applying GFC causes increased activity in neurons that are sensitive to the information currently being remembered, while applying PHE leads to decreased activity in these same neurons. Interestingly, these differential effects emerge from network-level interactions because GFC and PHE affect all neurons equally. We compare our model to both electrophysiological data from neurons in monkey dorsolateral prefrontal cortex and to behavioral evidence from monkeys performing the DRT. Copyright © 2016 Cognitive Science Society, Inc.

  18. Spiking Neurons for Analysis of Patterns

    Science.gov (United States)

    Huntsberger, Terrance

    2008-01-01

    Artificial neural networks comprising spiking neurons of a novel type have been conceived as improved pattern-analysis and pattern-recognition computational systems. These neurons are represented by a mathematical model denoted the state-variable model (SVM), which, among other things, exploits a computational parallelism inherent in spiking-neuron geometry. Networks of SVM neurons offer advantages of speed and computational efficiency, relative to traditional artificial neural networks. The SVM also overcomes some of the limitations of prior spiking-neuron models. There are numerous potential pattern-recognition, tracking, and data-reduction (data preprocessing) applications for these SVM neural networks on Earth and in exploration of remote planets. Spiking neurons imitate biological neurons more closely than do the neurons of traditional artificial neural networks. A spiking neuron includes a central cell body (soma) surrounded by a tree-like interconnection network (dendrites). Spiking neurons are so named because they generate trains of output pulses (spikes) in response to inputs received from sensors or from other neurons. They gain their speed advantage over traditional neural networks by using the timing of individual spikes for computation, whereas traditional artificial neurons use averages of activity levels over time. Moreover, spiking neurons use the delays inherent in dendritic processing in order to efficiently encode the information content of incoming signals. Because traditional artificial neurons fail to capture this encoding, they have less processing capability, and so it is necessary to use more gates when implementing traditional artificial neurons in electronic circuitry. Such higher-order functions as dynamic tasking are effected by use of pools (collections) of spiking neurons interconnected by spike-transmitting fibers. The SVM includes adaptive thresholds and submodels of transport of ions (in imitation of such transport in biological

  19. Memristors Empower Spiking Neurons With Stochasticity

    KAUST Repository

    Al-Shedivat, Maruan

    2015-06-01

    Recent theoretical studies have shown that probabilistic spiking can be interpreted as learning and inference in cortical microcircuits. This interpretation creates new opportunities for building neuromorphic systems driven by probabilistic learning algorithms. However, such systems must have two crucial features: 1) the neurons should follow a specific behavioral model, and 2) stochastic spiking should be implemented efficiently for it to be scalable. This paper proposes a memristor-based stochastically spiking neuron that fulfills these requirements. First, the analytical model of the memristor is enhanced so it can capture the behavioral stochasticity consistent with experimentally observed phenomena. The switching behavior of the memristor model is demonstrated to be akin to the firing of the stochastic spike response neuron model, the primary building block for probabilistic algorithms in spiking neural networks. Furthermore, the paper proposes a neural soma circuit that utilizes the intrinsic nondeterminism of memristive switching for efficient spike generation. The simulations and analysis of the behavior of a single stochastic neuron and a winner-take-all network built of such neurons and trained on handwritten digits confirm that the circuit can be used for building probabilistic sampling and pattern adaptation machinery in spiking networks. The findings constitute an important step towards scalable and efficient probabilistic neuromorphic platforms. © 2011 IEEE.
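The neuron model targeted here, the stochastic spike response neuron, fires probabilistically as a function of its membrane potential. A minimal sketch of that firing decision (the sigmoidal escape function and its parameters are illustrative; in the paper this randomness is supplied by intrinsic memristive switching rather than a software RNG):

```python
import numpy as np

def stochastic_spike(u, u_thresh=0.0, beta=2.0, rng=None):
    """Stochastic spike-response firing decision: fire with probability
    sigma(beta * (u - u_thresh)). Returns (fired, firing_probability)."""
    rng = rng or np.random.default_rng()
    p = 1.0 / (1.0 + np.exp(-beta * (u - u_thresh)))
    return rng.random() < p, p

rng = np.random.default_rng(5)
fires = [stochastic_spike(0.5, rng=rng)[0] for _ in range(1000)]
# empirical firing fraction approaches sigma(beta * 0.5) ≈ 0.73
```

Matching a device's switching statistics to this sigmoidal escape function is the correspondence the paper establishes for the memristor.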

  20. Biological modelling of a computational spiking neural network with neuronal avalanches

    Science.gov (United States)

    Li, Xiumin; Chen, Qing; Xue, Fangzheng

    2017-05-01

    In recent years, an increasing number of studies have demonstrated that networks in the brain can self-organize into a critical state where dynamics exhibit a mixture of ordered and disordered patterns. This critical branching phenomenon is termed neuronal avalanches. It has been hypothesized that the homeostatic level balanced between stability and plasticity of this critical state may be the optimal state for performing diverse neural computational tasks. However, the critical region for high performance is narrow and sensitive for spiking neural networks (SNNs). In this paper, we investigated the role of the critical state in neural computations based on liquid-state machines, a biologically plausible computational neural network model for real-time computing. The computational performance of an SNN when operating at the critical state and, in particular, with spike-timing-dependent plasticity for updating synaptic weights is investigated. The network is found to show the best computational performance when it is subjected to critical dynamic states. Moreover, the active-neuron-dominant structure refined from synaptic learning can remarkably enhance the robustness of the critical state and further improve computational accuracy. These results may have important implications in the modelling of spiking neural networks with optimal computational performance. This article is part of the themed issue 'Mathematical methods in medicine: neuroscience, cardiology and pathology'.
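The "critical branching" picture behind neuronal avalanches has a standard minimal model: each active neuron activates on average sigma others at the next time step, and at the critical point sigma = 1 avalanche sizes become scale-free. A hypothetical sketch (the Poisson offspring assumption and cutoff are illustrative; the paper works with full liquid-state-machine SNNs):

```python
import numpy as np

def avalanche_sizes(n_avalanches, branching=1.0, max_size=10000, seed=4):
    """Branching-process model of neuronal avalanches: each active neuron
    triggers Poisson(branching) neurons at the next step. At branching = 1
    the process is critical and sizes follow a heavy-tailed (~ s^-3/2)
    distribution; the cutoff caps runaway supercritical cascades."""
    rng = np.random.default_rng(seed)
    sizes = []
    for _ in range(n_avalanches):
        active, size = 1, 1
        while active > 0 and size < max_size:
            active = rng.poisson(branching * active)
            size += active
        sizes.append(size)
    return np.array(sizes)

sizes = avalanche_sizes(1000)   # mix of tiny and very large avalanches
```

Sweeping `branching` below and above 1 shows how narrow the critical region is, which is the sensitivity the abstract refers to.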

  1. Mirror Neurons Modeled Through Spike-Timing-Dependent Plasticity are Affected by Channelopathies Associated with Autism Spectrum Disorder.

    Science.gov (United States)

    Antunes, Gabriela; da Silva, Samuel F Faria; de Souza, Fabio M Simoes

    2017-11-28

    Mirror neurons fire action potentials both when the agent performs a certain behavior and watches someone performing a similar action. Here, we present an original mirror neuron model based on the spike-timing-dependent plasticity (STDP) between two morpho-electrical models of neocortical pyramidal neurons. Both neurons fired spontaneously at a basal rate following a Poisson distribution, and the STDP between them was modeled by the triplet algorithm. Our simulation results demonstrated that STDP is sufficient for the rise of mirror neuron function between the pairs of neocortical neurons. This is a proof of concept that pairs of neocortical neurons associating sensory inputs to motor outputs could operate like mirror neurons. In addition, we used the mirror neuron model to investigate whether channelopathies associated with autism spectrum disorder could impair the modeled mirror function. Our simulation results showed that impaired hyperpolarization-activated cationic currents (Ih) affected the mirror function between the pairs of neocortical neurons coupled by STDP.
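The triplet STDP algorithm referred to here (Pfister & Gerstner, 2006) can be sketched in event-driven form: two presynaptic traces (r1, r2) and two postsynaptic traces (o1, o2) decay exponentially between spikes, depression is applied on each presynaptic spike and potentiation on each postsynaptic spike. Time constants and amplitudes below are illustrative, not the values used in the paper:

```python
import math

def triplet_stdp(pre_times, post_times, w0=0.5,
                 tau_plus=16.8, tau_minus=33.7, tau_x=101.0, tau_y=125.0,
                 a2_plus=5e-3, a2_minus=7e-3, a3_plus=6e-3, a3_minus=2e-4):
    """Event-driven triplet STDP sketch (after Pfister & Gerstner 2006).
    Traces r1, r2 (presynaptic) and o1, o2 (postsynaptic) decay between
    spikes; depression is applied on presynaptic spikes, potentiation on
    postsynaptic spikes. Times are in ms."""
    events = sorted([(t, "pre") for t in pre_times] +
                    [(t, "post") for t in post_times])
    r1 = r2 = o1 = o2 = 0.0
    w = w0
    t_last = events[0][0] if events else 0.0
    for t, kind in events:
        dt = t - t_last
        r1 *= math.exp(-dt / tau_plus)
        r2 *= math.exp(-dt / tau_x)
        o1 *= math.exp(-dt / tau_minus)
        o2 *= math.exp(-dt / tau_y)
        t_last = t
        if kind == "pre":
            w -= o1 * (a2_minus + a3_minus * r2)  # depression on pre spike
            r1 += 1.0
            r2 += 1.0
        else:
            w += r1 * (a2_plus + a3_plus * o2)    # potentiation on post spike
            o1 += 1.0
            o2 += 1.0
    return w
```

A pre spike followed by a post spike potentiates the synapse; the reverse ordering depresses it, as in classical pair-based STDP, with the triplet terms adding frequency dependence.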

  2. Spike propagation through the dorsal root ganglia in an unmyelinated sensory neuron: a modeling study.

    Science.gov (United States)

    Sundt, Danielle; Gamper, Nikita; Jaffe, David B

    2015-12-01

    Unmyelinated C-fibers are a major type of sensory neurons conveying pain information. Action potential conduction is regulated by the bifurcation (T-junction) of sensory neuron axons within the dorsal root ganglia (DRG). Understanding how C-fiber signaling is influenced by the morphology of the T-junction and the local expression of ion channels is important for understanding pain signaling. In this study we used biophysical computer modeling to investigate the influence of axon morphology within the DRG and various membrane conductances on the reliability of spike propagation. As expected, calculated input impedance and the amplitude of propagating action potentials were both lowest at the T-junction. Propagation reliability for single spikes was highly sensitive to the diameter of the stem axon and the density of voltage-gated Na+ channels. A model containing only fast voltage-gated Na+ and delayed-rectifier K+ channels conducted trains of spikes up to frequencies of 110 Hz. The addition of slowly activating KCNQ channels (i.e., KV7 or M-channels) to the model reduced the following frequency to 30 Hz. Hyperpolarization produced by addition of a much slower conductance, such as a Ca2+-dependent K+ current, was needed to reduce the following frequency to 6 Hz. Attenuation of driving force due to ion accumulation or hyperpolarization produced by a Na+-K+ pump had no effect on following frequency but could influence the reliability of spike propagation together with the voltage shift generated by a Ca2+-dependent K+ current. These simulations suggest how specific ion channels within the DRG may contribute toward therapeutic treatments for chronic pain. Copyright © 2015 the American Physiological Society.

  3. Lateral Information Processing by Spiking Neurons: A Theoretical Model of the Neural Correlate of Consciousness

    Science.gov (United States)

    Ebner, Marc; Hameroff, Stuart

    2011-01-01

    Cognitive brain functions, for example, sensory perception, motor control and learning, are understood as computation by axonal-dendritic chemical synapses in networks of integrate-and-fire neurons. Cognitive brain functions may occur either consciously or nonconsciously (on “autopilot”). Conscious cognition is marked by gamma synchrony EEG, mediated largely by dendritic-dendritic gap junctions, sideways connections in input/integration layers. Gap-junction-connected neurons define a sub-network within a larger neural network. A theoretical model (the “conscious pilot”) suggests that as gap junctions open and close, a gamma-synchronized subnetwork, or zone moves through the brain as an executive agent, converting nonconscious “auto-pilot” cognition to consciousness, and enhancing computation by coherent processing and collective integration. In this study we implemented sideways “gap junctions” in a single-layer artificial neural network to perform figure/ground separation. The set of neurons connected through gap junctions form a reconfigurable resistive grid or sub-network zone. In the model, outgoing spikes are temporally integrated and spatially averaged using the fixed resistive grid set up by neurons of similar function which are connected through gap-junctions. This spatial average, essentially a feedback signal from the neuron's output, determines whether particular gap junctions between neurons will open or close. Neurons connected through open gap junctions synchronize their output spikes. We have tested our gap-junction-defined sub-network in a one-layer neural network on artificial retinal inputs using real-world images. Our system is able to perform figure/ground separation where the laterally connected sub-network of neurons represents a perceived object. Even though we only show results for visual stimuli, our approach should generalize to other modalities. The system demonstrates a moving sub-network zone of synchrony, within which

  4. Lateral Information Processing by Spiking Neurons: A Theoretical Model of the Neural Correlate of Consciousness

    Directory of Open Access Journals (Sweden)

    Marc Ebner

    2011-01-01

    Full Text Available Cognitive brain functions, for example, sensory perception, motor control and learning, are understood as computation by axonal-dendritic chemical synapses in networks of integrate-and-fire neurons. Cognitive brain functions may occur either consciously or nonconsciously (on “autopilot”). Conscious cognition is marked by gamma synchrony EEG, mediated largely by dendritic-dendritic gap junctions, sideways connections in input/integration layers. Gap-junction-connected neurons define a sub-network within a larger neural network. A theoretical model (the “conscious pilot”) suggests that as gap junctions open and close, a gamma-synchronized subnetwork, or zone, moves through the brain as an executive agent, converting nonconscious “auto-pilot” cognition to consciousness, and enhancing computation by coherent processing and collective integration. In this study we implemented sideways “gap junctions” in a single-layer artificial neural network to perform figure/ground separation. The set of neurons connected through gap junctions form a reconfigurable resistive grid or sub-network zone. In the model, outgoing spikes are temporally integrated and spatially averaged using the fixed resistive grid set up by neurons of similar function which are connected through gap-junctions. This spatial average, essentially a feedback signal from the neuron's output, determines whether particular gap junctions between neurons will open or close. Neurons connected through open gap junctions synchronize their output spikes. We have tested our gap-junction-defined sub-network in a one-layer neural network on artificial retinal inputs using real-world images. Our system is able to perform figure/ground separation where the laterally connected sub-network of neurons represents a perceived object. Even though we only show results for visual stimuli, our approach should generalize to other modalities. The system demonstrates a moving sub-network zone of

  5. Computational modeling of spike generation in serotonergic neurons of the dorsal raphe nucleus.

    Science.gov (United States)

    Tuckwell, Henry C; Penington, Nicholas J

    2014-07-01

    Serotonergic neurons of the dorsal raphe nucleus, with their extensive innervation of limbic and higher brain regions and interactions with the endocrine system, have important modulatory or regulatory effects on many cognitive, emotional and physiological processes. They have been strongly implicated in responses to stress and in the occurrence of major depressive disorder and other psychiatric disorders. In order to quantify some of these effects, detailed mathematical models of the activity of such cells are required which describe their complex neurochemistry and neurophysiology. We consider here a single-compartment model of these neurons which is capable of describing many of the known features of spike generation, particularly the slow rhythmic pacemaking activity often observed in these cells in a variety of species. Included in the model are 11 kinds of ion channels: a fast sodium current INa, a delayed rectifier potassium current IKDR, a transient potassium current IA, a slow non-inactivating potassium current IM, a low-threshold calcium current IT, two high threshold calcium currents IL and IN, small and large conductance potassium currents ISK and IBK, a hyperpolarization-activated cation current IH and a leak current ILeak. In Sections 3-8, each current type is considered in detail and parameters estimated from voltage clamp data where possible. Three kinds of model are considered for the BK current and two for the leak current. Intracellular calcium ion concentration Cai is an additional component and calcium dynamics along with buffering and pumping is discussed in Section 9. The remainder of the article contains descriptions of computed solutions which reveal both spontaneous and driven spiking with several parameter sets. Attention is focused on the properties usually associated with these neurons, particularly long duration of action potential, steep upslope on the leading edge of spikes, pacemaker-like spiking, long-lasting afterhyperpolarization
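The core of such a single-compartment model is a current-balance equation, C dV/dt = -(sum of ionic currents) + I_app, stepped numerically over all channel types. The sketch below shows only the integration scheme with a placeholder leak current (conductance and reversal potential are arbitrary); the gating dynamics of the 11 currents are omitted:

```python
def step(v, ca, currents, i_app, dt=0.025, cm=1.0):
    """One forward-Euler step of C dV/dt = I_app - sum(I_ion); each
    current is a function of the membrane potential v and intracellular
    calcium ca (gating variables omitted in this sketch)."""
    i_ion = sum(i(v, ca) for i in currents)
    return v + dt * (i_app - i_ion) / cm

# placeholder leak current: g_leak = 0.1 mS/cm^2, E_leak = -60 mV
leak = lambda v, ca: 0.1 * (v - (-60.0))

v = -70.0
for _ in range(4000):             # 100 ms at dt = 0.025 ms
    v = step(v, 0.0, [leak], i_app=0.0)
```

With only the leak current the potential simply relaxes to E_leak; adding INa, IKDR, and the other currents as further entries in `currents` is what turns this skeleton into a spiking model.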

  6. A Computational Model to Investigate Astrocytic Glutamate Uptake Influence on Synaptic Transmission and Neuronal Spiking

    Directory of Open Access Journals (Sweden)

    Sushmita Lakshmi Allam

    2012-10-01

    Full Text Available Over the past decades, our view of astrocytes has switched from passive support cells to active processing elements in the brain. The current view is that astrocytes shape neuronal communication and also play an important role in many neurodegenerative diseases. Despite the growing awareness of the importance of astrocytes, the exact mechanisms underlying neuron-astrocyte communication and the physiological consequences of astrocytic-neuronal interactions remain largely unclear. In this work, we define a modeling framework that will permit us to address unanswered questions regarding the role of astrocytes. Our computational model of a detailed glutamatergic synapse facilitates the analysis of neural system responses to various stimuli and conditions that are otherwise difficult to obtain experimentally, in particular the readouts at the sub-cellular level. In this paper, we extend a detailed glutamatergic synaptic model to include astrocytic glutamate transporters. We demonstrate how these glial transporters, responsible for the majority of glutamate uptake, modulate synaptic transmission mediated by ionotropic AMPA and NMDA receptors at glutamatergic synapses. Furthermore, we investigate how these local signaling effects at the synaptic level are translated into varying spatio-temporal patterns of neuron firing. Paired pulse stimulation results reveal that the effect of astrocytic glutamate uptake is more apparent when the input inter-spike interval is sufficiently long to allow the receptors to recover from desensitization. These results suggest an important functional role of astrocytes in spike timing dependent processes and demand further investigation of the molecular basis of certain neurological diseases specifically related to alterations in astrocytic glutamate uptake, such as epilepsy.

  7. Memory-induced resonancelike suppression of spike generation in a resonate-and-fire neuron model

    Science.gov (United States)

    Mankin, Romi; Paekivi, Sander

    2018-01-01

    The behavior of a stochastic resonate-and-fire neuron model based on a reduction of a fractional noise-driven generalized Langevin equation (GLE) with a power-law memory kernel is considered. The effect of temporally correlated random activity of synaptic inputs, which arise from other neurons forming local and distant networks, is modeled as an additive fractional Gaussian noise in the GLE. Using a first-passage-time formulation, in certain system parameter domains exact expressions for the output interspike interval (ISI) density and for the survival probability (the probability that a spike is not generated) are derived and their dependence on input parameters, especially on the memory exponent, is analyzed. In the case of external white noise, it is shown that at intermediate values of the memory exponent the survival probability is significantly enhanced in comparison with the cases of strong and weak memory, which causes a resonancelike suppression of the probability of spike generation as a function of the memory exponent. Moreover, an examination of the dependence of multimodality in the ISI distribution on input parameters shows that there exists a critical memory exponent αc ≈ 0.402, which marks a dynamical transition in the behavior of the system. That phenomenon is illustrated by a phase diagram describing the emergence of three qualitatively different structures of the ISI distribution. Similarities and differences between the behavior of the model under internal and external noise are also discussed.
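For orientation, the resonate-and-fire neuron underlying such models is usually written in the complex form of Izhikevich (2001): a damped subthreshold oscillation with a threshold on its imaginary part. The sketch below uses simple white noise as a crude stand-in for the fractional Gaussian input of the paper; all parameter values are illustrative:

```python
import random

def resonate_and_fire(inputs, dt=1e-3, b=-1.0, omega=10.0,
                      thresh=1.0, sigma=0.0, seed=0):
    """Resonate-and-fire neuron (Izhikevich 2001): dz/dt = (b + i*omega) z
    + I(t); a spike is emitted when Im(z) reaches the threshold and z is
    reset. sigma scales an optional white-noise drive (Euler-Maruyama)."""
    rng = random.Random(seed)
    z, spike_times = 0j, []
    for k, i_in in enumerate(inputs):
        drive = i_in + sigma * rng.gauss(0.0, 1.0) / dt ** 0.5
        z += dt * ((b + 1j * omega) * z + drive)
        if z.imag >= thresh:
            spike_times.append(k * dt)
            z = 0j
    return spike_times
```

A sufficiently strong constant input drives repetitive firing, while a weak input leaves the neuron oscillating below threshold; adding noise (sigma > 0) makes spike generation a first-passage problem, as analyzed in the paper.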

  8. Reinforcement learning of targeted movement in a spiking neuronal model of motor cortex.

    Science.gov (United States)

    Chadderdon, George L; Neymotin, Samuel A; Kerr, Cliff C; Lytton, William W

    2012-01-01

    Sensorimotor control has traditionally been considered from a control theory perspective, without relation to neurobiology. In contrast, here we utilized a spiking-neuron model of motor cortex and trained it to perform a simple movement task, which consisted of rotating a single-joint "forearm" to a target. Learning was based on a reinforcement mechanism analogous to that of the dopamine system. This provided a global reward or punishment signal in response to decreasing or increasing distance from hand to target, respectively. Output was partially driven by Poisson motor babbling, creating stochastic movements that could then be shaped by learning. The virtual forearm consisted of a single segment rotated around an elbow joint, controlled by flexor and extensor muscles. The model consisted of 144 excitatory and 64 inhibitory event-based neurons, each with AMPA, NMDA, and GABA synapses. Proprioceptive cell input to this model encoded the 2 muscle lengths. Plasticity was only enabled in feedforward connections between input and output excitatory units, using spike-timing-dependent eligibility traces for synaptic credit or blame assignment. Learning resulted from a global 3-valued signal: reward (+1), no learning (0), or punishment (-1), corresponding to phasic increases, lack of change, or phasic decreases of dopaminergic cell firing, respectively. Successful learning only occurred when both reward and punishment were enabled. In this case, 5 target angles were learned successfully within 180 s of simulation time, with a median error of 8 degrees. Motor babbling allowed exploratory learning, but decreased the stability of the learned behavior, since the hand continued moving after reaching the target. Our model demonstrated that a global reinforcement signal, coupled with eligibility traces for synaptic plasticity, can train a spiking sensorimotor network to perform goal-directed motor behavior.
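The credit-assignment scheme described above can be sketched as reward-modulated plasticity: STDP-like pre-before-post coincidences feed per-synapse eligibility traces, and the global 3-valued signal (+1 / 0 / -1) converts the traces into weight changes. All names and parameter values here are hypothetical, not the paper's implementation:

```python
import math

def update_weights(weights, elig, reward, pairs=(), lr=0.01,
                   tau_e=0.1, dt=0.001):
    """One simulation step of reward-modulated plasticity: decay all
    eligibility traces, tag synapses with a pre-before-post coincidence
    this step, then let the global reinforcement signal (+1, 0 or -1)
    turn traces into weight changes."""
    for i in elig:
        elig[i] *= math.exp(-dt / tau_e)      # traces decay over time
    for i in pairs:
        elig[i] = elig.get(i, 0.0) + 1.0      # coincidences tag synapses
    if reward != 0:
        for i, e in elig.items():
            weights[i] = weights.get(i, 0.0) + lr * reward * e
    return weights, elig

# a coincidence on synapse 0, then a phasic reward on the next step
w, e = update_weights({}, {}, reward=0, pairs=[0])
w, e = update_weights(w, e, reward=+1)
```

Because the trace outlives the coincidence, a reward that arrives slightly later still credits the right synapse; a punishment (-1) would depress it instead.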

  9. Reinforcement learning of targeted movement in a spiking neuronal model of motor cortex.

    Directory of Open Access Journals (Sweden)

    George L Chadderdon

    Full Text Available Sensorimotor control has traditionally been considered from a control theory perspective, without relation to neurobiology. In contrast, here we utilized a spiking-neuron model of motor cortex and trained it to perform a simple movement task, which consisted of rotating a single-joint "forearm" to a target. Learning was based on a reinforcement mechanism analogous to that of the dopamine system. This provided a global reward or punishment signal in response to decreasing or increasing distance from hand to target, respectively. Output was partially driven by Poisson motor babbling, creating stochastic movements that could then be shaped by learning. The virtual forearm consisted of a single segment rotated around an elbow joint, controlled by flexor and extensor muscles. The model consisted of 144 excitatory and 64 inhibitory event-based neurons, each with AMPA, NMDA, and GABA synapses. Proprioceptive cell input to this model encoded the 2 muscle lengths. Plasticity was only enabled in feedforward connections between input and output excitatory units, using spike-timing-dependent eligibility traces for synaptic credit or blame assignment. Learning resulted from a global 3-valued signal: reward (+1), no learning (0), or punishment (-1), corresponding to phasic increases, lack of change, or phasic decreases of dopaminergic cell firing, respectively. Successful learning only occurred when both reward and punishment were enabled. In this case, 5 target angles were learned successfully within 180 s of simulation time, with a median error of 8 degrees. Motor babbling allowed exploratory learning, but decreased the stability of the learned behavior, since the hand continued moving after reaching the target. Our model demonstrated that a global reinforcement signal, coupled with eligibility traces for synaptic plasticity, can train a spiking sensorimotor network to perform goal-directed motor behavior.

  10. Prospective Coding by Spiking Neurons.

    Directory of Open Access Journals (Sweden)

    Johanni Brea

    2016-06-01

    Full Text Available Animals learn to make predictions, such as associating the sound of a bell with upcoming feeding or predicting a movement that a motor command is eliciting. How predictions are realized on the neuronal level and what plasticity rule underlies their learning is not well understood. Here we propose a biologically plausible synaptic plasticity rule to learn predictions on a single neuron level on a timescale of seconds. The learning rule allows a spiking two-compartment neuron to match its current firing rate to its own expected future discounted firing rate. For instance, if an originally neutral event is repeatedly followed by an event that elevates the firing rate of a neuron, the originally neutral event will eventually also elevate the neuron's firing rate. The plasticity rule is a form of spike timing dependent plasticity in which a presynaptic spike followed by a postsynaptic spike leads to potentiation. Even if the plasticity window has a width of 20 milliseconds, associations on the time scale of seconds can be learned. We illustrate prospective coding with three examples: learning to predict a time varying input, learning to predict the next stimulus in a delayed paired-associate task and learning with a recurrent network to reproduce a temporally compressed version of a sequence. We discuss the potential role of the learning mechanism in classical trace conditioning. In the special case that the signal to be predicted encodes reward, the neuron learns to predict the discounted future reward and learning is closely related to the temporal difference learning algorithm TD(λ).
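For reference, the temporal-difference algorithm TD(λ) that the learning rule is related to, in its standard tabular form with accumulating eligibility traces (the two-step chain task below is a made-up example, not from the paper):

```python
def td_lambda(episodes, n_states, alpha=0.1, gamma=0.9, lam=0.8):
    """Tabular TD(lambda) with accumulating eligibility traces. Each
    episode is a list of (state, reward, next_state) transitions; the
    returned list holds the learned discounted-value estimates."""
    v = [0.0] * n_states
    for episode in episodes:
        e = [0.0] * n_states                  # traces reset each episode
        for s, r, s2 in episode:
            delta = r + gamma * v[s2] - v[s]  # TD error
            e[s] += 1.0                       # accumulate trace for s
            for i in range(n_states):
                v[i] += alpha * delta * e[i]
                e[i] *= gamma * lam           # traces decay
    return v

# chain 0 -> 1 -> terminal(2), reward 1 delivered on the final step
chain = [[(0, 0.0, 1), (1, 1.0, 2)]] * 200
values = td_lambda(chain, n_states=3)
```

After training, state 1 predicts the immediate reward (value near 1) and state 0 the discounted reward one step ahead (value near gamma), mirroring how the neuron's firing comes to predict future discounted activity.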

  11. Membrane capacitive memory alters spiking in neurons described by the fractional-order Hodgkin-Huxley model.

    Directory of Open Access Journals (Sweden)

    Seth H Weinberg

    Full Text Available Excitable cells and cell membranes are often modeled by the simple yet elegant parallel resistor-capacitor circuit. However, studies have shown that the passive properties of membranes may be more appropriately modeled with a non-ideal capacitor, in which the current-voltage relationship is given by a fractional-order derivative. Fractional-order membrane potential dynamics introduce capacitive memory effects, i.e., dynamics are influenced by a weighted sum of the membrane potential prior history. However, it is not clear to what extent fractional-order dynamics may alter the properties of active excitable cells. In this study, we investigate the spiking properties of the neuronal membrane patch, nerve axon, and neural networks described by the fractional-order Hodgkin-Huxley neuron model. We find that in the membrane patch model, as fractional-order decreases, i.e., a greater influence of membrane potential memory, peak sodium and potassium currents are altered, and spike frequency and amplitude are generally reduced. In the nerve axon, the velocity of spike propagation increases as fractional-order decreases, while in a neural network, electrical activity is more likely to cease for smaller fractional-order. Importantly, we demonstrate that the modulation of the peak ionic currents that occurs for reduced fractional-order alone fails to reproduce many of the key alterations in spiking properties, suggesting that membrane capacitive memory and fractional-order membrane potential dynamics are important and necessary to reproduce neuronal electrical activity.

  12. Membrane capacitive memory alters spiking in neurons described by the fractional-order Hodgkin-Huxley model.

    Science.gov (United States)

    Weinberg, Seth H

    2015-01-01

    Excitable cells and cell membranes are often modeled by the simple yet elegant parallel resistor-capacitor circuit. However, studies have shown that the passive properties of membranes may be more appropriately modeled with a non-ideal capacitor, in which the current-voltage relationship is given by a fractional-order derivative. Fractional-order membrane potential dynamics introduce capacitive memory effects, i.e., dynamics are influenced by a weighted sum of the membrane potential prior history. However, it is not clear to what extent fractional-order dynamics may alter the properties of active excitable cells. In this study, we investigate the spiking properties of the neuronal membrane patch, nerve axon, and neural networks described by the fractional-order Hodgkin-Huxley neuron model. We find that in the membrane patch model, as fractional-order decreases, i.e., a greater influence of membrane potential memory, peak sodium and potassium currents are altered, and spike frequency and amplitude are generally reduced. In the nerve axon, the velocity of spike propagation increases as fractional-order decreases, while in a neural network, electrical activity is more likely to cease for smaller fractional-order. Importantly, we demonstrate that the modulation of the peak ionic currents that occurs for reduced fractional-order alone fails to reproduce many of the key alterations in spiking properties, suggesting that membrane capacitive memory and fractional-order membrane potential dynamics are important and necessary to reproduce neuronal electrical activity.
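The fractional-order derivative in such models is typically discretized with the Grünwald-Letnikov scheme, in which every past voltage sample enters the update through binomial weights — exactly the "capacitive memory" the abstract describes. The relaxation example below (time constant, step size, resting level) is illustrative, not the Hodgkin-Huxley model of the paper:

```python
def gl_weights(alpha, n):
    """Grünwald-Letnikov weights w_k = (-1)^k * C(alpha, k), computed by
    the recursion w_k = w_{k-1} * (1 - (alpha + 1) / k); for alpha = 1
    they reduce to the ordinary first difference [1, -1, 0, ...]."""
    w = [1.0]
    for k in range(1, n):
        w.append(w[-1] * (1.0 - (alpha + 1.0) / k))
    return w

def frac_relax(alpha, v0=-70.0, v_rest=-60.0, tau=10.0, dt=0.1, steps=500):
    """Fractional-order leaky relaxation D^alpha V = -(V - V_rest)/tau,
    integrated with the GL scheme: the whole voltage history enters each
    step, i.e. capacitive memory (illustrative parameters, ms/mV units)."""
    w = gl_weights(alpha, steps + 1)
    hist = [v0 - v_rest]                      # deviation from rest
    for n in range(1, steps + 1):
        memory = sum(w[k] * hist[n - k] for k in range(1, n + 1))
        hist.append((-hist[-1] / tau) * dt ** alpha - memory)
    return hist[-1] + v_rest
```

For alpha = 1 the scheme reproduces the ordinary exponential relaxation; for alpha < 1 the decay acquires a power-law tail while remaining monotone between the initial and resting potentials.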

  13. Modeling spike-wave discharges by a complex network of neuronal oscillators

    NARCIS (Netherlands)

    Medvedeva, T.M.; Sysoeva, M.V.; Luijtelaar, E.L.J.M. van; Sysoev, I.V.

    2018-01-01

    Purpose: The organization of the neural networks and the mechanisms that generate the spike-wave discharges (SWDs) highly stereotypical of absence epilepsy are heavily debated. Here we describe a model which can reproduce both the characteristics of SWDs and the dynamics of coupling between brain

  14. The mechanism of saccade motor pattern generation investigated by a large-scale spiking neuron model of the superior colliculus.

    Directory of Open Access Journals (Sweden)

    Jan Morén

    Full Text Available The subcortical saccade-generating system consists of the retina, superior colliculus, cerebellum and brainstem motoneuron areas. The superior colliculus is the site of sensory-motor convergence within this basic visuomotor loop preserved throughout the vertebrates. While the system has been extensively studied, there are still several outstanding questions regarding how and where the saccade eye movement profile is generated and the contribution of respective parts within this system. Here we construct a spiking neuron model of the whole intermediate layer of the superior colliculus based on the latest anatomy and physiology data. The model consists of conductance-based spiking neurons with quasi-visual, burst, buildup, local inhibitory, and deep layer inhibitory neurons. The visual input is given from the superficial superior colliculus and the burst neurons send the output to the brainstem oculomotor nuclei. Gating input from the basal ganglia and an integral feedback from the reticular formation are also included. We implement the model in the NEST simulator and show that the activity profile of bursting neurons can be reproduced by a combination of NMDA-type and cholinergic excitatory synaptic inputs and integrative inhibitory feedback. The model shows that the spreading neural activity observed in vivo can keep track of the collicular output over time and reset the system at the end of a saccade through activation of deep layer inhibitory neurons. We identify the model parameters according to neural recording data and show that the resulting model recreates the saccade size-velocity curves known as the saccadic main sequence in behavioral studies. The present model is consistent with theories that the superior colliculus takes a principal role in generating the temporal profiles of saccadic eye movements, rather than just specifying the end points of eye movements.

  15. Macroscopic Description for Networks of Spiking Neurons

    Science.gov (United States)

    Montbrió, Ernest; Pazó, Diego; Roxin, Alex

    2015-04-01

    A major goal of neuroscience, statistical physics, and nonlinear dynamics is to understand how brain function arises from the collective dynamics of networks of spiking neurons. This challenge has been chiefly addressed through large-scale numerical simulations. Alternatively, researchers have formulated mean-field theories to gain insight into macroscopic states of large neuronal networks in terms of the collective firing activity of the neurons, or the firing rate. However, these theories have not succeeded in establishing an exact correspondence between the firing rate of the network and the underlying microscopic state of the spiking neurons. This has largely constrained the range of applicability of such macroscopic descriptions, particularly when trying to describe neuronal synchronization. Here, we provide the derivation of a set of exact macroscopic equations for a network of spiking neurons. Our results reveal that the spike generation mechanism of individual neurons introduces an effective coupling between two biophysically relevant macroscopic quantities, the firing rate and the mean membrane potential, which together govern the evolution of the neuronal network. The resulting equations exactly describe all possible macroscopic dynamical states of the network, including states of synchronous spiking activity. Finally, we show that the firing-rate description is related, via a conformal map, to a low-dimensional description in terms of the Kuramoto order parameter, called Ott-Antonsen theory. We anticipate that our results will be an important tool in investigating how large networks of spiking neurons self-organize in time to process and encode information in the brain.
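The exact macroscopic equations derived in this work (for quadratic integrate-and-fire neurons with Lorentzian-distributed excitabilities) couple the firing rate r and the mean membrane potential v. A minimal integration in dimensionless form, here with coupling j = 0 so the system has a single stable fixed point; all parameter values are illustrative:

```python
import math

def simulate_fre(eta_bar=-5.0, delta=1.0, j=0.0, dt=1e-4, t_end=20.0):
    """Forward-Euler integration of the firing-rate equations of
    Montbrio, Pazo & Roxin (2015), dimensionless form:
        dr/dt = delta/pi + 2*r*v
        dv/dt = v**2 + eta_bar + j*r - (pi*r)**2
    Returns the final (firing rate, mean membrane potential)."""
    r, v = 0.0, -2.0
    for _ in range(int(t_end / dt)):
        dr = delta / math.pi + 2.0 * r * v
        dv = v * v + eta_bar + j * r - (math.pi * r) ** 2
        r, v = r + dt * dr, v + dt * dv
    return r, v

rate, mean_v = simulate_fre()
```

With recurrent coupling switched on (j > 0) the same two equations reproduce the bistable and oscillatory regimes analyzed in the paper.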

  16. Implementing Signature Neural Networks with Spiking Neurons.

    Science.gov (United States)

    Carrillo-Medina, José Luis; Latorre, Roberto

    2016-01-01

    Spiking Neural Networks constitute the most promising approach to develop realistic Artificial Neural Networks (ANNs). Unlike traditional firing rate-based paradigms, information coding in spiking models is based on the precise timing of individual spikes. It has been demonstrated that spiking ANNs can be successfully and efficiently applied to multiple realistic problems solvable with traditional strategies (e.g., data classification or pattern recognition). In recent years, major breakthroughs in neuroscience research have discovered new relevant computational principles in different living neural systems. Could ANNs benefit from some of these recent findings providing novel elements of inspiration? This is an intriguing question for the research community and the development of spiking ANNs including novel bio-inspired information coding and processing strategies is gaining attention. From this perspective, in this work, we adapt the core concepts of the recently proposed Signature Neural Network paradigm-i.e., neural signatures to identify each unit in the network, local information contextualization during the processing, and multicoding strategies for information propagation regarding the origin and the content of the data-to be employed in a spiking neural network. To the best of our knowledge, none of these mechanisms have been used yet in the context of ANNs of spiking neurons. This paper provides a proof-of-concept for their applicability in such networks. Computer simulations show that a simple network model like the discussed here exhibits complex self-organizing properties. The combination of multiple simultaneous encoding schemes allows the network to generate coexisting spatio-temporal patterns of activity encoding information in different spatio-temporal spaces. As a function of the network and/or intra-unit parameters shaping the corresponding encoding modality, different forms of competition among the evoked patterns can emerge even in the absence

  17. Modeling of multisensory convergence with a network of spiking neurons: a reverse engineering approach.

    Science.gov (United States)

    Lim, Hun Ki; Keniston, Leslie P; Cios, Krzysztof J

    2011-07-01

    Multisensory processing in the brain underlies a wide variety of perceptual phenomena, but little is known about the underlying mechanisms of how multisensory neurons are formed. This lack of knowledge is due to the difficulty of manipulating and testing the parameters of multisensory convergence, the first and definitive step in the multisensory process, in biological experiments. Therefore, by using a computational model of multisensory convergence, this study seeks to provide insight into the mechanisms of multisensory convergence. To reverse-engineer multisensory convergence, we used a biologically realistic neuron model and a biology-inspired plasticity rule, but did not make any a priori assumptions about the multisensory properties of neurons in the network. The network consisted of two separate projection areas that converged upon neurons in a third area, and stimulation involved activation of one projection area, the other, or their combination. Experiments consisted of two parts: network training and multisensory simulation. Analyses were performed, first, to find multisensory properties in the simulated networks; second, to reveal properties of the networks using a graph-theoretical approach; and third, to generate hypotheses related to multisensory convergence. The results showed that the generation of multisensory neurons was related to the topological properties of the network; in particular, the strengths of connections after training were found to play an important role in forming, and thus distinguishing, multisensory neuron types. © 2011 IEEE.

  18. Model-based analysis and control of a network of basal ganglia spiking neurons in the normal and Parkinsonian states

    Science.gov (United States)

    Liu, Jianbo; Khalil, Hassan K.; Oweiss, Karim G.

    2011-08-01

    Controlling the spatiotemporal firing pattern of an intricately connected network of neurons through microstimulation is highly desirable in many applications. We investigated in this paper the feasibility of using a model-based approach to the analysis and control of a basal ganglia (BG) network model of Hodgkin-Huxley (HH) spiking neurons through microstimulation. Detailed analysis of this network model suggests that it can reproduce the experimentally observed characteristics of BG neurons under a normal and a pathological Parkinsonian state. A simplified neuronal firing rate model, identified from the detailed HH network model, is shown to capture the essential network dynamics. Mathematical analysis of the simplified model reveals the presence of a systematic relationship between the network's structure and its dynamic response to spatiotemporally patterned microstimulation. We show that both the network synaptic organization and the local mechanism of microstimulation can impose tight constraints on the possible spatiotemporal firing patterns that can be generated by the microstimulated network, which may hinder the effectiveness of microstimulation to achieve a desired objective under certain conditions. Finally, we demonstrate that the feedback control design aided by the mathematical analysis of the simplified model is indeed effective in driving the BG network in the normal and Parkinsonian states to follow a prescribed spatiotemporal firing pattern. We further show that the rhythmic/oscillatory patterns that characterize a dopamine-depleted BG network can be suppressed as a direct consequence of controlling the spatiotemporal pattern of a subpopulation of the output Globus Pallidus internalis (GPi) neurons in the network. This work may provide plausible explanations for the mechanisms underlying the therapeutic effects of deep brain stimulation (DBS) in Parkinson's disease and pave the way towards a model-based, network-level analysis and closed

  19. Spiking Activity of a LIF Neuron in Distributed Delay Framework

    Directory of Open Access Journals (Sweden)

    Saket Kumar Choudhary

    2016-06-01

    Full Text Available The evolution of the membrane potential and spiking activity of a single leaky integrate-and-fire (LIF) neuron in the distributed delay framework (DDF) is investigated. DDF provides a mechanism to incorporate a memory element, in the form of a delay (kernel) function, into single neuron models. The investigation covers LIF neuron models with two different kinds of delay kernel functions, namely a gamma-distributed delay kernel and a hypo-exponentially distributed delay kernel. The evolution of the membrane potential for the considered models is studied in terms of the stationary state probability distribution (SPD). The stationary state probability distributions of the membrane potential (SPDV) for the considered neuron models are found to be asymptotically similar and Gaussian distributed. In order to investigate the effect of membrane potential delay, the rate code scheme for neuronal information processing is applied. The firing rate and Fano factor for the considered neuron models are calculated, and the standard LIF model is used for comparison. It is observed that distributed delay increases the spiking activity of a neuron, and that this increase is larger for the hypo-exponential delay kernel than for the gamma delay kernel. Moreover, in the case of the hypo-exponential delay function, an LIF neuron generates spikes with a Fano factor less than 1.
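
The comparison baseline described above (a standard LIF neuron driven by noisy input, with spike counts summarized by a firing rate and Fano factor) can be sketched as follows. This is a minimal illustration, not the authors' DDF code; the Euler-Maruyama discretization and all parameter values are illustrative assumptions:

```python
import math
import random

def simulate_lif_spike_counts(mu=1.5, sigma=0.5, tau=10.0, v_th=1.0, v_reset=0.0,
                              dt=0.01, window=100.0, n_trials=100, seed=1):
    """Simulate a leaky integrate-and-fire neuron driven by white noise
    (Euler-Maruyama scheme) and return the spike count of each trial window."""
    rng = random.Random(seed)
    counts = []
    steps = int(window / dt)
    sqrt_dt = math.sqrt(dt)
    for _ in range(n_trials):
        v, n = v_reset, 0
        for _ in range(steps):
            v += dt * (mu - v / tau) + sigma * sqrt_dt * rng.gauss(0.0, 1.0)
            if v >= v_th:          # threshold crossing: spike and reset
                v = v_reset
                n += 1
        counts.append(n)
    return counts

def rate_and_fano(counts, window):
    """Firing rate (spikes per unit time) and Fano factor (count variance
    over count mean) across trials."""
    mean = sum(counts) / len(counts)
    var = sum((c - mean) ** 2 for c in counts) / len(counts)
    return mean / window, (var / mean if mean > 0 else float("nan"))
```

A distributed-delay kernel as in the abstract would enter as an additional convolution term in the drift; the sketch above is the delay-free reference case.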

  20. Energy consumption in Hodgkin–Huxley type fast spiking neuron model exposed to an external electric field

    Directory of Open Access Journals (Sweden)

    K. Usha

    2016-09-01

    Full Text Available This paper evaluates the change in metabolic energy required to maintain the signalling activity of neurons in the presence of an external electric field. We analysed the Hodgkin–Huxley type conductance-based fast spiking neuron model as an electrical circuit while varying the frequency and amplitude of the applied electric field. The study shows that the presence of an electric field increases the membrane potential, the electrical energy supply, and the metabolic energy consumption. As the amplitude of the applied electric field increases at constant frequency, the membrane potential increases, and consequently the electrical energy supply and metabolic energy consumption increase. On increasing the frequency of the applied field, the peak value of the membrane potential after depolarization gradually decreases; as a result, the electrical energy supply decreases, which leads to a lower rate of hydrolysis of ATP molecules.
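
The bookkeeping behind "electrical energy supply" can be illustrated on a much simpler system than the fast-spiking Hodgkin–Huxley model used in the paper: a passive membrane driven by a sinusoidal current standing in for the field-induced input, with the supplied energy accumulated as the integral of I(t)·V(t). A hedged sketch under these simplifying assumptions:

```python
import math

def electrical_energy(amplitude=1.0, freq=0.05, tau=10.0, dt=0.01, t_end=500.0):
    """Passive (leaky) membrane driven by a sinusoidal input current;
    accumulates the supplied electrical energy E = sum of I(t)*V(t)*dt.
    A linear stand-in for the conductance-based model in the abstract."""
    v, energy = 0.0, 0.0
    for k in range(int(t_end / dt)):
        t = k * dt
        i_ext = amplitude * math.sin(2 * math.pi * freq * t)  # field-induced drive
        v += dt * (-v / tau + i_ext)                          # membrane dynamics
        energy += i_ext * v * dt                              # instantaneous power
    return energy
```

For this linear membrane the supplied energy grows with the drive amplitude, echoing the amplitude dependence reported above; the spiking model in the paper behaves nonlinearly on top of this.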

  1. Neuronal coding and spiking randomness

    Czech Academy of Sciences Publication Activity Database

    Košťál, Lubomír; Lánský, Petr; Rospars, J. P.

    2007-01-01

    Roč. 26, č. 10 (2007), s. 2693-2988 ISSN 0953-816X R&D Projects: GA MŠk(CZ) LC554; GA AV ČR(CZ) 1ET400110401; GA AV ČR(CZ) KJB100110701 Grant - others: ECO-NET(FR) 112644PF Institutional research plan: CEZ:AV0Z50110509 Keywords: spike train * variability * neuroscience Subject RIV: FH - Neurology Impact factor: 3.673, year: 2007

  2. Extraction of Inter-Aural Time Differences Using a Spiking Neuron Network Model of the Medial Superior Olive

    Directory of Open Access Journals (Sweden)

    Jörg Encke

    2018-03-01

    Full Text Available The mammalian auditory system is able to extract temporal and spectral features from sound signals at the two ears. One important cue for the localization of low-frequency sound sources in the horizontal plane is the inter-aural time difference (ITD), which is first analyzed in the medial superior olive (MSO) in the brainstem. Neural recordings of ITD tuning curves at various stages along the auditory pathway suggest that ITDs in the mammalian brainstem are not represented in the form of a Jeffress-type place code. An alternative is the hemispheric opponent-channel code, according to which ITDs are encoded as the difference in the responses of the MSO nuclei in the two hemispheres. In this study, we present a physiologically plausible spiking neuron network model of the mammalian MSO circuit and apply two different methods of extracting ITDs from arbitrary sound signals. The network model is driven by a functional model of the auditory periphery and physiological models of the cochlear nucleus and the MSO. Using a linear opponent-channel decoder, we show that the network is able to detect changes in ITD with a precision down to 10 μs and that the sensitivity of the decoder depends on the slope of the ITD-rate functions. A second approach uses an artificial neural network to predict ITDs directly from the spiking output of the MSO and ANF models. Using this predictor, we show that the MSO network is able to reliably encode static and time-dependent ITDs over a large frequency range, including for complex signals such as speech.
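
The idea of an opponent-channel readout can be illustrated with idealized hemispheric rate functions. Assuming (purely for illustration, not from the paper) mirror-symmetric sigmoidal ITD-rate curves, the difference between the two hemispheric rates is an invertible function of ITD:

```python
import math

def mso_rate(itd_us, sign, slope=0.01, max_rate=100.0):
    """Idealized sigmoidal ITD-rate function of one MSO hemisphere;
    `sign` selects the preferred side, ITD is in microseconds."""
    return max_rate / (1.0 + math.exp(-sign * slope * itd_us))

def decode_itd(rate_left, rate_right, slope=0.01, max_rate=100.0):
    """Opponent-channel decoder: invert the normalized rate difference.
    For the mirror-symmetric sigmoids above, the difference equals
    tanh(slope * itd / 2), so atanh recovers the ITD."""
    d = (rate_left - rate_right) / max_rate          # in (-1, 1)
    d = max(-0.999, min(0.999, d))                   # guard atanh domain
    return 2.0 / slope * math.atanh(d)
```

As the abstract notes, decoder sensitivity depends on the slope of the ITD-rate functions: a steeper `slope` maps the same rate difference onto a smaller ITD change.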

  3. Stochastic optimal control of single neuron spike trains

    DEFF Research Database (Denmark)

    Iolov, Alexandre; Ditlevsen, Susanne; Longtin, André

    2014-01-01

    stimulation of a neuron to achieve a target spike train under the physiological constraint to not damage tissue. Approach. We pose a stochastic optimal control problem to precisely specify the spike times in a leaky integrate-and-fire (LIF) model of a neuron with noise assumed to be of intrinsic or synaptic...... to the spike times (open-loop control). Main results. We have developed a stochastic optimal control algorithm to obtain precise spike times. It is applicable in both the supra-threshold and sub-threshold regimes, under open-loop and closed-loop conditions and with an arbitrary noise intensity; the accuracy...... into account physiological constraints on the control. A precise and robust targeting of neural activity based on stochastic optimal control has great potential for regulating neural activity in e.g. prosthetic applications and to improve our understanding of the basic mechanisms by which neuronal firing...

  4. Inherently stochastic spiking neurons for probabilistic neural computation

    KAUST Repository

    Al-Shedivat, Maruan

    2015-04-01

    Neuromorphic engineering aims to design hardware that efficiently mimics neural circuitry and provides the means for emulating and studying neural systems. In this paper, we propose a new memristor-based neuron circuit that uniquely complements the scope of neuron implementations and follows the stochastic spike response model (SRM), which plays a cornerstone role in spike-based probabilistic algorithms. We demonstrate that the switching of the memristor is akin to the stochastic firing of the SRM. Our analysis and simulations show that the proposed neuron circuit satisfies a neural computability condition that enables probabilistic neural sampling and spike-based Bayesian learning and inference. Our findings constitute an important step towards memristive, scalable and efficient stochastic neuromorphic platforms. © 2015 IEEE.
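
The stochastic firing that the memristor is said to mimic can be sketched as an escape-rate process: in the spike response model, the instantaneous firing rate grows exponentially with the distance of the membrane potential from threshold. A minimal illustration (parameter names and values are assumptions, not taken from the paper):

```python
import math
import random

def srm_spike_train(u, theta=1.0, rho0=0.05, delta=0.2, dt=1.0, seed=0):
    """Stochastic SRM-style firing: at each time step the neuron fires
    with probability 1 - exp(-rho * dt), where the escape rate
    rho = rho0 * exp((u - theta) / delta) grows exponentially with the
    membrane potential u relative to threshold theta."""
    rng = random.Random(seed)
    spikes = []
    for t, ut in enumerate(u):
        rho = rho0 * math.exp((ut - theta) / delta)
        if rng.random() < 1.0 - math.exp(-rho * dt):
            spikes.append(t)
    return spikes
```

Note that even subthreshold potentials fire occasionally; it is this graded, noise-driven firing probability that the switching statistics of the memristor are argued to reproduce.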

  5. A spiking neuron model of the cortico-basal ganglia circuits for goal-directed and habitual action learning.

    Science.gov (United States)

    Chersi, Fabian; Mirolli, Marco; Pezzulo, Giovanni; Baldassarre, Gianluca

    2013-05-01

    Dual-system theories postulate that actions are supported either by a goal-directed or by a habit-driven response system. Neuroimaging and anatomo-functional studies have provided evidence that the prefrontal cortex plays a fundamental role in the first type of action control, while internal brain areas such as the basal ganglia are more active during habitual and overtrained responses. Additionally, it has been shown that areas of the cortex and the basal ganglia are connected through multiple parallel "channels", which are thought to function as an action selection mechanism resolving competitions between alternative options available in a given context. In this paper we propose a multi-layer network of spiking neurons that implements in detail the thalamo-cortical circuits that are believed to be involved in action learning and execution. A key feature of this model is that neurons are organized in small pools in the motor cortex and form independent loops with specific pools of the basal ganglia, where inhibitory circuits implement a multistep selection mechanism. The described model has been validated by using it to control the actions of a virtual monkey that has to learn to turn on briefly flashing lights by pressing corresponding buttons on a board. When the animal is able to fluently execute the task, the button-light associations are remapped so that it has to suppress its habitual behavior in order to execute goal-directed actions. The model nicely shows how sensory-motor associations for action sequences are formed at the cortico-basal ganglia level and how goal-directed decisions may override automatic motor responses. Copyright © 2012 Elsevier Ltd. All rights reserved.

  6. A memristive spiking neuron with firing rate coding

    Directory of Open Access Journals (Sweden)

    Marina eIgnatov

    2015-10-01

    Full Text Available Perception, decisions, and sensations are all encoded into trains of action potentials in the brain. The relation between stimulus strength and the all-or-nothing spiking of neurons is widely believed to be the basis of this coding. This initiated the development of spiking neuron models, one of today's most powerful conceptual tools for the analysis and emulation of neural dynamics. The success of electronic circuit models and their physical realization within silicon field-effect transistor circuits led to elegant technical approaches. Recently, the spectrum of electronic devices for neural computing has been extended by memristive devices, mainly used to emulate static synaptic functionality. Their capabilities for emulating neural activity were recently demonstrated using a memristive neuristor circuit, while a memristive neuron circuit has so far been elusive. Here, a spiking neuron model is experimentally realized in a compact circuit comprising memristive and memcapacitive devices based on the strongly correlated electron material vanadium dioxide (VO2) and on the chemical electromigration cell Ag/TiO2-x/Al. The circuit can emulate dynamical spiking patterns in response to an external stimulus, including adaptation, which is at the heart of firing rate coding as first observed by E.D. Adrian in 1926.

  7. A memristive spiking neuron with firing rate coding.

    Science.gov (United States)

    Ignatov, Marina; Ziegler, Martin; Hansen, Mirko; Petraru, Adrian; Kohlstedt, Hermann

    2015-01-01

    Perception, decisions, and sensations are all encoded into trains of action potentials in the brain. The relation between stimulus strength and the all-or-nothing spiking of neurons is widely believed to be the basis of this coding. This initiated the development of spiking neuron models, one of today's most powerful conceptual tools for the analysis and emulation of neural dynamics. The success of electronic circuit models and their physical realization within silicon field-effect transistor circuits led to elegant technical approaches. Recently, the spectrum of electronic devices for neural computing has been extended by memristive devices, mainly used to emulate static synaptic functionality. Their capabilities for emulating neural activity were recently demonstrated using a memristive neuristor circuit, while a memristive neuron circuit has so far been elusive. Here, a spiking neuron model is experimentally realized in a compact circuit comprising memristive and memcapacitive devices based on the strongly correlated electron material vanadium dioxide (VO2) and on the chemical electromigration cell Ag/TiO2-x/Al. The circuit can emulate dynamical spiking patterns in response to an external stimulus, including adaptation, which is at the heart of firing rate coding as first observed by E.D. Adrian in 1926.

  8. Spiking neurons in a hierarchical self-organizing map model can learn to develop spatial and temporal properties of entorhinal grid cells and hippocampal place cells.

    Directory of Open Access Journals (Sweden)

    Praveen K Pilly

    Full Text Available Medial entorhinal grid cells and hippocampal place cells provide neural correlates of spatial representation in the brain. A place cell typically fires whenever an animal is present in one or more spatial regions, or places, of an environment. A grid cell typically fires in multiple spatial regions that form a regular hexagonal grid structure extending throughout the environment. Different grid and place cells prefer spatially offset regions, with their firing fields increasing in size along the dorsoventral axes of the medial entorhinal cortex and hippocampus. The spacing between neighboring fields for a grid cell also increases along the dorsoventral axis. This article presents a neural model whose spiking neurons operate in a hierarchy of self-organizing maps, each obeying the same laws. This spiking GridPlaceMap model simulates how grid cells and place cells may develop. It responds to realistic rat navigational trajectories by learning grid cells with hexagonal grid firing fields of multiple spatial scales and place cells with one or more firing fields that match neurophysiological data about these cells and their development in juvenile rats. The place cells represent much larger spaces than the grid cells, which enables them to support navigational behaviors. Both self-organizing maps amplify and learn to categorize the most frequent and energetic co-occurrences of their inputs. The current results build upon a previous rate-based model of grid and place cell learning, and thus illustrate a general method for converting rate-based adaptive neural models, without the loss of any of their analog properties, into models whose cells obey spiking dynamics. New properties of the spiking GridPlaceMap model include the appearance of theta band modulation. The spiking model also opens a path for implementation in brain-emulating nanochips composed of networks of noisy spiking neurons with multiple-level adaptive weights for controlling autonomous

  9. Thermal impact on spiking properties in Hodgkin–Huxley neuron ...

    Indian Academy of Sciences (India)

    Abstract. The effect of environmental temperature on neuronal spiking behaviors is investigated by numerically simulating the temperature dependence of spiking threshold of the Hodgkin–Huxley neuron subject to synaptic stimulus. We find that the spiking threshold exhibits a global minimum in a specific temperature range ...

  10. Modeling Spike-Train Processing in the Cerebellum Granular Layer and Changes in Plasticity Reveal Single Neuron Effects in Neural Ensembles

    Directory of Open Access Journals (Sweden)

    Chaitanya Medini

    2012-01-01

    Full Text Available The cerebellum input stage has been known to perform combinatorial operations on input signals. In this paper, two types of mathematical models were used to reproduce the role of feed-forward inhibition and computation in the granular layer microcircuitry and to investigate spike train processing. A simple spiking model and a biophysically detailed model of the network were used to study signal recoding in the granular layer and to test observations such as center-surround organization and the time-window hypothesis, in addition to the effects of induced plasticity. Simulations suggest that simple neuron models may be used to abstract timing phenomena in large networks; however, detailed models were needed to reconstruct population coding via evoked local field potentials (LFPs) and to simulate changes in synaptic plasticity. Our results also indicate that the spatio-temporal code of the granular network is mainly controlled by feed-forward inhibition from the Golgi cell synapses. Spike amplitude and the total number of spikes were modulated by LTP and LTD. Reconstructing the granular layer evoked LFP suggests that the granular layer propagates the nonlinearities of individual neurons. Simulations indicate that the granular layer network operates a robust population code over a wide range of intervals, controlled by Golgi cell inhibition and regulated by post-synaptic excitability.

  11. A Simple Deep Learning Method for Neuronal Spike Sorting

    Science.gov (United States)

    Yang, Kai; Wu, Haifeng; Zeng, Yu

    2017-10-01

    Spike sorting is one of the key techniques for understanding brain activity. With the development of modern electrophysiology, recent multi-electrode technologies are able to record the activity of thousands of neurons simultaneously. Spike sorting at this scale increases the computational complexity of conventional sorting algorithms. In this paper, we focus on how to reduce this complexity and introduce a deep learning algorithm, the principal component analysis network (PCANet), to spike sorting. The introduced method starts from a conventional model and establishes a Toeplitz matrix. From the column vectors of this matrix, we train a PCANet, through which eigenvector-based features of the spikes can be extracted. Finally, a support vector machine (SVM) is used to sort the spikes. In experiments, we chose two groups of simulated data from publicly available databases and compared the introduced method with conventional methods. The results indicate that the introduced method indeed has lower complexity with the same sorting errors as the conventional methods.
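
The conventional baseline that PCANet builds on can be sketched as plain PCA on spike waveforms followed by a simple clustering pass (shown here with 2-means instead of the paper's SVM classifier; the initialization assumes the first two waveforms belong to different units):

```python
import numpy as np

def pca_features(waveforms, n_components=2):
    """Project spike waveforms (one per row) onto their leading principal
    components - the feature-extraction step of conventional sorting."""
    X = waveforms - waveforms.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1][:n_components]   # largest variance first
    return X @ eigvecs[:, order]

def sort_two_units(features, n_iter=20):
    """Minimal 2-means clustering of PCA features into two putative units.
    Centers start at the first two feature vectors."""
    c0, c1 = features[0], features[1]
    for _ in range(n_iter):
        d0 = np.linalg.norm(features - c0, axis=1)
        d1 = np.linalg.norm(features - c1, axis=1)
        labels = (d1 < d0).astype(int)
        c0 = features[labels == 0].mean(axis=0)
        c1 = features[labels == 1].mean(axis=0)
    return labels
```

This is only the classical pipeline the abstract compares against; PCANet replaces the single PCA stage with cascaded PCA filter banks learned from the Toeplitz-matrix columns.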

  12. Spike Frequency Adaptation in Neurons of the Central Nervous System.

    Science.gov (United States)

    Ha, Go Eun; Cheong, Eunji

    2017-08-01

    Neuronal firing patterns and frequencies determine the nature of the information encoded by neurons. Here we discuss the molecular identity and cellular mechanisms of spike-frequency adaptation in central nervous system (CNS) neurons. Calcium-activated potassium (KCa) channels such as BKCa and SKCa channels have long been known to be important mediators of spike adaptation via the generation of a large afterhyperpolarization when neurons are hyper-activated. However, it has been shown that a strong hyperpolarization via these KCa channels would cease action potential generation rather than reduce the frequency of spike generation. In some types of neurons, the strong hyperpolarization is followed by oscillatory activity. Recently, spike-frequency adaptation in thalamocortical (TC) and CA1 hippocampal neurons was shown to be mediated by the Ca2+-activated Cl− channel (CACC) anoctamin-2 (ANO2). Knockdown of ANO2 in these neurons results in significantly reduced spike-frequency adaptation accompanied by an increased number of spikes without a shift in firing mode, which suggests that ANO2 mediates a genuine form of spike adaptation, finely tuning the frequency of spikes in these neurons. Based on the broad expression of this new class of CACC in the brain, we propose that ANO2-mediated spike-frequency adaptation may be a general mechanism for controlling information transmission in CNS neurons.

  13. Spike-timing-dependent learning rule to encode spatiotemporal patterns in a network of spiking neurons

    Science.gov (United States)

    Yoshioka, Masahiko

    2002-01-01

    We study associative memory neural networks based on Hodgkin-Huxley type spiking neurons. We introduce a spike-timing-dependent learning rule, in which a time window with a negative part as well as a positive part is used to describe biologically plausible synaptic plasticity. The learning rule is applied to encode a number of periodic spatiotemporal patterns, which are successfully reproduced in the periodic firing patterns of the spiking neurons during memory retrieval. Global inhibition is incorporated into the model so as to induce gamma oscillation. The occurrence of gamma oscillation turns out to provide appropriate spike timings for memory retrieval of discrete spatiotemporal patterns. A theoretical analysis to elucidate the stationary properties of the perfect retrieval state is conducted in the limit of an infinite number of neurons and shows good agreement with the results of numerical simulations. This analysis indicates that the presence of the negative and positive parts in the time window helps reduce the size of the crosstalk term, implying that a time window with both negative and positive parts is well suited to encoding a number of spatiotemporal patterns. We draw several phase diagrams, in which we find various types of phase transitions as the intensity of global inhibition changes.
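
A time window with a positive (potentiating) and a negative (depressing) part is commonly written as a pair of exponentials; a hedged sketch with illustrative amplitudes and time constants (not the values used in the paper):

```python
import math

def stdp_dw(delta_t, a_plus=1.0, a_minus=0.5, tau_plus=20.0, tau_minus=20.0):
    """Spike-timing-dependent weight change for one pre/post spike pair.
    delta_t = t_post - t_pre: potentiation (positive part of the time
    window) when the presynaptic spike precedes the postsynaptic one,
    depression (negative part) otherwise."""
    if delta_t >= 0:
        return a_plus * math.exp(-delta_t / tau_plus)
    return -a_minus * math.exp(delta_t / tau_minus)
```

Per the analysis above, the depressing branch is what suppresses the crosstalk term when many spatiotemporal patterns are stored.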

  14. A new supervised learning algorithm for spiking neurons.

    Science.gov (United States)

    Xu, Yan; Zeng, Xiaoqin; Zhong, Shuiming

    2013-06-01

    The purpose of supervised learning with temporal encoding for spiking neurons is to make the neurons emit a specific spike train encoded by the precise firing times of spikes. If only the running time is considered, supervised learning for a spiking neuron is equivalent to distinguishing the times of desired output spikes from all other times during the running of the neuron by adjusting synaptic weights, which can be regarded as a classification problem. Based on this idea, this letter proposes a new supervised learning method for spiking neurons with temporal encoding; it first transforms the supervised learning task into a classification problem and then solves the problem using the perceptron learning rule. The experimental results show that the proposed method has higher learning accuracy and efficiency than existing learning methods, making it more powerful for solving complex, real-time problems.
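
The reduction to a two-class problem can be sketched as follows: times of desired output spikes form the positive class, all other times the negative class, and a perceptron-style rule adjusts the weights whenever the membrane potential falls on the wrong side of the threshold. This is an illustrative sketch of the idea, not the letter's algorithm; the toy PSP matrix and all names are assumptions:

```python
def train_spike_times(psp, desired, n_epochs=200, lr=0.05, theta=1.0):
    """Perceptron-style spike-timing learning. psp[t][i] is synapse i's
    postsynaptic potential at time step t; the neuron should cross the
    threshold theta exactly at the time steps in the set `desired`."""
    w = [0.0] * len(psp[0])
    for _ in range(n_epochs):
        for t, x in enumerate(psp):
            v = sum(wi * xi for wi, xi in zip(w, x))
            if t in desired and v < theta:        # missed spike: potentiate
                w = [wi + lr * xi for wi, xi in zip(w, x)]
            elif t not in desired and v >= theta: # spurious spike: depress
                w = [wi - lr * xi for wi, xi in zip(w, x)]
    return w
```

On a linearly separable toy problem the rule converges to weights that reach threshold only at the desired times.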

  15. Solving constraint satisfaction problems with networks of spiking neurons

    Directory of Open Access Journals (Sweden)

    Zeno eJonke

    2016-03-01

    Full Text Available Networks of neurons in the brain apply – unlike processors in our current generation of computer hardware – an event-based processing strategy, where short pulses (spikes) are emitted sparsely by neurons to signal the occurrence of an event at a particular point in time. Such spike-based computations promise to be substantially more power-efficient than traditional clocked processing schemes. However, it has turned out to be surprisingly difficult to design networks of spiking neurons that can solve difficult computational problems on the level of single spikes (rather than rates of spikes). We present here a new method for designing networks of spiking neurons via an energy function. Furthermore, we show how the energy function of a network of stochastically firing neurons can be shaped in a quite transparent manner by composing the network out of simple stereotypical network motifs. We show that this design approach enables networks of spiking neurons to produce approximate solutions to difficult (NP-hard) constraint satisfaction problems from the domains of planning/optimization and verification/logical inference. The resulting networks employ noise as a computational resource. Nevertheless, the timing of spikes (rather than just spike rates) plays an essential role in their computations. Furthermore, for the Traveling Salesman Problem, networks of spiking neurons carry out a more efficient stochastic search for good solutions than stochastic artificial neural networks (Boltzmann machines) and Gibbs sampling.

  16. How adaptation shapes spike rate oscillations in recurrent neuronal networks

    Directory of Open Access Journals (Sweden)

    Moritz eAugustin

    2013-02-01

    Full Text Available Neural mass signals from in vivo recordings often show oscillations with frequencies ranging from <1 Hz to 100 Hz. Fast rhythmic activity in the beta and gamma range can be generated by network-based mechanisms such as recurrent synaptic excitation-inhibition loops. Slower oscillations might instead depend on neuronal adaptation currents whose timescales range from tens of milliseconds to seconds. Here we investigate how the dynamics of such adaptation currents contribute to spike rate oscillations and resonance properties in recurrent networks of excitatory and inhibitory neurons. Based on a network of sparsely coupled spiking model neurons with two types of adaptation current and conductance-based synapses with heterogeneous strengths and delays, we use a mean-field approach to analyze oscillatory network activity. For constant external input, we find that spike-triggered adaptation currents provide a mechanism to generate slow oscillations over a wide range of adaptation timescales, as long as recurrent synaptic excitation is sufficiently strong. Faster rhythms occur when recurrent inhibition is slower than excitation, and the oscillation frequency increases with the strength of inhibition. Adaptation facilitates such network-based oscillations for fast synaptic inhibition and leads to decreased frequencies. For oscillatory external input, adaptation currents amplify a narrow band of frequencies and cause phase advances for low frequencies, in addition to phase delays at higher frequencies. Our results therefore identify the different key roles of neuronal adaptation dynamics for rhythmogenesis and selective signal propagation in recurrent networks.
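
The key ingredient, a spike-triggered adaptation current that hyperpolarizes the membrane and decays slowly, can be sketched on a single LIF neuron (illustrative parameters; the paper's analysis is at the network and mean-field level):

```python
def lif_adapt_rate(b=0.2, tau_w=100.0, mu=2.0, tau=10.0, v_th=1.0,
                   dt=0.1, t_end=1000.0):
    """LIF neuron with a spike-triggered adaptation current w: each spike
    increments w by b, and w decays with (slow) time constant tau_w while
    subtracting from the membrane drive. Returns the mean firing rate."""
    v, w, n = 0.0, 0.0, 0
    for _ in range(int(t_end / dt)):
        v += dt * (mu - v / tau - w)   # adaptation opposes the drive
        w += dt * (-w / tau_w)
        if v >= v_th:
            v = 0.0
            w += b                     # spike-triggered increment
            n += 1
    return n / t_end
```

With `b = 0` the neuron fires at a high steady rate; a positive increment with a long `tau_w` suppresses the rate and introduces the slow timescale that underlies the adaptation-driven oscillations discussed in the abstract.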

  17. Channel noise effects on first spike latency of a stochastic Hodgkin-Huxley neuron

    Science.gov (United States)

    Maisel, Brenton; Lindenberg, Katja

    2017-02-01

    While it is widely accepted that information is encoded in neurons via action potentials or spikes, it is far less understood what specific features of spiking contain encoded information. Experimental evidence has suggested that the timing of the first spike may be an energy-efficient coding mechanism that contains more neural information than subsequent spikes. Therefore, the biophysical features of neurons that underlie response latency are of considerable interest. Here we examine the effects of channel noise on the first spike latency of a Hodgkin-Huxley neuron receiving random input from many other neurons. Because the principal feature of a Hodgkin-Huxley neuron is the stochastic opening and closing of channels, the fluctuations in the number of open channels lead to fluctuations in the membrane voltage and modify the timing of the first spike. Our results show that when a neuron has a larger number of channels, (i) the occurrence of the first spike is delayed and (ii) the variation in the first spike timing is greater. We also show that the mean, median, and interquartile range of first spike latency can be accurately predicted from a simple linear regression by knowing only the number of channels in the neuron and the rate at which presynaptic neurons fire, but the standard deviation (i.e., neuronal jitter) cannot be predicted using only this information. We then compare our results to another commonly used stochastic Hodgkin-Huxley model and show that the more commonly used model overstates the first spike latency but can predict the standard deviation of first spike latencies accurately. We end by suggesting a more suitable definition for the neuronal jitter based upon our simulations and comparison of the two models.

  18. Clustering predicts memory performance in networks of spiking and non-spiking neurons

    Directory of Open Access Journals (Sweden)

    Weiliang eChen

    2011-03-01

    Full Text Available The problem we address in this paper is that of finding effective and parsimonious patterns of connectivity in sparse associative memories. This problem must also be addressed by real neuronal systems, so results in artificial systems could throw light on real ones. We show that there are efficient patterns of connectivity and that these patterns are effective in models with either spiking or non-spiking neurons. This suggests that there may be some underlying general principles governing good connectivity in such networks. We also show that the clustering of the network, measured by the clustering coefficient, has a strong linear correlation with the performance of the associative memory. This result is important since a purely static measure of network connectivity appears to determine an important dynamic property of the network.
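
The static predictor in question, the average local clustering coefficient, can be computed directly from an adjacency structure; a minimal sketch for undirected graphs (node labels assumed comparable, e.g. integers):

```python
def clustering_coefficient(adj):
    """Average local clustering coefficient of an undirected graph given
    as an adjacency dict {node: set_of_neighbours}. For each node, the
    local coefficient is the fraction of neighbour pairs that are
    themselves connected."""
    coeffs = []
    for node, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            coeffs.append(0.0)   # undefined for degree < 2; count as 0
            continue
        links = sum(1 for u in nbrs for v in nbrs if u < v and v in adj[u])
        coeffs.append(2.0 * links / (k * (k - 1)))
    return sum(coeffs) / len(coeffs)
```

A triangle scores 1.0 and a chain scores 0.0, the two extremes between which the memory networks in the abstract fall.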

  19. Joint Probability-Based Neuronal Spike Train Classification

    Directory of Open Access Journals (Sweden)

    Yan Chen

    2009-01-01

    Full Text Available Neuronal spike trains are used by the nervous system to encode and transmit information. Euclidean distance-based methods (EDBMs) have been applied to quantify the similarity between temporally discretized spike trains and model responses. In this study, using the same discretization procedure, we developed and applied a joint probability-based method (JPBM) to classify individual spike trains of slowly adapting pulmonary stretch receptors (SARs). The activity of individual SARs was recorded in anaesthetized, paralysed adult male rabbits, which were artificially ventilated at a constant rate and at one of three different volumes. Two-thirds of the responses to the 600 stimuli presented at each volume were used to construct three response models (one for each stimulus volume), each consisting of a series of time bins with spike probabilities. The remaining one-third of the responses were used as test responses to be classified into one of the three model responses. This was done by computing the joint probability of observing the same series of events (spikes or no spikes) dictated by the test response under a given model, and determining which of the three probabilities was highest. The JPBM generally produced better classification accuracy than the EDBM, and both performed well above chance. Both methods were similarly affected by variations in discretization parameters, response epoch duration, and two different response alignment strategies. Increasing bin widths increased classification accuracy, which also improved with increased observation time, but primarily during periods of increasing lung inflation. Thus, the JPBM is a simple and effective method for spike train classification.
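
The JPBM procedure reduces to two small steps: estimate per-bin spike probabilities from the training responses, then score a test response by its joint (log-)probability under each model. A hedged sketch with binarized responses (the probability clamp and all parameter choices are illustrative, not from the study):

```python
import math

def train_model(responses, n_bins):
    """Per-bin spike probabilities estimated from binary training
    responses (lists of 0/1, one entry per time bin)."""
    probs = []
    for b in range(n_bins):
        p = sum(r[b] for r in responses) / len(responses)
        probs.append(min(max(p, 0.01), 0.99))   # avoid 0/1 probabilities
    return probs

def log_joint(response, probs):
    """Log joint probability of the observed series of events
    (spike or no spike per bin) under one model."""
    return sum(math.log(p if x else 1.0 - p) for x, p in zip(response, probs))

def classify(response, models):
    """Assign the test response to the model with the highest joint probability."""
    scores = [log_joint(response, m) for m in models]
    return scores.index(max(scores))
```

Working in log space avoids underflow when the number of bins is large, which matters at the fine discretizations the study explores.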

  20. Self-organization of spiking neurons using action potential timing.

    Science.gov (United States)

    Ruf, B; Schmitt, M

    1998-01-01

    We propose a mechanism for unsupervised learning in networks of spiking neurons which is based on the timing of single firing events. Our results show that a topology preserving behavior quite similar to that of Kohonen's self-organizing map can be achieved using temporal coding. In contrast to previous approaches, which use rate coding, the winner among competing neurons can be determined fast and locally. Our model is a further step toward a more realistic description of unsupervised learning in biological neural systems. Furthermore, it may provide a basis for fast implementations in pulsed VLSI (very large scale integration).

  1. Effect of lateral connections on the accuracy of the population code for a network of spiking neurons

    OpenAIRE

    Spiridon, M.; Gerstner, W.

    2001-01-01

    We study how neuronal connections in a population of spiking neurons affect the accuracy of stimulus estimation. Neurons in our model code for a one-dimensional orientation variable $\phi$. Connectivity between two neurons depends on the absolute difference $|\phi-\phi'|$ between the preferred orientations of the two neurons. We derive an analytical expression of the activity profile for a population of neurons described by the spike response model with noisy threshold. We estimate the stimulu...

  2. A self-resetting spiking phase-change neuron.

    Science.gov (United States)

    Cobley, R A; Hayat, H; Wright, C D

    2018-05-11

    Neuromorphic, or brain-inspired, computing applications of phase-change devices have to date concentrated primarily on the implementation of phase-change synapses. However, the so-called accumulation mode of operation inherent in phase-change materials and devices can also be used to mimic the integrative properties of a biological neuron. Here we demonstrate, using physical modelling of nanoscale devices and SPICE modelling of associated circuits, that a single phase-change memory cell integrated into a comparator type circuit can deliver a basic hardware mimic of an integrate-and-fire spiking neuron with self-resetting capabilities. Such phase-change neurons, in combination with phase-change synapses, can potentially open a new route for the realisation of all-phase-change neuromorphic computing.
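
As a rough illustration of the accumulation-mode principle (all device parameters here are hypothetical, not taken from the paper's physical or SPICE models), each input pulse can be treated as a fixed conductance increment until a comparator threshold is reached:

```python
# Hypothetical sketch of accumulation-mode integrate-and-fire: each input pulse
# partially crystallizes the phase-change cell (raising its conductance); a
# comparator fires when conductance crosses a threshold, and the fire event
# re-amorphizes (self-resets) the cell. Numbers are illustrative only.
def pc_neuron(pulses, dg=0.25, g_thresh=1.0):
    g, spikes = 0.0, []
    for t in pulses:
        g += dg                 # partial crystallization per input pulse
        if g >= g_thresh:
            spikes.append(t)    # comparator output: the neuron fires
            g = 0.0             # self-reset: melt-quench back to amorphous
    return spikes

print(pc_neuron(range(10)))  # → [3, 7]
```

With these illustrative values the cell integrates four pulses before each firing, mimicking the integrate-and-fire behaviour the abstract describes.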

  3. Critical slowing down governs the transition to neuron spiking.

    Directory of Open Access Journals (Sweden)

    Christian Meisel

    2015-02-01

    Full Text Available Many complex systems have been found to exhibit critical transitions, or so-called tipping points, which are sudden changes to a qualitatively different system state. These changes can profoundly impact the functioning of a system, ranging from controlled state switching to a catastrophic break-down; signals that predict critical transitions are therefore highly desirable. To this end, research efforts have focused on utilizing qualitative changes in markers related to a system's tendency to recover more slowly from a perturbation the closer it gets to the transition, a phenomenon called critical slowing down. The recently studied scaling of critical slowing down offers a refined path to understand critical transitions: to identify the transition mechanism and improve transition prediction using scaling laws. Here, we outline and apply this strategy for the first time in a real-world system by studying the transition to spiking in neurons of the mammalian cortex. The dynamical systems approach has identified two robust mechanisms for the transition from subthreshold activity to spiking: saddle-node and Hopf bifurcation. Although theory provides precise predictions on signatures of critical slowing down near the bifurcation to spiking, quantitative experimental evidence has been lacking. Using whole-cell patch-clamp recordings from pyramidal neurons and fast-spiking interneurons, we show that (1) the transition to spiking dynamically corresponds to a critical transition exhibiting slowing down, (2) the scaling laws suggest a saddle-node bifurcation governing slowing down, and (3) these precise scaling laws can be used to predict the bifurcation point from a limited window of observation. To our knowledge this is the first report of scaling laws of critical slowing down in an experiment. They present a missing link for a broad class of neuroscience modeling and suggest improved estimation of tipping points by incorporating scaling laws of critical slowing down.

  4. The Chronotron: A Neuron That Learns to Fire Temporally Precise Spike Patterns

    Science.gov (United States)

    Florian, Răzvan V.

    2012-01-01

    In many cases, neurons process information carried by the precise timings of spikes. Here we show how neurons can learn to generate specific temporally precise output spikes in response to input patterns of spikes having precise timings, thus processing and memorizing information that is entirely temporally coded, both as input and as output. We introduce two new supervised learning rules for spiking neurons with temporal coding of information (chronotrons), one that provides high memory capacity (E-learning), and one that has a higher biological plausibility (I-learning). With I-learning, the neuron learns to fire the target spike trains through synaptic changes that are proportional to the synaptic currents at the timings of real and target output spikes. We study these learning rules in computer simulations where we train integrate-and-fire neurons. Both learning rules allow neurons to fire at the desired timings, with sub-millisecond precision. We show how chronotrons can learn to classify their inputs, by firing identical, temporally precise spike trains for different inputs belonging to the same class. When the input is noisy, the classification also leads to noise reduction. We compute lower bounds for the memory capacity of chronotrons and explore the influence of various parameters on chronotrons' performance. The chronotrons can model neurons that encode information in the time of the first spike relative to the onset of salient stimuli or neurons in oscillatory networks that encode information in the phases of spikes relative to the background oscillation. Our results show that firing one spike per cycle optimizes memory capacity in neurons encoding information in the phase of firing relative to a background rhythm. PMID:22879876

  5. The chronotron: a neuron that learns to fire temporally precise spike patterns.

    Directory of Open Access Journals (Sweden)

    Răzvan V Florian

    Full Text Available In many cases, neurons process information carried by the precise timings of spikes. Here we show how neurons can learn to generate specific temporally precise output spikes in response to input patterns of spikes having precise timings, thus processing and memorizing information that is entirely temporally coded, both as input and as output. We introduce two new supervised learning rules for spiking neurons with temporal coding of information (chronotrons), one that provides high memory capacity (E-learning), and one that has a higher biological plausibility (I-learning). With I-learning, the neuron learns to fire the target spike trains through synaptic changes that are proportional to the synaptic currents at the timings of real and target output spikes. We study these learning rules in computer simulations where we train integrate-and-fire neurons. Both learning rules allow neurons to fire at the desired timings, with sub-millisecond precision. We show how chronotrons can learn to classify their inputs, by firing identical, temporally precise spike trains for different inputs belonging to the same class. When the input is noisy, the classification also leads to noise reduction. We compute lower bounds for the memory capacity of chronotrons and explore the influence of various parameters on chronotrons' performance. The chronotrons can model neurons that encode information in the time of the first spike relative to the onset of salient stimuli or neurons in oscillatory networks that encode information in the phases of spikes relative to the background oscillation. Our results show that firing one spike per cycle optimizes memory capacity in neurons encoding information in the phase of firing relative to a background rhythm.

  6. On the Non-Learnability of a Single Spiking Neuron

    Czech Academy of Sciences Publication Activity Database

    Šíma, Jiří; Sgall, Jiří

    2005-01-01

    Roč. 17, č. 12 (2005), s. 2635-2647 ISSN 0899-7667 R&D Projects: GA ČR GA201/02/1456; GA AV ČR 1ET100300517; GA MŠk LN00A056; GA MŠk(CZ) 1M0545 Institutional research plan: CEZ:AV0Z10300504; CEZ:AV0Z10190503 Keywords: spiking neuron * consistency problem * NP-completeness * PAC model * robust learning * representation problem Subject RIV: BA - General Mathematics Impact factor: 2.591, year: 2005

  7. Spiking and bursting patterns of fractional-order Izhikevich model

    Science.gov (United States)

    Teka, Wondimu W.; Upadhyay, Ranjit Kumar; Mondal, Argha

    2018-03-01

    Bursting and spiking oscillations play major roles in processing and transmitting information in the brain through cortical neurons that respond differently to the same signal. These oscillations display complex dynamics that might be produced by using neuronal models and varying many model parameters. Recent studies have shown that models with fractional order can produce several types of history-dependent neuronal activities without the adjustment of several parameters. We studied the fractional-order Izhikevich model and analyzed different kinds of oscillations that emerge from the fractional dynamics. The model produces a wide range of neuronal spike responses, including regular spiking, fast spiking, intrinsic bursting, mixed mode oscillations, regular bursting and chattering, by adjusting only the fractional order. Both the active and silent phase of the burst increase when the fractional-order model further deviates from the classical model. For smaller fractional order, the model produces memory dependent spiking activity after the pulse signal turned off. This special spiking activity and other properties of the fractional-order model are caused by the memory trace that emerges from the fractional-order dynamics and integrates all the past activities of the neuron. On the network level, the response of the neuronal network shifts from random to scale-free spiking. Our results suggest that the complex dynamics of spiking and bursting can be the result of the long-term dependence and interaction of intracellular and extracellular ionic currents.
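
A minimal sketch of one common way to simulate such fractional dynamics, using a Caputo derivative with an L1 discretization (the scheme and all parameter values are illustrative, not necessarily those used in the paper); the weighted sum over past voltage increments is the memory trace, and alpha = 1 recovers the classical model:

```python
import numpy as np
from math import gamma

def izhikevich_fractional(alpha, T=200.0, dt=0.5, I=10.0,
                          a=0.02, b=0.2, c=-65.0, d=8.0):
    """Caputo fractional-order Izhikevich neuron via the L1 discretization.
    The sum over past increments is the 'memory trace' that makes the dynamics
    history-dependent; for alpha = 1 the weights vanish and the update reduces
    to the classical forward-Euler Izhikevich model. (Illustrative sketch; the
    recovery variable u is kept integer-order here for simplicity.)"""
    n = int(T / dt)
    v = np.empty(n); u = np.empty(n)
    v[0], u[0] = c, b * c
    spikes = []
    g = gamma(2.0 - alpha) * dt**alpha
    for k in range(1, n):
        # L1 weights w_j = (j+1)^(1-alpha) - j^(1-alpha) on past increments
        j = np.arange(1, k)
        w = (j + 1)**(1.0 - alpha) - j**(1.0 - alpha)
        memory = float(np.sum(w * (v[k - j] - v[k - j - 1])))
        dv = 0.04 * v[k-1]**2 + 5.0 * v[k-1] + 140.0 - u[k-1] + I
        v[k] = v[k-1] + g * dv - memory
        u[k] = u[k-1] + dt * a * (b * v[k-1] - u[k-1])
        if v[k] >= 30.0:            # spike: record time and reset
            spikes.append(k * dt)
            v[k], u[k] = c, u[k] + d
    return spikes
```

Lowering alpha below 1 strengthens the memory term, which is the mechanism the abstract credits for the history-dependent spiking and bursting regimes.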

  8. Energetics based spike generation of a single neuron: simulation results and analysis

    Directory of Open Access Journals (Sweden)

    Nagarajan eVenkateswaran

    2012-02-01

    Full Text Available Existing current-based models that capture spike activity, though useful in studying the information processing capabilities of neurons, fail to throw light on their internal functioning. It is imperative to develop a model that captures the spike train of a neuron as a function of its intracellular parameters for non-invasive diagnosis of diseased neurons. This is the first article to present such an integrated model, one that quantifies the inter-dependency between spike activity and intracellular energetics. The spike trains generated by our integrated model will throw greater light on intracellular energetics than existing current-based models. Now, an abnormality in the spike of a diseased neuron can be linked to, and hence effectively analyzed at, the energetics level. The spectral analysis of the generated spike trains in a time-frequency domain will help identify abnormalities in the internals of a neuron. As a case study, the parameters of our model are tuned for Alzheimer's disease, and the resultant spike trains are studied and presented.

  9. Spike sorting of heterogeneous neuron types by multimodality-weighted PCA and explicit robust variational Bayes

    Directory of Open Access Journals (Sweden)

    Takashi eTakekawa

    2012-03-01

    Full Text Available This study introduces a new spike sorting method that classifies spike waveforms from multiunit recordings into spike trains of individual neurons. In particular, we develop a method to sort a spike mixture generated by a heterogeneous neural population. Such spike sorting has significant practical value, but was previously difficult. The method combines a feature extraction method, which we may term multimodality-weighted principal component analysis (mPCA), and a clustering method by variational Bayes for Student's t mixture model (SVB). The performance of the proposed method was compared with that of other conventional methods for simulated and experimental data sets. We found that the mPCA efficiently extracts highly informative features as clusters clearly separable in a relatively low-dimensional feature space. The SVB was implemented explicitly without relying on maximum a posteriori (MAP) inference for the degree-of-freedom parameters. The explicit SVB is faster than the conventional SVB derived with MAP inference and works more reliably over various data sets that include spiking patterns difficult to sort. For instance, spikes of a single bursting neuron may be separated incorrectly into multiple clusters, whereas those of a sparsely firing neuron tend to be merged into clusters for other neurons. Our method showed significantly improved performance in spike sorting of these difficult neurons. A parallelized implementation of the proposed algorithm (EToS version 3) is available as open-source code at http://etos.sourceforge.net/.

  10. Spike Neural Models Part II: Abstract Neural Models

    OpenAIRE

    Johnson, Melissa G.; Chartier, Sylvain

    2018-01-01

    Neurons are complex cells that require a lot of time and resources to model completely. In spiking neural networks (SNN) though, not all that complexity is required. Therefore simple, abstract models are often used. These models save time, use less computer resources, and are easier to understand. This tutorial presents two such models: Izhikevich's model, which is biologically realistic in the resulting spike trains but not in the parameters, and the Leaky Integrate and Fire (LIF) model whic...
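
The LIF model mentioned above is simple enough to state in full; a minimal sketch with illustrative parameter values:

```python
def lif(I, T=100.0, dt=0.1, tau=10.0, v_rest=-65.0, v_reset=-65.0,
        v_thresh=-50.0, R=10.0):
    """Minimal leaky integrate-and-fire neuron (parameter values illustrative):
    tau * dv/dt = -(v - v_rest) + R*I; spike and reset when v crosses threshold."""
    v = v_rest
    spikes = []
    for k in range(int(T / dt)):
        v += dt / tau * (-(v - v_rest) + R * I)   # forward-Euler leaky integration
        if v >= v_thresh:
            spikes.append(k * dt)                 # record the spike time...
            v = v_reset                           # ...and reset the membrane
    return spikes

# Subthreshold drive never spikes; stronger drive produces a regular train.
print(len(lif(I=1.0)), len(lif(I=2.0)) > 0)  # → 0 True
```

This captures the abstraction trade-off the tutorial describes: the model is cheap and transparent, at the cost of all subthreshold biophysical detail.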

  11. Input-output relation and energy efficiency in the neuron with different spike threshold dynamics

    Directory of Open Access Journals (Sweden)

    Guo-Sheng eYi

    2015-05-01

    Full Text Available A neuron encodes and transmits information by generating sequences of output spikes, which is a highly energy-consuming process. The spike is initiated when membrane depolarization reaches a threshold voltage. In many neurons, the threshold is dynamic and depends on the rate of membrane depolarization (dV/dt) preceding a spike. Identifying the metabolic energy involved in neural coding and its relationship to threshold dynamics is critical to understanding neuronal function and evolution. Here, we use a modified Morris-Lecar model to investigate the neuronal input-output property and energy efficiency associated with different spike threshold dynamics. We find that neurons with a dynamic threshold sensitive to dV/dt generate a discontinuous frequency-current curve and a type II phase response curve (PRC) through a Hopf bifurcation, and weak noise can prohibit spiking when the bifurcation has just occurred. A threshold that is insensitive to dV/dt instead results in a continuous frequency-current curve, a type I PRC and a saddle-node on invariant circle bifurcation, and weak noise cannot inhibit spiking. It is also shown that the bifurcation, frequency-current curve and PRC type associated with different threshold dynamics arise from the distinct subthreshold interactions of membrane currents. Further, we observe that the energy consumption of the neuron is related to its firing characteristics. The depolarization of the spike threshold improves neuronal energy efficiency by reducing the overlap of Na+ and K+ currents during an action potential. High energy efficiency is achieved at a more depolarized spike threshold and high stimulus current. These results provide a fundamental biophysical connection that links spike threshold dynamics, input-output relation, energetics and spike initiation, which could contribute to uncovering the neural encoding mechanism.

  12. Input-output relation and energy efficiency in the neuron with different spike threshold dynamics.

    Science.gov (United States)

    Yi, Guo-Sheng; Wang, Jiang; Tsang, Kai-Ming; Wei, Xi-Le; Deng, Bin

    2015-01-01

    A neuron encodes and transmits information by generating sequences of output spikes, which is a highly energy-consuming process. The spike is initiated when membrane depolarization reaches a threshold voltage. In many neurons, the threshold is dynamic and depends on the rate of membrane depolarization (dV/dt) preceding a spike. Identifying the metabolic energy involved in neural coding and its relationship to threshold dynamics is critical to understanding neuronal function and evolution. Here, we use a modified Morris-Lecar model to investigate the neuronal input-output property and energy efficiency associated with different spike threshold dynamics. We find that neurons with a dynamic threshold sensitive to dV/dt generate a discontinuous frequency-current curve and a type II phase response curve (PRC) through a Hopf bifurcation, and weak noise can prohibit spiking when the bifurcation has just occurred. A threshold that is insensitive to dV/dt instead results in a continuous frequency-current curve, a type I PRC and a saddle-node on invariant circle bifurcation, and weak noise cannot inhibit spiking. It is also shown that the bifurcation, frequency-current curve and PRC type associated with different threshold dynamics arise from the distinct subthreshold interactions of membrane currents. Further, we observe that the energy consumption of the neuron is related to its firing characteristics. The depolarization of the spike threshold improves neuronal energy efficiency by reducing the overlap of Na+ and K+ currents during an action potential. High energy efficiency is achieved at a more depolarized spike threshold and high stimulus current. These results provide a fundamental biophysical connection that links spike threshold dynamics, input-output relation, energetics and spike initiation, which could contribute to uncovering the neural encoding mechanism.

  13. Synchronous spikes are necessary but not sufficient for a synchrony code in populations of spiking neurons.

    Science.gov (United States)

    Grewe, Jan; Kruscha, Alexandra; Lindner, Benjamin; Benda, Jan

    2017-03-07

    Synchronous activity in populations of neurons potentially encodes special stimulus features. Selective readout of either synchronous or asynchronous activity allows the formation of two streams of information processing. Theoretical work predicts that such a synchrony code is a fundamental feature of populations of spiking neurons if they operate in specific noise and stimulus regimes. Here we experimentally test the theoretical predictions by quantifying and comparing neuronal response properties in tuberous and ampullary electroreceptor afferents of the weakly electric fish Apteronotus leptorhynchus. These related systems show similar levels of synchronous activity, but a synchrony code is established only in the more irregularly firing tuberous afferents, not in the more regularly firing ampullary afferents. The mere existence of synchronous activity is thus not sufficient for a synchrony code. Single-cell features such as the irregularity of spiking and the frequency dependence of the neuron's transfer function determine whether synchronous spikes possess a distinct meaning for the encoding of time-dependent signals.

  14. Spiking irregularity and frequency modulate the behavioral report of single-neuron stimulation.

    Science.gov (United States)

    Doron, Guy; von Heimendahl, Moritz; Schlattmann, Peter; Houweling, Arthur R; Brecht, Michael

    2014-02-05

    The action potential activity of single cortical neurons can evoke measurable sensory effects, but it is not known how spiking parameters and neuronal subtypes affect the evoked sensations. Here, we examined the effects of spike train irregularity, spike frequency, and spike number on the detectability of single-neuron stimulation in rat somatosensory cortex. For regular-spiking, putative excitatory neurons, detectability increased with spike train irregularity and decreasing spike frequencies but was not affected by spike number. Stimulation of single, fast-spiking, putative inhibitory neurons led to a larger sensory effect compared to regular-spiking neurons, and the effect size depended only on spike irregularity. An ideal-observer analysis suggests that, under our experimental conditions, rats were using integration windows of a few hundred milliseconds or more. Our data imply that the behaving animal is sensitive to single neurons' spikes and even to their temporal patterning. Copyright © 2014 Elsevier Inc. All rights reserved.

  15. Time Resolution Dependence of Information Measures for Spiking Neurons: Scaling and Universality

    Directory of Open Access Journals (Sweden)

    James P Crutchfield

    2015-08-01

    Full Text Available The mutual information between stimulus and spike-train response is commonly used to monitor neural coding efficiency, but neuronal computation broadly conceived requires more refined and targeted information measures of input-output joint processes. A first step towards that larger goal is to develop information measures for individual output processes, including information generation (entropy rate), stored information (statistical complexity), predictable information (excess entropy), and active information accumulation (bound information rate). We calculate these for spike trains generated by a variety of noise-driven integrate-and-fire neurons as a function of time resolution and for alternating renewal processes. We show that their time-resolution dependence reveals coarse-grained structural properties of interspike interval statistics; e.g., $\tau$-entropy rates that diverge less quickly than the firing rate indicate interspike interval correlations. We also find evidence that the excess entropy and regularized statistical complexity of different types of integrate-and-fire neurons are universal in the continuous-time limit in the sense that they do not depend on mechanism details. This suggests a surprising simplicity in the spike trains generated by these model neurons. Interestingly, neurons with gamma-distributed ISIs and neurons whose spike trains are alternating renewal processes do not fall into the same universality class. These results lead to two conclusions. First, the dependence of information measures on time resolution reveals mechanistic details about spike train generation. Second, information measures can be used as model selection tools for analyzing spike train processes.
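
The flavour of this time-resolution dependence can be seen in a toy calculation (for a memoryless Poisson spike train, not the paper's integrate-and-fire models): the τ-entropy rate of the binned train grows as the bin width shrinks.

```python
import numpy as np

def binary_entropy(p):
    """Shannon entropy (bits) of a Bernoulli variable with success probability p."""
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

# tau-entropy rate (bits per ms) of a Poisson spike train discretized at
# resolution tau: each bin spikes independently with p = 1 - exp(-rate * tau).
# For a Poisson process this rate diverges (logarithmically) as tau -> 0,
# illustrating why the resolution dependence carries structural information.
rate = 0.02  # spikes per ms (20 Hz), an illustrative firing rate
for tau in (10.0, 1.0, 0.1, 0.01):
    p = 1.0 - np.exp(-rate * tau)
    print(tau, binary_entropy(p) / tau)
```

Deviations from this Poisson baseline, such as entropy rates that diverge more slowly, are exactly the kind of signature the abstract attributes to interspike interval correlations.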

  16. Neuronal spike sorting based on radial basis function neural networks

    Directory of Open Access Journals (Sweden)

    Taghavi Kani M

    2011-02-01

    Full Text Available "nBackground: Studying the behavior of a society of neurons, extracting the communication mechanisms of brain with other tissues, finding treatment for some nervous system diseases and designing neuroprosthetic devices, require an algorithm to sort neuralspikes automatically. However, sorting neural spikes is a challenging task because of the low signal to noise ratio (SNR of the spikes. The main purpose of this study was to design an automatic algorithm for classifying neuronal spikes that are emitted from a specific region of the nervous system."n "nMethods: The spike sorting process usually consists of three stages: detection, feature extraction and sorting. We initially used signal statistics to detect neural spikes. Then, we chose a limited number of typical spikes as features and finally used them to train a radial basis function (RBF neural network to sort the spikes. In most spike sorting devices, these signals are not linearly discriminative. In order to solve this problem, the aforesaid RBF neural network was used."n "nResults: After the learning process, our proposed algorithm classified any arbitrary spike. The obtained results showed that even though the proposed Radial Basis Spike Sorter (RBSS reached to the same error as the previous methods, however, the computational costs were much lower compared to other algorithms. Moreover, the competitive points of the proposed algorithm were its good speed and low computational complexity."n "nConclusion: Regarding the results of this study, the proposed algorithm seems to serve the purpose of procedures that require real-time processing and spike sorting.

  17. A Cross-Correlated Delay Shift Supervised Learning Method for Spiking Neurons with Application to Interictal Spike Detection in Epilepsy.

    Science.gov (United States)

    Guo, Lilin; Wang, Zhenzhong; Cabrerizo, Mercedes; Adjouadi, Malek

    2017-05-01

    This study introduces a novel learning algorithm for spiking neurons, called CCDS, which is able to learn and reproduce arbitrary spike patterns in a supervised fashion, allowing the processing of spatiotemporal information encoded in the precise timing of spikes. Unlike the Remote Supervised Method (ReSuMe), synapse delays and axonal delays in CCDS are variable and are modulated together with weights during learning. The CCDS rule is both biologically plausible and computationally efficient. The properties of this learning rule are investigated extensively through experimental evaluations in terms of reliability, adaptive learning performance, generality to different neuron models, learning in the presence of noise, effects of its learning parameters, and classification performance. Results presented show that the CCDS learning method achieves learning accuracy and learning speed comparable with ReSuMe, but improves classification accuracy when compared to both the Spike Pattern Association Neuron (SPAN) learning rule and the Tempotron learning rule. The merit of the CCDS rule is further validated on a practical example involving the automated detection of interictal spikes in EEG records of patients with epilepsy. Results again show that with proper encoding, the CCDS rule achieves good recognition performance.

  18. STD-dependent and independent encoding of input irregularity as spike rate in a computational model of a cerebellar nucleus neuron

    NARCIS (Netherlands)

    J. Luthman (Johannes); F.E. Hoebeek (Freek); R. Maex (Reinoud); N. Davey (Neil); R. Adams (Rod); C.I. de Zeeuw (Chris); V. Steuber (Volker)

    2011-01-01

    textabstractNeurons in the cerebellar nuclei (CN) receive inhibitory inputs from Purkinje cells in the cerebellar cortex and provide the major output from the cerebellum, but their computational function is not well understood. It has recently been shown that the spike activity of Purkinje cells is

  19. Origin of heterogeneous spiking patterns from continuously distributed ion channel densities: a computational study in spinal dorsal horn neurons.

    Science.gov (United States)

    Balachandar, Arjun; Prescott, Steven A

    2018-01-20

    Distinct spiking patterns may arise from qualitative differences in ion channel expression (i.e. when different neurons express distinct ion channels) and/or when quantitative differences in expression levels qualitatively alter the spike generation process. We hypothesized that spiking patterns in neurons of the superficial dorsal horn (SDH) of the spinal cord reflect both mechanisms. We reproduced SDH neuron spiking patterns by varying densities of Kv1- and A-type potassium conductances. Plotting the spiking patterns that emerge from different density combinations revealed spiking-pattern regions separated by boundaries (bifurcations). This map suggests that certain spiking pattern combinations occur when the distribution of potassium channel densities straddles boundaries, whereas other spiking patterns reflect distinct patterns of ion channel expression. The former mechanism may explain why certain spiking patterns co-occur in genetically identified neuron types. We also present algorithms to predict spiking pattern proportions from ion channel density distributions, and vice versa. Neurons are often classified by spiking pattern. Yet, some neurons exhibit distinct patterns under subtly different test conditions, which suggests that they operate near an abrupt transition, or bifurcation. A set of such neurons may exhibit heterogeneous spiking patterns not because of qualitative differences in which ion channels they express, but rather because quantitative differences in expression levels cause neurons to operate on opposite sides of a bifurcation. Neurons in the spinal dorsal horn, for example, respond to somatic current injection with patterns that include tonic, single, gap, delayed and reluctant spiking. It is unclear whether these patterns reflect five cell populations (defined by distinct ion channel expression patterns), heterogeneity within a single population, or some combination thereof. 
We reproduced all five spiking patterns in a computational model by

  20. Supervised learning with decision margins in pools of spiking neurons.

    Science.gov (United States)

    Le Mouel, Charlotte; Harris, Kenneth D; Yger, Pierre

    2014-10-01

    Learning to categorise sensory inputs by generalising from a few examples whose category is precisely known is a crucial step for the brain to produce appropriate behavioural responses. At the neuronal level, this may be performed by adaptation of synaptic weights under the influence of a training signal, in order to group spiking patterns impinging on the neuron. Here we describe a framework that allows spiking neurons to perform such "supervised learning", using principles similar to the Support Vector Machine, a well-established and robust classifier. Using a hinge-loss error function, we show that requesting a margin similar to that of the SVM improves performance on linearly non-separable problems. Moreover, we show that using pools of neurons to discriminate categories can also increase the performance by sharing the load among neurons.
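
The core idea, hinge-loss learning with a requested margin, can be sketched on ordinary feature vectors (the data, margin and learning rate below are illustrative; the paper applies this to spiking patterns rather than the spike-count vectors used here):

```python
import numpy as np

rng = np.random.default_rng(1)

def train_hinge(X, y, epochs=100, eta=0.05, margin=1.0):
    """Hinge-loss ("SVM-like") weight adaptation: update only when a labelled
    example falls inside the requested margin, as in the margin perceptron.
    Requesting margin = 1 rather than 0 improves robustness to noise."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for x, t in zip(X, y):             # labels t in {-1, +1}
            if t * (w @ x) < margin:       # inside the margin: adapt weights
                w += eta * t * x
    return w

# Toy data: two classes of noisy "spike count" patterns (illustrative), each
# class distinguished by one strongly driven input channel.
X = np.vstack([rng.poisson(5, (40, 8)) + np.eye(8)[0] * 10,
               rng.poisson(5, (40, 8)) + np.eye(8)[1] * 10]).astype(float)
y = np.array([1] * 40 + [-1] * 40)
w = train_hinge(X, y)
acc = np.mean(np.sign(X @ w) == y)
print(round(acc, 2))
```

Pooling several such units and summing their decisions, as the abstract proposes, spreads the classification load across neurons.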

  1. Span: spike pattern association neuron for learning spatio-temporal spike patterns.

    Science.gov (United States)

    Mohemmed, Ammar; Schliebs, Stefan; Matsuda, Satoshi; Kasabov, Nikola

    2012-08-01

    Spiking Neural Networks (SNN) were shown to be suitable tools for the processing of spatio-temporal information. However, due to their inherent complexity, the formulation of efficient supervised learning algorithms for SNN is difficult and remains an important problem in the research area. This article presents SPAN - a spiking neuron that is able to learn associations of arbitrary spike trains in a supervised fashion allowing the processing of spatio-temporal information encoded in the precise timing of spikes. The idea of the proposed algorithm is to transform spike trains during the learning phase into analog signals so that common mathematical operations can be performed on them. Using this conversion, it is possible to apply the well-known Widrow-Hoff rule directly to the transformed spike trains in order to adjust the synaptic weights and to achieve a desired input/output spike behavior of the neuron. In the presented experimental analysis, the proposed learning algorithm is evaluated regarding its learning capabilities, its memory capacity, its robustness to noisy stimuli and its classification performance. Differences and similarities of SPAN regarding two related algorithms, ReSuMe and Chronotron, are discussed.
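
The transformation idea can be sketched as follows: convolve each spike train with a kernel to obtain an analog trace, then apply the Widrow-Hoff rule, Δw_i = η Σ_t x_i(t)(d(t) − o(t)). The alpha kernel and all spike times below are illustrative, not the paper's exact choices:

```python
import numpy as np

def kernelize(spike_times, T=100, dt=1.0, tau=5.0):
    """Convolve a spike train with an alpha kernel, s/tau * exp(1 - s/tau),
    turning the discrete train into an analog trace (peak value 1 at s = tau)."""
    t = np.arange(0, T, dt)
    trace = np.zeros_like(t)
    for s in spike_times:
        trace += np.where(t >= s, (t - s) / tau * np.exp(1 - (t - s) / tau), 0.0)
    return trace

# Widrow-Hoff (delta) rule on the transformed trains: dw_i = eta * sum_t x_i(t) * (d(t) - o(t))
eta = 0.01
inputs = [kernelize([10, 40]), kernelize([25, 70])]   # two presynaptic spike trains
desired = kernelize([30])                              # target output train
actual = kernelize([55])                               # neuron's current output
dw = [eta * np.sum(x * (desired - actual)) for x in inputs]
print(dw)
```

Because the traces are ordinary analog signals, any standard error-minimizing rule can be applied to them, which is exactly the simplification SPAN exploits.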

  2. Superficial dorsal horn neurons with double spike activity in the rat.

    Science.gov (United States)

    Rojas-Piloni, Gerardo; Dickenson, Anthony H; Condés-Lara, Miguel

    2007-05-29

    Superficial dorsal horn neurons promote the transfer of nociceptive information from the periphery to supraspinal structures. The membrane and discharge properties of spinal cord neurons can alter the reliability of peripheral signals. In this paper, we analyze the location and response properties of a particular class of dorsal horn neurons that exhibits double spike discharge with a very short interspike interval (2.01+/-0.11 ms). These neurons receive nociceptive C-fiber input and are located in laminae I-II. Double spikes are generated spontaneously or by depolarizing current injection (interval of 2.37+/-0.22 ms). Cells presenting double spikes (interval 2.28+/-0.11 ms) increased their firing rate during electrical noxious stimulation, as well as in the first minutes after carrageenan injection into their receptive field. Carrageenan is a water-soluble polysaccharide used to produce an experimental model of semi-chronic pain. In the present study, carrageenan also increased the interval between double spikes and then reduced their occurrence after 5-10 min. The results suggest that double spikes are due to intrinsic membrane properties and that their frequency is related to C-fiber nociceptive activity. The present work shows evidence that double spikes in superficial spinal cord neurons are related to nociceptive stimulation and are possibly part of an acute pain-control mechanism.

  3. Nicotine-Mediated ADP to Spike Transition: Double Spiking in Septal Neurons.

    Science.gov (United States)

    Kodirov, Sodikdjon A; Wehrmeister, Michael; Colom, Luis

    2016-04-01

    The majority of neurons in the lateral septum (LS) are electrically silent at resting membrane potential. Nicotine transiently excites a subset of neurons and occasionally leads to long-lasting bursting activity upon longer applications. We observed simultaneous changes in the frequencies and amplitudes of spontaneous action potentials (AP) in the presence of nicotine. During prolonged exposure, nicotine increased the number of spikes within a burst. One hallmark of the nicotine effect was the occurrence of double spikes (also known as bursting). Alignment of 51 spontaneous spikes, triggered upon continuous application of nicotine, revealed that the slope of the after-depolarizing potential gradually increased (1.4 vs. 3 mV/ms) until the neuron fired a second AP, termed double spiking. The transition from a single AP to double spikes increased the amplitude of the after-hyperpolarizing potential. The amplitude of the second (premature) AP was smaller than that of the first, and this relation also held for their duration (half-width). To our knowledge, similar bursting activity in the presence of nicotine has not been reported previously in the septal structure in general, or in the LS in particular.

  4. Racing to learn: statistical inference and learning in a single spiking neuron with adaptive kernels.

    Science.gov (United States)

    Afshar, Saeed; George, Libin; Tapson, Jonathan; van Schaik, André; Hamilton, Tara J

    2014-01-01

    This paper describes the Synapto-dendritic Kernel Adapting Neuron (SKAN), a simple spiking neuron model that performs statistical inference and unsupervised learning of spatiotemporal spike patterns. SKAN is the first proposed neuron model to investigate the effects of dynamic synapto-dendritic kernels and demonstrate their computational power even at the single neuron scale. The rule-set defining the neuron is simple: there are no complex mathematical operations such as normalization, exponentiation or even multiplication. The functionalities of SKAN emerge from the real-time interaction of simple additive and binary processes. Like a biological neuron, SKAN is robust to signal and parameter noise, and can utilize both in its operations. At the network scale neurons are locked in a race with each other with the fastest neuron to spike effectively "hiding" its learnt pattern from its neighbors. The robustness to noise, high speed, and simple building blocks not only make SKAN an interesting neuron model in computational neuroscience, but also make it ideal for implementation in digital and analog neuromorphic systems which is demonstrated through an implementation in a Field Programmable Gate Array (FPGA). Matlab, Python, and Verilog implementations of SKAN are available at: http://www.uws.edu.au/bioelectronics_neuroscience/bens/reproducible_research.

  5. Thermal impact on spiking properties in Hodgkin-Huxley neuron ...

    Indian Academy of Sciences (India)

    Thermal impact on spiking properties in Hodgkin-Huxley neuron with synaptic stimulus. Shenbing ... Department of Physical Science and Technology, Wuhan University of Technology, Wuhan, 430070, China; State Key Laboratory of Advanced Technology for Materials Synthesis and Processing, Wuhan, 430070, China ...

  6. Dynamics of spiking neurons: between homogeneity and synchrony.

    Science.gov (United States)

    Rangan, Aaditya V; Young, Lai-Sang

    2013-06-01

    Randomly connected networks of neurons driven by Poisson inputs are often assumed to produce "homogeneous" dynamics, characterized by largely independent firing and approximable by diffusion processes. At the same time, it is well known that such networks can fire synchronously. Between these two much-studied scenarios lies a vastly complex dynamical landscape that is relatively unexplored. In this paper, we discuss a phenomenon which commonly manifests in these intermediate regimes, namely brief spurts of spiking activity which we call multiple firing events (MFE). These events depend neither on structured network architecture nor on structured input; they are an emergent property of the system. We came upon them in an earlier modeling paper, in which we discovered, through a careful benchmarking process, that MFEs are the single most important dynamical mechanism behind many of the V1 phenomena we were able to replicate. In this paper we explain in a simpler setting how MFEs come about, as well as their potential dynamic consequences. Although the mechanism underlying MFEs cannot easily be captured by current population dynamics models, this phenomenon should not be ignored during analysis; there is a growing body of evidence that such collaborative activity may be a key towards unlocking the possible functional properties of many neuronal networks.

  7. Theory of input spike auto- and cross-correlations and their effect on the response of spiking neurons.

    Science.gov (United States)

    Moreno-Bote, Rubén; Renart, Alfonso; Parga, Néstor

    2008-07-01

    Spike correlations between neurons are ubiquitous in the cortex, but their role is not understood. Here we describe the firing response of a leaky integrate-and-fire (LIF) neuron when it receives a temporally correlated input generated by presynaptic correlated neuronal populations. Input correlations are characterized in terms of the firing rates, Fano factors, correlation coefficients, and correlation timescale of the neurons driving the target neuron. We show that the sum of the presynaptic spike trains cannot be well described by a Poisson process. In fact, the total input current has a nontrivial two-point correlation function described by two main parameters: the correlation timescale (how precise the input correlations are in time) and the correlation magnitude (how strong they are). Therefore, the total current generated by the input spike trains is not well described by a Gaussian white-noise process. Instead, we model the total current as a colored Gaussian process with the same mean and two-point correlation function, leading to the formulation of the problem in terms of a Fokker-Planck equation. Solutions of the output firing rate are found in the limits of short and long correlation timescales. The solutions described here expand and improve on our previous results (Moreno, de la Rocha, Renart, & Parga, 2002) by presenting new analytical expressions for the output firing rate for general IF neurons, extending the validity of the results for arbitrarily large correlation magnitude, and describing the differential effect of correlations on the mean-driven and noise-dominated firing regimes. The details of this novel formalism are also given here for the first time. We employ numerical simulations to confirm the analytical solutions and study the firing response to sudden changes in the input correlations. We expect this formalism to be useful for the study of correlations in neuronal networks and their role in neural processing and information
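
A minimal sketch of the setup described above: a leaky integrate-and-fire neuron driven by a colored Gaussian (Ornstein-Uhlenbeck) current whose correlation timescale is tunable. All parameter values and units are illustrative assumptions, not those of the paper.

```python
import math, random

random.seed(1)

def simulate_lif_ou(T=2000.0, dt=0.1, tau_m=20.0, tau_c=5.0,
                    mu=1.2, sigma=0.5, v_th=1.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron driven by an Ornstein-Uhlenbeck
    (colored Gaussian) current; tau_c sets the input correlation timescale."""
    v, i_ou, spikes = 0.0, 0.0, []
    for step in range(int(T / dt)):
        # OU process: relaxes to 0 with time constant tau_c, driven by noise
        i_ou += (-i_ou / tau_c) * dt + sigma * math.sqrt(2 * dt / tau_c) * random.gauss(0, 1)
        # membrane update: leak plus mean drive mu plus fluctuating current
        v += (-v + mu + i_ou) * dt / tau_m
        if v >= v_th:
            spikes.append(step * dt)
            v = v_reset
    return spikes

spikes = simulate_lif_ou()
print(len(spikes), "spikes in 2 s of simulated time (ms units)")
```

Sweeping `tau_c` from values much smaller than `tau_m` to much larger ones corresponds to the short- and long-correlation-timescale limits analyzed in the paper.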

  8. Relation Between Firing Statistics of Spiking Neuron with Instantaneous Feedback and Without Feedback

    Science.gov (United States)

    Vidybida, Alexander

    2015-09-01

    We consider a class of spiking neuron models defined by a set of conditions which are typical of basic threshold-type models, such as the leaky integrate-and-fire or binding neuron model, and also of some artificial neurons. A neuron is fed with a point renewal process. A relation between three probability density functions (PDFs) is derived: (i) the PDF of input interspike intervals (ISIs), (ii) the PDF of output ISIs of a neuron with feedback, and (iii) the PDF for that same neuron without feedback. This allows any one of the three PDFs to be calculated provided the remaining two are given. A similar relation between the corresponding means and variances is derived. The relations are checked exactly for the binding neuron model stimulated with a Poisson stream.

  9. Detecting dependencies between spike trains of pairs of neurons through copulas

    DEFF Research Database (Denmark)

    Sacerdote, Laura; Tamborrino, Massimiliano; Zucca, Cristina

    2011-01-01

    The dynamics of a neuron are influenced by the connections with the network where it lies. Recorded spike trains exhibit patterns due to the interactions between neurons. However, the structure of the network is not known. A challenging task is to investigate it from the analysis of simultaneously recorded spike trains. We develop a non-parametric method based on copulas, which we apply to simulated data generated according to different bivariate Leaky Integrate and Fire models. The method discerns dependencies determined by the surrounding network from those determined by direct interactions between neurons.

  10. Simulating large-scale spiking neuronal networks with NEST

    OpenAIRE

    Senk, Johanna; Diesmann, Markus

    2014-01-01

    The Neural Simulation Tool NEST [1, www.nest-simulator.org] is the simulator for spiking neural network models of the HBP that focuses on the dynamics, size and structure of neural systems rather than on the exact morphology of individual neurons. Its simulation kernel is written in C++ and it runs on computing hardware ranging from simple laptops to clusters and supercomputers with thousands of processor cores. The development of NEST is coordinated by the NEST Initiative [www.nest-initiative.or...

  11. From spiking neurons to brain waves

    NARCIS (Netherlands)

    Visser, S.

    2013-01-01

    No single model would be able to capture all processes in the brain at once, since its interactions are too numerous and too complex. Therefore, it is common practice to simplify the parts of the system. Typically, the goal is to describe the collective action of many underlying processes, without

  12. Towards a theory of cortical columns: From spiking neurons to interacting neural populations of finite size.

    Science.gov (United States)

    Schwalger, Tilo; Deger, Moritz; Gerstner, Wulfram

    2017-04-01

    Neural population equations such as neural mass or field models are widely used to study brain activity on a large scale. However, the relation of these models to the properties of single neurons is unclear. Here we derive an equation for several interacting populations at the mesoscopic scale starting from a microscopic model of randomly connected generalized integrate-and-fire neuron models. Each population consists of 50-2000 neurons of the same type but different populations account for different neuron types. The stochastic population equations that we find reveal how spike-history effects in single-neuron dynamics such as refractoriness and adaptation interact with finite-size fluctuations on the population level. Efficient integration of the stochastic mesoscopic equations reproduces the statistical behavior of the population activities obtained from microscopic simulations of a full spiking neural network model. The theory describes nonlinear emergent dynamics such as finite-size-induced stochastic transitions in multistable networks and synchronization in balanced networks of excitatory and inhibitory neurons. The mesoscopic equations are employed to rapidly integrate a model of a cortical microcircuit consisting of eight neuron types, which allows us to predict spontaneous population activities as well as evoked responses to thalamic input. Our theory establishes a general framework for modeling finite-size neural population dynamics based on single cell and synapse parameters and offers an efficient approach to analyzing cortical circuits and computations.

  13. Encoding noxious heat by spike bursts of antennal bimodal hygroreceptor (dry) neurons in the carabid Pterostichus oblongopunctatus.

    Science.gov (United States)

    Must, Anne; Merivee, Enno; Nurme, Karin; Sibul, Ivar; Muzzi, Maurizio; Di Giulio, Andrea; Williams, Ingrid; Tooming, Ene

    2017-04-01

    Despite thermosensation being crucial for effective thermoregulation behaviour, it is poorly studied in insects. Very little is known about the encoding of noxious high temperatures by peripheral thermoreceptor neurons. In carabids, thermo- and hygrosensitive neurons innervate antennal dome-shaped sensilla (DSS). In this study, we demonstrate that several essential fine-structural features of the dendritic outer segments of the sensory neurons in the DSS differ fundamentally from the classical model of insect thermo- and hygrosensitive sensilla. Here, we show that spike bursts produced by the bimodal dry neurons in the antennal DSS may contribute to the sensation of noxious heat in P. oblongopunctatus. Our electrophysiological experiments showed that, at temperatures above 25 °C, these neurons switch from humidity-dependent regular spiking to temperature-dependent spike bursting. Five of seven measured parameters of the bursty spike trains (the percentage of bursty dry neurons, the CV of ISIs in a spike train, the percentage of bursty spikes, the number of spikes in a burst and the ISIs in a burst) are unambiguously dependent on temperature and thus may precisely encode both noxious high steady temperatures up to 45 °C and rapid step-changes in temperature. The cold neuron starts to produce temperature-dependent spike bursts at temperatures above 30-35 °C. Thus, the two neurons encode different but largely overlapping ranges of noxious heat. The extent of dendritic branching and lamellation of the neurons varies widely between DSS, which might be the structural basis for their variation in threshold temperatures for spike bursting.

  14. Ensembles of Spiking Neurons with Noise Support Optimal Probabilistic Inference in a Dynamically Changing Environment

    Science.gov (United States)

    Legenstein, Robert; Maass, Wolfgang

    2014-01-01

    It has recently been shown that networks of spiking neurons with noise can emulate simple forms of probabilistic inference through “neural sampling”, i.e., by treating spikes as samples from a probability distribution of network states that is encoded in the network. Deficiencies of the existing model are its reliance on single neurons for sampling from each random variable, and the resulting limitation in representing quickly varying probabilistic information. We show that both deficiencies can be overcome by moving to a biologically more realistic encoding of each salient random variable through the stochastic firing activity of an ensemble of neurons. The resulting model demonstrates that networks of spiking neurons with noise can easily track and carry out basic computational operations on rapidly varying probability distributions, such as the odds of getting rewarded for a specific behavior. We demonstrate the viability of this new approach towards neural coding and computation, which makes use of the inherent parallelism of generic neural circuits, by showing that this model can explain experimentally observed firing activity of cortical neurons for a variety of tasks that require rapid temporal integration of sensory information. PMID:25340749

  15. Temporally coordinated spiking activity of human induced pluripotent stem cell-derived neurons co-cultured with astrocytes.

    Science.gov (United States)

    Kayama, Tasuku; Suzuki, Ikuro; Odawara, Aoi; Sasaki, Takuya; Ikegaya, Yuji

    2018-01-01

    In culture conditions, human induced pluripotent stem cell (hiPSC)-derived neurons form synaptic connections with other cells and establish neuronal networks, which are expected to serve as an in vitro model system for drug discovery screening and toxicity testing. While early studies demonstrated effects of co-culturing hiPSC-derived neurons with astroglial cells on the survival and maturation of hiPSC-derived neurons, the population spiking patterns of such hiPSC-derived neurons have not been fully characterized. In this study, we analyzed temporal spiking patterns of hiPSC-derived neurons recorded by a multi-electrode array system. We discovered that specific sets of hiPSC-derived neurons co-cultured with astrocytes showed more frequent and highly coherent non-random synchronized spike trains and more dynamic changes in overall spike patterns over time. These temporally coordinated spiking patterns are physiological signs of organized circuits of hiPSC-derived neurons and suggest benefits of co-culturing hiPSC-derived neurons with astrocytes. Copyright © 2017 Elsevier Inc. All rights reserved.

  16. Spiking, Bursting, and Population Dynamics in a Network of Growth Transform Neurons.

    Science.gov (United States)

    Gangopadhyay, Ahana; Chakrabartty, Shantanu

    2017-04-27

    This paper investigates the dynamical properties of a network of neurons, each of which implements an asynchronous mapping based on polynomial growth transforms. In the first part of this paper, we present a geometric approach for visualizing the dynamics of the network where each of the neurons traverses a trajectory in a dual optimization space, whereas the network itself traverses a trajectory in an equivalent primal optimization space. We show that as the network learns to solve basic classification tasks, different choices of primal-dual mapping produce unique but interpretable neural dynamics like noise shaping, spiking, and bursting. While the proposed framework is general enough, in this paper, we demonstrate its use for designing support vector machines (SVMs) that exhibit noise-shaping properties similar to those of ΣΔ modulators, and for designing SVMs that learn to encode information using spikes and bursts. It is demonstrated that the emergent switching, spiking, and burst dynamics produced by each neuron encodes its respective margin of separation from a classification hyperplane whose parameters are encoded by the network population dynamics. We believe that the proposed growth transform neuron model and the underlying geometric framework could serve as an important tool to connect well-established machine learning algorithms like SVMs to neuromorphic principles like spiking, bursting, population encoding, and noise shaping.

  17. A unified framework for spiking and gap-junction interactions in distributed neuronal network simulations

    Directory of Open Access Journals (Sweden)

    Jan eHahne

    2015-09-01

    Contemporary simulators for networks of point and few-compartment model neurons come with a plethora of ready-to-use neuron and synapse models and support complex network topologies. Recent technological advancements have broadened the spectrum of application further to the efficient simulation of brain-scale networks on supercomputers. In distributed network simulations the amount of spike data that accrues per millisecond and process is typically low, such that a common optimization strategy is to communicate spikes at relatively long intervals, where the upper limit is given by the shortest synaptic transmission delay in the network. This approach is well-suited for simulations that employ only chemical synapses but it has so far impeded the incorporation of gap-junction models, which require instantaneous neuronal interactions. Here, we present a numerical algorithm based on a waveform-relaxation technique which allows for network simulations with gap junctions in a way that is compatible with the delayed communication strategy. Using a reference implementation in the NEST simulator, we demonstrate that the algorithm and the required data structures can be smoothly integrated with existing code such that they complement the infrastructure for spiking connections. To show that the unified framework for gap-junction and spiking interactions achieves high performance and delivers high accuracy...

  18. Weak noise in neurons may powerfully inhibit the generation of repetitive spiking but not its propagation.

    Directory of Open Access Journals (Sweden)

    Henry C Tuckwell

    2010-05-01

    Many neurons have epochs in which they fire action potentials in an approximately periodic fashion. To see what effects noise of relatively small amplitude has on such repetitive activity, we recently examined the response of the Hodgkin-Huxley (HH) space-clamped system to such noise as the mean and variance of the applied current vary, near the bifurcation to periodic firing. This article is concerned with a more realistic neuron model which includes spatial extent. Employing the Hodgkin-Huxley partial differential equation system, the deterministic component of the input current is restricted to a small segment, whereas the stochastic component extends over a region which may or may not overlap the deterministic component. For mean values below, near and above the critical values for repetitive spiking, the effects of weak noise of increasing strength are ascertained by simulation. As in the point model, small-amplitude noise near the critical value dampens the spiking activity and leads to a minimum as the noise level increases. This was the case for both additive noise and conductance-based noise. Uniform noise along the whole neuron is only marginally more effective in silencing the cell than noise which occurs near the region of excitation. In fact, it is found that if signal and noise overlap in spatial extent, then weak noise may inhibit spiking. If, however, signal and noise are applied on disjoint intervals, then the noise has no effect on the spiking activity, no matter how large its region of application, though the trajectories are naturally altered slightly by noise. Such effects could not be discerned in a point model and are important for real neuron behavior. Interference with the spike train does nevertheless occur when the noise amplitude is larger, even when noise and signal do not overlap, being due to the instigation of secondary noise-induced wave phenomena rather than switching the system from one attractor (firing regularly to

  19. Approximate, computationally efficient online learning in Bayesian spiking neurons.

    Science.gov (United States)

    Kuhlmann, Levin; Hauser-Raspe, Michael; Manton, Jonathan H; Grayden, David B; Tapson, Jonathan; van Schaik, André

    2014-03-01

    Bayesian spiking neurons (BSNs) provide a probabilistic interpretation of how neurons perform inference and learning. Online learning in BSNs typically involves parameter estimation based on maximum-likelihood expectation-maximization (ML-EM) which is computationally slow and limits the potential of studying networks of BSNs. An online learning algorithm, fast learning (FL), is presented that is more computationally efficient than the benchmark ML-EM for a fixed number of time steps as the number of inputs to a BSN increases (e.g., 16.5 times faster run times for 20 inputs). Although ML-EM appears to converge 2.0 to 3.6 times faster than FL, the computational cost of ML-EM means that ML-EM takes longer to simulate to convergence than FL. FL also provides reasonable convergence performance that is robust to initialization of parameter estimates that are far from the true parameter values. However, parameter estimation depends on the range of true parameter values. Nevertheless, for a physiologically meaningful range of parameter values, FL gives very good average estimation accuracy, despite its approximate nature. The FL algorithm therefore provides an efficient tool, complementary to ML-EM, for exploring BSN networks in more detail in order to better understand their biological relevance. Moreover, the simplicity of the FL algorithm means it can be easily implemented in neuromorphic VLSI such that one can take advantage of the energy-efficient spike coding of BSNs.

  20. Spike Pattern Structure Influences Synaptic Efficacy Variability Under STDP and Synaptic Homeostasis. I: Spike Generating Models on Converging Motifs

    Directory of Open Access Journals (Sweden)

    Zedong eBi

    2016-02-01

    In neural systems, synaptic plasticity is usually driven by spike trains. Due to the inherent noise of neurons and synapses as well as the randomness of connection details, spike trains typically exhibit variability such as spatial randomness and temporal stochasticity, resulting in variability of synaptic changes under plasticity, which we call efficacy variability. How the variability of spike trains influences the efficacy variability of synapses remains unclear. In this paper, we try to understand this influence under pair-wise additive spike-timing dependent plasticity (STDP) when the mean strength of the plastic synapses into a neuron is bounded (synaptic homeostasis). Specifically, we systematically study, analytically and numerically, how four aspects of statistical features, i.e. synchronous firing, burstiness/regularity, heterogeneity of rates and heterogeneity of cross-correlations, as well as their interactions, influence the efficacy variability in converging motifs (simple networks in which one neuron receives input from many other neurons). Neurons (including the post-synaptic neuron) in a converging motif generate spikes according to statistical models with tunable parameters. In this way, we can explicitly control the statistics of the spike patterns and investigate their influence on the efficacy variability, without worrying about feedback from synaptic changes onto the dynamics of the post-synaptic neuron. We separate efficacy variability into two parts: the drift part (DriftV) induced by the heterogeneity of the change rates of different synapses, and the diffusion part (DiffV) induced by weight diffusion caused by the stochasticity of spike trains. Our main findings are: (1) synchronous firing and burstiness tend to increase DiffV, (2) heterogeneity of rates induces DriftV when potentiation and depression in STDP are not balanced, and (3) heterogeneity of cross-correlations induces DriftV together with heterogeneity of rates. We anticipate our
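
The pair-wise additive STDP window with a homeostatic bound on the summed weight can be sketched roughly as follows. The rescaling step is a crude stand-in for synaptic homeostasis, and all constants and spike times are illustrative assumptions rather than the paper's models.

```python
import math

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pair-based additive STDP window: potentiation when the presynaptic
    spike precedes the postsynaptic one (dt = t_post - t_pre > 0)."""
    if dt > 0:
        return a_plus * math.exp(-dt / tau)
    return -a_minus * math.exp(dt / tau)

def apply_stdp(weights, pre_trains, post_train, w_total):
    """Accumulate all-pairs STDP changes, then rescale the summed weight
    back to w_total -- a crude stand-in for synaptic homeostasis."""
    for i, pre in enumerate(pre_trains):
        for t_pre in pre:
            for t_post in post_train:
                weights[i] += stdp_dw(t_post - t_pre)
    weights = [max(w, 0.0) for w in weights]
    scale = w_total / sum(weights)
    return [w * scale for w in weights]

pre_trains = [[10.0, 30.0], [12.0, 29.0], [45.0, 60.0]]  # inputs 0 and 1 fire before post
post_train = [15.0, 32.0]
w = apply_stdp([0.5, 0.5, 0.5], pre_trains, post_train, w_total=1.5)
print([round(x, 3) for x in w])
```

Because the total weight is held fixed, synapses compete: inputs whose spikes precede the postsynaptic spikes gain weight at the expense of the late-firing input.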

  1. Symbol manipulation and rule learning in spiking neuronal networks.

    Science.gov (United States)

    Fernando, Chrisantha

    2011-04-21

    It has been claimed that the productivity, systematicity and compositionality of human language and thought necessitate the existence of a physical symbol system (PSS) in the brain. Recent discoveries about temporal coding suggest a novel type of neuronal implementation of a physical symbol system. Furthermore, learning classifier systems provide a plausible algorithmic basis by which symbol re-write rules could be trained to produce behaviors exhibiting systematicity and compositionality, using a kind of natural selection of re-write rules in the brain. We show how the core operation of a learning classifier system, namely the replication with variation of symbol re-write rules, can be implemented using spike-time-dependent plasticity based supervised learning. As a whole, the aim of this paper is to integrate an algorithmic and an implementation-level description of a neuronal symbol system capable of sustaining systematic and compositional behaviors. Previously proposed neuronal implementations of symbolic representations are compared with this new proposal. Copyright © 2011 Elsevier Ltd. All rights reserved.

  2. An online supervised learning method based on gradient descent for spiking neurons.

    Science.gov (United States)

    Xu, Yan; Yang, Jing; Zhong, Shuiming

    2017-09-01

    The purpose of supervised learning with temporal encoding for spiking neurons is to make the neurons emit a specific spike train encoded by the precise firing times of spikes. Gradient-descent-based (GDB) learning methods are widely used and verified in current research. Although existing GDB multi-spike learning (or spike sequence learning) methods have good performance, they work in an offline manner and still have some limitations. This paper proposes an online GDB spike sequence learning method for spiking neurons that is based on the online adjustment mechanism of real biological neuron synapses. The method constructs an error function and calculates the adjustment of the synaptic weights as soon as the neuron emits a spike during its running process. We analyze and synthesize desired and actual output spikes to select appropriate input spikes for the weight-adjustment calculation. The experimental results show that our method clearly improves learning performance compared with the offline learning manner and has a certain advantage in learning accuracy compared with other learning methods. This stronger learning ability gives the method a large pattern storage capacity. Copyright © 2017 Elsevier Ltd. All rights reserved.
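
A hypothetical sketch of the online flavor of such a rule: at each output-spike event, weights are adjusted immediately, potentiating inputs that precede a desired spike and depressing inputs that precede an erroneous actual spike, with exponential recency weighting. This illustrates the general idea only; the function, constants, and update form are assumptions, not the paper's actual equations.

```python
import math

def online_update(weights, input_trains, t_actual, t_desired, lr=0.05, tau=10.0):
    """Hypothetical per-event rule: potentiate inputs preceding a desired
    spike, depress inputs preceding an erroneous actual spike, with
    exponential recency weighting (all details assumed for illustration)."""
    for i, train in enumerate(input_trains):
        for t in train:
            if t_desired is not None and t <= t_desired:
                weights[i] += lr * math.exp(-(t_desired - t) / tau)
            if t_actual is not None and t <= t_actual:
                weights[i] -= lr * math.exp(-(t_actual - t) / tau)
    return weights

# a desired output spike at t = 10 ms was missed: strengthen recent inputs
w = online_update([0.5, 0.5], [[9.0], [2.0]], t_actual=None, t_desired=10.0)
print([round(x, 3) for x in w])  # → [0.545, 0.522]
```

The key contrast with offline methods is that the update fires during the run, at spike events, rather than after a full pass over the pattern.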

  3. Associative memory based on synchronized firing of spiking neurons with time-delayed interactions

    Science.gov (United States)

    Yoshioka, Masahiko; Shiino, Masatoshi

    1998-09-01

    We study associative memory of a neural network of spiking neurons with time-delayed synaptic interactions incorporating the time taken by an action potential to propagate along the axon. Individual spiking neurons are described by a set of nonlinear differential equations capable of exhibiting excitability such as that of Hodgkin-Huxley and FitzHugh neurons. When a simple learning rule of the autocorrelation type based on random patterns is assumed, memory retrieval is shown to be accompanied by synchronized firing of neurons. The reduced dynamics with a few degrees of freedom of the network with a finite number of stored patterns is analytically derived in the limit of infinitely many neurons. The dependence of the appearance of retrieval states on the distribution of time delay and on the size of refractory period given implicitly in the model is obtained, showing good agreement between the result of numerical simulations and that obtained from the reduced dynamics. The behavior of the network with an extensive number of patterns is also investigated and an approximate analysis is presented to discuss the storage capacity.
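
The autocorrelation-type learning rule mentioned above is the classic Hebbian prescription J_ij ∝ Σ_μ ξ_i^μ ξ_j^μ. Below is a minimal sketch with deterministic threshold units standing in for the spiking dynamics; the spiking, time-delay, and refractory aspects of the paper are omitted, and the pattern sizes are arbitrary choices.

```python
import random

random.seed(2)

def store_patterns(patterns):
    """Autocorrelation (Hebbian) rule: J_ij = (1/N) * sum_mu xi_i * xi_j, J_ii = 0"""
    n = len(patterns[0])
    J = [[0.0] * n for _ in range(n)]
    for xi in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    J[i][j] += xi[i] * xi[j] / n
    return J

def recall(J, state, steps=5):
    """Synchronous threshold dynamics, standing in for the spiking network"""
    n = len(state)
    for _ in range(steps):
        state = [1 if sum(J[i][j] * state[j] for j in range(n)) >= 0 else -1
                 for i in range(n)]
    return state

patterns = [[random.choice((-1, 1)) for _ in range(50)] for _ in range(3)]
noisy = list(patterns[0])
for k in range(5):                      # corrupt the first pattern: flip 5 bits
    noisy[k] = -noisy[k]
recalled = recall(store_patterns(patterns), noisy)
overlap = sum(a * b for a, b in zip(recalled, patterns[0])) / 50
print(overlap)
```

In the paper's spiking version, successful retrieval of a stored pattern additionally manifests as synchronized firing of the neurons coding that pattern.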

  4. Spike Train SIMilarity Space (SSIMS): a framework for single neuron and ensemble data analysis

    Science.gov (United States)

    Vargas-Irwin, Carlos E.; Brandman, David M.; Zimmermann, Jonas B.; Donoghue, John P.; Black, Michael J.

    2014-01-01

    Increased emphasis on circuit level activity in the brain makes it necessary to have methods to visualize and evaluate large scale ensemble activity, beyond that revealed by raster-histograms or pairwise correlations. We present a method to evaluate the relative similarity of neural spiking patterns by combining spike train distance metrics with dimensionality reduction. Spike train distance metrics provide an estimate of similarity between activity patterns at multiple temporal resolutions. Vectors of pair-wise distances are used to represent the intrinsic relationships between multiple activity patterns at the level of single units or neuronal ensembles. Dimensionality reduction is then used to project the data into concise representations suitable for clustering analysis as well as exploratory visualization. Algorithm performance and robustness are evaluated using multielectrode ensemble activity data recorded in behaving primates. We demonstrate how Spike train SIMilarity Space (SSIMS) analysis captures the relationship between goal directions for an 8-directional reaching task and successfully segregates grasp types in a 3D grasping task in the absence of kinematic information. The algorithm enables exploration of virtually any type of neural spiking (time series) data, providing similarity-based clustering of neural activity states with minimal assumptions about potential information encoding models. PMID:25380335
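Spike train distance metrics of the kind SSIMS builds on include the Victor-Purpura metric, which measures the minimal cost of transforming one train into another (unit cost per spike insertion/deletion, cost q·|Δt| per spike shift) and is computable by dynamic programming. A minimal sketch; the parameter q sets the temporal resolution at which two trains are compared:

```python
def victor_purpura(s1, s2, q):
    """Victor-Purpura spike train distance via dynamic programming.
    s1, s2: sorted lists of spike times; q: cost per unit time of
    shifting a spike (q = 0 reduces to a spike-count difference)."""
    n, m = len(s1), len(s2)
    G = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        G[i][0] = float(i)         # delete all spikes of s1
    for j in range(1, m + 1):
        G[0][j] = float(j)         # insert all spikes of s2
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            G[i][j] = min(G[i - 1][j] + 1.0,                      # delete
                          G[i][j - 1] + 1.0,                      # insert
                          G[i - 1][j - 1]                         # shift
                          + q * abs(s1[i - 1] - s2[j - 1]))
    return G[n][m]
```

Computing this distance for every pair of trains yields exactly the kind of pairwise-distance vectors that SSIMS then feeds into dimensionality reduction.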

  5. Spike timing rigidity is maintained in bursting neurons under pentobarbital-induced anesthetic conditions

    Directory of Open Access Journals (Sweden)

    Risako Kato

    2016-11-01

    Full Text Available Pentobarbital potentiates γ-aminobutyric acid (GABA)-mediated inhibitory synaptic transmission by prolonging the open time of GABAA receptors. However, it is unknown how pentobarbital regulates cortical neuronal activities via local circuits in vivo. To examine this question, we performed extracellular unit recording in rat insular cortex under awake and anesthetized conditions. Several studies apply the time-rescaling theorem to detect the features of repetitive spike firing. Similar to these methods, we define an average spike interval locally in time using random matrix theory (RMT), which enables us to compare different activity states on a universal scale. Neurons with high spontaneous firing frequency (> 5 Hz) and bursting were classified as HFB neurons (n = 10), and those with low spontaneous firing frequency (< 10 Hz) and without bursting were classified as non-HFB neurons (n = 48). Pentobarbital injection (30 mg/kg) reduced firing frequency in all HFB neurons and in 78% of non-HFB neurons. RMT analysis demonstrated that pentobarbital increased the number of neurons with repulsion among both HFB and non-HFB neurons, suggesting that there is a correlation between spikes within a short interspike interval. Under awake conditions, in 50% of HFB and 40% of non-HFB neurons, the decay phase of normalized histograms of spontaneous firing was fitted to an exponential function, which indicated that the first spike had no correlation with subsequent spikes. In contrast, under pentobarbital-induced anesthesia, the number of non-HFB neurons fitted to an exponential function increased to 80%, while almost no change was observed in HFB neurons. These results suggest that under both awake and pentobarbital-anesthetized conditions, spike firing in HFB neurons is more robustly regulated by preceding spikes than in non-HFB neurons, which may reflect the GABAA receptor-mediated regulation of cortical activities. Whole-cell patch

  6. Impact of Morphometry, Myelinization and Synaptic Current Strength on Spike Conduction in Human and Cat Spiral Ganglion Neurons

    Science.gov (United States)

    Rattay, Frank; Potrusil, Thomas; Wenger, Cornelia; Wise, Andrew K.; Glueckert, Rudolf; Schrott-Fischer, Anneliese

    2013-01-01

    Background Our knowledge about the neural code in the auditory nerve is based to a large extent on experiments on cats. Several anatomical differences between auditory neurons in human and cat are expected to lead to functional differences in the speed and safety of spike conduction. Methodology/Principal Findings Confocal microscopy was used to systematically evaluate peripheral and central process diameters, commonness of myelination and morphology of spiral ganglion neurons (SGNs) along the cochlea of three humans and three cats. Based on these morphometric data, model analysis reveals that spike conduction in SGNs is characterized by four phases: a postsynaptic delay, constant velocity in the peripheral process, a presomatic delay and constant velocity in the central process. The majority of SGNs are type I, connecting the inner hair cells with the brainstem. In contrast to those of humans, type I neurons of the cat are entirely myelinated. Biophysical model evaluation showed delayed and weak spikes in the human soma region as a consequence of the lack of myelin. The simulated spike conduction times are in accordance with normal interwave latencies from auditory brainstem response recordings in man and cat. Simulated 400 pA postsynaptic currents from inner hair cell ribbon synapses were 15 times above threshold. They enforced quick and synchronous spiking. Neither of these properties was present in type II cells, as they receive fewer and much weaker (∼26 pA) synaptic stimuli. Conclusions/Significance Wasting synaptic energy boosts spike initiation, which guarantees the rapid transmission of the temporal fine structure of auditory signals. However, the lack of myelin in the soma regions of human type I neurons causes a large delay in spike conduction in comparison with cat neurons. The absent myelin, in combination with a longer peripheral process, causes quantitative differences of temporal parameters in the electrically stimulated human cochlea compared to the cat

  7. Spike Timing Rigidity Is Maintained in Bursting Neurons under Pentobarbital-Induced Anesthetic Conditions.

    Science.gov (United States)

    Kato, Risako; Yamanaka, Masanori; Yokota, Eiko; Koshikawa, Noriaki; Kobayashi, Masayuki

    2016-01-01

    Pentobarbital potentiates γ-aminobutyric acid (GABA)-mediated inhibitory synaptic transmission by prolonging the open time of GABAA receptors. However, it is unknown how pentobarbital regulates cortical neuronal activities via local circuits in vivo. To examine this question, we performed extracellular unit recording in rat insular cortex under awake and anesthetized conditions. Several studies apply the time-rescaling theorem to detect the features of repetitive spike firing. Similar to these methods, we define an average spike interval locally in time using random matrix theory (RMT), which enables us to compare different activity states on a universal scale. Neurons with high spontaneous firing frequency (>5 Hz) and bursting were classified as HFB neurons (n = 10), and those with low spontaneous firing frequency (<10 Hz) and without bursting were classified as non-HFB neurons (n = 48). Pentobarbital injection (30 mg/kg) reduced firing frequency in all HFB neurons and in 78% of non-HFB neurons. RMT analysis demonstrated that pentobarbital increased the number of neurons with repulsion among both HFB and non-HFB neurons, suggesting that there is a correlation between spikes within a short interspike interval (ISI). Under awake conditions, in 50% of HFB and 40% of non-HFB neurons, the decay phase of normalized histograms of spontaneous firing was fitted to an exponential function, which indicated that the first spike had no correlation with subsequent spikes. In contrast, under pentobarbital-induced anesthesia, the number of non-HFB neurons fitted to an exponential function increased to 80%, while almost no change was observed in HFB neurons. These results suggest that under both awake and pentobarbital-anesthetized conditions, spike firing in HFB neurons is more robustly regulated by preceding spikes than in non-HFB neurons, which may reflect the GABAA receptor-mediated regulation of cortical activities. Whole-cell patch-clamp recording in the IC slice preparation was performed to compare the regularity of

  8. Spike neural models (part I): The Hodgkin-Huxley model

    Directory of Open Access Journals (Sweden)

    Johnson, Melissa G.

    2017-05-01

    Full Text Available Artificial neural networks, or ANNs, have evolved considerably since their inception in the 1940s. But no matter the changes, one of the most important components of a neural network is still the node, which represents the neuron. Within spiking neural networks, the node is especially important because it contains the functions and properties of neurons that the network requires. One important aspect of neurons is the ionic flow that produces action potentials, or spikes. Forces of diffusion and electrostatic pressure work together with the physical properties of the cell to move ions around, changing the cell membrane potential and ultimately producing the action potential. This tutorial reviews the Hodgkin-Huxley model and shows how it simulates the ionic flow of the giant squid axon via four differential equations. The model is implemented in Matlab using Euler's method to approximate the differential equations. Euler's method introduces an extra parameter, the time step, which must be chosen carefully or the results of the node may be impaired.
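The four-equation integration described above can be sketched as follows. The tutorial uses Matlab, so this Python version is only an illustration; it uses the standard squid-axon parameters and rate functions from Hodgkin and Huxley's original formulation, with forward Euler and an explicit time-step parameter dt:

```python
import numpy as np

# Standard squid giant axon parameters (uF/cm^2, mS/cm^2, mV)
C_m, g_Na, g_K, g_L = 1.0, 120.0, 36.0, 0.3
E_Na, E_K, E_L = 50.0, -77.0, -54.387

# Voltage-dependent opening/closing rates for the gating variables n, m, h
def alpha_n(V): return 0.01 * (V + 55) / (1 - np.exp(-(V + 55) / 10))
def beta_n(V):  return 0.125 * np.exp(-(V + 65) / 80)
def alpha_m(V): return 0.1 * (V + 40) / (1 - np.exp(-(V + 40) / 10))
def beta_m(V):  return 4.0 * np.exp(-(V + 65) / 18)
def alpha_h(V): return 0.07 * np.exp(-(V + 65) / 20)
def beta_h(V):  return 1.0 / (1 + np.exp(-(V + 35) / 10))

def simulate(I_ext=10.0, dt=0.01, T=50.0):
    """Forward-Euler integration of the four HH ODEs; dt (ms) is the extra
    parameter the tutorial warns must be chosen carefully."""
    steps = int(T / dt)
    V, n, m, h = -65.0, 0.317, 0.053, 0.596   # resting-state values
    Vs = np.empty(steps)
    for k in range(steps):
        I_ion = (g_Na * m**3 * h * (V - E_Na)
                 + g_K * n**4 * (V - E_K)
                 + g_L * (V - E_L))
        dV = (I_ext - I_ion) / C_m
        dn = alpha_n(V) * (1 - n) - beta_n(V) * n
        dm = alpha_m(V) * (1 - m) - beta_m(V) * m
        dh = alpha_h(V) * (1 - h) - beta_h(V) * h
        V, n, m, h = V + dt * dV, n + dt * dn, m + dt * dm, h + dt * dh
        Vs[k] = V
    return Vs
```

With dt = 0.01 ms the integration is stable and a 10 µA/cm² step current produces repetitive spiking; a much larger dt makes the Euler scheme diverge, which is exactly the time-step sensitivity the tutorial points out.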

  9. A Neuron Model Based Ultralow Current Sensor System for Bioapplications

    Directory of Open Access Journals (Sweden)

    A. K. M. Arifuzzman

    2016-01-01

    Full Text Available An ultralow current sensor system based on the Izhikevich neuron model is presented in this paper. The Izhikevich neuron model has been used for its superior computational efficiency and greater biological plausibility over other well-known neuron spiking models. Of the many biological neuron spiking features, regular spiking, chattering, and neostriatal spiny projection spiking have been reproduced by adjusting the parameters associated with the model at hand. This paper also presents a modified interpretation of the regular spiking feature, in which the firing pattern is similar to that of regular spiking but with an improved dynamic range. The sensor current ranges between 2 pA and 8 nA, with linearity between 0.9665 and 0.9989 for the different spiking features. The efficacy of the sensor system in detecting low amounts of current, along with its high linearity, makes it very suitable for biomedical applications.
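The Izhikevich model referred to above consists of two coupled equations with a reset rule, and the spiking features are selected through the four parameters (a, b, c, d). A minimal sketch using plain Euler integration and the standard parameter sets from Izhikevich (2003); the sensor paper's own circuit-level parameter values are not reproduced here:

```python
def izhikevich(a, b, c, d, I, T=500.0, dt=0.5):
    """Izhikevich (2003) two-variable neuron model:
       v' = 0.04 v^2 + 5 v + 140 - u + I,  u' = a (b v - u),
       with reset v <- c, u <- u + d when v reaches 30 mV.
    Regular spiking: a=0.02, b=0.2, c=-65, d=8; chattering: c=-50, d=2.
    Returns the spike times (ms)."""
    v, u = -65.0, b * -65.0
    spikes = []
    for k in range(int(T / dt)):
        v += dt * (0.04 * v * v + 5 * v + 140 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:              # spike cutoff and after-spike reset
            spikes.append(k * dt)
            v, u = c, u + d
    return spikes
```

Changing (c, d) from (-65, 8) to (-50, 2) switches the same code from regular spiking to chattering bursts, which is the property the sensor exploits to encode different current ranges in distinct firing patterns.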

  10. The Site of Spontaneous Ectopic Spike Initiation Facilitates Signal Integration in a Sensory Neuron.

    Science.gov (United States)

    Städele, Carola; Stein, Wolfgang

    2016-06-22

    Essential to understanding the process of neuronal signal integration is the knowledge of where within a neuron action potentials (APs) are generated. Recent studies support the idea that the precise location where APs are initiated and the properties of spike initiation zones define the cell's information processing capabilities. Notably, the location of spike initiation can be modified homeostatically within neurons to adjust neuronal activity. Here we show that this potential mechanism for neuronal plasticity can also be exploited in a rapid and dynamic fashion. We tested whether dislocation of the spike initiation zone affects signal integration by studying ectopic spike initiation in the anterior gastric receptor neuron (AGR) of the stomatogastric nervous system of Cancer borealis. Like many other vertebrate and invertebrate neurons, AGR can generate ectopic APs in regions distinct from the axon initial segment. Using voltage-sensitive dyes and electrophysiology, we determined that AGR's ectopic spike activity was consistently initiated in the neuropil region of the stomatogastric ganglion motor circuits. At least one neurite branched off the AGR axon in this area, and indeed we found that AGR's ectopic spike activity was influenced by local motor neurons. This sensorimotor interaction was state-dependent, in that focal axon modulation with the biogenic amine octopamine abolished signal integration at the primary spike initiation zone by dislocating spike initiation to a distant region of the axon. We demonstrate that the site of ectopic spike initiation is important for signal integration and that axonal neuromodulation allows for a dynamic adjustment of signal integration. Although it is known that action potentials are initiated at specific sites in the axon, it remains to be determined how the precise location of action potential initiation affects neuronal activity and signal integration. 
We addressed this issue by studying ectopic spiking in the axon of

  11. DL-ReSuMe: A Delay Learning-Based Remote Supervised Method for Spiking Neurons.

    Science.gov (United States)

    Taherkhani, Aboozar; Belatreche, Ammar; Li, Yuhua; Maguire, Liam P

    2015-12-01

    Recent research has shown the potential capability of spiking neural networks (SNNs) to model complex information processing in the brain. There is biological evidence that the precise timing of spikes is used for information coding. However, the exact learning mechanism by which a neuron is trained to fire at precise times remains an open problem. The majority of the existing learning methods for SNNs are based on weight adjustment. However, there is also biological evidence that synaptic delay is not constant. In this paper, a learning method for spiking neurons, called the delay learning remote supervised method (DL-ReSuMe), is proposed to merge the delay-shift approach with ReSuMe-based weight adjustment to enhance learning performance. DL-ReSuMe uses more biologically plausible properties, such as delay learning, and needs less weight adjustment than ReSuMe. Simulation results have shown that the proposed DL-ReSuMe approach achieves learning accuracy and learning speed improvements compared with ReSuMe.
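The ReSuMe weight-adjustment component that DL-ReSuMe builds on can be sketched roughly as follows. This is an illustrative simplification under stated assumptions: the exponential learning window, the non-Hebbian constant a, and the amplitude A are hypothetical parameter choices, and the delay-shift half of DL-ReSuMe is omitted entirely. The core idea carried over is that desired output spikes potentiate and actual output spikes depress, each weighted by a trace of earlier presynaptic spikes:

```python
import numpy as np

def resume_delta_w(input_spikes, desired_spikes, output_spikes,
                   a=0.01, A=1.0, tau=5.0):
    """ReSuMe-style weight change for one synapse (sketch): each desired
    spike contributes potentiation, each actual output spike contributes
    depression, both scaled by an exponential trace of the presynaptic
    spikes plus a small non-Hebbian term a."""
    def trace(t):
        # exponentially decaying influence of presynaptic spikes before t
        return sum(np.exp(-(t - s) / tau) for s in input_spikes if s <= t)
    dw = sum(a + A * trace(t) for t in desired_spikes)      # potentiation
    dw -= sum(a + A * trace(t) for t in output_spikes)      # depression
    return dw
```

When the actual output exactly matches the desired train, the two sums cancel and the weight is left unchanged, which is the fixed point the supervised rule converges toward.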

  12. Self-Organization of Microcircuits in Networks of Spiking Neurons with Plastic Synapses.

    Directory of Open Access Journals (Sweden)

    Gabriel Koch Ocker

    2015-08-01

    Full Text Available The synaptic connectivity of cortical networks features an overrepresentation of certain wiring motifs compared to simple random-network models. This structure is shaped, in part, by synaptic plasticity that promotes or suppresses connections between neurons depending on their joint spiking activity. Frequently, theoretical studies focus on how feedforward inputs drive plasticity to create this network structure. We study the complementary scenario of self-organized structure in a recurrent network, with spike timing-dependent plasticity driven by spontaneous dynamics. We develop a self-consistent theory for the evolution of network structure by combining fast spiking covariance with a slow evolution of synaptic weights. Through a finite-size expansion of network dynamics we obtain a low-dimensional set of nonlinear differential equations for the evolution of two-synapse connectivity motifs. With this theory in hand, we explore how the form of the plasticity rule drives the evolution of microcircuits in cortical networks. When potentiation and depression are in approximate balance, synaptic dynamics depend on weighted divergent, convergent, and chain motifs. For additive, Hebbian STDP these motif interactions create instabilities in synaptic dynamics that either promote or suppress the initial network structure. Our work provides a consistent theoretical framework for studying how spiking activity in recurrent networks interacts with synaptic plasticity to determine network structure.
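The additive Hebbian STDP rule studied above is commonly written as a pair of exponential windows: potentiation when the presynaptic spike precedes the postsynaptic spike, depression otherwise. A minimal sketch with illustrative parameter values; the approximate balance condition mentioned in the abstract corresponds to A₊τ₊ ≈ A₋τ₋:

```python
import numpy as np

def stdp_window(dt, A_plus=0.005, A_minus=0.005, tau_plus=20.0, tau_minus=20.0):
    """Additive Hebbian STDP window as a function of dt = t_post - t_pre (ms):
    potentiation for dt > 0 (pre before post), depression for dt <= 0.
    Potentiation and depression are in balance when
    A_plus * tau_plus == A_minus * tau_minus."""
    dt = np.asarray(dt, dtype=float)
    return np.where(dt > 0,
                    A_plus * np.exp(-dt / tau_plus),
                    -A_minus * np.exp(dt / tau_minus))
```

In the theory above, the slow evolution of each synaptic weight is obtained by integrating this window against the fast spiking covariance between the pre- and postsynaptic neurons, which is how the motif-level dynamics arise.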

  13. SPAN: spike pattern association neuron for learning spatio-temporal sequences

    OpenAIRE

    Mohemmed, A; Schliebs, S; Matsuda, S; Kasabov, N

    2012-01-01

    Spiking Neural Networks (SNN) were shown to be suitable tools for the processing of spatio-temporal information. However, due to their inherent complexity, the formulation of efficient supervised learning algorithms for SNN is difficult and remains an important problem in the research area. This article presents SPAN — a spiking neuron that is able to learn associations of arbitrary spike trains in a supervised fashion allowing the processing of spatio-temporal information encoded in the prec...

  14. Simulation of a spiking neuron circuit using carbon nanotube transistors

    Energy Technology Data Exchange (ETDEWEB)

    Najari, Montassar, E-mail: malnjar@jazanu.edu.sa [Departement of Physics, Faculty of Sciences, University of Gabes, Gabes (Tunisia); IKCE unit, Jazan University, Jazan (Saudi Arabia); El-Grour, Tarek, E-mail: grour-tarek@hotmail.fr [Departement of Physics, Faculty of Sciences, University of Gabes, Gabes (Tunisia); Jelliti, Sami, E-mail: sjelliti@jazanu.edu.sa [IKCE unit, Jazan University, Jazan (Saudi Arabia); Hakami, Othman Mousa, E-mail: omhakami@jazanu.edu.sa [IKCE unit, Jazan University, Jazan (Saudi Arabia); Faculty of Sciences, Jazan University, Jazan (Saudi Arabia)

    2016-06-10

    Neuromorphic engineering is related to the existing analogies between physical semiconductor VLSI (Very Large Scale Integration) and biophysics. Neuromorphic systems propose to reproduce the structure and function of biological neural systems in order to transfer their computational capacity onto silicon. Since the pioneering research of Carver Mead, neuromorphic engineering has continued to produce remarkable implementations of biological systems. This work presents a simulation of an elementary neuron cell with a carbon nanotube transistor (CNTFET)-based technology. The simulated neuron model is the integrate-and-fire (I&F) model introduced by G. Indiveri in 2009. The circuit was simulated with CNTFET technology in the ADS environment to verify its neuromorphic activity in terms of membrane potential. This work demonstrates the efficiency of this emerging device, the CNTFET, in the design of such architectures in terms of power consumption and technology integration density.
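The integrate-and-fire behaviour the circuit reproduces can be sketched in software as follows. This is a generic leaky integrate-and-fire sketch; the time constant, threshold, and drive values are illustrative and unrelated to the actual CNTFET circuit parameters:

```python
def lif_spike_times(I=1.5, tau=20.0, v_th=1.0, v_reset=0.0, dt=0.1, T=100.0):
    """Leaky integrate-and-fire: tau * dv/dt = -v + I. The membrane variable
    integrates the input, and on crossing v_th it emits a spike and resets.
    Returns spike times (ms). Units are dimensionless/illustrative."""
    v, spikes = 0.0, []
    for k in range(int(T / dt)):
        v += dt / tau * (-v + I)
        if v >= v_th:
            spikes.append(k * dt)
            v = v_reset
    return spikes
```

Below threshold (I < v_th) the membrane settles without firing; above it, the neuron fires at a regular rate set by tau and the distance from I to threshold, which is the membrane-potential behaviour the CNTFET simulation verifies.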

  15. A reconfigurable on-line learning spiking neuromorphic processor comprising 256 neurons and 128K synapses

    Science.gov (United States)

    Qiao, Ning; Mostafa, Hesham; Corradi, Federico; Osswald, Marc; Stefanini, Fabio; Sumislawska, Dora; Indiveri, Giacomo

    2015-01-01

    Implementing compact, low-power artificial neural processing systems with real-time on-line learning abilities is still an open challenge. In this paper we present a full-custom mixed-signal VLSI device with neuromorphic learning circuits that emulate the biophysics of real spiking neurons and dynamic synapses, for exploring the properties of computational neuroscience models and for building brain-inspired computing systems. The proposed architecture allows the on-chip configuration of a wide range of network connectivities, including recurrent and deep networks, with short-term and long-term plasticity. The device comprises 128K analog synapse circuits and 256 neuron circuits with biologically plausible dynamics and bi-stable spike-based plasticity mechanisms that endow it with on-line learning abilities. In addition to the analog circuits, the device also comprises asynchronous digital logic circuits for setting different synapse and neuron properties as well as different network configurations. This prototype device, fabricated using a 180 nm 1P6M CMOS process, occupies an area of 51.4 mm², and consumes approximately 4 mW for typical experiments, for example involving attractor networks. Here we describe the details of the overall architecture and of the individual circuits and present experimental results that showcase its potential. By supporting a wide range of cortical-like computational modules comprising plasticity mechanisms, this device will enable the realization of intelligent autonomous systems with on-line learning capabilities. PMID:25972778

  16. Spectral components of cytosolic [Ca2+] spiking in neurons

    DEFF Research Database (Denmark)

    Kardos, J; Szilágyi, N; Juhász, G

    1998-01-01

    We show here, by means of evolutionary spectral analysis and synthesis of cytosolic Ca2+ ([Ca2+]c) spiking observed at the single-cell level using digital imaging fluorescence microscopy of fura-2-loaded mouse cerebellar granule cells in culture, that [Ca2+]c spiking can be resolved into evolutionary spectra of a characteristic set of frequencies. Non-delayed small spikes on top of sustained [Ca2+]c were synthesized by a main component frequency, 0.132+/-0.012 Hz, showing its maximal amplitude in phase with the start of depolarization (25 mM KCl) combined with caffeine (10 mM) application...

  17. Neurobiologically Realistic Determinants of Self-Organized Criticality in Networks of Spiking Neurons

    Science.gov (United States)

    Rubinov, Mikail; Sporns, Olaf; Thivierge, Jean-Philippe; Breakspear, Michael

    2011-01-01

    Self-organized criticality refers to the spontaneous emergence of self-similar dynamics in complex systems poised between order and randomness. The presence of self-organized critical dynamics in the brain is theoretically appealing and is supported by recent neurophysiological studies. Despite this, the neurobiological determinants of these dynamics have not been previously sought. Here, we systematically examined the influence of such determinants in hierarchically modular networks of leaky integrate-and-fire neurons with spike-timing-dependent synaptic plasticity and axonal conduction delays. We characterized emergent dynamics in our networks by distributions of active neuronal ensemble modules (neuronal avalanches) and rigorously assessed these distributions for power-law scaling. We found that spike-timing-dependent synaptic plasticity enabled a rapid phase transition from random subcritical dynamics to ordered supercritical dynamics. Importantly, modular connectivity and low wiring cost broadened this transition, and enabled a regime indicative of self-organized criticality. The regime only occurred when modular connectivity, low wiring cost and synaptic plasticity were simultaneously present, and the regime was most evident when between-module connection density scaled as a power-law. The regime was robust to variations in other neurobiologically relevant parameters and favored systems with low external drive and strong internal interactions. Increases in system size and connectivity facilitated internal interactions, permitting reductions in external drive and facilitating convergence of postsynaptic-response magnitude and synaptic-plasticity learning rate parameter values towards neurobiologically realistic levels. We hence infer a novel association between self-organized critical neuronal dynamics and several neurobiologically realistic features of structural connectivity. The central role of these features in our model may reflect their importance for

  18. Intracellular calcium spikes in rat suprachiasmatic nucleus neurons induced by BAPTA-based calcium dyes.

    Directory of Open Access Journals (Sweden)

    Jin Hee Hong

    Full Text Available BACKGROUND: Circadian rhythms in spontaneous action potential (AP) firing frequencies and in cytosolic free calcium concentrations have been reported for mammalian circadian pacemaker neurons located within the hypothalamic suprachiasmatic nucleus (SCN). Also reported is the existence of "Ca2+ spikes" (i.e., [Ca2+]c transients with a bandwidth of approximately 10-100 seconds) in SCN neurons, but it is unclear if these SCN Ca2+ spikes are related to the slow circadian rhythms. METHODOLOGY/PRINCIPAL FINDINGS: We addressed this issue using a Ca2+ indicator dye (fluo-4) and a protein Ca2+ sensor (yellow cameleon). Using fluo-4 AM dye, we found spontaneous Ca2+ spikes in 18% of rat SCN cells in acute brain slices, but the Ca2+ spiking frequencies showed no day/night variation. We repeated the same experiments with rat (and mouse) SCN slice cultures that expressed yellow cameleon genes for a number of different circadian phases and, surprisingly, spontaneous Ca2+ spikes were barely observed (<3%). When fluo-4 AM or BAPTA-AM was loaded in addition to the cameleon-expressing SCN cultures, however, the number of cells exhibiting Ca2+ spikes increased to approximately 13-14%. CONCLUSIONS/SIGNIFICANCE: Despite our extensive set of experiments, no evidence of a circadian rhythm was found in the spontaneous Ca2+ spiking activity of the SCN. Furthermore, our study strongly suggests that the spontaneous Ca2+ spiking activity is caused by the Ca2+-chelating effect of the BAPTA-based fluo-4 dye. Therefore, this induced activity seems irrelevant to the intrinsic circadian rhythm of [Ca2+]c in SCN neurons. The problems with BAPTA-based dyes are widely known and our study provides a clear case for concern, in particular for SCN Ca2+ spikes. On the other hand, our study neither invalidates the use of these dyes as a whole, nor undermines the potential role of SCN Ca2+ spikes in the function of the SCN.

  19. Operant conditioning of synaptic and spiking activity patterns in single hippocampal neurons.

    Science.gov (United States)

    Ishikawa, Daisuke; Matsumoto, Nobuyoshi; Sakaguchi, Tetsuya; Matsuki, Norio; Ikegaya, Yuji

    2014-04-02

    Learning is a process of plastic adaptation through which a neural circuit generates a more preferable outcome; however, at a microscopic level, little is known about how synaptic activity is patterned into a desired configuration. Here, we report that animals can generate a specific form of synaptic activity in a given neuron in the hippocampus. In awake, head-restricted mice, we applied electrical stimulation to the lateral hypothalamus, a reward-associated brain region, when whole-cell patch-clamped CA1 neurons exhibited spontaneous synaptic activity that met preset criteria. Within 15 min, the mice learned to generate frequently the excitatory synaptic input pattern that satisfied the criteria. This reinforcement learning of synaptic activity was not observed for inhibitory input patterns. When a burst unit activity pattern was conditioned in paired and nonpaired paradigms, the frequency of burst-spiking events increased and decreased, respectively. The burst reinforcement occurred in the conditioned neuron but not in other adjacent neurons; however, ripple field oscillations were concomitantly reinforced. Neural conditioning depended on activation of NMDA receptors and dopamine D1 receptors. Acutely stressed mice and depression model mice that were subjected to forced swimming failed to exhibit the neural conditioning. This learning deficit was rescued by repetitive treatment with fluoxetine, an antidepressant. Therefore, internally motivated animals are capable of routing an ongoing action potential series into a specific neural pathway of the hippocampal network.

  20. An STDP training algorithm for a spiking neural network with dynamic threshold neurons.

    Science.gov (United States)

    Strain, T J; McDaid, L J; McGinnity, T M; Maguire, L P; Sayers, H M

    2010-12-01

    This paper proposes a supervised training algorithm for Spiking Neural Networks (SNNs) which modifies the Spike Timing Dependent Plasticity (STDP) learning rule to support both local and network-level training with multiple synaptic connections and axonal delays. The training algorithm applies the rule to two- and three-layer SNNs, and is benchmarked using the Iris and Wisconsin Breast Cancer (WBC) data sets. The effectiveness of hidden-layer dynamic threshold neurons is also investigated and results are presented.
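A dynamic-threshold neuron of the kind used in the hidden layer can be sketched as follows. This is a generic formulation under stated assumptions, not the paper's model: the decay-and-jump threshold mechanism is one common way to implement spike-rate adaptation, and the memoryless drive and all parameter values are illustrative:

```python
import math

def dynamic_threshold_neuron(inputs, theta0=1.0, d_theta=0.5,
                             tau_theta=10.0, dt=1.0):
    """Dynamic-threshold sketch: the firing threshold jumps by d_theta after
    each spike and relaxes back to theta0 with time constant tau_theta,
    suppressing rapid re-firing. inputs: drive per time step (memoryless,
    for illustration). Returns indices of time steps with spikes."""
    theta, spikes = theta0, []
    for k, x in enumerate(inputs):
        # exponential relaxation of the threshold toward its resting value
        theta = theta0 + (theta - theta0) * math.exp(-dt / tau_theta)
        if x >= theta:
            spikes.append(k)
            theta += d_theta      # raise the bar after each spike
    return spikes
```

Under a constant drive slightly above the resting threshold, the neuron fires once and then stays silent until the threshold has decayed back below the drive, so the inter-spike interval is set by tau_theta rather than by the input alone.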

  1. Integrative spike dynamics of rat CA1 neurons: a multineuronal imaging study.

    Science.gov (United States)

    Sasaki, Takuya; Kimura, Rie; Tsukamoto, Masako; Matsuki, Norio; Ikegaya, Yuji

    2006-07-01

    The brain operates through a coordinated interplay of numerous neurons, yet little is known about the collective behaviour of individual neurons embedded in a huge network. We used large-scale optical recordings to address synaptic integration in hundreds of neurons. In hippocampal slice cultures bolus-loaded with Ca2+ fluorophores, we stimulated the Schaffer collaterals and monitored the aggregate presynaptic activity from the stratum radiatum and individual postsynaptic spikes from the CA1 stratum pyramidale. Single neurons responded to varying synaptic inputs with unreliable spikes, but at the population level, the networks stably output a linear sum of synaptic inputs. Nonetheless, the network activity, even when given constant stimuli, varied from trial to trial. This variation emerged through time-varying recruitment of different neuron subsets, which were shaped by correlated background noise. We also mapped the input-frequency preference in spiking activity and found that the majority of CA1 neurons fired in response to a limited range of presynaptic firing rates (20-40 Hz), acting like a band-pass filter, although a few neurons had high-pass-like or low-pass-like characteristics. This frequency selectivity depended on phasic inhibitory transmission. Thus, our imaging approach enables the linking of single-cell behaviours to their communal dynamics, and we discovered that, even in a relatively simple CA1 circuit, neurons could be engaged in concordant information processing.

  2. Stochastic hybrid model of spontaneous dendritic NMDA spikes

    International Nuclear Information System (INIS)

    Bressloff, Paul C; Newby, Jay M

    2014-01-01

    Following recent advances in imaging techniques and methods of dendritic stimulation, active voltage spikes have been observed in thin dendritic branches of excitatory pyramidal neurons, where the majority of synapses occur. The generation of these dendritic spikes involves both Na+ ion channels and N-methyl-D-aspartate receptor (NMDAR) channels. During strong stimulation of a thin dendrite, the resulting high levels of glutamate, the main excitatory neurotransmitter in the central nervous system and an NMDA agonist, modify the current-voltage (I-V) characteristics of an NMDAR so that it behaves like a voltage-gated Na+ channel. Hence, the NMDARs can fire a regenerative dendritic spike, just as Na+ channels support the initiation of an action potential following membrane depolarization. However, the duration of the dendritic spike is of the order of 100 ms rather than 1 ms, since it involves slow unbinding of glutamate from NMDARs rather than activation of hyperpolarizing K+ channels. It has been suggested that dendritic NMDA spikes may play an important role in dendritic computations and provide a cellular substrate for short-term memory. In this paper, we consider a stochastic, conductance-based model of dendritic NMDA spikes, in which the noise originates from the stochastic opening and closing of a finite number of Na+ and NMDA receptor ion channels. The resulting model takes the form of a stochastic hybrid system, in which membrane voltage evolves according to a piecewise deterministic dynamics that is coupled to a jump Markov process describing the opening and closing of the ion channels. We formulate the noise-induced initiation and termination of a dendritic spike in terms of a first-passage time problem, under the assumption that glutamate unbinding is negligible, which we then solve using a combination of WKB methods and singular perturbation theory. Using a stochastic phase-plane analysis we then extend our analysis to take proper account of the
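The jump Markov process describing channel opening and closing can be illustrated with a single two-state channel in discrete time. The transition probabilities below are illustrative placeholders, not values fitted to Na+ or NMDAR kinetics; the point is only the structure of the process, whose long-run open fraction is p_open / (p_open + p_close):

```python
import random

def simulate_channel(n_steps, p_open=0.02, p_close=0.1, seed=0):
    """Discrete-time sketch of a two-state ion channel: at each step a closed
    channel opens with probability p_open and an open channel closes with
    probability p_close. Returns the 0/1 state trace (0 = closed, 1 = open).
    The stationary open probability is p_open / (p_open + p_close)."""
    rng = random.Random(seed)
    state, trace = 0, []
    for _ in range(n_steps):
        if state == 0 and rng.random() < p_open:
            state = 1
        elif state == 1 and rng.random() < p_close:
            state = 0
        trace.append(state)
    return trace
```

In the full hybrid model the membrane voltage evolves deterministically between such jumps, with the open-channel count modulating the conductances; summing many independent channel traces recovers the finite-size fluctuations that drive spike initiation and termination.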

  3. Acetic acid modulates spike rate and spike latency to salt in peripheral gustatory neurons of rats

    Science.gov (United States)

    Breza, Joseph M.

    2012-01-01

    Sour and salt taste interactions are not well understood in the peripheral gustatory system. Therefore, we investigated the interaction of acetic acid and NaCl on taste processing by rat chorda tympani neurons. We recorded multi-unit responses from the severed chorda tympani nerve (CT) and single-cell responses from intact narrowly tuned and broadly tuned salt-sensitive neurons in the geniculate ganglion simultaneously with stimulus-evoked summated potentials to signal when the stimulus contacted the lingual epithelium. Artificial saliva served as the rinse and solvent for all stimuli [0.3 M NH4Cl, 0.5 M sucrose, 0.1 M NaCl, 0.01 M citric acid, 0.02 M quinine hydrochloride (QHCl), 0.1 M KCl, 0.003–0.1 M acetic acid, and 0.003–0.1 M acetic acid mixed with 0.1 M NaCl]. We used benzamil to assess NaCl responses mediated by the epithelial sodium channel (ENaC). The CT nerve responses to acetic acid/NaCl mixtures were less than those predicted by summing the component responses. Single-unit analyses revealed that acetic acid activated acid-generalist neurons exclusively in a concentration-dependent manner: increasing acid concentration increased response frequency and decreased response latency in a parallel fashion. Acetic acid suppressed NaCl responses in ENaC-dependent NaCl-specialist neurons, whereas acetic acid-NaCl mixtures were additive in acid-generalist neurons. These data suggest that acetic acid attenuates sodium responses in ENaC-expressing-taste cells in contact with NaCl-specialist neurons, whereas acetic acid-NaCl mixtures activate distinct receptor/cellular mechanisms on taste cells in contact with acid-generalist neurons. We speculate that NaCl-specialist neurons are in contact with type I cells, whereas acid-generalist neurons are in contact with type III cells in fungiform taste buds. PMID:22896718

  4. Versatile Networks of Simulated Spiking Neurons Displaying Winner-Take-All Behavior

    Directory of Open Access Journals (Sweden)

    Yanqing eChen

    2013-03-01

    Full Text Available We describe simulations of large-scale networks of excitatory and inhibitory spiking neurons that can generate dynamically stable winner-take-all (WTA behavior. The network connectivity is a variant of center-surround architecture that we call center-annular-surround (CAS. In this architecture each neuron is excited by nearby neighbors and inhibited by more distant neighbors in an annular-surround region. The neural units of these networks simulate conductance-based spiking neurons that interact via mechanisms susceptible to both short-term synaptic plasticity and STDP. We show that such CAS networks display robust WTA behavior unlike the center-surround networks and other control architectures that we have studied. We find that a large-scale network of spiking neurons with separate populations of excitatory and inhibitory neurons can give rise to smooth maps of sensory input. In addition, we show that a humanoid Brain-Based-Device (BBD under the control of a spiking WTA neural network can learn to reach to target positions in its visual field, thus demonstrating the acquisition of sensorimotor coordination.
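    The center-annular-surround (CAS) connectivity described in this abstract can be sketched as a distance-dependent weight function; the radii and weights below are illustrative placeholders, not values from the paper:

```python
def cas_weight(distance, r_exc=2.0, r_inh=6.0, w_exc=1.0, w_inh=-0.5):
    """Center-annular-surround (CAS) coupling: a neuron excites nearby
    neighbors, inhibits neighbors in an annular surround, and is
    unconnected beyond the annulus. Radii and weights are illustrative."""
    if distance <= r_exc:
        return w_exc
    if distance <= r_inh:
        return w_inh
    return 0.0

# A 1-D slice through the kernel: central excitation, inhibitory annulus.
profile = [cas_weight(d) for d in range(9)]
```

    In a full network model, each neuron's input would be the sum of its neighbors' spikes weighted by this kernel.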

  5. Synaptic Plasticity and Spike Synchronisation in Neuronal Networks

    Science.gov (United States)

    Borges, Rafael R.; Borges, Fernando S.; Lameu, Ewandson L.; Protachevicz, Paulo R.; Iarosz, Kelly C.; Caldas, Iberê L.; Viana, Ricardo L.; Macau, Elbert E. N.; Baptista, Murilo S.; Grebogi, Celso; Batista, Antonio M.

    2017-12-01

    Brain plasticity, also known as neuroplasticity, is a fundamental mechanism of neuronal adaptation in response to changes in the environment or due to brain injury. In this review, we show our results about the effects of synaptic plasticity on neuronal networks composed of Hodgkin-Huxley neurons. We show that the final topology of the evolved network depends crucially on the ratio between the strengths of the inhibitory and excitatory synapses. Excitation of the same order as inhibition reveals an evolved network that presents the rich-club phenomenon, well known to exist in the brain. For initial networks with considerably larger inhibitory strengths, we observe the emergence of a complex evolved topology in which neurons are sparsely connected to other neurons, also a typical topology of the brain. The presence of noise enhances the strength of both types of synapses, but only if the initial network has synapses of both natures with similar strengths. Finally, we show how the synchronous behaviour of the evolved network reflects its evolved topology.

  6. Spike train statistics for consonant and dissonant musical accords in a simple auditory sensory model

    Science.gov (United States)

    Ushakov, Yuriy V.; Dubkov, Alexander A.; Spagnolo, Bernardo

    2010-04-01

    The phenomena of dissonance and consonance in a simple auditory sensory model composed of three neurons are considered. Two of them, the so-called sensory neurons, are driven by noise and subthreshold periodic signals with different frequency ratios, and their outputs plus noise are applied synaptically to a third neuron, the so-called interneuron. We present a theoretical analysis with a probabilistic approach to investigate the interspike interval statistics of the spike train generated by the interneuron. We find that tones with frequency ratios that are considered consonant by musicians produce at the third neuron inter-firing interval densities that are very distinct from the densities obtained using tones with ratios that are known to be dissonant. In other words, at the output of the interneuron, inharmonious signals give rise to blurry spike trains, while harmonious signals produce more regular, less noisy spike trains. Theoretical results are compared with numerical simulations.

  7. Interplay between Graph Topology and Correlations of Third Order in Spiking Neuronal Networks.

    Science.gov (United States)

    Jovanović, Stojan; Rotter, Stefan

    2016-06-01

    The study of processes evolving on networks has recently become a very popular research field, not only because of the rich mathematical theory that underpins it, but also because of its many possible applications, a number of them in the field of biology. Indeed, molecular signaling pathways, gene regulation, predator-prey interactions and the communication between neurons in the brain can be seen as examples of networks with complex dynamics. The properties of such dynamics depend largely on the topology of the underlying network graph. In this work, we want to answer the following question: Knowing network connectivity, what can be said about the level of third-order correlations that will characterize the network dynamics? We consider a linear point process as a model for pulse-coded, or spiking activity in a neuronal network. Using recent results from theory of such processes, we study third-order correlations between spike trains in such a system and explain which features of the network graph (i.e. which topological motifs) are responsible for their emergence. Comparing two different models of network topology-random networks of Erdős-Rényi type and networks with highly interconnected hubs-we find that, in random networks, the average measure of third-order correlations does not depend on the local connectivity properties, but rather on global parameters, such as the connection probability. This, however, ceases to be the case in networks with a geometric out-degree distribution, where topological specificities have a strong impact on average correlations.

  8. Interplay between Graph Topology and Correlations of Third Order in Spiking Neuronal Networks.

    Directory of Open Access Journals (Sweden)

    Stojan Jovanović

    2016-06-01

    Full Text Available The study of processes evolving on networks has recently become a very popular research field, not only because of the rich mathematical theory that underpins it, but also because of its many possible applications, a number of them in the field of biology. Indeed, molecular signaling pathways, gene regulation, predator-prey interactions and the communication between neurons in the brain can be seen as examples of networks with complex dynamics. The properties of such dynamics depend largely on the topology of the underlying network graph. In this work, we want to answer the following question: Knowing network connectivity, what can be said about the level of third-order correlations that will characterize the network dynamics? We consider a linear point process as a model for pulse-coded, or spiking activity in a neuronal network. Using recent results from theory of such processes, we study third-order correlations between spike trains in such a system and explain which features of the network graph (i.e. which topological motifs) are responsible for their emergence. Comparing two different models of network topology-random networks of Erdős-Rényi type and networks with highly interconnected hubs-we find that, in random networks, the average measure of third-order correlations does not depend on the local connectivity properties, but rather on global parameters, such as the connection probability. This, however, ceases to be the case in networks with a geometric out-degree distribution, where topological specificities have a strong impact on average correlations.

  9. Complex transitions between spike, burst or chaos synchronization states in coupled neurons with coexisting bursting patterns

    International Nuclear Information System (INIS)

    Gu Hua-Guang; Chen Sheng-Gen; Li Yu-Ye

    2015-01-01

    We investigated the synchronization dynamics of a coupled neuronal system composed of two identical Chay model neurons. The Chay model showed coexisting period-1 and period-2 bursting patterns as a parameter and initial values are varied. We simulated multiple periodic and chaotic bursting patterns with non-synchronization (NS), burst phase synchronization (BS), spike phase synchronization (SS), complete synchronization (CS), and lag synchronization states. When the coexisting behavior is near period-2 bursting, the synchronization states of the coupled system follow a very complex sequence of transitions that begins with transitions between BS and SS, moves to transitions between CS and SS, and then to CS. Most initial values lead to the CS state of period-2 bursting while only a few lead to the CS state of period-1 bursting. When the coexisting behavior is near period-1 bursting, the transitions begin with NS, move to transitions between SS and BS, then between SS and CS, and finally to CS. Most initial values lead to the CS state of period-1 bursting but a few lead to the CS state of period-2 bursting. The BS was identified as chaos synchronization. The patterns of NS and of transitions between BS and SS are insensitive to initial values, whereas the patterns of transitions between CS and SS and the CS state are sensitive to them. The number of spikes per burst of non-CS bursting increases with increasing coupling strength. These results not only reveal the initial-value- and parameter-dependent synchronization transitions of coupled systems with coexisting behaviors, but also facilitate interpretation of the various bursting patterns and synchronization transitions generated in the nervous system with weak coupling strength. (paper)

  10. Modulation of the spike activity of neocortex neurons during a conditioned reflex.

    Science.gov (United States)

    Storozhuk, V M; Sanzharovskii, A V; Sachenko, V V; Busel, B I

    2000-01-01

    Experiments were conducted on cats to study the effects of iontophoretic application of glutamate and a number of modulators on the spike activity of neurons in the sensorimotor cortex during a conditioned reflex. These studies showed that glutamate, as well as exerting a direct influence on neuron spike activity, also had a delayed facilitatory action lasting 10-20 min after iontophoresis was finished. Adrenomimetics were found to have a double modulatory effect on intracortical glutamate connections: inhibitory and facilitatory effects were mediated by beta1 and beta2 adrenoceptors respectively. Although dopamine, like glutamate, facilitated neuron spike activity during the period of application, the simultaneous facilitatory actions of glutamate and L-DOPA were accompanied by occlusion of spike activity, and simultaneous application of glutamate and haloperidol suppressed spike activity associated with the conditioned reflex response. Facilitation thus appears to show a significant level of dependence on metabotropic glutamate receptors which, like dopamine receptors, are linked to the intracellular medium via Gi proteins.

  11. Code-specific learning rules improve action selection by populations of spiking neurons.

    Science.gov (United States)

    Friedrich, Johannes; Urbanczik, Robert; Senn, Walter

    2014-08-01

    Population coding is widely regarded as a key mechanism for achieving reliable behavioral decisions. We previously introduced reinforcement learning for population-based decision making by spiking neurons. Here we generalize population reinforcement learning to spike-based plasticity rules that take account of the postsynaptic neural code. We consider spike/no-spike, spike count and spike latency codes. The multi-valued and continuous-valued features in the postsynaptic code allow for a generalization of binary decision making to multi-valued decision making and continuous-valued action selection. We show that code-specific learning rules speed up learning both for the discrete classification and the continuous regression tasks. The suggested learning rules also speed up with increasing population size as opposed to standard reinforcement learning rules. Continuous action selection is further shown to explain realistic learning speeds in the Morris water maze. Finally, we introduce the concept of action perturbation as opposed to the classical weight- or node-perturbation as an exploration mechanism underlying reinforcement learning. Exploration in the action space greatly increases the speed of learning as compared to exploration in the neuron or weight space.

  12. Thermal impact on spiking properties in Hodgkin–Huxley neuron ...

    Indian Academy of Sciences (India)

    ... which from the engineering perspective indicates the occurrence of optimal use of synaptic transmission in the nervous system. We further explore the biophysical origin of this phenomenon associated with ion channel gating kinetics and also discuss its possible biological relevance in information processing in neuronal ...

  13. Fluctuations and information filtering in coupled populations of spiking neurons with adaptation.

    Science.gov (United States)

    Deger, Moritz; Schwalger, Tilo; Naud, Richard; Gerstner, Wulfram

    2014-12-01

    Finite-sized populations of spiking elements are fundamental to brain function but also are used in many areas of physics. Here we present a theory of the dynamics of finite-sized populations of spiking units, based on a quasirenewal description of neurons with adaptation. We derive an integral equation with colored noise that governs the stochastic dynamics of the population activity in response to time-dependent stimulation and calculate the spectral density in the asynchronous state. We show that systems of coupled populations with adaptation can generate a frequency band in which sensory information is preferentially encoded. The theory is applicable to fully as well as randomly connected networks and to leaky integrate-and-fire as well as to generalized spiking neurons with adaptation on multiple time scales.

  14. Analog frontend for multichannel neuronal recording system with spike and LFP separation.

    Science.gov (United States)

    Perelman, Yevgeny; Ginosar, Ran

    2006-05-15

    A 0.35 µm CMOS integrated circuit for multi-channel neuronal recording with twelve true-differential channels, band separation and digital offset calibration is presented. The measured signal is separated into a low-frequency local field potential and high-frequency spike data. Digitally programmable gains of up to 60 and 80 dB for the local field potential and spike bands are provided. DC offsets are compensated on both bands by means of digitally programmable DACs. The spike band is limited by a second-order low-pass filter with digitally programmable cutoff frequency. The IC has been fabricated and tested; 3 µV input-referred noise on the spike data band was measured.

  15. Inference of neuronal functional circuitry with spike-triggered non-negative matrix factorization.

    Science.gov (United States)

    Liu, Jian K; Schreyer, Helene M; Onken, Arno; Rozenblit, Fernando; Khani, Mohammad H; Krishnamoorthy, Vidhyasankar; Panzeri, Stefano; Gollisch, Tim

    2017-07-26

    Neurons in sensory systems often pool inputs over arrays of presynaptic cells, giving rise to functional subunits inside a neuron's receptive field. The organization of these subunits provides a signature of the neuron's presynaptic functional connectivity and determines how the neuron integrates sensory stimuli. Here we introduce the method of spike-triggered non-negative matrix factorization for detecting the layout of subunits within a neuron's receptive field. The method only requires the neuron's spiking responses under finely structured sensory stimulation and is therefore applicable to large populations of simultaneously recorded neurons. Applied to recordings from ganglion cells in the salamander retina, the method retrieves the receptive fields of presynaptic bipolar cells, as verified by simultaneous bipolar and ganglion cell recordings. The identified subunit layouts allow improved predictions of ganglion cell responses to natural stimuli and reveal shared bipolar cell input into distinct types of ganglion cells. How a neuron integrates sensory information requires knowledge about its functional presynaptic connections. Here the authors report a new method using non-negative matrix factorization to identify the layout of presynaptic bipolar cell inputs onto retinal ganglion cells and predict their responses to natural stimuli.
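    The factorization step at the heart of this method can be illustrated with standard Lee-Seung multiplicative updates on a small synthetic non-negative matrix standing in for a collection of spike-triggered stimuli; the matrix, rank, and iteration count below are our own illustrative choices, not the paper's:

```python
import numpy as np

def nmf(V, k, iters=300, eps=1e-9, seed=0):
    """Non-negative matrix factorization V ~ W @ H via multiplicative
    updates minimizing the Frobenius error; W, H stay non-negative."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, k)) + 0.1
    H = rng.random((k, n)) + 0.1
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Toy "spike-triggered stimulus" matrix built from two non-negative parts,
# analogous to two subunits mixing into observed responses.
parts = np.array([[1., 0., 0., 1.], [0., 1., 1., 0.]])
weights = np.array([[1., 0.], [0., 1.], [1., 1.]])
V = weights @ parts
W, H = nmf(V, k=2)
err = np.linalg.norm(V - W @ H)
```

    On an exactly rank-2 non-negative matrix the updates drive the reconstruction error close to zero, and the rows of H recover the underlying parts up to scaling and permutation.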

  16. Reconstructing stimuli from the spike-times of leaky integrate and fire neurons

    Directory of Open Access Journals (Sweden)

    Sebastian eGerwinn

    2011-02-01

    Full Text Available Reconstructing stimuli from the spike-trains of neurons is an important approach for understanding the neural code. One of the difficulties associated with this task is that signals which are varying continuously in time are encoded into sequences of discrete events or spikes. An important problem is to determine how much information about the continuously varying stimulus can be extracted from the time-points at which spikes were observed, especially if these time-points are subject to some sort of randomness. For the special case of spike trains generated by leaky integrate and fire neurons, noise can be introduced by allowing variations in the threshold every time a spike is released. A simple decoding algorithm previously derived for the noiseless case can be extended to the stochastic case, but turns out to be biased. Here, we review a solution to this problem, by presenting a simple yet efficient algorithm which greatly reduces the bias, and therefore leads to better decoding performance in the stochastic case.
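    The encoding model in this abstract, a leaky integrate-and-fire neuron whose threshold is redrawn at every spike, can be sketched in a few lines; the stimulus, time constant, and jitter amplitude below are illustrative:

```python
import random

def lif_spike_times(stim, dt=0.001, tau=0.02, v_reset=0.0,
                    theta=1.0, theta_jitter=0.0, seed=0):
    """Leaky integrate-and-fire encoder, dv/dt = (-v + I(t)) / tau.
    After each spike the threshold is redrawn as theta plus Gaussian
    jitter, modelling the per-spike threshold noise described above."""
    rng = random.Random(seed)
    v, th, spikes = 0.0, theta, []
    for i, current in enumerate(stim):
        v += dt * (-v + current) / tau
        if v >= th:
            spikes.append(i * dt)
            v = v_reset
            th = theta + theta_jitter * rng.gauss(0.0, 1.0)
    return spikes

stim = [2.0] * 2000                       # 2 s of constant suprathreshold input
noiseless = lif_spike_times(stim)         # perfectly regular spike train
noisy = lif_spike_times(stim, theta_jitter=0.2)  # jittered spike times
```

    Decoding then amounts to inverting this map: in the noiseless case each interspike interval pins down the integrated stimulus exactly, while threshold jitter is what makes the naive decoder biased.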

  17. Training and Spontaneous Reinforcement of Neuronal Assemblies by Spike Timing Plasticity.

    Science.gov (United States)

    Ocker, Gabriel Koch; Doiron, Brent

    2018-02-03

    The synaptic connectivity of cortex is plastic, with experience shaping the ongoing interactions between neurons. Theoretical studies of spike timing-dependent plasticity (STDP) have focused on either just pairs of neurons or large-scale simulations. A simple analytic account for how fast spike time correlations affect both microscopic and macroscopic network structure is lacking. We develop a low-dimensional mean field theory for STDP in recurrent networks and show the emergence of assemblies of strongly coupled neurons with shared stimulus preferences. After training, this connectivity is actively reinforced by spike train correlations during the spontaneous dynamics. Furthermore, the stimulus coding by cell assemblies is actively maintained by these internally generated spiking correlations, suggesting a new role for noise correlations in neural coding. Assembly formation has often been associated with firing rate-based plasticity schemes; our theory provides an alternative and complementary framework, where fine temporal correlations and STDP form and actively maintain learned structure in cortical networks. © The Author(s) 2018. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  18. Linear stability analysis of retrieval state in associative memory neural networks of spiking neurons

    Science.gov (United States)

    Yoshioka, Masahiko

    2002-12-01

    We study associative memory neural networks of the Hodgkin-Huxley type of spiking neurons in which multiple periodic spatiotemporal patterns of spike timing are memorized as limit-cycle-type attractors. In encoding the spatiotemporal patterns, we assume the spike-timing-dependent synaptic plasticity with the asymmetric time window. Analysis for periodic solution of retrieval state reveals that if the area of the negative part of the time window is equivalent to the positive part, then crosstalk among encoded patterns vanishes. Phase transition due to the loss of the stability of periodic solution is observed when we assume fast α function for direct interaction among neurons. In order to evaluate the critical point of this phase transition, we employ Floquet theory in which the stability problem of the infinite number of spiking neurons interacting with α function is reduced to the eigenvalue problem with the finite size of matrix. Numerical integration of the single-body dynamics yields the explicit value of the matrix, which enables us to determine the critical point of the phase transition with a high degree of precision.
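    The balance condition stated above (crosstalk vanishes when the area of the negative part of the time window equals that of the positive part) can be checked directly for a standard double-exponential asymmetric window; the amplitudes and time constants below are illustrative, not taken from the paper:

```python
import math

def stdp_window(s, a_plus=1.0, tau_plus=0.017, a_minus=0.85, tau_minus=0.02):
    """Asymmetric STDP time window W(s), with s = t_post - t_pre in seconds:
    potentiation for s >= 0, depression for s < 0."""
    if s >= 0:
        return a_plus * math.exp(-s / tau_plus)
    return -a_minus * math.exp(s / tau_minus)

# Closed-form lobe areas: integral of each exponential is amplitude * tau.
area_plus = 1.0 * 0.017
area_minus = 0.85 * 0.02
balanced = math.isclose(area_plus, area_minus)  # a+ tau+ equals a- tau- here
```

    With these parameters the two lobes have equal area, the regime in which the abstract reports that crosstalk among encoded patterns vanishes.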

  19. Fractal characterization of acupuncture-induced spike trains of rat WDR neurons

    International Nuclear Information System (INIS)

    Chen, Yingyuan; Guo, Yi; Wang, Jiang; Hong, Shouhai; Wei, Xile; Yu, Haitao; Deng, Bin

    2015-01-01

    Highlights: •Fractal analysis is a valuable tool for measuring MA-induced neural activities. •In the course of the experiments, the spike trains display different fractal properties. •The fractal properties reflect the long-term modulation of MA on WDR neurons. •The results may explain the long-lasting effects induced by acupuncture. -- Abstract: Experimental and clinical studies have shown that manual acupuncture (MA) can evoke multiple responses in various neural regions. Characterising the neuronal activities in these regions may provide deeper insights into acupuncture mechanisms. This paper used fractal analysis to investigate MA-induced spike trains of Wide Dynamic Range (WDR) neurons in rat spinal dorsal horn, an important relay station and integral component in processing acupuncture information. The Allan factor and Fano factor were utilized to test whether the spike trains were fractal, and the Allan factor was used to evaluate the scaling exponents and Hurst exponents. It was found that these two fractal exponents before and during MA differed significantly. During MA, the scaling exponents of WDR neurons were regulated in a small range, indicating a special fractal pattern. The neuronal activities were long-range correlated over multiple time scales. The scaling exponents during and after MA were similar, suggesting that the long-range correlations not only were present during MA, but also persisted after withdrawal of the needle. Our results show that fractal analysis is a useful tool for measuring acupuncture effects. MA can modulate neuronal activities whose fractal properties change as time proceeds. This evolution of fractal dynamics in the course of MA experiments may explain, at the level of the neuron, why the effects of MA observed experimentally and clinically are complex, time-evolving, and long-range, even lasting for some time after stimulation
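    The Fano factor analysis used in this study can be sketched on a synthetic train: a homogeneous Poisson train (no long-range correlation) gives a Fano factor near 1 at every counting-window size, whereas a fractal train would show power-law growth with window size. The rate, duration, and window sizes below are arbitrary choices for illustration:

```python
import random

def fano_factor(spike_times, t_total, window):
    """Fano factor of spike counts in non-overlapping windows:
    variance of the counts divided by their mean."""
    n_win = int(t_total / window)
    counts = [0] * n_win
    for t in spike_times:
        i = int(t / window)
        if i < n_win:
            counts[i] += 1
    mean = sum(counts) / n_win
    var = sum((c - mean) ** 2 for c in counts) / n_win
    return var / mean if mean > 0 else float('nan')

# Homogeneous Poisson train at 50 Hz for 200 s.
random.seed(1)
rate, t_total = 50.0, 200.0
t, spikes = 0.0, []
while t < t_total:
    t += random.expovariate(rate)
    spikes.append(t)
fanos = [fano_factor(spikes, t_total, w) for w in (0.05, 0.5)]
```

    Estimating the scaling exponent then amounts to fitting the slope of the Fano (or Allan) factor against window size on log-log axes; for the Poisson train that slope is near zero.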

  20. Characteristics of fast-spiking neurons in the striatum of behaving monkeys.

    Science.gov (United States)

    Yamada, Hiroshi; Inokawa, Hitoshi; Hori, Yukiko; Pan, Xiaochuan; Matsuzaki, Ryuichi; Nakamura, Kae; Samejima, Kazuyuki; Shidara, Munetaka; Kimura, Minoru; Sakagami, Masamichi; Minamimoto, Takafumi

    2016-04-01

    Inhibitory interneurons are the fundamental constituents of neural circuits that organize network outputs. The striatum as part of the basal ganglia is involved in reward-directed behaviors. However, the role of the inhibitory interneurons in this process remains unclear, especially in behaving monkeys. We recorded the striatal single neuron activity while monkeys performed reward-directed hand or eye movements. Presumed parvalbumin-containing GABAergic interneurons (fast-spiking neurons, FSNs) were identified based on narrow spike shapes in three independent experiments, though they were a small population (4.2%, 42/997). We found that FSNs are characterized by high-frequency and less-bursty discharges, which are distinct from the basic firing properties of the presumed projection neurons (phasically active neurons, PANs). Besides, the encoded information regarding actions and outcomes was similar between FSNs and PANs in terms of proportion of neurons, but the discharge selectivity was higher in PANs than that of FSNs. The coding of actions and outcomes in FSNs and PANs was consistently observed under various behavioral contexts in distinct parts of the striatum (caudate nucleus, putamen, and anterior striatum). Our results suggest that FSNs may enhance the discharge selectivity of postsynaptic output neurons (PANs) in encoding crucial variables for a reward-directed behavior. Copyright © 2015 The Authors. Published by Elsevier Ireland Ltd.. All rights reserved.

  1. Spike Pattern Structure Influences Synaptic Efficacy Variability under STDP and Synaptic Homeostasis. I: Spike Generating Models on Converging Motifs

    OpenAIRE

    Bi, Zedong; Zhou, Changsong

    2016-01-01

    In neural systems, synaptic plasticity is usually driven by spike trains. Due to the inherent noises of neurons and synapses as well as the randomness of connection details, spike trains typically exhibit variability such as spatial randomness and temporal stochasticity, resulting in variability of synaptic changes under plasticity, which we call efficacy variability. How the variability of spike trains influences the efficacy variability of synapses remains unclear. In this paper, we try to...

  2. STDP allows close-to-optimal spatiotemporal spike pattern detection by single coincidence detector neurons.

    Science.gov (United States)

    Masquelier, Timothée

    2017-06-29

    Repeating spatiotemporal spike patterns exist and carry information. How this information is extracted by downstream neurons is unclear. Here we theoretically investigate to what extent a single cell could detect a given spike pattern and what the optimal parameters to do so are, in particular the membrane time constant τ. Using a leaky integrate-and-fire (LIF) neuron with homogeneous Poisson input, we computed this optimum analytically. We found that a relatively small τ (at most a few tens of ms) is usually optimal, even when the pattern is much longer. This is somewhat counter-intuitive as the resulting detector ignores most of the pattern, due to its fast memory decay. Next, we wondered if spike-timing-dependent plasticity (STDP) could enable a neuron to reach the theoretical optimum. We simulated a LIF equipped with additive STDP, and repeatedly exposed it to a given input spike pattern. As in previous studies, the LIF progressively became selective to the repeating pattern with no supervision, even when the pattern was embedded in Poisson activity. Here we show that, using certain STDP parameters, the resulting pattern detector is optimal. These mechanisms may explain how humans learn repeating sensory sequences. Long sequences could be recognized thanks to coincidence detectors working at a much shorter timescale. This is consistent with the fact that recognition is still possible if a sound sequence is compressed, played backward, or scrambled using 10-ms bins. Coincidence detection is a simple yet powerful mechanism, which could be the main function of neurons in the brain. Copyright © 2017 The Author(s). Published by Elsevier Ltd.. All rights reserved.
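    The intuition for a short optimal membrane time constant can be illustrated with a non-resetting leaky integrator: with τ = 10 ms, three near-coincident input spikes summate above a firing threshold, while the same three spikes spread over 60 ms decay away before they can add up. All numbers below are illustrative, not the paper's fitted values:

```python
import math

def membrane_response(spike_times, t_read, tau, w=1.0):
    """Voltage of a non-resetting leaky integrator at time t_read:
    each presynaptic spike adds w and decays with time constant tau."""
    return sum(w * math.exp(-(t_read - t) / tau)
               for t in spike_times if t <= t_read)

tau = 0.010                          # 10 ms: a short memory
coincident = [0.098, 0.099, 0.100]   # three spikes within 2 ms
dispersed = [0.040, 0.070, 0.100]    # same spikes spread over 60 ms
v_coinc = membrane_response(coincident, 0.100, tau)  # about 2.7
v_disp = membrane_response(dispersed, 0.100, tau)    # about 1.05
threshold = 2.5
fires_coinc = v_coinc >= threshold
fires_disp = v_disp >= threshold
```

    The fast decay is exactly what makes the detector ignore most of a long pattern yet still respond selectively to its fine coincidence structure.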

  3. Mechanisms of Winner-Take-All and Group Selection in Neuronal Spiking Networks.

    Science.gov (United States)

    Chen, Yanqing

    2017-01-01

    A major function of central nervous systems is to discriminate different categories or types of sensory input. Neuronal networks accomplish such tasks by learning different sensory maps at several stages of the neural hierarchy, such that different neurons fire selectively to reflect different internal or external patterns and states. The exact mechanisms of such map formation processes in the brain are not completely understood. Here we study the mechanism by which a simple recurrent/reentrant neuronal network accomplishes group selection and discrimination of different inputs in order to generate sensory maps. We describe the conditions and mechanism of the transition from a rhythmic epileptic state (in which all neurons fire synchronously and indiscriminately to any input) to a winner-take-all state in which only a subset of neurons fires for a specific input. We prove an analytic condition under which a stable bump solution and a winner-take-all state can emerge from local recurrent excitation-inhibition interactions in a three-layer spiking network with distinct excitatory and inhibitory populations, and demonstrate the importance of surround inhibitory connection topology for the stability of dynamic patterns in spiking neural networks.

  4. Spike-Triggered Regression for Synaptic Connectivity Reconstruction in Neuronal Networks.

    Science.gov (United States)

    Zhang, Yaoyu; Xiao, Yanyang; Zhou, Douglas; Cai, David

    2017-01-01

    How neurons are connected in the brain to perform computation is a key issue in neuroscience. Recently, the development of calcium imaging and multi-electrode array techniques has greatly enhanced our ability to measure the firing activities of neuronal populations at the single-cell level. Meanwhile, the intracellular recording technique is able to measure the subthreshold voltage dynamics of a neuron. Our work addresses the issue of how to combine these measurements to reveal the underlying network structure. We propose the spike-triggered regression (STR) method, which employs both the voltage trace and the firing activity of the neuronal population to reconstruct the underlying synaptic connectivity. Our numerical study of a conductance-based integrate-and-fire neuronal network shows that only a short recording of 20–100 s is required for an accurate recovery of the network topology as well as the corresponding coupling strengths. Our method can yield an accurate reconstruction of a large neuronal network even in the case of dense connectivity and nearly synchronous dynamics, which many other network reconstruction methods cannot successfully handle. In addition, we point out that, for sparse networks, the STR method can infer the coupling strength between each pair of neurons with high accuracy in the absence of global information about all other neurons.
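    The core idea of spike-triggered regression, regressing the postsynaptic voltage increment observed at presynaptic spike times against which neurons fired, can be sketched as an ordinary least-squares problem; the toy network size, coupling weights, and noise level below are our own illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
n_pre, n_events = 5, 200
true_w = np.array([0.8, 0.0, -0.3, 0.5, 0.0])  # hypothetical couplings

# Each row records which presynaptic neurons spiked in a short window.
S = rng.integers(0, 2, size=(n_events, n_pre)).astype(float)
# Observed postsynaptic voltage increment: summed synaptic jumps plus noise.
dv = S @ true_w + 0.01 * rng.standard_normal(n_events)

# Least-squares estimate of the coupling strengths.
w_hat, *_ = np.linalg.lstsq(S, dv, rcond=None)
err = np.max(np.abs(w_hat - true_w))
```

    With a few hundred spike-triggered events the weights are recovered closely, which is consistent with the short recordings the abstract reports as sufficient.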

  5. Stochastic resonance of ensemble neurons for transient spike trains: Wavelet analysis

    International Nuclear Information System (INIS)

    Hasegawa, Hideo

    2002-01-01

    By using the wavelet transformation (WT), I have analyzed the response of an ensemble of N (=1, 10, 100, and 500) Hodgkin-Huxley neurons to transient M-pulse spike trains (M=1 to 3) with independent Gaussian noise. The cross correlation between the input and output signals is expressed in terms of the WT expansion coefficients. The signal-to-noise ratio (SNR) is evaluated by using the denoising method within the WT, by which the noise contribution is extracted from the output signals. Although the response of a single (N=1) neuron to subthreshold transient signals with noise is quite unreliable, the transmission fidelity assessed by the cross correlation and SNR is shown to be much improved by increasing the value of N: a population of neurons plays an indispensable role in the stochastic resonance (SR) for transient spike inputs. It is also shown that in a large-scale ensemble, the transmission fidelity for suprathreshold transient spikes is not significantly degraded by a weak noise which is responsible for SR for subthreshold inputs.
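
    The denoising step can be illustrated with a one-level Haar transform and soft thresholding (a minimal stand-in for the full wavelet denoising applied to the Hodgkin-Huxley ensemble output; the pulse signal, noise level, and threshold are all illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1024
t = np.arange(n)
signal = ((t % 256) < 8).astype(float)           # transient pulse train
noisy = signal + rng.normal(0, 0.3, n)

# One-level Haar transform, soft-threshold the detail coefficients,
# then invert the transform.
a = (noisy[0::2] + noisy[1::2]) / np.sqrt(2)     # approximation coefficients
d = (noisy[0::2] - noisy[1::2]) / np.sqrt(2)     # detail coefficients
thr = 0.3 * np.sqrt(2 * np.log(n))               # universal threshold
d = np.sign(d) * np.maximum(np.abs(d) - thr, 0)
den = np.empty(n)
den[0::2] = (a + d) / np.sqrt(2)
den[1::2] = (a - d) / np.sqrt(2)

snr_raw = 10 * np.log10(signal.var() / np.mean((noisy - signal) ** 2))
snr_den = 10 * np.log10(signal.var() / np.mean((den - signal) ** 2))
```

    Thresholding removes most of the noise carried by the detail coefficients, so the denoised estimate scores a higher SNR than the raw noisy trace.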

  6. A COMPUTATIONAL MODEL OF MOTOR NEURON DEGENERATION

    Science.gov (United States)

    Le Masson, Gwendal; Przedborski, Serge; Abbott, L.F.

    2014-01-01

    SUMMARY To explore the link between bioenergetics and motor neuron degeneration, we used a computational model in which detailed morphology and ion conductances are paired with intracellular ATP production and consumption. We found that reduced ATP availability increases the metabolic cost of a single action potential and disrupts K+/Na+ homeostasis, resulting in a chronic depolarization. The magnitude of the ATP shortage at which this ionic instability occurs depends on the morphology and intrinsic conductance characteristics of the neuron. If the ATP shortage is confined to the distal part of the axon, the ensuing local ionic instability eventually spreads to the whole neuron and involves fasciculation-like spiking events. A shortage of ATP also causes a rise in intracellular calcium. Our modeling work supports the notion that mitochondrial dysfunction can account for salient features of the paralytic disorder amyotrophic lateral sclerosis, including motor neuron hyperexcitability, fasciculation, and differential vulnerability of motor neuron subpopulations. PMID:25088365
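
    The chronic-depolarization mechanism reduces, under strong assumptions, to a one-line steady-state calculation: a leaky membrane plus a hyperpolarizing Na+/K+ pump current that scales linearly with ATP availability. The constants below are illustrative, not values from the paper's detailed morphological model.

```python
def resting_potential(atp, e_leak=-60.0, g_leak=1.0, i_pump=8.0):
    # Steady state of C dV/dt = -g_leak*(V - e_leak) - atp*i_pump:
    # the pump's outward current holds V below the leak reversal, so
    # reduced ATP shifts the resting potential upward (depolarization).
    return e_leak - atp * i_pump / g_leak

v_healthy = resting_potential(1.0)   # full ATP supply
v_starved = resting_potential(0.4)   # 60% ATP shortage
```

    With these illustrative numbers the ATP-starved neuron rests several mV depolarized relative to the healthy one, the qualitative effect the model links to hyperexcitability.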

  7. Mirrored STDP Implements Autoencoder Learning in a Network of Spiking Neurons.

    Science.gov (United States)

    Burbank, Kendra S

    2015-12-01

    The autoencoder algorithm is a simple but powerful unsupervised method for training neural networks. Autoencoder networks can learn sparse distributed codes similar to those seen in cortical sensory areas such as visual area V1, but they can also be stacked to learn increasingly abstract representations. Several computational neuroscience models of sensory areas, including Olshausen & Field's Sparse Coding algorithm, can be seen as autoencoder variants, and autoencoders have seen extensive use in the machine learning community. Despite their power and versatility, autoencoders have been difficult to implement in a biologically realistic fashion. The challenges include their need to calculate differences between two neuronal activities and their requirement for learning rules which lead to identical changes at feedforward and feedback connections. Here, we study a biologically realistic network of integrate-and-fire neurons with anatomical connectivity and synaptic plasticity that closely matches that observed in cortical sensory areas. Our choice of synaptic plasticity rules is inspired by recent experimental and theoretical results suggesting that learning at feedback connections may have a different form from learning at feedforward connections, and our results depend critically on this novel choice of plasticity rules. Specifically, we propose that plasticity rules at feedforward versus feedback connections are temporally opposed versions of spike-timing dependent plasticity (STDP), leading to a symmetric combined rule we call Mirrored STDP (mSTDP). We show that with mSTDP, our network follows a learning rule that approximately minimizes an autoencoder loss function. When trained with whitened natural image patches, the learned synaptic weights resemble the receptive fields seen in V1. Our results use realistic synaptic plasticity rules to show that the powerful autoencoder learning algorithm could be within the reach of real biological networks.
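
    The mirroring idea can be sketched numerically: apply an ordinary asymmetric STDP window at feedforward synapses and its time-reversed version at the corresponding feedback synapses; the summed update on a reciprocal pair is then symmetric in the spike-time difference. The window shape and constants below are illustrative, not the paper's fitted parameters.

```python
import numpy as np

def stdp(dt, a_plus=1.0, a_minus=0.5, tau=20.0):
    # classic asymmetric STDP window: potentiate pre-before-post (dt >= 0),
    # depress post-before-pre (dt < 0)
    return np.where(dt >= 0, a_plus * np.exp(-dt / tau),
                    -a_minus * np.exp(dt / tau))

dts = np.linspace(-100, 100, 201)        # spike-time differences (ms)
dw_ff = stdp(dts)                        # feedforward: ordinary STDP
dw_fb = stdp(-dts)                       # feedback: temporally reversed STDP
dw_combined = dw_ff + dw_fb              # mirrored STDP: symmetric in dt
```

    The combined curve is even in dt by construction, which is what lets the pair of plasticity rules produce matched feedforward and feedback weight changes.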

  8. Mirrored STDP Implements Autoencoder Learning in a Network of Spiking Neurons.

    Directory of Open Access Journals (Sweden)

    Kendra S Burbank

    2015-12-01

    Full Text Available The autoencoder algorithm is a simple but powerful unsupervised method for training neural networks. Autoencoder networks can learn sparse distributed codes similar to those seen in cortical sensory areas such as visual area V1, but they can also be stacked to learn increasingly abstract representations. Several computational neuroscience models of sensory areas, including Olshausen & Field's Sparse Coding algorithm, can be seen as autoencoder variants, and autoencoders have seen extensive use in the machine learning community. Despite their power and versatility, autoencoders have been difficult to implement in a biologically realistic fashion. The challenges include their need to calculate differences between two neuronal activities and their requirement for learning rules which lead to identical changes at feedforward and feedback connections. Here, we study a biologically realistic network of integrate-and-fire neurons with anatomical connectivity and synaptic plasticity that closely matches that observed in cortical sensory areas. Our choice of synaptic plasticity rules is inspired by recent experimental and theoretical results suggesting that learning at feedback connections may have a different form from learning at feedforward connections, and our results depend critically on this novel choice of plasticity rules. Specifically, we propose that plasticity rules at feedforward versus feedback connections are temporally opposed versions of spike-timing dependent plasticity (STDP), leading to a symmetric combined rule we call Mirrored STDP (mSTDP). We show that with mSTDP, our network follows a learning rule that approximately minimizes an autoencoder loss function. When trained with whitened natural image patches, the learned synaptic weights resemble the receptive fields seen in V1. Our results use realistic synaptic plasticity rules to show that the powerful autoencoder learning algorithm could be within the reach of real biological networks.

  9. A Re-configurable On-line Learning Spiking Neuromorphic Processor comprising 256 neurons and 128K synapses

    Directory of Open Access Journals (Sweden)

    Ning Qiao

    2015-04-01

    Full Text Available Implementing compact, low-power artificial neural processing systems with real-time on-line learning abilities is still an open challenge. In this paper we present a full-custom mixed-signal VLSI device with neuromorphic learning circuits that emulate the biophysics of real spiking neurons and dynamic synapses, for exploring the properties of computational neuroscience models and for building brain-inspired computing systems. The proposed architecture allows the on-chip configuration of a wide range of network connectivities, including recurrent and deep networks with short-term and long-term plasticity. The device comprises 128K analog synapse circuits and 256 neuron circuits with biologically plausible dynamics and bi-stable spike-based plasticity mechanisms that endow it with on-line learning abilities. In addition to the analog circuits, the device also comprises asynchronous digital logic circuits for setting different synapse and neuron properties as well as different network configurations. This prototype device, fabricated using a 180 nm 1P6M CMOS process, occupies an area of 51.4 mm², and consumes approximately 4 mW in typical experiments, for example those involving attractor networks. Here we describe the details of the overall architecture and of the individual circuits, and present experimental results that showcase its potential. By supporting a wide range of cortical-like computational modules comprising plasticity mechanisms, this device will enable the realization of intelligent autonomous systems with on-line learning capabilities.

  10. Hyperbolic Plykin attractor can exist in neuron models

    DEFF Research Database (Denmark)

    Belykh, V.; Belykh, I.; Mosekilde, Erik

    2005-01-01

    of the neuron model, we derive a flow-defined Poincare map giving an accurate account of the system's dynamics. In a parameter region where the neuron system undergoes bifurcations causing transitions between tonic spiking and bursting, this two-dimensional map becomes a map of a disk with several periodic...

  11. Spike Timing Matters in Novel Neuronal Code Involved in Vibrotactile Frequency Perception.

    Science.gov (United States)

    Birznieks, Ingvars; Vickery, Richard M

    2017-05-22

    Skin vibrations sensed by tactile receptors contribute significantly to the perception of object properties during tactile exploration [1-4] and to sensorimotor control during object manipulation [5]. Sustained low-frequency skin vibration evokes a sensation referred to as flutter, whose frequency can be clearly perceived [6]. How afferent spiking activity translates into the perception of frequency is still unknown. Measures based on mean spike rates of neurons in the primary somatosensory cortex are sufficient to explain performance in some frequency discrimination tasks [7-11]; however, there is emerging evidence that stimuli can be distinguished based also on temporal features of neural activity [12, 13]. Our study's advance is to demonstrate that temporal features are fundamental for vibrotactile frequency perception. Pulsatile mechanical stimuli were used to elicit specified temporal spike train patterns in tactile afferents, and subsequently psychophysical methods were employed to characterize human frequency perception. Remarkably, the most salient temporal feature determining vibrotactile frequency was not the underlying periodicity but, rather, the duration of the silent gap between successive bursts of neural activity. This burst gap code for frequency represents a previously unknown form of neural coding in the tactile sensory system, which parallels auditory pitch perception mechanisms based on purely temporal information, where longer inter-pulse intervals receive higher perceptual weights than short intervals [14]. Our study also demonstrates that human perception of stimuli can be determined exclusively by temporal features of spike trains, independent of the mean spike rate and without contribution from population response factors. Copyright © 2017 Elsevier Ltd. All rights reserved.

  12. Spike correlations in a songbird agree with a simple Markov population model.

    Directory of Open Access Journals (Sweden)

    Andrea P Weber

    2007-12-01

    Full Text Available The relationships between neural activity at the single-cell and population levels are of central importance for understanding neural codes. In many sensory systems, collective behaviors in large cell groups can be described by pairwise spike correlations. Here, we test whether, in a highly specialized premotor system of songbirds, pairwise spike correlations themselves can be seen as a simple corollary of an underlying random process. We test hypotheses on connectivity and network dynamics in the motor pathway of zebra finches using a high-level population model that is independent of detailed single-neuron properties. We assume that neural population activity evolves along a finite set of states during singing, and that during sleep population activity randomly switches back and forth between song states and a single resting state. Individual spike trains are generated by associating with each of the population states a particular firing mode, such as bursting or tonic firing. By modifying only one or two simple control parameters, the Markov model is able to reproduce observed firing statistics and spike correlations in different neuron types and behavioral states. Our results suggest that song- and sleep-related firing patterns are identical on short time scales and result from random sampling of a unique underlying theme. The efficiency of our population model suggests that it may apply also to other neural systems in which population hypotheses can be tested on recordings from small neuron groups.
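
    The core logic of such a population model — pairwise correlations as a corollary of a shared hidden state — can be reproduced in a few lines: two neurons fire independently *given* a two-state Markov chain (rest vs. song state), and the shared state alone induces positive spike correlations. The rates and switching probability below are illustrative, not fitted values.

```python
import numpy as np

rng = np.random.default_rng(2)
T, p_switch = 50_000, 0.01                 # time bins, switching probability
rate = {"song": 0.3, "rest": 0.002}        # per-bin firing prob per mode

# Hidden population state: random switching between rest and song states.
states, s = [], "rest"
for _ in range(T):
    if rng.random() < p_switch:
        s = "song" if s == "rest" else "rest"
    states.append(s)
states = np.array(states)
p = np.where(states == "song", rate["song"], rate["rest"])

# Two neurons fire independently given the state; the shared hidden
# state alone induces a positive pairwise spike correlation.
a = rng.random(T) < p
b = rng.random(T) < p
corr = np.corrcoef(a, b)[0, 1]
```

    Neither neuron "sees" the other, yet their spike trains are positively correlated because both sample the same state sequence.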

  13. Inferring Neuronal Network Connectivity from Spike Data: A Temporal Data Mining Approach

    Directory of Open Access Journals (Sweden)

    Debprakash Patnaik

    2008-01-01

    Full Text Available Understanding the functioning of a neural system in terms of its underlying circuitry is an important problem in neuroscience. Recent developments in electrophysiology and imaging allow one to simultaneously record the activities of hundreds of neurons. Inferring the underlying neuronal connectivity patterns from such multi-neuronal spike train data streams is a challenging statistical and computational problem. This task involves finding significant temporal patterns in vast amounts of symbolic time series data. In this paper we show that frequent episode mining methods from the field of temporal data mining can be very useful in this context. In the frequent episode discovery framework, the data is viewed as a sequence of events, each characterized by an event type and a time of occurrence, and episodes are certain types of temporal patterns in such data. Here we show that, using the set of frequent episodes discovered in multi-neuronal data, one can infer different types of connectivity patterns in the neural system that generated it. For this purpose, we introduce the notion of mining for frequent episodes under certain temporal constraints, whose structure is motivated by the application. We present algorithms for discovering serial and parallel episodes under these temporal constraints. Through extensive simulation studies we demonstrate that these methods are useful for unearthing patterns of neuronal network connectivity.
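
    A toy version of counting one serial episode under an inter-event gap constraint conveys the idea (the real framework tracks many candidate episodes simultaneously with finite-state automata; the greedy scan and event names here are simplifications):

```python
def count_serial_episode(events, episode, max_gap):
    """Greedy count of non-overlapped occurrences of a serial episode
    (e.g. A -> B -> C), requiring consecutive matched events to occur
    within max_gap time units of each other.
    events: list of (time, event_type) pairs, sorted by time."""
    count, idx, last_t = 0, 0, None
    for t, e in events:
        if idx > 0 and t - last_t > max_gap:
            idx, last_t = 0, None          # gap constraint violated: restart
        if e == episode[idx]:
            idx, last_t = idx + 1, t       # advance through the episode
            if idx == len(episode):
                count, idx, last_t = count + 1, 0, None
    return count

# (time, event) stream from three hypothetical neurons A, B, C
events = [(0, "A"), (1, "B"), (2, "C"), (5, "A"), (9, "B"),
          (10, "A"), (11, "B"), (12, "C")]
n_abc = count_serial_episode(events, ["A", "B", "C"], max_gap=2)
```

    Tightening or loosening max_gap changes which occurrences qualify: in this stream the episode A -> B occurs twice with max_gap=2, but three times with max_gap=5, since the pair at times (5, 9) then satisfies the constraint.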

  14. Distributed Bayesian Computation and Self-Organized Learning in Sheets of Spiking Neurons with Local Lateral Inhibition

    Science.gov (United States)

    Bill, Johannes; Buesing, Lars; Habenschuss, Stefan; Nessler, Bernhard; Maass, Wolfgang; Legenstein, Robert

    2015-01-01

    During the last decade, Bayesian probability theory has emerged as a framework in cognitive science and neuroscience for describing perception, reasoning and learning in mammals. However, our understanding of how probabilistic computations could be organized in the brain, and how the observed connectivity structure of cortical microcircuits supports these calculations, is rudimentary at best. In this study, we investigate statistical inference and self-organized learning in a spatially extended spiking network model that accommodates both local competitive and large-scale associative aspects of neural information processing, under a unified Bayesian account. Specifically, we show how the spiking dynamics of a recurrent network with lateral excitation and local inhibition in response to distributed spiking input can be understood as sampling from a variational posterior distribution of a well-defined implicit probabilistic model. This interpretation further permits a rigorous analytical treatment of experience-dependent plasticity at the network level. Using machine learning theory, we derive update rules for neuron and synapse parameters which equate with Hebbian synaptic and homeostatic intrinsic plasticity rules in a neural implementation. In computer simulations, we demonstrate that the interplay of these plasticity rules leads to the emergence of probabilistic local experts that form distributed assemblies of similarly tuned cells communicating through lateral excitatory connections. The resulting sparse distributed spike code of a well-adapted network carries compressed information on salient input features combined with prior experience on correlations among them. Our theory predicts that the emergence of such efficient representations benefits from network architectures in which the range of local inhibition matches the spatial extent of pyramidal cells that share common afferent input. PMID:26284370

  15. Distributed Bayesian Computation and Self-Organized Learning in Sheets of Spiking Neurons with Local Lateral Inhibition.

    Directory of Open Access Journals (Sweden)

    Johannes Bill

    Full Text Available During the last decade, Bayesian probability theory has emerged as a framework in cognitive science and neuroscience for describing perception, reasoning and learning in mammals. However, our understanding of how probabilistic computations could be organized in the brain, and how the observed connectivity structure of cortical microcircuits supports these calculations, is rudimentary at best. In this study, we investigate statistical inference and self-organized learning in a spatially extended spiking network model that accommodates both local competitive and large-scale associative aspects of neural information processing, under a unified Bayesian account. Specifically, we show how the spiking dynamics of a recurrent network with lateral excitation and local inhibition in response to distributed spiking input can be understood as sampling from a variational posterior distribution of a well-defined implicit probabilistic model. This interpretation further permits a rigorous analytical treatment of experience-dependent plasticity at the network level. Using machine learning theory, we derive update rules for neuron and synapse parameters which equate with Hebbian synaptic and homeostatic intrinsic plasticity rules in a neural implementation. In computer simulations, we demonstrate that the interplay of these plasticity rules leads to the emergence of probabilistic local experts that form distributed assemblies of similarly tuned cells communicating through lateral excitatory connections. The resulting sparse distributed spike code of a well-adapted network carries compressed information on salient input features combined with prior experience on correlations among them. Our theory predicts that the emergence of such efficient representations benefits from network architectures in which the range of local inhibition matches the spatial extent of pyramidal cells that share common afferent input.

  16. A Scalable Weight-Free Learning Algorithm for Regulatory Control of Cell Activity in Spiking Neuronal Networks.

    Science.gov (United States)

    Zhang, Xu; Foderaro, Greg; Henriquez, Craig; Ferrari, Silvia

    2018-03-01

    Recent developments in neural stimulation and recording technologies are providing scientists with the ability to record and control the activity of individual neurons in vitro or in vivo, with very high spatial and temporal resolution. Tools such as optogenetics, for example, are having a significant impact in the neuroscience field by delivering optical firing control with the precision and spatiotemporal resolution required for investigating information processing and plasticity in biological brains. While a number of training algorithms have been developed to date for spiking neural network (SNN) models of biological neuronal circuits, existing methods rely on learning rules that adjust the synaptic strengths (or weights) directly, in order to obtain the desired network-level (or functional-level) performance. As such, they are not applicable to modifying plasticity in biological neuronal circuits, in which synaptic strengths change only as a result of pre- and post-synaptic neuron firings or biological mechanisms beyond our control. This paper presents a weight-free training algorithm that relies solely on adjusting the spatiotemporal delivery of neuron firings in order to optimize the network performance. The proposed weight-free algorithm does not require any knowledge of the SNN model or its plasticity mechanisms. As a result, this training approach is potentially realizable in vitro or in vivo via neural stimulation and recording technologies, such as optogenetics and multielectrode arrays, and could be utilized to control plasticity at multiple scales of biological neuronal circuits. The approach is demonstrated by training SNNs with hundreds of units to control a virtual insect navigating in an unknown environment.

  17. Stochastic neuron models

    CERN Document Server

    Greenwood, Priscilla E

    2016-01-01

    This book describes a large number of open problems in the theory of stochastic neural systems, with the aim of enticing probabilists to work on them. This includes problems arising from stochastic models of individual neurons as well as those arising from stochastic models of the activities of small and large networks of interconnected neurons. The necessary neuroscience background to these problems is outlined within the text, so readers can grasp the context in which they arise. This book will be useful for graduate students and instructors providing material and references for applying probability to stochastic neuron modeling. Methods and results are presented, but the emphasis is on questions where additional stochastic analysis may contribute neuroscience insight. An extensive bibliography is included. Dr. Priscilla E. Greenwood is a Professor Emerita in the Department of Mathematics at the University of British Columbia. Dr. Lawrence M. Ward is a Professor in the Department of Psychology and the Brain...

  18. Efficient transmission of subthreshold signals in complex networks of spiking neurons.

    Directory of Open Access Journals (Sweden)

    Joaquin J Torres

    Full Text Available We investigate the efficient transmission and processing of weak, subthreshold signals in a realistic neural medium in the presence of different levels of underlying noise. Assuming Hebbian weights for maximal synaptic conductances--which naturally balance the network with excitatory and inhibitory synapses--and considering short-term synaptic plasticity affecting such conductances, we found different dynamic phases in the system. These include a memory phase where populations of neurons remain synchronized, an oscillatory phase where transitions between different synchronized populations of neurons appear, and an asynchronous or noisy phase. When a weak stimulus is applied to each neuron and the level of noise in the medium is increased, we found efficient transmission of such stimuli around the transition and critical points separating the different phases, for well-defined levels of stochasticity in the system. We show that this intriguing phenomenon is quite robust, as it occurs in different situations including several types of synaptic plasticity, different types and numbers of stored patterns, and diverse network topologies, namely diluted networks and complex topologies such as scale-free and small-world networks. We conclude that the robustness of the phenomenon in different realistic scenarios, including spiking neurons, short-term synaptic plasticity and complex network topologies, makes it very likely that it could also occur in actual neural systems, as recent psychophysical experiments suggest.
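
    The noise-aided transmission of subthreshold signals can be demonstrated with a single leaky integrate-and-fire neuron (a much-reduced stand-in for the network model above; all constants are illustrative): without noise a subthreshold sinusoid never fires the cell, moderate noise produces spikes locked to the signal's positive phase, and very strong noise washes the phase preference out.

```python
import numpy as np

def lif_phase_counts(noise_std, seed=0, T=200_000, dt=0.001, tau=0.02):
    """Count spikes of a leaky integrate-and-fire neuron during positive
    vs negative phases of a subthreshold 2 Hz sinusoidal drive."""
    rng = np.random.default_rng(seed)
    xi = rng.normal(0.0, 1.0, T)
    t = np.arange(T) * dt
    sig = 0.2 * np.sin(2 * np.pi * 2.0 * t)     # signal component
    drive = 0.7 + sig                           # peak 0.9 < threshold 1.0
    v, n_pos, n_neg = 0.0, 0, 0
    for i in range(T):
        v += (drive[i] - v) * dt / tau + noise_std * np.sqrt(dt) * xi[i]
        if v >= 1.0:                            # spike and reset
            v = 0.0
            if sig[i] > 0:
                n_pos += 1
            else:
                n_neg += 1
    return n_pos, n_neg

silent = lif_phase_counts(0.0)   # no noise: the subthreshold signal is lost
mid = lif_phase_counts(1.0)      # moderate noise: spikes lock to the signal
big = lif_phase_counts(5.0)      # strong noise: phase preference washes out
```

    Comparing the fraction of spikes in the positive phase across noise levels gives the characteristic stochastic-resonance picture: best signal transmission at an intermediate noise level.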

  19. Spiking neural network model for memorizing sequences with forward and backward recall.

    Science.gov (United States)

    Borisyuk, Roman; Chik, David; Kazanovich, Yakov; da Silva Gomes, João

    2013-06-01

    We present an oscillatory network of conductance-based spiking neurons of Hodgkin-Huxley type as a model of memory storage and retrieval of sequences of events (or objects). The model is inspired by psychological and neurobiological evidence on sequential memories. The building block of the model is an oscillatory module which contains excitatory and inhibitory neurons with all-to-all connections. The connection architecture comprises two layers. A lower layer represents consecutive events during their storage and recall. This layer is composed of oscillatory modules. Plastic excitatory connections between the modules are implemented using an STDP-type learning rule for sequential storage. Excitatory neurons in the upper layer project star-like modifiable connections toward the excitatory lower-layer neurons. These upper-layer neurons are used to tag sequences of events represented in the lower layer. Computer simulations demonstrate good performance of the model, including difficult cases in which different sequences contain overlapping events. We show that the model with STDP-type or anti-STDP-type learning rules can be used to simulate forward and backward replay of neural spikes, respectively. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  20. Spiking Neural P Systems with Communication on Request.

    Science.gov (United States)

    Pan, Linqiang; Păun, Gheorghe; Zhang, Gexiang; Neri, Ferrante

    2017-12-01

    Spiking Neural P Systems are neural system models characterized by the fact that each neuron mimics a biological cell and the communication between neurons is based on spikes. In the Spiking Neural P systems investigated so far, the application of evolution rules depends on the contents of a neuron (checked by means of a regular expression). In these P systems, a specified number of spikes are consumed and a specified number of spikes are produced, and then sent to each of the neurons linked by a synapse to the evolving neuron. In the present work, a novel communication strategy among neurons of Spiking Neural P Systems is proposed. In the resulting models, called Spiking Neural P Systems with Communication on Request, spikes are requested from neighboring neurons, depending on the contents of the neuron (still checked by means of a regular expression). Unlike in traditional Spiking Neural P systems, no spikes are consumed or created: the spikes are only moved along synapses and replicated (when two or more neurons request the contents of the same neuron). The Spiking Neural P Systems with Communication on Request are proved to be computationally universal, that is, equivalent to Turing machines, as long as two types of spikes are used. Following this work, further research questions are listed as open problems.
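
    The request-based communication can be sketched as a toy simulation (a loose illustration only: spike counts stand in for the neuron contents, simple count predicates stand in for the regular expressions, and the replication semantics are simplified):

```python
# Toy sketch of communication on request: neurons hold spike counts; a
# neuron whose contents satisfy its rule's condition *requests* spikes
# from a neighbor. Spikes are moved, never consumed or created, and are
# replicated when several neurons request from the same source.
neurons = {"n1": 3, "n2": 0, "n3": 0}
# rule = (requester, source, condition on requester's contents, amount)
rules = [("n2", "n1", lambda c: c == 0, 2),
         ("n3", "n1", lambda c: c == 0, 2)]

requests = {}
for req, src, cond, k in rules:
    if cond(neurons[req]):
        requests.setdefault(src, []).append((req, k))

for src, reqs in requests.items():
    neurons[src] -= max(k for _, k in reqs)   # spikes leave the source once
    for req, k in reqs:
        neurons[req] += k                     # ...replicated per requester
```

    After one step, n1 has lost two spikes while n2 and n3 each gained two: spike replication means the global spike count is not conserved.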

  1. The Fast Spiking Subpopulation of Striatal Neurons Coding for Temporal Cognition of Movements

    Directory of Open Access Journals (Sweden)

    Bo Shen

    2017-12-01

    Full Text Available Background: Timing dysfunctions occur in a number of neurological and psychiatric disorders such as Parkinson’s disease, obsessive-compulsive disorder, autism and attention-deficit-hyperactivity disorder. Several lines of evidence show that disrupted timing processing is involved in specific fronto-striatal abnormalities. The striatum encodes reinforcement learning and procedural motion, and consequently is required to represent temporal information precisely, which then guides actions in proper sequence. Previous studies highlighted the temporal scaling property of timing-relevant striatal neurons; however, it is still unknown how this is accomplished over short temporal latencies, such as the sub-second to seconds range. Methods: We designed a task with a series of timing behaviors that required rats to reproduce a fixed duration with a robust action. Using chronic multichannel electrode arrays, we recorded neural activity from the dorso-medial striatum in 4 rats performing the task and identified the modulation response of each neuron to different events. Cell type classification was performed according to a multi-criteria clustering analysis. Results: Dorso-medial striatal neurons (n = 557) were recorded, of which 113 single units were considered timing-relevant neurons, especially the fast-spiking subpopulation that showed trial-to-trial ramping-up or ramping-down firing modulation during the time estimation period. Furthermore, these timing-relevant striatal neurons had to calibrate the spread of their firing pattern through rewarded experience to express the timing behavior accurately. Conclusion: Our data suggest that the dynamic activities of timing-relevant units encode information about the current duration and recent outcomes, which is needed to predict and drive the following action. These results reveal a potential mechanism of time calibration at short temporal resolution, which may help to explain the neural basis of motor coordination.

  2. Real-time cerebellar neuroprosthetic system based on a spiking neural network model of motor learning

    Science.gov (United States)

    Xu, Tao; Xiao, Na; Zhai, Xiaolong; Chan, Pak Kwan; Tin, Chung

    2018-02-01

    Objective. Damage to the brain, as a result of various medical conditions, impacts the everyday life of patients, and there is still no complete cure for neurological disorders. Neuroprostheses that can functionally replace the damaged neural circuit have recently emerged as a possible solution to these problems. Here we describe the development of a real-time cerebellar neuroprosthetic system to substitute neural function in cerebellar circuitry for learning delay eyeblink conditioning (DEC). Approach. The system was empowered by a biologically realistic spiking neural network (SNN) model of the cerebellar neural circuit, which considers the neuronal population and anatomical connectivity of the network. The model simulated the synaptic plasticity critical for learning DEC. This SNN model was carefully implemented on a field programmable gate array (FPGA) platform for real-time simulation. This hardware system was interfaced in in vivo experiments with anesthetized rats, and it used neural spikes recorded online from the animal to learn and trigger conditioned eyeblinks in the animal during training. Main results. This rat-FPGA hybrid system was able to process neuronal spikes in real-time with an embedded cerebellum model of ~10 000 neurons and reproduce learning of DEC with different inter-stimulus intervals. Our results validated that the system performance is physiologically relevant at both the neural (firing pattern) and behavioral (eyeblink pattern) levels. Significance. This integrated system provides sufficient computational power to mimic the cerebellar circuit in real-time. The system interacts with the biological system naturally at the spike level and can be generalized to include other neural components (neuron types and plasticity) and neural functions for potential neuroprosthetic applications.

  3. Real-time cerebellar neuroprosthetic system based on a spiking neural network model of motor learning.

    Science.gov (United States)

    Xu, Tao; Xiao, Na; Zhai, Xiaolong; Kwan Chan, Pak; Tin, Chung

    2018-02-01

    Damage to the brain, as a result of various medical conditions, impacts the everyday life of patients, and there is still no complete cure for neurological disorders. Neuroprostheses that can functionally replace the damaged neural circuit have recently emerged as a possible solution to these problems. Here we describe the development of a real-time cerebellar neuroprosthetic system to substitute neural function in cerebellar circuitry for learning delay eyeblink conditioning (DEC). The system was empowered by a biologically realistic spiking neural network (SNN) model of the cerebellar neural circuit, which considers the neuronal population and anatomical connectivity of the network. The model simulated the synaptic plasticity critical for learning DEC. This SNN model was carefully implemented on a field programmable gate array (FPGA) platform for real-time simulation. This hardware system was interfaced in in vivo experiments with anesthetized rats, and it used neural spikes recorded online from the animal to learn and trigger conditioned eyeblinks in the animal during training. This rat-FPGA hybrid system was able to process neuronal spikes in real-time with an embedded cerebellum model of ~10 000 neurons and reproduce learning of DEC with different inter-stimulus intervals. Our results validated that the system performance is physiologically relevant at both the neural (firing pattern) and behavioral (eyeblink pattern) levels. This integrated system provides sufficient computational power to mimic the cerebellar circuit in real-time. The system interacts with the biological system naturally at the spike level and can be generalized to include other neural components (neuron types and plasticity) and neural functions for potential neuroprosthetic applications.

  4. Seven neurons memorizing sequences of alphabetical images via spike-timing dependent plasticity.

    Science.gov (United States)

    Osogami, Takayuki; Otsuka, Makoto

    2015-09-16

    An artificial neural network, such as a Boltzmann machine, can be trained with the Hebb rule so that it stores static patterns and retrieves a particular pattern when an associated cue is presented to it. Such a network, however, cannot effectively deal with dynamic patterns in the manner of living creatures. Here, we design a dynamic Boltzmann machine (DyBM) and a learning rule that has some of the properties of spike-timing dependent plasticity (STDP), which has been postulated for biological neural networks. We train a DyBM consisting of only seven neurons in a way that it memorizes the sequence of the bitmap patterns in an alphabetical image "SCIENCE" and its reverse sequence and retrieves either sequence when a partial sequence is presented as a cue. The DyBM is to STDP as the Boltzmann machine is to the Hebb rule.

  5. Artificial brains. A million spiking-neuron integrated circuit with a scalable communication network and interface.

    Science.gov (United States)

    Merolla, Paul A; Arthur, John V; Alvarez-Icaza, Rodrigo; Cassidy, Andrew S; Sawada, Jun; Akopyan, Filipp; Jackson, Bryan L; Imam, Nabil; Guo, Chen; Nakamura, Yutaka; Brezzo, Bernard; Vo, Ivan; Esser, Steven K; Appuswamy, Rathinakumar; Taba, Brian; Amir, Arnon; Flickner, Myron D; Risk, William P; Manohar, Rajit; Modha, Dharmendra S

    2014-08-08

    Inspired by the brain's structure, we have developed an efficient, scalable, and flexible non-von Neumann architecture that leverages contemporary silicon technology. To demonstrate, we built a 5.4-billion-transistor chip with 4096 neurosynaptic cores interconnected via an intrachip network that integrates 1 million programmable spiking neurons and 256 million configurable synapses. Chips can be tiled in two dimensions via an interchip communication interface, seamlessly scaling the architecture to a cortexlike sheet of arbitrary size. The architecture is well suited to many applications that use complex neural networks in real time, for example, multiobject detection and classification. With 400-pixel-by-240-pixel video input at 30 frames per second, the chip consumes 63 milliwatts. Copyright © 2014, American Association for the Advancement of Science.

  6. Nonlinear Maps for Design of Discrete-Time Models of Neuronal Network Dynamics

    Science.gov (United States)

    2016-03-31

Grant report for N00014-16-1-2252 (PI: Nikolai Rulkov), covering studies of large-scale neuronal network activity. Subject terms: map-based neuronal model, discrete-time spiking dynamics, synapses, neurons. The research plan assumes part-time involvement (50%) of a postdoc with experience in neuronal network simulations using standard conductance-based models.

  7. Effects of the action of microwave-frequency electromagnetic radiation on the spike activity of neurons in the supraoptic nucleus of the hypothalamus in rats.

    Science.gov (United States)

    Minasyan, S M; Grigoryan, G Yu; Saakyan, S G; Akhumyan, A A; Kalantaryan, V P

    2007-02-01

    Acute experiments on white rats anesthetized with Nembutal (40 mg/kg, i.p.) were performed with extracellular recording and analysis of background spike activity from neurons in the supraoptic nucleus of the hypothalamus after exposure to electromagnetic radiation in the millimeter range. The distribution of neurons was determined in terms of the degree of regularity, the nature of the dynamics of neural streams, and the modalities of histograms of interspike intervals; the mean neuron spike frequency was calculated, along with the coefficient of variation of interspike intervals. These studies demonstrated changes in the background spike activity, predominantly affecting the internal structure of the spike streams recorded. The major changes were in the duration of interspike intervals and the degree of regularity of spike activity. Statistically significant changes in the mean spike frequencies of neuron populations in individual frequency ranges were also seen.
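
    The regularity measure this record relies on, the coefficient of variation (CV) of interspike intervals, can be sketched in a few lines of Python. The spike trains below are synthetic illustrations, not data from the study:

    ```python
    import numpy as np

    def isi_cv(spike_times):
        """Coefficient of variation of interspike intervals (ISIs):
        CV = std(ISI) / mean(ISI). CV is ~1 for a Poisson process and
        approaches 0 for perfectly regular firing."""
        isis = np.diff(np.sort(np.asarray(spike_times, dtype=float)))
        return isis.std() / isis.mean()

    # Perfectly regular spike train (one spike per time unit): CV = 0
    regular = np.arange(100, dtype=float)
    # Irregular, Poisson-like spike train: CV near 1
    rng = np.random.default_rng(0)
    poisson = np.cumsum(rng.exponential(scale=1.0, size=1000))

    print(isi_cv(regular))   # 0.0
    print(isi_cv(poisson))   # close to 1.0
    ```

    A shift of this statistic toward 1 after exposure would indicate less regular spike streams, which is the kind of change in "internal structure" the abstract describes.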

  8. Obtaining Arbitrary Prescribed Mean Field Dynamics for Recurrently Coupled Networks of Type-I Spiking Neurons with Analytically Determined Weights

    Directory of Open Access Journals (Sweden)

    Wilten eNicola

    2016-02-01

Full Text Available A fundamental question in computational neuroscience is how to connect a network of spiking neurons to produce desired macroscopic or mean field dynamics. One possible approach is through the Neural Engineering Framework (NEF). The NEF approach requires quantities called decoders which are solved through an optimization problem requiring large matrix inversion. Here, we show how a decoder can be obtained analytically for type I and certain type II firing rates as a function of the heterogeneity of its associated neuron. These decoders generate approximants for functions that converge to the desired function in mean-squared error like 1/N, where N is the number of neurons in the network. We refer to these decoders as scale-invariant decoders due to their structure. These decoders generate weights for a network of neurons through the NEF formula for weights. These weights force the spiking network to have arbitrary and prescribed mean field dynamics. The weights generated with scale-invariant decoders all lie on low dimensional hypersurfaces asymptotically. We demonstrate the applicability of these scale-invariant decoders and weight surfaces by constructing networks of spiking theta neurons that replicate the dynamics of various well known dynamical systems such as the neural integrator, Van der Pol system and the Lorenz system. As these decoders are analytically determined and non-unique, the weights are also analytically determined and non-unique. We discuss the implications for measured weights of neuronal networks.
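
    For contrast with the analytic decoders proposed above, the standard NEF decoder is obtained numerically by regularized least squares over sampled tuning curves, the "large matrix inversion" the abstract refers to. A minimal sketch with made-up rectified-linear tuning curves (all parameters illustrative, not from the paper):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    N, S = 50, 200                     # neurons, sample points
    x = np.linspace(-1, 1, S)          # represented variable

    # Made-up heterogeneous rectified-linear tuning curves a_i(x)
    gains = rng.uniform(0.5, 2.0, N)
    biases = rng.uniform(-1.0, 1.0, N)
    encoders = rng.choice([-1.0, 1.0], N)
    A = np.maximum(0.0, gains[:, None] * (encoders[:, None] * x[None, :])
                        + biases[:, None])

    # Decoders for the identity function f(x) = x via regularized least
    # squares: solve (A A^T + reg I) d = A f(x)
    reg = 0.01 * A.max() ** 2
    d = np.linalg.solve(A @ A.T + reg * S * np.eye(N), A @ x)

    xhat = A.T @ d                     # decoded estimate of x
    rmse = np.sqrt(np.mean((xhat - x) ** 2))
    print(rmse)                        # small decoding error
    ```

    The paper's contribution is precisely to replace this numerical solve with closed-form decoders, avoiding the matrix inversion as the network grows.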

  9. Spike-timing computation properties of a feed-forward neural network model

    Directory of Open Access Journals (Sweden)

    Drew Benjamin Sinha

    2014-01-01

Full Text Available Brain function is characterized by dynamical interactions among networks of neurons. These interactions are mediated by network topology at many scales ranging from microcircuits to brain areas. Understanding how networks operate can be aided by understanding how the transformation of inputs depends upon network connectivity patterns, e.g. serial and parallel pathways. To tractably determine how single synapses or groups of synapses in such pathways shape transformations, we modeled feed-forward networks of 7-22 neurons in which synaptic strength changed according to a spike-timing dependent plasticity rule. We investigated how activity varied when dynamics were perturbed by an activity-dependent electrical stimulation protocol (spike-triggered stimulation; STS) in networks of different topologies and background input correlations. STS can successfully reorganize functional brain networks in vivo, but with a variability in effectiveness that may derive partially from the underlying network topology. In a simulated network with a single disynaptic pathway driven by uncorrelated background activity, structured spike-timing relationships between polysynaptically connected neurons were not observed. When background activity was correlated or parallel disynaptic pathways were added, however, robust polysynaptic spike timing relationships were observed, and application of STS yielded predictable changes in synaptic strengths and spike-timing relationships. These observations suggest that precise input-related or topologically induced temporal relationships in network activity are necessary for polysynaptic signal propagation. Such constraints for polysynaptic computation suggest potential roles for higher-order topological structure in network organization, such as maintaining polysynaptic correlation in the face of relatively weak synapses.

  10. A Hybrid Setarx Model for Spikes in Tight Electricity Markets

    Directory of Open Access Journals (Sweden)

    Carlo Lucheroni

    2012-01-01

Full Text Available The paper discusses a simple looking but highly nonlinear regime-switching, self-excited threshold model for hourly electricity prices in continuous and discrete time. The regime structure of the model is linked to organizational features of the market. In continuous time, the model can include spikes without using jumps, by defining stochastic orbits. In passing from continuous time to discrete time, the stochastic orbits survive discretization and can be identified again as spikes. A calibration technique suitable for the discrete version of this model, which does not need deseasonalization or spike filtering, is developed, tested and applied to market data. The discussion of the properties of the model uses phase-space analysis, an approach uncommon in econometrics. (original abstract)

  11. Power-Law Dynamics of Membrane Conductances Increase Spiking Diversity in a Hodgkin-Huxley Model.

    Science.gov (United States)

    Teka, Wondimu; Stockton, David; Santamaria, Fidel

    2016-03-01

We studied the effects of non-Markovian power-law voltage dependent conductances on the generation of action potentials and spiking patterns in a Hodgkin-Huxley model. To implement slow-adapting power-law dynamics of the gating variables of the potassium, n, and sodium, m and h, conductances we used fractional derivatives of order η≤1. The fractional derivatives were used to solve the kinetic equations of each gate. We systematically classified the properties of each gate as a function of η. We then tested if the full model could generate action potentials with the different power-law behaving gates. Finally, we studied the patterns of action potential that emerged in each case. Our results show the model produces a wide range of action potential shapes and spiking patterns in response to constant current stimulation as a function of η. In comparison with the classical model, the action potential shapes for power-law behaving potassium conductance (n gate) showed a longer peak and shallow hyperpolarization; for power-law activation of the sodium conductance (m gate), the action potentials had a sharp rise time; and for power-law inactivation of the sodium conductance (h gate) the spikes had a wider peak that, for low values of η, replicated pituitary- and cardiac-type action potentials. With all physiological parameters fixed, a wide range of spiking patterns emerged as a function of the value of the constant input current and η, such as square wave bursting, mixed mode oscillations, and pseudo-plateau potentials. Our analyses show that the intrinsic memory trace of the fractional derivative provides a negative feedback mechanism between the voltage trace and the activity of the power-law behaving gate variable. As a consequence, power-law behaving conductances result in an increase in the number of spiking patterns a neuron can generate and, we propose, expand the computational capacity of the neuron.
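
    A fractional-order kinetic equation like the gate dynamics described here is commonly integrated with the Grünwald-Letnikov scheme, in which each update carries a weighted memory of the gate's entire past. A minimal single-gate sketch (equation form and parameters are illustrative, not taken from the paper):

    ```python
    import numpy as np

    def fractional_gate(eta, n_inf=0.8, tau=5.0, dt=0.01, T=50.0):
        """Grunwald-Letnikov integration of the fractional-order kinetic
        equation  D^eta n = (n_inf - n) / tau,  n(0) = 0.
        For eta = 1 this reduces to classical exponential relaxation."""
        steps = int(T / dt)
        n = np.zeros(steps + 1)
        # GL memory weights: w_0 = 1, w_k = w_{k-1} * (1 - (1 + eta) / k)
        w = np.empty(steps + 1)
        w[0] = 1.0
        for k in range(1, steps + 1):
            w[k] = w[k - 1] * (1.0 - (1.0 + eta) / k)
        for i in range(1, steps + 1):
            # memory term: sum_{k=1}^{i} w_k * n_{i-k}
            memory = np.dot(w[1:i + 1], n[i - 1::-1])
            n[i] = dt ** eta * (n_inf - n[i - 1]) / tau - memory
        return n

    print(fractional_gate(1.0)[-1])   # ~0.8: classical exponential limit
    print(fractional_gate(0.6)[-1])   # slower, power-law approach to 0.8
    ```

    The explicit memory sum is the "intrinsic memory trace" the abstract invokes: for η < 1 the weights decay as a power law, so the gate's update never fully forgets its history.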

  12. Self-organizing spiking neural model for learning fault-tolerant spatio-motor transformations.

    Science.gov (United States)

    Srinivasa, Narayan; Cho, Youngkwan

    2012-10-01

In this paper, we present a spiking neural model that learns spatio-motor transformations. The model takes the form of a multilayered architecture consisting of integrate-and-fire neurons and synapses that employ a spike-timing-dependent plasticity learning rule to enable the learning of such transformations. We developed a simple 2-degree-of-freedom robot-based reaching task which involves the learning of a nonlinear function. Computer simulations demonstrate the capability of such a model to learn the forward and inverse kinematics for this task and hence to learn spatio-motor transformations. An interesting aspect of the model is its tolerance to the partial absence of sensory or motor inputs at various stages of learning. We believe that such a model lays the foundation for learning other complex functions and transformations in real-world scenarios.

  13. Orientation selectivity in inhibition-dominated networks of spiking neurons: effect of single neuron properties and network dynamics.

    Science.gov (United States)

    Sadeh, Sadra; Rotter, Stefan

    2015-01-01

The neuronal mechanisms underlying the emergence of orientation selectivity in the primary visual cortex of mammals are still elusive. In rodents, visual neurons show highly selective responses to oriented stimuli, but neighboring neurons do not necessarily have similar preferences. Instead of a smooth map, one observes a salt-and-pepper organization of orientation selectivity. Modeling studies have recently confirmed that balanced random networks are indeed capable of amplifying weakly tuned inputs and generating highly selective output responses, even in the absence of feature-selective recurrent connectivity. Here we seek to elucidate the neuronal mechanisms underlying this phenomenon by resorting to networks of integrate-and-fire neurons, which are amenable to analytic treatment. Specifically, in networks of perfect integrate-and-fire neurons, we observe that highly selective and contrast invariant output responses emerge, very similar to networks of leaky integrate-and-fire neurons. We then demonstrate that a theory based on mean firing rates and the detailed network topology predicts the output responses, and explains the mechanisms underlying the suppression of the common-mode, amplification of modulation, and contrast invariance. Increasing inhibition dominance in our networks makes the rectifying nonlinearity more prominent, which in turn adds some distortions to the otherwise essentially linear prediction. An extension of the linear theory can account for all the distortions, enabling us to compute the exact shape of every individual tuning curve in our networks. We show that this simple form of nonlinearity adds two important properties to orientation selectivity in the network, namely sharpening of tuning curves and extra suppression of the modulation. The theory can be further extended to account for the nonlinearity of the leaky model by replacing the rectifier by the appropriate smooth input-output transfer function. These results are robust and do not

  14. Recording Spikes Activity in Cultured Hippocampal Neurons Using Flexible or Transparent Graphene Transistors

    Directory of Open Access Journals (Sweden)

    Farida Veliev

    2017-08-01

Full Text Available The emergence of nanoelectronics applied to neural interfaces began a few decades ago, and aims to provide new tools for replacing or restoring disabled functions of the nervous system, as well as for further understanding the evolution of such complex organization. At the same time, graphene and other 2D materials have offered new possibilities for integrating micro- and nano-devices on flexible, transparent, and biocompatible substrates, promising for bio- and neuro-electronics. In addition to many bio-suitable features of the graphene interface, such as chemical inertness and anti-corrosive properties, its optical transparency enables a multimodal approach to neuron-based systems, the electrical layer being compatible with additional microfluidic and optical manipulation ports. The convergence of these fields will provide a next generation of neural interfaces for the reliable detection of single spikes and high-fidelity recording of the activity patterns of neural networks. Here, we report on the fabrication of graphene field effect transistors (G-FETs) on various substrates (silicon, sapphire, glass coverslips, and polyimide deposited onto Si/SiO2 substrates), exhibiting high sensitivity (4 mS/V, close to the Dirac point at VLG < VD) and low noise level (10⁻²² A²/Hz at VLG = 0 V). We demonstrate the in vitro detection of the spontaneous activity of hippocampal neurons grown in situ on top of the graphene sensors over several weeks in a millimeter-size PDMS fluidic chamber (8 mm wide). These results provide an advance toward the realization of biocompatible devices for reliable and high spatio-temporal sensing of neuronal activity for both in vitro and in vivo applications.

  15. Reconstruction of neuronal input through modeling single-neuron dynamics and computations

    International Nuclear Information System (INIS)

    Qin, Qing; Wang, Jiang; Yu, Haitao; Deng, Bin; Chan, Wai-lok

    2016-01-01

Mathematical models provide a mathematical description of neuron activity, which can help us better understand and quantify neural computations and the corresponding biophysical mechanisms evoked by a stimulus. In this paper, based on the output spike train evoked by an acupuncture mechanical stimulus, we present two different levels of models to describe the input-output system and achieve the reconstruction of neuronal input. The reconstruction process is divided into two steps. First, the neuronal spiking event is considered as a Gamma stochastic process; the scale parameter and the shape parameter of the Gamma process are, respectively, defined as two spiking characteristics, which are estimated by a state-space method. Then, a leaky integrate-and-fire (LIF) model is used to mimic the response system, and the estimated spiking characteristics are transformed into two temporal input parameters of the LIF model through two conversion formulas. We test this reconstruction method on three different groups of simulation data. All three groups of estimates reconstruct the input parameters with fairly high accuracy. We then use this reconstruction method to estimate the non-measurable acupuncture input parameters. Results show that under three different frequencies of acupuncture stimulus, the estimated input parameters differ markedly. The higher the frequency of the acupuncture stimulus, the higher the accuracy of the reconstruction.
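
    The two-step pipeline can be sketched abstractly: fit Gamma parameters to the interspike intervals, then drive an LIF model. The paper uses a state-space estimator; the sketch below substitutes simple moment matching, and all numbers are illustrative:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Step 1 (simplified): treat ISIs as draws from a Gamma distribution
    # and recover shape/scale by moment matching (the paper instead uses
    # a state-space method).
    true_shape, true_scale = 4.0, 2.5            # made-up ground truth (ms)
    isis = rng.gamma(true_shape, true_scale, size=20000)
    m, v = isis.mean(), isis.var()
    shape_hat = m * m / v                        # shape = mean^2 / var
    scale_hat = v / m                            # scale = var / mean
    print(shape_hat, scale_hat)                  # close to 4.0 and 2.5

    # Step 2: a leaky integrate-and-fire response model driven by a
    # constant suprathreshold input (illustrative parameters).
    dt, tau_m, v_th, v_reset = 0.1, 10.0, 1.0, 0.0   # ms, ms, a.u., a.u.
    I = 1.2
    v_m, spikes = 0.0, 0
    for _ in range(int(1000 / dt)):              # 1 s of simulated time
        v_m += dt / tau_m * (-v_m + I)
        if v_m >= v_th:
            v_m, spikes = v_reset, spikes + 1
    print(spikes)                                # LIF fires periodically
    ```

    In the paper's full method, the estimated Gamma characteristics are mapped onto the LIF input parameters by conversion formulas rather than fixed by hand as above.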

  16. Reconstruction of neuronal input through modeling single-neuron dynamics and computations

    Energy Technology Data Exchange (ETDEWEB)

    Qin, Qing; Wang, Jiang; Yu, Haitao; Deng, Bin, E-mail: dengbin@tju.edu.cn; Chan, Wai-lok [School of Electrical Engineering and Automation, Tianjin University, Tianjin 300072 (China)

    2016-06-15

Mathematical models provide a mathematical description of neuron activity, which can help us better understand and quantify neural computations and the corresponding biophysical mechanisms evoked by a stimulus. In this paper, based on the output spike train evoked by an acupuncture mechanical stimulus, we present two different levels of models to describe the input-output system and achieve the reconstruction of neuronal input. The reconstruction process is divided into two steps. First, the neuronal spiking event is considered as a Gamma stochastic process; the scale parameter and the shape parameter of the Gamma process are, respectively, defined as two spiking characteristics, which are estimated by a state-space method. Then, a leaky integrate-and-fire (LIF) model is used to mimic the response system, and the estimated spiking characteristics are transformed into two temporal input parameters of the LIF model through two conversion formulas. We test this reconstruction method on three different groups of simulation data. All three groups of estimates reconstruct the input parameters with fairly high accuracy. We then use this reconstruction method to estimate the non-measurable acupuncture input parameters. Results show that under three different frequencies of acupuncture stimulus, the estimated input parameters differ markedly. The higher the frequency of the acupuncture stimulus, the higher the accuracy of the reconstruction.

  17. A Model of Fast Hebbian Spike Latency Normalization

    Directory of Open Access Journals (Sweden)

    Hafsteinn Einarsson

    2017-05-01

Full Text Available Hebbian changes of excitatory synapses are driven by and enhance correlations between pre- and postsynaptic neuronal activations, forming a positive feedback loop that can lead to instability in simulated neural networks. Because Hebbian learning may occur on time scales of seconds to minutes, it is conjectured that some form of fast stabilization of neural firing is necessary to avoid runaway excitation, but both the theoretical underpinning and the biological implementation of such a homeostatic mechanism have yet to be fully investigated. Supported by analytical and computational arguments, we show that a Hebbian spike-timing-dependent metaplasticity rule accounts for inherently stable, quick tuning of the total input weight of a single neuron in the general scenario of asynchronous neural firing characterized by UP and DOWN states of activity.

  18. A compound memristive synapse model for statistical learning through STDP in spiking neural networks

    Directory of Open Access Journals (Sweden)

    Johannes eBill

    2014-12-01

Full Text Available Memristors have recently emerged as promising circuit elements to mimic the function of biological synapses in neuromorphic computing. The fabrication of reliable nanoscale memristive synapses that feature continuous conductance changes based on the timing of pre- and postsynaptic spikes has, however, turned out to be challenging. In this article, we propose an alternative approach, the compound memristive synapse, that circumvents this problem by using memristors with binary memristive states. A compound memristive synapse employs multiple bistable memristors in parallel to jointly form one synapse, thereby providing a spectrum of synaptic efficacies. We investigate the computational implications of synaptic plasticity in the compound synapse by integrating the recently observed phenomenon of stochastic filament formation into an abstract model of stochastic switching. Using this abstract model, we first show how standard pulsing schemes give rise to spike-timing dependent plasticity (STDP) with a stabilizing weight dependence in compound synapses. In a next step, we study unsupervised learning with compound synapses in networks of spiking neurons organized in a winner-take-all architecture. Our theoretical analysis reveals that compound-synapse STDP implements generalized Expectation-Maximization in the spiking network. Specifically, the emergent synapse configuration represents the most salient features of the input distribution in a Mixture-of-Gaussians generative model. Furthermore, the network's spike response to spiking input streams approximates a well-defined Bayesian posterior distribution. We show in computer simulations how such networks learn to represent high-dimensional distributions over images of handwritten digits with high fidelity even in presence of substantial device variations and under severe noise conditions. Therefore, the compound memristive synapse may provide a synaptic design principle for future neuromorphic
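
    The core idea, many bistable devices in parallel yielding a graded synaptic weight, can be sketched as an abstract stochastic-switching model. Switching probabilities and device count here are made up for illustration, not fitted to any device:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    class CompoundSynapse:
        """M bistable memristors in parallel: the synaptic efficacy is
        the fraction of devices in the ON state, so it takes M+1 graded
        levels even though each device is binary."""
        def __init__(self, m=100, p_on=0.1, p_off=0.1):
            self.state = np.zeros(m, dtype=bool)   # all devices start OFF
            self.p_on, self.p_off = p_on, p_off

        def potentiate(self):
            # e.g. a pre-before-post pulse: each OFF device switches ON
            # with probability p_on
            off = ~self.state
            self.state[off] = rng.random(off.sum()) < self.p_on

        def depress(self):
            # e.g. a post-before-pre pulse: each ON device switches OFF
            # with probability p_off
            on = self.state
            self.state[on] = rng.random(on.sum()) >= self.p_off

        @property
        def weight(self):
            return self.state.mean()

    syn = CompoundSynapse()
    for _ in range(50):
        syn.potentiate()
    print(syn.weight)           # driven toward 1 by repeated potentiation
    for _ in range(50):
        syn.depress()
    print(syn.weight)           # driven back toward 0
    ```

    Because each pulse flips only a random subset of devices, the expected weight change shrinks as the weight saturates, which is the stabilizing weight dependence the abstract highlights.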

  19. Detection of bursts in extracellular spike trains using hidden semi-Markov point process models.

    Science.gov (United States)

    Tokdar, Surya; Xi, Peiyi; Kelly, Ryan C; Kass, Robert E

    2010-08-01

Neurons in vitro and in vivo have epochs of bursting or "up state" activity during which firing rates are dramatically elevated. Various methods of detecting bursts in extracellular spike trains have appeared in the literature, the most widely used apparently being Poisson Surprise (PS). A natural description of the phenomenon assumes (1) there are two hidden states, which we label "burst" and "non-burst," (2) the neuron evolves stochastically, switching at random between these two states, and (3) within each state the spike train follows a time-homogeneous point process. If in (2) the transitions from non-burst to burst and burst to non-burst states are memoryless, this becomes a hidden Markov model (HMM). For HMMs, the state transitions follow exponential distributions, and are highly irregular. Because observed bursting may in some cases be fairly regular, exhibiting inter-burst intervals with small variation, we relaxed this assumption. When more general probability distributions are used to describe the state transitions the two-state point process model becomes a hidden semi-Markov model (HSMM). We developed an efficient Bayesian computational scheme to fit HSMMs to spike train data. Numerical simulations indicate the method can perform well, sometimes yielding very different results than those based on PS.
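
    The Poisson Surprise baseline that the HSMM is compared against scores a candidate burst of n spikes in a window of length T against a homogeneous Poisson null. A minimal sketch (window values are illustrative):

    ```python
    import math

    def poisson_surprise(n, T, rate):
        """Poisson Surprise of observing n or more spikes in a window of
        length T under a Poisson null with the given mean rate:
        S = -ln P(N >= n),  N ~ Poisson(rate * T).
        Large S marks windows far too dense to be chance."""
        lam = rate * T
        # P(N >= n) = 1 - sum_{k < n} e^{-lam} lam^k / k!
        p_lt = sum(math.exp(-lam) * lam**k / math.factorial(k)
                   for k in range(n))
        p_ge = max(1.0 - p_lt, 1e-300)   # guard against underflow to 0
        return -math.log(p_ge)

    # 10 spikes in 0.1 s from a neuron averaging 5 Hz: a strong burst
    print(poisson_surprise(10, 0.1, 5.0))   # large surprise
    # 1 spike in 0.2 s at 5 Hz: unremarkable
    print(poisson_surprise(1, 0.2, 5.0))   # small surprise
    ```

    Windows whose surprise exceeds a chosen threshold are flagged as bursts; the HSMM of the paper replaces this ad hoc scan with an explicit two-state generative model.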

  20. Artificial neuron operations and spike-timing-dependent plasticity using memristive devices for brain-inspired computing

    Science.gov (United States)

    Marukame, Takao; Nishi, Yoshifumi; Yasuda, Shin-ichi; Tanamoto, Tetsufumi

    2018-04-01

    The use of memristive devices for creating artificial neurons is promising for brain-inspired computing from the viewpoints of computation architecture and learning protocol. We present an energy-efficient multiplier accumulator based on a memristive array architecture incorporating both analog and digital circuitries. The analog circuitry is used to full advantage for neural networks, as demonstrated by the spike-timing-dependent plasticity (STDP) in fabricated AlO x /TiO x -based metal-oxide memristive devices. STDP protocols for controlling periodic analog resistance with long-range stability were experimentally verified using a variety of voltage amplitudes and spike timings.

  1. Computational modeling of spiking neural network with learning rules from STDP and intrinsic plasticity

    Science.gov (United States)

    Li, Xiumin; Wang, Wei; Xue, Fangzheng; Song, Yongduan

    2018-02-01

Recently there has been continuously increasing interest in building computational models of spiking neural networks (SNN), such as the Liquid State Machine (LSM). Biologically inspired self-organized neural networks with neural plasticity can enhance computational performance, with the characteristic features of dynamical memory and recurrent connection cycles that distinguish them from the more widely used feedforward neural networks. Although a variety of computational models for brain-like learning and information processing have been proposed, the modeling of self-organized neural networks with multiple forms of neural plasticity is still an important open challenge. The main difficulties lie in the interplay among different neural plasticity rules and in understanding how the structure and dynamics of neural networks shape computational performance. In this paper, we propose a novel approach to developing models of LSM with a biologically inspired self-organizing network based on two neural plasticity learning rules. The connectivity among excitatory neurons is adapted by spike-timing-dependent plasticity (STDP) learning; meanwhile, the degree of neuronal excitability is regulated to maintain a moderate average activity level by another learning rule: intrinsic plasticity (IP). Our study shows that LSM with STDP+IP performs better than LSM with a random SNN or an SNN obtained by STDP alone. The noticeable improvement with the proposed method is due to better-reflected competition among different neurons in the developed SNN model, as well as more effectively encoded and processed relevant dynamic information through its learning and self-organizing mechanism. This result gives insights into the optimization of computational models of spiking neural networks with neural plasticity.
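
    The STDP component used here is conventionally the pair-based exponential window: a presynaptic spike shortly before a postsynaptic one potentiates the synapse, and the reverse order depresses it. A minimal sketch with illustrative amplitudes and time constants (not the paper's fitted values):

    ```python
    import numpy as np

    def stdp_dw(dt_ms, a_plus=0.01, a_minus=0.012,
                tau_plus=20.0, tau_minus=20.0):
        """Pair-based STDP weight change for a spike-time difference
        dt = t_post - t_pre (ms): potentiation when the presynaptic
        spike precedes the postsynaptic one (dt > 0), exponentially
        decaying depression otherwise."""
        dt_ms = np.asarray(dt_ms, dtype=float)
        return np.where(dt_ms > 0,
                        a_plus * np.exp(-dt_ms / tau_plus),
                        -a_minus * np.exp(dt_ms / tau_minus))

    print(stdp_dw(10.0))    # positive: pre-before-post potentiates
    print(stdp_dw(-10.0))   # negative: post-before-pre depresses
    ```

    The intrinsic plasticity rule of the paper acts on a different variable entirely, the neuron's excitability, which is what lets the two rules interact without STDP alone driving activity to extremes.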

  2. Visual language recognition with a feed-forward network of spiking neurons

    Energy Technology Data Exchange (ETDEWEB)

    Rasmussen, Craig E [Los Alamos National Laboratory; Garrett, Kenyan [Los Alamos National Laboratory; Sottile, Matthew [GALOIS; Shreyas, Ns [INDIANA UNIV.

    2010-01-01

    An analogy is made and exploited between the recognition of visual objects and language parsing. A subset of regular languages is used to define a one-dimensional 'visual' language, in which the words are translational and scale invariant. This allows an exploration of the viewpoint invariant languages that can be solved by a network of concurrent, hierarchically connected processors. A language family is defined that is hierarchically tiling system recognizable (HREC). As inspired by nature, an algorithm is presented that constructs a cellular automaton that recognizes strings from a language in the HREC family. It is demonstrated how a language recognizer can be implemented from the cellular automaton using a feed-forward network of spiking neurons. This parser recognizes fixed-length strings from the language in parallel and as the computation is pipelined, a new string can be parsed in each new interval of time. The analogy with formal language theory allows inferences to be drawn regarding what class of objects can be recognized by visual cortex operating in purely feed-forward fashion and what class of objects requires a more complicated network architecture.

  3. Nonlinear Maps for Design of Discrete Time Models of Neuronal Network Dynamics

    Science.gov (United States)

    2016-02-29

Report #1, Performance/Technical Monthly Report for grant N00014-16-1-2252, "Nonlinear Maps for Design of Discrete-Time Models of Neuronal Network Dynamics." Subject terms: map-based neuronal model, discrete-time spiking dynamics, synapses, neurons, neurobiological networks. The research plan assumes part-time involvement (50%) of a postdoc with experience in neuronal network simulations using standard conductance-based models.

  4. A Spiking Neural Network Model of the Lateral Geniculate Nucleus on the SpiNNaker Machine

    Directory of Open Access Journals (Sweden)

    Basabdatta Sen-Bhattacharya

    2017-08-01

Full Text Available We present a spiking neural network model of the thalamic Lateral Geniculate Nucleus (LGN) developed on SpiNNaker, which is a state-of-the-art digital neuromorphic hardware built with very-low-power ARM processors. The parallel, event-based data processing in SpiNNaker makes it viable for building massively parallel neuro-computational frameworks. The LGN model has 140 neurons representing a “basic building block” for larger modular architectures. The motivation of this work is to simulate biologically plausible LGN dynamics on SpiNNaker. The synaptic layout of the model is consistent with biology. The model response is validated against existing literature reporting entrainment in steady state visually evoked potentials (SSVEP), brain oscillations corresponding to periodic visual stimuli recorded via electroencephalography (EEG). Periodic stimulus to the model is provided by: a synthetic spike-train with inter-spike-intervals in the range 10–50 Hz at a resolution of 1 Hz; and spike-train output from a state-of-the-art electronic retina subjected to a light emitting diode flashing at 10, 20, and 40 Hz, simulating real-world visual stimulus to the model. The resolution of simulation is 0.1 ms to ensure solution accuracy for the underlying differential equations defining Izhikevich's neuron model. Under this constraint, 1 s of model simulation time is executed in 10 s real time on SpiNNaker; this is because simulations on SpiNNaker work in real time for time-steps dt ⩾ 1 ms. The model output shows entrainment with both sets of input and contains harmonic components of the fundamental frequency. However, suppressing the feed-forward inhibition in the circuit produces subharmonics within the gamma band (>30 Hz), implying reduced information transmission fidelity. These model predictions agree with recent lumped-parameter computational model-based predictions made using conventional computers. Scalability of the framework is demonstrated by a multi
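
    The underlying neuron equations mentioned in the abstract are Izhikevich's published two-variable model. A minimal forward-Euler sketch using the standard regular-spiking parameters (simulation length and drive are illustrative, not the paper's LGN configuration):

    ```python
    def izhikevich(I=10.0, a=0.02, b=0.2, c=-65.0, d=8.0, dt=0.1, T=500.0):
        """Izhikevich (2003) neuron model:
          v' = 0.04 v^2 + 5 v + 140 - u + I
          u' = a (b v - u)
        with reset v <- c, u <- u + d when v >= 30 mV.
        a, b, c, d here are the published regular-spiking parameters."""
        v, u = c, b * c
        spike_times = []
        for step in range(int(T / dt)):
            v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
            u += dt * a * (b * v - u)
            if v >= 30.0:
                spike_times.append(step * dt)
                v, u = c, u + d
        return spike_times

    print(len(izhikevich()))        # tonic spiking under constant drive
    print(len(izhikevich(I=0.0)))   # silent without input
    ```

    The SpiNNaker platform solves these same equations per neuron; the abstract's 0.1 ms resolution corresponds to the dt used above.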

  5. Deep Spiking Networks

    NARCIS (Netherlands)

    O'Connor, P.; Welling, M.

    2016-01-01

    We introduce an algorithm to do backpropagation on a spiking network. Our network is "spiking" in the sense that our neurons accumulate their activation into a potential over time, and only send out a signal (a "spike") when this potential crosses a threshold and the neuron is reset. Neurons only
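    The accumulate-until-threshold-then-reset behaviour described above can be sketched in a few lines; the single scalar weight and threshold here are illustrative assumptions, not the paper's network architecture:

    ```python
    def spiking_layer(inputs, weight, threshold=1.0):
        """Accumulate weighted input into a potential; emit a spike and
        reset whenever the potential crosses the threshold."""
        potential, out = 0.0, []
        for x in inputs:
            potential += weight * x
            if potential >= threshold:
                out.append(1)
                potential = 0.0        # reset after the spike
            else:
                out.append(0)
        return out

    # two accumulation steps of 0.5 per spike -> [0, 1, 0, 1]
    out = spiking_layer([0.5, 0.5, 0.5, 0.5], 1.0)
    ```
    
    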

  6. A spiking network model of cerebellar Purkinje cells and molecular layer interneurons exhibiting irregular firing

    Directory of Open Access Journals (Sweden)

    William eLennon

    2014-12-01

    Full Text Available While the anatomy of the cerebellar microcircuit is well studied, how it implements cerebellar function is not understood. A number of models have been proposed to describe this mechanism but few emphasize the role of the vast network Purkinje cells (PKJs) form with the molecular layer interneurons (MLIs) – the stellate and basket cells. We propose a model of the MLI-PKJ network composed of simple spiking neurons incorporating the major anatomical and physiological features. In computer simulations, the model reproduces the irregular firing patterns observed in PKJs and MLIs in vitro and a shift toward faster, more regular firing patterns when inhibitory synaptic currents are blocked. In the model, the time between PKJ spikes is shown to be proportional to the amount of feedforward inhibition from an MLI on average. The two key elements of the model are: (1) spontaneously active PKJs and MLIs due to an endogenous depolarizing current, and (2) adherence to known anatomical connectivity along a parasagittal strip of cerebellar cortex. We propose this model to extend previous spiking network models of the cerebellum and for further computational investigation into the role of irregular firing and MLIs in cerebellar learning and function.

  7. Multistability in a neuron model with extracellular potassium dynamics

    Science.gov (United States)

    Wu, Xing-Xing; Shuai, J. W.

    2012-06-01

    Experiments show a primary role of extracellular potassium concentrations in neuronal hyperexcitability and in the generation of epileptiform bursting and depolarization blocks without synaptic mechanisms. We adopt a physiologically relevant hippocampal CA1 neuron model in a zero-calcium condition to better understand the function of extracellular potassium in neuronal seizurelike activities. The model neuron is surrounded by interstitial space in which potassium ions are able to accumulate. Potassium currents, Na+-K+ pumps, glial buffering, and ion diffusion are regulatory mechanisms of extracellular potassium. We also consider a reduced model with a fixed potassium concentration. The bifurcation structure and spiking frequency of the two models are studied. We show that, besides hyperexcitability and bursting pattern modulation, the potassium dynamics can induce not only bistability but also tristability of different firing patterns. Our results reveal the emergence of the complex behavior of multistability due to the dynamical [K+]o modulation on neuronal activities.

  8. Point process modeling and estimation: Advances in the analysis of dynamic neural spiking data

    Science.gov (United States)

    Deng, Xinyi

    2016-08-01

    A common interest of scientists in many fields is to understand the relationship between the dynamics of a physical system and the occurrences of discrete events within such physical system. Seismologists study the connection between mechanical vibrations of the Earth and the occurrences of earthquakes so that future earthquakes can be better predicted. Astrophysicists study the association between the oscillating energy of celestial regions and the emission of photons to learn about the Universe's various objects and their interactions. Neuroscientists study the link between behavior and the millisecond-timescale spike patterns of neurons to understand higher brain functions. Such relationships can often be formulated within the framework of state-space models with point process observations. The basic idea is that the dynamics of the physical systems are driven by the dynamics of some stochastic state variables and the discrete events we observe in an interval are noisy observations with distributions determined by the state variables. This thesis proposes several new methodological developments that advance the framework of state-space models with point process observations at the intersection of statistics and neuroscience. In particular, we develop new methods 1) to characterize the rhythmic spiking activity using history-dependent structure, 2) to model population spike activity using marked point process models, 3) to allow for real-time decision making, and 4) to take into account the need for dimensionality reduction for high-dimensional state and observation processes. We applied these methods to a novel problem of tracking rhythmic dynamics in the spiking of neurons in the subthalamic nucleus of Parkinson's patients with the goal of optimizing placement of deep brain stimulation electrodes. We developed a decoding algorithm that can make decisions in real time (for example, whether or not to stimulate the neurons) based on various sources of information present in
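    The basic state-space idea described here (latent stochastic dynamics driving point-process observations) can be sketched with an AR(1) latent state modulating a log-linear conditional intensity; all parameters below are illustrative assumptions, not the thesis's models:

    ```python
    import math, random

    def simulate_state_space_spikes(n_bins=2000, dt=0.001, seed=0):
        """Latent AR(1) state x_k modulates a conditional intensity
        lambda_k = exp(mu + x_k); each 1 ms bin spikes with prob lambda*dt."""
        rng = random.Random(seed)
        x, mu = 0.0, math.log(20.0)              # ~20 Hz baseline (assumed)
        latent, spikes = [], []
        for _ in range(n_bins):
            x = 0.99 * x + rng.gauss(0.0, 0.1)   # slow latent drift
            lam = math.exp(mu + x)
            spikes.append(1 if rng.random() < lam * dt else 0)
            latent.append(x)
        return latent, spikes

    latent, spikes = simulate_state_space_spikes()
    ```

    Decoding then amounts to inverting this generative direction: inferring the latent trajectory from the observed spike counts, which is what the real-time algorithm in the thesis must do within each bin.
    
    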

  9. The dynamic brain: from spiking neurons to neural masses and cortical fields.

    Directory of Open Access Journals (Sweden)

    Gustavo Deco

    2008-08-01

    Full Text Available The cortex is a complex system, characterized by its dynamics and architecture, which underlie many functions such as action, perception, learning, language, and cognition. Its structural architecture has been studied for more than a hundred years; however, its dynamics have been addressed much less thoroughly. In this paper, we review and integrate, in a unifying framework, a variety of computational approaches that have been used to characterize the dynamics of the cortex, as evidenced at different levels of measurement. Computational models at different space-time scales help us understand the fundamental mechanisms that underpin neural processes and relate these processes to neuroscience data. Modeling at the single neuron level is necessary because this is the level at which information is exchanged between the computing elements of the brain: the neurons. Mesoscopic models tell us how neural elements interact to yield emergent behavior at the level of microcolumns and cortical columns. Macroscopic models can inform us about whole brain dynamics and interactions between large-scale neural systems such as cortical regions, the thalamus, and brain stem. Each level of description relates uniquely to neuroscience data, from single-unit recordings, through local field potentials to functional magnetic resonance imaging (fMRI), electroencephalogram (EEG), and magnetoencephalogram (MEG). Models of the cortex can establish which types of large-scale neuronal networks can perform computations and characterize their emergent properties. Mean-field and related formulations of dynamics also play an essential and complementary role as forward models that can be inverted given empirical data. This makes dynamic models critical in integrating theory and experiments. We argue that elaborating principled and informed models is a prerequisite for grounding empirical neuroscience in a cogent theoretical framework, commensurate with the achievements in the
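    As a concrete instance of the mesoscopic, mean-field level of description reviewed here, the classic Wilson-Cowan rate equations couple an excitatory and an inhibitory population; the coupling constants below are standard textbook values chosen for illustration, not parameters from this review:

    ```python
    import math

    def wilson_cowan(steps=5000, dt=0.1, p_drive=1.25):
        """Euler integration of the Wilson-Cowan excitatory/inhibitory
        rate equations (illustrative coupling constants assumed)."""
        def sigmoid(x):
            return 1.0 / (1.0 + math.exp(-x))
        e, i = 0.1, 0.1
        traj = []
        for _ in range(steps):
            de = -e + sigmoid(12.0 * e - 12.0 * i + p_drive)
            di = -i + sigmoid(10.0 * e - 4.0 * i)
            e, i = e + dt * de, i + dt * di
            traj.append((e, i))
        return traj

    traj = wilson_cowan()
    ```

    Because each population's rate relaxes toward a sigmoid of its inputs, the trajectories stay bounded in [0, 1]; this is the kind of low-dimensional forward model the review argues can be inverted against EEG/MEG/fMRI data.
    
    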

  10. Neuronal Networks in Children with Continuous Spikes and Waves during Slow Sleep

    Science.gov (United States)

    Siniatchkin, Michael; Groening, Kristina; Moehring, Jan; Moeller, Friederike; Boor, Rainer; Brodbeck, Verena; Michel, Christoph M.; Rodionov, Roman; Lemieux, Louis; Stephani, Ulrich

    2010-01-01

    Epileptic encephalopathy with continuous spikes and waves during slow sleep is an age-related disorder characterized by the presence of interictal epileptiform discharges during at least greater than 85% of sleep and cognitive deficits associated with this electroencephalography pattern. The pathophysiological mechanisms of continuous spikes and…

  11. A Bayesian supervised dual-dimensionality reduction model for simultaneous decoding of LFP and spike train signals.

    Science.gov (United States)

    Holbrook, Andrew; Vandenberg-Rodes, Alexander; Fortin, Norbert; Shahbaba, Babak

    2017-01-01

    Neuroscientists are increasingly collecting multimodal data during experiments and observational studies. Different data modalities-such as EEG, fMRI, LFP, and spike trains-offer different views of the complex systems contributing to neural phenomena. Here, we focus on joint modeling of LFP and spike train data, and present a novel Bayesian method for neural decoding to infer behavioral and experimental conditions. This model performs supervised dual-dimensionality reduction: it learns low-dimensional representations of two different sources of information that not only explain variation in the input data itself, but also predict extra-neuronal outcomes. Despite being one probabilistic unit, the model consists of multiple modules: exponential PCA and wavelet PCA are used for dimensionality reduction in the spike train and LFP modules, respectively; these modules simultaneously interface with a Bayesian binary regression module. We demonstrate how this model may be used for prediction, parametric inference, and identification of influential predictors. In prediction, the hierarchical model outperforms other models trained on LFP alone, spike train alone, and combined LFP and spike train data. We compare two methods for modeling the loading matrix and find them to perform similarly. Finally, model parameters and their posterior distributions yield scientific insights.

  12. SPICODYN: A Toolbox for the Analysis of Neuronal Network Dynamics and Connectivity from Multi-Site Spike Signal Recordings.

    Science.gov (United States)

    Pastore, Vito Paolo; Godjoski, Aleksandar; Martinoia, Sergio; Massobrio, Paolo

    2018-01-01

    We implemented an automated and efficient open-source software package for the analysis of multi-site neuronal spike signals. The package, named SPICODYN, has been developed as a standalone Windows GUI application written in C# with Microsoft Visual Studio on the .NET Framework 4.5. Accepted input data formats are HDF5, level 5 MAT, and text files containing recorded or generated time-series spike signal data. SPICODYN processes such electrophysiological signals focusing on spiking and bursting dynamics and functional-effective connectivity analysis. In particular, for inferring network connectivity, a new implementation of the transfer entropy method is presented that deals with multiple time delays (temporal extension) and with multiple binary patterns (high-order extension). SPICODYN is specifically tailored to process data coming from different Multi-Electrode Array setups, guaranteeing, in those specific cases, automated processing. The optimized implementation of the Delayed Transfer Entropy and the High-Order Transfer Entropy algorithms allows performing accurate and rapid analysis on multiple spike trains from thousands of electrodes.
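    The core transfer-entropy computation underlying the connectivity analysis can be sketched for first-order binary spike trains with an explicit source-to-target delay. This is a from-scratch plug-in estimator for illustration, not SPICODYN's optimized implementation:

    ```python
    import math, random
    from collections import Counter

    def transfer_entropy(source, target, delay=1):
        """First-order transfer entropy TE(source -> target), in bits,
        for binary spike trains with an explicit source->target delay."""
        counts, n = Counter(), 0
        for t in range(max(1, delay), len(target)):
            counts[(target[t], target[t - 1], source[t - delay])] += 1
            n += 1
        te = 0.0
        for (x1, x0, y0), c in counts.items():
            p_joint = c / n
            p_x0y0 = sum(v for (_, b, g), v in counts.items() if (b, g) == (x0, y0)) / n
            p_x1x0 = sum(v for (a, b, _), v in counts.items() if (a, b) == (x1, x0)) / n
            p_x0 = sum(v for (_, b, _), v in counts.items() if b == x0) / n
            # p(x1,x0,y0) * log2[ p(x1 | x0, y0) / p(x1 | x0) ]
            te += p_joint * math.log2((p_joint / p_x0y0) / (p_x1x0 / p_x0))
        return te

    # a target that copies its driver one step later carries ~1 bit;
    # an independent train should give TE near zero
    rng = random.Random(42)
    src = [rng.randint(0, 1) for _ in range(4000)]
    te_driven = transfer_entropy(src, [0] + src[:-1])
    te_indep = transfer_entropy(src, [rng.randint(0, 1) for _ in range(4000)])
    ```

    The paper's temporal extension scans `delay` over a range and keeps the maximizing value; the high-order extension replaces the single-bin histories with multi-bin binary patterns.
    
    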

  13. Application of unfolding transformation in the random matrix theory to analyze in vivo neuronal spike firing during awake and anesthetized conditions

    Directory of Open Access Journals (Sweden)

    Risako Kato

    2018-03-01

    Full Text Available General anesthetics decrease the frequency and density of spike firing. This effect makes it difficult to detect spike regularity. To overcome this problem, we developed a method utilizing the unfolding transformation, which is used to analyze energy-level statistics in random matrix theory. We regarded the energy axis as the time axis of neuronal spiking and analyzed the time series of cortical neural firing in vivo. The unfolding transformation detected regularities of neural firing despite the changes in firing density associated with pentobarbital. We found that the unfolding transformation enables us to compare firing regularity between awake and anesthetized conditions on a universal scale. Keywords: Unfolding transformation, Spike-timing, Regularity
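    One simple way to realize the unfolding idea (rescaling spike spacings so that regularity can be compared across firing-density conditions) is to divide each inter-spike interval by a local mean ISI. This local-average scheme is an illustrative stand-in, not necessarily the authors' exact transformation:

    ```python
    def unfold_isis(spike_times, window=20):
        """Local 'unfolding': divide each inter-spike interval by the mean
        ISI in a surrounding window, yielding unit-mean spacings so that
        regularity can be compared across firing-density conditions."""
        isis = [b - a for a, b in zip(spike_times, spike_times[1:])]
        unfolded = []
        for k, s in enumerate(isis):
            lo = max(0, k - window // 2)
            hi = min(len(isis), k + window // 2 + 1)
            unfolded.append(s / (sum(isis[lo:hi]) / (hi - lo)))
        return unfolded

    # a perfectly regular train unfolds to spacings of exactly 1
    uniform = unfold_isis([10.0 * k for k in range(50)])
    ```

    After unfolding, the distribution of spacings can be compared against random-matrix or Poisson baselines on the same unit-mean scale regardless of the raw firing rate.
    
    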

  14. Hue opponency: chromatic valence functions, individual differences, cortical winner-take-all opponent modeling, and the relationship between spikes and sensitivity.

    Science.gov (United States)

    Billock, Vincent A

    2018-04-01

    Neural spike rate data are more restricted in range than related psychophysical data. For example, several studies suggest a compressive (roughly cube root) nonlinear relationship between wavelength-opponent spike rates in primate midbrain and color appearance in humans, two rather widely separated domains. This presents an opportunity to partially bridge a chasm between these two domains and to probe the putative nonlinearity with other psychophysical data. Here neural wavelength-opponent data are used to create cortical competition models for hue opponency. This effort led to creation of useful models of spiking neuron winner-take-all (WTA) competition and MAX selection. When fed with actual primate data, the spiking WTA models generate reasonable wavelength-opponent spike rate behaviors. An average psychophysical observer for red-green and blue-yellow opponency is curated from eight applicable studies in the refereed and dissertation literatures, with cancellation data roughly every 10 nm in 18 subjects for yellow-blue opponency and 15 subjects for red-green opponency. A direct mapping between spiking neurons with broadband wavelength sensitivity and human psychophysical luminance yields a power law exponent of 0.27, similar to the cube root nonlinearity. Similarly, direct mapping between the WTA model opponent spike rates and psychophysical opponent data suggests power law relationships with exponents between 0.24 and 0.41.
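    The power-law exponents quoted above come from relating spike rates to psychophysical magnitudes; a generic way to estimate such an exponent is least squares on log-log axes. The data below are synthetic, for illustration only, not the curated observer data:

    ```python
    import math

    def power_law_exponent(x, y):
        """Least-squares slope on log-log axes: the exponent b in y = a*x**b."""
        lx, ly = [math.log(v) for v in x], [math.log(v) for v in y]
        mx, my = sum(lx) / len(lx), sum(ly) / len(ly)
        num = sum((a - mx) * (b - my) for a, b in zip(lx, ly))
        return num / sum((a - mx) ** 2 for a in lx)

    # synthetic spike rates following a cube-root-like compression
    rates = [1.0, 2.0, 5.0, 10.0, 50.0]
    b_hat = power_law_exponent(rates, [r ** 0.27 for r in rates])
    ```

    On noise-free synthetic data the fit recovers the generating exponent exactly; on real rate/sensitivity pairs the same regression yields the 0.24–0.41 range reported above.
    
    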

  15. Forskolin suppresses delayed-rectifier K+ currents and enhances spike frequency-dependent adaptation of sympathetic neurons.

    Directory of Open Access Journals (Sweden)

    Luis I Angel-Chavez

    Full Text Available In signal transduction research, natural or synthetic molecules are commonly used to target a great variety of signaling proteins. For instance, forskolin, a diterpene activator of adenylate cyclase, has been widely used in cellular preparations to increase the intracellular cAMP level. However, it has been shown that forskolin directly inhibits some cloned K+ channels, which in excitable cells set up the resting membrane potential, shape the action potential, and regulate repetitive firing. Despite the growing evidence indicating that K+ channels are blocked by forskolin, there are no studies yet assessing the impact of this mechanism of action on neuron excitability and firing patterns. In sympathetic neurons, we find that forskolin and its derivative 1,9-dideoxyforskolin reversibly suppress the delayed rectifier K+ current (IKV). In addition, forskolin reduced the spike afterhyperpolarization and enhanced the spike frequency-dependent adaptation. Given that IKV is mostly generated by Kv2.1 channels, HEK-293 cells were transfected with cDNA encoding the Kv2.1 α subunit to characterize the mechanism of forskolin action. Both drugs reversibly suppressed the Kv2.1-mediated K+ currents. Forskolin inhibited Kv2.1 currents and IKV with an IC50 of ~32 μM and ~24 μM, respectively. Moreover, the drug induced an apparent current inactivation and slowed down current deactivation. We suggest that forskolin reduces the excitability of sympathetic neurons by enhancing the spike frequency-dependent adaptation, partially through a direct block of their native Kv2.1 channels.

  16. Population model of hippocampal pyramidal neurons, linking a refractory density approach to conductance-based neurons

    Science.gov (United States)

    Chizhov, Anton V.; Graham, Lyle J.

    2007-01-01

    We propose a macroscopic approach toward realistic simulations of the population activity of hippocampal pyramidal neurons, based on the known refractory density equation with a different hazard function and on a different single-neuron threshold model. The threshold model is a conductance-based model taking into account adaptation-providing currents, which is reduced by omitting the fast sodium current and instead using an explicit threshold criterion for action potential events. Compared to the full pyramidal neuron model, the threshold model well approximates spike-time moments, postspike refractory states, and postsynaptic current integration. The dynamics of a neural population continuum are described by a set of one-dimensional partial differential equations in terms of the distributions of the refractory density (where the refractory state is defined by the time elapsed since the last action potential), the membrane potential, and the gating variables of the voltage-dependent channels, across the entire population. As the source term in the density equation, the probability density of firing, or hazard function, is derived from the Fokker-Planck (FP) equation, assuming that a single neuron is governed by a deterministic average-across-population input and a noise term. A self-similar solution of the FP equation in the subthreshold regime is obtained. Responses of the ensemble to stimulation by a current step and oscillating current are simulated and compared with individual neuron simulations. An example of interictal-like activity of a population of all-to-all connected excitatory neurons is presented.

  17. Dimensionality reduction in neural models: an information-theoretic generalization of spike-triggered average and covariance analysis.

    Science.gov (United States)

    Pillow, Jonathan W; Simoncelli, Eero P

    2006-04-28

    We describe an information-theoretic framework for fitting neural spike responses with a Linear-Nonlinear-Poisson cascade model. This framework unifies the spike-triggered average (STA) and spike-triggered covariance (STC) approaches to neural characterization and recovers a set of linear filters that maximize mean and variance-dependent information between stimuli and spike responses. The resulting approach has several useful properties, namely, (1) it recovers a set of linear filters sorted according to their informativeness about the neural response; (2) it is both computationally efficient and robust, allowing recovery of multiple linear filters from a data set of relatively modest size; (3) it provides an explicit "default" model of the nonlinear stage mapping the filter responses to spike rate, in the form of a ratio of Gaussians; (4) it is equivalent to maximum likelihood estimation of this default model but also converges to the correct filter estimates whenever the conditions for the consistency of STA or STC analysis are met; and (5) it can be augmented with additional constraints on the filters, such as space-time separability. We demonstrate the effectiveness of the method by applying it to simulated responses of a Hodgkin-Huxley neuron and the recorded extracellular responses of macaque retinal ganglion cells and V1 cells.
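    The spike-triggered average that this framework generalizes is itself easy to state: average the stimulus windows preceding each spike. A minimal sketch, with window length and toy data chosen for illustration:

    ```python
    def spike_triggered_average(stimulus, spikes, n_lags=5):
        """Average the n_lags stimulus samples preceding each spike."""
        sta, count = [0.0] * n_lags, 0
        for t, s in enumerate(spikes):
            if s and t >= n_lags:
                for k in range(n_lags):
                    sta[k] += stimulus[t - n_lags + k]
                count += 1
        return [v / count for v in sta] if count else sta

    # toy data: every spike follows a stimulus pulse by one sample,
    # so only the most recent lag of the STA is non-zero
    stim = [1.0 if t % 10 == 0 else 0.0 for t in range(100)]
    spk = [1 if t % 10 == 1 else 0 for t in range(100)]
    sta = spike_triggered_average(stim, spk)
    ```

    STC extends this by also computing the covariance of the spike-triggered windows; the information-theoretic method above recovers both the mean- and variance-dependent filters within one objective.
    
    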

  18. Spike-timing-dependent plasticity enhanced synchronization transitions induced by autapses in adaptive Newman-Watts neuronal networks.

    Science.gov (United States)

    Gong, Yubing; Wang, Baoying; Xie, Huijuan

    2016-12-01

    In this paper, we numerically study the effect of spike-timing-dependent plasticity (STDP) on synchronization transitions induced by autaptic activity in adaptive Newman-Watts Hodgkin-Huxley neuron networks. It is found that synchronization transitions induced by autaptic delay vary with the adjusting rate Ap of STDP and become strongest at a certain Ap value, and the Ap value increases when network randomness or network size increases. It is also found that the synchronization transitions induced by autaptic delay become strongest at a certain network randomness and network size, and the values increase and related synchronization transitions are enhanced when Ap increases. These results show that there is optimal STDP that can enhance the synchronization transitions induced by autaptic delay in the adaptive neuronal networks. These findings provide a new insight into the roles of STDP and autapses for the information transmission in neural systems. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  19. Comparison of neuronal spike exchange methods on a Blue Gene/P supercomputer

    Directory of Open Access Journals (Sweden)

    Michael eHines

    2011-11-01

    Full Text Available The performance of several spike exchange methods using a Blue Gene/P supercomputer has been tested with 8K to 128K cores using randomly connected networks of up to 32M cells with 1K connections per cell and 4M cells with 10K connections per cell. The spike exchange methods used are the standard Message Passing Interface collective, MPI_Allgather, and several variants of the non-blocking multisend method, either implemented via non-blocking MPI_Isend or exploiting the possibility of very low overhead direct memory access communication available on the Blue Gene/P. In all cases the worst performing method was that using MPI_Isend, due to the high overhead of initiating a spike communication. The two best performing methods – the persistent multisend method using the Record-Replay feature of the Deep Computing Messaging Framework, DCMF_Multicast; and a two-phase multisend in which a DCMF_Multicast is used to first send to a subset of phase 1 destination cores, which then pass it on to their subset of phase 2 destination cores – had similar performance, with very low overhead for the initiation of spike communication. Departure from ideal scaling for the multisend methods is almost completely due to load imbalance caused by the large variation in the number of cells that fire on each processor in the interval between synchronizations. Spike exchange time itself is negligible, since transmission overlaps with computation and is handled by a direct memory access controller. We conclude that ideal performance scaling will ultimately be limited by imbalance in incoming processor spikes between synchronization intervals. Thus, counterintuitively, maximization of load balance requires that the distribution of cells on processors should not reflect neural net architecture but should be random, so that sets of cells which burst-fire together are on different processors, with their targets on as large a set of processors as possible.

  20. Automatic detection of interictal spikes using data mining models.

    Science.gov (United States)

    Valenti, Pablo; Cazamajou, Enrique; Scarpettini, Marcelo; Aizemberg, Ariel; Silva, Walter; Kochen, Silvia

    2006-01-15

    In a prospective candidate for epilepsy surgery, both ictal and interictal spikes (IS) are studied to determine the localization of the epileptogenic zone. In this work, data mining (DM) classification techniques were utilized to build an automatic detection model. The selected DM algorithms are: Decision Trees (J4.8), and a Statistical Bayesian Classifier (naïve model). The main objective was the detection of IS, isolating them from the EEG's base activity. Moreover, DM has an attractive advantage in such applications, in that the recognition of epileptic discharges does not need a clear definition of spike morphology. Furthermore, previously 'unseen' patterns could be recognized by the DM with proper 'training'. The results obtained showed that the efficacy of the selected DM algorithms is comparable to the current visual analysis used by the experts. Moreover, DM is faster than the visual analysis of the EEG. This tool can thus assist the experts by facilitating the analysis of a patient's information, and reducing the time and effort required in the process.
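    A statistical Bayesian classifier of the kind used here can be illustrated with a from-scratch Gaussian naive Bayes over simple per-window features; the two-feature toy data below are invented for illustration and are not the paper's EEG features:

    ```python
    import math

    class GaussianNaiveBayes:
        """Minimal Gaussian naive Bayes, standing in for the paper's
        statistical Bayesian classifier (features here are invented)."""

        def fit(self, X, y):
            self.stats = {}
            for label in set(y):
                rows = [x for x, lab in zip(X, y) if lab == label]
                n = len(rows)
                means = [sum(col) / n for col in zip(*rows)]
                varis = [max(sum((v - m) ** 2 for v in col) / n, 1e-9)
                         for col, m in zip(zip(*rows), means)]
                self.stats[label] = (n / len(y), means, varis)
            return self

        def predict(self, x):
            best_label, best_lp = None, None
            for label, (prior, means, varis) in self.stats.items():
                lp = math.log(prior)
                for v, m, var in zip(x, means, varis):
                    lp -= 0.5 * math.log(2 * math.pi * var) + (v - m) ** 2 / (2 * var)
                if best_lp is None or lp > best_lp:
                    best_label, best_lp = label, lp
            return best_label

    # toy per-window features, e.g. (peak amplitude, variance); class 1 = spike
    X = [[5.0, 2.0], [4.8, 1.9], [5.2, 2.1], [0.1, 0.1], [0.0, 0.2], [0.2, 0.1]]
    y = [1, 1, 1, 0, 0, 0]
    clf = GaussianNaiveBayes().fit(X, y)
    ```

    The appeal noted in the abstract shows up here: nothing in the classifier encodes spike morphology; it only learns per-class feature statistics from labeled training windows.
    
    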

  1. Variable Action Potential Backpropagation during Tonic Firing and Low-Threshold Spike Bursts in Thalamocortical But Not Thalamic Reticular Nucleus Neurons.

    Science.gov (United States)

    Connelly, William M; Crunelli, Vincenzo; Errington, Adam C

    2017-05-24

    Backpropagating action potentials (bAPs) are indispensable in dendritic signaling. Conflicting Ca2+ imaging data and an absence of dendritic recording data means that the extent of backpropagation in thalamocortical (TC) and thalamic reticular nucleus (TRN) neurons remains unknown. Because TRN neurons signal electrically through dendrodendritic gap junctions and possibly via chemical dendritic GABAergic synapses, as well as classical axonal GABA release, this lack of knowledge is problematic. To address this issue, we made two-photon targeted patch-clamp recordings from rat TC and TRN neuron dendrites to measure bAPs directly. These recordings reveal that "tonic" and low-threshold-spike (LTS) "burst" APs in both cell types are always recorded first at the soma before backpropagating into the dendrites while undergoing substantial distance-dependent dendritic amplitude attenuation. In TC neurons, bAP attenuation strength varies according to firing mode. During LTS bursts, somatic AP half-width increases progressively with increasing spike number, allowing late-burst spikes to propagate more efficiently into the dendritic tree compared with spikes occurring at burst onset. Tonic spikes have similar somatic half-widths to late burst spikes and undergo similar dendritic attenuation. In contrast, in TRN neurons, AP properties are unchanged between LTS bursts and tonic firing and, as a result, distance-dependent dendritic attenuation remains consistent across different firing modes. Therefore, unlike LTS-associated global electrical and calcium signals, the spatial influence of bAP signaling in TC and TRN neurons is more restricted, with potentially important behavioral-state-dependent consequences for synaptic integration and plasticity in thalamic neurons. SIGNIFICANCE STATEMENT In most neurons, action potentials (APs) initiate in the axosomatic region and propagate into the dendritic tree to provide a retrograde signal that conveys information about the level of

  2. A new approach to detect the coding rule of the cortical spiking model in the information transmission.

    Science.gov (United States)

    Nazari, Soheila; Faez, Karim; Janahmadi, Mahyar

    2018-03-01

    How local field potential (LFP) fluctuations encode the sensory information received by the nervous system remains largely unknown. Likewise, transferring the translation rules between the structure of sensory stimuli and cortical oscillations to bio-inspired artificial neural networks operating at the efficiency of the nervous system remains an open puzzle. Computational neuroscience tools can help move towards this important goal, so we simulated a large-scale network of excitatory and inhibitory spiking neurons with synaptic connections consisting of AMPA and GABA currents as a model of cortical populations. The spiking network was equipped with spike-based unsupervised weight optimization based on the dynamical behavior of the excitatory (AMPA) and inhibitory (GABA) synapses using Spike Timing Dependent Plasticity (STDP) on the MNIST benchmark, and we characterized how the LFP generated by the network contained information about the input patterns. The main result of this article is that the coefficients of prolate spheroidal wave functions (PSWF) estimated from the input pattern under a mean-square-error (MSE) criterion and from the power spectrum of the LFP under a maximum-correntropy criterion (MCC) are equal. More importantly, after training, 82.3% of the PSWF coefficients match the weights connecting the cortical neurons to the classifying neurons. This high agreement between coefficients and synaptic weights (82.3%) raises the expectation that this coding rule can be extended to biological systems. Finally, we introduced the cortical spiking network as an information channel, which transmits the information of the input pattern in the form of PSWF coefficients to the power spectrum of the output generated LFP. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. Effects of spike-time-dependent plasticity on the stochastic resonance of small-world neuronal networks.

    Science.gov (United States)

    Yu, Haitao; Guo, Xinmeng; Wang, Jiang; Deng, Bin; Wei, Xile

    2014-09-01

    The phenomenon of stochastic resonance in Newman-Watts small-world neuronal networks is investigated when the strength of synaptic connections between neurons is adaptively adjusted by spike-time-dependent plasticity (STDP). It is shown that, irrespective of whether the synaptic connectivity is fixed or adaptive, the phenomenon of stochastic resonance occurs. The efficiency of network stochastic resonance can be largely enhanced by STDP in the coupling process. In particular, the resonance for adaptive coupling can reach a much larger value than that for a fixed one when the noise intensity is small or intermediate. STDP with dominant depression and a small temporal window ratio is more efficient for the transmission of a weak external signal in small-world neuronal networks. In addition, we demonstrate that the effect of stochastic resonance can be further improved via fine-tuning of the average coupling strength of the adaptive network. Furthermore, the small-world topology can significantly affect stochastic resonance of excitable neuronal networks. It is found that there exists an optimal probability of adding links at which the noise-induced transmission of the weak periodic signal peaks.
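    The stochastic-resonance effect itself, a peak in signal transmission at intermediate noise, can be reproduced with a single threshold unit driven by a subthreshold sine plus Gaussian noise. This is a textbook single-element illustration, not the small-world network model; all parameters are assumed:

    ```python
    import math, random

    def signal_power(sigma, amp=0.8, threshold=1.0, n=8000, f0=0.01, seed=1):
        """Spectral power at the drive frequency of the thresholded output
        of a subthreshold sine plus Gaussian noise (illustrative values)."""
        rng = random.Random(seed)
        re = im = 0.0
        for t in range(n):
            x = amp * math.sin(2 * math.pi * f0 * t) + rng.gauss(0.0, sigma)
            y = 1.0 if x > threshold else 0.0      # threshold crossing
            re += y * math.cos(2 * math.pi * f0 * t)
            im += y * math.sin(2 * math.pi * f0 * t)
        return (re * re + im * im) / n

    # transmission peaks at intermediate noise: the resonance signature
    powers = {sigma: signal_power(sigma) for sigma in (0.05, 0.5, 5.0)}
    ```

    With too little noise the subthreshold signal never crosses threshold; with too much, crossings are random; intermediate noise lets crossings track the signal phase, which is the effect the network-level studies above quantify.
    
    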

  4. Effects of spike-time-dependent plasticity on the stochastic resonance of small-world neuronal networks

    Energy Technology Data Exchange (ETDEWEB)

    Yu, Haitao; Guo, Xinmeng; Wang, Jiang, E-mail: jiangwang@tju.edu.cn; Deng, Bin; Wei, Xile [School of Electrical Engineering and Automation, Tianjin University, Tianjin 300072 (China)

    2014-09-01

    The phenomenon of stochastic resonance in Newman-Watts small-world neuronal networks is investigated when the strength of synaptic connections between neurons is adaptively adjusted by spike-time-dependent plasticity (STDP). It is shown that, irrespective of whether the synaptic connectivity is fixed or adaptive, the phenomenon of stochastic resonance occurs. The efficiency of network stochastic resonance can be largely enhanced by STDP in the coupling process. In particular, the resonance for adaptive coupling can reach a much larger value than that for a fixed one when the noise intensity is small or intermediate. STDP with dominant depression and a small temporal window ratio is more efficient for the transmission of a weak external signal in small-world neuronal networks. In addition, we demonstrate that the effect of stochastic resonance can be further improved via fine-tuning of the average coupling strength of the adaptive network. Furthermore, the small-world topology can significantly affect stochastic resonance of excitable neuronal networks. It is found that there exists an optimal probability of adding links at which the noise-induced transmission of the weak periodic signal peaks.

  5. Dopamine-signalled reward predictions generated by competitive excitation and inhibition in a spiking neural network model

    Directory of Open Access Journals (Sweden)

    Paul eChorley

    2011-05-01

    Full Text Available Dopaminergic neurons in the mammalian substantia nigra display characteristic phasic responses to stimuli which reliably predict the receipt of primary rewards. These responses have been suggested to encode reward prediction-errors similar to those used in reinforcement learning. Here, we propose a model of dopaminergic activity in which prediction error signals are generated by the joint action of short-latency excitation and long-latency inhibition, in a network undergoing dopaminergic neuromodulation of both spike-timing dependent synaptic plasticity and neuronal excitability. In contrast to previous models, sensitivity to recent events is maintained by the selective modification of specific striatal synapses, efferent to cortical neurons exhibiting stimulus-specific, temporally extended activity patterns. Our model shows, in the presence of significant background activity, (i) a shift in dopaminergic response from reward to reward-predicting stimuli, (ii) preservation of a response to unexpected rewards, and (iii) a precisely-timed below-baseline dip in activity observed when expected rewards are omitted.
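    The reward-prediction-error account that the spiking model implements can be illustrated at its most abstract with tabular TD(0): over training, the error moves from reward delivery to the reward-predicting cue. The learning rate and trial count below are arbitrary illustrative choices:

    ```python
    def td_learning(n_trials=500, alpha=0.1):
        """Tabular TD(0) on a single cue -> reward transition. Early in
        training the prediction error sits at reward time; late in
        training it has moved to the (unpredicted) cue."""
        v = 0.0                          # learned value of the cue
        history = []
        for _ in range(n_trials):
            delta_cue = v - 0.0          # error at unpredicted cue onset
            delta_reward = 1.0 - v       # error when the reward (=1) arrives
            v += alpha * delta_reward
            history.append((delta_cue, delta_reward))
        return history

    history = td_learning()
    ```

    The spiking model above reproduces the same shift, plus the below-baseline dip for omitted rewards, from network mechanisms (short-latency excitation versus long-latency inhibition) rather than from an explicit TD update.
    
    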

  6. A computationally efficient method for nonparametric modeling of neural spiking activity with point processes.

    Science.gov (United States)

    Coleman, Todd P; Sarma, Sridevi S

    2010-08-01

    Point-process models have been shown to be useful in characterizing neural spiking activity as a function of extrinsic and intrinsic factors. Most point-process models of neural activity are parametric, as they are often efficiently computable. However, if the actual point process does not lie in the assumed parametric class of functions, misleading inferences can arise. Nonparametric methods are attractive due to fewer assumptions, but their computation in general grows with the size of the data. We propose a computationally efficient method for nonparametric maximum likelihood estimation when the conditional intensity function, which characterizes the point process in its entirety, is assumed to be a Lipschitz continuous function but otherwise arbitrary. We show that by exploiting the structure of the problem, it becomes efficiently solvable. We next demonstrate a model selection procedure to estimate the Lipschitz parameter from data, akin to the minimum description length principle, and demonstrate consistency of our estimator under appropriate assumptions. Finally, we illustrate the effectiveness of our method with simulated neural spiking data, goldfish retinal ganglion neural data, and activity recorded in CA1 hippocampal neurons from an awake behaving rat. For the simulated data set, our method uncovers a more compact representation of the conditional intensity function when it exists. For the goldfish and rat neural data sets, we show that our nonparametric method gives a superior absolute goodness-of-fit measure for point processes compared with the most common parametric and spline-based approaches.
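    The conditional intensity function enters model fitting through the point-process likelihood; a minimal discrete-time sketch of that likelihood (a generic Poisson approximation for small bins, not the authors' nonparametric estimator) is:

```python
import numpy as np

def point_process_loglik(spikes, lam, dt):
    """Discrete-time log-likelihood of a spike train under an inhomogeneous
    Poisson point process with conditional intensity lam (Hz, one value per
    bin). spikes: 0/1 counts per bin; dt: bin width in seconds. Uses the
    standard small-bin Poisson approximation."""
    lam = np.asarray(lam, dtype=float)
    spikes = np.asarray(spikes, dtype=float)
    return float(np.sum(spikes * np.log(lam * dt) - lam * dt))

# Evaluate a flat 10 Hz intensity on a short simulated train:
rng = np.random.default_rng(0)
dt = 0.001
lam = np.full(1000, 10.0)
spikes = (rng.random(1000) < lam * dt).astype(int)
ll = point_process_loglik(spikes, lam, dt)
print(ll)
```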

  7. Spiking Excitable Semiconductor Laser as Optical Neurons: Dynamics, Clustering and Global Emerging Behaviors

    Science.gov (United States)

    2014-06-28

    N. Rubido, J. Tiana-Alsina, M. C. Torrent, and C. Masoller, Distinguishing signatures of determinism and stochasticity in spiking complex systems... Cohen, A. Aragoneses, D. Rontani, M. C. Torrent, C. Masoller and D. J. Gauthier, Multidimensional subwavelength position sensing using a... semiconductor laser with optical feedback, Opt. Lett. 38, 4331 (2013). 10. A. Aragoneses, T. Sorrentino, S. Perrone, D. J. Gauthier, M. C. Torrent and C

  8. Stochastic resonance in models of neuronal ensembles

    International Nuclear Information System (INIS)

    Chialvo, D.R.; Longtin, A.; Mueller-Gerking, J.

    1997-01-01

    Two recently suggested mechanisms for the neuronal encoding of sensory information involving the effect of stochastic resonance with aperiodic time-varying inputs are considered. It is shown, using theoretical arguments and numerical simulations, that the nonmonotonic behavior with increasing noise of the correlation measures used for the so-called aperiodic stochastic resonance (ASR) scenario does not rely on the cooperative effect typical of stochastic resonance in bistable and excitable systems. Rather, ASR with slowly varying signals is more properly interpreted as linearization by noise. Consequently, the broadening of the "resonance curve" in the multineuron stochastic resonance without tuning scenario can also be explained by this linearization. Computation of the input-output correlation as a function of both signal frequency and noise for the model system further reveals conditions where noise-induced firing with aperiodic inputs will benefit from stochastic resonance rather than linearization by noise. Thus, our study clarifies the tuning requirements for the optimal transduction of subthreshold aperiodic signals. It also shows that a single deterministic neuron can perform as well as a network when biased into a suprathreshold regime. Finally, we show that the inclusion of a refractory period in the spike-detection scheme produces a better correlation between instantaneous firing rate and input signal. Copyright 1997 The American Physical Society.

  9. Starburst amacrine cells change from spiking to nonspiking neurons during retinal development.

    Science.gov (United States)

    Zhou, Z J; Fain, G L

    1996-01-01

    The membrane excitability of cholinergic (starburst) amacrine cells was studied in the rabbit retina during postnatal development. Whole-cell patch-clamp recordings were made from 110 displaced starburst cells in a thin retinal slice preparation of rabbits between postnatal days P1 and P56. We report that displaced starburst cells undergo a dramatic transition from spiking to nonspiking, caused by a loss of voltage-gated Na currents. This change in membrane excitability occurred just after eye opening (P10), such that all of the starburst cells tested before eye opening had conspicuous tetrodotoxin-sensitive Na currents and action potentials, but none tested after the first 3 postnatal weeks had detectable Na currents or spikes. Our results suggest that starburst cells use action potentials transiently during development and probably play a functional role in visual development. These cells then cease to spike as the retina matures, presumably consistent with their role in visual processing in the mature retina. PMID:8755602

  10. Mouse neuroblastoma cell-based model and the effect of epileptic events on calcium oscillations and neural spikes

    Science.gov (United States)

    Kim, Suhwan; Jung, Unsang; Baek, Juyoung; Lee, Sangwon; Jung, Woonggyu; Kim, Jeehyun; Kang, Shinwon

    2013-01-01

    Recently, mouse neuroblastoma cells have been considered as an attractive model for the study of human neurological and prion diseases, and they have been used intensively as a model system in different areas. For example, the differentiation of neuro2a (N2A) cells, receptor-mediated ion currents, and glutamate-induced physiological responses have been actively investigated with these cells. These mouse neuroblastoma N2A cells are of interest because they grow faster than other cells of neural origin and have a number of other advantages. Here, the calcium oscillations and neural spikes of mouse neuroblastoma N2A cells under epileptic conditions are evaluated. Based on our observations of neural spikes in these cells with our proposed imaging modality, we report that they can be an important model in epileptic activity studies. We conclude that mouse neuroblastoma N2A cells produce epileptic spikes in vitro in the same way as those produced by neurons or astrocytes. This evidence suggests that increased neurotransmitter release, due to the enhancement of free calcium by 4-aminopyridine, causes the mouse neuroblastoma N2A cells to produce epileptic spikes and calcium oscillations.

  11. Mouse neuroblastoma cell based model and the effect of epileptic events on calcium oscillations and neural spikes

    Science.gov (United States)

    Kim, Suhwan; Baek, Juyeong; Jung, Unsang; Lee, Sangwon; Jung, Woonggyu; Kim, Jeehyun; Kang, Shinwon

    2013-05-01

    Recently, mouse neuroblastoma cells have been considered an attractive model for the study of human neurological and prion diseases, and they have been used intensively as a model system in different areas. Among those areas, the differentiation of neuro2a (N2A) cells, receptor-mediated ion currents, and glutamate-induced physiological responses are actively investigated. Mouse neuroblastoma N2A cells are of interest because they grow faster than other cells of neural origin and have a few other advantages. This study evaluated the calcium oscillations and neural spike recordings of mouse neuroblastoma N2A cells in an epileptic condition. Based on our observation of neural spikes in mouse N2A cells with our proposed imaging modality, we report that mouse neuroblastoma N2A cells can be an important model for studies of epileptic activity. It is concluded that mouse neuroblastoma N2A cells produce epileptic spikes in vitro in the same way as those produced by neurons or astrocytes. This evidence suggests that an increased level of neurotransmitter release, caused by the enhancement of free calcium with 4-aminopyridine, leads the mouse neuroblastoma N2A cells to produce epileptic spikes and calcium oscillations.

  12. Parallel optical control of spatiotemporal neuronal spike activity using high-frequency digital light processing technology

    Directory of Open Access Journals (Sweden)

    Jason Jerome

    2011-08-01

    Neurons in the mammalian neocortex receive inputs from and communicate back to thousands of other neurons, creating complex spatiotemporal activity patterns. The experimental investigation of these parallel dynamic interactions has been limited due to the technical challenges of monitoring or manipulating neuronal activity at that level of complexity. Here we describe a new massively parallel photostimulation system that can be used to control action potential firing in in vitro brain slices with high spatial and temporal resolution while performing extracellular or intracellular electrophysiological measurements. The system uses Digital Light Processing (DLP) technology to generate 2-dimensional (2D) stimulus patterns with >780,000 independently controlled photostimulation sites that operate at high spatial (5.4 µm) and temporal (>13 kHz) resolution. Light is projected through the quartz-glass bottom of the perfusion chamber, providing access to a large area (2.76 × 2.07 mm²) of the slice preparation. This system has the unique capability to induce temporally precise action potential firing in large groups of neurons distributed over a wide area covering several cortical columns. Parallel photostimulation opens up new opportunities for the in vitro experimental investigation of spatiotemporal neuronal interactions at a broad range of anatomical scales.

  13. Supervised spike-timing-dependent plasticity: a spatiotemporal neuronal learning rule for function approximation and decisions.

    Science.gov (United States)

    Franosch, Jan-Moritz P; Urban, Sebastian; van Hemmen, J Leo

    2013-12-01

    How can an animal learn from experience? How can it train sensors, such as the auditory or tactile system, based on other sensory input such as the visual system? Supervised spike-timing-dependent plasticity (supervised STDP) is a possible answer. Supervised STDP trains one modality using input from another one as "supervisor." Quite complex time-dependent relationships between the senses can be learned. Here we prove that under very general conditions, supervised STDP converges to a stable configuration of synaptic weights leading to a reconstruction of primary sensory input.

  14. Operant Conditioning: A Minimal Components Requirement in Artificial Spiking Neurons Designed for Bio-Inspired Robot’s Controller

    Directory of Open Access Journals (Sweden)

    André Cyr

    2014-07-01

    We demonstrate the operant conditioning (OC) learning process within a basic bio-inspired robot controller paradigm, using an artificial spiking neural network (ASNN) with a minimal component count as the artificial brain. In biological agents, OC results in behavioral changes that are learned from the consequences of previous actions, using progressive prediction adjustment triggered by reinforcers. In a robotics context, virtual and physical robots may benefit from a similar learning skill when facing unknown environments with no supervision. In this work, we demonstrate that a simple ASNN can efficiently realise many OC scenarios. The elementary learning kernel that we describe relies on a few critical neurons, synaptic links, and the integration of habituation and spike-timing-dependent plasticity (STDP) as learning rules. Using four tasks of incremental complexity, our experimental results show that such a minimal set of neural components may be sufficient to implement many OC procedures. Hence, with the described bio-inspired module, OC can be implemented in a wide range of robot controllers, including those with limited computational resources.

  15. Transmitter modulation of spike-evoked calcium transients in arousal related neurons

    DEFF Research Database (Denmark)

    Kohlmeier, Kristi Anne; Leonard, Christopher S

    2006-01-01

    Nitric oxide synthase (NOS)-containing cholinergic neurons in the laterodorsal tegmentum (LDT) influence behavioral and motivational states through their projections to the thalamus, ventral tegmental area and a brainstem 'rapid eye movement (REM)-induction' site. Action potential-evoked intracel...

  16. Phase transitions and self-organized criticality in networks of stochastic spiking neurons

    Science.gov (United States)

    Brochini, Ludmila; de Andrade Costa, Ariadne; Abadi, Miguel; Roque, Antônio C.; Stolfi, Jorge; Kinouchi, Osame

    2016-11-01

    Phase transitions and critical behavior are crucial issues in both theoretical and experimental neuroscience. We report analytic and computational results about phase transitions and self-organized criticality (SOC) in networks with general stochastic neurons. The stochastic neuron has a firing probability given by a smooth monotonic function Φ(V) of the membrane potential V, rather than a sharp firing threshold. We find that such networks can operate in several dynamic regimes (phases) depending on the average synaptic weight and the shape of the firing function Φ. In particular, we encounter both continuous and discontinuous phase transitions to absorbing states. At the continuous transition critical boundary, neuronal avalanches occur whose distributions of size and duration are given by power laws, as observed in biological neural networks. We also propose and test a new mechanism to produce SOC: the use of dynamic neuronal gains - a form of short-term plasticity probably located at the axon initial segment (AIS) - instead of depressing synapses at the dendrites (as previously studied in the literature). The new self-organization mechanism produces a slightly supercritical state, which we call SOSC, in accord with some intuitions of Alan Turing.
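    The class of models described above, stochastic neurons whose firing probability is a smooth monotonic function Φ(V) of the membrane potential, can be sketched in a mean-field flavor as follows. The linear-saturating Φ and all parameter values are illustrative choices, not necessarily those of the paper:

```python
import numpy as np

def phi(v, gamma=1.0, v_t=0.0):
    """Smooth monotonic firing probability Phi(V): linear-saturating above
    a threshold v_t (an illustrative choice of firing function)."""
    return np.clip(gamma * (v - v_t), 0.0, 1.0)

def simulate(n=1000, w=2.0, mu=0.5, steps=200, seed=1):
    """Discrete-time network of stochastic spiking neurons: each neuron
    fires with probability Phi(V); fired neurons reset to 0, the rest keep
    a leak (factor mu) plus the mean-field synaptic input w * rho, where
    rho is the fraction of neurons that fired in the previous step."""
    rng = np.random.default_rng(seed)
    v = rng.random(n)
    activity = []
    for _ in range(steps):
        fired = rng.random(n) < phi(v)
        rho = fired.mean()
        activity.append(float(rho))
        v = np.where(fired, 0.0, mu * v + w * rho)
    return activity

# With a large average synaptic weight the network settles into an active
# (supercritical) phase; lowering w drives it toward the absorbing state rho = 0.
act = simulate()
print(act[-1])
```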

  17. Phase transitions and self-organized criticality in networks of stochastic spiking neurons.

    Science.gov (United States)

    Brochini, Ludmila; de Andrade Costa, Ariadne; Abadi, Miguel; Roque, Antônio C; Stolfi, Jorge; Kinouchi, Osame

    2016-11-07

    Phase transitions and critical behavior are crucial issues in both theoretical and experimental neuroscience. We report analytic and computational results about phase transitions and self-organized criticality (SOC) in networks with general stochastic neurons. The stochastic neuron has a firing probability given by a smooth monotonic function Φ(V) of the membrane potential V, rather than a sharp firing threshold. We find that such networks can operate in several dynamic regimes (phases) depending on the average synaptic weight and the shape of the firing function Φ. In particular, we encounter both continuous and discontinuous phase transitions to absorbing states. At the continuous transition critical boundary, neuronal avalanches occur whose distributions of size and duration are given by power laws, as observed in biological neural networks. We also propose and test a new mechanism to produce SOC: the use of dynamic neuronal gains - a form of short-term plasticity probably located at the axon initial segment (AIS) - instead of depressing synapses at the dendrites (as previously studied in the literature). The new self-organization mechanism produces a slightly supercritical state, which we call SOSC, in accord with some intuitions of Alan Turing.

  18. Multiplicative multifractal modeling and discrimination of human neuronal activity

    International Nuclear Information System (INIS)

    Zheng Yi; Gao Jianbo; Sanchez, Justin C.; Principe, Jose C.; Okun, Michael S.

    2005-01-01

    Understanding neuronal firing patterns is one of the most important problems in theoretical neuroscience. It is also very important for clinical neurosurgery. In this Letter, we introduce a computational procedure to examine whether neuronal firing recordings could be characterized by cascade multiplicative multifractals. By analyzing raw recording data as well as generated spike train data from 3 patients collected in two brain areas, the globus pallidus externa (GPe) and the globus pallidus interna (GPi), we show that the neural firings are consistent with a multifractal process over a certain time scale range (t_1, t_2), where t_1 is argued to be not smaller than the mean inter-spike interval of neuronal firings, while t_2 may be related to the time that neuronal signals propagate in the major neural branching structures pertinent to GPi and GPe. The generalized dimension spectrum D_q effectively differentiates the two brain areas, both intra- and inter-patient. For distinguishing between GPe and GPi, it is further shown that the cascade model is more effective than the methods recently examined by Schiff et al. as well as the Fano factor analysis. Therefore, the methodology may be useful in developing computer-aided tools to help clinicians perform precision neurosurgery in the operating room.

  19. Effects of Hypocretin/Orexin and Major Transmitters of Arousal on Fast Spiking Neurons in Mouse Cortical Layer 6B.

    Science.gov (United States)

    Wenger Combremont, Anne-Laure; Bayer, Laurence; Dupré, Anouk; Mühlethaler, Michel; Serafin, Mauro

    2016-08-01

    Fast spiking (FS) GABAergic neurons are thought to be involved in the generation of high-frequency cortical rhythms during the waking state. We previously showed that cortical layer 6b (L6b) was a specific target for the wake-promoting transmitter, hypocretin/orexin (hcrt/orx). Here, we have investigated whether L6b FS cells were sensitive to hcrt/orx and other transmitters associated with cortical activation. Recordings were thus made from L6b FS cells in either wild-type mice or in transgenic mice in which GFP-positive GABAergic cells are parvalbumin positive. Whereas in a control condition hcrt/orx induced a strong increase in the frequency, but not amplitude, of spontaneous synaptic currents, in the presence of TTX it had no effect at all on miniature synaptic currents. The hcrt/orx effect was thus presynaptic, although it acted not on glutamatergic terminals but rather on neighboring cells. In contrast, noradrenaline and acetylcholine depolarized and excited these cells through a direct postsynaptic action. Neurotensin, which is colocalized in hcrt/orx neurons, also depolarized and excited these cells, but the effect was indirect. Morphologically, these cells exhibited basket-like features. These results suggest that hcrt/orx, noradrenaline, acetylcholine, and neurotensin could contribute to high-frequency cortical activity through an action on L6b GABAergic FS cells. © The Author 2016. Published by Oxford University Press.

  20. Contributions of adaptation currents to dynamic spike threshold on slow timescales: Biophysical insights from conductance-based models

    Science.gov (United States)

    Yi, Guosheng; Wang, Jiang; Wei, Xile; Deng, Bin; Li, Huiyan; Che, Yanqiu

    2017-06-01

    Spike-frequency adaptation (SFA) mediated by various adaptation currents, such as voltage-gated K+ current (IM), Ca2+-gated K+ current (IAHP), or Na+-activated K+ current (IKNa), exists in many types of neurons, and has been shown to effectively shape their information transmission properties on slow timescales. Here we use conductance-based models to investigate how the activation of three adaptation currents regulates the threshold voltage for action potential (AP) initiation during the course of SFA. It is observed that the spike threshold becomes depolarized and the rate of membrane depolarization (dV/dt) preceding the AP is reduced as adaptation currents reduce the firing rate. This indicates that the presence of inhibitory adaptation currents enables the neuron to generate a dynamic threshold, inversely correlated with the preceding dV/dt, on timescales slower than the fast dynamics of AP generation. By analyzing the interactions of ionic currents at subthreshold potentials, we find that the activation of adaptation currents increases the outward level of net membrane current prior to AP initiation, which antagonizes the inward Na+ current, resulting in a depolarized threshold and lower dV/dt from one AP to the next. Our simulations demonstrate that the threshold dynamics on slow timescales is a secondary effect caused by the activation of adaptation currents. These findings provide a biophysical interpretation of the relationship between adaptation currents and spike threshold.
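    The basic SFA phenomenon discussed above can be illustrated with a much simpler current-based adaptive integrate-and-fire sketch, a stand-in for the study's conductance-based models; all parameters are illustrative:

```python
import numpy as np

def adaptive_lif(i_ext=1.5, tau_m=10.0, tau_w=100.0, b=0.1, v_th=1.0,
                 dt=0.1, t_max=500.0):
    """Leaky integrate-and-fire neuron with a spike-triggered adaptation
    current w (increment b per spike, decay time tau_w). A current-based
    stand-in for conductance-based IM/IAHP/IKNa mechanisms; parameters
    illustrative. Returns spike times in ms."""
    v, w, spikes = 0.0, 0.0, []
    for step in range(int(t_max / dt)):
        v += dt / tau_m * (-v + i_ext - w)  # membrane integrates drive minus adaptation
        w += dt / tau_w * (-w)              # adaptation current decays
        if v >= v_th:
            spikes.append(step * dt)
            v = 0.0   # reset after the spike
            w += b    # each spike strengthens adaptation
    return spikes

spikes = adaptive_lif()
isis = np.diff(spikes)
# Spike-frequency adaptation: successive inter-spike intervals lengthen as w builds up.
print(isis[0], isis[-1])
```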

  1. Extremely Scalable Spiking Neuronal Network Simulation Code: From Laptops to Exascale Computers.

    Science.gov (United States)

    Jordan, Jakob; Ippen, Tammo; Helias, Moritz; Kitayama, Itaru; Sato, Mitsuhisa; Igarashi, Jun; Diesmann, Markus; Kunkel, Susanne

    2018-01-01

    State-of-the-art software tools for neuronal network simulations scale to the largest computing systems available today and enable investigations of large-scale networks of up to 10 % of the human cortex at a resolution of individual neurons and synapses. Due to an upper limit on the number of incoming connections of a single neuron, network connectivity becomes extremely sparse at this scale. To manage computational costs, simulation software ultimately targeting the brain scale needs to fully exploit this sparsity. Here we present a two-tier connection infrastructure and a framework for directed communication among compute nodes accounting for the sparsity of brain-scale networks. We demonstrate the feasibility of this approach by implementing the technology in the NEST simulation code and we investigate its performance in different scaling scenarios of typical network simulations. Our results show that the new data structures and communication scheme prepare the simulation kernel for post-petascale high-performance computing facilities without sacrificing performance in smaller systems.

  2. Extremely Scalable Spiking Neuronal Network Simulation Code: From Laptops to Exascale Computers

    Science.gov (United States)

    Jordan, Jakob; Ippen, Tammo; Helias, Moritz; Kitayama, Itaru; Sato, Mitsuhisa; Igarashi, Jun; Diesmann, Markus; Kunkel, Susanne

    2018-01-01

    State-of-the-art software tools for neuronal network simulations scale to the largest computing systems available today and enable investigations of large-scale networks of up to 10 % of the human cortex at a resolution of individual neurons and synapses. Due to an upper limit on the number of incoming connections of a single neuron, network connectivity becomes extremely sparse at this scale. To manage computational costs, simulation software ultimately targeting the brain scale needs to fully exploit this sparsity. Here we present a two-tier connection infrastructure and a framework for directed communication among compute nodes accounting for the sparsity of brain-scale networks. We demonstrate the feasibility of this approach by implementing the technology in the NEST simulation code and we investigate its performance in different scaling scenarios of typical network simulations. Our results show that the new data structures and communication scheme prepare the simulation kernel for post-petascale high-performance computing facilities without sacrificing performance in smaller systems. PMID:29503613

  3. Temporal Information Processing and Stability Analysis of the MHSN Neuron Model in DDF

    Directory of Open Access Journals (Sweden)

    Saket Kumar Choudhary

    2016-12-01

    Implementation of a neuron-like information-processing structure at the hardware level is a pressing research problem. In this article, we analyze the modified hybrid spiking neuron model (the MHSN model) in the distributed delay framework (DDF) from the point of view of hardware-level implementation. We investigate its temporal information processing capability in terms of the inter-spike-interval (ISI) distribution. We also perform a stability analysis of the MHSN model, in which we compute the nullclines, steady-state solution, and eigenvalues corresponding to the MHSN model. During phase plane analysis, we notice that the MHSN model generates limit cycle oscillations, an important phenomenon in many biological processes. The qualitative behavior of these limit cycles does not change with variation in the applied input stimulus; however, delay affects the spiking activity and alters the duration of the cycle.
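    The ISI distribution used above as the measure of temporal information processing can be computed empirically as follows (a generic histogram sketch, not tied to the MHSN model):

```python
import numpy as np

def isi_distribution(spike_times, bin_width=1.0, t_max=50.0):
    """Empirical inter-spike-interval (ISI) distribution of a spike train.
    Returns histogram bin edges (ms) and normalized counts, i.e. a
    probability mass per bin. Bin width and range are illustrative."""
    isis = np.diff(np.sort(np.asarray(spike_times, dtype=float)))
    edges = np.arange(0.0, t_max + bin_width, bin_width)
    counts, _ = np.histogram(isis, bins=edges)
    return edges, counts / counts.sum()

# Four ISIs (10, 12, 9, 14 ms) from five spike times:
edges, p = isi_distribution([0.0, 10.0, 22.0, 31.0, 45.0])
print(p.sum())
```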

  4. Stochastic Variational Learning in Recurrent Spiking Networks

    Directory of Open Access Journals (Sweden)

    Danilo Jimenez Rezende

    2014-04-01

    The ability to learn and perform statistical inference with biologically plausible recurrent networks of spiking neurons is an important step towards understanding perception and reasoning. Here we derive and investigate a new learning rule for recurrent spiking networks with hidden neurons, combining principles from variational learning and reinforcement learning. Our network defines a generative model over spike-train histories, and the derived learning rule has the form of a local spike-timing-dependent plasticity rule modulated by global factors (neuromodulators) conveying information about "novelty" on a statistically rigorous ground. Simulations show that our model is able to learn both stationary and non-stationary patterns of spike trains. We also propose one experiment that could potentially be performed with animals in order to test the dynamics of the predicted novelty signal.

  5. Learning Pitch with STDP: A Computational Model of Place and Temporal Pitch Perception Using Spiking Neural Networks.

    Directory of Open Access Journals (Sweden)

    Nafise Erfanian Saeedi

    2016-04-01

    Pitch perception is important for understanding speech prosody, music perception, recognizing tones in tonal languages, and perceiving speech in noisy environments. The two principal pitch perception theories consider the place of maximum neural excitation along the auditory nerve and the temporal pattern of the auditory neurons' action potentials (spikes) as pitch cues. This paper describes a biophysical mechanism by which fine-structure temporal information can be extracted from the spikes generated at the auditory periphery. Deriving meaningful pitch-related information from spike times requires neural structures specialized in capturing synchronous or correlated activity from amongst neural events. The emergence of such pitch-processing neural mechanisms is described through a computational model of auditory processing. Simulation results show that a correlation-based, unsupervised, spike-based form of Hebbian learning can explain the development of the neural structures required for recognizing the pitch of simple and complex tones, with or without the fundamental frequency. The temporal code is robust to variations in the spectral shape of the signal and thus can explain the phenomenon of pitch constancy.

  6. Auditory information coding by modeled cochlear nucleus neurons.

    Science.gov (United States)

    Wang, Huan; Isik, Michael; Borst, Alexander; Hemmert, Werner

    2011-06-01

    In this paper we use information theory to quantify the information in the output spike trains of modeled cochlear nucleus globular bushy cells (GBCs). GBCs are part of the sound localization pathway. They are known for their precise temporal processing, and they code amplitude modulations with high fidelity. Here we investigated the information transmission for a natural sound, a recorded vowel. We conclude that the maximum information transmission rate for a single neuron was close to 1,050 bits/s, which corresponds to approximately 5.8 bits per spike. For quasi-periodic signals like voiced speech, the transmitted information saturated as word duration increased. In general, approximately 80% of the available information from the spike trains was transmitted within about 20 ms. Transmitted information for speech signals concentrated around formant frequency regions. The efficiency of neural coding was above 60% up to the highest temporal resolution we investigated (20 μs). The increase in transmitted information at that precision indicates that these neurons are able to code information with extremely high fidelity, which is required for sound localization. On the other hand, only 20% of the information was captured when the temporal resolution was reduced to 4 ms. As the temporal resolution of most speech recognition systems is limited to less than 10 ms, this massive information loss might be one of the reasons responsible for the lack of noise robustness of these systems.
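    Transmitted-information figures like those quoted above come from information-theoretic estimates over spike trains; a naive direct-method entropy sketch conveys the flavor, though it omits the finite-size bias corrections a real analysis requires:

```python
import numpy as np
from collections import Counter

def word_entropy(spike_bins, word_len):
    """Naive 'direct method' entropy (bits per word) of a binary spike
    train chopped into non-overlapping words of word_len bins. The plug-in
    estimate is biased for small samples; illustrative only."""
    n_words = len(spike_bins) // word_len
    words = [tuple(spike_bins[i * word_len:(i + 1) * word_len])
             for i in range(n_words)]
    counts = Counter(words)
    p = np.array(list(counts.values()), dtype=float) / n_words
    return float(-np.sum(p * np.log2(p)))

# A fair-coin spike train carries about 1 bit per 1-bin word:
rng = np.random.default_rng(0)
print(word_entropy(rng.integers(0, 2, 10000).tolist(), 1))
```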

  7. Reconstruction of audio waveforms from spike trains of artificial cochlea models

    Science.gov (United States)

    Zai, Anja T.; Bhargava, Saurabh; Mesgarani, Nima; Liu, Shih-Chii

    2015-01-01

    Spiking cochlea models describe the analog processing and spike generation process within the biological cochlea. Reconstructing the audio input from the artificial cochlea spikes is therefore useful for understanding the fidelity of the information preserved in the spikes. The reconstruction process is challenging particularly for spikes from the mixed signal (analog/digital) integrated circuit (IC) cochleas because of multiple non-linearities in the model and the additional variance caused by random transistor mismatch. This work proposes an offline method for reconstructing the audio input from spike responses of both a particular spike-based hardware model called the AEREAR2 cochlea and an equivalent software cochlea model. This method was previously used to reconstruct the auditory stimulus based on the peri-stimulus histogram of spike responses recorded in the ferret auditory cortex. The reconstructed audio from the hardware cochlea is evaluated against an analogous software model using objective measures of speech quality and intelligibility; and further tested in a word recognition task. The reconstructed audio under low signal-to-noise (SNR) conditions (SNR < –5 dB) gives a better classification performance than the original SNR input in this word recognition task. PMID:26528113

  8. Reconstruction of audio waveforms from spike trains of artificial cochlea models

    Directory of Open Access Journals (Sweden)

    Anja Zai

    2015-10-01

    Spiking cochlea models describe the analog processing and spike generation process within the biological cochlea. Reconstructing the audio input from the artificial cochlea spikes is therefore useful for understanding the fidelity of the information preserved in the spikes. The reconstruction process is challenging particularly for spikes from the mixed-signal (analog/digital) integrated circuit (IC) cochleas because of multiple nonlinearities in the model and the additional variance caused by random transistor mismatch. This work proposes an offline method for reconstructing the audio input from spike responses of both a particular spike-based hardware model called the AEREAR2 cochlea and an equivalent software cochlea model. This method was previously used to reconstruct the auditory stimulus based on the peri-stimulus histogram of spike responses recorded in the ferret auditory cortex. The reconstructed audio from the hardware cochlea is evaluated against an analogous software model using objective measures of speech quality and intelligibility, and further tested in a word recognition task. The reconstructed audio under low signal-to-noise ratio (SNR) conditions (SNR < -5 dB) gives a better classification performance than the original SNR input in this word recognition task.

  9. Spiking patterns of a hippocampus model in electric fields

    International Nuclear Information System (INIS)

    Men Cong; Wang Jiang; Qin Ying-Mei; Wei Xi-Le; Deng Bin; Che Yan-Qiu

    2011-01-01

    We develop a model of CA3 neurons embedded in a resistive array to mimic the effects of electric fields from a new perspective. Effects of DC and sinusoidal electric fields on firing patterns in CA3 neurons are investigated in this study. The firing patterns can be switched from quiescence to bursting, or from bursting to fast periodic firing, as the DC electric field intensity increases. It is also found that the firing activities are sensitive to the frequency and amplitude of the sinusoidal electric field. Different phase-locking states and chaotic firing regions are observed in the parameter space of frequency and amplitude. These findings are qualitatively in accordance with the results of relevant experimental and numerical studies. They imply that external or endogenous electric fields can modulate the neural code in the brain, and may help in developing control strategies based on electric fields to treat neural diseases such as epilepsy. (interdisciplinary physics and related areas of science and technology)

  10. Decision making under uncertainty in a spiking neural network model of the basal ganglia.

    Science.gov (United States)

    Héricé, Charlotte; Khalil, Radwa; Moftah, Marie; Boraud, Thomas; Guthrie, Martin; Garenne, André

    2016-12-01

    The mechanisms of decision-making and action selection are generally thought to be under the control of parallel cortico-subcortical loops connecting back to distinct areas of cortex through the basal ganglia and processing motor, cognitive and limbic modalities of decision-making. We have used these properties to develop and extend a connectionist model at a spiking neuron level based on a previous rate model approach. This model is demonstrated on decision-making tasks that have been studied in primates, for which the electrophysiology has been interpreted to show that the decision is made in two steps. To model this, we have used two parallel loops, each of which performs decision-making based on interactions between positive and negative feedback pathways. This model is able to perform two-level decision-making as in primates. We show here that, before learning, synaptic noise is sufficient to drive the decision-making process and that, after learning, the decision is based on the choice that has proven most likely to be rewarded. The model is then submitted to lesion tests, reversal learning and extinction protocols. We show that, under these conditions, it behaves in a consistent manner and provides predictions in accordance with observed experimental data.

  11. Modeling activity-dependent plasticity in BCM spiking neural networks with application to human behavior recognition.

    Science.gov (United States)

    Meng, Yan; Jin, Yaochu; Yin, Jun

    2011-12-01

    Spiking neural networks (SNNs) are considered to be computationally more powerful than conventional NNs. However, the capability of SNNs in solving complex real-world problems remains to be demonstrated. In this paper, we propose a substantial extension of the Bienenstock, Cooper, and Munro (BCM) SNN model, in which the plasticity parameters are regulated by a gene regulatory network (GRN). Meanwhile, the dynamics of the GRN is dependent on the activation levels of the BCM neurons. We term the whole model "GRN-BCM." To demonstrate its computational power, we first compare the GRN-BCM with a standard BCM, a hidden Markov model, and a reservoir computing model on a complex time series classification problem. Simulation results indicate that the GRN-BCM significantly outperforms the compared models. The GRN-BCM is then applied to two widely used datasets for human behavior recognition. Comparative results on the two datasets suggest that the GRN-BCM is very promising for human behavior recognition, although the current experiments are still limited to the scenarios in which only one object is moving in the considered video sequences.

  12. Conduction Delay Learning Model for Unsupervised and Supervised Classification of Spatio-Temporal Spike Patterns.

    Science.gov (United States)

    Matsubara, Takashi

    2017-01-01

    Precise spike timing is considered to play a fundamental role in communications and signal processing in biological neural networks. Understanding the mechanism of spike timing adjustment would deepen our understanding of biological systems and enable advanced engineering applications such as efficient computational architectures. However, the biological mechanisms that adjust and maintain spike timing remain unclear. Existing algorithms adopt a supervised approach, which adjusts the axonal conduction delay and synaptic efficacy until the spike timings approximate the desired timings. This study proposes a spike timing-dependent learning model that adjusts the axonal conduction delay and synaptic efficacy in both unsupervised and supervised manners. The proposed learning algorithm approximates the Expectation-Maximization algorithm, and classifies the input data encoded into spatio-temporal spike patterns. Even in the supervised classification, the algorithm requires no external spikes indicating the desired spike timings unlike existing algorithms. Furthermore, because the algorithm is consistent with biological models and hypotheses found in existing biological studies, it could capture the mechanism underlying biological delay learning.

  13. Conduction Delay Learning Model for Unsupervised and Supervised Classification of Spatio-Temporal Spike Patterns

    Directory of Open Access Journals (Sweden)

    Takashi Matsubara

    2017-11-01

    Full Text Available Precise spike timing is considered to play a fundamental role in communications and signal processing in biological neural networks. Understanding the mechanism of spike timing adjustment would deepen our understanding of biological systems and enable advanced engineering applications such as efficient computational architectures. However, the biological mechanisms that adjust and maintain spike timing remain unclear. Existing algorithms adopt a supervised approach, which adjusts the axonal conduction delay and synaptic efficacy until the spike timings approximate the desired timings. This study proposes a spike timing-dependent learning model that adjusts the axonal conduction delay and synaptic efficacy in both unsupervised and supervised manners. The proposed learning algorithm approximates the Expectation-Maximization algorithm, and classifies the input data encoded into spatio-temporal spike patterns. Even in the supervised classification, the algorithm requires no external spikes indicating the desired spike timings unlike existing algorithms. Furthermore, because the algorithm is consistent with biological models and hypotheses found in existing biological studies, it could capture the mechanism underlying biological delay learning.

  14. Differential Activation of Fast-Spiking and Regular-Firing Neuron Populations During Movement and Reward in the Dorsal Medial Frontal Cortex

    Science.gov (United States)

    Insel, Nathan; Barnes, Carol A.

    2015-01-01

    The medial prefrontal cortex is thought to be important for guiding behavior according to an animal's expectations. Efforts to decode the region have focused not only on the question of what information it computes, but also how distinct circuit components become engaged during behavior. We find that the activity of regular-firing, putative projection neurons contains rich information about behavioral context and firing fields cluster around reward sites, while activity among putative inhibitory and fast-spiking neurons is most associated with movement and accompanying sensory stimulation. These dissociations were observed even between adjacent neurons with apparently reciprocal, inhibitory–excitatory connections. A smaller population of projection neurons with burst-firing patterns did not show clustered firing fields around rewards; these neurons, although heterogeneous, were generally less selective for behavioral context than regular-firing cells. The data suggest a network that tracks an animal's behavioral situation while, at the same time, regulating excitation levels to emphasize high valued positions. In this scenario, the function of fast-spiking inhibitory neurons is to constrain network output relative to incoming sensory flow. This scheme could serve as a bridge between abstract sensorimotor information and single-dimensional codes for value, providing a neural framework to generate expectations from behavioral state. PMID:24700585

  15. STICK: Spike Time Interval Computational Kernel, a Framework for General Purpose Computation Using Neurons, Precise Timing, Delays, and Synchrony.

    Science.gov (United States)

    Lagorce, Xavier; Benosman, Ryad

    2015-11-01

    There has been significant research over the past two decades in developing new platforms for spiking neural computation. Current neural computers are primarily developed to mimic biology. They use neural networks, which can be trained to perform specific tasks, mainly to solve pattern recognition problems. These machines can do more than simulate biology; they allow us to rethink our current paradigm of computation. The ultimate goal is to develop brain-inspired general purpose computation architectures that can breach the current bottleneck introduced by the von Neumann architecture. This work proposes a new framework for such a machine. We show that the use of neuron-like units with precise timing representation, synaptic diversity, and temporal delays allows us to set up a complete, scalable, and compact computation framework. The framework provides both linear and nonlinear operations, allowing us to represent and solve any function. We show its usability by solving real use cases, from simple differential equations to sets of nonlinear differential equations leading to chaotic attractors.
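
    STICK represents values in the precise timing between spikes. A minimal sketch of that interval-coding idea follows; the constants `T_MIN` and `T_COD` are illustrative, not the paper's values:

    ```python
    # Interval coding sketch: a scalar x in [0, 1] is carried by the delay
    # between a pair of spikes. T_MIN and T_COD are illustrative constants.
    T_MIN = 10.0   # ms, interval that encodes x = 0
    T_COD = 100.0  # ms, span of the coding range

    def encode(x):
        """Return the pair of spike times representing x in [0, 1]."""
        assert 0.0 <= x <= 1.0
        return (0.0, T_MIN + x * T_COD)

    def decode(pair):
        """Recover x from the inter-spike interval."""
        t0, t1 = pair
        return ((t1 - t0) - T_MIN) / T_COD
    ```

    Because the value lives in a time difference rather than a rate, arithmetic on such codes can be implemented with synaptic delays, which is the core of the framework's computation scheme.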

  16. Simple cortical and thalamic neuron models for digital arithmetic circuit implementation

    Directory of Open Access Journals (Sweden)

    Takuya eNanami

    2016-05-01

    Full Text Available Trade-off between reproducibility of neuronal activities and computational efficiency is one of the crucial subjects in computational neuroscience and neuromorphic engineering. A wide variety of neuronal models have been studied from different viewpoints. The digital spiking silicon neuron (DSSN) model is a qualitative model that focuses on efficient implementation by digital arithmetic circuits. We expanded the DSSN model and found appropriate parameter sets with which it reproduces the dynamical behaviors of the ionic-conductance models of four classes of cortical and thalamic neurons. We first developed a 4-variable model by reducing the number of variables in the ionic-conductance models and elucidated its mathematical structures using bifurcation analysis. Then, expanded DSSN models were constructed that reproduce these mathematical structures and capture the characteristic behavior of each neuron class. We confirmed that statistics of the neuronal spike sequences are similar in the DSSN and the ionic-conductance models. Computational cost of the DSSN model is larger than that of the recent sophisticated Integrate-and-Fire-based models, but smaller than that of the ionic-conductance models. This model is intended to provide another meeting point for the above trade-off that satisfies the demand for large-scale neuronal network simulation with closer-to-biology models.

  17. Noise and Synchronization Analysis of the Cold-Receptor Neuronal Network Model

    Directory of Open Access Journals (Sweden)

    Ying Du

    2014-01-01

    Full Text Available This paper analyzes the dynamics of the cold-receptor neural network model. First, it examines noise effects on neuronal stimuli in the model. ISI plots show considerable differences between purely deterministic simulations and noisy ones. The ISI-distance is used to quantify the noise effects on spike trains. It is found that spike trains in such neural models can be strongly affected by noise at different temperatures, and that spike trains become more variable as the noise intensity increases. The synchronization of neuronal networks with different connectivity patterns is also studied. It is shown that complete synchronization is harder to achieve for chaotic and high-period firing patterns than for single-spike and low-period patterns. The neuronal network exhibits various patterns of firing synchronization as key parameters such as the coupling strength are varied. Different types of firing synchronization are diagnosed by a correlation coefficient and the ISI-distance method. The simulations show that the synchronization status of the neurons is related to the network connectivity patterns.
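
    The ISI-distance used here compares, at each point in time, the current inter-spike intervals of two trains. A simplified sketch of that measure follows; the edge handling and sampling grid are our choices, not necessarily those of the paper:

    ```python
    import numpy as np

    def isi_distance(sa, sb, t_grid):
        """Simplified ISI-distance: time-average of the normalized difference
        between the current inter-spike intervals of two sorted spike trains."""
        def current_isi(s, t):
            i = np.searchsorted(s, t)
            lo, hi = s[max(i - 1, 0)], s[min(i, len(s) - 1)]
            return max(hi - lo, 1e-12)   # clamp degenerate edge intervals
        vals = []
        for t in t_grid:
            xa, xb = current_isi(sa, t), current_isi(sb, t)
            vals.append(abs(xa - xb) / max(xa, xb))
        return float(np.mean(vals))
    ```

    Identical trains score 0; a train firing at half the rate of another scores 0.5 wherever both intervals are regular, which makes the measure easy to sanity-check.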

  18. One-dimensional map-based neuron model: A logistic modification

    International Nuclear Information System (INIS)

    Mesbah, Samineh; Moghtadaei, Motahareh; Hashemi Golpayegani, Mohammad Reza; Towhidkhah, Farzad

    2014-01-01

    A one-dimensional map is proposed for modeling some neuronal activities, including different spiking and bursting behaviors. The model is obtained by applying some modifications to the well-known Logistic map and is named the Modified and Confined Logistic (MCL) model. Map-based neuron models are known as phenomenological models and have recently been widely applied in modeling tasks due to their computational efficacy. Most discrete map-based models involve two variables representing the slow-fast prototype. There are also some one-dimensional maps that can replicate some neuronal activities. However, the existence of four bifurcation parameters in the MCL model gives rise to reproduction of spiking behavior with control over the frequency of the spikes, and imitation of chaotic and regular bursting responses concurrently. It is also shown that the proposed model has the potential to reproduce more realistic bursting activity by adding a second variable. Moreover, the MCL model is able to replicate a considerable number of the experimentally observed neuronal responses introduced in Izhikevich (2004) [23]. Some analytical and numerical analyses of the MCL model dynamics are presented to explain the emergence of complex dynamics from this one-dimensional map.
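
    The abstract does not give the MCL equations, but the underlying idea of reading spike-like events off a one-dimensional map can be illustrated with the plain Logistic map in its chaotic regime; the threshold read-out below is purely illustrative and is not the MCL model:

    ```python
    def logistic_trajectory(r=4.0, x0=0.2, n=500):
        """Iterate x_{k+1} = r * x_k * (1 - x_k); r = 4 is the chaotic regime."""
        xs = [x0]
        for _ in range(n - 1):
            xs.append(r * xs[-1] * (1.0 - xs[-1]))
        return xs

    def threshold_spikes(xs, theta=0.9):
        """Read an upward threshold crossing as one 'spike' (illustrative)."""
        return [i for i in range(1, len(xs)) if xs[i] >= theta > xs[i - 1]]

    traj = logistic_trajectory()
    spk = threshold_spikes(traj)
    ```

    The chaotic orbit stays confined to [0, 1] while producing irregularly spaced crossings, which is the kind of aperiodic "firing" such map-based models exploit.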

  19. Volterra dendritic stimulus processors and biophysical spike generators with intrinsic noise sources

    OpenAIRE

    Lazar, Aurel A.; Zhou, Yiyin

    2014-01-01

    We consider a class of neural circuit models with internal noise sources arising in sensory systems. The basic neuron model in these circuits consists of a nonlinear dendritic stimulus processor (DSP) cascaded with a biophysical spike generator (BSG). The nonlinear dendritic processor is modeled as a set of nonlinear operators that are assumed to have a Volterra series representation. Biophysical point neuron models, such as the Hodgkin-Huxley neuron, are used to model the spike generator. We...

  20. Spiking neural network for recognizing spatiotemporal sequences of spikes

    International Nuclear Information System (INIS)

    Jin, Dezhe Z.

    2004-01-01

    Sensory neurons in many brain areas spike with precise timing to stimuli with temporal structures, and encode temporally complex stimuli into spatiotemporal spikes. How the downstream neurons read out such neural code is an important unsolved problem. In this paper, we describe a decoding scheme using a spiking recurrent neural network. The network consists of excitatory neurons that form a synfire chain, and two globally inhibitory interneurons of different types that provide delayed feedforward and fast feedback inhibition, respectively. The network signals recognition of a specific spatiotemporal sequence when the last excitatory neuron down the synfire chain spikes, which happens if and only if that sequence was present in the input spike stream. The recognition scheme is invariant to variations in the intervals between input spikes within some range. The computation of the network can be mapped into that of a finite state machine. Our network provides a simple way to decode spatiotemporal spikes with diverse types of neurons
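
    The stated mapping to a finite state machine can be sketched directly: advance one state per expected neuron, provided each spike follows the previous one within a tolerated interval, and reset on any violation. The tolerance window below is hypothetical, standing in for the network's range of interval invariance:

    ```python
    def recognize(spike_stream, template, tol=(5.0, 15.0)):
        """Finite-state sketch of the synfire-chain recognizer: accept when the
        whole template has been matched with admissible inter-spike intervals."""
        lo, hi = tol
        state, t_prev = 0, None
        for neuron, t in spike_stream:   # stream of (neuron_id, spike_time)
            ok_interval = t_prev is None or lo <= t - t_prev <= hi
            if neuron == template[state] and ok_interval:
                state, t_prev = state + 1, t
                if state == len(template):
                    return True          # last unit of the chain "spiked"
            else:
                # reset; a stray first-template spike restarts the chain
                state, t_prev = (1, t) if neuron == template[0] else (0, None)
        return False
    ```

    As in the network, recognition fires if and only if the sequence appears with intervals inside the tolerated range; out-of-order or too-slow input falls back to the start state.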

  1. Parametric model to estimate containment loads following an ex-vessel steam spike

    International Nuclear Information System (INIS)

    Lopez, R.; Hernandez, J.; Huerta, A.

    1998-01-01

    This paper describes the use of a relatively simple parametric model to estimate containment loads following an ex-vessel steam spike. The study was motivated because several PSAs have identified containment loads accompanying reactor vessel failures as a major contributor to early containment failure. The paper includes a detailed description of the simple but physically sound parametric model which was adopted to estimate containment loads following a steam spike into the reactor cavity. (author)

  2. Electricity market price spike analysis by a hybrid data model and feature selection technique

    International Nuclear Information System (INIS)

    Amjady, Nima; Keynia, Farshid

    2010-01-01

    In a competitive electricity market, energy price forecasting is an important activity for both suppliers and consumers. For this reason, many techniques have been proposed to predict electricity market prices in recent years. However, electricity price is a complex volatile signal exhibiting many spikes. Most electricity price forecasting techniques focus on normal price prediction, while price spike forecasting is a different and more complex prediction process. Price spike forecasting has two main aspects: prediction of price spike occurrence and of its value. In this paper, a novel technique for price spike occurrence prediction is presented, composed of a new hybrid data model, a novel feature selection technique and an efficient forecast engine. The hybrid data model includes both wavelet and time domain variables as well as calendar indicators, comprising a large candidate input set. The set is refined by the proposed feature selection technique, which evaluates both relevancy and redundancy of the candidate inputs. The forecast engine is a probabilistic neural network, which is fed with the candidate inputs selected by the feature selection technique and predicts price spike occurrence. The efficiency of the whole proposed method for price spike occurrence forecasting is evaluated by means of real data from the Queensland and PJM electricity markets. (author)

  3. A Multiple-Plasticity Spiking Neural Network Embedded in a Closed-Loop Control System to Model Cerebellar Pathologies.

    Science.gov (United States)

    Geminiani, Alice; Casellato, Claudia; Antonietti, Alberto; D'Angelo, Egidio; Pedrocchi, Alessandra

    2018-06-01

    The cerebellum plays a crucial role in sensorimotor control, and cerebellar disorders compromise adaptation and learning of motor responses. However, the link between alterations at the network level and cerebellar dysfunction is still unclear. In principle, this understanding would benefit from the development of an artificial system embedding the salient neuronal and plastic properties of the cerebellum and operating in closed loop. To this aim, we have exploited a realistic spiking computational model of the cerebellum to analyze the network correlates of cerebellar impairment. The model was modified to reproduce three different damages of the cerebellar cortex: (i) a loss of the main output neurons (Purkinje Cells), (ii) a lesion to the main cerebellar afferents (Mossy Fibers), and (iii) a damage to a major mechanism of synaptic plasticity (Long Term Depression). The modified network models were challenged with an Eye-Blink Classical Conditioning test, a standard learning paradigm used to evaluate cerebellar impairment, in which the outcome was compared to reference results obtained in human or animal experiments. In all cases, the model reproduced the partial and delayed conditioning typical of the pathologies, indicating that intact cerebellar cortex functionality is required to accelerate learning by transferring acquired information to the cerebellar nuclei. Interestingly, depending on the type of lesion, the redistribution of synaptic plasticity and response timing varied greatly, generating specific adaptation patterns. Thus, the present work not only extends the generalization capabilities of the cerebellar spiking model to pathological cases, but also predicts how changes at the neuronal level are distributed across the network, making it usable to infer cerebellar circuit alterations occurring in cerebellar pathologies.

  4. Relative spike time coding and STDP-based orientation selectivity in the early visual system in natural continuous and saccadic vision: a computational model.

    Science.gov (United States)

    Masquelier, Timothée

    2012-06-01

    We have built a phenomenological spiking model of the cat early visual system comprising the retina, the Lateral Geniculate Nucleus (LGN) and V1's layer 4, and established four main results: (1) When exposed to videos that reproduce with high fidelity what a cat experiences under natural conditions, adjacent Retinal Ganglion Cells (RGCs) have spike-time correlations at a short timescale (~30 ms), despite neuronal noise and possible jitter accumulation. (2) In accordance with recent experimental findings, the LGN filters out some noise. It thus increases the spike reliability and temporal precision, the sparsity, and, importantly, further decreases down to ~15 ms adjacent cells' correlation timescale. (3) Downstream simple cells in V1's layer 4, if equipped with Spike Timing-Dependent Plasticity (STDP), may detect these fine-scale cross-correlations, and thus connect principally to ON- and OFF-centre cells with Receptive Fields (RF) aligned in the visual space, and thereby become orientation selective, in accordance with Hubel and Wiesel's classic model (Journal of Physiology 160:106-154, 1962). Up to this point we dealt with continuous vision, and there was no absolute time reference such as a stimulus onset, yet information was encoded and decoded in the relative spike times. (4) We then simulated saccades to a static image and benchmarked relative spike time coding and time-to-first-spike coding with respect to saccade landing in the context of orientation representation. In both the retina and the LGN, relative spike times are more precise, less affected by pre-landing history and global contrast than absolute ones, and lead to robust contrast-invariant orientation representations in V1.
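
    A standard pair-based STDP rule of the kind such models use weights each pre/post spike pairing by an exponential of the timing difference; the amplitudes and time constant below are illustrative, not those fitted in the paper:

    ```python
    import math

    def stdp_dw(t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20.0):
        """Pair-based STDP: potentiation when the presynaptic spike precedes
        the postsynaptic one, depression otherwise (parameters illustrative;
        times in ms)."""
        dt = t_post - t_pre
        if dt >= 0:
            return a_plus * math.exp(-dt / tau)     # pre before post: LTP
        return -a_minus * math.exp(dt / tau)        # post before pre: LTD
    ```

    Under such a rule, inputs whose spikes reliably arrive just before the postsynaptic spike are strengthened, which is how fine-scale cross-correlations can be converted into selective wiring.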

  5. Estimating short-term synaptic plasticity from pre- and postsynaptic spiking

    Science.gov (United States)

    Malyshev, Aleksey; Stevenson, Ian H.

    2017-01-01

    Short-term synaptic plasticity (STP) critically affects the processing of information in neuronal circuits by reversibly changing the effective strength of connections between neurons on time scales from milliseconds to a few seconds. STP is traditionally studied using intracellular recordings of postsynaptic potentials or currents evoked by presynaptic spikes. However, STP also affects the statistics of postsynaptic spikes. Here we present two model-based approaches for estimating synaptic weights and short-term plasticity from pre- and postsynaptic spike observations alone. We extend a generalized linear model (GLM) that predicts postsynaptic spiking as a function of the observed pre- and postsynaptic spikes and allow the connection strength (coupling term in the GLM) to vary as a function of time based on the history of presynaptic spikes. Our first model assumes that STP follows a Tsodyks-Markram description of vesicle depletion and recovery. In a second model, we introduce a functional description of STP where we estimate the coupling term as a biophysically unrestrained function of the presynaptic inter-spike intervals. To validate the models, we test the accuracy of STP estimation using the spiking of pre- and postsynaptic neurons with known synaptic dynamics. We first test our models using the responses of layer 2/3 pyramidal neurons to simulated presynaptic input with different types of STP, and then use simulated spike trains to examine the effects of spike-frequency adaptation, stochastic vesicle release, spike sorting errors, and common input. We find that, using only spike observations, both model-based methods can accurately reconstruct the time-varying synaptic weights of presynaptic inputs for different types of STP. Our models also capture the differences in postsynaptic spike responses to presynaptic spikes following short vs long inter-spike intervals, similar to results reported for thalamocortical connections. These models may thus be useful
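
    The Tsodyks-Markram description referenced above tracks a vesicle-depletion variable R and a utilization variable u per synapse. A minimal per-spike discretization follows (one common update ordering; all parameters are illustrative):

    ```python
    import math

    def tm_amplitudes(spike_times, w=1.0, U=0.5, tau_rec=800.0, tau_fac=20.0):
        """Tsodyks-Markram short-term plasticity: per-spike synaptic amplitudes
        from vesicle depletion (R) and facilitation (u); times in ms."""
        R, u, t_last, out = 1.0, U, None, []
        for t in spike_times:
            if t_last is not None:
                dt = t - t_last
                R = 1.0 - (1.0 - R) * math.exp(-dt / tau_rec)   # resource recovery
                u = U + (u - U) * math.exp(-dt / tau_fac)       # facilitation decay
            out.append(w * u * R)    # effective amplitude of this spike
            R -= u * R               # spike-triggered depletion
            u += U * (1.0 - u)      # spike-triggered facilitation
            t_last = t
        return out
    ```

    With large `tau_rec` and small `tau_fac` the synapse depresses across a regular train; swapping the balance (small `U`, large `tau_fac`) makes it facilitate, which is the behavior the GLM coupling term is made to track.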

  6. Comparison of current-driven and conductance-driven neocortical model neurons with Hodgkin-Huxley voltage-gated channels.

    Science.gov (United States)

    Tiesinga, P H; José, J V; Sejnowski, T J

    2000-12-01

    Intrinsic noise and random synaptic inputs generate a fluctuating current across neuron membranes. We determine the statistics of the output spike train of a biophysical model neuron as a function of the mean and variance of the fluctuating current, when the current is white noise, or when it derives from Poisson trains of excitatory and inhibitory postsynaptic conductances. In the first case, the firing rate increases with increasing variance of the current, whereas in the latter case it decreases. In contrast, the firing rate is independent of variance (for constant mean) in the commonly used random walk, and perfect integrate-and-fire models for spike generation. The model neuron can be in the current-dominated state, representative of neurons in the in vitro slice preparation, or in the fluctuation-dominated state, representative of in vivo neurons. We discuss the functional relevance of these states to cortical information processing.
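
    The variance effect described for the current-driven state can be reproduced with a plain leaky integrate-and-fire neuron driven by a fluctuating current (a simplified stand-in for the biophysical model; all parameters are illustrative):

    ```python
    import numpy as np

    def lif_spike_count(mu, sigma, seed=0, T=2000.0, dt=0.1, tau=20.0,
                        v_rest=-70.0, v_thr=-50.0, v_reset=-70.0):
        """Leaky integrate-and-fire driven by a white-noise current with mean
        mu and noise strength sigma (illustrative units: mV, ms)."""
        rng = np.random.default_rng(seed)
        v, n = v_rest, 0
        for _ in range(int(T / dt)):
            v += (-(v - v_rest) + mu) / tau * dt
            v += sigma * np.sqrt(dt) * rng.standard_normal()
            if v >= v_thr:
                n += 1
                v = v_reset
        return n

    # Subthreshold mean drive: with zero variance the steady state (-55 mV)
    # never reaches threshold, while added current fluctuations evoke spikes.
    quiet = lif_spike_count(mu=15.0, sigma=0.0)
    noisy = lif_spike_count(mu=15.0, sigma=2.0)
    ```

    This is the firing-rate-increases-with-variance regime the abstract attributes to white-noise current injection; the conductance-driven case would require modeling the synaptic conductances explicitly.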

  7. Parametric Anatomical Modeling: A method for modeling the anatomical layout of neurons and their projections

    Directory of Open Access Journals (Sweden)

    Martin ePyka

    2014-09-01

    Full Text Available Computational models of neural networks can be based on a variety of different parameters. These parameters include, for example, the 3d shape of neuron layers, the neurons' spatial projection patterns, spiking dynamics and neurotransmitter systems. While many well-developed approaches are available to model, for example, the spiking dynamics, there is a lack of approaches for modeling the anatomical layout of neurons and their projections. We present a new method, called Parametric Anatomical Modeling (PAM), to fill this gap. PAM can be used to derive network connectivities and conduction delays from anatomical data, such as the position and shape of the neuronal layers and the dendritic and axonal projection patterns. Within the PAM framework, several mapping techniques between layers can account for a large variety of connection properties between pre- and post-synaptic neuron layers. PAM is implemented as a Python tool and integrated in the 3d modeling software Blender. We demonstrate on a 3d model of the hippocampal formation how PAM can help reveal complex properties of the synaptic connectivity and conduction delays, properties that might be relevant to uncover the function of the hippocampus. Based on these analyses, two experimentally testable predictions arose: (i) the number of neurons and the spread of connections is heterogeneously distributed across the main anatomical axes, (ii) the distribution of connection lengths in CA3-CA1 differ qualitatively from those between DG-CA3 and CA3-CA3. Models created by PAM can also serve as an educational tool to visualize the 3d connectivity of brain regions. The low-dimensional, but yet biologically plausible, parameter space renders PAM suitable to analyse allometric and evolutionary factors in networks and to model the complexity of real networks with comparatively little effort.

  8. Parametric Anatomical Modeling: a method for modeling the anatomical layout of neurons and their projections.

    Science.gov (United States)

    Pyka, Martin; Klatt, Sebastian; Cheng, Sen

    2014-01-01

    Computational models of neural networks can be based on a variety of different parameters. These parameters include, for example, the 3d shape of neuron layers, the neurons' spatial projection patterns, spiking dynamics and neurotransmitter systems. While many well-developed approaches are available to model, for example, the spiking dynamics, there is a lack of approaches for modeling the anatomical layout of neurons and their projections. We present a new method, called Parametric Anatomical Modeling (PAM), to fill this gap. PAM can be used to derive network connectivities and conduction delays from anatomical data, such as the position and shape of the neuronal layers and the dendritic and axonal projection patterns. Within the PAM framework, several mapping techniques between layers can account for a large variety of connection properties between pre- and post-synaptic neuron layers. PAM is implemented as a Python tool and integrated in the 3d modeling software Blender. We demonstrate on a 3d model of the hippocampal formation how PAM can help reveal complex properties of the synaptic connectivity and conduction delays, properties that might be relevant to uncover the function of the hippocampus. Based on these analyses, two experimentally testable predictions arose: (i) the number of neurons and the spread of connections is heterogeneously distributed across the main anatomical axes, (ii) the distribution of connection lengths in CA3-CA1 differ qualitatively from those between DG-CA3 and CA3-CA3. Models created by PAM can also serve as an educational tool to visualize the 3d connectivity of brain regions. The low-dimensional, but yet biologically plausible, parameter space renders PAM suitable to analyse allometric and evolutionary factors in networks and to model the complexity of real networks with comparatively little effort.

  9. Functional Relevance of Different Basal Ganglia Pathways Investigated in a Spiking Model with Reward Dependent Plasticity

    Directory of Open Access Journals (Sweden)

    Pierre Berthet

    2016-07-01

    Full Text Available The brain enables animals to behaviourally adapt in order to survive in a complex and dynamic environment, but how reward-oriented behaviours are achieved and computed by its underlying neural circuitry is an open question. To address this concern, we have developed a spiking model of the basal ganglia (BG) that learns to dis-inhibit the action leading to a reward despite ongoing changes in the reward schedule. The architecture of the network features the two pathways commonly described in BG, the direct (denoted D1) and the indirect (denoted D2) pathway, as well as a loop involving striatum and the dopaminergic system. The activity of these dopaminergic neurons conveys the reward prediction error (RPE), which determines the magnitude of synaptic plasticity within the different pathways. All plastic connections implement a versatile four-factor learning rule derived from Bayesian inference that depends upon pre- and postsynaptic activity, receptor type and dopamine level. Synaptic weight updates occur in the D1 or D2 pathways depending on the sign of the RPE, and an efference copy informs upstream nuclei about the action selected. We demonstrate successful performance of the system in a multiple-choice learning task with a transiently changing reward schedule. We simulate lesioning of the various pathways and show that a condition without the D2 pathway fares worse than one without D1. Additionally, we simulate the degeneration observed in Parkinson’s disease (PD) by decreasing the number of dopaminergic neurons during learning. The results suggest that the D1 pathway impairment in PD might have been overlooked. Furthermore, an analysis of the alterations in the synaptic weights shows that using the absolute reward value instead of the RPE leads to a larger change in D1.

  10. Force sensor in simulated skin and neural model mimic tactile SAI afferent spiking response to ramp and hold stimuli.

    Science.gov (United States)

    Kim, Elmer K; Wellnitz, Scott A; Bourdon, Sarah M; Lumpkin, Ellen A; Gerling, Gregory J

    2012-07-23

    The next generation of prosthetic limbs will restore sensory feedback to the nervous system by mimicking how skin mechanoreceptors, innervated by afferents, produce trains of action potentials in response to compressive stimuli. Prior work has addressed building sensors within skin substitutes for robotics, modeling skin mechanics and neural dynamics of mechanotransduction, and predicting response timing of action potentials for vibration. The effort here is unique because it accounts for skin elasticity by measuring force within simulated skin, utilizes few free model parameters for parsimony, and separates parameter fitting and model validation. Additionally, the ramp-and-hold, sustained stimuli used in this work capture the essential features of the everyday task of contacting and holding an object. This systems integration effort computationally replicates the neural firing behavior for a slowly adapting type I (SAI) afferent in its temporally varying response to both intensity and rate of indentation force by combining a physical force sensor, housed in a skin-like substrate, with a mathematical model of neuronal spiking, the leaky integrate-and-fire. Comparison experiments were then conducted using ramp-and-hold stimuli on both the spiking-sensor model and mouse SAI afferents. The model parameters were iteratively fit against recorded SAI interspike intervals (ISI) before validating the model to assess its performance. Model-predicted spike firing compares favorably with that observed for single SAI afferents. As indentation magnitude increases (1.2, 1.3, to 1.4 mm), mean ISI decreases from 98.81 ± 24.73, 54.52 ± 6.94, to 41.11 ± 6.11 ms. Moreover, as rate of ramp-up increases, ISI during ramp-up decreases from 21.85 ± 5.33, 19.98 ± 3.10, to 15.42 ± 2.41 ms. Considering first spikes, the predicted latencies exhibited a decreasing trend as stimulus rate increased, as is observed in afferent recordings. Finally, the SAI afferent's characteristic response
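The spiking stage of such a sensor pipeline, the leaky integrate-and-fire model, can be sketched as follows; parameter values are illustrative, not the fitted values from the paper:

```python
def lif_spike_times(force, dt=1e-4, tau=0.01, gain=1.0, v_th=1.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron driven by a sampled force trace
    (arbitrary units); returns the spike times in seconds."""
    v, spikes = 0.0, []
    for i, f in enumerate(force):
        v += (dt / tau) * (-v + gain * f)   # leaky integration of the input
        if v >= v_th:
            spikes.append(i * dt)
            v = v_reset                      # reset after each spike
    return spikes

# A stronger sustained force yields shorter interspike intervals,
# mirroring the ISI trends reported for the SAI afferent.
```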

  11. Post-spike hyperpolarization participates in the formation of auditory behavior-related response patterns of inferior collicular neurons in Hipposideros pratti.

    Science.gov (United States)

    Li, Y-L; Fu, Z-Y; Yang, M-J; Wang, J; Peng, K; Yang, L-J; Tang, J; Chen, Q-C

    2015-03-19

    To probe the mechanism underlying the auditory behavior-related response patterns of inferior collicular neurons to constant frequency-frequency modulation (CF-FM) stimulus in Hipposideros pratti, we studied the role of post-spike hyperpolarization (PSH) in the formation of response patterns. Neurons obtained by in vivo extracellular (N=145) and intracellular (N=171) recordings could be consistently classified into single-on (SO) and double-on (DO) neurons. Using intracellular recording, we found that both SO and DO neurons have a PSH with different durations. Statistical analysis showed that most SO neurons had a longer PSH duration than DO neurons (p<0.01). These data suggested that the PSH directly participated in the formation of SO and DO neurons, and the PSH elicited by the CF component was the main synaptic mechanism underlying the SO and DO response patterns. The possible biological significance of these findings relevant to bat echolocation is discussed. Copyright © 2015 IBRO. Published by Elsevier Ltd. All rights reserved.

  12. Spike timing precision in the visual front-end

    OpenAIRE

    Borghuis, B.G. (Bart Gerard)

    2003-01-01

    This thesis describes a series of investigations into the reliability of neural responses in the primary visual pathway. The results described in subsequent chapters are primarily based on extracellular recordings from single neurons in anaesthetized cats and area MT of an awake monkey, and computational model analysis. Comparison of spike timing precision in recorded and Poisson-simulated spike trains shows that spike timing in the front-end visual system is considerably more precise than on...

  13. Photospheric Current Spikes And Their Possible Association With Flares - Results from an HMI Data Driven Model

    Science.gov (United States)

    Goodman, M. L.; Kwan, C.; Ayhan, B.; Eric, S. L.

    2016-12-01

A data driven, near photospheric magnetohydrodynamic model predicts spikes in the horizontal current density, and associated resistive heating rate. The spikes appear as increases by orders of magnitude above background values in neutral line regions (NLRs) of active regions (ARs). The largest spikes typically occur a few hours to a few days prior to M or X flares. The spikes correspond to large vertical derivatives of the horizontal magnetic field. The model takes as input the photospheric magnetic field observed by the Helioseismic and Magnetic Imager (HMI) on the Solar Dynamics Observatory (SDO) satellite. This 2.5D field is used to determine an analytic expression for a 3D magnetic field, from which the current density, vector potential, and electric field are computed in every AR pixel for 14 ARs. The field is not assumed to be force-free. The spurious 6, 12, and 24 hour Doppler periods due to SDO orbital motion are filtered out of the time series of the HMI magnetic field for each pixel. The subset of spikes analyzed at the pixel level is found to occur on HMI and granulation scales of 1 arcsec and 12 minutes. Spikes are found in ARs with and without M or X flares, and outside as well as inside NLRs, but the largest spikes are localized in the NLRs of ARs with M or X flares. The energy to drive the heating associated with the largest current spikes comes from bulk flow kinetic energy, not the electromagnetic field, and the current density is highly non-force-free. The results suggest that, in combination with the model, HMI is revealing strong, convection driven, non-force-free heating events on granulation scales, and it is plausible that these events are correlated with subsequent M or X flares. More and longer time series need to be analyzed to determine if such a correlation exists.

  14. A novel role of dendritic gap junction and mechanisms underlying its interaction with thalamocortical conductance in fast spiking inhibitory neurons

    Directory of Open Access Journals (Sweden)

    Sun Qian-Quan

    2009-10-01

Full Text Available Abstract Background Little is known about the roles of dendritic gap junctions (GJs) of inhibitory interneurons in modulating temporal properties of sensory induced responses in sensory cortices. Electrophysiological dual patch-clamp recording and computational simulation methods were used in combination to examine a novel role of GJs in sensory mediated feed-forward inhibitory responses in barrel cortex layer IV and its underlying mechanisms. Results Under physiological conditions, excitatory post-junctional potentials (EPJPs) interact with thalamocortical (TC) inputs within an unprecedented few milliseconds (i.e. over 200 Hz) to enhance the firing probability and synchrony of coupled fast-spiking (FS) cells. Dendritic GJ coupling allows a fourfold increase in synchrony and a significant enhancement in spike transmission efficacy in excitatory spiny stellate cells. The model revealed the following novel mechanisms: (1) rapid capacitive current (Icap) underlies the activation of voltage-gated sodium channels; (2) there was a window of less than 2 milliseconds in which the Icap underlying TC input and EPJP was coupled effectively; (3) cells with dendritic GJs had larger input conductance and smaller membrane response to weaker inputs; (4) synchrony in inhibitory networks by GJ coupling leads to reduced sporadic lateral inhibition and increased TC transmission efficacy. Conclusion Dendritic GJs of neocortical inhibitory networks can have very powerful effects in modulating the strength and the temporal properties of sensory induced feed-forward inhibitory and excitatory responses at a very high frequency band (>200 Hz). Rapid capacitive currents are identified as main mechanisms underlying interaction between two transient synaptic conductances.

  15. An Asynchronous Recurrent Network of Cellular Automaton-Based Neurons and Its Reproduction of Spiking Neural Network Activities.

    Science.gov (United States)

    Matsubara, Takashi; Torikai, Hiroyuki

    2016-04-01

Modeling and implementation approaches for the reproduction of input-output relationships in biological nervous tissues contribute to the development of engineering and clinical applications. However, because of high nonlinearity, the traditional modeling and implementation approaches encounter difficulties in terms of generalization ability (i.e., performance when reproducing an unknown data set) and computational resources (i.e., computation time and circuit elements). To overcome these difficulties, asynchronous cellular automaton-based neuron (ACAN) models, which are described as special kinds of cellular automata that can be implemented as small asynchronous sequential logic circuits, have been proposed. This paper presents a novel type of such ACAN and a theoretical analysis of its excitability. This paper also presents a novel network of such neurons, which can mimic input-output relationships of biological and nonlinear ordinary differential equation model neural networks. Numerical analyses confirm that the presented network has a higher generalization ability than other major modeling and implementation approaches. In addition, Field-Programmable Gate Array (FPGA) implementations confirm that the presented network requires lower computational resources.

  16. Serial Spike Time Correlations Affect Probability Distribution of Joint Spike Events.

    Science.gov (United States)

    Shahi, Mina; van Vreeswijk, Carl; Pipa, Gordon

    2016-01-01

Detecting the existence of temporally coordinated spiking activity, and its role in information processing in the cortex, has remained a major challenge for neuroscience research. Different methods and approaches have been suggested to test whether the observed synchronized events are significantly different from those expected by chance. To analyze the simultaneous spike trains for precise spike correlation, these methods typically model the spike trains as a Poisson process, implying that the generation of each spike is independent of all the other spikes. However, studies have shown that neural spike trains exhibit dependence among spike sequences, such as the absolute and relative refractory periods which govern the spike probability of the oncoming action potential based on the time of the last spike, or the bursting behavior, which is characterized by short epochs of rapid action potentials, followed by longer episodes of silence. Here we investigate non-renewal processes with an inter-spike interval distribution model that incorporates spike-history dependence of individual neurons. For that, we use the Monte Carlo method to estimate the full shape of the coincidence count distribution and to generate false positives for coincidence detection. The results show that compared to the distributions based on homogeneous Poisson processes, and also non-Poisson processes, the width of the distribution of joint spike events changes. Non-renewal processes can lead to either heavy-tailed or narrow coincidence distributions. We conclude that small differences in the exact autostructure of the point process can cause large differences in the width of a coincidence distribution. Therefore, manipulations of the autostructure for the estimation of significance of joint spike events seem to be inadequate.

  17. Modeling thermal spike driven reactions at low temperature and application to zirconium carbide radiation damage

    Science.gov (United States)

    Ulmer, Christopher J.; Motta, Arthur T.

    2017-11-01

The development of TEM-visible damage in materials under irradiation at cryogenic temperatures cannot be explained using classical rate theory modeling with thermally activated reactions, since at low temperatures thermal reaction rates are too low. Although point defect mobility approaches zero at low temperature, the thermal spikes induced by displacement cascades enable some atom mobility as the cascade cools. In this work a model is developed to calculate "athermal" reaction rates from the atomic mobility within the irradiation-induced thermal spikes, including both displacement cascades and electronic stopping. The athermal reaction rates are added to a simple rate theory cluster dynamics model to allow for the simulation of microstructure evolution during irradiation at cryogenic temperatures. The rate theory model is applied to in-situ irradiation of ZrC, and its predictions compare well with the in-situ observations at cryogenic temperatures. The results show that the addition of the thermal spike model makes it possible to rationalize microstructure evolution in the low temperature regime.
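The effect of adding an athermal reaction channel to a rate-theory balance can be sketched with a single-species toy model; all constants are illustrative, not fitted to ZrC:

```python
def defect_density(dose_rate=1e-4, k_thermal=0.0, k_athermal=5e-3,
                   t_end=1e4, dt=1.0):
    """Toy cluster-dynamics balance dc/dt = G - (k_th + k_ath) * c,
    integrated with forward Euler. At cryogenic temperature k_thermal -> 0,
    so the thermal-spike-driven (athermal) rate alone limits accumulation."""
    c = 0.0
    for _ in range(int(t_end / dt)):
        c += dt * (dose_rate - (k_thermal + k_athermal) * c)
    return c

# With the athermal channel the density saturates near G / k_athermal;
# without it, c would grow without bound at low temperature.
```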

  18. A generative spike train model with time-structured higher order correlations.

    Science.gov (United States)

    Trousdale, James; Hu, Yu; Shea-Brown, Eric; Josić, Krešimir

    2013-01-01

    Emerging technologies are revealing the spiking activity in ever larger neural ensembles. Frequently, this spiking is far from independent, with correlations in the spike times of different cells. Understanding how such correlations impact the dynamics and function of neural ensembles remains an important open problem. Here we describe a new, generative model for correlated spike trains that can exhibit many of the features observed in data. Extending prior work in mathematical finance, this generalized thinning and shift (GTaS) model creates marginally Poisson spike trains with diverse temporal correlation structures. We give several examples which highlight the model's flexibility and utility. For instance, we use it to examine how a neural network responds to highly structured patterns of inputs. We then show that the GTaS model is analytically tractable, and derive cumulant densities of all orders in terms of model parameters. The GTaS framework can therefore be an important tool in the experimental and theoretical exploration of neural dynamics.
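A minimal sketch of the thinning-and-shift construction, assuming a uniform sharing probability and uniform jitter (the full GTaS model assigns shifts per neuron subset, which this sketch omits):

```python
import random

def gtas_trains(rate, t_max, n_neurons, p_share=0.5, max_shift=0.005, seed=0):
    """Thinning-and-shift sketch: each spike of a 'mother' Poisson process
    is copied, with probability p_share, to each neuron, and every copy is
    jittered by an independent shift. The marginal trains stay Poisson
    while the shared mother spikes impose temporal correlations."""
    rng = random.Random(seed)
    trains = [[] for _ in range(n_neurons)]
    t = 0.0
    while True:
        t += rng.expovariate(rate)            # next mother spike
        if t >= t_max:
            break
        for train in trains:
            if rng.random() < p_share:        # thinning
                train.append(t + rng.uniform(0.0, max_shift))  # shift
    return trains
```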

  19. Hypocretin/Orexin Peptides Alter Spike Encoding by Serotonergic Dorsal Raphe Neurons through Two Distinct Mechanisms That Increase the Late Afterhyperpolarization.

    Science.gov (United States)

    Ishibashi, Masaru; Gumenchuk, Iryna; Miyazaki, Kenichi; Inoue, Takafumi; Ross, William N; Leonard, Christopher S

    2016-09-28

    Orexins (hypocretins) are neuropeptides that regulate multiple homeostatic processes, including reward and arousal, in part by exciting serotonergic dorsal raphe neurons, the major source of forebrain serotonin. Here, using mouse brain slices, we found that, instead of simply depolarizing these neurons, orexin-A altered the spike encoding process by increasing the postspike afterhyperpolarization (AHP) via two distinct mechanisms. This orexin-enhanced AHP (oeAHP) was mediated by both OX1 and OX2 receptors, required Ca(2+) influx, reversed near EK, and decayed with two components, the faster of which resulted from enhanced SK channel activation, whereas the slower component decayed like a slow AHP (sAHP), but was not blocked by UCL2077, an antagonist of sAHPs in some neurons. Intracellular phospholipase C inhibition (U73122) blocked the entire oeAHP, but neither component was sensitive to PKC inhibition or altered PKA signaling, unlike classical sAHPs. The enhanced SK current did not depend on IP3-mediated Ca(2+) release but resulted from A-current inhibition and the resultant spike broadening, which increased Ca(2+) influx and Ca(2+)-induced-Ca(2+) release, whereas the slower component was insensitive to these factors. Functionally, the oeAHP slowed and stabilized orexin-induced firing compared with firing produced by a virtual orexin conductance lacking the oeAHP. The oeAHP also reduced steady-state firing rate and firing fidelity in response to stimulation, without affecting the initial rate or fidelity. Collectively, these findings reveal a new orexin action in serotonergic raphe neurons and suggest that, when orexin is released during arousal and reward, it enhances the spike encoding of phasic over tonic inputs, such as those related to sensory, motor, and reward events. Orexin peptides are known to excite neurons via slow postsynaptic depolarizations. Here we elucidate a significant new orexin action that increases and prolongs the postspike

  20. Effective stimuli for constructing reliable neuron models.

    Directory of Open Access Journals (Sweden)

    Shaul Druckmann

    2011-08-01

    Full Text Available The rich dynamical nature of neurons poses major conceptual and technical challenges for unraveling their nonlinear membrane properties. Traditionally, various current waveforms have been injected at the soma to probe neuron dynamics, but the rationale for selecting specific stimuli has never been rigorously justified. The present experimental and theoretical study proposes a novel framework, inspired by learning theory, for objectively selecting the stimuli that best unravel the neuron's dynamics. The efficacy of stimuli is assessed in terms of their ability to constrain the parameter space of biophysically detailed conductance-based models that faithfully replicate the neuron's dynamics as attested by their ability to generalize well to the neuron's response to novel experimental stimuli. We used this framework to evaluate a variety of stimuli in different types of cortical neurons, ages and animals. Despite their simplicity, a set of stimuli consisting of step and ramp current pulses outperforms synaptic-like noisy stimuli in revealing the dynamics of these neurons. The general framework that we propose paves a new way for defining, evaluating and standardizing effective electrical probing of neurons and will thus lay the foundation for a much deeper understanding of the electrical nature of these highly sophisticated and non-linear devices and of the neuronal networks that they compose.

  1. Spatio-Temporal Modeling of Neuron Fields

    DEFF Research Database (Denmark)

    Lund, Adam

The starting point and focal point for this thesis was stochastic dynamical modelling of neuronal imaging data with the declared objective of drawing inference, within this model framework, in a large-scale (high-dimensional) data setting. Implicitly this objective entails carrying out three... ...-temporal array data. This framework was developed with neuron field models in mind but may in turn be applied to other settings conforming to the spatio-temporal array data setup....

  2. Neuronal modelling of baroreflex response to orthostatic stress

    Science.gov (United States)

    Samin, Azfar

    The accelerations experienced in aerial combat can cause pilot loss of consciousness (GLOC) due to a critical reduction in cerebral blood circulation. The development of smart protective equipment requires understanding of how the brain processes blood pressure (BP) information in response to acceleration. We present a biologically plausible model of the Baroreflex to investigate the neural correlates of short-term BP control under acceleration or orthostatic stress. The neuronal network model, which employs an integrate-and-fire representation of a biological neuron, comprises the sensory, motor, and the central neural processing areas that form the Baroreflex. Our modelling strategy is to test hypotheses relating to the encoding mechanisms of multiple sensory inputs to the nucleus tractus solitarius (NTS), the site of central neural processing. The goal is to run simulations and reproduce model responses that are consistent with the variety of available experimental data. Model construction and connectivity are inspired by the available anatomical and neurophysiological evidence that points to a barotopic organization in the NTS, and the presence of frequency-dependent synaptic depression, which provides a mechanism for generating non-linear local responses in NTS neurons that result in quantifiable dynamic global baroreflex responses. The entire physiological range of BP and rate of change of BP variables is encoded in a palisade of NTS neurons in that the spike responses approximate Gaussian 'tuning' curves. An adapting weighted-average decoding scheme computes the motor responses and a compensatory signal regulates the heart rate (HR). Model simulations suggest that: (1) the NTS neurons can encode the hydrostatic pressure difference between two vertically separated sensory receptor regions at +Gz, and use changes in that difference for the regulation of HR; (2) even though NTS neurons do not fire with a cardiac rhythm seen in the afferents, pulse
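The barotopic encoding and weighted-average readout described above can be sketched with Gaussian tuning curves; the unit names, tuning width and pressure range here are hypothetical, not taken from the thesis:

```python
import math

def nts_responses(bp, centers, width=10.0):
    """Gaussian 'tuning curve' responses of a palisade of NTS-like units
    to a blood pressure value (mmHg)."""
    return [math.exp(-0.5 * ((bp - c) / width) ** 2) for c in centers]

def weighted_average_decode(rates, centers):
    """Population readout: the rate-weighted mean of preferred pressures."""
    return sum(r * c for r, c in zip(rates, centers)) / sum(rates)

centers = list(range(60, 181, 10))   # preferred pressures, 60-180 mmHg
estimate = weighted_average_decode(nts_responses(120.0, centers), centers)
```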

  3. Competition model for aperiodic stochastic resonance in a Fitzhugh-Nagumo model of cardiac sensory neurons.

    Science.gov (United States)

    Kember, G C; Fenton, G A; Armour, J A; Kalyaniwalla, N

    2001-04-01

    Regional cardiac control depends upon feedback of the status of the heart from afferent neurons responding to chemical and mechanical stimuli as transduced by an array of sensory neurites. Emerging experimental evidence shows that neural control in the heart may be partially exerted using subthreshold inputs that are amplified by noisy mechanical fluctuations. This amplification is known as aperiodic stochastic resonance (ASR). Neural control in the noisy, subthreshold regime is difficult to see since there is a near absence of any correlation between input and the output, the latter being the average firing (spiking) rate of the neuron. This lack of correlation is unresolved by traditional energy models of ASR since these models are unsuitable for identifying "cause and effect" between such inputs and outputs. In this paper, the "competition between averages" model is used to determine what portion of a noisy, subthreshold input is responsible, on average, for the output of sensory neurons as represented by the Fitzhugh-Nagumo equations. A physiologically relevant conclusion of this analysis is that a nearly constant amount of input is responsible for a spike, on average, and this amount is approximately independent of the firing rate. Hence, correlation measures are generally reduced as the firing rate is lowered even though neural control under this model is actually unaffected.
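The noise benefit can be illustrated with a FitzHugh-Nagumo unit given a constant subthreshold input: with no noise the neuron stays silent, while added noise evokes spikes. All parameter values below are illustrative, not taken from the paper:

```python
import math
import random

def fhn_spike_count(i_sub=0.25, noise_sd=0.0, t_burn=100.0, t_count=500.0,
                    dt=0.05, seed=1):
    """FitzHugh-Nagumo unit with a constant subthreshold drive i_sub plus
    Gaussian white noise on v; counts upward crossings of v = 1 after a
    noise-free burn-in that lets the onset transient settle."""
    rng = random.Random(seed)
    v, w = -1.1994, -0.6243                 # rest state for zero input
    spikes, above = 0, False
    n_burn, n_count = int(t_burn / dt), int(t_count / dt)
    for k in range(n_burn + n_count):
        noise = noise_sd * math.sqrt(dt) * rng.gauss(0.0, 1.0) if k >= n_burn else 0.0
        dv = (v - v ** 3 / 3.0 - w + i_sub) * dt + noise
        dw = 0.08 * (v + 0.7 - 0.8 * w) * dt
        v, w = v + dv, w + dw
        if k >= n_burn:
            if v > 1.0 and not above:       # upward crossing = one spike
                spikes += 1
            above = v > 1.0
    return spikes
```

With `noise_sd=0.0` the count is zero; moderate noise converts the subthreshold drive into output spikes, the signature of stochastic resonance.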

  4. Low-intensity repetitive magnetic stimulation lowers action potential threshold and increases spike firing in layer 5 pyramidal neurons in vitro.

    Science.gov (United States)

    Tang, Alexander D; Hong, Ivan; Boddington, Laura J; Garrett, Andrew R; Etherington, Sarah; Reynolds, John N J; Rodger, Jennifer

    2016-10-29

Repetitive transcranial magnetic stimulation (rTMS) has become a popular method of modulating neural plasticity in humans. Clinically, rTMS is delivered at high intensities to modulate neuronal excitability. While the high-intensity magnetic field can be targeted to stimulate specific cortical regions, areas adjacent to the targeted area receive stimulation at a lower intensity and may contribute to the overall plasticity induced by rTMS. We have previously shown that low-intensity rTMS induces molecular and structural plasticity in vivo, but the effects on membrane properties and neural excitability have not been investigated. Here we investigated the acute effect of low-intensity repetitive magnetic stimulation (LI-rMS) on neuronal excitability and potential changes on the passive and active electrophysiological properties of layer 5 pyramidal neurons in vitro. Whole-cell current clamp recordings were made at baseline prior to subthreshold LI-rMS (600 pulses of iTBS, n=9 cells from 7 animals) or sham (n=10 cells from 9 animals), immediately after stimulation, as well as 10 and 20 min post-stimulation. Our results show that LI-rMS does not alter passive membrane properties (resting membrane potential and input resistance) but hyperpolarises action potential threshold and increases evoked spike-firing frequency. Increases in spike firing frequency were present throughout the 20 min post-stimulation whereas action potential (AP) threshold hyperpolarization was present immediately after stimulation and at 20 min post-stimulation. These results provide evidence that LI-rMS alters neuronal excitability of excitatory neurons. We suggest that regions outside the targeted region of high-intensity rTMS are susceptible to neuromodulation and may contribute to rTMS-induced plasticity. Copyright © 2016 IBRO. All rights reserved.

  5. Neonatal NMDA receptor blockade disrupts spike timing and glutamatergic synapses in fast spiking interneurons in a NMDA receptor hypofunction model of schizophrenia.

    Directory of Open Access Journals (Sweden)

    Kevin S Jones

Full Text Available The dysfunction of parvalbumin-positive, fast-spiking interneurons (FSIs) is considered a primary contributor to the pathophysiology of schizophrenia (SZ), but deficits in FSI physiology have not been explicitly characterized. We show for the first time that a widely-employed model of schizophrenia minimizes first spike latency and increases GluN2B-mediated current in neocortical FSIs. The reduction in FSI first-spike latency coincides with reduced expression of the Kv1.1 potassium channel subunit, which provides a biophysical explanation for the abnormal spiking behavior. Similarly, the increase in NMDA current coincides with enhanced expression of the GluN2B NMDA receptor subunit, specifically in FSIs. In this study mice were treated with the NMDA receptor antagonist, MK-801, during the first week of life. During adolescence, we detected reduced spike latency and increased GluN2B-mediated NMDA current in FSIs, which suggests that transient disruption of NMDA signaling during neonatal development exerts lasting changes in the cellular and synaptic physiology of neocortical FSIs. Overall, we propose these physiological disturbances represent a general impairment to the physiological maturation of FSIs which may contribute to schizophrenia-like behaviors produced by this model.

  6. Training spiking neural networks to associate spatio-temporal input-output spike patterns

    OpenAIRE

    Mohemmed, A; Schliebs, S; Matsuda, S; Kasabov, N

    2013-01-01

    In a previous work (Mohemmed et al., Method for training a spiking neuron to associate input–output spike trains) [1] we have proposed a supervised learning algorithm based on temporal coding to train a spiking neuron to associate input spatiotemporal spike patterns to desired output spike patterns. The algorithm is based on the conversion of spike trains into analogue signals and the application of the Widrow–Hoff learning rule. In this paper we present a mathematical formulation of the prop...
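The core idea, filtering spike trains into analogue traces and applying the Widrow-Hoff rule to the resulting error, can be sketched as follows. This is a conceptual sketch, not the authors' exact formulation; the function names and constants are hypothetical:

```python
import math

def trace(spike_times, t, tau=0.01):
    """Analogue trace of a spike train at time t: a sum of causal,
    exponentially decaying kernels."""
    return sum(math.exp(-(t - s) / tau) for s in spike_times if s <= t)

def widrow_hoff_step(weights, input_trains, desired, actual, t, lr=0.05, tau=0.01):
    """One delta-rule update: the error is the difference between the
    kernel-filtered desired and actual output trains, and each weight
    moves in proportion to its own input's trace."""
    err = trace(desired, t, tau) - trace(actual, t, tau)
    return [w + lr * err * trace(x, t, tau) for w, x in zip(weights, input_trains)]
```

A missing desired spike pushes active weights up; a spurious actual spike pushes them down, which is the Widrow-Hoff correction applied to spike trains.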

  7. Predictive coding of dynamical variables in balanced spiking networks.

    Science.gov (United States)

    Boerlin, Martin; Machens, Christian K; Denève, Sophie

    2013-01-01

Two observations about the cortex have puzzled neuroscientists for a long time. First, neural responses are highly variable. Second, the level of excitation and inhibition received by each neuron is tightly balanced at all times. Here, we demonstrate that both properties are necessary consequences of neural networks that represent information efficiently in their spikes. We illustrate this insight with spiking networks that represent dynamical variables. Our approach is based on two assumptions: We assume that information about dynamical variables can be read out linearly from neural spike trains, and we assume that neurons only fire a spike if that improves the representation of the dynamical variables. Based on these assumptions, we derive a network of leaky integrate-and-fire neurons that is able to implement arbitrary linear dynamical systems. We show that the membrane voltage of the neurons is equivalent to a prediction error about a common population-level signal. Among other things, our approach allows us to construct an integrator network of spiking neurons that is robust against many perturbations. Most importantly, neural variability in our networks cannot be equated to noise. Despite exhibiting the same single unit properties as widely used population code models (e.g. tuning curves, Poisson distributed spike trains), balanced networks are orders of magnitude more reliable. Our approach suggests that spikes do matter when considering how the brain computes, and that the reliability of cortical representations could have been strongly underestimated.
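The second assumption, that a neuron fires only when its spike improves the readout, can be sketched in one dimension. This is a simplified greedy version of the rule; the weights and time constants are illustrative:

```python
def efficient_coder(signal, dt=1e-3, tau=0.02, w=(0.1, -0.1)):
    """Each neuron spikes only when adding its kernel to the linear
    readout reduces the squared coding error, i.e. when its 'voltage'
    w_i * (x - xhat) exceeds w_i**2 / 2. Returns the decoded estimate."""
    r = [0.0] * len(w)                        # leaky filtered spike trains
    xhat_trace = []
    for x in signal:
        r = [ri * (1.0 - dt / tau) for ri in r]   # readout decay
        xhat = sum(wi * ri for wi, ri in zip(w, r))
        for i, wi in enumerate(w):
            if wi * (x - xhat) > wi * wi / 2.0:   # spiking improves the code
                r[i] += 1.0
                xhat = sum(wj * rj for wj, rj in zip(w, r))
        xhat_trace.append(xhat)
    return xhat_trace

# Tracking a constant target: the estimate climbs to the signal and is
# then maintained by sparse, self-correcting spikes.
estimates = efficient_coder([1.0] * 300)
```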

  8. Complete Neuron-Astrocyte Interaction Model: Digital Multiplierless Design and Networking Mechanism.

    Science.gov (United States)

    Haghiri, Saeed; Ahmadi, Arash; Saif, Mehrdad

    2017-02-01

Glial cells, also known as neuroglia or glia, are non-neuronal cells providing support and protection for neurons in the central nervous system (CNS). They also act as supportive cells in the brain. Among a variety of glial cells, the star-shaped glial cells, i.e., astrocytes, are the largest cell population in the brain. Important astrocyte functions such as neuronal synchronization, synaptic information regulation, feedback to neural activity and extracellular regulation give astrocytes a vital role in brain disease. This paper presents a modified complete neuron-astrocyte interaction model that is more suitable for efficient and large scale biological neural network realization on digital platforms. Simulation results show that the modified complete interaction model can reproduce biologically realistic behavior of the original neuron-astrocyte mechanism. The modified interaction model is investigated in terms of digital realization feasibility and cost, targeting a low cost hardware implementation. Networking behavior of this interaction is investigated and compared between two cases: i) the neuron spiking mechanism without astrocyte effects, and ii) the effect of astrocyte in regulating the neurons' behavior and synaptic transmission via controlling the LTP and LTD processes. Hardware implementation on FPGA shows that the modified model mimics the main mechanism of neuron-astrocyte communication with higher performance and considerably lower hardware overhead cost compared with the original interaction model.

  9. Mirror neurons: functions, mechanisms and models.

    Science.gov (United States)

    Oztop, Erhan; Kawato, Mitsuo; Arbib, Michael A

    2013-04-12

Mirror neurons for manipulation fire both when the animal manipulates an object in a specific way and when it sees another animal (or the experimenter) perform an action that is more or less similar. Such neurons were originally found in macaque monkeys, in the ventral premotor cortex, area F5, and later also in the inferior parietal lobule. Recent neuroimaging data indicate that the adult human brain is endowed with a "mirror neuron system," putatively containing mirror neurons and other neurons, for matching the observation and execution of actions. Mirror neurons may serve action recognition in monkeys as well as humans, whereas their putative role in imitation and language may be realized in humans but not in monkeys. This article shows the important role of computational models in providing sufficient and causal explanations for the observed phenomena involving mirror systems and the learning processes which form them, and underlines the need for additional circuitry to lift up the monkey mirror neuron circuit to sustain the posited cognitive functions attributed to the human mirror neuron system. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  10. Cellular Origin of Spontaneous Ganglion Cell Spike Activity in Animal Models of Retinitis Pigmentosa

    Directory of Open Access Journals (Sweden)

    David J. Margolis

    2011-01-01

    Full Text Available Here we review evidence that loss of photoreceptors due to degenerative retinal disease causes an increase in the rate of spontaneous ganglion spike discharge. Information about persistent spike activity is important since it is expected to add noise to the communication between the eye and the brain and thus impact the design and effective use of retinal prosthetics for restoring visual function in patients blinded by disease. Patch-clamp recordings from identified types of ON and OFF retinal ganglion cells in the adult (36–210 d old) rd1 mouse show that the ongoing oscillatory spike activity in both cell types is driven by strong rhythmic synaptic input from presynaptic neurons that is blocked by CNQX. The recurrent synaptic activity may arise in a negative feedback loop between a bipolar cell and an amacrine cell that exhibits resonant behavior and oscillations in membrane potential when the normal balance between excitation and inhibition is disrupted by the absence of photoreceptor input.

  11. Olfactory learning without the mushroom bodies: Spiking neural network models of the honeybee lateral antennal lobe tract reveal its capacities in odour memory tasks of varied complexities.

    Science.gov (United States)

    MaBouDi, HaDi; Shimazaki, Hideaki; Giurfa, Martin; Chittka, Lars

    2017-06-01

    The honeybee olfactory system is a well-established model for understanding functional mechanisms of learning and memory. Olfactory stimuli are first processed in the antennal lobe, and then transferred to the mushroom body and lateral horn through dual pathways termed medial and lateral antennal lobe tracts (m-ALT and l-ALT). Recent studies reported that honeybees can perform elemental learning by associating an odour with a reward signal even after lesions in m-ALT or blocking the mushroom bodies. To test the hypothesis that the lateral pathway (l-ALT) is sufficient for elemental learning, we modelled local computation within glomeruli in antennal lobes with axons of projection neurons connecting to a decision neuron (LHN) in the lateral horn. We show that inhibitory spike-timing dependent plasticity (modelling non-associative plasticity by exposure to different stimuli) in the synapses from local neurons to projection neurons decorrelates the projection neurons' outputs. The strength of the decorrelations is regulated by global inhibitory feedback within antennal lobes to the projection neurons. By additionally modelling octopaminergic modification of synaptic plasticity among local neurons in the antennal lobes and projection neurons to LHN connections, the model can discriminate and generalize olfactory stimuli. Although positive patterning can be accounted for by the l-ALT model, negative patterning requires further processing and mushroom body circuits. Thus, our model explains several-but not all-types of associative olfactory learning and generalization by a few neural layers of odour processing in the l-ALT. As an outcome of the combination between non-associative and associative learning, the modelling approach allows us to link changes in structural organization of honeybees' antennal lobes with their behavioural performances over the course of their life.

  12. Attention deficit associated with early life interictal spikes in a rat model is improved with ACTH.

    Directory of Open Access Journals (Sweden)

    Amanda E Hernan

    Full Text Available Children with epilepsy often present with pervasive cognitive and behavioral comorbidities, including working memory impairments, attention deficit hyperactivity disorder (ADHD) and autism spectrum disorder. These non-seizure characteristics are severely detrimental to overall quality of life. Some of these children, particularly those with epilepsies classified as Landau-Kleffner syndrome or continuous spike and wave during sleep, have infrequent seizure activity but frequent focal epileptiform activity. This frequent epileptiform activity is thought to be detrimental to cognitive development; however, it is also possible that these interictal spike (IIS) events initiate pathophysiological pathways in the developing brain that may be independently associated with cognitive deficits. These hypotheses have been difficult to address due to the previous lack of an appropriate animal model. To this end, we have recently developed a rat model to test the role of frequent focal epileptiform activity in the prefrontal cortex. Using microinjections of a GABA(A) antagonist (bicuculline methiodide) delivered multiple times per day from postnatal day (p) 21 to p25, we showed that rat pups experiencing frequent, focal, recurrent epileptiform activity in the form of interictal spikes during neurodevelopment have significant long-term deficits in attention and sociability that persist into adulthood. To determine whether treatment with ACTH, a drug widely used to treat early-life seizures, altered outcome, we administered ACTH once per day subcutaneously during the period of induced interictal spike activity. We show a modest amelioration of the attention deficit seen in animals with a history of early-life interictal spikes with ACTH, in the absence of any alteration of interictal spike activity. These results suggest that pharmacological intervention that is not targeted to the interictal spike activity is worthy of future study, as it may be beneficial for preventing or ameliorating adverse

  13. SNAVA-A real-time multi-FPGA multi-model spiking neural network simulation architecture.

    Science.gov (United States)

    Sripad, Athul; Sanchez, Giovanny; Zapata, Mireya; Pirrone, Vito; Dorta, Taho; Cambria, Salvatore; Marti, Albert; Krishnamourthy, Karthikeyan; Madrenas, Jordi

    2018-01-01

    Spiking Neural Networks (SNN) for Versatile Applications (SNAVA) is a scalable and programmable parallel simulation architecture that supports real-time, large-scale, multi-model SNN computation. The architecture is implemented on modern Field-Programmable Gate Array (FPGA) devices to provide high-performance execution and the flexibility to support large-scale SNN models. Flexibility is defined in terms of programmability, which allows easy synapse and neuron implementation. This has been achieved by using special-purpose Processing Elements (PEs) for computing SNNs and by analyzing and customizing the instruction set according to the processing needs, to achieve maximum performance with minimum resources. The parallel architecture is interfaced with customized Graphical User Interfaces (GUIs) to configure the SNN's connectivity, to compile the neuron-synapse model and to monitor the SNN's activity. Our contribution is intended to provide a tool that allows SNNs to be prototyped faster than on CPU/GPU architectures but significantly more cheaply than fabricating a customized neuromorphic chip. This could be valuable to the computational neuroscience and neuromorphic engineering communities. Copyright © 2017 Elsevier Ltd. All rights reserved.

  14. Learning to Recognize Actions From Limited Training Examples Using a Recurrent Spiking Neural Model

    Science.gov (United States)

    Panda, Priyadarshini; Srinivasa, Narayan

    2018-01-01

    A fundamental challenge in machine learning today is to build a model that can learn from few examples. Here, we describe a reservoir based spiking neural model for learning to recognize actions with a limited number of labeled videos. First, we propose a novel encoding, inspired by how microsaccades influence visual perception, to extract spike information from raw video data while preserving the temporal correlation across different frames. Using this encoding, we show that the reservoir generalizes its rich dynamical activity toward signature action/movements enabling it to learn from few training examples. We evaluate our approach on the UCF-101 dataset. Our experiments demonstrate that our proposed reservoir achieves 81.3/87% Top-1/Top-5 accuracy, respectively, on the 101-class data while requiring just 8 video examples per class for training. Our results establish a new benchmark for action recognition from limited video examples for spiking neural models while yielding competitive accuracy with respect to state-of-the-art non-spiking neural models. PMID:29551962

  15. Novel model of neuronal bioenergetics

    DEFF Research Database (Denmark)

    Bak, Lasse Kristoffer; Obel, Linea Lykke Frimodt; Walls, Anne B

    2012-01-01

    matrix thus activating the tricarboxylic acid cycle dehydrogenases. This will lead to a lower activity of the MASH (malate-aspartate shuttle), which in turn will result in anaerobic glycolysis and lactate production rather than lactate utilization. In the present work, we have investigated the effect...... is positively correlated with intracellular Ca2+ whereas lactate utilization is not. This result lends further support for a significant role of glucose in neuronal bioenergetics and that Ca2+ signalling may control the switch between glucose and lactate utilization during synaptic activity. Based...... a positive correlation between oxidative metabolism of glucose and Ca2+ signalling....

  16. Population density models of integrate-and-fire neurons with jumps: well-posedness.

    Science.gov (United States)

    Dumont, Grégory; Henry, Jacques

    2013-09-01

    In this paper we study the well-posedness of different models of populations of leaky integrate-and-fire neurons using a population density approach. The synaptic interaction between neurons is modeled by a potential jump at the reception of a spike. We study populations that are self-excitatory or self-inhibitory, and distinguish the case where this interaction is instantaneous from the one where there is a distribution of conduction delays. In the case of a bounded density of delays, both the excitatory and inhibitory population models are shown to be well-posed. Without conduction delays, however, the solution of the self-excitatory model may blow up. We analyze the different behaviours of the model with jumps compared to its diffusion approximation.
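
The jump mechanism described above can be sketched at the single-neuron level. This is a minimal illustration, not the paper's population-density formulation: one leaky integrate-and-fire neuron driven by Poisson input, where each presynaptic spike causes an instantaneous jump of the membrane potential. All parameter values are illustrative.

```python
import random

# Minimal sketch (illustrative parameters, not the paper's model): a leaky
# integrate-and-fire neuron in which each incoming Poisson spike causes an
# instantaneous potential jump of size `jump`.
def simulate_lif_jumps(t_end=1.0, dt=1e-4, tau=0.02, v_th=1.0, v_reset=0.0,
                       rate_in=1000.0, jump=0.1, seed=1):
    random.seed(seed)
    v, spikes = 0.0, []
    for i in range(int(t_end / dt)):
        v += -v / tau * dt                  # leak toward resting potential 0
        if random.random() < rate_in * dt:  # Bernoulli approx. of Poisson input
            v += jump                       # instantaneous jump at spike arrival
        if v >= v_th:                       # threshold crossing: emit spike
            spikes.append(i * dt)
            v = v_reset
    return spikes

spikes = simulate_lif_jumps()
print(len(spikes))
```

With the (hypothetical) parameters above the mean drive is suprathreshold, so the neuron fires regularly; lowering `jump` or `rate_in` moves it into the fluctuation-driven regime the density approach is designed to capture.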

  17. Context-aware modeling of neuronal morphologies

    Directory of Open Access Journals (Sweden)

    Benjamin eTorben-Nielsen

    2014-09-01

    Full Text Available Neuronal morphologies are pivotal for brain functioning: physical overlap between dendrites and axons constrains the circuit topology, and the precise shape and composition of dendrites determine the integration of inputs to produce an output signal. At the same time, morphologies are highly diverse and variable. The variance presumably originates from neurons developing in a densely packed brain substrate where they interact (e.g., by repulsion or attraction) with other actors in this substrate. However, when neurons are studied their context is never part of the analysis, and they are treated as if they existed in isolation. Here we argue that to fully understand neuronal morphology and its variance it is important to consider neurons in relation to each other and to the other actors in the surrounding brain substrate, i.e., their context. We propose a context-aware computational framework, NeuroMaC, in which large numbers of neurons can be grown simultaneously according to growth rules expressed in terms of interactions between the developing neuron and the surrounding brain substrate. As a proof of principle, we demonstrate that using NeuroMaC we can generate accurate virtual morphologies of distinct classes both in isolation and as part of neuronal forests. Accuracy is validated against population statistics of experimentally reconstructed morphologies. We show that context-aware generation of neurons can explain characteristics of variation: plausible variation is an inherent property of the morphologies generated by context-aware rules. We speculate about the applicability of this framework to investigating morphologies and circuits, classifying healthy and pathological morphologies, and generating large quantities of morphologies for large-scale modeling.

  18. An introduction to modeling neuronal dynamics

    CERN Document Server

    Börgers, Christoph

    2017-01-01

    This book is intended as a text for a one-semester course on Mathematical and Computational Neuroscience for upper-level undergraduate and beginning graduate students of mathematics, the natural sciences, engineering, or computer science. An undergraduate introduction to differential equations is more than enough mathematical background. Only a slim, high school-level background in physics is assumed, and none in biology. Topics include models of individual nerve cells and their dynamics, models of networks of neurons coupled by synapses and gap junctions, origins and functions of population rhythms in neuronal networks, and models of synaptic plasticity. An extensive online collection of Matlab programs generating the figures accompanies the book.

  19. Visually Evoked Spiking Evolves While Spontaneous Ongoing Dynamics Persist

    DEFF Research Database (Denmark)

    Huys, Raoul; Jirsa, Viktor K; Darokhan, Ziauddin

    2016-01-01

    by evoked spiking. This study of laminar recordings of spontaneous spiking and visually evoked spiking of neurons in the ferret primary visual cortex shows that the spiking dynamics does not change: the spontaneous spiking as well as evoked spiking is controlled by a stable and persisting fixed point...

  20. Kinetic Modeling of Methionine Oxidation in Monoclonal Antibodies from Hydrogen Peroxide Spiking Studies.

    Science.gov (United States)

    Hui, Ada; Lam, Xanthe M; Kuehl, Christopher; Grauschopf, Ulla; Wang, Y John

    2015-01-01

    When isolator technology is applied to a biotechnology drug product fill-finish process, hydrogen peroxide (H2O2) spiking studies that determine the sensitivity of the protein to residual peroxide in the isolator can be useful for assessing a maximum vapor phase hydrogen peroxide (VPHP) level. When monoclonal antibody (mAb) drug products were spiked with H2O2, an increase in methionine (Met 252 and Met 428) oxidation in the Fc region of the mAbs, together with a decrease in H2O2 concentration, was observed for various levels of spiked-in peroxide. The reaction between Fc-Met and H2O2 was stoichiometric (i.e., 1:1 molar ratio), and the reaction rate was dependent on the concentrations of mAb and H2O2. The consumption of H2O2 by Fc-Met oxidation in the mAb followed pseudo first-order kinetics, and the rate was proportional to mAb concentration. The extent of Met 428 oxidation was half that of Met 252, indicating that Met 252 is twice as reactive as Met 428. Similar results were observed for free L-methionine when spiked with H2O2. However, mAb formulation excipients may affect the rate of H2O2 consumption: mAb formulations containing trehalose or sucrose had faster H2O2 consumption rates than formulations without the sugars, which could be the result of impurities (e.g., metal ions) present in the excipients that may act as catalysts. Based on the H2O2 spiking study results, we can predict the amount of Fc-Met oxidation for a given protein concentration and H2O2 level. Our kinetic modeling of the reaction between Fc-Met and H2O2 provides an outline for designing a H2O2 spiking study to support the use of a VPHP isolator for antibody drug product manufacture. Isolator technology is increasingly used in drug product manufacturing of biotherapeutics.
In order to understand the impact of residual vapor phase hydrogen peroxide (VPHP) levels on protein product quality, hydrogen peroxide (H2O2) spiking studies may be performed to determine the sensitivity of monoclonal antibody
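
The kinetics summarized above can be sketched numerically. Only the structure follows the abstract (pseudo first-order H2O2 decay with an observed rate constant proportional to mAb concentration, 1:1 stoichiometry, and Met 252 twice as reactive as Met 428); the rate constants below are hypothetical placeholders, not the paper's fitted values.

```python
import math

# Illustrative sketch of pseudo first-order peroxide consumption.
# k428 (and hence k252 = 2*k428) are hypothetical placeholder values.
def peroxide_decay(h2o2_0=100.0, mab=1.0, k428=0.05, t=10.0):
    k252 = 2.0 * k428                      # Met 252 assumed twice as reactive
    k_obs = (k252 + k428) * mab            # observed rate ~ mAb concentration
    h2o2_t = h2o2_0 * math.exp(-k_obs * t) # pseudo first-order decay
    consumed = h2o2_0 - h2o2_t
    # 1:1 stoichiometry: consumed peroxide is partitioned between the two
    # methionine sites in proportion to their rate constants
    ox_252 = consumed * k252 / (k252 + k428)
    ox_428 = consumed * k428 / (k252 + k428)
    return h2o2_t, ox_252, ox_428

h, o252, o428 = peroxide_decay()
print(round(h, 2), round(o252 / o428, 2))  # oxidation ratio Met252:Met428 = 2
```

Fitting `k428` (or `k_obs` directly) to measured peroxide time courses is what would let one predict Fc-Met oxidation for a given protein concentration and residual VPHP level.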

  1. A codimension-2 bifurcation controlling endogenous bursting activity and pulse-triggered responses of a neuron model.

    Science.gov (United States)

    Barnett, William H; Cymbalyuk, Gennady S

    2014-01-01

    The dynamics of individual neurons are crucial for producing functional activity in neuronal networks. An open question is how temporal characteristics can be controlled in bursting activity and in transient neuronal responses to synaptic input. Bifurcation theory provides a framework to discover generic mechanisms addressing this question. We present a family of mechanisms organized around a global codimension-2 bifurcation. The cornerstone bifurcation is located at the intersection of the border between bursting and spiking and the border between bursting and silence. These borders correspond to the blue sky catastrophe bifurcation and the saddle-node bifurcation on an invariant circle (SNIC), respectively. The cornerstone bifurcation satisfies the conditions for both the blue sky catastrophe and the SNIC. The burst duration and interburst interval increase as the inverse of the square root of the difference between the corresponding bifurcation parameter and its bifurcation value. For a given pair of burst duration and interburst interval, one can find the parameter values supporting these temporal characteristics. The cornerstone bifurcation also determines the responses of silent and spiking neurons. In a silent neuron with parameters close to the SNIC, a pulse of current triggers a single burst. In a spiking neuron with parameters close to the blue sky catastrophe, a pulse of current temporarily silences the neuron. These responses are stereotypical: the durations of the transient intervals, i.e., the duration of the burst and the latency to spiking, are governed by the inverse-square-root laws. The mechanisms described here could be used to coordinate neuromuscular control in central pattern generators. As proof of principle, we construct small networks that control the metachronal-wave motor pattern exhibited in locomotion. This pattern is determined by the phase relations of bursting neurons in a simple central pattern generator modeled by a chain of
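
The inverse-square-root scaling described above is simple enough to state directly. As a hedged sketch: near the bifurcation the burst duration behaves as T ≈ C / sqrt(alpha − alpha_c), where `C` and the bifurcation value `alpha_c` below are illustrative constants, not values from the paper.

```python
import math

# Inverse-square-root law near a codimension-2 bifurcation (illustrative
# constants; alpha_c and C are placeholders, not the paper's values).
def burst_duration(alpha, alpha_c=1.0, C=0.5):
    assert alpha > alpha_c, "valid only on the bursting side of the bifurcation"
    return C / math.sqrt(alpha - alpha_c)

def param_for_duration(T, alpha_c=1.0, C=0.5):
    # Invert the law: choose alpha so that the burst lasts T.
    return alpha_c + (C / T) ** 2

T = burst_duration(1.04)      # 0.5 / sqrt(0.04) = 2.5
a = param_for_duration(2.5)   # recovers alpha = 1.04
print(T, a)
```

The second function illustrates the point made in the abstract: because the law is invertible, a desired burst duration (or interburst interval) can be mapped back to the parameter value that supports it.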

  2. Computational modeling of distinct neocortical oscillations driven by cell-type selective optogenetic drive: Separable resonant circuits controlled by low-threshold spiking and fast-spiking interneurons

    Directory of Open Access Journals (Sweden)

    Dorea Vierling-Claassen

    2010-11-01

    Full Text Available Selective optogenetic drive of fast-spiking interneurons (FS) leads to enhanced local field potential (LFP) power across the traditional gamma frequency band (20-80 Hz; Cardin et al., 2009). In contrast, drive to regular-spiking pyramidal cells (RS) enhances power at lower frequencies, with a peak at 8 Hz. The first result is consistent with previous computational studies emphasizing the role of FS and the time constant of GABAA synaptic inhibition in gamma rhythmicity. However, the same theoretical models do not typically predict low-frequency LFP enhancement with RS drive. To develop hypotheses as to how the same network can support these contrasting behaviors, we constructed a biophysically principled network model of primary somatosensory neocortex containing FS, RS and low-threshold-spiking (LTS) interneurons. Cells were modeled with detailed cell anatomy and physiology, multiple dendritic compartments, and included active somatic and dendritic ionic currents. Consistent with prior studies, the model demonstrated gamma resonance during FS drive, dependent on the time constant of GABAA inhibition induced by synchronous FS activity. Lower-frequency enhancement during RS drive was replicated only on inclusion of an inhibitory LTS population, whose activation was critically dependent on RS synchrony and evoked longer-lasting inhibition. Our results predict that differential recruitment of FS and LTS inhibitory populations is essential to the observed cortical dynamics and may provide a means for amplifying the natural expression of distinct oscillations in normal cortical processing.

  3. Inferring oscillatory modulation in neural spike trains.

    Science.gov (United States)

    Arai, Kensuke; Kass, Robert E

    2017-10-01

    Oscillations are observed at various frequency bands in continuous-valued neural recordings like the electroencephalogram (EEG) and local field potential (LFP) in bulk brain matter, and analysis of spike-field coherence reveals that spiking of single neurons often occurs at certain phases of the global oscillation. Oscillatory modulation has been examined in relation to continuous-valued oscillatory signals, and independently from the spike train alone, but behavior- or stimulus-triggered firing-rate modulation, spiking sparseness, the presence of slow modulation not locked to stimuli, and irregular oscillations with large variability in oscillatory periods present challenges to searching for temporal structure in the spike train. In order to study oscillatory modulation in real data collected under a variety of experimental conditions, we describe a flexible point-process framework, the Latent Oscillatory Spike Train (LOST) model, which decomposes the instantaneous firing rate into biologically and behaviorally relevant factors: spiking refractoriness, event-locked firing-rate non-stationarity, and trial-to-trial variability accounted for by a baseline offset and a stochastic oscillatory modulation. We also extend the LOST model to accommodate changes in the modulatory structure over the duration of the experiment, and thereby discover trial-to-trial variability in the spike-field coherence of a rat primary motor cortical neuron to the LFP theta rhythm. Because LOST incorporates a latent stochastic auto-regressive term, it is able to detect oscillations when the firing rate is low, the modulation is weak, and the modulating oscillation has a broad spectral peak.

  4. Chimera-like states in a neuronal network model of the cat brain

    Science.gov (United States)

    Santos, M. S.; Szezech, J. D.; Borges, F. S.; Iarosz, K. C.; Caldas, I. L.; Batista, A. M.; Viana, R. L.; Kurths, J.

    2017-08-01

    Neuronal systems have been modeled by complex networks at different levels of description. Recently, it has been verified that networks can simultaneously exhibit one coherent and one incoherent domain, a phenomenon known as chimera states. In this work, we study the existence of chimera states in a network whose connectivity matrix is based on the cat cerebral cortex. The cerebral cortex of the cat can be separated into 65 cortical areas organised into four cognitive regions: visual, auditory, somatosensory-motor and frontolimbic. We consider a network where the local dynamics is given by the Hindmarsh-Rose model, a well-known model of neuronal activity that has been used to simulate the membrane potential of neurons. Here, we analyse under which conditions chimera states are present, as well as the effects that the coupling intensity induces on them. We observe the existence of chimera states in which the incoherent domain can be composed of desynchronised spikes or desynchronised bursts. Moreover, we find that chimera states with desynchronised bursts are more robust to neuronal noise than those with desynchronised spikes.
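
For reference, the Hindmarsh-Rose local dynamics mentioned above can be integrated in a few lines. This is a single uncoupled neuron with commonly used bursting parameters (I = 3.25, r = 0.005, s = 4, x_r = -1.6), not the paper's coupled cat-cortex network; a plain Euler scheme is used for brevity.

```python
# Euler integration of a single Hindmarsh-Rose neuron (illustrative
# parameters in the standard bursting regime; not the paper's network).
def hindmarsh_rose(t_end=500.0, dt=0.01, I=3.25, r=0.005, s=4.0, x_r=-1.6):
    x, y, z = -1.0, -4.0, 2.0
    xs = []
    for _ in range(int(t_end / dt)):
        dx = y + 3.0 * x * x - x ** 3 - z + I  # membrane potential
        dy = 1.0 - 5.0 * x * x - y             # fast recovery variable
        dz = r * (s * (x - x_r) - z)           # slow adaptation -> bursting
        x, y, z = x + dx * dt, y + dy * dt, z + dz * dt
        xs.append(x)
    return xs

xs = hindmarsh_rose()
print(min(xs), max(xs))  # quiescent phase near -1.6, spikes above 1
```

Coupling many such units through a cortical connectivity matrix, as the abstract describes, is what produces the coexisting coherent and incoherent (spiking or bursting) domains.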

  5. Adaptive cerebellar spiking model embedded in the control loop: context switching and robustness against noise.

    Science.gov (United States)

    Luque, N R; Garrido, J A; Carrillo, R R; Tolu, S; Ros, E

    2011-10-01

    This work evaluates the capability of a spiking cerebellar model embedded in different loop architectures (recurrent, forward, and forward&recurrent) to control a robotic arm (three degrees of freedom) using a biologically inspired approach. The implemented spiking network relies on synaptic plasticity (long-term potentiation and long-term depression) to adapt and cope with perturbations in the manipulation scenario: changes in the dynamics and kinematics of the simulated robot. Furthermore, the effect of several degrees of noise in the cerebellar input pathway (mossy fibers) was assessed depending on the employed control architecture. The implemented cerebellar model managed to adapt in all three control architectures to different dynamics and kinematics, providing corrective actions for more accurate movements. According to the obtained results, coupling both control architectures (forward&recurrent) combines the benefits of the two and leads to higher robustness against noise.

  6. Towards reproducible descriptions of neuronal network models.

    Directory of Open Access Journals (Sweden)

    Eilen Nordlie

    2009-08-01

    Full Text Available Progress in science depends on the effective exchange of ideas among scientists. New ideas can be assessed and criticized in a meaningful manner only if they are formulated precisely. This applies to simulation studies as well as to experiments and theories. But after more than 50 years of neuronal network simulations, we still lack a clear and common understanding of the role of computational models in neuroscience as well as established practices for describing network models in publications. This hinders the critical evaluation of network models as well as their re-use. We analyze here 14 research papers proposing neuronal network models of different complexity and find widely varying approaches to model descriptions, with regard to both the means of description and the ordering and placement of material. We further observe great variation in the graphical representation of networks and the notation used in equations. Based on our observations, we propose a good model description practice, composed of guidelines for the organization of publications, a checklist for model descriptions, templates for tables presenting model structure, and guidelines for diagrams of networks. The main purpose of this good practice is to trigger a debate about the communication of neuronal network models in a manner comprehensible to humans, as opposed to machine-readable model description languages. We believe that the good model description practice proposed here, together with a number of other recent initiatives on data-, model-, and software-sharing, may lead to a deeper and more fruitful exchange of ideas among computational neuroscientists in years to come. We further hope that work on standardized ways of describing--and thinking about--complex neuronal networks will lead the scientific community to a clearer understanding of high-level concepts in network dynamics, and will thus lead to deeper insights into the function of the brain.

  7. Enhancement of Spike-Timing-Dependent Plasticity in Spiking Neural Systems with Noise.

    Science.gov (United States)

    Nobukawa, Sou; Nishimura, Haruhiko

    2016-08-01

    Synaptic plasticity is widely recognized to support adaptable information processing in the brain. Spike-timing-dependent plasticity, one subtype of plasticity, can lead to synchronous spike propagation that carries temporally coded spiking information. Recently, it was reported that in a noisy environment like the actual brain, spike-timing-dependent plasticity may be made more efficient by the effect of stochastic resonance, in which the presence of noise helps a nonlinear system amplify a weak (under-barrier) signal. However, previous studies have ignored the full variety of spiking patterns and many relevant factors in neural dynamics. Thus, in order to establish the physiological plausibility of the enhancement of spike-timing-dependent plasticity by stochastic resonance, it is necessary to demonstrate that this stochastic resonance arises in realistic cortical neural systems. In this study, we evaluate this stochastic resonance phenomenon in a realistic cortical neural system described by the Izhikevich neuron model and compare the characteristics of the typical spiking patterns experimentally observed in the cortex: regular spiking, intrinsically bursting and chattering.
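
The Izhikevich model referred to above is compact enough to sketch. This is a hedged illustration of a single regular-spiking neuron (a = 0.02, b = 0.2, c = -65, d = 8, the standard regular-spiking parameter set) driven by a constant input plus Gaussian noise; the drive and noise levels are illustrative, and noise is simply added per integration step rather than via a formal SDE discretisation.

```python
import random

# Izhikevich neuron, regular-spiking parameter set, with additive noise.
# Input current I and noise level sigma are illustrative choices.
def izhikevich(t_end=1000.0, dt=0.25, a=0.02, b=0.2, c=-65.0, d=8.0,
               I=10.0, sigma=2.0, seed=0):
    random.seed(seed)
    v, u, spikes = c, b * c, []
    t = 0.0
    while t < t_end:
        I_t = I + sigma * random.gauss(0.0, 1.0)   # noisy drive (per step)
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I_t)
        u += dt * a * (b * v - u)
        if v >= 30.0:             # spike: reset membrane, bump adaptation
            spikes.append(t)
            v, u = c, u + d
        t += dt
    return spikes

print(len(izhikevich()))
```

Switching the (a, b, c, d) quadruple to the intrinsically-bursting or chattering sets is how the different cortical spiking patterns discussed in the abstract are obtained from the same equations.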

  8. A robust and biologically plausible spike pattern recognition network.

    Science.gov (United States)

    Larson, Eric; Perrone, Ben P; Sen, Kamal; Billimoria, Cyrus P

    2010-11-17

    The neural mechanisms that enable recognition of spiking patterns in the brain are currently unknown. This is especially relevant in sensory systems, in which the brain has to detect such patterns and recognize relevant stimuli by processing peripheral inputs; in particular, it is unclear how sensory systems can recognize time-varying stimuli by processing spiking activity. Because auditory stimuli are represented by time-varying fluctuations in frequency content, it is useful to consider how such stimuli can be recognized by neural processing. Previous models for sound recognition have used preprocessed or low-level auditory signals as input, but complex natural sounds such as speech are thought to be processed in auditory cortex, and brain regions involved in object recognition in general must deal with the natural variability present in spike trains. Thus, we used neural recordings to investigate how a spike pattern recognition system could deal with the intrinsic variability and diverse response properties of cortical spike trains. We propose a biologically plausible computational spike pattern recognition model that uses an excitatory chain of neurons to spatially preserve the temporal representation of the spike pattern. Using a single neural recording as input, the model can be trained using a spike-timing-dependent plasticity-based learning rule to recognize neural responses to 20 different bird songs with >98% accuracy and can be stimulated to evoke reverse spike pattern playback. Although we test spike train recognition performance in an auditory task, this model can be applied to recognize sufficiently reliable spike patterns from any neuronal system.

  9. Bidirectional coupling between astrocytes and neurons mediates learning and dynamic coordination in the brain: a multiple modeling approach.

    Directory of Open Access Journals (Sweden)

    John J Wade

    Full Text Available In recent years research suggests that astrocyte networks, in addition to nutrient and waste processing functions, regulate both structural and synaptic plasticity. To understand the biological mechanisms that underpin such plasticity requires the development of cell-level models that capture the mutual interaction between astrocytes and neurons. This paper presents a detailed model of bidirectional signaling between astrocytes and neurons (the astrocyte-neuron model or AN model), which yields new insights into the computational role of astrocyte-neuronal coupling. From a set of modeling studies we demonstrate two significant findings. Firstly, that spatial signaling via astrocytes can relay a "learning signal" to remote synaptic sites. Results show that slow inward currents cause synchronized postsynaptic activity in remote neurons and subsequently allow Spike-Timing-Dependent Plasticity based learning to occur at the associated synapses. Secondly, that bidirectional communication between neurons and astrocytes underpins dynamic coordination between neuron clusters. Although our composite AN model is presently applied to simplified neural structures and limited to coordination between localized neurons, the principle (which embodies structural, functional and dynamic complexity) and the modeling strategy may be extended to coordination among remote neuron clusters.

  10. On the genesis of spike-wave oscillations in a mean-field model of human thalamic and corticothalamic dynamics

    International Nuclear Information System (INIS)

    Rodrigues, Serafim; Terry, John R.; Breakspear, Michael

    2006-01-01

    In this Letter, the genesis of spike-wave activity, a hallmark of many generalized epileptic seizures, is investigated in a reduced mean-field model of human neural activity. Drawing upon brain modelling and dynamical systems theory, we demonstrate that the thalamic circuitry of the system is crucial for the generation of these abnormal rhythms, observing that inhibition from the reticular nuclei and excitation from the cortical signal interact to generate the spike-wave oscillation. The mechanism revealed provides an explanation of why approaches based on linear stability and Heaviside approximations to the activation function have failed to explain the phenomenon of spike-wave behaviour in mean-field models. A mathematical understanding of this transition is a crucial step towards relating spiking network models and mean-field approaches to human brain modelling.

  11. Stochastic price modeling of high volatility, mean-reverting, spike-prone commodities: The Australian wholesale spot electricity market

    International Nuclear Information System (INIS)

    Higgs, Helen; Worthington, Andrew

    2008-01-01

    It is commonly known that wholesale spot electricity markets exhibit high price volatility, strong mean-reversion and frequent extreme price spikes. This paper employs a basic stochastic model, a mean-reverting model and a regime-switching model to capture these features in the Australian national electricity market (NEM), comprising the interconnected markets of New South Wales, Queensland, South Australia and Victoria. Daily spot prices from 1 January 1999 to 31 December 2004 are employed. The results show that the regime-switching model outperforms the basic stochastic and mean-reverting models. Electricity prices are also found to exhibit stronger mean-reversion after a price spike than in the normal period, and price volatility is more than fourteen times higher in spike periods than in normal periods. The probability of a spike on any given day ranges between 5.16% in NSW and 9.44% in Victoria.
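
    The mean-reverting component of such a price model can be sketched as a discretized Ornstein-Uhlenbeck process with occasional additive jumps standing in for spikes. All parameters below are purely illustrative and are not the values estimated in the paper:

```python
import numpy as np

def simulate_mean_reverting(p0=30.0, mu=30.0, alpha=0.2, sigma=2.0,
                            spike_prob=0.05, spike_size=100.0,
                            n_days=1000, seed=0):
    """Euler discretization of a mean-reverting spot price with
    occasional additive spikes (illustrative parameters only)."""
    rng = np.random.default_rng(seed)
    p = np.empty(n_days)
    p[0] = p0
    for t in range(1, n_days):
        # jump component: a large additive spike with small probability
        jump = spike_size if rng.random() < spike_prob else 0.0
        # mean-reversion toward mu at speed alpha, plus Gaussian noise
        p[t] = p[t-1] + alpha * (mu - p[t-1]) + sigma * rng.standard_normal() + jump
    return p
```

A regime-switching extension would replace the single jump probability with a Markov chain over "normal" and "spike" regimes, each with its own volatility.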

  12. Comparison of Langevin and Markov channel noise models for neuronal signal generation.

    Science.gov (United States)

    Sengupta, B; Laughlin, S B; Niven, J E

    2010-01-01

    The stochastic opening and closing of voltage-gated ion channels produce noise in neurons. The effect of this noise on neuronal performance has been modeled using either an approximate Langevin model, based on stochastic differential equations, or an exact model based on a Markov process description of channel gating. Yet whether the Langevin model accurately reproduces the channel noise produced by the Markov model remains unclear. Here we present a comparison between Langevin and Markov models of channel noise in neurons, using single-compartment Hodgkin-Huxley models containing either Na+ and K+ channels, or only K+ voltage-gated ion channels. The performance of the Langevin and Markov models was quantified over a range of stimulus statistics, membrane areas, and channel numbers. We find that, in comparison to the Markov model, the Langevin model underestimates the noise contributed by voltage-gated ion channels, thereby overestimating information rates for both spiking and nonspiking membranes. Even with increasing numbers of channels, the difference between the two models persists. This suggests that the Langevin model may not be suitable for accurately simulating channel noise in neurons, even in simulations with large numbers of ion channels.
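
    The contrast between the two schemes can be illustrated for a single population of two-state channels at fixed voltage: an exact Markov (binomial) update versus a Langevin SDE with a system-size diffusion term. The rates, channel count, and time step below are illustrative assumptions, not the paper's Hodgkin-Huxley setup:

```python
import numpy as np

def simulate_channels(n_ch=1000, alpha=1.0, beta=1.0, dt=0.01,
                      n_steps=20000, seed=1):
    """Open-channel fraction of a two-state channel under an exact
    Markov (binomial) scheme and a Langevin (SDE) approximation.
    Rates are per ms; parameters are illustrative."""
    rng = np.random.default_rng(seed)
    n_open = n_ch // 2            # Markov state: number of open channels
    x = 0.5                       # Langevin state: open fraction
    markov, langevin = [], []
    for _ in range(n_steps):
        # Markov: each closed channel opens w.p. alpha*dt,
        # each open channel closes w.p. beta*dt
        opened = rng.binomial(n_ch - n_open, alpha * dt)
        closed = rng.binomial(n_open, beta * dt)
        n_open += opened - closed
        # Langevin: deterministic drift plus state-dependent diffusion
        drift = alpha * (1 - x) - beta * x
        diff = np.sqrt(max(alpha * (1 - x) + beta * x, 0.0) / n_ch)
        x += drift * dt + diff * np.sqrt(dt) * rng.standard_normal()
        x = min(max(x, 0.0), 1.0)  # keep the fraction in [0, 1]
        markov.append(n_open / n_ch)
        langevin.append(x)
    return np.array(markov), np.array(langevin)
```

Both schemes share the stationary mean alpha/(alpha+beta); the paper's point is that their fluctuation statistics, and hence the resulting information rates, differ once the channels are coupled to voltage.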

  13. Spatiotemporal Dynamics and Reliable Computations in Recurrent Spiking Neural Networks

    Science.gov (United States)

    Pyle, Ryan; Rosenbaum, Robert

    2017-01-01

    Randomly connected networks of excitatory and inhibitory spiking neurons provide a parsimonious model of neural variability, but are notoriously unreliable for performing computations. We show that this difficulty is overcome by incorporating the well-documented dependence of connection probability on distance. Spatially extended spiking networks exhibit symmetry-breaking bifurcations and generate spatiotemporal patterns that can be trained to perform dynamical computations under a reservoir computing framework.

  14. Is the thermal-spike model consistent with experimentally determined electron temperature?

    International Nuclear Information System (INIS)

    Ajryan, Eh.A.; Fedorov, A.V.; Kostenko, B.F.

    2000-01-01

    Carbon K-Auger electron spectra from amorphous carbon foils induced by fast heavy ions are investigated theoretically. The high-energy tail of the Auger structure, which shows a clear projectile-charge dependence, is analyzed within the thermal-spike model framework as well as within another model that takes into account some kinetic features of the process. The poor agreement between theoretically and experimentally determined temperatures is suggested to be due to an improper account of double electron excitations, or to shake-up processes which leave the system in a more energetic initial state than a statically screened core hole.

  15. Technique for infrared and visible image fusion based on non-subsampled shearlet transform and spiking cortical model

    Science.gov (United States)

    Kong, Weiwei; Wang, Binghe; Lei, Yang

    2015-07-01

    Fusion of infrared and visible images is an active research area in image processing, and a variety of relevant algorithms have been developed. However, existing techniques commonly cannot achieve good fusion performance and acceptable computational complexity simultaneously. This paper proposes a novel image fusion approach that integrates the non-subsampled shearlet transform (NSST) with the spiking cortical model (SCM) to overcome these drawbacks. On the one hand, using NSST for decomposition and reconstruction is not only consistent with the characteristics of human vision but also effectively decreases computational complexity compared with currently popular multi-resolution analysis tools such as the non-subsampled contourlet transform (NSCT). On the other hand, the SCM, which has recently been considered an optimal neural network model for this purpose, is responsible for the fusion of sub-images from different scales and directions. Experimental results indicate that the proposed method is promising: it significantly improves fusion quality in terms of both subjective visual performance and objective comparisons with other popular current methods.

  16. Response of Electrical Activity in an Improved Neuron Model under Electromagnetic Radiation and Noise

    Directory of Open Access Journals (Sweden)

    Feibiao Zhan

    2017-11-01

    Full Text Available Electrical activities are ubiquitous neuronal bioelectric phenomena, which have many different modes for encoding biological information and which constitute the whole process of signal propagation between neurons. We therefore focus on the electrical activities of neurons, a topic of widespread concern among neuroscientists. In this paper, we mainly investigate the electrical activities of the Morris-Lecar (M-L) model with electromagnetic radiation or Gaussian white noise, which can restore the authenticity of neurons in a realistic neural network. First, we explore the dynamical response of the whole system with electromagnetic induction (EMI) and Gaussian white noise. We find that there are slight differences in discharge behavior when comparing the response of the original system with that of the improved system, and that electromagnetic induction can transform the bursting or spiking state into the quiescent state and vice versa. Furthermore, we investigate the bursting transition modes and the corresponding periodic-solution mechanism for the isolated neuron model with electromagnetic induction, using one-parameter and two-parameter bifurcation analysis. Finally, we analyze the effects of Gaussian white noise on the original system and the coupled system, which is conducive to understanding the actual discharge properties of realistic neurons.

  17. Response of Electrical Activity in an Improved Neuron Model under Electromagnetic Radiation and Noise.

    Science.gov (United States)

    Zhan, Feibiao; Liu, Shenquan

    2017-01-01

    Electrical activities are ubiquitous neuronal bioelectric phenomena, which have many different modes for encoding biological information and which constitute the whole process of signal propagation between neurons. We therefore focus on the electrical activities of neurons, a topic of widespread concern among neuroscientists. In this paper, we mainly investigate the electrical activities of the Morris-Lecar (M-L) model with electromagnetic radiation or Gaussian white noise, which can restore the authenticity of neurons in a realistic neural network. First, we explore the dynamical response of the whole system with electromagnetic induction (EMI) and Gaussian white noise. We find that there are slight differences in discharge behavior when comparing the response of the original system with that of the improved system, and that electromagnetic induction can transform the bursting or spiking state into the quiescent state and vice versa. Furthermore, we investigate the bursting transition modes and the corresponding periodic-solution mechanism for the isolated neuron model with electromagnetic induction, using one-parameter and two-parameter bifurcation analysis. Finally, we analyze the effects of Gaussian white noise on the original system and the coupled system, which is conducive to understanding the actual discharge properties of realistic neurons.
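
    For reference, the deterministic Morris-Lecar equations can be integrated with a simple Euler scheme. The parameter set below is a standard Hopf-regime choice from the literature, not necessarily the one used in the paper:

```python
import numpy as np

def morris_lecar(I=100.0, T=1000.0, dt=0.05):
    """Euler simulation of the two-variable Morris-Lecar model
    (standard Hopf-regime parameters; units: mV, ms, uA/cm^2)."""
    C, gCa, gK, gL = 20.0, 4.4, 8.0, 2.0
    VCa, VK, VL = 120.0, -84.0, -60.0
    V1, V2, V3, V4, phi = -1.2, 18.0, 2.0, 30.0, 0.04
    n = int(T / dt)
    V = np.empty(n); w = np.empty(n)
    V[0], w[0] = -60.0, 0.0
    for t in range(n - 1):
        # instantaneous Ca activation and K gating steady state / time scale
        m_inf = 0.5 * (1 + np.tanh((V[t] - V1) / V2))
        w_inf = 0.5 * (1 + np.tanh((V[t] - V3) / V4))
        tau_w = 1.0 / np.cosh((V[t] - V3) / (2 * V4))
        dV = (I - gCa * m_inf * (V[t] - VCa) - gK * w[t] * (V[t] - VK)
              - gL * (V[t] - VL)) / C
        V[t+1] = V[t] + dt * dV
        w[t+1] = w[t] + dt * phi * (w_inf - w[t]) / tau_w
    return V, w

def count_spikes(V, thresh=-10.0):
    """Count upward threshold crossings of the membrane potential."""
    return int(np.sum((V[:-1] < thresh) & (V[1:] >= thresh)))
```

The paper's improved system adds a magnetic-flux variable (electromagnetic induction) and stochastic forcing on top of equations of this form.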

  18. Neuronal chains for actions in the parietal lobe: a computational model.

    Science.gov (United States)

    Chersi, Fabian; Ferrari, Pier Francesco; Fogassi, Leonardo

    2011-01-01

    The inferior part of the parietal lobe (IPL) is known to play a very important role in sensorimotor integration. Neurons in this region code goal-related motor acts performed with the mouth, with the hand and with the arm. It has been demonstrated that most IPL motor neurons coding a specific motor act (e.g., grasping) show markedly different activation patterns according to the final goal of the action sequence in which the act is embedded (grasping for eating or grasping for placing). Some of these neurons (parietal mirror neurons) show a similar selectivity also during the observation of the same action sequences when executed by others. Thus, it appears that the neuronal response occurring during the execution and the observation of a specific grasping act codes not only the executed motor act, but also the agent's final goal (intention). In this work we present a biologically inspired neural network architecture that models mechanisms of motor sequence execution and recognition. In this network, pools composed of motor and mirror neurons that encode the motor acts of a sequence are arranged in the form of action goal-specific neuronal chains. The execution and the recognition of actions are achieved through the propagation of activity bursts along specific chains, modulated by visual and somatosensory inputs. The implemented spiking neuron network is able to reproduce the results found in neurophysiological recordings of parietal neurons during task performance and provides a biologically plausible implementation of the action selection and recognition process. Finally, the present paper proposes a mechanism for the formation of new neural chains by linking together, in a sequential manner, neurons that represent subsequent motor acts, thus producing goal-directed sequences.

  19. Neuronal chains for actions in the parietal lobe: a computational model.

    Directory of Open Access Journals (Sweden)

    Fabian Chersi

    Full Text Available The inferior part of the parietal lobe (IPL) is known to play a very important role in sensorimotor integration. Neurons in this region code goal-related motor acts performed with the mouth, with the hand and with the arm. It has been demonstrated that most IPL motor neurons coding a specific motor act (e.g., grasping) show markedly different activation patterns according to the final goal of the action sequence in which the act is embedded (grasping for eating or grasping for placing). Some of these neurons (parietal mirror neurons) show a similar selectivity also during the observation of the same action sequences when executed by others. Thus, it appears that the neuronal response occurring during the execution and the observation of a specific grasping act codes not only the executed motor act, but also the agent's final goal (intention). In this work we present a biologically inspired neural network architecture that models mechanisms of motor sequence execution and recognition. In this network, pools composed of motor and mirror neurons that encode the motor acts of a sequence are arranged in the form of action goal-specific neuronal chains. The execution and the recognition of actions are achieved through the propagation of activity bursts along specific chains, modulated by visual and somatosensory inputs. The implemented spiking neuron network is able to reproduce the results found in neurophysiological recordings of parietal neurons during task performance and provides a biologically plausible implementation of the action selection and recognition process. Finally, the present paper proposes a mechanism for the formation of new neural chains by linking together, in a sequential manner, neurons that represent subsequent motor acts, thus producing goal-directed sequences.

  20. Neuronal Chains for Actions in the Parietal Lobe: A Computational Model

    Science.gov (United States)

    Chersi, Fabian; Ferrari, Pier Francesco; Fogassi, Leonardo

    2011-01-01

    The inferior part of the parietal lobe (IPL) is known to play a very important role in sensorimotor integration. Neurons in this region code goal-related motor acts performed with the mouth, with the hand and with the arm. It has been demonstrated that most IPL motor neurons coding a specific motor act (e.g., grasping) show markedly different activation patterns according to the final goal of the action sequence in which the act is embedded (grasping for eating or grasping for placing). Some of these neurons (parietal mirror neurons) show a similar selectivity also during the observation of the same action sequences when executed by others. Thus, it appears that the neuronal response occurring during the execution and the observation of a specific grasping act codes not only the executed motor act, but also the agent's final goal (intention). In this work we present a biologically inspired neural network architecture that models mechanisms of motor sequence execution and recognition. In this network, pools composed of motor and mirror neurons that encode the motor acts of a sequence are arranged in the form of action goal-specific neuronal chains. The execution and the recognition of actions are achieved through the propagation of activity bursts along specific chains, modulated by visual and somatosensory inputs. The implemented spiking neuron network is able to reproduce the results found in neurophysiological recordings of parietal neurons during task performance and provides a biologically plausible implementation of the action selection and recognition process. Finally, the present paper proposes a mechanism for the formation of new neural chains by linking together, in a sequential manner, neurons that represent subsequent motor acts, thus producing goal-directed sequences. PMID:22140455

  1. Diffusion approximation of neuronal models revisited

    Czech Academy of Sciences Publication Activity Database

    Čupera, Jakub

    2014-01-01

    Roč. 11, č. 1 (2014), s. 11-25 ISSN 1547-1063. [International Workshop on Neural Coding (NC) /10./. Praha, 02.09.2012-07.09.2012] R&D Projects: GA ČR(CZ) GAP103/11/0282 Institutional support: RVO:67985823 Keywords : stochastic model * neuronal activity * first-passage time Subject RIV: JD - Computer Applications, Robotics Impact factor: 0.840, year: 2014
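
    The first-passage-time problem named in this record's keywords can be estimated by Monte Carlo for an Ornstein-Uhlenbeck diffusion approximation of the membrane potential; all parameters below are illustrative, not taken from the paper:

```python
import numpy as np

def ou_first_passage(mu=1.2, tau=10.0, sigma=0.5, v_th=1.0,
                     dt=0.02, t_max=100.0, n_trials=200, seed=0):
    """Monte Carlo first-passage times to threshold v_th for the
    OU diffusion dV = (mu - V)/tau dt + sigma dW, starting at V=0
    (illustrative parameters; suprathreshold regime since mu > v_th)."""
    rng = np.random.default_rng(seed)
    fpts = []
    n_steps = int(t_max / dt)
    for _ in range(n_trials):
        v, t = 0.0, 0.0
        for _ in range(n_steps):
            v += (mu - v) / tau * dt + sigma * np.sqrt(dt) * rng.standard_normal()
            t += dt
            if v >= v_th:          # threshold crossing = "spike"
                fpts.append(t)
                break
    return np.array(fpts)
```

The distribution of these crossing times is the diffusion approximation's prediction for the interspike-interval distribution.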

  2. Unsupervised learning of temporal features for word categorization in a spiking neural network model of the auditory brain.

    Science.gov (United States)

    Higgins, Irina; Stringer, Simon; Schnupp, Jan

    2017-01-01

    The nature of the code used in the auditory cortex to represent complex auditory stimuli, such as naturally spoken words, remains a matter of debate. Here we argue that such representations are encoded by stable spatio-temporal patterns of firing within cell assemblies known as polychronous groups, or PGs. We develop a physiologically grounded, unsupervised spiking neural network model of the auditory brain with local, biologically realistic, spike-timing-dependent plasticity (STDP) learning, and show that the plastic cortical layers of the network develop PGs which convey substantially more information about the speaker-independent identity of two naturally spoken word stimuli than does rate encoding that ignores the precise spike timings. We furthermore demonstrate that such informative PGs can only develop if the input spatio-temporal spike patterns to the plastic cortical areas of the model are relatively stable.
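
    The local STDP rule underlying such learning is commonly written as a pair of exponential windows over the pre/post spike-time difference; a minimal sketch with illustrative constants (not the model's actual values):

```python
from math import exp

def stdp_dw(delta_t, a_plus=0.01, a_minus=0.012,
            tau_plus=20.0, tau_minus=20.0):
    """Pair-based STDP weight change for delta_t = t_post - t_pre (ms).
    Pre-before-post (delta_t >= 0) potentiates; post-before-pre depresses.
    Amplitudes and time constants are illustrative."""
    if delta_t >= 0:
        return a_plus * exp(-delta_t / tau_plus)
    return -a_minus * exp(delta_t / tau_minus)
```

Because the weight change decays exponentially with the spike-time difference, only tightly timed spike pairs contribute, which is what allows stable spatio-temporal patterns (PGs) to be reinforced while uncorrelated activity is not.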

  3. The Spike-and-Slab Lasso Generalized Linear Models for Prediction and Associated Genes Detection.

    Science.gov (United States)

    Tang, Zaixiang; Shen, Yueping; Zhang, Xinyan; Yi, Nengjun

    2017-01-01

    Large-scale "omics" data have been increasingly used as an important resource for prognostic prediction of diseases and detection of associated genes. However, there are considerable challenges in analyzing high-dimensional molecular data, including the large number of potential molecular predictors, limited number of samples, and small effect of each predictor. We propose new Bayesian hierarchical generalized linear models, called spike-and-slab lasso GLMs, for prognostic prediction and detection of associated genes using large-scale molecular data. The proposed model employs a spike-and-slab mixture double-exponential prior for coefficients that can induce weak shrinkage on large coefficients and strong shrinkage on irrelevant coefficients. We have developed a fast and stable algorithm to fit large-scale hierarchical GLMs by incorporating expectation-maximization (EM) steps into the fast cyclic coordinate descent algorithm. The proposed approach integrates attractive features of two popular methods, i.e., penalized lasso and Bayesian spike-and-slab variable selection. The performance of the proposed method is assessed via extensive simulation studies. The results show that the proposed approach can provide not only more accurate estimates of the parameters, but also better prediction. We demonstrate the proposed procedure on two cancer data sets: a well-known breast cancer data set consisting of 295 tumors and expression data for 4919 genes, and the ovarian cancer data set from TCGA with 362 tumors and expression data for 5336 genes. Our analyses show that the proposed procedure can generate powerful models for predicting outcomes and detecting associated genes. The methods have been implemented in a freely available R package BhGLM (http://www.ssg.uab.edu/bhglm/). Copyright © 2017 by the Genetics Society of America.
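
    The spike-and-slab mixture double-exponential prior described above can be sketched as a two-component Laplace mixture: a narrow "spike" that shrinks irrelevant coefficients strongly and a wide "slab" that leaves large coefficients nearly unshrunk. The mixing weight and scales below are illustrative, not values fitted by BhGLM:

```python
import math

def de_pdf(beta, scale):
    """Zero-mean double-exponential (Laplace) density with the given scale."""
    return math.exp(-abs(beta) / scale) / (2.0 * scale)

def spike_slab_de_pdf(beta, gamma=0.1, s0=0.05, s1=1.0):
    """Spike-and-slab double-exponential prior density: with probability
    gamma the coefficient comes from the wide slab (scale s1), otherwise
    from the narrow spike (scale s0). Parameters are illustrative."""
    return gamma * de_pdf(beta, s1) + (1.0 - gamma) * de_pdf(beta, s0)
```

Near zero the spike component dominates (strong shrinkage); in the tails the slab dominates, so large effects escape heavy penalization, which is the selective-shrinkage behavior the abstract describes.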

  4. A Dynamic Bayesian Model for Characterizing Cross-Neuronal Interactions During Decision-Making.

    Science.gov (United States)

    Zhou, Bo; Moorman, David E; Behseta, Sam; Ombao, Hernando; Shahbaba, Babak

    2016-01-01

    The goal of this paper is to develop a novel statistical model for studying cross-neuronal spike train interactions during decision making. For an individual to successfully complete the task of decision-making, a number of temporally-organized events must occur: stimuli must be detected, potential outcomes must be evaluated, behaviors must be executed or inhibited, and outcomes (such as reward or no-reward) must be experienced. Due to the complexity of this process, it is likely the case that decision-making is encoded by the temporally-precise interactions between large populations of neurons. Most existing statistical models, however, are inadequate for analyzing such a phenomenon because they provide only an aggregated measure of interactions over time. To address this considerable limitation, we propose a dynamic Bayesian model which captures the time-varying nature of neuronal activity (such as the time-varying strength of the interactions between neurons). The proposed method yielded results that reveal new insight into the dynamic nature of population coding in the prefrontal cortex during decision making. In our analysis, we note that while some neurons in the prefrontal cortex do not synchronize their firing activity until the presence of a reward, a different set of neurons synchronize their activity shortly after stimulus onset. These differentially synchronizing sub-populations of neurons suggest a continuum of population representation of the reward-seeking task. Additionally, our analyses suggest that the degree of synchronization differs between the rewarded and non-rewarded conditions. Moreover, the proposed model is scalable to handle data on many simultaneously-recorded neurons and is applicable to analyzing other types of multivariate time series data with latent structure. Supplementary materials (including computer codes) for our paper are available online.

  5. On the properties of input-to-output transformations in neuronal networks.

    Science.gov (United States)

    Olypher, Andrey; Vaillant, Jean

    2016-06-01

    Information processing in neuronal networks can, in certain important cases, be considered as a map of binary vectors, where ones (spikes) and zeros (no spikes) of input neurons are transformed into spikes and no spikes of output neurons. A simple but fundamental characteristic of such a map is how it transforms distances between input vectors into distances between output vectors. We advance previously known results by finding an exact solution to this problem for McCulloch-Pitts neurons. The obtained explicit formulas allow for detailed analysis of how network connectivity and neuronal excitability affect the transformation of distances in neurons. As an application, we explored a simple model of information processing in the hippocampus, a brain area critically implicated in learning and memory. We found network connectivity and neuronal excitability parameter values that optimize discrimination between similar and distinct inputs. A decrease of neuronal excitability, which in biological neurons may be associated with decreased inhibition, impaired the optimality of discrimination.
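
    A minimal sketch of the kind of map studied here: a layer of McCulloch-Pitts threshold units transforming binary input vectors, with Hamming distance measured before and after the transformation. The weights and threshold are illustrative, not the paper's formulas:

```python
import numpy as np

def mcculloch_pitts_layer(x, weights, theta):
    """Binary threshold units: output 1 wherever the weighted input
    sum reaches threshold theta (illustrative sketch)."""
    return (weights @ x >= theta).astype(int)

def hamming(a, b):
    """Hamming distance between two binary vectors."""
    return int(np.sum(np.asarray(a) != np.asarray(b)))
```

For example, with three units each reading two adjacent inputs (`weights = [[1,1,0,0],[0,1,1,0],[0,0,1,1]]`, `theta = 2`), inputs `[1,1,0,0]` and `[1,1,1,0]` are at input distance 1 and map to outputs `[1,0,0]` and `[1,1,0]`, at output distance 1; raising theta (lower excitability) changes how input distances are transformed.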

  6. A model of human motor sequence learning explains facilitation and interference effects based on spike-timing dependent plasticity.

    Directory of Open Access Journals (Sweden)

    Quan Wang

    2017-08-01

    Full Text Available The ability to learn sequential behaviors is a fundamental property of our brains. Yet a long stream of studies, including recent experiments investigating motor sequence learning in adult human subjects, has produced a number of puzzling and seemingly contradictory results. In particular, when subjects have to learn multiple action sequences, learning is sometimes impaired by proactive and retroactive interference effects. In other situations, however, learning is accelerated, as reflected in facilitation and transfer effects. At present it is unclear what the underlying neural mechanisms are that give rise to these diverse findings. Here we show that a recently developed recurrent neural network model readily reproduces this diverse set of findings. The self-organizing recurrent neural network (SORN) model is a network of recurrently connected threshold units that combines a simplified form of spike-timing-dependent plasticity (STDP) with homeostatic plasticity mechanisms ensuring network stability, namely intrinsic plasticity (IP) and synaptic normalization (SN). When trained on sequence learning tasks modeled after recent experiments, we find that it reproduces the full range of interference, facilitation, and transfer effects. We show how these effects are rooted in the network's changing internal representation of the different sequences across learning and how they depend on an interaction of training schedule and task similarity. Furthermore, since learning in the model is based on fundamental neuronal plasticity mechanisms, the model reveals how these plasticity mechanisms are ultimately responsible for the network's sequence learning abilities. In particular, we find that all three plasticity mechanisms are essential for the network to learn effective internal models of the different training sequences. This ability to form effective internal models is also the basis for the observed interference and facilitation effects. This suggests that

  7. A model of human motor sequence learning explains facilitation and interference effects based on spike-timing dependent plasticity.

    Science.gov (United States)

    Wang, Quan; Rothkopf, Constantin A; Triesch, Jochen

    2017-08-01

    The ability to learn sequential behaviors is a fundamental property of our brains. Yet a long stream of studies, including recent experiments investigating motor sequence learning in adult human subjects, has produced a number of puzzling and seemingly contradictory results. In particular, when subjects have to learn multiple action sequences, learning is sometimes impaired by proactive and retroactive interference effects. In other situations, however, learning is accelerated, as reflected in facilitation and transfer effects. At present it is unclear what the underlying neural mechanisms are that give rise to these diverse findings. Here we show that a recently developed recurrent neural network model readily reproduces this diverse set of findings. The self-organizing recurrent neural network (SORN) model is a network of recurrently connected threshold units that combines a simplified form of spike-timing-dependent plasticity (STDP) with homeostatic plasticity mechanisms ensuring network stability, namely intrinsic plasticity (IP) and synaptic normalization (SN). When trained on sequence learning tasks modeled after recent experiments, we find that it reproduces the full range of interference, facilitation, and transfer effects. We show how these effects are rooted in the network's changing internal representation of the different sequences across learning and how they depend on an interaction of training schedule and task similarity. Furthermore, since learning in the model is based on fundamental neuronal plasticity mechanisms, the model reveals how these plasticity mechanisms are ultimately responsible for the network's sequence learning abilities. In particular, we find that all three plasticity mechanisms are essential for the network to learn effective internal models of the different training sequences. This ability to form effective internal models is also the basis for the observed interference and facilitation effects. This suggests that STDP, IP, and SN

  8. Methods of Stochastic Analysis of Complex Regimes in the 3D Hindmarsh-Rose Neuron Model

    Science.gov (United States)

    Bashkirtseva, Irina; Ryashko, Lev; Slepukhina, Evdokia

    A problem of stochastic nonlinear analysis of neuronal activity is studied using the example of the Hindmarsh-Rose (HR) model. For the parametric region of tonic spiking oscillations, it is shown that random noise transforms the spiking dynamical regime into a bursting one. This stochastic phenomenon is specified by qualitative changes in the distributions of random trajectories and interspike intervals (ISIs). For a quantitative analysis of the noise-induced bursting, we suggest a constructive semi-analytical approach based on the stochastic sensitivity function (SSF) technique and the method of confidence domains, which allows us to describe geometrically the distribution of random states around the deterministic attractors. Using this approach, we develop a new algorithm for estimating the critical noise intensities corresponding to the qualitative changes in the stochastic dynamics. We show that the obtained estimates are in good agreement with the numerical results. An interplay between noise-induced bursting and transitions from order to chaos is discussed.
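
    The setting studied here can be reproduced qualitatively with an Euler-Maruyama simulation of the 3D Hindmarsh-Rose equations using the standard parameter set; the injected current and noise intensity below are illustrative choices, not the paper's critical values:

```python
import numpy as np

def hindmarsh_rose(I=4.0, noise=0.0, T=2000.0, dt=0.01, seed=0):
    """Euler-Maruyama simulation of the 3D Hindmarsh-Rose model with
    additive noise on the membrane variable x. Standard parameters
    (a=1, b=3, c=1, d=5, s=4, x0=-1.6, r=0.006); returns the x trace."""
    rng = np.random.default_rng(seed)
    a, b, c, d = 1.0, 3.0, 1.0, 5.0
    r, s, x0 = 0.006, 4.0, -1.6
    n = int(T / dt)
    x, y, z = -1.5, 0.0, 0.0
    xs = np.empty(n)
    sq = np.sqrt(dt)
    for t in range(n):
        dx = y - a * x**3 + b * x**2 - z + I
        dy = c - d * x**2 - y
        dz = r * (s * (x - x0) - z)   # slow adaptation variable
        x += dt * dx + noise * sq * rng.standard_normal()
        y += dt * dy
        z += dt * dz
        xs[t] = x
    return xs
```

Comparing interspike-interval histograms of the noise-free and noisy traces is the simplest way to see the spiking-to-bursting transition the abstract analyzes with the SSF technique.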

  9. Macro- and micro-chaotic structures in the Hindmarsh-Rose model of bursting neurons

    Energy Technology Data Exchange (ETDEWEB)

    Barrio, Roberto, E-mail: rbarrio@unizar.es; Serrano, Sergio [Computational Dynamics Group, Departamento de Matemática Aplicada, GME and IUMA, Universidad de Zaragoza, E-50009 Zaragoza (Spain); Angeles Martínez, M. [Computational Dynamics Group, GME, Universidad de Zaragoza, E-50009 Zaragoza (Spain); Shilnikov, Andrey [Neuroscience Institute and Department of Mathematics and Statistics, Georgia State University, Atlanta, Georgia 30078 (United States); Department of Computational Mathematics and Cybernetics, Lobachevsky State University of Nizhni Novgorod, 603950 Nizhni Novgorod (Russian Federation)

    2014-06-01

    We study a plethora of chaotic phenomena in the Hindmarsh-Rose neuron model with the use of several computational techniques including the bifurcation parameter continuation, spike-quantification, and evaluation of Lyapunov exponents in bi-parameter diagrams. Such an aggregated approach allows for detecting regions of simple and chaotic dynamics, and demarcating borderlines—exact bifurcation curves. We demonstrate how the organizing centers—points corresponding to codimension-two homoclinic bifurcations—along with fold and period-doubling bifurcation curves structure the biparametric plane, thus forming macro-chaotic regions of onion bulb shapes and revealing spike-adding cascades that generate micro-chaotic structures due to the hysteresis.

  10. Macro- and micro-chaotic structures in the Hindmarsh-Rose model of bursting neurons

    International Nuclear Information System (INIS)

    Barrio, Roberto; Serrano, Sergio; Angeles Martínez, M.; Shilnikov, Andrey

    2014-01-01

    We study a plethora of chaotic phenomena in the Hindmarsh-Rose neuron model with the use of several computational techniques including the bifurcation parameter continuation, spike-quantification, and evaluation of Lyapunov exponents in bi-parameter diagrams. Such an aggregated approach allows for detecting regions of simple and chaotic dynamics, and demarcating borderlines—exact bifurcation curves. We demonstrate how the organizing centers—points corresponding to codimension-two homoclinic bifurcations—along with fold and period-doubling bifurcation curves structure the biparametric plane, thus forming macro-chaotic regions of onion bulb shapes and revealing spike-adding cascades that generate micro-chaotic structures due to the hysteresis.

  11. Exact subthreshold integration with continuous spike times in discrete-time neural network simulations.

    Science.gov (United States)

    Morrison, Abigail; Straube, Sirko; Plesser, Hans Ekkehard; Diesmann, Markus

    2007-01-01

    Very large networks of spiking neurons can be simulated efficiently in parallel under the constraint that spike times are bound to an equidistant time grid. Within this scheme, the subthreshold dynamics of a wide class of integrate-and-fire-type neuron models can be integrated exactly from one grid point to the next. However, the loss in accuracy caused by restricting spike times to the grid can have undesirable consequences, which has led to interest in interpolating spike times between the grid points to retrieve an adequate representation of network dynamics. We demonstrate that the exact integration scheme can be combined naturally with off-grid spike events found by interpolation. We show that by exploiting the existence of a minimal synaptic propagation delay, the need for a central event queue is removed, so that the precision of event-driven simulation on the level of single neurons is combined with the efficiency of time-driven global scheduling. Further, for neuron models with linear subthreshold dynamics, even local event queuing can be avoided, resulting in much greater efficiency on the single-neuron level. These ideas are exemplified by two implementations of a widely used neuron model. We present a measure for the efficiency of network simulations in terms of their integration error and show that for a wide range of input spike rates, the novel techniques we present are both more accurate and faster than standard techniques.
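
    For a neuron model with linear subthreshold dynamics, the exact integration scheme reduces to a closed-form propagator from one grid point to the next. A minimal sketch for a leaky integrate-and-fire neuron driven by an exponentially decaying synaptic current (constants illustrative):

```python
import numpy as np

def exact_step(V, i_syn, h, tau_m=10.0, tau_s=2.0, C=1.0):
    """Exact subthreshold update over a step of length h for the system
        dV/dt = -V/tau_m + i/C,    di/dt = -i/tau_s,
    using the closed-form solution of the linear ODEs
    (illustrative constants; requires tau_m != tau_s)."""
    em = np.exp(-h / tau_m)
    es = np.exp(-h / tau_s)
    V_new = V * em + (i_syn / C) * (es - em) / (1.0 / tau_m - 1.0 / tau_s)
    return V_new, i_syn * es
```

Because the propagator is exact, composing two half-steps reproduces a full step to machine precision; this is what allows off-grid (interpolated) spike times to be handled by splitting a grid interval at the event without any loss of accuracy.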

  12. A Spiking Neural Network in sEMG Feature Extraction.

    Science.gov (United States)

    Lobov, Sergey; Mironov, Vasiliy; Kastalskiy, Innokentiy; Kazantsev, Victor

    2015-11-03

    We have developed a novel algorithm for sEMG feature extraction and classification. It is based on a hybrid network composed of spiking and artificial neurons. The spiking neuron layer with mutual inhibition serves as the feature extractor. We demonstrate that the classification accuracy of the proposed model can reach high values, comparable with existing sEMG interface systems. Moreover, the algorithm's sensitivity to the characteristics of different sEMG acquisition systems was estimated. The results showed nearly equal accuracy despite a significant difference in sampling rates. The proposed algorithm was successfully tested for mobile robot control.

  13. A neuron-astrocyte transistor-like model for neuromorphic dressed neurons.

    Science.gov (United States)

    Valenza, G; Pioggia, G; Armato, A; Ferro, M; Scilingo, E P; De Rossi, D

    2011-09-01

    Experimental evidence on the role of synaptic glia as an active partner, together with the synapse, in neuronal signaling and the dynamics of neural tissue strongly suggests investigating more realistic neuron-glia models for a better understanding of human brain processing. Among the glial cells, astrocytes play a crucial role in the tripartite synapse, i.e. the dressed neuron. A well-known two-way astrocyte-neuron interaction can be found in the literature, completely revising the purely supportive role attributed to glia. The aim of this study is to provide a computationally efficient model of neuron-glia interaction. The neuron-glia interactions were simulated by implementing the Li-Rinzel model for an astrocyte and the Izhikevich model for a neuron. Assuming that the dressed-neuron dynamics resemble the nonlinear input-output characteristics of a bipolar junction transistor, we derived our computationally efficient model. This model may represent the fundamental computational unit for the development of real-time artificial neuron-glia networks, opening new perspectives in pattern recognition systems and in brain neurophysiology. Copyright © 2011 Elsevier Ltd. All rights reserved.

  14. Spike detection from noisy neural data in linear-probe recordings.

    Science.gov (United States)

    Takekawa, Takashi; Ota, Keisuke; Murayama, Masanori; Fukai, Tomoki

    2014-06-01

    Simultaneous recordings of multiple neuron activities with multi-channel extracellular electrodes are widely used for studying information processing by the brain's neural circuits. In this method, the recorded signals containing the spike events of a number of adjacent or distant neurons must be correctly sorted into spike trains of individual neurons, and a variety of methods have been proposed for this spike sorting. However, spike sorting is computationally difficult because the recorded signals are often contaminated by biological noise. Here, we propose a novel method for spike detection, which is the first stage of spike sorting and hence crucially determines overall sorting performance. Our method utilizes a model of extracellular recording data that takes into account variations in spike waveforms, such as the widths and amplitudes of spikes, by detecting the peaks of band-pass-filtered data. We show that the new method significantly improves the cost-performance of multi-channel electrode recordings by increasing the number of cleanly sorted neurons. © 2014 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  15. Spike-threshold adaptation predicted by membrane potential dynamics in vivo.

    Directory of Open Access Journals (Sweden)

    Bertrand Fontaine

    2014-04-01

    Full Text Available Neurons encode information in sequences of spikes, which are triggered when their membrane potential crosses a threshold. In vivo, the spiking threshold displays large variability, suggesting that threshold dynamics have a profound influence on how the combined input of a neuron is encoded in its spiking. Threshold variability could be explained by adaptation to the membrane potential. However, it could also be the case that most threshold variability reflects noise and processes other than threshold adaptation. Here, we investigated threshold variation in auditory neuron responses recorded in vivo in barn owls. We found that the spike threshold is quantitatively predicted by a model in which the threshold adapts, tracking the membrane potential on a short timescale. As a result, in these neurons, slow voltage fluctuations do not contribute to spiking because they are filtered out by threshold adaptation. More importantly, these neurons can only respond to input spikes arriving together on a millisecond timescale. These results demonstrate that fast adaptation to the membrane potential captures spike-threshold variability in vivo.
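
    The adaptive-threshold mechanism described above can be illustrated with a toy detector whose threshold relaxes toward a function of the membrane potential on a short timescale. All parameters below are made up for demonstration (mV and ms); this is not the authors' fitted model. A slow depolarizing ramp is filtered out by the adapting threshold, while a fast transient still fires:

```python
import numpy as np

def adaptive_threshold_spikes(v, dt=0.1, theta0=-50.0,
                              alpha=0.5, tau_theta=5.0, v_rest=-70.0):
    """Toy spike detector: the threshold relaxes toward
    theta0 + alpha * (v - v_rest), tracking the membrane potential."""
    theta, spike_times = theta0, []
    for k, vm in enumerate(v):
        if vm >= theta:
            spike_times.append(k * dt)
        target = theta0 + alpha * max(vm - v_rest, 0.0)
        theta += dt * (target - theta) / tau_theta   # threshold adapts
    return spike_times

t = np.arange(0.0, 100.0, 0.1)                    # 100 ms of input
slow = -70.0 + 0.15 * t                           # slow ramp toward -55 mV
fast = slow + 30.0 * (np.abs(t - 80.0) < 0.5)     # brief 1 ms transient
n_slow = len(adaptive_threshold_spikes(slow))
n_fast = len(adaptive_threshold_spikes(fast))
print(n_slow, n_fast)   # the slow ramp alone never crosses threshold
```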

  16. Linked Gauss-Diffusion processes for modeling a finite-size neuronal network.

    Science.gov (United States)

    Carfora, M F; Pirozzi, E

    2017-11-01

    A Leaky Integrate-and-Fire (LIF) model with stochastic current-based linkages is considered to describe the firing activity of neurons interacting in a (2×2)-size feed-forward network. In the subthreshold regime and under the assumption that no more than one spike is exchanged between coupled neurons, the stochastic evolution of the neuronal membrane voltage is subject to random jumps due to interactions in the network. Linked Gauss-Diffusion processes are proposed to describe this dynamics and to provide estimates of the firing probability density of each neuron. To this end, an iterated integral equation-based approach is applied to evaluate numerically the first passage time density of such processes through the firing threshold. Asymptotic approximations of the firing densities of surrounding neurons are used to obtain closed-form expressions for the mean of the involved processes and to simplify the numerical procedure. An extension of the model to an (N×N)-size network is also given. Histograms of firing times obtained by simulations of the LIF dynamics and numerical firing estimates are compared. Copyright © 2017 Elsevier B.V. All rights reserved.

  17. A new model of artificial neuron: cyberneuron and its use

    OpenAIRE

    Polikarpov, S. V.; Dergachev, V. S.; Rumyantsev, K. E.; Golubchikov, D. M.

    2009-01-01

    This article describes a new type of artificial neuron, called by the authors a "cyberneuron". Unlike classical models of artificial neurons, this type of neuron uses table substitution instead of multiplying input values by weights. This significantly increases the information capacity of a single neuron and also greatly simplifies the learning process. An example of using the "cyberneuron" for the task of detecting computer viruses is considered.

  18. Mathematical modelling as a tool to assessment of loads in volleyball player's shoulder joint during spike.

    Science.gov (United States)

    Jurkojć, Jacek; Michnik, Robert; Czapla, Krzysztof

    2017-06-01

    This article deals with the kinematic and kinetic conditions of the volleyball attack and identifies loads in the shoulder joint. Joint angles and velocities of individual segments of the upper limb were measured with the XSENS motion capture system. Muscle forces and loads in the skeletal system were calculated by means of a mathematical model elaborated in the AnyBody system. Spikes performed by players in the best and worst ways were compared with each other. Relationships were found between reactions in the shoulder joint and the flexion/extension, abduction/adduction and rotation angles in the same joint, as well as flexion/extension in the elbow joint. Reactions in the shoulder joint varied from 591 N to 2001 N (83-328% of body weight [BW]). The analysis proved that hand velocity at the moment of ball contact (which varied between 6.8 and 13.3 m/s) influences the value of the reaction in the joints, but the positions of individual segments relative to each other are also crucial. It was also shown objectively that the position of the upper limb during the spike can be more or less harmful, assuming that a larger reaction increases the possibility of injury, which can serve as an indication for trainers and physiotherapists on how to improve injury prevention.

  19. Precise-Spike-Driven Synaptic Plasticity: Learning Hetero-Association of Spatiotemporal Spike Patterns

    Science.gov (United States)

    Yu, Qiang; Tang, Huajin; Tan, Kay Chen; Li, Haizhou

    2013-01-01

    A new learning rule (Precise-Spike-Driven (PSD) Synaptic Plasticity) is proposed for processing and memorizing spatiotemporal patterns. PSD is a supervised learning rule that is analytically derived from the traditional Widrow-Hoff rule and can be used to train neurons to associate an input spatiotemporal spike pattern with a desired spike train. Synaptic adaptation is driven by the error between the desired and the actual output spikes, with positive errors causing long-term potentiation and negative errors causing long-term depression. The amount of modification is proportional to an eligibility trace that is triggered by afferent spikes. The PSD rule is both computationally efficient and biologically plausible. The properties of this learning rule are investigated extensively through experimental simulations, including its learning performance, its generality to different neuron models, its robustness against noisy conditions, its memory capacity, and the effects of its learning parameters. Experimental results show that the PSD rule is capable of spatiotemporal pattern classification, and can even outperform a well studied benchmark algorithm with the proposed relative confidence criterion. The PSD rule is further validated on a practical example of an optical character recognition problem. The results again show that it can achieve a good recognition performance with a proper encoding. Finally, a detailed discussion is provided about the PSD rule and several related algorithms including tempotron, SPAN, Chronotron and ReSuMe. PMID:24223789

  20. Precise-spike-driven synaptic plasticity: learning hetero-association of spatiotemporal spike patterns.

    Directory of Open Access Journals (Sweden)

    Qiang Yu

    Full Text Available A new learning rule (Precise-Spike-Driven (PSD) Synaptic Plasticity) is proposed for processing and memorizing spatiotemporal patterns. PSD is a supervised learning rule that is analytically derived from the traditional Widrow-Hoff rule and can be used to train neurons to associate an input spatiotemporal spike pattern with a desired spike train. Synaptic adaptation is driven by the error between the desired and the actual output spikes, with positive errors causing long-term potentiation and negative errors causing long-term depression. The amount of modification is proportional to an eligibility trace that is triggered by afferent spikes. The PSD rule is both computationally efficient and biologically plausible. The properties of this learning rule are investigated extensively through experimental simulations, including its learning performance, its generality to different neuron models, its robustness against noisy conditions, its memory capacity, and the effects of its learning parameters. Experimental results show that the PSD rule is capable of spatiotemporal pattern classification, and can even outperform a well studied benchmark algorithm with the proposed relative confidence criterion. The PSD rule is further validated on a practical example of an optical character recognition problem. The results again show that it can achieve a good recognition performance with a proper encoding. Finally, a detailed discussion is provided about the PSD rule and several related algorithms including tempotron, SPAN, Chronotron and ReSuMe.

  1. Precise-spike-driven synaptic plasticity: learning hetero-association of spatiotemporal spike patterns.

    Science.gov (United States)

    Yu, Qiang; Tang, Huajin; Tan, Kay Chen; Li, Haizhou

    2013-01-01

    A new learning rule (Precise-Spike-Driven (PSD) Synaptic Plasticity) is proposed for processing and memorizing spatiotemporal patterns. PSD is a supervised learning rule that is analytically derived from the traditional Widrow-Hoff rule and can be used to train neurons to associate an input spatiotemporal spike pattern with a desired spike train. Synaptic adaptation is driven by the error between the desired and the actual output spikes, with positive errors causing long-term potentiation and negative errors causing long-term depression. The amount of modification is proportional to an eligibility trace that is triggered by afferent spikes. The PSD rule is both computationally efficient and biologically plausible. The properties of this learning rule are investigated extensively through experimental simulations, including its learning performance, its generality to different neuron models, its robustness against noisy conditions, its memory capacity, and the effects of its learning parameters. Experimental results show that the PSD rule is capable of spatiotemporal pattern classification, and can even outperform a well studied benchmark algorithm with the proposed relative confidence criterion. The PSD rule is further validated on a practical example of an optical character recognition problem. The results again show that it can achieve a good recognition performance with a proper encoding. Finally, a detailed discussion is provided about the PSD rule and several related algorithms including tempotron, SPAN, Chronotron and ReSuMe.
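
    The error-gated, eligibility-trace-driven update described in the abstract can be sketched for a single synapse as follows. This is a simplified discrete-time reading of the rule; the time constants, learning rate, and trace kernel are illustrative choices, not the values used in the paper:

```python
import numpy as np

def psd_update(pre_spikes, desired, actual, dt=1.0,
               tau_e=10.0, eta=0.01, T=200.0):
    """One-synapse sketch of a PSD-style rule: the output error
    (desired minus actual spikes) gates an exponentially decaying
    eligibility trace driven by afferent spikes (times in ms)."""
    pre = set(np.round(np.asarray(pre_spikes) / dt).astype(int))
    des = set(np.round(np.asarray(desired) / dt).astype(int))
    act = set(np.round(np.asarray(actual) / dt).astype(int))
    trace, dw = 0.0, 0.0
    for k in range(int(T / dt)):
        trace *= np.exp(-dt / tau_e)       # eligibility trace decays
        if k in pre:
            trace += 1.0                   # afferent spike boosts it
        err = (k in des) - (k in act)      # +1 -> LTP, -1 -> LTD
        dw += eta * err * trace
    return dw

# a desired-but-missing spike 10 ms after afferent activity potentiates;
# an undesired actual spike at the same lag depresses
dw_ltp = psd_update(pre_spikes=[50.0], desired=[60.0], actual=[])
dw_ltd = psd_update(pre_spikes=[50.0], desired=[], actual=[60.0])
print(dw_ltp > 0.0, dw_ltd < 0.0)
```

    Because the update is proportional to the trace at the error time, weight changes vanish when desired and actual spikes coincide, which is what lets the rule converge on precise output spike times.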

  2. A Model of Electrically Stimulated Auditory Nerve Fiber Responses with Peripheral and Central Sites of Spike Generation

    DEFF Research Database (Denmark)

    Joshi, Suyash Narendra; Dau, Torsten; Epp, Bastian

    2017-01-01

    A computational model of cat auditory nerve fiber (ANF) responses to electrical stimulation is presented. The model assumes that (1) there exist at least two sites of spike generation along the ANF and (2) both an anodic (positive) and a cathodic (negative) charge in isolation can evoke a spike...... of facilitation, accommodation, refractoriness, and spike-rate adaptation in ANF. Although the model is parameterized using data for either single or paired pulse stimulation with monophasic rectangular pulses, it correctly predicts effects of various stimulus pulse shapes, stimulation pulse rates, and level...... on the neural response statistics. The model may serve as a framework to explore the effects of different stimulus parameters on psychophysical performance measured in cochlear implant listeners....

  3. Cerebellar Nuclear Neurons Use Time and Rate Coding to Transmit Purkinje Neuron Pauses.

    Science.gov (United States)

    Sudhakar, Shyam Kumar; Torben-Nielsen, Benjamin; De Schutter, Erik

    2015-12-01

    Neurons of the cerebellar nuclei convey the final output of the cerebellum to their targets in various parts of the brain. Within the cerebellum their direct upstream connections originate from inhibitory Purkinje neurons. Purkinje neurons have a complex firing pattern of regular spikes interrupted by intermittent pauses of variable length. How can the cerebellar nucleus process this complex input pattern? In this modeling study, we investigate different forms of Purkinje neuron simple spike pause synchrony and its influence on candidate coding strategies in the cerebellar nuclei. That is, we investigate how different alignments of synchronous pauses in synthetic Purkinje neuron spike trains affect either time-locking or rate-changes in the downstream nuclei. We find that Purkinje neuron synchrony is mainly represented by changes in the firing rate of cerebellar nuclei neurons. Pause beginning synchronization produced a unique effect on nuclei neuron firing, while the effect of pause ending and pause overlapping synchronization could not be distinguished from each other. Pause beginning synchronization produced better time-locking of nuclear neurons for short length pauses. We also characterize the effect of pause length and spike jitter on the nuclear neuron firing. Additionally, we find that the rate of rebound responses in nuclear neurons after a synchronous pause is controlled by the firing rate of Purkinje neurons preceding it.

  4. Cerebellar Nuclear Neurons Use Time and Rate Coding to Transmit Purkinje Neuron Pauses

    Science.gov (United States)

    Sudhakar, Shyam Kumar; Torben-Nielsen, Benjamin; De Schutter, Erik

    2015-01-01

    Neurons of the cerebellar nuclei convey the final output of the cerebellum to their targets in various parts of the brain. Within the cerebellum their direct upstream connections originate from inhibitory Purkinje neurons. Purkinje neurons have a complex firing pattern of regular spikes interrupted by intermittent pauses of variable length. How can the cerebellar nucleus process this complex input pattern? In this modeling study, we investigate different forms of Purkinje neuron simple spike pause synchrony and its influence on candidate coding strategies in the cerebellar nuclei. That is, we investigate how different alignments of synchronous pauses in synthetic Purkinje neuron spike trains affect either time-locking or rate-changes in the downstream nuclei. We find that Purkinje neuron synchrony is mainly represented by changes in the firing rate of cerebellar nuclei neurons. Pause beginning synchronization produced a unique effect on nuclei neuron firing, while the effect of pause ending and pause overlapping synchronization could not be distinguished from each other. Pause beginning synchronization produced better time-locking of nuclear neurons for short length pauses. We also characterize the effect of pause length and spike jitter on the nuclear neuron firing. Additionally, we find that the rate of rebound responses in nuclear neurons after a synchronous pause is controlled by the firing rate of Purkinje neurons preceding it. PMID:26630202

  5. Spike-timing-based computation in sound localization.

    Directory of Open Access Journals (Sweden)

    Dan F M Goodman

    2010-11-01

    Full Text Available Spike timing is precise in the auditory system and it has been argued that it conveys information about auditory stimuli, in particular about the location of a sound source. However, beyond simple time differences, the way in which neurons might extract this information is unclear and the potential computational advantages are unknown. The computational difficulty of this task for an animal is to locate the source of an unexpected sound from two monaural signals that are highly dependent on the unknown source signal. In neuron models consisting of spectro-temporal filtering and spiking nonlinearity, we found that the binaural structure induced by spatialized sounds is mapped to synchrony patterns that depend on source location rather than on source signal. Location-specific synchrony patterns would then result in the activation of location-specific assemblies of postsynaptic neurons. We designed a spiking neuron model which exploited this principle to locate a variety of sound sources in a virtual acoustic environment using measured human head-related transfer functions. The model was able to accurately estimate the location of previously unknown sounds in both azimuth and elevation (including front/back discrimination) in a known acoustic environment. We found that multiple representations of different acoustic environments could coexist as sets of overlapping neural assemblies which could be associated with spatial locations by Hebbian learning. The model demonstrates the computational relevance of relative spike timing to extract spatial information about sources independently of the source signal.

  6. Neural Spike-Train Analyses of the Speech-Based Envelope Power Spectrum Model

    Directory of Open Access Journals (Sweden)

    Varsha H. Rallapalli

    2016-10-01

    Full Text Available Diagnosing and treating hearing impairment is challenging because people with similar degrees of sensorineural hearing loss (SNHL) often have different speech-recognition abilities. The speech-based envelope power spectrum model (sEPSM) has demonstrated that the signal-to-noise ratio (SNRENV) from a modulation filter bank provides a robust speech-intelligibility measure across a wider range of degraded conditions than many long-standing models. In the sEPSM, noise (N) is assumed to: (a) reduce S + N envelope power by filling in dips within clean speech (S) and (b) introduce an envelope noise floor from intrinsic fluctuations in the noise itself. While the promise of SNRENV has been demonstrated for normal-hearing listeners, it has not been thoroughly extended to hearing-impaired listeners because of limited physiological knowledge of how SNHL affects speech-in-noise envelope coding relative to noise alone. Here, envelope coding to speech-in-noise stimuli was quantified from auditory-nerve model spike trains using shuffled correlograms, which were analyzed in the modulation-frequency domain to compute modulation-band estimates of neural SNRENV. Preliminary spike-train analyses show strong similarities to the sEPSM, demonstrating feasibility of neural SNRENV computations. Results suggest that individual differences can occur based on differential degrees of outer- and inner-hair-cell dysfunction in listeners currently diagnosed into the single audiological SNHL category. The predicted acoustic-SNR dependence in individual differences suggests that the SNR-dependent rate of susceptibility could be an important metric in diagnosing individual differences. Future measurements of the neural SNRENV in animal studies with various forms of SNHL will provide valuable insight for understanding individual differences in speech-in-noise intelligibility.

  7. Models of the stochastic activity of neurones

    CERN Document Server

    Holden, Arun Vivian

    1976-01-01

    These notes have grown from a series of seminars given at Leeds between 1972 and 1975. They represent an attempt to gather together the different kinds of model which have been proposed to account for the stochastic activity of neurones, and to provide an introduction to this area of mathematical biology. A striking feature of the electrical activity of the nervous system is that it appears stochastic: this is apparent at all levels of recording, ranging from intracellular recordings to the electroencephalogram. The chapters start with fluctuations in membrane potential, proceed through single unit and synaptic activity and end with the behaviour of large aggregates of neurones: I have chosen this sequence to suggest that the interesting behaviour of the nervous system - its individuality, variability and dynamic forms - may in part result from the stochastic behaviour of its components. I would like to thank Dr. Julio Rubio for reading and commenting on the drafts, Mrs. Doris Beighton for producing the fin...

  8. Spiking cortical model-based nonlocal means method for speckle reduction in optical coherence tomography images

    Science.gov (United States)

    Zhang, Xuming; Li, Liu; Zhu, Fei; Hou, Wenguang; Chen, Xinjian

    2014-06-01

    Optical coherence tomography (OCT) images are usually degraded by significant speckle noise, which will strongly hamper their quantitative analysis. However, speckle noise reduction in OCT images is particularly challenging because of the difficulty in differentiating between noise and the information components of the speckle pattern. To address this problem, the spiking cortical model (SCM)-based nonlocal means method is presented. The proposed method explores self-similarities of OCT images based on rotation-invariant features of image patches extracted by SCM and then restores the speckled images by averaging the similar patches. This method can provide sufficient speckle reduction while preserving image details very well due to its effectiveness in finding reliable similar patches under high speckle noise contamination. When applied to the retinal OCT image, this method provides signal-to-noise ratio improvements of >16 dB with a small 5.4% loss of similarity.
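
    For orientation, plain nonlocal means (without the SCM-based rotation-invariant features the paper adds for patch matching) can be sketched as below. Patch size, search window, and the smoothing parameter h are illustrative:

```python
import numpy as np

def nlm_denoise(img, patch=3, search=7, h=0.3):
    """Plain nonlocal means: each pixel becomes a weighted average of
    search-window pixels whose surrounding patches look similar."""
    pad = patch // 2
    padded = np.pad(img, pad, mode="reflect")
    rows, cols = img.shape
    out = np.zeros_like(img)
    half = search // 2
    for i in range(rows):
        for j in range(cols):
            p0 = padded[i:i + patch, j:j + patch]   # reference patch
            wsum = acc = 0.0
            for a in range(max(0, i - half), min(rows, i + half + 1)):
                for b in range(max(0, j - half), min(cols, j + half + 1)):
                    q = padded[a:a + patch, b:b + patch]
                    # weight falls off with patch dissimilarity
                    w = np.exp(-np.mean((p0 - q) ** 2) / h ** 2)
                    wsum += w
                    acc += w * img[a, b]
            out[i, j] = acc / wsum
    return out

rng = np.random.default_rng(1)
clean = np.zeros((16, 16))
clean[:, 8:] = 1.0                               # vertical step edge
noisy = clean + 0.2 * rng.standard_normal(clean.shape)
mse_noisy = np.mean((noisy - clean) ** 2)
mse_den = np.mean((nlm_denoise(noisy) - clean) ** 2)
print(mse_den < mse_noisy)                       # denoising reduces MSE
```

    The SCM feature extraction in the paper replaces the raw intensity patches in the similarity weight, which is what makes the patch matching robust under heavy speckle.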

  9. Functionalized anatomical models for EM-neuron Interaction modeling

    Science.gov (United States)

    Neufeld, Esra; Cassará, Antonino Mario; Montanaro, Hazael; Kuster, Niels; Kainz, Wolfgang

    2016-06-01

    Understanding the interactions between electromagnetic (EM) fields and nerves is crucial in contexts ranging from therapeutic neurostimulation to low-frequency EM exposure safety. To properly consider the impact of in vivo induced field inhomogeneity on non-linear neuronal dynamics, coupled EM-neuronal dynamics modeling is required. For that purpose, novel functionalized computable human phantoms have been developed. Their implementation and the systematic verification of the integrated anisotropic quasi-static EM solver and neuronal dynamics modeling functionality, based on the method of manufactured solutions and numerical reference data, are described. Electric and magnetic stimulation of the ulnar and sciatic nerve were modeled to help in understanding a range of controversial issues related to the magnitude and optimal determination of strength-duration (SD) time constants. The results indicate the importance of considering the stimulation-specific inhomogeneous field distributions (especially at tissue interfaces), realistic models of non-linear neuronal dynamics, very short pulses, and suitable SD extrapolation models. These results and the functionalized computable phantom will influence and support the development of safe and effective neuroprosthetic devices and novel electroceuticals. Furthermore, they will assist the evaluation of existing low-frequency exposure standards for the entire population under all exposure conditions.

  10. Impact of spike train autostructure on probability distribution of joint spike events.

    Science.gov (United States)

    Pipa, Gordon; Grün, Sonja; van Vreeswijk, Carl

    2013-05-01

    The discussion of whether temporally coordinated spiking activity really exists and whether it is relevant has been heated over the past few years. To investigate this issue, several approaches have been taken to determine whether synchronized events occur significantly above chance, that is, whether they occur more often than expected if the neurons fire independently. Most investigations ignore or destroy the autostructure of the spiking activity of individual cells or assume Poissonian spiking as a model. Such methods that ignore the autostructure can significantly bias the coincidence statistics. Here, we study the influence of the autostructure on the probability distribution of coincident spiking events between tuples of mutually independent non-Poisson renewal processes. In particular, we consider two types of renewal processes that were suggested as appropriate models of experimental spike trains: a gamma and a log-normal process. For a gamma process, we characterize the shape of the distribution analytically with the Fano factor (FFc). In addition, we perform Monte Carlo estimations to derive the full shape of the distribution and the probability for false positives if a different process type is assumed than was actually present. We also determine how manipulations of such spike trains, here dithering, used for the generation of surrogate data, change the distribution of coincident events and influence the significance estimation. We find, first, that the width of the coincidence count distribution and its FFc depend critically and in a nontrivial way on the detailed properties of the structure of the spike trains as characterized by the coefficient of variation CV. Second, the dependence of the FFc on the CV is complex and mostly nonmonotonic. Third, spike dithering, even if as small as a fraction of the interspike interval, can falsify the inference on coordinated firing.
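
    A Monte Carlo estimate of the coincidence-count distribution between independent gamma renewal processes, in the spirit of the study above, can be sketched as follows. The rate, CV, coincidence window, and number of repetitions are arbitrary illustration values:

```python
import numpy as np

rng = np.random.default_rng(0)

def gamma_renewal_train(rate, cv, T, rng):
    """Spike times of a gamma renewal process: shape k = 1/CV^2,
    scale chosen so that the mean inter-spike interval is 1/rate."""
    k = 1.0 / cv ** 2
    n_isi = int(2 * rate * T) + 100          # generously many ISIs
    isis = rng.gamma(k, 1.0 / (rate * k), size=n_isi)
    times = np.cumsum(isis)
    return times[times < T]

def coincidence_count(t1, t2, width=0.005):
    """Spikes in t1 that have a partner in t2 within +/- width seconds."""
    idx = np.searchsorted(t2, t1)
    left = np.abs(t1 - t2[np.clip(idx - 1, 0, len(t2) - 1)]) <= width
    right = np.abs(t1 - t2[np.clip(idx, 0, len(t2) - 1)]) <= width
    return int(np.count_nonzero(left | right))

# distribution of coincidence counts between two *independent* gamma
# trains (20 Hz, CV = 0.5, 10 s, 200 repetitions)
counts = np.array([
    coincidence_count(gamma_renewal_train(20.0, 0.5, 10.0, rng),
                      gamma_renewal_train(20.0, 0.5, 10.0, rng))
    for _ in range(200)])
fano = counts.var() / counts.mean()
print(counts.mean(), fano)
```

    Repeating the estimate for different CV values shows how the width (and Fano factor) of the chance-coincidence distribution depends on the spike-train autostructure, which is exactly why Poisson assumptions can bias significance tests.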

  11. Neuronal model with distributed delay: analysis and simulation study for gamma distribution memory kernel.

    Science.gov (United States)

    Karmeshu; Gupta, Varun; Kadambari, K V

    2011-06-01

    A single neuronal model incorporating distributed delay (memory) is proposed. The stochastic model has been formulated as a Stochastic Integro-Differential Equation (SIDE), which results in the underlying process being non-Markovian. A detailed analysis of the model when the distributed delay kernel has exponential form (weak delay) has been carried out. The selection of the exponential kernel has enabled the transformation of the non-Markovian model into a Markovian model in an extended state space. For the study of the First Passage Time (FPT) with exponential delay kernel, the model has been transformed into a system of coupled Stochastic Differential Equations (SDEs) in a two-dimensional state space. Simulation studies of the SDEs provide insight into the effect of the weak delay kernel on the Inter-Spike Interval (ISI) distribution. A measure based on Jensen-Shannon divergence is proposed which can be used to choose between two competing models, viz. the distributed delay model vis-à-vis the LIF model. An interesting feature of the model is that the behavior of the Coefficient of Variation CV(t) of the ISI distribution with respect to the memory kernel time constant parameter η reveals that the neuron can switch from a bursting state to a non-bursting state as the noise intensity parameter changes. The membrane potential exhibits a decaying auto-correlation structure with or without damped oscillatory behavior depending on the choice of parameters. This behavior is in agreement with the empirically observed pattern of spike counts in a fixed time window. The power spectral density derived from the auto-correlation function is found to exhibit single and double peaks. The model is also examined for the case of strong delay, with the memory kernel having the form of a Gamma distribution. In contrast to the fast decay of damped oscillations of the ISI distribution for the model with the weak delay kernel, the decay of damped oscillations is found to be slower for the model with the strong delay kernel.

  12. Spike Pattern Structure Influences Synaptic Efficacy Variability under STDP and Synaptic Homeostasis. II: Spike Shuffling Methods on LIF Networks

    Science.gov (United States)

    Bi, Zedong; Zhou, Changsong

    2016-01-01

    Synapses may undergo variable changes during plasticity because of the variability of spike patterns such as temporal stochasticity and spatial randomness. Here, we call the variability of synaptic weight changes during plasticity to be efficacy variability. In this paper, we investigate how four aspects of spike pattern statistics (i.e., synchronous firing, burstiness/regularity, heterogeneity of rates and heterogeneity of cross-correlations) influence the efficacy variability under pair-wise additive spike-timing dependent plasticity (STDP) and synaptic homeostasis (the mean strength of plastic synapses into a neuron is bounded), by implementing spike shuffling methods onto spike patterns self-organized by a network of excitatory and inhibitory leaky integrate-and-fire (LIF) neurons. With the increase of the decay time scale of the inhibitory synaptic currents, the LIF network undergoes a transition from asynchronous state to weak synchronous state and then to synchronous bursting state. We first shuffle these spike patterns using a variety of methods, each designed to evidently change a specific pattern statistics; and then investigate the change of efficacy variability of the synapses under STDP and synaptic homeostasis, when the neurons in the network fire according to the spike patterns before and after being treated by a shuffling method. In this way, we can understand how the change of pattern statistics may cause the change of efficacy variability. Our results are consistent with those of our previous study which implements spike-generating models on converging motifs. We also find that burstiness/regularity is important to determine the efficacy variability under asynchronous states, while heterogeneity of cross-correlations is the main factor to cause efficacy variability when the network moves into synchronous bursting states (the states observed in epilepsy). PMID:27555816

  13. Spike Pattern Structure Influences Synaptic Efficacy Variability Under STDP and Synaptic Homeostasis. II: Spike Shuffling Methods on LIF Networks

    Directory of Open Access Journals (Sweden)

    Zedong Bi

    2016-08-01

Full Text Available Synapses may undergo variable changes during plasticity because of the variability of spike patterns, such as temporal stochasticity and spatial randomness. Here, we refer to the variability of synaptic weight changes during plasticity as efficacy variability. In this paper, we investigate how four aspects of spike pattern statistics (i.e., synchronous firing, burstiness/regularity, heterogeneity of rates and heterogeneity of cross-correlations) influence the efficacy variability under pair-wise additive spike-timing dependent plasticity (STDP) and synaptic homeostasis (the mean strength of the plastic synapses into a neuron is bounded), by applying spike shuffling methods to spike patterns self-organized by a network of excitatory and inhibitory leaky integrate-and-fire (LIF) neurons. As the decay time scale of the inhibitory synaptic currents increases, the LIF network undergoes a transition from an asynchronous state to a weakly synchronous state and then to a synchronous bursting state. We first shuffle these spike patterns using a variety of methods, each designed to markedly change a specific pattern statistic, and then investigate the change in efficacy variability of the synapses under STDP and synaptic homeostasis when the neurons in the network fire according to the spike patterns before and after being treated by a shuffling method. In this way, we can understand how a change in pattern statistics may cause a change in efficacy variability. Our results are consistent with those of our previous study, which implements spike-generating models on converging motifs. We also find that burstiness/regularity is important in determining the efficacy variability under asynchronous states, whereas heterogeneity of cross-correlations is the main factor causing efficacy variability when the network moves into synchronous bursting states (the states observed in epilepsy).
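The plasticity rule described above can be sketched in a few lines: a minimal pure-Python illustration of pair-wise additive STDP combined with a simple homeostatic rescaling that keeps the mean incoming weight fixed. The parameter values, the all-to-all spike-pairing scheme, and the Poisson-like spike trains are illustrative assumptions, not the authors' full LIF-network simulation.

```python
import math
import random

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau=20.0):
    # Pair-wise additive STDP window: pre-before-post (dt > 0) potentiates,
    # post-before-pre depresses; both branches decay exponentially with |dt| (ms).
    return a_plus * math.exp(-dt / tau) if dt > 0 else -a_minus * math.exp(dt / tau)

def evolve_weights(pre_trains, post_train, w0=0.5, w_mean=0.5):
    # Accumulate pair-wise STDP updates for each synapse, then rescale so the
    # mean incoming weight stays at w_mean (a simple form of synaptic homeostasis).
    ws = []
    for pre in pre_trains:
        w = w0 + sum(stdp_dw(t_post - t_pre) for t_pre in pre for t_post in post_train)
        ws.append(max(w, 0.0))
    mean_w = sum(ws) / len(ws)
    return [w * w_mean / mean_w for w in ws] if mean_w > 0 else ws

random.seed(1)
# ten Poisson-like presynaptic trains and one postsynaptic train over 1 s (times in ms)
pre_trains = [sorted(random.uniform(0, 1000) for _ in range(20)) for _ in range(10)]
post_train = sorted(random.uniform(0, 1000) for _ in range(20))
ws = evolve_weights(pre_trains, post_train)
mean_w = sum(ws) / len(ws)
# the spread of final weights across synapses is the "efficacy variability"
var_w = sum((w - mean_w) ** 2 for w in ws) / len(ws)
```

Shuffling `pre_trains` (e.g. jittering spike times to alter correlations) and re-running `evolve_weights` shows how a change in pattern statistics changes `var_w` while homeostasis pins the mean.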

  14. Structural and functional properties of a probabilistic model of neuronal connectivity in a simple locomotor network

    Science.gov (United States)

    Merrison-Hort, Robert; Soffe, Stephen R; Borisyuk, Roman

    2018-01-01

Although brain connectivity varies between individuals in most animals, behaviour is often similar across a species. What fundamental structural properties are shared across individual networks that define this behaviour? We describe a probabilistic model of connectivity in the hatchling Xenopus tadpole spinal cord which, when combined with a spiking model, reliably produces rhythmic activity corresponding to swimming. The probabilistic model allows calculation of structural characteristics that reflect common network properties, independent of individual network realisations. We use these structural characteristics to study examples of neuronal dynamics in the complete network and in various sub-networks, which allows us to explain the basis for key experimental findings and to make predictions for experiments. We also study how structural and functional features differ between detailed anatomical connectomes and those generated by our new, simpler model (meta-model). PMID:29589828
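The idea of computing structural characteristics from the probabilistic model itself, independent of any single network realisation, can be shown with a toy sketch. The probability matrix below is invented for illustration; the actual tadpole model is far larger and its probabilities are derived from anatomy.

```python
import random

def sample_connectome(p, rng):
    # Draw one directed network realisation from a matrix of pairwise
    # connection probabilities p[i][j] = P(edge from neuron i to neuron j).
    n = len(p)
    return [[1 if rng.random() < p[i][j] else 0 for j in range(n)]
            for i in range(n)]

def expected_in_degrees(p):
    # A structural characteristic computed from the probabilistic model
    # itself, independent of any individual network realisation.
    n = len(p)
    return [sum(p[i][j] for i in range(n)) for j in range(n)]

rng = random.Random(42)
# toy 4-neuron probability matrix (no self-connections)
p = [[0.0, 0.8, 0.1, 0.1],
     [0.8, 0.0, 0.1, 0.1],
     [0.5, 0.5, 0.0, 0.9],
     [0.5, 0.5, 0.9, 0.0]]
exp_deg = expected_in_degrees(p)
# the average in-degree over many sampled networks approaches exp_deg
trials = 2000
avg = [0.0] * 4
for _ in range(trials):
    g = sample_connectome(p, rng)
    for j in range(4):
        avg[j] += sum(g[i][j] for i in range(4)) / trials
```

Any graph statistic computed this way (in-degree, motif counts, path lengths) characterises the whole family of networks rather than one realisation.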

  15. Stochastic differential equation models for ion channel noise in Hodgkin-Huxley neurons

    Science.gov (United States)

    Goldwyn, Joshua H.; Imennov, Nikita S.; Famulare, Michael; Shea-Brown, Eric

    2011-04-01

The random transitions of ion channels between conducting and nonconducting states generate a source of internal fluctuations in a neuron, known as channel noise. The standard method for modeling the states of ion channels nonlinearly couples continuous-time Markov chains to a differential equation for voltage. Beginning with the work of R. F. Fox and Y.-N. Lu [Phys. Rev. E 49, 3421 (1994)], there have been attempts to generate simpler models that use stochastic differential equations (SDEs) to approximate the stochastic spiking activity produced by Markov chain models. Recent numerical investigations, however, have raised doubts that SDE models can capture the stochastic dynamics of Markov chain models. We analyze three SDE models that have been proposed as approximations to the Markov chain model: one that describes the states of the ion channels and two that describe the states of the ion channel subunits. We show that the former channel-based approach can capture the distribution of channel noise and its effects on spiking in a Hodgkin-Huxley neuron model to a degree not previously demonstrated, but the latter two subunit-based approaches cannot. Our analysis provides intuitive and mathematical explanations for why this is the case. The temporal correlation in the channel noise is determined by the combinatorics of bundling subunits into channels, but the subunit-based approaches do not correctly account for this structure. Our study confirms and elucidates the findings of previous numerical investigations of subunit-based SDE models. Moreover, it presents evidence that Markov chain models of the nonlinear, stochastic dynamics of neural membranes can be accurately approximated by SDEs. This finding opens a door to future modeling work using SDE techniques to further illuminate the effects of ion channel fluctuations on electrically active cells.
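A channel-based SDE of the kind analyzed above can be sketched for a single population of two-state (open/closed) channels: the drift follows the mean kinetics while the diffusion term scales as 1/sqrt(N), so channel noise vanishes in the many-channel limit. The rate constants and channel count below are illustrative, not taken from the Hodgkin-Huxley model.

```python
import math
import random

def simulate_open_fraction(n0=0.3, alpha=0.5, beta=0.2, N=1000,
                           dt=0.01, steps=5000, seed=0):
    # Channel-based SDE for a population of N two-state channels, integrated
    # with Euler-Maruyama: drift is the deterministic opening/closing kinetics,
    # and the diffusion amplitude shrinks as 1/sqrt(N).
    rng = random.Random(seed)
    n = n0  # fraction of open channels
    for _ in range(steps):
        drift = alpha * (1.0 - n) - beta * n
        diffusion = math.sqrt(max(alpha * (1.0 - n) + beta * n, 0.0) / N)
        n += drift * dt + diffusion * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        n = min(max(n, 0.0), 1.0)  # keep the fraction in [0, 1]
    return n

n_final = simulate_open_fraction()
# the deterministic kinetics settle at alpha / (alpha + beta)
```

Re-running with larger `N` shows the fluctuations around the fixed point shrinking, which is the sense in which the SDE interpolates between the noisy Markov chain and the deterministic rate equation.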

  16. Dynamic Granger-Geweke causality modeling with application to interictal spike propagation.

    Science.gov (United States)

    Lin, Fa-Hsuan; Hara, Keiko; Solo, Victor; Vangel, Mark; Belliveau, John W; Stufflebeam, Steven M; Hämäläinen, Matti S

    2009-06-01

    A persistent problem in developing plausible neurophysiological models of perception, cognition, and action is the difficulty of characterizing the interactions between different neural systems. Previous studies have approached this problem by estimating causal influences across brain areas activated during cognitive processing using structural equation modeling (SEM) and, more recently, with Granger-Geweke causality. While SEM is complicated by the need for a priori directional connectivity information, the temporal resolution of dynamic Granger-Geweke estimates is limited because the underlying autoregressive (AR) models assume stationarity over the period of analysis. We have developed a novel optimal method for obtaining data-driven directional causality estimates with high temporal resolution in both time and frequency domains. This is achieved by simultaneously optimizing the length of the analysis window and the chosen AR model order using the SURE criterion. Dynamic Granger-Geweke causality in time and frequency domains is subsequently calculated within a moving analysis window. We tested our algorithm by calculating the Granger-Geweke causality of epileptic spike propagation from the right frontal lobe to the left frontal lobe. The results quantitatively suggested that the epileptic activity at the left frontal lobe was propagated from the right frontal lobe, in agreement with the clinical diagnosis. Our novel computational tool can be used to help elucidate complex directional interactions in the human brain. (c) 2009 Wiley-Liss, Inc.
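The core of time-domain Granger causality is the comparison of prediction-error variances between a restricted and a full autoregressive model. Below is a minimal order-1 sketch on a synthetic signal pair in which x drives y; the moving analysis window and the SURE-based selection of window length and model order described above are omitted for brevity.

```python
import math
import random

def granger_index(x, y):
    # Order-1 time-domain Granger causality x -> y: compare the residual
    # variance of y_t ~ y_{t-1} (restricted) with y_t ~ y_{t-1} + x_{t-1}
    # (full). A large positive log-ratio means x's past improves prediction.
    Y, Y1, X1 = y[1:], y[:-1], x[:-1]
    n = len(Y)
    a = sum(u * v for u, v in zip(Y, Y1)) / sum(v * v for v in Y1)
    var_r = sum((u - a * v) ** 2 for u, v in zip(Y, Y1)) / n
    # full model: solve the 2x2 normal equations for (af, bf)
    s11 = sum(v * v for v in Y1)
    s12 = sum(v * w for v, w in zip(Y1, X1))
    s22 = sum(w * w for w in X1)
    r1 = sum(u * v for u, v in zip(Y, Y1))
    r2 = sum(u * w for u, w in zip(Y, X1))
    det = s11 * s22 - s12 * s12
    af = (r1 * s22 - r2 * s12) / det
    bf = (s11 * r2 - s12 * r1) / det
    var_f = sum((u - af * v - bf * w) ** 2 for u, v, w in zip(Y, Y1, X1)) / n
    return math.log(var_r / var_f)

random.seed(0)
x = [random.gauss(0.0, 1.0) for _ in range(2000)]
y = [0.0]
for t in range(1, 2000):
    # y is driven by the past of x, so causality should run x -> y
    y.append(0.5 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * random.gauss(0.0, 1.0))
gi_xy = granger_index(x, y)
gi_yx = granger_index(y, x)
```

The asymmetry `gi_xy >> gi_yx` is the directional signature the study exploits for spike propagation, computed there within a sliding window to obtain time resolution.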

  17. An approximation to the adaptive exponential integrate-and-fire neuron model allows fast and predictive fitting to physiological data

    Directory of Open Access Journals (Sweden)

Loreen Hertäg

    2012-09-01

Full Text Available For large-scale network simulations, it is often desirable to have computationally tractable, yet in a defined sense still physiologically valid neuron models. In particular, these models should be able to reproduce physiological measurements, ideally in a predictive sense, and under the different input regimes in which neurons may operate in vivo. Here we present an approach to parameter estimation for a simple spiking neuron model based mainly on standard f-I curves obtained from in vitro recordings. Such recordings are routinely obtained in standard protocols and assess a neuron's response under a wide range of mean input currents. Our fitting procedure makes use of closed-form expressions for the firing rate derived from an approximation to the adaptive exponential integrate-and-fire (AdEx) model. The resulting fitting process is simple and about two orders of magnitude faster than methods based on numerical integration of the differential equations. We probe this method on different cell types recorded from rodent prefrontal cortex. After fitting to the f-I current-clamp data, the model cells are tested on completely different sets of recordings obtained with fluctuating ('in-vivo-like') input currents. For a wide range of different input regimes, cell types, and cortical layers, the model could predict spike times on these test traces quite accurately within the bounds of physiological reliability, although no information from these distinct test sets was used for model fitting. Further analyses delineated some of the empirical factors constraining model fitting and the model's generalization performance. An even simpler adaptive LIF neuron was also examined in this context. Hence, we have developed a 'high-throughput' model fitting procedure which is simple and fast, with good prediction performance, and which relies only on firing rate information and standard physiological data that are widely and easily available.
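The speed advantage comes from evaluating a closed-form firing-rate expression inside the fitting loop instead of integrating the model equations. A sketch using the plain LIF rate formula in place of the paper's AdEx approximation; all parameter values and the grid-search fit are invented for illustration.

```python
import math

def lif_rate(I, tau=0.02, R=1e8, v_th=0.02, t_ref=0.002):
    # Closed-form f-I curve of a leaky integrate-and-fire neuron:
    # zero below rheobase, otherwise 1 / (refractory period + charging time).
    drive = R * I
    if drive <= v_th:
        return 0.0
    return 1.0 / (t_ref + tau * math.log(drive / (drive - v_th)))

# synthetic "recorded" f-I points, generated with tau = 20 ms
currents = [i * 1e-10 for i in range(1, 11)]
data = [(I, lif_rate(I, tau=0.02)) for I in currents]

# grid-search fit of tau: cheap because every evaluation is a closed-form
# expression rather than a numerical integration of the model equations
best_tau, best_err = None, float("inf")
for k in range(5, 61):
    tau = k * 1e-3
    err = sum((lif_rate(I, tau=tau) - f) ** 2 for I, f in data)
    if err < best_err:
        best_tau, best_err = tau, err
```

With real recordings the loss would be evaluated against measured firing rates, and the remaining parameters would be fitted jointly, but the structure of the loop is the same.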

  18. Direct Neuronal Reprogramming for Disease Modeling Studies Using Patient-Derived Neurons: What Have We Learned?

    Directory of Open Access Journals (Sweden)

    Janelle Drouin-Ouellet

    2017-09-01

Full Text Available Direct neuronal reprogramming, by which a neuron is formed via direct conversion from a somatic cell without going through a pluripotent intermediate stage, allows for the possibility of generating patient-derived neurons. A unique feature of these so-called induced neurons (iNs) is the potential to maintain the aging and epigenetic signatures of the donor, which is critical given that many diseases of the CNS are age related. Here, we review the published literature on the work that has been undertaken using iNs to model human brain disorders. Furthermore, as disease-modeling studies using this direct neuronal reprogramming approach are becoming more widely adopted, it is important to assess the criteria that are used to characterize the iNs, especially in relation to the extent to which they are mature adult neurons. In particular: (i) what constitutes an iN cell; (ii) which stages of conversion offer the earliest/optimal time to assess features that are specific to neurons and/or a disorder; and (iii) whether generating subtype-specific iNs is critical to the disease-related features that iNs express. Finally, we discuss the range of potential biomedical applications that can be explored using patient-specific models of neurological disorders with iNs, and the challenges that will need to be overcome in order to realize these applications.

  19. Mathematical modeling of the neuron morphology using two dimensional images.

    Science.gov (United States)

    Rajković, Katarina; Marić, Dušica L; Milošević, Nebojša T; Jeremic, Sanja; Arsenijević, Valentina Arsić; Rajković, Nemanja

    2016-02-07

In this study, mathematical analyses such as the analysis of area and length, fractal analysis and modified Sholl analysis were applied to two-dimensional (2D) images of neurons from the adult human dentate nucleus (DN). These analyses yielded the main morphological properties, including the size of the neuron and soma, the length of all dendrites, the density of the dendritic arborization, the position of the maximum density and the irregularity of the dendrites. Response surface methodology (RSM) was used to model the size of neurons and the length of all dendrites. However, the RSM model based on a second-order polynomial equation could only be applied to correlate changes in the size of the neuron with the other properties of its morphology. Modeling provided evidence that the size of DN neurons depends statistically on the size of the soma, the density of the dendritic arborization and the irregularity of the dendrites. The low value of the mean relative percent deviation (MRPD) between the experimental data and the neuron size predicted by the RSM model showed that the model was suitable for modeling the size of DN neurons. Therefore, RSM can be used generally for modeling neuron size from 2D images. Copyright © 2015 Elsevier Ltd. All rights reserved.
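RSM's workhorse is a least-squares fit of a second-order polynomial. A one-predictor sketch on synthetic morphometry data (the paper's model correlates neuron size with several morphological properties at once), including the MRPD goodness-of-fit measure mentioned above:

```python
def fit_quadratic(xs, ys):
    # Least-squares fit of y = b0 + b1*x + b2*x^2 via the normal equations,
    # the second-order polynomial model family used in RSM (one predictor
    # here for brevity).
    S = [sum(x ** k for x in xs) for k in range(5)]
    T = [sum(y * x ** k for x, y in zip(xs, ys)) for k in range(3)]
    A = [[S[i + j] for j in range(3)] + [T[i]] for i in range(3)]
    for col in range(3):                       # Gaussian elimination
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, 3):
            f = A[r][col] / A[col][col]
            for c in range(col, 4):
                A[r][c] -= f * A[col][c]
    b = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):                        # back substitution
        b[i] = (A[i][3] - sum(A[i][j] * b[j] for j in range(i + 1, 3))) / A[i][i]
    return b

# synthetic morphometry: "neuron size" depends quadratically on soma size
xs = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
ys = [2.0 + 0.5 * x + 0.25 * x * x for x in xs]
b0, b1, b2 = fit_quadratic(xs, ys)
pred = [b0 + b1 * x + b2 * x * x for x in xs]
# mean relative percent deviation (MRPD) between data and model
mrpd = 100.0 / len(xs) * sum(abs(p - y) / y for p, y in zip(pred, ys))
```

On noise-free synthetic data the fit recovers the generating coefficients and the MRPD is essentially zero; on real morphometry the MRPD quantifies how well the polynomial surface captures the data.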

  20. A decision-making model based on a spiking neural circuit and synaptic plasticity.

    Science.gov (United States)

    Wei, Hui; Bu, Yijie; Dai, Dawei

    2017-10-01

To adapt to the environment and survive, most animals can control their behaviors by making decisions. The process of making decisions and responding according to cues in the environment is stable, sustainable, and learnable. Understanding how behaviors are regulated by neural circuits, and the encoding and decoding mechanisms from stimuli to responses, are important goals in neuroscience. Based on results observed in Drosophila experiments, the underlying decision-making process is discussed, and a neural circuit that implements a two-choice decision-making model is proposed to explain and reproduce the observations. Compared with previous two-choice decision-making models, our model uses synaptic plasticity to explain changes in decision output given the same environment. Moreover, the biological meanings of the parameters of our decision-making model are discussed. In this paper, we explain at the micro-level (i.e., neurons and synapses) how observable decision-making behavior at the macro-level is acquired and achieved.

  1. Event-driven contrastive divergence for spiking neuromorphic systems.

    Science.gov (United States)

    Neftci, Emre; Das, Srinjoy; Pedroni, Bruno; Kreutz-Delgado, Kenneth; Cauwenberghs, Gert

    2013-01-01

Restricted Boltzmann Machines (RBMs) and Deep Belief Networks have been demonstrated to perform efficiently in a variety of applications, such as dimensionality reduction, feature learning, and classification. Their implementation on neuromorphic hardware platforms emulating large-scale networks of spiking neurons can have significant advantages from the perspectives of scalability, power dissipation and real-time interfacing with the environment. However, the traditional RBM architecture and the commonly used training algorithm known as Contrastive Divergence (CD) are based on discrete updates and exact arithmetic, which do not directly map onto a dynamical neural substrate. Here, we present an event-driven variation of CD to train an RBM constructed with Integrate & Fire (I&F) neurons that is constrained by the limitations of existing and near future neuromorphic hardware platforms. Our strategy is based on neural sampling, which allows us to synthesize a spiking neural network that samples from a target Boltzmann distribution. The recurrent activity of the network replaces the discrete steps of the CD algorithm, while Spike Time Dependent Plasticity (STDP) carries out the weight updates in an online, asynchronous fashion. We demonstrate our approach by training an RBM composed of leaky I&F neurons with STDP synapses to learn a generative model of the MNIST hand-written digit dataset, and by testing it in recognition, generation and cue integration tasks. Our results contribute to a machine learning-driven approach for synthesizing networks of spiking neurons capable of carrying out practical, high-level functionality.

  2. Fast computation with spikes in a recurrent neural network

    International Nuclear Information System (INIS)

    Jin, Dezhe Z.; Seung, H. Sebastian

    2002-01-01

Neural networks with recurrent connections are sometimes regarded as too slow at computation to serve as models of the brain. Here we analytically study a counterexample, a network consisting of N integrate-and-fire neurons with self-excitation, all-to-all inhibition, instantaneous synaptic coupling, and constant external driving inputs. When the inhibition and/or excitation are large enough, the network performs a winner-take-all computation for all possible external inputs and initial states of the network. The computation is done very quickly: As soon as the winner spikes once, the computation is completed since no other neurons will spike. For some initial states, the winner is the first neuron to spike, and the computation is done at the first spike of the network. In general, there are M potential winners, corresponding to the top M external inputs. When the external inputs are close in magnitude, M tends to be larger. If M>1, the selection of the actual winner is strongly influenced by the initial states. If a special relation between the excitation and inhibition is satisfied, the network always selects the neuron with the maximum external input as the winner.
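The winner-take-all computation described above can be sketched with a few integrate-and-fire neurons under Euler integration; the inhibition strength, threshold, and time constant are illustrative. As in the analysis, the run ends at the first spike, since sufficiently strong instantaneous inhibition prevents any further spiking.

```python
def winner_take_all(inputs, v_th=1.0, tau=1.0, w_inh=10.0, dt=0.001, t_max=20.0):
    # Integrate-and-fire neurons with constant drive; instantaneous
    # all-to-all inhibition strong enough that the first spike silences
    # the rest, so the winner is the neuron with the largest input.
    n = len(inputs)
    v = [0.0] * n
    t = 0.0
    while t < t_max:
        spiked = []
        for i in range(n):
            v[i] += dt * (-v[i] / tau + inputs[i])
            if v[i] >= v_th:
                spiked.append(i)
        if spiked:
            winner = spiked[0]
            for j in range(n):
                if j != winner:
                    v[j] -= w_inh  # instantaneous inhibition ends the race
            return winner
        t += dt
    return None  # no neuron reached threshold (all inputs subthreshold)
```

With tau = 1 a neuron's voltage approaches `inputs[i]`, so any neuron with input above threshold eventually fires, and the one with the largest input charges fastest and wins at the network's first spike.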

  3. Event-Driven Contrastive Divergence for Spiking Neuromorphic Systems

    Directory of Open Access Journals (Sweden)

Emre Neftci

    2014-01-01

Full Text Available Restricted Boltzmann Machines (RBMs) and Deep Belief Networks have been demonstrated to perform efficiently in a variety of applications, such as dimensionality reduction, feature learning, and classification. Their implementation on neuromorphic hardware platforms emulating large-scale networks of spiking neurons can have significant advantages from the perspectives of scalability, power dissipation and real-time interfacing with the environment. However, the traditional RBM architecture and the commonly used training algorithm known as Contrastive Divergence (CD) are based on discrete updates and exact arithmetic, which do not directly map onto a dynamical neural substrate. Here, we present an event-driven variation of CD to train an RBM constructed with Integrate & Fire (I&F) neurons that is constrained by the limitations of existing and near future neuromorphic hardware platforms. Our strategy is based on neural sampling, which allows us to synthesize a spiking neural network that samples from a target Boltzmann distribution. The reverberating activity of the network replaces the discrete steps of the CD algorithm, while Spike Time Dependent Plasticity (STDP) carries out the weight updates in an online, asynchronous fashion. We demonstrate our approach by training an RBM composed of leaky I&F neurons with STDP synapses to learn a generative model of the MNIST hand-written digit dataset, and by testing it in recognition, generation and cue integration tasks. Our results contribute to a machine learning-driven approach for synthesizing networks of spiking neurons capable of carrying out practical, high-level functionality.

  4. Modeling of inter-neuronal coupling medium and its impact on neuronal synchronization.

    Directory of Open Access Journals (Sweden)

    Muhammad Iqbal

Full Text Available In this paper, modeling of the coupling medium between two neurons, the effects of the model parameters on the synchronization of those neurons, and compensation of coupling strength deficiency in synchronization are studied. Our study exploits the inter-neuronal coupling medium and investigates its intrinsic properties in order to gain insight into neuronal-information transmittance and, therefrom, brain-information processing. A novel electrical model of the coupling medium is derived that represents a well-known RLC circuit attributable to the coupling medium's intrinsic resistive, inductive, and capacitive properties. Surprisingly, the integration of such properties reveals the existence of a natural three-term control strategy, referred to in the literature as the proportional-integral-derivative (PID) controller, which can be responsible for synchronization between two neurons. Consequently, brain-information processing can rely on a large number of PID controllers based on the coupling medium properties responsible for the coherent behavior of neurons in a neural network. Herein, the effects of the coupling model (or natural PID controller) parameters are studied and, further, a supervisory mechanism is proposed that follows a learning and adaptation policy based on the particle swarm optimization algorithm for compensation of the coupling strength deficiency.
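The three-term control structure identified above can be sketched as a discrete-time PID controller driving a first-order plant toward a setpoint. The gains and the plant below are invented for illustration and only loosely stand in for the resistive, capacitive, and inductive contributions of the coupling medium.

```python
class PID:
    # Discrete three-term controller: proportional, integral, derivative.
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, err):
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# drive a first-order "membrane" x' = -x + u toward a target value
pid = PID(kp=2.0, ki=1.0, kd=0.05, dt=0.01)
x, target = 0.0, 1.0
for _ in range(3000):
    u = pid.step(target - x)
    x += 0.01 * (-x + u)   # Euler step of the plant
```

The integral term removes the steady-state error that a purely proportional coupling would leave; a supervisory layer (as in the paper, via particle swarm optimization) would tune `kp`, `ki`, `kd` when the coupling strength is deficient.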

  5. Bursts generate a non-reducible spike-pattern code

    Directory of Open Access Journals (Sweden)

    Hugo G Eyherabide

    2009-05-01

    Full Text Available On the single-neuron level, precisely timed spikes can either constitute firing-rate codes or spike-pattern codes that utilize the relative timing between consecutive spikes. There has been little experimental support for the hypothesis that such temporal patterns contribute substantially to information transmission. Using grasshopper auditory receptors as a model system, we show that correlations between spikes can be used to represent behaviorally relevant stimuli. The correlations reflect the inner structure of the spike train: a succession of burst-like patterns. We demonstrate that bursts with different spike counts encode different stimulus features, such that about 20% of the transmitted information corresponds to discriminating between different features, and the remaining 80% is used to allocate these features in time. In this spike-pattern code, the "what" and the "when" of the stimuli are encoded in the duration of each burst and the time of burst onset, respectively. Given the ubiquity of burst firing, we expect similar findings also for other neural systems.
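The decomposition into a "what" (spike count per burst) and a "when" (burst onset time) can be sketched by segmenting a spike train on an inter-spike-interval criterion; the 6 ms threshold and the example train below are arbitrary choices for illustration.

```python
def parse_bursts(spike_times, max_isi=0.006):
    # Segment a spike train into burst-like patterns: consecutive spikes
    # closer than max_isi (s) belong to the same burst. Each burst is then
    # reduced to its onset time ("when") and its spike count ("what").
    bursts = []
    onset, count = spike_times[0], 1
    for prev, t in zip(spike_times, spike_times[1:]):
        if t - prev <= max_isi:
            count += 1
        else:
            bursts.append((onset, count))
            onset, count = t, 1
    bursts.append((onset, count))
    return bursts

train = [0.010, 0.012, 0.013, 0.050, 0.100, 0.103, 0.105, 0.108]
bursts = parse_bursts(train)
```

In the coding scheme described above, the count in each tuple would identify the stimulus feature and the onset would localize it in time.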

  6. Spiking Neural P Systems With Scheduled Synapses.

    Science.gov (United States)

    Cabarle, Francis George C; Adorna, Henry N; Jiang, Min; Zeng, Xiangxiang

    2017-12-01

Spiking neural P systems (SN P systems) are models of computation inspired by biological spiking neurons. SN P systems have neurons as spike processors, which are placed on the nodes of a directed and static graph (the edges in the graph are the synapses). In this paper, we introduce a variant called SN P systems with scheduled synapses (SSN P systems). SSN P systems are inspired and motivated by the structural dynamism of biological synapses, while incorporating ideas from nonstatic (i.e., dynamic) graphs and networks. In particular, synapses in SSN P systems are available only for specific durations, according to their schedules. The SSN P systems model is a response to the problem of introducing durations to the synapses of SN P systems. Since SN P systems are in essence static graphs, it is natural to extend them to dynamic graphs as well. We introduce local and global schedule types, also taking inspiration from the above-mentioned sources. We prove that SSN P systems are computationally universal as number generators and acceptors for both schedule types, under a normal form (i.e., a simplifying set of restrictions). The introduction of synapse schedules for either schedule type proves useful in programming the system, despite the restrictions of the normal form.
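A toy step function shows the core mechanics of scheduled synapses: a neuron holding at least one spike fires, consuming one spike and emitting one along every outgoing synapse whose schedule makes it available at the current step. Real SN P systems use regular-expression-guarded firing rules; this sketch hard-codes the simplest rule for brevity, and the schedules are explicit sets of step indices.

```python
def step_ssnp(spikes, synapses, t):
    # One step of a toy SN P system with scheduled synapses: a neuron
    # holding at least one spike fires, consuming one spike and emitting
    # one along every outgoing synapse whose schedule contains step t.
    sent = {}
    for (src, dst), schedule in synapses.items():
        if t in schedule and spikes[src] >= 1:
            sent[(src, dst)] = 1
    new = dict(spikes)
    for src in {s for (s, _) in sent}:
        new[src] -= 1            # one spike consumed per firing neuron
    for (_, dst), k in sent.items():
        new[dst] += k            # delivered only along available synapses
    return new

spikes = {"a": 2, "b": 0, "c": 0}
# synapse (a,b) is scheduled at steps 0 and 1; (a,c) only at step 1
synapses = {("a", "b"): {0, 1}, ("a", "c"): {1}}
for t in range(2):
    spikes = step_ssnp(spikes, synapses, t)
```

At step 0 only `b` receives a spike; at step 1 both synapses are open, so the same firing reaches `b` and `c`, which is how a schedule reshapes the effective graph over time.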

  7. Spiking cortical model based non-local means method for despeckling multiframe optical coherence tomography data

    Science.gov (United States)

    Gu, Yameng; Zhang, Xuming

    2017-05-01

    Optical coherence tomography (OCT) images are severely degraded by speckle noise. Existing methods for despeckling multiframe OCT data cannot deliver sufficient speckle suppression while preserving image details well. To address this problem, the spiking cortical model (SCM) based non-local means (NLM) method has been proposed in this letter. In the proposed method, the considered frame and two neighboring frames are input into three SCMs to generate the temporal series of pulse outputs. The normalized moment of inertia (NMI) of the considered patches in the pulse outputs is extracted to represent the rotational and scaling invariant features of the corresponding patches in each frame. The pixel similarity is computed based on the Euclidean distance between the NMI features and used as the weight. Each pixel in the considered frame is restored by the weighted averaging of all pixels in the pre-defined search window in the three frames. Experiments on the real multiframe OCT data of the pig eye demonstrate the advantage of the proposed method over the frame averaging method, the multiscale sparsity based tomographic denoising method, the wavelet-based method and the traditional NLM method in terms of visual inspection and objective metrics such as signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), equivalent number of looks (ENL) and cross-correlation (XCOR).
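The weighting scheme at the heart of NLM can be sketched in one dimension: each sample becomes a weighted average over a search window, with weights decaying in the Euclidean distance between the patches surrounding the two samples. The proposed method replaces raw patches with rotation- and scale-invariant NMI features from the spiking cortical model and averages across three frames; this single-frame, raw-patch version only illustrates the principle, and all parameters are arbitrary.

```python
import math
import random

def nlm_denoise(signal, patch=1, search=5, h=0.5):
    # 1-D non-local means: each sample becomes a weighted average over a
    # search window, with weights exp(-d^2 / h^2) where d is the Euclidean
    # distance between the patches surrounding the two samples.
    n = len(signal)
    out = []
    for i in range(n):
        num = den = 0.0
        for j in range(max(0, i - search), min(n, i + search + 1)):
            d2 = sum((signal[min(max(i + k, 0), n - 1)] -
                      signal[min(max(j + k, 0), n - 1)]) ** 2
                     for k in range(-patch, patch + 1))
            w = math.exp(-d2 / (h * h))
            num += w * signal[j]
            den += w
        out.append(num / den)
    return out

random.seed(0)
clean = [1.0] * 20 + [3.0] * 20   # a noise-free step, standing in for an edge
noisy = [v + random.gauss(0, 0.2) for v in clean]
denoised = nlm_denoise(noisy)
```

Because patches straddling the step differ strongly, their weights are near zero, so the edge is preserved while noise within each flat region is averaged out, which is the same edge-preserving behavior sought for OCT speckle.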

  8. How neurons migrate: a dynamic in-silico model of neuronal migration in the developing cortex

    LENUS (Irish Health Repository)

    Setty, Yaki

    2011-09-30

Abstract. Background: Neuronal migration, the process by which neurons migrate from their place of origin to their final position in the brain, is a central process for normal brain development and function. Advances in experimental techniques have revealed much about many of the molecular components involved in this process. Notwithstanding these advances, how the molecular machinery works together to govern the migration process has yet to be fully understood. Here we present a computational model of neuronal migration, in which four key molecular entities, Lis1, DCX, Reelin and GABA, form a molecular program that mediates the migration process. Results: The model simulated the dynamic migration process, consistent with in-vivo observations of morphological, cellular and population-level phenomena. Specifically, the model reproduced migration phases, cellular dynamics and population distributions that concur with experimental observations in normal neuronal development. We tested the model under reduced activity of Lis1 and DCX and found an aberrant development similar to observations in Lis1 and DCX silencing expression experiments. Analysis of the model gave rise to unforeseen insights that could guide future experimental study. Specifically: (1) the model revealed the possibility that under conditions of Lis1 reduced expression, neurons experience an oscillatory neuron-glial association prior to the multipolar stage; and (2) we hypothesized that observed morphology variations in rats and mice may be explained by a single difference in the way that Lis1 and DCX stimulate bipolar motility. From this we make the following predictions: (1) under reduced Lis1 and enhanced DCX expression, we predict a reduced bipolar migration in rats, and (2) under enhanced DCX expression in mice we predict a normal or a higher bipolar migration. Conclusions: We present here a system-wide computational model of neuronal migration that integrates theory and data within a precise

  9. Enhanced Burst-Suppression and Disruption of Local Field Potential Synchrony in a Mouse Model of Focal Cortical Dysplasia Exhibiting Spike-Wave Seizures

    Directory of Open Access Journals (Sweden)

    Anthony J. Williams

    2016-11-01

Full Text Available Focal cortical dysplasias (FCDs) are a common cause of brain seizures and are often associated with intractable epilepsy. Here we evaluated aberrant brain neurophysiology in an in vivo mouse model of FCD induced by neonatal freeze lesions (FLs) to the right cortical hemisphere (near S1). Linear multi-electrode arrays were used to record extracellular potentials from cortical and subcortical brain regions near the FL in anesthetized mice (5-13 months old), followed by 24 h cortical EEG recordings. Results indicated that FL animals exhibit a high prevalence of spontaneous spike-wave discharges (SWDs), predominantly during sleep (EEG), and an increase in the incidence of hyper-excitable burst/suppression activity under general anesthesia (extracellular recordings, 0.5-3.0% isoflurane). Brief periods of burst activity in the local field potential (LFP), typically presenting as an arrhythmic pattern of increased theta-alpha spectral peaks (4-12 Hz) on a background of low-amplitude delta activity (1-4 Hz), were associated with an increase in spontaneous spiking of cortical neurons, and were highly synchronized in control animals across recording sites in both cortical and subcortical layers (average cross-correlation values ranging from +0.73 to +1.0), with minimal phase shift between electrodes. However, in FL animals, cortical vs. subcortical burst activity was strongly out of phase, with significantly lower cross-correlation values compared to controls (average values of -0.1 to +0.5; P<0.05 between groups). In particular, a marked reduction in the level of synchronous burst activity was observed the closer the recording electrodes were to the malformation (Pearson's correlation = 0.525, P<0.05). In a subset of FL animals (3/9), burst activity also included a spike or spike-wave pattern similar to the SWDs observed in unanesthetized animals. In summary, neonatal FLs increased the hyperexcitable pattern of burst activity induced by anesthesia and disrupted field

  10. Information filtering by synchronous spikes in a neural population.

    Science.gov (United States)

    Sharafi, Nahal; Benda, Jan; Lindner, Benjamin

    2013-04-01

    Information about time-dependent sensory stimuli is encoded by the spike trains of neurons. Here we consider a population of uncoupled but noisy neurons (each subject to some intrinsic noise) that are driven by a common broadband signal. We ask specifically how much information is encoded in the synchronous activity of the population and how this information transfer is distributed with respect to frequency bands. In order to obtain some insight into the mechanism of information filtering effects found previously in the literature, we develop a mathematical framework to calculate the coherence of the synchronous output with the common stimulus for populations of simple neuron models. Within this frame, the synchronous activity is treated as the product of filtered versions of the spike trains of a subset of neurons. We compare our results for the simple cases of (1) a Poisson neuron with a rate modulation and (2) an LIF neuron with intrinsic white current noise and a current stimulus. For the Poisson neuron, formulas are particularly simple but show only a low-pass behavior of the coherence of synchronous activity. For the LIF model, in contrast, the coherence function of the synchronous activity shows a clear peak at high frequencies, comparable to recent experimental findings. We uncover the mechanism for this shift in the maximum of the coherence and discuss some biological implications of our findings.
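The "product of spike trains" definition of synchronous output used in this framework is easy to state concretely for binned trains: a bin of the synchronous output is 1 only if every neuron in the subset spiked in that bin, so the synchronous train can never carry more spikes than its sparsest member. A minimal sketch on two hand-made binned trains:

```python
def synchronous_spikes(binned_trains):
    # Product across neurons: a time bin carries a synchronous "spike"
    # only if every train in the subset has a spike in that bin.
    out = []
    for bins in zip(*binned_trains):
        prod = 1
        for b in bins:
            prod *= b
        out.append(prod)
    return out

a = [1, 0, 1, 1, 0, 1]
b = [1, 1, 0, 1, 0, 1]
sync = synchronous_spikes([a, b])
```

In the paper this product is taken over filtered spike trains driven by a common stimulus, and the coherence between `sync` and the stimulus reveals the frequency bands the synchronous code transmits.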

  11. Interictal spike frequency varies with ovarian cycle stage in a rat model of epilepsy.

    Science.gov (United States)

    D'Amour, James; Magagna-Poveda, Alejandra; Moretto, Jillian; Friedman, Daniel; LaFrancois, John J; Pearce, Patrice; Fenton, Andre A; MacLusky, Neil J; Scharfman, Helen E

    2015-07-01

    In catamenial epilepsy, seizures exhibit a cyclic pattern that parallels the menstrual cycle. Many studies suggest that catamenial seizures are caused by fluctuations in gonadal hormones during the menstrual cycle, but this has been difficult to study in rodent models of epilepsy because the ovarian cycle in rodents, called the estrous cycle, is disrupted by severe seizures. Thus, when epilepsy is severe, estrous cycles become irregular or stop. Therefore, we modified kainic acid (KA)- and pilocarpine-induced status epilepticus (SE) models of epilepsy so that seizures were rare for the first months after SE, and conducted video-EEG during this time. The results showed that interictal spikes (IIS) occurred intermittently. All rats with regular 4-day estrous cycles had IIS that waxed and waned with the estrous cycle. The association between the estrous cycle and IIS was strong: if the estrous cycles became irregular transiently, IIS frequency also became irregular, and when the estrous cycle resumed its 4-day pattern, IIS frequency did also. Furthermore, when rats were ovariectomized, or males were recorded, IIS frequency did not show a 4-day pattern. Systemic administration of an estrogen receptor antagonist stopped the estrous cycle transiently, accompanied by transient irregularity of the IIS pattern. Eventually all animals developed severe, frequent seizures and at that time both the estrous cycle and the IIS became irregular. We conclude that the estrous cycle entrains IIS in the modified KA and pilocarpine SE models of epilepsy. The data suggest that the ovarian cycle influences more aspects of epilepsy than seizure susceptibility. Copyright © 2015 Elsevier Inc. All rights reserved.

  12. Delayed Maturation of Fast-Spiking Interneurons Is Rectified by Activation of the TrkB Receptor in the Mouse Model of Fragile X Syndrome.

    Science.gov (United States)

    Nomura, Toshihiro; Musial, Timothy F; Marshall, John J; Zhu, Yiwen; Remmers, Christine L; Xu, Jian; Nicholson, Daniel A; Contractor, Anis

    2017-11-22

    Fragile X syndrome (FXS) is a neurodevelopmental disorder that is a leading cause of inherited intellectual disability and the most common known cause of autism spectrum disorder. FXS is broadly characterized by sensory hypersensitivity, and several developmental alterations in synaptic and circuit function have been uncovered in the sensory cortex of the mouse model of FXS (Fmr1 KO). GABA-mediated neurotransmission and fast-spiking (FS) GABAergic interneurons are central to cortical circuit development in the neonate. Here we demonstrate that there is a delay in the maturation of the intrinsic properties of FS interneurons in the sensory cortex, and a deficit in the formation of excitatory synaptic inputs on to these neurons, in neonatal Fmr1 KO mice. Both these delays in neuronal and synaptic maturation were rectified by chronic administration of a TrkB receptor agonist. These results demonstrate that the maturation of the GABAergic circuit in the sensory cortex is altered during a critical developmental period, due in part to a perturbation in BDNF-TrkB signaling, and could contribute to the alterations in cortical development underlying the sensory pathophysiology of FXS. SIGNIFICANCE STATEMENT Fragile X (FXS) individuals have a range of sensory-related phenotypes, and there is growing evidence of alterations in neuronal circuits in the sensory cortex of the mouse model of FXS (Fmr1 KO). GABAergic interneurons are central to the correct formation of circuits during cortical critical periods. Here we demonstrate a delay in the maturation of the properties and synaptic connectivity of interneurons in Fmr1 KO mice during a critical period of cortical development. The delays in both cellular and synaptic maturation were rectified by administration of a TrkB receptor agonist, suggesting reduced BDNF-TrkB signaling as a contributing factor. These results provide evidence that the function of fast-spiking interneurons is disrupted due to a deficiency in neurotrophin

  13. Parameter estimation in neuronal stochastic differential equation models from intracellular recordings of membrane potentials in single neurons

    DEFF Research Database (Denmark)

    Ditlevsen, Susanne; Samson, Adeline

    2016-01-01

    Dynamics of the membrane potential in a single neuron can be studied by estimating biophysical parameters from intracellular recordings. Diffusion processes, given as continuous solutions to stochastic differential equations, are widely applied as models for the neuronal membrane potential evolut...
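    The estimation idea described in this record can be sketched for the simplest such diffusion, an Ornstein-Uhlenbeck membrane-potential model. The parameter values and the least-squares AR(1) fit below are illustrative assumptions, not the authors' method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 'true' parameters of an Ornstein-Uhlenbeck membrane potential:
# dV = (v_rest - V)/tau dt + sigma dW   (tau in ms, V in mV)
tau, v_rest, sigma = 20.0, -65.0, 2.0
dt, n = 0.1, 200_000

# Simulate a long intracellular 'recording' (Euler-Maruyama scheme)
v = np.empty(n)
v[0] = v_rest
for i in range(1, n):
    v[i] = (v[i-1] + (v_rest - v[i-1]) / tau * dt
            + sigma * np.sqrt(dt) * rng.standard_normal())

# The exact discretization is an AR(1): V[i+1] = a V[i] + b + noise,
# with a = exp(-dt/tau) and b = v_rest (1 - a); recover a, b by least squares.
A = np.vstack([v[:-1], np.ones(n - 1)]).T
(a_hat, b_hat), *_ = np.linalg.lstsq(A, v[1:], rcond=None)

tau_hat = -dt / np.log(a_hat)       # estimated membrane time constant
v_rest_hat = b_hat / (1.0 - a_hat)  # estimated resting potential
```

Given a sufficiently long trace, the regression recovers the time constant and resting level from the simulated recording alone.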

  14. The role of dendritic non-linearities in single neuron computation

    Directory of Open Access Journals (Sweden)

    Boris Gutkin

    2014-05-01

    Experiments have demonstrated that summation of excitatory post-synaptic potentials (EPSPs) in dendrites is non-linear. The sum of multiple EPSPs can be larger than their arithmetic sum, a superlinear summation due to the opening of voltage-gated channels, similar to somatic spiking: the so-called dendritic spike. The sum of multiple EPSPs can also be smaller than their arithmetic sum, because the synaptic current necessarily saturates at some point. While these observations are well explained by biophysical models, the impact of dendritic spikes on computation remains a matter of debate. One reason is that dendritic spikes may fail to make the neuron spike; similarly, dendritic saturations are sometimes presented as a glitch which should be corrected by dendritic spikes. We provide solid arguments against this claim and show that dendritic saturations as well as dendritic spikes enhance single neuron computation, even when they cannot directly make the neuron fire. To explore the computational impact of dendritic spikes and saturations, we use a binary neuron model in conjunction with Boolean algebra. We demonstrate using these tools that a single dendritic non-linearity, either spiking or saturating, combined with the somatic non-linearity, enables a neuron to compute linearly non-separable Boolean functions (lnBfs). These functions are impossible to compute when summation is linear, and the exclusive OR is a famous example of lnBfs. Importantly, the implementation of these functions does not require the dendritic non-linearity to make the neuron spike. Next, we show that reduced and realistic biophysical models of the neuron are capable of computing lnBfs. Within these models, and contrary to the binary model, the dendritic and somatic non-linearities are tightly coupled. Yet we show that these neuron models are capable of linearly non-separable computations.
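    The binary-neuron argument can be illustrated with a sketch (not the paper's code): two thresholded dendritic subunits feeding a somatic threshold compute the feature-binding function (x1 AND x2) OR (x3 AND x4), a standard example of a linearly non-separable Boolean function. The thresholds chosen here are hypothetical:

```python
from itertools import product

def dendritic_spike(inputs, theta=2):
    """All-or-none dendritic non-linearity: a local spike is generated
    once the summed synaptic input reaches the local threshold theta."""
    return 1 if sum(inputs) >= theta else 0

def two_stage_neuron(x1, x2, x3, x4):
    """Binary neuron: two dendritic subunits, then a somatic threshold."""
    d1 = dendritic_spike([x1, x2])   # branch carrying synapses 1 and 2
    d2 = dendritic_spike([x3, x4])   # branch carrying synapses 3 and 4
    return 1 if d1 + d2 >= 1 else 0  # soma fires if either branch spikes

# This computes (x1 AND x2) OR (x3 AND x4). No single linear threshold on
# (x1..x4) can compute it: the positive patterns (1,1,0,0) and (0,0,1,1)
# and the negative patterns (1,0,1,0) and (0,1,0,1) have the same sum of
# weighted inputs under any weight vector.
truth_table = {x: two_stage_neuron(*x) for x in product([0, 1], repeat=4)}
```

The construction shows the point made in the abstract: the non-separable function is computed even though neither dendritic event needs to drive the soma by itself.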

  15. Dynamical analysis of periodic bursting in piece-wise linear planar neuron model.

    Science.gov (United States)

    Ji, Ying; Zhang, Xiaofang; Liang, Minjie; Hua, Tingting; Wang, Yawei

    2015-12-01

    A piece-wise linear planar neuron model, namely the two-dimensional McKean model with periodic drive, is investigated in this paper. A periodic bursting phenomenon can be observed in numerical simulations. By assuming formal solutions associated with different intervals of this non-autonomous system and introducing the generalized Jacobian matrix at the non-smooth boundaries, the bifurcation mechanism for the bursting solution induced by the slowly varying periodic drive is presented. It is shown that the discontinuous Hopf bifurcation occurring at the non-smooth boundaries, i.e., the bifurcation taking place at the thresholds of the stimulation, leads to the alternation between the rest state and the spiking state. That is, the different oscillation modes of this non-autonomous system alternate periodically due to the non-smoothness of the vector field and the slow variation of the periodic drive.
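    A minimal simulation of a McKean-type piecewise-linear model under slow periodic forcing can illustrate the setup; all parameter values below are hypothetical and not taken from the paper:

```python
import numpy as np

def f(v, a=0.25):
    """McKean's piecewise-linear caricature of the FitzHugh-Nagumo cubic."""
    if v < a / 2:
        return -v
    if v <= (1 + a) / 2:
        return v - a
    return 1 - v

# Hypothetical parameters: eps makes the recovery variable w slow,
# and the periodic drive is slower still, as in the bursting scenario.
eps, gamma, dt = 0.01, 0.5, 0.05
t = np.arange(0.0, 2000.0, dt)
drive = 0.1 + 0.25 * np.sin(2 * np.pi * t / 500.0)

# Forward-Euler integration of the planar system
v = np.zeros(len(t))
w = np.zeros(len(t))
for i in range(1, len(t)):
    v[i] = v[i-1] + dt * (f(v[i-1]) - w[i-1] + drive[i-1])
    w[i] = w[i-1] + dt * eps * (v[i-1] - gamma * w[i-1])
```

As the drive slowly crosses the thresholds of the piecewise vector field, the trajectory alternates between quiescent and oscillatory epochs, which is the phenomenon the paper analyzes.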

  16. Small is beautiful: models of small neuronal networks.

    Science.gov (United States)

    Lamb, Damon G; Calabrese, Ronald L

    2012-08-01

    Modeling has contributed a great deal to our understanding of how individual neurons and neuronal networks function. In this review, we focus on models of the small neuronal networks of invertebrates, especially rhythmically active CPG networks. Models have elucidated many aspects of these networks, from identifying key interacting membrane properties to pointing out gaps in our understanding, for example missing neurons. Even the complex CPGs of vertebrates, such as those that underlie respiration, have been reduced to small network models to great effect. Modeling of these networks spans from simplified models, which are amenable to mathematical analyses, to very complicated biophysical models. Some researchers have now adopted a population approach, where they generate and analyze many related models that differ in a few to several judiciously chosen free parameters; often these parameters show variability across animals and thus justify the approach. Models of small neuronal networks will continue to expand and refine our understanding of how neuronal networks in all animals program motor output, process sensory information and learn. Copyright © 2012 Elsevier Ltd. All rights reserved.

  17. Complementary processing of haptic information by slowly and rapidly adapting neurons in the trigeminothalamic pathway. Electrophysiology, mathematical modeling and simulations of vibrissae-related neurons.

    Directory of Open Access Journals (Sweden)

    Abel eSanchez-Jimenez

    2013-06-01

    Tonic (slowly adapting) and phasic (rapidly adapting) primary afferents convey complementary aspects of haptic information to the central nervous system: object location and texture by the former, shape by the latter. Tonic and phasic neural responses are also recorded in all relay stations of the somatosensory pathway, yet their role in both information processing and information transmission to the cortex is unknown: we do not know whether tonic and phasic neurons process complementary aspects of haptic information and/or whether these two types constitute two separate channels that convey complementary aspects of tactile information to the cortex. Here we propose to elucidate these two questions in the fast trigeminal pathway of the rat (PrV-VPM: principal trigeminal nucleus to ventroposteromedial thalamic nucleus). We analyze the early and global behavior, latencies and stability of the responses of individual cells in PrV and medial lemniscus under 1-40 Hz stimulation of the whiskers in control and decorticated animals, and we use stochastic spiking models and extensive simulations. Our results strongly suggest that in the first relay station of the somatosensory system (PrV): (1) tonic and phasic neurons process complementary aspects of whisker-related tactile information; (2) tonic and phasic responses do not originate from two different types of neurons; (3) the two responses are generated by the differential action of the somatosensory cortex on a unique type of PrV cell; (4) tonic and phasic neurons do not belong to two different channels for the transmission of tactile information to the thalamus; (5) trigeminothalamic transmission is performed exclusively by tonically firing neurons; and (6) all aspects of haptic information are coded into low-pass, band-pass and high-pass filtering profiles of tonically firing neurons. Our results are important both for basic research on neural circuits and information processing, and for the development of sensory neuroprostheses.

  18. Stochastic biomathematical models with applications to neuronal modeling

    CERN Document Server

    Batzel, Jerry; Ditlevsen, Susanne

    2013-01-01

    Stochastic biomathematical models are becoming increasingly important as new light is shed on the role of noise in living systems. In certain biological systems, stochastic effects may even enhance a signal, thus providing a biological motivation for the noise observed in living systems. Recent advances in stochastic analysis and increasing computing power facilitate the analysis of more biophysically realistic models, and this book provides researchers in computational neuroscience and stochastic systems with an overview of recent developments. Key concepts are developed in chapters written by experts in their respective fields. Topics include: one-dimensional homogeneous diffusions and their boundary behavior, large deviation theory and its application in stochastic neurobiological models, a review of mathematical methods for stochastic neuronal integrate-and-fire models, stochastic partial differential equation models in neurobiology, and stochastic modeling of spreading cortical depression.

  19. Linking Memories across Time via Neuronal and Dendritic Overlaps in Model Neurons with Active Dendrites

    Directory of Open Access Journals (Sweden)

    George Kastellakis

    2016-11-01

    Memories are believed to be stored in distributed neuronal assemblies through activity-induced changes in synaptic and intrinsic properties. However, the specific mechanisms by which different memories become associated or linked remain a mystery. Here, we develop a simplified, biophysically inspired network model that incorporates multiple plasticity processes and explains linking of information at three different levels: (1) learning of a single associative memory, (2) rescuing of a weak memory when paired with a strong one, and (3) linking of multiple memories across time. By dissecting synaptic from intrinsic plasticity and neuron-wide from dendritically restricted protein capture, the model reveals a simple, unifying principle: linked memories share synaptic clusters within the dendrites of overlapping populations of neurons. The model generates numerous experimentally testable predictions regarding the cellular and sub-cellular properties of memory engrams as well as their spatiotemporal interactions.

  20. A Neuron-Based Model of Sleep-Wake Cycles

    Science.gov (United States)

    Postnova, Svetlana; Peters, Achim; Braun, Hans

    2008-03-01

    In recent years it was discovered that the neuropeptide orexin/hypocretin plays a central role in sleep processes. This peptide is produced by neurons in the lateral hypothalamus, which project to almost all brain areas. We present a computational model of sleep-wake cycles which is based on Hodgkin-Huxley type neurons and considers reciprocal glutamatergic projections between the lateral hypothalamus and the prefrontal cortex. Orexin is released as a neuromodulator and is required to keep the neurons firing, which corresponds to the wake state. When orexin is depleted the neurons fall silent, as observed in the sleep state. They can be reactivated by the circadian signal from the suprachiasmatic nucleus and/or by external stimuli (an alarm clock). Orexin projections to the thalamocortical neurons can also account for their transition from tonic firing activity during wakefulness to synchronized burst discharges during sleep.

  1. The Limited Utility of Multiunit Data in Differentiating Neuronal Population Activity.

    Directory of Open Access Journals (Sweden)

    Corey J Keller

    To date, single neuron recordings remain the gold standard for monitoring the activity of neuronal populations. Since obtaining single neuron recordings is not always possible, high frequency or 'multiunit activity' (MUA) is often used as a surrogate. Although MUA recordings allow one to monitor the activity of a large number of neurons, they do not allow identification of specific neuronal subtypes, the knowledge of which is often critical for understanding electrophysiological processes. Here, we explored whether prior knowledge of the single unit waveform of specific neuron types is sufficient to permit the use of MUA to monitor and distinguish differential activity of individual neuron types. We used an experimental and modeling approach to determine if components of the MUA can monitor medium spiny neurons (MSNs) and fast-spiking interneurons (FSIs) in the mouse dorsal striatum. We demonstrate that when well-isolated spikes are recorded, the MUA at frequencies greater than 100 Hz is correlated with single unit spiking, highly dependent on the waveform of each neuron type, and accurately reflects the timing and spectral signature of each neuron. However, in the absence of well-isolated spikes (the norm in most MUA recordings), the MUA did not typically contain sufficient information to permit accurate prediction of the respective population activity of MSNs and FSIs. Thus, even under ideal conditions for the MUA to reliably predict the moment-to-moment activity of specific local neuronal ensembles, knowledge of the spike waveform of the underlying neuronal populations is necessary, but not sufficient.
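    The "MUA at frequencies greater than 100 Hz" idea can be sketched on synthetic data; the sampling rate, amplitudes, and the crude FFT-based filter below are illustrative assumptions:

```python
import numpy as np

fs = 10_000.0                          # sampling rate in Hz (hypothetical)
t = np.arange(0.0, 1.0, 1.0 / fs)
spike_times = [0.2, 0.5, 0.8]

# Toy extracellular trace: a large slow LFP plus brief spike-like transients
x = 50.0 * np.sin(2 * np.pi * 10.0 * t)
for ts in spike_times:
    x += -80.0 * np.exp(-0.5 * ((t - ts) / 0.0004) ** 2)

# Zero-phase FFT high-pass at 100 Hz, then rectification: a crude 'MUA'
X = np.fft.rfft(x)
freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
X[freqs < 100.0] = 0.0
mua = np.abs(np.fft.irfft(X, n=len(x)))

# The slow LFP is removed, so the rectified MUA peaks at the spike times
peak_time = t[int(np.argmax(mua))]
```

The high-frequency residue tracks the timing of the embedded spikes even though the raw trace is dominated by the low-frequency field potential.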

  2. Conflict Resolution as Near-Threshold Decision-Making: A Spiking Neural Circuit Model with Two-Stage Competition for Antisaccadic Task.

    Science.gov (United States)

    Lo, Chung-Chuan; Wang, Xiao-Jing

    2016-08-01

    Automatic responses enable us to react quickly and effortlessly, but they often need to be inhibited so that an alternative, voluntary action can take place. To investigate the brain mechanism of controlled behavior, we investigated a biologically-based network model of spiking neurons for inhibitory control. In contrast to a simple race between pro- versus anti-response, our model incorporates a sensorimotor remapping module, and an action-selection module endowed with a "Stop" process through tonic inhibition. Both are under the modulation of rule-dependent control. We tested the model by applying it to the well known antisaccade task in which one must suppress the urge to look toward a visual target that suddenly appears, and shift the gaze diametrically away from the target instead. We found that the two-stage competition is crucial for reproducing the complex behavior and neuronal activity observed in the antisaccade task across multiple brain regions. Notably, our model demonstrates two types of errors: fast and slow. Fast errors result from failing to inhibit the quick automatic responses and therefore exhibit very short response times. Slow errors, in contrast, are due to incorrect decisions in the remapping process and exhibit long response times comparable to those of correct antisaccade responses. The model thus reveals a circuit mechanism for the empirically observed slow errors and broad distributions of erroneous response times in antisaccade. Our work suggests that selecting between competing automatic and voluntary actions in behavioral control can be understood in terms of near-threshold decision-making, sharing a common recurrent (attractor) neural circuit mechanism with discrimination in perception.

  3. Conflict Resolution as Near-Threshold Decision-Making: A Spiking Neural Circuit Model with Two-Stage Competition for Antisaccadic Task.

    Directory of Open Access Journals (Sweden)

    Chung-Chuan Lo

    2016-08-01

    Automatic responses enable us to react quickly and effortlessly, but they often need to be inhibited so that an alternative, voluntary action can take place. To investigate the brain mechanism of controlled behavior, we investigated a biologically-based network model of spiking neurons for inhibitory control. In contrast to a simple race between pro- versus anti-response, our model incorporates a sensorimotor remapping module, and an action-selection module endowed with a "Stop" process through tonic inhibition. Both are under the modulation of rule-dependent control. We tested the model by applying it to the well known antisaccade task in which one must suppress the urge to look toward a visual target that suddenly appears, and shift the gaze diametrically away from the target instead. We found that the two-stage competition is crucial for reproducing the complex behavior and neuronal activity observed in the antisaccade task across multiple brain regions. Notably, our model demonstrates two types of errors: fast and slow. Fast errors result from failing to inhibit the quick automatic responses and therefore exhibit very short response times. Slow errors, in contrast, are due to incorrect decisions in the remapping process and exhibit long response times comparable to those of correct antisaccade responses. The model thus reveals a circuit mechanism for the empirically observed slow errors and broad distributions of erroneous response times in antisaccade. Our work suggests that selecting between competing automatic and voluntary actions in behavioral control can be understood in terms of near-threshold decision-making, sharing a common recurrent (attractor) neural circuit mechanism with discrimination in perception.

  4. Electroencephalographic precursors of spike-wave discharges in a genetic rat model of absence epilepsy: Power spectrum and coherence EEG analyses.

    Science.gov (United States)

    Sitnikova, Evgenia; van Luijtelaar, Gilles

    2009-04-01

    Periods in the electroencephalogram (EEG) that immediately precede the onset of spontaneous spike-wave discharges (SWD) were examined in the WAG/Rij rat model of absence epilepsy. Precursors of SWD (preSWD) were classified based on the distribution of EEG power in the delta-theta-alpha frequency bands as measured in the frontal cortex. In 95% of preSWD, an elevation of EEG power was detected in the delta band (1-4 Hz). 73% of preSWD showed high power in theta frequencies (4.5-8 Hz); these preSWD might correspond to the 5-9 Hz oscillations that were found in GAERS before SWD onset [Pinault, D., Vergnes, M., Marescaux, C., 2001. Medium-voltage 5-9 Hz oscillations give rise to spike-and-wave discharges in a genetic model of absence epilepsy: in vivo dual extracellular recording of thalamic relay and reticular neurons. Neuroscience 105, 181-201]; however, the theta component of preSWD in our WAG/Rij rats was not shaped into a single rhythm. It is concluded that a coalescence of delta and theta in the cortex is favorable for the occurrence of SWD. The onset of SWD was associated with a strengthening of intracortical and thalamo-cortical coherence in the 9.5-14 Hz range and in double-beta frequencies. No feature of EEG coherence can be considered unique to any preSWD subtype. The reticular and ventroposteromedial thalamic nuclei were strongly coupled even before the onset of SWD. All this suggests that SWD derive from an intermixed delta-theta EEG background; seizure onset is associated with a reinforcement of intracortical and cortico-thalamic associations.
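    Band-power classification of this kind can be sketched as follows; the toy EEG signal and the simple periodogram estimator are illustrative, not the authors' analysis pipeline:

```python
import numpy as np

def band_power(x, fs, lo, hi):
    """Mean periodogram power of x in the [lo, hi) Hz band."""
    psd = np.abs(np.fft.rfft(x - x.mean())) ** 2 / len(x)
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    band = (freqs >= lo) & (freqs < hi)
    return psd[band].mean()

# Toy 'pre-SWD' epoch: strong delta (3 Hz) and weaker theta (6 Hz) activity
fs = 200.0
t = np.arange(0.0, 10.0, 1.0 / fs)
rng = np.random.default_rng(1)
eeg = (2.0 * np.sin(2 * np.pi * 3.0 * t)
       + 1.0 * np.sin(2 * np.pi * 6.0 * t)
       + 0.3 * rng.standard_normal(len(t)))

delta = band_power(eeg, fs, 1.0, 4.0)   # the bands used in the study
theta = band_power(eeg, fs, 4.5, 8.0)
```

Comparing band powers across such epochs is the basic operation behind classifying preSWD into delta-dominated and theta-dominated subtypes.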

  5. Wavelet analysis of epileptic spikes

    CERN Document Server

    Latka, M; Kozik, A; West, B J; Latka, Miroslaw; Was, Ziemowit; Kozik, Andrzej; West, Bruce J.

    2003-01-01

    Interictal spikes and sharp waves in the human EEG are characteristic signatures of epilepsy. These potentials originate as a result of synchronous, pathological discharge of many neurons. The reliable detection of such potentials has been a long-standing problem in EEG analysis, especially since long-term monitoring became common in the investigation of epileptic patients. The traditional definition of a spike is based on its amplitude, duration, sharpness, and emergence from its background. However, spike detection systems built solely around this definition are not reliable due to the presence of numerous transients and artifacts. We use the wavelet transform to analyze the properties of EEG manifestations of epilepsy. We demonstrate that the behavior of the wavelet transform of epileptic spikes across scales can constitute the foundation of a relatively simple yet effective detection algorithm.
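    A minimal version of wavelet-based spike detection might look like this, using a Ricker ("Mexican hat") kernel at a single scale; the signal, scale, and amplitudes are hypothetical:

```python
import numpy as np

def ricker(n_points, a):
    """Ricker ('Mexican hat') wavelet sampled at n_points, width a."""
    t = np.arange(n_points) - (n_points - 1) / 2.0
    return (1.0 - (t / a) ** 2) * np.exp(-0.5 * (t / a) ** 2)

fs = 250.0
t = np.arange(0.0, 4.0, 1.0 / fs)
rng = np.random.default_rng(2)

# Background 'EEG' (smoothed noise) with one sharp spike at t = 2 s
eeg = np.convolve(rng.standard_normal(len(t)), np.ones(10) / 10.0, mode='same')
eeg += 8.0 * np.exp(-0.5 * ((t - 2.0) / 0.01) ** 2)

# The wavelet response at a small scale singles out the sharp transient,
# while slower background fluctuations produce little response
scale = 3
response = np.convolve(eeg, ricker(10 * scale, scale), mode='same')
detected = t[int(np.argmax(response))]
```

A full detector would compare responses across several scales, as the abstract describes, rather than thresholding a single one.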

  6. ViSAPy: a Python tool for biophysics-based generation of virtual spiking activity for evaluation of spike-sorting algorithms.

    Science.gov (United States)

    Hagen, Espen; Ness, Torbjørn V; Khosrowshahi, Amir; Sørensen, Christina; Fyhn, Marianne; Hafting, Torkel; Franke, Felix; Einevoll, Gaute T

    2015-04-30

    New, silicon-based multielectrodes comprising hundreds or more electrode contacts offer the possibility to record spike trains from thousands of neurons simultaneously. This potential cannot be realized unless accurate, reliable automated methods for spike sorting are developed, in turn requiring benchmarking data sets with known ground-truth spike times. We here present a general simulation tool for computing benchmarking data for evaluation of spike-sorting algorithms, entitled ViSAPy (Virtual Spiking Activity in Python). The tool is based on a well-established biophysical forward-modeling scheme and is implemented as a Python package built on top of the neuronal simulator NEURON and the Python tool LFPy. ViSAPy allows for arbitrary combinations of multicompartmental neuron models and geometries of recording multielectrodes. Three example benchmarking data sets are generated, i.e., tetrode and polytrode data mimicking in vivo cortical recordings and microelectrode array (MEA) recordings of in vitro activity in salamander retinas. The synthesized example benchmarking data mimics salient features of typical experimental recordings, for example, spike waveforms depending on interspike interval. ViSAPy goes beyond existing methods as it includes biologically realistic model noise, synaptic activation by recurrent spiking networks, finite-sized electrode contacts, and allows for inhomogeneous electrical conductivities. ViSAPy is optimized to allow for generation of long time series of benchmarking data, spanning minutes of biological time, by parallel execution on multi-core computers. ViSAPy is an open-ended tool as it can be generalized to produce benchmarking data for arbitrary recording-electrode geometries and with various levels of complexity. Copyright © 2015 The Authors. Published by Elsevier B.V. All rights reserved.

  7. What the training of a neuronal network optimizes.

    Science.gov (United States)

    Tabor, Zbisław

    2007-09-01

    In this study a model of the training of neuronal networks built of integrate-and-fire neurons is investigated. Neurons are assembled into complex networks of the Watts-Strogatz type. Every neuronal network contains a single receptor neuron. The receptor neuron, stimulated by an external signal, evokes spikes at equal time intervals. The spikes generated by the receptor neuron induce subsequent activity of the whole network. The depolarization signals, traveling through the network, modify synaptic couplings according to a kick-and-delay rule, a process termed "training." It is shown that the training decreases the mean length of the paths along which a depolarization signal is transmitted from the receptor neuron. Consequently, the training also decreases the reaction time and the energy expense necessary for the network to react to the external stimulus. It is shown that the initial distribution of synaptic couplings crucially determines the performance of trained networks.
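    The Watts-Strogatz substrate and the mean-path-length measure can be sketched in plain Python; the network sizes and rewiring probability below are illustrative, and no training rule is implemented:

```python
import random
from collections import deque

def watts_strogatz(n, k, p, rng):
    """Ring of n nodes, each linked to its k nearest neighbours on one side
    (so degree 2k), with every edge rewired with probability p."""
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(1, k + 1):
            adj[i].add((i + j) % n)
            adj[(i + j) % n].add(i)
    for i in range(n):
        for j in range(1, k + 1):
            if rng.random() < p:
                old, new = (i + j) % n, rng.randrange(n)
                if new != i and new not in adj[i]:
                    adj[i].discard(old); adj[old].discard(i)
                    adj[i].add(new); adj[new].add(i)
    return adj

def mean_path_from(adj, src):
    """Mean BFS distance from src (the 'receptor neuron') to reached nodes."""
    dist, queue = {src: 0}, deque([src])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return sum(dist.values()) / (len(dist) - 1)

rng = random.Random(0)
lattice = watts_strogatz(200, 3, 0.0, rng)       # no shortcuts
small_world = watts_strogatz(200, 3, 0.1, rng)   # a few shortcuts
```

Even a small rewiring probability sharply shortens the mean path from the receptor node, which is the same quantity the training rule in the paper is reported to reduce.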

  8. Chronic sodium salicylate administration enhances population spike long-term potentiation following a combination of theta frequency primed-burst stimulation and the transient application of pentylenetetrazol in rat CA1 hippocampal neurons.

    Science.gov (United States)

    Gholami, Masoumeh; Moradpour, Farshad; Semnanian, Saeed; Naghdi, Nasser; Fathollahi, Yaghoub

    2015-11-15

    The effect of chronic administration of sodium salicylate (NaSal) on the excitability and synaptic plasticity of the rodent hippocampus was investigated. Repeated systemic treatment with NaSal (one i.p. injection per day for 6 consecutive days) reliably induced tolerance to the anti-nociceptive effect of NaSal. Following chronic NaSal or vehicle treatment, a series of electrophysiological experiments on acute hippocampal slices (focusing on the CA1 circuitry) tested whether tolerance to NaSal would augment pentylenetetrazol (PTZ)-induced long-term potentiation (LTP) and/or epileptic activity, and whether the augmentation was the same after priming activity with a natural stimulus pattern prior to PTZ. We noted an altered synaptic input-to-spike transformation, such that neuronal firing increased for a given synaptic drive. Population spike LTP (PS-LTP) was increased in the NaSal-tolerant animals, but only when it was induced via a combination of electrical stimulation (theta pattern primed-burst stimulation) and the transient application of PTZ. Identifying and understanding these changes in neuronal excitability and synaptic plasticity following chronic salicylate treatment could prove useful in the clinical diagnosis or treatment of chronic aspirin-induced, or even idiopathic, seizure activity. Copyright © 2015 Elsevier B.V. All rights reserved.

  9. Responses of single neurons and neuronal ensembles in frog first- and second-order olfactory neurons

    Czech Academy of Sciences Publication Activity Database

    Rospars, J. P.; Šanda, Pavel; Lánský, Petr; Duchamp-Viret, P.

    2013-01-01

    Vol. 1536, Nov 6 (2013), pp. 144-158 ISSN 0006-8993 R&D Projects: GA ČR(CZ) GBP304/12/G069; GA ČR(CZ) GAP103/11/0282 Institutional support: RVO:67985823 Keywords: olfaction * spiking activity * neuronal model Subject RIV: JD - Computer Applications, Robotics Impact factor: 2.828, year: 2013

  10. Modeling the Development of Goal-Specificity in Mirror Neurons.

    Science.gov (United States)

    Thill, Serge; Svensson, Henrik; Ziemke, Tom

    2011-12-01

    Neurophysiological studies have shown that parietal mirror neurons encode not only actions but also the goal of these actions. Although some mirror neurons will fire whenever a certain action is perceived (goal-independently), most will only fire if the motion is perceived as part of an action with a specific goal. This result is important for the action-understanding hypothesis as it provides a potential neurological basis for such a cognitive ability. It is also relevant for the design of artificial cognitive systems, in particular robotic systems that rely on computational models of the mirror system in their interaction with other agents. Yet, to date, no computational model has explicitly addressed the mechanisms that give rise to both goal-specific and goal-independent parietal mirror neurons. In the present paper, we present a computational model based on a self-organizing map, which receives artificial inputs representing information about both the observed or executed actions and the context in which they were executed. We show that the map develops a biologically plausible organization in which goal-specific mirror neurons emerge. We further show that the fundamental cause for both the appearance and the number of goal-specific neurons can be found in geometric relationships between the different inputs to the map. The results are important to the action-understanding hypothesis as they provide a mechanism for the emergence of goal-specific parietal mirror neurons and lead to a number of predictions: (1) Learning of new goals may mostly reassign existing goal-specific neurons rather than recruit new ones; (2) input differences between executed and observed actions can explain observed corresponding differences in the number of goal-specific neurons; and (3) the percentage of goal-specific neurons may differ between motion primitives.
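    The self-organizing-map mechanism can be sketched with a minimal 1-D Kohonen map trained on toy "action + context" inputs drawn from two goal clusters; all dimensions and hyperparameters are illustrative, not those of the paper's model:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy 'observed action + context' inputs drawn from two goal clusters
goal_a = rng.normal([1.0, 0.0], 0.1, size=(200, 2))
goal_b = rng.normal([0.0, 1.0], 0.1, size=(200, 2))
data = np.vstack([goal_a, goal_b])
rng.shuffle(data)

# Minimal 1-D Kohonen map: the winning unit and its neighbours move
# toward each input, with learning rate and radius decaying over epochs
n_units, n_epochs = 10, 20
w = rng.random((n_units, 2))
for epoch in range(n_epochs):
    lr = 0.5 * (1.0 - epoch / n_epochs)
    radius = max(1.0, 3.0 * (1.0 - epoch / n_epochs))
    for x in data:
        winner = int(np.argmin(((w - x) ** 2).sum(axis=1)))
        h = np.exp(-0.5 * ((np.arange(n_units) - winner) / radius) ** 2)
        w += lr * h[:, None] * (x - w)

# After training, units specialize: some sit near each 'goal' cluster,
# loosely analogous to the emergence of goal-specific neurons
near_a = int((((w - [1.0, 0.0]) ** 2).sum(axis=1) < 0.2).sum())
near_b = int((((w - [0.0, 1.0]) ** 2).sum(axis=1) < 0.2).sum())
```

The geometric point of the abstract appears even in this toy: how many units become "goal-specific" is governed by the geometry of the input distribution, not by an explicit labeling of goals.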

  11. 3D-printer visualization of neuron models

    Directory of Open Access Journals (Sweden)

    Robert A McDougal

    2015-06-01

    Neurons come in a wide variety of shapes and sizes. In a quest to understand this neuronal diversity, researchers have three-dimensionally traced tens of thousands of neurons; many of these tracings are freely available through online repositories like NeuroMorpho.Org and ModelDB. Tracings can be visualized on the computer screen, used for statistical analysis of the properties of different cell types, used to simulate neuronal behavior, and more. We introduce the use of 3D printing as a technique for visualizing traced morphologies. Our method for generating printable versions of a cell or group of cells is to expand dendrite and axon diameters and then to transform the wireframe tracing into a 3D object with a neuronal surface generating algorithm like Constructive Tessellated Neuronal Geometry (CTNG). We show that 3D printed cells can be readily examined, manipulated, and compared with other neurons to gain insight into both the biology and the reconstruction process. We share our printable models in a new database, 3DModelDB, and encourage others to do the same with cells that they generate using our code or other methods. To provide additional context, 3DModelDB provides a simulatable version of each cell, links to papers that use or describe it, and links to associated entries in other databases.

  12. 3D-printer visualization of neuron models.

    Science.gov (United States)

    McDougal, Robert A; Shepherd, Gordon M

    2015-01-01

    Neurons come in a wide variety of shapes and sizes. In a quest to understand this neuronal diversity, researchers have three-dimensionally traced tens of thousands of neurons; many of these tracings are freely available through online repositories like NeuroMorpho.Org and ModelDB. Tracings can be visualized on the computer screen, used for statistical analysis of the properties of different cell types, used to simulate neuronal behavior, and more. We introduce the use of 3D printing as a technique for visualizing traced morphologies. Our method for generating printable versions of a cell or group of cells is to expand dendrite and axon diameters and then to transform the tracing into a 3D object with a neuronal surface generating algorithm like Constructive Tessellated Neuronal Geometry (CTNG). We show that 3D printed cells can be readily examined, manipulated, and compared with other neurons to gain insight into both the biology and the reconstruction process. We share our printable models in a new database, 3DModelDB, and encourage others to do the same with cells that they generate using our code or other methods. To provide additional context, 3DModelDB provides a simulatable version of each cell, links to papers that use or describe it, and links to associated entries in other databases.

  13. Degradation of cefquinome in spiked milk as a model for bioremediation of dairy farm waste milk containing cephalosporin residues.

    Science.gov (United States)

    Horton, R A; Randall, L P; Bailey-Horne, V; Heinrich, K; Sharman, M; Brunton, L A; La Ragione, R M; Jones, J R

    2015-04-01

    The aims of this work were to develop a model of dairy farm waste milk and to investigate methods for the bioremediation of milk containing cefquinome residues. Unpasteurized milk and UHT milk that had both been spiked with cefquinome at a concentration of 2 μg ml(-1) were used as a model for waste milk containing cephalosporin residues. Adjustment of the spiked UHT milk to pH 10, or treatment with conditioned medium from bacterial growth producing cefotaximase, were the most effective methods for decreasing the cefquinome concentrations within 24 h. A large-scale experiment (10 l of cefquinome-spiked unpasteurized milk) suggested that fermentation for 22 h at 37°C followed by heating at 60°C for 2 h was sufficient to decrease cefquinome concentrations to below the limit of quantification in milk. Treatment of waste milk to decrease cephalosporin residue concentrations and also to kill bacteria prior to feeding to dairy calves could decrease the risk of selection for ESBL bacteria on dairy farms. © 2015 Crown copyright. Journal of Applied Microbiology © 2015 The Society for Applied Microbiology. This article is published with the permission of the Controller of HMSO and the Queen's Printer for Scotland.

  14. To sort or not to sort: the impact of spike-sorting on neural decoding performance

    Science.gov (United States)

    Todorova, Sonia; Sadtler, Patrick; Batista, Aaron; Chase, Steven; Ventura, Valérie

    2014-10-01

    Objective. Brain-computer interfaces (BCIs) are a promising technology for restoring motor ability to paralyzed patients. Spiking-based BCIs have successfully been used in clinical trials to control multi-degree-of-freedom robotic devices. Current implementations of these devices require a lengthy spike-sorting step, which is an obstacle to moving this technology from the lab to the clinic. A viable alternative is to avoid spike-sorting, treating all threshold crossings of the voltage waveform on an electrode as coming from one putative neuron. It is not known, however, how much decoding information might be lost by ignoring spike identity. Approach. We present a full analysis of the effects of spike-sorting schemes on decoding performance. Specifically, we compare how well two common decoders, the optimal linear estimator and the Kalman filter, reconstruct the arm movements of non-human primates performing reaching tasks, when receiving input from various sorting schemes. The schemes we tested included: using threshold crossings without spike-sorting; expert-sorting, discarding the noise; expert-sorting, including the noise as if it were another neuron; and automatic spike-sorting using waveform features. We also decoded from a joint statistical model for the waveforms and tuning curves, which does not involve an explicit spike-sorting step. Main results. Discarding the threshold crossings that cannot be assigned to neurons degrades decoding: no spikes should be discarded. Decoding based on spike-sorted units outperforms decoding based on electrode voltage crossings: spike-sorting is useful. The four waveform-based spike-sorting methods tested here yield similar decoding efficiencies: a fast and simple method is competitive. Decoding using the joint waveform and tuning model shows promise but is not consistently superior. Significance. Our results indicate that simple automated spike-sorting performs as well as the more computationally or manually intensive methods.
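    Of the two decoders compared above, the optimal linear estimator reduces to least-squares regression from binned spike counts to kinematics. A minimal sketch on synthetic data (NumPy assumed; not the authors' implementation):

    ```python
    import numpy as np

    def fit_ole(spike_counts, kinematics):
        """Fit an optimal linear estimator: kinematics ~ [1, counts] @ W.
        spike_counts: (T, n_units); kinematics: (T, n_dims)."""
        X = np.hstack([np.ones((spike_counts.shape[0], 1)), spike_counts])
        W, *_ = np.linalg.lstsq(X, kinematics, rcond=None)
        return W

    def decode_ole(spike_counts, W):
        """Apply a fitted OLE to new binned spike counts."""
        X = np.hstack([np.ones((spike_counts.shape[0], 1)), spike_counts])
        return X @ W
    ```

    With sorted units, each column of `spike_counts` is one neuron; with unsorted threshold crossings, each column is one electrode, which is exactly the comparison the study makes.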

  15. A high-throughput model for investigating neuronal function and synaptic transmission in cultured neuronal networks.

    Science.gov (United States)

    Virdee, Jasmeet K; Saro, Gabriella; Fouillet, Antoine; Findlay, Jeremy; Ferreira, Filipa; Eversden, Sarah; O'Neill, Michael J; Wolak, Joanna; Ursu, Daniel

    2017-11-03

    Loss of synapses or alteration of synaptic activity is associated with cognitive impairment observed in a number of psychiatric and neurological disorders, such as schizophrenia and Alzheimer's disease. Therefore successful development of in vitro methods that can investigate synaptic function in a high-throughput format could be highly impactful for neuroscience drug discovery. We present here the development, characterisation and validation of a novel high-throughput in vitro model for assessing neuronal function and synaptic transmission in primary rodent neurons. The novelty of our approach resides in the combination of the electrical field stimulation (EFS) with data acquisition in spatially separated areas of an interconnected neuronal network. We integrated our methodology with state of the art drug discovery instrumentation (FLIPR Tetra) and used selective tool compounds to perform a systematic pharmacological validation of the model. We investigated pharmacological modulators targeting pre- and post-synaptic receptors (AMPA, NMDA, GABA-A, mGluR2/3 receptors and Nav, Cav voltage-gated ion channels) and demonstrated the ability of our model to discriminate and measure synaptic transmission in cultured neuronal networks. Application of the model described here as an unbiased phenotypic screening approach will help with our long term goals of discovering novel therapeutic strategies for treating neurological disorders.

  16. Perceptron learning rule derived from spike-frequency adaptation and spike-time-dependent plasticity.

    Science.gov (United States)

    D'Souza, Prashanth; Liu, Shih-Chii; Hahnloser, Richard H R

    2010-03-09

    It is widely believed that sensory and motor processing in the brain is based on simple computational primitives rooted in cellular and synaptic physiology. However, many gaps remain in our understanding of the connections between neural computations and biophysical properties of neurons. Here, we show that synaptic spike-time-dependent plasticity (STDP) combined with spike-frequency adaptation (SFA) in a single neuron together approximate the well-known perceptron learning rule. Our calculations and integrate-and-fire simulations reveal that delayed inputs to a neuron endowed with STDP and SFA precisely instruct neural responses to earlier arriving inputs. We demonstrate this mechanism on a developmental example of auditory map formation guided by visual inputs, as observed in the external nucleus of the inferior colliculus (ICX) of barn owls. The interplay of SFA and STDP in model ICX neurons precisely transfers the tuning curve from the visual modality onto the auditory modality, demonstrating a useful computation for multimodal and sensory-guided processing.
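    For reference, the classical perceptron learning rule that the STDP/SFA mechanism is shown to approximate updates weights only when the thresholded output disagrees with the teacher. A minimal sketch on a toy linearly separable task (the encoding and learning rate are arbitrary, not from the paper):

    ```python
    def perceptron_step(w, x, target, lr=0.1):
        """Classical perceptron rule: move the weights only when the
        thresholded output disagrees with the teacher signal."""
        y = 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0
        return [wi + lr * (target - y) * xi for wi, xi in zip(w, x)]
    ```

    Iterating this step over a linearly separable data set converges after finitely many updates (perceptron convergence theorem), which is the behavior the biophysical mechanism reproduces.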

  17. Analysis and modeling of ensemble recordings from respiratory pre-motor neurons indicate changes in functional network architecture after acute hypoxia

    Directory of Open Access Journals (Sweden)

    Roberto F Galán

    2010-09-01

    Full Text Available We have combined neurophysiologic recording, statistical analysis, and computational modeling to investigate the dynamics of the respiratory network in the brainstem. Using a multielectrode array, we recorded ensembles of respiratory neurons in perfused in situ rat preparations that produce spontaneous breathing patterns, focusing on inspiratory pre-motor neurons. We compared firing rates and neuronal synchronization among these neurons before and after a brief hypoxic stimulus. We observed a significant decrease in the number of spikes after stimulation, in part due to a transient slowing of the respiratory pattern. However, the median interspike interval did not change, suggesting that the firing threshold of the neurons was not affected but rather the synaptic input was. A bootstrap analysis of synchrony between spike trains revealed that, both before and after brief hypoxia, up to 45% (but typically less than 5%) of coincident spikes across neuronal pairs were not explained by chance. Most likely, this synchrony resulted from common synaptic input to the pre-motor population, an example of stochastic synchronization. After brief hypoxia most pairs were less synchronized, although some were more, suggesting that the respiratory network was “rewired” transiently after the stimulus. To investigate this hypothesis, we created a simple computational model with feed-forward divergent connections along the inspiratory pathway. Assuming that (1) the number of divergent projections was not the same for all presynaptic cells, but rather spanned a wide range, and (2) the stimulus increased inhibition at the top of the network, this model reproduced the reduction in firing rate and bootstrap-corrected synchrony subsequent to hypoxic stimulation observed in our experimental data.
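    The bootstrap correction for chance synchrony can be illustrated by jittering one spike train to estimate the coincidence count expected by chance; the window and jitter widths below are arbitrary choices, not the study's parameters:

    ```python
    import bisect
    import random

    def coincidences(a, b, window=0.005):
        """Count spikes in train a that have a spike in train b within ±window (s)."""
        b = sorted(b)
        n = 0
        for t in a:
            i = bisect.bisect_left(b, t - window)
            if i < len(b) and b[i] <= t + window:
                n += 1
        return n

    def excess_synchrony(a, b, window=0.005, jitter=0.05, n_boot=200, seed=1):
        """Observed coincidences minus the bootstrap (jittered) chance level."""
        rng = random.Random(seed)
        obs = coincidences(a, b, window)
        chance = [coincidences(a, [t + rng.uniform(-jitter, jitter) for t in b], window)
                  for _ in range(n_boot)]
        return obs - sum(chance) / n_boot
    ```

    A positive excess indicates synchrony beyond chance, as in the study's pre-motor pairs.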

  18. Robust spike-train learning in spike-event based weight update.

    Science.gov (United States)

    Shrestha, Sumit Bam; Song, Qing

    2017-12-01

    Supervised learning algorithms in a spiking neural network either learn a spike-train pattern for a single neuron receiving input spike-trains from multiple input synapses or learn to output the first spike time in a feedforward network setting. In this paper, we build upon a spike-event based weight update strategy to learn continuous spike-trains in a spiking neural network with a hidden layer, using a dead zone on-off based adaptive learning rate rule which ensures convergence of the learning process in the sense of weight convergence and robustness of the learning process to external disturbances. Based on different benchmark problems, we compare this new method with other relevant spike-train learning algorithms. The results show that the speed of learning is much improved and the rate of successful learning is also greatly improved. Copyright © 2017 Elsevier Ltd. All rights reserved.
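    The dead-zone on-off idea can be illustrated in the abstract: the update is switched off whenever the error lies inside a fixed band, which is what protects weight convergence against bounded disturbances. The threshold and rate below are placeholders, not the paper's rule:

    ```python
    def dead_zone_update(w, grad, err, zone=0.1, lr=0.5):
        """Dead-zone on-off rule (sketch): apply the gradient step only
        when the error magnitude exceeds the dead-zone threshold."""
        if abs(err) <= zone:
            return list(w)  # inside the dead zone: learning switched off
        return [wi - lr * g for wi, g in zip(w, grad)]
    ```

    Because updates stop once the error falls below the disturbance bound, bounded noise cannot drive the weights to drift indefinitely.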

  19. Improved SpikeProp for Using Particle Swarm Optimization

    Directory of Open Access Journals (Sweden)

    Falah Y. H. Ahmed

    2013-01-01

    Full Text Available A spiking neural network encodes information in the timing of individual spikes. A novel supervised learning rule for SpikeProp is derived to overcome the discontinuities introduced by spike thresholding. This algorithm is based on an error-backpropagation learning rule suited for supervised learning of spiking neurons that use exact spike time coding. SpikeProp demonstrates that spiking neurons can perform complex nonlinear classification using fast temporal coding. This study proposes enhancements of the SpikeProp learning algorithm for supervised training of spiking networks that can deal with complex patterns. The proposed methods include SpikeProp particle swarm optimization (PSO) and an angle-driven dependency learning rate. These methods are applied to the SpikeProp network to enhance multilayer learning and optimize weights. Input and output patterns are encoded as spike trains of precisely timed spikes, and the network learns to transform the input trains into target output trains. With these enhancements, our proposed methods outperformed other conventional neural network architectures.
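    For context, the canonical PSO velocity and position updates that such hybrid SpikeProp schemes build on can be sketched as follows (the inertia and acceleration coefficients are generic textbook values, not those tuned for SpikeProp):

    ```python
    import random

    def pso_minimize(f, dim=2, n=20, iters=200, seed=0):
        """Minimal particle swarm optimizer: each particle's velocity mixes
        inertia with cognitive (personal best) and social (global best) pulls."""
        rng = random.Random(seed)
        pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
        vel = [[0.0] * dim for _ in range(n)]
        pbest = [p[:] for p in pos]
        pval = [f(p) for p in pos]
        g = pbest[pval.index(min(pval))][:]
        for _ in range(iters):
            for i in range(n):
                for d in range(dim):
                    r1, r2 = rng.random(), rng.random()
                    vel[i][d] = (0.7 * vel[i][d]
                                 + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                                 + 1.5 * r2 * (g[d] - pos[i][d]))
                    pos[i][d] += vel[i][d]
                v = f(pos[i])
                if v < pval[i]:
                    pval[i], pbest[i] = v, pos[i][:]
                    if v < f(g):
                        g = pos[i][:]
        return g
    ```

    In a SpikeProp hybrid, `f` would be the network's spike-time error as a function of its weights; here any objective works.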

  20. Selective loss of alpha motor neurons with sparing of gamma motor neurons and spinal cord cholinergic neurons in a mouse model of spinal muscular atrophy.

    Science.gov (United States)

    Powis, Rachael A; Gillingwater, Thomas H

    2016-03-01

    Spinal muscular atrophy (SMA) is a neuromuscular disease characterised primarily by loss of lower motor neurons from the ventral grey horn of the spinal cord and proximal muscle atrophy. Recent experiments utilising mouse models of SMA have demonstrated that not all motor neurons are equally susceptible to the disease, revealing that other populations of neurons can also be affected. Here, we have extended investigations of selective vulnerability of neuronal populations in the spinal cord of SMA mice to include comparative assessments of alpha motor neuron (α-MN) and gamma motor neuron (γ-MN) pools, as well as other populations of cholinergic neurons. Immunohistochemical analyses of late-symptomatic SMA mouse spinal cord revealed that numbers of α-MNs were significantly reduced at all levels of the spinal cord compared with controls, whereas numbers of γ-MNs remained stable. Likewise, the average size of α-MN cell somata was decreased in SMA mice with no change occurring in γ-MNs. Evaluation of other pools of spinal cord cholinergic neurons revealed that pre-ganglionic sympathetic neurons, central canal cluster interneurons, partition interneurons and preganglionic autonomic dorsal commissural nucleus neuron numbers all remained unaffected in SMA mice. Taken together, these findings indicate that α-MNs are uniquely vulnerable among cholinergic neuron populations in the SMA mouse spinal cord, with γ-MNs and other cholinergic neuronal populations being largely spared. © 2015 Anatomical Society.

  1. Gradient Learning in Spiking Neural Networks by Dynamic Perturbation of Conductances

    International Nuclear Information System (INIS)

    Fiete, Ila R.; Seung, H. Sebastian

    2006-01-01

    We present a method of estimating the gradient of an objective function with respect to the synaptic weights of a spiking neural network. The method works by measuring the fluctuations in the objective function in response to dynamic perturbation of the membrane conductances of the neurons. It is compatible with recurrent networks of conductance-based model neurons with dynamic synapses. The method can be interpreted as a biologically plausible synaptic learning rule, if the dynamic perturbations are generated by a special class of 'empiric' synapses driven by random spike trains from an external source
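    The core idea of correlating objective fluctuations with random perturbations is shared with simultaneous-perturbation stochastic approximation (SPSA); a minimal sketch of such a gradient estimate (not the authors' conductance-based rule):

    ```python
    import random

    def spsa_gradient(f, w, eps=1e-3, rng=None):
        """One-sample simultaneous-perturbation gradient estimate: probe the
        objective at w ± eps*delta for a random sign vector delta."""
        rng = rng or random.Random(0)
        delta = [rng.choice((-1.0, 1.0)) for _ in w]
        fp = f([wi + eps * d for wi, d in zip(w, delta)])
        fm = f([wi - eps * d for wi, d in zip(w, delta)])
        return [(fp - fm) / (2 * eps * d) for d in delta]
    ```

    Each estimate is noisy, but it is unbiased to first order in `eps`, so averaging over perturbations (or descending with a small step) recovers the true gradient direction; in the paper this averaging is done by the ongoing dynamics of the 'empiric' synapses.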

  2. A phase plane analysis of neuron-astrocyte interactions.

    Science.gov (United States)

    Amiri, Mahmood; Montaseri, Ghazal; Bahrami, Fariba

    2013-08-01

    Intensive experimental studies have shown that astrocytes are active partners in modulation of synaptic transmission. In the present research, we study neuron-astrocyte signaling using a biologically inspired model of one neuron synapsing one astrocyte. In this model, the firing dynamics of the neuron is described by the Morris-Lecar model and the Ca(2+) dynamics of a single astrocyte explained by a functional model introduced by Postnov and colleagues. Using the coupled neuron-astrocyte model and based on the results of the phase plane analyses, it is demonstrated that the astrocyte is able to activate the silent neuron or change the neuron spiking frequency through bidirectional communication. This suggests that astrocyte feedback signaling is capable of modulating spike transmission frequency by changing neuron spiking frequency. This effect is described by a saddle-node on invariant circle bifurcation in the coupled neuron-astrocyte model. In this way, our results suggest that the neuron-astrocyte crosstalk has a fundamental role in producing diverse neuronal activities and therefore enhances the information processing capabilities of the brain. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.

  3. Spiking neural circuits with dendritic stimulus processors : encoding, decoding, and identification in reproducing kernel Hilbert spaces.

    Science.gov (United States)

    Lazar, Aurel A; Slutskiy, Yevgeniy B

    2015-02-01

    We present a multi-input multi-output neural circuit architecture for nonlinear processing and encoding of stimuli in the spike domain. In this architecture a bank of dendritic stimulus processors implements nonlinear transformations of multiple temporal or spatio-temporal signals such as spike trains or auditory and visual stimuli in the analog domain. Dendritic stimulus processors may act on both individual stimuli and on groups of stimuli, thereby executing complex computations that arise as a result of interactions between concurrently received signals. The results of the analog-domain computations are then encoded into a multi-dimensional spike train by a population of spiking neurons modeled as nonlinear dynamical systems. We investigate general conditions under which such circuits faithfully represent stimuli and demonstrate algorithms for (i) stimulus recovery, or decoding, and (ii) identification of dendritic stimulus processors from the observed spikes. Taken together, our results demonstrate a fundamental duality between the identification of the dendritic stimulus processor of a single neuron and the decoding of stimuli encoded by a population of neurons with a bank of dendritic stimulus processors. This duality result enabled us to derive lower bounds on the number of experiments to be performed and the total number of spikes that need to be recorded for identifying a neural circuit.

  4. Frequency dependence of CA3 spike phase response arising from h-current properties

    Directory of Open Access Journals (Sweden)

    Melodie eBorel

    2013-12-01

    Full Text Available The phase of firing of hippocampal neurons during theta oscillations encodes spatial information. Moreover, the spike phase response to synaptic inputs in individual cells depends on the expression of the hyperpolarisation-activated mixed cation current (Ih), which differs between CA3 and CA1 pyramidal neurons. Here, we compared the phase response of these two cell types, as well as their intrinsic membrane properties. We found that both CA3 and CA1 pyramidal neurons show a voltage sag in response to negative current steps but that this voltage sag is significantly smaller in CA3 cells. Moreover, CA3 pyramidal neurons have less prominent resonance properties compared to CA1 pyramidal neurons. This is consistent with differential expression of Ih by the two cell types. Despite their distinct intrinsic membrane properties, both CA3 and CA1 pyramidal neurons displayed bidirectional spike phase control by excitatory conductance inputs during theta oscillations. In particular, excitatory inputs delivered at the descending phase of a dynamic clamp-induced membrane potential oscillation delayed the subsequent spike by nearly 50 mrad. The effect was shown to be mediated by Ih and was counteracted by increasing inhibitory conductance driving the membrane potential oscillation. Using our experimental data to feed a computational model, we showed that differences in Ih between CA3 and CA1 pyramidal neurons could predict frequency-dependent differences in phase response properties between these cell types. We confirmed experimentally such frequency-dependent spike phase control in CA3 neurons. Therefore, a decrease in theta frequency, which is observed in intact animals during novelty, might switch the CA3 spike phase response from unidirectional to bidirectional and thereby promote encoding of the new context.

  5. Enhanced polychronisation in a spiking network with metaplasticity

    Directory of Open Access Journals (Sweden)

    Mira eGuise

    2015-02-01

    Full Text Available Computational models of metaplasticity have usually focused on the modeling of single synapses (Shouval et al., 2002). In this paper we study the effect of metaplasticity on network behavior. Our guiding assumption is that the primary purpose of metaplasticity is to regulate synaptic plasticity, by increasing it when input is low and decreasing it when input is high. For our experiments we adopt a model of metaplasticity that demonstrably has this effect for a single synapse; our primary interest is in how metaplasticity thus defined affects network-level phenomena. We focus on a network-level phenomenon called polychronicity, that has a potential role in representation and memory. A network with polychronicity has the ability to produce non-synchronous but precisely timed sequences of neural firing events that can arise from strongly connected groups of neurons called polychronous neural groups (Izhikevich et al., 2004; Izhikevich, 2006a). Polychronous groups (PNGs) develop readily when spiking networks are exposed to repeated spatio-temporal stimuli under the influence of spike-timing-dependent plasticity (STDP), but are sensitive to changes in synaptic weight distribution. We use a technique we have recently developed called Response Fingerprinting to show that PNGs formed in the presence of metaplasticity are significantly larger than those with no metaplasticity. A potential mechanism for this enhancement is proposed that links an inherent property of integrator type neurons called spike latency to an increase in the tolerance of PNG neurons to jitter in their inputs.

  6. Brian: a simulator for spiking neural networks in Python

    Directory of Open Access Journals (Sweden)

    Dan F M Goodman

    2008-11-01

    Full Text Available Brian is a new simulator for spiking neural networks, written in Python (http://brian.di.ens.fr). It is an intuitive and highly flexible tool for rapidly developing new models, especially networks of single-compartment neurons. In addition to using standard types of neuron models, users can define models by writing arbitrary differential equations in ordinary mathematical notation. Python scientific libraries can also be used for defining models and analysing data. Vectorisation techniques allow efficient simulations despite the overheads of an interpreted language. Brian will be especially valuable for working on non-standard neuron models not easily covered by existing software, and as an alternative to using Matlab or C for simulations. With its easy and intuitive syntax, Brian is also very well suited for teaching computational neuroscience.

  7. Leaders of neuronal cultures in a quorum percolation model

    Directory of Open Access Journals (Sweden)

    Jean-Pierre Eckmann

    2010-09-01

    Full Text Available We present a theoretical framework using quorum percolation for describing the initiation of activity in a neural culture. The cultures are modeled as random graphs, whose nodes are neurons with $k_{in}$ inputs and $k_{out}$ outputs, and whose input degrees $k_{in}=k$ obey given distribution functions $p_k$. We examine the firing activity of the population of neurons according to their input degree ($k$ classes) and calculate for each class its firing probability $\Phi_k(t)$ as a function of $t$. The probability of a node to fire is found to be determined by its in-degree $k$, and the first-to-fire neurons are those that have a high $k$. A small minority of high-$k$ classes may be called ``Leaders,'' as they form an inter-connected subnetwork that consistently fires much before the rest of the culture. Once initiated, the activity spreads from the Leaders to the less connected majority of the culture. We then use the distribution of in-degree of the Leaders to study the growth rate of the number of neurons active in a burst, which was experimentally measured to be initially exponential. We find that this kind of growth rate is best described by a population that has an in-degree distribution that is a Gaussian centered around $k=75$ with width $\sigma=31$ for the majority of the neurons, but also has a power law tail with exponent $-2$ for ten percent of the population. Neurons in the tail may have as many as $k=4,700$ inputs. We explore and discuss the correspondence between the degree distribution and a dynamic neuronal threshold, showing that from the functional point of view, structure and elementary dynamics are interchangeable. We discuss possible geometric origins of this distribution, and comment on the importance of size, or of having a large number of neurons, in the culture.
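    The quorum mechanism, in which a node fires once enough of its inputs have fired, can be sketched as a discrete cascade on a directed graph (the toy graph and threshold here are illustrative, not the paper's Gaussian-plus-power-law network):

    ```python
    def quorum_cascade(inputs, threshold, seeds):
        """inputs maps each node to its list of presynaptic nodes; a silent
        node fires once at least `threshold` of its inputs have fired.
        Returns the sets of nodes firing in successive rounds."""
        fired = set(seeds)
        rounds = [set(seeds)]
        while True:
            new = {n for n, pre in inputs.items()
                   if n not in fired and sum(p in fired for p in pre) >= threshold}
            if not new:
                return rounds
            fired |= new
            rounds.append(new)
    ```

    In such a cascade, nodes with many inputs tend to reach quorum in early rounds, which is the paper's mechanism for the first-to-fire "Leaders."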

  8. iSpike: a spiking neural interface for the iCub robot

    International Nuclear Information System (INIS)

    Gamez, D; Fidjeland, A K; Lazdins, E

    2012-01-01

    This paper presents iSpike: a C++ library that interfaces between spiking neural network simulators and the iCub humanoid robot. It uses a biologically inspired approach to convert the robot’s sensory information into spikes that are passed to the neural network simulator, and it decodes output spikes from the network into motor signals that are sent to control the robot. Applications of iSpike range from embodied models of the brain to the development of intelligent robots using biologically inspired spiking neural networks. iSpike is an open source library that is available for free download under the terms of the GPL. (paper)

  9. Generative modelling of regulated dynamical behavior in cultured neuronal networks

    Science.gov (United States)

    Volman, Vladislav; Baruchi, Itay; Persi, Erez; Ben-Jacob, Eshel

    2004-04-01

    The spontaneous activity of cultured in vitro neuronal networks exhibits rich dynamical behavior. Despite the artificial manner of their construction, the networks' activity includes features which seemingly reflect the action of an underlying regulating mechanism rather than arbitrary causes and effects. Here, we study the cultured networks' dynamical behavior utilizing a generative modelling approach. The idea is to include the minimal required generic mechanisms to capture the non-autonomous features of the behavior, which can be reproduced by computer modelling, and then, to identify the additional features of biotic regulation in the observed behavior which are beyond the scope of the model. Our model neurons are composed of soma described by the two Morris-Lecar dynamical variables (voltage and fraction of open potassium channels), with dynamical synapses described by the Tsodyks-Markram three variables dynamics. The model neuron satisfies our self-consistency test: when fed with data recorded from a real cultured network, it exhibits dynamical behavior very close to that of the network's “representative” neuron. Specifically, it shows similar statistical scaling properties (approximated by similar symmetric Lévy distribution with finite mean). A network of such M-L elements spontaneously generates (when weak “structured noise” is added) synchronized bursting events (SBEs) similar to the observed ones. Both the neuronal statistical scaling properties within the bursts and the properties of the SBEs time series show generative (a new discussed concept) agreement with the recorded data. Yet, the model network exhibits different structure of temporal variations and does not recover the observed hierarchical temporal ordering, unless fed with recorded special neurons (with much higher rates of activity), thus indicating the existence of self-regulation mechanisms. It also implies that the spontaneous activity is not simply noise-induced. Instead, the

  10. Information Entropy Production of Maximum Entropy Markov Chains from Spike Trains

    Directory of Open Access Journals (Sweden)

    Rodrigo Cofré

    2018-01-01

    Full Text Available The spiking activity of neuronal networks follows laws that are not time-reversal symmetric; the notion of pre-synaptic and post-synaptic neurons, stimulus correlations and noise correlations have a clear time order. Therefore, a biologically realistic statistical model for the spiking activity should be able to capture some degree of time irreversibility. We use the thermodynamic formalism to build a framework in the context of maximum entropy models to quantify the degree of time irreversibility, providing an explicit formula for the information entropy production of the inferred maximum entropy Markov chain. We provide examples to illustrate our results and discuss the importance of time irreversibility for modeling the spike train statistics.
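    The entropy production formula referred to above, sigma = sum_ij pi_i P_ij log(pi_i P_ij / (pi_j P_ji)), vanishes exactly for time-reversible (detailed-balance) chains; a minimal sketch for a finite-state Markov chain (not the inferred maximum entropy chain itself):

    ```python
    from math import log

    def stationary(P, iters=1000):
        """Stationary distribution by power iteration (rows of P sum to 1)."""
        n = len(P)
        pi = [1.0 / n] * n
        for _ in range(iters):
            pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
        return pi

    def entropy_production(P):
        """sigma = sum_ij pi_i P_ij log(pi_i P_ij / (pi_j P_ji));
        zero iff the chain satisfies detailed balance (time reversible).
        Transitions with a forbidden reverse (P_ji = 0) are skipped here."""
        pi = stationary(P)
        n = len(P)
        s = 0.0
        for i in range(n):
            for j in range(n):
                fwd, bwd = pi[i] * P[i][j], pi[j] * P[j][i]
                if fwd > 0 and bwd > 0:
                    s += fwd * log(fwd / bwd)
        return s
    ```

    A chain that cycles preferentially in one direction has strictly positive entropy production, which is the quantitative signature of time irreversibility the paper extracts from spike trains.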

  11. Analysis of the noise-induced bursting-spiking transition in a pancreatic beta-cell model

    DEFF Research Database (Denmark)

    Aguirre, J.; Mosekilde, Erik; Sanjuan, M.A.F.

    2004-01-01

    A stochastic model of the electrophysiological behavior of the pancreatic beta cell is studied, as a paradigmatic example of a bursting biological cell embedded in a noisy environment. The analysis is focused on the distortion that a growing noise causes to the basic properties of the membrane potential signals, such as their periodic or chaotic nature, and their bursting or spiking behavior. We present effective computational tools to obtain as much information as possible from these signals, and we suggest that the methods could be applied to real time series. Finally, a universal dependence...

  12. Models of utricular bouton afferents: role of afferent-hair cell connectivity in determining spike train regularity.

    Science.gov (United States)

    Holmes, William R; Huwe, Janice A; Williams, Barbara; Rowe, Michael H; Peterson, Ellengene H

    2017-05-01

    Vestibular bouton afferent terminals in turtle utricle can be categorized into four types depending on their location and terminal arbor structure: lateral extrastriolar (LES), striolar, juxtastriolar, and medial extrastriolar (MES). The terminal arbors of these afferents differ in surface area, total length, collecting area, number of boutons, number of bouton contacts per hair cell, and axon diameter (Huwe JA, Logan CJ, Williams B, Rowe MH, Peterson EH. J Neurophysiol 113: 2420-2433, 2015). To understand how differences in terminal morphology and the resulting hair cell inputs might affect afferent response properties, we modeled representative afferents from each region, using reconstructed bouton afferents. Collecting area and hair cell density were used to estimate hair cell-to-afferent convergence. Nonmorphological features were held constant to isolate effects of afferent structure and connectivity. The models suggest that all four bouton afferent types are electrotonically compact and that excitatory postsynaptic potentials are two to four times larger in MES afferents than in other afferents, making MES afferents more responsive to low input levels. The models also predict that MES and LES terminal structures permit higher spontaneous firing rates than those in striola and juxtastriola. We found that differences in spike train regularity are not a consequence of differences in peripheral terminal structure, per se, but that a higher proportion of multiple contacts between afferents and individual hair cells increases afferent firing irregularity. The prediction that afferents having primarily one bouton contact per hair cell will fire more regularly than afferents making multiple bouton contacts per hair cell has implications for spike train regularity in dimorphic and calyx afferents. NEW & NOTEWORTHY Bouton afferents in different regions of turtle utricle have very different morphologies and afferent-hair cell connectivities. Highly detailed

  13. Primary motor cortex neurons during individuated finger and wrist movements: correlation of spike firing rates with the motion of individual digits versus their principal components

    Directory of Open Access Journals (Sweden)

    Evan eKirsch

    2014-05-01

    Full Text Available The joints of the hand provide 24 mechanical degrees of freedom. Yet 2 to 7 principal components (PCs) account for 80 to 95% of the variance in hand joint motion during tasks that vary from grasping to finger spelling. Such findings have led to the hypothesis that the brain may simplify operation of the hand by preferentially controlling PCs. We tested this hypothesis using data recorded from the primary motor cortex (M1) during individuated finger and wrist movements. Principal component analysis (PCA) of the simultaneous position of the 5 digits and the wrist showed relatively consistent kinematic synergies across recording sessions in two monkeys. The first 3 PCs typically accounted for 85% of the variance. Cross-correlations then were calculated between the firing rate of single neurons and the simultaneous flexion/extension motion of each of the 5 digits and the wrist, as well as with each of their 6 PCs. For each neuron, we then compared the maximal absolute value of the cross-correlations (MAXC) achieved with the motion of any digit or the wrist to the MAXC achieved with motion along any PC axis. The MAXC with a digit and the MAXC with a PC were themselves highly correlated across neurons. A minority of neurons correlated more strongly with a principal component than with any digit. But for the populations of neurons sampled from each of two subjects, MAXCs with digits were slightly but significantly higher than those with PCs. We therefore reject the hypothesis that M1 neurons preferentially control PCs of hand motion. We cannot exclude the possibility that M1 neurons might control kinematic synergies identified using linear or non-linear methods other than PCA. We consider it more likely, however, that neurons in other centers of the motor system—such as the pontomedullary reticular formation and the spinal gray matter—drive synergies of movement and/or muscles, which M1 neurons act to fractionate in producing individuated finger and wrist movements.
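    The variance-explained computation behind such PCA claims can be sketched with an SVD of the centered joint-position matrix (NumPy assumed; the two-synergy data in the test are synthetic, not recorded kinematics):

    ```python
    import numpy as np

    def pca_variance_explained(X):
        """Fraction of total variance captured by each principal component
        of joint kinematic data X (time samples x joints)."""
        Xc = X - X.mean(axis=0)                      # center each joint
        s = np.linalg.svd(Xc, compute_uv=False)      # singular values
        var = s ** 2
        return var / var.sum()
    ```

    If a few leading entries of the returned vector sum to near 1, a small number of kinematic synergies dominates the motion, which is the premise the paper tests against single-neuron firing.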

  14. Spike sorting for polytrodes: a divide and conquer approach

    OpenAIRE

    Swindale, Nicholas V.; Spacek, Martin A.

    2014-01-01

    In order to determine patterns of neural activity, spike signals recorded by extracellular electrodes have to be clustered (sorted) with the aim of ensuring that each cluster represents all the spikes generated by an individual neuron. Many methods for spike sorting have been proposed but few are easily applicable to recordings from polytrodes which may have 16 or more recording sites. As with tetrodes, these are spaced sufficiently closely that signals from single neurons will usually be rec...

  15. Recurrently connected and localized neuronal communities initiate coordinated spontaneous activity in neuronal networks.

    Science.gov (United States)

    Lonardoni, Davide; Amin, Hayder; Di Marco, Stefano; Maccione, Alessandro; Berdondini, Luca; Nieus, Thierry

    2017-07-01

    Developing neuronal systems intrinsically generate coordinated spontaneous activity that propagates by involving a large number of synchronously firing neurons. In vivo, waves of spikes transiently characterize the activity of developing brain circuits and are fundamental for activity-dependent circuit formation. In vitro, coordinated spontaneous spiking activity, or network bursts (NBs), interleaved within periods of asynchronous spikes emerge during the development of 2D and 3D neuronal cultures. Several studies have investigated this type of activity and its dynamics, but how a neuronal system generates these coordinated events remains unclear. Here, we investigate at a cellular level the generation of network bursts in spontaneously active neuronal cultures by exploiting high-resolution multielectrode array recordings and computational network modelling. Our analysis reveals that NBs are generated in specialized regions of the network (functional neuronal communities) that feature neuronal links with high cross-correlation peak values, sub-millisecond lags and that share very similar structural connectivity motifs providing recurrent interactions. We show that the particular properties of these local structures enable locally amplifying spontaneous asynchronous spikes and that this mechanism can lead to the initiation of NBs. Through the analysis of simulated and experimental data, we also show that AMPA currents drive the coordinated activity, while NMDA and GABA currents are only involved in shaping the dynamics of NBs. Overall, our results suggest that the presence of functional neuronal communities with recurrent local connections allows a neuronal system to generate spontaneous coordinated spiking activity events. As suggested by the rules used for implementing our computational model, such functional communities might naturally emerge during network development by following simple constraints on distance-based connectivity.
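
    The pairwise measure underlying the community analysis above, the peak value and lag of a cross-correlogram between two binned spike trains, can be sketched as follows. The binning, normalization, and function name are assumptions for illustration, not the authors' pipeline (which works at sub-millisecond lags on high-resolution multielectrode data).

```python
import numpy as np

def xcorr_peak(a, b, max_lag):
    """Peak value and lag (in bins) of the cross-correlogram
    C(lag) = sum_t a[t] * b[t + lag] between binned spike trains a, b."""
    n = len(a)
    lags = np.arange(-max_lag, max_lag + 1)
    vals = np.empty(len(lags))
    for i, lag in enumerate(lags):
        if lag >= 0:
            vals[i] = np.dot(a[:n - lag], b[lag:])
        else:
            vals[i] = np.dot(a[-lag:], b[:n + lag])
    k = int(np.argmax(vals))
    return vals[k], int(lags[k])
```

    Neuron pairs with high peak values at short lags would be candidate links of a functional neuronal community.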

  16. Recurrently connected and localized neuronal communities initiate coordinated spontaneous activity in neuronal networks.

    Directory of Open Access Journals (Sweden)

    Davide Lonardoni

    2017-07-01

    Full Text Available Developing neuronal systems intrinsically generate coordinated spontaneous activity that propagates by involving a large number of synchronously firing neurons. In vivo, waves of spikes transiently characterize the activity of developing brain circuits and are fundamental for activity-dependent circuit formation. In vitro, coordinated spontaneous spiking activity, or network bursts (NBs), interleaved within periods of asynchronous spikes emerge during the development of 2D and 3D neuronal cultures. Several studies have investigated this type of activity and its dynamics, but how a neuronal system generates these coordinated events remains unclear. Here, we investigate at a cellular level the generation of network bursts in spontaneously active neuronal cultures by exploiting high-resolution multielectrode array recordings and computational network modelling. Our analysis reveals that NBs are generated in specialized regions of the network (functional neuronal communities) that feature neuronal links with high cross-correlation peak values, sub-millisecond lags and that share very similar structural connectivity motifs providing recurrent interactions. We show that the particular properties of these local structures enable locally amplifying spontaneous asynchronous spikes and that this mechanism can lead to the initiation of NBs. Through the analysis of simulated and experimental data, we also show that AMPA currents drive the coordinated activity, while NMDA and GABA currents are only involved in shaping the dynamics of NBs. Overall, our results suggest that the presence of functional neuronal communities with recurrent local connections allows a neuronal system to generate spontaneous coordinated spiking activity events. As suggested by the rules used for implementing our computational model, such functional communities might naturally emerge during network development by following simple constraints on distance-based connectivity.

  17. Determinants of Multiple Semantic Priming: A Meta-Analysis and Spike Frequency Adaptive Model of a Cortical Network

    Science.gov (United States)

    Lavigne, Frederic; Dumercy, Laurent; Darmon, Nelly

    2011-01-01

    Recall and language comprehension while processing sequences of words involves multiple semantic priming between several related and/or unrelated words. Accounting for multiple and interacting priming effects in terms of underlying neuronal structure and dynamics is a challenge for current models of semantic priming. Further elaboration of current…

  18. A spiking neural network model of 3D perception for event-based neuromorphic stereo vision systems

    Science.gov (United States)

    Osswald, Marc; Ieng, Sio-Hoi; Benosman, Ryad; Indiveri, Giacomo

    2017-01-01

    Stereo vision is an important feature that enables machine vision systems to perceive their environment in 3D. While machine vision has spawned a variety of software algorithms to solve the stereo-correspondence problem, their implementation and integration in small, fast, and efficient hardware vision systems remains a difficult challenge. Recent advances made in neuromorphic engineering offer a possible solution to this problem, with the use of a new class of event-based vision sensors and neural processing devices inspired by the organizing principles of the brain. Here we propose a radically novel model that solves the stereo-correspondence problem with a spiking neural network that can be directly implemented with massively parallel, compact, low-latency and low-power neuromorphic engineering devices. We validate the model with experimental results, highlighting features that are in agreement with both computational neuroscience stereo vision theories and experimental findings. We demonstrate its features with a prototype neuromorphic hardware system and provide testable predictions on the role of spike-based representations and temporal dynamics in biological stereo vision processing systems.

  19. Unsupervised neural spike sorting for high-density microelectrode arrays with convolutive independent component analysis.

    Science.gov (United States)

    Leibig, Christian; Wachtler, Thomas; Zeck, Günther

    2016-09-15

    Unsupervised identification of action potentials in multi-channel extracellular recordings, in particular from high-density microelectrode arrays with thousands of sensors, is an unresolved problem. While independent component analysis (ICA) achieves rapid unsupervised sorting, it ignores the convolutive structure of extracellular data, thus limiting the unmixing to a subset of neurons. Here we present a spike sorting algorithm based on convolutive ICA (cICA) to retrieve a larger number of accurately sorted neurons than with instantaneous ICA while accounting for signal overlaps. Spike sorting was applied to datasets with varying signal-to-noise ratios (SNR: 3-12) and 27% spike overlaps, sampled at either 11.5 or 23kHz on 4365 electrodes. We demonstrate how the instantaneity assumption in ICA-based algorithms has to be relaxed in order to improve the spike sorting performance for high-density microelectrode array recordings. Reformulating the convolutive mixture as an instantaneous mixture by modeling several delayed samples jointly is necessary to increase signal-to-noise ratio. Our results emphasize that different cICA algorithms are not equivalent. Spike sorting performance was assessed with ground-truth data generated from experimentally derived templates. The presented spike sorter was able to extract ≈90% of the true spike trains with an error rate below 2%. It was superior to two alternative (c)ICA methods (≈80% accurately sorted neurons) and comparable to a supervised sorting. Our new algorithm represents a fast solution to overcome the current bottleneck in spike sorting of large datasets generated by simultaneous recording with thousands of electrodes. Copyright © 2016 Elsevier B.V. All rights reserved.
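
    The key step described above, relaxing ICA's instantaneity assumption by modeling several delayed samples jointly, amounts to a delay embedding of the recording: stacking delayed copies of every channel turns a convolutive mixture into an instantaneous mixture of an augmented source vector, to which standard ICA can then be applied. A minimal sketch of that embedding (the helper name `delay_embed` is an assumption; the cICA algorithms compared in the paper differ in further details):

```python
import numpy as np

def delay_embed(X, n_delays):
    """Stack delayed copies of each channel so that a convolutive mixture
    x(t) = sum_k A_k s(t - k) becomes an instantaneous mixture.
    X: (n_channels, n_samples)
    returns: (n_channels * n_delays, n_samples - n_delays + 1)."""
    C, T = X.shape
    rows = [X[:, d:T - n_delays + 1 + d] for d in range(n_delays)]
    return np.vstack(rows)
```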

  20. Modeling of Auditory Neuron Response Thresholds with Cochlear Implants

    Directory of Open Access Journals (Sweden)

    Frederic Venail

    2015-01-01

    Full Text Available The quality of the prosthetic-neural interface is a critical point for cochlear implant efficiency. It depends not only on technical and anatomical factors such as electrode position into the cochlea (depth and scalar placement), electrode impedance, and distance between the electrode and the stimulated auditory neurons, but also on the number of functional auditory neurons. The efficiency of electrical stimulation can be assessed by the measurement of e-CAP in cochlear implant users. In the present study, we modeled the activation of auditory neurons in cochlear implant recipients (Nucleus device). The electrical response, measured using the auto-NRT (neural response telemetry) algorithm, has been analyzed using multivariate regression with cubic splines in order to take into account the variations of insertion depth of electrodes amongst subjects as well as the other technical and anatomical factors listed above. NRT thresholds depend on the electrode squared impedance (β = −0.11 ± 0.02, P<0.01), the scalar placement of the electrodes (β = −8.50 ± 1.97, P<0.01), and the depth of insertion calculated as the characteristic frequency of auditory neurons (CNF). Distribution of NRT residues according to CNF could provide a proxy of auditory neurons functioning in implanted cochleas.

  1. Neuronal Models for Studying Tau Pathology

    Directory of Open Access Journals (Sweden)

    Thorsten Koechling

    2010-01-01

    Full Text Available Alzheimer's disease (AD) is the most frequent neurodegenerative disorder leading to dementia in the aged human population. It is characterized by the presence of two main pathological hallmarks in the brain: senile plaques containing β-amyloid peptide and neurofibrillary tangles (NFTs), consisting of fibrillar polymers of abnormally phosphorylated tau protein. Both of these histological characteristics of the disease have been simulated in genetically modified animals, which today include numerous mouse, fish, worm, and fly models of AD. The objective of this review is to present some of the main animal models that exist for reproducing symptoms of the disorder and their advantages and shortcomings as suitable models of the pathological processes. Moreover, we will discuss the results and conclusions which have been drawn from the use of these models so far and their contribution to the development of therapeutic applications for AD.

  2. A latent low-dimensional common input drives a pool of motor neurons: a probabilistic latent state-space model.

    Science.gov (United States)

    Feeney, Daniel F; Meyer, François G; Noone, Nicholas; Enoka, Roger M

    2017-10-01

    Motor neurons appear to be activated with a common input signal that modulates the discharge activity of all neurons in the motor nucleus. It has proven difficult for neurophysiologists to quantify the variability in a common input signal, but characterization of such a signal may improve our understanding of how the activation signal varies across motor tasks. Contemporary methods of quantifying the common input to motor neurons rely on compiling discrete action potentials into continuous time series, assuming the motor pool acts as a linear filter, and requiring signals to be of sufficient duration for frequency analysis. We introduce a state-space model in which the discharge activity of motor neurons is modeled as inhomogeneous Poisson processes and propose a method to quantify an abstract latent trajectory that represents the common input received by motor neurons. The approach also approximates the variation in synaptic noise in the common input signal. The model is validated with four data sets: a simulation of 120 motor units, a pair of integrate-and-fire neurons with a Renshaw cell providing inhibitory feedback, the discharge activity of 10 integrate-and-fire neurons, and the discharge times of concurrently active motor units during an isometric voluntary contraction. The simulations revealed that a latent state-space model is able to quantify the trajectory and variability of the common input signal across all four conditions. When compared with the cumulative spike train method of characterizing common input, the state-space approach was more sensitive to the details of the common input current and was less influenced by the duration of the signal. The state-space approach appears to be capable of detecting rather modest changes in common input signals across conditions. NEW & NOTEWORTHY We propose a state-space model that explicitly delineates a common input signal sent to motor neurons and the physiological noise inherent in synaptic signaling.
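
    The generative picture above, a single latent trajectory driving a pool of inhomogeneous Poisson neurons, can be illustrated with a toy simulation. This is a hedged sketch, not the authors' probabilistic estimator: the parameters, function names, and the smoothed-log-count proxy for the latent signal are all assumptions.

```python
import numpy as np

def simulate_pool(n_neurons, n_bins, rng):
    """Shared latent drive (a slow random walk) sets the per-bin spike
    probability of every neuron in the pool (1-ms bins assumed)."""
    x = np.cumsum(rng.normal(0.0, 0.01, n_bins))
    x -= x.mean()
    p = np.clip(0.02 * np.exp(x), 0.0, 1.0)   # per-bin spike probability
    spikes = rng.random((n_neurons, n_bins)) < p
    return x, spikes

def latent_estimate(spikes, width=200):
    """Crude proxy for the latent trajectory: log of the smoothed pooled
    spike count (a stand-in for the probabilistic filtering in the paper)."""
    pooled = spikes.sum(axis=0).astype(float)
    smoothed = np.convolve(pooled, np.ones(width) / width, mode="same")
    est = np.log(smoothed + 1e-6)
    return est - est.mean()
```

    Even this crude estimate tracks the simulated common input closely, which is what makes the pooled discharge of a motor nucleus informative about its shared drive.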

  3. Noise-induced divisive gain control in neuron models.

    Science.gov (United States)

    Longtin, André; Doiron, Brent; Bulsara, Adi R

    2002-01-01

    A recent computational study of gain control via shunting inhibition has shown that the slope of the frequency-versus-input (f-I) characteristic of a neuron can be decreased by increasing the noise associated with the inhibitory input (Neural Comput. 13, 227-248). This novel noise-induced divisive gain control relies on the concomitant increase of the noise variance with the mean of the total inhibitory conductance. Here we investigate this effect using different neuronal models. The effect is shown to occur in the standard leaky integrate-and-fire (LIF) model with additive Gaussian white noise, and in the LIF with multiplicative noise acting on the inhibitory conductance. The noisy scaling of input currents is also shown to occur in the one-dimensional theta-neuron model, which has firing dynamics, as well as a large scale compartmental model of a pyramidal cell in the electrosensory lateral line lobe of a weakly electric fish. In this latter case, both the inhibition and the excitatory input have Poisson statistics; noise-induced divisive inhibition is thus seen in f-I curves for which the noise increases along with the input I. We discuss how the variation of the noise intensity along with inputs is constrained by the physiological context and the class of model used, and further provide a comparison of the divisive effect across models.
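
    A minimal leaky integrate-and-fire simulation illustrates the setting in which such noise effects on the f-I curve are studied: with purely additive Gaussian white noise, a subthreshold input that never fires deterministically produces noise-driven spikes. This sketch does not reproduce the conductance-based divisive effect itself, and all parameter values are assumptions.

```python
import numpy as np

def lif_rate(I, sigma, T=20000, dt=0.001, tau=0.02, vth=1.0, seed=0):
    """Firing rate (Hz) of an LIF neuron, Euler-integrated with constant
    drive I plus additive Gaussian white noise of strength sigma."""
    rng = np.random.default_rng(seed)
    noise = rng.normal(size=T)
    a = dt / tau
    v, n_spikes = 0.0, 0
    for t in range(T):
        v += a * (I - v) + sigma * np.sqrt(a) * noise[t]
        if v >= vth:                 # threshold crossing: spike and reset
            n_spikes += 1
            v = 0.0
    return n_spikes / (T * dt)
```

    Sweeping I at several noise levels yields the f-I curves whose slopes are compared in the study.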

  4. A reanalysis of “Two types of asynchronous activity in networks of excitatory and inhibitory spiking neurons” [version 1; referees: 2 approved]

    Directory of Open Access Journals (Sweden)

    Rainer Engelken

    2016-08-01

    Full Text Available Neuronal activity in the central nervous system varies strongly in time and across neuronal populations. It is a longstanding proposal that such fluctuations generically arise from chaotic network dynamics. Various theoretical studies predict that the rich dynamics of rate models operating in the chaotic regime can subserve circuit computation and learning. Neurons in the brain, however, communicate via spikes and it is a theoretical challenge to obtain similar rate fluctuations in networks of spiking neuron models. A recent study investigated spiking balanced networks of leaky integrate-and-fire (LIF) neurons and compared their dynamics to a matched rate network with identical topology, where single unit input-output functions were chosen from isolated LIF neurons receiving Gaussian white noise input. A mathematical analogy between the chaotic instability in networks of rate units and the spiking network dynamics was proposed. Here we revisit the behavior of the spiking LIF networks and these matched rate networks. We find expected hallmarks of a chaotic instability in the rate network: For supercritical coupling strength near the transition point, the autocorrelation time diverges. For subcritical coupling strengths, we observe critical slowing down in response to small external perturbations. In the spiking network, we found in contrast that the timescale of the autocorrelations is insensitive to the coupling strength and that rate deviations resulting from small input perturbations rapidly decay. The decay speed even accelerates for increasing coupling strength. In conclusion, our reanalysis demonstrates fundamental differences between the behavior of pulse-coupled spiking LIF networks and rate networks with matched topology and input-output function. In particular there is no indication of a corresponding chaotic instability in the spiking network.

  5. A Neuronal Model of Classical Conditioning.

    Science.gov (United States)

    1987-10-01

    Moore (1985), Gelperin, Hopfield, and Tank (1985), Blazis, Desmond, Moore, and Berthier (1986), Tesauro (1986), and Donegan and Wagner (1987). Proposals...sometimes called Hopfield networks (Hopfield, 1982; Cohen and Grossberg, 1983; Hopfield, 1984; Hopfield and Tank, 1985, 1986; Tesauro, 1986). These latter... Tesauro, G. (1986). Simple neural models of classical conditioning. Biological Cybernetics, 55, 187-200. Thompson, R. F. (1976). The search for the

  6. An Overview of Bayesian Methods for Neural Spike Train Analysis

    Directory of Open Access Journals (Sweden)

    Zhe Chen

    2013-01-01

    Full Text Available Neural spike train analysis is an important task in computational neuroscience which aims to understand neural mechanisms and gain insights into neural circuits. With the advancement of multielectrode recording and imaging technologies, it has become increasingly demanding to develop statistical tools for analyzing large neuronal ensemble spike activity. Here we present a tutorial overview of Bayesian methods and their representative applications in neural spike train analysis, at both single neuron and population levels. On the theoretical side, we focus on various approximate Bayesian inference techniques as applied to latent state and parameter estimation. On the application side, the topics include spike sorting, tuning curve estimation, neural encoding and decoding, deconvolution of spike trains from calcium imaging signals, and inference of neuronal functional connectivity and synchrony. Some research challenges and opportunities for neural spike train analysis are discussed.
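
    As a concrete instance of the encoding/decoding topic surveyed above, here is a minimal maximum-likelihood (flat-prior MAP) decoder under an independent-Poisson spiking model. The tuning-curve setup and the function name `poisson_decode` are illustrative assumptions, not drawn from the tutorial itself.

```python
import numpy as np

def poisson_decode(counts, tuning, dt):
    """Maximum-likelihood stimulus index under independent Poisson spiking:
    log L(s) = sum_i [ n_i * log(f_i(s) * dt) - f_i(s) * dt ] + const.
    counts: (n_neurons,) spike counts in a window of length dt seconds;
    tuning: (n_neurons, n_stimuli) expected firing rates f_i(s)."""
    lam = tuning * dt + 1e-12            # expected counts; guard log(0)
    loglik = counts @ np.log(lam) - lam.sum(axis=0)
    return int(np.argmax(loglik))
```

    Replacing the flat prior with an informative one, or the point estimate with the full posterior, leads toward the approximate Bayesian inference techniques the tutorial reviews.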

  7. Model-Based Design of Stimulus Trains for Selective Microstimulation of Targeted Neuronal Populations

    National Research Council Canada - National Science Library

    McIntyre, Cameron

    2001-01-01

    ... that accurately reproduced the dynamic firing properties of mammalian neurons. The neuron models were coupled to a three-dimensional finite element model of the spinal cord that solved for the potentials...

  8. Modelling LGMD2 visual neuron system

    OpenAIRE

    Fu, Qinbing; Yue, Shigang

    2015-01-01

    Two Lobula Giant Movement Detectors (LGMDs) have been identified in the lobula region of the locust visual system: LGMD1 and LGMD2. LGMD1 has been successfully used in robot navigation to avoid impending collisions. LGMD2 also responds to looming stimuli in depth, and shares most of the same properties with LGMD1; however, LGMD2 has its own specific collision-selective responses when dealing with different visual stimuli. Therefore, in this paper, we propose a novel way to model LGMD2, in order to em...

  9. Energetic Constraints Produce Self-sustained Oscillatory Dynamics in Neuronal Networks.

    Science.gov (United States)

    Burroni, Javier; Taylor, P; Corey, Cassian; Vachnadze, Tengiz; Siegelmann, Hava T

    2017-01-01

    Overview: We model energy constraints in a network of spiking neurons, while exploring general questions of resource limitation on network function abstractly. Background: Metabolic states like dietary ketosis or hypoglycemia have a large impact on brain function and disease outcomes. Glia provide metabolic support for neurons, among other functions. Yet, in computational models of glia-neuron cooperation, there have been no previous attempts to explore the effects of direct realistic energy costs on network activity in spiking neurons. Currently, biologically realistic spiking neural networks assume that membrane potential is the main driving factor for neural spiking, and do not take into consideration energetic costs. Methods: We define local energy pools to constrain a neuron model, termed Spiking Neuron Energy Pool (SNEP), which explicitly incorporates energy limitations. Each neuron requires energy to spike, and resources in the pool regenerate over time. Our simulation displays an easy-to-use GUI, which can be run locally in a web browser, and is freely available. Results: Energy dependence drastically changes behavior of these neural networks, causing emergent oscillations similar to those in networks of biological neurons. We analyze the system via Lotka-Volterra equations, producing several observations: (1) energy can drive self-sustained oscillations, (2) the energetic cost of spiking modulates the degree and type of oscillations, (3) harmonics emerge with frequencies determined by energy parameters, and (4) varying energetic costs have non-linear effects on energy consumption and firing rates. Conclusions: Models of neuron function which attempt biological realism may benefit from including energy constraints. Further, we assert that observed oscillatory effects of energy limitations exist in networks of many kinds, and that these findings generalize to abstract graphs and technological applications.
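
    The core mechanism, a neuron that can only spike when a local, slowly regenerating energy pool can pay the cost of the spike, can be sketched in a few lines. Parameter values and names are assumptions; the published SNEP model and its GUI are considerably richer.

```python
import numpy as np

def snep_sim(T=5000, dt=0.001, tau=0.02, I=1.5, vth=1.0,
             cost=1.0, regen=20.0, pool_max=5.0):
    """LIF neuron gated by an energy pool: a spike spends `cost` units,
    and the pool refills at `regen` units per second (capped at pool_max).
    Returns spike times (bin indices) and the pool trajectory."""
    v, pool = 0.0, pool_max
    spike_times, pool_trace = [], []
    for t in range(T):
        v += (dt / tau) * (I - v)
        pool = min(pool_max, pool + regen * dt)
        if v >= vth and pool >= cost:    # can afford to spike
            spike_times.append(t)
            pool -= cost
            v = 0.0
        elif v >= vth:
            v = vth                      # energy-starved: hold at threshold
        pool_trace.append(pool)
    return np.array(spike_times), np.array(pool_trace)
```

    With a regeneration rate below the neuron's unconstrained energy demand, firing becomes energy-limited, the first step toward the pool-driven oscillations analyzed in the paper.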

  10. Heavy Ion Track Temperature with the High Level of Specific Inelastic Energy Loss in Materials at the Thermal Spike Model

    CERN Document Server

    Didyk, A Yu; Semina, V K

    2003-01-01

    The thermal spike model in materials under irradiation by swift heavy ions with high specific energy loss is considered, taking into account the temperature dependence along the ion trajectory. The numerical solutions of the system of temperature equations for the lattice and electron temperatures are obtained, taking into account the possible heating of the lattice up to the melting and evaporation points, i.e., with the two phase transitions. The pressure in the volume of the heavy ion track and its influence on the changes of thermodynamical parameters are introduced. The influence of defects on the "hot" electron free path is discussed. A numerical analysis of the lattice temperature was carried out at low and high values of the thermal conductivity and heat capacity parameters.

  11. Heavy ion track temperature with the high level of specific inelastic energy loss in materials at the thermal spike model

    International Nuclear Information System (INIS)

    Didyk, A.Yu.; Robuk, V.N.; Semina, V.K.

    2003-01-01

    The thermal spike model in materials under irradiation by swift heavy ions with high specific energy loss is considered, taking into account the temperature dependence along the ion trajectory. The numerical solutions of the system of temperature equations for the lattice and electron temperatures are obtained, taking into account the possible heating of the lattice up to the melting and evaporation points, i.e., with the two phase transitions. The pressure in the volume of the heavy ion track and its influence on the changes of thermodynamical parameters are introduced. The influence of defects on the 'hot' electron free path is discussed. A numerical analysis of the lattice temperature was carried out at low and high values of the thermal conductivity and heat capacity parameters. (author)

  12. Removal of polycyclic aromatic hydrocarbons in soil spiked with model mixtures of petroleum hydrocarbons and heterocycles using biosurfactants from Rhodococcus ruber IEGM 231.

    Science.gov (United States)

    Ivshina, Irina; Kostina, Ludmila; Krivoruchko, Anastasiya; Kuyukina, Maria; Peshkur, Tatyana; Anderson, Peter; Cunningham, Colin

    2016-07-15

    Removal of polycyclic aromatic hydrocarbons (PAHs) in soil using biosurfactants (BS) produced by Rhodococcus ruber IEGM 231 was studied in soil columns spiked with model mixtures of major petroleum constituents. A crystalline mixture of single PAHs (0.63 g/kg), a crystalline mixture of PAHs (0.63 g/kg) and polycyclic aromatic sulfur heterocycles (PASHs), and an artificially synthesized non-aqueous phase liquid (NAPL) containing PAHs (3.00 g/kg) dissolved in alkanes C10-C19 were used for spiking. Percentage of PAH removal with BS varied from 16 to 69%. Washing activities of BS were 2.5 times greater than those of synthetic surfactant Tween 60 in NAPL-spiked soil and similar to Tween 60 in crystalline-spiked soil. At the same time, the amounts of removed PAHs were equal (0.3-0.5 g/kg dry soil) regardless of the chemical pattern of the model mixture of petroleum hydrocarbons and heterocycles used for spiking. UV spectra for soil before and after BS treatment were obtained and their applicability for differentiated analysis of PAH and PASH concentration changes in remediated soil was shown. The ratios A254nm/A288nm revealed that BS increased the biotreatability of PAH-contaminated soils. Copyright © 2016 Elsevier B.V. All rights reserved.

  13. Stress exacerbates neuron loss and microglia proliferation in a rat model of excitotoxic lower motor neuron injury.

    Science.gov (United States)

    Puga, Denise A; Tovar, C Amy; Guan, Zhen; Gensel, John C; Lyman, Matthew S; McTigue, Dana M; Popovich, Phillip G

    2015-10-01

    All individuals experience stress and hormones (e.g., glucocorticoids/GCs) released during stressful events can affect the structure and function of neurons. These effects of stress are best characterized for brain neurons; however, the mechanisms controlling the expression and binding affinity of glucocorticoid receptors in the spinal cord are different than those in the brain. Accordingly, whether stress exerts unique effects on spinal cord neurons, especially in the context of pathology, is unknown. Using a controlled model of focal excitotoxic lower motor neuron injury in rats, we examined the effects of acute or chronic variable stress on spinal cord motor neuron survival and glial activation. New data indicate that stress exacerbates excitotoxic spinal cord motor neuron loss and associated activation of microglia. In contrast, hypertrophy and hyperplasia of astrocytes and NG2+ glia were unaffected or were modestly suppressed by stress. Although excitotoxic lesions cause significant motor neuron loss and stress exacerbates this pathology, overt functional impairment did not develop in the relevant forelimb up to one week post-lesion. These data indicate that stress is a disease-modifying factor capable of altering neuron and glial responses to pathological challenges in the spinal cord. Copyright © 2015 Elsevier Inc. All rights reserved.

  14. Discrimination of communication vocalizations by single neurons and groups of neurons in the auditory midbrain.

    Science.gov (United States)

    Schneider, David M; Woolley, Sarah M N

    2010-06-01

    Many social animals including songbirds use communication vocalizations for individual recognition. The perception of vocalizations depends on the encoding of complex sounds by neurons in the ascending auditory system, each of which is tuned to a particular subset of acoustic features. Here, we examined how well the responses of single auditory neurons could be used to discriminate among bird songs and we compared discriminability to spectrotemporal tuning. We then used biologically realistic models of pooled neural responses to test whether the responses of groups of neurons discriminated among songs better than the responses of single neurons and whether discrimination by groups of neurons was related to spectrotemporal tuning and trial-to-trial response variability. The responses of single auditory midbrain neurons could be used to discriminate among vocalizations with a wide range of abilities, ranging from chance to 100%. The ability to discriminate among songs using single neuron responses was not correlated with spectrotemporal tuning. Pooling the responses of pairs of neurons generally led to better discrimination than the average of the two inputs and the most discriminating input. Pooling the responses of three to five single neurons continued to improve neural discrimination. The increase in discriminability was largest for groups of neurons with similar spectrotemporal tuning. Further, we found that groups of neurons with correlated spike trains achieved the largest gains in discriminability. We simulated neurons with varying levels of temporal precision and measured the discriminability of responses from single simulated neurons and groups of simulated neurons. Simulated neurons with biologically observed levels of temporal precision benefited more from pooling correlated inputs than did neurons with highly precise or imprecise spike trains. These findings suggest that pooling correlated neural responses with the levels of precision observed in the

  15. Statistical characteristics of climbing fiber spikes necessary for efficient cerebellar learning.

    Science.gov (United States)

    Kuroda, S; Yamamoto, K; Miyamoto, H; Doya, K; Kawato, M

    2001-03-01

    Mean firing rates (MFRs), with analogue values, have thus far been used as information carriers of neurons in most brain theories of learning. However, the neurons transmit the signal by spikes, which are discrete events. The climbing fibers (CFs), which are known to be essential for cerebellar motor learning, fire at the ultra-low firing rates (around 1 Hz), and it is not yet understood theoretically how high-frequency information can be conveyed and how learning of smooth and fast movements can be achieved. Here we address whether cerebellar learning can be achieved by CF spikes instead of conventional MFR in an eye movement task, such as the ocular following response (OFR), and an arm movement task. There are two major afferents into cerebellar Purkinje cells: parallel fiber (PF) and CF, and the synaptic weights between PFs and Purkinje cells have been shown to be modulated by the stimulation of both types of fiber. The modulation of the synaptic weights is regulated by the cerebellar synaptic plasticity. In this study we simulated cerebellar learning using CF signals as spikes instead of conventional MFR. To generate the spikes we used the following four spike generation models: (1) a Poisson model in which the spike interval probability follows a Poisson distribution, (2) a gamma model in which the spike interval probability follows the gamma distribution, (3) a max model in which a spike is generated when a synaptic input reaches maximum, and (4) a threshold model in which a spike is generated when the input crosses a certain small threshold. We found that, in an OFR task with a constant visual velocity, learning was successful with stochastic models, such as Poisson and gamma models, but not in the deterministic models, such as max and threshold models. In an OFR with a stepwise velocity change and an arm movement task, learning could be achieved only in the Poisson model. In addition, for efficient cerebellar learning, the distribution of CF spike
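
    Two of the four spike-generation schemes compared above are simple to sketch: the stochastic Poisson model (a spike in each bin with probability rate * dt) and the deterministic threshold model (a spike at each upward crossing of a threshold). The implementations below are illustrative assumptions, not the authors' code.

```python
import numpy as np

def poisson_spikes(rate, dt, rng):
    """Poisson model: an independent spike in each bin with probability
    rate * dt (valid when rate * dt << 1)."""
    return rng.random(len(rate)) < rate * dt

def threshold_spikes(signal, thresh):
    """Threshold model: one spike at each upward crossing of `thresh`."""
    above = signal >= thresh
    return np.concatenate(([above[0]], above[1:] & ~above[:-1]))
```

    The stochastic variants jitter spike timing around the underlying signal, which is the property the study found necessary for learning to succeed.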

  16. Collective excitability in a mesoscopic neuronal model of epileptic activity

    Science.gov (United States)

    Jedynak, Maciej; Pons, Antonio J.; Garcia-Ojalvo, Jordi

    2018-01-01

    At the mesoscopic scale, the brain can be understood as a collection of interacting neuronal oscillators, but the extent to which its sustained activity is due to coupling among brain areas is still unclear. Here we address this issue in a simplified situation by examining the effect of coupling between two cortical columns described via Jansen-Rit neural mass models. Our results show that coupling between the two neuronal populations gives rise to stochastic initiations of sustained collective activity, which can be interpreted as epileptic events. For large enough coupling strengths, termination of these events results mainly from the emergence of synchronization between the columns, and thus it is controlled by coupling instead of noise. Stochastic triggering and noise-independent durations are characteristic of excitable dynamics, and thus we interpret our results in terms of collective excitability.

  17. Group spike-and-slab lasso generalized linear models for disease prediction and associated genes detection by incorporating pathway information.

    Science.gov (United States)

    Tang, Zaixiang; Shen, Yueping; Li, Yan; Zhang, Xinyan; Wen, Jia; Qian, Chen'ao; Zhuang, Wenzhuo; Shi, Xinghua; Yi, Nengjun

    2018-03-15

    Large-scale molecular data have been increasingly used as an important resource for prognostic prediction of diseases and detection of associated genes. However, standard approaches for omics data analysis ignore the group structure among genes encoded in functional relationships or pathway information. We propose new Bayesian hierarchical generalized linear models, called group spike-and-slab lasso GLMs, for predicting disease outcomes and detecting associated genes by incorporating large-scale molecular data and group structures. The proposed model employs a mixture double-exponential prior for coefficients that induces self-adaptive amounts of shrinkage on different coefficients. The group information is incorporated into the model by setting group-specific parameters. We have developed a fast and stable deterministic algorithm to fit the proposed hierarchical GLMs, which can perform variable selection within groups. We assess the performance of the proposed method on several simulated scenarios, varying the overlap among groups, the group size, the number of non-null groups, and the within-group correlation. Compared with existing methods, the proposed method provides not only more accurate estimates of the parameters but also better prediction. We further demonstrate the application of the proposed procedure on three cancer datasets by utilizing pathway structures of genes. Our results show that the proposed method generates powerful models for predicting disease outcomes and detecting associated genes. The methods have been implemented in a freely available R package BhGLM (http://www.ssg.uab.edu/bhglm/). nyi@uab.edu. Supplementary data are available at Bioinformatics online.

  18. Electroencephalographic precursors of spike-wave discharges in a genetic rat model of absence epilepsy: Power spectrum and coherence EEG analyses

    NARCIS (Netherlands)

    Sitnikova, E.Y.; Luijtelaar, E.L.J.M. van

    2009-01-01

    Periods in the electroencephalogram (EEG) that immediately precede the onset of spontaneous spike-wave discharges (SWD) were examined in the WAG/Rij rat model of absence epilepsy. Precursors of SWD (preSWD) were classified based on the distribution of EEG power in the delta-theta-alpha frequency bands as

  19. Metastable states and quasicycles in a stochastic Wilson-Cowan model of neuronal population dynamics

    KAUST Repository

    Bressloff, Paul C.

    2010-11-03

    We analyze a stochastic model of neuronal population dynamics with intrinsic noise. In the thermodynamic limit N→∞, where N determines the size of each population, the dynamics is described by deterministic Wilson-Cowan equations. On the other hand, for finite N the dynamics is described by a master equation that determines the probability of spiking activity within each population. We first consider a single excitatory population that exhibits bistability in the deterministic limit. The steady-state probability distribution of the stochastic network has maxima at points corresponding to the stable fixed points of the deterministic network; the relative weighting of the two maxima depends on the system size. For large but finite N, we calculate the exponentially small rate of noise-induced transitions between the resulting metastable states using a Wentzel-Kramers-Brillouin (WKB) approximation and matched asymptotic expansions. We then consider a two-population excitatory-inhibitory network that supports limit cycle oscillations. Using a diffusion approximation, we reduce the dynamics to a neural Langevin equation, and show how the intrinsic noise amplifies subthreshold oscillations (quasicycles). © 2010 The American Physical Society.
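
    In the N→∞ limit described above, the dynamics reduce to the deterministic Wilson-Cowan rate equations. A minimal forward-Euler integration of one excitatory-inhibitory pair is sketched below; the logistic gain function and all parameter values are illustrative assumptions, not those of the paper.

```python
import numpy as np

def sigmoid(x):
    """Logistic gain function (an illustrative choice)."""
    return 1.0 / (1.0 + np.exp(-x))

def wilson_cowan(w_ee, w_ei, w_ie, w_ii, i_e, i_i, dt=0.01, steps=20_000, tau=1.0):
    """Forward-Euler integration of the deterministic Wilson-Cowan
    rate equations for one excitatory (E) and one inhibitory (I)
    population; returns the (E, I) trajectory."""
    e, i = 0.1, 0.1
    traj = np.empty((steps, 2))
    for k in range(steps):
        de = (-e + sigmoid(w_ee * e - w_ei * i + i_e)) / tau
        di = (-i + sigmoid(w_ie * e - w_ii * i + i_i)) / tau
        e, i = e + dt * de, i + dt * di
        traj[k] = e, i
    return traj

# Illustrative coupling parameters, not taken from the paper.
traj = wilson_cowan(w_ee=12.0, w_ei=10.0, w_ie=10.0, w_ii=2.0, i_e=-3.0, i_i=-4.0)
print(traj[-1].round(3))
```

    Because each rate is a convex Euler mix of its previous value and a sigmoid output, both populations remain confined to (0, 1), mirroring the interpretation of the Wilson-Cowan variables as fractions of active cells.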

  20. Electrical Activity in a Time-Delay Four-Variable Neuron Model under Electromagnetic Induction

    Directory of Open Access Journals (Sweden)

    Keming Tang

    2017-11-01

    Full Text Available To investigate the effect of electromagnetic induction on the electrical activity of a neuron, a variable for magnetic flux is used to extend the Hindmarsh–Rose neuron model. At the same time, because signals propagate with a time delay between neurons, and even within a single neuron, it is important to study the role of time delay in regulating the electrical activity of the neuron. To this end, a four-variable neuron model is proposed to investigate the effects of electromagnetic induction and time delay. Simulation results suggest that the proposed neuron model can show multiple modes of electrical activity, depending on the time delay and the external forcing current. This means that a suitable discharge mode can be obtained by selecting the time delay or the external forcing current, which could be helpful for further investigation of the effects of electromagnetic radiation on biological neuronal systems.
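
    A common way to add the magnetic-flux variable is to append a fourth equation for the flux φ and feed it back onto the membrane variable x through a memductance ρ(φ) = α + 3βφ². The sketch below uses this generic formulation with illustrative parameter values (not taken from the paper) and omits the time delay for simplicity.

```python
import numpy as np

def hr_magnetic(i_ext, k1=0.1, dt=0.005, steps=200_000):
    """Euler integration of a four-variable Hindmarsh-Rose neuron
    extended with a magnetic-flux variable phi. The term
    k1 * rho(phi) * x feeds the flux back onto the membrane
    variable x. All parameter values are illustrative."""
    a, b, c, d = 1.0, 3.0, 1.0, 5.0
    r, s, x0 = 0.006, 4.0, -1.6
    alpha, beta, k2, k3 = 0.4, 0.02, 1.0, 0.5
    x, y, z, phi = 0.0, 0.0, 0.0, 0.0
    xs = np.empty(steps)
    for n in range(steps):
        rho = alpha + 3.0 * beta * phi * phi   # memductance of the flux variable
        dx = y - a * x**3 + b * x**2 - z + i_ext - k1 * rho * x
        dy = c - d * x**2 - y
        dz = r * (s * (x - x0) - z)
        dphi = k2 * x - k3 * phi
        x, y, z, phi = x + dt * dx, y + dt * dy, z + dt * dz, phi + dt * dphi
        xs[n] = x
    return xs

xs = hr_magnetic(i_ext=3.0)
# count upward threshold crossings as a crude spike count
spikes = int(np.sum((xs[1:] > 1.0) & (xs[:-1] <= 1.0)))
print(spikes)
```

    Sweeping `i_ext` (or the feedback gain `k1`) over a range and recording the spike count per run is one simple way to map out the multiple discharge modes the abstract refers to.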

  1. A supervised learning rule for classification of spatiotemporal spike patterns.

    Science.gov (United States)

    Lilin Guo; Zhenzhong Wang; Adjouadi, Malek

    2016-08-01

    This study introduces a novel supervised algorithm for spiking neurons that takes into consideration the synaptic and axonal delays associated with weights. It can be utilized for both classification and association, and uses several biologically inspired properties, such as axonal and synaptic delays. The algorithm also takes into consideration spike-timing-dependent plasticity, as in the Remote Supervised Method (ReSuMe). This paper focuses on the classification aspect alone. Spiking neurons trained according to the proposed learning rule are capable of classifying different categories by the associated sequences of precisely timed spikes. Simulation results show that the proposed learning method greatly improves classification accuracy compared to the Spike Pattern Association Neuron (SPAN) and the Tempotron learning rule.

  2. Peripheral drive in Aα/β-fiber neurons is altered in a rat model of osteoarthritis: changes in following frequency and recovery from inactivation

    Directory of Open Access Journals (Sweden)

    Wu Q

    2013-03-01

    Full Text Available Qi Wu, James L Henry; Department of Psychiatry and Behavioral Neurosciences, McMaster University, Hamilton, ON, Canada. Purpose: To determine the conduction fidelity of Aα/β-fiber low-threshold mechanoreceptors in a model of osteoarthritis (OA). Methods: Four weeks after cutting the anterior cruciate ligament and removing the medial meniscus to induce the model, in vivo intracellular recordings were made in ipsilateral L4 dorsal root ganglion neurons. L4 dorsal roots were stimulated to determine the refractory interval and the maximum following frequency of the evoked action potential (AP). Neurons exhibited two types of response to paired-pulse stimulation. Results: One type of response was characterized by fractionation of the evoked AP into an initial nonmyelinated spike and a later, larger-amplitude somatic spike at shorter interstimulus intervals. The other type of response was characterized by an all-or-none AP, where the second evoked AP failed altogether at shorter interstimulus intervals. In OA versus control animals, the refractory interval measured in paired-pulse testing was shorter in all low-threshold mechanoreceptors. With train stimulation, the maximum rising rate of the nonmyelinated spike was greater in OA nonmuscle-spindle low-threshold mechanoreceptors, possibly due to changes in the fast kinetics of currents. The maximum following frequency in Pacinian and muscle spindle neurons was greater in model animals than in controls. Train stimulation also induced inactivation and fractionation of the AP in neurons that showed fractionation of the AP in paired-pulse testing. However, with train stimulation this fractionation followed a different time course, suggesting more than one type of inactivation. Conclusion: The data suggest that joint damage can lead to changes in the fidelity of AP conduction of large-diameter sensory neurons, muscle spindle neurons in particular, arising from articular and nonarticular tissues in OA animals compared to

  3. Prolonging the postcomplex spike pause speeds eyeblink conditioning.

    Science.gov (United States)

    Maiz, Jaione; Karakossian, Movses H; Pakaprot, Narawut; Robleto, Karla; Thompson, Richard F; Otis, Thomas S

    2012-10-09

    Climbing fiber input to the cerebellum is believed to serve as a teaching signal during associative, cerebellum-dependent forms of motor learning. However, it is not understood how this neural pathway coordinates changes in cerebellar circuitry during learning. Here, we use pharmacological manipulations to prolong the postcomplex spike pause, a component of the climbing fiber signal in Purkinje neurons, and show that these manipulations enhance the rate of learning in classical eyelid conditioning. Our findings elucidate an unappreciated aspect of the climbing fiber teaching signal, and are consistent with a model in which convergent postcomplex spike pauses drive learning-related plasticity in the deep cerebellar nucleus. They also suggest a physiological mechanism that could modulate motor learning rates.

  4. Binary Associative Memories as a Benchmark for Spiking Neuromorphic Hardware

    Directory of Open Access Journals (Sweden)

    Andreas Stöckel

    2017-08-01

    Full Text Available Large-scale neuromorphic hardware platforms, specialized computer systems for the energy-efficient simulation of spiking neural networks, are being developed around the world, for example as part of the European Human Brain Project (HBP). Due to conceptual differences, a universal performance analysis of these systems in terms of runtime, accuracy and energy efficiency is non-trivial, yet indispensable for further hardware and software development. In this paper we describe a scalable benchmark based on a spiking neural network implementation of the binary neural associative memory. We treat neuromorphic hardware and software simulators as black boxes and execute exactly the same network description across all devices. Experiments on the HBP platforms under varying configurations of the associative memory show that the presented method makes it possible to test the quality of the neuron model implementation and to explain significant deviations from the expected reference output.
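
    In its simplest form, the binary neural associative memory underlying such a benchmark is a Willshaw-style network: clipped Hebbian learning ORs together the outer products of stored pattern pairs, and retrieval thresholds the resulting input sums. The sketch below is a minimal non-spiking reference version, not the benchmark implementation:

```python
import numpy as np

def store(patterns_in, patterns_out):
    """Clipped Hebbian learning: OR of the outer products
    of all stored (input, output) pattern pairs."""
    w = np.zeros((patterns_out.shape[1], patterns_in.shape[1]), dtype=bool)
    for x, y in zip(patterns_in, patterns_out):
        w |= np.outer(y.astype(bool), x.astype(bool))
    return w

def recall(w, x):
    """Threshold retrieval: an output unit fires if every active
    input unit projects to it (threshold = number of active inputs)."""
    s = w @ x
    return (s >= x.sum()).astype(int)

rng = np.random.default_rng(2)

def sparse(n, k):
    """Random binary pattern over n units with k active bits."""
    p = np.zeros(n, dtype=int)
    p[rng.choice(n, k, replace=False)] = 1
    return p

# 20 sparse random pattern pairs over 100 units, 5 active bits each
xs = np.array([sparse(100, 5) for _ in range(20)])
ys = np.array([sparse(100, 5) for _ in range(20)])
w = store(xs, ys)
print(all((recall(w, x) >= y).all() for x, y in zip(xs, ys)))
```

    With clipped learning, recall is guaranteed to reproduce every stored output bit; overloading the memory only adds spurious extra bits, which is the degradation a benchmark can measure.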

  5. Natural Firing Patterns Imply Low Sensitivity of Synaptic Plasticity to Spike Timing Compared with Firing Rate.

    Science.gov (United States)

    Graupner, Michael; Wallisch, Pascal; Ostojic, Srdjan

    2016-11-02

    Synaptic plasticity is sensitive to the rate and the timing of presynaptic and postsynaptic action potentials. In experimental protocols inducing plasticity, the imposed spike trains are typically regular and the relative timing between every presynaptic and postsynaptic spike is fixed. This is at odds with firing patterns observed in the cortex of intact animals, where cells fire irregularly and the timing between presynaptic and postsynaptic spikes varies. To investigate synaptic changes elicited by in vivo-like firing, we used numerical simulations and mathematical analysis of synaptic plasticity models. We found that the influence of spike timing on plasticity is weaker than expected from regular stimulation protocols. Moreover, when neurons fire irregularly, synaptic changes induced by precise spike timing can be equivalently induced by a modest firing rate variation. Our findings bridge the gap between existing results on synaptic plasticity and plasticity occurring in vivo, and challenge the dominant role of spike timing in plasticity. Synaptic plasticity, the change in efficacy of connections between neurons, is thought to underlie learning and memory. The dominant paradigm posits that the precise timing of neural action potentials (APs) is central for plasticity induction. This concept is based on experiments using highly regular and stereotyped patterns of APs, in stark contrast with natural neuronal activity. Using synaptic plasticity models, we investigated how irregular, in vivo-like activity shapes synaptic plasticity. We found that synaptic changes induced by precise timing of APs are much weaker than suggested by regular stimulation protocols, and can be equivalently induced by modest variations of the AP rate alone. Our results call into question the dominant role of precise AP timing for plasticity in natural conditions. Copyright © 2016 Graupner et al.

  6. From in silico astrocyte cell models to neuron-astrocyte network models: A review.

    Science.gov (United States)

    Oschmann, Franziska; Berry, Hugues; Obermayer, Klaus; Lenk, Kerstin

    2018-01-01

    The idea that astrocytes may be active partners in synaptic information processing has recently emerged from abundant experimental reports. Because of their spatial proximity to neurons and their bidirectional communication with them, astrocytes are now considered as an important third element of the synapse. Astrocytes integrate and process synaptic information and by doing so generate cytosolic calcium signals that are believed to reflect neuronal transmitter release. Moreover, they regulate neuronal information transmission by releasing gliotransmitters into the synaptic cleft, affecting both pre- and postsynaptic receptors. Concurrent with the first experimental reports of the astrocytic impact on neural network dynamics, computational models describing astrocytic functions have been developed. In this review, we give an overview of the published computational models of astrocytic functions, from single-cell dynamics to the tripartite synapse level and network models of astrocytes and neurons. Copyright © 2017 Elsevier Inc. All rights reserved.

  7. Higher Order Spike Synchrony in Prefrontal Cortex during visual memory

    Directory of Open Access Journals (Sweden)

    Gordon ePipa

    2011-06-01

    Full Text Available Precise temporal synchrony of spike firing has been postulated as an important neuronal mechanism for signal integration and the induction of plasticity in neocortex. As prefrontal cortex (PFC) plays an important role in organizing memory and executive functions, the convergence of multiple visual pathways onto PFC predicts that neurons should preferentially synchronize their spiking when stimulus information is processed. Furthermore, synchronous spike firing should intensify if memory processes require the induction of neuronal plasticity, even if only for the short term. Here we show with multiple simultaneously recorded units in ventral prefrontal cortex that neurons participate in synchronous discharges, precise to 3 ms, distributed across multiple sites separated by at least 500 µm. The frequency of synchronous firing is modulated by behavioral performance and is specific for the memorized visual stimuli. In particular, during the memory period, in which activity is not stimulus driven, larger groups of up to 7 sites exhibit performance-dependent modulation of their spike synchronization.

  8. How lateral connections and spiking dynamics may separate multiple objects moving together.

    Directory of Open Access Journals (Sweden)

    Benjamin D Evans

    Full Text Available Over successive stages, the ventral visual system of the primate brain develops neurons that respond selectively to particular objects or faces with translation, size and view invariance. The powerful neural representations found in inferotemporal cortex form a remarkably rapid and robust basis for object recognition which belies the difficulties faced by the system when learning in natural visual environments. A central issue in understanding the process of biological object recognition is how these neurons learn to form separate representations of objects from complex visual scenes composed of multiple objects. We show how a one-layer competitive network comprised of 'spiking' neurons is able to learn separate transformation-invariant representations (exemplified by one-dimensional translations) of visual objects that are always seen together moving in lock-step, but separated in space. This is achieved by combining 'Mexican hat' functional lateral connectivity with cell firing-rate adaptation to temporally segment input representations of competing stimuli through anti-phase oscillations (perceptual cycles). These spiking dynamics are quickly and reliably generated, enabling selective modification of the feed-forward connections to neurons in the next layer through Spike-Time-Dependent Plasticity (STDP), resulting in separate translation-invariant representations of each stimulus. Variations in key properties of the model are investigated with respect to the network's ability to develop appropriate input representations and subsequently output representations through STDP. Contrary to earlier rate-coded models of this learning process, this work shows how spiking neural networks may learn about more than one stimulus together without suffering from the 'superposition catastrophe'. We take these results to suggest that spiking dynamics are key to understanding biological visual object recognition.

  9. Spike-timing theory of working memory.

    Directory of Open Access Journals (Sweden)

    Botond Szatmáry

    Full Text Available Working memory (WM) is the part of the brain's memory system that provides temporary storage and manipulation of information necessary for cognition. Although WM has limited capacity at any given time, it has vast memory content in the sense that it acts on the brain's nearly infinite repertoire of lifetime long-term memories. Using simulations, we show that large memory content and WM functionality emerge spontaneously if we take the spike-timing nature of neuronal processing into account. Here, memories are represented by extensively overlapping groups of neurons that exhibit stereotypical time-locked spatiotemporal spike-timing patterns, called polychronous patterns; and synapses forming such polychronous neuronal groups (PNGs) are subject to associative synaptic plasticity in the form of both long-term and short-term spike-timing dependent plasticity. While long-term potentiation is essential in PNG formation, we show how short-term plasticity can temporarily strengthen the synapses of selected PNGs and lead to an increase in the spontaneous reactivation rate of these PNGs. This increased reactivation rate, consistent with in vivo recordings during WM tasks, results in high interspike interval variability and irregular, yet systematically changing, elevated firing rate profiles within the neurons of the selected PNGs. Additionally, our theory explains the relationship between such slowly changing firing rates and precisely timed spikes, and it reveals a novel relationship between WM and the perception of time on the order of seconds.

  10. STDP and STDP Variations with Memristors for Spiking Neuromorphic Learning Systems

    Directory of Open Access Journals (Sweden)

    Teresa eSerrano-Gotarredona

    2013-02-01

    Full Text Available In this paper we review several ways of realizing asynchronous Spike-Timing Dependent Plasticity (STDP) using memristors as synapses. Our focus is on how to use individual memristors to implement synaptic weight multiplications, in a way such that it is not necessary to (a) introduce global synchronization or (b) separate memristor learning phases from memristor performing phases. In the approaches described, neurons fire spikes asynchronously when they wish, and memristive synapses perform computation and learn at their own pace, as happens in biological neural systems. We distinguish between two different memristor physics, depending on whether they respond to the original "moving wall" or to the "filament creation and annihilation" models. Independent of the memristor physics, we discuss two different types of STDP rules that can be implemented with memristors: either the pure timing-based rule that takes into account the arrival times of the spikes from the pre- and the post-synaptic neurons, or a hybrid rule that takes into account only the timing of pre-synaptic spikes and the membrane potential and other state variables of the post-synaptic neuron. We show how to implement these rules in cross-bar architectures that comprise massive arrays of memristors, and we discuss applications for artificial vision.
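
    The pure timing-based rule mentioned above is commonly modeled as a pair of exponentials in the post-minus-pre spike time difference. A minimal sketch with illustrative time constants and amplitudes (not specific to any memristor device):

```python
import numpy as np

def stdp_dw(delta_t, a_plus=0.1, a_minus=0.12, tau_plus=20.0, tau_minus=20.0):
    """Pair-based STDP weight change as a function of
    delta_t = t_post - t_pre (ms): potentiation when the
    presynaptic spike precedes the postsynaptic one,
    depression otherwise."""
    delta_t = np.asarray(delta_t, dtype=float)
    return np.where(
        delta_t >= 0.0,
        a_plus * np.exp(-delta_t / tau_plus),
        -a_minus * np.exp(delta_t / tau_minus),
    )

print(stdp_dw([10.0, -10.0]).round(4))
```

    In a memristive crossbar, the shape of this curve would emerge from the device's conductance response to overlapping pre- and post-synaptic voltage waveforms rather than from an explicit lookup of spike time differences.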

  11. Stochastic spike synchronization in a small-world neural network with spike-timing-dependent plasticity.

    Science.gov (United States)

    Kim, Sang-Yoon; Lim, Woochang

    2018-01-01

    We consider the Watts-Strogatz small-world network (SWN) consisting of subthreshold neurons which exhibit noise-induced spikings. This neuronal network has adaptive dynamic synaptic strengths governed by spike-timing-dependent plasticity (STDP). In previous works without STDP, stochastic spike synchronization (SSS) between noise-induced spikings of subthreshold neurons was found to occur over a range of intermediate noise intensities. Here, we investigate the effect of additive STDP on the SSS by varying the noise intensity. A "Matthew" effect in synaptic plasticity is found to occur, due to a positive feedback process: good synchronization gets better via long-term potentiation of synaptic strengths, while bad synchronization gets worse via long-term depression. The emergence of long-term potentiation and long-term depression of synaptic strengths is investigated in detail via microscopic studies based on the pair-correlations between the pre- and post-synaptic IISRs (instantaneous individual spike rates), as well as the distributions of time delays between the pre- and post-synaptic spike times. Furthermore, the effects of multiplicative STDP (which depends on states) on the SSS are studied and discussed in comparison with the case of additive STDP (independent of states). These effects of STDP on the SSS in the SWN are also compared with those in the regular lattice and the random graph. Copyright © 2017 Elsevier Ltd. All rights reserved.

  12. Spike solutions in the Gierer–Meinhardt model with a time-dependent anomaly exponent

    Science.gov (United States)

    Nec, Yana

    2018-01-01

    Experimental evidence of complex dispersion regimes in natural systems, where the growth of the mean square displacement in time cannot be characterised by a single power, has been accruing for the past two decades. In such processes the exponent γ(t) in ⟨r²⟩ ∼ t^γ(t) at times might be approximated by a piecewise constant function, or it can be a continuous function. Variable order differential equations are an emerging mathematical tool with a strong potential to model these systems. However, variable order differential equations are not tractable by the classic differential equations theory. This contribution illustrates how a classic method can be adapted to gain insight into a system of this type. Herein a variable order Gierer-Meinhardt model is posed, a generic reaction-diffusion system of a chemical origin. With a fixed order this system possesses a solution in the form of a constellation of arbitrarily situated localised pulses, when the components' diffusivity ratio is asymptotically small. The pattern was shown to exist subject to multiple step-like transitions between normal diffusion and sub-diffusion, as well as between distinct sub-diffusive regimes. The analytical approximation obtained permits qualitative analysis of the impact thereof. Numerical solution for typical cross-over scenarios revealed such features as earlier equilibration and non-monotonic excursions before attainment of equilibrium. The method is general and allows for an approximate numerical solution with any reasonably behaved γ(t).

  13. Spike trains in Hodgkin-Huxley model and ISIs of acupuncture manipulations

    International Nuclear Information System (INIS)

    Wang Jiang; Si Wenjie; Che Yanqiu; Fei Xiangyang

    2008-01-01

    The Hodgkin-Huxley (HH) equations are parameterized by a number of parameters and show a variety of qualitatively different behaviors depending on the parameter values. Under stimulation by an external periodic voltage, the ISIs (interspike intervals) of an HH model are investigated in this work, with the frequency of the voltage taken as the control parameter. As is well known, the science of acupuncture and moxibustion is an important component of Traditional Chinese Medicine, with a long history. Although there are a number of different acupuncture manipulations, methods for distinguishing them have rarely been investigated. Using the idea of the ISI, we study the electrical signal time series at the spinal dorsal horn produced by three different acupuncture manipulations at the Zusanli point and present an effective way to distinguish them.
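
    The ISI statistics on which such a comparison rests can be computed directly from recorded spike times. The sketch below (synthetic data, not the acupuncture recordings) contrasts a regular and an irregular train via the mean ISI and its coefficient of variation:

```python
import numpy as np

def isi_features(spike_times_ms):
    """Summarize a spike train by its interspike intervals (ISIs):
    mean ISI and coefficient of variation (CV). A CV near 0 means
    clock-like firing; a CV near 1 is Poisson-like irregularity."""
    isis = np.diff(np.sort(np.asarray(spike_times_ms, dtype=float)))
    return isis.mean(), isis.std() / isis.mean()

# A regular train (fixed 20 ms interval) versus an irregular one
# with exponentially distributed intervals of the same mean.
regular = np.arange(0.0, 1000.0, 20.0)
rng = np.random.default_rng(1)
irregular = np.cumsum(rng.exponential(20.0, size=50))

m_r, cv_r = isi_features(regular)
m_i, cv_i = isi_features(irregular)
print(round(m_r, 1), round(cv_r, 2))
```

    Comparing ISI distributions (or just their CVs) across recordings is one simple route to separating stimulation patterns, of the kind the abstract describes.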

  14. Spike trains in Hodgkin-Huxley model and ISIs of acupuncture manipulations

    Energy Technology Data Exchange (ETDEWEB)

    Wang Jiang [School of Electrical and Automation Engineering, Tianjin University, Tianjin 300072 (China)], E-mail: jiangwang@tju.edu.cn; Si Wenjie; Che Yanqiu; Fei Xiangyang [School of Electrical and Automation Engineering, Tianjin University, Tianjin 300072 (China)

    2008-05-15

    The Hodgkin-Huxley (HH) equations are parameterized by a number of parameters and show a variety of qualitatively different behaviors depending on the parameter values. Under stimulation by an external periodic voltage, the ISIs (interspike intervals) of an HH model are investigated in this work, with the frequency of the voltage taken as the control parameter. As is well known, the science of acupuncture and moxibustion is an important component of Traditional Chinese Medicine, with a long history. Although there are a number of different acupuncture manipulations, methods for distinguishing them have rarely been investigated. Using the idea of the ISI, we study the electrical signal time series at the spinal dorsal horn produced by three different acupuncture manipulations at the Zusanli point and present an effective way to distinguish them.

  15. Spike-adding in parabolic bursters: The role of folded-saddle canards

    Science.gov (United States)

    Desroches, Mathieu; Krupa, Martin; Rodrigues, Serafim

    2016-09-01

    The present work develops a new approach to studying parabolic bursting, and also proposes a novel four-dimensional canonical and polynomial-based parabolic burster. In addition to this new polynomial system, we also consider the conductance-based model of the Aplysia R15 neuron known as the Plant model, and a reduction of this prototypical biophysical parabolic burster to three variables, including one phase variable, namely the Baer-Rinzel-Carillo (BRC) phase model. Revisiting these models from the perspective of slow-fast dynamics reveals that the number of spikes per burst may vary upon parameter changes; however, the spike-adding process occurs in an explosive fashion that involves special solutions called canards. This spike-adding canard explosion phenomenon is analysed using tools from geometric singular perturbation theory in tandem with numerical bifurcation techniques. We find that the bifurcation structure persists across all considered systems: spikes within the burst are incremented via the crossing of an excitability threshold given by a particular type of canard orbit, namely the true canard of a folded-saddle singularity. However, there can be a difference in the spike-adding transitions in parameter space from one case to another, according to whether the process is continuous or discontinuous, which depends upon the geometry of the folded-saddle canard. Using these findings, we construct a new polynomial approximation of the Plant model, which retains all the key elements for parabolic bursting, including the spike-adding transitions mediated by folded-saddle canards. Finally, we briefly investigate the presence of spike-adding via canards in planar phase models of parabolic bursting, namely the theta model by Ermentrout and Kopell.

  16. Enhancement of information transmission with stochastic resonance in hippocampal CA1 neuron models: effects of noise input location.

    Science.gov (United States)

    Kawaguchi, Minato; Mino, Hiroyuki; Durand, Dominique M

    2007-01-01

    Stochastic resonance (SR) has been shown to enhance the signal-to-noise ratio or the detection of signals in neurons. It is not yet clear how this effect of SR on the signal-to-noise ratio affects signal processing in neural networks. In this paper, we investigate the effects of the location of background noise input on information transmission in a hippocampal CA1 neuron model. In the computer simulation, random subthreshold spike trains (the signal) generated by a filtered homogeneous Poisson process were presented repeatedly to the middle point of the main apical branch, while homogeneous Poisson shot noise (the background noise) was applied to a location on the dendrite of the hippocampal CA1 model, which consists of a soma with a sodium, a calcium, and five potassium channels. The location of the background noise input was varied along the dendrites to investigate its effect on information transmission. The computer simulation results show that the information rate reached a maximum value at an optimal amplitude of the background noise. It is also shown that this optimal amplitude is independent of the distance between the soma and the noise input location. The results also show that the location of the background noise input does not significantly affect the maximum values of the information rates generated by stochastic resonance.
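
    The underlying SR effect, in which an intermediate noise amplitude maximizes signal transmission through a subthreshold nonlinearity, can be illustrated with a bare threshold unit. This is a toy model, not the CA1 simulation; the signal-output correlation here stands in for the information rate:

```python
import numpy as np

def threshold_detector(signal, sigma, rng, thr=1.0):
    """Nonlinear threshold unit: emits 1 whenever signal + noise
    crosses thr. The signal alone is subthreshold (its peak < thr)."""
    noise = rng.normal(0.0, sigma, size=signal.size)
    return (signal + noise > thr).astype(float)

def signal_output_corr(signal, sigma, rng):
    """Correlation between the input signal and the detector output,
    a crude stand-in for the transmitted information."""
    out = threshold_detector(signal, sigma, rng)
    if out.std() == 0.0:            # no crossings at all: nothing transmitted
        return 0.0
    return float(np.corrcoef(signal, out)[0, 1])

rng = np.random.default_rng(3)
t = np.linspace(0.0, 20.0, 20_000)
signal = 0.8 * np.sin(2 * np.pi * t)            # subthreshold input
c_low = signal_output_corr(signal, 0.01, rng)   # too little noise
c_mid = signal_output_corr(signal, 0.30, rng)   # intermediate noise
c_high = signal_output_corr(signal, 3.00, rng)  # too much noise
print(round(c_low, 2), round(c_mid, 2), round(c_high, 2))
```

    With too little noise the threshold is never crossed and nothing is transmitted; with too much noise the output is dominated by noise; an intermediate amplitude transmits the most, which is the non-monotonic signature of SR.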

  17. Biophysics Model of Heavy-Ion Degradation of Neuron Morphology in Mouse Hippocampal Granular Cell Layer Neurons.

    Science.gov (United States)

    Alp, Murat; Cucinotta, Francis A

    2018-03-01

    Exposure to heavy-ion radiation during cancer treatment or space travel may cause cognitive detriments that have been associated with changes in neuron morphology and plasticity. Observations in mice of reduced neuronal dendritic complexity have revealed a dependence on radiation quality and absorbed dose, suggesting that microscopic energy deposition plays an important role. In this work we used morphological data for mouse dentate granular cell layer (GCL) neurons and a stochastic model of particle track structure and microscopic energy deposition (ED) to develop a predictive model of high-charge and energy (HZE) particle-induced morphological changes to the complex structures of dendritic arbors. We represented dendrites as cylindrical segments of varying diameter with unit aspect ratios, and developed a fast sampling method to consider the stochastic distribution of ED by δ rays (secondary electrons) around the path of heavy ions, to reduce computational times. We introduce probabilistic models with a small number of parameters to describe the induction of precursor lesions that precede dendritic snipping, denoted as snip sites. Predictions for oxygen (16O, 600 MeV/n) and titanium (48Ti, 600 MeV/n) particles with LETs of 16.3 and 129 keV/μm, respectively, are considered. Morphometric parameters to quantify changes in neuron morphology are described, including reduction in total dendritic length, number of branch points and branch numbers. Sholl analysis is applied for single neurons to elucidate dose-dependent reductions in dendritic complexity. We predict important differences in measurements from imaging of tissues from brain slices with single neuron cell observations due to the role of neuron death through both soma apoptosis and excessive dendritic length reduction. To further elucidate the role of track structure, random segment excision (snips) models are introduced and a sensitivity study of the effects of the modes of neuron death in predictions

  18. Structured chaos shapes spike-response noise entropy in balanced neural networks

    Directory of Open Access Journals (Sweden)

    Guillaume eLajoie

    2014-10-01

    Full Text Available Large networks of sparsely coupled, excitatory and inhibitory cells occur throughout the brain. For many models of these networks, a striking feature is that their dynamics are chaotic and thus, are sensitive to small perturbations. How does this chaos manifest in the neural code? Specifically, how variable are the spike patterns that such a network produces in response to an input signal? To answer this, we derive a bound for a general measure of variability -- spike-train entropy. This leads to important insights on the variability of multi-cell spike pattern distributions in large recurrent networks of spiking neurons responding to fluctuating inputs. The analysis is based on results from random dynamical systems theory and is complemented by detailed numerical simulations. We find that the spike pattern entropy is an order of magnitude lower than what would be extrapolated from single cells. This holds despite the fact that network coupling becomes vanishingly sparse as network size grows -- a phenomenon that depends on "extensive chaos," as previously discovered for balanced networks without stimulus drive. Moreover, we show how spike pattern entropy is controlled by temporal features of the inputs. Our findings provide insight into how neural networks may encode stimuli in the presence of inherently chaotic dynamics.
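    The spike-pattern entropy bounded in this work can be illustrated with a naive plug-in estimator on binarized spike trains. This is a sketch only (the paper derives analytical bounds rather than using such an estimator), with hypothetical Bernoulli data:

    ```python
    import numpy as np

    def spike_word_entropy(spikes, word_len):
        """Naive plug-in entropy estimate (in bits) over non-overlapping
        binary spike words of length word_len."""
        n = (len(spikes) // word_len) * word_len
        words = np.asarray(spikes[:n]).reshape(-1, word_len)
        _, counts = np.unique(words, axis=0, return_counts=True)
        p = counts / counts.sum()
        return float(-np.sum(p * np.log2(p)))

    # Independent Bernoulli spiking at rate 0.2 per bin (hypothetical data).
    rng = np.random.default_rng(1)
    train = (rng.random(10000) < 0.2).astype(int)
    h1 = spike_word_entropy(train, word_len=1)  # approaches H(0.2) ~ 0.72 bits
    ```

    For correlated multi-cell spike words, estimates from such a plug-in approach sit below the independent-cell extrapolation, which is the qualitative effect the paper quantifies.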

  19. Neuronal Synchronization of Electrical Activity, Using the Hodgkin-Huxley Model and RCLSJ Circuit

    OpenAIRE

    Diaz M, Jose A; Téquita, Oscar; Naranjo, Fernando

    2016-01-01

    We simulated neuronal electrical activity using the Hodgkin-Huxley (HH) model and a superconducting circuit containing Josephson junctions. The HH model makes it possible to simulate the main characteristics of neuronal dynamics, such as action potentials, the firing threshold, and the refractory period. The purpose of the manuscript is to show a method for synchronizing an RCL-shunted Josephson junction (RCLSJ) to the neuronal dynamics represented by the HH model. Thus the RCLSJ circuit is able to mimic the behavior of the...
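    The HH dynamics referenced here can be reproduced with a forward-Euler integration of the standard 1952 squid-axon parameters. This is a textbook sketch, not the authors' RCLSJ coupling; the injected current and duration are illustrative assumptions:

    ```python
    import numpy as np

    # Standard Hodgkin-Huxley parameters (squid axon, Hodgkin & Huxley 1952)
    C, gNa, gK, gL = 1.0, 120.0, 36.0, 0.3     # uF/cm^2 and mS/cm^2
    ENa, EK, EL = 50.0, -77.0, -54.387         # reversal potentials, mV

    def alpha_n(V): return 0.01 * (V + 55) / (1 - np.exp(-(V + 55) / 10))
    def beta_n(V):  return 0.125 * np.exp(-(V + 65) / 80)
    def alpha_m(V): return 0.1 * (V + 40) / (1 - np.exp(-(V + 40) / 10))
    def beta_m(V):  return 4.0 * np.exp(-(V + 65) / 18)
    def alpha_h(V): return 0.07 * np.exp(-(V + 65) / 20)
    def beta_h(V):  return 1.0 / (1 + np.exp(-(V + 35) / 10))

    def simulate(I_ext=10.0, T=50.0, dt=0.01):
        """Forward-Euler integration of the HH equations; returns V(t) in mV."""
        V, m, h, n = -65.0, 0.05, 0.6, 0.32    # resting state
        trace = []
        for _ in range(int(T / dt)):
            INa = gNa * m**3 * h * (V - ENa)
            IK = gK * n**4 * (V - EK)
            IL = gL * (V - EL)
            V += dt * (I_ext - INa - IK - IL) / C
            m += dt * (alpha_m(V) * (1 - m) - beta_m(V) * m)
            h += dt * (alpha_h(V) * (1 - h) - beta_h(V) * h)
            n += dt * (alpha_n(V) * (1 - n) - beta_n(V) * n)
            trace.append(V)
        return np.array(trace)

    v = simulate()
    spikes = int(np.sum((v[1:] >= 0) & (v[:-1] < 0)))  # upward 0 mV crossings
    ```

    With a suprathreshold current such as 10 µA/cm² the model fires repetitively, exhibiting the action potentials, threshold, and refractory period the abstract mentions.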

  20. Learning intrinsic excitability in medium spiny neurons.

    Science.gov (United States)

    Scheler, Gabriele

    2013-01-01

    We present an unsupervised, local activation-dependent learning rule for intrinsic plasticity (IP) which affects the composition of ion channel conductances for single neurons in a use-dependent way. We use a single-compartment conductance-based model for medium spiny striatal neurons in order to show the effects of parameterization of individual ion channels on the neuronal membrane potential-current relationship (activation function). We show that parameter changes within the physiological ranges are sufficient to create an ensemble of neurons with significantly different activation functions. We emphasize that the effects of intrinsic neuronal modulation on spiking behavior require a distributed mode of synaptic input and can be eliminated by strongly correlated input. We show how modulation and adaptivity in ion channel conductances can be utilized to store patterns without an additional contribution by synaptic plasticity (SP). The adaptation of the spike response may result in either "positive" or "negative" pattern learning. However, read-out of stored information depends on a distributed pattern of synaptic activity to let intrinsic modulation determine spike response. We briefly discuss the implications of this conditional memory on learning and addiction.
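    How a change in a single conductance parameter shifts a neuron's activation function can be illustrated in a far simpler setting than the paper's conductance-based MSN model: the analytic f-I curve of a leaky integrate-and-fire neuron whose leak conductance is varied (all parameter values below are illustrative assumptions):

    ```python
    import numpy as np

    def lif_rate(I, g_leak, C=1.0, E_L=-70.0, V_th=-50.0, V_reset=-70.0):
        """Analytic firing rate (spikes/ms) of a leaky integrate-and-fire
        neuron with leak conductance g_leak under constant current I."""
        tau = C / g_leak                 # membrane time constant
        V_inf = E_L + I / g_leak         # steady-state voltage
        if V_inf <= V_th:
            return 0.0                   # subthreshold: no spiking
        isi = tau * np.log((V_inf - V_reset) / (V_inf - V_th))
        return 1.0 / isi

    # Down-regulating the leak conductance shifts the f-I (activation)
    # curve: the same input current now evokes a higher firing rate.
    high_leak = lif_rate(3.0, g_leak=0.10)
    low_leak = lif_rate(3.0, g_leak=0.08)
    ```

    In the paper's richer model, many channel conductances are adapted jointly, but the principle is the same: intrinsic parameters reshape the input-output curve without any synaptic change.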

  1. Neurons compute internal models of the physical laws of motion.

    Science.gov (United States)

    Angelaki, Dora E; Shaikh, Aasef G; Green, Andrea M; Dickman, J David

    2004-07-29

    A critical step in self-motion perception and spatial awareness is the integration of motion cues from multiple sensory organs that individually do not provide an accurate representation of the physical world. One of the best-studied sensory ambiguities is found in visual processing, and arises because of the inherent uncertainty in detecting the motion direction of an untextured contour moving within a small aperture. A similar sensory ambiguity arises in identifying the actual motion associated with linear accelerations sensed by the otolith organs in the inner ear. These internal linear accelerometers respond identically during translational motion (for example, running forward) and gravitational accelerations experienced as we reorient the head relative to gravity (that is, head tilt). Using new stimulus combinations, we identify here cerebellar and brainstem motion-sensitive neurons that compute a solution to the inertial motion detection problem. We show that the firing rates of these populations of neurons reflect the computations necessary to construct an internal model representation of the physical equations of motion.

  2. Joint statistics of strongly correlated neurons via dimensionality reduction

    Science.gov (United States)

    Deniz, Taşkın; Rotter, Stefan

    2017-06-01

    The relative timing of action potentials in neurons recorded from local cortical networks often shows a non-trivial dependence, which is then quantified by cross-correlation functions. Theoretical models emphasize that such spike train correlations are an inevitable consequence of two neurons being part of the same network and sharing some synaptic input. For non-linear neuron models, however, explicit correlation functions are difficult to compute analytically, and perturbative methods work only for weak shared input. In order to treat strong correlations, we suggest here an alternative non-perturbative method. Specifically, we study the case of two leaky integrate-and-fire neurons with strong shared input. Correlation functions derived from simulated spike trains fit our theoretical predictions very accurately. Using our method, we computed the non-linear correlation transfer as well as correlation functions that are asymmetric due to inhomogeneous intrinsic parameters or unequal input.
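    The setup studied here, two leaky integrate-and-fire neurons receiving partially shared input, can be simulated directly. The sketch below (assumed dimensionless parameters, Euler-Maruyama integration) estimates the resulting spike-count correlation rather than the full correlation functions the paper computes non-perturbatively:

    ```python
    import numpy as np

    def lif_pair(c=0.8, n_steps=200_000, dt=0.1, tau=20.0, mu=1.1,
                 sigma=0.5, V_th=1.0, V_reset=0.0, seed=2):
        """Two leaky integrate-and-fire neurons sharing a fraction c of
        their input noise (dimensionless voltage, Euler-Maruyama)."""
        rng = np.random.default_rng(seed)
        common = rng.normal(size=n_steps)
        private = rng.normal(size=(n_steps, 2))
        xi = np.sqrt(c) * common[:, None] + np.sqrt(1.0 - c) * private
        V = np.zeros(2)
        spikes = np.zeros((n_steps, 2), dtype=bool)
        for t in range(n_steps):
            V += dt * (mu - V) / tau + sigma * np.sqrt(dt / tau) * xi[t]
            fired = V >= V_th
            V[fired] = V_reset             # threshold crossing and reset
            spikes[t] = fired
        return spikes

    s = lif_pair()
    # Spike-count correlation over 50 ms windows (500 steps of dt = 0.1 ms).
    counts = s.reshape(-1, 500, 2).sum(axis=1)
    rho = np.corrcoef(counts.T)[0, 1]
    ```

    With strongly shared input the output correlation is substantial but below the input correlation c, the non-linear correlation transfer that the paper characterizes analytically.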

  3. Dynamical analysis of Parkinsonian state emulated by hybrid Izhikevich neuron models

    Science.gov (United States)

    Liu, Chen; Wang, Jiang; Yu, Haitao; Deng, Bin; Wei, Xile; Li, Huiyan; Loparo, Kenneth A.; Fietkiewicz, Chris

    2015-11-01

    Computational models play a significant role in exploring novel theories to complement the findings of physiological experiments. Various computational models have been developed to reveal the mechanisms underlying brain functions. Particularly, in the development of therapies to modulate behavioral and pathological abnormalities, computational models provide the basic foundations to exhibit transitions between physiological and pathological con
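    The Izhikevich model underlying such hybrid simulations is compact enough to state in full. Below is the standard 2003 two-variable formulation with textbook regular-spiking parameters; the time step and input current are illustrative assumptions, not the configuration used in this study:

    ```python
    def izhikevich(a, b, c, d, I=10.0, T=500.0, dt=0.25):
        """Izhikevich (2003) two-variable neuron, forward Euler. The
        parameters (a, b, c, d) select the firing regime; returns spike
        times in ms over a simulation of length T ms."""
        v, u = -65.0, b * (-65.0)          # resting membrane and recovery
        spike_times = []
        for step in range(int(T / dt)):
            if v >= 30.0:                  # spike peak: record and reset
                spike_times.append(step * dt)
                v, u = c, u + d
            v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
            u += dt * a * (b * v - u)
        return spike_times

    # (0.02, 0.2, -65, 8) gives tonic regular spiking under constant drive.
    rs_spikes = izhikevich(0.02, 0.2, -65.0, 8.0)
    ```

    Other (a, b, c, d) choices yield bursting or fast-spiking regimes, which is what makes the model useful for emulating transitions between physiological and pathological network states.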