Fitting Neuron Models to Spike Trains
Rossant, Cyrille; Goodman, Dan F. M.; Fontaine, Bertrand; Platkiewicz, Jonathan; Magnusson, Anna K.; Brette, Romain
2011-01-01
Computational modeling is increasingly used to understand the function of neural circuits in systems neuroscience. These studies require models of individual neurons with realistic input–output properties. Recently, it was found that spiking models can accurately predict the precisely timed spike trains produced by cortical neurons in response to somatically injected currents, if properly fitted. This requires fitting techniques that are efficient and flexible enough to easily test different candidate models. We present a generic solution, based on the Brian simulator (a neural network simulator in Python), which allows the user to define and fit arbitrary neuron models to electrophysiological recordings. It relies on vectorization and parallel computing techniques to achieve efficiency. We demonstrate its use on neural recordings in the barrel cortex and in the auditory brainstem, and confirm that simple adaptive spiking models can accurately predict the response of cortical neurons. Finally, we show how a complex multicompartmental model can be reduced to a simple effective spiking model. PMID:21415925
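The fitting idea in this abstract can be illustrated with a toy example: simulate a leaky integrate-and-fire neuron and search for the parameter value whose spike train best matches a "recorded" one. This is a minimal sketch of the approach, not the Brian fitting toolbox API; the grid search and the spike-count error metric are simplifications chosen for illustration.

```python
def simulate_lif(current, dt=0.1, tau=10.0, v_rest=0.0, v_reset=0.0, v_thresh=1.0):
    """Forward-Euler leaky integrate-and-fire; returns spike times (ms)."""
    v, spikes = v_rest, []
    for i, I in enumerate(current):
        v += dt * (-(v - v_rest) + I) / tau
        if v >= v_thresh:
            spikes.append(i * dt)
            v = v_reset
    return spikes

def fit_threshold(current, target_spikes, candidates, dt=0.1):
    """Grid search: pick the threshold whose spike count best matches the target."""
    best, best_err = None, float("inf")
    for th in candidates:
        spikes = simulate_lif(current, dt=dt, v_thresh=th)
        err = abs(len(spikes) - len(target_spikes))
        if err < best_err:
            best, best_err = th, err
    return best

# toy usage: a constant suprathreshold current as the "injected" input
current = [1.5] * 2000                        # 200 ms of input
target = simulate_lif(current, v_thresh=1.2)  # stands in for a recorded spike train
print(fit_threshold(current, target, [0.8, 1.0, 1.2, 1.4]))
```

Real fitting procedures optimize several parameters jointly and score spike-time coincidences rather than counts, but the structure (simulate, compare, update parameters) is the same.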
Nonsmooth dynamics in spiking neuron models
Coombes, S.; Thul, R.; Wedgwood, K. C. A.
2012-11-01
Large scale studies of spiking neural networks are a key part of modern approaches to understanding the dynamics of biological neural tissue. One approach in computational neuroscience has been to consider the detailed electrophysiological properties of neurons and build vast computational compartmental models. An alternative has been to develop minimal models of spiking neurons with a reduction in the dimensionality of both parameter and variable space that facilitates more effective simulation studies. In this latter case the single neuron model of choice is often a variant of the classic integrate-and-fire model, which is described by a nonsmooth dynamical system. In this paper we review some of the more popular spiking models of this class and describe the types of spiking pattern that they can generate (ranging from tonic to burst firing). We show that a number of techniques originally developed for the study of impact oscillators are directly relevant to their analysis, particularly those for treating grazing bifurcations. Importantly we highlight one particular single neuron model, capable of generating realistic spike trains, that is both computationally cheap and analytically tractable. This is a planar nonlinear integrate-and-fire model with a piecewise linear vector field and a state dependent reset upon spiking. We call this the PWL-IF model and analyse it at both the single neuron and network level. The techniques and terminology of nonsmooth dynamical systems are used to flesh out the bifurcation structure of the single neuron model, as well as to develop the notion of Lyapunov exponents. We also show how to construct the phase response curve for this system, emphasising that techniques in mathematical neuroscience may also translate back to the field of nonsmooth dynamical systems. The stability of periodic spiking orbits is assessed using a linear stability analysis of spiking times. At the network level we consider linear coupling between voltage
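A minimal sketch of a planar integrate-and-fire model with a piecewise-linear vector field and a state-dependent reset, in the spirit of the PWL-IF model described above. The branch point, drive, and reset parameters below are illustrative choices, not the paper's values.

```python
def pwl_if_step(v, w, dt=0.01, a=0.2, I=1.2, v_th=1.0, v_reset=0.0, w_jump=0.3):
    """One Euler step of a toy planar piecewise-linear IF model:
    a leak branch below v = 0.5, a regenerative branch above, and a
    state-dependent reset (v is reset, w receives a jump) on spiking."""
    f = -v if v < 0.5 else v - 1.0        # piecewise-linear fast current
    v_new = v + dt * (f - w + I)          # voltage: fast variable plus drive
    w_new = w + dt * a * (v - w)          # slow recovery variable
    if v_new >= v_th:                     # spike event (nonsmooth reset map)
        return v_reset, w_new + w_jump, True
    return v_new, w_new, False

# tonic spiking under constant drive
v, w, n_spikes = 0.0, 0.0, 0
for _ in range(200000):                   # 2000 time units
    v, w, spiked = pwl_if_step(v, w)
    n_spikes += spiked
print(n_spikes)
```

The nonsmoothness lives in two places: the switch in the vector field at v = 0.5 and the discontinuous reset map, which is exactly the structure the impact-oscillator techniques mentioned above are designed to handle.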
A Neuron Model for FPGA Spiking Neuronal Network Implementation
Directory of Open Access Journals (Sweden)
BONTEANU, G.
2011-11-01
We propose a neuron model, able to reproduce the basic elements of neuronal dynamics, optimized for digital implementation of Spiking Neural Networks. Its architecture is structured in two major blocks, a datapath and a control unit. The datapath consists of a membrane potential circuit, which emulates the neuronal dynamics at the soma level, and a synaptic circuit used to update the synaptic weight according to the spike-timing-dependent plasticity (STDP) mechanism. The proposed model is implemented in an Altera Cyclone II FPGA device. Our results indicate the neuron model can be used to build 1K-neuron Spiking Neural Networks on reconfigurable logic support, to explore various network topologies.
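The STDP mechanism mentioned above is commonly implemented as a pair-based rule with exponential windows; a sketch of that standard rule follows (the amplitudes and time constants are generic textbook values, not those of the FPGA design):

```python
import math

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Pair-based STDP weight change for a spike pair with dt = t_post - t_pre (ms).
    Pre-before-post (dt >= 0) potentiates; post-before-pre (dt < 0) depresses."""
    if dt >= 0:
        return a_plus * math.exp(-dt / tau_plus)
    return -a_minus * math.exp(dt / tau_minus)

print(stdp_dw(10.0), stdp_dw(-10.0))
```

In hardware, the exponentials are usually replaced by decaying trace variables updated once per clock cycle, which is cheaper than evaluating `exp` per spike pair.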
Automatic fitting of spiking neuron models to electrophysiological recordings
Directory of Open Access Journals (Sweden)
Cyrille Rossant
2010-03-01
Spiking models can accurately predict the spike trains produced by cortical neurons in response to somatically injected currents. Since the specific characteristics of the model depend on the neuron, a computational method is required to fit models to electrophysiological recordings. The fitting procedure can be very time consuming, both in terms of computer simulations and in terms of code writing. We present algorithms to fit spiking models to electrophysiological data (time-varying input and spike trains) that can run in parallel on graphics processing units (GPUs). The model fitting library is interfaced with Brian, a neural network simulator in Python. If a GPU is present, it uses just-in-time compilation to translate model equations into optimized code. Arbitrary models can then be defined at script level and run on the graphics card. This tool can be used to obtain empirically validated spiking models of neurons in various systems. We demonstrate its use on public data from the INCF Quantitative Single-Neuron Modeling 2009 competition by comparing the performance of a number of neuron spiking models.
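Fitting quality in this setting is typically scored by spike coincidences between the model and the data. The sketch below computes a simplified coincidence fraction; the competition's gamma factor additionally corrects for chance-level coincidences and normalizes by firing rates, which is omitted here.

```python
def coincidence_fraction(model_spikes, data_spikes, window=2.0):
    """Fraction of recorded spikes matched by a distinct model spike
    within +/- window ms. Simplified score; the full gamma factor also
    subtracts the expected number of chance coincidences."""
    matched, used = 0, set()
    for t in data_spikes:
        for i, s in enumerate(model_spikes):
            if i not in used and abs(s - t) <= window:
                matched += 1
                used.add(i)          # each model spike matches at most once
                break
    return matched / len(data_spikes) if data_spikes else 0.0

# two of three recorded spikes fall within 2 ms of a model spike
print(coincidence_fraction([10.0, 30.5, 50.2], [10.5, 31.0, 80.0]))
```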
Stochastic models for spike trains of single neurons
Sampath, G
1977-01-01
1 Some basic neurophysiology; 1.1 The neuron; 1.1.1 The axon; 1.1.2 The synapse; 1.1.3 The soma; 1.1.4 The dendrites; 1.2 Types of neurons; 2 Signals in the nervous system; 2.1 Action potentials as point events - point processes in the nervous system; 2.2 Spontaneous activity in neurons; 3 Stochastic modelling of single neuron spike trains; 3.1 Characteristics of a neuron spike train; 3.2 The mathematical neuron; 4 Superposition models; 4.1 Superposition of renewal processes; 4.2 Superposition of stationary point processes - limiting behaviour; 4.2.1 Palm functions; 4.2.2 Asymptotic behaviour of n stationary point processes superposed; 4.3 Superposition models of neuron spike trains; 4.3.1 Model 4.1; 4.3.2 Model 4.2 - A superposition model with two input channels; 4.3.3 Model 4.3; 4.4 Discussion; 5 Deletion models; 5.1 Deletion models with independent interaction of excitatory and inhibitory sequences; 5.1.1 Model 5.1 The basic de...
From spiking neuron models to linear-nonlinear models.
Directory of Open Access Journals (Sweden)
Srdjan Ostojic
Neurons transform time-varying inputs into action potentials emitted stochastically at a time-dependent rate. The mapping from current input to output firing rate is often represented with the help of phenomenological models such as the linear-nonlinear (LN) cascade, in which the output firing rate is estimated by applying to the input successively a linear temporal filter and a static nonlinear transformation. These simplified models leave out the biophysical details of action potential generation. It is not a priori clear to what extent the input-output mapping of biophysically more realistic, spiking neuron models can be reduced to a simple linear-nonlinear cascade. Here we investigate this question for the leaky integrate-and-fire (LIF), exponential integrate-and-fire (EIF) and conductance-based Wang-Buzsáki models in the presence of background synaptic activity. We exploit available analytic results for these models to determine the corresponding linear filter and static nonlinearity in a parameter-free form. We show that the obtained functions are identical to the linear filter and static nonlinearity determined using standard reverse correlation analysis. We then quantitatively compare the output of the corresponding linear-nonlinear cascade with numerical simulations of spiking neurons, systematically varying the parameters of the input signal and background noise. We find that the LN cascade provides accurate estimates of the firing rates of spiking neurons in most of the parameter space. For the EIF and Wang-Buzsáki models, we show that the LN cascade can be reduced to a firing rate model, the timescale of which we determine analytically. Finally, we introduce an adaptive timescale rate model in which the timescale of the linear filter depends on the instantaneous firing rate. This model leads to highly accurate estimates of instantaneous firing rates.
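The LN cascade itself is easy to state in code: filter the input linearly, then pass the result through a static nonlinearity to get an instantaneous rate. The exponential filter and rectified-linear nonlinearity below are illustrative shapes, not the parameter-free functions derived in the paper.

```python
import math

def ln_rate(signal, dt=1.0, tau=10.0, gain=50.0, thresh=0.5):
    """Linear-nonlinear cascade: an exponential-decay linear filter
    followed by a rectifying static nonlinearity (illustrative shapes)."""
    rates, filtered = [], 0.0
    decay = math.exp(-dt / tau)
    for x in signal:
        filtered = decay * filtered + (1 - decay) * x        # linear stage
        rates.append(gain * max(0.0, filtered - thresh))     # static nonlinearity
    return rates

# constant input: the filter output settles at 1.0, so the rate settles at 25
print(round(ln_rate([1.0] * 1000)[-1], 3))
```

Reverse correlation recovers exactly these two stages from input-output data, which is why the paper's parameter-free derivation can be checked against it.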
Iterative learning control algorithm for spiking behavior of neuron model
Li, Shunan; Li, Donghui; Wang, Jiang; Yu, Haitao
2016-11-01
Controlling neurons to generate a desired or normal spiking behavior is the fundamental building block of the treatment of many neurologic diseases. The objective of this work is to develop a novel control method, a closed-loop proportional-integral (PI)-type iterative learning control (ILC) algorithm, to control the spiking behavior in model neurons. In order to verify the feasibility and effectiveness of the proposed method, two single-compartment standard models of different neuronal excitability are specifically considered: the Hodgkin-Huxley (HH) model for class 1 neural excitability and the Morris-Lecar (ML) model for class 2 neural excitability. ILC has remarkable advantages for processes that are repetitive in nature. To further highlight the superiority of the proposed method, the performance of the iterative learning controller is compared to that of a classical PI controller. In both the classical PI control and the PI control combined with ILC, appropriate background noise is added to the neuron models to approach the problem under more realistic biophysical conditions. Simulation results show that controller performance is more favorable when ILC is considered, regardless of the neuron's excitability class and of the firing pattern of the desired trajectory. The error between the real and desired output is much smaller under the ILC control signal, which suggests that ILC of a neuron's spiking behavior is more accurate.
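The PI-type ILC update can be sketched as follows: after each repetition of the trial, the control signal is corrected by a proportional and an integral term of that trial's tracking error. The static toy plant below stands in for the neuron model, and the gains are illustrative.

```python
def ilc_pi_update(u, e, kp=0.5, ki=0.1):
    """PI-type ILC: u_{k+1}(t) = u_k(t) + kp*e_k(t) + ki*sum_{s<=t} e_k(s)."""
    u_next, acc = [], 0.0
    for u_t, e_t in zip(u, e):
        acc += e_t                       # running integral of this trial's error
        u_next.append(u_t + kp * e_t + ki * acc)
    return u_next

# toy plant: static gain of 2; reference: unit step over a 5-sample trial
plant = lambda u: [2.0 * u_t for u_t in u]
ref = [1.0] * 5
u = [0.0] * 5
for _ in range(30):                      # repeat the same finite-duration trial
    e = [r - y for r, y in zip(ref, plant(u))]
    u = ilc_pi_update(u, e)
print(max(abs(e_t) for e_t in e))        # tracking error after learning
```

In the paper the plant is an HH or ML neuron simulated over one firing period, but the learning loop has this same trial-by-trial structure.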
Spiking neuron model for temporal sequence recognition.
Byrnes, Sean; Burkitt, Anthony N; Grayden, David B; Meffin, Hamish
2010-01-01
A biologically inspired neuronal network that stores and recognizes temporal sequences of symbols is described. Each symbol is represented by excitatory input to distinct groups of neurons (symbol pools). Unambiguous storage of multiple sequences with common subsequences is ensured by partitioning each symbol pool into subpools that respond only when the current symbol has been preceded by a particular sequence of symbols. We describe synaptic structure and neural dynamics that permit the selective activation of subpools by the correct sequence. Symbols may have varying durations of the order of hundreds of milliseconds. Physiologically plausible plasticity mechanisms operate on a time scale of tens of milliseconds; an interaction of the excitatory input with periodic global inhibition bridges this gap so that neural events representing successive symbols occur on this much faster timescale. The network is shown to store multiple overlapping sequences of events. It is robust to variation in symbol duration, it is scalable, and its performance degrades gracefully with perturbation of its parameters.
SIMPEL: Circuit model for photonic spike processing laser neurons
Shastri, Bhavin J; Tait, Alexander N; Wu, Ben; Prucnal, Paul R
2014-01-01
We propose an equivalent circuit model for photonic spike processing laser neurons with an embedded saturable absorber: a simulation model for photonic excitable lasers (SIMPEL). We show that by mapping the laser neuron rate equations into a circuit model, SPICE analysis can be used as an efficient and accurate engine for numerical calculations, capable of generalization to a variety of different laser neuron types found in the literature. The development of this model parallels the Hodgkin-Huxley model of neuron biophysics, a circuit framework which brought efficiency, modularity, and generalizability to the study of neural dynamics. We employ the model to study various signal-processing effects such as excitability with excitatory and inhibitory pulses, binary all-or-nothing response, and bistable dynamics.
Avalanches in a stochastic model of spiking neurons.
Benayoun, Marc; Cowan, Jack D; van Drongelen, Wim; Wallace, Edward
2010-07-08
Neuronal avalanches are a form of spontaneous activity widely observed in cortical slices and other types of nervous tissue, both in vivo and in vitro. They are characterized by irregular, isolated population bursts when many neurons fire together, where the number of spikes per burst obeys a power law distribution. We simulate, using the Gillespie algorithm, a model of neuronal avalanches based on stochastic single neurons. The network consists of excitatory and inhibitory neurons, first with all-to-all connectivity and later with random sparse connectivity. Analyzing our model using the system size expansion, we show that the model obeys the standard Wilson-Cowan equations for large network sizes. When excitation and inhibition are closely balanced, networks of thousands of neurons exhibit irregular synchronous activity, including the characteristic power law distribution of avalanche size. We show that these avalanches are due to the balanced network having weakly stable functionally feedforward dynamics, which amplifies some small fluctuations into the large population bursts. Balanced networks are thought to underlie a variety of observed network behaviours and have useful computational properties, such as responding quickly to changes in input. Thus, the appearance of avalanches in such functionally feedforward networks indicates that avalanches may be a simple consequence of a widely present network structure, when neuron dynamics are noisy. An important implication is that a network need not be "critical" for the production of avalanches, so experimentally observed power laws in burst size may be a signature of noisy functionally feedforward structure rather than of, for example, self-organized criticality.
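A toy version of such a stochastic spiking model can be simulated exactly with the Gillespie algorithm: track the number of active neurons and draw exponentially distributed waiting times between activation and deactivation events. The rates and response function below are simplified stand-ins for the paper's excitatory-inhibitory model.

```python
import math
import random

def gillespie_net(n=200, w=0.9, h=0.001, alpha=0.1, t_end=500.0, seed=1):
    """Exact (Gillespie) simulation of a one-population toy network:
    quiescent -> active at rate f(w*a/n + h) per quiescent neuron,
    active -> quiescent at rate alpha per active neuron.
    Returns the total number of activation events (spikes)."""
    rng = random.Random(seed)
    f = lambda x: math.tanh(max(x, 0.0))     # saturating response function
    a, t, activations = 0, 0.0, 0
    while t < t_end:
        r_up = (n - a) * f(w * a / n + h)    # recurrent drive plus external input h
        r_down = alpha * a
        total = r_up + r_down
        if total == 0.0:
            break
        t += rng.expovariate(total)          # exponential waiting time
        if rng.random() < r_up / total:
            a += 1
            activations += 1
        else:
            a -= 1
    return activations

print(gillespie_net())
```

Avalanche statistics would be extracted by segmenting the activation events into bursts separated by quiescent periods and histogramming burst sizes; near balance the model above produces the intermittent population bursts described in the abstract.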
On the applicability of STDP-based learning mechanisms to spiking neuron network models
Sboev, A.; Vlasov, D.; Serenko, A.; Rybka, R.; Moloshnikov, I.
2016-11-01
Ways of creating a practically effective method for spiking neuron network learning that would be appropriate for implementation in neuromorphic hardware and, at the same time, based on biologically plausible plasticity rules, namely STDP, are discussed. The influence of the amount of correlation between input and output spike trains on learnability by different STDP rules is evaluated. The usability of alternative combined learning schemes, involving artificial and spiking neuron models, is demonstrated on the Iris benchmark task and on the practical task of gender recognition.
Numerical simulation of neuronal spike patterns in a retinal network model
Institute of Scientific and Technical Information of China (English)
Lei Wang; Shenquan Liu; Shanxing Ou
2011-01-01
This study utilized a neuronal compartment model and NEURON software to study the effects of external light stimulation on retinal photoreceptors and spike patterns of neurons in a retinal network. Following light stimulation of different shapes and sizes, changes in the spike features of ganglion cells indicated that different shapes of light stimulation elicited different retinal responses. By manipulating the shape of light stimulation, we investigated the effects of the large number of electrical synapses existing between retinal neurons. Model simulation and analysis suggested that interplexiform cells play an important role in visual signal information processing in the retina, and the findings indicated that our constructed retinal network model was reliable and feasible. In addition, the simulation results demonstrated that ganglion cells exhibited a variety of spike patterns under different light stimulation sizes and different stimulation shapes, which reflect the functions of the retina in signal transmission and processing.
Computing with Spiking Neuron Networks
Paugam-Moisy, H.; Bohte, S.M.; Rozenberg, G.; Baeck, T.H.W.; Kok, J.N.
2012-01-01
Spiking Neuron Networks (SNNs) are often referred to as the 3rd generation of neural networks. Highly inspired by natural computing in the brain and recent advances in neuroscience, they derive their strength and interest from an accurate modeling of synaptic interactions between neurons.
Neuronal spike timing adaptation described with a fractional leaky integrate-and-fire model.
Directory of Open Access Journals (Sweden)
Wondimu Teka
2014-03-01
The voltage trace of neuronal activities can follow multiple timescale dynamics that arise from correlated membrane conductances. Such processes can result in power-law behavior in which the membrane voltage cannot be characterized with a single time constant. The emergent effect of these membrane correlations is a non-Markovian process that can be modeled with a fractional derivative. A fractional derivative is a non-local process in which the value of the variable is determined by integrating a temporal weighted voltage trace, also called the memory trace. Here we developed and analyzed a fractional leaky integrate-and-fire model in which the exponent of the fractional derivative can vary from 0 to 1, with 1 representing the normal derivative. As the exponent of the fractional derivative decreases, the weights of the voltage trace increase. Thus, the value of the voltage is increasingly correlated with the trajectory of the voltage in the past. By varying only the fractional exponent, our model can reproduce upward and downward spike adaptations found experimentally in neocortical pyramidal cells and tectal neurons in vitro. The model also produces spikes with longer first-spike latency and high inter-spike variability with power-law distribution. We further analyze spike adaptation and the responses to noisy and oscillatory input. The fractional model generates reliable spike patterns in response to noisy input. Overall, the spiking activity of the fractional leaky integrate-and-fire model deviates from the spiking activity of the Markovian model and reflects the temporal accumulated intrinsic membrane dynamics that affect the response of the neuron to external stimulation.
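A fractional leaky integrate-and-fire model can be discretized with the Grunwald-Letnikov scheme, in which each new voltage value depends on a weighted sum over the entire voltage history (the memory trace). The sketch below is illustrative only: in particular, how the memory trace interacts with the spike reset is handled naively here, whereas published fractional LIF models treat the reset more carefully.

```python
def fractional_lif(alpha, I, dt=0.1, steps=500, tau=10.0, v_th=1.0, v_reset=0.0):
    """Fractional leaky IF via a Grunwald-Letnikov discretization.
    alpha = 1 recovers the ordinary (Markovian) LIF; alpha < 1 adds a
    power-law memory trace. Returns spike times (ms)."""
    # GL coefficients c_k = (-1)^k * binom(alpha, k), via the standard recurrence
    c = [1.0]
    for k in range(1, steps + 1):
        c.append(c[-1] * (1.0 - (alpha + 1.0) / k))
    v_hist, spikes = [0.0], []
    h_a = dt ** alpha
    for n in range(1, steps + 1):
        v_prev = v_hist[-1]
        drive = -v_prev / tau + I                    # leak plus input current
        memory = sum(c[k] * v_hist[n - k] for k in range(1, n + 1))
        v = h_a * drive - memory                     # history enters through the weights
        if v >= v_th:
            spikes.append(n * dt)
            v = v_reset                              # naive reset (memory untouched)
        v_hist.append(v)
    return spikes

print(len(fractional_lif(1.0, 0.5)), len(fractional_lif(0.8, 0.5)))
```

For alpha = 1 the coefficients reduce to [1, -1, 0, 0, ...] and the update collapses to forward Euler, which is a useful sanity check on the recurrence.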
Homoclinic Spike adding in a neuronal model in the presence of noise
Fuwape, Ibiyinka; Neiman, Alexander; Shilnikov, Andrey
2008-03-01
We study the influence of noise on spike adding transitions within the bursting activity in a Hodgkin-Huxley-type model of the leech heart interneuron. Spike adding in this model occurs via a homoclinic bifurcation of a saddle periodic orbit. Although narrow chaotic regions are observed near the bifurcation transition, the overall bursting dynamics is regular and is characterized by a constant number of spikes per burst. Experimental studies, however, show variability of bursting patterns whereby the number of spikes per burst varies randomly. Thus, the introduction of external synaptic noise is a necessary step to account for the variability of burst durations observed experimentally. We show that near every such transition the neuron is highly sensitive to random perturbations, which lead to and broadly enhance the regions of chaotic dynamics of the cell. For each spike adding transition there is a critical noise level beyond which the dynamics of the neuron becomes chaotic throughout the entire region of the given transition. Noise-induced chaotic dynamics is characterized in terms of the Lyapunov exponents and the Shannon entropy and reflects variability of firing patterns with various numbers of spikes per burst, traversing a wide range of the neuron's parameters.
Mapping Generative Models onto a Network of Digital Spiking Neurons.
Pedroni, Bruno U; Das, Srinjoy; Arthur, John V; Merolla, Paul A; Jackson, Bryan L; Modha, Dharmendra S; Kreutz-Delgado, Kenneth; Cauwenberghs, Gert
2016-08-01
Stochastic neural networks such as Restricted Boltzmann Machines (RBMs) have been successfully used in applications ranging from speech recognition to image classification, and are particularly interesting because of their potential for generative tasks. Inference and learning in these algorithms use a Markov Chain Monte Carlo procedure called Gibbs sampling, where a logistic function forms the kernel of this sampler. On the other side of the spectrum, neuromorphic systems have shown great promise for low-power and parallelized cognitive computing, but lack well-suited applications and automation procedures. In this work, we propose a systematic method for bridging the RBM algorithm and digital neuromorphic systems, with a generative pattern completion task as proof of concept. For this, we first propose a method of producing the Gibbs sampler using bio-inspired digital noisy integrate-and-fire neurons. Next, we describe the process of mapping generative RBMs trained offline onto the IBM TrueNorth neurosynaptic processor, a low-power digital neuromorphic VLSI substrate. Mapping these algorithms onto neuromorphic hardware presents unique challenges in network connectivity and weight and bias quantization, which, in turn, require architectural and design strategies for the physical realization. Generative performance is analyzed to validate the neuromorphic requirements and to best select the neuron parameters for the model. Lastly, we describe a design automation procedure which achieves optimal resource usage, accounting for the novel hardware adaptations. This work represents the first implementation of generative RBM inference on a neuromorphic VLSI substrate.
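The Gibbs sampler at the core of RBM inference alternates between sampling the hidden layer given the visibles and vice versa, with the logistic function as the sampling kernel. A tiny sketch follows, with random illustrative weights rather than a trained model, and ordinary pseudorandom sampling in place of the spiking-neuron realization described above.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gibbs_step(v, W, b_h, b_v, rng):
    """One alternating Gibbs sweep for a tiny binary RBM: sample the hidden
    units given the visibles, then resample the visibles given the hiddens."""
    n_v, n_h = len(b_v), len(b_h)
    h = [1 if rng.random() < sigmoid(sum(W[i][j] * v[i] for i in range(n_v)) + b_h[j]) else 0
         for j in range(n_h)]
    v_new = [1 if rng.random() < sigmoid(sum(W[i][j] * h[j] for j in range(n_h)) + b_v[i]) else 0
             for i in range(n_v)]
    return v_new, h

# 3 visible and 2 hidden units with small random weights (illustrative)
rng = random.Random(0)
W = [[rng.gauss(0, 0.5) for _ in range(2)] for _ in range(3)]
v = [1, 0, 1]
for _ in range(10):                      # run the chain for a few sweeps
    v, h = gibbs_step(v, W, [0.0, 0.0], [0.0, 0.0, 0.0], rng)
print(v, h)
```

The paper's contribution is to reproduce exactly this logistic sampling kernel with noisy integrate-and-fire neurons, so that the same alternating updates run natively on TrueNorth.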
A model of reverse spike frequency adaptation and repetitive firing of subthalamic nucleus neurons.
Wilson, Charles J; Weyrick, Angela; Terman, David; Hallworth, Nicholas E; Bevan, Mark D
2004-05-01
Subthalamic nucleus neurons exhibit reverse spike-frequency adaptation. This occurs only at firing rates of 20-50 spikes/s and higher. Over this same frequency range, there is an increase in the steady-state frequency-intensity (F-I) curve's slope (the secondary range). Specific blockade of high-voltage activated calcium currents reduced the F-I curve slope and reverse adaptation. Blockade of calcium-dependent potassium current enhanced secondary range firing. A simple model that exhibited these properties used spike-triggered conductances similar to those in subthalamic neurons. It showed: 1) Nonaccumulating spike afterhyperpolarizations produce positively accelerating F-I curves and spike-frequency adaptation that is complete after the second spike. 2) Combinations of accumulating aftercurrents result in a linear F-I curve, whose slope depends on the relative contributions of inward and outward currents. Spike-frequency adaptation can be gradual. 3) With both accumulating and nonaccumulating aftercurrents, primary and secondary ranges will be present in the F-I curve. The slope of the primary range is determined by the nonaccumulating conductance; the accumulating conductances govern the secondary range. The transition is determined by the relative strengths of accumulating and nonaccumulating currents. 4) Spike-threshold accommodation contributes to the secondary range, reducing its slope at high firing rates. Threshold accommodation can stabilize firing when inward aftercurrents exceed outward ones. 5) Steady-state reverse adaptation results when accumulated inward aftercurrents exceed outward ones. This requires spike-threshold accommodation. Transient speedup arises when inward currents are smaller than outward ones at steady state, but accumulate more rapidly. 6) The same mechanisms alter firing in response to irregular patterns of synaptic conductances, as cell excitability fluctuates with changes in firing rate.
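The effect of an accumulating spike-triggered aftercurrent on the steady-state F-I curve can be reproduced with a minimal adaptive integrate-and-fire sketch. Parameters below are illustrative, not fitted to subthalamic neurons, and only a single outward (adapting) aftercurrent is included.

```python
def fi_point(I, dt=0.1, t_end=2000.0, tau=10.0, v_th=1.0, tau_a=100.0, b=0.2):
    """Steady-state firing rate (spikes/s) of an integrate-and-fire neuron
    with an accumulating outward spike-triggered adaptation current a."""
    v, a, spikes = 0.0, 0.0, []
    for i in range(int(t_end / dt)):
        v += dt * (I - v - a) / tau      # membrane with adaptation subtracted
        a += dt * (-a / tau_a)           # aftercurrent decays between spikes
        if v >= v_th:
            spikes.append(i * dt)
            v = 0.0
            a += b                       # accumulating aftercurrent: each spike adds
    late = [t for t in spikes if t > t_end / 2]   # discard the adapting transient
    return 1000.0 * len(late) / (t_end / 2)       # times are in ms

# the accumulating outward current linearizes and flattens the F-I curve
print([round(fi_point(I), 1) for I in (1.2, 1.6, 2.0)])
```

Adding an accumulating inward aftercurrent of the right relative strength, as in points 5-6 of the abstract, would instead produce reverse adaptation (rates that grow over the course of the response).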
Sidiropoulou, Kyriaki; Poirazi, Panayiota
2012-01-01
Proper functioning of working memory involves the expression of stimulus-selective persistent activity in pyramidal neurons of the prefrontal cortex (PFC), which refers to neural activity that persists for seconds beyond the end of the stimulus. The mechanisms which PFC pyramidal neurons use to discriminate between preferred vs. neutral inputs at the cellular level are largely unknown. Moreover, the presence of pyramidal cell subtypes with different firing patterns, such as regular spiking and intrinsic bursting, raises the question as to what their distinct role might be in persistent firing in the PFC. Here, we use a compartmental modeling approach to search for discriminatory features in the properties of incoming stimuli to a PFC pyramidal neuron and/or its response that signal which of these stimuli will result in persistent activity emergence. Furthermore, we use our modeling approach to study cell-type specific differences in persistent activity properties, via implementing a regular spiking (RS) and an intrinsic bursting (IB) model neuron. We identify synaptic location within the basal dendrites as a feature of stimulus selectivity. Specifically, persistent activity-inducing stimuli consist of activated synapses that are located more distally from the soma compared to non-inducing stimuli, in both model cells. In addition, the action potential (AP) latency and the first few inter-spike-intervals of the neuronal response can be used to reliably detect inducing vs. non-inducing inputs, suggesting a potential mechanism by which downstream neurons can rapidly decode the upcoming emergence of persistent activity. While the two model neurons did not differ in the coding features of persistent activity emergence, the properties of persistent activity, such as the firing pattern and the duration of temporally-restricted persistent activity were distinct. Collectively, our results pinpoint specific features of the neuronal response to a given stimulus that code
Koutsou, Achilleas; Bugmann, Guido; Christodoulou, Chris
2015-10-01
Biological systems are able to recognise temporal sequences of stimuli or compute in the temporal domain. In this paper we are exploring whether a biophysical model of a pyramidal neuron can detect and learn systematic time delays between the spikes from different input neurons. In particular, we investigate whether it is possible to reinforce pairs of synapses separated by a dendritic propagation time delay corresponding to the arrival time difference of two spikes from two different input neurons. We examine two subthreshold learning approaches where the first relies on the backpropagation of EPSPs (excitatory postsynaptic potentials) and the second on the backpropagation of a somatic action potential, whose production is supported by a learning-enabling background current. The first approach does not provide a learning signal that sufficiently differentiates between synapses at different locations, while in the second approach, somatic spikes do not provide a reliable signal distinguishing arrival time differences of the order of the dendritic propagation time. It appears that the firing of pyramidal neurons shows little sensitivity to heterosynaptic spike arrival time differences of several milliseconds. This neuron is therefore unlikely to be able to learn to detect such differences.
2014-01-01
We derive a synaptic weight update rule for learning temporally precise spike train to spike train transformations in multilayer feedforward networks of spiking neurons. The framework, aimed at seamlessly generalizing error backpropagation to the deterministic spiking neuron setting, is based strictly on spike timing and avoids invoking concepts pertaining to spike rates or probabilistic models of spiking. The derivation is founded on two innovations. First, an error functional is proposed th...
Bayesian inference for generalized linear models for spiking neurons
Directory of Open Access Journals (Sweden)
Sebastian Gerwinn
2010-05-01
Generalized Linear Models (GLMs) are commonly used statistical methods for modelling the relationship between neural population activity and presented stimuli. When the dimension of the parameter space is large, strong regularization has to be used in order to fit GLMs to datasets of realistic size without overfitting. By imposing properly chosen priors over parameters, Bayesian inference provides an effective and principled approach for achieving regularization. Here we show how the posterior distribution over model parameters of GLMs can be approximated by a Gaussian using the Expectation Propagation algorithm. In this way, we obtain an estimate of the posterior mean and posterior covariance, allowing us to calculate Bayesian confidence intervals that characterize the uncertainty about the optimal solution. From the posterior we also obtain a different point estimate, namely the posterior mean as opposed to the commonly used maximum a posteriori estimate. We systematically compare the different inference techniques on simulated as well as on multi-electrode recordings of retinal ganglion cells, and explore the effects of the chosen prior and the performance measure used. We find that good performance can be achieved by choosing a Laplace prior together with the posterior mean estimate.
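As a rough illustration of the kind of regularized GLM fitting this abstract describes, the sketch below simulates spike counts from a sparse Poisson GLM and recovers the stimulus filter by maximum a posteriori estimation under a Laplace (L1) prior via proximal gradient descent. The dimensions, learning rate, and prior strength are arbitrary, and this yields a MAP point estimate rather than the Expectation Propagation posterior used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a Poisson GLM: spike counts driven by a 10-dim stimulus filter.
T, D = 2000, 10
w_true = np.zeros(D)
w_true[:3] = [1.0, -0.5, 0.8]                 # sparse ground-truth filter
X = rng.normal(size=(T, D))
rate = np.exp(X @ w_true - 1.0)
y = rng.poisson(rate)

# MAP estimate under a Laplace (L1) prior via proximal gradient descent.
lam, lr = 2.0, 1e-4
w = np.zeros(D)
for _ in range(2000):
    grad = X.T @ (np.exp(X @ w - 1.0) - y)    # gradient of the Poisson NLL
    w -= lr * grad
    # Soft-thresholding step implements the Laplace prior.
    w = np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0.0)

err = np.linalg.norm(w - w_true) / np.linalg.norm(w_true)
```

With these (illustrative) settings the regularized estimate recovers the sparse filter to within a modest relative error.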
Ebner, Marc; Hameroff, Stuart
2011-01-01
Cognitive brain functions, for example, sensory perception, motor control and learning, are understood as computation by axonal-dendritic chemical synapses in networks of integrate-and-fire neurons. Cognitive brain functions may occur either consciously or nonconsciously (on "autopilot"). Conscious cognition is marked by gamma synchrony EEG, mediated largely by dendritic-dendritic gap junctions, sideways connections in input/integration layers. Gap-junction-connected neurons define a sub-network within a larger neural network. A theoretical model (the "conscious pilot") suggests that as gap junctions open and close, a gamma-synchronized subnetwork, or zone moves through the brain as an executive agent, converting nonconscious "auto-pilot" cognition to consciousness, and enhancing computation by coherent processing and collective integration. In this study we implemented sideways "gap junctions" in a single-layer artificial neural network to perform figure/ground separation. The set of neurons connected through gap junctions form a reconfigurable resistive grid or sub-network zone. In the model, outgoing spikes are temporally integrated and spatially averaged using the fixed resistive grid set up by neurons of similar function which are connected through gap-junctions. This spatial average, essentially a feedback signal from the neuron's output, determines whether particular gap junctions between neurons will open or close. Neurons connected through open gap junctions synchronize their output spikes. We have tested our gap-junction-defined sub-network in a one-layer neural network on artificial retinal inputs using real-world images. Our system is able to perform figure/ground separation where the laterally connected sub-network of neurons represents a perceived object. Even though we only show results for visual stimuli, our approach should generalize to other modalities. The system demonstrates a moving sub-network zone of synchrony, within which the contents of
Spiking neuron network Helmholtz machine.
Sountsov, Pavel; Miller, Paul
2015-01-01
An increasing amount of behavioral and neurophysiological data suggests that the brain performs optimal (or near-optimal) probabilistic inference and learning during perception and other tasks. Although many machine learning algorithms exist that perform inference and learning in an optimal way, a complete description of how one of those algorithms (or a novel algorithm) can be implemented in the brain is currently lacking. There have been many proposed solutions that address how neurons can perform optimal inference but the question of how synaptic plasticity can implement optimal learning is rarely addressed. This paper aims to unify the two fields of probabilistic inference and synaptic plasticity by using a neuronal network of realistic model spiking neurons to implement a well-studied computational model called the Helmholtz Machine. The Helmholtz Machine is amenable to neural implementation as the algorithm it uses to learn its parameters, called the wake-sleep algorithm, uses a local delta learning rule. Our spiking-neuron network implements both the delta rule and a small example of a Helmholtz machine. This neuronal network can learn an internal model of continuous-valued training data sets without supervision. The network can also perform inference on the learned internal models. We show how various biophysical features of the neural implementation constrain the parameters of the wake-sleep algorithm, such as the duration of the wake and sleep phases of learning and the minimal sample duration. We examine the deviations from optimal performance and tie them to the properties of the synaptic plasticity rule.
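The local delta rule at the heart of the wake-sleep algorithm can be sketched in a few lines: a single sigmoid unit learns to reproduce a binary target using only presynaptic activity and a local prediction error. The network size, learning rate, and target function below are illustrative, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Local delta rule: the weight change depends only on presynaptic activity
# and the postsynaptic error (target - prediction), as in wake-sleep.
w = np.zeros(3)
eps = 0.5
X = rng.integers(0, 2, size=(500, 3)).astype(float)
t = X[:, 0].copy()                      # toy target: copy of the first input
for _ in range(3):                      # a few passes over the data
    for x, tgt in zip(X, t):
        p = sigmoid(w @ x)
        w += eps * (tgt - p) * x        # delta rule update

acc = ((sigmoid(X @ w) > 0.5) == t).mean()
```

After training, the unit tracks the informative input and ignores the distractors.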
What causes a neuron to spike?
Arcas, B A; Arcas, Blaise Aguera y; Fairhall, Adrienne
2003-01-01
The computation performed by a neuron can be formulated as a combination of dimensional reduction in stimulus space and the nonlinearity inherent in a spiking output. White noise stimulus and reverse correlation (the spike-triggered average and spike-triggered covariance) are often used in experimental neuroscience to `ask' neurons which dimensions in stimulus space they are sensitive to, and to characterize the nonlinearity of the response. In this paper, we apply reverse correlation to the simplest model neuron with temporal dynamics--the leaky integrate-and-fire model--and find that even for this simple case standard techniques do not recover the known neural computation. To overcome this, we develop novel reverse correlation techniques by selectively analyzing only `isolated' spikes, and taking explicit account of the extended silences that precede these isolated spikes. We discuss the implications of our methods to the characterization of neural adaptation. Although these methods are developed in the con...
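A minimal version of the reverse-correlation setup this abstract analyzes: drive a leaky integrate-and-fire neuron with white-noise current and compute the spike-triggered average. Time constants, noise levels, and the analysis window are arbitrary choices, not those of the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

# Leaky integrate-and-fire neuron driven by white-noise current.
dt, tau, v_th, v_reset = 0.001, 0.02, 1.0, 0.0
T = 200_000
I = 1.0 + 2.0 * rng.normal(size=T)      # noisy stimulus around the rheobase

v, spikes = 0.0, []
for t in range(T):
    v += dt / tau * (-v + I[t])         # leaky integration of the input
    if v >= v_th:
        spikes.append(t)
        v = v_reset                     # hard reset after each spike

# Spike-triggered average of the 50 ms of stimulus preceding each spike.
win = 50
sta = np.mean([I[t - win:t] for t in spikes if t >= win], axis=0)
```

The STA rises toward the spike time, recovering the neuron's effective temporal filter by reverse correlation.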
Spiking Neurons for Analysis of Patterns
Huntsberger, Terrance
2008-01-01
Artificial neural networks comprising spiking neurons of a novel type have been conceived as improved pattern-analysis and pattern-recognition computational systems. These neurons are represented by a mathematical model denoted the state-variable model (SVM), which, among other things, exploits a computational parallelism inherent in spiking-neuron geometry. Networks of SVM neurons offer advantages of speed and computational efficiency, relative to traditional artificial neural networks. The SVM also overcomes some of the limitations of prior spiking-neuron models. There are numerous potential pattern-recognition, tracking, and data-reduction (data preprocessing) applications for these SVM neural networks on Earth and in exploration of remote planets. Spiking neurons imitate biological neurons more closely than do the neurons of traditional artificial neural networks. A spiking neuron includes a central cell body (soma) surrounded by a tree-like interconnection network (dendrites). Spiking neurons are so named because they generate trains of output pulses (spikes) in response to inputs received from sensors or from other neurons. They gain their speed advantage over traditional neural networks by using the timing of individual spikes for computation, whereas traditional artificial neurons use averages of activity levels over time. Moreover, spiking neurons use the delays inherent in dendritic processing in order to efficiently encode the information content of incoming signals. Because traditional artificial neurons fail to capture this encoding, they have less processing capability, and so it is necessary to use more gates when implementing traditional artificial neurons in electronic circuitry. Such higher-order functions as dynamic tasking are effected by use of pools (collections) of spiking neurons interconnected by spike-transmitting fibers. The SVM includes adaptive thresholds and submodels of transport of ions (in imitation of such transport in biological
Memristors Empower Spiking Neurons With Stochasticity
Al-Shedivat, Maruan
2015-06-01
Recent theoretical studies have shown that probabilistic spiking can be interpreted as learning and inference in cortical microcircuits. This interpretation creates new opportunities for building neuromorphic systems driven by probabilistic learning algorithms. However, such systems must have two crucial features: 1) the neurons should follow a specific behavioral model, and 2) stochastic spiking should be implemented efficiently for it to be scalable. This paper proposes a memristor-based stochastically spiking neuron that fulfills these requirements. First, the analytical model of the memristor is enhanced so it can capture the behavioral stochasticity consistent with experimentally observed phenomena. The switching behavior of the memristor model is demonstrated to be akin to the firing of the stochastic spike response neuron model, the primary building block for probabilistic algorithms in spiking neural networks. Furthermore, the paper proposes a neural soma circuit that utilizes the intrinsic nondeterminism of memristive switching for efficient spike generation. The simulations and analysis of the behavior of a single stochastic neuron and a winner-take-all network built of such neurons and trained on handwritten digits confirm that the circuit can be used for building probabilistic sampling and pattern adaptation machinery in spiking networks. The findings constitute an important step towards scalable and efficient probabilistic neuromorphic platforms.
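The stochastic spike response neuron that the memristor's switching is said to mimic can be sketched with an exponential escape rate: the membrane potential sets an instantaneous firing probability instead of a hard threshold. All parameter values below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

# Exponential escape-rate neuron: firing probability per time step grows
# smoothly as the membrane potential u approaches the soft threshold theta.
dt, rho0, theta, du = 0.001, 50.0, 1.0, 0.2

def spike_prob(u):
    rate = rho0 * np.exp((u - theta) / du)   # instantaneous escape rate (Hz)
    return 1.0 - np.exp(-rate * dt)          # probability of a spike in dt

# Measure firing rates at two constant membrane potentials.
n = 100_000
rate_low = (rng.random(n) < spike_prob(0.9)).mean() / dt
rate_high = (rng.random(n) < spike_prob(1.1)).mean() / dt
```

Even below threshold the neuron fires occasionally, and the rate increases steeply with depolarization, which is the behavior a stochastic switching element needs to reproduce.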
Biological modelling of a computational spiking neural network with neuronal avalanches
Li, Xiumin; Chen, Qing; Xue, Fangzheng
2017-05-01
In recent years, an increasing number of studies have demonstrated that networks in the brain can self-organize into a critical state where dynamics exhibit a mixture of ordered and disordered patterns. This critical branching phenomenon is termed neuronal avalanches. It has been hypothesized that the homeostatic level balanced between stability and plasticity of this critical state may be the optimal state for performing diverse neural computational tasks. However, the critical region for high performance is narrow and sensitive for spiking neural networks (SNNs). In this paper, we investigated the role of the critical state in neural computations based on liquid-state machines, a biologically plausible computational neural network model for real-time computing. The computational performance of an SNN when operating at the critical state and, in particular, with spike-timing-dependent plasticity for updating synaptic weights is investigated. The network is found to show the best computational performance when it is subjected to critical dynamic states. Moreover, the active-neuron-dominant structure refined from synaptic learning can remarkably enhance the robustness of the critical state and further improve computational accuracy. These results may have important implications in the modelling of spiking neural networks with optimal computational performance. This article is part of the themed issue `Mathematical methods in medicine: neuroscience, cardiology and pathology'.
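The spike-timing-dependent plasticity used to update synaptic weights in such networks has a standard pair-based form; the window amplitudes and time constant below are typical textbook values, not the paper's.

```python
import numpy as np

# Pair-based STDP: the sign and size of the weight change depend on the
# relative timing of a presynaptic and a postsynaptic spike.
a_plus, a_minus, tau = 0.01, 0.012, 0.02   # amplitudes and window (s)

def stdp_dw(dt):
    """Weight change for one spike pair, dt = t_post - t_pre (seconds)."""
    if dt >= 0:
        return a_plus * np.exp(-dt / tau)    # pre before post: potentiation
    return -a_minus * np.exp(dt / tau)       # post before pre: depression

dw_causal = stdp_dw(0.005)    # pre 5 ms before post
dw_acausal = stdp_dw(-0.005)  # post 5 ms before pre
```

Causal pairings strengthen the synapse and acausal pairings weaken it, with a slight bias toward depression when a_minus > a_plus.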
Neurodynamics of biased competition and cooperation for attention: a model with spiking neurons.
Deco, Gustavo; Rolls, Edmund T
2005-07-01
Recent neurophysiological experiments have led to a promising "biased competition hypothesis" of the neural basis of attention. According to this hypothesis, attention appears as a sometimes nonlinear property that results from a top-down biasing effect that influences the competitive and cooperative interactions that work both within cortical areas and between cortical areas. In this paper we describe a detailed dynamical analysis of the synaptic and neuronal spiking mechanisms underlying biased competition. We perform a detailed analysis of the dynamical capabilities of the system by exploring the stationary attractors in the parameter space by a mean-field reduction consistent with the underlying synaptic and spiking dynamics. The nonstationary dynamical behavior, as measured in neuronal recording experiments, is studied by an integrate-and-fire model with realistic dynamics. This elucidates the role of cooperation and competition in the dynamics of biased competition and shows why feedback connections between cortical areas need optimally to be weaker by a factor of about 2.5 than the feedforward connections in an attentional network. We modeled the interaction between top-down attention and bottom-up stimulus contrast effects found neurophysiologically and showed that top-down attentional effects can be explained by external attention inputs biasing neurons to move to different parts of their nonlinear activation functions. Further, it is shown that, although NMDA nonlinear effects may be useful in attention, they are not necessary, with nonlinear effects (which may appear multiplicative) being produced in the way just described.
Directory of Open Access Journals (Sweden)
Sushmita Lakshmi Allam
2012-10-01
Over the past decades, our view of astrocytes has switched from passive support cells to active processing elements in the brain. The current view is that astrocytes shape neuronal communication and also play an important role in many neurodegenerative diseases. Despite the growing awareness of the importance of astrocytes, the exact mechanisms underlying neuron-astrocyte communication and the physiological consequences of astrocytic-neuronal interactions remain largely unclear. In this work, we define a modeling framework that will permit us to address unanswered questions regarding the role of astrocytes. Our computational model of a detailed glutamatergic synapse facilitates the analysis of neural system responses to various stimuli and conditions that are otherwise difficult to obtain experimentally, in particular the readouts at the sub-cellular level. In this paper, we extend a detailed glutamatergic synaptic model, to include astrocytic glutamate transporters. We demonstrate how these glial transporters, responsible for the majority of glutamate uptake, modulate synaptic transmission mediated by ionotropic AMPA and NMDA receptors at glutamatergic synapses. Furthermore, we investigate how these local signaling effects at the synaptic level are translated into varying spatio-temporal patterns of neuron firing. Paired pulse stimulation results reveal that the effect of astrocytic glutamate uptake is more apparent when the input inter-spike interval is sufficiently long to allow the receptors to recover from desensitization. These results suggest an important functional role of astrocytes in spike timing dependent processes and demand further investigation of the molecular basis of certain neurological diseases specifically related to alterations in astrocytic glutamate uptake, such as epilepsy.
Reinforcement learning of targeted movement in a spiking neuronal model of motor cortex.
Chadderdon, George L; Neymotin, Samuel A; Kerr, Cliff C; Lytton, William W
2012-01-01
Sensorimotor control has traditionally been considered from a control theory perspective, without relation to neurobiology. In contrast, here we utilized a spiking-neuron model of motor cortex and trained it to perform a simple movement task, which consisted of rotating a single-joint "forearm" to a target. Learning was based on a reinforcement mechanism analogous to that of the dopamine system. This provided a global reward or punishment signal in response to decreasing or increasing distance from hand to target, respectively. Output was partially driven by Poisson motor babbling, creating stochastic movements that could then be shaped by learning. The virtual forearm consisted of a single segment rotated around an elbow joint, controlled by flexor and extensor muscles. The model consisted of 144 excitatory and 64 inhibitory event-based neurons, each with AMPA, NMDA, and GABA synapses. Proprioceptive cell input to this model encoded the 2 muscle lengths. Plasticity was only enabled in feedforward connections between input and output excitatory units, using spike-timing-dependent eligibility traces for synaptic credit or blame assignment. Learning resulted from a global 3-valued signal: reward (+1), no learning (0), or punishment (-1), corresponding to phasic increases, lack of change, or phasic decreases of dopaminergic cell firing, respectively. Successful learning only occurred when both reward and punishment were enabled. In this case, 5 target angles were learned successfully within 180 s of simulation time, with a median error of 8 degrees. Motor babbling allowed exploratory learning, but decreased the stability of the learned behavior, since the hand continued moving after reaching the target. Our model demonstrated that a global reinforcement signal, coupled with eligibility traces for synaptic plasticity, can train a spiking sensorimotor network to perform goal-directed motor behavior.
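The core learning mechanism here, a global three-valued reinforcement signal gating spike-timing-dependent eligibility traces, can be caricatured in a few lines. The network is reduced to a handful of synapses, and the reward criterion below is a made-up stand-in for the hand-to-target distance; all constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

# Reward-modulated plasticity with eligibility traces: pre/post coincidences
# tag a synapse, and a later global signal in {+1, -1} converts the tag into
# potentiation or depression.
n_pre, tau_e, lr, dt = 5, 0.1, 0.02, 0.01
w = np.full(n_pre, 0.5)
e = np.zeros(n_pre)                      # eligibility traces

for step in range(5000):
    pre = rng.random(n_pre) < 0.1        # presynaptic spikes
    post = rng.random() < 0.1            # postsynaptic spike
    e += pre * post                      # tag coincident pre/post activity
    e *= np.exp(-dt / tau_e)             # traces decay between updates
    # Hypothetical task: synapse 0 is "useful", so its recent activity
    # earns reward; otherwise the global signal punishes.
    reward = 1.0 if e[0] > 0.1 else -1.0
    w += lr * reward * e                 # global signal gates tagged synapses
    w = np.clip(w, 0.0, 1.0)
```

The rewarded synapse is driven toward its maximum while the others are depressed, illustrating how a scalar dopamine-like signal can assign credit via traces.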
Fahrion, Jennifer K; Komuro, Yutaro; Li, Ying; Ohno, Nobuhiko; Littner, Yoav; Raoult, Emilie; Galas, Ludovic; Vaudry, David; Komuro, Hitoshi
2012-03-27
In the brains of patients with fetal Minamata disease (FMD), which is caused by exposure to methylmercury (MeHg) during development, many neurons are hypoplastic, ectopic, and disoriented, indicating disrupted migration, maturation, and growth. MeHg affects a myriad of signaling molecules, but little is known about which signals are primary targets for MeHg-induced deficits in neuronal development. In this study, using a mouse model of FMD, we examined how MeHg affects the migration of cerebellar granule cells during early postnatal development. The cerebellum is one of the most susceptible brain regions to MeHg exposure, and profound loss of cerebellar granule cells is detected in the brains of patients with FMD. We show that MeHg inhibits granule cell migration by reducing the frequency of somal Ca(2+) spikes through alterations in Ca(2+), cAMP, and insulin-like growth factor 1 (IGF1) signaling. First, MeHg slows the speed of granule cell migration in a dose-dependent manner, independent of the mode of migration. Second, MeHg reduces the frequency of spontaneous Ca(2+) spikes in granule cell somata in a dose-dependent manner. Third, a unique in vivo live-imaging system for cell migration reveals that reducing the inhibitory effects of MeHg on somal Ca(2+) spike frequency by stimulating internal Ca(2+) release and Ca(2+) influxes, inhibiting cAMP activity, or activating IGF1 receptors ameliorates the inhibitory effects of MeHg on granule cell migration. These results suggest that alteration of Ca(2+) spike frequency and Ca(2+), cAMP, and IGF1 signaling could be potential therapeutic targets for infants with MeHg intoxication.
Arata, Hiroki; Mino, Hiroyuki
2012-01-01
This article presents an effect of spontaneous spike firing rates on information transmission of the spike trains in a spherical bushy neuron model of antero-ventral cochlear nuclei. In computer simulations, the synaptic current stimuli ascending from auditory nerve fibers (ANFs) were modeled by a filtered inhomogeneous Poisson process modulated with sinusoidal functions, while the stochastic sodium and stochastic high- and low-threshold potassium channels were incorporated into a single compartment model of the soma in spherical bushy neurons. The information rates were estimated from the entropies of the inter-spike intervals of the spike trains to quantitatively evaluate information transmission in the spherical bushy neuron model. The results show that the information rates increased, reached a maximum, and then decreased as the rate of spontaneous spikes from the ANFs increased, implying a resonance phenomenon dependent on the rate of spontaneous spikes from ANFs. In conclusion, this stochastic-resonance-like phenomenon would arise because spontaneous random spike firings from the auditory nerve can act as a source of fluctuation or noise; these findings may play a key role in the design of better auditory prostheses.
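The information rates above are built on entropies of inter-spike intervals. A minimal histogram-based ISI entropy estimate looks like the sketch below; the bin count and the spike statistics are chosen arbitrarily for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)

def isi_entropy_bits(spike_times, n_bins=50):
    """Histogram estimate of the entropy (bits) of the ISI distribution."""
    isis = np.diff(np.sort(spike_times))
    counts, _ = np.histogram(isis, bins=n_bins)
    p = counts[counts > 0] / counts.sum()
    return -np.sum(p * np.log2(p))

# A Poisson train (exponential ISIs) is irregular and carries high ISI
# entropy; a perfectly regular train at the same mean rate carries almost none.
t_poisson = np.cumsum(rng.exponential(0.02, size=2000))   # ~50 Hz
t_regular = np.arange(2000) * 0.02                        # 50 Hz clock
h_poisson = isi_entropy_bits(t_poisson)
h_regular = isi_entropy_bits(t_regular)
```

Estimators like this (with bias corrections) underlie the information-rate comparisons in the abstract.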
Evoking prescribed spike times in stochastic neurons
Doose, Jens; Lindner, Benjamin
2017-09-01
Single cell stimulation in vivo is a powerful tool to investigate the properties of single neurons and their functionality in neural networks. We present a method to determine a cell-specific stimulus that reliably evokes a prescribed spike train with high temporal precision of action potentials. We test the performance of this stimulus in simulations for two different stochastic neuron models. For a broad range of parameters and a neuron firing with intermediate firing rates (20-40 Hz), the reliability in evoking the prescribed spike train is close to its theoretical maximum, which is mainly determined by the level of intrinsic noise.
A Spiking Neuron Model of Word Associations for the Remote Associates Test
Kajić, Ivana; Gosmann, Jan; Stewart, Terrence C.; Wennekers, Thomas; Eliasmith, Chris
2017-01-01
Generating associations is important for cognitive tasks including language acquisition and creative problem solving. It remains an open question how the brain represents and processes associations. The Remote Associates Test (RAT) is a task, originally used in creativity research, that is heavily dependent on generating associations in a search for the solutions to individual RAT problems. In this work we present a model that solves the test. Compared to earlier modeling work on the RAT, our hybrid (i.e., non-developmental) model is implemented in a spiking neural network by means of the Neural Engineering Framework (NEF), demonstrating that it is possible for spiking neurons to be organized to store the employed representations and to manipulate them. In particular, the model shows that distributed representations can support sophisticated linguistic processing. The model was validated on human behavioral data including the typical length of response sequences and similarity relationships in produced responses. These data suggest two cognitive processes that are involved in solving the RAT: one process generates potential responses and a second process filters the responses. PMID:28210234
Prospective Coding by Spiking Neurons.
Directory of Open Access Journals (Sweden)
Johanni Brea
2016-06-01
Animals learn to make predictions, such as associating the sound of a bell with upcoming feeding or predicting a movement that a motor command is eliciting. How predictions are realized on the neuronal level and what plasticity rule underlies their learning is not well understood. Here we propose a biologically plausible synaptic plasticity rule to learn predictions on a single neuron level on a timescale of seconds. The learning rule allows a spiking two-compartment neuron to match its current firing rate to its own expected future discounted firing rate. For instance, if an originally neutral event is repeatedly followed by an event that elevates the firing rate of a neuron, the originally neutral event will eventually also elevate the neuron's firing rate. The plasticity rule is a form of spike timing dependent plasticity in which a presynaptic spike followed by a postsynaptic spike leads to potentiation. Even if the plasticity window has a width of 20 milliseconds, associations on the time scale of seconds can be learned. We illustrate prospective coding with three examples: learning to predict a time varying input, learning to predict the next stimulus in a delayed paired-associate task and learning with a recurrent network to reproduce a temporally compressed version of a sequence. We discuss the potential role of the learning mechanism in classical trace conditioning. In the special case that the signal to be predicted encodes reward, the neuron learns to predict the discounted future reward and learning is closely related to the temporal difference learning algorithm TD(λ).
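In the reward special case mentioned at the end of the abstract, the rule relates to TD(λ). A tabular TD(λ) sketch on a five-state chain with a single terminal reward (all constants illustrative) shows how eligibility traces propagate the prediction backwards:

```python
import numpy as np

# Tabular TD(lambda): learn the discounted future reward for a deterministic
# chain of states where only the final state pays out.
n_states, gamma, lam, alpha = 5, 0.9, 0.8, 0.1
V = np.zeros(n_states)

for episode in range(500):
    e = np.zeros(n_states)                 # eligibility traces over states
    for s in range(n_states):
        r = 1.0 if s == n_states - 1 else 0.0
        v_next = 0.0 if s == n_states - 1 else V[s + 1]
        delta = r + gamma * v_next - V[s]  # temporal-difference error
        e *= gamma * lam                   # traces decay each step
        e[s] += 1.0                        # visiting a state tags it
        V += alpha * delta * e             # credit earlier states via traces
```

The learned values approach gamma**k for a state k steps from the reward, i.e. the discounted future reward the abstract says the neuron comes to predict.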
Directory of Open Access Journals (Sweden)
Wassim M. Haddad
2014-07-01
Advances in neuroscience have been closely linked to mathematical modeling beginning with the integrate-and-fire model of Lapicque and proceeding through the modeling of the action potential by Hodgkin and Huxley to the current era. The fundamental building block of the central nervous system, the neuron, may be thought of as a dynamic element that is “excitable”, and can generate a pulse or spike whenever the electrochemical potential across the cell membrane of the neuron exceeds a threshold. A key application of nonlinear dynamical systems theory to the neurosciences is to study phenomena of the central nervous system that exhibit nearly discontinuous transitions between macroscopic states. A very challenging and clinically important problem exhibiting this phenomenon is the induction of general anesthesia. In any specific patient, the transition from consciousness to unconsciousness as the concentration of anesthetic drugs increases is very sharp, resembling a thermodynamic phase transition. This paper focuses on multistability theory for continuous and discontinuous dynamical systems having a set of multiple isolated equilibria and/or a continuum of equilibria. Multistability is the property whereby the solutions of a dynamical system can alternate between two or more mutually exclusive Lyapunov stable and convergent equilibrium states under asymptotically slowly changing inputs or system parameters. In this paper, we extend the theory of multistability to continuous, discontinuous, and stochastic nonlinear dynamical systems. In particular, Lyapunov-based tests for multistability and synchronization of dynamical systems with continuously differentiable and absolutely continuous flows are established. The results are then applied to excitatory and inhibitory biological neuronal networks to explain the underlying mechanism of action for anesthesia and consciousness from a multistable dynamical system perspective, thereby providing a
Bursting and spiking due to additional direct and stochastic currents in neuron models
Institute of Scientific and Technical Information of China (English)
Yang Zhuo-Qin; Lu Qi-Shao
2006-01-01
Neurons at rest can exhibit diverse firing activity patterns in response to various external deterministic and random stimuli, especially additional currents. In this paper, neuronal firing patterns from bursting to spiking, induced by additional direct and stochastic currents, are explored in rest states corresponding to two values of the parameter VK in the Chay neuron system. Three cases are considered by numerical simulation and fast/slow dynamic analysis, in which only the direct current or the stochastic current exists, or the direct and stochastic currents coexist. Meanwhile, several important bursting patterns in neuronal experiments, such as the period-1 "circle/homoclinic" bursting and the integer multiple "fold/homoclinic" bursting with one spike per burst, as well as the transition from integer multiple bursting to period-1 "circle/homoclinic" bursting and that from stochastic "Hopf/homoclinic" bursting to "Hopf/homoclinic" bursting, are investigated in detail.
Spike generation estimated from stationary spike trains in a variety of neurons in vivo.
Spanne, Anton; Geborek, Pontus; Bengtsson, Fredrik; Jörntell, Henrik
2014-01-01
To any model of brain function, the variability of neuronal spike firing is a problem that needs to be taken into account. Whereas the synaptic integration can be described in terms of the original Hodgkin-Huxley (H-H) formulations of conductance-based electrical signaling, the transformation of the resulting membrane potential into patterns of spike output is subjected to stochasticity that may not be captured with standard single neuron H-H models. The dynamics of the spike output is dependent on the normal background synaptic noise present in vivo, but the neuronal spike firing variability in vivo is not well studied. In the present study, we made long-term whole cell patch clamp recordings of stationary spike firing states across a range of membrane potentials from a variety of subcortical neurons in the non-anesthetized, decerebrated state in vivo. Based on the data, we formulated a simple, phenomenological model of the properties of the spike generation in each neuron that accurately captured the stationary spike firing statistics across all membrane potentials. The model consists of a parametric relationship between the mean and standard deviation of the inter-spike intervals, where the parameter is linearly related to the injected current over the membrane. This enabled it to generate accurate approximations of spike firing also under inhomogeneous conditions with input that varies over time. The parameters describing the spike firing statistics for different neuron types overlapped extensively, suggesting that the spike generation had similar properties across neurons.
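A parametric mean-SD relationship of the kind described above can be instantiated, for illustration, as a gamma renewal process whose shape and scale are chosen to match a prescribed ISI mean and standard deviation. This is a hedged sketch, not the paper's fitted parametric form; `mu` and `sigma` are assumed to be read off such a relationship at a given injected current:

```python
import numpy as np

def spike_train_from_isi_stats(mu, sigma, n_spikes=5000, seed=0):
    """Stationary spike train whose ISIs have prescribed mean mu and SD sigma,
    realized as a gamma renewal process (one convenient instantiation of a
    mean-SD relationship; not the published model's exact form)."""
    rng = np.random.default_rng(seed)
    shape = (mu / sigma) ** 2           # gamma shape fixes the CV
    scale = sigma ** 2 / mu             # gamma scale then fixes the mean
    isis = rng.gamma(shape, scale, size=n_spikes)
    return np.cumsum(isis), isis

times, isis = spike_train_from_isi_stats(mu=50.0, sigma=20.0)   # units: ms
```

Slowly varying input can then be approximated by updating `mu` and `sigma` along the mean-SD curve as the membrane current changes.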
Modeling spiking behavior of neurons with time-dependent Poisson processes.
Shinomoto, S; Tsubo, Y
2001-10-01
Three kinds of interval statistics, as represented by the coefficient of variation, the skewness coefficient, and the correlation coefficient of consecutive intervals, are evaluated for three kinds of time-dependent Poisson processes: pulse regulated, sinusoidally regulated, and doubly stochastic. Among these three processes, the sinusoidally regulated and doubly stochastic Poisson processes, in the case when the spike rate varies slowly compared with the mean interval between spikes, are found to be consistent with the three statistical coefficients exhibited by data recorded from neurons in the prefrontal cortex of monkeys.
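A minimal sketch of the sinusoidally regulated case: spikes are generated by thinning a homogeneous Poisson process at the peak rate, and the three interval statistics are then estimated from the ISI sequence. When the modulation is slow compared with the mean interval, the CV exceeds 1 and consecutive intervals are positively correlated, consistent with the regime discussed above (rate and modulation values are illustrative):

```python
import numpy as np

def sinusoidal_poisson(rate0=20.0, depth=0.8, freq=0.5, t_max=2000.0, seed=0):
    """Inhomogeneous Poisson spikes with r(t) = rate0*(1 + depth*sin(2*pi*freq*t)),
    generated by thinning a homogeneous process at the peak rate r_max."""
    rng = np.random.default_rng(seed)
    r_max = rate0 * (1.0 + depth)
    t, spikes = 0.0, []
    while t < t_max:
        t += rng.exponential(1.0 / r_max)
        r = rate0 * (1.0 + depth * np.sin(2.0 * np.pi * freq * t))
        if rng.random() < r / r_max:        # accept with probability r(t)/r_max
            spikes.append(t)
    return np.asarray(spikes)

def isi_stats(spikes):
    """Coefficient of variation, skewness coefficient, and lag-1 serial
    correlation of the inter-spike-interval sequence."""
    isi = np.diff(spikes)
    m, s = isi.mean(), isi.std()
    cv = s / m
    skew = np.mean(((isi - m) / s) ** 3)
    corr = np.corrcoef(isi[:-1], isi[1:])[0, 1]
    return cv, skew, corr
```

The same three statistics can be computed for recorded spike trains and compared against each candidate process, as done in the paper.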
Banerjee, Arunava
2016-05-01
We derive a synaptic weight update rule for learning temporally precise spike train-to-spike train transformations in multilayer feedforward networks of spiking neurons. The framework, aimed at seamlessly generalizing error backpropagation to the deterministic spiking neuron setting, is based strictly on spike timing and avoids invoking concepts pertaining to spike rates or probabilistic models of spiking. The derivation is founded on two innovations. First, an error functional is proposed that compares the spike train emitted by the output neuron of the network to the desired spike train by way of their putative impact on a virtual postsynaptic neuron. This formulation sidesteps the need for spike alignment and leads to closed-form solutions for all quantities of interest. Second, virtual assignment of weights to spikes rather than synapses enables a perturbation analysis of individual spike times and synaptic weights of the output, as well as all intermediate neurons in the network, which yields the gradients of the error functional with respect to the said entities. Learning proceeds via a gradient descent mechanism that leverages these quantities. Simulation experiments demonstrate the efficacy of the proposed learning framework. The experiments also highlight asymmetries between synapses on excitatory and inhibitory neurons.
Truccolo, Wilson
2017-01-01
Point process generalized linear models (PP-GLMs) provide an important statistical framework for modeling spiking activity in single-neurons and neuronal networks. Stochastic stability is essential when sampling from these models, as done in computational neuroscience to analyze statistical properties of neuronal dynamics and in neuro-engineering to implement closed-loop applications. Here we show, however, that despite passing common goodness-of-fit tests, PP-GLMs estimated from data are often unstable, leading to divergent firing rates. The inclusion of absolute refractory periods is not a satisfactory solution since the activity then typically settles into unphysiological rates. To address these issues, we derive a framework for determining the existence and stability of fixed points of the expected conditional intensity function (CIF) for general PP-GLMs. Specifically, in nonlinear Hawkes PP-GLMs, the CIF is expressed as a function of the previous spike history and exogenous inputs. We use a mean-field quasi-renewal (QR) approximation that decomposes spike history effects into the contribution of the last spike and an average of the CIF over all spike histories prior to the last spike. Fixed points for stationary rates are derived as self-consistent solutions of integral equations. Bifurcation analysis and the number of fixed points predict that the original models can show stable, divergent, and metastable (fragile) dynamics. For fragile models, fluctuations of the single-neuron dynamics predict expected divergence times after which rates approach unphysiologically high values. This metric can be used to estimate the probability of rates to remain physiological for given time periods, e.g., for simulation purposes. We demonstrate the use of the stability framework using simulated single-neuron examples and neurophysiological recordings. Finally, we show how to adapt PP-GLM estimation procedures to guarantee model stability. Overall, our results provide a
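Fixed points of the expected conditional intensity can be found numerically as self-consistent solutions of λ = f(base + ηλ), where η stands for the integrated spike-history kernel. The sketch below assumes an exponential nonlinearity and uses a scan plus bisection; the stability test |η f'| < 1 is a one-dimensional caricature of the analysis described above, not the paper's full quasi-renewal machinery:

```python
import math

def cif(rate, base, eta):
    """Mean-field map lambda -> f(base + eta*lambda), exponential nonlinearity."""
    return math.exp(base + eta * rate)

def fixed_points(base, eta, lam_max=200.0, n=200000):
    """Bracket sign changes of g(lam) = f(base + eta*lam) - lam, refine each
    root by bisection, and classify stability via |eta * f'| at the root
    (for the exponential nonlinearity, f' = f)."""
    g = lambda lam: cif(lam, base, eta) - lam
    roots, step, prev = [], lam_max / n, g(0.0)
    for k in range(1, n + 1):
        cur = g(k * step)
        if prev * cur < 0.0:                    # bracketed root
            lo, hi = (k - 1) * step, k * step
            for _ in range(60):
                mid = 0.5 * (lo + hi)
                if g(lo) * g(mid) <= 0.0:
                    hi = mid
                else:
                    lo = mid
            lam = 0.5 * (lo + hi)
            stable = abs(eta * cif(lam, base, eta)) < 1.0
            roots.append((lam, stable))
        prev = cur
    return roots
```

With inhibitory history (η < 0) a single stable rate appears; with excitatory history (η > 0) a stable low-rate fixed point can coexist with an unstable one, beyond which rates diverge, mirroring the stable/divergent/fragile regimes described above.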
Luo, X; Gee, S; Sohal, V; Small, D
2016-02-10
Optogenetics is a new tool to study neuronal circuits that have been genetically modified to allow stimulation by flashes of light. We study recordings from single neurons within neural circuits under optogenetic stimulation. The data from these experiments present a statistical challenge of modeling a high-frequency point process (neuronal spikes) while the input is another high-frequency point process (light flashes). We further develop a generalized linear model approach to model the relationships between two point processes, employing additive point-process response functions. The resulting model, point-process responses for optogenetics (PRO), provides explicit nonlinear transformations to link the input point process with the output one. Such response functions may provide important and interpretable scientific insights into the properties of the biophysical process that governs neural spiking in response to optogenetic stimulation. We validate and compare the PRO model using a real dataset and simulations, and our model yields a superior area-under-the-curve value as high as 93% for predicting every future spike. For our experiment on the recurrent layer V circuit in the prefrontal cortex, the PRO model provides evidence that neurons integrate their inputs in a sophisticated manner. Another use of the model is that it enables understanding how neural circuits are altered under various disease conditions and/or experimental conditions by comparing the PRO parameters. Copyright © 2015 John Wiley & Sons, Ltd.
Stochastic optimal control of single neuron spike trains
DEFF Research Database (Denmark)
Iolov, Alexandre; Ditlevsen, Susanne; Longtin, André
2014-01-01
Objective. External control of spike times in single neurons can reveal important information about a neuron's sub-threshold dynamics that lead to spiking, and has the potential to improve brain–machine interfaces and neural prostheses. The goal of this paper is the design of optimal electrical...... stimulation of a neuron to achieve a target spike train under the physiological constraint to not damage tissue. Approach. We pose a stochastic optimal control problem to precisely specify the spike times in a leaky integrate-and-fire (LIF) model of a neuron with noise assumed to be of intrinsic or synaptic...... to the spike times (open-loop control). Main results. We have developed a stochastic optimal control algorithm to obtain precise spike times. It is applicable in both the supra-threshold and sub-threshold regimes, under open-loop and closed-loop conditions and with an arbitrary noise intensity; the accuracy of control degrades with increasing intensity of the noise. Simulations show that our algorithms produce the desired results for the LIF model, but also for the case where the neuron dynamics are given by more complex models than the LIF model. This is illustrated explicitly using the Morris–Lecar spiking......
Luthman, Johannes; Hoebeek, Freek E; Maex, Reinoud; Davey, Neil; Adams, Rod; De Zeeuw, Chris I; Steuber, Volker
2011-12-01
Neurons in the cerebellar nuclei (CN) receive inhibitory inputs from Purkinje cells in the cerebellar cortex and provide the major output from the cerebellum, but their computational function is not well understood. It has recently been shown that the spike activity of Purkinje cells is more regular than previously assumed and that this regularity can affect motor behaviour. We use a conductance-based model of a CN neuron to study the effect of the regularity of Purkinje cell spiking on CN neuron activity. We find that increasing the irregularity of Purkinje cell activity accelerates the CN neuron spike rate and that the mechanism of this recoding of input irregularity as output spike rate depends on the number of Purkinje cells converging onto a CN neuron. For high convergence ratios, the irregularity induced spike rate acceleration depends on short-term depression (STD) at the Purkinje cell synapses. At low convergence ratios, or for synchronised Purkinje cell input, the firing rate increase is independent of STD. The transformation of input irregularity into output spike rate occurs in response to artificial input spike trains as well as to spike trains recorded from Purkinje cells in tottering mice, which show highly irregular spiking patterns. Our results suggest that STD may contribute to the accelerated CN spike rate in tottering mice and they raise the possibility that the deficits in motor control in these mutants partly result as a pathological consequence of this natural form of plasticity.
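The role of short-term depression in this recoding can be illustrated with the Tsodyks-Markram resource variable: at the same mean rate, an irregular (Poisson) train depresses a synapse more on average than a regular one, so the mean transmitted drive is lower, which for inhibitory Purkinje cell input would accelerate the target neuron. The release fraction `u` and recovery time `tau_rec` below are illustrative, not fitted to Purkinje cell synapses:

```python
import numpy as np

def mean_synaptic_drive(isis, u=0.5, tau_rec=0.5):
    """Average efficacy per spike of a depressing synapse with resource
    variable x (Tsodyks-Markram style); isis and tau_rec in seconds."""
    x, total = 1.0, 0.0
    for isi in isis:
        total += u * x                                  # efficacy of this spike
        x *= 1.0 - u                                    # resources consumed
        x = 1.0 - (1.0 - x) * np.exp(-isi / tau_rec)    # recovery until next spike
    return total / len(isis)

rate, n = 20.0, 20000                                   # same mean rate (Hz)
rng = np.random.default_rng(1)
drive_regular = mean_synaptic_drive(np.full(n, 1.0 / rate))
drive_irregular = mean_synaptic_drive(rng.exponential(1.0 / rate, n))
```

Because recovery between spikes is a concave function of the interval, interval variability lowers the mean available resources, which is the direction of the effect discussed above.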
Urdapilleta, Eugenio
2016-01-01
Spike generation in neurons produces a temporal point process, whose statistics is governed by intrinsic phenomena and the external incoming inputs to be coded. In particular, spike-evoked adaptation currents support a slow temporal process that conditions spiking probability at the present time according to past activity. In this work, we study the statistics of interspike interval correlations arising in such non-renewal spike trains, for a neuron model that reproduces different spike modes in a small adaptation scenario. We found that correlations are stronger as the neuron fires at a particular firing rate, which is defined by the adaptation process. When set in a subthreshold regime, the neuron may sustain this particular firing rate, and thus induce correlations, by noise. Given that, in this regime, interspike intervals are negatively correlated at any lag, this effect surprisingly implies a reduction in the variability of the spike count statistics at a finite noise intensity.
Reliability of Spike Timing in Neocortical Neurons
Mainen, Zachary F.; Sejnowski, Terrence J.
1995-06-01
It is not known whether the variability of neural activity in the cerebral cortex carries information or reflects noisy underlying mechanisms. In an examination of the reliability of spike generation using recordings from neurons in rat neocortical slices, the precision of spike timing was found to depend on stimulus transients. Constant stimuli led to imprecise spike trains, whereas stimuli with fluctuations resembling synaptic activity produced spike trains with timing reproducible to less than 1 millisecond. These data suggest a low intrinsic noise level in spike generation, which could allow cortical neurons to accurately transform synaptic input into spike sequences, supporting a possible role for spike timing in the processing of cortical information by the neocortex.
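The experiment can be caricatured with a noisy leaky integrate-and-fire neuron: across repeated trials with private noise, the timing of a given spike is far more reproducible under a frozen fluctuating stimulus than under a constant one, because timing errors accumulate from spike to spike only in the constant case. Time constants, noise level, and stimulus statistics here are illustrative assumptions, not the slice protocol:

```python
import numpy as np

def lif_trial(stimulus, noise_sd, rng, dt=0.1, tau=20.0, v_th=1.0):
    """One trial of a leaky integrate-and-fire neuron with private Gaussian noise."""
    noise = noise_sd * np.sqrt(dt) * rng.standard_normal(len(stimulus))
    v, spikes = 0.0, []
    for k, i_t in enumerate(stimulus):
        v += dt / tau * (i_t - v) + noise[k]
        if v >= v_th:                       # threshold crossing: spike and reset
            spikes.append(k * dt)
            v = 0.0
    return spikes

def spike_time_jitter(stimulus, idx=10, n_trials=30, seed=0):
    """SD across trials of the time of spike number idx (0-based)."""
    rng = np.random.default_rng(seed)
    return float(np.std([lif_trial(stimulus, 0.05, rng)[idx]
                         for _ in range(n_trials)]))

n, seg = 20000, 50                          # 2 s at dt = 0.1 ms, 5 ms segments
stim_rng = np.random.default_rng(42)
frozen = 1.5 + 1.0 * np.repeat(stim_rng.standard_normal(n // seg), seg)
constant = np.full(n, 1.5)                  # same mean drive, no transients
```

The frozen stimulus is identical on every trial (only the noise differs), which is the analogue of the repeated fluctuating-current injections in the recordings.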
Analysis of Neuronal Spike Trains, Deconstructed.
Aljadeff, Johnatan; Lansdell, Benjamin J; Fairhall, Adrienne L; Kleinfeld, David
2016-07-20
As information flows through the brain, neuronal firing progresses from encoding the world as sensed by the animal to driving the motor output of subsequent behavior. One of the more tractable goals of quantitative neuroscience is to develop predictive models that relate the sensory or motor streams with neuronal firing. Here we review and contrast analytical tools used to accomplish this task. We focus on classes of models in which the external variable is compared with one or more feature vectors to extract a low-dimensional representation, the history of spiking and other variables are potentially incorporated, and these factors are nonlinearly transformed to predict the occurrences of spikes. We illustrate these techniques in application to datasets of different degrees of complexity. In particular, we address the fitting of models in the presence of strong correlations in the external variable, as occurs in natural sensory stimuli and in movement. Spectral correlation between predicted and measured spike trains is introduced to contrast the relative success of different methods.
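The problem of fitting feature vectors under strong stimulus correlations can be sketched with a spike-triggered average: for a temporally correlated Gaussian stimulus, the raw STA is biased toward a smoothed copy of the true filter, and multiplying by the inverse stimulus covariance (a decorrelated or "whitened" STA) largely removes that bias. The filter, nonlinearity, and stimulus statistics below are assumptions chosen for the demonstration:

```python
import numpy as np

rng = np.random.default_rng(7)
T, L = 100000, 20                           # time bins, filter length
kern = 0.85 ** np.arange(40)                # exponential smoothing -> correlated stimulus
stim = np.convolve(rng.standard_normal(T + L), kern)[:T + L]
k_true = np.exp(-np.arange(L) / 5.0) * np.sin(np.arange(L) / 2.0)   # assumed filter

# lagged design matrix: row t holds the stimulus history feeding the filter
X = np.stack([stim[t - L:t][::-1] for t in range(L, T + L)])
rate = np.maximum(X @ k_true, 0.0)          # rectifying output nonlinearity
spikes = rng.poisson(0.3 * rate)            # spike counts per bin

sta = spikes @ X / spikes.sum()             # raw spike-triggered average (biased)
C = X.T @ X / X.shape[0]                    # empirical stimulus autocovariance
k_white = np.linalg.solve(C, sta)           # decorrelated ("whitened") STA

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
```

For Gaussian stimuli the whitened STA is proportional to the true filter for any monotone nonlinearity (Bussgang-type argument), which is why the correction matters for natural-stimulus datasets of the kind discussed above.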
Implementing Signature Neural Networks with Spiking Neurons.
Carrillo-Medina, José Luis; Latorre, Roberto
2016-01-01
Spiking Neural Networks constitute the most promising approach to developing realistic Artificial Neural Networks (ANNs). Unlike traditional firing rate-based paradigms, information coding in spiking models is based on the precise timing of individual spikes. It has been demonstrated that spiking ANNs can be successfully and efficiently applied to multiple realistic problems solvable with traditional strategies (e.g., data classification or pattern recognition). In recent years, major breakthroughs in neuroscience research have discovered new relevant computational principles in different living neural systems. Could ANNs benefit from some of these recent findings providing novel elements of inspiration? This is an intriguing question for the research community, and the development of spiking ANNs including novel bio-inspired information coding and processing strategies is gaining attention. From this perspective, in this work, we adapt the core concepts of the recently proposed Signature Neural Network paradigm, i.e., neural signatures to identify each unit in the network, local information contextualization during the processing, and multicoding strategies for information propagation regarding the origin and the content of the data, to be employed in a spiking neural network. To the best of our knowledge, none of these mechanisms has been used yet in the context of ANNs of spiking neurons. This paper provides a proof-of-concept for their applicability in such networks. Computer simulations show that a simple network model like the one discussed here exhibits complex self-organizing properties. The combination of multiple simultaneous encoding schemes allows the network to generate coexisting spatio-temporal patterns of activity encoding information in different spatio-temporal spaces. As a function of the network and/or intra-unit parameters shaping the corresponding encoding modality, different forms of competition among the evoked patterns can emerge even in the absence of inhibitory connections.
Directory of Open Access Journals (Sweden)
Don Patrick eBischop
2012-07-01
Full Text Available Calcium binding proteins, such as parvalbumin, are abundantly expressed in very distinctive patterns in the central nervous system, but their physiological function remains poorly understood. Notably, at the level of the striatum, parvalbumin is only expressed in the fast spiking (FS) interneurons, which form an inhibitory network modulating the output of the striatum by synchronizing medium-sized spiny neurons (MSNs). So far the existing conductance-based computational models for FS neurons did not allow the study of the coupling between parvalbumin concentration and electrical activity. In the present paper, we propose a new mathematical model for the striatal FS interneurons that includes apamin-sensitive small-conductance Ca²⁺-dependent K⁺ (SK) channels and takes into account the presence of a calcium buffer. Our results demonstrate that a variation in the concentration of parvalbumin can modulate substantially the intrinsic excitability of the FS interneurons and therefore may be involved in the information processing at the striatal level.
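The coupling between buffer concentration and free calcium can be sketched with a single-compartment buffering model: a brief Ca²⁺ influx, first-order extrusion, and a parvalbumin-like buffer with simple on/off kinetics. The rate constants and concentrations below are illustrative, not the values of the published model:

```python
import numpy as np

def ca_transient(buffer_total, t_max=0.05, dt=1e-5):
    """Peak free Ca2+ after a brief influx pulse, with first-order extrusion
    and a parvalbumin-like buffer (illustrative rates; units M and s)."""
    k_on, k_off = 1e8, 50.0             # buffer binding / unbinding rates
    gamma, ca_rest = 500.0, 1e-7        # extrusion rate (1/s), resting Ca2+
    ca, cab, peak = ca_rest, 0.0, ca_rest
    for k in range(int(t_max / dt)):
        influx = 5e-4 if k * dt < 0.002 else 0.0     # M/s during a 2 ms pulse
        bind = k_on * ca * (buffer_total - cab) - k_off * cab
        ca += dt * (influx - gamma * (ca - ca_rest) - bind)
        cab += dt * bind
        peak = max(peak, ca)
    return peak
```

Raising `buffer_total` (more parvalbumin) blunts the free-calcium transient, which in the full model changes how strongly SK channels are activated and hence the interneuron's excitability.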
Directory of Open Access Journals (Sweden)
K. Usha
2016-09-01
Full Text Available This paper evaluates the change in metabolic energy required to maintain the signalling activity of neurons in the presence of an external electric field. We have analysed the Hodgkin–Huxley type conductance-based fast spiking neuron model as an electrical circuit while changing the frequency and amplitude of the applied electric field. The study has shown that the presence of an electric field increases the membrane potential, the electrical energy supply, and the metabolic energy consumption. As the amplitude of the applied electric field increases at constant frequency, the membrane potential increases, and consequently the electrical energy supply and metabolic energy consumption increase. On increasing the frequency of the applied field, the peak value of the membrane potential after depolarization gradually decreases; as a result, the electrical energy supply decreases, which leads to a lower rate of hydrolysis of ATP molecules.
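The bookkeeping from membrane currents to metabolic cost can be illustrated with the classic Hodgkin-Huxley equations: integrating the inward Na⁺ current gives the charge the Na/K pump must later extrude, at roughly one ATP per three Na⁺ ions. This sketch omits the paper's field-coupling terms and uses the standard squid-axon parameters, so it only demonstrates the current-to-ATP accounting:

```python
import numpy as np

def hh_sodium_cost(i_app=10.0, t_max=20.0, dt=0.01):
    """Classic Hodgkin-Huxley point neuron (uA/cm^2, mV, ms; C_m = 1 uF/cm^2).
    Returns peak voltage, integrated inward Na+ charge (nC/cm^2), and the
    ATP count needed to pump that charge out at 3 Na+ per ATP."""
    g_na, g_k, g_l = 120.0, 36.0, 0.3
    e_na, e_k, e_l = 50.0, -77.0, -54.4
    v, m, h, n = -65.0, 0.05, 0.6, 0.32
    q_na, v_peak = 0.0, v
    for _ in range(int(t_max / dt)):
        a_m = 0.1 * (v + 40.0) / (1.0 - np.exp(-(v + 40.0) / 10.0))
        b_m = 4.0 * np.exp(-(v + 65.0) / 18.0)
        a_h = 0.07 * np.exp(-(v + 65.0) / 20.0)
        b_h = 1.0 / (1.0 + np.exp(-(v + 35.0) / 10.0))
        a_n = 0.01 * (v + 55.0) / (1.0 - np.exp(-(v + 55.0) / 10.0))
        b_n = 0.125 * np.exp(-(v + 65.0) / 80.0)
        i_na = g_na * m ** 3 * h * (v - e_na)       # negative = inward
        i_k = g_k * n ** 4 * (v - e_k)
        i_l = g_l * (v - e_l)
        v += dt * (i_app - i_na - i_k - i_l)
        m += dt * (a_m * (1.0 - m) - b_m * m)
        h += dt * (a_h * (1.0 - h) - b_h * h)
        n += dt * (a_n * (1.0 - n) - b_n * n)
        v_peak = max(v_peak, v)
        q_na += dt * max(0.0, -i_na)                # inward Na+ charge, nC/cm^2
    atp = q_na * 1e-9 / (3 * 1.602e-19)             # ATP molecules per cm^2
    return v_peak, q_na, atp
```

Repeating the run with a time-varying `i_app` (e.g. a sinusoid standing in for the field-induced current) shows how stimulus amplitude and frequency change the Na⁺ load and hence the ATP demand.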
Tapson, J; van Schaik, A; Etienne-Cummings, R
2008-01-01
We present a first-order non-homogeneous Markov model for the interspike-interval density of a continuously stimulated spiking neuron. The model allows the conditional interspike-interval density and the stationary interspike-interval density to be expressed as products of two separate functions, one of which describes only the neuron characteristics, and the other of which describes only the signal characteristics. This allows the use of this model to predict the response when the underlying neuron model is not known or well determined. The approximation shows particularly clearly that signal autocorrelations and cross-correlations arise as natural features of the interspike-interval density, and are particularly clear for small signals and moderate noise. We show that this model simplifies the design of spiking neuron cross-correlation systems, and describe a four-neuron mutual inhibition network that generates a cross-correlation output for two input signals.
Neuronal spike initiation modulated by extracellular electric fields.
Directory of Open Access Journals (Sweden)
Guo-Sheng Yi
Full Text Available Based on a reduced two-compartment model, the dynamical and biophysical mechanisms underlying neuronal spike initiation in response to extracellular electric fields are investigated in this paper. With stability and phase plane analysis, we first investigate in detail the dynamical properties of spike initiation as a function of the geometric parameter and the internal coupling conductance. The geometric parameter is the ratio between soma area and total membrane area, which describes the proportion of membrane area occupied by the somatic chamber. It is found that varying it can qualitatively alter the bifurcation structures of the equilibrium as well as the neuronal phase portraits, which remain unchanged when varying the internal coupling conductance. By analyzing the activation properties of somatic membrane currents at subthreshold potentials, we explore the relevant biophysical basis of the spike initiation dynamics induced by these two parameters. It is observed that increasing the geometric parameter greatly decreases the intensity of the internal current flowing from soma to dendrite, which switches the spike initiation dynamics from a Hopf bifurcation to a SNIC bifurcation; increasing the internal coupling conductance increases this outward internal current, but over a range so small that it cannot qualitatively alter the spike initiation dynamics. These results highlight that the neuronal geometric parameter is a crucial factor in determining the spike initiation dynamics in response to electric fields. The finding is useful for interpreting the functional significance of neuronal biophysical properties in their encoding dynamics, which could contribute to uncovering how neurons encode electric field signals.
Communication through resonance in spiking neuronal networks.
Hahn, Gerald; Bujan, Alejandro F; Frégnac, Yves; Aertsen, Ad; Kumar, Arvind
2014-08-01
The cortex processes stimuli through a distributed network of specialized brain areas. This processing requires mechanisms that can route neuronal activity across weakly connected cortical regions. Routing models proposed thus far are either limited to propagation of spiking activity across strongly connected networks or require distinct mechanisms that create local oscillations and establish their coherence between distant cortical areas. Here, we propose a novel mechanism which explains how synchronous spiking activity propagates across weakly connected brain areas supported by oscillations. In our model, oscillatory activity unleashes network resonance that amplifies feeble synchronous signals and promotes their propagation along weak connections ("communication through resonance"). The emergence of coherent oscillations is a natural consequence of synchronous activity propagation and therefore the assumption of different mechanisms that create oscillations and provide coherence is not necessary. Moreover, the phase-locking of oscillations is a side effect of communication rather than its requirement. Finally, we show how the state of ongoing activity could affect the communication through resonance and propose that modulations of the ongoing activity state could influence information processing in distributed cortical networks.
Spike Code Flow in Cultured Neuronal Networks.
Tamura, Shinichi; Nishitani, Yoshi; Hosokawa, Chie; Miyoshi, Tomomitsu; Sawai, Hajime; Kamimura, Takuya; Yagi, Yasushi; Mizuno-Matsumoto, Yuko; Chen, Yen-Wei
2016-01-01
We observed spike trains produced by one-shot electrical stimulation with 8 × 8 multielectrodes in cultured neuronal networks. Each electrode accepted spikes from several neurons. We extracted short codes from the spike trains and obtained a code spectrum with a nominal time accuracy of 1%. We then constructed code flow maps as movies of the electrode array to observe the flow of the codes "1101" and "1011", which are typical pseudorandom sequences that we often encountered in the literature and in our experiments. They seemed to flow from one electrode to a neighboring one and maintained their shape to some extent. To quantify the flow, we calculated the "maximum cross-correlations" among neighboring electrodes to find the direction of maximum flow of the codes with lengths less than 8. Normalized maximum cross-correlations were almost constant irrespective of the code. Furthermore, if the spike trains were shuffled in interval order or across electrodes, the correlations became significantly smaller. Thus, the analysis suggested that local codes of approximately constant shape propagated and conveyed information across the network. Hence, the codes can serve as visible and trackable marks of propagating spike waves as well as a means of evaluating information flow in the neuronal network.
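The code-flow analysis can be sketched as follows: binarize each electrode's spike train into bins, list the 4-bit word beginning at every bin, and take the normalized cross-correlation of the occurrence series of a given code (e.g. "1101") between two electrodes, maximized over lags. The bin width, code, and synthetic trains below are assumptions for illustration, not the experimental parameters:

```python
import numpy as np

def extract_codes(spike_times, bin_ms=10.0, word_len=4, t_max=10000.0):
    """Binarize a spike train and list the binary word of length word_len
    starting at every bin (e.g. "1101")."""
    bins = np.zeros(int(t_max / bin_ms), dtype=int)
    idx = (np.asarray(spike_times) / bin_ms).astype(int)
    bins[idx[idx < len(bins)]] = 1
    return ["".join(map(str, bins[i:i + word_len]))
            for i in range(len(bins) - word_len)]

def code_occurrence(words, code):
    return np.array([w == code for w in words], dtype=float)

def max_crosscorr(a, b, max_lag=5):
    """Normalized cross-correlation of two occurrence series, maximized over
    lags 0..max_lag (assumes the code occurs on both electrodes)."""
    a, b = a - a.mean(), b - b.mean()
    denom = np.sqrt((a @ a) * (b @ b))
    return max((a[:len(a) - lag] @ b[lag:]) / denom
               for lag in range(max_lag + 1))

rng = np.random.default_rng(3)
spk_a = np.cumsum(rng.exponential(20.0, 800))   # ~50 Hz Poisson train (ms)
spk_b = spk_a + 20.0                            # delayed copy: a "propagated" code
spk_c = np.cumsum(rng.exponential(20.0, 800))   # independent electrode
occ = lambda spk: code_occurrence(extract_codes(spk), "1101")
```

A delayed copy of a train yields a near-unity maximum cross-correlation at the propagation lag, while an independent train does not; shuffling intervals or electrodes, as in the paper, destroys the correlation in the same way.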
Bistability induces episodic spike communication by inhibitory neurons in neuronal networks
Kazantsev, V. B.; Asatryan, S. Yu.
2011-09-01
Bistability is one of the important features of nonlinear dynamical systems. In neurodynamics, bistability has been found in the basic Hodgkin-Huxley equations describing the cell membrane dynamics. When the neuron is clamped near its threshold, a stable rest potential may coexist with a stable limit cycle describing periodic spiking. However, this effect is often neglected in network computations, where the neurons are typically reduced to threshold firing units (e.g., integrate-and-fire models). We found that bistability may induce spike communication between inhibitorily coupled neurons in a spiking network. The communication is realized in the form of episodic discharges with synchronous (correlated) spikes during the episodes. A spiking phase map is constructed to describe the synchronization and to estimate the basic spike phase-locking modes.
Spiking Neural P Systems with Neuron Division and Dissolution
Liu, Xiyu; Wang, Wenping
2016-01-01
Spiking neural P systems are a new candidate among spiking neural network models. By using neuron division and budding, such systems can generate exponential working space in linearly many computational steps, thus providing a way to solve computationally hard problems in feasible (linear or polynomial) time with a “time-space trade-off” strategy. In this work, a new mechanism called neuron dissolution is introduced, by which redundant neurons produced during the computation can be removed. As applications, uniform solutions to two NP-hard problems, the SAT problem and the Subset Sum problem, are constructed in linear time, working in a deterministic way. The neuron dissolution strategy is used to eliminate invalid solutions, and all answers to these two problems are encoded as indices of output neurons. Our results improve those obtained by Pan et al. (Science China Information Sciences, 2011, 1596-1607). PMID:27627104
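The time-space trade-off can be illustrated outside the P-system formalism: a breadth-first Subset Sum search in which each step splits every candidate in two (include or exclude the next value) loosely mirrors how neuron division buys exponentially large working space in linearly many steps. This is a conventional Python sketch, not an SN P system.

```python
def subset_sum(values, target):
    """Breadth-first doubling of partial sums: after processing all n
    values, every reachable subset sum exists, found in n 'division'
    steps at the cost of up to 2**n stored candidates."""
    sums = {0: ()}                      # reachable sum -> one witness subset
    for v in values:
        nxt = dict(sums)
        for s, subset in sums.items():
            nxt.setdefault(s + v, subset + (v,))
        sums = nxt                      # candidate set may double each step
    return sums.get(target)             # None plays the role of a silent output neuron

print(subset_sum([3, 34, 4, 12, 5, 2], 9))   # e.g. (4, 5)
```

The invalid-candidate pruning performed by neuron dissolution has no analogue here; a dict simply overwrites nothing and keeps one witness per sum.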
Inherently stochastic spiking neurons for probabilistic neural computation
Al-Shedivat, Maruan
2015-04-01
Neuromorphic engineering aims to design hardware that efficiently mimics neural circuitry and provides the means for emulating and studying neural systems. In this paper, we propose a new memristor-based neuron circuit that uniquely complements the scope of neuron implementations and follows the stochastic spike response model (SRM), which plays a cornerstone role in spike-based probabilistic algorithms. We demonstrate that the switching of the memristor is akin to the stochastic firing of the SRM. Our analysis and simulations show that the proposed neuron circuit satisfies a neural computability condition that enables probabilistic neural sampling and spike-based Bayesian learning and inference. Our findings constitute an important step towards memristive, scalable and efficient stochastic neuromorphic platforms. © 2015 IEEE.
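The stochastic SRM at the heart of such circuits can be summarized by an escape-rate picture: the instantaneous firing probability is a smooth function of the membrane potential. A minimal sketch, with illustrative threshold and steepness values (not the memristor's fitted parameters):

```python
import math, random

def srm_escape_rate(u, theta=1.0, beta=4.0):
    """Per-bin firing probability of a stochastic spike response model:
    a sigmoidal 'escape rate' of the membrane potential u. `theta` and
    `beta` are illustrative threshold/steepness values."""
    return 1.0 / (1.0 + math.exp(-beta * (u - theta)))

def simulate(u_trace, seed=0):
    """Draw a spike train from the escape rate, one Bernoulli trial per bin."""
    rng = random.Random(seed)
    return [int(rng.random() < srm_escape_rate(u)) for u in u_trace]

for u in (0.5, 1.0, 1.5):
    print(u, round(srm_escape_rate(u), 3))   # probability rises smoothly through threshold
```

In the paper's argument, it is the stochastic switching of the memristor that realizes this graded firing probability in hardware.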
Lymperopoulos, Ilias N; Ioannou, George D
2016-10-01
We develop and validate a model of the micro-level dynamics underlying the formation of macro-level information propagation patterns in online social networks. In particular, we address the dynamics at the level of the mechanism regulating a user's participation in an online information propagation process. We demonstrate that this mechanism can be realistically described by the dynamics of noisy spiking neurons driven by endogenous and exogenous, deterministic and stochastic stimuli representing the influence modulating one's intention to be an information spreader. Depending on the dynamically changing influence characteristics, time-varying propagation patterns emerge reflecting the temporal structure, strength, and signal-to-noise ratio characteristics of the stimulation driving the online users' information sharing activity. The proposed model constitutes an overarching, novel, and flexible approach to the modeling of the micro-level mechanisms whereby information propagates in online social networks. As such, it can be used for a comprehensive understanding of the online transmission of information, a process integral to the sociocultural evolution of modern societies. The proposed model is highly adaptable and suitable for the study of the propagation patterns of behavior, opinions, and innovations among others.
Nonlinear electronic circuit with neuron like bursting and spiking dynamics.
Savino, Guillermo V; Formigli, Carlos M
2009-07-01
It is difficult to design electronic nonlinear devices capable of reproducing complex oscillations because of the lack of general constructive rules, and because of stability problems related to the dynamical robustness of the circuits. This is particularly true for current analog electronic circuits that implement mathematical models of bursting and spiking neurons. Here we describe a novel, four-dimensional and dynamically robust nonlinear analog electronic circuit that is intrinsic excitable, and that displays frequency adaptation bursting and spiking oscillations. Despite differences from the classical Hodgkin-Huxley (HH) neuron model, its bifurcation sequences and dynamical properties are preserved, validating the circuit as a neuron model. The circuit's performance is based on a nonlinear interaction of fast-slow circuit blocks that can be clearly dissected, elucidating burst's starting, sustaining and stopping mechanisms, which may also operate in real neurons. Our analog circuit unit is easily linked and may be useful in building networks that perform in real-time.
Cannon, Jonathan
2017-01-01
Mutual information is a commonly used measure of communication between neurons, but little theory exists describing the relationship between mutual information and the parameters of the underlying neuronal interaction. Such a theory could help us understand how specific physiological changes affect the capacity of neurons to communicate synaptically and, in particular, could help us characterize the mechanisms by which neuronal dynamics gate the flow of information in the brain. Here we study a pair of linear-nonlinear-Poisson neurons coupled by a weak synapse. We derive an analytical expression describing the mutual information between their spike trains in terms of synapse strength, neuronal activation function, the time course of postsynaptic currents, and the time course of the background input received by the two neurons. This expression allows mutual information calculations that would otherwise be computationally intractable. We use this expression to analytically explore the interaction of excitation, information transmission, and the convexity of the activation function. Then, using this expression to quantify mutual information in simulations, we illustrate the information-gating effects of neural oscillations and oscillatory coherence, which may either increase or decrease the mutual information across the synapse depending on parameters. Finally, we show analytically that our results can quantitatively describe the selection of one information pathway over another when multiple sending neurons project weakly to a single receiving neuron.
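The linear-nonlinear-Poisson setup the analysis rests on can be sketched in a few lines: an (already filtered) drive passes through an exponential nonlinearity to set a per-bin spiking probability, and a weak synapse injects decaying postsynaptic currents into the receiver. All rates, gains, and time constants below are illustrative, not the paper's.

```python
import math, random

def lnp_spikes(drive, gain=1.0, dt=0.001, base_rate=20.0, seed=1):
    """Linear-nonlinear-Poisson neuron: exponential nonlinearity of the
    drive sets a Bernoulli spiking probability in each time bin."""
    rng = random.Random(seed)
    return [int(rng.random() < min(1.0, base_rate * dt * math.exp(gain * d)))
            for d in drive]

def psc(spikes, tau_steps=5, w=0.5):
    """Weak synapse: each presynaptic spike injects an exponentially
    decaying postsynaptic current of peak `w`."""
    out, cur = [], 0.0
    for s in spikes:
        cur = cur * math.exp(-1.0 / tau_steps) + w * s
        out.append(cur)
    return out

# shared slow oscillation as correlated background input to both cells
background = [math.sin(2 * math.pi * t / 100.0) for t in range(1000)]
pre = lnp_spikes(background, seed=1)
post = lnp_spikes([b + c for b, c in zip(background, psc(pre))], seed=2)
print(sum(pre), sum(post))   # spike counts of sender and receiver
```

Estimating the mutual information between `pre` and `post` is exactly the computation the paper's analytical expression replaces.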
Isolated neuron amplitude spike decrease under static magnetic fields
Azanza, María J.; del Moral, A.
1996-05-01
Isolated Helix aspersa neurons under sufficiently strong static magnetic fields B (0.07-0.7 T) show a decrease of the spike depolarization voltage of the form ∼exp(αB²), with α dependent on neuron parameters. A tentative model is proposed that explains this behaviour through a deactivation of Na+/K+-ATPase pumps due to protein superdiamagnetic rotation. Estimates are given for the number of membrane clusters and of proteins per cluster.
A memristive spiking neuron with firing rate coding
Directory of Open Access Journals (Sweden)
Marina eIgnatov
2015-10-01
Full Text Available Perception, decisions, and sensations are all encoded into trains of action potentials in the brain. The relation between stimulus strength and the all-or-nothing spiking of neurons is widely believed to be the basis of this coding. This insight initiated the development of spiking neuron models, one of today's most powerful conceptual tools for the analysis and emulation of neural dynamics. The success of electronic circuit models and their physical realization in silicon field-effect transistor circuits led to elegant technical approaches. Recently, the spectrum of electronic devices for neural computing has been extended by memristive devices, mainly used to emulate static synaptic functionality. Their capability to emulate neural activity was recently demonstrated using a memristive neuristor circuit, while a memristive neuron circuit has so far been elusive. Here, a spiking neuron model is experimentally realized in a compact circuit comprising memristive and memcapacitive devices based on the strongly correlated electron material vanadium dioxide (VO2) and on the chemical electromigration cell Ag/TiO2-x/Al. The circuit can emulate dynamical spiking patterns in response to an external stimulus, including adaptation, which is at the heart of firing rate coding as first observed by E. D. Adrian in 1926.
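Firing rate coding with adaptation, the behavior the circuit emulates, is conveniently captured in software by an adaptive integrate-and-fire model: a spike-triggered adaptation current lengthens successive interspike intervals under constant drive. This is a caricature with illustrative constants, not a model of the VO2 device.

```python
def adaptive_lif(i_ext, steps=2000, dt=0.1, tau_v=10.0, tau_a=100.0,
                 b=0.05, v_th=1.0):
    """Leaky integrate-and-fire neuron with a spike-triggered adaptation
    current `a`: under constant drive, interspike intervals lengthen.
    All constants are illustrative."""
    v, a, spikes = 0.0, 0.0, []
    for t in range(steps):
        v += dt * (-v / tau_v + i_ext - a)
        a += dt * (-a / tau_a)
        if v >= v_th:
            v = 0.0
            a += b               # each spike increments the adaptation current
            spikes.append(t)
    return spikes

spikes = adaptive_lif(0.3)
isis = [t2 - t1 for t1, t2 in zip(spikes, spikes[1:])]
print(isis)   # early intervals are short; adaptation lengthens later ones
```

The lengthening of intervals under a constant stimulus is exactly the adaptation signature Adrian reported in sensory nerve fibers.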
Directory of Open Access Journals (Sweden)
Praveen K Pilly
Full Text Available Medial entorhinal grid cells and hippocampal place cells provide neural correlates of spatial representation in the brain. A place cell typically fires whenever an animal is present in one or more spatial regions, or places, of an environment. A grid cell typically fires in multiple spatial regions that form a regular hexagonal grid structure extending throughout the environment. Different grid and place cells prefer spatially offset regions, with their firing fields increasing in size along the dorsoventral axes of the medial entorhinal cortex and hippocampus. The spacing between neighboring fields for a grid cell also increases along the dorsoventral axis. This article presents a neural model whose spiking neurons operate in a hierarchy of self-organizing maps, each obeying the same laws. This spiking GridPlaceMap model simulates how grid cells and place cells may develop. It responds to realistic rat navigational trajectories by learning grid cells with hexagonal grid firing fields of multiple spatial scales and place cells with one or more firing fields that match neurophysiological data about these cells and their development in juvenile rats. The place cells represent much larger spaces than the grid cells, which enables them to support navigational behaviors. Both self-organizing maps amplify and learn to categorize the most frequent and energetic co-occurrences of their inputs. The current results build upon a previous rate-based model of grid and place cell learning, and thus illustrate a general method for converting rate-based adaptive neural models, without the loss of any of their analog properties, into models whose cells obey spiking dynamics. New properties of the spiking GridPlaceMap model include the appearance of theta band modulation. The spiking model also opens a path for implementation in brain-emulating nanochips composed of networks of noisy spiking neurons with multiple-level adaptive weights for controlling autonomous
The stochastic properties of input spike trains control neuronal arithmetic.
Bures, Zbynek
2012-02-01
In the nervous system, the representation of signals is based predominantly on the rate and timing of neuronal discharges. In most everyday tasks, the brain has to carry out a variety of mathematical operations on the discharge patterns. Recent findings show that even single neurons are capable of performing basic arithmetic on sequences of spikes. However, the interaction of the two spike trains, and thus the resulting arithmetical operation, may be influenced by the stochastic properties of the interacting spike trains. If we represent the individual discharges as events of a random point process, then an arithmetical operation is given by the interaction of two point processes. Employing a probabilistic model based on detection of coincidence of random events and complementary computer simulations, we show that the point process statistics control the arithmetical operation being performed and, in particular, that it is possible to switch from subtraction to division solely by changing the distribution of the inter-event intervals of the processes. Consequences of the model for the evaluation of binaural information in the auditory brainstem are demonstrated. The results accentuate the importance of the stochastic properties of neuronal discharge patterns for information processing in the brain; further studies related to neuronal arithmetic should therefore consider the statistics of the interacting spike trains.
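The coincidence-detection ingredient of such models can be sketched directly: for two independent Poisson trains the coincidence count is roughly multiplicative in the input rates, one of the regimes the interval statistics move between. Rates, duration, and coincidence window below are arbitrary choices, not the paper's.

```python
import random

def poisson_train(rate, duration, seed):
    """Homogeneous Poisson spike times (exponential inter-event intervals)."""
    rng, t, spikes = random.Random(seed), 0.0, []
    while True:
        t += rng.expovariate(rate)
        if t >= duration:
            return spikes
        spikes.append(t)

def coincidences(a, b, window):
    """Events of train a falling within `window` of at least one event of b."""
    return sum(1 for t in a if any(abs(t - s) <= window for s in b))

rate_a, rate_b, dur, w = 40.0, 20.0, 50.0, 0.002
a = poisson_train(rate_a, dur, seed=1)
b = poisson_train(rate_b, dur, seed=2)
# for independent Poisson trains the expected coincidence count is roughly
# len(a) * (2 * w * rate_b), i.e. multiplicative (division-like) in the rates
print(len(a), len(b), coincidences(a, b, w))
```

Replacing the Poisson trains with more regular ones changes this relation, which is the switch between operations the paper analyzes.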
Spiking models for level-invariant encoding
Directory of Open Access Journals (Sweden)
Romain eBrette
2012-01-01
Full Text Available Levels of ecological sounds vary over several orders of magnitude, but the firing rate and membrane potential of a neuron are much more limited in range. In binaural neurons of the barn owl, tuning to interaural delays is independent of level differences. Yet a monaural neuron with a fixed threshold should fire earlier in response to louder sounds, which would disrupt the tuning of these neurons. How could spike timing be independent of input level? Here I derive theoretical conditions for a spiking model to be insensitive to input level. The key property is a dynamic change in spike threshold. I then show how level invariance can be physiologically implemented, with specific ionic channel properties. It appears that these ingredients are indeed present in monaural neurons of the sound localization pathway of birds and mammals.
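The level-invariance problem can be illustrated crudely: with a fixed threshold, a louder step input crosses threshold earlier, whereas a threshold that scales with input level leaves the crossing time unchanged. Scaling the threshold directly with amplitude is only a stand-in for the dynamic threshold mechanism the paper derives.

```python
def first_crossing(amplitude, theta, tau=0.005, dt=1e-4, max_steps=10000):
    """Time for a passive membrane driven by a step of size `amplitude`
    to first reach threshold `theta` (forward-Euler integration; a
    closed form exists, but integrating keeps the sketch uniform)."""
    v = 0.0
    for i in range(1, max_steps + 1):
        v += dt * (amplitude - v) / tau
        if v >= theta:
            return i * dt
    return None

theta0 = 0.6
for amp in (1.0, 2.0, 8.0):
    fixed = first_crossing(amp, theta0)           # fixed threshold: earlier for louder input
    dynamic = first_crossing(amp, theta0 * amp)   # level-scaled threshold: invariant timing
    print(amp, fixed, dynamic)
```

The invariance here follows from the linearity of the membrane equation: scaling both input and threshold by the same factor leaves the crossing condition unchanged.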
Emergent properties of interacting populations of spiking neurons.
Cardanobile, Stefano; Rotter, Stefan
2011-01-01
Dynamic neuronal networks are a key paradigm of increasing importance in brain research, concerned with the functional analysis of biological neuronal networks and, at the same time, with the synthesis of artificial brain-like systems. In this context, neuronal network models serve as mathematical tools to understand the function of brains, but they might as well develop into future tools for enhancing certain functions of our nervous system. Here, we present and discuss our recent achievements in developing multiplicative point processes into a viable mathematical framework for spiking network modeling. The perspective is that the dynamic behavior of these neuronal networks is faithfully reflected by a set of non-linear rate equations, describing all interactions on the population level. These equations are similar in structure to Lotka-Volterra equations, well known from their use in modeling predator-prey relations in population biology, but abundant applications to economic theory have also been described. We present a number of biologically relevant examples for spiking network function, which can be studied with the help of the aforementioned correspondence between spike trains and specific systems of non-linear coupled ordinary differential equations. We claim that, enabled by the use of multiplicative point processes, we can make essential contributions to a more thorough understanding of the dynamical properties of interacting neuronal populations. PMID:22207844
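The Lotka-Volterra-type rate equations referred to above can be integrated directly; the coefficients here are generic textbook values, not fitted to any spiking network.

```python
def lotka_volterra(r0, steps=20000, dt=0.001, a=1.0, b=0.5, c=0.3, d=0.8):
    """Forward-Euler integration of a two-population Lotka-Volterra
    system, the structure the authors relate to population spike rates."""
    x, y = r0
    traj = [(x, y)]
    for _ in range(steps):
        dx = x * (a - b * y)     # first population grows, suppressed by the second
        dy = y * (c * x - d)     # second population is driven by the first
        x += dt * dx
        y += dt * dy
        traj.append((x, y))
    return traj

traj = lotka_volterra((2.0, 1.0))
xs = [p[0] for p in traj]
print(round(min(xs), 2), round(max(xs), 2))   # rates oscillate while staying positive
```

The multiplicative structure (each rate of change proportional to the population's own rate) is what makes the correspondence to multiplicative point processes natural.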
Computational properties of networks of synchronous groups of spiking neurons.
Dayhoff, Judith E
2007-09-01
We demonstrate a model in which synchronously firing ensembles of neurons are networked to produce computational results. Each ensemble is a group of biological integrate-and-fire spiking neurons, with probabilistic interconnections between groups. An analogy is drawn in which each individual processing unit of an artificial neural network corresponds to a neuronal group in a biological model. The activation value of a unit in the artificial neural network corresponds to the fraction of active neurons, synchronously firing, in a biological neuronal group. Weights of the artificial neural network correspond to the product of the interconnection density between groups, the group size of the presynaptic group, and the postsynaptic potential heights in the synchronous group model. All three of these parameters can modulate connection strengths between neuronal groups in the synchronous group models. We give an example of nonlinear classification (XOR) and a function approximation example in which the capability of the artificial neural network can be captured by a neural network model with biological integrate-and-fire neurons configured as a network of synchronously firing ensembles of such neurons. We point out that the general function approximation capability proven for feedforward artificial neural networks appears to be approximated by networks of neuronal groups that fire in synchrony, where the groups comprise integrate-and-fire neurons. We discuss the advantages of this type of model for biological systems, its possible learning mechanisms, and the associated timing relationships.
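The weight correspondence described above (interconnection density × presynaptic group size × postsynaptic potential height) can be checked with a small Monte Carlo sketch; all numbers are illustrative.

```python
import random

def group_drive(frac_active, group_size, density, psp, seed=0):
    """Drive delivered to one postsynaptic neuron by a synchronously
    firing group: each of the active presynaptic neurons connects with
    probability `density` and contributes one PSP of height `psp`.
    `frac_active` plays the role of the artificial unit's activation."""
    rng = random.Random(seed)
    active = int(round(frac_active * group_size))
    return sum(psp for _ in range(active) if rng.random() < density)

# the artificial network's weight is the product of the three parameters
w_effective = 0.3 * 200 * 0.05        # density * group size * PSP height
drive = group_drive(frac_active=1.0, group_size=200, density=0.3, psp=0.05)
print(w_effective, round(drive, 2))   # Monte Carlo drive fluctuates around the weight
```

Changing any one of the three factors rescales the effective weight, which is why all three can serve as plasticity knobs in the synchronous-group picture.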
Environmental Impacts on Spiking Properties in Hodgkin-Huxley Neuron with Direct Current Stimulus
Institute of Scientific and Technical Information of China (English)
YUAN Chang-Qing; ZHAO Tong-Jun; ZHAN Yong; ZHANG Su-Hua; LIU Hui; ZHANG Yu-Hong
2009-01-01
Based on the well-accepted Hodgkin-Huxley neuron model, neuronal intrinsic excitability is studied when the neuron is subject to varying environmental temperature, a typical pathway by which the environment regulates neuronal function. Computer simulations show that altering the environmental temperature can enhance or inhibit the neuronal intrinsic excitability and thereby influence the neuronal spiking properties. These environmental impacts can be understood as follows: the neuronal spiking threshold is essentially influenced by fluctuations in the environment. As the environmental temperature varies, burst spiking of the membrane voltage emerges because of the environment-dependent spiking threshold. Such bursting, induced by changes in the spiking threshold, is different from bursting excited by input currents or other stimuli.
Estimating nonstationary input signals from a single neuronal spike train
Kim, Hideaki; Shinomoto, Shigeru
2012-01-01
Neurons temporally integrate input signals, translating them into timed output spikes. Because neurons nonperiodically emit spikes, examining spike timing can reveal information about input signals, which are determined by activities in the populations of excitatory and inhibitory presynaptic neurons. Although a number of mathematical methods have been developed to estimate such input parameters as the mean and fluctuation of the input current, these techniques are based on the unrealistic as...
Emergent properties of interacting populations of spiking neurons
Directory of Open Access Journals (Sweden)
Stefano eCardanobile
2011-12-01
Full Text Available Dynamic neuronal networks are a key paradigm of increasing importance in brain research, concerned with the functional analysis of biological neuronal networks and, at the same time, with the synthesis of artificial brain-like systems. In this context, neuronal network models serve as mathematical tools to understand the function of brains, but they might as well develop into future tools for enhancing certain functions of our nervous system. Here, we discuss our recent achievements in developing multiplicative point processes into a viable mathematical framework for spiking network modeling. The perspective is that the dynamic behavior of these neuronal networks on the population level is faithfully reflected by a set of non-linear rate equations, describing all interactions on this level. These equations, in turn, are similar in structure to the Lotka-Volterra equations, well known by their use in modeling predator-prey relationships in population biology, but abundant applications to economic theory have also been described. We present a number of biologically relevant examples for spiking network function, which can be studied with the help of the aforementioned correspondence between spike trains and specific systems of non-linear coupled ordinary differential equations. We claim that, enabled by the use of multiplicative point processes, we can make essential contributions to a more thorough understanding of the dynamical properties of neural populations.
Asynchronous Rate Chaos in Spiking Neuronal Circuits
Harish, Omri; Hansel, David
2015-01-01
The brain exhibits temporally complex patterns of activity with features similar to those of chaotic systems. Theoretical studies over the last twenty years have described various computational advantages for such regimes in neuronal systems. Nevertheless, it still remains unclear whether chaos requires specific cellular properties or network architectures, or whether it is a generic property of neuronal circuits. We investigate the dynamics of networks of excitatory-inhibitory (EI) spiking neurons with random sparse connectivity operating in the regime of balance of excitation and inhibition. Combining Dynamical Mean-Field Theory with numerical simulations, we show that chaotic, asynchronous firing rate fluctuations emerge generically for sufficiently strong synapses. Two different mechanisms can lead to these chaotic fluctuations. One mechanism relies on slow I-I inhibition which gives rise to slow subthreshold voltage and rate fluctuations. The decorrelation time of these fluctuations is proportional to the time constant of the inhibition. The second mechanism relies on the recurrent E-I-E feedback loop. It requires slow excitation but the inhibition can be fast. In the corresponding dynamical regime all neurons exhibit rate fluctuations on the time scale of the excitation. Another feature of this regime is that the population-averaged firing rate is substantially smaller in the excitatory population than in the inhibitory population. This is not necessarily the case in the I-I mechanism. Finally, we discuss the neurophysiological and computational significance of our results. PMID:26230679
Solving constraint satisfaction problems with networks of spiking neurons
Directory of Open Access Journals (Sweden)
Zeno eJonke
2016-03-01
Full Text Available Networks of neurons in the brain apply, unlike processors in our current generation of computer hardware, an event-based processing strategy, where short pulses (spikes) are emitted sparsely by neurons to signal the occurrence of an event at a particular point in time. Such spike-based computations promise to be substantially more power-efficient than traditional clocked processing schemes. However, it has turned out to be surprisingly difficult to design networks of spiking neurons that can solve difficult computational problems on the level of single spikes (rather than rates of spikes). We present here a new method for designing networks of spiking neurons via an energy function. Furthermore, we show how the energy function of a network of stochastically firing neurons can be shaped in a quite transparent manner by composing the network from simple stereotypical network motifs. We show that this design approach enables networks of spiking neurons to produce approximate solutions to difficult (NP-hard) constraint satisfaction problems from the domains of planning/optimization and verification/logical inference. The resulting networks employ noise as a computational resource. Nevertheless, the timing of spikes (rather than just spike rates) plays an essential role in their computations. Furthermore, networks of spiking neurons carry out, for the Traveling Salesman Problem, a more efficient stochastic search for good solutions compared with stochastic artificial neural networks (Boltzmann machines) and Gibbs sampling.
A new supervised learning algorithm for spiking neurons.
Xu, Yan; Zeng, Xiaoqin; Zhong, Shuiming
2013-06-01
The purpose of supervised learning with temporal encoding for spiking neurons is to make the neurons emit a specific spike train encoded by the precise firing times of spikes. If only the running time is considered, supervised learning for a spiking neuron is equivalent to distinguishing the times of desired output spikes from all other times during the running of the neuron by adjusting synaptic weights, which can be regarded as a classification problem. Based on this idea, this letter proposes a new supervised learning method for spiking neurons with temporal encoding; it first transforms the supervised learning task into a classification problem and then solves the problem by using the perceptron learning rule. The experimental results show that the proposed method has higher learning accuracy and efficiency than existing learning methods, making it more powerful for solving complex and real-time problems.
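The reduction to classification can be sketched with the plain perceptron rule: time bins that should carry an output spike form the positive class, all other bins the negative class. The toy feature vectors below stand in for sampled presynaptic PSP traces and are invented for illustration, not taken from the letter.

```python
def perceptron_train(samples, labels, epochs=100, lr=0.1):
    """Classic perceptron rule on time-indexed feature vectors: learn
    weights so the neuron's potential exceeds zero exactly at the time
    bins labeled 1 (desired output spike times)."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            if pred != y:
                delta = lr * (y - pred)
                w = [wi + delta * xi for wi, xi in zip(w, x)]
                b += delta
    return w, b

# each row: presynaptic PSP values sampled at one time bin (hypothetical)
samples = [[0.9, 0.1], [0.8, 0.3], [0.2, 0.7], [0.1, 0.9]]
labels  = [1, 1, 0, 0]            # spike desired in the first two time bins
w, b = perceptron_train(samples, labels)
preds = [1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0 for x in samples]
print(preds)   # [1, 1, 0, 0]
```

Because the toy data are linearly separable, the perceptron convergence theorem guarantees a perfect fit.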
Clustering predicts memory performance in networks of spiking and non-spiking neurons
Directory of Open Access Journals (Sweden)
Weiliang eChen
2011-03-01
Full Text Available The problem we address in this paper is that of finding effective and parsimonious patterns of connectivity in sparse associative memories. This problem must be addressed in real neuronal systems, so that results in artificial systems could throw light on real systems. We show that there are efficient patterns of connectivity and that these patterns are effective in models with either spiking or non-spiking neurons. This suggests that there may be some underlying general principles governing good connectivity in such networks. We also show that the clustering of the network, measured by Clustering Coefficient, has a strong linear correlation to the performance of associative memory. This result is important since a purely static measure of network connectivity appears to determine an important dynamic property of the network.
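The Clustering Coefficient used above as the static predictor can be computed as the average local clustering over nodes; a minimal sketch for an undirected graph stored as an adjacency dict (the graph itself is a made-up example).

```python
def clustering_coefficient(adj):
    """Average local clustering coefficient of an undirected graph:
    for each node, the fraction of its neighbor pairs that are
    themselves connected, averaged over all nodes."""
    coeffs = []
    for node, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            coeffs.append(0.0)      # degree-0/1 nodes contribute zero
            continue
        links = sum(1 for i, u in enumerate(nbrs) for v in nbrs[i + 1:]
                    if v in adj[u])
        coeffs.append(2.0 * links / (k * (k - 1)))
    return sum(coeffs) / len(coeffs)

# a triangle (nodes 0, 1, 2) plus a pendant node 3
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
print(clustering_coefficient(adj))
```

Correlating this scalar with recall performance across network topologies is the experiment the paper reports.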
How adaptation shapes spike rate oscillations in recurrent neuronal networks
Directory of Open Access Journals (Sweden)
Moritz eAugustin
2013-02-01
Full Text Available Neural mass signals from in-vivo recordings often show oscillations with frequencies ranging from <1 Hz to 100 Hz. Fast rhythmic activity in the beta and gamma range can be generated by network based mechanisms such as recurrent synaptic excitation-inhibition loops. Slower oscillations might instead depend on neuronal adaptation currents whose timescales range from tens of milliseconds to seconds. Here we investigate how the dynamics of such adaptation currents contribute to spike rate oscillations and resonance properties in recurrent networks of excitatory and inhibitory neurons. Based on a network of sparsely coupled spiking model neurons with two types of adaptation current and conductance based synapses with heterogeneous strengths and delays we use a mean-field approach to analyze oscillatory network activity. For constant external input, we find that spike-triggered adaptation currents provide a mechanism to generate slow oscillations over a wide range of adaptation timescales as long as recurrent synaptic excitation is sufficiently strong. Faster rhythms occur when recurrent inhibition is slower than excitation and oscillation frequency increases with the strength of inhibition. Adaptation facilitates such network based oscillations for fast synaptic inhibition and leads to decreased frequencies. For oscillatory external input, adaptation currents amplify a narrow band of frequencies and cause phase advances for low frequencies in addition to phase delays at higher frequencies. Our results therefore identify the different key roles of neuronal adaptation dynamics for rhythmogenesis and selective signal propagation in recurrent networks.
Channel noise effects on first spike latency of a stochastic Hodgkin-Huxley neuron
Maisel, Brenton; Lindenberg, Katja
2017-02-01
While it is widely accepted that information is encoded in neurons via action potentials or spikes, it is far less understood what specific features of spiking contain encoded information. Experimental evidence has suggested that the timing of the first spike may be an energy-efficient coding mechanism that contains more neural information than subsequent spikes. Therefore, the biophysical features of neurons that underlie response latency are of considerable interest. Here we examine the effects of channel noise on the first spike latency of a Hodgkin-Huxley neuron receiving random input from many other neurons. Because the principal feature of a Hodgkin-Huxley neuron is the stochastic opening and closing of channels, the fluctuations in the number of open channels lead to fluctuations in the membrane voltage and modify the timing of the first spike. Our results show that when a neuron has a larger number of channels, (i) the occurrence of the first spike is delayed and (ii) the variation in the first spike timing is greater. We also show that the mean, median, and interquartile range of first spike latency can be accurately predicted from a simple linear regression by knowing only the number of channels in the neuron and the rate at which presynaptic neurons fire, but the standard deviation (i.e., neuronal jitter) cannot be predicted using only this information. We then compare our results to another commonly used stochastic Hodgkin-Huxley model and show that the more commonly used model overstates the first spike latency but can predict the standard deviation of first spike latencies accurately. We end by suggesting a more suitable definition for the neuronal jitter based upon our simulations and comparison of the two models.
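First-spike-latency statistics of the kind reported above (mean, median, interquartile range) can be collected from a toy first-passage simulation. Here the channel fluctuations are lumped into a single Gaussian noise term, a much cruder model than the paper's stochastic channel gating; all parameters are illustrative.

```python
import random, statistics

def first_spike_latency(n_trials=500, drive=0.12, noise=0.05, v_th=1.0,
                        leak=0.02, seed=0):
    """First-passage time (in steps) of a leaky, noise-driven membrane
    to threshold, repeated over trials to build a latency distribution."""
    rng = random.Random(seed)
    latencies = []
    for _ in range(n_trials):
        v, t = 0.0, 0
        while v < v_th and t < 10000:
            v += drive - leak * v + noise * rng.gauss(0.0, 1.0)
            t += 1
        latencies.append(t)
    return latencies

lat = first_spike_latency()
q1, q2, q3 = statistics.quantiles(lat, n=4)
print(statistics.mean(lat), q2, q3 - q1)   # mean, median, interquartile range
```

In the paper, it is these summary statistics (but not the standard deviation) that a linear regression on channel count and presynaptic rate predicts well.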
Solving Constraint Satisfaction Problems with Networks of Spiking Neurons.
Jonke, Zeno; Habenschuss, Stefan; Maass, Wolfgang
2016-01-01
Networks of neurons in the brain apply, unlike processors in our current generation of computer hardware, an event-based processing strategy, where short pulses (spikes) are emitted sparsely by neurons to signal the occurrence of an event at a particular point in time. Such spike-based computations promise to be substantially more power-efficient than traditional clocked processing schemes. However, it turns out to be surprisingly difficult to design networks of spiking neurons that can solve difficult computational problems on the level of single spikes, rather than rates of spikes. We present here a new method for designing networks of spiking neurons via an energy function. Furthermore, we show how the energy function of a network of stochastically firing neurons can be shaped in a transparent manner by composing the network from simple stereotypical network motifs. We show that this design approach enables networks of spiking neurons to produce approximate solutions to difficult (NP-hard) constraint satisfaction problems from the domains of planning/optimization and verification/logical inference. The resulting networks employ noise as a computational resource. Nevertheless, the timing of spikes plays an essential role in their computations. Furthermore, networks of spiking neurons carry out, for the Traveling Salesman Problem, a more efficient stochastic search for good solutions compared with stochastic artificial neural networks (Boltzmann machines) and Gibbs sampling.
A spiking neuron circuit based on a carbon nanotube transistor.
Chen, C-L; Kim, K; Truong, Q; Shen, A; Li, Z; Chen, Y
2012-07-11
A spiking neuron circuit based on a carbon nanotube (CNT) transistor is presented in this paper. The spiking neuron circuit has a crossbar architecture in which the transistor gates are connected to its row electrodes and the transistor sources are connected to its column electrodes. An electrochemical cell is incorporated in the gate of the transistor by sandwiching a hydrogen-doped poly(ethylene glycol)methyl ether (PEG) electrolyte between the CNT channel and the top gate electrode. An input spike applied to the gate triggers a dynamic drift of the hydrogen ions in the PEG electrolyte, resulting in a post-synaptic current (PSC) through the CNT channel. Spikes input into the rows trigger PSCs through multiple CNT transistors; the PSCs accumulate in the columns and are integrated by a 'soma' circuit that triggers output spikes based on an integrate-and-fire mechanism. The spiking neuron circuit can potentially emulate biological neuron networks and their intelligent functions.
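The crossbar readout scheme just described (row spikes drive PSCs through the transistors, each column sums its PSCs into a leaky integrate-and-fire "soma") can be sketched in software. The weights, leak factor, and threshold below are invented illustrative values, not measured device parameters from the paper.

```python
def simulate_crossbar(row_spikes, weights, threshold=1.0, leak=0.9):
    """Toy crossbar integrate-and-fire sketch.
    row_spikes: list of per-time-step 0/1 row vectors.
    weights[i][j]: PSC amplitude from row i onto column j.
    Returns a list of per-step output-spike vectors, one entry per column."""
    n_cols = len(weights[0])
    soma = [0.0] * n_cols
    out = []
    for spikes in row_spikes:
        for j in range(n_cols):
            psc = sum(w_row[j] * s for w_row, s in zip(weights, spikes))
            soma[j] = soma[j] * leak + psc   # leaky integration of column PSCs
        fired = [1 if soma[j] >= threshold else 0 for j in range(n_cols)]
        for j, f in enumerate(fired):
            if f:
                soma[j] = 0.0                # reset after an output spike
        out.append(fired)
    return out

spikes = [[1, 0], [1, 0], [1, 0]]          # three input spikes on row 0
weights = [[0.4, 0.1], [0.3, 0.2]]         # 2 rows x 2 columns
print(simulate_crossbar(spikes, weights))  # → [[0, 0], [0, 0], [1, 0]]
```

Column 0 integrates three PSCs of 0.4 (with leak) and crosses threshold on the third input spike, illustrating the integrate-and-fire mechanism of the 'soma' circuit.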
Bashkirtseva, Irina; Neiman, Alexander B.; Ryashko, Lev
2015-05-01
We study the stochastic dynamics of a Hodgkin-Huxley neuron model in a regime of coexistent stable equilibrium and a limit cycle. In this regime, noise may suppress periodic firing by switching the neuron randomly to a quiescent state. We show that at a critical value of the injected current, the mean firing rate depends weakly on noise intensity, while the neuron exhibits giant variability of the interspike intervals and spike count. To reveal the dynamical origin of this noise-induced effect, we develop the stochastic sensitivity analysis and use the Mahalanobis metric for this four-dimensional stochastic dynamical system. We show that the critical point of giant variability corresponds to the matching of the Mahalanobis distances from attractors (stable equilibrium and limit cycle) to a three-dimensional surface separating their basins of attraction.
Note on the coefficient of variations of neuronal spike trains.
Lengler, Johannes; Steger, Angelika
2017-08-01
It is known that many neurons in the brain show spike trains with a coefficient of variation (CV) of the interspike times of approximately 1, thus resembling the properties of Poisson spike trains. Computational studies have been able to reproduce this phenomenon. However, the underlying models were too complex to be examined analytically. In this paper, we offer a simple model that shows the same effect but is accessible to an analytic treatment. The model is a random walk model with a reflecting barrier; we give explicit formulas for the CV in the regime of excess inhibition. We also analyze the effect of probabilistic synapses in our model and show that it resembles previous findings that were obtained by simulation.
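The reflecting-barrier random-walk model described above can be simulated in a few lines and its CV measured directly. The step probability, barrier, and threshold below are illustrative choices (not the paper's parameters); with excess inhibition (downward drift) the measured CV comes out close to 1, as for a Poisson process.

```python
import random

def cv_reflecting_walk(p_up=0.4, barrier=0, threshold=15, n_spikes=500, seed=1):
    """Toy reflecting-barrier random walk: the state takes +1 steps
    (excitation) with probability p_up and -1 steps (inhibition) otherwise,
    reflects at `barrier`, and emits a spike (then resets) on reaching
    `threshold`. Returns the coefficient of variation of the interspike
    times. p_up < 0.5 puts the walk in the excess-inhibition regime."""
    rng = random.Random(seed)
    isis, v, t = [], 0, 0
    while len(isis) < n_spikes:
        t += 1
        v += 1 if rng.random() < p_up else -1
        if v < barrier:
            v = barrier                  # reflecting boundary
        if v >= threshold:
            isis.append(t)
            v, t = 0, 0                  # spike and reset
    mean = sum(isis) / len(isis)
    var = sum((x - mean) ** 2 for x in isis) / len(isis)
    return var ** 0.5 / mean

print(cv_reflecting_walk())
```

Because threshold crossings are rare events under net inhibition, the interspike times are nearly exponentially distributed, which is why the CV approaches 1.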
Neuronal spike train entropy estimation by history clustering.
Watters, Nicholas; Reeke, George N
2014-09-01
Neurons send signals to each other by means of sequences of action potentials (spikes). Ignoring variations in spike amplitude and shape that are probably not meaningful to a receiving cell, the information content, or entropy, of the signal depends only on the timing of action potentials; because there is no external clock, only the interspike intervals, and not the absolute spike times, are significant. Estimating spike train entropy is a difficult task, particularly with small data sets, and many methods of entropy estimation have been proposed. Here we present two related model-based methods for estimating the entropy of neural signals and compare them to existing methods. One of the methods is fast and reasonably accurate, and it converges well with short spike time records; the other is impractically time-consuming but apparently very accurate, relying on generating artificial data that are a statistical match to the experimental data. Using the slow, accurate method to generate a best-estimate entropy value, we find that the faster estimator converges to this value more closely and with smaller data sets than many existing entropy estimators.
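As a point of reference for what such estimators improve upon, a naive plug-in (histogram) estimator of interspike-interval entropy can be written directly. This is a simple baseline of the kind model-based methods are compared against, not the history-clustering method of the paper; the bin width is a free choice.

```python
import math

def isi_entropy_bits(spike_times, bin_width):
    """Naive plug-in estimate of interspike-interval entropy in bits:
    discretize the ISIs at the given bin width and compute the entropy
    of the resulting histogram. Badly biased for small samples, which is
    exactly why more refined estimators are needed."""
    isis = [t2 - t1 for t1, t2 in zip(spike_times, spike_times[1:])]
    counts = {}
    for isi in isis:
        b = int(isi / bin_width)
        counts[b] = counts.get(b, 0) + 1
    n = len(isis)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

# Four ISIs (1, 1, 2, 1 time units) at unit bin width:
print(isi_entropy_bits([0.0, 1.0, 2.0, 4.0, 5.0], 1.0))  # ≈ 0.811 bits
```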
Empirical Bayesian significance measure of neuronal spike response.
Oba, Shigeyuki; Nakae, Ken; Ikegaya, Yuji; Aki, Shunsuke; Yoshimoto, Junichiro; Ishii, Shin
2016-05-21
Functional connectivity analyses of multiple neurons provide a powerful bottom-up approach to reveal functions of local neuronal circuits by using simultaneous recording of neuronal activity. A statistical methodology, generalized linear modeling (GLM) of the spike response function, is one of the most promising methodologies to reduce false link discoveries arising from pseudo-correlation based on common inputs. Although recent advances in fluorescent imaging techniques have increased the number of simultaneously recorded neurons to the hundreds or thousands, the amount of information per pair of neurons has not correspondingly increased, partly because of the instruments' limitations, and partly because the number of neuron pairs increases in a quadratic manner. Consequently, the estimation of GLM suffers from large statistical uncertainty caused by the shortage of effective information. In this study, we propose a new combination of GLM and empirical Bayesian testing for the estimation of spike response functions that enables both conservative false discovery control and powerful functional connectivity detection. We compared our proposed method's performance with those of sparse estimation of GLM and classical Granger causality testing. Our method achieved high detection performance of functional connectivity with conservative estimation of the false discovery rate and q values in case of information shortage due to short observation time. We also showed that empirical Bayesian testing on arbitrary statistics in place of likelihood-ratio statistics reduces the computational cost without decreasing the detection performance. When our proposed method was applied to a functional multi-neuron calcium imaging dataset from the rat hippocampal region, we found significant functional connections that are possibly mediated by AMPA and NMDA receptors. The proposed empirical Bayesian testing framework with GLM is promising especially when the amount of information per a
Finding the event structure of neuronal spike trains.
Toups, J Vincent; Fellous, Jean-Marc; Thomas, Peter J; Sejnowski, Terrence J; Tiesinga, Paul H
2011-09-01
Neurons in sensory systems convey information about physical stimuli in their spike trains. In vitro, single neurons respond precisely and reliably to the repeated injection of the same fluctuating current, producing regions of elevated firing rate, termed events. Analysis of these spike trains reveals that multiple distinct spike patterns can be identified as trial-to-trial correlations between spike times (Fellous, Tiesinga, Thomas, & Sejnowski, 2004). Finding events in data with realistic spiking statistics is challenging because events belonging to different spike patterns may overlap. We propose a method for finding spiking events that uses contextual information to disambiguate which pattern a trial belongs to. The procedure can be applied to spike trains of the same neuron across multiple trials to detect and separate responses obtained during different brain states. The procedure can also be applied to spike trains from multiple simultaneously recorded neurons in order to identify volleys of near-synchronous activity or to distinguish between excitatory and inhibitory neurons. The procedure was tested using artificial data as well as recordings in vitro in response to fluctuating current waveforms.
Survey on modeling methods for single-compartment spiking neurons
Institute of Scientific and Technical Information of China (English)
蔺想红; 巩祖正
2011-01-01
To simulate the function of the nervous system and solve practical problems with artificial neural networks, constructing suitable spiking neuron models is very important. To help researchers catch up with recent research progress, this paper surveys current modeling methods for single-compartment spiking neurons. According to their complexity, the models are divided into three categories: biologically interpretable physiological models, nonlinear models with a spike-generation mechanism, and linear models with a fixed threshold. The different modeling approaches are described and analyzed, and their respective advantages and disadvantages are discussed.
A systematic method for configuring VLSI networks of spiking neurons.
Neftci, Emre; Chicca, Elisabetta; Indiveri, Giacomo; Douglas, Rodney
2011-10-01
An increasing number of research groups are developing custom hybrid analog/digital very large scale integration (VLSI) chips and systems that implement hundreds to thousands of spiking neurons with biophysically realistic dynamics, with the intention of emulating brainlike real-world behavior in hardware and robotic systems rather than simply simulating their performance on general-purpose digital computers. Although the electronic engineering aspects of these emulation systems are proceeding well, progress toward the actual emulation of brainlike tasks is restricted by the lack of suitable high-level configuration methods of the kind that have already been developed over many decades for simulations on general-purpose computers. The key difficulty is that the dynamics of the CMOS electronic analogs are determined by transistor biases that do not map simply to the parameter types and values used in typical abstract mathematical models of neurons and their networks. Here we provide a general method for resolving this difficulty. We describe a parameter mapping technique that permits an automatic configuration of VLSI neural networks so that their electronic emulation conforms to a higher-level neuronal simulation. We show that the neurons configured by our method exhibit spike timing statistics and temporal dynamics that are the same as those observed in the software simulated neurons and, in particular, that the key parameters of recurrent VLSI neural networks (e.g., implementing soft winner-take-all) can be precisely tuned. The proposed method permits a seamless integration between software simulations with hardware emulations and intertranslatability between the parameters of abstract neuronal models and their emulation counterparts. Most importantly, our method offers a route toward a high-level task configuration language for neuromorphic VLSI systems.
The chronotron: a neuron that learns to fire temporally precise spike patterns.
Directory of Open Access Journals (Sweden)
Răzvan V Florian
In many cases, neurons process information carried by the precise timings of spikes. Here we show how neurons can learn to generate specific temporally precise output spikes in response to input patterns of spikes having precise timings, thus processing and memorizing information that is entirely temporally coded, both as input and as output. We introduce two new supervised learning rules for spiking neurons with temporal coding of information (chronotrons): one that provides high memory capacity (E-learning), and one that has higher biological plausibility (I-learning). With I-learning, the neuron learns to fire the target spike trains through synaptic changes that are proportional to the synaptic currents at the timings of real and target output spikes. We study these learning rules in computer simulations where we train integrate-and-fire neurons. Both learning rules allow neurons to fire at the desired timings with sub-millisecond precision. We show how chronotrons can learn to classify their inputs by firing identical, temporally precise spike trains for different inputs belonging to the same class. When the input is noisy, the classification also leads to noise reduction. We compute lower bounds for the memory capacity of chronotrons and explore the influence of various parameters on chronotrons' performance. Chronotrons can model neurons that encode information in the time of the first spike relative to the onset of salient stimuli, or neurons in oscillatory networks that encode information in the phases of spikes relative to the background oscillation. Our results show that firing one spike per cycle optimizes memory capacity in neurons encoding information in the phase of firing relative to a background rhythm.
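The I-learning rule paraphrased above (weight changes proportional to the synaptic currents at the timings of real and target output spikes) can be sketched as a single weight update. The exponential current kernel, time constant, and learning rate are illustrative assumptions, not the paper's exact formulation.

```python
import math

def ilearn_update(weights, input_spikes, target_times, actual_times,
                  tau=5.0, lr=0.01):
    """I-learning-style sketch: each weight moves in proportion to its
    synaptic current (exponential kernel, time constant tau) evaluated at
    the target output spike times (potentiation) minus the same quantity
    at the actual output spike times (depression).
    weights[j] pairs with input_spikes[j], the spike times of input j."""
    def current(spike_list, t):
        # Synaptic current at time t from past presynaptic spikes.
        return sum(math.exp(-(t - s) / tau) for s in spike_list if s <= t)
    new_w = []
    for w, spikes in zip(weights, input_spikes):
        dw = sum(current(spikes, t) for t in target_times) \
           - sum(current(spikes, t) for t in actual_times)
        new_w.append(w + lr * dw)
    return new_w
```

For example, if a target spike at t = 1.0 ms was missed entirely, a synapse that fired at t = 0.0 ms is potentiated, nudging the neuron toward firing at the desired time on the next trial.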
Spectral components of cytosolic [Ca2+] spiking in neurons
DEFF Research Database (Denmark)
Kardos, J; Szilágyi, N; Juhász, G
1998-01-01
Delayed complex responses of large [Ca2+]c spiking observed in cells from a different set of cultures were synthesized by a set of frequencies within the range 0.018-0.117 Hz. Differential frequency patterns are suggested as characteristics of the [Ca2+]c spiking responses of neurons under different...
Predicting spike occurrence and neuronal responsiveness from LFPs in primary somatosensory cortex.
Riccardo Storchi
Local Field Potentials (LFPs) integrate multiple neuronal events like synaptic inputs and intracellular potentials. LFP spatiotemporal features are particularly relevant in view of their applications both in research (e.g., for understanding brain rhythms, inter-areal neural communication and neuronal coding) and in the clinic (e.g., for improving invasive Brain-Machine Interface devices). However, the relation between LFPs and spikes is complex and not fully understood. As spikes represent the fundamental currency of neuronal communication, this gap in knowledge strongly limits our comprehension of neuronal phenomena underlying LFPs. We investigated the LFP-spike relation during tactile stimulation in primary somatosensory (S-I) cortex in the rat. First we quantified how reliably LFPs and spikes code for a stimulus occurrence. Then we used the information obtained from our analyses to design a predictive model for spike occurrence based on LFP inputs. The model was endowed with a flexible meta-structure whose exact form, both in parameters and structure, was estimated by using a multi-objective optimization strategy. Our method provided a set of nonlinear simple equations that maximized the match between models and true neurons in terms of spike timings and peri-stimulus time histograms. We found that both LFPs and spikes can code for stimulus occurrence with millisecond precision, showing, however, high variability. Spike patterns were predicted significantly above chance for 75% of the neurons analysed. Crucially, the level of prediction accuracy depended on the reliability in coding for the stimulus occurrence. The best predictions were obtained when both spikes and LFPs were highly responsive to the stimuli. Spike reliability is known to depend on neuron intrinsic properties (i.e., on channel noise) and on spontaneous local network fluctuations. Our results suggest that the latter, measured through the LFP response variability, play a dominant role.
Energetics-based spike generation of a single neuron: simulation results and analysis
Nagarajan Venkateswaran
2012-02-01
Existing current-based models that capture spike activity, though useful in studying the information processing capabilities of neurons, fail to throw light on their internal functioning. It is imperative to develop a model that captures the spike train of a neuron as a function of its intracellular parameters for non-invasive diagnosis of diseased neurons. This is the first article to present such an integrated model that quantifies the inter-dependency between spike activity and intracellular energetics. The spike trains generated by our integrated model will throw greater light on intracellular energetics than existing current-based models. An abnormality in the spike of a diseased neuron can now be linked to, and hence effectively analyzed at, the energetics level. Spectral analysis of the generated spike trains in a time-frequency domain will help identify abnormalities in the internals of a neuron. As a case study, the parameters of our model are tuned for Alzheimer's disease and the resultant spike trains are studied and presented.
Balanced Networks of Spiking Neurons with Spatially Dependent Recurrent Connections
Rosenbaum, Robert; Doiron, Brent
2014-04-01
Networks of model neurons with balanced recurrent excitation and inhibition capture the irregular and asynchronous spiking activity reported in cortex. While mean-field theories of spatially homogeneous balanced networks are well understood, a mean-field analysis of spatially heterogeneous balanced networks has not been fully developed. We extend the analysis of balanced networks to include a connection probability that depends on the spatial separation between neurons. In the continuum limit, we derive that stable, balanced firing rate solutions require that the spatial spread of external inputs be broader than that of recurrent excitation, which in turn must be broader than or equal to that of recurrent inhibition. Notably, this implies that network models with broad recurrent inhibition are inconsistent with the balanced state. For finite size networks, we investigate the pattern-forming dynamics arising when balanced conditions are not satisfied. Our study highlights the new challenges that balanced networks pose for the spatiotemporal dynamics of complex systems.
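The spatial-width ordering derived in the continuum limit (external-input spread broader than recurrent excitation, which must be at least as broad as recurrent inhibition) can be encoded as a one-line feasibility check. The Gaussian-width values below are hypothetical examples, not parameters from the paper.

```python
def balanced_state_possible(sigma_ffwd, sigma_e, sigma_i):
    """Check the continuum-limit condition for stable balanced firing-rate
    solutions: sigma_ffwd (spread of external input) must exceed sigma_e
    (spread of recurrent excitation), which must be at least sigma_i
    (spread of recurrent inhibition)."""
    return sigma_ffwd > sigma_e >= sigma_i

print(balanced_state_possible(0.20, 0.10, 0.10))  # narrow recurrence: True
print(balanced_state_possible(0.10, 0.05, 0.15))  # broad inhibition: False
```

The second call illustrates the paper's point that network models with broad recurrent inhibition are inconsistent with the balanced state.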
Learning probabilistic models of connectivity from multiple spike train data
2010-01-01
Neuronal circuits or cell assemblies carry out brain function through complex coordinated firing patterns [1]. Inferring topology of neuronal circuits from simultaneously recorded spike train data is a challenging problem in neuroscience. In this work we present a new class of dynamic Bayesian networks to infer polysynaptic excitatory connectivity between spiking cortical neurons [2]. The emphasis on excitatory networks allows us to learn connectivity models by exploiting fast data mining alg...
Takashi Takekawa
2012-03-01
This study introduces a new spike sorting method that classifies spike waveforms from multiunit recordings into spike trains of individual neurons. In particular, we develop a method to sort a spike mixture generated by a heterogeneous neural population. Such spike sorting has significant practical value but was previously difficult. The method combines a feature extraction method, which we may term multimodality-weighted principal component analysis (mPCA), and a clustering method based on variational Bayes for Student's t mixture models (SVB). The performance of the proposed method was compared with that of other conventional methods on simulated and experimental data sets. We found that the mPCA efficiently extracts highly informative features as clusters clearly separable in a relatively low-dimensional feature space. The SVB was implemented explicitly without relying on maximum a posteriori (MAP) inference for the degree-of-freedom parameters. The explicit SVB is faster than the conventional SVB derived with MAP inference and works more reliably over various data sets that include spiking patterns difficult to sort. For instance, spikes of a single bursting neuron may be separated incorrectly into multiple clusters, whereas those of a sparsely firing neuron tend to be merged into clusters for other neurons. Our method showed significantly improved performance in spike sorting of these difficult neurons. A parallelized implementation of the proposed algorithm (EToS version 3) is available as open-source code at http://etos.sourceforge.net/.
Input-output relation and energy efficiency in the neuron with different spike threshold dynamics.
Yi, Guo-Sheng; Wang, Jiang; Tsang, Kai-Ming; Wei, Xi-Le; Deng, Bin
2015-01-01
A neuron encodes and transmits information by generating sequences of output spikes, which is a highly energy-consuming process. A spike is initiated when membrane depolarization reaches a threshold voltage. In many neurons, the threshold is dynamic and depends on the rate of membrane depolarization (dV/dt) preceding a spike. Identifying the metabolic energy involved in neural coding and its relationship to threshold dynamics is critical to understanding neuronal function and evolution. Here, we use a modified Morris-Lecar model to investigate the neuronal input-output property and energy efficiency associated with different spike threshold dynamics. We find that neurons with a dynamic threshold sensitive to dV/dt generate a discontinuous frequency-current curve and a type II phase response curve (PRC) through a Hopf bifurcation, and weak noise can prohibit spiking when the bifurcation has just occurred. A threshold that is insensitive to dV/dt, instead, results in a continuous frequency-current curve, a type I PRC and a saddle-node on invariant circle bifurcation, and weak noise cannot inhibit spiking. It is also shown that the bifurcation, frequency-current curve and PRC type associated with different threshold dynamics arise from distinct subthreshold interactions of membrane currents. Further, we observe that the energy consumption of the neuron is related to its firing characteristics. The depolarization of the spike threshold improves neuronal energy efficiency by reducing the overlap of Na(+) and K(+) currents during an action potential. The highest energy efficiency is achieved at a more depolarized spike threshold and high stimulus current. These results provide a fundamental biophysical connection linking spike threshold dynamics, input-output relations, energetics and spike initiation, which could contribute to uncovering the neural encoding mechanism.
Estimating nonstationary input signals from a single neuronal spike train.
Kim, Hideaki; Shinomoto, Shigeru
2012-11-01
Neurons temporally integrate input signals, translating them into timed output spikes. Because neurons nonperiodically emit spikes, examining spike timing can reveal information about input signals, which are determined by activities in the populations of excitatory and inhibitory presynaptic neurons. Although a number of mathematical methods have been developed to estimate such input parameters as the mean and fluctuation of the input current, these techniques are based on the unrealistic assumption that presynaptic activity is constant over time. Here, we propose tracking temporal variations in input parameters with a two-step analysis method. First, nonstationary firing characteristics comprising the firing rate and non-Poisson irregularity are estimated from a spike train using a computationally feasible state-space algorithm. Then, information about the firing characteristics is converted into likely input parameters over time using a transformation formula, which was constructed by inverting the neuronal forward transformation of the input current to output spikes. By analyzing spike trains recorded in vivo, we found that neuronal input parameters are similar in the primary visual cortex V1 and middle temporal area, whereas parameters in the lateral geniculate nucleus of the thalamus were markedly different.
Spike library based simulator for extracellular single unit neuronal signals.
Thorbergsson, P T; Jorntell, H; Bengtsson, F; Garwicz, M; Schouenborg, J; Johansson, A
2009-01-01
A well defined set of design criteria is of great importance in the process of designing brain machine interfaces (BMI) based on extracellular recordings with chronically implanted micro-electrode arrays in the central nervous system (CNS). In order to compare algorithms and evaluate their performance under various circumstances, ground truth about their input needs to be present. Obtaining ground truth from real data would require optimal algorithms to be used, given that those exist. This is not possible since it relies on the very algorithms that are to be evaluated. Using realistic models of the recording situation facilitates the simulation of extracellular recordings. The simulation gives access to a priori known signal characteristics such as spike times and identities. In this paper, we describe a simulator based on a library of spikes obtained from recordings in the cat cerebellum and observed statistics of neuronal behavior during spontaneous activity. The simulator has proved to be useful in the task of generating extracellular recordings with realistic background noise and known ground truth to use in the evaluation of algorithms for spike detection and sorting.
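The simulator's core recipe, draw spike times from observed firing statistics, paste library waveforms into background noise, and retain the ground truth, can be sketched as follows. The tiny hand-made waveform library, Poisson firing statistics, and Gaussian background noise here are invented stand-ins for the recorded cat-cerebellum spike library and observed statistics used in the paper.

```python
import random

def simulate_recording(spike_library, rate_hz, duration_s, fs, noise_sd, seed=0):
    """Sketch of a spike-library-based simulator: draw Poisson spike times,
    superimpose randomly chosen library waveforms onto a Gaussian-noise
    trace, and return both the trace and the ground-truth spike indices."""
    rng = random.Random(seed)
    n = int(duration_s * fs)
    trace = [rng.gauss(0.0, noise_sd) for _ in range(n)]   # background noise
    truth = []
    max_len = max(len(w) for w in spike_library)
    t = 0.0
    while True:
        t += rng.expovariate(rate_hz)      # Poisson interspike interval (s)
        i = int(t * fs)
        if i + max_len >= n:
            break
        wav = rng.choice(spike_library)    # pick a library waveform
        for k, v in enumerate(wav):
            trace[i + k] += v              # paste it into the trace
        truth.append(i)                    # record ground-truth spike index
    return trace, truth

lib = [[0.0, 1.0, -0.5, 0.0], [0.0, 0.8, -0.3, 0.0]]
trace, truth = simulate_recording(lib, rate_hz=20, duration_s=1.0,
                                  fs=1000, noise_sd=0.05)
```

Because `truth` holds the known spike sample indices, the output can serve directly as ground truth when benchmarking spike detection and sorting algorithms, which is the use case the paper describes.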
Chaos-based mixed signal implementation of spiking neurons.
Rossello, Josep L; Canals, Vincent; Morro, Antoni; Verd, Jaume
2009-12-01
A new design of spiking neural networks is proposed and fabricated using a 0.35 μm CMOS technology. The architecture is based on the use of both digital and analog circuitry. The digital circuitry is dedicated to inter-neuron communication, while the analog part implements the internal non-linear behavior associated with spiking neurons. The main advantages of the proposed system are its small area of integration with respect to digital solutions, its implementation using only a standard CMOS process, and the reliability of the inter-neuron communication.
Spiking irregularity and frequency modulate the behavioral report of single-neuron stimulation.
Doron, Guy; von Heimendahl, Moritz; Schlattmann, Peter; Houweling, Arthur R; Brecht, Michael
2014-02-01
The action potential activity of single cortical neurons can evoke measurable sensory effects, but it is not known how spiking parameters and neuronal subtypes affect the evoked sensations. Here, we examined the effects of spike train irregularity, spike frequency, and spike number on the detectability of single-neuron stimulation in rat somatosensory cortex. For regular-spiking, putative excitatory neurons, detectability increased with spike train irregularity and decreasing spike frequencies but was not affected by spike number. Stimulation of single, fast-spiking, putative inhibitory neurons led to a larger sensory effect compared to regular-spiking neurons, and the effect size depended only on spike irregularity. An ideal-observer analysis suggests that, under our experimental conditions, rats were using integration windows of a few hundred milliseconds or more. Our data imply that the behaving animal is sensitive to single neurons' spikes and even to their temporal patterning.
Time resolution dependence of information measures for spiking neurons: scaling and universality.
Marzen, Sarah E; DeWeese, Michael R; Crutchfield, James P
2015-01-01
The mutual information between stimulus and spike-train response is commonly used to monitor neural coding efficiency, but neuronal computation broadly conceived requires more refined and targeted information measures of input-output joint processes. A first step toward that larger goal is to develop information measures for individual output processes, including information generation (entropy rate), stored information (statistical complexity), predictable information (excess entropy), and active information accumulation (bound information rate). We calculate these for spike trains generated by a variety of noise-driven integrate-and-fire neurons as a function of time resolution and for alternating renewal processes. We show that their time-resolution dependence reveals coarse-grained structural properties of interspike interval statistics; e.g., τ-entropy rates that diverge less quickly than the firing rate indicate interspike interval correlations. We also find evidence that the excess entropy and regularized statistical complexity of different types of integrate-and-fire neurons are universal in the continuous-time limit in the sense that they do not depend on mechanism details. This suggests a surprising simplicity in the spike trains generated by these model neurons. Interestingly, neurons with gamma-distributed ISIs and neurons whose spike trains are alternating renewal processes do not fall into the same universality class. These results lead to two conclusions. First, the dependence of information measures on time resolution reveals mechanistic details about spike train generation. Second, information measures can be used as model selection tools for analyzing spike train processes.
Sastry, P S
2008-01-01
In this paper we consider the problem of detecting statistically significant sequential patterns in multi-neuronal spike trains. These patterns are characterized by ordered sequences of spikes from different neurons with specific delays between spikes. We have previously proposed a data mining scheme to efficiently discover such patterns which are frequent in the sense that the count of non-overlapping occurrences of the pattern in the data stream is above a threshold. Here we propose a method to determine the statistical significance of these repeating patterns and to set the thresholds automatically. The novelty of our approach is that we use a compound null hypothesis that includes not only models of independent neurons but also models where neurons have weak dependencies. The strength of interaction among the neurons is represented in terms of certain pair-wise conditional probabilities. We specify our null hypothesis by putting an upper bound on all such conditional probabilities. We construct a proba...
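The frequency measure used above, the count of non-overlapping occurrences of a delay-constrained sequential pattern, can be illustrated with a greedy left-to-right counter. This is a simplified stand-in for the data-mining scheme (greedy matching, a fixed per-spike jitter tolerance); the neuron ids, delays, and tolerance are illustrative.

```python
def count_nonoverlapping(spike_trains, pattern, tol=1.0):
    """Greedy count of non-overlapping occurrences of an ordered pattern.
    spike_trains: dict neuron_id -> sorted list of spike times.
    pattern: [(neuron_id, delay), ...] with delays measured from the
    pattern's first spike (first delay is 0.0).
    tol: allowed timing jitter per spike."""
    first_neuron, _ = pattern[0]
    count, free_after = 0, float("-inf")
    for t0 in spike_trains[first_neuron]:
        if t0 <= free_after:
            continue                       # would overlap a counted match
        matched = [t0]
        ok = True
        for neuron, delay in pattern[1:]:
            hit = next((t for t in spike_trains.get(neuron, [])
                        if abs(t - (t0 + delay)) <= tol and t > free_after),
                       None)
            if hit is None:
                ok = False
                break
            matched.append(hit)
        if ok:
            count += 1
            free_after = max(matched)      # later matches must start after this
    return count

trains = {0: [0.0, 10.0, 20.0], 1: [2.0, 12.0], 2: [4.0, 24.0]}
print(count_nonoverlapping(trains, [(0, 0.0), (1, 2.0), (2, 4.0)]))  # → 1
```

Significance testing then asks whether such a count is surprisingly large under the compound null hypothesis of independent or weakly dependent neurons.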
Neuronal spike sorting based on radial basis function neural networks
Directory of Open Access Journals (Sweden)
Taghavi Kani M
2011-02-01
Full Text Available Background: Studying the behavior of a society of neurons, extracting the communication mechanisms of the brain with other tissues, finding treatments for some nervous system diseases and designing neuroprosthetic devices require an algorithm to sort neural spikes automatically. However, sorting neural spikes is a challenging task because of the low signal-to-noise ratio (SNR) of the spikes. The main purpose of this study was to design an automatic algorithm for classifying neuronal spikes emitted from a specific region of the nervous system. Methods: The spike sorting process usually consists of three stages: detection, feature extraction and sorting. We initially used signal statistics to detect neural spikes. Then, we chose a limited number of typical spikes as features and finally used them to train a radial basis function (RBF) neural network to sort the spikes. In most spike sorting devices, these signals are not linearly discriminable; the aforesaid RBF neural network was used to solve this problem. Results: After the learning process, our proposed algorithm classified any arbitrary spike. The obtained results showed that even though the proposed Radial Basis Spike Sorter (RBSS) reached the same error rate as previous methods, its computational cost was much lower than that of other algorithms. Moreover, the strengths of the proposed algorithm were its good speed and low computational complexity. Conclusion: Regarding the results of this study, the proposed algorithm seems to serve the purpose of procedures that require real-time processing and spike sorting.
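The three-stage pipeline described in the abstract (detection, feature extraction, sorting) can be caricatured in a few lines. This is a minimal sketch, not the RBSS implementation: the threshold-crossing detector and the Gaussian-activation classifier below are assumptions standing in for the paper's signal statistics and trained RBF network.

```python
import math

def detect_spikes(signal, threshold):
    """Return indices where the signal first crosses above threshold."""
    idx, above = [], False
    for i, v in enumerate(signal):
        if v > threshold and not above:
            idx.append(i)
            above = True
        elif v <= threshold:
            above = False
    return idx

def rbf_classify(waveform, templates, labels, sigma=1.0):
    """Assign the label whose template yields the largest Gaussian (RBF) activation."""
    best, best_act = None, -1.0
    for tpl, lab in zip(templates, labels):
        d2 = sum((a - b) ** 2 for a, b in zip(waveform, tpl))
        act = math.exp(-d2 / (2 * sigma ** 2))
        if act > best_act:
            best, best_act = lab, act
    return best
```

The RBF activation is what lets non-linearly-separable waveform clusters be told apart: distance to a template, not a linear boundary, decides the label.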
Directory of Open Access Journals (Sweden)
Yansong Chua
2016-07-01
Full Text Available The role of dendritic spiking mechanisms in neural processing is so far poorly understood. To investigate the role of calcium spikes in the functional properties of the single neuron and recurrent networks, we investigated a three compartment neuron model of the layer 5 pyramidal neuron with calcium dynamics in the distal compartment. By performing single neuron simulations with noisy synaptic input and occasional large coincident input at either just the distal compartment or at both somatic and distal compartments, we show that the presence of calcium spikes confers a substantial advantage for coincidence detection in the former case and a lesser advantage in the latter. We further show that the experimentally observed critical frequency phenomenon is not exhibited by a neuron receiving realistically noisy synaptic input, and so is unlikely to be a necessary component of coincidence detection. We next investigate the effect of calcium spikes in propagation of spiking activities in a feed-forward network embedded in a balanced recurrent network. The excitatory neurons in the network are again connected to either just the distal, or both somatic and distal compartments. With purely distal connectivity, activity propagation is stable and distinguishable for a large range of recurrent synaptic strengths if the feed-forward connections are sufficiently strong, but propagation does not occur in the absence of calcium spikes. When connections are made to both the somatic and the distal compartments, activity propagation is achieved for neurons with active calcium dynamics at a much smaller number of neurons per pool, compared to a network of passive neurons, but quickly becomes unstable as the strength of recurrent synapses increases. Activity propagation at higher scaling factors can be stabilized by increasing network inhibition or introducing short term depression in the excitatory synapses, but the signal-to-noise ratio remains low. Our results
J. Luthman (Johannes); F.E. Hoebeek (Freek); R. Maex (Reinoud); N. Davey (Neil); R. Adams (Rod); C.I. de Zeeuw (Chris); V. Steuber (Volker)
2011-01-01
Neurons in the cerebellar nuclei (CN) receive inhibitory inputs from Purkinje cells in the cerebellar cortex and provide the major output from the cerebellum, but their computational function is not well understood. It has recently been shown that the spike activity of Purkinje cells is
Spectral components of cytosolic [Ca2+] spiking in neurons
DEFF Research Database (Denmark)
Kardos, J; Szilágyi, N; Juhász, G;
1998-01-01
into evolutionary spectra of a characteristic set of frequencies. Non-delayed small spikes on top of sustained [Ca2+]c were synthesized by a main component frequency, 0.132+/-0.012 Hz, showing its maximal amplitude in phase with the start of depolarization (25 mM KCl) combined with caffeine (10 mM) application. ... Delayed complex responses of large [Ca2+]c spiking observed in cells from a different set of cultures were synthesized by a set of frequencies within the range 0.018-0.117 Hz. Differential frequency patterns are suggested as characteristics of the [Ca2+]c spiking responses of neurons under different...
Detection of bursts in neuronal spike trains by the mean inter-spike interval method
Institute of Scientific and Technical Information of China (English)
Lin Chen; Yong Deng; Weihua Luo; Zhen Wang; Shaoqun Zeng
2009-01-01
Bursts are electrical spikes fired at high frequency, and they are centrally important to synaptic plasticity and information processing in the central nervous system. However, bursts are difficult to identify because bursting activities or patterns vary with physiological conditions or external stimuli. In this paper, a simple method to automatically detect bursts in spike trains is described. This method auto-adaptively sets a parameter (the mean inter-spike interval) according to intrinsic properties of the spike trains under analysis, without any arbitrary choices or operator judgment. When the mean value of several successive inter-spike intervals is not larger than this parameter, a burst is identified. By this method, bursts can be automatically extracted from different bursting patterns of cultured neurons on multi-electrode arrays, as accurately as by visual inspection. Furthermore, significant changes in burst variables caused by electrical stimuli have been found in the spontaneous activity of neuronal networks. These results suggest that the mean inter-spike interval method is robust for detecting changes in burst patterns and characteristics induced by environmental alterations.
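The detection rule described (compare the running mean of several successive inter-spike intervals against the train's global mean ISI) can be sketched directly. The window length `n_isi` and the burst-extension step below are illustrative assumptions, not details taken from the paper.

```python
def detect_bursts(spike_times, n_isi=3):
    """Flag bursts where the mean of n_isi successive ISIs does not
    exceed the global mean ISI (the auto-adaptively set parameter)."""
    isis = [b - a for a, b in zip(spike_times, spike_times[1:])]
    if not isis:
        return []
    thresh = sum(isis) / len(isis)   # auto-adaptive parameter: global mean ISI
    bursts = []
    i = 0
    while i + n_isi <= len(isis):
        window = isis[i:i + n_isi]
        if sum(window) / n_isi <= thresh:
            # extend the burst while ISIs stay short
            j = i + n_isi
            while j < len(isis) and isis[j] <= thresh:
                j += 1
            bursts.append((spike_times[i], spike_times[j]))  # (onset, offset)
            i = j
        else:
            i += 1
    return bursts
```

Because the threshold is derived from the train itself, the same code applies unchanged to slow and fast bursting patterns, which is the point of the method.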
Indiveri, Giacomo; Chicca, Elisabetta; Douglas, Rodney
2006-01-01
We present a mixed-mode analog/digital VLSI device comprising an array of leaky integrate-and-fire (I&F) neurons, adaptive synapses with spike-timing dependent plasticity, and an asynchronous event based communication infrastructure that allows the user to (re)configure networks of spiking neurons with arbitrary topologies. The asynchronous communication protocol used by the silicon neurons to transmit spikes (events) off-chip and by the silicon synapses to receive spikes from the outside is based on the "address-event representation" (AER). We describe the analog circuits designed to implement the silicon neurons and synapses and present experimental data showing the neurons' response properties and the synapses' characteristics in response to AER input spike trains. Our results indicate that these circuits can be used in massively parallel VLSI networks of I&F neurons to simulate real-time complex spike-based learning algorithms.
Spiking Neural P Systems with Neuron Division and Budding
Pan, Linqiang; Paun, Gheorghe; Pérez Jiménez, Mario de Jesús
2009-01-01
In order to enhance the efficiency of spiking neural P systems, we introduce the features of neuron division and neuron budding, processes inspired by neural stem cell division. As expected (as is the case for P systems with active membranes), in this way we gain the possibility to solve computationally hard problems in polynomial time. We illustrate this possibility with the SAT problem.
Robinson, Brian S; Song, Dong; Berger, Theodore W
2014-01-01
This paper presents a methodology to estimate a learning rule that governs activity-dependent plasticity from behaviorally recorded spiking events. To demonstrate this framework, we simulate a probabilistic spiking neuron with spike-timing-dependent plasticity (STDP) and estimate all model parameters from the simulated spiking data. In the neuron model, output spiking activity is generated by the combination of noise, feedback from the output, and an input-feedforward component whose magnitude is modulated by synaptic weight. The synaptic weight is calculated with STDP with the following features: (1) weight change based on the relative timing of input-output spike pairs, (2) prolonged plasticity induction, and (3) considerations for system stability. Estimation of all model parameters is achieved iteratively by formulating the model as a generalized linear model with Volterra kernels and basis function expansion. Successful estimation of all model parameters in this study demonstrates the feasibility of this approach for in-vivo experimental studies. Furthermore, the consideration of system stability and prolonged plasticity induction enhances the ability to capture how STDP affects a neural population's signal transformation properties over a realistic time course. Plasticity characterization with this estimation method could yield insights into functional implications of STDP and be incorporated into a cortical prosthesis.
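The pair-based STDP rule underlying feature (1), a weight change driven by the relative timing of input-output spike pairs, is conventionally written as an exponential window. The sketch below shows that rule only: the amplitudes and time constants are assumed values, and it omits the paper's prolonged-induction dynamics and the Volterra-kernel GLM estimation machinery.

```python
import math

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Weight change for one input-output spike pair.
    dt = t_post - t_pre (ms): potentiate when the input precedes the output."""
    if dt > 0:
        return a_plus * math.exp(-dt / tau_plus)
    elif dt < 0:
        return -a_minus * math.exp(dt / tau_minus)
    return 0.0

def apply_stdp(w, pre_spikes, post_spikes, w_min=0.0, w_max=1.0):
    """Accumulate all-pairs updates; hard bounds stand in for stability
    considerations (feature 3)."""
    for tp in pre_spikes:
        for to in post_spikes:
            w += stdp_dw(to - tp)
    return min(w_max, max(w_min, w))
```

A pre-before-post pair pushes the weight up and a post-before-pre pair pushes it down, which is the asymmetry the estimation framework has to recover from spiking data.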
Consensus-Based Sorting of Neuronal Spike Waveforms
Fournier, Julien; Mueller, Christian M.; Shein-Idelson, Mark; Hemberger, Mike
2016-01-01
Optimizing spike-sorting algorithms is difficult because sorted clusters can rarely be checked against independently obtained “ground truth” data. In most spike-sorting algorithms in use today, the optimality of a clustering solution is assessed relative to some assumption on the distribution of the spike shapes associated with a particular single unit (e.g., Gaussianity) and by visual inspection of the clustering solution followed by manual validation. When the spatiotemporal waveforms of spikes from different cells overlap, the decision as to whether two spikes should be assigned to the same source can be quite subjective, if it is not based on reliable quantitative measures. We propose a new approach, whereby spike clusters are identified from the most consensual partition across an ensemble of clustering solutions. Using the variability of the clustering solutions across successive iterations of the same clustering algorithm (template matching based on K-means clusters), we estimate the probability of spikes being clustered together and identify groups of spikes that are not statistically distinguishable from one another. Thus, we identify spikes that are most likely to be clustered together and therefore correspond to consistent spike clusters. This method has the potential advantage that it does not rely on any model of the spike shapes. It also provides estimates of the proportion of misclassified spikes for each of the identified clusters. We tested our algorithm on several datasets for which there exists a ground truth (simultaneous intracellular data), and show that it performs close to the optimum reached by a support vector machine trained on the ground truth. We also show that the estimated rate of misclassification matches the proportion of misclassified spikes measured from the ground truth data. PMID:27536990
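The core idea, estimating the probability that two spikes are clustered together across repeated runs of the same algorithm, can be sketched with a co-association matrix. The greedy grouping step and the consensus threshold `p` below are illustrative assumptions, not the authors' exact procedure.

```python
def coassociation(partitions):
    """Fraction of clustering runs in which each pair of spikes shares a cluster."""
    n = len(partitions[0])
    r = len(partitions)
    co = [[0.0] * n for _ in range(n)]
    for labels in partitions:
        for i in range(n):
            for j in range(n):
                if labels[i] == labels[j]:
                    co[i][j] += 1.0 / r
    return co

def consensus_groups(co, p=0.9):
    """Greedily group spikes whose pairwise co-clustering probability is >= p."""
    n = len(co)
    assigned = [False] * n
    groups = []
    for i in range(n):
        if assigned[i]:
            continue
        g = [i]
        assigned[i] = True
        for j in range(i + 1, n):
            if not assigned[j] and all(co[k][j] >= p for k in g):
                g.append(j)
                assigned[j] = True
        groups.append(g)
    return groups
```

Note that nothing here models the spike shapes themselves; only the agreement between runs matters, which is the claimed advantage over Gaussianity assumptions.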
Deep Learning with Dynamic Spiking Neurons and Fixed Feedback Weights.
Samadi, Arash; Lillicrap, Timothy P; Tweed, Douglas B
2017-03-01
Recent work in computer science has shown the power of deep learning driven by the backpropagation algorithm in networks of artificial neurons. But real neurons in the brain are different from most of these artificial ones in at least three crucial ways: they emit spikes rather than graded outputs, their inputs and outputs are related dynamically rather than by piecewise-smooth functions, and they have no known way to coordinate arrays of synapses in separate forward and feedback pathways so that they change simultaneously and identically, as they do in backpropagation. Given these differences, it is unlikely that current deep learning algorithms can operate in the brain, but we show that these problems can be solved by two simple devices: learning rules can approximate dynamic input-output relations with piecewise-smooth functions, and a variation on the feedback alignment algorithm can train deep networks without having to coordinate forward and feedback synapses. Our results also show that deep spiking networks learn much better if each neuron computes an intracellular teaching signal that reflects that cell's nonlinearity. With this mechanism, networks of spiking neurons show useful learning in synapses at least nine layers upstream from the output cells and perform well compared to other spiking networks in the literature on the MNIST digit recognition task.
Afshar, Saeed; George, Libin; Tapson, Jonathan; van Schaik, André; Hamilton, Tara J
2014-01-01
This paper describes the Synapto-dendritic Kernel Adapting Neuron (SKAN), a simple spiking neuron model that performs statistical inference and unsupervised learning of spatiotemporal spike patterns. SKAN is the first proposed neuron model to investigate the effects of dynamic synapto-dendritic kernels and demonstrate their computational power even at the single neuron scale. The rule-set defining the neuron is simple: there are no complex mathematical operations such as normalization, exponentiation or even multiplication. The functionalities of SKAN emerge from the real-time interaction of simple additive and binary processes. Like a biological neuron, SKAN is robust to signal and parameter noise, and can utilize both in its operations. At the network scale neurons are locked in a race with each other with the fastest neuron to spike effectively "hiding" its learnt pattern from its neighbors. The robustness to noise, high speed, and simple building blocks not only make SKAN an interesting neuron model in computational neuroscience, but also make it ideal for implementation in digital and analog neuromorphic systems which is demonstrated through an implementation in a Field Programmable Gate Array (FPGA). Matlab, Python, and Verilog implementations of SKAN are available at: http://www.uws.edu.au/bioelectronics_neuroscience/bens/reproducible_research.
Inference of neuronal network spike dynamics and topology from calcium imaging data.
Lütcke, Henry; Gerhard, Felipe; Zenke, Friedemann; Gerstner, Wulfram; Helmchen, Fritjof
2013-01-01
Two-photon calcium imaging enables functional analysis of neuronal circuits by inferring action potential (AP) occurrence ("spike trains") from cellular fluorescence signals. It remains unclear how experimental parameters such as signal-to-noise ratio (SNR) and acquisition rate affect spike inference and whether additional information about network structure can be extracted. Here we present a simulation framework for quantitatively assessing how well spike dynamics and network topology can be inferred from noisy calcium imaging data. For simulated AP-evoked calcium transients in neocortical pyramidal cells, we analyzed the quality of spike inference as a function of SNR and data acquisition rate using a recently introduced peeling algorithm. Given experimentally attainable values of SNR and acquisition rate, neural spike trains could be reconstructed accurately and with up to millisecond precision. We then applied statistical neuronal network models to explore how remaining uncertainties in spike inference affect estimates of network connectivity and topological features of network organization. We define the experimental conditions suitable for inferring whether the network has a scale-free structure and determine how well hub neurons can be identified. Our findings provide a benchmark for future calcium imaging studies that aim to reliably infer neuronal network properties.
Inference of neuronal network spike dynamics and topology from calcium imaging data
Directory of Open Access Journals (Sweden)
Henry eLütcke
2013-12-01
Full Text Available Two-photon calcium imaging enables functional analysis of neuronal circuits by inferring action potential (AP) occurrence ("spike trains") from cellular fluorescence signals. It remains unclear how experimental parameters such as signal-to-noise ratio (SNR) and acquisition rate affect spike inference and whether additional information about network structure can be extracted. Here we present a simulation framework for quantitatively assessing how well spike dynamics and network topology can be inferred from noisy calcium imaging data. For simulated AP-evoked calcium transients in neocortical pyramidal cells, we analyzed the quality of spike inference as a function of SNR and data acquisition rate using a recently introduced peeling algorithm. Given experimentally attainable values of SNR and acquisition rate, neural spike trains could be reconstructed accurately and with up to millisecond precision. We then applied statistical neuronal network models to explore how remaining uncertainties in spike inference affect estimates of network connectivity and topological features of network organization. We define the experimental conditions suitable for inferring whether the network has a scale-free structure and determine how well hub neurons can be identified. Our findings provide a benchmark for future calcium imaging studies that aim to reliably infer neuronal network properties.
Bursts and isolated spikes code for opposite movement directions in midbrain electrosensory neurons.
Directory of Open Access Journals (Sweden)
Navid Khosravi-Hashemi
Full Text Available Directional selectivity, in which neurons respond strongly to an object moving in a given direction but weakly or not at all to the same object moving in the opposite direction, is a crucial computation that is thought to provide a neural correlate of motion perception. However, directional selectivity has been traditionally quantified by using the full spike train, which does not take into account particular action potential patterns. We investigated how different action potential patterns, namely bursts (i.e. packets of action potentials followed by quiescence) and isolated spikes, contribute to movement direction coding in a mathematical model of midbrain electrosensory neurons. We found that bursts and isolated spikes could be selectively elicited when the same object moved in opposite directions. In particular, it was possible to find parameter values for which our model neuron did not display directional selectivity when the full spike train was considered but displayed strong directional selectivity when bursts or isolated spikes were instead considered. Further analysis of our model revealed that an intrinsic burst mechanism based on subthreshold T-type calcium channels was not required to observe parameter regimes for which bursts and isolated spikes code for opposite movement directions. However, this burst mechanism enhanced the range of parameter values for which such regimes were observed. Experimental recordings from midbrain neurons confirmed our modeling prediction that bursts and isolated spikes can indeed code for opposite movement directions. Finally, we quantified the performance of a plausible neural circuit and found that it could respond more or less selectively to isolated spikes for a wide range of parameter values when compared with an interspike interval threshold. Our results thus show for the first time that different action potential patterns can differentially encode movement and that traditional measures of
Discovering Patterns in Multi-neuronal Spike Trains using the Frequent Episode Method
Unnikrishnan, K P; Sastry, P S
2007-01-01
Discovering the "Neural Code" from multi-neuronal spike trains is an important task in neuroscience. For such an analysis, it is important to unearth interesting regularities in the spiking patterns. In this report, we present an efficient method for automatically discovering synchrony, synfire chains, and more general sequences of neuronal firings. We use the Frequent Episode Discovery framework of Laxman, Sastry, and Unnikrishnan (2005), in which the episodes are represented and recognized using finite-state automata. Many aspects of functional connectivity between neuronal populations can be inferred from the episodes. We demonstrate these using simulated multi-neuronal data from a Poisson model. We also present a method to assess the statistical significance of the discovered episodes. Since the Temporal Data Mining (TDM) methods used in this report can analyze data from hundreds and potentially thousands of neurons, we argue that this framework is appropriate for discovering the "Neural Code".
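Counting non-overlapping occurrences of a serial episode with a finite-state automaton, the frequency measure at the heart of this framework, can be sketched as follows. The single-automaton scan below is a simplification: it ignores the inter-event delay constraints and parallel automata that the full Frequent Episode Discovery framework supports.

```python
def count_nonoverlapping(events, episode):
    """Count non-overlapping occurrences of an ordered episode in a stream
    of (time, neuron_id) events, using one state pointer as the automaton."""
    state = 0    # index of the next episode element we are waiting for
    count = 0
    for _, neuron in sorted(events):
        if neuron == episode[state]:
            state += 1
            if state == len(episode):
                count += 1
                state = 0    # reset: occurrences must not share events
    return count
```

An episode such as A→B→C is "frequent" when this count exceeds a threshold; the significance-testing method described above is what sets that threshold automatically.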
Chua, Yansong; Morrison, Abigail
2016-01-01
The role of dendritic spiking mechanisms in neural processing is so far poorly understood. To investigate the role of calcium spikes in the functional properties of the single neuron and recurrent networks, we investigated a three compartment neuron model of the layer 5 pyramidal neuron with calcium dynamics in the distal compartment. By performing single neuron simulations with noisy synaptic input and occasional large coincident input at either just the distal compartment or at both somatic and distal compartments, we show that the presence of calcium spikes confers a substantial advantage for coincidence detection in the former case and a lesser advantage in the latter. We further show that the experimentally observed critical frequency phenomenon, in which action potentials triggered by stimuli near the soma above a certain frequency trigger a calcium spike at distal dendrites, leading to further somatic depolarization, is not exhibited by a neuron receiving realistically noisy synaptic input, and so is unlikely to be a necessary component of coincidence detection. We next investigate the effect of calcium spikes in propagation of spiking activities in a feed-forward network (FFN) embedded in a balanced recurrent network. The excitatory neurons in the network are again connected to either just the distal, or both somatic and distal compartments. With purely distal connectivity, activity propagation is stable and distinguishable for a large range of recurrent synaptic strengths if the feed-forward connections are sufficiently strong, but propagation does not occur in the absence of calcium spikes. When connections are made to both the somatic and the distal compartments, activity propagation is achieved for neurons with active calcium dynamics at a much smaller number of neurons per pool, compared to a network of passive neurons, but quickly becomes unstable as the strength of recurrent synapses increases. Activity propagation at higher scaling factors can be
Detecting dependencies between spike trains of pairs of neurons through copulas
DEFF Research Database (Denmark)
Sacerdote, Laura; Tamborrino, Massimiliano; Zucca, Cristina
2011-01-01
The dynamics of a neuron are influenced by the connections with the network where it lies. Recorded spike trains exhibit patterns due to the interactions between neurons. However, the structure of the network is not known. A challenging task is to investigate it from the analysis of simultaneously recorded spike trains. We develop a non-parametric method based on copulas, which we apply to data simulated according to different bivariate Leaky Integrate-and-Fire models. The method discerns dependencies determined by the surrounding network from those determined by direct interactions between...
Rule, Michael E; Vargas-Irwin, Carlos; Donoghue, John P; Truccolo, Wilson
2015-01-01
Understanding the sources of variability in single-neuron spiking responses is an important open problem for the theory of neural coding. This variability is thought to result primarily from spontaneous collective dynamics in neuronal networks. Here, we investigate how well collective dynamics reflected in motor cortex local field potentials (LFPs) can account for spiking variability during motor behavior. Neural activity was recorded via microelectrode arrays implanted in ventral and dorsal premotor and primary motor cortices of non-human primates performing naturalistic 3-D reaching and grasping actions. Point process models were used to quantify how well LFP features accounted for spiking variability not explained by the measured 3-D reach and grasp kinematics. LFP features included the instantaneous magnitude, phase and analytic-signal components of narrow band-pass filtered (δ,θ,α,β) LFPs, and analytic signal and amplitude envelope features in higher-frequency bands. Multiband LFP features predicted single-neuron spiking (1ms resolution) with substantial accuracy as assessed via ROC analysis. Notably, however, models including both LFP and kinematics features displayed marginal improvement over kinematics-only models. Furthermore, the small predictive information added by LFP features to kinematic models was redundant to information available in fast-timescale (<100 ms) spiking history. Overall, information in multiband LFP features, although predictive of single-neuron spiking during movement execution, was redundant to information available in movement parameters and spiking history. Our findings suggest that, during movement execution, collective dynamics reflected in motor cortex LFPs primarily relate to sensorimotor processes directly controlling movement output, adding little explanatory power to variability not accounted by movement parameters.
Directory of Open Access Journals (Sweden)
Michael Everett Rule
2015-06-01
Full Text Available Understanding the sources of variability in single-neuron spiking responses is an important open problem for the theory of neural coding. This variability is thought to result primarily from spontaneous collective dynamics in neuronal networks. Here, we investigate how well collective dynamics reflected in motor cortex local field potentials (LFPs) can account for spiking variability during motor behavior. Neural activity was recorded via microelectrode arrays implanted in ventral and dorsal premotor and primary motor cortices of non-human primates performing naturalistic 3-D reaching and grasping actions. Point process models were used to quantify how well LFP features accounted for spiking variability not explained by the measured 3-D reach and grasp kinematics. LFP features included the instantaneous magnitude, phase and analytic-signal components of narrow band-pass filtered (δ, θ, α, β) LFPs, and analytic signal and amplitude envelope features in higher-frequency bands. Multiband LFP features predicted single-neuron spiking (1 ms resolution) with substantial accuracy as assessed via ROC analysis. Notably, however, models including both LFP and kinematics features displayed marginal improvement over kinematics-only models. Furthermore, the small predictive information added by LFP features to kinematic models was redundant to information available in fast-timescale (<100 ms) spiking history. Overall, information in multiband LFP features, although predictive of single-neuron spiking during movement execution, was redundant to information available in movement parameters and spiking history. Our findings suggest that, during movement execution, collective dynamics reflected in motor cortex LFPs primarily relate to sensorimotor processes directly controlling movement output, adding little explanatory power to variability not accounted by movement parameters.
Adaptive Spike Threshold Enables Robust and Temporally Precise Neuronal Encoding.
Directory of Open Access Journals (Sweden)
Chao Huang
2016-06-01
Full Text Available Neural processing rests on the intracellular transformation of information as synaptic inputs are translated into action potentials. This transformation is governed by the spike threshold, which depends on the history of the membrane potential on many temporal scales. While the adaptation of the threshold after spiking activity has been addressed before both theoretically and experimentally, it has only recently been demonstrated that the subthreshold membrane state also influences the effective spike threshold. The consequences for neural computation are not well understood yet. We address this question here using neural simulations and whole cell intracellular recordings in combination with information theoretic analysis. We show that an adaptive spike threshold leads to better stimulus discrimination for tight input correlations than would be achieved otherwise, independent of whether the stimulus is encoded in the rate or pattern of action potentials. The time scales of input selectivity are jointly governed by membrane and threshold dynamics. Encoding information using adaptive thresholds further ensures robust information transmission across cortical states, i.e., decoding from different states is less state-dependent in the adaptive-threshold case if the decoding is performed with reference to the timing of the population response. Results from in vitro neural recordings were consistent with simulations from adaptive threshold neurons. In summary, the adaptive spike threshold reduces information loss during intracellular information transfer, improves stimulus discriminability and ensures robust decoding across membrane states in a regime of highly correlated inputs, similar to those seen in sensory nuclei during the encoding of sensory information.
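A minimal way to model a spike-history-dependent threshold is a leaky integrate-and-fire neuron whose threshold jumps at each spike and relaxes back to baseline. This sketch captures only the post-spike adaptation discussed above, not the subthreshold dependence the paper emphasizes, and all parameter values are assumptions chosen for illustration.

```python
def lif_adaptive_threshold(inputs, dt=1.0, tau_m=20.0, tau_th=50.0,
                           v_rest=0.0, th0=1.0, th_jump=0.5):
    """Euler-integrated LIF neuron whose threshold jumps by th_jump after
    each spike and decays back toward the baseline th0 with time constant tau_th."""
    v, th = v_rest, th0
    spikes = []
    for step, i_ext in enumerate(inputs):
        v += dt * (-(v - v_rest) + i_ext) / tau_m   # membrane integration
        th += dt * (th0 - th) / tau_th              # threshold relaxation
        if v >= th:
            spikes.append(step * dt)
            v = v_rest
            th += th_jump   # adaptation: spiking is briefly harder
    return spikes
```

Setting `th_jump=0` recovers a fixed-threshold LIF neuron; under constant drive the adaptive variant fires fewer spikes, illustrating how threshold dynamics reshape the output code.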
Auto and crosscorrelograms for the spike response of LIF neurons with slow synapses
Moreno-Bote, Ruben
2006-01-01
An analytical description of the response properties of simple but realistic neuron models in the presence of noise is still lacking. We determine completely, up to second order, the firing statistics of a single and of a pair of leaky integrate-and-fire neurons (LIFs) receiving some common slowly filtered white noise. In particular, the auto- and cross-correlation functions of the output spike trains of pairs of cells are obtained from an improvement of the adiabatic approximation introduced in [Mor+04]. These two functions define the firing variability and firing synchronization between neurons, and are of much importance for understanding neuron communication.
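In practice, the cross-correlation function of two output spike trains is estimated as a histogram of pairwise spike-time differences (a cross-correlogram). The sketch below is the standard raw-count estimator, not the paper's analytical adiabatic approximation; bin width and lag range are assumptions.

```python
def crosscorrelogram(train_a, train_b, bin_width, max_lag):
    """Histogram of spike-time differences t_b - t_a within [-max_lag, max_lag)."""
    n_bins = int(2 * max_lag / bin_width)
    counts = [0] * n_bins
    for ta in train_a:
        for tb in train_b:
            lag = tb - ta
            if -max_lag <= lag < max_lag:
                counts[int((lag + max_lag) / bin_width)] += 1
    return counts
```

Passing the same train for both arguments yields the autocorrelogram; a central peak in the cross-correlogram is the signature of the common-input synchronization the paper describes analytically.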
Directory of Open Access Journals (Sweden)
Tilo Schwalger
2017-04-01
Full Text Available Neural population equations such as neural mass or field models are widely used to study brain activity on a large scale. However, the relation of these models to the properties of single neurons is unclear. Here we derive an equation for several interacting populations at the mesoscopic scale starting from a microscopic model of randomly connected generalized integrate-and-fire neuron models. Each population consists of 50-2000 neurons of the same type but different populations account for different neuron types. The stochastic population equations that we find reveal how spike-history effects in single-neuron dynamics such as refractoriness and adaptation interact with finite-size fluctuations on the population level. Efficient integration of the stochastic mesoscopic equations reproduces the statistical behavior of the population activities obtained from microscopic simulations of a full spiking neural network model. The theory describes nonlinear emergent dynamics such as finite-size-induced stochastic transitions in multistable networks and synchronization in balanced networks of excitatory and inhibitory neurons. The mesoscopic equations are employed to rapidly integrate a model of a cortical microcircuit consisting of eight neuron types, which allows us to predict spontaneous population activities as well as evoked responses to thalamic input. Our theory establishes a general framework for modeling finite-size neural population dynamics based on single cell and synapse parameters and offers an efficient approach to analyzing cortical circuits and computations.
Directory of Open Access Journals (Sweden)
Robert C Cannon
Neuronal activity is mediated through changes in the probability of stochastic transitions between open and closed states of ion channels. While differences in morphology define neuronal cell types and may underlie neurological disorders, very little is known about influences of stochastic ion channel gating in neurons with complex morphology. We introduce and validate new computational tools that enable efficient generation and simulation of models containing stochastic ion channels distributed across dendritic and axonal membranes. Comparison of five morphologically distinct neuronal cell types reveals that when all simulated neurons contain identical densities of stochastic ion channels, the amplitude of stochastic membrane potential fluctuations differs between cell types and depends on sub-cellular location. For typical neurons, the amplitude of membrane potential fluctuations depends on channel kinetics as well as open probability. Using a detailed model of a hippocampal CA1 pyramidal neuron, we show that when intrinsic ion channels gate stochastically, the probability of initiation of dendritic or somatic spikes by dendritic synaptic input varies continuously between zero and one, whereas when ion channels gate deterministically, the probability is either zero or one. At physiological firing rates, stochastic gating of dendritic ion channels almost completely accounts for probabilistic somatic and dendritic spikes generated by the fully stochastic model. These results suggest that the consequences of stochastic ion channel gating differ globally between neuronal cell-types and locally between neuronal compartments. Whereas dendritic neurons are often assumed to behave deterministically, our simulations suggest that a direct consequence of stochastic gating of intrinsic ion channels is that spike output may instead be a probabilistic function of patterns of synaptic input to dendrites.
Steyn-Ross, Moira L.; Steyn-Ross, D. A.
2016-02-01
Mean-field models of the brain approximate spiking dynamics by assuming that each neuron responds to its neighbors via a naive spatial average that neglects local fluctuations and correlations in firing activity. In this paper we address this issue by introducing a rigorous formalism to enable spatial coarse-graining of spiking dynamics, scaling from the microscopic level of a single type 1 (integrator) neuron to a macroscopic assembly of spiking neurons that are interconnected by chemical synapses and nearest-neighbor gap junctions. Spiking behavior at the single-neuron scale ℓ ≈ 10 μm is described by Wilson's two-variable conductance-based equations [H. R. Wilson, J. Theor. Biol. 200, 375 (1999), 10.1006/jtbi.1999.1002], driven by fields of incoming neural activity from neighboring neurons. We map these equations to a coarser spatial resolution of grid length Bℓ, with B ≫ 1 being the blocking ratio linking micro and macro scales. Our method systematically eliminates high-frequency (short-wavelength) spatial modes q in favor of low-frequency spatial modes Q using an adiabatic elimination procedure that has been shown to be equivalent to the path-integral coarse graining applied to renormalization group theory of critical phenomena. This bottom-up neural regridding allows us to track the percolation of synaptic and ion-channel noise from the single neuron up to the scale of macroscopic population-average variables. Anticipated applications of neural regridding include extraction of the current-to-firing-rate transfer function, investigation of fluctuation criticality near phase-transition tipping points, determination of spatial scaling laws for avalanche events, and prediction of the spatial extent of self-organized macrocolumnar structures. As a first-order exemplar of the method, we recover nonlinear corrections for a coarse-grained Wilson spiking neuron embedded in a network of identical diffusively coupled neurons whose chemical synapses have
A 4-fJ/Spike Artificial Neuron in 65 nm CMOS Technology
Sourikopoulos, Ilias; Hedayat, Sara; Loyez, Christophe; Danneville, François; Hoel, Virginie; Mercier, Eric; Cappy, Alain
2017-01-01
As Moore's law reaches its end, traditional computing technology based on the Von Neumann architecture is facing fundamental limits. Among them is poor energy efficiency. This situation motivates the investigation of different information-processing paradigms, such as the use of spiking neural networks (SNNs), which also introduce cognitive characteristics. As applications at very large scale are addressed, the energy dissipation needs to be minimized. This effort starts from the neuron cell. In this context, this paper presents the design of an original artificial neuron, in standard 65 nm CMOS technology with optimized energy efficiency. The neuron circuit response is designed as an approximation of the Morris-Lecar theoretical model. In order to implement the non-linear gating variables, which control the ionic channel currents, transistors operating in deep subthreshold are employed. Two different circuit variants describing the neuron model equations have been developed. The first one features spike characteristics, which correlate well with a biological neuron model. The second one is a simplification of the first, designed to exhibit higher spiking frequencies, targeting large scale bio-inspired information processing applications. The most important feature of the fabricated circuits is the energy efficiency of a few femtojoules per spike, which improves prior state-of-the-art by two to three orders of magnitude. This performance is achieved by minimizing two key parameters: the supply voltage and the related membrane capacitance. Meanwhile, the obtained standby power at a resting output does not exceed tens of picowatts. The two variants were sized to 200 and 35 μm² with the latter reaching a spiking output frequency of 26 kHz. This performance level could address various contexts, such as highly integrated neuro-processors for robotics, neuroscience or medical applications. PMID:28360831
Studying a Chaotic Spiking Neural Model
Directory of Open Access Journals (Sweden)
Mohammad Alhawarat
2013-09-01
Dynamics of a chaotic spiking neuron model are studied mathematically and experimentally. The Nonlinear Dynamic State neuron (NDS) is analysed to further understand the model and improve it. Chaos has many interesting properties, such as sensitivity to initial conditions, space filling, control and synchronization. As suggested by biologists, these properties may be exploited and play a vital role in carrying out computational tasks in the human brain. The NDS model has some limitations; in this paper the model is investigated to overcome some of these limitations in order to enhance the model. Therefore, the model's parameters are tuned and the resulting dynamics are studied. Also, the discretization method of the model is considered. Moreover, a mathematical analysis is carried out to reveal the underlying dynamics of the model after tuning of its parameters. The results of the aforementioned methods revealed some facts regarding the NDS attractor and suggest the stabilization of a large number of unstable periodic orbits (UPOs), which might correspond to memories in phase space.
Directory of Open Access Journals (Sweden)
Jan eHahne
2015-09-01
Contemporary simulators for networks of point and few-compartment model neurons come with a plethora of ready-to-use neuron and synapse models and support complex network topologies. Recent technological advancements have broadened the spectrum of application further to the efficient simulation of brain-scale networks on supercomputers. In distributed network simulations the amount of spike data that accrues per millisecond and process is typically low, such that a common optimization strategy is to communicate spikes at relatively long intervals, where the upper limit is given by the shortest synaptic transmission delay in the network. This approach is well-suited for simulations that employ only chemical synapses but it has so far impeded the incorporation of gap-junction models, which require instantaneous neuronal interactions. Here, we present a numerical algorithm based on a waveform-relaxation technique which allows for network simulations with gap junctions in a way that is compatible with the delayed communication strategy. Using a reference implementation in the NEST simulator, we demonstrate that the algorithm and the required data structures can be smoothly integrated with existing code such that they complement the infrastructure for spiking connections. To show that the unified framework for gap-junction and spiking interactions achieves high performance and delivers high accuracy...
Dong, Yi; Mihalas, Stefan; Russell, Alexander; Etienne-Cummings, Ralph; Niebur, Ernst
2011-11-01
When a neuronal spike train is observed, what can we deduce from it about the properties of the neuron that generated it? A natural way to answer this question is to make an assumption about the type of neuron, select an appropriate model for this type, and then choose the model parameters as those that are most likely to generate the observed spike train. This is the maximum likelihood method. If the neuron obeys simple integrate-and-fire dynamics, Paninski, Pillow, and Simoncelli (2004) showed that its negative log-likelihood function is convex and that, at least in principle, its unique global minimum can thus be found by gradient descent techniques. Many biological neurons are, however, known to generate a richer repertoire of spiking behaviors than can be explained in a simple integrate-and-fire model. For instance, such a model retains only an implicit (through spike-induced currents), not an explicit, memory of its input; an example of a physiological situation that cannot be explained is the absence of firing if the input current is increased very slowly. Therefore, we use an expanded model (Mihalas & Niebur, 2009 ), which is capable of generating a large number of complex firing patterns while still being linear. Linearity is important because it maintains the distribution of the random variables and still allows maximum likelihood methods to be used. In this study, we show that although convexity of the negative log-likelihood function is not guaranteed for this model, the minimum of this function yields a good estimate for the model parameters, in particular if the noise level is treated as a free parameter. Furthermore, we show that a nonlinear function minimization method (r-algorithm with space dilation) usually reaches the global minimum.
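The maximum-likelihood idea can be illustrated on a simpler stand-in model (a one-filter-parameter GLM, not the Mihalas-Niebur equations; all names and values below are our own). Because the negative log-likelihood of this exponential-nonlinearity model is convex, plain gradient descent recovers the generating parameters from the observed spike train:

```python
import numpy as np

rng = np.random.default_rng(1)

dt, n = 1e-3, 20_000
x = rng.standard_normal(n)                  # injected stimulus
w_true, b_true = 1.5, np.log(20.0)          # true gain and log baseline rate (Hz)
# Bernoulli approximation of a Poisson process with intensity exp(b + w x)
spk = rng.random(n) < np.exp(b_true + w_true * x) * dt

# Gradient descent on the (convex) negative log-likelihood
w, b = 0.0, 0.0
for _ in range(5000):
    lam = np.exp(b + w * x)                        # intensity under (w, b)
    w -= 0.5 * np.mean(lam * dt * x - x * spk)     # d(NLL)/dw per bin
    b -= 0.5 * np.mean(lam * dt - spk)             # d(NLL)/db per bin

print(f"estimated gain {w:.2f}, baseline {np.exp(b):.1f} Hz")
```

The estimates land close to the generating values (gain 1.5, baseline 20 Hz); the paper's point is that for richer linear models the same minimization remains practical even without a convexity guarantee.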
Carter, Brett C.; Bean, Bruce P.
2009-01-01
We measured the time course of sodium entry during action potentials of mouse central neurons at 37 °C to examine how efficiently sodium entry is coupled to depolarization. In cortical pyramidal neurons, sodium entry was nearly completely confined to the rising phase of the spike: only ~25% more sodium enters than the theoretical minimum necessary for spike depolarization. However, in fast-spiking GABAergic neurons (cerebellar Purkinje cells and cortical interneurons), twice as much sodium en...
Charge-Balanced Minimum-Power Controls for Spiking Neuron Oscillators
Dasanayake, Isuru
2011-01-01
In this paper, we study the optimal control of phase models for spiking neuron oscillators. We focus on the design of minimum-power current stimuli that elicit spikes in neurons at desired times. We furthermore take the charge-balanced constraint into account because in practice undesirable side effects may occur due to the accumulation of electric charge resulting from external stimuli. Charge-balanced minimum-power controls are derived for a general phase model using the maximum principle, where the cases with unbounded and bounded control amplitude are examined. The latter is of practical importance since phase models are more accurate for weak forcing. The developed optimal control strategies are then applied to both mathematically ideal and experimentally observed phase models to demonstrate their applicability, including the phase model for the widely studied Hodgkin-Huxley equations.
Marchenkova, Anna; van den Maagdenberg, Arn M J M; Nistri, Andrea
2016-09-07
Purinergic P2X3 receptors (P2X3Rs) play an important role in pain pathologies, including migraine. In trigeminal neurons, P2X3Rs are constitutively downregulated by endogenous brain natriuretic peptide (BNP). In a mouse knock-in (KI) model of familial hemiplegic migraine type-1 with upregulated calcium CaV2.1 channel function, trigeminal neurons exhibit hyperexcitability with gain-of-function of P2X3Rs and their deficient BNP-mediated inhibition. We studied whether the absent BNP-induced control over P2X3R activity in KI cultures may be functionally expressed in altered firing activity of KI trigeminal neurons. Patch-clamp experiments investigated the excitability of wild-type and KI trigeminal neurons induced by either current or agonists for P2X3Rs or transient receptor potential vanilloid-1 (TRPV1) receptors. Consistent with the constitutive inhibition of P2X3Rs by BNP, sustained pharmacological block of BNP receptors selectively enhanced P2X3R-mediated excitability of wild-type neurons without affecting firing evoked by the other protocols. This effect included an increased number of action potentials, a lower spike threshold and a shift of the firing-pattern distribution toward higher spiking activity. Thus, inactivation of BNP signaling transformed the wild-type excitability phenotype into the one typical for KI. BNP receptor block did not influence excitability of KI neurons, in accordance with the lack of BNP-induced P2X3R modulation. Our study suggests that, in wild-type trigeminal neurons, negative control over P2X3Rs by the BNP pathway is translated into tonic suppression of P2X3R-mediated excitability. Lack of this inhibition in KI cultures results in a hyperexcitability phenotype and might contribute to facilitated trigeminal pain transduction relevant for migraine. Copyright © 2016 IBRO. Published by Elsevier Ltd. All rights reserved.
Tuckwell, Henry C; Jost, Jürgen
2010-05-27
Many neurons have epochs in which they fire action potentials in an approximately periodic fashion. To see what effects noise of relatively small amplitude has on such repetitive activity we recently examined the response of the Hodgkin-Huxley (HH) space-clamped system to such noise as the mean and variance of the applied current vary, near the bifurcation to periodic firing. This article is concerned with a more realistic neuron model which includes spatial extent. Employing the Hodgkin-Huxley partial differential equation system, the deterministic component of the input current is restricted to a small segment whereas the stochastic component extends over a region which may or may not overlap the deterministic component. For mean values below, near and above the critical values for repetitive spiking, the effects of weak noise of increasing strength are ascertained by simulation. As in the point model, small amplitude noise near the critical value dampens the spiking activity and leads to a minimum as noise level increases. This was the case for both additive noise and conductance-based noise. Uniform noise along the whole neuron is only marginally more effective in silencing the cell than noise which occurs near the region of excitation. In fact it is found that if signal and noise overlap in spatial extent, then weak noise may inhibit spiking. If, however, signal and noise are applied on disjoint intervals, then the noise has no effect on the spiking activity, no matter how large its region of application, though the trajectories are naturally altered slightly by noise. Such effects could not be discerned in a point model and are important for real neuron behavior. Interference with the spike train does nevertheless occur when the noise amplitude is larger, even when noise and signal do not overlap, being due to the instigation of secondary noise-induced wave phenomena rather than switching the system from one attractor (firing regularly) to another (a stable
The neuronal response at extended timescales: a linearized spiking input-output relation.
Soudry, Daniel; Meir, Ron
2014-01-01
Many biological systems are modulated by unknown slow processes. This can severely hinder analysis - especially in excitable neurons, which are highly non-linear and stochastic systems. We show the analysis simplifies considerably if the input matches the sparse "spiky" nature of the output. In this case, a linearized spiking Input-Output (I/O) relation can be derived semi-analytically, relating input spike trains to output spikes based on known biophysical properties. Using this I/O relation we obtain closed-form expressions for all second order statistics (input - internal state - output correlations and spectra), construct optimal linear estimators for the neuronal response and internal state and perform parameter identification. These results are guaranteed to hold, for a general stochastic biophysical neuron model, with only a few assumptions (mainly, timescale separation). We numerically test the resulting expressions for various models, and show that they hold well, even in cases where our assumptions fail to hold. In a companion paper we demonstrate how this approach enables us to fit a biophysical neuron model so it reproduces experimentally observed temporal firing statistics on days-long experiments.
2012-01-01
Action potentials at neurons and graded signals at synapses are primary codes in the brain. Studies of their functional interaction have focused on the influence of presynaptic spike patterns on synaptic activities. How synapse dynamics quantitatively regulates the encoding of postsynaptic digital spikes remains unclear. We investigated this question at unitary glutamatergic synapses on cortical GABAergic neurons, especially the quantitative influences of release probability on synapse dynamics and neuronal encoding. Glutamate release probability and synaptic strength are proportionally upregulated by presynaptic sequential spikes. The upregulation of release probability and the efficiency of probability-driven synaptic facilitation are strengthened by elevating presynaptic spike frequency and Ca2+. The upregulation of release probability improves spike capacity and timing precision at the postsynaptic neuron. These results suggest that the upregulation of presynaptic glutamate release facilitates a conversion of synaptic analogue signals into digital spikes in postsynaptic neurons, i.e., a functional compatibility between presynaptic and postsynaptic partners. PMID:22852823
Self-organization of repetitive spike patterns in developing neuronal networks in vitro.
Sun, Jyh-Jang; Kilb, Werner; Luhmann, Heiko J
2010-10-01
The appearance of spontaneous correlated activity is a fundamental feature of developing neuronal networks in vivo and in vitro. To elucidate whether the ontogeny of correlated activity is paralleled by the appearance of specific spike patterns we used a template-matching algorithm to detect repetitive spike patterns in multi-electrode array recordings from cultures of dissociated mouse neocortical neurons between 6 and 15 days in vitro (div). These experiments demonstrated that the number of spiking neurons increased significantly between 6 and 15 div, while a significantly synchronized network activity appeared at 9 div and became the main discharge pattern in the subsequent div. Repetitive spike patterns with a low complexity were first observed at 8 div. The number of repetitive spike patterns in each dataset as well as their complexity and recurrence increased during development in vitro. The number of links between neurons implicated in repetitive spike patterns, as well as their strength, showed a gradual increase during development. About 8% of the spike sequences contributed to more than one repetitive spike pattern and were classified as core patterns. These results demonstrate for the first time that defined neuronal assemblies, as represented by repetitive spike patterns, appear quite early during development in vitro, around the time when synchronized network bursts become the dominant network pattern. In summary, these findings suggest that dissociated neurons can self-organize into complex neuronal networks that allow reliable flow and processing of neuronal information already during early phases of development.
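A minimal template-matching sketch in the spirit of the study (our own toy construction, not the authors' algorithm): plant a 5-neuron, 3-bin spike pattern in a noisy raster and find the bins where it recurs.

```python
import numpy as np

rng = np.random.default_rng(2)

n_neurons, n_bins = 5, 2000
raster = (rng.random((n_neurons, n_bins)) < 0.02).astype(int)  # background spikes

template = np.array([[1, 0, 0],
                     [0, 1, 0],
                     [0, 0, 1],
                     [0, 0, 0],
                     [0, 1, 0]])
for t0 in (100, 700, 1500):                    # plant three repetitions
    raster[:, t0:t0 + 3] |= template

def match_positions(raster, template):
    """Offsets where every template spike is present in the raster."""
    w = template.shape[1]
    need = template.sum()
    return [t for t in range(raster.shape[1] - w + 1)
            if np.sum(raster[:, t:t + w] * template) >= need]

print(match_positions(raster, template))
```

The three planted offsets are recovered; with background spike probability 0.02, chance completions of all four template spikes at other offsets are vanishingly rare.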
Exact computation of the maximum-entropy potential of spiking neural-network models.
Cofré, R; Cessac, B
2014-05-01
Understanding how stimuli and synaptic connectivity influence the statistics of spike patterns in neural networks is a central question in computational neuroscience. The maximum-entropy approach has been successfully used to characterize the statistical response of simultaneously recorded spiking neurons responding to stimuli. However, in spite of good performance in terms of prediction, the fitting parameters do not explain the underlying mechanistic causes of the observed correlations. On the other hand, mathematical models of spiking neurons (neuromimetic models) provide a probabilistic mapping between the stimulus, network architecture, and spike patterns in terms of conditional probabilities. In this paper we build an exact analytical mapping between neuromimetic and maximum-entropy models.
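For a handful of neurons the maximum-entropy model can be computed exactly by enumeration, which is the regime where such exact mappings are easiest to see. The sketch below (all target statistics are invented for illustration) fits a pairwise Ising-type model for 3 neurons by moment matching over all 2^3 spike words:

```python
import numpy as np
from itertools import product

words = np.array(list(product([0, 1], repeat=3)), dtype=float)  # all 8 spike words
pairs = [(0, 1), (0, 2), (1, 2)]
target_mean = np.array([0.4, 0.5, 0.3])          # <s_i> to match
target_corr = np.array([0.25, 0.15, 0.10])       # <s_i s_j> to match

def model(h, J):
    """Exact maxent distribution for biases h and pairwise couplings J."""
    E = words @ h + sum(J[k] * words[:, i] * words[:, j]
                        for k, (i, j) in enumerate(pairs))
    p = np.exp(E)
    return p / p.sum()                           # exact normalization over 8 words

h, J = np.zeros(3), np.zeros(3)
for _ in range(20000):
    p = model(h, J)
    m = p @ words                                # model means
    c = np.array([p @ (words[:, i] * words[:, j]) for i, j in pairs])
    h += 0.5 * (target_mean - m)                 # gradient ascent on log-likelihood
    J += 0.5 * (target_corr - c)

p = model(h, J)
print(np.round(p @ words, 3))                    # close to target_mean
```

Because the log-likelihood is concave in (h, J), this moment-matching iteration converges to the unique maximum-entropy distribution with the prescribed first and second moments.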
Conditional Probability Analyses of the Spike Activity of Single Neurons
Gray, Peter R.
1967-01-01
With the objective of separating stimulus-related effects from refractory effects in neuronal spike data, various conditional probability analyses have been developed. These analyses are introduced and illustrated with examples based on electrophysiological data from auditory nerve fibers. The conditional probability analyses considered here involve the estimation of the conditional probability of a firing in a specified time interval (defined relative to the time of the stimulus presentation), given that the last firing occurred during an earlier specified time interval. This calculation enables study of the stimulus-related effects in the spike data with the time-since-the-last-firing as a controlled variable. These calculations indicate that auditory nerve fibers “recover” from the refractory effects that follow a firing in the following sense: after a “recovery time” of approximately 20 msec, the firing probabilities no longer depend on the time-since-the-last-firing. Probabilities conditional on this minimum time since the last firing are called “recovered probabilities.” The recovered probabilities presented in this paper are contrasted with the corresponding poststimulus time histograms, and the differences are related to the refractory properties of the nerve fibers. PMID:19210997
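A toy version of this conditional-probability analysis (synthetic spikes, not auditory-nerve data; all parameters invented): simulate a renewal neuron whose firing hazard recovers exponentially after each spike, then estimate firing probability conditional on the time since the last firing. The conditional rate flattens out once that time exceeds the recovery window, reproducing the "recovered probability" idea:

```python
import numpy as np

rng = np.random.default_rng(3)

dt, n = 1e-3, 200_000
rate0, tau_rec = 50.0, 0.02         # recovered rate (Hz), recovery time constant (s)
since = 1.0                         # time since the last firing (s)
since_arr = np.zeros(n)
spikes = np.zeros(n, dtype=bool)
for i in range(n):
    since_arr[i] = since
    lam = rate0 * (1.0 - np.exp(-since / tau_rec))   # refractory recovery of hazard
    if rng.random() < lam * dt:
        spikes[i], since = True, 0.0
    else:
        since += dt

# Firing rate conditional on time-since-last-spike falling in each bin
for lo, hi in [(0.005, 0.01), (0.01, 0.02), (0.02, 0.04), (0.04, 0.08)]:
    sel = (since_arr >= lo) & (since_arr < hi)
    print(f"{lo*1e3:3.0f}-{hi*1e3:3.0f} ms: {spikes[sel].mean() / dt:5.1f} Hz")
```

The conditional rate climbs from well below the asymptotic 50 Hz in the earliest bin toward it in the latest bin, mirroring how the paper's recovered probabilities become independent of the time since the last firing.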
The spike timing precision of FitzHugh-Nagumo neuron network coupled by gap junctions
Institute of Scientific and Technical Information of China (English)
Zhang Su-Hua; Zhan Yong; Yu Hui; An Hai-Long; Zhao Tong-Jun
2006-01-01
It has recently been shown that spike timing can play an important role in information transmission, so in this paper we develop a network of N FitzHugh-Nagumo neurons coupled by gap junctions and discuss the dependence of the spike timing precision on synaptic coupling strength, noise intensity and the size of the neuron ensemble. The calculated results show that the spike timing precision decreases as the noise intensity increases, and that the ensemble spike timing precision increases with increasing coupling strength. Electrical synaptic coupling has a more important effect on spike timing precision than chemical synaptic coupling.
Thermal impact on spiking properties in Hodgkin-Huxley neuron with synaptic stimulus
Indian Academy of Sciences (India)
Shenbing Kuang; Jiafu Wang; Ting Zeng; Aiyin Cao
2008-01-01
The effect of environmental temperature on neuronal spiking behaviors is investigated by numerically simulating the temperature dependence of the spiking threshold of the Hodgkin-Huxley neuron subject to synaptic stimulus. We find that the spiking threshold exhibits a global minimum in a specific temperature range where spike initiation needs the weakest synaptic strength, which from the engineering perspective indicates the occurrence of optimal use of synaptic transmission in the nervous system. We further explore the biophysical origin of this phenomenon associated with ion channel gating kinetics and also discuss its possible biological relevance in information processing in neuronal systems.
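Temperature is conventionally introduced into Hodgkin-Huxley-type models by scaling the gating-rate functions with a Q10 factor; the values below (Q10 = 3, reference 6.3 °C) are the common textbook choices, used here purely for illustration:

```python
def phi(T_celsius, q10=3.0, T_ref=6.3):
    """Multiplier applied to all gating-rate functions at temperature T (°C)."""
    return q10 ** ((T_celsius - T_ref) / 10.0)

for T in (6.3, 16.3, 26.3, 36.3):
    print(f"{T:4.1f} °C: rates scaled by {phi(T):5.1f}")
```

Each 10 °C step triples the gating rates (1, 3, 9, 27), which is how a temperature sweep changes the channel kinetics underlying the threshold minimum studied above.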
Spike neural models (part I): The Hodgkin-Huxley model
Directory of Open Access Journals (Sweden)
Johnson, Melissa G.
2017-05-01
Artificial neural networks (ANNs) have evolved considerably since their inception in the 1940s. Throughout these changes, one of the most important components of neural networks has remained the node, which represents the neuron. Within spiking neural networks, the node is especially important because it contains the functions and properties of neurons that are necessary for their network. One important aspect of neurons is the ionic flow which produces action potentials, or spikes. Forces of diffusion and electrostatic pressure work together with the physical properties of the cell to move ions across the membrane, changing the cell membrane potential and ultimately producing the action potential. This tutorial reviews the Hodgkin-Huxley model and shows how it simulates the ionic flow of the giant squid axon via four differential equations. The model is implemented in Matlab using Euler's method to approximate the differential equations. Using Euler's method introduces an extra parameter, the time step, which must be chosen carefully or the results of the node may be impaired.
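The recipe the tutorial describes can be ported to Python (the original used Matlab): the four Hodgkin-Huxley differential equations with the standard squid-axon parameters, integrated with Euler's method. The extra parameter Euler's method introduces, the time step dt, must be kept small or the scheme becomes unstable.

```python
import numpy as np

# Standard HH rate functions (V in mV, rates in 1/ms)
def a_m(V): return 0.1 * (V + 40) / (1 - np.exp(-(V + 40) / 10))
def b_m(V): return 4.0 * np.exp(-(V + 65) / 18)
def a_h(V): return 0.07 * np.exp(-(V + 65) / 20)
def b_h(V): return 1.0 / (1 + np.exp(-(V + 35) / 10))
def a_n(V): return 0.01 * (V + 55) / (1 - np.exp(-(V + 55) / 10))
def b_n(V): return 0.125 * np.exp(-(V + 65) / 80)

C, gNa, gK, gL = 1.0, 120.0, 36.0, 0.3       # μF/cm², mS/cm²
ENa, EK, EL = 50.0, -77.0, -54.4             # mV
dt, T, I = 0.01, 100.0, 10.0                 # ms, ms, μA/cm²

V, m, h, n = -65.0, 0.05, 0.6, 0.32          # resting state
spikes, above = 0, False
for _ in range(int(T / dt)):
    I_ion = (gNa * m**3 * h * (V - ENa)      # sodium current
             + gK * n**4 * (V - EK)          # potassium current
             + gL * (V - EL))                # leak current
    V += dt * (I - I_ion) / C                # Euler step for the voltage
    m += dt * (a_m(V) * (1 - m) - b_m(V) * m)
    h += dt * (a_h(V) * (1 - h) - b_h(V) * h)
    n += dt * (a_n(V) * (1 - n) - b_n(V) * n)
    if V > 0 and not above:                  # count upward zero crossings
        spikes += 1
    above = V > 0
print("spikes in 100 ms:", spikes)
```

At this drive the model fires tonically; doubling dt much beyond ~0.05 ms makes the explicit Euler scheme diverge, which is exactly the tutorial's caution about the time-step parameter.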
The power ratio and the interval map: spiking models and extracellular data
Reich, D S; Knight, B W; Reich, Daniel S.; Victor, Jonathan D.; Knight, Bruce W.
1998-01-01
We describe a new, computationally simple method for analyzing the dynamics of neuronal spike trains driven by external stimuli. The goal of our method is to test the predictions of simple spike-generating models against extracellularly recorded neuronal responses. Through a new statistic called the power ratio, we distinguish between two broad classes of responses: (1) responses that can be completely characterized by a variable firing rate (for example, modulated Poisson and gamma spike trains); and (2) responses for which firing rate variations alone are not sufficient to characterize response dynamics (for example, leaky integrate-and-fire spike trains as well as Poisson spike trains with long absolute refractory periods). We show that the responses of many visual neurons in the cat retinal ganglion, cat lateral geniculate nucleus, and macaque primary visual cortex fall into the second class, which implies that the pattern of spike times can carry significant information about visual stimuli. Our results...
Broadband shifts in LFP power spectra are correlated with single-neuron spiking in humans
Manning, Jeremy R.; Jacobs, Joshua; Fried, Itzhak; Kahana, Michael J.
2010-01-01
A fundamental question in neuroscience concerns the relation between the spiking of individual neurons and the aggregate electrical activity of neuronal ensembles as seen in local-field potentials (LFPs). Because LFPs reflect both spiking activity and subthreshold events, this question is not simply one of data aggregation. Recording from 20 neurosurgical patients, we directly examined the relation between LFPs and neuronal spiking. Examining 2,030 neurons in widespread brain regions, we found that firing rates were positively correlated with broadband (2–150 Hz) shifts in the LFP power spectrum. In contrast, narrowband oscillations correlated both positively and negatively with firing rates at different recording sites. Broadband power shifts were a more reliable predictor of neuronal spiking than narrowband power shifts. These findings suggest that broadband LFP power provides valuable information concerning neuronal activity beyond that contained in narrowband oscillations. PMID:19864573
Coherent stochastic oscillations enhance signal detection in spiking neurons.
Engel, Tatiana A; Helbig, Brian; Russell, David F; Schimansky-Geier, Lutz; Neiman, Alexander B
2009-08-01
We study the effect of noisy oscillatory input on signal discrimination by spontaneously firing neurons. Using an analytically tractable model, we contrast signal detection in two situations: (i) when the neuron is driven by coherent oscillations and (ii) when the coherence of the oscillations is destroyed. Analytical calculations revealed a region in the parameter space of the model where oscillations act to reduce the variability of neuronal firing and to enhance the discriminability of weak signals. These analytical results are employed to unveil a possible role of coherent oscillations in the peripheral electrosensory system of the paddlefish in improving the detection of weak stimuli. The proposed mechanism may be relevant to a wide range of phenomena involving coherently driven oscillators.
Directory of Open Access Journals (Sweden)
Risako Kato
2016-11-01
Full Text Available Pentobarbital potentiates γ-aminobutyric acid (GABA)-mediated inhibitory synaptic transmission by prolonging the open time of GABAA receptors. However, it is unknown how pentobarbital regulates cortical neuronal activities via local circuits in vivo. To examine this question, we performed extracellular unit recording in rat insular cortex under awake and anesthetized conditions. A number of studies apply the time-rescaling theorem to detect the features of repetitive spike firing. Similar to these methods, we define an average spike interval locally in time using random matrix theory (RMT), which enables us to compare different activity states on a universal scale. Neurons with high spontaneous firing frequency (>5 Hz) and bursting were classified as HFB neurons (n = 10), and those with low spontaneous firing frequency (<10 Hz) and without bursting were classified as non-HFB neurons (n = 48). Pentobarbital injection (30 mg/kg) reduced firing frequency in all HFB neurons and in 78% of non-HFB neurons. RMT analysis demonstrated that pentobarbital increased the number of neurons with repulsion in both HFB and non-HFB neurons, suggesting that there is a correlation between spikes within a short interspike interval. Under awake conditions, in 50% of HFB and 40% of non-HFB neurons, the decay phase of normalized histograms of spontaneous firing was fitted to an exponential function, which indicated that the first spike had no correlation with subsequent spikes. In contrast, under pentobarbital-induced anesthesia, the number of non-HFB neurons that were fitted to an exponential function increased to 80%, but almost no change in HFB neurons was observed. These results suggest that under both awake and pentobarbital-anesthetized conditions, spike firing in HFB neurons is more robustly regulated by preceding spikes than in non-HFB neurons, which may reflect the GABAA receptor-mediated regulation of cortical activities. Whole-cell patch
Kato, Risako; Yamanaka, Masanori; Yokota, Eiko; Koshikawa, Noriaki; Kobayashi, Masayuki
2016-01-01
Pentobarbital potentiates γ-aminobutyric acid (GABA)-mediated inhibitory synaptic transmission by prolonging the open time of GABAA receptors. However, it is unknown how pentobarbital regulates cortical neuronal activities via local circuits in vivo. To examine this question, we performed extracellular unit recording in rat insular cortex under awake and anesthetized conditions. A number of studies apply the time-rescaling theorem to detect the features of repetitive spike firing. Similar to these methods, we define an average spike interval locally in time using random matrix theory (RMT), which enables us to compare different activity states on a universal scale. Neurons with high spontaneous firing frequency (>5 Hz) and bursting were classified as HFB neurons (n = 10), and those with low spontaneous firing frequency (<10 Hz) and without bursting were classified as non-HFB neurons (n = 48). Pentobarbital injection (30 mg/kg) reduced firing frequency in all HFB neurons and in 78% of non-HFB neurons. RMT analysis demonstrated that pentobarbital increased the number of neurons with repulsion in both HFB and non-HFB neurons, suggesting that there is a correlation between spikes within a short interspike interval (ISI). Under awake conditions, in 50% of HFB and 40% of non-HFB neurons, the decay phase of normalized histograms of spontaneous firing was fitted to an exponential function, which indicated that the first spike had no correlation with subsequent spikes. In contrast, under pentobarbital-induced anesthesia, the number of non-HFB neurons that were fitted to an exponential function increased to 80%, but almost no change in HFB neurons was observed. These results suggest that under both awake and pentobarbital-anesthetized conditions, spike firing in HFB neurons is more robustly regulated by preceding spikes than in non-HFB neurons, which may reflect the GABAA receptor-mediated regulation of cortical activities. Whole-cell patch-clamp recording in
Guo, Xinmeng; Wang, Jiang; Liu, Jing; Yu, Haitao; Galán, Roberto F.; Cao, Yibin; Deng, Bin
2017-02-01
Channel noise, which is generated by the random transitions of ion channels between open and closed states, is distinguished from external sources of physiological variability such as spontaneous synaptic release and stimulus fluctuations. This inherent stochasticity in ion-channel current can lead to variability in the timing of spikes occurring both spontaneously and in response to stimuli. In this paper, we investigate how intrinsic channel noise affects the response of a stochastic Hodgkin-Huxley (HH) neuron to external fluctuating inputs with different amplitudes and correlation times. It is found that there is an optimal correlation time of input fluctuations for maximal spiking coherence, where the input current fluctuates at a rate approximately matching the inherent oscillation of the stochastic HH model and plays a dominant role in the timing of spike firing. We also show that the reliability of spike timing in the model is very sensitive to the properties of the current input. An optimal time scale of input fluctuations exists to induce the most reliable firing. The channel-noise-induced unreliability can be mostly overridden by injecting a fluctuating current with an appropriate correlation time. The spiking coherence and reliability can also be regulated by the degree of channel stochasticity: as the membrane area (or total channel number) of the neuron increases, the spiking coherence decreases but the spiking reliability increases.
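Fluctuating input currents with a controlled correlation time, as varied in this study, are commonly generated as an Ornstein-Uhlenbeck process. The sketch below uses the exact discrete-time update of that process; the function name, parameter values, and seeding are our own illustrative choices, not the paper's.

```python
import numpy as np

def ou_input(tau=5.0, sigma=1.0, mu=0.0, dt=0.1, n=200_000, seed=0):
    """Exact discrete update of an Ornstein-Uhlenbeck process (times in ms):
    stationary standard deviation sigma, autocorrelation exp(-lag / tau)."""
    rng = np.random.default_rng(seed)
    a = np.exp(-dt / tau)               # one-step decay factor
    b = sigma * np.sqrt(1.0 - a * a)    # keeps the stationary variance at sigma^2
    x = np.empty(n)
    x[0] = mu
    noise = rng.standard_normal(n - 1)
    for i in range(1, n):
        x[i] = mu + (x[i - 1] - mu) * a + b * noise[i - 1]
    return x
```

Varying `tau` changes the correlation time of the injected current without changing its amplitude distribution, which is the manipulation the abstract describes.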
Directory of Open Access Journals (Sweden)
Frank Rattay
Full Text Available BACKGROUND: Our knowledge about the neural code in the auditory nerve is based to a large extent on experiments on cats. Several anatomical differences between auditory neurons in human and cat are expected to lead to functional differences in the speed and safety of spike conduction. METHODOLOGY/PRINCIPAL FINDINGS: Confocal microscopy was used to systematically evaluate peripheral and central process diameters, commonness of myelination, and morphology of spiral ganglion neurons (SGNs) along the cochlea of three humans and three cats. Based on these morphometric data, model analysis reveals that spike conduction in SGNs is characterized by four phases: a postsynaptic delay, constant velocity in the peripheral process, a presomatic delay, and constant velocity in the central process. The majority of SGNs are type I, connecting the inner hair cells with the brainstem. In contrast to those of humans, type I neurons of the cat are entirely myelinated. Biophysical model evaluation showed delayed and weak spikes in the human soma region as a consequence of the lack of myelin. The simulated spike conduction times are in accordance with normal interwave latencies from auditory brainstem response recordings in man and cat. Simulated 400 pA postsynaptic currents from inner hair cell ribbon synapses were 15 times above threshold. They enforced quick and synchronous spiking. Neither of these properties was present in type II cells, as they receive fewer and much weaker (∼26 pA) synaptic stimuli. CONCLUSIONS/SIGNIFICANCE: Wasting synaptic energy boosts spike initiation, which guarantees the rapid transmission of the temporal fine structure of auditory signals. However, the lack of myelin in the soma regions of human type I neurons causes a large delay in spike conduction in comparison with cat neurons. The absent myelin, in combination with a longer peripheral process, causes quantitative differences in temporal parameters in the electrically stimulated human cochlea
Spike Sorting by Joint Probabilistic Modeling of Neural Spike Trains and Waveforms
Matthews, Brett A.; Clements, Mark A.
2014-01-01
This paper details a novel probabilistic method for automatic neural spike sorting which uses stochastic point process models of neural spike trains and parameterized action potential waveforms. A novel likelihood model for observed firing times as the aggregation of hidden neural spike trains is derived, as well as an iterative procedure for clustering the data and finding the parameters that maximize the likelihood. The method is executed and evaluated on both a fully labeled semiartificial dataset and a partially labeled real dataset of extracellular electric traces from rat hippocampus. In conditions of relatively high difficulty (i.e., with additive noise and with similar action potential waveform shapes for distinct neurons) the method achieves significant improvements in clustering performance over a baseline waveform-only Gaussian mixture model (GMM) clustering on the semiartificial set (1.98% reduction in error rate) and outperforms both the GMM and a state-of-the-art method on the real dataset (5.04% reduction in false positive + false negative errors). Finally, an empirical study of two free parameters for our method is performed on the semiartificial dataset. PMID:24829568
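The baseline the paper compares against, waveform-only Gaussian mixture clustering, can be illustrated with a toy one-dimensional mixture fitted by expectation-maximization. The feature (spike peak amplitude), the synthetic data, and the function names below are illustrative assumptions, not the paper's data or implementation.

```python
import numpy as np

def fit_gmm_1d(x, iters=100):
    """Two-component 1-D Gaussian mixture fitted by EM."""
    mu = np.array([x.min(), x.max()])    # spread the initial means apart
    var = np.array([x.var(), x.var()])
    w = np.array([0.5, 0.5])             # mixture weights
    for _ in range(iters):
        # E-step: responsibility of each component for each point
        p = w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        r = p / p.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means and variances
        nk = r.sum(axis=0)
        w = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return w, mu, var

# synthetic peak-amplitude features from two well-separated units
rng = np.random.default_rng(1)
amps = np.concatenate([rng.normal(-1.0, 0.2, 500), rng.normal(1.0, 0.2, 500)])
w, mu, var = fit_gmm_1d(amps)
```

The paper's contribution is to augment exactly this kind of waveform-feature likelihood with a point-process model of each unit's firing times.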
Reinforcement learning using a continuous time actor-critic framework with spiking neurons.
Frémaux, Nicolas; Sprekeler, Henning; Gerstner, Wulfram
2013-04-01
Animals repeat rewarded behaviors, but the physiological basis of reward-based learning has only been partially elucidated. On one hand, experimental evidence shows that the neuromodulator dopamine carries information about rewards and affects synaptic plasticity. On the other hand, the theory of reinforcement learning provides a framework for reward-based learning. Recent models of reward-modulated spike-timing-dependent plasticity have made first steps towards bridging the gap between the two approaches, but faced two problems. First, reinforcement learning is typically formulated in a discrete framework, ill-adapted to the description of natural situations. Second, biologically plausible models of reward-modulated spike-timing-dependent plasticity require precise calculation of the reward prediction error, yet it remains to be shown how this can be computed by neurons. Here we propose a solution to these problems by extending the continuous temporal difference (TD) learning of Doya (2000) to the case of spiking neurons in an actor-critic network operating in continuous time, and with continuous state and action representations. In our model, the critic learns to predict expected future rewards in real time. Its activity, together with actual rewards, conditions the delivery of a neuromodulatory TD signal to itself and to the actor, which is responsible for action choice. In simulations, we show that such an architecture can solve a Morris water-maze-like navigation task, in a number of trials consistent with reported animal performance. We also use our model to solve the acrobot and the cartpole problems, two complex motor control tasks. Our model provides a plausible way of computing reward prediction error in the brain. Moreover, the analytically derived learning rule is consistent with experimental evidence for dopamine-modulated spike-timing-dependent plasticity.
Reinforcement learning using a continuous time actor-critic framework with spiking neurons.
Directory of Open Access Journals (Sweden)
Nicolas Frémaux
2013-04-01
Full Text Available Animals repeat rewarded behaviors, but the physiological basis of reward-based learning has only been partially elucidated. On one hand, experimental evidence shows that the neuromodulator dopamine carries information about rewards and affects synaptic plasticity. On the other hand, the theory of reinforcement learning provides a framework for reward-based learning. Recent models of reward-modulated spike-timing-dependent plasticity have made first steps towards bridging the gap between the two approaches, but faced two problems. First, reinforcement learning is typically formulated in a discrete framework, ill-adapted to the description of natural situations. Second, biologically plausible models of reward-modulated spike-timing-dependent plasticity require precise calculation of the reward prediction error, yet it remains to be shown how this can be computed by neurons. Here we propose a solution to these problems by extending the continuous temporal difference (TD) learning of Doya (2000) to the case of spiking neurons in an actor-critic network operating in continuous time, and with continuous state and action representations. In our model, the critic learns to predict expected future rewards in real time. Its activity, together with actual rewards, conditions the delivery of a neuromodulatory TD signal to itself and to the actor, which is responsible for action choice. In simulations, we show that such an architecture can solve a Morris water-maze-like navigation task, in a number of trials consistent with reported animal performance. We also use our model to solve the acrobot and the cartpole problems, two complex motor control tasks. Our model provides a plausible way of computing reward prediction error in the brain. Moreover, the analytically derived learning rule is consistent with experimental evidence for dopamine-modulated spike-timing-dependent plasticity.
Dynamics of Competition between Subnetworks of Spiking Neuronal Networks in the Balanced State.
Directory of Open Access Journals (Sweden)
Fereshteh Lagzi
Full Text Available We explore and analyze the nonlinear switching dynamics of neuronal networks with non-homogeneous connectivity. The general significance of such transient dynamics for brain function is unclear; however, for instance decision-making processes in perception and cognition have been implicated with it. The network under study here is comprised of three subnetworks of either excitatory or inhibitory leaky integrate-and-fire neurons, of which two are of the same type. The synaptic weights are arranged to establish and maintain a balance between excitation and inhibition in case of a constant external drive. Each subnetwork is randomly connected, where all neurons belonging to a particular population have the same in-degree and the same out-degree. Neurons in different subnetworks are also randomly connected with the same probability; however, depending on the type of the pre-synaptic neuron, the synaptic weight is scaled by a factor. We observed that for a certain range of the "within" versus "between" connection weights (bifurcation parameter), the network activation spontaneously switches between the two sub-networks of the same type. This kind of dynamics has been termed "winnerless competition", which also has a random component here. In our model, this phenomenon is well described by a set of coupled stochastic differential equations of Lotka-Volterra type that imply a competition between the subnetworks. The associated mean-field model shows the same dynamical behavior as observed in simulations of large networks comprising thousands of spiking neurons. The deterministic phase portrait is characterized by two attractors and a saddle node, its stochastic component is essentially given by the multiplicative inherent noise of the system. We find that the dwell time distribution of the active states is exponential, indicating that the noise drives the system randomly from one attractor to the other. A similar model for a larger number of populations
Dynamics of Competition between Subnetworks of Spiking Neuronal Networks in the Balanced State.
Lagzi, Fereshteh; Rotter, Stefan
2015-01-01
We explore and analyze the nonlinear switching dynamics of neuronal networks with non-homogeneous connectivity. The general significance of such transient dynamics for brain function is unclear; however, for instance decision-making processes in perception and cognition have been implicated with it. The network under study here is comprised of three subnetworks of either excitatory or inhibitory leaky integrate-and-fire neurons, of which two are of the same type. The synaptic weights are arranged to establish and maintain a balance between excitation and inhibition in case of a constant external drive. Each subnetwork is randomly connected, where all neurons belonging to a particular population have the same in-degree and the same out-degree. Neurons in different subnetworks are also randomly connected with the same probability; however, depending on the type of the pre-synaptic neuron, the synaptic weight is scaled by a factor. We observed that for a certain range of the "within" versus "between" connection weights (bifurcation parameter), the network activation spontaneously switches between the two sub-networks of the same type. This kind of dynamics has been termed "winnerless competition", which also has a random component here. In our model, this phenomenon is well described by a set of coupled stochastic differential equations of Lotka-Volterra type that imply a competition between the subnetworks. The associated mean-field model shows the same dynamical behavior as observed in simulations of large networks comprising thousands of spiking neurons. The deterministic phase portrait is characterized by two attractors and a saddle node, its stochastic component is essentially given by the multiplicative inherent noise of the system. We find that the dwell time distribution of the active states is exponential, indicating that the noise drives the system randomly from one attractor to the other. A similar model for a larger number of populations might suggest a
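The deterministic skeleton of the Lotka-Volterra description, two attractors separated by a saddle, can be sketched in a few lines. This is a minimal sketch of symmetric two-population competition without the multiplicative noise term the paper analyzes; the competition strength `a` and the function name are illustrative assumptions.

```python
def lv_compete(x0, y0, a=2.0, dt=0.01, T=100.0):
    """Forward-Euler integration of symmetric two-population Lotka-Volterra
    competition: dx/dt = x(1 - x - a*y), dy/dt = y(1 - y - a*x).
    For a > 1 the coexistence point is a saddle, so the system is bistable:
    whichever population starts with the larger activity wins."""
    x, y = x0, y0
    for _ in range(int(T / dt)):
        # simultaneous update of both populations
        x, y = (x + dt * x * (1 - x - a * y),
                y + dt * y * (1 - y - a * x))
    return x, y
```

Adding noise to these equations is what turns the winner-take-all outcome into the stochastic switching ("winnerless competition") described in the abstract.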
A-current modifies the spike of C-type neurones in the rabbit nodose ganglion.
Ducreux, C; Puizillout, J J
1995-07-15
1. In the rabbit nodose ganglion, C-type fibre neurones (C neurones) can be divided into two subtypes according to their after-hyperpolarizing potential (AHP) i.e. those with a fast AHP only and those with a fast AHP and a subsequent slow AHP produced by a slow calcium-dependent potassium current. In addition we have shown that some C neurones can be divided into two groups according to the effect of membrane hyperpolarization on their spikes i.e. type 1 in which duration and amplitude do not change and type 2 in which duration and amplitude decrease significantly. 2. In the present report we studied the effect of A-current (IA) on spike duration, amplitude and slow AHP using intracellular recording techniques. 3. To detect the presence of IA, we first applied a series of increasing rectangular hyperpolarizing pulses to remove IA inactivation and then a short depolarizing pulse to trigger a spike. In type 1 C neurones the lag time of the spike in relation to hyperpolarization remains constant whereas in type 2 C neurones the spike only appears after IA inactivation and lag time in relation to hyperpolarization is lengthened. Thus, type 2 C neurones have an IA while type 1 C neurones do not. The fact that addition of cadmium did not change the lag time in type 2 C neurones shows that the IA is not calcium dependent. 4. Nodose neurones can be orthodromically activated by stimulation of the vagal peripheral process. In this way, after a hyperpolarizing pulse, IA can be fully activated by the orthodromic spike itself. Under these conditions it is possible to analyse the effects of IA on the spike. This was done by increasing either the hyperpolarizing potential, pulse duration, or the delay of the spike after the end of the pulse. We observed that maximum IA inactivation removal was always associated with the lowest duration and amplitude of the spike. 5. When IA inhibitors, 4-aminopyridine (4-AP) or catechol, were applied to type 2 C neurones, the delay of the spike
The Site of Spontaneous Ectopic Spike Initiation Facilitates Signal Integration in a Sensory Neuron.
Städele, Carola; Stein, Wolfgang
2016-06-22
Essential to understanding the process of neuronal signal integration is the knowledge of where within a neuron action potentials (APs) are generated. Recent studies support the idea that the precise location where APs are initiated and the properties of spike initiation zones define the cell's information processing capabilities. Notably, the location of spike initiation can be modified homeostatically within neurons to adjust neuronal activity. Here we show that this potential mechanism for neuronal plasticity can also be exploited in a rapid and dynamic fashion. We tested whether dislocation of the spike initiation zone affects signal integration by studying ectopic spike initiation in the anterior gastric receptor neuron (AGR) of the stomatogastric nervous system of Cancer borealis. Like many other vertebrate and invertebrate neurons, AGR can generate ectopic APs in regions distinct from the axon initial segment. Using voltage-sensitive dyes and electrophysiology, we determined that AGR's ectopic spike activity was consistently initiated in the neuropil region of the stomatogastric ganglion motor circuits. At least one neurite branched off the AGR axon in this area; and indeed, we found that AGR's ectopic spike activity was influenced by local motor neurons. This sensorimotor interaction was state-dependent in that focal axon modulation with the biogenic amine octopamine abolished signal integration at the primary spike initiation zone by dislocating spike initiation to a distant region of the axon. We demonstrate that the site of ectopic spike initiation is important for signal integration and that axonal neuromodulation allows for a dynamic adjustment of signal integration. Although it is known that action potentials are initiated at specific sites in the axon, it remains to be determined how the precise location of action potential initiation affects neuronal activity and signal integration. We addressed this issue by studying ectopic spiking in the axon of
DL-ReSuMe: A Delay Learning-Based Remote Supervised Method for Spiking Neurons.
Taherkhani, Aboozar; Belatreche, Ammar; Li, Yuhua; Maguire, Liam P
2015-12-01
Recent research has shown the potential capability of spiking neural networks (SNNs) to model complex information processing in the brain. There is biological evidence that neurons use the precise timing of spikes for information coding. However, the exact learning mechanism by which a neuron is trained to fire at precise times remains an open problem. The majority of existing learning methods for SNNs are based on weight adjustment. However, there is also biological evidence that the synaptic delay is not constant. In this paper, a learning method for spiking neurons, called the delay learning remote supervised method (DL-ReSuMe), is proposed to merge the delay shift approach with ReSuMe-based weight adjustment to enhance learning performance. DL-ReSuMe uses more biologically plausible properties, such as delay learning, and needs less weight adjustment than ReSuMe. Simulation results have shown that the proposed DL-ReSuMe approach achieves improvements in learning accuracy and learning speed compared with ReSuMe.
Self-Organization of Microcircuits in Networks of Spiking Neurons with Plastic Synapses.
Directory of Open Access Journals (Sweden)
Gabriel Koch Ocker
2015-08-01
Full Text Available The synaptic connectivity of cortical networks features an overrepresentation of certain wiring motifs compared to simple random-network models. This structure is shaped, in part, by synaptic plasticity that promotes or suppresses connections between neurons depending on their joint spiking activity. Frequently, theoretical studies focus on how feedforward inputs drive plasticity to create this network structure. We study the complementary scenario of self-organized structure in a recurrent network, with spike timing-dependent plasticity driven by spontaneous dynamics. We develop a self-consistent theory for the evolution of network structure by combining fast spiking covariance with a slow evolution of synaptic weights. Through a finite-size expansion of network dynamics we obtain a low-dimensional set of nonlinear differential equations for the evolution of two-synapse connectivity motifs. With this theory in hand, we explore how the form of the plasticity rule drives the evolution of microcircuits in cortical networks. When potentiation and depression are in approximate balance, synaptic dynamics depend on weighted divergent, convergent, and chain motifs. For additive, Hebbian STDP these motif interactions create instabilities in synaptic dynamics that either promote or suppress the initial network structure. Our work provides a consistent theoretical framework for studying how spiking activity in recurrent networks interacts with synaptic plasticity to determine network structure.
Self-Organization of Microcircuits in Networks of Spiking Neurons with Plastic Synapses.
Ocker, Gabriel Koch; Litwin-Kumar, Ashok; Doiron, Brent
2015-08-01
The synaptic connectivity of cortical networks features an overrepresentation of certain wiring motifs compared to simple random-network models. This structure is shaped, in part, by synaptic plasticity that promotes or suppresses connections between neurons depending on their joint spiking activity. Frequently, theoretical studies focus on how feedforward inputs drive plasticity to create this network structure. We study the complementary scenario of self-organized structure in a recurrent network, with spike timing-dependent plasticity driven by spontaneous dynamics. We develop a self-consistent theory for the evolution of network structure by combining fast spiking covariance with a slow evolution of synaptic weights. Through a finite-size expansion of network dynamics we obtain a low-dimensional set of nonlinear differential equations for the evolution of two-synapse connectivity motifs. With this theory in hand, we explore how the form of the plasticity rule drives the evolution of microcircuits in cortical networks. When potentiation and depression are in approximate balance, synaptic dynamics depend on weighted divergent, convergent, and chain motifs. For additive, Hebbian STDP these motif interactions create instabilities in synaptic dynamics that either promote or suppress the initial network structure. Our work provides a consistent theoretical framework for studying how spiking activity in recurrent networks interacts with synaptic plasticity to determine network structure.
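The additive, Hebbian STDP rule at the core of this analysis can be sketched as a sum of an exponential learning window over all pre/post spike pairs. The amplitudes and time constant below are illustrative values, not the paper's fitted parameters.

```python
import math

def stdp_dw(pre_spikes, post_spikes, A_plus=0.01, A_minus=0.012, tau=20.0):
    """Additive pair-based STDP: sum the exponential learning window over all
    pre/post spike pairs (times in ms). Pre-before-post pairs potentiate,
    post-before-pre pairs depress; coincident pairs contribute nothing."""
    dw = 0.0
    for t_pre in pre_spikes:
        for t_post in post_spikes:
            dt = t_post - t_pre
            if dt > 0:
                dw += A_plus * math.exp(-dt / tau)   # causal pair: potentiation
            elif dt < 0:
                dw -= A_minus * math.exp(dt / tau)   # acausal pair: depression
    return dw
```

Because the net weight change depends on the spiking covariance between the two neurons, correlated motifs in the recurrent network feed back into the weight dynamics, which is the self-organization loop the paper formalizes.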
Li, Huiyan; Sun, Xiaojuan; Xiao, Jinghua
2015-01-01
In this paper, we investigate how clustering factors influence the spiking regularity of neuronal networks composed of subnetworks. To do so, we fix the averaged coupling probability and the averaged coupling strength, and take the cluster number M, the ratio R of intra-connection probability to inter-connection probability, and the ratio S of intra-coupling strength to inter-coupling strength as control parameters. From the simulation results, we find that the spiking regularity of the neuronal networks varies little with R and S when M is fixed. However, the cluster number M can reduce the spiking regularity to a low level when the corresponding uniform network's spiking regularity is at a high level. Taken together, these results show that clustering factors have little influence on spiking regularity when the total energy, which is controlled by the averaged coupling strength and the averaged connection probability, is fixed.
A Neuron Model Based Ultralow Current Sensor System for Bioapplications
Directory of Open Access Journals (Sweden)
A. K. M. Arifuzzman
2016-01-01
Full Text Available An ultralow current sensor system based on the Izhikevich neuron model is presented in this paper. The Izhikevich neuron model is used for its superior computational efficiency and greater biological plausibility compared with other well-known spiking neuron models. Of the many biological spiking features, regular spiking, chattering, and neostriatal spiny projection spiking have been reproduced by adjusting the parameters of the model. This paper also presents a modified interpretation of the regular spiking feature, in which the firing pattern is similar to regular spiking but offers an improved dynamic range. The sensor current ranges between 2 pA and 8 nA and exhibits linearity in the range of 0.9665 to 0.9989 for the different spiking features. The sensor system's efficacy in detecting small currents, together with its high linearity, makes it well suited to biomedical applications.
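The two-variable Izhikevich model that underlies the sensor is easy to sketch. The parameter sets below are the standard regular-spiking and chattering values from Izhikevich's published table; the input current, integration step, and the paper's modified regular-spiking variant are not reproduced here, so treat this as an illustrative sketch only.

```python
def izhikevich(a, b, c, d, I=10.0, T=500.0, dt=0.25):
    """Forward-Euler integration of the Izhikevich model:
    v' = 0.04 v^2 + 5 v + 140 - u + I,   u' = a (b v - u),
    with reset v <- c, u <- u + d whenever v reaches 30 mV."""
    v, u = -65.0, b * -65.0
    spikes, t = [], 0.0
    for _ in range(int(T / dt)):
        t += dt
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:          # spike: record the time and apply the reset
            spikes.append(t)
            v, u = c, u + d
    return spikes

rs = izhikevich(0.02, 0.2, -65.0, 8.0)   # regular spiking parameters
ch = izhikevich(0.02, 0.2, -50.0, 2.0)   # chattering (burst firing) parameters
```

Changing only the reset parameters c and d switches the model between regular and chattering firing, which is how the sensor reproduces different spiking features.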
Yu, Haitao; Galán, Roberto F.; Wang, Jiang; Cao, Yibin; Liu, Jing
2017-04-01
The random transitions of ion channels between open and closed states are a major source of noise in neurons. In this study, we investigate the stochastic dynamics of a single Hodgkin-Huxley (HH) neuron with realistic, physiological channel noise, which depends on the channel number and the voltage potential of the membrane. Without external input, the stochastic HH model can generate spontaneous spikes induced by ion-channel noise, and the variability of inter-spike intervals attains a minimum for an optimal membrane area, a phenomenon known as coherence resonance. When a subthreshold periodic input current is added, the neuron can optimally detect the input frequency for an intermediate membrane area, corresponding to the phenomenon of stochastic resonance. We also investigate spike timing reliability of neuronal responses to repeated presentations of the same stimulus with different realizations of channel noise. We show that, with increasing membrane area, the reliability of neuronal response decreases for subthreshold periodic inputs, and attains a minimum for suprathreshold inputs. Furthermore, Arnold tongues of high reliability arise in a two-dimensional plot of frequency and amplitude of the sinusoidal input current, resulting from the resonance effect of spike timing reliability.
Qiao, Ning; Mostafa, Hesham; Corradi, Federico; Osswald, Marc; Stefanini, Fabio; Sumislawska, Dora; Indiveri, Giacomo
2015-01-01
Implementing compact, low-power artificial neural processing systems with real-time on-line learning abilities is still an open challenge. In this paper we present a full-custom mixed-signal VLSI device with neuromorphic learning circuits that emulate the biophysics of real spiking neurons and dynamic synapses, for exploring the properties of computational neuroscience models and for building brain-inspired computing systems. The proposed architecture allows the on-chip configuration of a wide range of network connectivities, including recurrent and deep networks, with short-term and long-term plasticity. The device comprises 128K analog synapse circuits and 256 neuron circuits with biologically plausible dynamics and bi-stable spike-based plasticity mechanisms that endow it with on-line learning abilities. In addition to the analog circuits, the device also comprises asynchronous digital logic circuits for setting different synapse and neuron properties as well as different network configurations. This prototype device, fabricated in a 180 nm 1P6M CMOS process, occupies an area of 51.4 mm² and consumes approximately 4 mW for typical experiments, for example those involving attractor networks. Here we describe the details of the overall architecture and of the individual circuits, and present experimental results that showcase its potential. By supporting a wide range of cortical-like computational modules comprising plasticity mechanisms, this device will enable the realization of intelligent autonomous systems with on-line learning capabilities.
Spiking patterns of neocortical L5 pyramidal neurons in vitro change with temperature
Directory of Open Access Journals (Sweden)
Tristan eHedrick
2011-01-01
A subset of pyramidal neurons in layer 5 of the mammalian neocortex can fire action potentials in brief, high-frequency bursts while others fire spikes at regularly-spaced intervals. Here we show that individual layer 5 pyramidal neurons in acute slices from mouse primary motor cortex can adopt both regular and burst spiking patterns. During constant current injection at the soma, neurons displayed a regular firing pattern at 36-37 °C, but switched to burst spiking patterns upon cooling the slice to 24-26 °C. This change in firing pattern was reversible and repeatable and was independent of the somatic resting membrane potential. Hence these spiking patterns are not inherent to discrete populations of pyramidal neurons and are more interchangeable than previously thought. Burst spiking in these neurons is the result of electrical interactions between the soma and distal apical dendritic tree. Presumably the interactions between soma and distal dendrite are temperature-sensitive, suggesting that the manner in which layer 5 pyramidal neurons translate synaptic input into an output spiking pattern is fundamentally altered at sub-physiological temperatures.
Directory of Open Access Journals (Sweden)
Wlodarczyk, Agnieszka I; Xu, Chun; Song, Inseon; Doronin, Maxim; Wu, Yu-Wei; Walker, Matthew C; Semyanov, Alexey
2013-12-01
Because of their complex dendritic structure, pyramidal neurons have a large membrane surface relative to other cells, and so a large electrical capacitance and a large membrane time constant (τm). This results in slow depolarizations in response to excitatory synaptic inputs, and consequently increased and variable action potential latencies, which may be computationally undesirable. Tonic activation of GABAA receptors increases membrane conductance and thus regulates neuronal excitability by shunting inhibition. In addition, tonic increases in membrane conductance decrease the membrane time constant (τm) and improve the temporal fidelity of neuronal firing. Here we performed whole-cell current clamp recordings from hippocampal CA1 pyramidal neurons and found that bath application of 10 µM GABA indeed decreases τm in these cells. GABA also decreased first spike latency and jitter (standard deviation of the latency) produced by current injection of 2 rheobases (500 ms). However, when larger current injections (3-6 rheobases) were used, GABA produced no significant effect on spike jitter, which was low. Using mathematical modelling we demonstrate that the tonic GABAA conductance decreases the rise time, decay time, and half-width of EPSPs in pyramidal neurons. A similar effect was observed on EPSP/IPSP pairs produced by stimulation of Schaffer collaterals: the EPSP part of the response became shorter after application of GABA. Consistent with the current injection data, a significant decrease in spike latency and jitter was obtained in cell-attached recordings only at near-threshold stimulation (50% success rate, S50). When stimulation was increased to 2 or 3 times S50, GABA significantly affected neither spike latency nor spike jitter. Our results suggest that a decrease in τm associated with elevations in ambient GABA can improve EPSP-spike precision at near-threshold synaptic inputs.
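The mechanism in the abstract above reduces to τm = Cm / g_total: any added tonic conductance lowers τm and therefore sharpens the EPSP. A sketch with illustrative numbers (the capacitance, conductances, and the generic two-exponential EPSP kernel are assumptions for illustration, not the authors' fitted parameters):

```python
import numpy as np

def epsp(t, tau_m, tau_syn=0.005):
    """Normalized EPSP waveform: an exponential synaptic current with time
    constant tau_syn filtered by an RC membrane with time constant tau_m
    (difference of exponentials)."""
    return (np.exp(-t / tau_m) - np.exp(-t / tau_syn)) / (tau_m - tau_syn)

C_m = 200e-12     # membrane capacitance (F), illustrative
g_leak = 10e-9    # resting leak conductance (S), illustrative
g_tonic = 10e-9   # added tonic GABA-A conductance (S), assumed

tau_rest = C_m / g_leak              # 20 ms at rest
tau_gaba = C_m / (g_leak + g_tonic)  # 10 ms: tonic conductance halves tau_m

# EPSP half-width (in samples) shrinks when tau_m shrinks
t = np.linspace(0.0, 0.2, 20001)
width_rest = np.sum(epsp(t, tau_rest) > 0.5 * epsp(t, tau_rest).max())
width_gaba = np.sum(epsp(t, tau_gaba) > 0.5 * epsp(t, tau_gaba).max())
```

The narrower EPSP is what tightens the EPSP-to-spike latency distribution at near-threshold inputs.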
An FPGA Platform for Real-Time Simulation of Spiking Neuronal Networks
Pani, Danilo; Meloni, Paolo; Tuveri, Giuseppe; Palumbo, Francesca; Massobrio, Paolo; Raffo, Luigi
2017-01-01
In recent years, the idea of dynamically interfacing biological neurons with artificial ones has become more and more compelling. The reason is essentially the design of innovative neuroprostheses in which biological cell assemblies of the brain can be substituted by artificial ones. For closed-loop experiments with biological neuronal networks interfaced with in silico modeled networks, several technological challenges need to be faced, from the low-level interfacing between the living tissue and the computational model to the implementation of the latter in a form suitable for real-time processing. Field programmable gate arrays (FPGAs) can improve flexibility when simple neuronal models are required, offering good accuracy, real-time performance, and the possibility of creating a hybrid system without any custom hardware, just by programming the hardware to achieve the required functionality. In this paper, this possibility is explored by presenting a modular and efficient FPGA design of an in silico spiking neural network based on the Izhikevich model. The proposed system, prototypically implemented on a Xilinx Virtex 6 device, is able to simulate a fully connected network of up to 1,440 neurons in real time at a sampling rate of 10 kHz, which is reasonable for small- to medium-scale extracellular closed-loop experiments. PMID:28293163
Multivariate Autoregressive Modeling and Granger Causality Analysis of Multiple Spike Trains
Directory of Open Access Journals (Sweden)
Michael Krumin
2010-01-01
Recent years have seen the emergence of microelectrode arrays and optical methods allowing simultaneous recording of spiking activity from populations of neurons in various parts of the nervous system. The analysis of multiple neural spike train data could benefit significantly from existing methods for multivariate time-series analysis, which have proven very powerful in the modeling and analysis of continuous neural signals such as EEG. However, those methods have not generally been well adapted to point processes. Here, we use our recent results on correlation distortions in multivariate Linear-Nonlinear-Poisson spiking neuron models to derive generalized Yule-Walker-type equations for fitting "hidden" Multivariate Autoregressive models. We use this new framework to perform Granger causality analysis in order to extract the directed information flow pattern in networks of simulated spiking neurons. We discuss the relative merits and limitations of the new method.
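The core of any Granger analysis like the one above is a residual-variance comparison: does adding the history of x improve an autoregressive prediction of y? A minimal least-squares sketch on continuous toy signals (the paper's actual contribution, Yule-Walker fitting of hidden MVAR models for point processes, is not reproduced here):

```python
import numpy as np

def granger_index(x, y, order=2):
    """Granger causality x -> y: log ratio of residual variances of an
    AR model for y fitted without vs. with the history of x."""
    n = len(y)
    rows_red, rows_full, targets = [], [], []
    for t in range(order, n):
        past_y = y[t - order:t][::-1]
        past_x = x[t - order:t][::-1]
        rows_red.append(past_y)
        rows_full.append(np.concatenate([past_y, past_x]))
        targets.append(y[t])
    targets = np.array(targets)

    def resid_var(rows):
        A = np.array(rows)
        coef, *_ = np.linalg.lstsq(A, targets, rcond=None)
        return np.var(targets - A @ coef)

    return np.log(resid_var(rows_red) / resid_var(rows_full))

# Toy data: y is driven by the past of x, but not vice versa.
rng = np.random.default_rng(1)
x = rng.normal(size=2000)
y = np.zeros(2000)
for t in range(1, 2000):
    y[t] = 0.8 * x[t - 1] + 0.1 * rng.normal()
```

A large positive index in one direction and a near-zero index in the other recovers the directed information flow.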
A Reward-Maximizing Spiking Neuron as a Bounded Rational Decision Maker.
Leibfried, Felix; Braun, Daniel A
2015-08-01
Rate distortion theory describes how to communicate relevant information most efficiently over a channel with limited capacity. One of the many applications of rate distortion theory is bounded rational decision making, where decision makers are modeled as information channels that transform sensory input into motor output under the constraint that their channel capacity is limited. Such a bounded rational decision maker can be thought to optimize an objective function that trades off the decision maker's utility or cumulative reward against the information processing cost measured by the mutual information between sensory input and motor output. In this study, we interpret a spiking neuron as a bounded rational decision maker that aims to maximize its expected reward under the computational constraint that the mutual information between the neuron's input and output is upper bounded. This abstract computational constraint translates into a penalization of the deviation between the neuron's instantaneous and average firing behavior. We derive a synaptic weight update rule for such a rate distortion optimizing neuron and show in simulations that the neuron efficiently extracts reward-relevant information from the input by trading off its synaptic strengths against the collected reward.
Spiking neurons can discover predictive features by aggregate-label learning.
Gütig, Robert
2016-03-01
The brain routinely discovers sensory clues that predict opportunities or dangers. However, it is unclear how neural learning processes can bridge the typically long delays between sensory clues and behavioral outcomes. Here, I introduce a learning concept, aggregate-label learning, that enables biologically plausible model neurons to solve this temporal credit assignment problem. Aggregate-label learning matches a neuron's number of output spikes to a feedback signal that is proportional to the number of clues but carries no information about their timing. Aggregate-label learning outperforms stochastic reinforcement learning at identifying predictive clues and is able to solve unsegmented speech-recognition tasks. Furthermore, it allows unsupervised neural networks to discover reoccurring constellations of sensory features even when they are widely dispersed across space and time.
Rate-synchrony relationship between input and output of spike trains in neuronal networks
Wang, Sentao; Zhou, Changsong
2010-01-01
Neuronal networks interact via spike trains. How spike trains are transformed by neuronal networks is critical for understanding the underlying mechanism of information processing in the nervous system. Both the rate and the synchrony of the spikes can affect the transmission, but the relationship between them is not fully understood. Here we investigate the mapping between the input and output spike trains of a neuronal network in terms of firing rate and synchrony. With a large enough input rate, the working mode of the neurons gradually changes from temporal integration to coincidence detection as the synchrony of the input spike trains increases. Since the membrane potentials of the neurons can be depolarized to near the firing threshold by uncorrelated input spikes, small input synchrony can cause large output synchrony. On the other hand, the synchrony of the output may be reduced when the input rate is too small. The feedforward network can be regarded as an iteration of this input-output relationship: the activity in its deep layers is all-or-none, depending on the input rate and synchrony.
EVENT-DRIVEN SIMULATION OF INTEGRATE-AND-FIRE MODELS WITH SPIKE-FREQUENCY ADAPTATION
Institute of Scientific and Technical Information of China (English)
Lin Xianghong; Zhang Tianwen
2009-01-01
The evoked spike discharges of a neuron depend critically on the recent history of its electrical activity. A well-known example is the phenomenon of spike-frequency adaptation, a commonly observed property of neurons. In this paper, using a leaky integrate-and-fire model that includes an adaptation current, we propose an event-driven strategy for simulating integrate-and-fire models with spike-frequency adaptation. Such an approach is more precise than the traditional clock-driven numerical integration approach because the timing of spikes is treated exactly. In experiments, using event-driven and clock-driven strategies we simulated the adaptation time course of a single neuron and of a random network with spike-timing-dependent plasticity. The results indicate that (1) the temporal precision of spiking events affects the dynamics of single neurons as well as networks under the different simulation strategies, and (2) the simulation time scales linearly with the total number of spiking events in the event-driven simulation strategy.
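The essence of event-driven simulation is that spike times are computed in closed form rather than discovered on a time grid. For a leaky integrate-and-fire neuron with constant suprathreshold drive this is a one-line formula; a minimal sketch (the adaptation current discussed in the abstract would require numerical root-finding and is omitted here):

```python
import numpy as np

def event_driven_lif(RI=2.0, theta=1.0, tau=0.02, T=1.0):
    """Event-driven simulation of a leaky integrate-and-fire neuron,
    tau dV/dt = -V + RI, threshold theta, reset to 0.  The next spike
    time is exact: t_next = t + tau * ln((RI - V) / (RI - theta)),
    so no integration time step is needed."""
    assert RI > theta, "drive must be suprathreshold"
    t, V, spikes = 0.0, 0.0, []
    while True:
        t += tau * np.log((RI - V) / (RI - theta))
        if t > T:
            break
        spikes.append(t)
        V = 0.0          # reset after each spike
    return np.array(spikes)

spikes = event_driven_lif()   # tonic firing with ISI = tau * ln(2) here
```

Because every spike time is exact, there is no clock-driven discretization error, which is the precision advantage the abstract refers to.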
Hong, Jin Hee; Min, Cheol Hong; Jeong, Byeongha; Kojiya, Tomoyoshi; Morioka, Eri; Nagai, Takeharu; Ikeda, Masayuki; Lee, Kyoung J
2010-03-10
Circadian rhythms in spontaneous action potential (AP) firing frequencies and in cytosolic free calcium concentrations have been reported for mammalian circadian pacemaker neurons located within the hypothalamic suprachiasmatic nucleus (SCN). Also reported is the existence of "Ca(2+) spikes" (i.e., [Ca(2+)](c) transients having a bandwidth of 10-100 seconds) in SCN neurons, but it is unclear whether these SCN Ca(2+) spikes are related to the slow circadian rhythms. We addressed this issue using a Ca(2+) indicator dye (fluo-4) and a protein Ca(2+) sensor (yellow cameleon). Using fluo-4 AM dye, we found spontaneous Ca(2+) spikes in 18% of rat SCN cells in acute brain slices, but the Ca(2+) spiking frequencies showed no day/night variation. We repeated the same experiments with rat (and mouse) SCN slice cultures that expressed yellow cameleon genes for a number of different circadian phases and, surprisingly, spontaneous Ca(2+) spikes were barely observed (<3%). When fluo-4 AM or BAPTA-AM was loaded in addition in the cameleon-expressing SCN cultures, however, the number of cells exhibiting Ca(2+) spikes increased to 13-14%. Despite our extensive set of experiments, no evidence of a circadian rhythm was found in the spontaneous Ca(2+) spiking activity of the SCN. Furthermore, our study strongly suggests that the spontaneous Ca(2+) spiking activity is caused by the Ca(2+)-chelating effect of the BAPTA-based fluo-4 dye. Therefore, this induced activity seems irrelevant to the intrinsic circadian rhythm of [Ca(2+)](c) in SCN neurons. The problems with BAPTA-based dyes are widely known, and our study provides a clear case for concern, in particular for SCN Ca(2+) spikes. On the other hand, our study neither invalidates the use of these dyes as a whole, nor undermines the potential role of SCN Ca(2+) spikes in the function of the SCN.
Dynamics of Competition between Subnetworks of Spiking Neuronal Networks in the Balanced State
Lagzi, Fereshteh; Rotter, Stefan
2015-01-01
We explore and analyze the nonlinear switching dynamics of neuronal networks with non-homogeneous connectivity. The general significance of such transient dynamics for brain function is unclear; however, decision-making processes in perception and cognition, for instance, have been implicated in it. The network under study here comprises three subnetworks of either excitatory or inhibitory leaky integrate-and-fire neurons, of which two are of the same type. The synaptic weights are arranged to establish and maintain a balance between excitation and inhibition in the case of a constant external drive. Each subnetwork is randomly connected, where all neurons belonging to a particular population have the same in-degree and the same out-degree. Neurons in different subnetworks are also randomly connected with the same probability; however, depending on the type of the pre-synaptic neuron, the synaptic weight is scaled by a factor. We observed that for a certain range of the "within" versus "between" connection weights (bifurcation parameter), the network activation spontaneously switches between the two subnetworks of the same type. This kind of dynamics has been termed "winnerless competition", which also has a random component here. In our model, this phenomenon is well described by a set of coupled stochastic differential equations of Lotka-Volterra type that imply a competition between the subnetworks. The associated mean-field model shows the same dynamical behavior as observed in simulations of large networks comprising thousands of spiking neurons. The deterministic phase portrait is characterized by two attractors and a saddle node, and its stochastic component is essentially given by the multiplicative inherent noise of the system. We find that the dwell time distribution of the active states is exponential, indicating that the noise drives the system randomly from one attractor to the other. A similar model for a larger number of populations might
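The mean-field picture described above is a pair of competing Lotka-Volterra rate equations with multiplicative noise. A minimal Euler-Maruyama sketch under illustrative coefficients (not fitted to the spiking network):

```python
import numpy as np

def competitive_lv(a0, b0, c=2.0, sigma=0.0, dt=1e-3, steps=40000, seed=0):
    """Euler-Maruyama integration of two competing rates,
        da = a (1 - a - c*b) dt + sigma * a dW1
        db = b (1 - b - c*a) dt + sigma * b dW2.
    For c > 1 the deterministic system is bistable (winner-take-all,
    attractors near (1,0) and (0,1) separated by a saddle); multiplicative
    noise can drive random switching between the two attractors."""
    rng = np.random.default_rng(seed)
    a, b = a0, b0
    traj = np.empty((steps, 2))
    for i in range(steps):
        dW = rng.normal(0.0, np.sqrt(dt), 2)
        a += a * (1.0 - a - c * b) * dt + sigma * a * dW[0]
        b += b * (1.0 - b - c * a) * dt + sigma * b * dW[1]
        a, b = max(a, 0.0), max(b, 0.0)   # rates stay non-negative
        traj[i] = a, b
    return traj

noise_free = competitive_lv(0.6, 0.4)   # deterministic: initial leader wins
```

With sigma = 0 the initially larger population wins and the other dies out; raising sigma makes the same system hop stochastically between the two winner states, the "winnerless competition" of the abstract.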
Institute of Scientific and Technical Information of China (English)
GONG YuBing; XIE YanHang; XU Bo; MA XiaoGuang
2009-01-01
Based on coupled stochastic Hodgkin-Huxley neurons, we numerically studied the effect of the gating currents of ion channels, as well as of the coupling and the number of neurons, on the collective spiking rate and regularity in the coupled system. It was found that, for a given coupling strength and with a relatively large number of neurons, when gating currents are applied the collective spiking regularity decreases while the collective spiking rate increases, indicating that gating currents can aggravate the de-synchronization of the spiking of all neurons. However, gating currents had hardly any effect on the spiking of any individual neuron of the coupled system. This result, different from the reduction of the spiking rate by gating currents in a single neuron, provides a new insight into the effect of gating currents on global information processing and signal transduction in real neural systems.
Strange nonchaotic spiking in the quasiperiodically-forced Hodgkin-Huxley neuron
Energy Technology Data Exchange (ETDEWEB)
Lim, Woochang; Kim, Sangyoon [Kangwon National University, Chunchon (Korea, Republic of)
2010-06-15
We study the transition from a silent state to a spiking state by varying the dc stimulus in the quasiperiodically-forced Hodgkin-Huxley neuron. For this quasiperiodically-forced case, a new type of strange nonchaotic (SN) spiking state is found to appear between the silent state and the chaotic spiking state as an intermediate state. Using a rational approximation to the quasiperiodic forcing, we investigate the mechanism for the appearance of such an SN spiking state. We thus find that a smooth torus (corresponding to the silent state) is transformed into an SN spiking attractor via a phase-dependent saddle-node bifurcation. This is in contrast to the periodically-forced case, where the silent state transforms directly into a chaotic spiking state. SN spiking states are characterized in terms of their interspike intervals and are found to be aperiodic and complex, as chaotic spiking states are. Hence, aperiodic complex spiking may result from two dynamically different states with strange geometry (one chaotic, the other nonchaotic).
The Geometry of Spontaneous Spiking in Neuronal Networks
Medvedev, Georgi S.; Zhuravytska, Svitlana
2012-10-01
The mathematical theory of pattern formation in electrically coupled networks of excitable neurons forced by small noise is presented in this work. Using the Freidlin-Wentzell large-deviation theory for randomly perturbed dynamical systems and the elements of algebraic graph theory, we identify and analyze the main regimes in the network dynamics in terms of the key control parameters: excitability, coupling strength, and network topology. The analysis reveals the geometry of spontaneous dynamics in electrically coupled networks. Specifically, we show that the location of the minima of a certain continuous function on the surface of the unit n-cube encodes the most likely activity patterns generated by the network. By studying how the minima of this function evolve under the variation of the coupling strength, we describe the principal transformations in the network dynamics. The minimization problem is also used for the quantitative description of the main dynamical regimes and transitions between them. In particular, for the weak and strong coupling regimes, we present asymptotic formulas for the network activity rate as a function of the coupling strength and the degree of the network. The variational analysis is complemented by the stability analysis of the synchronous state in the strong coupling regime. The stability estimates reveal the contribution of the network connectivity and the properties of the cycle subspace associated with the graph of the network to its synchronization properties. This work is motivated by the experimental and modeling studies of the ensemble of neurons in the Locus Coeruleus, a nucleus in the brainstem involved in the regulation of cognitive performance and behavior.
Neuronal spike-train responses in the presence of threshold noise.
Coombes, S; Thul, R; Laudanski, J; Palmer, A R; Sumner, C J
2011-03-01
The variability of neuronal firing has been an intense topic of study for many years. From a modelling perspective it has often been studied in conductance based spiking models with the use of additive or multiplicative noise terms to represent channel fluctuations or the stochastic nature of neurotransmitter release. Here we propose an alternative approach using a simple leaky integrate-and-fire model with a noisy threshold. Initially, we develop a mathematical treatment of the neuronal response to periodic forcing using tools from linear response theory and use this to highlight how a noisy threshold can enhance downstream signal reconstruction. We further develop a more general framework for understanding the responses to large amplitude forcing based on a calculation of first passage times. This is ideally suited to understanding stochastic mode-locking, for which we numerically determine the Arnol'd tongue structure. An examination of data from regularly firing stellate neurons within the ventral cochlear nucleus, responding to sinusoidally amplitude modulated pure tones, shows tongue structures consistent with these predictions and highlights that stochastic, as opposed to deterministic, mode-locking is utilised at the level of the single stellate cell to faithfully encode periodic stimuli.
Spiking synchronization regulated by noise in three types of Hodgkin-Huxley neuronal networks
Institute of Scientific and Technical Information of China (English)
Zhang Zheng-Zhen; Zeng Shang-You; Tang Wen-Yan; Hu Jin-Lin; Zeng Shao-Wen; Ning Wei-Lian; Qiu Yi; Wu Hui-Si
2012-01-01
In this paper, we study spiking synchronization in three different types of Hodgkin-Huxley neuronal networks: small-world, regular, and random neuronal networks. All the neurons are subjected to a subthreshold stimulus and external noise. It is found that in each of the neuronal networks there is an optimal strength of noise that induces the maximal spiking synchronization. We further demonstrate that in each of the networks there is a range of synaptic conductance over which an optimal strength of noise maximizes the spiking synchronization. Only when the magnitude of the synaptic conductance is moderate will the effect be considerable; if the synaptic conductance is small or large, the effect vanishes. As the connections between neurons increase, the synaptic conductance that maximizes the effect decreases. Therefore, we show quantitatively that noise-induced maximal synchronization in the Hodgkin-Huxley neuronal network is a general effect, regardless of the specific type of network.
Exact computation of the Maximum Entropy Potential of spiking neural networks models
Cofre, Rodrigo
2014-01-01
Understanding how stimuli and synaptic connectivity influence the statistics of spike patterns in neural networks is a central question in computational neuroscience. The Maximum Entropy approach has been successfully used to characterize the statistical response of simultaneously recorded spiking neurons responding to stimuli. However, in spite of good performance in terms of prediction, the fitting parameters do not explain the underlying mechanistic causes of the observed correlations. On the other hand, mathematical models of spiking neurons (neuro-mimetic models) provide a probabilistic mapping between stimulus, network architecture, and spike patterns in terms of conditional probabilities. In this paper we build an exact analytical mapping between neuro-mimetic and Maximum Entropy models.
Yu, Haitao; Guo, Xinmeng; Wang, Jiang; Deng, Bin; Wei, Xile
2015-10-01
The phenomenon of vibrational resonance is investigated in adaptive Newman-Watts small-world neuronal networks, where the strength of synaptic connections between neurons is modulated based on spike-timing-dependent plasticity. Numerical results demonstrate that there exists an appropriate amplitude of high-frequency driving that optimizes the neural ensemble response to the weak low-frequency periodic signal. The effect of networked vibrational resonance can be significantly affected by spike-timing-dependent plasticity. It is shown that spike-timing-dependent plasticity with dominant depression can always improve the efficiency of vibrational resonance, and that a small adjusting rate can promote the transmission of a weak external signal in small-world neuronal networks. In addition, the network topology plays an important role in vibrational resonance in neural systems with spike-timing-dependent plasticity, where the system response to the subthreshold signal is maximized by an optimal network structure. Furthermore, it is demonstrated that the introduction of inhibitory synapses can considerably weaken vibrational resonance in hybrid small-world neuronal networks with spike-timing-dependent plasticity.
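Vibrational resonance is commonly illustrated on an overdamped bistable unit driven by a weak low-frequency signal plus a strong high-frequency one: the response measured at the low frequency grows sharply near an intermediate high-frequency amplitude. A sketch of that generic minimal setup (not the paper's adaptive small-world network; all parameters are illustrative):

```python
import numpy as np

def response_Q(B, A=0.1, omega=0.2, Omega=5.0, dt=0.005, periods=10):
    """Fourier amplitude of x(t) at the slow frequency omega for the
    bistable unit  x' = x - x^3 + A cos(omega t) + B cos(Omega t),
    integrated with forward Euler.  A is the weak slow signal, B the
    strong fast drive whose amplitude is the resonance knob."""
    T = periods * 2.0 * np.pi / omega
    x, s_c, s_s = 1.0, 0.0, 0.0
    for i in range(int(T / dt)):
        t = i * dt
        x += dt * (x - x**3 + A * np.cos(omega * t) + B * np.cos(Omega * t))
        s_c += x * np.cos(omega * t) * dt   # accumulate Fourier integrals
        s_s += x * np.sin(omega * t) * dt
    return 2.0 / T * np.hypot(s_c, s_s)

q_weak = response_Q(0.0)   # no fast drive: particle rattles in one well
q_res = response_Q(4.0)    # fast drive erodes the barrier: large response
```

Sweeping B and plotting response_Q(B) traces the resonance curve; in the paper the same kind of Q-versus-amplitude curve is computed for the network's spiking response.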
Enhancement of spike coherence by the departure from Gaussian noise in a Hodgkin-Huxley neuron
Institute of Scientific and Technical Information of China (English)
(no author listed)
2009-01-01
Experimental studies have shown that non-Gaussian noise exists in sensory systems such as neurons. The departure from Gaussian behavior is a characteristic parameter of non-Gaussian noise. In this paper, we have numerically studied the effect of a particular kind of non-Gaussian colored noise (NGN), especially its departure q from Gaussian noise (q = 1), on the spiking activity of a deterministic Hodgkin-Huxley (HH) neuron driven by a sub-threshold periodic stimulus. Simulation results show that the departure q can affect the spiking activity induced by noise intensity D. For smaller q values, the minimum in the coefficient of variation (CV) as a function of noise intensity D becomes smaller, showing that D-induced stochastic resonance (SR) is strengthened. Meanwhile, depending on the value of D, q can either enhance or reduce the spiking regularity. Interestingly, CV changes non-monotonically with varying q and passes through a minimum at an intermediate q, indicating the presence of "departure-induced SR". This result shows that appropriate departures of the NGN can enhance the spike coherence in the HH neuron. Since the departure of the NGN determines the probability distribution and hence may denote the type of the noise, "departure-induced SR" shows that different types of noise can enhance the spike coherence, and hence may improve the timing precision of sub-threshold signal encoding in the HH neuron.
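The coefficient of variation (CV) used in this kind of analysis is simply the ratio of the standard deviation to the mean of the interspike intervals, with lower values indicating more regular spiking. A minimal sketch on synthetic spike trains (not the paper's HH simulation):

```python
import numpy as np

def isi_cv(spike_times):
    """Coefficient of variation of interspike intervals:
    std(ISI) / mean(ISI). CV -> 0 for clock-like firing, ~1 for Poisson."""
    isi = np.diff(np.sort(np.asarray(spike_times, dtype=float)))
    return isi.std() / isi.mean()

rng = np.random.default_rng(0)
# Poisson-like train: exponential intervals, mean 10 ms
poisson = np.cumsum(rng.exponential(10.0, size=5000))
# Near-regular train: 10 ms pacing with small Gaussian timing jitter
jittered = np.arange(5000) * 10.0 + rng.normal(0.0, 0.5, size=5000)
```

For the Poisson train the CV comes out close to 1, while the jittered-regular train gives a CV an order of magnitude smaller; a noise-induced resonance would appear as a minimum of this quantity as the noise parameter is swept.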
Ushakov, Yuriy V.; Dubkov, Alexander A.; Spagnolo, Bernardo
2010-04-01
The phenomena of dissonance and consonance in a simple auditory sensory model composed of three neurons are considered. Two of them, the so-called sensory neurons, are driven by noise and subthreshold periodic signals with different frequency ratios, and their outputs plus noise are applied synaptically to a third neuron, the so-called interneuron. We present a theoretical analysis with a probabilistic approach to investigate the interspike interval statistics of the spike train generated by the interneuron. We find that tones with frequency ratios that musicians consider consonant produce, at the third neuron, inter-firing interval densities that are very distinct from the densities obtained using tones with ratios known to be dissonant. In other words, at the output of the interneuron, inharmonious signals give rise to blurry spike trains, while harmonious signals produce more regular, less noisy spike trains. Theoretical results are compared with numerical simulations.
Acetic acid modulates spike rate and spike latency to salt in peripheral gustatory neurons of rats
Breza, Joseph M.
2012-01-01
Sour and salt taste interactions are not well understood in the peripheral gustatory system. Therefore, we investigated the interaction of acetic acid and NaCl on taste processing by rat chorda tympani neurons. We recorded multi-unit responses from the severed chorda tympani nerve (CT) and single-cell responses from intact narrowly tuned and broadly tuned salt-sensitive neurons in the geniculate ganglion simultaneously with stimulus-evoked summated potentials to signal when the stimulus contacted the lingual epithelium. Artificial saliva served as the rinse and solvent for all stimuli [0.3 M NH4Cl, 0.5 M sucrose, 0.1 M NaCl, 0.01 M citric acid, 0.02 M quinine hydrochloride (QHCl), 0.1 M KCl, 0.003–0.1 M acetic acid, and 0.003–0.1 M acetic acid mixed with 0.1 M NaCl]. We used benzamil to assess NaCl responses mediated by the epithelial sodium channel (ENaC). The CT nerve responses to acetic acid/NaCl mixtures were less than those predicted by summing the component responses. Single-unit analyses revealed that acetic acid activated acid-generalist neurons exclusively in a concentration-dependent manner: increasing acid concentration increased response frequency and decreased response latency in a parallel fashion. Acetic acid suppressed NaCl responses in ENaC-dependent NaCl-specialist neurons, whereas acetic acid-NaCl mixtures were additive in acid-generalist neurons. These data suggest that acetic acid attenuates sodium responses in ENaC-expressing-taste cells in contact with NaCl-specialist neurons, whereas acetic acid-NaCl mixtures activate distinct receptor/cellular mechanisms on taste cells in contact with acid-generalist neurons. We speculate that NaCl-specialist neurons are in contact with type I cells, whereas acid-generalist neurons are in contact with type III cells in fungiform taste buds. PMID:22896718
Real-time prediction of neuronal population spiking activity using FPGA.
Li, Will X Y; Cheung, Ray C C; Chan, Rosa H M; Song, Dong; Berger, Theodore W
2013-08-01
A field-programmable gate array (FPGA)-based hardware architecture is proposed and utilized for prediction of neuronal population firing activity. The hardware system adopts the multi-input multi-output (MIMO) generalized Laguerre-Volterra model (GLVM) structure to describe the nonlinear dynamic neural process of the mammalian brain and can switch between two important functions: estimation of GLVM coefficients and prediction of neuronal population spiking activity (model outputs). The model coefficients are first estimated using the in-sample training data; the output is then predicted using the out-of-sample testing data and the field-estimated coefficients. Test results show that, compared with a previous software implementation of the generalized Laguerre-Volterra algorithm running on an Intel Core i7-2620M CPU, the FPGA-based hardware system can achieve up to a 2.66×10³ speedup in model parameter estimation and a 698.84× speedup in model output prediction. The proposed hardware platform will facilitate research on the highly nonlinear neural processes of the mammalian brain and the design of cognitive neural prostheses.
Versatile Networks of Simulated Spiking Neurons Displaying Winner-Take-All Behavior
Directory of Open Access Journals (Sweden)
Yanqing Chen
2013-03-01
We describe simulations of large-scale networks of excitatory and inhibitory spiking neurons that can generate dynamically stable winner-take-all (WTA) behavior. The network connectivity is a variant of center-surround architecture that we call center-annular-surround (CAS). In this architecture each neuron is excited by nearby neighbors and inhibited by more distant neighbors in an annular-surround region. The neural units of these networks simulate conductance-based spiking neurons that interact via mechanisms susceptible to both short-term synaptic plasticity and STDP. We show that such CAS networks display robust WTA behavior, unlike the center-surround networks and other control architectures that we have studied. We find that a large-scale network of spiking neurons with separate populations of excitatory and inhibitory neurons can give rise to smooth maps of sensory input. In addition, we show that a humanoid Brain-Based Device (BBD) under the control of a spiking WTA neural network can learn to reach to target positions in its visual field, thus demonstrating the acquisition of sensorimotor coordination.
Institute of Scientific and Technical Information of China (English)
GONG YuBing; XU Bo; MA XiaoGuang; HAN JiQu
2008-01-01
Toxins, such as tetraethylammonium (TEA) and tetrodotoxin (TTX), can poison potassium or sodium ion channels, respectively, and hence reduce the number of working ion channels and diminish the conductance. In this paper, we have studied by numerical simulations the effects of sodium and potassium ion channel poisoning on the collective spiking activity of an array of coupled stochastic Hodgkin-Huxley (HH) neurons. It is found that, for a given number of neurons, sodium or potassium ion channel block can either enhance or reduce the collective spiking regularity, depending on the membrane patch size. For a given smaller or larger patch size, potassium and sodium ion channel block can reduce or enhance the collective spiking regularity, but they have different patch-size ranges for the transition. This result shows that sodium or potassium ion channel block might have different effects on the collective spiking activity in coupled HH neurons than on a single neuron, reflecting the interplay among the diminished maximal conductance and the increased channel-noise strength due to the channel blocks, as well as the bi-directional coupling between the neurons.
Firing statistics and correlations in spiking neurons: a level-crossing approach.
Badel, Laurent
2011-10-01
We present a time-dependent level-crossing theory for linear dynamical systems perturbed by colored Gaussian noise. We apply these results to approximate the firing statistics of conductance-based integrate-and-fire neurons receiving excitatory and inhibitory Poissonian inputs. Analytical expressions are obtained for three key quantities characterizing the neuronal response to time-varying inputs: the mean firing rate, the linear response to sinusoidally modulated inputs, and the pairwise spike correlation for neurons receiving correlated inputs. The theory yields tractable results that are shown to accurately match numerical simulations and provides useful tools for the analysis of interconnected neuronal populations.
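The level-crossing idea behind this approach can be illustrated with Rice's classical formula, which predicts the mean rate of upward threshold crossings of a stationary Gaussian process from the standard deviations of the process and of its time derivative. The sketch below (illustrative parameters, a generic smoothed Gaussian process rather than the paper's conductance-based model) compares the formula with a direct crossing count:

```python
import numpy as np

rng = np.random.default_rng(1)
dt, n = 1e-3, 500_000
# Smooth colored Gaussian noise: white noise filtered with a Gaussian kernel
kernel_t = np.arange(-50, 51) * dt
kernel = np.exp(-kernel_t**2 / (2 * 0.01**2))
kernel /= np.sqrt((kernel**2).sum())           # unit-variance output
x = np.convolve(rng.standard_normal(n), kernel, mode="valid")
xdot = np.gradient(x, dt)                      # numerical time derivative

b = 1.0                                        # threshold ("firing" level)
up = np.sum((x[:-1] < b) & (x[1:] >= b))       # empirical upcrossing count
rate_emp = up / (len(x) * dt)
# Rice's formula: nu(b) = sigma_xdot / (2*pi*sigma_x) * exp(-b^2 / (2*sigma_x^2))
rate_rice = (xdot.std() / (2 * np.pi * x.std())) * np.exp(-b**2 / (2 * x.var()))
```

For this zero-mean process the two rates agree to within a few percent; in the paper's setting the same crossing argument, made time-dependent, approximates the instantaneous firing rate of the integrate-and-fire neuron.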
Interplay between Graph Topology and Correlations of Third Order in Spiking Neuronal Networks.
Jovanović, Stojan; Rotter, Stefan
2016-06-01
The study of processes evolving on networks has recently become a very popular research field, not only because of the rich mathematical theory that underpins it, but also because of its many possible applications, a number of them in the field of biology. Indeed, molecular signaling pathways, gene regulation, predator-prey interactions and the communication between neurons in the brain can be seen as examples of networks with complex dynamics. The properties of such dynamics depend largely on the topology of the underlying network graph. In this work, we want to answer the following question: Knowing network connectivity, what can be said about the level of third-order correlations that will characterize the network dynamics? We consider a linear point process as a model for pulse-coded, or spiking activity in a neuronal network. Using recent results from theory of such processes, we study third-order correlations between spike trains in such a system and explain which features of the network graph (i.e. which topological motifs) are responsible for their emergence. Comparing two different models of network topology-random networks of Erdős-Rényi type and networks with highly interconnected hubs-we find that, in random networks, the average measure of third-order correlations does not depend on the local connectivity properties, but rather on global parameters, such as the connection probability. This, however, ceases to be the case in networks with a geometric out-degree distribution, where topological specificities have a strong impact on average correlations.
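The role of out-degree tails can be illustrated by counting one of the relevant topological motifs: a "common drive" motif in which one neuron projects to two others. Its count grows with the variance of the out-degree distribution, so a hub-dominated network with the same mean degree as an Erdős-Rényi network contains far more of them. A rough sketch (network sizes and probabilities are arbitrary choices, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 200, 0.05                    # nodes and ER connection probability

def fan_out_motifs(out_degrees):
    """Number of 'common drive' motifs (one node projecting to two others):
    sum over nodes of k*(k-1)/2, which scales with out-degree variance."""
    k = np.asarray(out_degrees, dtype=float)
    return int(np.sum(k * (k - 1) / 2))

# Erdos-Renyi out-degrees: Binomial(n-1, p), low variance
k_er = rng.binomial(n - 1, p, size=n)
# Same mean out-degree, geometric distribution: heavy right tail -> hubs
k_hub = rng.geometric(1.0 / (1 + (n - 1) * p), size=n) - 1
```

With matched mean degree (~10 here), the geometric-out-degree network yields roughly twice as many fan-out motifs as the ER network, mirroring the paper's point that topological specificities, not just connection probability, shape higher-order correlations in hub networks.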
Cutsuridis, Vassilis
2013-01-01
Spike timing-dependent plasticity (STDP) experiments have shown that a synapse is strengthened when a presynaptic spike precedes a postsynaptic one and depressed vice versa. The canonical form of STDP has been shown to have an asymmetric shape with the peak long-term potentiation at +6 ms and the peak long-term depression at -5 ms. Experiments in hippocampal cultures with more complex stimuli such as triplets (one presynaptic spike combined with two postsynaptic spikes or one postsynaptic spike with two presynaptic spikes) have shown that pre-post-pre spike triplets result in no change in synaptic strength, whereas post-pre-post spike triplets lead to significant potentiation. The sign and magnitude of STDP have also been experimentally hypothesized to be modulated by inhibition. Recently, a computational study showed that the asymmetrical form of STDP in the CA1 pyramidal cell dendrite when two spikes interact switches to a symmetrical one in the presence of inhibition under certain conditions. In the present study, I investigate computationally how inhibition modulates STDP in the CA1 pyramidal neuron dendrite when it is driven by triplets. The model uses calcium as the postsynaptic signaling agent for STDP and is shown to be consistent with the experimental triplet observations in the absence of inhibition: simulated pre-post-pre spike triplets result in no change in synaptic strength, whereas simulated post-pre-post spike triplets lead to significant potentiation. When inhibition is bounded by the onset and offset of the triplet stimulation, then the strength of the synapse is decreased as the strength of inhibition increases. When inhibition arrives either few milliseconds before or at the onset of the last spike in the pre-post-pre triplet stimulation, then the synapse is potentiated. Variability in the frequency of inhibition (50 vs. 100 Hz) produces no change in synaptic strength. Finally, a 5% variation in model's calcium parameters (calcium thresholds
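The triplet asymmetry described above (post-pre-post potentiates strongly, pre-post-pre yields little net change) can be reproduced by a trace-based triplet rule in the spirit of Pfister and Gerstner (2006), rather than the calcium-based dendritic model used in this study; the parameter values below are illustrative assumptions, loosely modeled on fits to hippocampal cultures:

```python
import numpy as np

# Illustrative triplet-rule parameters (assumptions, not the paper's model)
TAU_P, TAU_M, TAU_X, TAU_Y = 16.8, 33.7, 946.0, 27.0   # trace time constants, ms
A2P, A3P, A2M, A3M = 6.1e-3, 6.7e-3, 1.6e-3, 1.4e-3   # pair/triplet amplitudes

def triplet_dw(events):
    """Total weight change for a sorted list of (time_ms, 'pre'|'post') events
    under a minimal Pfister-Gerstner-style triplet rule."""
    r1 = r2 = o1 = o2 = 0.0        # presynaptic (r) and postsynaptic (o) traces
    t_last, dw = 0.0, 0.0
    for t, kind in events:
        delta = t - t_last          # decay all traces to the current event time
        r1 *= np.exp(-delta / TAU_P); r2 *= np.exp(-delta / TAU_X)
        o1 *= np.exp(-delta / TAU_M); o2 *= np.exp(-delta / TAU_Y)
        if kind == 'pre':
            dw -= o1 * (A2M + A3M * r2)   # depression, boosted by pre triplet
            r1 += 1.0; r2 += 1.0
        else:
            dw += r1 * (A2P + A3P * o2)   # potentiation, boosted by post triplet
            o1 += 1.0; o2 += 1.0
        t_last = t
    return dw

pre_post_pre  = triplet_dw([(0, 'pre'), (10, 'post'), (20, 'pre')])
post_pre_post = triplet_dw([(0, 'post'), (10, 'pre'), (20, 'post')])
```

With these parameters the post-pre-post triplet produces a weight change several times larger than pre-post-pre, because the slow postsynaptic trace `o2` from the first post spike boosts the potentiation at the second, reproducing the experimental asymmetry the abstract describes.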
Harmony perception and regularity of spike trains in a simple auditory model
Spagnolo, B.; Ushakov, Y. V.; Dubkov, A. A.
2013-01-01
A probabilistic approach for investigating the phenomena of dissonance and consonance in a simple auditory sensory model, composed of two sensory neurons and one interneuron, is presented. We calculated the interneuron's firing statistics, that is, the interspike interval statistics of the spike train at the output of the interneuron, for consonant and dissonant inputs in the presence of additional "noise" representing random signals from other, nearby neurons and from the environment. We find that blurry interspike interval distributions (ISIDs) characterize dissonant accords, while quite regular ISIDs characterize consonant accords. The informational entropy of the non-Markov spike train at the output of the interneuron and its dependence on the frequency ratio of the input sinusoidal signals are estimated. We introduce a measure of spike-train regularity and suggest the high or low regularity level of the auditory system's spike trains as an indicator of the feeling of harmony or disharmony, respectively, during sound perception.
Analog frontend for multichannel neuronal recording system with spike and LFP separation.
Perelman, Yevgeny; Ginosar, Ran
2006-05-15
A 0.35 μm CMOS integrated circuit for multi-channel neuronal recording with twelve true-differential channels, band separation and digital offset calibration is presented. The measured signal is separated into a low-frequency local field potential and high-frequency spike data. Digitally programmable gains of up to 60 and 80 dB for the local field potential and spike bands are provided. DC offsets are compensated on both bands by means of digitally programmable DACs. The spike band is limited by a second-order low-pass filter with a digitally programmable cutoff frequency. The IC has been fabricated and tested; a 3 μV input-referred noise on the spike-data band was measured.
Amplitude-dependent spike-broadening and enhanced Ca(2+) signaling in GnRH-secreting neurons.
Van Goor, F; LeBeau, A P; Krsmanovic, L Z; Sherman, A; Catt, K J; Stojilkovic, S S
2000-09-01
In GnRH-secreting (GT1) neurons, activation of Ca(2+)-mobilizing receptors induces a sustained membrane depolarization that shifts the profile of the action potential (AP) waveform from sharp, high-amplitude to broad, low-amplitude spikes. Here we characterize this shift in the firing pattern and its impact on Ca(2+) influx experimentally by using prerecorded sharp and broad APs as the voltage-clamp command pulse. As a quantitative test of the experimental data, a mathematical model based on the membrane and ionic current properties of GT1 neurons was also used. Both experimental and modeling results indicated that inactivation of the tetrodotoxin-sensitive Na(+) channels by sustained depolarization accounted for a reduction in the amplitude of the spike upstroke. The ensuing decrease in tetraethylammonium-sensitive K(+) current activation slowed membrane repolarization, leading to AP broadening. This change in firing pattern increased the total L-type Ca(2+) current and facilitated AP-driven Ca(2+) entry. The leftward shift in the current-voltage relation of the L-type Ca(2+) channels expressed in GT1 cells allowed the depolarization-induced AP broadening to facilitate Ca(2+) entry despite a decrease in spike amplitude. Thus the gating properties of the L-type Ca(2+) channels expressed in GT1 neurons are suitable for promoting AP-driven Ca(2+) influx in receptor- and non-receptor-depolarized cells.
Slow fluctuations in recurrent networks of spiking neurons
Wieland, Stefan; Bernardi, Davide; Schwalger, Tilo; Lindner, Benjamin
2015-10-01
Networks of fast nonlinear elements may display slow fluctuations if interactions are strong. We find a transition in the long-term variability of a sparse recurrent network of perfect integrate-and-fire neurons at which the Fano factor switches from zero to infinity and the correlation time is minimized. This corresponds to a bifurcation in a linear map arising from the self-consistency of temporal input and output statistics. More realistic neural dynamics with a leak current and refractory period lead to smoothed transitions and modified critical couplings that can be theoretically predicted.
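The Fano factor used to diagnose this transition is the variance-to-mean ratio of spike counts in fixed windows: it equals 1 for a Poisson process, tends to 0 for clock-like firing, and diverges for strongly clustered spiking. A minimal sketch on synthetic trains (not the recurrent-network simulation of the paper):

```python
import numpy as np

def fano_factor(spike_times, t_max, window):
    """Fano factor of spike counts: Var(N)/E[N] over non-overlapping windows."""
    edges = np.arange(0.0, t_max + window, window)
    counts, _ = np.histogram(spike_times, bins=edges)
    return counts.var() / counts.mean()

rng = np.random.default_rng(3)
t_max = 1000.0
poisson_train = np.sort(rng.uniform(0, t_max, size=5000))  # ~Poisson, rate 5/s
regular_train = np.arange(0.0, t_max, 0.2)                 # clock-like, 5/s
```

Here the Poisson-like train gives a Fano factor near 1 and the regular train gives 0; in the paper's network, the long-term (large-window) Fano factor is the quantity that switches from zero to infinity at the transition.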
Reconstructing stimuli from the spike-times of leaky integrate and fire neurons
Directory of Open Access Journals (Sweden)
Sebastian Gerwinn
2011-02-01
Reconstructing stimuli from the spike trains of neurons is an important approach for understanding the neural code. One of the difficulties associated with this task is that signals which vary continuously in time are encoded into sequences of discrete events or spikes. An important problem is to determine how much information about the continuously varying stimulus can be extracted from the time points at which spikes were observed, especially if these time points are subject to some sort of randomness. For the special case of spike trains generated by leaky integrate-and-fire neurons, noise can be introduced by allowing variations in the threshold every time a spike is released. A simple decoding algorithm previously derived for the noiseless case can be extended to the stochastic case, but turns out to be biased. Here, we review a solution to this problem by presenting a simple yet efficient algorithm which greatly reduces the bias, and therefore leads to better decoding performance in the stochastic case.
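For the noiseless case, the decoding idea is easiest to see with a perfect (non-leaky) integrate-and-fire neuron, a simplification of the leaky model treated in the paper: each interspike interval carries exactly one threshold's worth of stimulus area, so the mean stimulus over an interval is the threshold divided by the interval length. A sketch under that assumption:

```python
import numpy as np

def if_encode(stimulus, dt, theta=1.0):
    """Perfect (non-leaky) integrate-and-fire encoder: emit a spike time
    whenever the running integral of the stimulus reaches theta."""
    acc, spikes = 0.0, []
    for i, s in enumerate(stimulus):
        acc += s * dt
        if acc >= theta:
            spikes.append((i + 1) * dt)
            acc -= theta              # keep the residual charge
    return np.array(spikes)

def if_decode(spike_times, theta=1.0):
    """Each ISI carries exactly theta of stimulus area, so the mean
    stimulus over an ISI is theta / ISI (piecewise-constant estimate)."""
    isi = np.diff(spike_times)
    return theta / isi

dt = 1e-3
stim = np.full(2000, 4.0)             # constant stimulus of 4.0 for 2 s
est = if_decode(if_encode(stim, dt))
```

For a constant stimulus the estimate is exact up to discretization; with a leaky neuron and threshold noise the same mapping becomes biased, which is precisely the problem the reviewed algorithm addresses.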
Aldworth, Zane N; Miller, John P; Gedeon, Tomás; Cummins, Graham I; Dimitrov, Alexander G
2005-06-01
What is the meaning associated with a single action potential in a neural spike train? The answer depends on the way the question is formulated. One general approach toward formulating this question involves estimating the average stimulus waveform preceding spikes in a spike train. Many different algorithms have been used to obtain such estimates, ranging from spike-triggered averaging of stimuli to correlation-based extraction of "stimulus-reconstruction" kernels or spatiotemporal receptive fields. We demonstrate that all of these approaches miscalculate the stimulus feature selectivity of a neuron. Their errors arise from the manner in which the stimulus waveforms are aligned to one another during the calculations. Specifically, the waveform segments are locked to the precise time of spike occurrence, ignoring the intrinsic "jitter" in the stimulus-to-spike latency. We present an algorithm that takes this jitter into account. "Dejittered" estimates of the feature selectivity of a neuron are more accurate (i.e., provide a better estimate of the mean waveform eliciting a spike) and more precise (i.e., have smaller variance around that waveform) than estimates obtained using standard techniques. Moreover, this approach yields an explicit measure of spike-timing precision. We applied this technique to study feature selectivity and spike-timing precision in two types of sensory interneurons in the cricket cercal system. The dejittered estimates of the mean stimulus waveforms preceding spikes were up to three times larger than estimates based on the standard techniques used in previous studies and had power that extended into higher-frequency ranges. Spike timing precision was approximately 5 ms.
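The blurring effect of latency jitter on the spike-triggered average (STA) can be demonstrated directly: if spike times are displaced by a random latency, the STA peak is smeared across lags and its amplitude drops. A toy sketch with a white-noise stimulus and threshold-triggered "spikes" (not the cricket data, and not the dejittering algorithm itself, only the artifact it corrects):

```python
import numpy as np

rng = np.random.default_rng(4)
stim = rng.standard_normal(200_000)
# "Spikes" elicited by large upward stimulus values, away from the edges
true_spikes = np.where(stim > 2.5)[0]
true_spikes = true_spikes[(true_spikes > 20) & (true_spikes < len(stim) - 20)]

def sta(stim, spike_idx, lags=10):
    """Spike-triggered average: mean stimulus segment around each spike."""
    segs = np.stack([stim[i - lags:i + lags + 1] for i in spike_idx])
    return segs.mean(axis=0)

sta_exact = sta(stim, true_spikes)
jitter = rng.integers(-5, 6, size=len(true_spikes))    # +/-5-sample latency jitter
sta_jit = sta(stim, true_spikes + jitter)
```

The exact-time STA shows a sharp peak (around 2.8 here) at zero lag, while the jittered STA spreads the same feature over eleven lags and its central value collapses, which is the miscalculation the dejittering approach is designed to undo.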
The adaptation of spike backpropagation delays in cortical neurons
Directory of Open Access Journals (Sweden)
Yossi Buskila
2013-10-01
We measured action potential backpropagation delays in apical dendrites of layer 5 pyramidal neurons of the somatosensory cortex under different stimulation regimes that exclude synaptic involvement. These delays showed robust features and did not correlate with either transient changes in stimulus strength or low-frequency stimulation of suprathreshold membrane oscillations. However, our results indicate that backpropagation delays correlate with high-frequency (>10 Hz) stimulation of membrane oscillations, and that persistent suprathreshold sinusoidal stimulation injected directly into the soma results in an increase of the backpropagation delay, suggesting an intrinsic adaptation of the backpropagating action potential (bAP) that does not involve any synaptic modifications. Moreover, the calcium chelator BAPTA eliminated the alterations in the backpropagation delays, strengthening the hypothesis that increased calcium concentration in the dendrites modulates dendritic excitability and can impact backpropagation velocity. These results emphasize the impact of dendritic excitability on bAP velocity along the dendritic tree, which affects the precision of bAP arrival at the synapse during specific stimulus regimes and can shift the extent and polarity of synaptic strength during suprathreshold synaptic processes such as STDP.
Characteristics of fast-spiking neurons in the striatum of behaving monkeys.
Yamada, Hiroshi; Inokawa, Hitoshi; Hori, Yukiko; Pan, Xiaochuan; Matsuzaki, Ryuichi; Nakamura, Kae; Samejima, Kazuyuki; Shidara, Munetaka; Kimura, Minoru; Sakagami, Masamichi; Minamimoto, Takafumi
2016-04-01
Inhibitory interneurons are the fundamental constituents of neural circuits that organize network outputs. The striatum, as part of the basal ganglia, is involved in reward-directed behaviors. However, the role of inhibitory interneurons in this process remains unclear, especially in behaving monkeys. We recorded striatal single-neuron activity while monkeys performed reward-directed hand or eye movements. Presumed parvalbumin-containing GABAergic interneurons (fast-spiking neurons, FSNs) were identified based on narrow spike shapes in three independent experiments, though they were a small population (4.2%, 42/997). We found that FSNs are characterized by high-frequency, less-bursty discharges, distinct from the basic firing properties of the presumed projection neurons (phasically active neurons, PANs). In addition, the encoded information regarding actions and outcomes was similar between FSNs and PANs in terms of the proportion of neurons, but discharge selectivity was higher in PANs than in FSNs. The coding of actions and outcomes in FSNs and PANs was consistently observed under various behavioral contexts in distinct parts of the striatum (caudate nucleus, putamen, and anterior striatum). Our results suggest that FSNs may enhance the discharge selectivity of postsynaptic output neurons (PANs) in encoding crucial variables for reward-directed behavior.
Zhong, Ping; Yan, Zhen
2016-01-01
Dopamine D4 receptor (D4R), which is strongly linked to neuropsychiatric disorders, such as attention-deficit hyperactivity disorder and schizophrenia, is highly expressed in pyramidal neurons and GABAergic interneurons in prefrontal cortex (PFC). In this study, we examined the impact of D4R on the excitability of these 2 neuronal populations. We found that D4R activation decreased the frequency of spontaneous action potentials (sAPs) in PFC pyramidal neurons, whereas it induced a transient increase followed by a decrease of sAP frequency in PFC parvalbumin-positive (PV+) interneurons. D4R activation also induced distinct effects in both types of PFC neurons on spontaneous excitatory and inhibitory postsynaptic currents, which drive the generation of sAP. Moreover, dopamine substantially decreased sAP frequency in PFC pyramidal neurons, but markedly increased sAP frequency in PV+ interneurons, and both effects were partially mediated by D4R activation. In the phencyclidine model of schizophrenia, the decreasing effect of D4R on sAP frequency in both types of PFC neurons was attenuated, whereas the increasing effect of D4R on sAP in PV+ interneurons was intact. These results suggest that D4R activation elicits distinct effects on synaptically driven excitability in PFC projection neurons versus fast-spiking interneurons, which are differentially altered in neuropsychiatric disorder-related conditions.
Stream-based Hebbian eigenfilter for real-time neuronal spike discrimination
Directory of Open Access Journals (Sweden)
Yu Bo
2012-04-01
Background: Principal component analysis (PCA) has been widely employed for automatic neuronal spike sorting. Calculating principal components (PCs) is computationally expensive and requires complex numerical operations and large memory resources, so substantial hardware resources are needed for hardware implementations of PCA. The generalized Hebbian algorithm (GHA) was proposed for calculating PCs of neuronal spikes in our previous work, eliminating the need for the computationally expensive covariance analysis and eigenvalue decomposition of conventional PCA algorithms. However, large memory resources are still inherently required for storing a large volume of aligned spikes for training PCs. Such large memories consume substantial hardware resources and contribute significant power dissipation, making GHA difficult to implement in portable or implantable multi-channel recording micro-systems. Methods: In this paper, we present a new algorithm for PCA-based spike sorting based on GHA, namely the stream-based Hebbian eigenfilter, which eliminates the inherent memory requirements of GHA while keeping the accuracy of spike sorting by exploiting the pseudo-stationarity of neuronal spikes. Because of the reduction in hardware storage requirements, the proposed algorithm can lead to ultra-low hardware resource usage and power consumption, which is critical for future multi-channel micro-systems. Both clinical and synthetic neural recording data sets were employed to evaluate the accuracy of the stream-based Hebbian eigenfilter. The performance of spike sorting using the stream-based eigenfilter and the computational complexity of the eigenfilter were rigorously evaluated and compared with conventional PCA algorithms. Field-programmable gate arrays (FPGAs) were employed to implement the proposed algorithm, evaluate the hardware implementations and demonstrate the reduction in both power consumption and
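For a single principal component, the GHA update reduces to Oja's rule, which learns the leading eigenvector from a data stream without ever forming a covariance matrix; this streaming property is what the eigenfilter exploits. A minimal sketch on synthetic two-dimensional "spike features" (data, rotation and learning rate are illustrative choices, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(5)
# Synthetic 2-D feature stream with one dominant direction (std 3 vs 0.5)
data = rng.standard_normal((20_000, 2)) * np.array([3.0, 0.5])
data = data @ np.array([[0.8, 0.6], [-0.6, 0.8]])   # rotate the dominant axis

w = np.array([1.0, 0.0])
eta = 1e-3
for x in data:                    # single streaming pass, no covariance storage
    y = w @ x                     # projection onto the current estimate
    w += eta * y * (x - y * w)    # Oja's rule: GHA restricted to one component

# Batch PCA for comparison: top eigenvector of the sample covariance
top_pc = np.linalg.eigh(np.cov(data.T))[1][:, -1]
cos = abs(w @ top_pc) / np.linalg.norm(w)
```

After one pass the streamed estimate is almost perfectly aligned with the batch-PCA eigenvector (cosine similarity near 1) and `w` self-normalizes toward unit length; the stream-based eigenfilter extends this idea to the spike-sorting pipeline.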
Cassidy, Andrew S; Georgiou, Julius; Andreou, Andreas G
2013-09-01
We present a design framework for neuromorphic architectures in the nano-CMOS era. Our approach to the design of spiking neurons and STDP learning circuits relies on parallel computational structures where neurons are abstracted as digital arithmetic logic units and communication processors. Using this approach, we have developed arrays of silicon neurons that scale to millions of neurons in a single state-of-the-art Field Programmable Gate Array (FPGA). We demonstrate the validity of the design methodology through the implementation of cortical development in a circuit of spiking neurons, STDP synapses, and neural architecture optimization. Copyright © 2013 Elsevier Ltd. All rights reserved.
van Welie, I.; Remme, M.W.H.; van Hooft, J.A.; Wadman, W.J.
2006-01-01
Pyramidal neurons in the subiculum typically display either bursting or regular-spiking behaviour. Although this classification into two neuronal classes is well described, it is unknown how these two classes of neurons contribute to the integration of input to the subiculum. Here, we report that
Responses of Hodgkin-Huxley Neuronal Systems to Spike-Train Inputs
Institute of Scientific and Technical Information of China (English)
CHANG Wen-Li; WANG Sheng-Jun; WANG Ying-Hai
2007-01-01
We investigate the responses of globally coupled Hodgkin-Huxley neuronal systems to periodic spike-train inputs. The firing activities of the neuronal networks show different rhythmic patterns for different parameters. These rhythmic patterns can be used to explain cycles of firing in the real brain. The activity patterns, average activity and coherence measure are affected by two quantities, the percentage of excitatory couplings and the stimulus intensity, of which the percentage of excitatory couplings is the more important, since the transition phenomenon in the average activity arises from it.
Pharmacological study of the one spike spherical neuron phenotype in Gymnotus omarorum.
Nogueira, J; Caputi, A A
2014-01-31
The intrinsic properties of spherical neurons play a fundamental role in the sensory processing of self-generated signals along a fast electrosensory pathway in electric fish. Previous results indicate that the spherical neuron's intrinsic properties depend mainly on the presence of two resonant currents that tend to clamp the voltage near the resting potential. Here we show that these are: a low-threshold potassium current blocked by 4-aminopyridine and a mixed cationic current blocked by cesium chloride. We also show that the low-threshold potassium current also causes the long refractory period, explaining the necessary properties that implement the dynamic filtering of the self-generated signals previously described. Comparative data from other fish and from the auditory system indicate that other single spiking onset neurons might differ in the channel repertoire observed in the spherical neurons of Gymnotus omarorum. Copyright © 2013 IBRO. Published by Elsevier Ltd. All rights reserved.
Inferring Neuronal Network Connectivity from Spike Data: A Temporal Datamining Approach
Patnaik, Debprakash; Unnikrishnan, K P
2008-01-01
Understanding the functioning of a neural system in terms of its underlying circuitry is an important problem in neuroscience. Recent developments in electrophysiology and imaging allow one to simultaneously record activities of hundreds of neurons. Inferring the underlying neuronal connectivity patterns from such multi-neuronal spike train data streams is a challenging statistical and computational problem. This task involves finding significant temporal patterns in vast amounts of symbolic time series data. In this paper we show that frequent episode mining methods from the field of temporal data mining can be very useful in this context. In the frequent episode discovery framework, the data are viewed as a sequence of events, each characterized by an event type and a time of occurrence, and episodes are certain types of temporal patterns in such data. Here we show that, using the set of discovered frequent episodes from multi-neuronal data, one can infer different types of connectivity pa...
Non-Markovian spiking statistics of a neuron with delayed feedback in presence of refractoriness.
Kravchuk, Kseniia; Vidybida, Alexander
2014-02-01
Spiking statistics of a self-inhibitory neuron are considered. The neuron receives excitatory input from a Poisson stream and inhibitory impulses through a feedback line with a delay. After triggering, the neuron is in the refractory state for a positive period of time. Recently [35,6], it was proven for a neuron with delayed feedback and without a refractory state that the output stream of interspike intervals (ISIs) cannot be represented as a Markov process. The presence of a refractory state in a sense limits the memory range in the spiking process, which might restore the Markov property to the ISI stream. Here we check this possibility. For this purpose, we calculate the conditional probability density P(t_{n+1} | t_n, ..., t_1, t_0) and prove exactly that it does not reduce to P(t_{n+1} | t_n, ..., t_1) for any n ≥ 0. This means that the activity of the system with a refractory state likewise cannot be represented as a Markov process of any order. We conclude that it is the presence of delayed feedback that results in the non-Markovian statistics of neuronal firing. As delayed feedback lines are common in any realistic neural network, the non-Markovian statistics of network activity should be taken into account when processing experimental data.
Transmitter modulation of spike-evoked calcium transients in arousal related neurons
DEFF Research Database (Denmark)
Kohlmeier, Kristi Anne; Leonard, Christopher S
2006-01-01
Nitric oxide synthase (NOS)-containing cholinergic neurons in the laterodorsal tegmentum (LDT) influence behavioral and motivational states through their projections to the thalamus, ventral tegmental area and a brainstem 'rapid eye movement (REM)-induction' site. Action potential-evoked intracellular calcium transients dampen excitability and stimulate NO production in these neurons. In this study, we investigated the action of several arousal-related neurotransmitters and the role of specific calcium channels in these LDT Ca(2+)-transients by simultaneous whole-cell recording and calcium ... of cholinergic LDT neurons, and that inhibition of spike-evoked Ca(2+)-transients is a common action of neurotransmitters that also activate GIRK channels in these neurons. Because spike-evoked calcium influx dampens excitability, our findings suggest that these 'inhibitory' transmitters could boost firing rate...
Efficient Spike-Coding with Multiplicative Adaptation in a Spike Response Model
Bohte, S.M.
2012-01-01
Neural adaptation underlies the ability of neurons to maximize encoded information over a wide dynamic range of input stimuli. While adaptation is an intrinsic feature of neuronal models like the Hodgkin-Huxley model, the challenge is to integrate adaptation in models of neural computation. Rece...
Mirrored STDP Implements Autoencoder Learning in a Network of Spiking Neurons.
Burbank, Kendra S
2015-12-01
The autoencoder algorithm is a simple but powerful unsupervised method for training neural networks. Autoencoder networks can learn sparse distributed codes similar to those seen in cortical sensory areas such as visual area V1, but they can also be stacked to learn increasingly abstract representations. Several computational neuroscience models of sensory areas, including Olshausen & Field's Sparse Coding algorithm, can be seen as autoencoder variants, and autoencoders have seen extensive use in the machine learning community. Despite their power and versatility, autoencoders have been difficult to implement in a biologically realistic fashion. The challenges include their need to calculate differences between two neuronal activities and their requirement for learning rules which lead to identical changes at feedforward and feedback connections. Here, we study a biologically realistic network of integrate-and-fire neurons with anatomical connectivity and synaptic plasticity that closely matches that observed in cortical sensory areas. Our choice of synaptic plasticity rules is inspired by recent experimental and theoretical results suggesting that learning at feedback connections may have a different form from learning at feedforward connections, and our results depend critically on this novel choice of plasticity rules. Specifically, we propose that plasticity rules at feedforward versus feedback connections are temporally opposed versions of spike-timing dependent plasticity (STDP), leading to a symmetric combined rule we call Mirrored STDP (mSTDP). We show that with mSTDP, our network follows a learning rule that approximately minimizes an autoencoder loss function. When trained with whitened natural image patches, the learned synaptic weights resemble the receptive fields seen in V1. Our results use realistic synaptic plasticity rules to show that the powerful autoencoder learning algorithm could be within the reach of real biological networks.
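The core idea of the record above, that feedback synapses follow a temporally opposed STDP window so that reciprocal feedforward and feedback weights receive matching updates, can be sketched with standard exponential STDP curves. The window shapes and parameter values below are generic illustrations, not the paper's exact rule:

```python
import numpy as np

def stdp_dw(dt, a_plus=0.01, a_minus=0.01, tau=20.0):
    """Classic exponential STDP window, dt = t_post - t_pre (ms).
    Potentiation when the presynaptic spike precedes the postsynaptic one,
    depression otherwise."""
    return np.where(dt > 0, a_plus * np.exp(-dt / tau),
                    -a_minus * np.exp(dt / tau))

def mirrored_stdp_dw(dt, **kw):
    """Illustrative 'mirrored' rule for feedback synapses: the temporally
    opposed window, so that for a reciprocally connected pair the feedback
    weight change at lag dt equals the feedforward change at lag -dt."""
    return stdp_dw(-dt, **kw)
```

With this pairing, a pre-before-post event that potentiates the feedforward synapse produces the identical update on the corresponding feedback synapse, the symmetry mSTDP relies on.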
Spiking Neurons with ASNN Based-Methods for the Neural Block Cipher
Al-Omari, Saleh Ali K
2010-01-01
Problem statement: This paper examines an Artificial Spiking Neural Network (ASNN), which interconnects a group of artificial neurons using a mathematical model, with the aid of a block cipher. The aim of this research is to develop a block cipher whose keys are randomly generated by the ASNN and which can have any variable block length. The private key is kept secret and does not have to be exchanged with the other side of the communication channel, so this presents a more secure procedure for key scheduling. The process enables faster changes of encryption keys and allows network-level encryption to be implemented at high speed without the burden of factorization. Approach: The block cipher is converted into a public cryptosystem with a low level of vulnerability to brute-force attack, and it can defend against linear attacks, since the Artificial Neural Network (ANN) architecture conveys non-linearity to the encryption/decryption procedures. Result: In this paper is present to ...
Shevchenko, Talent; Teruyama, Ryoichi; Armstrong, William E
2004-11-01
We identified Kv3-like high-threshold K+ currents in hypothalamic supraoptic neurons using whole cell recordings in hypothalamic slices and in acutely dissociated neurons. Tetraethylammonium (TEA)-sensitive currents resembled those of Kv3-like channels. In slices, tests with 0.01-0.7 mM TEA produced an IC50 of 200-300 nM for both fast and persistent currents. The fast transient current was similar to currents associated with the expression of Kv3.4 subunits, given that it was sensitive to BDS-I (100 nM). The persistent TEA-sensitive current appeared similar to those attributed to Kv3.1/3.2 subunits. Although qualitatively similar, oxytocin (OT) and vasopressin (VP) neurons in slices differed in the stronger presence of the persistent current in VP neurons. In both cell types, the IC50 for TEA-induced spike broadening was similar to that observed for current suppression in voltage clamp. However, TEA had a greater effect on the spike width of VP neurons than of OT neurons. Immunochemical studies revealed a stronger expression of the Kv3.1b alpha-subunit in VP neurons, which may be related to the greater importance of this current type in VP spike repolarization. Because OT and VP neurons are not considered fast firing, but do exhibit frequency- and calcium-dependent spike broadening, Kv3-like currents may be important for maintaining spike width and calcium influx within acceptable limits during repetitive firing.
Directory of Open Access Journals (Sweden)
Ning Qiao
2015-04-01
Implementing compact, low-power artificial neural processing systems with real-time on-line learning abilities is still an open challenge. In this paper we present a full-custom mixed-signal VLSI device with neuromorphic learning circuits that emulate the biophysics of real spiking neurons and dynamic synapses, for exploring the properties of computational neuroscience models and for building brain-inspired computing systems. The proposed architecture allows the on-chip configuration of a wide range of network connectivities, including recurrent and deep networks with short-term and long-term plasticity. The device comprises 128K analog synapse and 256 neuron circuits with biologically plausible dynamics and bi-stable spike-based plasticity mechanisms that endow it with on-line learning abilities. In addition to the analog circuits, the device also comprises asynchronous digital logic circuits for setting different synapse and neuron properties as well as different network configurations. This prototype device, fabricated using a 180 nm 1P6M CMOS process, occupies an area of 51.4 mm², and consumes approximately 4 mW for typical experiments, for example those involving attractor networks. Here we describe the details of the overall architecture and of the individual circuits, and present experimental results that showcase its potential. By supporting a wide range of cortical-like computational modules comprising plasticity mechanisms, this device will enable the realization of intelligent autonomous systems with on-line learning capabilities.
Lyamzin, Dmitry R; Macke, Jakob H; Lesica, Nicholas A
2010-01-01
As multi-electrode and imaging technology begin to provide us with simultaneous recordings of large neuronal populations, new methods for modeling such data must also be developed. Here, we present a model for the type of data commonly recorded in early sensory pathways: responses to repeated trials of a sensory stimulus in which each neuron has its own time-varying spike rate (as described by its PSTH) and the dependencies between cells are characterized by both signal and noise correlations. This model is an extension of previous attempts to model population spike trains designed to control only the total correlation between cells. In our model, the response of each cell is represented as a binary vector given by the dichotomized sum of a deterministic "signal" that is repeated on each trial and a Gaussian random "noise" that is different on each trial. This model allows the simulation of population spike trains with PSTHs, trial-to-trial variability, and pairwise correlations that match those measured experimentally. Furthermore, the model also allows the noise correlations in the spike trains to be manipulated independently of the signal correlations and single-cell properties. To demonstrate the utility of the model, we use it to simulate and manipulate experimental responses from the mammalian auditory and visual systems. We also present a general form of the model in which both the signal and noise are Gaussian random processes, allowing the mean spike rate, trial-to-trial variability, and pairwise signal and noise correlations to be specified independently. Together, these methods for modeling spike trains comprise a potentially powerful set of tools for both theorists and experimentalists studying population responses in sensory systems.
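A minimal sketch of the dichotomized-Gaussian construction this abstract describes: binary spikes arise by thresholding a latent Gaussian variable whose repeated "signal" part is folded here into per-bin firing probabilities, with correlated Gaussian "noise" added on each trial. Function and parameter names are illustrative, and this simplified version uses a single shared latent correlation rather than the full signal/noise correlation specification of the paper:

```python
import numpy as np
from statistics import NormalDist

def dg_spikes(rates, latent_corr, n_trials, seed=0):
    """Generate binary population spike trains by thresholding correlated
    Gaussian noise so that each cell/bin fires with the given probability.
    rates: (n_cells, n_bins) firing probabilities in (0, 1).
    latent_corr: common pairwise correlation of the latent Gaussians."""
    rng = np.random.default_rng(seed)
    n_cells, n_bins = rates.shape
    # Thresholds chosen so P(z > thr) = rate for a standard normal z.
    thr = np.vectorize(lambda p: NormalDist().inv_cdf(1.0 - p))(rates)
    C = np.full((n_cells, n_cells), latent_corr)
    np.fill_diagonal(C, 1.0)
    L = np.linalg.cholesky(C)           # unit-variance correlated latents
    spikes = np.empty((n_trials, n_cells, n_bins), dtype=np.uint8)
    for t in range(n_trials):
        z = L @ rng.standard_normal((n_cells, n_bins))
        spikes[t] = (z > thr).astype(np.uint8)
    return spikes
```

Raising `latent_corr` increases the pairwise correlation of the binary spike trains while leaving each cell's firing probability untouched, the key property the paper exploits to manipulate noise correlations independently of single-cell statistics.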
Neuron-specific stimulus masking reveals interference in spike timing at the cortical level.
Larson, Eric; Maddox, Ross K; Perrone, Ben P; Sen, Kamal; Billimoria, Cyrus P
2012-02-01
The auditory system is capable of robust recognition of sounds in the presence of competing maskers (e.g., other voices or background music). This capability arises despite the fact that masking stimuli can disrupt neural responses at the cortical level. Since the origins of such interference effects remain unknown, in this study, we work to identify and quantify neural interference effects that originate due to masking occurring within and outside receptive fields of neurons. We record from single and multi-unit auditory sites from field L, the auditory cortex homologue in zebra finches. We use a novel method called spike timing-based stimulus filtering that uses the measured response of each neuron to create an individualized stimulus set. In contrast to previous adaptive experimental approaches, which have typically focused on the average firing rate, this method uses the complete pattern of neural responses, including spike timing information, in the calculation of the receptive field. When we generate and present novel stimuli for each neuron that mask the regions within the receptive field, we find that the time-varying information in the neural responses is disrupted, degrading neural discrimination performance and decreasing spike timing reliability and sparseness. We also find that, while removing stimulus energy from frequency regions outside the receptive field does not significantly affect neural responses for many sites, adding a masker in these frequency regions can nonetheless have a significant impact on neural responses and discriminability without a significant change in the average firing rate. These findings suggest that maskers can interfere with neural responses by disrupting stimulus timing information with power either within or outside the receptive fields of neurons.
Mechanism of frequency-dependent broadening of molluscan neurone soma spikes.
Aldrich, R W; Getting, P A; Thompson, S H
1979-06-01
1. Action potentials recorded from isolated dorid neurone somata increase in duration, i.e. broaden, during low-frequency repetitive firing. Spike broadening is substantially reduced by external Co ions, implicating an inward Ca current. 2. During repetitive voltage clamp steps at frequencies slower than 1 Hz, in 100 mM tetraethylammonium ions (TEA), inward Ca currents do not increase in amplitude. 3. Repetitive action potentials result in inactivation of the delayed outward current. Likewise, repetitive voltage clamp steps which cause inactivation of the delayed outward current also result in longer-duration action potentials. 4. The frequency dependence of spike broadening and that of inactivation of the voltage-dependent component (IK) of the delayed outward current are similar. 5. Inactivation of IK is observed in all cells; however, only cells with relatively large inward Ca currents show significant spike broadening. Spike broadening apparently results from the frequency-dependent inactivation of IK, which increases the expression of the inward Ca current as a prominent shoulder on the repolarizing phase of the action potential. In addition, the presence of a prolonged Ca current increases the duration of the first action potential, thereby allowing sufficient time for inactivation of IK.
Flexible models for spike count data with both over- and under- dispersion.
Stevenson, Ian H
2016-08-01
A key observation in systems neuroscience is that neural responses vary, even in controlled settings where stimuli are held constant. Many statistical models assume that trial-to-trial spike count variability is Poisson, but there is considerable evidence that neurons can be substantially more or less variable than Poisson depending on the stimuli, attentional state, and brain area. Here we examine a set of spike count models based on the Conway-Maxwell-Poisson (COM-Poisson) distribution that can flexibly account for both over- and under-dispersion in spike count data. We illustrate applications of this noise model for Bayesian estimation of tuning curves and peri-stimulus time histograms. We find that COM-Poisson models with group/observation-level dispersion, where spike count variability is a function of time or stimulus, produce more accurate descriptions of spike counts compared to Poisson models as well as negative-binomial models often used as alternatives. Since dispersion is one determinant of parameter standard errors, COM-Poisson models are also likely to yield more accurate model comparison. More generally, these methods provide a useful, model-based framework for inferring both the mean and variability of neural responses.
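The COM-Poisson distribution referenced above has probability mass function P(Y = y) ∝ λ^y / (y!)^ν, where ν < 1 yields over-dispersion, ν > 1 under-dispersion, and ν = 1 recovers the Poisson. A small numerical sketch with truncated normalization (illustrative names, not the paper's estimation code):

```python
import numpy as np
from math import lgamma

def com_poisson_pmf(lam, nu, y_max=200):
    """COM-Poisson pmf, P(Y = y) proportional to lam**y / (y!)**nu,
    normalized by truncating the infinite sum at y_max.
    Computed in log space to avoid overflow for large counts."""
    y = np.arange(y_max + 1)
    logp = y * np.log(lam) - nu * np.array([lgamma(k + 1) for k in y])
    logp -= logp.max()
    p = np.exp(logp)
    return p / p.sum()

def fano(lam, nu):
    """Fano factor (variance / mean) of the truncated COM-Poisson."""
    p = com_poisson_pmf(lam, nu)
    y = np.arange(len(p))
    m = (p * y).sum()
    v = (p * (y - m) ** 2).sum()
    return v / m
```

Sweeping ν while holding λ fixed moves the Fano factor above or below 1, which is exactly the flexibility the abstract contrasts with Poisson and negative-binomial noise models.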
Gandolfo, Daniel; Rodriguez, Roger; Tuckwell, Henry C.
2017-01-01
We investigate the dynamics of large-scale interacting neural populations, composed of conductance-based, spiking model neurons with modifiable synaptic connection strengths, which are possibly also subjected to external noisy currents. The network dynamics is controlled by a set of neural population probability distributions (PPD), which are constructed along the same lines as in the Klimontovich approach to the kinetic theory of plasmas. An exact non-closed, nonlinear system of integro-partial differential equations is derived for the PPDs. As is customary, a closing procedure leads to a mean-field limit. The equations we have obtained are of the same type as those recently derived using rigorous techniques of probability theory. The numerical solutions of these so-called McKean-Vlasov-Fokker-Planck equations, which are only valid in the limit of infinite-size networks, show that the statistical measures obtained from the PPDs are in good agreement with those obtained through direct integration of the stochastic dynamical system for large but finite-size networks. Although numerical solutions have been obtained for networks of Fitzhugh-Nagumo model neurons, which are often used to approximate Hodgkin-Huxley model neurons, the theory can be readily applied to networks of general conductance-based model neurons of arbitrary dimension.
A computational model of motor neuron degeneration.
Le Masson, Gwendal; Przedborski, Serge; Abbott, L F
2014-08-20
To explore the link between bioenergetics and motor neuron degeneration, we used a computational model in which detailed morphology and ion conductance are paired with intracellular ATP production and consumption. We found that reduced ATP availability increases the metabolic cost of a single action potential and disrupts K+/Na+ homeostasis, resulting in a chronic depolarization. The magnitude of the ATP shortage at which this ionic instability occurs depends on the morphology and intrinsic conductance characteristic of the neuron. If ATP shortage is confined to the distal part of the axon, the ensuing local ionic instability eventually spreads to the whole neuron and involves fasciculation-like spiking events. A shortage of ATP also causes a rise in intracellular calcium. Our modeling work supports the notion that mitochondrial dysfunction can account for salient features of the paralytic disorder amyotrophic lateral sclerosis, including motor neuron hyperexcitability, fasciculation, and differential vulnerability of motor neuron subpopulations.
Mandelblat-Cerf, Yael; Ramesh, Rohan N; Burgess, Christian R; Patella, Paola; Yang, Zongfang; Lowell, Bradford B; Andermann, Mark L
2015-01-01
Agouti-related-peptide (AgRP) neurons—interoceptive neurons in the arcuate nucleus of the hypothalamus (ARC)—are both necessary and sufficient for driving feeding behavior. To better understand the functional roles of AgRP neurons, we performed optetrode electrophysiological recordings from AgRP neurons in awake, behaving AgRP-IRES-Cre mice. In free-feeding mice, we observed a fivefold increase in AgRP neuron firing with mounting caloric deficit in afternoon vs morning recordings. In food-restricted mice, as food became available, AgRP neuron firing dropped, yet remained elevated as compared to firing in sated mice. The rapid drop in spiking activity of AgRP neurons at meal onset may reflect a termination of the drive to find food, while residual, persistent spiking may reflect a sustained drive to consume food. Moreover, nearby neurons inhibited by AgRP neuron photostimulation, likely including satiety-promoting pro-opiomelanocortin (POMC) neurons, demonstrated opposite changes in spiking. Finally, firing of ARC neurons was also rapidly modulated within seconds of individual licks for liquid food. These findings suggest novel roles for antagonistic AgRP and POMC neurons in the regulation of feeding behaviors across multiple timescales. DOI: http://dx.doi.org/10.7554/eLife.07122.001 PMID:26159614
Detection of M-Sequences from Spike Sequence in Neuronal Networks
Directory of Open Access Journals (Sweden)
Yoshi Nishitani
2012-01-01
In circuit theory, it is well known that a linear feedback shift register (LFSR) circuit generates pseudorandom bit sequences (PRBS), including an M-sequence with the maximum period length. In this study, we tried to detect M-sequences, pseudorandom sequences generated by an LFSR circuit, in the time-series patterns of stimulated action potentials. Stimulated action potentials were recorded from dissociated cultures of hippocampal neurons grown on a multielectrode array. We found several M-sequences from a 3-stage LFSR circuit (M3). These results show the possibility of LFSR circuits, or their equivalents, being assembled in a neuronal network. However, since the M3 pattern was composed of only four spike intervals, the possibility of accidental detection was not zero. We therefore detected M-sequences in random spike sequences that were not generated by an LFSR circuit and compared the result with the number of M-sequences in the originally observed raster data. A significant difference was confirmed: a greater number of 3-stage M-sequences and their "0-1"-reversed versions occurred than would have been detected accidentally. This result suggests that some LFSR-equivalent circuits are assembled in neuronal networks.
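For reference, an M-sequence is the maximal-period output of a linear feedback shift register. Below is a short sketch of a 3-stage Fibonacci LFSR using the primitive polynomial x^3 + x + 1; the paper's M3 register may use a different primitive polynomial, and the tap/seed names here are illustrative:

```python
def lfsr_msequence(n_stages=3, taps=(0, 1), seed=1):
    """One full period of the M-sequence from a Fibonacci LFSR.
    With 3 stages and feedback from bit positions 0 and 1 (polynomial
    x^3 + x + 1), the period is 2**3 - 1 = 7 and the register visits
    every nonzero state exactly once."""
    state, out = seed, []
    for _ in range(2 ** n_stages - 1):
        fb = 0
        for t in taps:
            fb ^= (state >> t) & 1          # XOR the tapped bits
        out.append(state & 1)               # emit the low bit
        state = (state >> 1) | (fb << (n_stages - 1))
    return out
```

The defining property checked below, that every nonzero 3-bit window appears exactly once per period, is what distinguishes an M-sequence from an arbitrary PRBS.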
Model reduction of strong-weak neurons
Steven James Cox; Bosen Du; Danny Sorensen
2014-01-01
We consider neurons with large dendritic trees that are weakly excitable in the sense that back-propagating action potentials are severely attenuated as they travel from the small, strongly excitable, spike initiation zone. In previous work we have shown that the computational size of weakly excitable cell models may be reduced by two or more orders of magnitude, and that the size of strongly excitable models may be reduced by at least one order of magnitude, without sacrificing the spatio-tem...
Bill, Johannes; Buesing, Lars; Habenschuss, Stefan; Nessler, Bernhard; Maass, Wolfgang; Legenstein, Robert
2015-01-01
During the last decade, Bayesian probability theory has emerged as a framework in cognitive science and neuroscience for describing perception, reasoning and learning of mammals. However, our understanding of how probabilistic computations could be organized in the brain, and how the observed connectivity structure of cortical microcircuits supports these calculations, is rudimentary at best. In this study, we investigate statistical inference and self-organized learning in a spatially extended spiking network model, that accommodates both local competitive and large-scale associative aspects of neural information processing, under a unified Bayesian account. Specifically, we show how the spiking dynamics of a recurrent network with lateral excitation and local inhibition in response to distributed spiking input, can be understood as sampling from a variational posterior distribution of a well-defined implicit probabilistic model. This interpretation further permits a rigorous analytical treatment of experience-dependent plasticity on the network level. Using machine learning theory, we derive update rules for neuron and synapse parameters which equate with Hebbian synaptic and homeostatic intrinsic plasticity rules in a neural implementation. In computer simulations, we demonstrate that the interplay of these plasticity rules leads to the emergence of probabilistic local experts that form distributed assemblies of similarly tuned cells communicating through lateral excitatory connections. The resulting sparse distributed spike code of a well-adapted network carries compressed information on salient input features combined with prior experience on correlations among them. Our theory predicts that the emergence of such efficient representations benefits from network architectures in which the range of local inhibition matches the spatial extent of pyramidal cells that share common afferent input. PMID:26284370
Ba, Demba; Temereanca, Simona; Brown, Emery N
2014-01-01
Understanding how ensembles of neurons represent and transmit information in the patterns of their joint spiking activity is a fundamental question in computational neuroscience. At present, analyses of spiking activity from neuronal ensembles are limited because multivariate point process (MPP) models cannot represent simultaneous occurrences of spike events at an arbitrarily small time resolution. Solo recently reported a simultaneous-event multivariate point process (SEMPP) model to correct this key limitation. In this paper, we show how Solo's discrete-time formulation of the SEMPP model can be efficiently fit to ensemble neural spiking activity using a multinomial generalized linear model (mGLM). Unlike existing approximate procedures for fitting the discrete-time SEMPP model, the mGLM is an exact algorithm. The MPP time-rescaling theorem can be used to assess model goodness-of-fit. We also derive a new marked point-process (MkPP) representation of the SEMPP model that leads to new thinning and time-rescaling algorithms for simulating an SEMPP stochastic process. These algorithms are much simpler than multivariate extensions of algorithms for simulating a univariate point process, and could not be arrived at without the MkPP representation. We illustrate the versatility of the SEMPP model by analyzing neural spiking activity from pairs of simultaneously-recorded rat thalamic neurons stimulated by periodic whisker deflections, and by simulating SEMPP data. In the data analysis example, the SEMPP model demonstrates that whisker motion significantly modulates simultaneous spiking activity at the 1 ms time scale and that the stimulus effect is more than one order of magnitude greater for simultaneous activity compared with non-simultaneous activity. Together, the mGLM, the MPP time-rescaling theorem and the MkPP representation of the SEMPP model offer a theoretically sound, practical tool for measuring joint spiking propensity in a neuronal ensemble.
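The mGLM fitting procedure described above can be illustrated with a toy example. In the sketch below (all parameters and the single stimulus covariate are invented, not taken from the paper), each 1-ms bin of a two-neuron ensemble is treated as one of four multinomial outcomes (no spike, either neuron alone, or a simultaneous event), and the category weights are fitted by gradient ascent on the multinomial log-likelihood:

```python
import math, random

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

# Outcome categories per 1-ms bin for a pair of neurons:
# 0 = no spike, 1 = neuron A only, 2 = neuron B only, 3 = simultaneous A+B.
K = 4
random.seed(0)
true_w = [[0.0, 0.0], [1.0, -0.5], [0.8, 0.3], [0.5, 1.5]]  # per-category [bias, stim]

def category_probs(w, x):
    return softmax([wk[0] + wk[1] * x for wk in w])

# Simulate bins with a scalar stimulus covariate (e.g. whisker deflection phase).
data = []
for _ in range(2000):
    x = random.uniform(-1, 1)
    p = category_probs(true_w, x)
    r, c, y = random.random(), 0.0, 0
    for k in range(K):
        c += p[k]
        if r < c:
            y = k
            break
    data.append((x, y))

# Fit by gradient ascent on the multinomial log-likelihood (the mGLM is exact
# maximum likelihood for the discrete-time SEMPP model).
w = [[0.0, 0.0] for _ in range(K)]
lr = 0.05
for _ in range(150):
    grad = [[0.0, 0.0] for _ in range(K)]
    for x, y in data:
        p = category_probs(w, x)
        for k in range(K):
            err = (1.0 if k == y else 0.0) - p[k]
            grad[k][0] += err
            grad[k][1] += err * x
    for k in range(K):
        w[k][0] += lr * grad[k][0] / len(data)
        w[k][1] += lr * grad[k][1] / len(data)

p_joint = category_probs(w, 0.5)
print(p_joint)  # fitted probabilities of the four outcome categories at x = 0.5
```

Because simultaneous firing is its own category rather than a product of two independent point processes, the model can express joint spiking propensities that univariate MPP models cannot.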
Fiorillo, Christopher D; Kim, Jaekyung K; Hong, Su Z
2014-01-01
The conventional interpretation of spikes is from the perspective of an external observer with knowledge of a neuron's inputs and outputs who is ignorant of the contents of the "black box" that is the neuron. Here we consider a neuron to be an observer and we interpret spikes from the neuron's perspective. We propose both a descriptive hypothesis based on physics and logic, and a prescriptive hypothesis based on biological optimality. Our descriptive hypothesis is that a neuron's membrane excitability is "known" and the amplitude of a future excitatory postsynaptic conductance (EPSG) is "unknown". Therefore excitability is an expectation of EPSG amplitude and a spike is generated only when EPSG amplitude exceeds its expectation ("prediction error"). Our prescriptive hypothesis is that a diversity of synaptic inputs and voltage-regulated ion channels implements "predictive homeostasis", working to ensure that the expectation is accurate. The homeostatic ideal and optimal expectation would be achieved when an EPSP reaches precisely to spike threshold, so that spike output is exquisitely sensitive to small variations in EPSG input. To an external observer who knows neither EPSG amplitude nor membrane excitability, spikes would appear random if the neuron is making accurate predictions. We review experimental evidence that spike probabilities are indeed maintained near an average of 0.5 under natural conditions, and we suggest that the same principles may also explain why synaptic vesicle release appears to be "stochastic". Whereas the present hypothesis accords with principles of efficient coding dating back to Barlow (1961), it contradicts decades of assertions that neural activity is substantially "random" or "noisy". The apparent randomness is by design, and like many other examples of apparent randomness, it corresponds to the ignorance of external macroscopic observers about the detailed inner workings of a microscopic system.
Song, Dong; Chan, Rosa H M; Marmarelis, Vasilis Z; Hampson, Robert E; Deadwyler, Sam A; Berger, Theodore W
2007-01-01
A multiple-input, multiple-output nonlinear dynamic model of spike-train to spike-train transformations was previously formulated for hippocampal-cortical prostheses. This paper further describes the statistical methods for selecting significant inputs (self-terms) and interactions between inputs (cross-terms) of this Volterra kernel-based model. In our approach, model structure was determined by progressively adding self-terms and cross-terms using a forward stepwise model selection technique. Model coefficients were then pruned based on the Wald test. Results showed that the reduced kernel models, which contained far fewer coefficients than the full Volterra kernel model, gave good fits to novel data. These models can be used to analyze the functional interactions between neurons during behavior.
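The forward stepwise idea can be sketched in miniature. In the example below, ordinary linear regression stands in for the Volterra kernel expansion, the candidate terms and data are synthetic, and the Wald-test pruning step is omitted for brevity; at each round the candidate term that most reduces the residual sum of squares is added, stopping when the gain is negligible:

```python
import random

def solve(A, b):
    """Solve A x = b by Gaussian elimination (small dense systems)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n + 1):
                M[r][c] -= f * M[i][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def ols_sse(X, y, cols):
    """Least-squares fit of y on the selected columns; returns (coefs, SSE)."""
    A = [[sum(X[i][a] * X[i][b] for i in range(len(y))) for b in cols] for a in cols]
    rhs = [sum(X[i][a] * y[i] for i in range(len(y))) for a in cols]
    beta = solve(A, rhs)
    sse = sum((y[i] - sum(beta[j] * X[i][c] for j, c in enumerate(cols))) ** 2
              for i in range(len(y)))
    return beta, sse

# Toy data: y depends on terms 0 and 2 only (stand-ins for the significant
# self- and cross-terms of a kernel expansion).
random.seed(1)
X = [[random.gauss(0, 1) for _ in range(4)] for _ in range(300)]
y = [2.0 * r[0] + 1.5 * r[2] + random.gauss(0, 0.3) for r in X]

# Forward stepwise: add the candidate term giving the largest SSE reduction.
chosen, remaining = [], [0, 1, 2, 3]
best_sse = sum(v * v for v in y)
for _ in range(4):
    trials = [(ols_sse(X, y, chosen + [c])[1], c) for c in remaining]
    sse, c = min(trials)
    if best_sse - sse < 0.01 * best_sse:  # stop when the gain is negligible
        break
    chosen.append(c)
    remaining.remove(c)
    best_sse = sse

print(sorted(chosen))  # indices of retained terms (should include 0 and 2)
```

In the paper's setting the same loop would run over kernel basis coefficients, followed by Wald-test pruning of individually insignificant coefficients.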
Efficient transmission of subthreshold signals in complex networks of spiking neurons.
Directory of Open Access Journals (Sweden)
Joaquin J Torres
We investigate the efficient transmission and processing of weak, subthreshold signals in a realistic neural medium in the presence of different levels of underlying noise. Assuming Hebbian weights for maximal synaptic conductances (which naturally balances the network between excitatory and inhibitory synapses) and considering short-term synaptic plasticity affecting such conductances, we found different dynamic phases in the system. These include a memory phase, in which populations of neurons remain synchronized; an oscillatory phase, in which transitions between different synchronized populations of neurons appear; and an asynchronous or noisy phase. When a weak stimulus was applied to each neuron while the level of noise in the medium was increased, we found efficient transmission of such stimuli around the transition and critical points separating the different phases, at well-defined levels of stochasticity in the system. We showed that this intriguing phenomenon is quite robust, as it occurs in different situations including several types of synaptic plasticity, different types and numbers of stored patterns, and diverse network topologies, namely diluted networks and complex topologies such as scale-free and small-world networks. We conclude that the robustness of the phenomenon in different realistic scenarios, including spiking neurons, short-term synaptic plasticity and complex network topologies, makes it very likely that it also occurs in actual neural systems, as recent psychophysical experiments suggest.
Hentall, I D; Zorman, G; Kansky, S; Fields, H L
1984-05-01
This report describes how the threshold for extracellular electrical stimulation of cell bodies in the rat's rostromedial medulla depends on the distance to the stimulating electrode. A monopolar microelectrode both delivered current pulses near medullospinal neurons and, after decay of the stimulus artifact, detected whether an orthodromic spike had occurred by collision of that spike with a suitably timed antidromic spike initiated at the thoracic spinal cord. The liminal current and the height of antidromic spikes were noted at a series of vertical electrode positions. Regression analysis was performed to determine whether threshold and the inverse of peak-to-peak spike height varied more with the radial distance or with its square. The square relationship provided a much better fit for threshold and a marginally better fit for the inverse of spike height. The spatial decline in excitability (K2) averaged 859 microA/mm2, falling within the range of values found for fibers and cell bodies in other studies. The constant of spatial decline in spike height (C2), in millivolts per square millimeter, was positively correlated with K2. Both C2 and K2 were negatively correlated with conduction velocity. From threshold-distance curves fitted by regression analysis, the mean separation of the sites of spike maxima and threshold minima along each electrode path was 16 microns; the estimated distances from these sites to, respectively, the loci of spike generation and spike excitation were positively correlated and similar. The variation of C2 and K2 with conduction velocity may be due either to an influence of the size and shape of the dendritic tree on the spatial decrement of excitability and spike height, or to a confounding in the studied equations of the space-independent effect of cell-body size on spike height and excitability.
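The regression comparison at the heart of this study (does threshold grow with r or with r squared?) can be sketched as follows. The data are synthetic, generated from the square law with the reported mean K2 of 859 microA/mm2 and an arbitrary noise level:

```python
import random

def fit_line(x, y):
    """Ordinary least squares for y = a + b*x; returns (a, b, SSE)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    a = my - b * mx
    sse = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    return a, b, sse

# Synthetic current-distance data following the square law I = I0 + K2 * r^2.
random.seed(2)
r = [0.05 * i for i in range(1, 21)]                            # distance, mm
I = [10.0 + 859.0 * ri ** 2 + random.gauss(0, 5) for ri in r]   # threshold, uA

# Regress threshold on r (linear law) and on r^2 (square law), compare fits.
_, _, sse_linear = fit_line(r, I)
_, k2_hat, sse_square = fit_line([ri ** 2 for ri in r], I)

print(sse_square < sse_linear, round(k2_hat))  # square law fits better
```

The slope of the quadratic regression recovers the K2 constant, mirroring how the study compared the two candidate threshold-distance relationships.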
Sun, Qian; Srinivas, Kalyan V; Sotayo, Alaba; Siegelbaum, Steven A
2014-01-01
Synaptic inputs from different brain areas are often targeted to distinct regions of neuronal dendritic arbors. Inputs to proximal dendrites usually produce large somatic EPSPs that efficiently trigger action potential (AP) output, whereas inputs to distal dendrites are greatly attenuated and may largely modulate AP output. In contrast to most other cortical and hippocampal neurons, hippocampal CA2 pyramidal neurons show unusually strong excitation by their distal dendritic inputs from entorhinal cortex (EC). In this study, we demonstrate that the ability of these EC inputs to drive CA2 AP output requires the firing of local dendritic Na(+) spikes. Furthermore, we find that CA2 dendritic geometry contributes to the efficient coupling of dendritic Na(+) spikes to AP output. These results provide a striking example of how dendritic spikes enable direct cortical inputs to overcome unfavorable distal synaptic locale to trigger axonal AP output and thereby enable efficient cortico-hippocampal information flow.
Spike-timing computation properties of a feed-forward neural network model
Directory of Open Access Journals (Sweden)
Drew Benjamin Sinha
2014-01-01
Brain function is characterized by dynamical interactions among networks of neurons. These interactions are mediated by network topology at many scales, ranging from microcircuits to brain areas. Understanding how networks operate can be aided by understanding how the transformation of inputs depends upon network connectivity patterns, e.g., serial and parallel pathways. To tractably determine how single synapses or groups of synapses in such pathways shape transformations, we modeled feed-forward networks of 7-22 neurons in which synaptic strength changed according to a spike-timing dependent plasticity rule. We investigated how activity varied when dynamics were perturbed by an activity-dependent electrical stimulation protocol (spike-triggered stimulation; STS) in networks of different topologies and background input correlations. STS can successfully reorganize functional brain networks in vivo, but with a variability in effectiveness that may derive partially from the underlying network topology. In a simulated network with a single disynaptic pathway driven by uncorrelated background activity, structured spike-timing relationships between polysynaptically connected neurons were not observed. When background activity was correlated or parallel disynaptic pathways were added, however, robust polysynaptic spike-timing relationships were observed, and application of STS yielded predictable changes in synaptic strengths and spike-timing relationships. These observations suggest that precise input-related or topologically induced temporal relationships in network activity are necessary for polysynaptic signal propagation. Such constraints on polysynaptic computation suggest potential roles for higher-order topological structure in network organization, such as maintaining polysynaptic correlation in the face of relatively weak synapses.
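The spike-timing dependent plasticity rule used in models of this kind is typically the pair-based exponential window. A minimal sketch, with generic textbook amplitudes and time constants rather than the paper's parameters:

```python
import math

def stdp(dt, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pair-based STDP: dt = t_post - t_pre (ms). Pre-before-post potentiates,
    post-before-pre depresses, both with exponential time windows."""
    if dt > 0:
        return a_plus * math.exp(-dt / tau)
    return -a_minus * math.exp(dt / tau)

def update_weight(w, pre_times, post_times, w_min=0.0, w_max=1.0):
    """Apply the rule over all pre/post spike pairs, clipping to bounds."""
    for tp in pre_times:
        for tq in post_times:
            w += stdp(tq - tp)
    return max(w_min, min(w_max, w))

# Correlated pre-before-post timing, as produced by spike-triggered
# stimulation, should strengthen the synapse.
w_before = 0.5
w_after = update_weight(w_before, pre_times=[10, 50, 90], post_times=[15, 55, 95])
print(w_after > w_before)  # → True
```

Because STS delivers stimulation shortly after recorded spikes, it systematically generates the short positive dt values that this window potentiates, which is the mechanism the simulations above probe.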
Power-Law Dynamics of Membrane Conductances Increase Spiking Diversity in a Hodgkin-Huxley Model.
Teka, Wondimu; Stockton, David; Santamaria, Fidel
2016-03-01
We studied the effects of non-Markovian, power-law voltage-dependent conductances on the generation of action potentials and spiking patterns in a Hodgkin-Huxley model. To implement slow-adapting power-law dynamics of the gating variables of the potassium (n) and sodium (m and h) conductances, we used fractional derivatives of order η≤1. The fractional derivatives were used to solve the kinetic equations of each gate. We systematically classified the properties of each gate as a function of η. We then tested whether the full model could generate action potentials with the different power-law behaving gates. Finally, we studied the patterns of action potentials that emerged in each case. Our results show that the model produces a wide range of action potential shapes and spiking patterns in response to constant current stimulation as a function of η. In comparison with the classical model, the action potential shapes for power-law behaving potassium conductance (n gate) showed a longer peak and shallow hyperpolarization; for power-law activation of the sodium conductance (m gate), the action potentials had a sharp rise time; and for power-law inactivation of the sodium conductance (h gate), the spikes had a wider peak that, for low values of η, replicated pituitary- and cardiac-type action potentials. With all physiological parameters fixed, a wide range of spiking patterns emerged as a function of the value of the constant input current and η, such as square-wave bursting, mixed-mode oscillations, and pseudo-plateau potentials. Our analyses show that the intrinsic memory trace of the fractional derivative provides a negative feedback mechanism between the voltage trace and the activity of the power-law behaving gate variable. As a consequence, power-law behaving conductances result in an increase in the number of spiking patterns a neuron can generate and, we propose, expand the computational capacity of the neuron.
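The fractional dynamics of a single gating variable can be sketched with the explicit Grünwald-Letnikov discretization, which carries the full history of the gate (the "memory trace" discussed above). The kinetic parameters below are illustrative, not those of the Hodgkin-Huxley gates:

```python
def gl_coeffs(eta, n):
    """Grünwald-Letnikov binomial coefficients c_j = (-1)^j * C(eta, j)."""
    c = [1.0]
    for j in range(1, n + 1):
        c.append(c[-1] * (1.0 - (eta + 1.0) / j))
    return c

def simulate_gate(eta, n_inf=0.8, tau=5.0, n0=0.1, h=0.1, steps=600):
    """Integrate d^eta n / dt^eta = (n_inf - n) / tau with the explicit
    Grünwald-Letnikov scheme; the sum over past values is the memory term."""
    c = gl_coeffs(eta, steps)
    n = [n0]
    for k in range(1, steps + 1):
        drive = (n_inf - n[-1]) / tau
        memory = sum(c[j] * n[k - j] for j in range(1, k + 1))
        n.append(h ** eta * drive - memory)
    return n

classic = simulate_gate(eta=1.0)  # eta = 1 recovers the ordinary kinetics
slow = simulate_gate(eta=0.6)     # eta < 1 gives power-law (non-Markovian) relaxation
print(round(classic[-1], 3), round(slow[-1], 3))
```

For eta = 1 the coefficients collapse to forward Euler (c1 = -1, all later c_j = 0), while for eta < 1 every past value contributes, so the gate approaches its steady state with a slow power-law tail instead of exponentially.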
Kv3.4 subunits enhance the repolarizing efficiency of Kv3.1 channels in fast-spiking neurons.
Baranauskas, Gytis; Tkatch, Tatiana; Nagata, Keiichi; Yeh, Jay Z; Surmeier, D James
2003-03-01
Neurons with the capacity to discharge at high rates--'fast-spiking' (FS) neurons--are critical participants in central motor and sensory circuits. It is widely accepted that K+ channels with Kv3.1 or Kv3.2 subunits underlie fast, delayed-rectifier (DR) currents that endow neurons with this FS ability. Expression of these subunits in heterologous systems, however, yields channels that open at more depolarized potentials than do native Kv3 family channels, suggesting that they differ. One possibility is that native channels incorporate a subunit that modifies gating. Molecular, electrophysiological and pharmacological studies reported here suggest that a splice variant of the Kv3.4 subunit coassembles with Kv3.1 subunits in rat brain FS neurons. Coassembly enhances the spike repolarizing efficiency of the channels, thereby reducing spike duration and enabling higher repetitive spike rates. These results suggest that manipulation of Kv3.4 subunit expression could be a useful means of controlling the dynamic range of FS neurons.
A hidden Markov model for decoding and the analysis of replay in spike trains.
Box, Marc; Jones, Matt W; Whiteley, Nick
2016-12-01
We present a hidden Markov model that describes variation in an animal's position associated with varying levels of activity in action potential spike trains of individual place cell neurons. The model incorporates a coarse-graining of position, which we find to be a more parsimonious description of the system than other models. We use a sequential Monte Carlo algorithm for Bayesian inference of model parameters, including the state space dimension, and we explain how to estimate position from spike train observations (decoding). We obtain greater accuracy than other methods under conditions of high temporal resolution and small neuronal sample size. We also present a novel, model-based approach to the study of replay: the expression of spike train activity related to behaviour during times of motionlessness or sleep, thought to be integral to the consolidation of long-term memories. We demonstrate how we can detect the time, information content and compression rate of replay events in simulated and real hippocampal data recorded from rats in two different environments, and verify the correlation between the times of detected replay events and of sharp wave/ripples in the local field potential.
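The decoding step can be illustrated with a small forward-filtering example over coarse-grained position states. The place-cell tuning curves, random-walk transition matrix and spike counts below are invented for illustration, and the paper's sequential Monte Carlo inference of model parameters is not reproduced:

```python
import math

# Coarse-grained track with S position states; each of two place cells has a
# preferred state (hypothetical tuning curves, for illustration only).
S = 5
rates = [[8.0 if s == pref else 0.5 for s in range(S)] for pref in (1, 3)]

# Random-walk transitions: mostly stay put, sometimes step to a neighbour.
trans = [[0.8 if s == t else (0.1 if abs(s - t) == 1 else 0.0) for t in range(S)]
         for s in range(S)]
for row in trans:
    z = sum(row)
    for t in range(S):
        row[t] /= z

def poisson_logpmf(k, lam):
    return k * math.log(lam) - lam - math.lgamma(k + 1)

def forward_decode(spike_counts, dt=0.1):
    """HMM forward filter: P(position_t | spikes_1..t) from binned counts,
    returning the most probable state per time bin."""
    belief = [1.0 / S] * S
    path = []
    for counts in spike_counts:
        # Predict step: push the belief through the transition matrix.
        pred = [sum(belief[s] * trans[s][t] for s in range(S)) for t in range(S)]
        # Update step: weight by the Poisson spike-count likelihood.
        post = [pred[t] * math.exp(sum(poisson_logpmf(c, r[t] * dt)
                                       for c, r in zip(counts, rates)))
                for t in range(S)]
        z = sum(post)
        belief = [p / z for p in post]
        path.append(max(range(S), key=lambda t: belief[t]))
    return path

# The animal sits at state 1, then moves to state 3: cell 0 fires first, then cell 1.
obs = [(2, 0), (1, 0), (2, 0), (0, 1), (0, 2), (0, 2)]
print(forward_decode(obs))  # decoded most-probable position per time bin
```

Replay detection in the paper applies the same machinery to activity recorded during rest, asking whether the decoded trajectory sweeps through the environment faster than behaviour allows.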
Liston, Adam D; Salek-Haddadi, Afraim; Kiebel, Stefan J; Hamandi, Khalid; Turner, Robert; Lemieux, Louis
2004-12-01
Previously, an analysis of activations observed in a patient with idiopathic generalized epilepsy using electroencephalogram-correlated functional magnetic resonance imaging (MRI) during runs of 3-Hz generalized spike-wave discharge (GSWD) was presented by Salek-Haddadi et al. Time-locked, bilateral, thalamic blood oxygenation level-dependent increases were reported to be accompanied by widespread, symmetric, cortical deactivation with a frontal maximum. In light of recent investigations into MRI detection of the magnetic field perturbations caused by neuronal current loops during depolarization, we revisited the analysis of the data of Salek-Haddadi et al. as a preliminary search for a neuroelectric signal. We modeled the MRI response as the sum of a fast signal and a slower signal and demonstrated significant MRI activity at a time scale of the order of 30 ms associated with GSWDs. Further work is necessary before firm conclusions may be drawn about the nature of this signal.
Greenwood, Priscilla E
2016-01-01
This book describes a large number of open problems in the theory of stochastic neural systems, with the aim of enticing probabilists to work on them. This includes problems arising from stochastic models of individual neurons as well as those arising from stochastic models of the activities of small and large networks of interconnected neurons. The necessary neuroscience background to these problems is outlined within the text, so readers can grasp the context in which they arise. This book will be useful for graduate students and instructors providing material and references for applying probability to stochastic neuron modeling. Methods and results are presented, but the emphasis is on questions where additional stochastic analysis may contribute neuroscience insight. An extensive bibliography is included. Dr. Priscilla E. Greenwood is a Professor Emerita in the Department of Mathematics at the University of British Columbia. Dr. Lawrence M. Ward is a Professor in the Department of Psychology and the Brain...
Merolla, Paul A; Arthur, John V; Alvarez-Icaza, Rodrigo; Cassidy, Andrew S; Sawada, Jun; Akopyan, Filipp; Jackson, Bryan L; Imam, Nabil; Guo, Chen; Nakamura, Yutaka; Brezzo, Bernard; Vo, Ivan; Esser, Steven K; Appuswamy, Rathinakumar; Taba, Brian; Amir, Arnon; Flickner, Myron D; Risk, William P; Manohar, Rajit; Modha, Dharmendra S
2014-08-01
Inspired by the brain's structure, we have developed an efficient, scalable, and flexible non-von Neumann architecture that leverages contemporary silicon technology. To demonstrate, we built a 5.4-billion-transistor chip with 4096 neurosynaptic cores interconnected via an intrachip network that integrates 1 million programmable spiking neurons and 256 million configurable synapses. Chips can be tiled in two dimensions via an interchip communication interface, seamlessly scaling the architecture to a cortexlike sheet of arbitrary size. The architecture is well suited to many applications that use complex neural networks in real time, for example, multiobject detection and classification. With 400-pixel-by-240-pixel video input at 30 frames per second, the chip consumes 63 milliwatts.
Borges, R. R.; Borges, F. S.; Lameu, E. L.; Batista, A. M.; Iarosz, K. C.; Caldas, I. L.; Viana, R. L.; Sanjuán, M. A. F.
2016-05-01
In this paper, we study the effects of spike timing-dependent plasticity on synchronisation in a network of Hodgkin-Huxley neurons. Neuronal plasticity is the flexible capacity of a neuron and its network to change, temporarily or permanently, their biochemical, physiological, and morphological characteristics in order to adapt to the environment. Regarding plasticity, we consider Hebbian rules, specifically spike timing-dependent plasticity (STDP), and with regard to the network, we consider connections that are randomly distributed. We analyse synchronisation and desynchronisation as functions of the input level and the probability of connections. Moreover, we verify that the transition to synchronisation depends on the neuronal network architecture and on the external perturbation level.
Fleidervish, I A; Friedman, A; Gutnick, M J
1996-05-15
1. Spike adaptation of neocortical pyramidal neurones was studied with sharp electrode recordings in slices of guinea-pig parietal cortex and whole-cell patch recordings of mouse somatosensory cortex. Repetitive intracellular stimulation with 1 s depolarizing pulses delivered at intervals of [...] pS and an extrapolated reversal potential of 127 +/- 6 mV above resting potential (Vr) (mean +/- S.E.M.; n = 5). Vr was estimated at -72 +/- 3 mV (n = 8), based on the voltage dependence of the steady-state inactivation (h infinity) curve. 5. Slow inactivation (SI) of Na+ channels had a mono-exponential onset with tau on between 0.86 and 2.33 s (n = 3). Steady-state SI was half-maximal at -43.8 mV and had a slope of 14.4 mV (e-fold)-1. Recovery from a 2 s conditioning pulse was bi-exponential and voltage dependent; the slow time constant ranged between 0.45 and 2.5 s at voltages between -128 and -68 mV. 6. The experimentally determined parameters of SI were adequate to simulate slow cumulative adaptation of spike firing in a single-compartment computer model. 7. Persistent Na+ current, which was recorded in whole-cell configuration during slow voltage ramps (35 mV s-1), also underwent pronounced SI, which was apparent when the ramp was preceded by a prolonged depolarizing pulse.
Cyr, André; Boukadoum, Mounir; Thériault, Frédéric
2014-01-01
In this paper, we investigate the operant conditioning (OC) learning process within a bio-inspired paradigm, using artificial spiking neural networks (ASNN) to act as robot brain controllers. In biological agents, OC results in behavioral changes learned from the consequences of previous actions, based on progressive prediction adjustment from rewarding or punishing signals. In a neurorobotics context, virtual and physical autonomous robots may benefit from a similar learning skill when facing unknown and unsupervised environments. In this work, we demonstrate that a simple invariant micro-circuit can sustain OC in multiple learning scenarios. The motivation for this new OC implementation model stems from the relatively complex alternatives that have been described in the computational literature and from recent advances in neurobiology. Our elementary kernel includes only a few crucial neurons and synaptic links, and builds on the integration of habituation and spike-timing dependent plasticity as learning rules. Using several tasks of incremental complexity, our results show that a minimal neural component set is sufficient to realize many OC procedures. Hence, with the proposed OC module, designing learning tasks with an ASNN and a bio-inspired robot context leads to simpler neural architectures for achieving complex behaviors.
A compound memristive synapse model for statistical learning through STDP in spiking neural networks
Directory of Open Access Journals (Sweden)
Johannes eBill
2014-12-01
Memristors have recently emerged as promising circuit elements to mimic the function of biological synapses in neuromorphic computing. The fabrication of reliable nanoscale memristive synapses that feature continuous conductance changes based on the timing of pre- and postsynaptic spikes has, however, turned out to be challenging. In this article, we propose an alternative approach, the compound memristive synapse, that circumvents this problem by the use of memristors with binary memristive states. A compound memristive synapse employs multiple bistable memristors in parallel to jointly form one synapse, thereby providing a spectrum of synaptic efficacies. We investigate the computational implications of synaptic plasticity in the compound synapse by integrating the recently observed phenomenon of stochastic filament formation into an abstract model of stochastic switching. Using this abstract model, we first show how standard pulsing schemes give rise to spike-timing dependent plasticity (STDP) with a stabilizing weight dependence in compound synapses. In a next step, we study unsupervised learning with compound synapses in networks of spiking neurons organized in a winner-take-all architecture. Our theoretical analysis reveals that compound-synapse STDP implements generalized Expectation-Maximization in the spiking network. Specifically, the emergent synapse configuration represents the most salient features of the input distribution in a Mixture-of-Gaussians generative model. Furthermore, the network's spike response to spiking input streams approximates a well-defined Bayesian posterior distribution. We show in computer simulations how such networks learn to represent high-dimensional distributions over images of handwritten digits with high fidelity, even in presence of substantial device variations and under severe noise conditions. Therefore, the compound memristive synapse may provide a synaptic design principle for future neuromorphic architectures.
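A minimal sketch of the compound-synapse idea, with invented switching probabilities: the synaptic efficacy is the fraction of devices in the ON state, and stochastic binary switching under pulsing yields graded weight changes at the compound level:

```python
import random

class CompoundSynapse:
    """A synapse built from M bistable memristors in parallel; its efficacy is
    the fraction of devices in the ON state (a sketch of the compound model)."""
    def __init__(self, m=20, seed=0):
        self.rng = random.Random(seed)
        self.state = [self.rng.random() < 0.5 for _ in range(m)]

    @property
    def weight(self):
        return sum(self.state) / len(self.state)

    def pulse(self, potentiate, p_on=0.1, p_off=0.1):
        """One STDP pulse: each OFF device switches ON with probability p_on
        under potentiation; each ON device switches OFF with probability p_off
        under depression."""
        for i, on in enumerate(self.state):
            if potentiate and not on and self.rng.random() < p_on:
                self.state[i] = True
            elif not potentiate and on and self.rng.random() < p_off:
                self.state[i] = False

syn = CompoundSynapse()
w0 = syn.weight
for _ in range(30):            # repeated pre-before-post pairings
    syn.pulse(potentiate=True)
print(w0, syn.weight)          # compound weight drifts upward under potentiation
```

Note the stabilizing weight dependence falls out of the construction: the closer the weight is to 1, the fewer OFF devices remain to switch, so potentiation slows down automatically (and symmetrically for depression near 0).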
Directory of Open Access Journals (Sweden)
Yasuhiro Tsubo
The brain is thought to use a relatively small amount of energy for its efficient information processing. Under a severe restriction on energy consumption, the maximization of mutual information (MMI), which is adequate for designing artificial processing machines, may not suit the brain. The MMI attempts to send information as accurately as possible, which usually requires a sufficient energy supply for establishing clearly discretized communication bands. Here, we derive an alternative hypothesis for the neural code from neuronal activities recorded juxtacellularly in the sensorimotor cortex of behaving rats. Our hypothesis states that in vivo cortical neurons maximize the entropy of neuronal firing under two constraints: one limiting the energy consumption (as assumed previously) and one restricting the uncertainty in output spike sequences at a given firing rate. Thus, the conditional maximization of firing-rate entropy (CMFE) solves a tradeoff between energy cost and noise in the neuronal response. In short, the CMFE sends a rich variety of information through broader communication bands (i.e., widely distributed firing rates) at the cost of accuracy. We demonstrate that the CMFE is reflected in the long-tailed, typically power-law, distributions of inter-spike intervals obtained for the majority of recorded neurons. In other words, the power-law tails are more consistent with the CMFE than with the MMI. Thus, we propose a mathematical principle by which cortical neurons may represent information about synaptic input in their output spike trains.
Directory of Open Access Journals (Sweden)
Wilten eNicola
2016-02-01
A fundamental question in computational neuroscience is how to connect a network of spiking neurons to produce desired macroscopic or mean-field dynamics. One possible approach is through the Neural Engineering Framework (NEF). The NEF approach requires quantities called decoders, which are solved through an optimization problem requiring large matrix inversion. Here, we show how a decoder can be obtained analytically for type I and certain type II firing rates as a function of the heterogeneity of its associated neuron. These decoders generate approximants for functions that converge to the desired function in mean-squared error like 1/N, where N is the number of neurons in the network. We refer to these decoders as scale-invariant decoders due to their structure. These decoders generate weights for a network of neurons through the NEF formula for weights. These weights force the spiking network to have arbitrary and prescribed mean-field dynamics. The weights generated with scale-invariant decoders all lie on low-dimensional hypersurfaces asymptotically. We demonstrate the applicability of these scale-invariant decoders and weight surfaces by constructing networks of spiking theta neurons that replicate the dynamics of various well-known dynamical systems, such as the neural integrator, the Van der Pol system and the Lorenz system. As these decoders are analytically determined and non-unique, the weights are also analytically determined and non-unique. We discuss the implications for measured weights of neuronal networks.
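The optimization problem that the scale-invariant decoders replace analytically is a regularised least-squares solve over the neurons' rate responses. A sketch for rate neurons, with arbitrary tuning-curve parameters and ridge term (not the paper's setup):

```python
import random

def solve(A, b):
    """Gaussian elimination for the small normal-equations system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n + 1):
                M[r][c] -= f * M[i][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

# Heterogeneous rectified-linear "type I" rate neurons tuned over x in [-1, 1].
random.seed(3)
N = 50
gains = [random.uniform(0.5, 2.0) for _ in range(N)]
biases = [random.uniform(-1.0, 1.0) for _ in range(N)]
encoders = [random.choice((-1.0, 1.0)) for _ in range(N)]

def rate(i, x):
    return max(0.0, gains[i] * (encoders[i] * x - biases[i]))

# NEF-style decoders: least-squares weights d with sum_i d_i a_i(x) ~ x,
# found by solving the (ridge-regularised) normal equations.
xs = [-1.0 + 0.05 * k for k in range(41)]
A = [[rate(i, x) for i in range(N)] for x in xs]
gram = [[sum(A[k][i] * A[k][j] for k in range(len(xs))) + (0.1 if i == j else 0.0)
         for j in range(N)] for i in range(N)]
rhs = [sum(A[k][i] * xs[k] for k in range(len(xs))) for i in range(N)]
d = solve(gram, rhs)

xhat = [sum(d[i] * A[k][i] for i in range(N)) for k in range(len(xs))]
err = max(abs(a - b) for a, b in zip(xhat, xs))
print(round(err, 3))  # worst-case decoding error of the rate-based readout
```

The paper's contribution is precisely to avoid this N-by-N solve: the scale-invariant decoders are written down analytically per neuron, yet still achieve the 1/N mean-squared-error convergence.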
Bult, R.; Schuling, F. H.; Mastebroek, H. A. K.
1991-01-01
Long-term extracellular recordings from a spiking, movement-sensitive giant neuron (H1) in the third optic ganglion of the blowfly Calliphora vicina (L.) revealed periodic endogenous sensitivity fluctuations. The sensitivity changes showed properties typical of an endogenous circadian rhythm. This
A model of hippocampal spiking responses to items during learning of a context-dependent task
Directory of Open Access Journals (Sweden)
Florian Raudies
2014-09-01
Full Text Available Single unit recordings in the rat hippocampus have demonstrated shifts in the specificity of spiking activity during learning of a contextual item-reward association task. In this task, rats received reward for responding to different items dependent upon the context an item appeared in, but not dependent upon the location an item appeared at. Initially, neurons in the rat hippocampus primarily show firing based on place, but as the rat learns the task this firing becomes more selective for items. We simulated this effect using a simple circuit model with discrete inputs driving spiking activity representing place and item, followed sequentially by a discrete representation of the motor actions involving a response to an item (digging for food) or the movement to a different item (moving to a different pot for food). We implemented spiking replay in the network representing neural activity observed during sharp-wave ripple events, and modified synaptic connections based on a simple representation of spike-timing dependent synaptic plasticity. This simple network was able to consistently learn the context-dependent responses, and transitioned from dominant coding of place to a gradual increase in specificity to items, consistent with analysis of the experimental data. In addition, the model showed an increase in specificity toward context. The increase of selectivity in the model is accompanied by an increase in binariness of the synaptic weights for cells that are part of the functional network.
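The spike-timing dependent plasticity component can be sketched in its simplest pair-based form; the amplitudes and time constant below are generic textbook values, not those of this model:

```python
import math

def stdp_dw(t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pair-based STDP: potentiate when the presynaptic spike precedes the
    postsynaptic one, depress otherwise; the change decays exponentially
    with the spike-time difference (times in ms)."""
    dt = t_post - t_pre
    if dt > 0:
        return a_plus * math.exp(-dt / tau)    # pre before post: potentiation
    return -a_minus * math.exp(dt / tau)       # post before (or with) pre: depression

print(stdp_dw(10.0, 15.0))   # pre leads post by 5 ms: positive weight change
print(stdp_dw(15.0, 10.0))   # post leads pre by 5 ms: negative weight change
```

During simulated replay events, updates of this form applied to each pre/post spike pair drive the weight changes described in the abstract.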
Dissecting the Phase Response of a Model Bursting Neuron
Sherwood, William Erik
2009-01-01
We investigate the phase response properties of the Hindmarsh-Rose model of neuronal bursting using burst phase response curves (BPRCs) computed with an infinitesimal perturbation approximation and by direct simulation of synaptic input. The resulting BPRCs have a significantly more complicated structure than the usual Type I and Type II PRCs of spiking neuronal models, and they exhibit highly timing-sensitive changes in the number of spikes per burst that lead to large magnitude phase responses. We use fast-slow dissection and isochron calculations to analyze the phase response dynamics in both weak and strong perturbation regimes.
Lopour, Beth A.; Staba, Richard J.; Stern, John M.; Fried, Itzhak; Ringach, Dario L.
2016-04-01
Objective. Quantifying the relationship between microelectrode-recorded multi-unit activity (MUA) and local field potentials (LFPs) in distinct brain regions can provide detailed information on the extent of functional connectivity in spatially widespread networks. These methods are common in studies of cognition using non-human animal models, but are rare in humans. Here we applied a neuronal spike-triggered impulse response to electrophysiological recordings from the human epileptic brain for the first time, and we evaluate functional connectivity in relation to brain areas supporting the generation of seizures. Approach. Broadband interictal electrophysiological data were recorded from microwires adapted to clinical depth electrodes that were implanted bilaterally using stereotactic techniques in six presurgical patients with medically refractory epilepsy. MUA and LFPs were isolated in each microwire, and we calculated the impulse response between the MUA on one microwire and the LFPs on a second microwire for all possible MUA/LFP pairs. Results were compared to clinical seizure localization, including sites of seizure onset and interictal epileptiform discharges. Main results. We detected significant interictal long-range functional connections in each subject, in some cases across hemispheres. Results were consistent between two independent datasets, and the timing and location of significant impulse responses reflected anatomical connectivity. However, within individual subjects, the spatial distribution of impulse responses was unique. In two subjects with clear seizure localization and successful surgery, the epileptogenic zone was associated with significant impulse responses. Significance. The results suggest that the spike-triggered impulse response can provide valuable information about the neuronal networks that contribute to seizures using only interictal data. This technique will enable testing of specific hypotheses regarding functional connectivity
Synaptic channel model including effects of spike width variation
2015-01-01
Hamideh Ramezani and Ozgur B. Akan, Next-generation and Wireless Communications Laboratory (NWCL), Department of Electrical and Electronics Engineering, Koc University, Istanbul, Turkey. ABSTRACT: An accu...
Experimental and modelling investigation of surface EMG spike analysis.
Gabriel, David A; Christie, Anita; Inglis, J Greig; Kamen, Gary
2011-05-01
A pattern classification method based on five measures extracted from the surface electromyographic (sEMG) signal is used to provide a unique characterization of the interference pattern for different motor unit behaviours. This study investigated the sensitivity of the five sEMG measures during the force gradation process. Tissue and electrode filtering effects were further evaluated using an sEMG model. Subjects (N=8) performed isometric elbow flexion contractions from 0 to 100% MVC. The sEMG signals from the biceps brachii were recorded simultaneously with force. The basic building block of the sEMG model was the detection of single fibre action potentials (SFAPs) through a homogeneous, equivalent isotropic, infinite volume conduction medium. The SFAPs were summed to generate single motor unit action potentials. The physiologic properties from a well-known muscle model and motor unit recruitment and firing rate schemes were combined to generate synthetic sEMG signals. The following pattern classification measures were calculated: mean spike amplitude, mean spike frequency, mean spike slope, mean spike duration, and the mean number of peaks per spike. Root-mean-square amplitude and mean power frequency were also calculated. Taken together, the experimental data and modelling analysis showed that below 50% MVC, the pattern classification measures were more sensitive to changes in force than traditional time and frequency measures. However, there are additional limitations associated with electrode distance from the source that must be explored further. Future experimental work should ensure that the inter-electrode distance is no greater than 1 cm to mitigate the effects of tissue filtering.
Hahnloser, Richard H R; Wang, Claude Z-H; Nager, Aymeric; Naie, Katja
2008-05-07
In mammals, the thalamus plays important roles for cortical processing, such as relay of sensory information and induction of rhythmical firing during sleep. In neurons of the avian cerebrum, in analogy with cortical up and down states, complex patterns of regular-spiking and dense-bursting modes are frequently observed during sleep. However, the roles of thalamic inputs for shaping these firing modes are largely unknown. A suspected key player is the avian thalamic nucleus uvaeformis (Uva). Uva is innervated by polysensory input, receives indirect cerebral feedback via the midbrain, and projects to the cerebrum via two distinct pathways. Using pharmacological manipulation, electrical stimulation, and extracellular recordings of Uva projection neurons, we study the involvement of Uva in zebra finches for the generation of spontaneous activity and auditory responses in premotor area HVC (used as a proper name) and the downstream robust nucleus of the arcopallium (RA). In awake and sleeping birds, we find that single Uva spikes suppress and spike bursts enhance spontaneous and auditory-evoked bursts in HVC and RA neurons. Strong burst suppression is mediated mainly via tonically firing HVC-projecting Uva neurons, whereas a fast burst drive is mediated indirectly via Uva neurons projecting to the nucleus interface of the nidopallium. Our results reveal that cerebral sleep-burst epochs and arousal-related burst suppression are both shaped by sophisticated polysynaptic thalamic mechanisms.
Sadeh, Sadra; Rotter, Stefan
2015-01-01
The neuronal mechanisms underlying the emergence of orientation selectivity in the primary visual cortex of mammals are still elusive. In rodents, visual neurons show highly selective responses to oriented stimuli, but neighboring neurons do not necessarily have similar preferences. Instead of a smooth map, one observes a salt-and-pepper organization of orientation selectivity. Modeling studies have recently confirmed that balanced random networks are indeed capable of amplifying weakly tuned inputs and generating highly selective output responses, even in absence of feature-selective recurrent connectivity. Here we seek to elucidate the neuronal mechanisms underlying this phenomenon by resorting to networks of integrate-and-fire neurons, which are amenable to analytic treatment. Specifically, in networks of perfect integrate-and-fire neurons, we observe that highly selective and contrast invariant output responses emerge, very similar to networks of leaky integrate-and-fire neurons. We then demonstrate that a theory based on mean firing rates and the detailed network topology predicts the output responses, and explains the mechanisms underlying the suppression of the common-mode, amplification of modulation, and contrast invariance. Increasing inhibition dominance in our networks makes the rectifying nonlinearity more prominent, which in turn adds some distortions to the otherwise essentially linear prediction. An extension of the linear theory can account for all the distortions, enabling us to compute the exact shape of every individual tuning curve in our networks. We show that this simple form of nonlinearity adds two important properties to orientation selectivity in the network, namely sharpening of tuning curves and extra suppression of the modulation. The theory can be further extended to account for the nonlinearity of the leaky model by replacing the rectifier by the appropriate smooth input-output transfer function. These results are robust and do not
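The sharpening effect of the rectifying nonlinearity (the "iceberg effect" implied by the abstract) can be illustrated with a toy tuning curve; the modulation depth and common-mode offset below are arbitrary illustrative numbers, not quantities from the study:

```python
import numpy as np

theta = np.linspace(-np.pi / 2, np.pi / 2, 1801)  # preferred-orientation offset (rad)

def half_width(r):
    """Half-width at half-maximum of a tuning curve sampled over theta."""
    above = theta[r >= r.max() / 2]
    return above.max() - above.min()

# Weakly tuned feedforward input: modulation around a common mode
inp_mod = 0.3 * np.cos(2 * theta)
# Recurrent suppression pushes the common mode below zero;
# the rectifier then clips the sub-threshold flanks of the tuning curve
out = np.maximum(0.0, inp_mod - 0.1)

print(half_width(inp_mod - inp_mod.min()), half_width(out))
```

The rectified output tuning curve is measurably narrower than the input modulation, which is the tuning-curve sharpening attributed to the nonlinearity in the abstract.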
A Spiking Neural Network Model of the Lateral Geniculate Nucleus on the SpiNNaker Machine
Directory of Open Access Journals (Sweden)
Basabdatta Sen-Bhattacharya
2017-08-01
Full Text Available We present a spiking neural network model of the thalamic Lateral Geniculate Nucleus (LGN) developed on SpiNNaker, which is a state-of-the-art digital neuromorphic hardware built with very-low-power ARM processors. The parallel, event-based data processing in SpiNNaker makes it viable for building massively parallel neuro-computational frameworks. The LGN model has 140 neurons representing a “basic building block” for larger modular architectures. The motivation of this work is to simulate biologically plausible LGN dynamics on SpiNNaker. The synaptic layout of the model is consistent with biology. The model response is validated against existing literature reporting entrainment in steady state visually evoked potentials (SSVEP)—brain oscillations corresponding to periodic visual stimuli recorded via electroencephalography (EEG). Periodic stimulus to the model is provided by: a synthetic spike-train with inter-spike-intervals in the range 10–50 Hz at a resolution of 1 Hz; and spike-train output from a state-of-the-art electronic retina subjected to a light emitting diode flashing at 10, 20, and 40 Hz, simulating real-world visual stimulus to the model. The resolution of simulation is 0.1 ms to ensure solution accuracy for the underlying differential equations defining Izhikevich's neuron model. Under this constraint, 1 s of model simulation time is executed in 10 s real time on SpiNNaker; this is because simulations on SpiNNaker work in real time for time-steps dt ⩾ 1 ms. The model output shows entrainment with both sets of input and contains harmonic components of the fundamental frequency. However, suppressing the feed-forward inhibition in the circuit produces subharmonics within the gamma band (>30 Hz), implying a reduced information transmission fidelity. These model predictions agree with recent lumped-parameter computational model-based predictions, using conventional computers. Scalability of the framework is demonstrated by a multi
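The Izhikevich neuron underlying the LGN cells can be simulated directly in a few lines; this is a generic regular-spiking parameterization integrated on a conventional computer, not the SpiNNaker implementation or the model's actual parameters:

```python
# Izhikevich neuron (generic regular-spiking parameters), forward-Euler integration
a, b, c, d = 0.02, 0.2, -65.0, 8.0
dt, i_inj = 0.5, 10.0            # time step (ms) and injected current (illustrative)
v, u = -65.0, b * -65.0          # membrane potential and recovery variable
spike_times = []
for step in range(2000):          # 1 s of simulated time
    v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + i_inj)
    u += dt * a * (b * v - u)
    if v >= 30.0:                 # spike cutoff: record and reset
        spike_times.append(step * dt)
        v, u = c, u + d
print(f"{len(spike_times)} spikes in 1 s")
```

The same two update equations, solved at 0.1 ms resolution, are what the SpiNNaker simulation described above integrates for each of the 140 model neurons.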
Detection of neuronal spikes using an adaptive threshold based on the max-min spread sorting method.
Chan, Hsiao-Lung; Lin, Ming-An; Wu, Tony; Lee, Shih-Tseng; Tsai, Yu-Tai; Chao, Pei-Kuang
2008-07-15
Neuronal spike information can be used to correlate neuronal activity to various stimuli, to find target neural areas for deep brain stimulation, and to decode intended motor command for brain-machine interface. Typically, spike detection is performed based on the adaptive thresholds determined by running root-mean-square (RMS) value of the signal. Yet conventional detection methods are susceptible to threshold fluctuations caused by neuronal spike intensity. In the present study we propose a novel adaptive threshold based on the max-min spread sorting method. On the basis of microelectrode recording signals and simulated signals with Gaussian noises and colored noises, the novel method had the smallest threshold variations, and similar or better spike detection performance than either the RMS-based method or other improved methods. Moreover, the detection method described in this paper uses the reduced features of raw signal to determine the threshold, thereby giving a simple data manipulation that is beneficial for reducing the computational load when dealing with very large amounts of data (as multi-electrode recordings).
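The conventional RMS-based adaptive threshold that serves as the baseline here can be sketched as follows (reduced to a single analysis window); the sampling rate, spike amplitude, and multiplier k are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 24000                               # samples per second (illustrative)
sig = rng.normal(0.0, 1.0, fs)           # 1 s of unit-variance background noise
true_times = np.arange(1000, fs, 2400)   # ten injected spike locations
sig[true_times] += 8.0                   # large positive deflections as "spikes"

# RMS-based threshold: a multiple of the signal's root-mean-square value
k = 4.0
threshold = k * np.sqrt(np.mean(sig ** 2))
# Detect upward threshold crossings
onsets = np.flatnonzero((sig[1:] >= threshold) & (sig[:-1] < threshold)) + 1

print(f"threshold = {threshold:.2f}, detected {len(onsets)} of {len(true_times)}")
```

Note that the injected spikes themselves inflate the RMS estimate and hence the threshold; that dependence on spike intensity is the fluctuation problem the proposed max-min spread method is designed to avoid.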
Directory of Open Access Journals (Sweden)
William Lennon
2014-12-01
Full Text Available While the anatomy of the cerebellar microcircuit is well studied, how it implements cerebellar function is not understood. A number of models have been proposed to describe this mechanism but few emphasize the role of the vast network Purkinje cells (PKJs) form with the molecular layer interneurons (MLIs) – the stellate and basket cells. We propose a model of the MLI-PKJ network composed of simple spiking neurons incorporating the major anatomical and physiological features. In computer simulations, the model reproduces the irregular firing patterns observed in PKJs and MLIs in vitro and a shift toward faster, more regular firing patterns when inhibitory synaptic currents are blocked. In the model, the time between PKJ spikes is shown to be proportional, on average, to the amount of feedforward inhibition from an MLI. The two key elements of the model are: (1) spontaneously active PKJs and MLIs due to an endogenous depolarizing current, and (2) adherence to known anatomical connectivity along a parasagittal strip of cerebellar cortex. We propose this model to extend previous spiking network models of the cerebellum and for further computational investigation into the role of irregular firing and MLIs in cerebellar learning and function.
Giulioni, Massimiliano; Pannunzi, Mario; Badoni, Davide; Dante, Vittorio; Del Giudice, Paolo
2009-11-01
We describe the implementation and illustrate the learning performance of an analog VLSI network of 32 integrate-and-fire neurons with spike-frequency adaptation and 2016 Hebbian bistable spike-driven stochastic synapses, endowed with a self-regulating plasticity mechanism, which avoids unnecessary synaptic changes. The synaptic matrix can be flexibly configured and provides both recurrent and external connectivity with address-event representation compliant devices. We demonstrate a marked improvement in the efficiency of the network in classifying correlated patterns, owing to the self-regulating mechanism.
Monitoring spike train synchrony.
Kreuz, Thomas; Chicharro, Daniel; Houghton, Conor; Andrzejak, Ralph G; Mormann, Florian
2013-03-01
Recently, the SPIKE-distance has been proposed as a parameter-free and timescale-independent measure of spike train synchrony. This measure is time resolved since it relies on instantaneous estimates of spike train dissimilarity. However, its original definition led to spuriously high instantaneous values for eventlike firing patterns. Here we present a substantial improvement of this measure that eliminates this shortcoming. The reliability gained allows us to track changes in instantaneous clustering, i.e., time-localized patterns of (dis)similarity among multiple spike trains. Additional new features include selective and triggered temporal averaging as well as the instantaneous comparison of spike train groups. In a second step, a causal SPIKE-distance is defined such that the instantaneous values of dissimilarity rely on past information only so that time-resolved spike train synchrony can be estimated in real time. We demonstrate that these methods are capable of extracting valuable information from field data by monitoring the synchrony between neuronal spike trains during an epileptic seizure. Finally, the applicability of both the regular and the real-time SPIKE-distance to continuous data is illustrated on model electroencephalographic (EEG) recordings.
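A time-resolved dissimilarity profile of this family can be sketched with the simpler ISI-distance (a precursor of the SPIKE-distance, not the improved measure the paper defines):

```python
import numpy as np

def isi_profile(times, train_a, train_b):
    """Instantaneous ISI-dissimilarity between two spike trains, evaluated at
    the given time points; 0 means identical local interspike intervals."""
    def current_isi(train):
        train = np.asarray(train, dtype=float)
        idx = np.searchsorted(train, times, side="right")
        ok = (idx > 0) & (idx < len(train))      # times bracketed by two spikes
        isi = np.full(len(times), np.nan)
        isi[ok] = train[idx[ok]] - train[idx[ok] - 1]
        return isi
    ia, ib = current_isi(train_a), current_isi(train_b)
    return np.abs(ia - ib) / np.maximum(ia, ib)  # normalized to [0, 1)

t = np.linspace(0.5, 3.5, 7)
regular = [0.0, 1.0, 2.0, 3.0, 4.0]
sparse = [0.0, 2.0, 4.0]
print(isi_profile(t, regular, regular))  # identical trains: all zeros
print(isi_profile(t, regular, sparse))   # differing local rates: nonzero values
```

The SPIKE-distance refines this idea by also weighting the timing of individual spikes, which is what removes the spuriously high values for event-like firing mentioned in the abstract.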
Processing of sounds by population spikes in a model of primary auditory cortex
Directory of Open Access Journals (Sweden)
Alex Loebel
2007-10-01
Full Text Available We propose a model of the primary auditory cortex (A1), in which each iso-frequency column is represented by a recurrent neural network with short-term synaptic depression. Such networks can emit Population Spikes, in which most of the neurons fire synchronously for a short time period. Different columns are interconnected in a way that reflects the tonotopic map in A1, and population spikes can propagate along the map from one column to the next, in a temporally precise manner that depends on the specific input presented to the network. The network, therefore, processes incoming sounds by precise sequences of population spikes that are embedded in a continuous asynchronous activity, with both of these response components carrying information about the inputs and interacting with each other. With these basic characteristics, the model can account for a wide range of experimental findings. We reproduce neuronal frequency tuning curves, whose width depends on the strength of the intracortical inhibitory and excitatory connections. Non-simultaneous two-tone stimuli show forward masking depending on their temporal separation, as well as on the duration of the first stimulus. The model also exhibits non-linear suppressive interactions between sub-threshold tones and broad-band noise inputs, similar to the hypersensitive locking suppression recently demonstrated in auditory cortex. We derive several predictions from the model. In particular, we predict that spontaneous activity in primary auditory cortex gates the temporally locked responses of A1 neurons to auditory stimuli. Spontaneous activity could, therefore, be a mechanism for rapid and reversible modulation of cortical processing.
Dynamic Behavior of Artificial Hodgkin-Huxley Neuron Model Subject to Additive Noise.
Kang, Qi; Huang, BingYao; Zhou, MengChu
2016-09-01
Motivated by neuroscience discoveries during the last few years, many studies consider pulse-coupled neural networks with spike-timing as an essential component in information processing by the brain. There also exist some technical challenges in simulating networks of artificial spiking neurons. The existing studies use a Hodgkin-Huxley (H-H) model to describe the spiking dynamics and neuro-computational properties of each neuron. But they fail to address the effect of specific non-Gaussian noise on an artificial H-H neuron system. This paper aims to analyze how an artificial H-H neuron responds to the addition of different types of noise, using an electrical current and a subunit noise model. The spiking and bursting behavior of this neuron is also investigated through numerical simulations. In addition, through statistical analysis, the intensity of different kinds of noise distributions is discussed to obtain their relationship with the mean firing rate, interspike intervals, and stochastic resonance.
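A minimal Hodgkin-Huxley simulation with additive current noise illustrates the kind of system studied; the parameters are the standard squid-axon values, while the noise level, drive, and duration are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
# Standard Hodgkin-Huxley constants (mV, ms, uA/cm^2, mS/cm^2)
c_m, g_na, g_k, g_l = 1.0, 120.0, 36.0, 0.3
e_na, e_k, e_l = 50.0, -77.0, -54.4

def rates(v):
    """Voltage-dependent gating rate constants alpha/beta for m, h, n."""
    am = 0.1 * (v + 40.0) / (1.0 - np.exp(-(v + 40.0) / 10.0))
    bm = 4.0 * np.exp(-(v + 65.0) / 18.0)
    ah = 0.07 * np.exp(-(v + 65.0) / 20.0)
    bh = 1.0 / (1.0 + np.exp(-(v + 35.0) / 10.0))
    an = 0.01 * (v + 55.0) / (1.0 - np.exp(-(v + 55.0) / 10.0))
    bn = 0.125 * np.exp(-(v + 65.0) / 80.0)
    return am, bm, ah, bh, an, bn

dt, t_end, i_mean, sigma = 0.01, 200.0, 10.0, 2.0
v = -65.0
am, bm, ah, bh, an, bn = rates(v)
m, h, n = am / (am + bm), ah / (ah + bh), an / (an + bn)  # steady-state gates
n_spikes, above = 0, False
for _ in range(int(t_end / dt)):
    i_noise = sigma * rng.normal() / np.sqrt(dt)   # additive white current noise
    am, bm, ah, bh, an, bn = rates(v)
    m += dt * (am * (1 - m) - bm * m)
    h += dt * (ah * (1 - h) - bh * h)
    n += dt * (an * (1 - n) - bn * n)
    i_ion = g_na * m**3 * h * (v - e_na) + g_k * n**4 * (v - e_k) + g_l * (v - e_l)
    v += dt / c_m * (i_mean + i_noise - i_ion)
    if v > 0.0 and not above:                       # count upward zero crossings
        n_spikes += 1
    above = v > 0.0
print(f"{n_spikes} spikes in {t_end:.0f} ms")
```

Sweeping `sigma` in such a simulation is the basic setup for relating noise intensity to mean firing rate and interspike intervals, as the abstract describes.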
Point process modeling and estimation: Advances in the analysis of dynamic neural spiking data
Deng, Xinyi
A common interest of scientists in many fields is to understand the relationship between the dynamics of a physical system and the occurrences of discrete events within such a physical system. Seismologists study the connection between mechanical vibrations of the Earth and the occurrences of earthquakes so that future earthquakes can be better predicted. Astrophysicists study the association between the oscillating energy of celestial regions and the emission of photons to learn about the Universe's various objects and their interactions. Neuroscientists study the link between behavior and the millisecond-timescale spike patterns of neurons to understand higher brain functions. Such relationships can often be formulated within the framework of state-space models with point process observations. The basic idea is that the dynamics of the physical systems are driven by the dynamics of some stochastic state variables, and the discrete events we observe in an interval are noisy observations with distributions determined by the state variables. This thesis proposes several new methodological developments that advance the framework of state-space models with point process observations at the intersection of statistics and neuroscience. In particular, we develop new methods 1) to characterize the rhythmic spiking activity using history-dependent structure, 2) to model population spike activity using marked point process models, 3) to allow for real-time decision making, and 4) to take into account the need for dimensionality reduction for high-dimensional state and observation processes. We applied these methods to a novel problem of tracking rhythmic dynamics in the spiking of neurons in the subthalamic nucleus of Parkinson's patients with the goal of optimizing placement of deep brain stimulation electrodes. We developed a decoding algorithm that can make decisions in real time (for example, to stimulate the neurons or not) based on various sources of information present in
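The generative side of a state-space model with point process observations can be sketched with a latent AR(1) state driving Poisson counts through a log link; the dynamics and link parameters are illustrative, and no estimation step is shown:

```python
import numpy as np

rng = np.random.default_rng(2)
T = 2000
x = np.zeros(T)                       # latent state: AR(1) process
for t in range(1, T):
    x[t] = 0.98 * x[t - 1] + 0.1 * rng.normal()

rate = np.exp(1.0 + x)                # conditional intensity per bin (log link)
counts = rng.poisson(rate)            # point-process (spike-count) observations

r = np.corrcoef(x, counts)[0, 1]
print(f"state/count correlation: {r:.2f}")
```

In the state-space framework, the inference problem runs the other way: given only `counts`, one estimates the hidden state `x`, which is the structure underlying the decoding and tracking methods described in the abstract.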
Model reduction of strong-weak neurons.
Du, Bosen; Sorensen, Danny; Cox, Steven J
2014-01-01
We consider neurons with large dendritic trees that are weakly excitable in the sense that back propagating action potentials are severely attenuated as they travel from the small, strongly excitable, spike initiation zone. In previous work we have shown that the computational size of weakly excitable cell models may be reduced by two or more orders of magnitude, and that the size of strongly excitable models may be reduced by at least one order of magnitude, without sacrificing the spatio-temporal nature of its inputs (in the sense that we reproduce the cell's precise mapping of inputs to outputs). We combine the best of these two strategies via a predictor-corrector decomposition scheme and achieve a drastically reduced highly accurate model of a caricature of the neuron responsible for collision detection in the locust.
Reconstruction of neuronal input through modeling single-neuron dynamics and computations
Energy Technology Data Exchange (ETDEWEB)
Qin, Qing; Wang, Jiang; Yu, Haitao; Deng, Bin, E-mail: dengbin@tju.edu.cn; Chan, Wai-lok [School of Electrical Engineering and Automation, Tianjin University, Tianjin 300072 (China)
2016-06-15
Mathematical models provide a mathematical description of neuron activity, which helps us better understand and quantify neural computations and the corresponding biophysical mechanisms evoked by a stimulus. In this paper, based on the output spike train evoked by an acupuncture mechanical stimulus, we present two different levels of models to describe the input-output system and achieve the reconstruction of neuronal input. The reconstruction process is divided into two steps: First, the neuronal spiking event is treated as a Gamma stochastic process. The scale parameter and the shape parameter of the Gamma process are, respectively, defined as two spiking characteristics, which are estimated by a state-space method. Then, a leaky integrate-and-fire (LIF) model is used to mimic the response system, and the estimated spiking characteristics are transformed into two temporal input parameters of the LIF model through two conversion formulas. We test this reconstruction method on three different groups of simulation data. All three groups of estimates reconstruct the input parameters with fairly high accuracy. We then use this reconstruction method to estimate the non-measurable acupuncture input parameters. Results show that, under three different frequencies of acupuncture stimulus, the estimated input parameters differ markedly: the higher the frequency of the acupuncture stimulus, the higher the accuracy of the reconstruction.
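The LIF response model used in the second step can be sketched as follows; the membrane parameters are textbook values, and the constant currents below merely stand in for the paper's two estimated temporal input parameters:

```python
def lif_spikes(i_amp, t_end=500.0, dt=0.1, tau=20.0, v_rest=-65.0,
               v_th=-50.0, v_reset=-65.0, r_m=10.0):
    """Leaky integrate-and-fire response to a constant input current.
    Times in ms, voltages in mV; parameters are illustrative textbook values."""
    v, spikes = v_rest, []
    for step in range(int(t_end / dt)):
        v += dt / tau * (-(v - v_rest) + r_m * i_amp)  # leaky integration
        if v >= v_th:                                  # threshold crossing
            spikes.append(step * dt)
            v = v_reset                                # reset after spike
    return spikes

weak = lif_spikes(1.0)    # R*I = 10 mV, below the 15 mV threshold distance: silent
strong = lif_spikes(2.0)  # R*I = 20 mV, above threshold: tonic firing
print(len(weak), len(strong))
```

Mapping the estimated Gamma-process characteristics onto the input of such a model is what lets the method translate spiking statistics back into reconstructed stimulus parameters.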
Spike-Threshold Variability Originated from Separatrix-Crossing in Neuronal Dynamics
Wang, Longfei; Wang, Hengtong; Yu, Lianchun; Chen, Yong
2016-08-01
The threshold voltage for action potential generation is a key regulator of neuronal signal processing, yet the mechanism of its dynamic variation is still not well described. In this paper, we propose that threshold phenomena can be classified into parameter thresholds and state thresholds. Voltage thresholds, which belong to the state thresholds, are determined by the ‘general separatrix’ in state space. We demonstrate that the separatrix generally exists in the state space of neuron models. The general form of the separatrix is assumed to be a function of both states and stimuli, and the previously assumed equation for the time evolution of the threshold is naturally deduced from the separatrix. In terms of neuronal dynamics, the threshold voltage variation, which is affected by different stimuli, is determined by crossing the separatrix at different points in state space. We suggest that the separatrix-crossing mechanism in state space is the intrinsic dynamic mechanism underlying threshold voltages and post-stimulus threshold phenomena. These proposals are systematically verified in example models, three of which have analytic separatrices and one of which is the classic Hodgkin-Huxley model. The separatrix-crossing framework provides an overview of the neuronal threshold and will facilitate understanding of the nature of threshold variability.
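The state-threshold idea can be seen in the quadratic integrate-and-fire model, whose unstable fixed point is an analytic separatrix; this toy example is a generic illustration, not necessarily one of the paper's example models:

```python
# Quadratic integrate-and-fire: dV/dt = V^2 + I with I < 0.
# The unstable fixed point V_u = +sqrt(-I) is the separatrix of this
# one-dimensional system: states above it spike, states below it relax to rest.
I = -1.0
v_u = (-I) ** 0.5

def integrate(v0, dt=1e-3, t_end=5.0, v_spike=50.0):
    """Forward-Euler integration from initial voltage v0; classify the outcome."""
    v = v0
    for _ in range(int(t_end / dt)):
        v += dt * (v * v + I)
        if v >= v_spike:
            return "spike"      # crossed the separatrix: runaway depolarization
    return "rest"               # stayed below: decays to the stable fixed point

print(integrate(v_u - 0.01), integrate(v_u + 0.01))
```

Two initial conditions a mere 0.02 apart straddle the separatrix and produce opposite outcomes, which is the sense in which the voltage threshold is a state-space object rather than a fixed parameter.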
Martina, Marco; Metz, Alexia E; Bean, Bruce P
2007-01-01
We characterized the kinetics and pharmacological properties of voltage-activated potassium currents in rat cerebellar Purkinje neurons using recordings from nucleated patches, which allowed high resolution of activation and deactivation kinetics. Activation was exceptionally rapid, with 10-90% activation in about 400 μs at +30 mV, near the peak of the spike. Deactivation was also extremely rapid, with a decay time constant of about 300 μs near -80 mV. These rapid activation and deactivation kinetics are consistent with mediation by Kv3-family channels but are even faster than reported for Kv3-family channels in other neurons. The peptide toxin BDS-I had very little blocking effect on potassium currents elicited by 100-ms depolarizing steps, but the potassium current evoked by action potential waveforms was inhibited nearly completely. The mechanism of inhibition by BDS-I involves slowing of activation rather than total channel block, consistent with the effects described in cloned Kv3-family channels, and this explains the dramatically different effects on currents evoked by short spikes versus voltage steps. As predicted from this mechanism, the effects of toxin on spike width were relatively modest (broadening by roughly 25%). These results show that BDS-I-sensitive channels with ultrafast activation and deactivation kinetics carry virtually all of the voltage-dependent potassium current underlying repolarization during normal Purkinje cell spikes.
Effects of phase on homeostatic spike rates.
Fisher, Nicholas; Talathi, Sachin S; Carney, Paul R; Ditto, William L
2010-05-01
Recent experimental results by Talathi et al. (Neurosci Lett 455:145-149, 2009) showed a divergence in the spike rates of two types of population spike events, representing the putative activity of the excitatory and inhibitory neurons in the CA1 area of an animal model for temporal lobe epilepsy. The divergence in the spike rate was accompanied by a shift in the phase of oscillations between these spike rates, leading to a spontaneous epileptic seizure. In this study, we propose a model of homeostatic synaptic plasticity which assumes that the target spike rate of populations of excitatory and inhibitory neurons in the brain is a function of the phase difference between the excitatory and inhibitory spike rates. With this model of homeostatic synaptic plasticity, we are able to simulate the spike rate dynamics seen experimentally by Talathi et al. in a large network of interacting excitatory and inhibitory neurons using two different spiking neuron models. A drift analysis of the spike rates resulting from the homeostatic synaptic plasticity update rule allowed us to determine the type of synapse that may be primarily involved in the spike rate imbalance in the experimental observation by Talathi et al. We find that excitatory synapses, particularly those in which the excitatory neuron is presynaptic, have the most influence in producing the diverging spike rates and in driving the spike rates to be anti-phase. Our analysis suggests that the excitatory neuronal population, more specifically the excitatory to excitatory synaptic connections, could be implicated in a methodology designed to control epileptic seizures.
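A minimal sketch of a phase-dependent homeostatic rule of the kind described above: the target rate is a function of the phase difference between the excitatory and inhibitory population rates, and a synaptic variable drifts to push the measured rate toward that target. The target function, baseline `r0`, gain `k` and learning rate `eta` are illustrative assumptions, not the authors' actual update rule.

```python
import numpy as np

def homeostatic_update(w, rate, phase_diff, eta=0.01, r0=5.0, k=2.0):
    """One homeostatic weight update: the target spike rate depends on the
    phase difference between excitatory and inhibitory population rates
    (illustrative cosine form; the paper's exact target function is not
    reproduced here)."""
    target = r0 + k * np.cos(phase_diff)   # assumed phase-dependent target rate (Hz)
    return w + eta * (target - rate)       # drive the measured rate toward the target

w = 1.0
# in-phase populations (phase_diff = 0): target = 7 Hz; a 5 Hz rate is below target,
# so the weight grows
w_new = homeostatic_update(w, rate=5.0, phase_diff=0.0)
print(round(w_new, 3))  # 1.02
```

For anti-phase populations (`phase_diff = pi`) the same 5 Hz rate lies above the target of 3 Hz, so the weight is reduced instead.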
Holbrook, Andrew; Vandenberg-Rodes, Alexander; Fortin, Norbert; Shahbaba, Babak
2017-01-01
Neuroscientists are increasingly collecting multimodal data during experiments and observational studies. Different data modalities-such as EEG, fMRI, LFP, and spike trains-offer different views of the complex systems contributing to neural phenomena. Here, we focus on joint modeling of LFP and spike train data, and present a novel Bayesian method for neural decoding to infer behavioral and experimental conditions. This model performs supervised dual-dimensionality reduction: it learns low-dimensional representations of two different sources of information that not only explain variation in the input data itself, but also predict extra-neuronal outcomes. Despite being one probabilistic unit, the model consists of multiple modules: exponential PCA and wavelet PCA are used for dimensionality reduction in the spike train and LFP modules, respectively; these modules simultaneously interface with a Bayesian binary regression module. We demonstrate how this model may be used for prediction, parametric inference, and identification of influential predictors. In prediction, the hierarchical model outperforms other models trained on LFP alone, spike train alone, and combined LFP and spike train data. We compare two methods for modeling the loading matrix and find them to perform similarly. Finally, model parameters and their posterior distributions yield scientific insights.
What is the most realistic single-compartment model of spike initiation?
Directory of Open Access Journals (Sweden)
Romain Brette
2015-04-01
Full Text Available A large variety of neuron models are used in theoretical and computational neuroscience, and among these, single-compartment models are a popular kind. These models do not explicitly include the dendrites or the axon, and range from the Hodgkin-Huxley (HH) model to various flavors of integrate-and-fire (IF) models. The main classes of models differ in the way spikes are initiated. Which one is the most realistic? Starting with some general epistemological considerations, I show that the notion of realism comes in two dimensions: empirical content (the sort of predictions that a model can produce) and empirical accuracy (whether these predictions are correct). I then examine the realism of the main classes of single-compartment models along these two dimensions, in light of recent experimental evidence.
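As a concrete reference point for the IF end of the model spectrum discussed above, here is a minimal leaky integrate-and-fire simulation. All parameter values are illustrative, not taken from the paper.

```python
import numpy as np

def simulate_lif(I, dt=0.1, tau=10.0, v_rest=-70.0, v_th=-55.0,
                 v_reset=-75.0, R=10.0):
    """Minimal leaky integrate-and-fire neuron (illustrative parameters):
    tau * dv/dt = (v_rest - v) + R*I; when v crosses v_th, record a spike
    time and reset v to v_reset. I is a sampled input current, dt in ms."""
    v = v_rest
    spikes = []
    for t, i_t in enumerate(I):
        v += dt * (v_rest - v + R * i_t) / tau   # forward-Euler integration
        if v >= v_th:
            spikes.append(t * dt)
            v = v_reset
    return spikes

# a constant 2 nA input drives the steady-state voltage toward -50 mV,
# above threshold, so the model fires repetitively over 100 ms
spikes = simulate_lif(np.full(1000, 2.0))
print(len(spikes) > 0)  # True
```

Unlike the HH model, spike initiation here is a pure threshold rule, which is exactly the modeling choice whose realism the article examines.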
Statistical properties of superimposed stationary spike trains.
Deger, Moritz; Helias, Moritz; Boucsein, Clemens; Rotter, Stefan
2012-06-01
The Poisson process is an often employed model for the activity of neuronal populations. It is known, though, that superpositions of realistic, non- Poisson spike trains are not in general Poisson processes, not even for large numbers of superimposed processes. Here we construct superimposed spike trains from intracellular in vivo recordings from rat neocortex neurons and compare their statistics to specific point process models. The constructed superimposed spike trains reveal strong deviations from the Poisson model. We find that superpositions of model spike trains that take the effective refractoriness of the neurons into account yield a much better description. A minimal model of this kind is the Poisson process with dead-time (PPD). For this process, and for superpositions thereof, we obtain analytical expressions for some second-order statistical quantities-like the count variability, inter-spike interval (ISI) variability and ISI correlations-and demonstrate the match with the in vivo data. We conclude that effective refractoriness is the key property that shapes the statistical properties of the superposition spike trains. We present new, efficient algorithms to generate superpositions of PPDs and of gamma processes that can be used to provide more realistic background input in simulations of networks of spiking neurons. Using these generators, we show in simulations that neurons which receive superimposed spike trains as input are highly sensitive for the statistical effects induced by neuronal refractoriness.
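A sketch of a generator for superpositions of Poisson processes with dead-time, in the spirit of the PPD model above. The exponential rate is chosen so each component train has the requested mean rate; the implementation details (function names, merging by sorting) are my own, not the authors' algorithms.

```python
import numpy as np

def ppd_train(rate, dead_time, t_max, rng):
    """One spike train of a Poisson process with dead-time (PPD): each ISI is
    the dead time plus an exponential interval, with the exponential rate set
    so that the mean ISI equals 1/rate."""
    lam = rate / (1.0 - rate * dead_time)   # effective Poisson rate after dead-time
    t, spikes = 0.0, []
    while t < t_max:
        t += dead_time + rng.exponential(1.0 / lam)
        if t < t_max:
            spikes.append(t)
    return np.array(spikes)

def superposition(n, rate, dead_time, t_max, seed=0):
    """Merge n independent PPD trains into one superposition spike train."""
    rng = np.random.default_rng(seed)
    merged = np.concatenate([ppd_train(rate, dead_time, t_max, rng)
                             for _ in range(n)])
    return np.sort(merged)

sp = superposition(n=10, rate=5.0, dead_time=0.005, t_max=100.0)
# expected spike count is roughly n * rate * t_max = 5000
print(abs(len(sp) / 5000.0 - 1.0) < 0.1)  # True
```

Each component train respects the dead-time, but the merged train does not, which is precisely why the superposition deviates from a plain Poisson process.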
Cotterill, Ellese; Charlesworth, Paul; Thomas, Christopher W; Paulsen, Ole; Eglen, Stephen J
2016-08-01
Accurate identification of bursting activity is an essential element in the characterization of neuronal network activity. Despite this, no one technique for identifying bursts in spike trains has been widely adopted. Instead, many methods have been developed for the analysis of bursting activity, often on an ad hoc basis. Here we provide an unbiased assessment of the effectiveness of eight of these methods at detecting bursts in a range of spike trains. We suggest a list of features that an ideal burst detection technique should possess and use synthetic data to assess each method in regard to these properties. We further employ each of the methods to reanalyze microelectrode array (MEA) recordings from mouse retinal ganglion cells and examine their coherence with bursts detected by a human observer. We show that several common burst detection techniques perform poorly at analyzing spike trains with a variety of properties. We identify four promising burst detection techniques, which are then applied to MEA recordings of networks of human induced pluripotent stem cell-derived neurons and used to describe the ontogeny of bursting activity in these networks over several months of development. We conclude that no current method can provide "perfect" burst detection results across a range of spike trains; however, two burst detection techniques, the MaxInterval and logISI methods, outperform the others. We provide recommendations for the robust analysis of bursting activity in experimental recordings using current techniques.
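A reduced sketch of the MaxInterval idea named above: a burst begins when an ISI falls below one threshold and extends while ISIs stay below a second, looser threshold. The full method also uses a minimum burst duration and a minimum inter-burst interval, which are omitted here; the threshold values are illustrative.

```python
def max_interval_bursts(spikes, max_begin_isi=0.1, max_end_isi=0.2, min_spikes=3):
    """Simplified MaxInterval burst detector. A burst starts when an ISI is
    <= max_begin_isi, continues while ISIs are <= max_end_isi, and is kept
    only if it contains at least min_spikes spikes. Returns bursts as lists
    of spike times (in seconds)."""
    bursts, current = [], []
    for i in range(1, len(spikes)):
        isi = spikes[i] - spikes[i - 1]
        limit = max_end_isi if current else max_begin_isi
        if isi <= limit:
            if not current:
                current = [spikes[i - 1]]   # the spike that opened the burst
            current.append(spikes[i])
        else:
            if len(current) >= min_spikes:
                bursts.append(current)
            current = []
    if len(current) >= min_spikes:
        bursts.append(current)
    return bursts

# four tightly spaced spikes form one burst; the isolated late spike does not
train = [0.00, 0.05, 0.11, 0.18, 1.5]
print(len(max_interval_bursts(train)))  # 1
```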
Fast Na+ spike generation in dendrites of guinea-pig substantia nigra pars compacta neurons
DEFF Research Database (Denmark)
Nedergaard, S; Hounsgaard, Jørn Dybkjær
1996-01-01
were not inhibited in the presence of glutamate receptor antagonists or during Ca2+ channel blockade. Blockers of gap junctional conductance (sodium propionate, octanol and halothane) did not affect the field-induced spikes. The spike generation was highly sensitive to changes in membrane conductance...
Neuronal Networks in Children with Continuous Spikes and Waves during Slow Sleep
Siniatchkin, Michael; Groening, Kristina; Moehring, Jan; Moeller, Friederike; Boor, Rainer; Brodbeck, Verena; Michel, Christoph M.; Rodionov, Roman; Lemieux, Louis; Stephani, Ulrich
2010-01-01
Epileptic encephalopathy with continuous spikes and waves during slow sleep is an age-related disorder characterized by the presence of interictal epileptiform discharges during at least greater than 85% of sleep and cognitive deficits associated with this electroencephalography pattern. The pathophysiological mechanisms of continuous spikes and…
The dynamic brain: from spiking neurons to neural masses and cortical fields.
Directory of Open Access Journals (Sweden)
Gustavo Deco
2008-08-01
Full Text Available The cortex is a complex system, characterized by its dynamics and architecture, which underlie many functions such as action, perception, learning, language, and cognition. Its structural architecture has been studied for more than a hundred years; however, its dynamics have been addressed much less thoroughly. In this paper, we review and integrate, in a unifying framework, a variety of computational approaches that have been used to characterize the dynamics of the cortex, as evidenced at different levels of measurement. Computational models at different space-time scales help us understand the fundamental mechanisms that underpin neural processes and relate these processes to neuroscience data. Modeling at the single neuron level is necessary because this is the level at which information is exchanged between the computing elements of the brain: the neurons. Mesoscopic models tell us how neural elements interact to yield emergent behavior at the level of microcolumns and cortical columns. Macroscopic models can inform us about whole brain dynamics and interactions between large-scale neural systems such as cortical regions, the thalamus, and brain stem. Each level of description relates uniquely to neuroscience data, from single-unit recordings, through local field potentials to functional magnetic resonance imaging (fMRI), electroencephalogram (EEG), and magnetoencephalogram (MEG). Models of the cortex can establish which types of large-scale neuronal networks can perform computations and characterize their emergent properties. Mean-field and related formulations of dynamics also play an essential and complementary role as forward models that can be inverted given empirical data. This makes dynamic models critical in integrating theory and experiments. We argue that elaborating principled and informed models is a prerequisite for grounding empirical neuroscience in a cogent theoretical framework, commensurate with the achievements in the
Edge detection based on Hodgkin-Huxley neuron model simulation.
Yedjour, Hayat; Meftah, Boudjelal; Lézoray, Olivier; Benyettou, Abdelkader
2017-04-03
In this paper, we propose a spiking neural network model for edge detection in images. The proposed model is biologically inspired by the mechanisms employed by natural vision systems, more specifically by the biologically fulfilled function of simple cells of the human primary visual cortex that are selective for orientation. Several aspects are studied in this model according to three characteristics: feedforward spiking neural structure; conductance-based model of the Hodgkin-Huxley neuron and Gabor receptive fields structure. A visualized map is generated using the firing rate of neurons representing the orientation map of the visual cortex area. We have simulated the proposed model on different images. Successful computer simulation results are obtained. For comparison, we have chosen five methods for edge detection. We finally evaluate and compare the performances of our model toward contour detection using a public dataset of natural images with associated contour ground truths. Experimental results show the ability and high performance of the proposed network model.
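To make the Gabor front end of such a model concrete, here is a rate-based sketch: each pixel takes the maximum rectified response over a small bank of oriented Gabor receptive fields, standing in for the firing-rate map of orientation-selective cells. The conductance-based Hodgkin-Huxley dynamics of the actual model are omitted, and all filter parameters are illustrative.

```python
import numpy as np

def gabor_kernel(theta, size=9, sigma=2.0, lam=4.0):
    """Gabor receptive field at orientation theta (illustrative parameters)."""
    r = np.arange(size) - size // 2
    x, y = np.meshgrid(r, r)
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)

def corr2_same(img, k):
    """Plain 2-D correlation with zero padding, 'same' output size."""
    h = k.shape[0] // 2
    pad = np.pad(img, h)
    out = np.zeros_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(pad[i:i + k.shape[0], j:j + k.shape[1]] * k)
    return out

def orientation_map(img, n_orient=4):
    """Firing-rate-like map: maximum rectified Gabor response over orientations
    (a rate-based stand-in for the spiking circuit described above)."""
    responses = [np.maximum(corr2_same(img, gabor_kernel(t)), 0)
                 for t in np.linspace(0, np.pi, n_orient, endpoint=False)]
    return np.max(responses, axis=0)

# a vertical step edge produces strong responses at the edge, none in flat regions
img = np.zeros((16, 16)); img[:, 8:] = 1.0
emap = orientation_map(img)
print(emap[8, 8] > emap[8, 2])  # True
```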
Analyzing multiple spike trains with nonparametric Granger causality.
Nedungadi, Aatira G; Rangarajan, Govindan; Jain, Neeraj; Ding, Mingzhou
2009-08-01
Simultaneous recordings of spike trains from multiple single neurons are becoming commonplace. Understanding the interaction patterns among these spike trains remains a key research area. A question of interest is the evaluation of information flow between neurons through the analysis of whether one spike train exerts causal influence on another. For continuous-valued time series data, Granger causality has proven an effective method for this purpose. However, the basis for Granger causality estimation is autoregressive data modeling, which is not directly applicable to spike trains. Various filtering options distort the properties of spike trains as point processes. Here we propose a new nonparametric approach to estimate Granger causality directly from the Fourier transforms of spike train data. We validate the method on synthetic spike trains generated by model networks of neurons with known connectivity patterns and then apply it to neurons simultaneously recorded from the thalamus and the primary somatosensory cortex of a squirrel monkey undergoing tactile stimulation.
Fujiwara, Terufumi; Kazawa, Tomoki; Haupt, Stephan Shuichi; Kanzaki, Ryohei
2014-01-01
Although odorant concentration-response characteristics of olfactory neurons have been widely investigated in a variety of animal species, the effect of odorant concentration on neural processing at circuit level is still poorly understood. Using calcium imaging in the silkmoth (Bombyx mori) pheromone processing circuit of the antennal lobe (AL), we studied the effect of odorant concentration on second-order projection neuron (PN) responses. While PN calcium responses of dendrites showed monotonic increases with odorant concentration, calcium responses of somata showed decreased responses at higher odorant concentrations due to postsynaptic inhibition. Simultaneous calcium imaging and electrophysiology revealed that calcium responses of PN somata but not dendrites reflect spiking activity. Inhibition shortened spike response duration rather than decreasing peak instantaneous spike frequency (ISF). Local interneurons (LNs) that were specifically activated at high odorant concentrations at which PN responses were suppressed are the putative source of inhibition. Our results imply the existence of an intraglomerular mechanism that preserves time resolution in olfactory processing over a wide odorant concentration range.
A Statistical Model for In Vivo Neuronal Dynamics.
Directory of Open Access Journals (Sweden)
Simone Carlo Surace
Full Text Available Single neuron models have a long tradition in computational neuroscience. Detailed biophysical models such as the Hodgkin-Huxley model, as well as simplified neuron models such as the class of integrate-and-fire models, relate the input current to the membrane potential of the neuron. Those types of models have been extensively fitted to in vitro data where the input current is controlled. Those models are, however, of little use when it comes to characterizing intracellular in vivo recordings, since the input to the neuron is not known. Here we propose a novel single neuron model that characterizes the statistical properties of in vivo recordings. More specifically, we propose a stochastic process where the subthreshold membrane potential follows a Gaussian process and the spike emission intensity depends nonlinearly on the membrane potential as well as the spiking history. We first show that the model has a rich dynamical repertoire since it can capture arbitrary subthreshold autocovariance functions, firing-rate adaptations as well as arbitrary shapes of the action potential. We then show that this model can be efficiently fitted to data without overfitting. We finally show that this model can be used to characterize, and therefore precisely compare, various intracellular in vivo recordings from different animals and experimental conditions.
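An illustrative simulation of this model class: the subthreshold voltage follows an Ornstein-Uhlenbeck process (one example of a Gaussian process) and spikes are emitted with an intensity that grows exponentially with voltage. The exponential link, the absence of history dependence, and all parameters are assumptions for the sketch; the paper's model is more general.

```python
import numpy as np

def simulate_gp_neuron(t_max=1000.0, dt=1.0, tau=20.0, sigma=2.0,
                       v0=-60.0, beta=1.0, v_th=-50.0, seed=1):
    """Subthreshold voltage: Ornstein-Uhlenbeck process around v0 (mV, ms).
    Spike emission: inhomogeneous point process with intensity
    exp(beta*(v - v_th)) / 1000 per ms (assumed exponential link)."""
    rng = np.random.default_rng(seed)
    n = int(t_max / dt)
    v, spikes = v0, []
    for i in range(n):
        # OU update: relaxation toward v0 plus Gaussian noise
        v += dt * (v0 - v) / tau + sigma * np.sqrt(dt) * rng.standard_normal()
        lam = np.exp(beta * (v - v_th)) / 1000.0   # intensity in 1/ms
        if rng.random() < lam * dt:                # Bernoulli approximation
            spikes.append(i * dt)
    return spikes

spikes = simulate_gp_neuron()
print(isinstance(spikes, list))  # True
```

Because the intensity depends on the fluctuating voltage rather than a fixed rate, the resulting spike train inherits the autocovariance structure of the subthreshold process, which is the key property the model exploits for in vivo data.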
Supervised Learning in Multilayer Spiking Neural Networks
Sporea, Ioana
2012-01-01
The current article introduces a supervised learning algorithm for multilayer spiking neural networks. The algorithm presented here overcomes some limitations of existing learning algorithms, as it can be applied to neurons firing multiple spikes and it can in principle be applied to any linearisable neuron model. The algorithm is applied successfully to various benchmarks, such as the XOR problem and the Iris data set, as well as complex classification problems. The simulations also show the flexibility of this supervised learning algorithm, which permits different encodings of the spike timing patterns, including precise spike trains encoding.
Advances in spike localization with EEG dipole modeling.
Rose, Sandra; Ebersole, John S
2009-10-01
EEG interpretation by visual inspection of waveforms, using the assumption that activity at a given electrode is a representation of only the activity of the cortex immediately beneath it, has been the traditional form of EEG analysis since its inception. The relatively recent advent of digital EEG has allowed more advanced analysis of EEG data and has shown that the simple visual inspection described above is a simplistic form of analysis. This is especially true when one is attempting to localize an epileptogenic focus using EEG spikes or seizure onset data. Spatiotemporal analysis of scalp voltage fields has allowed for improved localization of likely cerebral origins of such waveforms. Equivalent dipole source modeling is one such technique and, although not perfect, provides improved characterization of spike and seizure sources as compared to previous methods when properly interpreted. The use of other modern techniques, such as 3D MRI reconstructions and realistic head models, can further improve accuracy of dipole localization and allow for the synthesis of EEG and imaging data, which may be invaluable, especially in cases of pre-surgical epilepsy evaluation.
Ferguson, Alexandra L; Stone, Trevor W
2010-04-01
The presence of high concentrations of glutamate in the extracellular fluid following brain trauma or ischaemia may contribute substantially to subsequent impairments of neuronal function. In this study, glutamate was applied to hippocampal slices for several minutes, producing over-depolarization, which was reflected in an initial loss of evoked population potential size in the CA1 region. Orthodromic population spikes recovered only partially over the following 60 min, whereas antidromic spikes and excitatory postsynaptic potentials (EPSPs) showed greater recovery, implying a change in EPSP-spike coupling (E-S coupling), which was confirmed by intracellular recording from CA1 pyramidal cells. The recovery of EPSPs was enhanced further by dizocilpine, suggesting that the long-lasting glutamate-induced change in E-S coupling involves NMDA receptors. This was supported by experiments showing that when isolated NMDA-receptor-mediated EPSPs were studied in isolation, there was only partial recovery following glutamate, unlike the composite EPSPs. The recovery of orthodromic population spikes and NMDA-receptor-mediated EPSPs following glutamate was enhanced by the adenosine A1 receptor blocker DPCPX, the A2A receptor antagonist SCH58261 or adenosine deaminase, associated with a loss of restoration to normal of the glutamate-induced E-S depression. The results indicate that the long-lasting depression of neuronal excitability following recovery from glutamate is associated with a depression of E-S coupling. This effect is partly dependent on activation of NMDA receptors, which modify adenosine release or the sensitivity of adenosine receptors. The results may have implications for the use of A1 and A2A receptor ligands as cognitive enhancers or neuroprotectants.
Directory of Open Access Journals (Sweden)
Luis I Angel-Chavez
Full Text Available In signal transduction research, natural or synthetic molecules are commonly used to target a great variety of signaling proteins. For instance, forskolin, a diterpene activator of adenylate cyclase, has been widely used in cellular preparations to increase the intracellular cAMP level. However, it has been shown that forskolin directly inhibits some cloned K+ channels, which in excitable cells set up the resting membrane potential, shape the action potential and regulate repetitive firing. Despite the growing evidence indicating that K+ channels are blocked by forskolin, there are no studies yet assessing the impact of this mechanism of action on neuron excitability and firing patterns. In sympathetic neurons, we find that forskolin and its derivative 1,9-dideoxyforskolin reversibly suppress the delayed rectifier K+ current (IKV). In addition, forskolin reduced the spike afterhyperpolarization and enhanced the spike frequency-dependent adaptation. Given that IKV is mostly generated by Kv2.1 channels, HEK-293 cells were transfected with cDNA encoding the Kv2.1 α subunit to characterize the mechanism of forskolin action. Both drugs reversibly suppressed the Kv2.1-mediated K+ currents. Forskolin inhibited Kv2.1 currents and IKV with IC50 values of ~32 μM and ~24 μM, respectively. In addition, the drug induced an apparent current inactivation and slowed down current deactivation. We suggest that forskolin reduces the excitability of sympathetic neurons by enhancing the spike frequency-dependent adaptation, partially through a direct block of their native Kv2.1 channels.
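IC50 values like those reported here are conventionally read off a Hill inhibition curve. A minimal sketch, assuming a Hill coefficient of 1 (the abstract does not state one):

```python
def hill_inhibition(conc, ic50, n=1.0):
    """Fraction of current remaining at blocker concentration `conc` (same
    units as ic50) for a standard Hill inhibition curve with coefficient n."""
    return 1.0 / (1.0 + (conc / ic50) ** n)

# at the reported IC50 of ~32 uM for Kv2.1, forskolin blocks half the current
print(round(hill_inhibition(32.0, 32.0), 2))  # 0.5
```

Fitting this curve to the fractional block measured at several concentrations is the usual way an IC50 such as ~32 μM is estimated.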
Li, Meng; Tsien, Joe Z
2017-01-01
A major stumbling block to cracking the real-time neural code is neuronal variability - neurons discharge spikes with enormous variability not only across trials within the same experiments but also in resting states. Such variability is widely regarded as a noise which is often deliberately averaged out during data analyses. In contrast to such a dogma, we put forth the Neural Self-Information Theory that neural coding is operated based on the self-information principle under which variability in the time durations of inter-spike-intervals (ISI), or neuronal silence durations, is self-tagged with discrete information. As the self-information processor, each ISI carries a certain amount of information based on its variability-probability distribution; higher-probability ISIs which reflect the balanced excitation-inhibition ground state convey minimal information, whereas lower-probability ISIs which signify rare-occurrence surprisals in the form of extremely transient or prolonged silence carry most information. These variable silence durations are naturally coupled with intracellular biochemical cascades, energy equilibrium and dynamic regulation of protein and gene expression levels. As such, this silence variability-based self-information code is completely intrinsic to the neurons themselves, with no need for outside observers to set any reference point as typically used in the rate code, population code and temporal code models. Moreover, temporally coordinated ISI surprisals across cell population can inherently give rise to robust real-time cell-assembly codes which can be readily sensed by the downstream neural clique assemblies. One immediate utility of this self-information code is a general decoding strategy to uncover a variety of cell-assembly patterns underlying external and internal categorical or continuous variables in an unbiased manner.
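A simple numerical reading of the self-information idea above: estimate the ISI probability distribution from the empirical histogram and assign each ISI the surprisal -log2 p(ISI). The histogram estimator is my own illustrative choice, not the authors' exact procedure.

```python
import numpy as np

def isi_self_information(isis, bins=20):
    """Self-information -log2 p(ISI) of each inter-spike interval, with p
    estimated from the empirical ISI histogram (a simple sketch of the
    self-information principle, not the authors' estimator)."""
    counts, edges = np.histogram(isis, bins=bins)
    p = counts / counts.sum()
    # map each ISI back to its histogram bin
    idx = np.clip(np.digitize(isis, edges[1:-1]), 0, bins - 1)
    return -np.log2(p[idx])

rng = np.random.default_rng(0)
isis = rng.exponential(0.1, size=1000)   # synthetic Poisson-like ISIs
info = isi_self_information(isis)
# rare, prolonged silences carry more self-information than common, short ISIs
print(info[np.argmax(isis)] >= np.median(info))  # True
```

Under this reading, high-probability ISIs near the mode of the distribution convey little information, while extreme ISIs in the tails are the "surprisals" that the theory proposes as the carriers of the neural code.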
Is realistic neuronal modeling realistic?
Almog, Mara; Korngreen, Alon
2016-11-01
Scientific models are abstractions that aim to explain natural phenomena. A successful model shows how a complex phenomenon arises from relatively simple principles while preserving major physical or biological rules and predicting novel experiments. A model should not be a facsimile of reality; it is an aid for understanding it. Contrary to this basic premise, with the 21st century has come a surge in computational efforts to model biological processes in great detail. Here we discuss the oxymoronic, realistic modeling of single neurons. This rapidly advancing field is driven by the discovery that some neurons don't merely sum their inputs and fire if the sum exceeds some threshold. Thus researchers have asked what are the computational abilities of single neurons and attempted to give answers using realistic models. We briefly review the state of the art of compartmental modeling highlighting recent progress and intrinsic flaws. We then attempt to address two fundamental questions. Practically, can we realistically model single neurons? Philosophically, should we realistically model single neurons? We use layer 5 neocortical pyramidal neurons as a test case to examine these issues. We subject three publically available models of layer 5 pyramidal neurons to three simple computational challenges. Based on their performance and a partial survey of published models, we conclude that current compartmental models are ad hoc, unrealistic models functioning poorly once they are stretched beyond the specific problems for which they were designed. We then attempt to plot possible paths for generating realistic single neuron models. Copyright © 2016 the American Physiological Society.
Comparison of neuronal spike exchange methods on a Blue Gene/P supercomputer
Directory of Open Access Journals (Sweden)
Michael eHines
2011-11-01
Full Text Available The performance of several spike exchange methods using a Blue Gene/P supercomputer has been tested with 8K to 128K cores, using randomly connected networks of up to 32M cells with 1k connections per cell and 4M cells with 10k connections per cell. The spike exchange methods used are the standard Message Passing Interface collective, MPI_Allgather, and several variants of the non-blocking multisend method, either implemented via non-blocking MPI_Isend or exploiting the very low overhead direct memory access communication available on the Blue Gene/P. In all cases the worst performing method was that using MPI_Isend, due to the high overhead of initiating a spike communication. The two best performing methods had similar performance, with very low overhead for the initiation of spike communication: the persistent multisend method using the Record-Replay feature of the Deep Computing Messaging Framework (DCMF_Multicast), and a two-phase multisend in which a DCMF_Multicast is used to first send to a subset of phase 1 destination cores, which then pass it on to their subset of phase 2 destination cores. Departure from ideal scaling for the multisend methods is almost completely due to load imbalance caused by the large variation in the number of cells that fire on each processor in the interval between synchronizations. Spike exchange time itself is negligible, since transmission overlaps with computation and is handled by a direct memory access controller. We conclude that ideal performance scaling will ultimately be limited by the imbalance in incoming processor spikes between synchronization intervals. Thus, counterintuitively, maximizing load balance requires that the distribution of cells on processors not reflect the neural net architecture but be random, so that sets of cells which burst fire together are on different processors, with their targets on as large a set of processors as possible.
Krumin, Michael; Reutsky, Inna; Shoham, Shy
2010-01-01
The correlation structure of neural activity is believed to play a major role in the encoding and possibly the decoding of information in neural populations. Recently, several methods were developed for exactly controlling the correlation structure of multi-channel synthetic spike trains (Brette, 2009; Krumin and Shoham, 2009; Macke et al., 2009; Gutnisky and Josic, 2010; Tchumatchenko et al., 2010) and, in a related work, correlation-based analysis of spike trains was used for blind identification of single-neuron models (Krumin et al., 2010), for identifying compact auto-regressive models for multi-channel spike trains, and for facilitating their causal network analysis (Krumin and Shoham, 2010). However, the diversity of correlation structures that can be explained by the feed-forward, non-recurrent, generative models used in these studies is limited. Hence, methods based on such models occasionally fail when analyzing correlation structures that are observed in neural activity. Here, we extend this framework by deriving closed-form expressions for the correlation structure of a more powerful multivariate self- and mutually exciting Hawkes model class that is driven by exogenous non-negative inputs. We demonstrate that the resulting Linear-Non-linear-Hawkes (LNH) framework is capable of capturing the dynamics of spike trains with a generally richer and more biologically relevant multi-correlation structure, and can be used to accurately estimate the Hawkes kernels or the correlation structure of external inputs in both simulated and real spike trains (recorded from visually stimulated mouse retinal ganglion cells). We conclude by discussing the method's limitations and the broader significance of strengthening the links between neural spike train analysis and classical system identification.
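A univariate self-exciting Hawkes process, the basic ingredient of the multivariate model class above, can be simulated with Ogata's thinning algorithm. This sketch uses an exponential excitation kernel and parameters chosen for illustration; it is not the LNH estimation framework itself.

```python
import numpy as np

def simulate_hawkes(mu, alpha, beta, t_max, seed=0):
    """Univariate self-exciting Hawkes process via Ogata thinning:
    lam(t) = mu + sum_i alpha * exp(-beta * (t - t_i)) over past spikes t_i.
    Stationary only when alpha/beta < 1."""
    rng = np.random.default_rng(seed)
    t, spikes = 0.0, []
    while t < t_max:
        # intensity is non-increasing between spikes, so lam at the current
        # time is a valid upper bound for the thinning step
        lam_bar = mu + sum(alpha * np.exp(-beta * (t - s)) for s in spikes)
        t += rng.exponential(1.0 / lam_bar)
        lam_t = mu + sum(alpha * np.exp(-beta * (t - s)) for s in spikes)
        if t < t_max and rng.random() < lam_t / lam_bar:
            spikes.append(t)
    return spikes

sp = simulate_hawkes(mu=1.0, alpha=0.5, beta=1.0, t_max=200.0)
# the stationary mean rate is mu / (1 - alpha/beta) = 2 Hz here
print(0.5 < len(sp) / 200.0 < 4.0)  # True
```

Each spike transiently raises the intensity, producing the positive auto-correlations that make Hawkes models a richer generative class than feed-forward point-process models.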
Institute of Scientific and Technical Information of China (English)
YOOER Chi-Feng; XU Jian-Xue; ZHANG Xin-Hua
2009-01-01
The mechanism of period-adding cascades with chaos in a reduced leech neuron model is suggested to be the bifurcation of a saddle-node limit cycle with homoclinic orbits satisfying the "small lobe condition", instead of the blue-sky catastrophe. In each spike-adding step, the new spike emerges at the end of the spiking phase of the burster.
Xie, Huijuan; Gong, Yubing; Wang, Qi
2016-06-01
In this paper, we numerically study how time delay induces multiple coherence resonance (MCR) and synchronization transitions (ST) in adaptive Hodgkin-Huxley neuronal networks with spike-timing-dependent plasticity (STDP). It is found that MCR induced by time delay can be either enhanced or suppressed as the adjusting rate Ap of STDP changes, that ST induced by time delay varies with increasing Ap, and that there is an optimal Ap at which the ST becomes strongest. It is also found that there are an optimal network randomness and an optimal network size at which ST induced by time delay becomes strongest; when Ap increases, the optimal network randomness and optimal network size increase and the related ST is enhanced. These results show that STDP can either enhance or suppress MCR, and that optimal STDP can enhance ST induced by time delay in adaptive neuronal networks. These findings provide new insight into the role of STDP in information processing and transmission in neural systems.
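For reference, the pair-based STDP rule underlying adaptive networks like these updates a weight according to the sign and size of the spike-time difference. The adjusting rate `Ap` plays the role of the potentiation amplitude studied above; the depression amplitude `Am` and time constant `tau` are illustrative values.

```python
import numpy as np

def stdp_dw(dt_spike, Ap=0.01, Am=0.012, tau=20.0):
    """Pair-based STDP weight change for spike-time difference
    dt_spike = t_post - t_pre (in ms). Ap is the adjusting/potentiation rate;
    Am and tau are illustrative, not taken from the study above."""
    if dt_spike > 0:
        return Ap * np.exp(-dt_spike / tau)    # pre before post: potentiate
    return -Am * np.exp(dt_spike / tau)        # post before pre: depress

print(stdp_dw(10.0) > 0 and stdp_dw(-10.0) < 0)  # True
```

Raising `Ap` strengthens potentiation relative to depression, which is how the adjusting rate reshapes the network coupling and hence the delay-induced resonance and synchronization effects reported here.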
Gong, Yubing; Wang, Baoying; Xie, Huijuan
2016-12-01
In this paper, we numerically study the effect of spike-timing-dependent plasticity (STDP) on synchronization transitions induced by autaptic activity in adaptive Newman-Watts Hodgkin-Huxley neuron networks. It is found that synchronization transitions induced by autaptic delay vary with the adjusting rate Ap of STDP and become strongest at a certain Ap value, and that this Ap value increases when network randomness or network size increases. It is also found that the synchronization transitions induced by autaptic delay become strongest at certain values of network randomness and network size, and that these values increase, with the related synchronization transitions enhanced, when Ap increases. These results show that there is an optimal STDP that can enhance the synchronization transitions induced by autaptic delay in adaptive neuronal networks. These findings provide new insight into the roles of STDP and autapses in information transmission in neural systems. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Haegens, Saskia; Nácher, Verónica; Luna, Rogelio; Romo, Ranulfo; Jensen, Ole
2011-01-01
Extensive work in humans using magneto- and electroencephalography strongly suggests that decreased oscillatory α-activity (8–14 Hz) facilitates processing in a given region, whereas increased α-activity serves to actively suppress irrelevant or interfering processing. However, little work has been done to understand how α-activity is linked to neuronal firing. Here, we simultaneously recorded local field potentials and spikes from somatosensory, premotor, and motor regions while a trained monkey performed a vibrotactile discrimination task. In the local field potentials we observed strong activity in the α-band, which decreased in the sensorimotor regions during the discrimination task. This α-power decrease predicted better discrimination performance. Furthermore, the α-oscillations demonstrated a rhythmic relation with the spiking, such that firing was highest at the trough of the α-cycle. Firing rates increased with a decrease in α-power. These findings suggest that α-oscillations exercise a strong inhibitory influence on both spike timing and firing rate. Thus, the pulsed inhibition by α-oscillations plays an important functional role in the extended sensorimotor system. PMID:22084106
Directory of Open Access Journals (Sweden)
Paul Chorley
2011-05-01
Full Text Available Dopaminergic neurons in the mammalian substantia nigra display characteristic phasic responses to stimuli which reliably predict the receipt of primary rewards. These responses have been suggested to encode reward prediction-errors similar to those used in reinforcement learning. Here, we propose a model of dopaminergic activity in which prediction error signals are generated by the joint action of short-latency excitation and long-latency inhibition, in a network undergoing dopaminergic neuromodulation of both spike-timing dependent synaptic plasticity and neuronal excitability. In contrast to previous models, sensitivity to recent events is maintained by the selective modification of specific striatal synapses, efferent to cortical neurons exhibiting stimulus-specific, temporally extended activity patterns. Our model shows, in the presence of significant background activity, (i) a shift in dopaminergic response from reward to reward-predicting stimuli, (ii) preservation of a response to unexpected rewards, and (iii) a precisely-timed below-baseline dip in activity observed when expected rewards are omitted.
Hyperbolic Plykin attractor can exist in neuron models
DEFF Research Database (Denmark)
Belykh, V.; Belykh, I.; Mosekilde, Erik
2005-01-01
Strange hyperbolic attractors are hard to find in real physical systems. This paper provides the first example of a realistic system, a canonical three-dimensional (3D) model of bursting neurons, that is likely to have a strange hyperbolic attractor. Using a geometrical approach to the study… of the neuron model, we derive a flow-defined Poincaré map giving an accurate account of the system's dynamics. In a parameter region where the neuron system undergoes bifurcations causing transitions between tonic spiking and bursting, this two-dimensional map becomes a map of a disk with several periodic… holes. A particular case is the map of a disk with three holes, matching the Plykin example of a planar hyperbolic attractor. The corresponding attractor of the 3D neuron model appears to be hyperbolic (this property is not verified in the present paper) and arises as a result of a two-loop (secondary…
Gastrein, Philippe; Campanac, Emilie; Gasselin, Célia; Cudmore, Robert H; Bialowas, Andrzej; Carlier, Edmond; Fronzaroli-Molinieres, Laure; Ankri, Norbert; Debanne, Dominique
2011-08-01
Hyperpolarization-activated cyclic nucleotide modulated current (I(h)) sets resonance frequency within the θ-range (5–12 Hz) in pyramidal neurons. However, its precise contribution to the temporal fidelity of spike generation in response to stimulation of excitatory or inhibitory synapses remains unclear. In conditions where pharmacological blockade of I(h) does not affect synaptic transmission, we show that postsynaptic h-channels improve spike time precision in CA1 pyramidal neurons through two main mechanisms. I(h) enhances precision of excitatory postsynaptic potential (EPSP)–spike coupling because I(h) reduces peak EPSP duration. I(h) improves the precision of rebound spiking following inhibitory postsynaptic potentials (IPSPs) in CA1 pyramidal neurons and sets pacemaker activity in stratum oriens interneurons because I(h) accelerates the decay of both IPSPs and after-hyperpolarizing potentials (AHPs). The contribution of h-channels to intrinsic resonance and EPSP waveform was comparatively much smaller in CA3 pyramidal neurons. Our results indicate that the elementary mechanisms by which postsynaptic h-channels control fidelity of spike timing at the scale of individual neurons may account for the decreased theta-activity observed in hippocampal and neocortical networks when h-channel activity is pharmacologically reduced.
Comparison of neuronal spike exchange methods on a Blue Gene/P supercomputer.
Hines, Michael; Kumar, Sameer; Schürmann, Felix
2011-01-01
For neural network simulations on parallel machines, interprocessor spike communication can be a significant portion of the total simulation time. The performance of several spike exchange methods using a Blue Gene/P (BG/P) supercomputer has been tested with 8-128 K cores using randomly connected networks of up to 32 M cells with 1 k connections per cell and 4 M cells with 10 k connections per cell, i.e., on the order of 4·10^10 connections (K is 1024, M is 1024^2, and k is 1000). The spike exchange methods used are the standard Message Passing Interface (MPI) collective, MPI_Allgather, and several variants of the non-blocking Multisend method either implemented via non-blocking MPI_Isend, or exploiting the possibility of very low overhead direct memory access (DMA) communication available on the BG/P. In all cases, the worst performing method was that using MPI_Isend due to the high overhead of initiating a spike communication. The two best performing methods-the persistent Multisend method using the Record-Replay feature of the Deep Computing Messaging Framework DCMF_Multicast; and a two-phase multisend in which a DCMF_Multicast is used to first send to a subset of phase one destination cores, which then pass it on to their subset of phase two destination cores-had similar performance with very low overhead for the initiation of spike communication. Departure from ideal scaling for the Multisend methods is almost completely due to load imbalance caused by the large variation in number of cells that fire on each processor in the interval between synchronization. Spike exchange time itself is negligible since transmission overlaps with computation and is handled by a DMA controller. We conclude that ideal performance scaling will be ultimately limited by imbalance between incoming processor spikes between synchronization intervals. Thus, counterintuitively, maximization of load balance requires that the distribution of cells on processors should not reflect
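The MPI_Allgather exchange pattern that this benchmark compares against the Multisend variants can be emulated in a few lines. This is a sketch of the communication pattern only: a plain list of per-rank buffers stands in for what would be mpi4py's `comm.allgather(local_spikes)` on a real machine, and the helper name and data layout are assumptions.

```python
def allgather_spikes(per_rank_spikes):
    """Emulate the MPI_Allgather spike-exchange pattern: every rank
    contributes the (cell_id, spike_time) pairs its cells generated in
    this synchronization interval, and every rank receives the
    concatenation of all contributions."""
    merged = sorted((s for rank in per_rank_spikes for s in rank),
                    key=lambda s: s[1])
    # Allgather delivers an identical merged buffer to every rank; each
    # rank would then keep only spikes whose source cells project to
    # its local cells.
    return [list(merged) for _ in per_rank_spikes]

# Three ranks with uneven firing in one interval (rank 2 is silent) --
# exactly the load imbalance the abstract identifies as the scaling limit.
ranks = [[(0, 1.2), (3, 2.5)], [(7, 0.9)], []]
views = allgather_spikes(ranks)
print(views[0])
```

The all-to-all delivery is what makes this pattern simple but wasteful at scale, which is why the point-to-point Multisend variants win on the BG/P.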
Eguchi, Akihiro; Neymotin, Samuel A; Stringer, Simon M
2014-01-01
Although many computational models have been proposed to explain orientation maps in primary visual cortex (V1), it is not yet known how similar clusters of color-selective neurons in macaque V1/V2 are connected and develop. In this work, we address the problem of understanding the cortical processing of color information with a possible mechanism of the development of the patchy distribution of color selectivity via computational modeling. Each color input is decomposed into a red, green, and blue representation and transmitted to the visual cortex via a simulated optic nerve in a luminance channel and red-green and blue-yellow opponent color channels. Our model of the early visual system consists of multiple topographically-arranged layers of excitatory and inhibitory neurons, with sparse intra-layer connectivity and feed-forward connectivity between layers. Layers are arranged based on anatomy of early visual pathways, and include a retina, lateral geniculate nucleus, and layered neocortex. Each neuron in the V1 output layer makes synaptic connections to neighboring neurons and receives the three types of signals in the different channels from the corresponding photoreceptor position. Synaptic weights are randomized and learned using spike-timing-dependent plasticity (STDP). After training with natural images, the neurons display heightened sensitivity to specific colors. Information-theoretic analysis reveals mutual information between particular stimuli and responses, and that the information reaches a maximum with fewer neurons in the higher layers, indicating that estimations of the input colors can be done using the output of fewer cells in the later stages of cortical processing. In addition, cells with similar color receptive fields form clusters. Analysis of spiking activity reveals increased firing synchrony between neurons when particular color inputs are presented or removed (ON-cell/OFF-cell).
Kim, Sooyun; Guzman, Segundo J.; Hu, Hua; Jonas, Peter
2012-01-01
CA3 pyramidal neurons are important for memory formation and pattern completion in the hippocampal network. It is generally thought that proximal synapses from the mossy fibers activate these neurons most efficiently, whereas distal inputs from the perforant path have a weaker modulatory influence. We used confocally targeted patch-clamp recording from dendrites and axons to map the activation of rat CA3 pyramidal neurons at the subcellular level. Our results reveal two distinct dendritic dom...
The ionic mechanism of gamma resonance in rat striatal fast-spiking neurons.
Sciamanna, Giuseppe; Wilson, Charles J
2011-12-01
Striatal fast-spiking (FS) cells in slices fire in the gamma frequency range and in vivo are often phase-locked to gamma oscillations in the field potential. We studied the firing patterns of these cells in slices from rats ages 16-23 days to determine the mechanism of their gamma resonance. The resonance of striatal FS cells was manifested as a minimum frequency for repetitive firing. At rheobase, cells fired a doublet of action potentials or doublets separated by pauses, with an instantaneous firing rate averaging 44 spikes/s. The minimum rate for sustained firing was also responsible for the stuttering firing pattern. Firing rate adapted during each episode of firing, and bursts were terminated when firing was reduced to the minimum sustainable rate. Resonance and stuttering continued after blockade of Kv3 current using tetraethylammonium (0.1-1 mM). Both gamma resonance and stuttering were strongly dependent on Kv1 current. Blockade of Kv1 channels with dendrotoxin-I (100 nM) completely abolished the stuttering firing pattern, greatly lowered the minimum firing rate, abolished gamma-band subthreshold oscillations, and slowed spike frequency adaptation. The loss of resonance could be accounted for by a reduction in potassium current near spike threshold and the emergence of a fixed spike threshold. Inactivation of the Kv1 channel combined with the minimum firing rate could account for the stuttering firing pattern. The resonant properties conferred by this channel were shown to be adequate to account for their phase-locking to gamma-frequency inputs as seen in vivo.
Modeling neuronal vulnerability in ALS.
Roselli, Francesco; Caroni, Pico
2014-08-20
Using computational models of motor neuron ion fluxes, firing properties, and energy requirements, Le Masson et al. (2014) reveal how local imbalances in energy homeostasis may self-amplify and contribute to neurodegeneration in ALS.
Directory of Open Access Journals (Sweden)
Steven J Ryan
Full Text Available The basolateral complex of the amygdala (BLA) is a critical component of the neural circuit regulating fear learning. During fear learning and recall, the amygdala and other brain regions, including the hippocampus and prefrontal cortex, exhibit phase-locked oscillations in the high delta/low theta frequency band (∼2-6 Hz) that have been shown to contribute to the learning process. Network oscillations are commonly generated by inhibitory synaptic input that coordinates action potentials in groups of neurons. In the rat BLA, principal neurons spontaneously receive synchronized, inhibitory input in the form of compound, rhythmic, inhibitory postsynaptic potentials (IPSPs), likely originating from burst-firing parvalbumin interneurons. Here we investigated the role of compound IPSPs in the rat and rhesus macaque BLA in regulating action potential synchrony and spike-timing precision. Furthermore, because principal neurons exhibit intrinsic oscillatory properties and resonance between 4 and 5 Hz, in the same frequency band observed during fear, we investigated whether compound IPSPs and intrinsic oscillations interact to promote rhythmic activity in the BLA at this frequency. Using whole-cell patch clamp in brain slices, we demonstrate that compound IPSPs, which occur spontaneously and are synchronized across principal neurons in both the rat and primate BLA, significantly improve spike-timing precision in BLA principal neurons for a window of ∼300 ms following each IPSP. We also show that compound IPSPs coordinate the firing of pairs of BLA principal neurons, and significantly improve spike synchrony for a window of ∼130 ms. Compound IPSPs enhance a 5 Hz calcium-dependent membrane potential oscillation (MPO) in these neurons, likely contributing to the improvement in spike-timing precision and synchronization of spiking. Activation of the cAMP-PKA signaling cascade enhanced the MPO, and inhibition of this cascade blocked the MPO. We discuss
Energy Technology Data Exchange (ETDEWEB)
Yu, Haitao; Guo, Xinmeng; Wang, Jiang, E-mail: jiangwang@tju.edu.cn; Deng, Bin; Wei, Xile [School of Electrical Engineering and Automation, Tianjin University, Tianjin 300072 (China)
2014-09-01
The phenomenon of stochastic resonance in Newman-Watts small-world neuronal networks is investigated when the strength of synaptic connections between neurons is adaptively adjusted by spike-time-dependent plasticity (STDP). It is shown that the phenomenon of stochastic resonance occurs irrespective of whether the synaptic connectivity is fixed or adaptive. The efficiency of network stochastic resonance can be largely enhanced by STDP in the coupling process. In particular, the resonance for adaptive coupling can reach a much larger value than that for fixed coupling when the noise intensity is small or intermediate. STDP with dominant depression and a small temporal window ratio is more efficient for the transmission of a weak external signal in small-world neuronal networks. In addition, we demonstrate that the effect of stochastic resonance can be further improved via fine-tuning of the average coupling strength of the adaptive network. Furthermore, the small-world topology can significantly affect stochastic resonance in excitable neuronal networks. It is found that there exists an optimal probability of adding links at which the noise-induced transmission of a weak periodic signal peaks.
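A standard way to quantify stochastic resonance of the kind described here is the Fourier coefficient Q of the network output at the weak signal's frequency. The estimator below is a generic illustration of that measure applied to a binned spike train; it is an assumed, simplified stand-in, not the authors' exact formula.

```python
import math

def resonance_q(spike_train, dt, freq):
    """Fourier measure of signal transmission: project a binary spike
    train (one sample per dt seconds) onto sin/cos at the weak signal's
    frequency and take the amplitude. Larger Q = better transmission."""
    t_total = len(spike_train) * dt
    q_sin = sum(x * math.sin(2 * math.pi * freq * i * dt)
                for i, x in enumerate(spike_train)) * 2 * dt / t_total
    q_cos = sum(x * math.cos(2 * math.pi * freq * i * dt)
                for i, x in enumerate(spike_train)) * 2 * dt / t_total
    return math.hypot(q_sin, q_cos)

dt, f = 0.001, 10.0                      # 1 ms bins, 10 Hz weak signal
# A train phase-locked to the signal scores high; a train at an
# unrelated period (here 20 Hz, orthogonal to 10 Hz) scores near zero.
locked = [1 if math.sin(2 * math.pi * f * i * dt) > 0.95 else 0
          for i in range(2000)]
unrelated = [1 if i % 50 == 0 else 0 for i in range(2000)]
print(resonance_q(locked, dt, f), resonance_q(unrelated, dt, f))
```

Plotting Q against noise intensity produces the characteristic single-peaked (or, for MCR, multi-peaked) resonance curves these network studies report.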
Kim, Suhwan; Jung, Unsang; Baek, Juyoung; Lee, Sangwon; Jung, Woonggyu; Kim, Jeehyun; Kang, Shinwon
2013-01-01
Recently, mouse neuroblastoma cells have been considered as an attractive model for the study of human neurological and prion diseases, and they have been intensively used as a model system in different areas. For example, the differentiation of neuro2a (N2A) cells, receptor-mediated ion current, and glutamate-induced physiological responses have been actively investigated with these cells. These mouse neuroblastoma N2A cells are of interest because they grow faster than other cells of neural origin and have a number of other advantages. The calcium oscillations and neural spikes of mouse neuroblastoma N2A cells in epileptic conditions are evaluated. Based on our observations of neural spikes in these cells with our proposed imaging modality, we reported that they can be an important model in epileptic activity studies. We concluded that mouse neuroblastoma N2A cells produce epileptic spikes in vitro in the same way as those produced by neurons or astrocytes. This evidence suggests that increased levels of neurotransmitter release due to the enhancement of free calcium from 4-aminopyridine causes the mouse neuroblastoma N2A cells to produce epileptic spikes and calcium oscillations.
Kim, Suhwan; Baek, Juyeong; Jung, Unsang; Lee, Sangwon; Jung, Woonggyu; Kim, Jeehyun; Kang, Shinwon
2013-05-01
Recently, mouse neuroblastoma cells have been considered an attractive model for the study of human neurological and prion diseases, and they are intensively used as a model system in different areas. Among those areas, the differentiation of neuro2a (N2A) cells, receptor-mediated ion currents, and glutamate-induced physiological responses are actively investigated. Mouse neuroblastoma N2A cells are of interest because they grow faster than other cells of neural origin and have a few other advantages. This study evaluated calcium oscillations and neural spike recordings of mouse neuroblastoma N2A cells in an epileptic condition. Based on our observation of neural spikes in mouse N2A cells with our proposed imaging modality, we report that mouse neuroblastoma N2A cells can be an important model for studies of epileptic activity. It is concluded that mouse neuroblastoma N2A cells produce epileptic spikes in vitro in the same way as neurons or astrocytes. This evidence suggests that increased neurotransmitter release, caused by the enhancement of free calcium by 4-aminopyridine, leads mouse neuroblastoma N2A cells to produce epileptic spikes and calcium oscillations.
Analysis of Chaotic Resonance in Izhikevich Neuron Model.
Nobukawa, Sou; Nishimura, Haruhiko; Yamanishi, Teruya; Liu, Jian-Qin
2015-01-01
In stochastic resonance (SR), the presence of noise helps a nonlinear system amplify a weak (sub-threshold) signal. Chaotic resonance (CR) is a phenomenon similar to SR but without stochastic noise, which has been observed in neural systems. However, no study to date has investigated and compared the characteristics and performance of the signal responses of a spiking neural system in some chaotic states in CR. In this paper, we focus on the Izhikevich neuron model, which can reproduce major spike patterns that have been experimentally observed. We examine and classify the chaotic characteristics of this model by using Lyapunov exponents with a saltation matrix and Poincaré section methods in order to address the measurement challenge posed by the state-dependent jump in the resetting process. We found the existence of two distinctive states, a chaotic state involving primarily turbulent movement and an intermittent chaotic state. In order to assess the signal responses of CR in these classified states, we introduced an extended Izhikevich neuron model by considering weak periodic signals, and defined the cycle histogram of neuron spikes as well as the corresponding mutual correlation and information. Through computer simulations, we confirmed that both chaotic states in CR can sensitively respond to weak signals. Moreover, we found that the intermittent chaotic state exhibited a prompter response than the chaotic state with primarily turbulent movement.
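The Izhikevich model under study, including the state-dependent reset that motivates the saltation-matrix treatment of Lyapunov exponents in the abstract, can be integrated with a few lines of forward Euler. The step size and input current below are illustrative choices; the parameters are the standard regular-spiking set from Izhikevich's published model.

```python
def izhikevich(a, b, c, d, current, t_max_ms=500.0, dt=0.5):
    """Forward-Euler integration of the Izhikevich neuron model
        v' = 0.04 v^2 + 5 v + 140 - u + I,   u' = a (b v - u),
    with the state-dependent reset v <- c, u <- u + d whenever v
    reaches 30 mV. The reset is the discontinuity that requires the
    saltation-matrix correction when computing Lyapunov exponents."""
    v, u = -65.0, b * -65.0
    spike_times = []
    for k in range(int(t_max_ms / dt)):
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + current)
        u += dt * a * (b * v - u)
        if v >= 30.0:                      # spike detected: apply reset
            spike_times.append(k * dt)
            v, u = c, u + d
    return spike_times

# Regular-spiking parameters with a constant suprathreshold input
# produce tonic firing:
print(len(izhikevich(a=0.02, b=0.2, c=-65.0, d=8.0, current=10.0)))
```

Sweeping (a, b, c, d) reproduces the model's other spike patterns (bursting, chattering, fast spiking), which is what makes it a convenient substrate for classifying chaotic states.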
Neuraminidase reduces interictal spikes in a rat temporal lobe epilepsy model.
Isaev, Dmytro; Zhao, Qian; Kleen, Jonathan K; Lenck-Santini, Pierre Pascal; Adstamongkonkul, Dusit; Isaeva, Elena; Holmes, Gregory L
2011-03-01
Interictal spikes have been implicated in epileptogenesis and cognitive dysfunction in epilepsy. Unfortunately, antiepileptic drugs have shown poor efficacy in suppressing interictal discharges; novel therapies are needed. Surface charge on neuronal membranes provides a novel target for abolishing interictal spikes. This property can be modulated through the use of neuraminidase, an enzyme that decreases the amount of negatively charged sialic acid. In the present report we determined whether applying neuraminidase to brains of rats with a history of status epilepticus would reduce the number of interictal discharges. Following pilocarpine-induced status epilepticus, rats received intrahippocampal injections of neuraminidase, which significantly decreased the number of interictal spikes recorded in the CA1 region. This study provides evidence that sialic acid degradation can reduce the number of interictal spikes. Furthermore, the results suggest that modifying surface charge created by negatively charged sialic acid may provide new opportunities for reducing aberrant epileptiform events in epilepsy.
Dual coding with STDP in a spiking recurrent neural network model of the hippocampus.
Directory of Open Access Journals (Sweden)
Daniel Bush
Full Text Available The firing rate of single neurons in the mammalian hippocampus has been demonstrated to encode for a range of spatial and non-spatial stimuli. It has also been demonstrated that phase of firing, with respect to the theta oscillation that dominates the hippocampal EEG during stereotypical learning behaviour, correlates with an animal's spatial location. These findings have led to the hypothesis that the hippocampus operates using a dual (rate and temporal) coding system. To investigate the phenomenon of dual coding in the hippocampus, we examine a spiking recurrent network model with theta coded neural dynamics and an STDP rule that mediates rate-coded Hebbian learning when pre- and post-synaptic firing is stochastic. We demonstrate that this plasticity rule can generate both symmetric and asymmetric connections between neurons that fire at concurrent or successive theta phase, respectively, and subsequently produce both pattern completion and sequence prediction from partial cues. This unifies previously disparate auto- and hetero-associative network models of hippocampal function and provides them with a firmer basis in modern neurobiology. Furthermore, the encoding and reactivation of activity in mutually exciting Hebbian cell assemblies demonstrated here is believed to represent a fundamental mechanism of cognitive processing in the brain.
Modelling Spiking Neural Network from the Architecture Evaluation Perspective
Institute of Scientific and Technical Information of China (English)
Yu Ji; You-Hui Zhang; Wei-Min Zheng
2016-01-01
The brain-inspired spiking neural network (SNN) computing paradigm offers the potential for low-power and scalable computing, suited to many intelligent tasks that conventional computational systems find difficult. On the other hand, NoC (network-on-chips) based very large scale integration (VLSI) systems have been widely used to mimic neuro-biological architectures (including SNNs). This paper proposes an evaluation methodology for SNN applications from the aspect of micro-architecture. First, we extract accurate SNN models from existing simulators of neural systems. Second, a cycle-accurate NoC simulator is implemented to execute the aforementioned SNN applications to get timing and energy-consumption information. We believe this method not only benefits the exploration of NoC design space but also bridges the gap between applications (especially those from the neuroscientists' community) and neuromorphic hardware. Based on the method, we have evaluated some typical SNNs in terms of timing and energy. The method is valuable for the development of neuromorphic hardware and applications.
Comparison of two thermal spike models for ion-solid interaction
Energy Technology Data Exchange (ETDEWEB)
Szenes, G., E-mail: szenes@ludens.elte.h [Department of Materials Physics, Eoetvoes University, P.O. Box 32, H-1518 Budapest (Hungary)
2011-01-15
A comparative review of the inelastic thermal spike model (ITSM, Meftah et al., 1994) and the analytical thermal spike model (ATSM, Szenes, 1995) is given. The ITSM follows the formation of the ion-induced thermal spike based on the Fourier equation while the ATSM skips this stage and a final Gaussian temperature distribution is assumed. Each of the two models doubts the basic assumptions of the other. The ITSM rejects the Gaussian temperature distribution while according to the ATSM several thermophysical parameters used by the ITSM are irrelevant to the formation of the thermal spike and the equilibrium values are not valid under spike conditions. The essentially different conclusions of the models are discussed in connection with experiments performed in BaFe12O19, Al2O3, silica and high-Tc superconductors.
Pedroarena, Christine M
2011-12-01
Deep cerebellar nuclear neurons (DCNs) display characteristic electrical properties, including spontaneous spiking and the ability to discharge narrow spikes at high frequency. These properties are thought to be relevant to processing inhibitory Purkinje cell input and transferring well-timed signals to cerebellar targets. Yet, the underlying ionic mechanisms are not completely understood. BK and Kv3.1 potassium channels subserve similar functions in spike repolarization and fast firing in many neurons and are both highly expressed in DCNs. Here, their role in the abovementioned spiking characteristics was addressed using whole-cell recordings of large and small putative-glutamatergic DCNs. Selective BK channel block depolarized DCNs of both groups and increased spontaneous firing rate but scarcely affected evoked activity. After adjusting the membrane potential to control levels, the spike waveforms under BK channel block were indistinguishable from control ones, indicating no significant BK channel involvement in spike repolarization. The increased firing rate suggests that lack of DCN-BK channels may have contributed to the ataxic phenotype previously found in BK channel-deficient mice. On the other hand, block of Kv3.1 channels with low doses of 4-aminopyridine (20 μM) hindered spike repolarization and severely depressed evoked fast firing. Therefore, I propose that despite similar characteristics of BK and Kv3.1 channels, they play different roles in DCNs: BK channels control almost exclusively spontaneous firing rate, whereas DCN-Kv3.1 channels dominate the spike repolarization and enable fast firing. Interestingly, after Kv3.1 channel block, BK channels gained a role in spike repolarization, demonstrating how the different function of each of the two channels is determined in part by their co-expression and interplay.
Erfanian Saeedi, Nafise; Blamey, Peter J; Burkitt, Anthony N; Grayden, David B
2016-04-01
Pitch perception is important for understanding speech prosody, music perception, recognizing tones in tonal languages, and perceiving speech in noisy environments. The two principal pitch perception theories consider the place of maximum neural excitation along the auditory nerve and the temporal pattern of the auditory neurons' action potentials (spikes) as pitch cues. This paper describes a biophysical mechanism by which fine-structure temporal information can be extracted from the spikes generated at the auditory periphery. Deriving meaningful pitch-related information from spike times requires neural structures specialized in capturing synchronous or correlated activity from amongst neural events. The emergence of such pitch-processing neural mechanisms is described through a computational model of auditory processing. Simulation results show that a correlation-based, unsupervised, spike-based form of Hebbian learning can explain the development of neural structures required for recognizing the pitch of simple and complex tones, with or without the fundamental frequency. The temporal code is robust to variations in the spectral shape of the signal and thus can explain the phenomenon of pitch constancy.
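The temporal pitch cue discussed above, extracting a tone's periodicity from spike times, can be illustrated with an all-order inter-spike-interval histogram. This toy estimator is an assumed stand-in for the paper's learned neural mechanism; the function name, bin resolution, and input train are illustrative.

```python
from collections import Counter

def pitch_from_spikes(spike_times, resolution=1e-4):
    """Estimate pitch as the reciprocal of the most common interval
    between all spike pairs (an all-order ISI histogram). Because every
    harmonic of the period contributes intervals at its multiples, the
    shortest dominant interval corresponds to the perceived pitch."""
    intervals = Counter()
    for i, t0 in enumerate(spike_times):
        for t1 in spike_times[i + 1:]:
            intervals[round((t1 - t0) / resolution)] += 1
    # Highest count wins; ties break toward the shorter interval.
    best_bin, _ = max(intervals.items(), key=lambda kv: (kv[1], -kv[0]))
    return 1.0 / (best_bin * resolution)

# Spikes phase-locked to a 200 Hz tone (one spike per 5 ms cycle):
spikes = [k * 0.005 for k in range(20)]
print(pitch_from_spikes(spikes))
```

Because the estimate depends only on interval statistics, it is robust to changes in the signal's spectral shape, consistent with the pitch constancy noted in the abstract.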
Directory of Open Access Journals (Sweden)
Jason Jerome
2011-08-01
Neurons in the mammalian neocortex receive inputs from and communicate back to thousands of other neurons, creating complex spatiotemporal activity patterns. The experimental investigation of these parallel dynamic interactions has been limited due to the technical challenges of monitoring or manipulating neuronal activity at that level of complexity. Here we describe a new massively parallel photostimulation system that can be used to control action potential firing in in vitro brain slices with high spatial and temporal resolution while performing extracellular or intracellular electrophysiological measurements. The system uses Digital Light Processing (DLP) technology to generate 2-dimensional (2D) stimulus patterns with >780,000 independently controlled photostimulation sites that operate at high spatial (5.4 µm) and temporal (>13 kHz) resolution. Light is projected through the quartz-glass bottom of the perfusion chamber providing access to a large area (2.76 mm × 2.07 mm) of the slice preparation. This system has the unique capability to induce temporally precise action potential firing in large groups of neurons distributed over a wide area covering several cortical columns. Parallel photostimulation opens up new opportunities for the in vitro experimental investigation of spatiotemporal neuronal interactions at a broad range of anatomical scales.
Jerome, Jason; Foehring, Robert C; Armstrong, William E; Spain, William J; Heck, Detlef H
2011-01-01
Neurons in the mammalian neocortex receive inputs from and communicate back to thousands of other neurons, creating complex spatiotemporal activity patterns. The experimental investigation of these parallel dynamic interactions has been limited due to the technical challenges of monitoring or manipulating neuronal activity at that level of complexity. Here we describe a new massively parallel photostimulation system that can be used to control action potential firing in in vitro brain slices with high spatial and temporal resolution while performing extracellular or intracellular electrophysiological measurements. The system uses digital light processing technology to generate 2-dimensional (2D) stimulus patterns with >780,000 independently controlled photostimulation sites that operate at high spatial (5.4 μm) and temporal (>13 kHz) resolution. Light is projected through the quartz-glass bottom of the perfusion chamber providing access to a large area (2.76 mm × 2.07 mm) of the slice preparation. This system has the unique capability to induce temporally precise action potential firing in large groups of neurons distributed over a wide area covering several cortical columns. Parallel photostimulation opens up new opportunities for the in vitro experimental investigation of spatiotemporal neuronal interactions at a broad range of anatomical scales.
Perelman, Yevgeny; Ginosar, Ran
2007-01-01
A mixed-signal front-end processor for multichannel neuronal recording is described. It receives 12 differential-input channels of implanted recording electrodes. A programmable-cutoff high-pass filter (HPF) blocks DC and low-frequency input drift at about 1 Hz. The signals are band-split at about 200 Hz into low-frequency local field potential (LFP) and high-frequency spike data (SPK), which is band-limited by a programmable-cutoff LPF in a range of 8-13 kHz. Amplifier offsets are compensated by 5-bit calibration digital-to-analog converters (DACs). The SPK and LFP channels provide variable amplification rates of up to 5000 and 500, respectively. The analog signals are converted into 10-bit digital form and streamed out over a serial digital bus at up to 8 Mbps. A threshold filter suppresses inactive portions of the signal and emits only spike segments of programmable length. A prototype has been fabricated on a 0.35-µm CMOS process and tested successfully, demonstrating a 3-µV noise level. A special interface system, incorporating an embedded CPU core in a programmable logic device accompanied by real-time software, has been developed to allow connectivity to a computer host.
Directory of Open Access Journals (Sweden)
André Cyr
2014-07-01
We demonstrate the operant conditioning (OC) learning process within a basic bio-inspired robot controller paradigm, using an artificial spiking neural network (ASNN) with minimal component count as the artificial brain. In biological agents, OC results in behavioral changes that are learned from the consequences of previous actions, using progressive prediction adjustment triggered by reinforcers. In a robotics context, virtual and physical robots may benefit from a similar learning skill when facing unknown environments with no supervision. In this work, we demonstrate that a simple ASNN can efficiently realise many OC scenarios. The elementary learning kernel that we describe relies on a few critical neurons, synaptic links and the integration of habituation and spike-timing-dependent plasticity (STDP) as learning rules. Using four tasks of incremental complexity, our experimental results show that such a minimal neural component set may be sufficient to implement many OC procedures. Hence, with the described bio-inspired module, OC can be implemented in a wide range of robot controllers, including those with limited computational resources.
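The pair-based STDP rule referred to above can be illustrated with a minimal sketch; the exponential time windows are the standard textbook form, and all parameter values here are illustrative assumptions, not those of the paper:

```python
import numpy as np

def stdp_dw(delta_t, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Pair-based STDP weight change for a pre/post spike-time difference
    delta_t = t_post - t_pre (ms). Parameter values are illustrative."""
    if delta_t >= 0:      # pre before post -> potentiation
        return a_plus * np.exp(-delta_t / tau_plus)
    else:                 # post before pre -> depression
        return -a_minus * np.exp(delta_t / tau_minus)

# A synapse whose presynaptic spike repeatedly precedes the postsynaptic
# spike by 5 ms is strengthened; the reverse timing would weaken it.
w = 0.5
for _ in range(100):
    w += stdp_dw(5.0)
print(w)
```

In practice the weight would also be clipped to a bounded range; the sketch omits this for brevity.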
Introduction to spiking neural networks: Information processing, learning and applications.
Ponulak, Filip; Kasinski, Andrzej
2011-01-01
The concept that neural information is encoded in the firing rate of neurons has been the dominant paradigm in neurobiology for many years. This paradigm has also been adopted by the theory of artificial neural networks. Recent physiological experiments demonstrate, however, that in many parts of the nervous system, neural code is founded on the timing of individual action potentials. This finding has given rise to the emergence of a new class of neural models, called spiking neural networks. In this paper we summarize basic properties of spiking neurons and spiking networks. Our focus is, specifically, on models of spike-based information coding, synaptic plasticity and learning. We also survey real-life applications of spiking models. The paper is meant to be an introduction to spiking neural networks for scientists from various disciplines interested in spike-based neural processing.
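The simplest spiking model surveyed in such introductions is the leaky integrate-and-fire neuron; a forward-Euler sketch with illustrative parameter values (not taken from the paper) is:

```python
import numpy as np

def simulate_lif(i_ext, dt=0.1, tau=10.0, v_rest=-70.0, v_th=-55.0,
                 v_reset=-75.0, r=10.0):
    """Forward-Euler simulation of a leaky integrate-and-fire neuron.
    i_ext: input current (nA) sampled every dt ms; returns spike times (ms).
    All parameter values are illustrative."""
    v = v_rest
    spikes = []
    for k, i in enumerate(i_ext):
        dv = (-(v - v_rest) + r * i) / tau   # leak plus driving current
        v += dt * dv
        if v >= v_th:                         # threshold crossing: spike, reset
            spikes.append(k * dt)
            v = v_reset
    return spikes

# A constant suprathreshold current produces regular (tonic) firing.
spikes = simulate_lif(np.full(5000, 2.0))     # 500 ms of 2 nA input
print(len(spikes))
```

The spike itself is not modeled; only its time is recorded, which is exactly the reduction in dimensionality that makes this class of models attractive for large simulations.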
Morris, N P; Harris, S J; Henderson, Z
1999-01-01
The medial septum/diagonal band complex is composed predominantly of cholinergic and GABAergic neurons, and it projects to the hippocampal formation. A proportion of the GABAergic neurons contain parvalbumin, a calcium-binding protein that has previously been localized in fast-spiking, non-accommodating GABAergic neurons in the cerebral cortex and neostriatum. The aim of the present study was to determine whether parvalbumin is localized preferentially in a similar electrophysiological class of neuron in the medial septum/diagonal band complex. The study was carried out using in vitro intracellular recording, intracellular biocytin filling and parvalbumin immunocytochemistry. Three main classes of neurons were identified according to standard criteria: burst-firing, slow-firing and fast-firing neuronal populations. The fast-firing neurons were subdivided into two subpopulations based on whether or not they displayed accommodation. The fast-spiking, non-accommodating cells were furthermore found to be spontaneously active at resting potentials, and to possess action potentials of significantly (P studies showing parvalbumin to be localized solely in GABAergic neurons in the medial septum/diagonal band complex. In conclusion, these findings suggest the presence of a previously uncharacterized population of neurons in the medial septum/diagonal band complex that generate high-frequency, non-adaptive discharge. This property correlates with the localization of parvalbumin in these neurons, which suggests that parvalbumin fulfils the same role in the medial septum/diagonal band complex that it does in other parts of the brain. The fast-spiking neurons in the medial septum/diagonal band complex may play an essential role in the GABAergic influence of the septum on the hippocampal formation.
Efficient transmission of subthreshold signals in complex networks of spiking neurons
Torres, Joaquin J; Marro, J
2014-01-01
We investigate the efficient transmission and processing of weak (subthreshold) signals in a realistic neural medium in the presence of different levels of underlying noise. Assuming Hebbian weights for maximal synaptic conductances, which naturally balance the network between excitatory and inhibitory synapses, and considering short-term synaptic plasticity affecting such conductances, we found different dynamical phases in the system. These include a memory phase in which populations of neurons remain synchronized, an oscillatory phase in which transitions between different synchronized populations of neurons appear, and an asynchronous or noisy phase. When a weak stimulus is applied to each neuron and the level of noise in the medium is increased, we find efficient transmission of such stimuli around the transition and critical points separating the different phases, and therefore at well-defined and distinct levels of stochasticity in the system. We prove that this intriguing phenomenon is quite robust...
Song, Dong; Robinson, Brian S; Hampson, Robert E; Marmarelis, Vasilis Z; Deadwyler, Sam A; Berger, Theodore W
2015-01-01
In order to build hippocampal prostheses for restoring memory functions, we build multi-input, multi-output (MIMO) nonlinear dynamical models of the human hippocampus. Spike trains are recorded from the hippocampal CA3 and CA1 regions of epileptic patients performing a memory-dependent delayed match-to-sample task. Using CA3 and CA1 spike trains as inputs and outputs respectively, second-order sparse generalized Laguerre-Volterra models are estimated with group lasso and local coordinate descent methods to capture the nonlinear dynamics underlying the spike train transformations. These models can accurately predict the CA1 spike trains based on the ongoing CA3 spike trains and thus will serve as the computational basis of the hippocampal memory prosthesis.
Stochastic Variational Learning in Recurrent Spiking Networks
Directory of Open Access Journals (Sweden)
Danilo Jimenez Rezende
2014-04-01
The ability to learn and perform statistical inference with biologically plausible recurrent networks of spiking neurons is an important step towards understanding perception and reasoning. Here we derive and investigate a new learning rule for recurrent spiking networks with hidden neurons, combining principles from variational learning and reinforcement learning. Our network defines a generative model over spike train histories and the derived learning rule has the form of a local Spike Timing Dependent Plasticity rule modulated by global factors (neuromodulators) conveying information about "novelty" on a statistically rigorous ground. Simulations show that our model is able to learn both stationary and non-stationary patterns of spike trains. We also propose one experiment that could potentially be performed with animals in order to test the dynamics of the predicted novelty signal.
Stochastic variational learning in recurrent spiking networks.
Jimenez Rezende, Danilo; Gerstner, Wulfram
2014-01-01
The ability to learn and perform statistical inference with biologically plausible recurrent networks of spiking neurons is an important step toward understanding perception and reasoning. Here we derive and investigate a new learning rule for recurrent spiking networks with hidden neurons, combining principles from variational learning and reinforcement learning. Our network defines a generative model over spike train histories and the derived learning rule has the form of a local Spike Timing Dependent Plasticity rule modulated by global factors (neuromodulators) conveying information about "novelty" on a statistically rigorous ground. Simulations show that our model is able to learn both stationary and non-stationary patterns of spike trains. We also propose one experiment that could potentially be performed with animals in order to test the dynamics of the predicted novelty signal.
Spiking patterns of a hippocampus model in electric fields
Institute of Scientific and Technical Information of China (English)
Men Cong; Wang Jiang; Qin Ying-Mei; Wei Xi-Le; Che Yan-Qiu; Deng Bin
2011-01-01
We develop a model of CA3 neurons embedded in a resistive array to mimic the effects of electric fields from a new perspective. Effects of DC and sinusoidal electric fields on firing patterns in CA3 neurons are investigated in this study. The firing patterns can be switched from no firing to bursting, or from bursting to fast periodic firing, as the DC electric field intensity increases. It is also found that the firing activities are sensitive to the frequency and amplitude of the sinusoidal electric field. Different phase-locking states and chaotic firing regions are observed in the parameter space of frequency and amplitude. These findings are qualitatively in accordance with the results of relevant experimental and numerical studies. It is implied that external or endogenous electric fields can modulate the neural code in the brain. Furthermore, these results may help to develop control strategies based on electric fields to treat neural diseases such as epilepsy.
Rahmati, Vahid; Kirmse, Knut; Marković, Dimitrije; Holthoff, Knut; Kiebel, Stefan J
2016-02-01
Calcium imaging has been used as a promising technique to monitor the dynamic activity of neuronal populations. However, the calcium trace is temporally smeared which restricts the extraction of quantities of interest such as spike trains of individual neurons. To address this issue, spike reconstruction algorithms have been introduced. One limitation of such reconstructions is that the underlying models are not informed about the biophysics of spike and burst generations. Such existing prior knowledge might be useful for constraining the possible solutions of spikes. Here we describe, in a novel Bayesian approach, how principled knowledge about neuronal dynamics can be employed to infer biophysical variables and parameters from fluorescence traces. By using both synthetic and in vitro recorded fluorescence traces, we demonstrate that the new approach is able to reconstruct different repetitive spiking and/or bursting patterns with accurate single spike resolution. Furthermore, we show that the high inference precision of the new approach is preserved even if the fluorescence trace is rather noisy or if the fluorescence transients show slow rise kinetics lasting several hundred milliseconds, and inhomogeneous rise and decay times. In addition, we discuss the use of the new approach for inferring parameter changes, e.g. due to a pharmacological intervention, as well as for inferring complex characteristics of immature neuronal circuits.
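As a far simpler illustration than the Bayesian approach described above, spike times can be recovered from a clean synthetic fluorescence trace by first-order deconvolution of the exponential calcium transient; the decay constant, noise level, and detection threshold below are all assumptions of this sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
gamma = 0.95                          # assumed per-step decay of the transient

# Synthetic ground truth: 20 spikes scattered over 1000 time steps.
true_spikes = np.zeros(1000)
true_spikes[rng.choice(1000, 20, replace=False)] = 1.0

# Synthetic fluorescence: spikes filtered by an exponential kernel, plus noise.
c = np.zeros(1000)
for t in range(1, 1000):
    c[t] = gamma * c[t - 1] + true_spikes[t]
f = c + rng.normal(0.0, 0.05, 1000)

# First-order deconvolution: s_t ~ f_t - gamma * f_{t-1}, then threshold.
s = np.maximum(f[1:] - gamma * f[:-1], 0.0)
detected = np.flatnonzero(s > 0.5) + 1
hits = np.intersect1d(detected, np.flatnonzero(true_spikes))
print(len(hits), "of", int(true_spikes.sum()), "spikes recovered")
```

This naive scheme works only because the synthetic trace is clean and the kinetics are known exactly; the slow, inhomogeneous rise times and noise levels discussed in the abstract are precisely what break it and motivate the model-based inference.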
Imprecise correlated activity in self-organizing maps of spiking neurons.
Veredas, Francisco J; Mesa, Héctor; Martínez, Luis A
2008-08-01
How neurons communicate with each other to form effective circuits supporting the functional features of the nervous system is currently under debate. While many experts argue for the existence of sparse neural codes based either on oscillations, neural assemblies or synchronous fire chains, other studies defend the necessity of precise inter-neuronal communication to arrange efficient neural codes. As has been demonstrated in neurophysiological studies of the visual pathway between the retina and the visual cortex of mammals, the correlated activity among neurons becomes less precise as a direct consequence of an increase in the variability of synaptic transmission latencies. Although it is difficult to measure the influence of this reduction of correlated-firing precision on the self-organization of cortical maps, it does not preclude the emergence of receptive fields and orientation selectivity maps. This is in close agreement with authors who consider that codes for neural communication are sparse. In this article, integrate-and-fire neural networks are simulated to analyze how changes in the precision of correlated firing among neurons affect self-organization. We observe that, by keeping these changes within biologically realistic ranges, orientation selectivity maps can emerge, while the features of neuronal receptive fields are significantly affected.
Phase transitions and self-organized criticality in networks of stochastic spiking neurons
Brochini, Ludmila; de Andrade Costa, Ariadne; Abadi, Miguel; Roque, Antônio C.; Stolfi, Jorge; Kinouchi, Osame
2016-11-01
Phase transitions and critical behavior are crucial issues both in theoretical and experimental neuroscience. We report analytic and computational results about phase transitions and self-organized criticality (SOC) in networks with general stochastic neurons. The stochastic neuron has a firing probability given by a smooth monotonic function Φ(V) of the membrane potential V, rather than a sharp firing threshold. We find that such networks can operate in several dynamic regimes (phases) depending on the average synaptic weight and the shape of the firing function Φ. In particular, we encounter both continuous and discontinuous phase transitions to absorbing states. At the continuous transition critical boundary, neuronal avalanches occur whose distributions of size and duration are given by power laws, as observed in biological neural networks. We also propose and test a new mechanism to produce SOC: the use of dynamic neuronal gains - a form of short-term plasticity probably located at the axon initial segment (AIS) - instead of depressing synapses at the dendrites (as previously studied in the literature). The new self-organization mechanism produces a slightly supercritical state, which we call SOSC, in accord with some intuitions of Alan Turing.
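A minimal discrete-time sketch of such a network, assuming a saturating piecewise-linear firing function Φ(V) and illustrative parameters (this is not the paper's exact model, only the general construction it describes):

```python
import numpy as np

rng = np.random.default_rng(0)

def phi(v, gamma=1.0, v_th=1.0):
    """Smooth monotonic firing function: the probability of spiking in one
    time step. A saturating piecewise-linear Phi is one common choice."""
    return np.clip(gamma * (v - v_th), 0.0, 1.0)

def simulate(n=200, steps=500, w=2.0, i_ext=0.6, mu=0.5):
    """Each neuron fires with probability Phi(V); firing resets V to zero,
    otherwise V leaks (factor mu) and integrates external plus recurrent
    input proportional to the fraction of neurons that fired."""
    v = rng.uniform(0.0, 1.0, n)
    activity = []
    for _ in range(steps):
        fired = rng.random(n) < phi(v)
        frac = fired.mean()                    # network activity this step
        activity.append(frac)
        v = np.where(fired, 0.0, mu * v + i_ext + w * frac)
    return np.array(activity)

rho = simulate()
print(rho[-100:].mean())    # stationary mean activity for these parameters
```

Sweeping the mean synaptic weight `w` in such a sketch is how one maps out the transition between the quiescent and active phases that the abstract analyzes.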
Characteristics of Period-Adding Bursting Bifurcation Without Chaos in the Chay Neuron Model
Institute of Scientific and Technical Information of China (English)
YANG Zhuo-Qin; LU Qi-Shao
2004-01-01
A period-adding bursting sequence without bursting-chaos in the Chay neuron model is studied by bifurcation analysis. The genesis of each periodic bursting is separately evoked by the corresponding periodic spiking patterns through two period-doubling bifurcations, except for the period-1 bursting occurring via a Hopf bifurcation. Hence, it is concluded that this period-adding bursting bifurcation without chaos has, in essence, a compound bifurcation structure closely related to period-doubling bifurcations of periodic spiking.
Decision making under uncertainty in a spiking neural network model of the basal ganglia.
Héricé, Charlotte; Khalil, Radwa; Moftah, Marie; Boraud, Thomas; Guthrie, Martin; Garenne, André
2016-12-01
The mechanisms of decision-making and action selection are generally thought to be under the control of parallel cortico-subcortical loops connecting back to distinct areas of cortex through the basal ganglia and processing motor, cognitive and limbic modalities of decision-making. We have used these properties to develop and extend a connectionist model at a spiking neuron level based on a previous rate model approach. This model is demonstrated on decision-making tasks that have been studied in primates and the electrophysiology interpreted to show that the decision is made in two steps. To model this, we have used two parallel loops, each of which performs decision-making based on interactions between positive and negative feedback pathways. This model is able to perform two-level decision-making as in primates. We show here that, before learning, synaptic noise is sufficient to drive the decision-making process and that, after learning, the decision is based on the choice that has proven most likely to be rewarded. The model is then submitted to lesion tests, reversal learning and extinction protocols. We show that, under these conditions, it behaves in a consistent manner and provides predictions in accordance with observed experimental data.
Spike-based population coding and working memory.
Directory of Open Access Journals (Sweden)
Martin Boerlin
2011-02-01
Compelling behavioral evidence suggests that humans can make optimal decisions despite the uncertainty inherent in perceptual or motor tasks. A key question in neuroscience is how populations of spiking neurons can implement such probabilistic computations. In this article, we develop a comprehensive framework for optimal, spike-based sensory integration and working memory in a dynamic environment. We propose that probability distributions are inferred spike-per-spike in recurrently connected networks of integrate-and-fire neurons. As a result, these networks can combine sensory cues optimally, track the state of a time-varying stimulus and memorize accumulated evidence over periods much longer than the time constant of single neurons. Importantly, we propose that population responses and persistent working memory states represent entire probability distributions and not only single stimulus values. These memories are reflected by sustained, asynchronous patterns of activity which make relevant information available to downstream neurons within their short time window of integration. Model neurons act as predictive encoders, only firing spikes which account for new information that has not yet been signaled. Thus, spike times signal deterministically a prediction error, contrary to rate codes in which spike times are considered to be random samples of an underlying firing rate. As a consequence of this coding scheme, a multitude of spike patterns can reliably encode the same information. This results in weakly correlated, Poisson-like spike trains that are sensitive to initial conditions but robust to even high levels of external neural noise. This spike train variability reproduces the one observed in cortical sensory spike trains, but cannot be equated to noise. On the contrary, it is a consequence of optimal spike-based inference. In contrast, we show that rate-based models perform poorly when implemented with stochastically spiking neurons.
Electrophysiological properties of inferior olive neurons: A compartmental model.
Schweighofer, N; Doya, K; Kawato, M
1999-08-01
As a step in exploring the functions of the inferior olive, we constructed a biophysical model of the olivary neurons to examine their unique electrophysiological properties. The model consists of two compartments to represent the known distribution of ionic currents across the cell membrane, as well as the dendritic location of the gap junctions and synaptic inputs. The somatic compartment includes a low-threshold calcium current (I(Ca_l)), an anomalous inward rectifier current (I(h)), a sodium current (I(Na)), and a delayed rectifier potassium current (I(K_dr)). The dendritic compartment contains a high-threshold calcium current (I(Ca_h)), a calcium-dependent potassium current (I(K_Ca)), and a current flowing into other cells through electrical coupling (I(c)). First, kinetic parameters for these currents were set according to previously reported experimental data. Next, the remaining free parameters were determined to account for both static and spiking properties of single olivary neurons in vitro. We then performed a series of simulated pharmacological experiments using bifurcation analysis and extensive two-parameter searches. Consistent with previous studies, we quantitatively demonstrated the major role of I(Ca_l) in spiking excitability. In addition, I(h) had an important modulatory role in the spike generation and period of oscillations, as previously suggested by Bal and McCormick. Finally, we investigated the role of electrical coupling in two coupled spiking cells. Depending on the coupling strength, the hyperpolarization level, and the I(Ca_l) and I(h) modulation, the coupled cells had four different synchronization modes: the cells could be in-phase, phase-shifted, or anti-phase or could exhibit a complex desynchronized spiking mode. Hence these simulation results support the counterintuitive hypothesis that electrical coupling can desynchronize coupled inferior olive cells.
Meng, Yan; Jin, Yaochu; Yin, Jun
2011-12-01
Spiking neural networks (SNNs) are considered to be computationally more powerful than conventional NNs. However, the capability of SNNs in solving complex real-world problems remains to be demonstrated. In this paper, we propose a substantial extension of the Bienenstock, Cooper, and Munro (BCM) SNN model, in which the plasticity parameters are regulated by a gene regulatory network (GRN). Meanwhile, the dynamics of the GRN is dependent on the activation levels of the BCM neurons. We term the whole model "GRN-BCM." To demonstrate its computational power, we first compare the GRN-BCM with a standard BCM, a hidden Markov model, and a reservoir computing model on a complex time series classification problem. Simulation results indicate that the GRN-BCM significantly outperforms the compared models. The GRN-BCM is then applied to two widely used datasets for human behavior recognition. Comparative results on the two datasets suggest that the GRN-BCM is very promising for human behavior recognition, although the current experiments are still limited to the scenarios in which only one object is moving in the considered video sequences.
Are neuronal networks that vicious? Or only their models?
Cessac, B
2007-01-01
We present a mathematical analysis of a network of integrate-and-fire neurons, taking into account the realistic fact that the spike time is only known within some finite precision. This leads us to propose a model where spikes are effective at times that are multiples of a characteristic time scale δ, where δ can be mathematically arbitrarily small. We give a complete mathematical characterization of the model dynamics for conductance-based integrate-and-fire models and obtain the following results. The asymptotic dynamics is composed of finitely many periodic orbits, whose number and period can be arbitrarily large and diverge in a region of parameter space traditionally called the "edge of chaos", a notion mathematically well defined in the present paper. Furthermore, except at the edge of chaos, there is a one-to-one correspondence between the membrane potential trajectories and the spike raster plot. This shows that the neural code is entirely "in the spikes" in this case. As a key ...
Thalamic neuron models encode stimulus information by burst-size modulation
Directory of Open Access Journals (Sweden)
Daniel Henry Elijah
2015-09-01
Thalamic neurons have been long assumed to fire in tonic mode during perceptive states, and in burst mode during sleep and unconsciousness. However, recent evidence suggests that bursts may also be relevant in the encoding of sensory information. Here we explore the neural code of such thalamic bursts. In order to assess whether the burst code is generic or whether it depends on the detailed properties of each bursting neuron, we analyzed two neuron models incorporating different levels of biological detail. One of the models contained no information of the biophysical processes entailed in spike generation, and described neuron activity at a phenomenological level. The second model represented the evolution of the individual ionic conductances involved in spiking and bursting, and required a large number of parameters. We analyzed the models' input selectivity using reverse correlation methods and information theory. We found that n-spike bursts from both models transmit information by modulating their spike count in response to changes to instantaneous input features, such as slope, phase, amplitude, etc. The stimulus feature that is most efficiently encoded by bursts, however, need not coincide with one of such classical features. We therefore searched for the optimal feature among all those that could be expressed as a linear transformation of the time-dependent input current. We found that bursting neurons transmitted 6 times more information about such more general features. The relevant events in the stimulus were located in a time window spanning ~100 ms before and ~20 ms after burst onset. Most importantly, the neural code employed by the simple and the biologically realistic models was largely the same, implying that the simple thalamic neuron model contains the essential ingredients that account for the computational properties of the thalamic burst code. Thus, our results suggest the n-spike burst code is a general property of thalamic neurons.
Thalamic neuron models encode stimulus information by burst-size modulation.
Elijah, Daniel H; Samengo, Inés; Montemurro, Marcelo A
2015-01-01
Thalamic neurons have been long assumed to fire in tonic mode during perceptive states, and in burst mode during sleep and unconsciousness. However, recent evidence suggests that bursts may also be relevant in the encoding of sensory information. Here, we explore the neural code of such thalamic bursts. In order to assess whether the burst code is generic or whether it depends on the detailed properties of each bursting neuron, we analyzed two neuron models incorporating different levels of biological detail. One of the models contained no information of the biophysical processes entailed in spike generation, and described neuron activity at a phenomenological level. The second model represented the evolution of the individual ionic conductances involved in spiking and bursting, and required a large number of parameters. We analyzed the models' input selectivity using reverse correlation methods and information theory. We found that n-spike bursts from both models transmit information by modulating their spike count in response to changes to instantaneous input features, such as slope, phase, amplitude, etc. The stimulus feature that is most efficiently encoded by bursts, however, need not coincide with one of such classical features. We therefore searched for the optimal feature among all those that could be expressed as a linear transformation of the time-dependent input current. We found that bursting neurons transmitted 6 times more information about such more general features. The relevant events in the stimulus were located in a time window spanning ~100 ms before and ~20 ms after burst onset. Most importantly, the neural code employed by the simple and the biologically realistic models was largely the same, implying that the simple thalamic neuron model contains the essential ingredients that account for the computational properties of the thalamic burst code. Thus, our results suggest the n-spike burst code is a general property of thalamic neurons.
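The reverse-correlation analysis mentioned above rests on the spike-triggered average: the mean stimulus segment preceding each spike. A toy sketch, with an assumed exponential filter and threshold standing in for the neuron models:

```python
import numpy as np

rng = np.random.default_rng(2)

# White-noise stimulus driving a toy neuron that spikes whenever a filtered
# copy of the stimulus crosses a threshold (purely illustrative model).
stim = rng.normal(size=20000)
kernel = np.exp(-np.arange(50) / 10.0)              # the neuron's "true" filter
drive = np.convolve(stim, kernel, mode="full")[:len(stim)]
spike_times = np.flatnonzero(drive > 6.0)

# Spike-triggered average: mean stimulus window preceding each spike.
window = 50
valid = spike_times[spike_times >= window]
sta = np.mean([stim[t - window:t] for t in valid], axis=0)
print(sta.shape)
```

Because the toy neuron weights recent inputs most strongly, the tail of the recovered `sta` (the samples just before a spike) carries the largest values, approximating the time-reversed filter.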
The Effects of Dynamical Synapses on Firing Rate Activity: A Spiking Neural Network Model.
Khalil, Radwa; Moftah, Marie Z; Moustafa, Ahmed A
2017-09-18
Accumulating evidence relates the fine-tuning of synaptic maturation and regulation of neural network activity to several key factors, including GABAA signaling and a lateral spread length between neighboring neurons (i.e. local connectivity). Furthermore, a number of studies consider Short-Term synaptic Plasticity (STP) as an essential element in the instant modification of synaptic efficacy in the neuronal network and in modulating responses to sustained ranges of external Poisson Input Frequency (IF). Nevertheless, evaluating the firing activity in response to the dynamical interaction between STP (triggered by ranges of IF) and these key parameters in vitro remains elusive. Therefore, we designed a Spiking Neural Network (SNN) model in which we incorporated the following parameters: local density of arbor essences and a lateral spread length between neighboring neurons. We also created several network scenarios based on these key parameters. Then, we implemented two classes of STP: (1) Short-Term synaptic Depression (STD), and (2) Short-Term synaptic Facilitation (STF). Each class has two differential forms based on the parametric value of its synaptic time constant (either for depressing or facilitating synapses). Lastly, we compared the neural firing responses before and after the treatment with STP. We found that dynamical synapses (STP) have a critical differential role in evaluating and modulating the firing rate activity in each network scenario. Moreover, we investigated the impact of changing the balance between excitation (E)/inhibition (I) on stabilizing this firing activity. This article is protected by copyright. All rights reserved.
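Short-term depression and facilitation of the kind the model incorporates can be sketched with a Tsodyks-Markram-style synapse; the time constants and release-probability increment below are illustrative assumptions, not the paper's values:

```python
import numpy as np

def stp_response(spike_times, u_inc=0.2, tau_rec=800.0, tau_fac=50.0):
    """Tsodyks-Markram-style synapse: depression via depletion of resources
    (x) and facilitation via the release probability (u). spike_times in ms;
    returns the relative PSP amplitude evoked by each spike."""
    x, u = 1.0, 0.0
    last_t = None
    amps = []
    for t in spike_times:
        if last_t is not None:
            dt = t - last_t
            x = 1.0 - (1.0 - x) * np.exp(-dt / tau_rec)   # resources recover
            u = u * np.exp(-dt / tau_fac)                  # facilitation decays
        u = u + u_inc * (1.0 - u)    # each spike raises release probability
        amps.append(u * x)           # released fraction sets PSP amplitude
        x = x * (1.0 - u)            # and depletes the resource pool
        last_t = t
    return amps

# For a 20-Hz train, facilitation dominates early and depression later.
amps = stp_response(np.arange(0, 500, 50.0))
```

Changing `tau_rec` versus `tau_fac` shifts the synapse between depressing and facilitating regimes, which is the distinction between the STD and STF classes described in the abstract.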
Lindahl, Mikael; Hellgren Kotaleski, Jeanette
2016-01-01
The basal ganglia are a crucial brain system for behavioral selection, and their function is disturbed in Parkinson's disease (PD), where neurons exhibit inappropriate synchronization and oscillations. We present a spiking neural model of basal ganglia including plausible details on synaptic dynamics, connectivity patterns, neuron behavior, and dopamine effects. Recordings of neuronal activity in the subthalamic nucleus and Type A (TA; arkypallidal) and Type I (TI; prototypical) neurons in globus pallidus externa were used to validate the model. Simulation experiments predict that both local inhibition in striatum and the existence of an indirect pathway are important for basal ganglia to function properly over a large range of cortical drives. The dopamine depletion-induced increase of AMPA efficacy in corticostriatal synapses to medium spiny neurons (MSNs) with dopamine receptor D2 synapses (CTX-MSN D2) and the reduction of MSN lateral connectivity (MSN-MSN) were found to contribute significantly to the enhanced synchrony and oscillations seen in PD. Additionally, reversing the dopamine depletion-induced changes to CTX-MSN D1, CTX-MSN D2, TA-MSN, and MSN-MSN couplings could improve or restore basal ganglia action selection ability. In summary, we found multiple changes of parameters for synaptic efficacy and neural excitability that could improve action selection ability and at the same time reduce oscillations. Identification of such targets could potentially generate ideas for treatments of PD and increase our understanding of the relation between network dynamics and network function.
The local field potential reflects surplus spike synchrony
DEFF Research Database (Denmark)
Denker, Michael; Roux, Sébastien; Lindén, Henrik;
2011-01-01
While oscillations of the local field potential (LFP) are commonly attributed to the synchronization of neuronal firing rates on the same time scale, their relationship to coincident spiking in the millisecond range is unknown. Here, we present experimental evidence to reconcile the notions of synchrony at the level of spiking and at the mesoscopic scale. We demonstrate that coincident spikes are better phase locked to the LFP than predicted by the locking of the individual spikes, but only in time intervals of significant spike synchrony that cannot be explained on the basis of firing rates. This effect is enhanced in periods of large LFP amplitudes. A quantitative model explains the LFP dynamics by the orchestrated spiking activity in neuronal groups that contribute the observed surplus synchrony. From the correlation analysis, we infer that neurons participate in different constellations…
A modeling approach on why simple central pattern generators are built of irregular neurons.
Reyes, Marcelo Bussotti; Carelli, Pedro Valadão; Sartorelli, José Carlos; Pinto, Reynaldo Daniel
2015-01-01
The crustacean pyloric Central Pattern Generator (CPG) is a nervous circuit that endogenously provides periodic motor patterns. Even after about 40 years of intensive studies, the rhythm genesis is still not rigorously understood in this CPG, mainly because it is made of neurons with irregular intrinsic activity. Using mathematical models, we addressed the question of whether a network of irregularly behaving elements can generate periodic oscillations, and we show some advantages of using non-periodic neurons whose intrinsic behavior lies at the transition from bursting to tonic spiking (as found in biological pyloric CPGs) as building components. We studied two- and three-neuron model CPGs built either with Hindmarsh-Rose or with conductance-based Hodgkin-Huxley-like model neurons. By changing a model's parameter we could span the neuron's intrinsic dynamical behavior from slow periodic bursting to fast tonic spiking, passing through a transition where irregular bursting was observed. A two-neuron CPG, a half-center oscillator (HCO), was obtained for each intrinsic behavior of the neurons by coupling them with mutual symmetric synaptic inhibition. Most of these HCOs presented regular antiphasic bursting activity, and the change in bursting frequency was studied as a function of the inhibitory synaptic strength. Among all HCOs, those made of intrinsically irregular neurons presented a wider burst frequency range while keeping reliable, regular oscillatory (bursting) behavior. HCOs of periodic neurons tended either to resist changes in behavior under synaptic strength variations (slow periodic burster neurons) or to be unable to produce a physiologically meaningful rhythm (fast tonic spiking neurons). Moreover, three-neuron CPGs with connectivity and output similar to those of the pyloric CPG presented the same results.
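The intrinsic regimes spanned above (quiescence, slow bursting, tonic spiking) can be reproduced with the Hindmarsh-Rose equations used in the study by varying the injected drive. A minimal forward-Euler sketch; the parameter values, time step, and spike-detection threshold are illustrative assumptions:

```python
import numpy as np

def hindmarsh_rose(I_ext, r=0.006, T=1000.0, dt=0.005):
    """Forward-Euler integration of the Hindmarsh-Rose neuron with the
    standard parameters a=1, b=3, c=1, d=5, s=4, x_R=-1.6.
    Returns the membrane trace x(t)."""
    n = int(T / dt)
    x = -1.6
    y = 1.0 - 5.0 * x * x     # start near the resting nullclines
    z = 0.0
    xs = np.empty(n)
    for i in range(n):
        dx = y - x**3 + 3.0 * x**2 - z + I_ext
        dy = 1.0 - 5.0 * x**2 - y
        dz = r * (4.0 * (x + 1.6) - z)
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        xs[i] = x
    return xs

def spike_count(xs, thresh=1.0):
    """Count upward crossings of the spike threshold."""
    above = xs > thresh
    return int(np.sum(~above[:-1] & above[1:]))
```

At I_ext = 0 this unit rests quietly, while I_ext ≈ 3 produces bursts of spikes; coupling two such units with mutual symmetric inhibition would yield the half-center oscillator discussed above (not shown here).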
Ariav, Gal; Polsky, Alon; Schiller, Jackie
2003-08-27
The ability of cortical neurons to perform temporally accurate computations has been shown to be important for encoding of information in the cortex; however, cortical neurons are expected to be imprecise temporal encoders because of the stochastic nature of synaptic transmission and ion channel gating, dendritic filtering, and background synaptic noise. Here we show for the first time that fast local spikes in basal dendrites can serve to improve the temporal precision of neuronal output. Integration of coactivated, spatially distributed synaptic inputs produces temporally imprecise output action potentials within a time window of several milliseconds. In contrast, integration of closely spaced basal inputs initiates local dendritic spikes that amplify and sharpen the summed somatic potential. In turn, these fast basal spikes allow precise timing of output action potentials with submillisecond temporal jitter over a wide range of activation intensities and background synaptic noise. Our findings indicate that fast spikes initiated in individual basal dendrites can serve as precise "timers" of output action potentials in various network activity states and thus may contribute to temporal coding in the cortex.
A learning-enabled neuron array IC based upon transistor channel models of biological phenomena.
Brink, S; Nease, S; Hasler, P; Ramakrishnan, S; Wunderlich, R; Basu, A; Degnan, B
2013-02-01
We present a single-chip array of 100 biologically-based electronic neuron models interconnected to each other and the outside environment through 30,000 synapses. The chip was fabricated in a standard 350 nm CMOS IC process. Our approach used dense circuit models of synaptic behavior, including biological computation and learning, as well as transistor channel models. We use Address-Event Representation (AER) spike communication for inputs and outputs to this IC. We present the IC architecture and infrastructure, including IC chip, configuration tools, and testing platform. We present measurement of small network of neurons, measurement of STDP neuron dynamics, and measurement from a compiled spiking neuron WTA topology, all compiled into this IC.
Brink, S; Nease, S; Hasler, P
2013-09-01
Results are presented from several spiking network experiments performed on a novel neuromorphic integrated circuit. The networks are discussed in terms of their computational significance, which includes applications such as arbitrary spatiotemporal pattern generation and recognition, winner-take-all competition, stable generation of rhythmic outputs, and volatile memory. Analogies to the behavior of real biological neural systems are also noted. The alternatives for implementing the same computations are discussed and compared from a computational efficiency standpoint, with the conclusion that implementing neural networks on neuromorphic hardware is significantly more power efficient than numerical integration of model equations on traditional digital hardware.
Detection and assessment of near-zero delays in neuronal spiking activity.
Schneider, G; Nikolić, D
2006-04-15
Cross-correlation histograms (CCHs) have been widely used to study the temporal relationship between pairwise recordings of neuronal signals. One interesting parameter of a CCH is the time position of the central peak which indicates delays between signals. In order to study the potential relevance of these delays which can be as small as 1 ms, it is necessary to measure them with high precision. We present a method for the estimation of the central peak's position that is based on fitting a cosine function to the CCH and show that the precision of this estimate can be tracked analytically. We validate the resulting formula by simulations and by the analysis of a sample dataset obtained from cat visual cortex. The results indicate that the time position of the center peak can be estimated with submillisecond precision. The formula allows one also to develop a test of statistical significance for differences between two sets of measurements.
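The core idea above, fitting a cosine around the CCH's central peak and reading the delay off its phase, can be sketched with an ordinary least-squares fit, since a + b·cos(ω(t − d)) is linear in (a, b·cos ωd, b·sin ωd). This is a schematic reconstruction under an assumed known oscillation period, not the authors' exact estimator:

```python
import numpy as np

def cch_peak_delay(lags_ms, counts, period_ms=25.0):
    """Estimate the time position of a CCH's central peak by fitting
    a + b*cos(omega*(t - delay)) via linear least squares."""
    w = 2 * np.pi / period_ms
    # Expand the cosine: counts ~ a + p*cos(w t) + q*sin(w t),
    # with p = b*cos(w*delay) and q = b*sin(w*delay).
    A = np.column_stack([np.ones_like(lags_ms),
                         np.cos(w * lags_ms),
                         np.sin(w * lags_ms)])
    a, p, q = np.linalg.lstsq(A, counts, rcond=None)[0]
    return np.arctan2(q, p) / w   # delay in ms
```

On a synthetic CCH built from a 25 ms oscillation shifted by 1.5 ms, the fit recovers the delay; the paper's contribution is the analytical precision formula and the significance test built on top of such an estimate.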
Borisyuk, Roman; Al Azad, Abul Kalam; Conte, Deborah; Roberts, Alan; Soffe, Stephen R
2014-01-01
Relating structure and function of neuronal circuits is a challenging problem. It requires demonstrating how dynamical patterns of spiking activity lead to functions like cognitive behaviour and identifying the neurons and connections that lead to appropriate activity of a circuit. We apply a "developmental approach" to define the connectome of a simple nervous system, where connections between neurons are not prescribed but appear as a result of neuron growth. A gradient based mathematical model of two-dimensional axon growth from rows of undifferentiated neurons is derived for the different types of neurons in the brainstem and spinal cord of young tadpoles of the frog Xenopus. Model parameters define a two-dimensional CNS growth environment with three gradient cues and the specific responsiveness of the axons of each neuron type to these cues. The model is described by a nonlinear system of three difference equations; it includes a random variable, and takes specific neuron characteristics into account. Anatomical measurements are first used to position cell bodies in rows and define axon origins. Then a generalization procedure allows information on the axons of individual neurons from small anatomical datasets to be used to generate larger artificial datasets. To specify parameters in the axon growth model we use a stochastic optimization procedure, derive a cost function and find the optimal parameters for each type of neuron. Our biologically realistic model of axon growth starts from axon outgrowth from the cell body and generates multiple axons for each different neuron type with statistical properties matching those of real axons. We illustrate how the axon growth model works for neurons with axons which grow to the same and the opposite side of the CNS. We then show how, by adding a simple specification for dendrite morphology, our model "developmental approach" allows us to generate biologically-realistic connectomes.
Komin, Niko; Moein, Mahsa; Ellisman, Mark H; Skupin, Alexander
2015-01-01
Changes in the cytosolic Ca(2+) concentration ([Ca(2+)]i) are the most predominant active signaling mechanism in astrocytes; they can modulate neuronal activity and are assumed to influence neuronal plasticity. Although Ca(2+) signaling in astrocytes has been intensively studied in the past, our understanding of the signaling mechanism and its impact at the tissue level is still incomplete. Here we revisit our previously published data on the strong temperature dependence of Ca(2+) signals in both cultured primary astrocytes and astrocytes in acute brain slices of mice. We apply multiscale modeling to test the hypothesis that the temperature-dependent [Ca(2+)]i spiking is mainly caused by the increased activity of the sarcoendoplasmic reticulum ATPases (SERCAs) that remove Ca(2+) from the cytosol into the endoplasmic reticulum. Quantitative comparison of experimental data with multiscale simulations supports the SERCA activity hypothesis. Further analysis of multiscale modeling and traditional rate equations indicates that the experimental observations are a spatial phenomenon where increasing pump strength leads to a decoupling of Ca(2+) release sites and subsequently to vanishing [Ca(2+)]i spikes.
Parametric models to relate spike train and LFP dynamics with neural information processing.
Banerjee, Arpan; Dean, Heather L; Pesaran, Bijan
2012-01-01
Spike trains and local field potentials (LFPs) resulting from extracellular current flows provide a substrate for neural information processing. Understanding the neural code from simultaneous spike-field recordings and subsequent decoding of information processing events will have widespread applications. One way to demonstrate an understanding of the neural code, with particular advantages for the development of applications, is to formulate a parametric statistical model of neural activity and its covariates. Here, we propose a set of parametric spike-field models (unified models) that can be used with existing decoding algorithms to reveal the timing of task or stimulus specific processing. Our proposed unified modeling framework captures the effects of two important features of information processing: time-varying stimulus-driven inputs and ongoing background activity that occurs even in the absence of environmental inputs. We have applied this framework for decoding neural latencies in simulated and experimentally recorded spike-field sessions obtained from the lateral intraparietal area (LIP) of awake, behaving monkeys performing cued look-and-reach movements to spatial targets. Using both simulated and experimental data, we find that estimates of trial-by-trial parameters are not significantly affected by the presence of ongoing background activity. However, including background activity in the unified model improves goodness of fit for predicting individual spiking events. Uncovering the relationship between the model parameters and the timing of movements offers new ways to test hypotheses about the relationship between neural activity and behavior. We obtained significant spike-field onset time correlations from single trials using a previously published data set where significantly strong correlation was only obtained through trial averaging. We also found that unified models extracted a stronger relationship between neural response latency and trial
Directory of Open Access Journals (Sweden)
Kim Elmer K
2012-07-01
Background: The next generation of prosthetic limbs will restore sensory feedback to the nervous system by mimicking how skin mechanoreceptors, innervated by afferents, produce trains of action potentials in response to compressive stimuli. Prior work has addressed building sensors within skin substitutes for robotics, modeling skin mechanics and neural dynamics of mechanotransduction, and predicting response timing of action potentials for vibration. The effort here is unique because it accounts for skin elasticity by measuring force within simulated skin, utilizes few free model parameters for parsimony, and separates parameter fitting and model validation. Additionally, the ramp-and-hold, sustained stimuli used in this work capture the essential features of the everyday task of contacting and holding an object. Methods: This systems integration effort computationally replicates the neural firing behavior of a slowly adapting type I (SAI) afferent in its temporally varying response to both intensity and rate of indentation force by combining a physical force sensor, housed in a skin-like substrate, with a mathematical model of neuronal spiking, the leaky integrate-and-fire. Comparison experiments were then conducted using ramp-and-hold stimuli on both the spiking-sensor model and mouse SAI afferents. The model parameters were iteratively fit against recorded SAI interspike intervals (ISI) before validating the model to assess its performance. Results: Model-predicted spike firing compares favorably with that observed for single SAI afferents. As indentation magnitude increases (1.2, 1.3, to 1.4 mm), mean ISI decreases from 98.81 ± 24.73, 54.52 ± 6.94, to 41.11 ± 6.11 ms. Moreover, as rate of ramp-up increases, ISI during ramp-up decreases from 21.85 ± 5.33, 19.98 ± 3.10, to 15.42 ± 2.41 ms. Considering first spikes, the predicted latencies exhibited a decreasing trend as stimulus rate increased, as is…
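The spiking half of the pipeline above, a leaky integrate-and-fire neuron driven by a ramp-and-hold input, can be sketched as follows. The membrane parameters, units, and the direct current drive (in place of a measured force signal) are illustrative assumptions:

```python
import numpy as np

def lif_ramp_and_hold(I_max, ramp_ms, hold_ms, dt=0.1,
                      tau_m=20.0, R=1.0, v_th=1.0, v_reset=0.0):
    """Leaky integrate-and-fire response to a ramp-and-hold input.
    The input rises linearly from 0 to I_max over ramp_ms, then holds.
    Returns spike times in ms."""
    n_ramp, n_hold = int(ramp_ms / dt), int(hold_ms / dt)
    I = np.concatenate([np.linspace(0.0, I_max, n_ramp),
                        np.full(n_hold, I_max)])
    v, spikes = 0.0, []
    for i, Ii in enumerate(I):
        v += dt / tau_m * (-v + R * Ii)   # leaky membrane integration
        if v >= v_th:                      # threshold crossing -> spike
            spikes.append(i * dt)
            v = v_reset
    return np.array(spikes)
```

Consistent with the trends reported above, stronger holds (and faster ramps) yield shorter interspike intervals in this toy version.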
Novel model of neuronal bioenergetics
DEFF Research Database (Denmark)
Bak, Lasse Kristoffer; Obel, Linea Lykke Frimodt; Walls, Anne B
2012-01-01
We have previously investigated the relative roles of extracellular glucose and lactate as fuels for glutamatergic neurons during synaptic activity. The conclusion from these studies was that cultured glutamatergic neurons utilize glucose rather than lactate during NMDA (N-methyl-D-aspartate)-induced synaptic activity and that lactate alone is not able to support neurotransmitter glutamate homoeostasis. Subsequently, a model was proposed to explain these results at the cellular level. In brief, the intermittent rises in intracellular Ca2+ during activation cause influx of Ca2+ into the mitochondrial matrix, thus activating the tricarboxylic acid cycle dehydrogenases. This will lead to a lower activity of the MASH (malate-aspartate shuttle), which in turn will result in anaerobic glycolysis and lactate production rather than lactate utilization. In the present work, we have investigated the effect…
Directory of Open Access Journals (Sweden)
Pierre Berthet
2016-07-01
The brain enables animals to behaviourally adapt in order to survive in a complex and dynamic environment, but how reward-oriented behaviours are achieved and computed by its underlying neural circuitry is an open question. To address this concern, we have developed a spiking model of the basal ganglia (BG) that learns to dis-inhibit the action leading to a reward despite ongoing changes in the reward schedule. The architecture of the network features the two pathways commonly described in BG, the direct (denoted D1) and the indirect (denoted D2) pathway, as well as a loop involving striatum and the dopaminergic system. The activity of these dopaminergic neurons conveys the reward prediction error (RPE), which determines the magnitude of synaptic plasticity within the different pathways. All plastic connections implement a versatile four-factor learning rule derived from Bayesian inference that depends upon pre- and postsynaptic activity, receptor type and dopamine level. Synaptic weight updates occur in the D1 or D2 pathways depending on the sign of the RPE, and an efference copy informs upstream nuclei about the action selected. We demonstrate successful performance of the system in a multiple-choice learning task with a transiently changing reward schedule. We simulate lesioning of the various pathways and show that a condition without the D2 pathway fares worse than one without D1. Additionally, we simulate the degeneration observed in Parkinson's disease (PD) by decreasing the number of dopaminergic neurons during learning. The results suggest that the D1 pathway impairment in PD might have been overlooked. Furthermore, an analysis of the alterations in the synaptic weights shows that using the absolute reward value instead of the RPE leads to a larger change in D1.
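The sign-dependent routing of weight updates can be caricatured with a scalar rule: compute the RPE and let its sign select which pathway's weight changes. The direction chosen here (positive RPE strengthens D1, negative strengthens D2) and the learning rate are assumptions for illustration; the model's actual rule is a four-factor Bayesian update over spiking units:

```python
def rpe_update(w_d1, w_d2, reward, expected, lr=0.1):
    """Route a weight update to the D1 or D2 pathway by the sign of the
    reward prediction error (RPE). A scalar caricature, not the full model."""
    rpe = reward - expected
    if rpe > 0:
        w_d1 += lr * rpe        # better than expected: reinforce "Go" (D1)
    else:
        w_d2 += lr * (-rpe)     # worse than expected: reinforce "NoGo" (D2)
    return w_d1, w_d2, rpe
```

Repeating this update while the reward schedule changes shifts weight between the two pathways, which is the behaviour probed by the lesion simulations above.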
Directory of Open Access Journals (Sweden)
Attila Szücs
Neurons display a high degree of variability and diversity in the expression and regulation of their voltage-dependent ionic channels. Under low levels of synaptic background activity, a number of physiologically distinct cell types can be identified in most brain areas that display different responses to standard forms of intracellular current stimulation. Nevertheless, it is not well understood how biophysically different neurons process synaptic inputs in natural conditions, i.e., when experiencing intense synaptic bombardment in vivo. While distinct cell types might process synaptic inputs into different patterns of action potentials representing specific "motifs" of network activity, standard methods of electrophysiology are not well suited to resolve such questions. In the current paper we performed dynamic clamp experiments with simulated synaptic inputs that were presented to three types of neurons in the juxtacapsular bed nucleus of stria terminalis (jcBNST) of the rat. Our analysis of the temporal structure of firing showed that the three types of jcBNST neurons did not produce qualitatively different spike responses under identical patterns of input. However, we observed consistent, cell-type-dependent variations in the fine structure of firing, at the level of single spikes. At the millisecond-resolution structure of firing we found a high degree of diversity across the entire spectrum of neurons irrespective of their type. Additionally, we identified a new cell type with intrinsic oscillatory properties that produced rhythmic and regular firing under synaptic stimulation, distinguishing it from the previously described jcBNST cell types. Our findings suggest a sophisticated, cell-type-dependent regulation of spike dynamics of neurons experiencing a complex synaptic background. The high degree of their dynamical diversity has implications for their cooperative dynamics and synchronization.
Contrasting Effects of the Persistent Na + Current on Neuronal Excitability and Spike Timing
National Research Council Canada - National Science Library
Vervaeke, Koen; Hu, Hua; Graham, Lyle J; Storm, Johan F
2006-01-01
…Using computational modeling, whole-cell recording, and dynamic clamp of CA1 hippocampal pyramidal cells in brain slices, we examined how INaP changes the transduction of excitatory current into action potentials…
Directory of Open Access Journals (Sweden)
Joshua T Dudman
2009-02-01
The transformation of synaptic input into patterns of spike output is a fundamental operation that is determined by the particular complement of ion channels that a neuron expresses. Although it is well established that individual ion channel proteins make stochastic transitions between conducting and non-conducting states, most models of synaptic integration are deterministic, and relatively little is known about the functional consequences of interactions between stochastically gating ion channels. Here, we show that a model of stellate neurons from layer II of the medial entorhinal cortex implemented with either stochastic or deterministically gating ion channels can reproduce the resting membrane properties of stellate neurons, but only the stochastic version of the model can fully account for perithreshold membrane potential fluctuations and clustered patterns of spike output that are recorded from stellate neurons during depolarized states. We demonstrate that the stochastic model implements an example of a general mechanism for patterning of neuronal output through activity-dependent changes in the probability of spike firing. Unlike deterministic mechanisms that generate spike patterns through slow changes in the state of model parameters, this general stochastic mechanism does not require retention of information beyond the duration of a single spike and its associated afterhyperpolarization. Instead, clustered patterns of spikes emerge in the stochastic model of stellate neurons as a result of a transient increase in firing probability driven by activation of HCN channels during recovery from the spike afterhyperpolarization. Using this model, we infer conditions in which stochastic ion channel gating may influence firing patterns in vivo and predict consequences of modifications of HCN channel function for in vivo firing patterns.
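The contrast drawn above between deterministic and stochastic gating comes down to simulating individual channel transitions rather than a mean open fraction. A minimal two-state (closed ↔ open) channel-population sketch; the rates, channel counts, and time step are illustrative assumptions rather than the stellate-cell model's actual HCN kinetics:

```python
import numpy as np

def stochastic_open_fraction(n_channels, alpha=0.5, beta=0.5,
                             T=1000.0, dt=0.01, seed=0):
    """Simulate n two-state ion channels (closed <-> open, opening rate
    alpha and closing rate beta in ms^-1) with binomial transition draws.
    Returns the time series of the open fraction."""
    rng = np.random.default_rng(seed)
    steps = int(T / dt)
    n_open = n_channels // 2          # start at the equilibrium fraction
    frac = np.empty(steps)
    for i in range(steps):
        n_closed = n_channels - n_open
        opening = rng.binomial(n_closed, alpha * dt)  # closed -> open
        closing = rng.binomial(n_open, beta * dt)     # open -> closed
        n_open += opening - closing
        frac[i] = n_open / n_channels
    return frac
```

The open-fraction fluctuations shrink roughly as 1/√N, which is why perithreshold membrane noise of this kind disappears in the deterministic (N → ∞) limit the abstract contrasts against.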
Reliability of spike and burst firing in thalamocortical relay cells
Zeldenrust, F.; Chameau, P.J.P.; Wadman, W.J.
2013-01-01
The reliability and precision of the timing of spikes in a spike train is an important aspect of neuronal coding. We investigated reliability in thalamocortical relay (TCR) cells in the acute slice and also in a Morris-Lecar model with several extensions. A frozen Gaussian noise current, superimposed…
Simple cortical and thalamic neuron models for digital arithmetic circuit implementation
Directory of Open Access Journals (Sweden)
Takuya eNanami
2016-05-01
Trade-off between reproducibility of neuronal activities and computational efficiency is one of the crucial subjects in computational neuroscience and neuromorphic engineering. A wide variety of neuronal models have been studied from different viewpoints. The digital spiking silicon neuron (DSSN) model is a qualitative model that focuses on efficient implementation by digital arithmetic circuits. We expanded the DSSN model and found appropriate parameter sets with which it reproduces the dynamical behaviors of the ionic-conductance models of four classes of cortical and thalamic neurons. We first developed a 4-variable model by reducing the number of variables in the ionic-conductance models and elucidated its mathematical structures using bifurcation analysis. Then, expanded DSSN models were constructed that reproduce these mathematical structures and capture the characteristic behavior of each neuron class. We confirmed that statistics of the neuronal spike sequences are similar in the DSSN and the ionic-conductance models. Computational cost of the DSSN model is larger than that of the recent sophisticated Integrate-and-Fire-based models, but smaller than the ionic-conductance models. This model is intended to provide another meeting point for the above trade-off, satisfying the demand for large-scale neuronal network simulation with closer-to-biology models.
Noise and Synchronization Analysis of the Cold-Receptor Neuronal Network Model
Directory of Open Access Journals (Sweden)
Ying Du
2014-01-01
This paper analyzes the dynamics of the cold-receptor neural network model. First, it examines noise effects on neuronal stimulus in the model. From ISI plots, it is shown that there are considerable differences between purely deterministic simulations and noisy ones. The ISI-distance is used to measure the noise effects on spike trains quantitatively. It is found that spike trains in neural models can be strongly affected by noise at different temperatures; moreover, spike trains show greater variability as the noise intensity increases. The synchronization of the neuronal network with different connectivity patterns is also studied. It is shown that chaotic and high-period patterns are more difficult to synchronize completely than single-spike and low-period patterns. The neuronal network exhibits various patterns of firing synchronization as key parameters such as the coupling strength are varied. Different types of firing synchronization are diagnosed by a correlation coefficient and the ISI-distance method. The simulations show that the synchronization status of neurons is related to the network connectivity patterns.
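The ISI-distance used above (introduced by Kreuz and colleagues) compares, at each instant, the interspike intervals the two trains are currently in, and time-averages the normalized discrepancy. A small sketch under the assumption of a uniform sampling grid:

```python
import numpy as np

def isi_distance(t1, t2, t_start, t_end, n_samples=1000):
    """ISI-distance between two spike trains: the time average of the
    normalized difference of the instantaneous interspike intervals."""
    ts = np.linspace(t_start, t_end, n_samples, endpoint=False)

    def current_isi(spikes, t):
        spikes = np.asarray(spikes)
        i = np.searchsorted(spikes, t, side='right')
        if i == 0 or i == len(spikes):
            return np.nan          # t lies outside the train's span
        return spikes[i] - spikes[i - 1]

    x1 = np.array([current_isi(t1, t) for t in ts])
    x2 = np.array([current_isi(t2, t) for t in ts])
    ratio = (x1 - x2) / np.maximum(x1, x2)
    return np.nanmean(np.abs(ratio))
```

Identical trains give 0; a train firing at half the rate of another gives 0.5; values approach 1 as the instantaneous rates diverge.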
de Curtis, M; Radici, C; Forti, M
1999-01-01
The cellular mechanisms involved in the generation of spontaneous epileptiform potentials were investigated in the piriform cortex of the in vitro isolated guinea-pig brain. A single, unilateral injection of bicuculline (150-200 nmol) in the anterior piriform cortex induced locally spontaneous interictal spikes that recurred with a period of 8.81+/-4.47 s and propagated caudally to the ipsi- and contralateral hemispheres. Simultaneous extra- and intracellular recordings from layer II and III principal cells showed that the spontaneous interictal spike correlates to a burst of action potentials followed by a large afterdepolarization. Intracellular application of the sodium conductance blocker, QX-314 (80 mM), abolished bursting activity and unmasked a high-threshold slow spike enhanced by the calcium chelator EGTA (50 mM). The slow spike was abolished by membrane hyperpolarization and by local perfusion with 2 mM cadmium. The depolarizing potential that followed the primary burst was reduced by arterial perfusion with the N-methyl-D-aspartate receptor antagonist, DL-2-amino-5-phosphonopentanoic acid (100-200 microM). The non-N-methyl-D-aspartate glutamate receptor antagonist, 6-cyano-7-nitroquinoxaline-2,3-dione (20 microM), completely and reversibly blocked the spontaneous spikes. The interictal spikes were terminated by a large afterpotential blocked either by intracellular QX-314 (80 mM) or by extracellular application of phaclofen and 2-hydroxysaclofen (10 and 4 mM, respectively). The present study demonstrates that, in an acute model of epileptogenesis, spontaneous interictal spikes are fostered by a primary burst of fast action potentials that ride on a regenerative high-threshold, possibly calcium-mediated spike, which activates a recurrent, glutamate-mediated potential responsible for the entrainment of adjacent and remote cortical regions. The bursting activity is controlled by a GABA(B) receptor-mediated inhibitory synaptic potential.
A generative spike train model with time-structured higher order correlations
Directory of Open Access Journals (Sweden)
James eTrousdale
2013-07-01
Full Text Available Emerging technologies are revealing the spiking activity in ever larger neural ensembles. Frequently, this spiking is far from independent, with correlations in the spike times of different cells. Understanding how such correlations impact the dynamics and function of neural ensembles remains an important open problem. Here we describe a new, generative model for correlated spike trains that can exhibit many of the features observed in data. Extending prior work in mathematical finance, this generalized thinning and shift (GTaS) model creates marginally Poisson spike trains with diverse temporal correlation structures. We give several examples which highlight the model's flexibility and utility. For instance, we use it to examine how a neural network responds to highly structured patterns of inputs. We then show that the GTaS model is analytically tractable, and derive cumulant densities of all orders in terms of model parameters. The GTaS framework can therefore be an important tool in the experimental and theoretical exploration of neural dynamics.
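The thinning-and-shift construction summarized above can be sketched in a few lines. The code below is a deliberately simplified illustration, not the authors' full GTaS model: here every cell copies each "mother" spike independently with the same fixed probability and receives a fixed per-cell shift, whereas the paper allows general marking distributions over subsets of cells. All names and parameter values are illustrative.

```python
import random

def gtas_spike_trains(rate, duration, n_cells, p_share, shifts, seed=0):
    """Simplified thinning-and-shift sketch: spikes of a mother
    Poisson process are copied to each cell independently with
    probability p_share (thinning), then delayed by a cell-specific
    shift, yielding marginally Poisson trains (rate * p_share)
    with built-in spike-time correlations across cells."""
    rng = random.Random(seed)
    # mother Poisson process: exponential inter-event intervals
    t, mother = 0.0, []
    while True:
        t += rng.expovariate(rate)
        if t > duration:
            break
        mother.append(t)
    trains = [[] for _ in range(n_cells)]
    for s in mother:
        for i in range(n_cells):
            if rng.random() < p_share:           # thinning
                trains[i].append(s + shifts[i])  # shift
    return [sorted(tr) for tr in trains]

trains = gtas_spike_trains(rate=20.0, duration=10.0, n_cells=3,
                           p_share=0.5, shifts=[0.0, 0.002, 0.005])
```

Because each cell keeps a mother spike with probability p_share, pairs of trains share spikes at the relative lags set by their shifts, which is the source of the temporal correlation structure.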
Sreenivasa, Manish; Ayusawa, Ko; Nakamura, Yoshihiko
2016-05-01
This study develops a multi-level neuromuscular model consisting of topological pools of spiking motor, sensory and interneurons controlling a bi-muscular model of the human arm. The spiking output of motor neuron pools was used to drive muscle actions and skeletal movement via neuromuscular junctions. Feedback information from muscle spindles was relayed via monosynaptic excitatory and disynaptic inhibitory connections, to simulate spinal afferent pathways. Subject-specific model parameters were identified from human experiments by using inverse dynamics computations and optimization methods. The identified neuromuscular model was used to simulate the biceps stretch reflex, and the results were compared to an independent dataset. The proposed model was able to track the recorded data and produce dynamically consistent neural spiking patterns, muscle forces and movement kinematics under varying conditions of external forces and co-contraction levels. This additional layer of detail in neuromuscular models has important relevance to the research communities of rehabilitation and clinical movement analysis by providing a mathematical approach to studying neuromuscular pathology.
Cyr, André; Boukadoum, Mounir
2013-03-01
This paper presents a novel bio-inspired habituation function for robots under control by an artificial spiking neural network. This non-associative learning rule is modelled at the synaptic level and validated through robotic behaviours in reaction to different stimuli patterns in a dynamical virtual 3D world. Habituation is minimally represented to show an attenuated response after exposure to and perception of persistent external stimuli. Based on current neurosciences research, the originality of this rule includes modulated response to variable frequencies of the captured stimuli. Filtering out repetitive data from the natural habituation mechanism has been demonstrated to be a key factor in the attention phenomenon, and inserting such a rule operating at multiple temporal dimensions of stimuli increases a robot's adaptive behaviours by ignoring broader contextual irrelevant information.
Serial Spike Time Correlations Affect Probability Distribution of Joint Spike Events.
Shahi, Mina; van Vreeswijk, Carl; Pipa, Gordon
2016-01-01
Detecting the existence of temporally coordinated spiking activity, and its role in information processing in the cortex, has remained a major challenge for neuroscience research. Different methods and approaches have been suggested to test whether observed synchronized events are significantly different from those expected by chance. To analyze simultaneous spike trains for precise spike correlation, these methods typically model the spike trains as a Poisson process, implying that the generation of each spike is independent of all the other spikes. However, studies have shown that neural spike trains exhibit dependence among spike sequences, such as the absolute and relative refractory periods, which govern the spike probability of the oncoming action potential based on the time of the last spike, or bursting behavior, which is characterized by short epochs of rapid action potentials followed by longer episodes of silence. Here we investigate non-renewal processes with an inter-spike interval distribution model that incorporates the spike-history dependence of individual neurons. For that, we use the Monte Carlo method to estimate the full shape of the coincidence count distribution and to generate false positives for coincidence detection. The results show that, compared to distributions based on homogeneous Poisson processes, and also non-Poisson processes, the width of the distribution of joint spike events changes. Non-renewal processes can lead to either heavy-tailed or narrow coincidence distributions. We conclude that small differences in the exact autostructure of the point process can cause large differences in the width of a coincidence distribution. Therefore, manipulations of the autostructure for the estimation of the significance of joint spike events seem to be inadequate.
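The Monte Carlo estimation of a coincidence-count distribution can be sketched as follows. This is a simplified stand-in, not the paper's method: it uses gamma-distributed ISIs (a renewal process whose shape parameter mimics a relative refractory period) rather than the full non-renewal spike-history model, and all rates, windows and names are illustrative. Setting shape=1 recovers the homogeneous Poisson baseline for comparison.

```python
import random

def gamma_train(rate, shape, duration, rng):
    """Renewal spike train with gamma-distributed ISIs; shape > 1
    mimics a relative refractory period, with the scale chosen so
    the mean firing rate stays fixed at `rate`."""
    t, spikes = 0.0, []
    while True:
        t += rng.gammavariate(shape, 1.0 / (rate * shape))
        if t > duration:
            break
        spikes.append(t)
    return spikes

def coincidences(a, b, window):
    """Count spikes in sorted train a with a partner in sorted
    train b within +/- window."""
    j, count = 0, 0
    for s in a:
        while j < len(b) and b[j] < s - window:
            j += 1
        if j < len(b) and abs(b[j] - s) <= window:
            count += 1
    return count

def coincidence_counts(n_trials, rate, shape, duration, window, seed=1):
    """Monte Carlo sample of the coincidence-count distribution for
    two independent trains of the same kind."""
    rng = random.Random(seed)
    return [coincidences(gamma_train(rate, shape, duration, rng),
                         gamma_train(rate, shape, duration, rng), window)
            for _ in range(n_trials)]

poisson_counts = coincidence_counts(200, rate=20.0, shape=1.0,
                                    duration=1.0, window=0.005)
refractory_counts = coincidence_counts(200, rate=20.0, shape=4.0,
                                       duration=1.0, window=0.005)
```

Comparing the empirical spread of `poisson_counts` and `refractory_counts` illustrates the paper's point that the autostructure of the point process changes the width of the coincidence distribution, and hence the false-positive rate of synchrony tests calibrated against a Poisson null.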
Heikkinen, Hanna; Sharifian, Fariba; Vigario, Ricardo; Vanni, Simo
2015-07-01
The blood oxygenation level-dependent (BOLD) response has been strongly associated with neuronal activity in the brain. However, some neuronal tuning properties are consistently different from the BOLD response. We studied the spatial extent of neural and hemodynamic responses in the primary visual cortex, where the BOLD responses spread and interact over much longer distances than the small receptive fields of individual neurons would predict. Our model shows that a feedforward-feedback loop between V1 and a higher visual area can account for the observed spread of the BOLD response. In particular, anisotropic landing of inputs onto compartmental neurons was necessary to account for the BOLD signal spread, while retaining realistic spiking responses. Our work shows that simple dendrites can separate tuning at the synapses and at the action potential output, thus bridging the BOLD signal to the neural receptive fields with high fidelity.
Parametric models to relate spike train and LFP dynamics with neural information processing
Directory of Open Access Journals (Sweden)
Arpan eBanerjee
2012-07-01
Full Text Available Spike trains and local field potentials resulting from extracellular current flows provide a substrate for neural information processing. Understanding the neural code from simultaneous spike-field recordings and subsequent decoding of information processing events will have widespread applications. One way to demonstrate an understanding of the neural code, with particular advantages for the development of applications, is to formulate a parametric statistical model of neural activity and its covariates. Here, we propose a set of parametric spike-field models (unified models) that can be used with existing decoding algorithms to reveal the timing of task- or stimulus-specific processing. Our proposed unified modeling framework captures the effects of two important features of information processing: time-varying stimulus-driven inputs and ongoing background activity that occurs even in the absence of environmental inputs. We have applied this framework for decoding neural latencies in simulated and experimentally recorded spike-field sessions obtained from the lateral intraparietal area (LIP) of awake, behaving monkeys performing cued look-and-reach movements to spatial targets. Using both simulated and experimental data, we find that estimates of trial-by-trial parameters are not significantly affected by the presence of ongoing background activity. However, including background activity in the unified model improves goodness of fit for predicting individual spiking events. Trial-by-trial spike-field correlations in visual response onset times are higher when the unified model is used, matching the corresponding values obtained using earlier trial-averaged measures on a previously published data set. Uncovering the relationship between the model parameters and the timing of movements offers new ways to test hypotheses about the relationship between neural activity and behavior.
Studying spike trains using a van Rossum metric with a synapse-like filter.
Houghton, Conor
2009-02-01
Spike trains are unreliable. For example, in the primary sensory areas, spike patterns and precise spike times will vary between responses to the same stimulus. Nonetheless, information about sensory inputs is communicated in the form of spike trains. A challenge in understanding spike trains is to assess the significance of individual spikes in encoding information. One approach is to define a spike train metric, allowing a distance to be calculated between pairs of spike trains. In a good metric, this distance will depend on the information the spike trains encode. This method has been used previously to calculate the timescale over which the precision of spike times is significant. Here, a new metric is constructed based on a simple model of synaptic conductances which includes binding site depletion. Including binding site depletion in the metric means that a given individual spike has a smaller effect on the distance if it occurs soon after other spikes. The metric proves effective at classifying neuronal responses by stimulus in the sample data set of electrophysiological recordings from the primary auditory area of the zebra finch forebrain. This shows that the metric is effective for these spike trains, suggesting that the significance of a spike is modulated by its proximity to previous spikes. This modulation is a putative information-coding property of spike trains.
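A van Rossum-style distance with a depletion-sensitive filter can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's exact synapse model: binding-site depletion is approximated by a resource variable that each spike depletes by a fixed fraction and that recovers exponentially, and the kernel, time step and parameter values are all assumptions.

```python
import math

def filtered_trace(spikes, tau, dt, duration, depletion=0.0, tau_rec=0.1):
    """Exponentially filtered spike train; each spike's amplitude is
    scaled by a resource r that is depleted by fraction `depletion`
    per spike and recovers with time constant tau_rec (a crude
    stand-in for binding-site depletion)."""
    n = int(duration / dt)
    trace = [0.0] * n
    r, last, amps = 1.0, None, []
    for s in spikes:
        if last is not None:
            r = 1.0 - (1.0 - r) * math.exp(-(s - last) / tau_rec)
        amps.append(r)        # effective amplitude of this spike
        r *= (1.0 - depletion)
        last = s
    for s, a in zip(spikes, amps):
        for k in range(int(s / dt), n):
            trace[k] += a * math.exp(-(k * dt - s) / tau)
    return trace

def van_rossum(spikes_a, spikes_b, tau=0.01, dt=0.001,
               duration=1.0, depletion=0.5):
    """L2 distance between the filtered traces of two spike trains."""
    fa = filtered_trace(spikes_a, tau, dt, duration, depletion)
    fb = filtered_trace(spikes_b, tau, dt, duration, depletion)
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(fa, fb)) * dt / tau)
```

With depletion > 0, an extra spike arriving shortly after another moves the distance less than the same spike arriving in isolation, which is the qualitative effect the abstract describes.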
Directory of Open Access Journals (Sweden)
Sun Qian-Quan
2009-10-01
Full Text Available Abstract Background Little is known about the roles of dendritic gap junctions (GJs) of inhibitory interneurons in modulating temporal properties of sensory-induced responses in sensory cortices. Electrophysiological dual patch-clamp recording and computational simulation methods were used in combination to examine a novel role of GJs in sensory-mediated feed-forward inhibitory responses in barrel cortex layer IV and its underlying mechanisms. Results Under physiological conditions, excitatory post-junctional potentials (EPJPs) interact with thalamocortical (TC) inputs within an unprecedented few milliseconds (i.e., over 200 Hz) to enhance the firing probability and synchrony of coupled fast-spiking (FS) cells. Dendritic GJ coupling allows a fourfold increase in synchrony and a significant enhancement in spike transmission efficacy in excitatory spiny stellate cells. The model revealed the following novel mechanisms: (1) rapid capacitive current (Icap) underlies the activation of voltage-gated sodium channels; (2) there was a window of less than 2 milliseconds in which the Icap underlying TC input and EPJP was coupled effectively; (3) cells with dendritic GJs had larger input conductance and smaller membrane responses to weaker inputs; (4) synchrony in inhibitory networks by GJ coupling leads to reduced sporadic lateral inhibition and increased TC transmission efficacy. Conclusion Dendritic GJs of neocortical inhibitory networks can have very powerful effects in modulating the strength and the temporal properties of sensory-induced feed-forward inhibitory and excitatory responses at a very high frequency band (>200 Hz). Rapid capacitive currents are identified as the main mechanism underlying the interaction between two transient synaptic conductances.
Directory of Open Access Journals (Sweden)
Kevin S Jones
Full Text Available The dysfunction of parvalbumin-positive, fast-spiking interneurons (FSI) is considered a primary contributor to the pathophysiology of schizophrenia (SZ), but deficits in FSI physiology have not been explicitly characterized. We show, for the first time, that a widely-employed model of schizophrenia minimizes first spike latency and increases GluN2B-mediated current in neocortical FSIs. The reduction in FSI first-spike latency coincides with reduced expression of the Kv1.1 potassium channel subunit, which provides a biophysical explanation for the abnormal spiking behavior. Similarly, the increase in NMDA current coincides with enhanced expression of the GluN2B NMDA receptor subunit, specifically in FSIs. In this study, mice were treated with the NMDA receptor antagonist, MK-801, during the first week of life. During adolescence, we detected reduced spike latency and increased GluN2B-mediated NMDA current in FSIs, which suggests transient disruption of NMDA signaling during neonatal development exerts lasting changes in the cellular and synaptic physiology of neocortical FSIs. Overall, we propose these physiological disturbances represent a general impairment to the physiological maturation of FSIs which may contribute to schizophrenia-like behaviors produced by this model.
Martina, M; Schultz, J H; Ehmke, H; Monyer, H; Jonas, P
1998-10-15
We have examined the gating and pharmacological characteristics of somatic K+ channels in fast-spiking interneurons and regularly spiking principal neurons of hippocampal slices. In nucleated patches isolated from basket cells of the dentate gyrus, a fast delayed rectifier K+ current component that was highly sensitive to tetraethylammonium (TEA) and 4-aminopyridine (4-AP) was observed. Kv3 (Kv3.1, Kv3.2) subunit transcripts were expressed in almost all (89%) of the interneurons but only in 17% of the pyramidal neurons. In contrast, Kv4 (Kv4.2, Kv4.3) subunit mRNAs were present in 87% of pyramidal neurons but only in 55% of interneurons. Selective block of fast delayed rectifier K+ channels, presumably assembled from Kv3 subunits, by 4-AP substantially reduced the action potential frequency in interneurons. These results indicate that the differential expression of Kv3 and Kv4 subunits shapes the action potential phenotypes of principal neurons and interneurons in the cortex.
Pyka, Martin; Klatt, Sebastian; Cheng, Sen
2014-01-01
Computational models of neural networks can be based on a variety of different parameters. These parameters include, for example, the 3d shape of neuron layers, the neurons' spatial projection patterns, spiking dynamics and neurotransmitter systems. While many well-developed approaches are available to model, for example, the spiking dynamics, there is a lack of approaches for modeling the anatomical layout of neurons and their projections. We present a new method, called Parametric Anatomical Modeling (PAM), to fill this gap. PAM can be used to derive network connectivities and conduction delays from anatomical data, such as the position and shape of the neuronal layers and the dendritic and axonal projection patterns. Within the PAM framework, several mapping techniques between layers can account for a large variety of connection properties between pre- and post-synaptic neuron layers. PAM is implemented as a Python tool and integrated into the 3d modeling software Blender. We demonstrate on a 3d model of the hippocampal formation how PAM can help reveal complex properties of the synaptic connectivity and conduction delays, properties that might be relevant to uncover the function of the hippocampus. Based on these analyses, two experimentally testable predictions arose: (i) the number of neurons and the spread of connections are heterogeneously distributed across the main anatomical axes, (ii) the distribution of connection lengths in CA3-CA1 differs qualitatively from those between DG-CA3 and CA3-CA3. Models created by PAM can also serve as an educational tool to visualize the 3d connectivity of brain regions. The low-dimensional yet biologically plausible parameter space renders PAM suitable to analyse allometric and evolutionary factors in networks and to model the complexity of real networks with comparatively little effort.
Timing intervals using population synchrony and spike timing dependent plasticity
Directory of Open Access Journals (Sweden)
Wei Xu
2016-12-01
Full Text Available We present a computational model by which ensembles of regularly spiking neurons can encode different time intervals through synchronous firing. We show that a neuron responding to a large population of convergent inputs has the potential to learn to produce an appropriately-timed output via spike-time dependent plasticity. We explain why temporal variability of this population synchrony increases with increasing time intervals. We also show that the scalar property of timing and its violation at short intervals can be explained by the spike-wise accumulation of jitter in the inter-spike intervals of timing neurons. We explore how the challenge of encoding longer time intervals can be overcome and conclude that this may involve a switch to a different population of neurons with lower firing rate, with the added effect of producing an earlier bias in response. Experimental data on human timing performance show features in agreement with the model’s output.
Matsubara, Takashi; Torikai, Hiroyuki
2016-04-01
Modeling and implementation approaches for the reproduction of input-output relationships in biological nervous tissues contribute to the development of engineering and clinical applications. However, because of high nonlinearity, the traditional modeling and implementation approaches encounter difficulties in terms of generalization ability (i.e., performance when reproducing an unknown data set) and computational resources (i.e., computation time and circuit elements). To overcome these difficulties, asynchronous cellular automaton-based neuron (ACAN) models, which are described as special kinds of cellular automata that can be implemented as small asynchronous sequential logic circuits, have been proposed. This paper presents a novel type of such ACAN and a theoretical analysis of its excitability. This paper also presents a novel network of such neurons, which can mimic the input-output relationships of biological and nonlinear ordinary differential equation model neural networks. Numerical analyses confirm that the presented network has a higher generalization ability than other major modeling and implementation approaches. In addition, field-programmable gate array implementations confirm that the presented network requires lower computational resources.
Woodward, Alexander; Froese, Tom; Ikegami, Takashi
2015-02-01
The state space of a conventional Hopfield network typically exhibits many different attractors of which only a small subset satisfies constraints between neurons in a globally optimal fashion. It has recently been demonstrated that combining Hebbian learning with occasional alterations of normal neural states avoids this problem by means of self-organized enlargement of the best basins of attraction. However, so far it is not clear to what extent this process of self-optimization is also operative in real brains. Here we demonstrate that it can be transferred to more biologically plausible neural networks by implementing a self-optimizing spiking neural network model. In addition, by using this spiking neural network to emulate a Hopfield network with Hebbian learning, we attempt to make a connection between rate-based and temporal coding based neural systems. Although further work is required to make this model more realistic, it already suggests that the efficacy of the self-optimizing process is independent from the simplifying assumptions of a conventional Hopfield network. We also discuss natural and cultural processes that could be responsible for occasional alteration of neural firing patterns in actual brains.
Chen, Zhe; Vijayan, Sujith; Barbieri, Riccardo; Wilson, Matthew A; Brown, Emery N
2009-07-01
UP and DOWN states, the periodic fluctuations between increased and decreased spiking activity of a neuronal population, are a fundamental feature of cortical circuits. Understanding UP-DOWN state dynamics is important for understanding how these circuits represent and transmit information in the brain. To date, limited work has been done on characterizing the stochastic properties of UP-DOWN state dynamics. We present a set of Markov and semi-Markov discrete- and continuous-time probability models for estimating UP and DOWN states from multiunit neural spiking activity. We model multiunit neural spiking activity as a stochastic point process, modulated by the hidden (UP and DOWN) states and the ensemble spiking history. We estimate jointly the hidden states and the model parameters by maximum likelihood using an expectation-maximization (EM) algorithm and a Monte Carlo EM algorithm that uses reversible-jump Markov chain Monte Carlo sampling in the E-step. We apply our models and algorithms in the analysis of both simulated multiunit spiking activity and actual multiunit spiking activity recorded from primary somatosensory cortex in a behaving rat during slow-wave sleep. Our approach provides a statistical characterization of UP-DOWN state dynamics that can serve as a basis for verifying and refining mechanistic descriptions of this process.
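The core idea of decoding hidden UP/DOWN states from spike counts can be sketched with a much simpler model than the paper's: a discrete-time two-state hidden Markov model with Poisson emissions, decoded by Viterbi with hand-picked parameters rather than EM-fitted ones. The rates, transition probability and toy counts below are all assumptions for illustration.

```python
import math

def updown_viterbi(counts, rate_up, rate_down, p_stay):
    """Viterbi decoding of UP (1) / DOWN (0) states from binned
    population spike counts under a two-state HMM with Poisson
    emissions and symmetric stay probability p_stay."""
    def log_pois(k, lam):
        # log Poisson pmf: k*log(lam) - lam - log(k!)
        return k * math.log(lam) - lam - math.lgamma(k + 1)
    states = (rate_down, rate_up)            # index 0 = DOWN, 1 = UP
    log_stay, log_switch = math.log(p_stay), math.log(1 - p_stay)
    score = [log_pois(counts[0], lam) for lam in states]
    back = []
    for k in counts[1:]:
        new, ptr = [], []
        for j, lam in enumerate(states):
            cand = [score[i] + (log_stay if i == j else log_switch)
                    for i in range(2)]
            best = 0 if cand[0] >= cand[1] else 1
            ptr.append(best)
            new.append(cand[best] + log_pois(k, lam))
        score, back = new, back + [ptr]
    path = [0 if score[0] >= score[1] else 1]
    for ptr in reversed(back):               # backtrack
        path.append(ptr[path[-1]])
    return list(reversed(path))

decoded = updown_viterbi([0, 1, 0, 0, 6, 8, 7, 9, 1, 0],
                         rate_up=8.0, rate_down=0.5, p_stay=0.9)
```

The stay probability acts as a smoothing prior: isolated high or low bins are less likely to flip the decoded state than sustained runs, which is the qualitative behavior one wants when segmenting UP and DOWN epochs.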
Directory of Open Access Journals (Sweden)
Viswanathan Arunachalam
2013-01-01
Full Text Available The classical models of the single neuron, like the Hodgkin-Huxley point neuron or the leaky integrate-and-fire neuron, assume the influence of postsynaptic potentials to last until the neuron fires. Vidybida (2008), in a refreshing departure, has proposed models for binding neurons in which the trace of an input is remembered only for a finite fixed period of time, after which it is forgotten. The binding neurons conform to the behaviour of real neurons and are applicable in constructing fast recurrent networks for computer modeling. This paper develops explicitly several useful results for a binding neuron, such as the firing time distribution and other statistical characteristics. We also discuss the applicability of the developed results in constructing a modified hourglass network model in which there are interconnected neurons with excitatory as well as inhibitory inputs. Limited simulation results of the hourglass network are presented.
Cellular Origin of Spontaneous Ganglion Cell Spike Activity in Animal Models of Retinitis Pigmentosa
Directory of Open Access Journals (Sweden)
David J. Margolis
2011-01-01
Full Text Available Here we review evidence that loss of photoreceptors due to degenerative retinal disease causes an increase in the rate of spontaneous ganglion cell spike discharge. Information about persistent spike activity is important since it is expected to add noise to the communication between the eye and the brain and thus impact the design and effective use of retinal prosthetics for restoring visual function in patients blinded by disease. Patch-clamp recordings from identified types of ON and OFF retinal ganglion cells in the adult (36–210 d old) rd1 mouse show that the ongoing oscillatory spike activity in both cell types is driven by strong rhythmic synaptic input from presynaptic neurons that is blocked by CNQX. The recurrent synaptic activity may arise in a negative feedback loop between a bipolar cell and an amacrine cell that exhibits resonant behavior and oscillations in membrane potential when the normal balance between excitation and inhibition is disrupted by the absence of photoreceptor input.
Coupling and noise induced spiking-bursting transition in a parabolic bursting model.
Ji, Lin; Zhang, Jia; Lang, Xiufeng; Zhang, Xiuhui
2013-03-01
The transition from tonic spiking to bursting is an important dynamic process that carries physiologically relevant information. In this work, coupling- and noise-induced spiking-bursting transitions are investigated in a parabolic bursting model, with specific discussion of their cooperative effects. Fast/slow analysis shows that weak coupling may help to induce bursting by changing the geometric properties of the fast subsystem so that the originally unstable periodic solution is stabilized. Noise turns out to play a similar stabilizing role and to induce bursting at appropriate moderate intensities. However, their cooperation may either strengthen or weaken the overall effect, depending on the choice of noise level.
Attention deficit associated with early life interictal spikes in a rat model is improved with ACTH.
Directory of Open Access Journals (Sweden)
Amanda E Hernan
Full Text Available Children with epilepsy often present with pervasive cognitive and behavioral comorbidities including working memory impairments, attention deficit hyperactivity disorder (ADHD) and autism spectrum disorder. These non-seizure characteristics are severely detrimental to overall quality of life. Some of these children, particularly those with epilepsies classified as Landau-Kleffner Syndrome or continuous spike and wave during sleep, have infrequent seizure activity but frequent focal epileptiform activity. This frequent epileptiform activity is thought to be detrimental to cognitive development; however, it is also possible that these interictal spike (IIS) events initiate pathophysiological pathways in the developing brain that may be independently associated with cognitive deficits. These hypotheses are difficult to address due to the previous lack of an appropriate animal model. To this end, we have recently developed a rat model to test the role of frequent focal epileptiform activity in the prefrontal cortex. Using microinjections of a GABA(A) antagonist (bicuculline methiodide) delivered multiple times per day from postnatal day (P)21 to P25, we showed that rat pups experiencing frequent, focal, recurrent epileptiform activity in the form of interictal spikes during neurodevelopment have significant long-term deficits in attention and sociability that persist into adulthood. To determine if treatment with ACTH, a drug widely used to treat early-life seizures, altered outcome, we administered ACTH once per day subcutaneously during the time of the induced interictal spike activity. We show a modest amelioration of the attention deficit seen in animals with a history of early life interictal spikes with ACTH, in the absence of alteration of interictal spike activity. These results suggest that pharmacological intervention that is not targeted to the interictal spike activity is worthy of future study as it may be beneficial for preventing or ameliorating adverse
Cerminara, Caterina; D'Agati, Elisa; Lange, Klaus W.; Kaunzinger, Ivo; Tucha, Oliver; Parisi, Pasquale; Spalice, Alberto; Curatolo, Paolo
2010-01-01
Although the high risk of cognitive impairments in benign childhood epilepsy with centrotemporal spikes (BCECTS) is now well established, there is no clear definition of a uniform neurocognitive profile. This study was based on a neuropsychological model of attention that assessed various components
Hsu, Chih-Chieh; Parker, Alice C
2014-01-01
We present an electronic cortical neuron incorporating dynamic spike threshold and active dendritic properties. The circuit is simulated using a carbon nanotube field-effect transistor SPICE model. We demonstrate that our neuron has a lower spike threshold for coincident synaptic inputs; however, when the synaptic inputs are not in synchrony, it requires larger depolarization to evoke the neuron to fire. We also demonstrate that a dendritic spike is key to precisely-timed input-output transformation, produces reliable firing, and results in more resilience to input jitter within an individual neuron.
Malik, Ruchi; Chattarji, Sumantra
2012-03-01
Environmental enrichment (EE) is a well-established paradigm for studying naturally occurring changes in synaptic efficacy in the hippocampus that underlie experience-induced modulation of learning and memory in rodents. Earlier research on the effects of EE on hippocampal plasticity focused on long-term potentiation (LTP). Whereas many of these studies investigated changes in synaptic weight, little is known about potential contributions of neuronal excitability to EE-induced plasticity. Here, using whole-cell recordings in hippocampal slices, we address this gap by analyzing the impact of EE on both synaptic plasticity and intrinsic excitability of hippocampal CA1 pyramidal neurons. Consistent with earlier reports, EE increased contextual fear memory and dendritic spine density on CA1 cells. Furthermore, EE facilitated LTP at Schaffer collateral inputs to CA1 pyramidal neurons. Analysis of the underlying causes for enhanced LTP shows EE to increase the frequency but not amplitude of miniature excitatory postsynaptic currents. However, presynaptic release probability, assayed using paired-pulse ratios and use-dependent block of N-methyl-d-aspartate receptor currents, was not affected. Furthermore, CA1 neurons fired more action potentials (APs) in response to somatic depolarization, as well as during the induction of LTP. EE also reduced spiking threshold and after-hyperpolarization amplitude. Strikingly, this EE-induced increase in excitability caused the same-sized excitatory postsynaptic potential to evoke more APs. Together, these findings suggest that EE may enhance the capacity for plasticity in CA1 neurons, not only by strengthening synapses but also by enhancing their efficacy to fire spikes, and the two combine to act as an effective substrate for amplifying LTP.
DEFF Research Database (Denmark)
Joshi, Suyash Narendra; Dau, Torsten; Epp, Bastian
2017-01-01
A computational model of cat auditory nerve fiber (ANF) responses to electrical stimulation is presented. The model assumes that (1) there exist at least two sites of spike generation along the ANF and (2) both an anodic (positive) and a cathodic (negative) charge in isolation can evoke a spike. ...
Modeling study of the light stimulation of a neuron cell with channelrhodopsin-2 mutants.
Grossman, Nir; Nikolic, Konstantin; Toumazou, Christofer; Degenaar, Patrick
2011-06-01
Channelrhodopsin-2 (ChR2) has become a widely used tool for stimulating neurons with light. Nevertheless, the underlying dynamics of the ChR2-evoked spikes are still not yet fully understood. Here, we develop a model that describes the response of ChR2-expressing neurons to light stimuli and use the model to explore the light-to-spike process. We show that an optimal stimulation yield is achieved when the optical energies are delivered in short pulses. The model allows us to theoretically examine the effects of using various types of ChR2 mutants. We show that while increasing the lifetime and shuttering speed of ChR2 have limited effect, reducing the threshold irradiance by increased conductance will eliminate adaptation and allow constant dynamic range. The model and the conclusion presented in this study can help to interpret experimental results, design illumination protocols, and seek improvement strategies in the nascent optogenetic field.
Phase diagrams and dynamics of a computationally efficient map-based neuron model
Gonsalves, Jheniffer J.; Tragtenberg, Marcelo H. R.
2017-01-01
We introduce a new map-based neuron model, derived from the dynamical perceptron family, that offers the best compromise between computational efficiency, analytical tractability, a reduced parameter space and a wide range of dynamical behaviors. We calculate, analytically and computationally, bifurcation and phase diagrams that underpin a rich repertoire of autonomous and excitable dynamical behaviors. We report the existence of a new regime of cardiac spikes corresponding to nonchaotic aperiodic behavior. We compare the features of our model to standard neuron models currently available in the literature. PMID:28358843
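The computational appeal of map-based neurons is that one time step is a single algebraic update, with no ODE solver. The sketch below illustrates the genre with the well-known Rulkov map, a standard two-variable map-based neuron; it is a stand-in for discussion, not the authors' perceptron-derived model, and the parameter values are illustrative.

```python
def rulkov_step(x, y, alpha, sigma, mu=0.001):
    """One iteration of the Rulkov map-based neuron: x is the fast
    (voltage-like) variable, y the slow variable that gates
    transitions between silence, spiking and bursting."""
    x_new = alpha / (1.0 + x * x) + y
    y_new = y - mu * (x + 1.0 - sigma)
    return x_new, y_new

def run(alpha, sigma, n, x0=-1.0, y0=-2.9):
    """Iterate the map n times and return the fast-variable trace."""
    xs, x, y = [], x0, y0
    for _ in range(n):
        x, y = rulkov_step(x, y, alpha, sigma)
        xs.append(x)
    return xs

trace = run(alpha=4.5, sigma=0.14, n=5000)
```

Because each step is two multiplications and a division, long network simulations cost a small, fixed amount per neuron per step, which is the efficiency argument made for map-based models in general.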
Statistics of a neuron model driven by asymmetric colored noise.
Müller-Hansen, Finn; Droste, Felix; Lindner, Benjamin
2015-02-01
Irregular firing of neurons can be modeled as a stochastic process. Here we study the perfect integrate-and-fire neuron driven by dichotomous noise, a Markovian process that jumps between two states (i.e., possesses non-Gaussian statistics) and exhibits nonvanishing temporal correlations (i.e., represents colored noise). Specifically, we consider asymmetric dichotomous noise with two different transition rates. Using a first-passage-time formulation, we derive exact expressions for the probability density and the serial correlation coefficient of the interspike interval (time interval between two subsequent neural action potentials) and the power spectrum of the spike train. Furthermore, we extend the model by including additional Gaussian white noise, and we give approximations for the interspike interval (ISI) statistics in this case. Numerical simulations are used to validate the exact analytical results for pure dichotomous noise, and to test the approximations of the ISI statistics when Gaussian white noise is included. The results may help to understand how correlations and asymmetry of noise and signals in nerve cells shape neuronal firing statistics.
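A minimal numerical sketch of the setup, a perfect integrator with threshold-and-reset driven by a two-state Markov (dichotomous) noise; all parameter values are illustrative, and the statistics are estimated by simulation rather than from the paper's exact formulas:

```python
import numpy as np

rng = np.random.default_rng(0)

def pif_dichotomous(mu=1.0, eta=0.5, k_plus=2.0, k_minus=1.0,
                    v_th=1.0, dt=1e-3, n_spikes=500):
    """Perfect integrate-and-fire neuron, dv/dt = mu + s(t)*eta, driven by
    asymmetric dichotomous noise s(t) in {+1, -1} with transition rates
    k_plus (out of +1) and k_minus (out of -1). Spike and reset at v_th."""
    v, s = 0.0, 1
    t, t_last, isis = 0.0, 0.0, []
    while len(isis) < n_spikes:
        if rng.random() < (k_plus if s == 1 else k_minus) * dt:
            s = -s                        # Markovian two-state switching
        v += (mu + s * eta) * dt
        t += dt
        if v >= v_th:
            isis.append(t - t_last)
            t_last, v = t, 0.0
    return np.array(isis)

isis = pif_dichotomous()
rho_1 = np.corrcoef(isis[:-1], isis[1:])[0, 1]  # lag-1 serial ISI correlation
```

Because the noise is slow relative to the ISI, successive intervals share the same noise state part of the time, which is the origin of the serial ISI correlations the paper computes exactly.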
Modeling of Nike Experiments on Acceleration of Planar Targets Stabilized with a Short Spike
Metzler, N.; Velikovich, A. L.; Gardner, J. H.
2005-10-01
A short sub-ns laser pulse (spike) produces a decelerating shock wave and a rarefaction wave immediately behind it, shaping a density gradient in the target. The following main pulse "rides" this graded density profile. We have demonstrated how the deceleration of the ablation front following the shock wave suppresses laser imprint and delays perturbation growth in the target [1]. We report the results of 2D numerical modeling of experiments on the Nike laser at NRL, with its recently developed short-pulse capability, for a low-energy spike which does not affect the target adiabat. We studied the effect of the spike on laser imprint on smooth planar targets and on the growth of perturbations imposed as single-mode ripples on the irradiated surface of the targets. In all cases, the delay of the onset and/or the suppression of the rate of mass perturbation growth due to the spike are robust and significant enough to be observable on Nike. [1] N. Metzler et al., Phys. Plasmas 6, 3283 (1999); 9, 5050 (2002); 10, 1897 (2003).
Directory of Open Access Journals (Sweden)
Sebastien Naze
2015-05-01
Epileptic seizure dynamics span multiple scales in space and time. Understanding seizure mechanisms requires identifying the relations between seizure components within and across these scales, together with the analysis of their dynamical repertoire. Mathematical models have been developed to reproduce seizure dynamics across scales ranging from the single neuron to the neural population. In this study, we develop a network model of spiking neurons and systematically investigate the conditions under which the network displays the emergent dynamic behaviors known from the Epileptor, which is a well-investigated abstract model of epileptic neural activity. This approach allows us to study the biophysical parameters and variables leading to epileptiform discharges at cellular and network levels. Our network model is composed of two neuronal populations, characterized by fast excitatory bursting neurons and regular spiking inhibitory neurons, embedded in a common extracellular environment represented by a slow variable. By systematically analyzing the parameter landscape offered by the simulation framework, we reproduce typical sequences of neural activity observed during status epilepticus. We find that exogenous fluctuations from the extracellular environment and electro-tonic couplings play a major role in the progression of the seizure, which supports previous studies and further validates our model. We also investigate the influence of chemical synaptic coupling in the generation of spontaneous seizure-like events. Our results argue towards a temporal shift of typical spike waves with fast discharges as synaptic strengths are varied. We demonstrate that spike waves, including interictal spikes, are generated primarily by inhibitory neurons, whereas fast discharges during the wave part are due to excitatory neurons. Simulated traces are compared with in vivo experimental data from rodents at different stages of the disorder. We draw the conclusion
Neuronal modelling of baroreflex response to orthostatic stress
Samin, Azfar
The accelerations experienced in aerial combat can cause pilot loss of consciousness (GLOC) due to a critical reduction in cerebral blood circulation. The development of smart protective equipment requires understanding of how the brain processes blood pressure (BP) information in response to acceleration. We present a biologically plausible model of the Baroreflex to investigate the neural correlates of short-term BP control under acceleration or orthostatic stress. The neuronal network model, which employs an integrate-and-fire representation of a biological neuron, comprises the sensory, motor, and the central neural processing areas that form the Baroreflex. Our modelling strategy is to test hypotheses relating to the encoding mechanisms of multiple sensory inputs to the nucleus tractus solitarius (NTS), the site of central neural processing. The goal is to run simulations and reproduce model responses that are consistent with the variety of available experimental data. Model construction and connectivity are inspired by the available anatomical and neurophysiological evidence that points to a barotopic organization in the NTS, and the presence of frequency-dependent synaptic depression, which provides a mechanism for generating non-linear local responses in NTS neurons that result in quantifiable dynamic global baroreflex responses. The entire physiological range of BP and rate of change of BP variables is encoded in a palisade of NTS neurons in that the spike responses approximate Gaussian 'tuning' curves. An adapting weighted-average decoding scheme computes the motor responses and a compensatory signal regulates the heart rate (HR). Model simulations suggest that: (1) the NTS neurons can encode the hydrostatic pressure difference between two vertically separated sensory receptor regions at +Gz, and use changes in that difference for the regulation of HR; (2) even though NTS neurons do not fire with a cardiac rhythm seen in the afferents, pulse
Improved cosmological model fitting of Planck data with a dark energy spike
Park, Chan-Gyung
2015-06-01
The Λ cold dark matter (ΛCDM) model is currently known as the simplest cosmology model that best describes observations with a minimal number of parameters. Here we introduce a cosmology model that is preferred over the conventional ΛCDM one by constructing dark energy as the sum of the cosmological constant Λ and an additional fluid that is designed to have an extremely short transient spike in energy density during the radiation-matter equality era and an early scaling behavior with radiation and matter densities. The density parameter of the additional fluid is defined as a Gaussian function plus a constant in logarithmic scale-factor space. Searching for the best-fit cosmological parameters in the presence of such a dark energy spike gives a far smaller chi-square value, by about 5 times the number of additional parameters introduced, and narrower constraints on the matter density and Hubble constant compared with the best-fit ΛCDM model. The significant improvement in reducing the chi square mainly comes from the better fitting of the Planck temperature power spectrum around the third (ℓ≈800) and sixth (ℓ≈1800) acoustic peaks. The likelihood ratio test and the Akaike information criterion suggest that the model of a dark energy spike is strongly favored by the current cosmological observations over the conventional ΛCDM model. However, based on the Bayesian information criterion, which penalizes models with more parameters, the strong evidence supporting the presence of a dark energy spike disappears. Our result emphasizes that alternative cosmological parameter estimation with even better fitting of the same observational data is allowed in Einstein's gravity.
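The functional form described for the extra fluid, a Gaussian in logarithmic scale-factor space plus a constant, is straightforward to write down; the amplitude, width, and floor below are placeholders rather than the paper's best-fit values:

```python
import numpy as np

def omega_extra(ln_a, amplitude=0.1, width=0.1, floor=0.01,
                ln_a_eq=np.log(1.0 / 3400.0)):
    """Density parameter of the extra fluid: a Gaussian spike centred near
    radiation-matter equality (a_eq ~ 1/3400) plus a constant scaling floor,
    as a function of ln(a). All numbers are illustrative, not fitted."""
    return floor + amplitude * np.exp(-0.5 * ((ln_a - ln_a_eq) / width) ** 2)

ln_a = np.linspace(np.log(1e-6), 0.0, 2000)
omega = omega_extra(ln_a)
```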
Effective stimuli for constructing reliable neuron models.
Directory of Open Access Journals (Sweden)
Shaul Druckmann
2011-08-01
The rich dynamical nature of neurons poses major conceptual and technical challenges for unraveling their nonlinear membrane properties. Traditionally, various current waveforms have been injected at the soma to probe neuron dynamics, but the rationale for selecting specific stimuli has never been rigorously justified. The present experimental and theoretical study proposes a novel framework, inspired by learning theory, for objectively selecting the stimuli that best unravel the neuron's dynamics. The efficacy of stimuli is assessed in terms of their ability to constrain the parameter space of biophysically detailed conductance-based models that faithfully replicate the neuron's dynamics as attested by their ability to generalize well to the neuron's response to novel experimental stimuli. We used this framework to evaluate a variety of stimuli across different cortical neuron types, ages, and animals. Despite their simplicity, a set of stimuli consisting of step and ramp current pulses outperforms synaptic-like noisy stimuli in revealing the dynamics of these neurons. The general framework that we propose paves a new way for defining, evaluating and standardizing effective electrical probing of neurons and will thus lay the foundation for a much deeper understanding of the electrical nature of these highly sophisticated and non-linear devices and of the neuronal networks that they compose.
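Step and ramp current waveforms of the kind evaluated in the study are simple to generate; the durations and amplitudes below are arbitrary illustrations:

```python
import numpy as np

def step_current(t, onset, offset, amplitude):
    """Rectangular pulse: `amplitude` for onset <= t < offset, else 0."""
    return np.where((t >= onset) & (t < offset), amplitude, 0.0)

def ramp_current(t, onset, offset, peak):
    """Linear ramp from 0 at `onset` to `peak` at `offset`, else 0."""
    ramp = peak * (t - onset) / (offset - onset)
    return np.where((t >= onset) & (t < offset), ramp, 0.0)

t = np.arange(0.0, 1.0, 1e-4)            # 1 s of stimulus at 0.1 ms steps
I_step = step_current(t, 0.2, 0.7, 0.5)  # 500 ms step, arbitrary units
I_ramp = ramp_current(t, 0.2, 0.7, 0.5)  # ramp to the same peak
```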
Six types of multistability in a neuronal model based on slow calcium current.
Directory of Open Access Journals (Sweden)
Tatiana Malashchenko
BACKGROUND: Multistability of oscillatory and silent regimes is a ubiquitous phenomenon exhibited by excitable systems such as neurons and cardiac cells. Multistability can play functional roles in short-term memory and maintaining posture. It seems to pose an evolutionary advantage for neurons which are part of multifunctional Central Pattern Generators to possess multistability. The mechanisms supporting multistability of bursting regimes are not well understood or classified. METHODOLOGY/PRINCIPAL FINDINGS: Our study is focused on determining the biophysical mechanisms underlying different types of co-existence of the oscillatory and silent regimes observed in a neuronal model. We develop a low-dimensional model typifying the dynamics of a single leech heart interneuron. We carry out a bifurcation analysis of the model and show that it possesses six different types of multistability of dynamical regimes. These types are the co-existence of (1) bursting and silence, (2) tonic spiking and silence, (3) tonic spiking and subthreshold oscillations, (4) bursting and subthreshold oscillations, (5) bursting, subthreshold oscillations and silence, and (6) bursting and tonic spiking. The first five types of multistability occur due to the presence of a separating regime that is either a saddle periodic orbit or a saddle equilibrium. We found that the parameter range wherein multistability is observed is limited by the parameter values at which the separating regimes emerge and terminate. CONCLUSIONS: We developed a neuronal model which exhibits a rich variety of different types of multistability. We described a novel mechanism supporting the bistability of bursting and silence. This neuronal model provides a unique opportunity to study the dynamics of networks with neurons possessing different types of multistability.
Directory of Open Access Journals (Sweden)
Dorea Vierling-Claassen
2010-11-01
Selective optogenetic drive of fast-spiking interneurons (FS) leads to enhanced local field potential (LFP) power across the traditional gamma frequency band (20-80 Hz; Cardin et al., 2009). In contrast, drive to regular-spiking pyramidal cells (RS) enhances power at lower frequencies, with a peak at 8 Hz. The first result is consistent with previous computational studies emphasizing the role of FS and the time constant of GABAA synaptic inhibition in gamma rhythmicity. However, the same theoretical models do not typically predict low-frequency LFP enhancement with RS drive. To develop hypotheses as to how the same network can support these contrasting behaviors, we constructed a biophysically principled network model of primary somatosensory neocortex containing FS, RS and low-threshold-spiking (LTS) interneurons. Cells were modeled with detailed cell anatomy and physiology, multiple dendritic compartments, and included active somatic and dendritic ionic currents. Consistent with prior studies, the model demonstrated gamma resonance during FS drive, dependent on the time constant of GABAA inhibition induced by synchronous FS activity. Lower-frequency enhancement during RS drive was replicated only on inclusion of an inhibitory LTS population, whose activation was critically dependent on RS synchrony and evoked longer-lasting inhibition. Our results predict that differential recruitment of FS and LTS inhibitory populations is essential to the observed cortical dynamics and may provide a means for amplifying the natural expression of distinct oscillations in normal cortical processing.
Lindner, Benjamin; Chacron, Maurice J.; Longtin, André
2017-01-01
Many neurons exhibit interval correlations in the absence of input signals. We study the influence of these intrinsic interval correlations of model neurons on their signal transmission properties. For this purpose, we employ two simple firing models, one of which generates a renewal process, while the other leads to a nonrenewal process with negative interval correlations. Different methods to solve for spectral statistics in the presence of a weak stimulus (spike train power spectra, cross spectra, and coherence functions) are presented, and their range of validity is discussed. Using these analytical results, we explore a lower bound on the mutual information rate between output spike train and input stimulus as a function of the system’s parameters. We demonstrate that negative correlations in the baseline activity can lead to enhanced information transfer of a weak signal by means of noise shaping of the background noise spectrum. We also show that an enhancement is not compulsory—for a stimulus with power exclusively at high frequencies, the renewal model can transfer more information than the nonrenewal model does. We discuss the application of our analytical results to other problems in neuroscience. Our results are also relevant to the general problem of how a signal affects the power spectrum of a nonlinear stochastic system. PMID:16196608
Mirror neurons: functions, mechanisms and models.
Oztop, Erhan; Kawato, Mitsuo; Arbib, Michael A
2013-04-12
Mirror neurons for manipulation fire both when the animal manipulates an object in a specific way and when it sees another animal (or the experimenter) perform an action that is more or less similar. Such neurons were originally found in macaque monkeys, in the ventral premotor cortex (area F5), and later also in the inferior parietal lobule. Recent neuroimaging data indicate that the adult human brain is endowed with a "mirror neuron system," putatively containing mirror neurons and other neurons, for matching the observation and execution of actions. Mirror neurons may serve action recognition in monkeys as well as humans, whereas their putative role in imitation and language may be realized in humans but not in monkeys. This article shows the important role of computational models in providing sufficient and causal explanations for the observed phenomena involving mirror systems and the learning processes which form them, and underlines the need for additional circuitry to lift up the monkey mirror neuron circuit to sustain the posited cognitive functions attributed to the human mirror neuron system.
Oscillations via Spike-Timing Dependent Plasticity in a Feed-Forward Model.
Luz, Yotam; Shamir, Maoz
2016-04-01
Neuronal oscillatory activity has been reported in relation to a wide range of cognitive processes including the encoding of external stimuli, attention, and learning. Although the specific role of these oscillations has yet to be determined, it is clear that neuronal oscillations are abundant in the central nervous system. This raises the question of the origin of these oscillations: are the mechanisms for generating these oscillations genetically hard-wired or can they be acquired via a learning process? Here, we study the conditions under which oscillatory activity emerges through a process of spike timing dependent plasticity (STDP) in a feed-forward architecture. First, we analyze the effect of oscillations on STDP-driven synaptic dynamics of a single synapse, and study how the parameters that characterize the STDP rule and the oscillations affect the resultant synaptic weight. Next, we analyze STDP-driven synaptic dynamics of a pre-synaptic population of neurons onto a single post-synaptic cell. The pre-synaptic neural population is assumed to be oscillating at the same frequency, albeit with different phases, such that the net activity of the pre-synaptic population is constant in time. Thus, in the homogeneous case in which all synapses are equal, the post-synaptic neuron receives constant input and hence does not oscillate. To investigate the transition to oscillatory activity, we develop a mean-field Fokker-Planck approximation of the synaptic dynamics. We analyze the conditions causing the homogeneous solution to lose its stability. The findings show that oscillatory activity appears through a mechanism of spontaneous symmetry breaking. However, in the general case the homogeneous solution is unstable, and the synaptic dynamics does not converge to a different fixed point, but rather to a limit cycle. We show how the temporal structure of the STDP rule determines the stability of the homogeneous solution and the drift velocity of the limit cycle.
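The single-synapse part of this analysis, the expected weight drift obtained by integrating an STDP window against the pre/post rate correlation, can be sketched numerically. The exponential antisymmetric window and all rates below are illustrative stand-ins for the paper's general kernels:

```python
import numpy as np

def stdp_window(dt, a_plus=1.0, a_minus=1.0, tau=20.0):
    """Antisymmetric STDP window W(dt), dt = t_post - t_pre in ms:
    potentiation when the presynaptic spike leads, depression otherwise."""
    return np.where(dt > 0, a_plus * np.exp(-dt / tau),
                    np.where(dt < 0, -a_minus * np.exp(dt / tau), 0.0))

def mean_drift(phase, f=10.0, r0=10.0, r1=5.0):
    """Expected drift of one synapse when pre and post Poisson rates
    oscillate at f Hz with a phase offset (radians), obtained by
    integrating W against the cross-correlation of the two rates."""
    dts = np.arange(-200.0, 200.0, 0.1)        # ms
    omega = 2.0 * np.pi * f / 1000.0           # rad/ms
    corr = r0**2 + 0.5 * r1**2 * np.cos(omega * dts - phase)
    return np.sum(stdp_window(dts) * corr) * 0.1

phases = np.linspace(0.0, 2.0 * np.pi, 16, endpoint=False)
drifts = np.array([mean_drift(p) for p in phases])
```

With this antisymmetric window, the constant part of the correlation integrates to zero and the drift depends only on the phase offset, which is why phase structure in the presynaptic population can be amplified or suppressed by the rule.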
Directory of Open Access Journals (Sweden)
Guillaume Drion
2011-05-01
Midbrain dopaminergic neurons are endowed with endogenous slow pacemaking properties. In recent years, many different groups have studied the basis for this phenomenon, often with conflicting conclusions. In particular, the role of a slowly-inactivating L-type calcium channel in the depolarizing phase between spikes is controversial, and the analysis of slow oscillatory potential (SOP) recordings during the blockade of sodium channels has led to conflicting conclusions. Based on a minimal model of a dopaminergic neuron, our analysis suggests that the same experimental protocol may lead to drastically different observations in almost identical neurons. For example, complete L-type calcium channel blockade eliminates spontaneous firing or has almost no effect in two neurons differing by less than 1% in their maximal sodium conductance. The same prediction can be reproduced in a state-of-the-art detailed model of a dopaminergic neuron. Some of these predictions are confirmed experimentally using single-cell recordings in brain slices. Our minimal model exhibits SOPs when sodium channels are blocked, these SOPs being uncorrelated with the spiking activity, as has been shown experimentally. We also show that block of a specific conductance (in this case, the SK conductance) can have a different effect on these two oscillatory behaviors (pacemaking and SOPs), despite the fact that they have the same initiating mechanism. These results highlight the fact that computational approaches, besides their well-known confirmatory and predictive interests in neurophysiology, may also be useful to resolve apparent discrepancies between experimental results.
Encoding Chaos in Neural Spike Trains
Richardson, Kristen A.; Imhoff, Thomas T.; Grigg, Peter; Collins, James J.
1998-03-01
Recently, it has been shown that interspike interval (ISI) series from driven model neurons can be used to discriminate between chaotic and stochastic inputs. Here we extend this work to in vitro experimental studies with rat cutaneous mechanoreceptors. For each of the neurons tested, we show that a chaotically driven ISI series can be distinguished from a stochastically driven ISI series on the basis of a nonlinear prediction measure. This work demonstrates that dynamical information can be preserved when an analog chaotic signal is converted into a spike train by a sensory neuron.
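The discrimination idea, that chaotic drive leaves deterministic structure in the ISI series which a nearest-neighbor predictor can exploit while shuffling destroys it, can be illustrated with a surrogate test; the logistic map stands in for the chaotic input and is not the stimulus used in the experiments:

```python
import numpy as np

def prediction_error(series, dim=3, n_test=200):
    """Normalized RMS error of one-step nearest-neighbor prediction in a
    delay embedding of the interval series."""
    x = np.asarray(series, dtype=float)
    emb = np.column_stack([x[i:len(x) - dim + i] for i in range(dim)])
    target = x[dim:]                  # value following each embedded point
    errs = []
    for i in range(len(emb) - n_test, len(emb)):
        d = np.linalg.norm(emb[:i - dim] - emb[i], axis=1)
        j = int(np.argmin(d))         # nearest past neighbor
        errs.append(target[j] - target[i])
    return float(np.sqrt(np.mean(np.square(errs))) / x.std())

rng = np.random.default_rng(1)
x = np.empty(1200)
x[0] = 0.4
for t in range(1199):
    x[t + 1] = 3.9 * x[t] * (1.0 - x[t])   # chaotic logistic map
chaotic = 0.5 + x                          # shift into a positive "ISI" range
surrogate = rng.permutation(chaotic)       # same values, structure destroyed

e_chaos = prediction_error(chaotic)
e_surr = prediction_error(surrogate)
```

The chaotic series is far more predictable than its shuffled surrogate, which is the signature used to distinguish chaotic from stochastic drive.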
The thermodynamic temperature of a rhythmic spiking network
Merolla, Paul; Arthur, John
2010-01-01
Artificial neural networks built from two-state neurons are powerful computational substrates, whose computational ability is well understood by analogy with statistical mechanics. In this work, we introduce similar analogies in the context of spiking neurons in a fixed time window, where excitatory and inhibitory inputs drawn from a Poisson distribution play the role of temperature. For single neurons with a "bandgap" between their inputs and the spike threshold, this temperature allows for stochastic spiking. By imposing a global inhibitory rhythm over the fixed time windows, we connect neurons into a network that exhibits synchronous, clock-like updating akin to artificial neural networks. We implement a single-layer Boltzmann machine without learning to demonstrate our model.
Energy Technology Data Exchange (ETDEWEB)
Rodrigues, Serafim [Department of Mathematical Sciences, Loughborough University, Leicestershire, LE11 3TU (United Kingdom); Terry, John R. [Department of Mathematical Sciences, Loughborough University, Leicestershire, LE11 3TU (United Kingdom)]. E-mail: j.r.terry@lboro.ac.uk; Breakspear, Michael [Black Dog Institute, Randwick, NSW 2031 (Australia); School of Psychiatry, UNSW, NSW 2030 (Australia)
2006-07-10
In this Letter, the genesis of spike-wave activity, a hallmark of many generalized epileptic seizures, is investigated in a reduced mean-field model of human neural activity. Drawing upon brain modelling and dynamical systems theory, we demonstrate that the thalamic circuitry of the system is crucial for the generation of these abnormal rhythms, observing that inhibition from reticular nuclei and excitation from the cortical signal interplay to generate the spike-wave oscillation. The mechanism revealed provides an explanation of why approaches based on linear stability and Heaviside approximations to the activation function have failed to explain the phenomenon of spike-wave behaviour in mean-field models. A mathematical understanding of this transition is a crucial step towards relating spiking network models and mean-field approaches to human brain modelling.
Effect of spontaneous activity on stimulus detection in a simple neuronal model.
Levakova, Marie
2016-06-01
We study what level of a continuous-valued signal can be estimated optimally on the basis of first-spike latency data. When spontaneous neuronal activity is present, the first spike after the stimulus onset may be caused either by the stimulus itself or may result from the prevailing spontaneous activity. Under certain regularity conditions, the Fisher information is the inverse of the variance of the best estimator. It can be considered a function of the signal intensity, and then indicates the accuracy of the estimation at each signal level. The Fisher information is normalized with respect to the time needed to obtain an observation. The accuracy of signal-level estimation is investigated for basic discharge patterns modelled by Poisson and renewal processes, and the impact of the complex interaction between spontaneous activity and the delay of the response is shown.
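For the simplest case, an exponentially distributed latency with rate r(s) = lam0 + gain*s (spontaneous rate plus a linear stimulus-driven term, an illustrative assumption rather than the paper's general renewal model), the time-normalized Fisher information has a closed form:

```python
import numpy as np

def fisher_per_time(s, lam0=5.0, gain=2.0):
    """First-spike latency T ~ Exp(r(s)) with r(s) = lam0 + gain*s.
    The per-observation Fisher information of an exponential with rate
    r(s) is (r'(s)/r(s))^2; dividing by the mean observation time 1/r
    gives information per unit time: gain^2 / (lam0 + gain*s)."""
    return gain**2 / (lam0 + gain * s)

s = np.linspace(0.0, 10.0, 200)
J = fisher_per_time(s)
```

In this toy case J decreases monotonically with s: the spontaneous rate sets a floor on the latency variability, so weak signals are the best-resolved ones.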
Passive Electroreception in Fish: An Analog Model of the Spike Generation Zone.
Harvey, James Robert
Sensory transduction begins in receptor cells specialized to the sensory modality involved and proceeds to the more generalized stage of the first afferent fiber, converting the initial sensory information into neural spikes for transmittal to the central nervous system. We have developed a unique analog electronic model of the generalized step (also known as the spike generation zone, SGZ) using a tunnel diode, an operational amplifier, resistors, and capacitors. With no externally applied simulated postsynaptic input current, our model represents a 10⁻³ cm² patch (100 times the typical in vivo area) of tonically active, nonadaptive, postsynaptic neural membrane that behaves as a pacemaker cell. Similar to the FitzHugh-Nagumo equations, our model is shown to be a simplification of the Hodgkin-Huxley parallel conductance model and can be analyzed by the methods of van der Pol. Measurements using the model yield results which compare favorably to physiological stimulus-response data gathered by Murray for elasmobranch electroreceptors. We then use the model to show that the main contribution to variance in the rate of neural spike output is provided by coincident inputs to the SGZ oscillator (i.e., by synaptic input noise) and not by inherent instability of the SGZ oscillator. Configured for maximum sensitivity, our model is capable of detecting stimulus changes as low as 50 fA in less than a second, corresponding to a fractional frequency change of Δf/f ≈ 2×10⁻³. Much data exists implying that in vivo detection of Δf/f is limited to the range of one to ten percent (Weber-Fechner criterion). We propose that the variance induced by the synaptic input noise provides a plausible physiological basis for the Weber-Fechner criterion.
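Since the circuit is described as a FitzHugh-Nagumo-like relaxation oscillator, its pacemaker behavior can be illustrated directly; the parameters below are the textbook FitzHugh-Nagumo values, not quantities fitted to electroreceptor data:

```python
import numpy as np

def fhn_spike_count(i_ext, t_total=200.0, dt=0.01, a=0.7, b=0.8, eps=0.08):
    """Euler-integrate the FitzHugh-Nagumo model and count upward
    crossings of v = 1 (a simple spike criterion)."""
    v, w = -1.2, -0.625          # near the resting state for i_ext = 0
    count, above = 0, False
    for _ in range(int(t_total / dt)):
        dv = v - v**3 / 3.0 - w + i_ext
        dw = eps * (v + a - b * w)
        v += dv * dt
        w += dw * dt
        if v > 1.0 and not above:
            count += 1
        above = v > 1.0
    return count

n_rest = fhn_spike_count(0.0)    # excitable regime: silent
n_tonic = fhn_spike_count(0.6)   # oscillatory regime: tonic pacemaking
```

A bias current moves the system from an excitable rest state into a limit cycle, the same qualitative transition a synaptic input current produces in the analog SGZ circuit.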
Truccolo, Wilson; Eden, Uri T; Fellows, Matthew R; Donoghue, John P; Brown, Emery N
2005-02-01
Multiple factors simultaneously affect the spiking activity of individual neurons. Determining the effects and relative importance of these factors is a challenging problem in neurophysiology. We propose a statistical framework based on the point process likelihood function to relate a neuron's spiking probability to three typical covariates: the neuron's own spiking history, concurrent ensemble activity, and extrinsic covariates such as stimuli or behavior. The framework uses parametric models of the conditional intensity function to define a neuron's spiking probability in terms of the covariates. The discrete time likelihood function for point processes is used to carry out model fitting and model analysis. We show that, by modeling the logarithm of the conditional intensity function as a linear combination of functions of the covariates, the discrete time point process likelihood function is readily analyzed in the generalized linear model (GLM) framework. We illustrate our approach for both GLM and non-GLM likelihood functions using simulated data and multivariate single-unit activity data simultaneously recorded from the motor cortex of a monkey performing a visuomotor pursuit-tracking task. The point process framework provides a flexible, computationally efficient approach for maximum likelihood estimation, goodness-of-fit assessment, residual analysis, model selection, and neural decoding. The framework thus allows for the formulation and analysis of point process models of neural spiking activity that readily capture the simultaneous effects of multiple covariates and enables the assessment of their relative importance.
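The core recipe, modeling the log conditional intensity as linear in covariates and maximizing the discrete-time point-process likelihood, fits in a few lines. The simulated neuron, the single-bin spike-history term, and the Newton solver below are a self-contained sketch under assumed parameters, not the paper's recorded-data pipeline:

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulate a neuron whose log conditional intensity is linear in a stimulus
# covariate and its own spike history (one 1 ms bin back).
n, dt = 20000, 0.001
stim = rng.standard_normal(n)
b_true = np.array([np.log(50.0), 0.5, -1.0])    # baseline, stimulus, history
spikes = np.zeros(n)
for t in range(1, n):
    lam = np.exp(b_true[0] + b_true[1] * stim[t] + b_true[2] * spikes[t - 1])
    spikes[t] = float(rng.random() < lam * dt)  # Bernoulli approximation

# Design matrix: constant, stimulus, previous-bin spike history.
X = np.column_stack([np.ones(n - 1), stim[1:], spikes[:-1]])
y = spikes[1:]

# Newton's method on the discrete-time point-process log likelihood
# sum_t [ y_t log(lam_t dt) - lam_t dt ] with log link lam = exp(X @ b).
b = np.array([np.log(y.mean() / dt), 0.0, 0.0])  # start at the mean rate
for _ in range(25):
    mu = np.exp(X @ b) * dt                      # expected spikes per bin
    grad = X.T @ (y - mu)
    hess = (X * mu[:, None]).T @ X
    b += np.linalg.solve(hess, grad)
```

With the log link this is exactly a Poisson GLM, so the fitted `b` recovers the baseline rate, the stimulus gain, and the inhibitory history term from the spike train alone.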
Directory of Open Access Journals (Sweden)
Cengiz Günay
2015-05-01
Studying ion channel currents generated distally from the recording site is difficult because of artifacts caused by poor space clamp and membrane filtering. A computational model can quantify artifact parameters for correction by simulating the currents, but only if their exact anatomical location is known. We propose that the same artifacts that confound current recordings can help pinpoint the source of those currents by providing a signature of the neuron's morphology. This method can improve the recording quality of currents initiated at the spike initiation zone (SIZ), which is often distal to the soma in invertebrate neurons. Because Drosophila is a valuable tool for characterizing ion currents, we estimated the SIZ location and quantified artifacts in an identified motoneuron, aCC/MN1-Ib, by constructing a novel multicompartmental model. Initial simulation of the measured biophysical channel properties in an isopotential Hodgkin-Huxley type neuron model partially replicated firing characteristics. Adding a second, distal compartment containing spike-generating Na+ and K+ currents was sufficient to simulate aCC's in vivo activity signature. Matching this signature using a reconstructed morphology predicted that the SIZ lies on aCC's primary axon, 70 μm after the most distal dendritic branching point. From SIZ to soma, we observed and quantified selective morphological filtering of fast-activating currents. Non-inactivating K+ currents are filtered about 3 times less, and despite their large magnitude at the soma they could be as distal as the Na+ currents. The peak of the transient component (NaT) of the voltage-activated Na+ current is also filtered more than the magnitude of the slower persistent component (NaP), which can contribute to seizures. The corrected NaP/NaT ratio explains the previously observed discrepancy when the same channel is expressed in different cells. In summary, we used an in vivo signature to estimate ion channel location and recording
Hindriks, Rikkert; Meijer, Hil G E; van Gils, Stephan A; van Putten, Michel J A M
2013-01-01
The EEG of patients in non-convulsive status epilepticus (NCSE) often displays delta oscillations or generalized spike-wave discharges. In some patients, these delta oscillations coexist with intermittent epileptic spikes. In this study we verify the prediction of a computational model of the thalamo-cortical system that these spikes are phase-locked to the delta oscillations. We subsequently describe the physiological mechanism underlying this observation as suggested by the model. It is suggested that the spikes reflect inhibitory stochastic fluctuations in the input to thalamo-cortical relay neurons and that phase-locking is a consequence of differential excitability of relay neurons over the delta cycle. Further analysis shows that the observed phase-locking can be regarded as a stochastic precursor of generalized spike-wave discharges. This study thus provides an explanation of intermittent spikes during delta oscillations in NCSE and might be generalized to other encephalopathies in which delta activity can be observed.
Nakano, Takashi; Otsuka, Makoto; Yoshimoto, Junichiro; Doya, Kenji
2015-01-01
A theoretical framework of reinforcement learning plays an important role in understanding action selection in animals. Spiking neural networks provide a theoretically grounded means to test computational hypotheses on neurally plausible algorithms of reinforcement learning through numerical simulation. However, most of these models cannot handle observations which are noisy, or occurred in the past, even though these are inevitable and constraining features of learning in real environments. This class of problem is formally known as partially observable reinforcement learning (PORL) problems. It provides a generalization of reinforcement learning to partially observable domains. In addition, observations in the real world tend to be rich and high-dimensional. In this work, we use a spiking neural network model to approximate the free energy of a restricted Boltzmann machine and apply it to the solution of PORL problems with high-dimensional observations. Our spiking network model solves maze tasks with perceptually ambiguous high-dimensional observations without knowledge of the true environment. An extended model with working memory also solves history-dependent tasks. The way spiking neural networks handle PORL problems may provide a glimpse into the underlying laws of neural information processing which can only be discovered through such a top-down approach. PMID:25734662
Context-aware modeling of neuronal morphologies
Directory of Open Access Journals (Sweden)
Benjamin eTorben-Nielsen
2014-09-01
Full Text Available Neuronal morphologies are pivotal for brain functioning: physical overlap between dendrites and axons constrains the circuit topology, and the precise shape and composition of dendrites determine the integration of inputs to produce an output signal. At the same time, morphologies are highly diverse and variable. The variance presumably originates from neurons developing in a densely packed brain substrate, where they interact (e.g., by repulsion or attraction) with other actors in this substrate. However, when neurons are studied, their context is never part of the analysis, and they are treated as if they existed in isolation. Here we argue that to fully understand neuronal morphology and its variance it is important to consider neurons in relation to each other and to other actors in the surrounding brain substrate, i.e., their context. We propose a context-aware computational framework, NeuroMaC, in which large numbers of neurons can be grown simultaneously according to growth rules expressed in terms of interactions between the developing neuron and the surrounding brain substrate. As a proof of principle, we demonstrate that NeuroMaC can generate accurate virtual morphologies of distinct classes, both in isolation and as part of neuronal forests. Accuracy is validated against population statistics of experimentally reconstructed morphologies. We show that context-aware generation of neurons can explain characteristics of variation. Indeed, plausible variation is an inherent property of the morphologies generated by context-aware rules. We speculate about the applicability of this framework to investigate morphologies and circuits, to classify healthy and pathological morphologies, and to generate large quantities of morphologies for large-scale modeling.
Estimating extracellular spike waveforms from CA1 pyramidal cells with multichannel electrodes.
Molden, Sturla; Moldestad, Olve; Storm, Johan F
2013-01-01
Extracellular (EC) recordings of action potentials from the intact brain are embedded in background voltage fluctuations known as the "local field potential" (LFP). In order to use EC spike recordings for studying biophysical properties of neurons, the spike waveforms must be separated from the LFP. Linear low-pass and high-pass filters are usually insufficient to separate spike waveforms from the LFP, because the two have overlapping frequency bands. Broad-band recordings of LFP and spikes were obtained with a 16-channel laminar electrode array (silicone probe). We developed an algorithm whereby the local LFP signal from a spike-containing channel is modeled using locally weighted polynomial regression analysis of adjoining channels without spikes. The modeled LFP signal was subtracted from the recording to estimate the embedded spike waveforms. We tested the method both on defined spike waveforms added to LFP recordings, and on in vivo-recorded extracellular spikes from hippocampal CA1 pyramidal cells in anaesthetized mice. We show that the algorithm can correctly extract the spike waveforms embedded in the LFP. In contrast, traditional high-pass filters failed to recover correct spike shapes, albeit producing smaller standard errors. We found that high-pass RC or 2-pole Butterworth filters with cut-off frequencies below 12.5 Hz are required to retrieve waveforms comparable to our method. The method was also compared to spike-triggered averages of the broad-band signal, and yielded waveforms with smaller standard errors and less distortion before and after the spike.
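The core of the algorithm, predicting the LFP on a spike-containing channel from spike-free neighbours and subtracting it, can be sketched on synthetic data. A plain polynomial fit across the channel axis stands in for the paper's locally weighted regression, and the signal shapes are invented for illustration.

```python
import numpy as np

# Synthetic 16-channel laminar recording: a slow LFP whose amplitude varies
# smoothly with depth, plus a brief spike on one channel.  The LFP on the
# spike channel is predicted from the other channels and subtracted.
n_ch, n_t = 16, 400
t = np.arange(n_t)
ch = np.arange(n_ch)

# LFP: slow oscillation, amplitude a smooth (linear) function of depth
lfp = np.outer(1.0 + 0.1 * ch, np.sin(2 * np.pi * t / 200.0))
spike_ch = 8
true_spike = -3.0 * np.exp(-0.5 * ((t - 200) / 3.0) ** 2)  # brief negative spike
data = lfp.copy()
data[spike_ch] += true_spike

other = ch != spike_ch                 # spike-free channels
est_lfp = np.empty(n_t)
for k in range(n_t):
    # quadratic fit of voltage vs. channel index, sample by sample
    coef = np.polyfit(ch[other], data[other, k], deg=2)
    est_lfp[k] = np.polyval(coef, spike_ch)

est_spike = data[spike_ch] - est_lfp   # LFP-subtracted spike waveform
err = np.abs(est_spike - true_spike).max()
```

Because the synthetic LFP varies polynomially across channels, the subtraction here recovers the spike essentially exactly; real data would need the local weighting and noise handling described in the paper.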
Shan, Bonan; Wang, Jiang; Deng, Bin; Wei, Xile; Yu, Haitao; Zhang, Zhen; Li, Huiyan
2016-07-01
This paper proposes an epilepsy detection and closed-loop control strategy based on the Particle Swarm Optimization (PSO) algorithm. The proposed strategy can effectively suppress the epileptic spikes in neural mass models, where the epileptiform spikes are recognized as biomarkers of transitions from normal (interictal) activity to seizure (ictal) activity. The PSO algorithm accurately estimates the time evolution of key model parameters and reliably detects all the epileptic spikes. The estimation of unmeasurable parameters is improved significantly compared with the unscented Kalman filter. When the estimated excitatory-inhibitory ratio exceeds a threshold value, the epileptiform spikes can be inhibited immediately by a proportional-integral controller. Finally, numerical simulations are carried out to illustrate the effectiveness of the proposed method, as well as its potential value for model-based early seizure detection and the design of closed-loop control treatments.
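The estimation side relies on standard particle swarm optimization. A minimal PSO loop, with a toy one-dimensional cost standing in for the neural-mass parameter-estimation problem, might look like this; the coefficients are conventional textbook choices, not the paper's settings.

```python
import numpy as np

# Minimal PSO sketch: particles search parameter space to minimize a cost.
# The "model fit" here is a toy quadratic with true parameter 3.7.
rng = np.random.default_rng(1)

def cost(p):
    return (p - 3.7) ** 2   # stand-in for the model-fitting error

n_particles, n_iter = 20, 60
pos = rng.uniform(-10, 10, n_particles)      # candidate parameter values
vel = np.zeros(n_particles)
pbest = pos.copy()                           # each particle's best position
pbest_cost = cost(pbest)
gbest = pbest[pbest_cost.argmin()]           # swarm-wide best position

w, c1, c2 = 0.7, 1.5, 1.5                    # inertia, cognitive, social weights
for _ in range(n_iter):
    r1, r2 = rng.random(n_particles), rng.random(n_particles)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    c = cost(pos)
    better = c < pbest_cost
    pbest[better], pbest_cost[better] = pos[better], c[better]
    gbest = pbest[pbest_cost.argmin()]
```

In the paper's setting, `cost` would measure the mismatch between the neural mass model's output and the recorded signal, and the estimated excitatory-inhibitory ratio would then be thresholded for detection.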
Wyatt, Tanya J; Rossi, Sharyn L; Siegenthaler, Monica M; Frame, Jennifer; Robles, Rockelle; Nistor, Gabriel; Keirstead, Hans S
2011-01-01
Motor neuron loss is characteristic of many neurodegenerative disorders and results in rapid loss of muscle control, paralysis, and eventual death in severe cases. In order to investigate the neurotrophic effects of a motor neuron lineage graft, we transplanted human embryonic stem cell-derived motor neuron progenitors (hMNPs) and examined their histopathological effect in three animal models of motor neuron loss. Specifically, we transplanted hMNPs into rodent models of SMA (Δ7SMN), ALS (SOD1 G93A), and spinal cord injury (SCI). The transplanted cells survived and differentiated in all models. In addition, we have also found that hMNPs secrete physiologically active growth factors in vivo, including NGF and NT-3, which significantly enhanced the number of spared endogenous neurons in all three animal models. The ability to maintain dying motor neurons by delivering motor neuron-specific neurotrophic support represents a powerful treatment strategy for diseases characterized by motor neuron loss.
An introduction to modeling neuronal dynamics
Börgers, Christoph
2017-01-01
This book is intended as a text for a one-semester course on Mathematical and Computational Neuroscience for upper-level undergraduate and beginning graduate students of mathematics, the natural sciences, engineering, or computer science. An undergraduate introduction to differential equations is more than enough mathematical background. Only a slim, high school-level background in physics is assumed, and none in biology. Topics include models of individual nerve cells and their dynamics, models of networks of neurons coupled by synapses and gap junctions, origins and functions of population rhythms in neuronal networks, and models of synaptic plasticity. An extensive online collection of Matlab programs generating the figures accompanies the book.
DYNAMICS IN A CLASS OF NEURON MODELS
Institute of Scientific and Technical Information of China (English)
Wang Junping; Ruan Jiong
2009-01-01
In this paper, we investigate the dynamics of a class of discrete-time neuron models. The neuron model we discuss, defined by a periodic input-output mapping such as a sinusoidal function, has a remarkably larger memory capacity than the conventional association system with a monotonic function. Our results show that the orbit of the model follows a conventional bifurcation route, from a stable equilibrium, through periodicity, to a chaotic regime. The theoretical analysis is verified by numerical simulations.
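The paper's model class is not fully specified in the abstract, but the reported route from a stable equilibrium toward chaos can be reproduced with a generic sinusoidal map; the specific map and parameter values below are illustrative assumptions, not the authors' model.

```python
import numpy as np

def orbit(mu, x0=0.5, n=400):
    # iterate the sinusoidal map x_{k+1} = mu * sin(x_k)
    xs = np.empty(n)
    x = x0
    for k in range(n):
        x = mu * np.sin(x)
        xs[k] = x
    return xs

small = orbit(0.5)   # |mu| < 1: the origin is a stable equilibrium
large = orbit(3.0)   # larger mu: the orbit no longer settles to a point
```

Sweeping `mu` between these values and plotting the tail of each orbit traces the usual bifurcation diagram, from a fixed point through periodic windows to irregular motion.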
Directory of Open Access Journals (Sweden)
William H Barnett
Full Text Available The dynamics of individual neurons are crucial for producing functional activity in neuronal networks. An open question is how temporal characteristics can be controlled in bursting activity and in transient neuronal responses to synaptic input. Bifurcation theory provides a framework to discover generic mechanisms addressing this question. We present a family of mechanisms organized around a global codimension-2 bifurcation. The cornerstone bifurcation is located at the intersection of the border between bursting and spiking and the border between bursting and silence. These borders correspond to the blue sky catastrophe and the saddle-node bifurcation on an invariant circle (SNIC) curves, respectively. The cornerstone bifurcation satisfies the conditions for both the blue sky catastrophe and the SNIC. The burst duration and interburst interval increase as the inverse of the square root of the difference between the corresponding bifurcation parameter and its bifurcation value. For a given burst duration and interburst interval, one can find the parameter values supporting these temporal characteristics. The cornerstone bifurcation also determines the responses of silent and spiking neurons. In a silent neuron with parameters close to the SNIC, a pulse of current triggers a single burst. In a spiking neuron with parameters close to the blue sky catastrophe, a pulse of current temporarily silences the neuron. These responses are stereotypical: the durations of the transient intervals (the duration of the burst and the latency to spiking) are governed by the inverse-square-root laws. The mechanisms described here could be used to coordinate neuromuscular control in central pattern generators. As proof of principle, we construct small networks that control the metachronal-wave motor pattern exhibited in locomotion. This pattern is determined by the phase relations of bursting neurons in a simple central pattern generator
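The inverse-square-root law quoted above can be checked numerically on the saddle-node normal form, which governs the slow passage through the bottleneck near a SNIC. This sketch is generic, not the paper's specific neuron model: near the bifurcation the dynamics reduce to dx/dt = mu + x^2, whose passage time scales as 1/sqrt(mu).

```python
import numpy as np

def passage_time(mu, x0=-10.0, x1=10.0, dt=1e-4):
    # brute-force Euler integration of dx/dt = mu + x^2 from x0 to x1;
    # this passage time is the bottleneck contribution to the period
    x, t = x0, 0.0
    while x < x1:
        x += dt * (mu + x * x)
        t += dt
    return t

mus = np.array([0.01, 0.04, 0.16])
times = np.array([passage_time(m) for m in mus])
scaled = times * np.sqrt(mus)   # roughly constant (analytically -> pi)
```

The product `times * sqrt(mus)` is nearly constant across two orders of magnitude in the distance to the bifurcation, which is exactly the scaling the abstract attributes to burst duration and interburst interval.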
Spatiotemporal Dynamics and Reliable Computations in Recurrent Spiking Neural Networks
Pyle, Ryan; Rosenbaum, Robert
2017-01-01
Randomly connected networks of excitatory and inhibitory spiking neurons provide a parsimonious model of neural variability, but are notoriously unreliable for performing computations. We show that this difficulty is overcome by incorporating the well-documented dependence of connection probability on distance. Spatially extended spiking networks exhibit symmetry-breaking bifurcations and generate spatiotemporal patterns that can be trained to perform dynamical computations under a reservoir computing framework.
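The key ingredient of this work, connection probability that decays with distance, can be sketched as follows. The ring geometry, Gaussian profile, and parameter values are illustrative assumptions rather than the paper's network.

```python
import numpy as np

# Distance-dependent random connectivity on a ring of neurons: the
# connection probability falls off as a Gaussian of pairwise distance.
rng = np.random.default_rng(2)
n = 200
pos = np.arange(n) / n                      # positions on a unit ring
d = np.abs(pos[:, None] - pos[None, :])
d = np.minimum(d, 1.0 - d)                  # wrap-around (ring) distance
p0, sigma = 0.8, 0.05                       # peak probability and width
prob = p0 * np.exp(-d**2 / (2 * sigma**2))  # Gaussian fall-off
np.fill_diagonal(prob, 0.0)                 # no self-connections
conn = rng.random((n, n)) < prob            # sampled adjacency matrix

near = conn[d < 0.05].mean()   # empirical connection rate at short range
far = conn[d > 0.25].mean()    # and at long range
```

Networks built this way are dense locally and sparse at long range, which is what permits the symmetry-breaking spatiotemporal patterns described in the abstract.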
Kong, Weiwei; Wang, Binghe; Lei, Yang
2015-07-01
Fusion of infrared and visible images is an active research area in image processing, and a variety of relevant algorithms have been developed. However, existing techniques often cannot achieve good fusion performance and acceptable computational complexity simultaneously. This paper proposes a novel image fusion approach that integrates the non-subsampled shearlet transform (NSST) with the spiking cortical model (SCM) to overcome these drawbacks. On the one hand, using the NSST for decomposition and reconstruction is not only consistent with characteristics of human vision, but also effectively reduces the computational complexity compared with currently popular multi-resolution analysis tools such as the non-subsampled contourlet transform (NSCT). On the other hand, the SCM, which has recently been regarded as an effective neural network model, is responsible for the fusion of sub-images from different scales and directions. Experimental results indicate that the proposed method is promising: it significantly improves fusion quality, in both subjective visual performance and objective comparisons, relative to other currently popular methods.
Dynamical laser spike processing
Shastri, Bhavin J; Tait, Alexander N; Rodriguez, Alejandro W; Wu, Ben; Prucnal, Paul R
2015-01-01
Novel materials and devices in photonics have the potential to revolutionize optical information processing, beyond conventional binary-logic approaches. Laser systems offer a rich repertoire of useful dynamical behaviors, including the excitable dynamics also found in the time-resolved "spiking" of neurons. Spiking reconciles the expressiveness and efficiency of analog processing with the robustness and scalability of digital processing. We demonstrate that graphene-coupled laser systems offer a unified low-level spike optical processing paradigm that goes well beyond previously studied laser dynamics. We show that this platform can simultaneously exhibit logic-level restoration, cascadability and input-output isolation---fundamental challenges in optical information processing. We also implement low-level spike-processing tasks that are critical for higher level processing: temporal pattern detection and stable recurrent memory. We study these properties in the context of a fiber laser system, but the addit...
Statistics of a leaky integrate-and-fire model of neurons driven by dichotomous noise
Mankin, Romi; Lumi, Neeme
2016-05-01
The behavior of a stochastic leaky integrate-and-fire model of neurons is considered. The effect of temporally correlated random neuronal input is modeled as a colored two-level (dichotomous) Markovian noise. Relying on the Riemann method, exact expressions for the output interspike interval density and for the serial correlation coefficient are derived, and their dependence on noise parameters (such as correlation time and amplitude) is analyzed. In particular, noise-induced sign reversal and a resonancelike amplification of the kurtosis of the interspike interval distribution are established. The features of spike statistics analytically revealed in our study are compared with recently obtained results for a perfect integrate-and-fire neuron model.
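The model can also be sampled directly by Monte Carlo simulation, a useful cross-check on exact expressions of this kind. The sketch below simulates a leaky integrate-and-fire neuron driven by two-state (dichotomous) Markovian noise and collects interspike intervals; all parameter values are illustrative, not those analyzed in the paper.

```python
import numpy as np

# Leaky integrate-and-fire neuron with dichotomous noise: the input eta
# jumps between +A and -A at a fixed Markovian switching rate.
rng = np.random.default_rng(3)
dt = 1e-3
tau, mu, A = 10.0, 1.2, 0.5     # membrane time constant, drift, noise amplitude
v_th, v_reset = 1.0, 0.0        # spike threshold and reset
switch_rate = 0.5               # rate of jumps between the two noise levels

v, eta = 0.0, A
isis, last_spike, t = [], 0.0, 0.0
for _ in range(500_000):
    t += dt
    if rng.random() < switch_rate * dt:
        eta = -eta              # dichotomous noise flips sign
    v += dt * (-(v / tau) + mu + eta)
    if v >= v_th:
        v = v_reset
        isis.append(t - last_spike)
        last_spike = t

isis = np.array(isis)
cv = isis.std() / isis.mean()   # ISI variability induced by the noise
```

Histogramming `isis` gives an empirical estimate of the interspike interval density whose moments (and kurtosis) could be compared against the exact formulas.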
Chimera-like states in a neuronal network model of the cat brain
Santos, M. S.; Szezech, J. D.; Borges, F. S.; Iarosz, K. C.; Caldas, I. L.; Batista, A. M.; Viana, R. L.; Kurths, J.
2017-08-01
Neuronal systems have been modeled by complex networks at different levels of description. Recently, it has been verified that networks can simultaneously exhibit one coherent and one incoherent domain, a state known as a chimera state. In this work, we study the existence of chimera states in a network whose connectivity matrix is based on the cat cerebral cortex. The cerebral cortex of the cat can be divided into 65 cortical areas organised into four cognitive regions: visual, auditory, somatosensory-motor and frontolimbic. We consider a network where the local dynamics is given by the Hindmarsh-Rose model. The Hindmarsh-Rose equations are a well-known model of neuronal activity that has been used to simulate the membrane potential of neurons. Here, we analyse under which conditions chimera states are present, as well as the effects that the coupling intensity induces on them. We observe the existence of chimera states in which the incoherent domain can be composed of desynchronised spikes or desynchronised bursts. Moreover, we find that chimera states with desynchronised bursts are more robust to neuronal noise than those with desynchronised spikes.
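The local dynamics in this network is the Hindmarsh-Rose model. A single uncoupled neuron with classic bursting parameter values can be simulated as follows; the cat-cortex connectivity itself is not reproduced here, and the integration scheme and parameter set are standard textbook choices.

```python
import numpy as np

# Single Hindmarsh-Rose neuron, forward-Euler integration, with widely
# used parameter values that produce bursting at this injected current.
a, b, c, d = 1.0, 3.0, 1.0, 5.0
r, s, x_r, I = 0.006, 4.0, -1.6, 3.25
dt, n = 0.005, 400_000            # 2000 time units total

x, y, z = -1.6, 0.0, 0.0          # membrane, recovery, adaptation variables
xs = np.empty(n)
for k in range(n):
    dx = y - a * x**3 + b * x**2 - z + I
    dy = c - d * x**2 - y
    dz = r * (s * (x - x_r) - z)
    x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
    xs[k] = x

# count upward threshold crossings of the membrane variable as spikes
spikes = int(np.sum((xs[1:] >= 1.0) & (xs[:-1] < 1.0)))
```

Plotting `xs` shows the clustered spikes (bursts) separated by quiescent intervals; in the paper, many such units are coupled through the cortical connectivity matrix, and coherence is then measured across areas.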
Spike-based VLSI modeling of the ILD system in the echolocating bat.
Horiuchi, T; Hynna, K
2001-01-01
The azimuthal localization of objects by echolocating bats is based on the difference of echo intensity received at the two ears, known as the interaural level difference (ILD). Mimicking the neural circuitry in the bat associated with the computation of ILD, we have constructed a spike-based VLSI model that can produce responses similar to those seen in the lateral superior olive (LSO) and some parts of the inferior colliculus (IC). We further explore some of the interesting computational consequences of the dynamics of both synapses and cellular mechanisms.
An improved cosmological model fitting of Planck data with a dark energy spike
Park, Chan-Gyung
2015-01-01
The $\\Lambda$ cold dark matter ($\\Lambda\\textrm{CDM}$) model is currently known as the simplest cosmology model that best describes observations with minimal number of parameters. Here we introduce a cosmology model that is preferred over the conventional $\\Lambda\\textrm{CDM}$ one by constructing dark energy as the sum of the cosmological constant $\\Lambda$ and the additional fluid that is designed to have an extremely short transient spike in energy density during the radiation-matter equality era and the early scaling behavior with radiation and matter densities. The density parameter of the additional fluid is defined as a Gaussian function plus a constant in logarithmic scale-factor space. Searching for the best-fit cosmological parameters in the presence of such a dark energy spike gives a far smaller chi-square value by about five times the number of additional parameters introduced and narrower constraints on matter density and Hubble constant compared with the best-fit $\\Lambda\\textrm{CDM}$ model. The...
Gardner, Richard J; Hughes, Stuart W; Jones, Matthew W
2013-11-20
The 8-15 Hz thalamocortical oscillations known as sleep spindles are a universal feature of mammalian non-REM sleep, during which they are presumed to shape activity-dependent plasticity in neocortical networks. The cortex is hypothesized to contribute to initiation and termination of spindles, but the mechanisms by which it implements these roles are unknown. We used dual-site local field potential and multiple single-unit recordings in the thalamic reticular nucleus (TRN) and medial prefrontal cortex (mPFC) of freely behaving rats at rest to investigate thalamocortical network dynamics during natural sleep spindles. During each spindle epoch, oscillatory activity in mPFC and TRN increased in frequency from onset to offset, accompanied by a consistent phase precession of TRN spike times relative to the cortical oscillation. In mPFC, the firing probability of putative pyramidal cells was highest at spindle initiation and termination times. We thus identified "early" and "late" cell subpopulations and found that they had distinct properties: early cells generally fired in synchrony with TRN spikes, whereas late cells fired in antiphase to TRN activity and also had higher firing rates than early cells. The accelerating and highly structured temporal pattern of thalamocortical network activity over the course of spindles therefore reflects the engagement of distinct subnetworks at specific times across spindle epochs. We propose that early cortical cells serve a synchronizing role in the initiation and propagation of spindle activity, whereas the subsequent recruitment of late cells actively antagonizes the thalamic spindle generator by providing asynchronous feedback.
Saleeon, Wachirapong; Jansri, Ukkrit; Srikiatkhachorn, Anan; Bongsebandhu-phubhakdi, Saknan
2016-02-01
Many women experience menstrual migraines that develop into recurrent migraine attacks during menstruation. In the human menstrual cycle, the estrogen level fluctuates according to changes in the follicular and luteal phases. The rat estrous cycle is used as an animal model to study the effects of estrogen fluctuation. To investigate whether the estrous cycle is involved in migraine development, we compared the neuronal excitability of trigeminal ganglion (TG) neurons in each stage of the estrous cycle. Female rats were divided into four experimental groups based on examinations of the cytologies of vaginal smears and serum analyses of estrogen levels for each stage of the estrous cycle. The rats in each stage of the estrous cycle were anesthetized and their trigeminal ganglia were removed. The collected trigeminal ganglia were cultured for two to three hours, after which whole-cell patch clamp recordings were performed to estimate the electrophysiological properties of the TG neurons. There were many vaginal epithelial cells and high estrogen levels in the proestrus and estrus stages of the estrous cycle. Electrophysiological studies revealed that the TG neurons in the proestrus and estrus stages exhibited significantly lower thresholds of stimulation, and a significant increase in total spikes, compared to TG neurons collected in the diestrus stage. Our results revealed that high estrogen levels in the proestrus and estrus stages altered the thresholds, rheobases, and total spikes of the TG neurons. High estrogen levels in the estrous cycle induced an increase in neuronal excitability and the peripheral sensitization of TG neurons. These findings may explain the correlation of estrogen fluctuations during the menstrual cycle with the pathogenesis of menstrual migraines.
Stromatias, Evangelos
2011-01-01
Spiking neural networks have been referred to as the third generation of artificial neural networks, in which information is coded in the timing of spikes. A number of different spiking neuron models are available, categorized by their level of abstraction. In addition, there are two known learning methods, unsupervised and supervised learning. This thesis focuses on supervised learning, for which a new algorithm based on genetic algorithms is proposed. The proposed algorithm is able to train both synaptic weights and delays, and also allows each neuron to emit multiple spikes, thus taking full advantage of the spatio-temporal coding power of spiking neurons. In addition, limited synaptic precision is applied: only six bits are used to describe and train a synapse, three bits for the weights and three bits for the delays. Two limited-precision schemes are investigated. The proposed algorithm is tested on the XOR classification problem, where it produces better results for even smaller netwo...
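The six-bit synapse constraint amounts to rounding each weight and delay onto one of eight discrete levels. A sketch of such 3-bit quantization follows; the value ranges (weights in [-1, 1], delays in 0-7 ms) are assumptions for illustration, not the thesis's scheme.

```python
import numpy as np

# 3-bit quantization: map a continuous value onto one of 2**3 = 8 evenly
# spaced levels in [lo, hi], as required when a synapse is stored in six
# bits (three for its weight, three for its delay).
def quantize(x, lo, hi, bits=3):
    levels = 2 ** bits
    idx = np.clip(np.round((x - lo) / (hi - lo) * (levels - 1)), 0, levels - 1)
    return lo + idx * (hi - lo) / (levels - 1)

rng = np.random.default_rng(4)
w = rng.uniform(-1.0, 1.0, 10)      # candidate synaptic weights
dly = rng.uniform(0.0, 7.0, 10)     # candidate delays (ms)
wq = quantize(w, -1.0, 1.0)         # 8 weight levels
dq = quantize(dly, 0.0, 7.0)        # 8 delay levels (0..7 ms, 1 ms steps)

# number of distinct representable weight values
n_w_levels = len(np.unique(quantize(np.linspace(-1, 1, 1000), -1.0, 1.0)))
```

A genetic algorithm operating under this constraint searches only the discrete grid of quantized weight-delay pairs, so each synapse's chromosome entry really does fit in six bits.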