Optimal Detection of a Localized Perturbation in Random Networks of Integrate-and-Fire Neurons
Bernardi, Davide; Lindner, Benjamin
2017-06-01
Experimental and theoretical studies suggest that cortical networks are chaotic and that coding relies on averages over large populations. However, there is evidence that rats can respond to the brief stimulation of a single cortical cell, a fact that theory has not explained. We study the effects of single-cell stimulation on a large recurrent network of integrate-and-fire neurons and propose a simple way to detect the perturbation. Detection rates obtained from simulations and analytical estimates are similar to experimental response rates if the readout is slightly biased towards specific neurons. Near-optimal detection is attained for a broad range of intermediate values of the mean coupling between neurons.
Complex Behavior in an Integrate-and-Fire Neuron Model Based on Small World Networks
International Nuclear Information System (INIS)
Lin Min; Chen Tianlun
2005-01-01
Based on our previously introduced pulse-coupled integrate-and-fire neuron model in small-world networks, we investigate the complex behavior of electroencephalographic (EEG)-like activities produced by such a model. We find that the EEG-like activities have clear chaotic characteristics. We also analyze these signals with methods such as spectral analysis, phase-space reconstruction, and estimation of the correlation dimension.
Noise adaptation in integrate-and-fire neurons.
Rudd, M E; Brown, L G
1997-07-01
The statistical spiking response of an ensemble of identically prepared stochastic integrate-and-fire neurons to a rectangular input current plus gaussian white noise is analyzed. It is shown that, on average, integrate-and-fire neurons adapt to the root-mean-square noise level of their input. This phenomenon is referred to as noise adaptation. Noise adaptation is characterized by a decrease in the average neural firing rate and an accompanying decrease in the average value of the generator potential, both of which can be attributed to noise-induced resets of the generator potential mediated by the integrate-and-fire mechanism. A quantitative theory of noise adaptation in stochastic integrate-and-fire neurons is developed. It is shown that integrate-and-fire neurons, on average, produce transient spiking activity whenever there is an increase in the level of their input noise. This transient noise response is either reduced or eliminated over time, depending on the parameters of the model neuron. Analytical methods are used to prove that nonleaky integrate-and-fire neurons totally adapt to any constant input noise level, in the sense that their asymptotic spiking rates are independent of the magnitude of their input noise. For leaky integrate-and-fire neurons, the long-run noise adaptation is not total, but the response to noise is partially eliminated. Expressions for the probability density function of the generator potential and the first two moments of the potential distribution are derived for the particular case of a nonleaky neuron driven by gaussian white noise of mean zero and constant variance. The functional significance of noise adaptation for the performance of networks comprising integrate-and-fire neurons is discussed.
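The claim that nonleaky integrate-and-fire neurons totally adapt to any constant noise level can be checked numerically. The following is a minimal Euler-Maruyama sketch (hypothetical parameter values, not the authors' code): the asymptotic firing rate stays near mu/theta regardless of the noise amplitude sigma.

```python
import numpy as np

def nonleaky_if_rate(mu, sigma, theta=1.0, dt=5e-4, T=100.0, seed=0):
    """Perfect (nonleaky) integrate-and-fire neuron driven by Gaussian
    white noise; returns the mean firing rate from a single long run."""
    rng = np.random.default_rng(seed)
    steps = int(T / dt)
    noise = sigma * np.sqrt(dt) * rng.standard_normal(steps)
    v, spikes = 0.0, 0
    for xi in noise:
        v += mu * dt + xi
        if v >= theta:
            v = 0.0          # reset after each spike
            spikes += 1
    return spikes / T

# The asymptotic rate is mu/theta, independent of the noise level sigma:
r_weak = nonleaky_if_rate(mu=5.0, sigma=0.2)
r_strong = nonleaky_if_rate(mu=5.0, sigma=1.0)
```

Both runs should give a rate close to mu/theta = 5, illustrating total noise adaptation of the perfect integrator.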
Chicca, E; Badoni, D; Dante, V; D'Andreagiovanni, M; Salina, G; Carota, L; Fusi, S; Del Giudice, P
2003-01-01
Electronic neuromorphic devices with on-chip, on-line learning should be able to quickly modify the synaptic couplings to acquire information about new patterns to be stored (synaptic plasticity) and, at the same time, preserve this information on very long time scales (synaptic stability). Here, we illustrate the electronic implementation of a simple solution to this stability-plasticity problem, recently proposed and studied in various contexts. It is based on the observation that reducing the analog depth of the synapses to the extreme (bistable synapses) does not necessarily disrupt the performance of the device as an associative memory, provided that 1) the number of neurons is large enough; 2) the transitions between stable synaptic states are stochastic; and 3) learning is slow. The drastic reduction of the analog depth of the synaptic variable also makes this solution appealing from the point of view of electronic implementation and offers a simple methodological alternative to the technological solution based on floating gates. We describe the full-custom analog very-large-scale-integration (VLSI) realization of a small network of integrate-and-fire neurons connected by bistable deterministic plastic synapses that can implement the idea of stochastic learning. In the absence of stimuli, the memory is preserved indefinitely. During stimulation the synapse undergoes quick temporary changes driven by the activities of the pre- and postsynaptic neurons; those changes stochastically result in a long-term modification of the synaptic efficacy. The intentionally disordered pattern of connectivity allows the system to generate the randomness needed to drive the stochastic selection mechanism. We verify with a suitable stimulation protocol that the stochastic synaptic plasticity produces the expected pattern of potentiation and depression in the electronic network.
International Nuclear Information System (INIS)
Lin Min; Chen Tianlun
2005-01-01
A lattice model for a set of pulse-coupled integrate-and-fire neurons with small-world structure is introduced. We find that the model displays power-law behavior accompanied by large-scale synchronized activity among the units, and that different connectivity topologies lead to different behaviors in models of integrate-and-fire neurons.
Integrate-and-fire neurons driven by asymmetric dichotomous noise.
Droste, Felix; Lindner, Benjamin
2014-12-01
We consider a general integrate-and-fire (IF) neuron driven by asymmetric dichotomous noise. In contrast to the Gaussian white noise usually used in the so-called diffusion approximation, this noise is colored, i.e., it exhibits temporal correlations. We give an analytical expression for the stationary voltage distribution of a neuron receiving such noise and derive recursive relations for the moments of the first passage time density, which allow us to calculate the firing rate and the coefficient of variation of interspike intervals. We study how correlations in the input affect the rate and regularity of firing under variation of the model's parameters for leaky and quadratic IF neurons. Further, we consider the limit of small correlation times and find lowest order corrections to the first passage time moments to be proportional to the square root of the correlation time. We show analytically that to this lowest order, correlations always lead to a decrease in firing rate for a leaky IF neuron. All theoretical expressions are compared to simulations of leaky and quadratic IF neurons.
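A leaky IF neuron driven by asymmetric dichotomous (telegraph) noise can be simulated directly, which is what the theoretical rate and CV expressions are compared against. The sketch below uses illustrative parameter values, not those of the paper; the membrane time constant is set to one.

```python
import numpy as np

def lif_dichotomous(mu=1.5, s_plus=0.5, s_minus=-0.3,
                    k_plus=20.0, k_minus=10.0,
                    vth=1.0, vr=0.0, dt=1e-3, T=200.0, seed=1):
    """Leaky IF neuron (tau = 1) driven by asymmetric two-state
    (telegraph) noise; returns the firing rate and the CV of the ISIs."""
    rng = np.random.default_rng(seed)
    steps = int(T / dt)
    u = rng.random(steps)            # pre-drawn uniforms for switching
    v, s = vr, s_plus
    t, last, isis = 0.0, 0.0, []
    for i in range(steps):
        t += dt
        # leave the current noise state with rate k_plus or k_minus
        if u[i] < (k_plus if s == s_plus else k_minus) * dt:
            s = s_minus if s == s_plus else s_plus
        v += dt * (mu - v + s)
        if v >= vth:
            v = vr
            isis.append(t - last)
            last = t
    isis = np.asarray(isis)
    return len(isis) / T, isis.std() / isis.mean()

rate, cv = lif_dichotomous()
```

With these values the drive stays suprathreshold in both noise states, so firing is quasi-periodic with a small CV.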
Leaky Integrate-and-Fire Neuron Circuit Based on Floating-Gate Integrator
Kornijcuk, Vladimir; Lim, Hyungkwang; Seok, Jun Yeong; Kim, Guhyun; Kim, Seong Keun; Kim, Inho; Choi, Byung Joon; Jeong, Doo Seok
2016-01-01
The artificial spiking neural network (SNN) is promising and has attracted the attention of the theoretical neuroscience and neuromorphic engineering research communities. In this light, we propose a new type of artificial spiking neuron based on leaky integrate-and-fire (LIF) behavior. A distinctive feature of the proposed FG-LIF neuron is the use of a floating-gate (FG) integrator rather than a capacitor-based one. The relaxation time of the charge on the FG relies mainly on the tunnel barrier profile, e.g., barrier height and thickness (rather than the area). This opens up the possibility of large-scale integration of neurons. The circuit simulations showed biologically plausible spiking activity when the circuit was subject to realistic noise sources, e.g., thermal noise and burst noise. The simulation results indicated remarkable distributional features of interspike intervals that are fitted by Gamma distribution functions, similar to biological neurons in the neocortex.
Symmetry breaking in two interacting populations of quadratic integrate-and-fire neurons
Ratas, Irmantas; Pyragas, Kestutis
2017-10-01
We analyze the dynamics of two coupled identical populations of quadratic integrate-and-fire neurons, which represent the canonical model for class I neurons near the spiking threshold. The populations are heterogeneous; they include both inherently spiking and excitable neurons. The coupling within and between the populations is global via synapses that take into account the finite width of synaptic pulses. Using a recently developed reduction method based on the Lorentzian ansatz, we derive a closed system of equations for the neuron's firing rates and the mean membrane potentials in both populations. The reduced equations are exact in the infinite-size limit. The bifurcation analysis of the equations reveals a rich variety of nonsymmetric patterns, including a splay state, antiphase periodic oscillations, chimera-like states, and chaotic oscillations as well as bistabilities between various states. The validity of the reduced equations is confirmed by direct numerical simulations of the finite-size networks.
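The Lorentzian-ansatz reduction referred to above (the firing-rate equations of Montbrió, Pazó, and Roxin) can be integrated directly. The following is a minimal sketch for a single globally coupled population with instantaneous synapses, simpler than the paper's two-population setup with finite-width pulses; all parameter values are illustrative.

```python
import numpy as np

def qif_mean_field(eta_bar=1.0, delta=0.1, J=0.0, dt=1e-3, T=50.0):
    """Forward-Euler integration of the exact mean-field equations for a
    single population of QIF neurons (Lorentzian ansatz):
        dr/dt = delta/pi + 2*r*v
        dv/dt = v**2 + eta_bar - (pi*r)**2 + J*r
    Returns the firing rate r and mean membrane potential v at time T."""
    r, v = 0.3, -0.1
    for _ in range(int(T / dt)):
        dr = delta / np.pi + 2.0 * r * v
        dv = v * v + eta_bar - (np.pi * r) ** 2 + J * r
        r += dt * dr
        v += dt * dv
    return r, v

# With eta_bar = 1, delta = 0.1, J = 0 the flow settles onto a stable
# focus near r ~ 0.319, v ~ -0.05.
r_ss, v_ss = qif_mean_field()
```

The fixed point follows from setting both derivatives to zero: v* = -delta/(2*pi*r*) and pi²r*⁴ = r*² + delta²/(4*pi²), giving r* ≈ 0.3187 for these values.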
Equilibrium and response properties of the integrate-and-fire neuron in discrete time
Directory of Open Access Journals (Sweden)
Moritz Helias
2010-01-01
The integrate-and-fire neuron with exponential postsynaptic potentials is a frequently employed model to study neural networks. Simulations in discrete time still offer the highest performance at moderate numerical errors, which makes them the first choice for long-term simulations of plastic networks. Here we extend the population density approach to investigate how the equilibrium and response properties of the leaky integrate-and-fire neuron are affected by time discretization. We present a novel analytical treatment of the boundary condition at threshold, taking both the discretization of time and finite synaptic weights into account. We uncover an increased membrane potential density just below threshold as the decisive property that explains the deviations found between simulations and the classical diffusion approximation. Temporal discretization and finite synaptic weights both contribute to this effect. Our treatment improves the standard formula for calculating the neuron's equilibrium firing rate. Direct solution of the Markov process describing the evolution of the membrane potential density confirms our analysis and yields a method to calculate the firing rate exactly. Knowing the shape of the membrane potential distribution near threshold enables us to derive the transient response properties of the neuron model to synaptic input. We find a pronounced non-linear fast response component that has not been described by the prevailing continuous-time theory for Gaussian white noise input.
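The setting studied here, a LIF neuron updated on a fixed time grid and receiving finitely many finite-size PSPs per step, can be sketched in a few lines. All parameter values below are hypothetical, chosen only to put the neuron in a firing regime.

```python
import numpy as np

def lif_discrete(nu_in=12.0, w=0.2, tau=10.0, vth=20.0, vr=0.0,
                 dt=0.1, T=10000.0, seed=0):
    """Discrete-time LIF with finite synaptic weights: each step delivers
    a Poisson number of excitatory PSPs of size w (mV); times in ms.
    Returns the output firing rate in spikes/s."""
    rng = np.random.default_rng(seed)
    steps = int(T / dt)
    arrivals = rng.poisson(nu_in * dt, size=steps)  # nu_in in spikes/ms
    decay = np.exp(-dt / tau)
    v, spikes = vr, 0
    for k in arrivals:
        v = v * decay + w * k      # exponential decay plus finite jumps
        if v >= vth:
            v = vr
            spikes += 1
    return 1000.0 * spikes / T

rate_hz = lif_discrete()
```

The mean input here is tau*w*nu_in = 24 mV, above threshold, so the classical deterministic ISI formula tau*ln((mu-vr)/(mu-vth)) ≈ 17.9 ms predicts a rate near 56 Hz; the simulated rate deviates from this through both the fluctuations and the discretization effects the paper analyzes.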
Impact of adaptation currents on synchronization of coupled exponential integrate-and-fire neurons.
Directory of Open Access Journals (Sweden)
Josef Ladenbauer
The ability of spiking neurons to synchronize their activity in a network depends on the response behavior of these neurons as quantified by the phase response curve (PRC) and on coupling properties. The PRC characterizes the effects of transient inputs on spike timing and can be measured experimentally. Here we use the adaptive exponential integrate-and-fire (aEIF) neuron model to determine how subthreshold and spike-triggered slow adaptation currents shape the PRC. Based on that, we predict how synchrony and phase-locked states of coupled neurons change in the presence of synaptic delays and unequal coupling strengths. We find that increased subthreshold adaptation currents cause a transition of the PRC from only phase advances to phase advances and delays in response to excitatory perturbations. Increased spike-triggered adaptation currents, on the other hand, predominantly skew the PRC to the right. Both adaptation-induced changes of the PRC are modulated by spike frequency, being more prominent at lower frequencies. Applying phase reduction theory, we show that subthreshold adaptation stabilizes synchrony for pairs of coupled excitatory neurons, while spike-triggered adaptation causes locking with a small phase difference, as long as synaptic heterogeneities are negligible. For inhibitory pairs synchrony is stable and robust against conduction delays, and adaptation can mediate bistability of in-phase and anti-phase locking. We further demonstrate that stable synchrony and bistable in/anti-phase locking of pairs carry over to synchronization and clustering of larger networks. The effects of adaptation in aEIF neurons on PRCs and network dynamics qualitatively reflect those of biophysical adaptation currents in detailed Hodgkin-Huxley-based neurons, which underscores the utility of the aEIF model for investigating the dynamical behavior of networks. Our results suggest neuronal spike frequency adaptation as a mechanism synchronizing low frequency...
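The aEIF model used here is easy to simulate with forward Euler. The sketch below uses parameter values loosely adapted from the literature (Brette and Gerstner, 2005), not taken from this paper, and shows the basic effect of the spike-triggered adaptation increment b: successive interspike intervals grow longer.

```python
import numpy as np

def aeif_isis(I=300.0, a=2.0, b=10.0, n_spikes=6):
    """Adaptive exponential IF (aEIF) neuron, forward-Euler sketch.
    Units: mV, ms, nS, pA, pF. Returns the first interspike intervals."""
    C, gL, EL, VT, DT = 200.0, 10.0, -70.0, -50.0, 2.0
    tau_w, Vr, Vcut, dt = 120.0, -58.0, 0.0, 0.01
    V, w, t, last, isis = EL, 0.0, 0.0, None, []
    while len(isis) < n_spikes and t < 5000.0:
        dV = (-gL * (V - EL) + gL * DT * np.exp((V - VT) / DT) - w + I) / C
        dw = (a * (V - EL) - w) / tau_w
        V += dt * dV
        w += dt * dw
        t += dt
        if V >= Vcut:
            V = Vr          # voltage reset
            w += b          # spike-triggered adaptation increment
            if last is not None:
                isis.append(t - last)
            last = t
    return isis

# spike-frequency adaptation: successive ISIs get longer
isis = aeif_isis()
```

Measuring a PRC on top of this (perturbing the current at different phases and recording the spike-time shift) is a small extension of the same loop.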
The Morris-Lecar neuron model embeds a leaky integrate-and-fire model
DEFF Research Database (Denmark)
Ditlevsen, Susanne; Greenwood, Priscilla
2013-01-01
We show that the stochastic Morris–Lecar neuron, in a neighborhood of its stable point, can be approximated by a two-dimensional Ornstein–Uhlenbeck (OU) modulation of a constant circular motion. The associated radial OU process is an example of a leaky integrate-and-fire (LIF) model prior to firing...
Computing the Local Field Potential (LFP) from Integrate-and-Fire Network Models
Mazzoni, Alberto; Lindén, Henrik; Cuntz, Hermann; Lansner, Anders; Panzeri, Stefano; Einevoll, Gaute T.
2015-01-01
Leaky integrate-and-fire (LIF) network models are commonly used to study how the spiking dynamics of neural networks changes with stimuli, tasks or dynamic network states. However, neurophysiological studies in vivo often rather measure the mass activity of neuronal microcircuits with the local field potential (LFP). Given that LFPs are generated by spatially separated currents across the neuronal membrane, they cannot be computed directly from quantities defined in models of point-like LIF neurons. Here, we explore the best approximation for predicting the LFP based on standard output from point-neuron LIF networks. To search for this best “LFP proxy”, we compared LFP predictions from candidate proxies based on LIF network output (e.g., firing rates, membrane potentials, synaptic currents) with “ground-truth” LFP obtained when the LIF network synaptic input currents were injected into an analogous three-dimensional (3D) network model of multi-compartmental neurons with realistic morphology, spatial distributions of somata and synapses. We found that a specific fixed linear combination of the LIF synaptic currents provided an accurate LFP proxy, accounting for most of the variance of the LFP time course observed in the 3D network for all recording locations. This proxy performed well over a broad set of conditions, including substantial variations of the neuronal morphologies. Our results provide a simple formula for estimating the time course of the LFP from LIF network simulations in cases where a single pyramidal population dominates the LFP generation, and thereby facilitate quantitative comparison between computational models and experimental LFP recordings in vivo.
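The proxy idea, a fixed linear combination of population-summed synaptic currents, can be sketched as follows. The coefficient and delay below are illustrative placeholders, not the fitted values reported in the paper.

```python
import numpy as np

def lfp_proxy(i_ampa, i_gaba, alpha=1.65, delay_steps=6):
    """Weighted-sum LFP proxy from synaptic currents (neurons x time):
    population-summed |AMPA| and |GABA| currents combined linearly, with
    the AMPA component shifted in time. alpha and delay_steps are
    illustrative placeholders, not the paper's fitted values."""
    a = np.abs(i_ampa.sum(axis=0))            # sum over neurons
    g = np.abs(i_gaba.sum(axis=0))
    shifted = np.empty_like(a)
    shifted[delay_steps:] = a[:-delay_steps]  # delay the AMPA part
    shifted[:delay_steps] = a[0]              # pad the initial samples
    return shifted + alpha * g

rng = np.random.default_rng(0)
proxy = lfp_proxy(rng.normal(size=(50, 1000)), rng.normal(size=(30, 1000)))
```

In a real application the inputs would be the synaptic currents logged from the LIF network simulation, one row per neuron of the pyramidal population.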
Grabska-Barwińska, Agnieszka; Latham, Peter E
2014-06-01
We use mean field techniques to compute the distribution of excitatory and inhibitory firing rates in large networks of randomly connected spiking quadratic integrate and fire neurons. These techniques are based on the assumption that activity is asynchronous and Poisson. For most parameter settings these assumptions are strongly violated; nevertheless, so long as the networks are not too synchronous, we find good agreement between mean field prediction and network simulations. Thus, much of the intuition developed for randomly connected networks in the asynchronous regime applies to mildly synchronous networks.
Intrinsic modulation of pulse-coupled integrate-and-fire neurons
Coombes, S.; Lord, G. J.
1997-11-01
Intrinsic neuromodulation is observed in sensory and neuromuscular circuits and in biological central pattern generators. We model a simple neuronal circuit with a system of two pulse-coupled integrate-and-fire neurons and explore the parameter regimes for periodic firing behavior. The inclusion of biologically realistic features shows that the speed and onset of neuronal response plays a crucial role in determining the firing phase for periodic rhythms. We explore the neurophysiological function of distributed delays arising from both the synaptic transmission process and dendritic structure as well as discrete delays associated with axonal communication delays. Bifurcation and stability diagrams are constructed with a mixture of simple analysis, numerical continuation and the Kuramoto phase-reduction technique. Moreover, we show that, for asynchronous behavior, the strength of electrical synapses can control the firing rate of the system.
Population density models of integrate-and-fire neurons with jumps: well-posedness.
Dumont, Grégory; Henry, Jacques
2013-09-01
In this paper we study the well-posedness of different models of populations of leaky integrate-and-fire neurons with a population density approach. The synaptic interaction between neurons is modeled by a potential jump at the reception of a spike. We study populations that are self-excitatory or self-inhibitory. We distinguish the case where this interaction is instantaneous from the one where there is a distribution of conduction delays. In the case of a bounded density of delays, both excitatory and inhibitory population models are shown to be well-posed. But without conduction delay, the solution of the model of self-excitatory neurons may blow up. We analyze the different behaviors of the model with jumps compared to its diffusion approximation.
Chimeras in leaky integrate-and-fire neural networks: effects of reflecting connectivities
Tsigkri-DeSmedt, Nefeli Dimitra; Hizanidis, Johanne; Schöll, Eckehard; Hövel, Philipp; Provata, Astero
2017-07-01
The effects of attracting-nonlocal and reflecting connectivity are investigated in coupled Leaky Integrate-and-Fire (LIF) elements, which model the exchange of electrical signals between neurons. Earlier investigations have demonstrated that repulsive-nonlocal and hierarchical network connectivity can induce complex synchronization patterns and chimera states in systems of coupled oscillators. In the LIF system we show that if the elements are nonlocally linked with positive diffusive coupling on a ring network, the system splits into a number of alternating domains. Half of these domains contain elements whose potential stays near the threshold and they are interrupted by active domains where the elements perform regular LIF oscillations. The active domains travel along the ring with constant velocity, depending on the system parameters. When we introduce reflecting coupling in LIF networks unexpected complex spatio-temporal structures arise. For relatively extensive ranges of parameter values, the system splits into two coexisting domains: one where all elements stay near the threshold and one where incoherent states develop, characterized by multi-leveled mean phase velocity profiles.
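The base setup, a ring of LIF units with nonlocal diffusive coupling to R neighbors on each side, can be sketched as below. All parameter values are illustrative, not the paper's; with positive coupling the units fire and organize into the kinds of traveling coherent/incoherent domains described above.

```python
import numpy as np

def lif_ring(N=100, R=20, sigma=0.6, mu=1.2, vth=0.98,
             dt=0.01, T=200.0, seed=2):
    """Ring of LIF units with nonlocal diffusive coupling to the R
    neighbors on each side (illustrative parameters). Returns each
    unit's mean firing rate (spike count / T)."""
    rng = np.random.default_rng(seed)
    u = rng.uniform(0.0, vth, N)
    idx = np.arange(N)
    neigh = np.array([(idx + k) % N for k in range(-R, R + 1) if k != 0])
    counts = np.zeros(N)
    for _ in range(int(T / dt)):
        coupling = u[neigh].mean(axis=0) - u   # nonlocal diffusive term
        u += dt * (mu - u + sigma * coupling)
        fired = u >= vth
        counts[fired] += 1
        u[fired] = 0.0
    return counts / T

rates = lif_ring()
```

The mean phase velocity profile mentioned in the abstract corresponds to plotting these per-unit rates against the ring index; reflecting (instead of periodic) index wrapping in `neigh` would implement the reflecting connectivity studied in the paper.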
The Gamma renewal process as an output of the diffusion leaky integrate-and-fire neuronal model
Czech Academy of Sciences Publication Activity Database
Lánský, Petr; Sacerdote, L.; Zucca, C.
2016-01-01
Vol. 110, No. 2-3 (2016), pp. 193-200. ISSN 0340-1200. Supported by Czech Science Foundation (GA ČR) grant GA15-08066S. Keywords: first-passage-time problem; leaky integrate-and-fire; Stein's neuronal model.
International Nuclear Information System (INIS)
Cofré, Rodrigo; Cessac, Bruno
2013-01-01
We investigate the effect of electric synapses (gap junctions) on collective neuronal dynamics and spike statistics in a conductance-based integrate-and-fire neural network, driven by Brownian noise, where conductances depend upon spike history. We compute explicitly the time evolution operator and show that, given the spike history of the network and the membrane potentials at a given time, the further dynamical evolution can be written in closed form. We show that spike-train statistics are described by a Gibbs distribution whose potential can be approximated with an explicit formula when the noise is weak. This potential form encompasses existing models for spike-train statistics analysis, such as maximum entropy models or generalized linear models (GLMs). We also discuss the different types of correlations: those induced by a shared stimulus and those induced by neuron interactions.
Reconstructing stimuli from the spike-times of leaky integrate and fire neurons
Directory of Open Access Journals (Sweden)
Sebastian eGerwinn
2011-02-01
Reconstructing stimuli from the spike trains of neurons is an important approach for understanding the neural code. One of the difficulties associated with this task is that signals which vary continuously in time are encoded into sequences of discrete events or spikes. An important problem is to determine how much information about the continuously varying stimulus can be extracted from the time points at which spikes were observed, especially if these time points are subject to some sort of randomness. For the special case of spike trains generated by leaky integrate-and-fire neurons, noise can be introduced by allowing variations in the threshold every time a spike is released. A simple decoding algorithm previously derived for the noiseless case can be extended to the stochastic case, but turns out to be biased. Here, we review a solution to this problem by presenting a simple yet efficient algorithm which greatly reduces the bias, and therefore leads to better decoding performance in the stochastic case.
Directory of Open Access Journals (Sweden)
Loreen eHertäg
2012-09-01
For large-scale network simulations, it is often desirable to have computationally tractable, yet in a defined sense still physiologically valid neuron models. In particular, these models should be able to reproduce physiological measurements, ideally in a predictive sense, and under different input regimes in which neurons may operate in vivo. Here we present an approach to parameter estimation for a simple spiking neuron model mainly based on standard f-I curves obtained from in vitro recordings. Such recordings are routinely obtained in standard protocols and assess a neuron's response under a wide range of mean input currents. Our fitting procedure makes use of closed-form expressions for the firing rate derived from an approximation to the adaptive exponential integrate-and-fire (AdEx) model. The resulting fitting process is simple and about two orders of magnitude faster compared to methods based on numerical integration of the differential equations. We probe this method on different cell types recorded from rodent prefrontal cortex. After fitting to the f-I current-clamp data, the model cells are tested on completely different sets of recordings obtained by fluctuating, 'in-vivo-like' input currents. For a wide range of different input regimes, cell types, and cortical layers, the model could predict spike times on these test traces quite accurately within the bounds of physiological reliability, although no information from these distinct test sets was used for model fitting. Further analyses delineated some of the empirical factors constraining model fitting and the model's generalization performance. An even simpler adaptive LIF neuron was also examined in this context. Hence, we have developed a 'high-throughput' model fitting procedure which is simple and fast, with good prediction performance, and which relies only on firing rate information and standard physiological data widely and easily available.
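The core idea, fitting model parameters to an f-I curve through a closed-form rate expression instead of integrating the ODEs, can be illustrated with the plain LIF formula (a simpler stand-in for the AdEx-based expressions used in the paper). Parameter values and the grid-search fit are toy assumptions.

```python
import numpy as np

def lif_fI(I, tau=10.0, R=1.0, vth=15.0, vr=0.0, t_ref=2.0):
    """Closed-form LIF f-I curve (times in ms, so rates are per ms):
    f = 1/(t_ref + tau*ln((R*I - vr)/(R*I - vth))) above rheobase,
    zero below."""
    I = np.asarray(I, dtype=float)
    f = np.zeros_like(I)
    supra = R * I > vth
    f[supra] = 1.0 / (t_ref + tau * np.log((R * I[supra] - vr)
                                           / (R * I[supra] - vth)))
    return f

# toy fit: recover tau of a synthetic f-I "recording" by grid search,
# evaluating only the closed-form expression (no ODE integration)
I = np.linspace(0.0, 40.0, 41)
target = lif_fI(I, tau=12.0)
taus = np.linspace(5.0, 20.0, 151)
errs = [np.sum((lif_fI(I, tau=t) - target) ** 2) for t in taus]
tau_fit = taus[int(np.argmin(errs))]
```

Because each candidate parameter set costs only one vectorized formula evaluation, this kind of fit is orders of magnitude faster than simulating the neuron for every trial parameter set, which is the speedup the abstract refers to.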
International Nuclear Information System (INIS)
Moreno-Bote, Ruben; Parga, Nestor
2006-01-01
An analytical description of the response properties of simple but realistic neuron models in the presence of noise is still lacking. We determine, completely up to second order, the firing statistics of a single leaky integrate-and-fire neuron and of a pair of such neurons receiving common, slowly filtered white noise. In particular, the auto- and cross-correlation functions of the output spike trains of pairs of cells are obtained from an improvement of the adiabatic approximation introduced previously by Moreno-Bote and Parga [Phys. Rev. Lett. 92, 028102 (2004)]. These two functions define the firing variability and the firing synchronization between neurons, and are of great importance for understanding neuron communication.
Leaky Integrate and Fire Neuron by Charge-Discharge Dynamics in Floating-Body MOSFET.
Dutta, Sangya; Kumar, Vinay; Shukla, Aditya; Mohapatra, Nihar R; Ganguly, Udayan
2017-08-15
Neuro-biology inspired Spiking Neural Networks (SNNs) enable efficient learning and recognition tasks. To achieve a large-scale network akin to biology, a power- and area-efficient electronic neuron is essential. Earlier, we had demonstrated an LIF neuron based on a novel four-terminal impact-ionization-based n+/p/n+ device with an extended gate (gated-INPN) through physics simulation, with excellent improvements in area and power compared to conventional analog circuit implementations. In this paper, we propose and experimentally demonstrate a compact, conventional three-terminal partially depleted (PD) SOI-MOSFET (100 nm gate length) that replaces the four-terminal gated-INPN device. The impact-ionization (II) induced floating-body effect in the SOI-MOSFET is used to capture LIF neuron behavior and to demonstrate the dependence of spiking frequency on the input. MHz operation enables attractive hardware acceleration compared to biology. Overall, conventional PD-SOI-CMOS technology enables very-large-scale integration (VLSI), which is essential for biology-scale (~10^11 neuron) large neural networks.
Steimer, Andreas; Schindler, Kaspar
2015-01-01
Oscillations between high and low values of the membrane potential (UP and DOWN states, respectively) are a ubiquitous feature of cortical neurons during slow wave sleep and anesthesia. Nevertheless, surprisingly few quantitative studies have dealt with this phenomenon's implications for computation. Here we present a novel theory that explains on a detailed mathematical level the computational benefits of UP states. The theory is based on random sampling by means of interspike intervals (ISIs) of the exponential integrate-and-fire (EIF) model neuron, such that each spike is considered a sample whose analog value corresponds to the spike's preceding ISI. As we show, the EIF's exponential sodium current, which kicks in when a noisy membrane potential is balanced around values close to the firing threshold, leads to a particularly simple, approximative relationship between the neuron's ISI distribution and its input current. Approximation quality depends on the frequency spectrum of the current and is improved upon increasing the voltage baseline towards threshold. Thus, the conceptually simpler leaky integrate-and-fire neuron, which is missing such an additional current boost, performs consistently worse than the EIF and does not improve when the voltage baseline is increased. For the EIF, in contrast, the presented mechanism is particularly effective in the high-conductance regime, which is a hallmark feature of UP states. Our theoretical results are confirmed by accompanying simulations, which were conducted for input currents of varying spectral composition. Moreover, we provide analytical estimations of the range of ISI distributions the EIF neuron can sample from at a given approximation level. Such samples may be considered by any algorithmic procedure that is based on random sampling, such as Markov Chain Monte Carlo or message-passing methods. Finally, we explain how spike-based random sampling relates to existing computational...
Directory of Open Access Journals (Sweden)
Stefano Cavallari
2014-03-01
Models of networks of leaky integrate-and-fire (LIF) neurons are a widely used tool for theoretical investigations of brain function. These models have been used with both current- and conductance-based synapses. However, the differences in the dynamics expressed by these two approaches have so far been studied mainly at the single-neuron level. To investigate how these synaptic models affect network activity, we compared the single-neuron and neural population dynamics of conductance-based networks (COBN) and current-based networks (CUBN) of LIF neurons. These networks were endowed with sparse excitatory and inhibitory recurrent connections and were tested in conditions including both low- and high-conductance states. We developed a novel procedure to obtain comparable networks by properly tuning the synaptic parameters not shared by the models. The comparable networks so defined displayed an excellent and robust match of first-order statistics (average single-neuron firing rates and average frequency spectrum of network activity). However, these comparable networks showed profound differences in the second-order statistics of neural population interactions and in the modulation of these properties by external inputs. The correlation between inhibitory and excitatory synaptic currents and the cross-neuron correlations between synaptic inputs, membrane potentials, and spike trains were stronger and more stimulus-sensitive in the COBN. Because of these properties, the spike train correlation carried more information about the strength of the input in the COBN, although the firing rates were equally informative in both network models. Moreover, COBN showed stronger neuronal population synchronization in the gamma band, and their spectral information about the network input was higher and spread over a broader range of frequencies. These results suggest that second-order properties of network dynamics depend strongly on the choice of synaptic model.
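The core difference between the two synapse models, that conductance-based synaptic currents depend on the momentary membrane potential while current-based ones do not, can be illustrated with a single Euler step of a LIF membrane. All constants below are illustrative, not taken from the study:

```python
def lif_step(v, g_exc, g_inh, mode, dt=0.1):
    """One Euler step of a LIF membrane driven either by current-based
    (CUBN) or conductance-based (COBN) synapses. Illustrative constants."""
    tau, v_rest = 20.0, -70.0              # ms, mV
    E_exc, E_inh = 0.0, -80.0              # reversal potentials (COBN only)
    if mode == "CUBN":
        i_syn = g_exc - g_inh              # fixed currents, independent of v
    else:                                  # "COBN"
        i_syn = g_exc * (E_exc - v) + g_inh * (E_inh - v)  # scales with driving force
    return v + dt * (-(v - v_rest) + i_syn) / tau

# identical inputs, different effective drive: only the conductance-based
# synaptic current depends on the momentary membrane potential
v_cu = lif_step(-60.0, 0.5, 0.5, "CUBN")
v_co = lif_step(-60.0, 0.5, 0.5, "COBN")
```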
Errors in estimation of the input signal for integrate-and-fire neuronal models
Czech Academy of Sciences Publication Activity Database
Bibbona, E.; Lánský, Petr; Sacerdote, L.; Sirovich, R.
2008-01-01
Roč. 78, č. 1 (2008), s. 1-10 ISSN 1539-3755 R&D Projects: GA MŠk(CZ) LC554; GA AV ČR(CZ) 1ET400110401 Grant - others:EC(XE) MIUR PRIN 2005 Institutional research plan: CEZ:AV0Z50110509 Keywords : parameter estimation * stochastic neuronal model Subject RIV: BO - Biophysics Impact factor: 2.508, year: 2008 http://link.aps.org/abstract/PRE/v78/e011918
Hertäg, Loreen; Durstewitz, Daniel; Brunel, Nicolas
2014-01-01
Computational models offer a unique tool for understanding the network-dynamical mechanisms which mediate between physiological and biophysical properties, and behavioral function. A traditional challenge in computational neuroscience is, however, that simple neuronal models which can be studied analytically fail to reproduce the diversity of electrophysiological behaviors seen in real neurons, while detailed neuronal models which do reproduce such diversity are intractable analytically and computationally expensive. A number of intermediate models have been proposed whose aim is to capture the diversity of firing behaviors and spike times of real neurons while entailing the simplest possible mathematical description. One such model is the exponential integrate-and-fire neuron with spike rate adaptation (aEIF) which consists of two differential equations for the membrane potential (V) and an adaptation current (w). Despite its simplicity, it can reproduce a wide variety of physiologically observed spiking patterns, can be fit to physiological recordings quantitatively, and, once done so, is able to predict spike times on traces not used for model fitting. Here we compute the steady-state firing rate of aEIF in the presence of Gaussian synaptic noise, using two approaches. The first approach is based on the 2-dimensional Fokker-Planck equation that describes the (V,w)-probability distribution, which is solved using an expansion in the ratio between the time constants of the two variables. The second is based on the firing rate of the EIF model, which is averaged over the distribution of the w variable. These analytically derived closed-form expressions were tested on simulations from a large variety of model cells quantitatively fitted to in vitro electrophysiological recordings from pyramidal cells and interneurons. Theoretical predictions closely agreed with the firing rate of the simulated cells fed with in-vivo-like synaptic noise.
Simulating synchronization in neuronal networks
Fink, Christian G.
2016-06-01
We discuss several techniques used in simulating neuronal networks by exploring how a network's connectivity structure affects its propensity for synchronous spiking. Network connectivity is generated using the Watts-Strogatz small-world algorithm, and two key measures of network structure are described. These measures quantify structural characteristics that influence collective neuronal spiking, which is simulated using the leaky integrate-and-fire model. Simulations show that adding a small number of random connections to an otherwise lattice-like connectivity structure leads to a dramatic increase in neuronal synchronization.
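A minimal sketch of the Watts-Strogatz construction mentioned above (edges stored as ordered node pairs for simplicity; illustrative defaults; in practice one would typically use a library such as NetworkX):

```python
import random

def watts_strogatz(n=100, k=4, p=0.1, seed=0):
    """Watts-Strogatz small-world graph: start from a ring lattice in which
    each node connects to its k nearest neighbours, then rewire each edge
    with probability p. Simplified sketch, not the NetworkX implementation."""
    rng = random.Random(seed)
    lattice = [(i, (i + j) % n) for i in range(n) for j in range(1, k // 2 + 1)]
    edges = set()
    for (u, v) in lattice:
        if rng.random() < p:            # rewire: keep u, choose a fresh target
            w = rng.randrange(n)
            while w == u or (u, w) in edges or (u, w) in lattice:
                w = rng.randrange(n)
            edges.add((u, w))
        else:                           # keep the original lattice edge
            edges.add((u, v))
    return edges

g = watts_strogatz()
```

Even a small rewiring probability `p` creates the long-range shortcuts that, per the simulations above, dramatically increase synchronization.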
Czech Academy of Sciences Publication Activity Database
Lánský, Petr; Ditlevsen, S.
2008-01-01
Roč. 99, 4-5 (2008), s. 253-262 ISSN 0340-1200 R&D Projects: GA MŠk(CZ) LC554; GA AV ČR(CZ) 1ET400110401 Institutional research plan: CEZ:AV0Z50110509 Keywords : parameter estimation * stochastic diffusion neuronal model Subject RIV: BO - Biophysics Impact factor: 1.935, year: 2008
Firing patterns in the adaptive exponential integrate-and-fire model.
Naud, Richard; Marcille, Nicolas; Clopath, Claudia; Gerstner, Wulfram
2008-11-01
For simulations of large spiking neuron networks, an accurate, simple and versatile single-neuron modeling framework is required. Here we explore the versatility of a simple two-equation model: the adaptive exponential integrate-and-fire neuron. We show that this model generates multiple firing patterns depending on the choice of parameter values, and present a phase diagram describing the transition from one firing type to another. We give an analytical criterion to distinguish between continuous adaptation, initial bursting, regular bursting and two types of tonic spiking. We also report that the deterministic model is capable of producing irregular spiking when stimulated with constant current, indicating low-dimensional chaos. Lastly, the simple model is fitted to experimental recordings of cortical neurons under step-current stimulation. The results provide support for the suitability of simple models such as the adaptive exponential integrate-and-fire neuron for large network simulations.
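As a rough illustration of how the aEIF's two equations and reset rule produce different firing patterns, the sketch below contrasts tonic spiking with spike-frequency adaptation under a constant current step. Parameter names follow the usual aEIF convention, but all values are hypothetical:

```python
import numpy as np

def aeif_spike_times(a, b, tau_w, I, T=500.0, dt=0.01):
    """Euler integration of the adaptive exponential integrate-and-fire
    (aEIF) model under a constant current step; returns spike times (ms).
    Constants are illustrative (pF, nS, mV, pA)."""
    C, gL, EL, VT, DT = 200.0, 10.0, -70.0, -50.0, 2.0
    v_reset, v_peak = -58.0, 0.0
    v, w, t, spikes = EL, 0.0, 0.0, []
    while t < T:
        dv = (-gL * (v - EL) + gL * DT * np.exp((v - VT) / DT) - w + I) / C
        dw = (a * (v - EL) - w) / tau_w
        v += dt * dv
        w += dt * dw
        t += dt
        if v >= v_peak:              # spike: reset v, jump the adaptation current
            spikes.append(t)
            v, w = v_reset, w + b
    return np.array(spikes)

tonic = aeif_spike_times(a=2.0, b=0.0, tau_w=30.0, I=500.0)       # no spike-triggered adaptation
adapting = aeif_spike_times(a=2.0, b=60.0, tau_w=300.0, I=500.0)  # ISIs lengthen over time
```

Varying `a`, `b`, and `tau_w` in this way is what traces out the phase diagram of firing types described in the abstract.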
Directory of Open Access Journals (Sweden)
Viswanathan Arunachalam
2013-01-01
Classical single-neuron models, such as the Hodgkin-Huxley point neuron or the leaky integrate-and-fire neuron, assume that the influence of postsynaptic potentials lasts until the neuron fires. Vidybida (2008), in a refreshing departure, has proposed models of binding neurons in which the trace of an input is remembered only for a finite fixed period of time, after which it is forgotten. The binding neurons conform to the behaviour of real neurons and are applicable in constructing fast recurrent networks for computer modeling. This paper develops explicitly several useful results for a binding neuron, such as the firing time distribution and other statistical characteristics. We also discuss the applicability of the developed results in constructing a modified hourglass network model in which there are interconnected neurons with excitatory as well as inhibitory inputs. Limited simulation results of the hourglass network are presented.
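The binding-neuron rule, that inputs are forgotten after a fixed time and the cell fires once enough impulses coexist, is simple to simulate, which gives an empirical handle on the firing time distribution studied analytically in the paper. A hypothetical sketch with Poisson input (rate, memory span, and threshold values are illustrative):

```python
import random

def binding_first_spike(rate=1.0, tau=2.0, threshold=3, seed=0):
    """Binding neuron (after Vidybida): each input impulse is remembered
    for a fixed time tau and then forgotten; the neuron fires as soon as
    `threshold` impulses coexist in memory. Returns the first firing time
    for Poisson input of the given rate."""
    rng = random.Random(seed)
    t, stored = 0.0, []
    while True:
        t += rng.expovariate(rate)                   # next Poisson impulse
        stored = [s for s in stored if t - s < tau]  # forget expired traces
        stored.append(t)
        if len(stored) >= threshold:                 # enough coincident inputs
            return t

# empirical firing-time statistics over independent trials
times = [binding_first_spike(seed=s) for s in range(200)]
mean_first_spike = sum(times) / len(times)
```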
Sadeh, Sadra; Rotter, Stefan
2015-01-01
The neuronal mechanisms underlying the emergence of orientation selectivity in the primary visual cortex of mammals are still elusive. In rodents, visual neurons show highly selective responses to oriented stimuli, but neighboring neurons do not necessarily have similar preferences. Instead of a smooth map, one observes a salt-and-pepper organization of orientation selectivity. Modeling studies have recently confirmed that balanced random networks are indeed capable of amplifying weakly tuned inputs and generating highly selective output responses, even in absence of feature-selective recurrent connectivity. Here we seek to elucidate the neuronal mechanisms underlying this phenomenon by resorting to networks of integrate-and-fire neurons, which are amenable to analytic treatment. Specifically, in networks of perfect integrate-and-fire neurons, we observe that highly selective and contrast invariant output responses emerge, very similar to networks of leaky integrate-and-fire neurons. We then demonstrate that a theory based on mean firing rates and the detailed network topology predicts the output responses, and explains the mechanisms underlying the suppression of the common-mode, amplification of modulation, and contrast invariance. Increasing inhibition dominance in our networks makes the rectifying nonlinearity more prominent, which in turn adds some distortions to the otherwise essentially linear prediction. An extension of the linear theory can account for all the distortions, enabling us to compute the exact shape of every individual tuning curve in our networks. We show that this simple form of nonlinearity adds two important properties to orientation selectivity in the network, namely sharpening of tuning curves and extra suppression of the modulation. The theory can be further extended to account for the nonlinearity of the leaky model by replacing the rectifier by the appropriate smooth input-output transfer function. These results are robust and do not
Chaos in integrate-and-fire dynamical systems
International Nuclear Information System (INIS)
Coombes, S.
2000-01-01
Integrate-and-fire (IF) mechanisms are often studied within the context of neural dynamics. From a mathematical perspective they represent a minimal yet biologically realistic model of a spiking neuron. The non-smooth nature of the dynamics leads to extremely rich spike train behavior capable of explaining a variety of biological phenomena, including phase-locked states, mode-locking, bursting and pattern formation. The conditions under which chaotic spike trains may be generated in synaptically interacting networks of neural oscillators are an important open question. Using techniques originally introduced for the study of impact oscillators, we develop the notion of a Liapunov exponent for IF systems. In the strong coupling regime a network may undergo a discrete Turing-Hopf bifurcation of the firing times from a synchronous state to a state with periodic or quasiperiodic variations of the interspike intervals on closed orbits. Away from the bifurcation point these invariant circles may break up. We establish numerically that in this case the largest IF Liapunov exponent becomes positive. Hence, one route to chaos in networks of synaptically coupled IF neurons is via the breakup of invariant circles.
Bistability induces episodic spike communication by inhibitory neurons in neuronal networks.
Kazantsev, V B; Asatryan, S Yu
2011-09-01
Bistability is one of the important features of nonlinear dynamical systems. In neurodynamics, bistability has been found in the basic Hodgkin-Huxley equations describing the cell membrane dynamics. When the neuron is clamped near its threshold, the stable rest potential may coexist with the stable limit cycle describing periodic spiking. However, this effect is often neglected in network computations, where the neurons are typically reduced to threshold firing units (e.g., integrate-and-fire models). We found that bistability may induce spike communication between inhibitorily coupled neurons in a spiking network. The communication is realized in the form of episodic discharges with synchronous (correlated) spikes during the episodes. A spiking phase map is constructed to describe the synchronization and to estimate the basic spike phase-locking modes.
Connectivity and dynamics of neuronal networks as defined by the shape of individual neurons
International Nuclear Information System (INIS)
Ahnert, Sebastian E; A N Travencolo, Bruno; Costa, Luciano da Fontoura
2009-01-01
Biological neuronal networks constitute a special class of dynamical systems, as they are formed by individual geometrical components, namely the neurons. In the existing literature, relatively little attention has been given to the influence of neuron shape on the overall connectivity and dynamics of the emerging networks. The current work addresses this issue by considering simplified neuronal shapes consisting of circular regions (soma/axons) with spokes (dendrites). Networks are grown by placing these patterns randomly in the two-dimensional (2D) plane and establishing connections whenever a piece of dendrite falls inside an axon. Several topological and dynamical properties of the resulting graph are measured, including the degree distribution, clustering coefficients, symmetry of connections, size of the largest connected component, as well as three hierarchical measurements of the local topology. By varying the number of processes of the individual basic patterns, we can quantify relationships between the individual neuronal shape and the topological and dynamical features of the networks. Integrate-and-fire dynamics on these networks is also investigated with respect to transient activation from a source node, indicating that long-range connections play an important role in the propagation of avalanches.
Understanding the Generation of Network Bursts by Adaptive Oscillatory Neurons
Directory of Open Access Journals (Sweden)
Tanguy Fardet
2018-02-01
Experimental and numerical studies have revealed that isolated populations of oscillatory neurons can spontaneously synchronize and generate periodic bursts involving the whole network. Such behavior has notably been observed for cultured neurons in rodent cortex or hippocampus. We show here that a sufficient condition for this network bursting is the presence of an excitatory population of oscillatory neurons which displays spike-driven adaptation. We provide an analytic model to analyze network bursts generated by coupled adaptive exponential integrate-and-fire neurons. We show that, for strong synaptic coupling, intrinsically tonic spiking neurons evolve to reach a synchronized intermittent bursting state. The presence of inhibitory neurons or plastic synapses can then modulate these dynamics in many ways, but is not necessary for their appearance. Thanks to a simple self-consistent equation, our model gives an intuitive and semi-quantitative tool to understand the bursting behavior. Furthermore, it suggests that after-hyperpolarization currents are sufficient to explain bursting termination. Through a thorough mapping between the theoretical parameters and ion-channel properties, we discuss the biological mechanisms that could be involved and the relevance of the explored parameter space. Such insight enables us to propose experimentally testable predictions regarding how blocking fast, medium, or slow after-hyperpolarization channels would affect the firing rate and burst duration, as well as the interburst interval.
Constructing Precisely Computing Networks with Biophysical Spiking Neurons.
Schwemmer, Michael A; Fairhall, Adrienne L; Denéve, Sophie; Shea-Brown, Eric T
2015-07-15
While spike timing has been shown to carry detailed stimulus information at the sensory periphery, its possible role in network computation is less clear. Most models of computation by neural networks are based on population firing rates. In equivalent spiking implementations, firing is assumed to be random such that averaging across populations of neurons recovers the rate-based approach. Recently, however, Denéve and colleagues have suggested that the spiking behavior of neurons may be fundamental to how neuronal networks compute, with precise spike timing determined by each neuron's contribution to producing the desired output (Boerlin and Denéve, 2011; Boerlin et al., 2013). By postulating that each neuron fires to reduce the error in the network's output, it was demonstrated that linear computations can be performed by networks of integrate-and-fire neurons that communicate through instantaneous synapses. This left open, however, the possibility that realistic networks, with conductance-based neurons with subthreshold nonlinearity and the slower timescales of biophysical synapses, may not fit into this framework. Here, we show how the spike-based approach can be extended to biophysically plausible networks. We then show that our network reproduces a number of key features of cortical networks including irregular and Poisson-like spike times and a tight balance between excitation and inhibition. Lastly, we discuss how the behavior of our model scales with network size or with the number of neurons "recorded" from a larger computing network. These results significantly increase the biological plausibility of the spike-based approach to network computation. We derive a network of neurons with standard spike-generating currents and synapses with realistic timescales that computes based upon the principle that the precise timing of each spike is important for the computation. We then show that our network reproduces a number of key features of cortical networks
Phase-locking and bistability in neuronal networks with synaptic depression
Akcay, Zeynep; Huang, Xinxian; Nadim, Farzan; Bose, Amitabha
2018-02-01
We consider a recurrent network of two oscillatory neurons that are coupled with inhibitory synapses. We use the phase response curves of the neurons and the properties of short-term synaptic depression to define Poincaré maps for the activity of the network. The fixed points of these maps correspond to phase-locked modes of the network. Using these maps, we analyze the conditions that allow short-term synaptic depression to lead to the existence of bistable phase-locked, periodic solutions. We show that bistability arises when either the phase response curve of the neuron or the short-term depression profile changes steeply enough. The results apply to any Type I oscillator and we illustrate our findings using the Quadratic Integrate-and-Fire and Morris-Lecar neuron models.
Motif statistics and spike correlations in neuronal networks
International Nuclear Information System (INIS)
Hu, Yu; Shea-Brown, Eric; Trousdale, James; Josić, Krešimir
2013-01-01
Motifs are patterns of subgraphs of complex networks. We studied the impact of such patterns of connectivity on the level of correlated, or synchronized, spiking activity among pairs of cells in a recurrent network of integrate-and-fire neurons. For a range of network architectures, we find that the pairwise correlation coefficients, averaged across the network, can be closely approximated using only three statistics of network connectivity. These are the overall network connection probability and the frequencies of two second-order motifs: diverging motifs, in which one cell provides input to two others, and chain motifs, in which two cells are connected via a third intermediary cell. Specifically, the prevalence of diverging and chain motifs tends to increase correlation. Our method is based on linear response theory, which enables us to express spiking statistics using linear algebra, and a resumming technique, which extrapolates from second order motifs to predict the overall effect of coupling on network correlation. Our motif-based results seek to isolate the effect of network architecture perturbatively from a known network state.
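The three connectivity statistics named above can be counted directly from an adjacency matrix. A simplified sketch (ordered-pair counting with degenerate index coincidences excluded; the paper's normalization may differ):

```python
import numpy as np

def motif_stats(W):
    """Connection probability plus diverging- and chain-motif counts for a
    directed adjacency matrix (W[i, j] = 1 means a connection j -> i)."""
    n = W.shape[0]
    p = W.sum() / (n * (n - 1))                       # connection probability
    div = chain = 0
    for j in range(n):                                # j = source / middle cell
        targets = np.flatnonzero(W[:, j])             # cells receiving from j
        sources = np.flatnonzero(W[j, :])             # cells projecting to j
        div += len(targets) * (len(targets) - 1)      # j -> i and j -> k, i != k
        chain += sum(1 for i in targets for k in sources
                     if i != k and i != j and k != j)  # k -> j -> i
    return p, div, chain

# a directed ring 0 -> 1 -> ... -> 9 -> 0 has chain motifs but no diverging ones
n = 10
W = np.zeros((n, n))
for j in range(n):
    W[(j + 1) % n, j] = 1.0
p, div, chain = motif_stats(W)
```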
Dynamics of the exponential integrate-and-fire model with slow currents and adaptation.
Barranca, Victor J; Johnson, Daniel C; Moyher, Jennifer L; Sauppe, Joshua P; Shkarayev, Maxim S; Kovačič, Gregor; Cai, David
2014-08-01
In order to properly capture spike-frequency adaptation with a simplified point-neuron model, we study approximations of Hodgkin-Huxley (HH) models including slow currents by exponential integrate-and-fire (EIF) models that incorporate the same types of currents. We optimize the parameters of the EIF models under the external drive consisting of AMPA-type conductance pulses using the current-voltage curves and the van Rossum metric to best capture the subthreshold membrane potential, firing rate, and jump size of the slow current at the neuron's spike times. Our numerical simulations demonstrate that, in addition to these quantities, the approximate EIF-type models faithfully reproduce bifurcation properties of the HH neurons with slow currents, which include spike-frequency adaptation, phase-response curves, critical exponents at the transition between a finite and infinite number of spikes with increasing constant external drive, and bifurcation diagrams of interspike intervals in time-periodically forced models. Dynamics of networks of HH neurons with slow currents can also be approximated by corresponding EIF-type networks, with the approximation being at least statistically accurate over a broad range of Poisson rates of the external drive. For the form of external drive resembling realistic, AMPA-like synaptic conductance response to incoming action potentials, the EIF model affords great savings of computation time as compared with the corresponding HH-type model. Our work shows that the EIF model with additional slow currents is well suited for use in large-scale, point-neuron models in which spike-frequency adaptation is important.
Akao, Akihiko; Ogawa, Yutaro; Jimbo, Yasuhiko; Ermentrout, G. Bard; Kotani, Kiyoshi
2018-01-01
Gamma oscillations are thought to play an important role in brain function. Interneuron gamma (ING) and pyramidal-interneuron gamma (PING) mechanisms have been proposed as generation mechanisms for these oscillations. However, the relation between the generation mechanisms and the dynamical properties of the gamma oscillation is still unclear. Among the dynamical properties of the gamma oscillation, the phase response function (PRF) is important because it encodes the response of the oscillation to inputs. Recently, the PRF for an inhibitory population of modified theta neurons that generate an ING rhythm was computed by the adjoint method applied to the associated Fokker-Planck equation (FPE) for the model. The modified theta model incorporates conductance-based synapses as well as the voltage and current dynamics. Here, we extended this previous work by creating an excitatory-inhibitory (E-I) network using the modified theta model and described the population dynamics with the corresponding FPE. We conducted a bifurcation analysis of the FPE to find parameter regions which generate gamma oscillations. In order to label the oscillatory parameter regions by their generation mechanisms, we defined ING- and PING-type gamma oscillations in a mathematically plausible way based on the driver of the inhibitory population. We labeled the oscillatory parameter regions by these generation mechanisms and derived PRFs via the adjoint method on the FPE in order to investigate the differences in the responses of each type of oscillation to inputs. PRFs for the PING and ING mechanisms are derived and compared. We found that the amplitude of the PRF for the excitatory population is larger in the PING case than in the ING case. Finally, the E-I population of the modified theta neuron enabled us to analyze the PRFs of PING-type gamma oscillation and the entrainment ability of E and I populations. We found a parameter region in which the PRFs of E and I are both purely positive in the case of
Nagashino, Hirofumi; Kinouchi, Yohsuke; Danesh, Ali A; Pandya, Abhijit S
2013-01-01
Tinnitus is the perception of sound in the ears or in the head where no external source is present. Sound therapy is one of the most effective techniques for tinnitus treatment that have been proposed. In order to investigate mechanisms of tinnitus generation and the clinical effects of sound therapy, we have proposed conceptual and computational models with plasticity using a neural oscillator or a neuronal network model. In the present paper, we propose a neuronal network model with simplified tonotopicity of the auditory system as more detailed structure. In this model an integrate-and-fire neuron model is employed and homeostatic plasticity is incorporated. The computer simulation results show that the present model can show the generation of oscillation and its cessation by external input. It suggests that the present framework is promising as a modeling for the tinnitus generation and the effects of sound therapy.
Lin, I-Chun; Xing, Dajun; Shapley, Robert
2012-12-01
One of the reasons the visual cortex has attracted the interest of computational neuroscience is that it has well-defined inputs. The lateral geniculate nucleus (LGN) of the thalamus is the source of visual signals to the primary visual cortex (V1). Most large-scale cortical network models approximate the spike trains of LGN neurons as simple Poisson point processes. However, many studies have shown that neurons in the early visual pathway are capable of spiking with high temporal precision and their discharges are not Poisson-like. To gain an understanding of how response variability in the LGN influences the behavior of V1, we study response properties of model V1 neurons that receive purely feedforward inputs from LGN cells modeled either as noisy leaky integrate-and-fire (NLIF) neurons or as inhomogeneous Poisson processes. We first demonstrate that the NLIF model is capable of reproducing many experimentally observed statistical properties of LGN neurons. Then we show that a V1 model in which the LGN input to a V1 neuron is modeled as a group of NLIF neurons produces higher orientation selectivity than the one with Poisson LGN input. The second result implies that statistical characteristics of LGN spike trains are important for V1's function. We conclude that physiologically motivated models of V1 need to include more realistic LGN spike trains that are less noisy than inhomogeneous Poisson processes.
Computing with Spiking Neuron Networks
H. Paugam-Moisy; S.M. Bohte (Sander); G. Rozenberg; T.H.W. Baeck (Thomas); J.N. Kok (Joost)
2012-01-01
Spiking Neuron Networks (SNNs) are often referred to as the 3rd generation of neural networks. Highly inspired by natural computing in the brain and recent advances in neuroscience, they derive their strength and interest from an accurate modeling of synaptic interactions
Kwon, Min-Woo; Baek, Myung-Hyun; Hwang, Sungmin; Kim, Sungjun; Park, Byung-Gook
2018-09-01
We designed a CMOS analog integrate-and-fire (I&F) neuron circuit that can drive resistive synaptic devices. The neuron circuit consists of a current mirror for spatial integration, a capacitor for temporal integration, an asymmetric negative and positive pulse generation part, a refractory part, and finally a back-propagation pulse generation part for learning in the synaptic devices. The resistive synaptic devices were fabricated with an HfOx switching layer grown by atomic layer deposition (ALD). The resistive synaptic devices had gradual set and reset characteristics, and their conductance was adjusted by the spike-timing-dependent plasticity (STDP) learning rule. We carried out circuit simulations of the synaptic device and the CMOS neuron circuit, and we developed an unsupervised spiking neural network (SNN) for 5 × 5 pattern recognition and classification using the neuron circuit and synaptic devices. The hardware-based SNN can autonomously and efficiently control the weight updates of the synapses between neurons, without the aid of software calculations.
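The STDP rule used to adjust the synaptic conductance can be sketched as a pair-based exponential window, with weight clipping standing in for the device's conductance limits. All constants below are illustrative, not the circuit's actual values:

```python
import math

def stdp_dw(dt_spike, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pair-based STDP: weight change as a function of t_post - t_pre (ms).
    Pre-before-post potentiates, post-before-pre depresses."""
    if dt_spike > 0:
        return a_plus * math.exp(-dt_spike / tau)
    return -a_minus * math.exp(dt_spike / tau)

# three hypothetical pre/post pairings applied to one synapse, with the
# weight clipped to the device's conductance range [0, 1]
w = 0.5
for dt_spike in (5.0, 15.0, -8.0):
    w = min(1.0, max(0.0, w + stdp_dw(dt_spike)))
```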
Engelken, Rainer; Farkhooi, Farzad; Hansel, David; van Vreeswijk, Carl; Wolf, Fred
2016-01-01
Neuronal activity in the central nervous system varies strongly in time and across neuronal populations. It is a longstanding proposal that such fluctuations generically arise from chaotic network dynamics. Various theoretical studies predict that the rich dynamics of rate models operating in the chaotic regime can subserve circuit computation and learning. Neurons in the brain, however, communicate via spikes and it is a theoretical challenge to obtain similar rate fluctuations in networks of spiking neuron models. A recent study investigated spiking balanced networks of leaky integrate and fire (LIF) neurons and compared their dynamics to a matched rate network with identical topology, where single unit input-output functions were chosen from isolated LIF neurons receiving Gaussian white noise input. A mathematical analogy between the chaotic instability in networks of rate units and the spiking network dynamics was proposed. Here we revisit the behavior of the spiking LIF networks and these matched rate networks. We find expected hallmarks of a chaotic instability in the rate network: For supercritical coupling strength near the transition point, the autocorrelation time diverges. For subcritical coupling strengths, we observe critical slowing down in response to small external perturbations. In the spiking network, we found in contrast that the timescale of the autocorrelations is insensitive to the coupling strength and that rate deviations resulting from small input perturbations rapidly decay. The decay speed even accelerates for increasing coupling strength. In conclusion, our reanalysis demonstrates fundamental differences between the behavior of pulse-coupled spiking LIF networks and rate networks with matched topology and input-output function. In particular there is no indication of a corresponding chaotic instability in the spiking network.
Criticality in Neuronal Networks
Friedman, Nir; Ito, Shinya; Brinkman, Braden A. W.; Shimono, Masanori; Deville, R. E. Lee; Beggs, John M.; Dahmen, Karin A.; Butler, Tom C.
2012-02-01
In recent years, experiments detecting the electrical firing patterns in slices of in vitro brain tissue have been analyzed to suggest the presence of scale invariance and possibly criticality in the brain. Much of the work done, however, has been limited in two ways: 1) the data collected are from local field potentials that do not represent the firing of individual neurons; 2) the analysis has been primarily limited to histograms. In our work we examine data based on the firing of individual neurons (spike data) and greatly extend the analysis by considering shape collapse and exponents. Our results strongly suggest that the brain operates near a tuned critical point of a highly distinctive universality class.
Stages of neuronal network formation
International Nuclear Information System (INIS)
Woiterski, Lydia; Käs, Josef A; Claudepierre, Thomas; Luxenhofer, Robert; Jordan, Rainer
2013-01-01
Graph theoretical approaches have become a powerful tool for investigating the architecture and dynamics of complex networks. The topology of network graphs has revealed small-world properties for very different real systems, among them neuronal networks. In this study, we observed the early development of mouse retinal ganglion cell (RGC) networks in vitro using time-lapse video microscopy. By means of a time-resolved graph theoretical analysis of the connectivity, the shortest path length and the edge length, we were able to discover the different stages of the network formation. Starting from single cells, neurons in the first stage connected to each other, ending up in a network of maximum complexity. In the further course, we observed a simplification of the network, which manifested itself in changes of relevant network parameters such as the minimization of the path length. Moreover, we found that RGC networks self-organized as small-world networks at both stages; however, the optimization occurred only in the second stage. (paper)
Chimera patterns in two-dimensional networks of coupled neurons
Schmidt, Alexander; Kasimatis, Theodoros; Hizanidis, Johanne; Provata, Astero; Hövel, Philipp
2017-03-01
We discuss synchronization patterns in networks of FitzHugh-Nagumo and leaky integrate-and-fire oscillators coupled in a two-dimensional toroidal geometry. A common feature between the two models is the presence of fast and slow dynamics, a typical characteristic of neurons. Earlier studies have demonstrated that both models when coupled nonlocally in one-dimensional ring networks produce chimera states for a large range of parameter values. In this study, we give evidence of a plethora of two-dimensional chimera patterns of various shapes, including spots, rings, stripes, and grids, observed in both models, as well as additional patterns found mainly in the FitzHugh-Nagumo system. Both systems exhibit multistability: For the same parameter values, different initial conditions give rise to different dynamical states. Transitions occur between various patterns when the parameters (coupling range, coupling strength, refractory period, and coupling phase) are varied. Many patterns observed in the two models follow similar rules. For example, the diameter of the rings grows linearly with the coupling radius.
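The nonlocal coupling that generates such two-dimensional chimera patterns can be sketched by constructing the coupling neighbourhoods on a torus. This is an illustrative construction only: it uses a Chebyshev-distance (square) kernel with wrap-around in both directions, whereas the paper's exact coupling kernel and ranges may differ.

```python
def torus_neighbors(n, r):
    """Nonlocal coupling neighbourhoods on an n x n torus: each oscillator
    (i, j) is coupled to every oscillator within Chebyshev distance r,
    with periodic wrap-around in both directions, itself excluded."""
    nbrs = {}
    for i in range(n):
        for j in range(n):
            cell = []
            for di in range(-r, r + 1):
                for dj in range(-r, r + 1):
                    if di == 0 and dj == 0:
                        continue  # no self-coupling
                    cell.append(((i + di) % n, (j + dj) % n))
            nbrs[(i, j)] = cell
    return nbrs

nb = torus_neighbors(8, 2)
print(len(nb[(0, 0)]))  # (2*2 + 1)**2 - 1 = 24 coupled neighbours per site
```

Varying the radius `r` relative to the lattice size is the "coupling range" parameter whose variation drives the transitions between spot, ring, stripe, and grid patterns described above.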
Generalization of the event-based Carnevale-Hines integration scheme for integrate-and-fire models
van Elburg, R.A.J.; van Ooyen, A.
2009-01-01
An event-based integration scheme for an integrate-and-fire neuron model with exponentially decaying excitatory synaptic currents and double exponential inhibitory synaptic currents has been introduced by Carnevale and Hines. However, the integration scheme imposes nonphysiological constraints on
Information in a Network of Neuronal Cells: Effect of Cell Density and Short-Term Depression
Onesto, Valentina; Cosentino, Carlo; Di Fabrizio, Enzo M.; Cesarelli, Mario; Amato, Francesco; Gentile, Francesco
2016-05-10
Neurons are specialized, electrically excitable cells which use electrical and chemical signals to transmit and elaborate information. Understanding how the cooperation of a great many neurons in a grid may modify and perhaps improve the information quality, in contrast to few neurons in isolation, is critical for the rational design of cell-material interfaces for applications in regenerative medicine, tissue engineering, and personalized lab-on-a-chips. In the present paper, we couple an integrate-and-fire model with information-theory variables to analyse the extent of information in a network of nerve cells. We provide an estimate of the information in the network in bits as a function of cell density and short-term depression time. In the model, neurons are connected through a Delaunay triangulation of non-intersecting edges; in doing so, the number of connecting synapses per neuron is approximately constant, reproducing the early stage of network development in planar neural cell cultures. In simulations where the number of nodes is varied, we observe an optimal value of cell density for which information in the grid is maximized. In simulations in which the post-transmission latency time is varied, we observe that information increases as the latency time decreases and, for specific configurations of the grid, is largely enhanced in a resonance effect.
A network of spiking neurons for computing sparse representations in an energy-efficient way.
Hu, Tao; Genkin, Alexander; Chklovskii, Dmitri B
2012-11-01
Computing sparse redundant representations is an important problem in both applied mathematics and neuroscience. In many applications, this problem must be solved in an energy-efficient way. Here, we propose a hybrid distributed algorithm (HDA), which solves this problem on a network of simple nodes communicating by low-bandwidth channels. HDA nodes perform both gradient-descent-like steps on analog internal variables and coordinate-descent-like steps via quantized external variables communicated to each other. Interestingly, the operation is equivalent to a network of integrate-and-fire neurons, suggesting that HDA may serve as a model of neural computation. We show that the numerical performance of HDA is on par with existing algorithms. In the asymptotic regime, the representation error of HDA decays with time, t, as 1/t. HDA is stable against time-varying noise; specifically, the representation error decays as 1/√t for gaussian white noise.
Self-organized criticality occurs in non-conservative neuronal networks during `up' states
Millman, Daniel; Mihalas, Stefan; Kirkwood, Alfredo; Niebur, Ernst
2010-10-01
During sleep, under anaesthesia and in vitro, cortical neurons in sensory, motor, association and executive areas fluctuate between so-called up and down states, which are characterized by distinct membrane potentials and spike rates. Another phenomenon observed in preparations similar to those that exhibit up and down states-such as anaesthetized rats, brain slices and cultures devoid of sensory input, as well as awake monkey cortex-is self-organized criticality (SOC). SOC is characterized by activity `avalanches' with a branching parameter near unity and size distribution that obeys a power law with a critical exponent of about -3/2. Recent work has demonstrated SOC in conservative neuronal network models, but critical behaviour breaks down when biologically realistic `leaky' neurons are introduced. Here, we report robust SOC behaviour in networks of non-conservative leaky integrate-and-fire neurons with short-term synaptic depression. We show analytically and numerically that these networks typically have two stable activity levels, corresponding to up and down states, that the networks switch spontaneously between these states and that up states are critical and down states are subcritical.
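The SOC hallmark described above, avalanches with a branching parameter near unity and a size distribution obeying a power law with exponent about -3/2, can be illustrated with a toy critical branching process. This is a textbook stand-in, not the authors' leaky integrate-and-fire network: each active unit independently triggers 0 or 2 descendants with equal probability, giving a mean offspring (branching parameter) of exactly 1.

```python
import random

def avalanche_size(p_branch=0.5, max_size=10_000, rng=random):
    """Size of one avalanche in a critical branching process: each active
    unit triggers 2 descendants with probability p_branch, else 0, so the
    branching parameter is 2 * p_branch. At criticality (p_branch = 0.5)
    avalanche sizes follow the power law P(s) ~ s**(-3/2)."""
    active, size = 1, 0
    while active and size < max_size:
        size += active
        nxt = 0
        for _ in range(active):
            if rng.random() < p_branch:
                nxt += 2
        active = nxt
    return size

rng = random.Random(0)
sizes = [avalanche_size(rng=rng) for _ in range(20_000)]
small = sum(1 for s in sizes if s == 1)
print(small / len(sizes))  # ~0.5: half of all avalanches die at generation one
```

The heavy tail of `sizes` (many avalanches far larger than the mean) is what distinguishes the critical up states from the subcritical down states reported in the abstract.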
Dynamics of Competition between Subnetworks of Spiking Neuronal Networks in the Balanced State.
Lagzi, Fereshteh; Rotter, Stefan
2015-01-01
We explore and analyze the nonlinear switching dynamics of neuronal networks with non-homogeneous connectivity. The general significance of such transient dynamics for brain function is unclear; however, decision-making processes in perception and cognition, for instance, have been linked to it. The network under study here comprises three subnetworks of either excitatory or inhibitory leaky integrate-and-fire neurons, of which two are of the same type. The synaptic weights are arranged to establish and maintain a balance between excitation and inhibition in the case of a constant external drive. Each subnetwork is randomly connected, where all neurons belonging to a particular population have the same in-degree and the same out-degree. Neurons in different subnetworks are also randomly connected with the same probability; however, depending on the type of the pre-synaptic neuron, the synaptic weight is scaled by a factor. We observed that for a certain range of the "within" versus "between" connection weights (the bifurcation parameter), the network activation spontaneously switches between the two subnetworks of the same type. This kind of dynamics has been termed "winnerless competition", which also has a random component here. In our model, this phenomenon is well described by a set of coupled stochastic differential equations of Lotka-Volterra type that imply a competition between the subnetworks. The associated mean-field model shows the same dynamical behavior as observed in simulations of large networks comprising thousands of spiking neurons. The deterministic phase portrait is characterized by two attractors and a saddle node; its stochastic component is essentially given by the multiplicative inherent noise of the system. We find that the dwell time distribution of the active states is exponential, indicating that the noise drives the system randomly from one attractor to the other. A similar model for a larger number of populations might suggest a
Response variability in balanced cortical networks
DEFF Research Database (Denmark)
Lerchner, Alexander; Ursta, C.; Hertz, J.
2006-01-01
We study the spike statistics of neurons in a network with dynamically balanced excitation and inhibition. Our model, intended to represent a generic cortical column, comprises randomly connected excitatory and inhibitory leaky integrate-and-fire neurons, driven by excitatory input from an external...
Neuronal Networks on Nanocellulose Scaffolds.
Jonsson, Malin; Brackmann, Christian; Puchades, Maja; Brattås, Karoline; Ewing, Andrew; Gatenholm, Paul; Enejder, Annika
2015-11-01
Proliferation, integration, and neurite extension of PC12 cells, a widely used culture model for cholinergic neurons, were studied in nanocellulose scaffolds biosynthesized by Gluconacetobacter xylinus to allow a three-dimensional (3D) extension of neurites better mimicking neuronal networks in tissue. The interaction with control scaffolds was compared with cationized nanocellulose (trimethyl ammonium betahydroxy propyl [TMAHP] cellulose) to investigate the impact of surface charges on the cell interaction mechanisms. Furthermore, coatings with extracellular matrix proteins (collagen, fibronectin, and laminin) were investigated to determine the importance of integrin-mediated cell attachment. Cell proliferation was evaluated by a cellular proliferation assay, while cell integration and neurite propagation were studied by simultaneous label-free Coherent anti-Stokes Raman Scattering and second harmonic generation microscopy, providing 3D images of PC12 cells and arrangement of nanocellulose fibrils, respectively. Cell attachment and proliferation were enhanced by TMAHP modification, but not by protein coating. Protein coating instead promoted active interaction between the cells and the scaffold, hence lateral cell migration and integration. Irrespective of surface modification, deepest cell integration measured was one to two cell layers, whereas neurites have a capacity to integrate deeper than the cell bodies in the scaffold due to their fine dimensions and amoeba-like migration pattern. Neurites with lengths of >50 μm were observed, successfully connecting individual cells and cell clusters. In conclusion, TMAHP-modified nanocellulose scaffolds promote initial cellular scaffold adhesion, which combined with additional cell-scaffold treatments enables further formation of 3D neuronal networks.
Network reconfiguration and neuronal plasticity in rhythm-generating networks.
Koch, Henner; Garcia, Alfredo J; Ramirez, Jan-Marino
2011-12-01
Neuronal networks are highly plastic and reconfigure in a state-dependent manner. The plasticity at the network level emerges through multiple intrinsic and synaptic membrane properties that imbue neurons and their interactions with numerous nonlinear properties. These properties are continuously regulated by neuromodulators and homeostatic mechanisms that are critical not only to maintain network stability but also to adapt networks in a short- and long-term manner to changes in behavioral, developmental, metabolic, and environmental conditions. This review provides concrete examples from neuronal networks in invertebrates and vertebrates, and illustrates that the concepts and rules that govern neuronal networks and behaviors are universal.
International Nuclear Information System (INIS)
Amancio, Diego R; Oliveira Jr, Osvaldo N; Costa, Luciano da F
2012-01-01
The mechanisms responsible for containing activity in systems represented by networks are crucial in various phenomena, for example, in diseases such as epilepsy that affect the neuronal networks and for information dissemination in social networks. The first models to account for contained activity included triggering and inhibition processes, but they cannot be applied to social networks where inhibition is clearly absent. A recent model showed that contained activity can be achieved with no need of inhibition processes provided that the network is subdivided into modules (communities). In this paper, we introduce a new concept inspired by Hebbian theory, through which containment of activity is achieved by incorporating a decaying-activity dynamics into a random walk mechanism preferential to the node activity. Upon selecting the decay coefficient within a proper range, we observed sustained activity in all the networks tested, namely, random, Barabási–Albert and geographical networks. The generality of this finding was confirmed by showing that modularity is no longer needed if the integrate-and-fire dynamics incorporates the decay factor. Taken together, these results provide a proof of principle that persistent, restrained network activation might occur in the absence of any particular topological structure. This may be the reason why neuronal activity does not spread out to the entire neuronal network, even when no special topological organization exists. (paper)
Three-dimensional chimera patterns in networks of spiking neuron oscillators
Kasimatis, T.; Hizanidis, J.; Provata, A.
2018-05-01
We study the stable spatiotemporal patterns that arise in a three-dimensional (3D) network of neuron oscillators, whose dynamics is described by the leaky integrate-and-fire (LIF) model. More specifically, we investigate the form of the chimera states induced by a 3D coupling matrix with nonlocal topology. The observed patterns are in many cases direct generalizations of the corresponding two-dimensional (2D) patterns, e.g., spheres, layers, and cylinder grids. We also find cylindrical and "cross-layered" chimeras that do not have an equivalent in 2D systems. Quantitative measures are calculated, such as the ratio of synchronized and unsynchronized neurons as a function of the coupling range, the mean phase velocities, and the distribution of neurons in mean phase velocities. Based on these measures, the chimeras are categorized in two families. The first family of patterns is observed for weaker coupling and exhibits higher mean phase velocities for the unsynchronized areas of the network. The opposite holds for the second family, where the unsynchronized areas have lower mean phase velocities. The various measures demonstrate discontinuities, indicating criticality as the parameters cross from the first family of patterns to the second.
Training the integrate-and-fire model with the informax principle: I
Energy Technology Data Exchange (ETDEWEB)
Feng Jianfeng; Buxton, Hilary [COGS, Sussex University, Brighton (United Kingdom); Deng Yingchun [Department of Mathematics, Hunan Normal University, Changsha (China)
2002-03-15
Based on the informax principle and the input-output relationship of the integrate-and-fire (IF) model, learning rules for IF neurons are developed. For supervised learning with a uniform weight of synapses (the theoretically tractable case), we show that the derived learning rule is stable and the stable state is unique. For unsupervised learning, within physiologically reasonable parameter regions, both long-term potentiation (LTP) and long-term depression (LTD) can occur when the inhibitory input is weak, but LTD cannot be observed when the inhibitory input is strong enough. When both LTP and LTD occur, LTD is observable when the output of the postsynaptic neuron is faster than the pre-synaptic inputs; otherwise LTP is observable, as seen in recent experiments. Learning rules for the general case are also studied, and numerical examples show that the derived learning rule tends to equalize the contribution of different inputs to the output firing rates. (author)
Directory of Open Access Journals (Sweden)
Robert R Kerr
Learning rules, such as spike-timing-dependent plasticity (STDP), change the structure of networks of neurons based on the firing activity. A network level understanding of these mechanisms can help infer how the brain learns patterns and processes information. Previous studies have shown that STDP selectively potentiates feed-forward connections that have specific axonal delays, and that this underlies behavioral functions such as sound localization in the auditory brainstem of the barn owl. In this study, we investigate how STDP leads to the selective potentiation of recurrent connections with different axonal and dendritic delays during oscillatory activity. We develop analytical models of learning with additive STDP in recurrent networks driven by oscillatory inputs, and support the results using simulations with leaky integrate-and-fire neurons. Our results show selective potentiation of connections with specific axonal delays, which depended on the input frequency. In addition, we demonstrate how this can lead to a network becoming selective in the amplitude of its oscillatory response to this frequency. We extend this model of axonal delay selection within a single recurrent network in two ways. First, we show the selective potentiation of connections with a range of both axonal and dendritic delays. Second, we show axonal delay selection between multiple groups receiving out-of-phase, oscillatory inputs. We discuss the application of these models to the formation and activation of neuronal ensembles or cell assemblies in the cortex, and also to missing fundamental pitch perception in the auditory brainstem.
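The delay selection under oscillatory drive described above can be sketched with the additive pair-based STDP rule. The parameters below (amplitudes, time constants, a 10 Hz drive) are illustrative choices, not those of the study: with periodic firing at 10 Hz, a connection whose effective delay is short relative to the STDP window is potentiated, while one whose delay approaches half the oscillation period is depressed.

```python
import math

def stdp_update(pre_spikes, post_spikes, a_plus=0.01, a_minus=0.012,
                tau_plus=20.0, tau_minus=20.0):
    """Net additive-STDP weight change over all pre/post spike pairs
    (times in ms): potentiation when the post spike follows the pre
    spike, depression when it precedes it."""
    dw = 0.0
    for tp in pre_spikes:
        for to in post_spikes:
            s = to - tp
            if s > 0:
                dw += a_plus * math.exp(-s / tau_plus)
            elif s < 0:
                dw -= a_minus * math.exp(s / tau_minus)
    return dw

period = 100.0                          # 10 Hz oscillatory firing (ms)
pre = [i * period for i in range(100)]
dw_short = stdp_update(pre, [t + 5.0 for t in pre])   # 5 ms effective delay
dw_long = stdp_update(pre, [t + 50.0 for t in pre])   # half-period delay
print(dw_short, dw_long)  # short delay potentiated, long delay depressed
```

Because the drive is periodic, the "long" delay also produces near-coincidences at negative lags from the previous cycle; with the slightly dominant depression amplitude used here, those pairs tip the net change negative, which is the frequency-dependent selectivity mechanism the abstract analyzes.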
Associative memory in phasing neuron networks
Energy Technology Data Exchange (ETDEWEB)
Nair, Niketh S [ORNL; Bochove, Erik J. [United States Air Force Research Laboratory, Kirtland Air Force Base; Braiman, Yehuda [ORNL
2014-01-01
We studied pattern formation in a network of coupled Hindmarsh-Rose model neurons and introduced a new model for associative memory retrieval using networks of Kuramoto oscillators. Hindmarsh-Rose Neural Networks can exhibit a rich set of collective dynamics that can be controlled by their connectivity. Specifically, we showed an instance of Hebb's rule where spiking was correlated with network topology. Based on this, we presented a simple model of associative memory in coupled phase oscillators.
Doubly stochastic coherence in complex neuronal networks
Gao, Yang; Wang, Jianjun
2012-11-01
A system composed of coupled FitzHugh-Nagumo neurons with various topological structures is investigated under the co-presence of two independent additive and multiplicative Gaussian white noises, in which particular attention is paid to the spiking regularity of the neuronal networks. As the additive noise intensity and the multiplicative noise intensity are simultaneously adjusted to optimal values, the temporal periodicity of the output of the system reaches a maximum, indicating the occurrence of doubly stochastic coherence. The network topology randomness exerts different influences on the temporal coherence of the spiking oscillation in dissimilar coupling strength regimes. At a small coupling strength, the spiking regularity shows nearly no difference between the regular, small-world, and completely random networks. At an intermediate coupling strength, the temporal periodicity in a small-world neuronal network can be improved slightly by adding a small fraction of long-range connections. At a large coupling strength, the dynamical behavior of the neurons completely loses the resonance property with respect to the additive or multiplicative noise intensity, and the spiking regularity decreases considerably as the network topology randomness increases.
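A single-unit sketch of the setup above shows how the two noise sources enter the dynamics: one FitzHugh-Nagumo neuron (rather than a network) in the excitable regime, with multiplicative noise on the fast variable and additive noise on the recovery variable. All amplitudes and parameters below are made up for illustration; the sketch only demonstrates that the joint noise drive induces spikes at all, not the coherence maximum itself.

```python
import math
import random

def fhn_spike_count(d_add=0.02, d_mul=0.005, eps=0.08, a=1.05,
                    dt=0.01, t_max=500.0, seed=7):
    """Euler-Maruyama integration of one excitable FitzHugh-Nagumo unit:
        dv = (v - v**3/3 - w) dt + sqrt(2*d_mul) * v * dW1   (multiplicative)
        dw = eps * (v + a) dt   + sqrt(2*d_add) * dW2        (additive)
    Returns the number of noise-induced spikes (upward crossings of v = 1)."""
    rng = random.Random(seed)
    v, w = -1.0, -0.6
    spikes, above = 0, False
    for _ in range(int(t_max / dt)):
        dv = (v - v ** 3 / 3 - w) * dt + math.sqrt(2 * d_mul * dt) * v * rng.gauss(0, 1)
        dw_ = eps * (v + a) * dt + math.sqrt(2 * d_add * dt) * rng.gauss(0, 1)
        v += dv
        w += dw_
        if v > 1.0 and not above:   # count each excursion once
            spikes += 1
            above = True
        elif v < 0.0:
            above = False
    return spikes

n = fhn_spike_count()
print("noise-induced spikes:", n)
```

In the full study, the regularity of such noise-induced spike trains (e.g. the CV of interspike intervals) is what becomes optimal when both noise intensities are tuned together.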
Directory of Open Access Journals (Sweden)
Wassim M. Haddad
2014-07-01
Advances in neuroscience have been closely linked to mathematical modeling beginning with the integrate-and-fire model of Lapicque and proceeding through the modeling of the action potential by Hodgkin and Huxley to the current era. The fundamental building block of the central nervous system, the neuron, may be thought of as a dynamic element that is "excitable", and can generate a pulse or spike whenever the electrochemical potential across the cell membrane of the neuron exceeds a threshold. A key application of nonlinear dynamical systems theory to the neurosciences is to study phenomena of the central nervous system that exhibit nearly discontinuous transitions between macroscopic states. A very challenging and clinically important problem exhibiting this phenomenon is the induction of general anesthesia. In any specific patient, the transition from consciousness to unconsciousness as the concentration of anesthetic drugs increases is very sharp, resembling a thermodynamic phase transition. This paper focuses on multistability theory for continuous and discontinuous dynamical systems having a set of multiple isolated equilibria and/or a continuum of equilibria. Multistability is the property whereby the solutions of a dynamical system can alternate between two or more mutually exclusive Lyapunov stable and convergent equilibrium states under asymptotically slowly changing inputs or system parameters. In this paper, we extend the theory of multistability to continuous, discontinuous, and stochastic nonlinear dynamical systems. In particular, Lyapunov-based tests for multistability and synchronization of dynamical systems with continuously differentiable and absolutely continuous flows are established. The results are then applied to excitatory and inhibitory biological neuronal networks to explain the underlying mechanism of action for anesthesia and consciousness from a multistable dynamical system perspective, thereby providing a
Building functional networks of spiking model neurons.
Abbott, L F; DePasquale, Brian; Memmesheimer, Raoul-Martin
2016-03-01
Most of the networks used by computer scientists and many of those studied by modelers in neuroscience represent unit activities as continuous variables. Neurons, however, communicate primarily through discontinuous spiking. We review methods for transferring our ability to construct interesting networks that perform relevant tasks from the artificial continuous domain to more realistic spiking network models. These methods raise a number of issues that warrant further theoretical and experimental study.
Neuronal network analyses: premises, promises and uncertainties
Parker, David
2010-01-01
Neuronal networks assemble the cellular components needed for sensory, motor and cognitive functions. Any rational intervention in the nervous system will thus require an understanding of network function. Obtaining this understanding is widely considered to be one of the major tasks facing neuroscience today. Network analyses have been performed for some years in relatively simple systems. In addition to the direct insights these systems have provided, they also illustrate some of the diffic...
Attractor dynamics in local neuronal networks
Directory of Open Access Journals (Sweden)
Jean-Philippe Thivierge
2014-03-01
Patterns of synaptic connectivity in various regions of the brain are characterized by the presence of synaptic motifs, defined as unidirectional and bidirectional synaptic contacts that follow a particular configuration and link together small groups of neurons. Recent computational work proposes that a relay network (two populations communicating via a third, relay population of neurons) can generate precise patterns of neural synchronization. Here, we employ two distinct models of neuronal dynamics and show that simulated neural circuits designed in this way are caught in a global attractor of activity that prevents neurons from modulating their response on the basis of incoming stimuli. To circumvent the emergence of a fixed global attractor, we propose a mechanism of selective gain inhibition that promotes flexible responses to external stimuli. We suggest that local neuronal circuits may employ this mechanism to generate precise patterns of neural synchronization whose transient nature delimits the occurrence of a brief stimulus.
Quantifying chaotic dynamics from integrate-and-fire processes
Energy Technology Data Exchange (ETDEWEB)
Pavlov, A. N. [Department of Physics, Saratov State University, Astrakhanskaya Str. 83, 410012 Saratov (Russian Federation); Saratov State Technical University, Politehnicheskaya Str. 77, 410054 Saratov (Russian Federation); Pavlova, O. N. [Department of Physics, Saratov State University, Astrakhanskaya Str. 83, 410012 Saratov (Russian Federation); Mohammad, Y. K. [Department of Physics, Saratov State University, Astrakhanskaya Str. 83, 410012 Saratov (Russian Federation); Tikrit University Salahudin, Tikrit Qadisiyah, University Str. P.O. Box 42, Tikrit (Iraq); Kurths, J. [Potsdam Institute for Climate Impact Research, Telegraphenberg A 31, 14473 Potsdam (Germany); Institute of Physics, Humboldt University Berlin, 12489 Berlin (Germany)
2015-01-15
Characterizing chaotic dynamics from integrate-and-fire (IF) interspike intervals (ISIs) is relatively easy at high firing rates. When the firing rate is low, a correct estimation of the Lyapunov exponents (LEs) describing the dynamical features of complex oscillations reflected in IF ISI sequences becomes more complicated. In this work we discuss peculiarities and limitations of quantifying chaotic dynamics from IF point processes. We consider the main factors leading to underestimated LEs and demonstrate a way of improving the numerical determination of LEs from IF ISI sequences. We show that estimation of the two largest LEs can be performed using around 400 mean periods of chaotic oscillations in the regime of phase-coherent chaos. Application to real data is discussed.
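A minimal illustration of the setting (a toy substitute for the authors' procedure): generate ISIs by integrate-and-fire sampling of a chaotic signal, here the logistic map at r = 4, whose largest LE is known exactly (λ = ln 2), and note how a higher threshold (lower firing rate) leaves far fewer intervals to estimate from.

```python
import numpy as np

def logistic(n, r=4.0, x0=0.3, burn=100):
    """Iterate the logistic map x -> r x (1 - x) after a transient."""
    x = x0
    out = np.empty(n)
    for _ in range(burn):
        x = r * x * (1 - x)
    for i in range(n):
        x = r * x * (1 - x)
        out[i] = x
    return out

def lyapunov_direct(x, r=4.0):
    """Largest LE as the orbit average of ln |f'(x)| = ln |r (1 - 2x)|."""
    return np.mean(np.log(np.abs(r * (1.0 - 2.0 * x))))

def if_isis(signal, theta):
    """Integrate-and-fire sampling: accumulate the (positive) signal and
    emit a spike each time the running integral crosses the threshold."""
    times, acc = [], 0.0
    for i, s in enumerate(signal):
        acc += s
        if acc >= theta:
            times.append(i)
            acc -= theta
    return np.diff(times)   # interspike intervals

x = logistic(20000)
lam = lyapunov_direct(x)           # close to ln 2 for r = 4
isis_high = if_isis(x, theta=1.0)  # high firing rate: many, fine-grained ISIs
isis_low = if_isis(x, theta=20.0)  # low firing rate: few, coarse ISIs
```

Estimating `lam` back from `isis_low` alone would require a delay-embedding method (e.g. a Rosenstein-type divergence fit), which is exactly where the difficulties discussed in the abstract arise.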
Mirrored STDP Implements Autoencoder Learning in a Network of Spiking Neurons.
Burbank, Kendra S
2015-12-01
The autoencoder algorithm is a simple but powerful unsupervised method for training neural networks. Autoencoder networks can learn sparse distributed codes similar to those seen in cortical sensory areas such as visual area V1, but they can also be stacked to learn increasingly abstract representations. Several computational neuroscience models of sensory areas, including Olshausen & Field's Sparse Coding algorithm, can be seen as autoencoder variants, and autoencoders have seen extensive use in the machine learning community. Despite their power and versatility, autoencoders have been difficult to implement in a biologically realistic fashion. The challenges include their need to calculate differences between two neuronal activities and their requirement for learning rules which lead to identical changes at feedforward and feedback connections. Here, we study a biologically realistic network of integrate-and-fire neurons with anatomical connectivity and synaptic plasticity that closely matches that observed in cortical sensory areas. Our choice of synaptic plasticity rules is inspired by recent experimental and theoretical results suggesting that learning at feedback connections may have a different form from learning at feedforward connections, and our results depend critically on this novel choice of plasticity rules. Specifically, we propose that plasticity rules at feedforward versus feedback connections are temporally opposed versions of spike-timing dependent plasticity (STDP), leading to a symmetric combined rule we call Mirrored STDP (mSTDP). We show that with mSTDP, our network follows a learning rule that approximately minimizes an autoencoder loss function. When trained with whitened natural image patches, the learned synaptic weights resemble the receptive fields seen in V1. Our results use realistic synaptic plasticity rules to show that the powerful autoencoder learning algorithm could be within the reach of real biological networks.
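The symmetry of the combined rule can be illustrated with a small sketch; the exponential STDP kernel and the amplitude/time-constant values below are generic textbook choices, not the parameters of this paper.

```python
import numpy as np

A_PLUS, A_MINUS, TAU = 1.0, 0.5, 20.0  # illustrative amplitudes, time constant (ms)

def stdp(dt):
    """Antisymmetric STDP kernel: potentiation when the presynaptic
    spike precedes the postsynaptic spike (dt = t_post - t_pre > 0)."""
    dt = np.asarray(dt, dtype=float)
    return np.where(dt > 0, A_PLUS * np.exp(-dt / TAU),
                    -A_MINUS * np.exp(dt / TAU))

def mirrored_stdp(dt):
    """mSTDP: the feedback synapse sees the temporally reversed kernel,
    so the combined feedforward + feedback update is symmetric in dt."""
    return stdp(dt) + stdp(-dt)

dts = np.linspace(-100.0, 100.0, 201)
combined = mirrored_stdp(dts)       # even function of the spike-time lag
```

The even symmetry of `combined` is what makes the paired updates at feedforward and feedback connections identical, the property the autoencoder derivation relies on.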
Complementary responses to mean and variance modulations in the perfect integrate-and-fire model.
Pressley, Joanna; Troyer, Todd W
2009-07-01
In the perfect integrate-and-fire model (PIF), the membrane voltage is proportional to the integral of the input current since the time of the previous spike. It has been shown that the firing rate within a noise free ensemble of PIF neurons responds instantaneously to dynamic changes in the input current, whereas in the presence of white noise, model neurons preferentially pass low frequency modulations of the mean current. Here, we prove that when the input variance is perturbed while holding the mean current constant, the PIF responds preferentially to high frequency modulations. Moreover, the linear filters for mean and variance modulations are complementary, adding exactly to one. Since changes in the rate of Poisson distributed inputs lead to proportional changes in the mean and variance, these results imply that an ensemble of PIF neurons transmits a perfect replica of the time-varying input rate for Poisson distributed input. A more general argument shows that this property holds for any signal leading to proportional changes in the mean and variance of the input current.
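The noise independence underlying this result can be checked numerically: for the PIF, the mean first-passage time of the integrated input to the threshold is θ/μ regardless of σ, so the stationary rate is μ/θ at any noise level. A sketch with a subtractive reset and illustrative parameters:

```python
import numpy as np

def pif_rate(mu, sigma, theta=1.0, dt=1e-3, T=100.0, seed=0):
    """Firing rate of a perfect integrate-and-fire neuron
    dV = mu dt + sigma dW with subtractive reset (V -> V - theta).
    With this reset the spike count by time T equals
    floor(max_{t<=T} X(t) / theta), where X is the free integral."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    x = np.cumsum(mu * dt + sigma * np.sqrt(dt) * rng.normal(size=n))
    return float(np.floor(x.max() / theta)) / T

# Same mean drive, very different noise levels: the rate stays near mu/theta.
r_low = pif_rate(mu=5.0, sigma=0.2)
r_high = pif_rate(mu=5.0, sigma=1.0, seed=1)
```

Modulating `mu` and `sigma**2` in proportion, as a time-varying Poisson input would, is the regime in which the abstract's complementary filters sum to one.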
Towards reproducible descriptions of neuronal network models.
Directory of Open Access Journals (Sweden)
Eilen Nordlie
2009-08-01
Full Text Available Progress in science depends on the effective exchange of ideas among scientists. New ideas can be assessed and criticized in a meaningful manner only if they are formulated precisely. This applies to simulation studies as well as to experiments and theories. But after more than 50 years of neuronal network simulations, we still lack a clear and common understanding of the role of computational models in neuroscience as well as established practices for describing network models in publications. This hinders the critical evaluation of network models as well as their re-use. We analyze here 14 research papers proposing neuronal network models of different complexity and find widely varying approaches to model descriptions, with regard to both the means of description and the ordering and placement of material. We further observe great variation in the graphical representation of networks and the notation used in equations. Based on our observations, we propose a good model description practice, composed of guidelines for the organization of publications, a checklist for model descriptions, templates for tables presenting model structure, and guidelines for diagrams of networks. The main purpose of this good practice is to trigger a debate about the communication of neuronal network models in a manner comprehensible to humans, as opposed to machine-readable model description languages. We believe that the good model description practice proposed here, together with a number of other recent initiatives on data-, model-, and software-sharing, may lead to a deeper and more fruitful exchange of ideas among computational neuroscientists in years to come. We further hope that work on standardized ways of describing--and thinking about--complex neuronal networks will lead the scientific community to a clearer understanding of high-level concepts in network dynamics, and will thus lead to deeper insights into the function of the brain.
Bursting synchronization in clustered neuronal networks
International Nuclear Information System (INIS)
Yu Hai-Tao; Wang Jiang; Deng Bin; Wei Xi-Le
2013-01-01
Neuronal networks in the brain exhibit the modular (clustered) property, i.e., they are composed of certain subnetworks with differential internal and external connectivity. We investigate bursting synchronization in a clustered neuronal network. A transition to mutual-phase synchronization takes place on the bursting time scale of coupled neurons, while on the spiking time scale, they behave asynchronously. This synchronization transition can be induced by the variations of inter- and intracoupling strengths, as well as the probability of random links between different subnetworks. Considering that some pathological conditions are related with the synchronization of bursting neurons in the brain, we analyze the control of bursting synchronization by using a time-periodic external signal in the clustered neuronal network. Simulation results show a frequency locking tongue in the driving parameter plane, where bursting synchronization is maintained, even in the presence of external driving. Hence, effective synchronization suppression can be realized with the driving parameters outside the frequency locking region. (interdisciplinary physics and related areas of science and technology)
Neuronal avalanches in complex networks
Directory of Open Access Journals (Sweden)
Victor Hernandez-Urbina
2016-12-01
Full Text Available Brain networks are neither regular nor random. Their structure allows for optimal information processing and transmission across the entire neural substrate of an organism. However, for topological features to be appropriately harnessed, brain networks should implement a dynamical regime which prevents phase-locked and chaotic behaviour. Critical neural dynamics refer to a dynamical regime in which the system is poised at the boundary between regularity and randomness. It has been reported that neural systems poised at this boundary achieve maximum computational power. In this paper, we review recent results regarding critical neural dynamics that emerge from systems whose underlying structure exhibits complex network properties.
A riddled basin escaping crisis and the universality in an integrate-and-fire circuit
Dai, Jun; He, Da-Ren; Xu, Xiu-Lian; Hu, Chin-Kun
2018-06-01
We investigate an integrate-and-fire model of an electronic relaxation oscillator, which can be described by the discontinuous and non-invertible composition of two mapping functions f1 and f2, with f1 being dissipative. Depending on a control parameter d, f2 can be conservative (for d = dc = 1) or dissipative (for d > dc). We find a kind of crisis, which is induced by the escape from a riddled-like attraction basin sea in the phase space. The averaged crisis transient lifetime ⟨τ⟩, the relative measure of the fat fractal forbidden network η, and the measure of the escaping hole Δ show clear scaling behaviors: ⟨τ⟩ ∝ (d − dc)^(−γ), η ∝ (d − dc)^σ, and Δ ∝ (d − dc)^α. Extending an argument by Jiang et al. (2004), we derive γ = σ + α, which agrees well with numerical simulation data.
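The scaling relation γ = σ + α can be sanity-checked on synthetic power-law data; the exponent values below are made up for illustration and are not the paper's measured values.

```python
import numpy as np

def fit_exponent(d, y, dc=1.0):
    """Least-squares slope of log y versus log (d - dc)."""
    slope, _ = np.polyfit(np.log(d - dc), np.log(y), 1)
    return slope

d = 1.0 + np.logspace(-3, -1, 20)              # control parameter above d_c = 1
sigma_true, alpha_true = 0.8, 1.5              # illustrative exponents
eta = (d - 1.0) ** sigma_true                  # fat-fractal measure
delta = (d - 1.0) ** alpha_true                # escaping-hole measure
tau = (d - 1.0) ** -(sigma_true + alpha_true)  # transient lifetime

sigma_hat = fit_exponent(d, eta)
alpha_hat = fit_exponent(d, delta)
gamma_hat = -fit_exponent(d, tau)              # should equal sigma_hat + alpha_hat
```

On real simulation data the same log-log fits would carry statistical error, and the relation γ = σ + α holds only within those error bars.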
Modulation of neuronal network activity with ghrelin
Stoyanova, Irina; Rutten, Wim; le Feber, Jakob
2012-01-01
Ghrelin is a neuropeptide regulating multiple physiological processes, including high brain functions such as learning and memory formation. However, the effect of ghrelin on network activity patterns and developments has not been studied yet. Therefore, we used dissociated cortical neurons plated
Coherence resonance in globally coupled neuronal networks with different neuron numbers
International Nuclear Information System (INIS)
Ning Wei-Lian; Zhang Zheng-Zhen; Zeng Shang-You; Luo Xiao-Shu; Hu Jin-Lin; Zeng Shao-Wen; Qiu Yi; Wu Hui-Si
2012-01-01
Because the brain consists of a tremendous number of neuronal networks, with neuron numbers ranging from tens to tens of thousands, we study the coherence resonance due to ion channel noise in globally coupled neuronal networks with different neuron numbers. We confirm that for all neuronal networks with different neuron numbers there exist array-enhanced coherence resonance and an optimal synaptic conductance that causes maximal spiking coherence. Furthermore, the enhancement effects of coupling on spiking coherence and on optimal synaptic conductance are almost the same, regardless of the neuron numbers in the neuronal networks. Therefore, for all the neuronal networks with different neuron numbers in the brain, a relatively weak synaptic conductance (0.1 mS/cm²) is sufficient to induce the maximal spiking coherence and the best sub-threshold signal encoding. (interdisciplinary physics and related areas of science and technology)
Pressley, Joanna; Troyer, Todd W
2011-05-01
The leaky integrate-and-fire (LIF) is the simplest neuron model that captures the essential properties of neuronal signaling. Yet common intuitions are inadequate to explain basic properties of LIF responses to sinusoidal modulations of the input. Here we examine responses to low and moderate frequency modulations of both the mean and variance of the input current and quantify how these responses depend on baseline parameters. Across parameters, responses to modulations in the mean current are low pass, approaching zero in the limit of high frequencies. For very low baseline firing rates, the response cutoff frequency matches that expected from membrane integration. However, the cutoff shows a rapid, supralinear increase with firing rate, with a steeper increase in the case of lower noise. For modulations of the input variance, the gain at high frequency remains finite. Here, we show that the low-frequency responses depend strongly on baseline parameters and derive an analytic condition specifying the parameters at which responses switch from being dominated by low versus high frequencies. Additionally, we show that the resonant responses for variance modulations have properties not expected for common oscillatory resonances: they peak at frequencies higher than the baseline firing rate and persist when oscillatory spiking is disrupted by high noise. Finally, the responses to mean and variance modulations are shown to have a complementary dependence on baseline parameters at higher frequencies, resulting in responses to modulations of Poisson input rates that are independent of baseline input statistics.
Characterization of modulated integrate-and-fire systems
International Nuclear Information System (INIS)
Alstroem, P.; Christiansen, B.; Levinsen, M.T.
1988-01-01
The phase locking structure in threshold-modulated integrate-and-fire systems is explored. The existence of a smooth critical line where the Poincare map has an infinite-slope inflection point is emphasized. At and below this line the system is related to circle map systems. In particular, this allows the realization of systems with higher-order scaling structures, qualitatively distinct from ordinary third-order circle map structures. Hourglass patterns develop in parameter space, and at small modulation amplitudes the behavior of the phase-locking regions (Arnold tongues) changes dramatically. Above the critical line the Arnold tongues fill the parameter space, leaving along any line a zero-dimensional Cantor set of points associated with irrational rotation numbers. The critical line is not associated with a transition to chaos. In particular, non-chaotic regions with complete phase-locking exist. In the supercritical region a gap is present in the Poincare map. The features at this gap are examined. Local hysteresis may also occur. We discuss the applicability of the local approximation. (orig.)
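Below the critical line the system behaves like a circle map, whose phase locking is conveniently summarized by the rotation number. A small sketch using the standard sine circle map (a generic example, not the oscillator model of the paper):

```python
import numpy as np

def rotation_number(K, Omega, n=5000, burn=500, theta0=0.1):
    """Rotation number of the sine circle map lift
        theta_{n+1} = theta_n + Omega - (K / 2 pi) sin(2 pi theta_n).
    Inside an Arnold tongue the value locks to a rational number."""
    theta = theta0
    for _ in range(burn):
        theta += Omega - K / (2 * np.pi) * np.sin(2 * np.pi * theta)
    start = theta
    for _ in range(n):
        theta += Omega - K / (2 * np.pi) * np.sin(2 * np.pi * theta)
    return (theta - start) / n

w_locked = rotation_number(K=0.9, Omega=1.0)             # inside the 1:1 tongue
w_free = rotation_number(K=0.0, Omega=1.0 / np.sqrt(2))  # uncoupled: irrational
```

Sweeping `Omega` at fixed `K` and plotting the rotation number produces the devil's staircase whose steps are the Arnold tongues mentioned above.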
How structure determines correlations in neuronal networks.
Directory of Open Access Journals (Sweden)
Volker Pernice
2011-05-01
Full Text Available Networks are becoming a ubiquitous metaphor for the understanding of complex biological systems, spanning the range between molecular signalling pathways, neural networks in the brain, and interacting species in a food web. In many models, we face an intricate interplay between the topology of the network and the dynamics of the system, which is generally very hard to disentangle. A dynamical feature that has been the subject of intense research in various fields is the correlation between the noisy activity of nodes in a network. We consider a class of systems, where discrete signals are sent along the links of the network. Such systems are of particular relevance in neuroscience, because they provide models for networks of neurons that use action potentials for communication. We study correlations in dynamic networks with arbitrary topology, assuming linear pulse coupling. With our novel approach, we are able to understand in detail how specific structural motifs affect pairwise correlations. Based on a power series decomposition of the covariance matrix, we describe the conditions under which very indirect interactions will have a pronounced effect on correlations and population dynamics. In random networks, we find that indirect interactions may lead to a broad distribution of activation levels with low average but highly variable correlations. This phenomenon is even more pronounced in networks with distance dependent connectivity. In contrast, networks with highly connected hubs or patchy connections often exhibit strong average correlations. Our results are particularly relevant in view of new experimental techniques that enable the parallel recording of spiking activity from a large number of neurons, an appropriate interpretation of which is hampered by the currently limited understanding of structure-dynamics relations in complex networks.
Decoding spikes in a spiking neuronal network
Energy Technology Data Exchange (ETDEWEB)
Feng Jianfeng [Department of Informatics, University of Sussex, Brighton BN1 9QH (United Kingdom); Ding, Mingzhou [Department of Mathematics, Florida Atlantic University, Boca Raton, FL 33431 (United States)
2004-06-04
We investigate how to reliably decode the input information from the output of a spiking neuronal network. A maximum likelihood estimator of the input signal, together with its Fisher information, is rigorously calculated. The advantage of the maximum likelihood estimation over the 'brute-force rate coding' estimate is clearly demonstrated. It is pointed out that the ergodic assumption in neuroscience, i.e. a temporal average is equivalent to an ensemble average, is in general not true. Averaging over an ensemble of neurons usually gives a biased estimate of the input information. A method on how to compensate for the bias is proposed. Reconstruction of dynamical input signals with a group of spiking neurons is extensively studied and our results show that less than a spike is sufficient to accurately decode dynamical inputs.
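The flavor of the maximum-likelihood argument can be shown in a toy setting: estimating a constant Poisson input rate from ensemble spike counts, where the ML estimator attains the Cramer-Rao bound λ/(NT) set by the Fisher information I = NT/λ. All parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
lam_true, N, T = 12.0, 50, 2.0     # input rate (Hz), neurons, window (s)

# ML estimate of a Poisson rate from ensemble counts: lambda_hat = total / (N T).
# Fisher information I = N T / lambda, so Var(lambda_hat) >= lambda / (N T).
trials = 2000
counts = rng.poisson(lam_true * T, size=(trials, N))
lam_hat = counts.sum(axis=1) / (N * T)

crlb = lam_true / (N * T)          # Cramer-Rao lower bound on the variance
emp_var = lam_hat.var()            # empirical variance across trials
```

For Poisson counts the ML estimator is unbiased and efficient; the abstract's point is that for correlated spiking neurons the naive ensemble average is generally biased, which this idealized Poisson case does not capture.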
Stochastic synchronization in finite size spiking networks
Doiron, Brent; Rinzel, John; Reyes, Alex
2006-09-01
We study a stochastic synchronization of spiking activity in feedforward networks of integrate-and-fire model neurons. A stochastic mean field analysis shows that synchronization occurs only when the network size is sufficiently small. This gives evidence that the dynamics, and hence processing, of finite size populations can be drastically different from that observed in the infinite size limit. Our results agree with experimentally observed synchrony in cortical networks, and further strengthen the link between synchrony and propagation in cortical systems.
Hidden neuronal correlations in cultured networks
International Nuclear Information System (INIS)
Segev, Ronen; Baruchi, Itay; Hulata, Eyal; Ben-Jacob, Eshel
2004-01-01
Utilization of a clustering algorithm on neuronal spatiotemporal correlation matrices recorded during a spontaneous activity of in vitro networks revealed the existence of hidden correlations: the sequence of synchronized bursting events (SBEs) is composed of statistically distinguishable subgroups each with its own distinct pattern of interneuron spatiotemporal correlations. These findings hint that each of the SBE subgroups can serve as a template for coding, storage, and retrieval of a specific information
Spike Code Flow in Cultured Neuronal Networks.
Tamura, Shinichi; Nishitani, Yoshi; Hosokawa, Chie; Miyoshi, Tomomitsu; Sawai, Hajime; Kamimura, Takuya; Yagi, Yasushi; Mizuno-Matsumoto, Yuko; Chen, Yen-Wei
2016-01-01
We observed spike trains produced by one-shot electrical stimulation with 8 × 8 multielectrodes in cultured neuronal networks. Each electrode accepted spikes from several neurons. We extracted the short codes from spike trains and obtained a code spectrum with a nominal time accuracy of 1%. We then constructed code flow maps as movies of the electrode array to observe the code flow of "1101" and "1011," which are typical pseudorandom sequences such as those often encountered in the literature and in our experiments. They seemed to flow from one electrode to the neighboring one and maintained their shape to some extent. To quantify the flow, we calculated the "maximum cross-correlations" among neighboring electrodes, to find the direction of maximum flow of the codes with lengths less than 8. Normalized maximum cross-correlations were almost constant irrespective of code. Furthermore, if the spike trains were shuffled in interval order or across electrodes, the correlations became significantly smaller. Thus, the analysis suggested that local codes of approximately constant shape propagated and conveyed information across the network. Hence, the codes can serve as visible and trackable marks of propagating spike waves as well as a means of evaluating information flow in the neuronal network.
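The "maximum cross-correlation" step can be sketched as follows; the synthetic delayed code train below is a stand-in for recorded electrode data. With this sign convention, a negative best lag means the second train trails the first.

```python
import numpy as np

def max_crosscorr(a, b, max_lag=8):
    """Normalized cross-correlation of two code trains, maximized over
    lags in [-max_lag, max_lag]; the best lag suggests the direction of
    code flow between neighboring electrodes."""
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    best_c, best_lag = -np.inf, 0
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            c = np.dot(a[lag:], b[:len(b) - lag]) / (len(a) - lag)
        else:
            c = np.dot(a[:lag], b[-lag:]) / (len(a) + lag)
        if c > best_c:
            best_c, best_lag = c, lag
    return best_c, best_lag

rng = np.random.default_rng(0)
src = (rng.random(400) < 0.2).astype(float)   # code train at electrode A
dst = np.roll(src, 3)                         # same code, delayed by 3 bins
c, lag = max_crosscorr(src, dst)
```

Shuffling `dst` in time, as in the abstract's control, destroys the peak and gives a near-zero maximum correlation at all lags.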
Ly, Cheng
2013-10-01
The population density approach to neural network modeling has been utilized in a variety of contexts. The idea is to group many similar noisy neurons into populations and track the probability density function for each population that encompasses the proportion of neurons with a particular state rather than simulating individual neurons (i.e., Monte Carlo). It is commonly used for both analytic insight and as a time-saving computational tool. The main shortcoming of this method is that when realistic attributes are incorporated in the underlying neuron model, the dimension of the probability density function increases, leading to intractable equations or, at best, computationally intensive simulations. Thus, developing principled dimension-reduction methods is essential for the robustness of these powerful methods. As a more pragmatic tool, it would be of great value for the larger theoretical neuroscience community. For exposition of this method, we consider a single uncoupled population of leaky integrate-and-fire neurons receiving external excitatory synaptic input only. We present a dimension-reduction method that reduces a two-dimensional partial differential-integral equation to a computationally efficient one-dimensional system and gives qualitatively accurate results in both the steady-state and nonequilibrium regimes. The method, termed modified mean-field method, is based entirely on the governing equations and not on any auxiliary variables or parameters, and it does not require fine-tuning. The principles of the modified mean-field method have potential applicability to more realistic (i.e., higher-dimensional) neural networks.
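The baseline one-dimensional case that the paper reduces to can be sketched with a schematic finite-difference solver for the LIF population density; the discretization, parameters, and boundary handling below are illustrative assumptions, not the modified mean-field method of the paper.

```python
import numpy as np

def lif_density(mu=1.2, sigma=0.3, v_th=1.0, v_re=0.0, v_min=-1.0,
                nv=200, dt=2e-4, T=10.0):
    """Explicit finite differences for the 1-D Fokker-Planck equation of
    an uncoupled LIF population,
        drho/dt = d/dv[(v - mu) rho] + (sigma^2 / 2) d2rho/dv2,
    with an absorbing boundary at v_th whose outflux (the population
    firing rate) is reinjected at the reset potential v_re."""
    v = np.linspace(v_min, v_th, nv)
    h = v[1] - v[0]
    rho = np.exp(-(v - v_re) ** 2 / 0.02)   # narrow initial density at reset
    rho /= rho.sum() * h
    i_re = int(np.argmin(np.abs(v - v_re)))
    D = sigma ** 2 / 2.0
    rate = 0.0
    for _ in range(int(T / dt)):
        dflux = np.gradient((v - mu) * rho, h)            # drift term
        lap = np.zeros_like(rho)
        lap[1:-1] = (rho[2:] - 2 * rho[1:-1] + rho[:-2]) / h ** 2
        rho = rho + dt * (dflux + D * lap)
        rho[-1] = 0.0                  # absorb at threshold
        lost = 1.0 - rho.sum() * h     # probability mass that crossed threshold
        rate = lost / dt               # instantaneous population firing rate
        rho[i_re] += lost / h          # reinject the flux at reset
    return v, rho, rate

v, rho, rate = lif_density()
```

Normalization is conserved by construction (the lost flux is reinjected each step); adding a dynamic synaptic variable to the neuron model would turn `rho` into a two-dimensional density, which is the situation the paper's dimension reduction addresses.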
Population coding in sparsely connected networks of noisy neurons
Tripp, Bryan P.; Orchard, Jeff
2012-01-01
This study examines the relationship between population coding and spatial connection statistics in networks of noisy neurons. Encoding of sensory information in the neocortex is thought to require coordinated neural populations, because individual cortical neurons respond to a wide range of stimuli, and exhibit highly variable spiking in response to repeated stimuli. Population coding is rooted in network structure, because cortical neurons receive information only from other neurons, and be...
Autapse-induced synchronization in a coupled neuronal network
International Nuclear Information System (INIS)
Ma, Jun; Song, Xinlin; Jin, Wuyin; Wang, Chuni
2015-01-01
Highlights: • The functional effect of an autapse on neuronal activity is detected. • Autapse driving plays an active role in regulating electrical activities as a pacemaker. • It confirms biological experimental results on rhythm synchronization between heterogeneous cells. - Abstract: The effect of an autapse on a coupled neuronal network is investigated. In our studies, three identical neurons are connected in a ring, with an autapse connected to one neuron of the network. The autapse imposes time-delayed feedback in a closed loop on that neuron, and thus the dynamics of the membrane potentials can be changed. Firstly, the effect of autapse driving on a single neuron is confirmed: negative feedback can calm down the neuronal activity, while positive feedback can excite it. Secondly, the collective electrical behaviors of the neurons are regulated by a pacemaker associated with the autapse forcing. By using an appropriate gain and time delay in the autapse, the neurons can reach synchronization and the membrane potentials of all neurons can oscillate with the same rhythm under mutual coupling. This indicates that autapse forcing plays an important role in changing the collective electrical activities of the neuronal network, and that appropriate electrical modes can be selected by switching the feedback type (positive or negative) in the autapse. The autapse-induced synchronization in the network is also consistent with biological experiments on synchronization between nonidentical neurons.
Neurons from the adult human dentate nucleus: neural networks in the neuron classification.
Grbatinić, Ivan; Marić, Dušica L; Milošević, Nebojša T
2015-04-07
We performed topological (central vs. border neuron type) and morphological classification of adult human dentate nucleus neurons according to their quantified histomorphological properties, using neural networks on real and virtual neuron samples. In the real sample, 53.1% and 14.1% of central and border neurons, respectively, are classified correctly, with a total of 32.8% of neurons misclassified. Most importantly, 62.2% of border neurons are misclassified, which exceeds the fraction of correctly classified neurons in that group (37.8%) and shows a clear failure of the network to classify neurons correctly based on the computational parameters used in our study. On the virtual sample, 97.3% of border neurons are misclassified, far more than the 2.7% classified correctly in that group, again confirming the failure of the network. Statistical analysis shows no statistically significant difference between central and border neurons for any measured parameter (p>0.05). A total of 96.74% of neurons are morphologically classified correctly by neural networks, each belonging to one of four histomorphological types: (a) neurons with small soma and short dendrites, (b) neurons with small soma and long dendrites, (c) neurons with large soma and short dendrites, (d) neurons with large soma and long dendrites. Statistical analysis supports these results (p<0.05): neurons can be classified into four types according to their quantitative histomorphological properties. These neuron types consist of two neuron sets, small and large with respect to their perikarya, with subtypes differing in dendrite length, i.e. neurons with short vs. long dendrites. Besides confirming the classification of neurons into small and large ones, already shown in the literature, we found two new subtypes, i.e. neurons with small soma and long dendrites and with large soma and short dendrites. These neurons are
Integrated workflows for spiking neuronal network simulations
Directory of Open Access Journals (Sweden)
Ján eAntolík
2013-12-01
Full Text Available The increasing availability of computational resources is enabling more detailed, realistic modelling in computational neuroscience, resulting in a shift towards more heterogeneous models of neuronal circuits, and employment of complex experimental protocols. This poses a challenge for existing tool chains, as the set of tools involved in a typical modeller's workflow is expanding concomitantly, with growing complexity in the metadata flowing between them. For many parts of the workflow, a range of tools is available; however, numerous areas lack dedicated tools, while integration of existing tools is limited. This forces modellers to either handle the workflow manually, leading to errors, or to write substantial amounts of code to automate parts of the workflow, in both cases reducing their productivity. To address these issues, we have developed Mozaik: a workflow system for spiking neuronal network simulations written in Python. Mozaik integrates model, experiment and stimulation specification, simulation execution, data storage, data analysis and visualisation into a single automated workflow, ensuring that all relevant metadata are available to all workflow components. It is based on several existing tools, including PyNN, Neo and Matplotlib. It offers a declarative way to specify models and recording configurations using hierarchically organised configuration files. Mozaik automatically records all data together with all relevant metadata about the experimental context, allowing automation of the analysis and visualisation stages. Mozaik has a modular architecture, and the existing modules are designed to be extensible with minimal programming effort. Mozaik increases the productivity of running virtual experiments on highly structured neuronal networks by automating the entire experimental cycle, while increasing the reliability of modelling studies by relieving the user from manual handling of the flow of metadata between the individual
Influence of Selective Edge Removal and Refractory Period in a Self-Organized Critical Neuron Model
International Nuclear Information System (INIS)
Lin Min; Gang, Zhao; Chen Tianlun
2009-01-01
A simple model for a set of integrate-and-fire neurons based on a weighted network is introduced. Motivated by neurobiological phenomena in brain development and by differences in synaptic strength, we construct weighted networks that develop through link addition followed by selective edge removal. The network exhibits the small-world and scale-free properties with high network efficiency. The model displays avalanche activity with a power-law size distribution. We investigate the effect of selective edge removal and of the neuron refractory period on the self-organized criticality of the system. (condensed matter: structural, mechanical, and thermal properties)
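The avalanche mechanism can be illustrated with a minimal sandpile-style integrate-and-fire lattice; this sketch uses a plain 2-D grid with open boundaries rather than the weighted small-world network of the paper, and all parameters are illustrative.

```python
import numpy as np

def if_avalanches(L=20, n_drive=2000, z_c=4, seed=0):
    """Slowly driven lattice of threshold units: each site accumulates
    input; on reaching the threshold z_c it fires, sending one unit to
    each of its 4 neighbors (units leaving the open boundary are lost).
    The avalanche size is the number of firings per driving step."""
    rng = np.random.default_rng(seed)
    z = rng.integers(0, z_c, size=(L, L))
    sizes = []
    for _ in range(n_drive):
        i, j = rng.integers(0, L, size=2)
        z[i, j] += 1                        # slow external drive
        size = 0
        while (z >= z_c).any():             # fast avalanche dynamics
            ii, jj = np.nonzero(z >= z_c)
            size += len(ii)
            for a, b in zip(ii, jj):
                z[a, b] -= z_c
                for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    na, nb = a + da, b + db
                    if 0 <= na < L and 0 <= nb < L:
                        z[na, nb] += 1
        sizes.append(size)
    return np.array(sizes)

sizes = if_avalanches()
```

After the transient, a histogram of `sizes` on log-log axes approximates the power-law avalanche distribution referred to in the abstract; studying how selective edge removal or a refractory period distorts it would require replacing the grid with the paper's weighted network.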
Implementing Signature Neural Networks with Spiking Neurons.
Carrillo-Medina, José Luis; Latorre, Roberto
2016-01-01
Spiking Neural Networks constitute the most promising approach to develop realistic Artificial Neural Networks (ANNs). Unlike traditional firing rate-based paradigms, information coding in spiking models is based on the precise timing of individual spikes. It has been demonstrated that spiking ANNs can be successfully and efficiently applied to multiple realistic problems solvable with traditional strategies (e.g., data classification or pattern recognition). In recent years, major breakthroughs in neuroscience research have discovered new relevant computational principles in different living neural systems. Could ANNs benefit from some of these recent findings providing novel elements of inspiration? This is an intriguing question for the research community and the development of spiking ANNs including novel bio-inspired information coding and processing strategies is gaining attention. From this perspective, in this work, we adapt the core concepts of the recently proposed Signature Neural Network paradigm (i.e., neural signatures to identify each unit in the network, local information contextualization during processing, and multicoding strategies for information propagation regarding the origin and content of the data) to be employed in a spiking neural network. To the best of our knowledge, none of these mechanisms have been used yet in the context of ANNs of spiking neurons. This paper provides a proof-of-concept for their applicability in such networks. Computer simulations show that a simple network model like the one discussed here exhibits complex self-organizing properties. The combination of multiple simultaneous encoding schemes allows the network to generate coexisting spatio-temporal patterns of activity encoding information in different spatio-temporal spaces. As a function of the network and/or intra-unit parameters shaping the corresponding encoding modality, different forms of competition among the evoked patterns can emerge even in the absence
Tibau, Elisenda; Valencia, Miguel; Soriano, Jordi
2013-01-01
Neuronal networks in vitro are prominent systems to study the development of connections in living neuronal networks and the interplay between connectivity, activity and function. These cultured networks show a rich spontaneous activity that evolves concurrently with the connectivity of the underlying network. In this work we monitor the development of neuronal cultures, and record their activity using calcium fluorescence imaging. We use spectral analysis to characterize global dynamical and structural traits of the neuronal cultures. We first observe that the power spectrum can be used as a signature of the state of the network, for instance when inhibition is active or silent, as well as a measure of the network's connectivity strength. Second, the power spectrum identifies prominent developmental changes in the network, such as the GABAA switch. And third, the analysis of the spatial distribution of the spectral density, in experiments with a controlled disintegration of the network through CNQX, an AMPA-glutamate receptor antagonist in excitatory neurons, reveals the existence of communities of strongly connected, highly active neurons that display synchronous oscillations. Our work illustrates the value of spectral analysis for the study of in vitro networks, and its potential use as a network-state indicator, for instance to compare healthy and diseased neuronal networks.
Collective stochastic coherence in recurrent neuronal networks
Sancristóbal, Belén; Rebollo, Beatriz; Boada, Pol; Sanchez-Vives, Maria V.; Garcia-Ojalvo, Jordi
2016-09-01
Recurrent networks of dynamic elements frequently exhibit emergent collective oscillations, which can show substantial regularity even when the individual elements are considerably noisy. How noise-induced dynamics at the local level coexists with regular oscillations at the global level is still unclear. Here we show that a combination of stochastic recurrence-based initiation with deterministic refractoriness in an excitable network can reconcile these two features, leading to maximum collective coherence for an intermediate noise level. We report this behaviour in the slow oscillation regime exhibited by a cerebral cortex network under dynamical conditions resembling slow-wave sleep and anaesthesia. Computational analysis of a biologically realistic network model reveals that an intermediate level of background noise leads to quasi-regular dynamics. We verify this prediction experimentally in cortical slices subject to varying amounts of extracellular potassium, which modulates neuronal excitability and thus synaptic noise. The model also predicts that this effectively regular state should exhibit noise-induced memory of the spatial propagation profile of the collective oscillations, which is also verified experimentally. Taken together, these results allow us to construe the high regularity observed experimentally in the brain as an instance of collective stochastic coherence.
Communication through resonance in spiking neuronal networks.
Hahn, Gerald; Bujan, Alejandro F; Frégnac, Yves; Aertsen, Ad; Kumar, Arvind
2014-08-01
The cortex processes stimuli through a distributed network of specialized brain areas. This processing requires mechanisms that can route neuronal activity across weakly connected cortical regions. Routing models proposed thus far are either limited to propagation of spiking activity across strongly connected networks or require distinct mechanisms that create local oscillations and establish their coherence between distant cortical areas. Here, we propose a novel mechanism which explains how synchronous spiking activity propagates across weakly connected brain areas supported by oscillations. In our model, oscillatory activity unleashes network resonance that amplifies feeble synchronous signals and promotes their propagation along weak connections ("communication through resonance"). The emergence of coherent oscillations is a natural consequence of synchronous activity propagation and therefore the assumption of different mechanisms that create oscillations and provide coherence is not necessary. Moreover, the phase-locking of oscillations is a side effect of communication rather than its requirement. Finally, we show how the state of ongoing activity could affect the communication through resonance and propose that modulations of the ongoing activity state could influence information processing in distributed cortical networks.
Developmental time windows for axon growth influence neuronal network topology.
Lim, Sol; Kaiser, Marcus
2015-04-01
Early brain connectivity development consists of multiple stages: birth of neurons, their migration and the subsequent growth of axons and dendrites. Each stage occurs within a certain period of time depending on types of neurons and cortical layers. Forming synapses between neurons either by growing axons starting at similar times for all neurons (much-overlapped time windows) or at different time points (less-overlapped) may affect the topological and spatial properties of neuronal networks. Here, we explore the extreme cases of axon formation during early development, either starting at the same time for all neurons (parallel, i.e., maximally overlapped time windows) or occurring for each neuron separately one neuron after another (serial, i.e., no overlaps in time windows). For both cases, the number of potential and established synapses remained comparable. Topological and spatial properties, however, differed: Neurons that started axon growth early on in serial growth achieved higher out-degrees, higher local efficiency and longer axon lengths while neurons demonstrated more homogeneous connectivity patterns for parallel growth. Second, connection probability decreased more rapidly with distance between neurons for parallel growth than for serial growth. Third, bidirectional connections were more numerous for parallel growth. Finally, we tested our predictions with C. elegans data. Together, this indicates that time windows for axon growth influence the topological and spatial properties of neuronal networks opening up the possibility to a posteriori estimate developmental mechanisms based on network properties of a developed network.
Kori, Hiroshi; Kiss, István Z.; Jain, Swati; Hudson, John L.
2018-04-01
Experiments and supporting theoretical analysis are presented to describe the synchronization patterns that can be observed with a population of globally coupled electrochemical oscillators close to a homoclinic, saddle-loop bifurcation, where the coupling is repulsive in the electrode potential. While attractive coupling generates phase clusters and desynchronized states, repulsive coupling results in synchronized oscillations. The experiments are interpreted with a phenomenological model that captures the waveform of the oscillations (exponential increase) followed by a refractory period. The globally coupled autocatalytic integrate-and-fire model predicts the development of partially synchronized states that occur through attracting heteroclinic cycles between out-of-phase two-cluster states. Similar behavior can be expected in many other systems where the oscillations occur close to a saddle-loop bifurcation, e.g., with Morris-Lecar neurons.
Yoshi Nishitani; Chie Hosokawa; Yuko Mizuno-Matsumoto; Tomomitsu Miyoshi; Shinichi Tamura
2017-01-01
Neuronal networks have fluctuating characteristics, unlike the stable characteristics seen in computers. The underlying mechanisms that drive reliable communication among neuronal networks and their ability to perform intelligible tasks remain unknown. Recently, in an attempt to resolve this issue, we showed that stimulated neurons communicate via spikes that propagate temporally, in the form of spike trains. We named this phenomenon “spike wave propagation”. In these previous studies, using ...
Solving Constraint Satisfaction Problems with Networks of Spiking Neurons.
Jonke, Zeno; Habenschuss, Stefan; Maass, Wolfgang
2016-01-01
Networks of neurons in the brain apply, unlike processors in our current generation of computer hardware, an event-based processing strategy, where short pulses (spikes) are emitted sparsely by neurons to signal the occurrence of an event at a particular point in time. Such spike-based computations promise to be substantially more power-efficient than traditional clocked processing schemes. However, it turns out to be surprisingly difficult to design networks of spiking neurons that can solve difficult computational problems on the level of single spikes, rather than rates of spikes. We present here a new method for designing networks of spiking neurons via an energy function. Furthermore, we show how the energy function of a network of stochastically firing neurons can be shaped in a transparent manner by composing the networks of simple stereotypical network motifs. We show that this design approach enables networks of spiking neurons to produce approximate solutions to difficult (NP-hard) constraint satisfaction problems from the domains of planning/optimization and verification/logical inference. The resulting networks employ noise as a computational resource. Nevertheless, the timing of spikes plays an essential role in their computations. Furthermore, networks of spiking neurons carry out, for the Traveling Salesman Problem, a more efficient stochastic search for good solutions compared with stochastic artificial neural networks (Boltzmann machines) and Gibbs sampling.
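The energy-function framing above maps a constraint satisfaction problem onto a cost landscape that a noisy network searches. The sketch below implements the conventional Gibbs-sampling baseline the paper compares against (not the authors' spiking design), on a small graph 3-coloring problem; the energy is the number of monochromatic edges, and all names and parameters are ours.

```python
import math
import random

def gibbs_coloring(edges, n_nodes, n_colors=3, beta=2.0, sweeps=200, seed=0):
    """Gibbs sampling on E(coloring) = number of monochromatic edges.
    Each sweep resamples every node's colour from the Boltzmann
    distribution exp(-beta * local_conflicts), using noise as the
    search resource. Illustrative baseline, not a spiking network."""
    random.seed(seed)
    color = [random.randrange(n_colors) for _ in range(n_nodes)]
    nbrs = [[] for _ in range(n_nodes)]
    for a, b in edges:
        nbrs[a].append(b)
        nbrs[b].append(a)
    for _ in range(sweeps):
        for i in range(n_nodes):
            # Boltzmann weight of each candidate colour
            w = []
            for c in range(n_colors):
                conflicts = sum(1 for j in nbrs[i] if color[j] == c)
                w.append(math.exp(-beta * conflicts))
            r = random.random() * sum(w)
            acc = 0.0
            for c, wc in enumerate(w):
                acc += wc
                if r <= acc:
                    color[i] = c
                    break
    energy = sum(1 for a, b in edges if color[a] == color[b])
    return color, energy
```

On a 6-cycle (which is 3-colorable), low-energy colorings dominate the stationary distribution, so the sampler typically ends near or at a proper coloring; raising `beta` sharpens the search at the cost of slower mixing.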
Assessing neuronal networks: understanding Alzheimer's disease.
LENUS (Irish Health Repository)
Bokde, Arun L W
2012-02-01
Findings derived from neuroimaging of the structural and functional organization of the human brain have led to the widely supported hypothesis that neuronal networks of temporally coordinated brain activity across different regional brain structures underpin cognitive function. Failure of integration within a network leads to cognitive dysfunction. The current discussion on Alzheimer's disease (AD) argues that it presents in part a disconnection syndrome. Studies using functional magnetic resonance imaging, positron emission tomography and electroencephalography demonstrate that synchronicity of brain activity is altered in AD and correlates with cognitive deficits. Moreover, recent advances in diffusion tensor imaging have made it possible to track axonal projections across the brain, revealing substantial regional impairment in fiber-tract integrity in AD. Accumulating evidence points towards a network breakdown reflecting disconnection at both the structural and functional system level. The exact relationship among these multiple mechanistic variables and their contribution to cognitive alterations and ultimately decline is yet unknown. Focused research efforts aimed at the integration of both function and structure hold great promise not only in improving our understanding of cognition but also of its characteristic progressive metamorphosis in complex chronic neurodegenerative disorders such as AD.
Shiau, LieJune; Schwalger, Tilo; Lindner, Benjamin
2015-06-01
We study the spike statistics of an adaptive exponential integrate-and-fire neuron stimulated by white Gaussian current noise. We derive analytical approximations for the coefficient of variation and the serial correlation coefficient of the interspike interval assuming that the neuron operates in the mean-driven tonic firing regime and that the stochastic input is weak. Our result for the serial correlation coefficient has the form of a geometric sequence and is confirmed by the comparison to numerical simulations. The theory predicts various patterns of interval correlations (positive or negative at lag one, monotonically decreasing or oscillating) depending on the strength of the spike-triggered and subthreshold components of the adaptation current. In particular, for pure subthreshold adaptation we find strong positive ISI correlations that are usually ascribed to positive correlations in the input current. Our results i) provide an alternative explanation for interspike-interval correlations observed in vivo, ii) may be useful in fitting point neuron models to experimental data, and iii) may be instrumental in exploring the role of adaptation currents for signal detection and signal transmission in single neurons.
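The interval statistics described above can be probed with a simple simulation. The sketch below uses a plain leaky integrate-and-fire neuron with a spike-triggered adaptation current, a simplified stand-in for the adaptive exponential model of the paper; the Euler-Maruyama scheme, function names, and parameter values are all ours and purely illustrative.

```python
import math
import random

def adaptive_lif_isi(mu=2.0, tau_a=20.0, delta_a=0.005, noise=0.1,
                     dt=0.01, n_spikes=300, seed=42):
    """Simulate dv = (mu - v - a) dt + noise dW with threshold 1, reset 0;
    the adaptation variable a decays with time constant tau_a and jumps
    by delta_a at each spike. Returns the list of interspike intervals."""
    random.seed(seed)
    v, a, t, t_last = 0.0, 0.0, 0.0, 0.0
    isis = []
    while len(isis) < n_spikes:
        xi = random.gauss(0.0, 1.0)
        v += dt * (mu - v - a) + noise * math.sqrt(dt) * xi
        a += dt * (-a / tau_a)
        t += dt
        if v >= 1.0:                 # threshold crossing: spike and reset
            v = 0.0
            a += delta_a             # spike-triggered adaptation
            isis.append(t - t_last)
            t_last = t
    return isis

def serial_corr(isis, lag=1):
    """Serial correlation coefficient of the ISI sequence at a given lag."""
    n = len(isis) - lag
    m = sum(isis) / len(isis)
    num = sum((isis[i] - m) * (isis[i + lag] - m) for i in range(n))
    den = sum((x - m) ** 2 for x in isis)
    return num / den
```

With the statistics in hand, the coefficient of variation and `serial_corr` at increasing lags are the quantities whose geometric-sequence structure the paper derives analytically.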
Solving constraint satisfaction problems with networks of spiking neurons
Directory of Open Access Journals (Sweden)
Zeno eJonke
2016-03-01
Full Text Available Networks of neurons in the brain apply, unlike processors in our current generation of computer hardware, an event-based processing strategy, where short pulses (spikes) are emitted sparsely by neurons to signal the occurrence of an event at a particular point in time. Such spike-based computations promise to be substantially more power-efficient than traditional clocked processing schemes. However, it turned out to be surprisingly difficult to design networks of spiking neurons that can solve difficult computational problems on the level of single spikes (rather than rates of spikes). We present here a new method for designing networks of spiking neurons via an energy function. Furthermore, we show how the energy function of a network of stochastically firing neurons can be shaped in a quite transparent manner by composing the networks of simple stereotypical network motifs. We show that this design approach enables networks of spiking neurons to produce approximate solutions to difficult (NP-hard) constraint satisfaction problems from the domains of planning/optimization and verification/logical inference. The resulting networks employ noise as a computational resource. Nevertheless, the timing of spikes (rather than just spike rates) plays an essential role in their computations. Furthermore, networks of spiking neurons carry out for the Traveling Salesman Problem a more efficient stochastic search for good solutions compared with stochastic artificial neural networks (Boltzmann machines) and Gibbs sampling.
The Hypocretin/Orexin Neuronal Networks in Zebrafish.
Elbaz, Idan; Levitas-Djerbi, Talia; Appelbaum, Lior
2017-01-01
The hypothalamic Hypocretin/Orexin (Hcrt) neurons secrete two Hcrt neuropeptides. These neurons and peptides play a major role in the regulation of feeding, the sleep-wake cycle, reward-seeking, addiction, and stress. Loss of Hcrt neurons causes the sleep disorder narcolepsy. The zebrafish has become an attractive model to study the Hcrt neuronal network because it is a transparent vertebrate that enables simple genetic manipulation, imaging of the structure and function of neuronal circuits in live animals, and high-throughput monitoring of behavioral performance during both day and night. The zebrafish Hcrt network comprises ~16-60 neurons, which, similar to mammals, are located in the hypothalamus and widely innervate the brain and spinal cord, and regulate various fundamental behaviors such as feeding, sleep, and wakefulness. Here we review how the zebrafish contributes to the study of the Hcrt neuronal system molecularly, anatomically, physiologically, and pathologically.
Pattern formation and firing synchronization in networks of map neurons
International Nuclear Information System (INIS)
Wang Qingyun; Duan Zhisheng; Huang Lin; Chen Guanrong; Lu Qishao
2007-01-01
Patterns and collective phenomena such as firing synchronization are studied in networks of nonhomogeneous oscillatory neurons and mixtures of oscillatory and excitable neurons, with the dynamics of each neuron described by a two-dimensional (2D) Rulkov map neuron. It is shown that as the coupling strength is increased, typical patterns emerge spatially, which propagate through the networks in the form of target waves or parallel waves depending on the size of the networks. Furthermore, we investigate the transitions of firing synchronization, characterized by the rate of firing, when the coupling strength is increased. It is found that there exists an intermediate coupling strength at which firing synchronization is minimal, irrespective of the size of the networks. As the coupling strength is increased further, synchronization is enhanced. Since noise is inevitable in real neurons, we also investigate the effects of white noise on firing synchronization for different networks. For the networks of oscillatory neurons, it is shown that firing synchronization decreases when the noise level increases. For the mixed networks, firing synchronization is robust under the noise conditions considered in this paper. Results presented in this paper should prove to be valuable for understanding the properties of collective dynamics in real neuronal networks.
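The single-unit dynamics underlying the networks above is the two-dimensional Rulkov map, whose equations are standard; the parameter values and starting point in the sketch below are our own illustrative choices for a spiking regime, not those of the paper.

```python
def rulkov(alpha=4.1, sigma=-1.0, mu=0.001, steps=2000, x0=-1.0, y0=-3.0):
    """Iterate the 2D Rulkov map neuron:
        x[n+1] = alpha / (1 + x[n]**2) + y[n]   (fast membrane variable)
        y[n+1] = y[n] - mu * (x[n] - sigma)     (slow recovery variable)
    For alpha > 4 the fast subsystem loses stability and the map
    produces spiking/bursting oscillations in x."""
    xs, ys = [], []
    x, y = x0, y0
    for _ in range(steps):
        # tuple assignment evaluates the right side with the old (x, y)
        x, y = alpha / (1.0 + x * x) + y, y - mu * (x - sigma)
        xs.append(x)
        ys.append(y)
    return xs, ys
```

Coupling many such units (e.g. by adding a term proportional to neighbour activity to the x-update) is the step that produces the target and parallel waves described in the abstract.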
Performance of networks of artificial neurons: The role of clustering
International Nuclear Information System (INIS)
Kim, Beom Jun
2004-01-01
The performance of the Hopfield neural network model is numerically studied on various complex networks, such as the Watts-Strogatz network, the Barabasi-Albert network, and the neuronal network of Caenorhabditis elegans. Through the use of a systematic way of controlling the clustering coefficient, with the degree of each neuron kept unchanged, we find that the networks with lower clustering exhibit much better performance. The results are discussed from the practical viewpoint of applications, and biological implications are also suggested.
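The Hopfield model whose performance is studied above stores patterns in a Hebbian weight matrix and retrieves them by iterated sign updates. The sketch below shows the standard mechanism on a fully connected network (the topology variations of the paper are not reproduced); all names are ours.

```python
def train_hopfield(patterns):
    """Hebbian weight matrix w[i][j] = (1/P) * sum_p p_i * p_j for
    binary (+1/-1) patterns, with zero self-coupling."""
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j] / len(patterns)
    return w

def recall(w, state, steps=5):
    """Deterministic asynchronous updates: each neuron takes the sign
    of its local field until the network settles on an attractor."""
    n = len(state)
    s = list(state)
    for _ in range(steps):
        for i in range(n):
            h = sum(w[i][j] * s[j] for j in range(n))
            s[i] = 1 if h >= 0 else -1
    return s
```

Performance in the paper's sense is retrieval quality: how reliably a corrupted probe converges back to the stored pattern, which is what degrades as clustering increases.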
Management of synchronized network activity by highly active neurons
International Nuclear Information System (INIS)
Shein, Mark; Raichman, Nadav; Ben-Jacob, Eshel; Volman, Vladislav; Hanein, Yael
2008-01-01
Increasing evidence supports the idea that spontaneous brain activity may have an important functional role. Cultured neuronal networks provide a suitable model system to search for the mechanisms by which neuronal spontaneous activity is maintained and regulated. This activity is marked by synchronized bursting events (SBEs)—short time windows (hundreds of milliseconds) of rapid neuronal firing separated by long quiescent periods (seconds). However, there exists a special subset of rapidly firing neurons whose activity also persists between SBEs. It has been proposed that these highly active (HA) neurons play an important role in the management (i.e. establishment, maintenance and regulation) of the synchronized network activity. Here, we studied the dynamical properties and the functional role of HA neurons in homogeneous and engineered networks, during early network development, upon recovery from chemical inhibition and in response to electrical stimulations. We found that their sequences of inter-spike intervals (ISI) exhibit long time correlations and a unimodal distribution. During the network's development and under intense inhibition, the observed activity follows a transition period during which mostly HA neurons are active. Studying networks with engineered geometry, we found that HA neurons are precursors (the first to fire) of the spontaneous SBEs and are more responsive to electrical stimulations
Emergent synchronous bursting of oxytocin neuronal network.
Directory of Open Access Journals (Sweden)
Enrico Rossoni
2008-07-01
Full Text Available When young suckle, they are rewarded intermittently with a let-down of milk that results from reflex secretion of the hormone oxytocin; without oxytocin, newly born young will die unless they are fostered. Oxytocin is made by magnocellular hypothalamic neurons, and is secreted from their nerve endings in the pituitary in response to action potentials (spikes) that are generated in the cell bodies and which are propagated down their axons to the nerve endings. Normally, oxytocin cells discharge asynchronously at 1-3 spikes/s, but during suckling, every 5 min or so, each discharges a brief, intense burst of spikes that release a pulse of oxytocin into the circulation. This reflex was the first, and is perhaps the best, example of a physiological role for peptide-mediated communication within the brain: it is coordinated by the release of oxytocin from the dendrites of oxytocin cells; it can be facilitated by injection of tiny amounts of oxytocin into the hypothalamus, and it can be blocked by injection of tiny amounts of oxytocin antagonist. Here we show how synchronized bursting can arise in a neuronal network model that incorporates basic observations of the physiology of oxytocin cells. In our model, bursting is an emergent behaviour of a complex system, involving both positive and negative feedbacks, between many sparsely connected cells. The oxytocin cells are regulated by independent afferent inputs, but they interact by local release of oxytocin and endocannabinoids. Oxytocin released from the dendrites of these cells has a positive-feedback effect, while endocannabinoids have an inhibitory effect by suppressing the afferent input to the cells.
Dynamical Encoding by Networks of Competing Neuron Groups: Winnerless Competition
International Nuclear Information System (INIS)
Rabinovich, M.; Volkovskii, A.; Lecanda, P.; Huerta, R.; Abarbanel, H. D. I.; Laurent, G.
2001-01-01
Following studies of olfactory processing in insects and fish, we investigate neural networks whose dynamics in phase space is represented by orbits near the heteroclinic connections between saddle regions (fixed points or limit cycles). These networks encode input information as trajectories along the heteroclinic connections. If there are N neurons in the network, the capacity is approximately e(N-1)!, i.e., much larger than that of most traditional network structures. We show that a small winnerless competition network composed of FitzHugh-Nagumo spiking neurons efficiently transforms input information into a spatiotemporal output
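The capacity estimate quoted above, roughly e(N-1)!, grows factorially and so overtakes the 2^N attractor count of a binary network already at small N. This can be checked numerically; the function name below is ours.

```python
import math

def wlc_capacity(n):
    """Winnerless-competition capacity estimate from the abstract:
    approximately e * (N - 1)! for a network of N neurons."""
    return math.e * math.factorial(n - 1)

# Compare with the 2**N states available to a binary attractor network:
# for N = 2 the factorial estimate is still below 2**N, but by N = 8
# it already exceeds it by orders of magnitude.
```

This is why the abstract calls the heteroclinic coding scheme "much larger" in capacity than traditional structures: factorial growth dominates exponential growth for all sufficiently large N.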
Population coding in sparsely connected networks of noisy neurons.
Tripp, Bryan P; Orchard, Jeff
2012-01-01
This study examines the relationship between population coding and spatial connection statistics in networks of noisy neurons. Encoding of sensory information in the neocortex is thought to require coordinated neural populations, because individual cortical neurons respond to a wide range of stimuli, and exhibit highly variable spiking in response to repeated stimuli. Population coding is rooted in network structure, because cortical neurons receive information only from other neurons, and because the information they encode must be decoded by other neurons, if it is to affect behavior. However, population coding theory has often ignored network structure, or assumed discrete, fully connected populations (in contrast with the sparsely connected, continuous sheet of the cortex). In this study, we modeled a sheet of cortical neurons with sparse, primarily local connections, and found that a network with this structure could encode multiple internal state variables with high signal-to-noise ratio. However, we were unable to create high-fidelity networks by instantiating connections at random according to spatial connection probabilities. In our models, high-fidelity networks required additional structure, with higher cluster factors and correlations between the inputs to nearby neurons.
A Neuronal Network Model for Pitch Selectivity and Representation
Huang, Chengcheng; Rinzel, John
2016-01-01
Pitch is a perceptual correlate of periodicity. Sounds with distinct spectra can elicit the same pitch. Despite the importance of pitch perception, understanding the cellular mechanism of pitch perception is still a major challenge and a mechanistic model of pitch is lacking. A multi-stage neuronal network model is developed for pitch frequency estimation using biophysically-based, high-resolution coincidence detector neurons. The neuronal units respond only to highly coincident input among c...
A Neuron- and a Synapse Chip for Artificial Neural Networks
DEFF Research Database (Denmark)
Lansner, John; Lehmann, Torsten
1992-01-01
A cascadable, analog, CMOS chip set has been developed for hardware implementations of artificial neural networks (ANNs): I) a neuron chip containing an array of neurons with hyperbolic tangent activation functions and adjustable gains, and II) a synapse chip (or a matrix-vector multiplier) where
Memristor-based neural networks: Synaptic versus neuronal stochasticity
Naous, Rawan; Alshedivat, Maruan; Neftci, Emre; Cauwenberghs, Gert; Salama, Khaled N.
2016-01-01
In neuromorphic circuits, stochasticity in the cortex can be mapped into the synaptic or neuronal components. The hardware emulation of these stochastic neural networks is currently being extensively studied using resistive memories or memristors.
Nicotinic modulation of neuronal networks: from receptors to cognition
Mansvelder, H.D.; van Aerde, K.I.; Couey, J.J.; Brussaard, A.B.
2006-01-01
Rationale: Nicotine affects many aspects of human cognition, including attention and memory. Activation of nicotinic acetylcholine receptors (nAChRs) in neuronal networks modulates activity and information processing during cognitive tasks, which can be observed in electroencephalograms (EEGs) and
Visualizing neuronal network connectivity with connectivity pattern tables
Directory of Open Access Journals (Sweden)
Eilen Nordlie
2010-01-01
Full Text Available Complex ideas are best conveyed through well-designed illustrations. Up to now, computational neuroscientists have mostly relied on box-and-arrow diagrams of even complex neuronal networks, often using ad hoc notations with conflicting use of symbols from paper to paper. This significantly impedes the communication of ideas in neuronal network modeling. We present here Connectivity Pattern Tables (CPTs) as a clutter-free visualization of connectivity in large neuronal networks containing two-dimensional populations of neurons. CPTs can be generated automatically from the same script code used to create the actual network in the NEST simulator. Through aggregation, CPTs can be viewed at different levels, providing either full detail or summary information. We also provide the open source ConnPlotter tool as a means to create connectivity pattern tables.
An FPGA-based silicon neuronal network with selectable excitability silicon neurons
Directory of Open Access Journals (Sweden)
Jing eLi
2012-12-01
Full Text Available This paper presents a digital silicon neuronal network which simulates the nervous systems of living organisms and has the ability to execute intelligent tasks, such as associative memory. Two essential elements, the mathematical-structure-based digital spiking silicon neuron (DSSN) and the transmitter-release-based silicon synapse, allow the network to show rich dynamic behaviors and are computationally efficient for hardware implementation. We adopt a mixed pipelined and parallel structure and shift operations to design a sufficiently large and complex network without excessive hardware resource cost. The network, with 256 fully connected neurons, is built on a Digilent Atlys board equipped with a Xilinx Spartan-6 LX45 FPGA. In addition, a memory control block and a USB control block are designed to accomplish the task of data communication between the network and the host PC. This paper also describes the mechanism of associative memory performed in the silicon neuronal network. The network is capable of retrieving stored patterns if the inputs contain enough information about them. The retrieval probability increases with the similarity between the input and the stored pattern.
Amin, Hayder; Maccione, Alessandro; Nieus, Thierry
2017-01-01
Developing neuronal systems intrinsically generate coordinated spontaneous activity that propagates by involving a large number of synchronously firing neurons. In vivo, waves of spikes transiently characterize the activity of developing brain circuits and are fundamental for activity-dependent circuit formation. In vitro, coordinated spontaneous spiking activity, or network bursts (NBs), interleaved within periods of asynchronous spikes emerge during the development of 2D and 3D neuronal cultures. Several studies have investigated this type of activity and its dynamics, but how a neuronal system generates these coordinated events remains unclear. Here, we investigate at a cellular level the generation of network bursts in spontaneously active neuronal cultures by exploiting high-resolution multielectrode array recordings and computational network modelling. Our analysis reveals that NBs are generated in specialized regions of the network (functional neuronal communities) that feature neuronal links with high cross-correlation peak values, sub-millisecond lags and that share very similar structural connectivity motifs providing recurrent interactions. We show that the particular properties of these local structures enable locally amplifying spontaneous asynchronous spikes and that this mechanism can lead to the initiation of NBs. Through the analysis of simulated and experimental data, we also show that AMPA currents drive the coordinated activity, while NMDA and GABA currents are only involved in shaping the dynamics of NBs. Overall, our results suggest that the presence of functional neuronal communities with recurrent local connections allows a neuronal system to generate spontaneous coordinated spiking activity events. As suggested by the rules used for implementing our computational model, such functional communities might naturally emerge during network development by following simple constraints on distance-based connectivity. PMID:28749937
Directory of Open Access Journals (Sweden)
Davide Lonardoni
2017-07-01
Full Text Available Developing neuronal systems intrinsically generate coordinated spontaneous activity that propagates by involving a large number of synchronously firing neurons. In vivo, waves of spikes transiently characterize the activity of developing brain circuits and are fundamental for activity-dependent circuit formation. In vitro, coordinated spontaneous spiking activity, or network bursts (NBs), interleaved within periods of asynchronous spikes emerge during the development of 2D and 3D neuronal cultures. Several studies have investigated this type of activity and its dynamics, but how a neuronal system generates these coordinated events remains unclear. Here, we investigate at a cellular level the generation of network bursts in spontaneously active neuronal cultures by exploiting high-resolution multielectrode array recordings and computational network modelling. Our analysis reveals that NBs are generated in specialized regions of the network (functional neuronal communities) that feature neuronal links with high cross-correlation peak values, sub-millisecond lags and that share very similar structural connectivity motifs providing recurrent interactions. We show that the particular properties of these local structures enable locally amplifying spontaneous asynchronous spikes and that this mechanism can lead to the initiation of NBs. Through the analysis of simulated and experimental data, we also show that AMPA currents drive the coordinated activity, while NMDA and GABA currents are only involved in shaping the dynamics of NBs. Overall, our results suggest that the presence of functional neuronal communities with recurrent local connections allows a neuronal system to generate spontaneous coordinated spiking activity events. As suggested by the rules used for implementing our computational model, such functional communities might naturally emerge during network development by following simple constraints on distance-based connectivity.
Population Coding in Sparsely Connected Networks of Noisy Neurons
Directory of Open Access Journals (Sweden)
Bryan Patrick Tripp
2012-05-01
Full Text Available This study examines the relationship between population coding and spatial connection statistics in networks of noisy neurons. Encoding of sensory information in the neocortex is thought to require coordinated neural populations, because individual cortical neurons respond to a wide range of stimuli, and exhibit highly variable spiking in response to repeated stimuli. Population coding is rooted in network structure, because cortical neurons receive information only from other neurons, and because the information they encode must be decoded by other neurons if it is to affect behaviour. However, population coding theory has often ignored network structure, or assumed discrete, fully-connected populations (in contrast with the sparsely connected, continuous sheet of the cortex). In this study, we model a sheet of cortical neurons with sparse, primarily local connections, and find that a network with this structure can encode multiple internal state variables with high signal-to-noise ratio. However, in our model, although connection probability varies with the distance between neurons, we find that the connections cannot be instantiated at random according to these probabilities, but must have additional structure if information is to be encoded with high fidelity.
Soft chitosan microbeads scaffold for 3D functional neuronal networks.
Tedesco, Maria Teresa; Di Lisa, Donatella; Massobrio, Paolo; Colistra, Nicolò; Pesce, Mattia; Catelani, Tiziano; Dellacasa, Elena; Raiteri, Roberto; Martinoia, Sergio; Pastorino, Laura
2018-02-01
The availability of 3D biomimetic in vitro neuronal networks of mammalian neurons represents a pivotal step for the development of brain-on-a-chip experimental models to study neuronal (dys)functions and particularly neuronal connectivity. The use of hydrogel-based scaffolds for 3D cell cultures has been extensively studied in the last years. However, limited work on biomimetic 3D neuronal cultures has been carried out to date. In this respect, here we investigated the use of a widely popular polysaccharide, chitosan (CHI), for the fabrication of a microbead based 3D scaffold to be coupled to primary neuronal cells. CHI microbeads were characterized by optical and atomic force microscopies. The cell/scaffold interaction was deeply characterized by transmission electron microscopy and by immunocytochemistry using confocal microscopy. Finally, a preliminary electrophysiological characterization by micro-electrode arrays was carried out. Copyright © 2017 Elsevier Ltd. All rights reserved.
Pesavento, Michael J; Pinto, David J
2012-11-01
Rapidly changing environments require rapid processing of sensory inputs. Varying deflection velocities of a rodent's primary facial vibrissa cause varying temporal neuronal activity profiles within the ventral posteromedial thalamic nucleus. Local neuron populations in a single somatosensory layer 4 barrel transform sparsely coded input into a spike count based on the input's temporal profile. We investigate this transformation by creating a barrel-like hybrid network with whole-cell recordings of in vitro neurons from a cortical slice preparation, embedding the biological neuron in the simulated network by presenting virtual synaptic conductances via a conductance clamp. Using the hybrid network, we examine how reciprocal network properties (local excitatory and inhibitory synaptic convergence) and neuronal membrane properties (input resistance) alter the barrel population response to diverse thalamic input. In the presence of local network input, neurons are more selective to thalamic input timing; this arises from strong feedforward inhibition. Strongly inhibitory (damping) network regimes are more selective to timing and less selective to the magnitude of input, but require stronger initial input. Input selectivity relies heavily on the different membrane properties of excitatory and inhibitory neurons. When inhibitory and excitatory neurons had identical membrane properties, the sensitivity of in vitro neurons to temporal vs. magnitude features of input was substantially reduced. Increasing the mean leak conductance of the inhibitory cells decreased the network's temporal sensitivity, whereas increasing excitatory leak conductance enhanced magnitude sensitivity. Local network synapses are essential in shaping thalamic input, and the differing membrane properties of functional classes reciprocally modulate this effect.
Inverse stochastic resonance in networks of spiking neurons.
Uzuntarla, Muhammet; Barreto, Ernest; Torres, Joaquin J
2017-07-01
Inverse Stochastic Resonance (ISR) is a phenomenon in which the average spiking rate of a neuron exhibits a minimum with respect to noise. ISR has been studied in individual neurons, but here, we investigate ISR in scale-free networks, where the average spiking rate is calculated over the neuronal population. We use Hodgkin-Huxley model neurons with channel noise (i.e., stochastic gating variable dynamics), and the network connectivity is implemented via electrical or chemical connections (i.e., gap junctions or excitatory/inhibitory synapses). We find that the emergence of ISR depends on the interplay between each neuron's intrinsic dynamical structure, channel noise, and network inputs, where the latter in turn depend on network structure parameters. We observe that with weak gap junction or excitatory synaptic coupling, network heterogeneity and sparseness tend to favor the emergence of ISR. With inhibitory coupling, ISR is quite robust. We also identify dynamical mechanisms that underlie various features of this ISR behavior. Our results suggest possible ways of experimentally observing ISR in actual neuronal systems.
Effects of extracellular potassium diffusion on electrically coupled neuron networks
Wu, Xing-Xing; Shuai, Jianwei
2015-02-01
Potassium accumulation and diffusion during neuronal epileptiform activity have been observed experimentally, and potassium lateral diffusion has been suggested to play an important role in nonsynaptic neuron networks. We adopt a hippocampal CA1 pyramidal neuron network in a zero-calcium condition to better understand the influence of extracellular potassium dynamics on stimulus-induced activity. The potassium concentration in the interstitial space for each neuron is regulated by potassium currents, Na+-K+ pumps, glial buffering, and ion diffusion. In addition to potassium diffusion, nearby neurons are also coupled through gap junctions. Our results reveal that the latency of the first spike in response to the stimulus monotonically decreases with increasing gap-junction conductance but is insensitive to potassium diffusive coupling. The duration of network oscillations shows a bell-like shape with increasing potassium diffusive coupling at weak gap-junction coupling. For modest electrical coupling, there is an optimal K+ diffusion strength, at which the flow of potassium ions among the network neurons modulates interstitial potassium concentrations to a degree that provides the most favorable environment for the generation and continuance of the action potential waves in the network.
Synaptic Plasticity and Spike Synchronisation in Neuronal Networks
Borges, Rafael R.; Borges, Fernando S.; Lameu, Ewandson L.; Protachevicz, Paulo R.; Iarosz, Kelly C.; Caldas, Iberê L.; Viana, Ricardo L.; Macau, Elbert E. N.; Baptista, Murilo S.; Grebogi, Celso; Batista, Antonio M.
2017-12-01
Brain plasticity, also known as neuroplasticity, is a fundamental mechanism of neuronal adaptation in response to changes in the environment or to brain injury. In this review, we present our results on the effects of synaptic plasticity in neuronal networks composed of Hodgkin-Huxley neurons. We show that the final topology of the evolved network depends crucially on the ratio between the strengths of the inhibitory and excitatory synapses. Excitation of the same order as inhibition reveals an evolved network that presents the rich-club phenomenon, well known to exist in the brain. For initial networks with considerably larger inhibitory strengths, we observe the emergence of a complex evolved topology in which neurons are sparsely connected to other neurons, also a topology typical of the brain. The presence of noise enhances the strength of both types of synapses, but only if the initial network has synapses of both types with similar strengths. Finally, we show how the synchronous behaviour of the evolved network reflects its evolved topology.
Bayesian Inference and Online Learning in Poisson Neuronal Networks.
Huang, Yanping; Rao, Rajesh P N
2016-08-01
Motivated by the growing evidence for Bayesian computation in the brain, we show how a two-layer recurrent network of Poisson neurons can perform both approximate Bayesian inference and learning for any hidden Markov model. The lower-layer sensory neurons receive noisy measurements of hidden world states. The higher-layer neurons infer a posterior distribution over world states via Bayesian inference from inputs generated by sensory neurons. We demonstrate how such a neuronal network with synaptic plasticity can implement a form of Bayesian inference similar to Monte Carlo methods such as particle filtering. Each spike in a higher-layer neuron represents a sample of a particular hidden world state. The spiking activity across the neural population approximates the posterior distribution over hidden states. In this model, variability in spiking is regarded not as a nuisance but as an integral feature that provides the variability necessary for sampling during inference. We demonstrate how the network can learn the likelihood model, as well as the transition probabilities underlying the dynamics, using a Hebbian learning rule. We present results illustrating the ability of the network to perform inference and learning for arbitrary hidden Markov models.
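The sampling scheme described above can be caricatured as a bootstrap particle filter in which each particle plays the role of one higher-layer spike, i.e., a sample of a hidden world state, and the particle population approximates the posterior. The two-state model, transition matrix, and Poisson rates below are hypothetical choices for illustration, not the parameters or learning rule of the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-state hidden Markov model (states 0 and 1).
A = np.array([[0.95, 0.05],   # transition probabilities P(z_t | z_{t-1})
              [0.05, 0.95]])
rates = np.array([2.0, 8.0])  # Poisson observation rate in each state

def particle_filter(obs, n_particles=500):
    """Bootstrap particle filter: each particle is a sampled hidden state,
    loosely analogous to one 'spike' representing a world-state sample."""
    particles = rng.integers(0, 2, size=n_particles)
    posteriors = []
    for y in obs:
        # propagate each particle through the transition model
        particles = np.array([rng.choice(2, p=A[z]) for z in particles])
        # weight by the Poisson likelihood of the observed count y
        lam = rates[particles]
        w = np.exp(-lam) * lam ** y
        w /= w.sum()
        # resample: the particle population approximates the posterior
        particles = rng.choice(particles, size=n_particles, p=w)
        posteriors.append(particles.mean())  # estimate of P(z_t = 1 | y_1..t)
    return np.array(posteriors)

# Observations drawn while the hidden state sits in state 1 (rate 8).
obs = rng.poisson(8.0, size=30)
post = particle_filter(obs)
print(post[-1])
```

With counts drawn at rate 8, the inferred posterior probability of state 1 approaches one after a few observations; the spiking-network analogue replaces resampling with recurrent synaptic dynamics.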
Joint statistics of strongly correlated neurons via dimensionality reduction
International Nuclear Information System (INIS)
Deniz, Taşkın; Rotter, Stefan
2017-01-01
The relative timing of action potentials in neurons recorded from local cortical networks often shows a non-trivial dependence, which is then quantified by cross-correlation functions. Theoretical models emphasize that such spike train correlations are an inevitable consequence of two neurons being part of the same network and sharing some synaptic input. For non-linear neuron models, however, explicit correlation functions are difficult to compute analytically, and perturbative methods work only for weak shared input. In order to treat strong correlations, we suggest here an alternative non-perturbative method. Specifically, we study the case of two leaky integrate-and-fire neurons with strong shared input. Correlation functions derived from simulated spike trains fit our theoretical predictions very accurately. Using our method, we computed the non-linear correlation transfer as well as correlation functions that are asymmetric due to inhomogeneous intrinsic parameters or unequal input. (paper)
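A minimal numerical counterpart of this setup is a pair of leaky integrate-and-fire neurons driven by partially shared Gaussian white noise, with the spike-count correlation over time windows quantifying correlation transfer. The Euler scheme and all parameter values below are illustrative assumptions, not the authors' non-perturbative method.

```python
import numpy as np

rng = np.random.default_rng(1)

def lif_pair(c=0.8, T=200_000, dt=0.1, tau=20.0, mu=1.1, sigma=0.5, vth=1.0):
    """Two leaky integrate-and-fire neurons driven by a common plus an
    independent Gaussian white noise; c is the input correlation."""
    v = np.zeros(2)
    spikes = np.zeros((2, T), dtype=bool)
    for t in range(T):
        shared = rng.normal()
        for i in range(2):
            xi = np.sqrt(c) * shared + np.sqrt(1 - c) * rng.normal()
            v[i] += dt / tau * (mu - v[i]) + sigma * np.sqrt(dt / tau) * xi
            if v[i] >= vth:
                spikes[i, t] = True
                v[i] = 0.0   # reset after a spike
    return spikes

spikes = lif_pair()
# spike-count correlation over windows of 500 time steps
counts = spikes.reshape(2, -1, 500).sum(axis=2)
rho = np.corrcoef(counts)[0, 1]
print(rho)
```

The output correlation rho is positive but smaller than the input correlation c, illustrating the (generally sublinear) correlation transfer that the paper computes analytically.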
Energy-efficient neural information processing in individual neurons and neuronal networks.
Yu, Lianchun; Yu, Yuguo
2017-11-01
Brains are composed of networks of an enormous number of neurons interconnected with synapses. Neural information is carried by electrical signals within neurons and chemical signals among neurons. Generating these electrical and chemical signals is metabolically expensive. The fundamental issue raised here is whether brains have evolved efficient ways of developing an energy-efficient neural code, from the molecular level to the circuit level. Here, we summarize the factors and biophysical mechanisms that could contribute to an energy-efficient neural code for processing input signals. These factors include ion channel kinetics, body temperature, axonal propagation of action potentials, low-probability release of synaptic neurotransmitters, optimal input and noise, the size of neurons and neuronal clusters, excitation/inhibition balance, coding strategy, cortical wiring, and the organization of functional connectivity. Both experimental and computational evidence suggests that neural systems may use these factors to maximize the efficiency of energy consumption in processing neural signals. Studies indicate that efficient energy utilization may be universal in neuronal systems as an evolutionary consequence of the pressure of limited energy. As a result, neuronal connections may be wired in a highly economical manner to lower energy and space costs. Individual neurons within a network may encode independent stimulus components to allow a minimal number of neurons to represent whole stimulus characteristics efficiently. This basic principle may fundamentally change our view of how billions of neurons organize themselves into complex circuits to operate and generate the most powerful intelligent cognition in nature. © 2017 Wiley Periodicals, Inc.
Novel transcriptional networks regulated by CLOCK in human neurons.
Fontenot, Miles R; Berto, Stefano; Liu, Yuxiang; Werthmann, Gordon; Douglas, Connor; Usui, Noriyoshi; Gleason, Kelly; Tamminga, Carol A; Takahashi, Joseph S; Konopka, Genevieve
2017-11-01
The molecular mechanisms underlying human brain evolution are not fully understood; however, previous work suggested that expression of the transcription factor CLOCK in the human cortex might be relevant to human cognition and disease. In this study, we investigated this novel transcriptional role for CLOCK in human neurons by performing chromatin immunoprecipitation sequencing for endogenous CLOCK in adult neocortices and RNA sequencing following CLOCK knockdown in differentiated human neurons in vitro. These data suggested that CLOCK regulates the expression of genes involved in neuronal migration, and a functional assay showed that CLOCK knockdown increased neuronal migratory distance. Furthermore, dysregulation of CLOCK disrupts coexpressed networks of genes implicated in neuropsychiatric disorders, and the expression of these networks is driven by hub genes with human-specific patterns of expression. These data support a role for CLOCK-regulated transcriptional cascades involved in human brain evolution and function. © 2017 Fontenot et al.; Published by Cold Spring Harbor Laboratory Press.
Fractional-order leaky integrate-and-fire model with long-term memory and power law dynamics.
Teka, Wondimu W; Upadhyay, Ranjit Kumar; Mondal, Argha
2017-09-01
Pyramidal neurons produce different spiking patterns to process information, communicate with each other and transform information. These spiking patterns have complex, multiple-time-scale dynamics that have been described with the fractional-order leaky integrate-and-fire (FLIF) model. Models with fractional (non-integer) order differentiation, which generalize power law dynamics, can be used to describe complex temporal voltage dynamics. The main characteristic of the FLIF model is that it depends on all past values of the voltage, which gives rise to long-term memory. The model produces spikes with high interspike interval variability and displays several spiking properties, such as upward spike-frequency adaptation and long spike latency, in response to a constant stimulus. We show that the subthreshold voltage and the firing rate of the fractional-order model make transitions from exponential to power law dynamics when the fractional order α decreases from 1 to smaller values. The firing rate displays different types of spike timing adaptation caused by changes in initial values. We also show that the voltage-memory trace and the fractional coefficient are the causes of these different types of spiking properties. The voltage-memory trace, which represents the long-term memory, has a feedback regulatory mechanism and affects spiking activity. The results suggest that fractional-order models might be appropriate for understanding multiple-time-scale neuronal dynamics. Overall, a neuron with fractional dynamics displays history-dependent activities that might be very useful and powerful for effective information processing. Copyright © 2017 Elsevier Ltd. All rights reserved.
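A common way to implement such a model numerically is the Grünwald-Letnikov discretization, in which the fractional derivative becomes a weighted sum over the entire voltage history — the voltage-memory trace mentioned above. The sketch below uses illustrative dimensionless parameters and is not the authors' exact formulation.

```python
import numpy as np

def fractional_lif(alpha=0.6, I=1.5, T=2000, dt=0.1, vth=1.0, vreset=0.0):
    """Grünwald-Letnikov discretization of a fractional leaky
    integrate-and-fire neuron, D^alpha v = -v + I (illustrative units).
    The weighted sum over all past voltages is the 'memory trace'."""
    # Grünwald-Letnikov coefficients c_k = (-1)^k * C(alpha, k)
    c = np.zeros(T)
    c[0] = 1.0
    for k in range(1, T):
        c[k] = c[k - 1] * (k - 1 - alpha) / k
    v = np.zeros(T)
    spike_times = []
    for n in range(1, T):
        # voltage-memory trace: weighted sum over the full history
        memory = c[1:n + 1] @ v[n - 1::-1]
        v[n] = dt ** alpha * (-v[n - 1] + I) - memory
        if v[n] >= vth:
            spike_times.append(n * dt)
            v[n] = vreset
    return np.array(spike_times)

spikes = fractional_lif()
print(len(spikes), np.diff(spikes)[:3])
```

Because the reset does not erase the history term, past activity keeps influencing the interspike intervals, which is the mechanism behind the history-dependent adaptation described in the abstract.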
Extracting functionally feedforward networks from a population of spiking neurons.
Vincent, Kathleen; Tauskela, Joseph S; Thivierge, Jean-Philippe
2012-01-01
Neuronal avalanches are a ubiquitous form of activity characterized by spontaneous bursts whose size distribution follows a power-law. Recent theoretical models have replicated power-law avalanches by assuming the presence of functionally feedforward connections (FFCs) in the underlying dynamics of the system. Accordingly, avalanches are generated by a feedforward chain of activation that persists despite being embedded in a larger, massively recurrent circuit. However, it is unclear to what extent networks of living neurons that exhibit power-law avalanches rely on FFCs. Here, we employed a computational approach to reconstruct the functional connectivity of cultured cortical neurons plated on multielectrode arrays (MEAs) and investigated whether pharmacologically induced alterations in avalanche dynamics are accompanied by changes in FFCs. This approach begins by extracting a functional network of directed links between pairs of neurons, and then evaluates the strength of FFCs using Schur decomposition. In a first step, we examined the ability of this approach to extract FFCs from simulated spiking neurons. The strength of FFCs obtained in strictly feedforward networks diminished monotonically as links were gradually rewired at random. Next, we estimated the FFCs of spontaneously active cortical neuron cultures in the presence of either a control medium, a GABA(A) receptor antagonist (PTX), or an AMPA receptor antagonist combined with an NMDA receptor antagonist (APV/DNQX). The distribution of avalanche sizes in these cultures was modulated by this pharmacology, with a shallower power-law under PTX (due to the prominence of larger avalanches) and a steeper power-law under APV/DNQX (due to avalanches recruiting fewer neurons) relative to control cultures. The strength of FFCs increased in networks after application of PTX, consistent with an amplification of feedforward activity during avalanches. Conversely, FFCs decreased after application of APV/DNQX.
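The Schur-based measure of feedforward strength can be illustrated directly: after a Schur decomposition W = Q T Q*, the strictly triangular part of T carries the feedforward (non-normal) component of a weight matrix. The specific normalization below — the fraction of the Frobenius norm in that triangular part — is an assumed, simplified version of such a measure, not necessarily the exact one used in the study.

```python
import numpy as np
from scipy.linalg import schur

def ffc_strength(W):
    """Feedforward strength of a weight matrix via Schur decomposition:
    fraction of the Frobenius norm carried by the strictly triangular
    ('feedforward chain') part of the Schur form."""
    T, Q = schur(W, output="complex")
    ff = np.triu(T, k=1)   # strictly upper-triangular part of the Schur form
    return np.linalg.norm(ff) / np.linalg.norm(W)

rng = np.random.default_rng(2)
n = 40
# strictly feedforward network: links only from earlier to later neurons
W_ff = np.triu(rng.random((n, n)), k=1)
# symmetric (normal) network: no feedforward component at all
W_sym = rng.random((n, n))
W_sym = (W_sym + W_sym.T) / 2

print(ffc_strength(W_ff), ffc_strength(W_sym))
```

A strictly feedforward matrix is nilpotent, so its entire norm sits in the triangular part (strength 1), while a symmetric matrix has a diagonal Schur form (strength 0); randomly rewiring a feedforward network moves the measure between these extremes, matching the monotonic decrease reported above.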
NT2 derived neuronal and astrocytic network signalling.
Directory of Open Access Journals (Sweden)
Eric J Hill
Full Text Available A major focus of stem cell research is the generation of neurons that may then be implanted to treat neurodegenerative diseases. However, a picture is emerging where astrocytes are partners to neurons in sustaining and modulating brain function. We therefore investigated the functional properties of NT2-derived astrocytes and neurons using electrophysiological and calcium imaging approaches. NT2 neurons (NT2Ns) expressed sodium-dependent action potentials, as well as responses to depolarisation and the neurotransmitter glutamate. NT2Ns exhibited spontaneous and coordinated calcium elevations in clusters and in extended processes, indicating local and long-distance signalling. Tetrodotoxin-sensitive network activity could also be evoked by electrical stimulation. Similarly, NT2 astrocytes (NT2As) exhibited morphology and functional properties consistent with this glial cell type. NT2As responded to neuronal activity and to exogenously applied neurotransmitters with calcium elevations and, in contrast to neurons, also exhibited spontaneous rhythmic calcium oscillations. NT2As also generated propagating calcium waves that were gap junction and purinergic signalling dependent. Our results show that NT2-derived astrocytes exhibit appropriate functionality and that NT2N networks interact with NT2A networks in co-culture. These findings underline the utility of such cultures for investigating human brain cell type signalling under controlled conditions. Furthermore, since stem cell derived neuron function and survival is of great importance therapeutically, our findings suggest that the presence of complementary astrocytes may be valuable in supporting stem cell derived neuronal networks. Indeed, this also supports the intriguing possibility of selective therapeutic replacement of astrocytes in diseases where these cells are either lost or lose functionality.
Learning and structure of neuronal networks
Indian Academy of Sciences (India)
structures, protein–protein interaction networks, social interactions, the Internet, and so on can be described by complex networks [1–5]. Recent developments in the understanding of complex networks have led to deeper insights about their origin and other properties [1–5]. One common realization that emerges from these ...
Effect of stimulation on the input parameters of stochastic leaky integrate-and-fire neuronal model
Czech Academy of Sciences Publication Activity Database
Lánský, Petr; Šanda, Pavel; He, J.
2010-01-01
Roč. 104, 3-4 (2010), s. 160-166 ISSN 0928-4257 R&D Projects: GA MŠk(CZ) LC554; GA AV ČR(CZ) IAA101120604 Institutional research plan: CEZ:AV0Z50110509 Keywords : membrane depolarization * input parameters * diffusion Subject RIV: BO - Biophysics Impact factor: 3.030, year: 2010
Short-term memory in networks of dissociated cortical neurons.
Dranias, Mark R; Ju, Han; Rajaram, Ezhilarasan; VanDongen, Antonius M J
2013-01-30
Short-term memory refers to the ability to store small amounts of stimulus-specific information for a short period of time. It is supported by both fading and hidden memory processes. Fading memory relies on recurrent activity patterns in a neuronal network, whereas hidden memory is encoded using synaptic mechanisms, such as facilitation, which persist even when neurons fall silent. We have used a novel computational and optogenetic approach to investigate whether these same memory processes hypothesized to support pattern recognition and short-term memory in vivo, exist in vitro. Electrophysiological activity was recorded from primary cultures of dissociated rat cortical neurons plated on multielectrode arrays. Cultures were transfected with ChannelRhodopsin-2 and optically stimulated using random dot stimuli. The pattern of neuronal activity resulting from this stimulation was analyzed using classification algorithms that enabled the identification of stimulus-specific memories. Fading memories for different stimuli, encoded in ongoing neural activity, persisted and could be distinguished from each other for as long as 1 s after stimulation was terminated. Hidden memories were detected by altered responses of neurons to additional stimulation, and this effect persisted longer than 1 s. Interestingly, network bursts seem to eliminate hidden memories. These results are similar to those that have been reported from similar experiments in vivo and demonstrate that mechanisms of information processing and short-term memory can be studied using cultured neuronal networks, thereby setting the stage for therapeutic applications using this platform.
How adaptation shapes spike rate oscillations in recurrent neuronal networks
Directory of Open Access Journals (Sweden)
Moritz eAugustin
2013-02-01
Full Text Available Neural mass signals from in vivo recordings often show oscillations with frequencies ranging from <1 Hz to 100 Hz. Fast rhythmic activity in the beta and gamma range can be generated by network-based mechanisms such as recurrent synaptic excitation-inhibition loops. Slower oscillations might instead depend on neuronal adaptation currents, whose timescales range from tens of milliseconds to seconds. Here we investigate how the dynamics of such adaptation currents contribute to spike rate oscillations and resonance properties in recurrent networks of excitatory and inhibitory neurons. Based on a network of sparsely coupled spiking model neurons with two types of adaptation current and conductance-based synapses with heterogeneous strengths and delays, we use a mean-field approach to analyze oscillatory network activity. For constant external input, we find that spike-triggered adaptation currents provide a mechanism to generate slow oscillations over a wide range of adaptation timescales, as long as recurrent synaptic excitation is sufficiently strong. Faster rhythms occur when recurrent inhibition is slower than excitation, and the oscillation frequency increases with the strength of inhibition. Adaptation facilitates such network-based oscillations for fast synaptic inhibition and leads to decreased frequencies. For oscillatory external input, adaptation currents amplify a narrow band of frequencies and cause phase advances for low frequencies, in addition to phase delays at higher frequencies. Our results therefore identify the distinct key roles of neuronal adaptation dynamics for rhythmogenesis and selective signal propagation in recurrent networks.
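The core mechanism — strong recurrent excitation checked by a slow adaptation variable — can be caricatured with a two-variable rate model. This is a deliberately minimal sketch with assumed parameters and an assumed sigmoidal nonlinearity, not the conductance-based spiking network or the mean-field equations of the study.

```python
import numpy as np

def f(x):
    """Steep sigmoidal rate nonlinearity (an assumed, illustrative choice)."""
    return 1.0 / (1.0 + np.exp(-(x - 0.5) / 0.05))

def rate_model_with_adaptation(w=2.0, b=3.0, tau=0.01, tau_a=0.3,
                               I=0.5, T=6.0, dt=1e-4):
    """tau r' = -r + f(w*r - a + I);  tau_a a' = -a + b*r.
    Fast recurrent excitation (w) plus slow adaptation (a) produces
    slow relaxation oscillations of the population rate r."""
    n = int(T / dt)
    r = np.zeros(n)
    a = np.zeros(n)
    r[0] = 0.1
    for t in range(n - 1):
        r[t + 1] = r[t] + dt / tau * (-r[t] + f(w * r[t] - a[t] + I))
        a[t + 1] = a[t] + dt / tau_a * (-a[t] + b * r[t])
    return r, a

r, a = rate_model_with_adaptation()
# upward crossings of r = 0.5 count the oscillation cycles
cycles = int(np.sum((r[:-1] < 0.5) & (r[1:] >= 0.5)))
print(cycles)
```

The oscillation period is set mainly by the slow adaptation timescale tau_a rather than by the fast rate dynamics, echoing the abstract's point that adaptation currents can generate rhythms far slower than the synaptic loops themselves.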
Extracting neuronal functional network dynamics via adaptive Granger causality analysis.
Sheikhattar, Alireza; Miran, Sina; Liu, Ji; Fritz, Jonathan B; Shamma, Shihab A; Kanold, Patrick O; Babadi, Behtash
2018-04-24
Quantifying the functional relations between the nodes in a network based on local observations is a key challenge in studying complex systems. Most existing time series analysis techniques for this purpose provide static estimates of the network properties, pertain to stationary Gaussian data, or do not take into account the ubiquitous sparsity in the underlying functional networks. When applied to spike recordings from neuronal ensembles undergoing rapid task-dependent dynamics, they thus hinder a precise statistical characterization of the dynamic neuronal functional networks underlying adaptive behavior. We develop a dynamic estimation and inference paradigm for extracting functional neuronal network dynamics in the sense of Granger, by integrating techniques from adaptive filtering, compressed sensing, point process theory, and high-dimensional statistics. We demonstrate the utility of our proposed paradigm through theoretical analysis, algorithm development, and application to synthetic and real data. Application of our techniques to two-photon Ca 2+ imaging experiments from the mouse auditory cortex reveals unique features of the functional neuronal network structures underlying spontaneous activity at unprecedented spatiotemporal resolution. Our analysis of simultaneous recordings from the ferret auditory and prefrontal cortical areas suggests evidence for the role of rapid top-down and bottom-up functional dynamics across these areas involved in robust attentive behavior.
The Dynamics of Networks of Identical Theta Neurons.
Laing, Carlo R
2018-02-05
We consider finite and infinite all-to-all coupled networks of identical theta neurons. Two types of synaptic interactions are investigated: instantaneous and delayed (via first-order synaptic processing). Extensive use is made of the Watanabe/Strogatz (WS) ansatz for reducing the dimension of networks of identical sinusoidally-coupled oscillators. As well as the degeneracy associated with the constants of motion of the WS ansatz, we also find continuous families of solutions for instantaneously coupled neurons, resulting from the reversibility of the reduced model and the form of the synaptic input. We also investigate a number of similar related models. We conclude that the dynamics of networks of all-to-all coupled identical neurons can be surprisingly complicated.
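A direct simulation of such a network is straightforward even without the WS reduction. The sketch below integrates all-to-all coupled theta neurons with first-order synaptic filtering; it uses assumed parameter values and a simple event count, and is not the reduced model analyzed in the paper.

```python
import numpy as np

def theta_network(N=50, eta=0.2, k=1.0, tau_s=0.1, T=50.0, dt=1e-3, seed=3):
    """All-to-all network of identical theta neurons with first-order
    synaptic processing (a rough sketch, not the WS-reduced equations):
        theta_i' = 1 - cos(theta_i) + (1 + cos(theta_i)) * (eta + k*s)
        tau_s s'  = -s + (population spike input)
    A neuron 'spikes' when its phase crosses pi."""
    rng = np.random.default_rng(seed)
    theta = rng.uniform(-np.pi, np.pi, N)   # random initial phases
    s = 0.0
    spike_count = 0
    for _ in range(int(T / dt)):
        drive = eta + k * s
        dtheta = (1 - np.cos(theta)) + (1 + np.cos(theta)) * drive
        new = theta + dt * dtheta           # phase always advances (eta > 0)
        fired = (theta < np.pi) & (new >= np.pi)
        spike_count += int(fired.sum())
        s += dt / tau_s * (-s) + fired.sum() / (N * tau_s)
        theta = np.mod(new + np.pi, 2 * np.pi) - np.pi   # wrap to (-pi, pi]
    return spike_count

print(theta_network())
```

With eta > 0 every neuron fires tonically (the theta neuron is past its SNIC bifurcation), and excitatory coupling through s raises the population rate; instantaneous coupling would correspond to the tau_s -> 0 limit discussed in the paper.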
A Markovian event-based framework for stochastic spiking neural networks.
Touboul, Jonathan D; Faugeras, Olivier D
2011-11-01
In spiking neural networks, information is conveyed by spike times, which depend on the intrinsic dynamics of each neuron, the input it receives, and the connections between neurons. In this article we study the Markovian nature of the sequence of spike times in stochastic neural networks, and in particular the ability to deduce the next spike time from a spike train, and therefore to produce a description of the network activity based only on spike times, regardless of the membrane potential process. To study this question in a rigorous manner, we introduce and study an event-based description of networks of noisy integrate-and-fire neurons, i.e., one based on the computation of spike times. We show that the firing times of the neurons in the network constitute a Markov chain, whose transition probability is related to the probability distribution of the interspike intervals of the neurons in the network. In the cases where the Markovian model can be developed, the transition probability is derived explicitly for such classical neural network models as linear integrate-and-fire neurons with excitatory and inhibitory interactions, for different types of synapses, possibly featuring noisy synaptic integration, transmission delays, and absolute and relative refractory periods. This covers most of the cases that have been investigated in the event-based description of spiking deterministic neural networks.
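For a single neuron, the simplest instance of such an event-based description is a perfect integrate-and-fire neuron with white-noise input: its interspike intervals are i.i.d. inverse-Gaussian (first-passage times of drifted Brownian motion), so the spike train can be generated event by event without ever simulating the membrane potential. Parameter values below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

def event_based_pif(mu=1.0, sigma=0.5, theta=1.0, n_spikes=20_000):
    """Event-based simulation of a perfect integrate-and-fire neuron with
    drift mu, noise intensity sigma, and threshold theta: successive
    interspike intervals are i.i.d. inverse-Gaussian, so the sequence of
    spike times is a (here trivial, renewal) Markov chain."""
    mean_isi = theta / mu              # mean first-passage time
    lam = (theta / sigma) ** 2         # inverse-Gaussian shape parameter
    isis = rng.wald(mean_isi, lam, size=n_spikes)
    return np.cumsum(isis)

times = event_based_pif()
rate = len(times) / times[-1]
print(rate)   # empirical firing rate, close to mu/theta = 1.0
```

In a network, each drawn interval would additionally be conditioned on the spikes received since the last reset, which is exactly where the non-trivial Markov transition probability of the paper enters.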
Growth of cortical neuronal network in vitro: Modeling and analysis
International Nuclear Information System (INIS)
Lai, P.-Y.; Jia, L. C.; Chan, C. K.
2006-01-01
We present a detailed analysis and theoretical growth models to account for recent experimental data on the growth of cortical neuronal networks in vitro [Phys. Rev. Lett. 93, 088101 (2004)]. The experimentally observed synchronized firing frequency of a well-connected neuronal network is shown to be proportional to the mean network connectivity. The growth of the network is consistent with a model of enhanced connection growth at early stages, followed by retarded growth once the synchronized cluster is formed. Microscopic models with dominant excluded-volume interactions are consistent with the observed exponential decay of the mean connection probability as a function of the mean network connectivity. The biological implications of the growth model are also discussed.
Clustering promotes switching dynamics in networks of noisy neurons
Franović, Igor; Klinshov, Vladimir
2018-02-01
Macroscopic variability is an emergent property of neural networks, typically manifested in spontaneous switching between the episodes of elevated neuronal activity and the quiescent episodes. We investigate the conditions that facilitate switching dynamics, focusing on the interplay between the different sources of noise and heterogeneity of the network topology. We consider clustered networks of rate-based neurons subjected to external and intrinsic noise and derive an effective model where the network dynamics is described by a set of coupled second-order stochastic mean-field systems representing each of the clusters. The model provides an insight into the different contributions to effective macroscopic noise and qualitatively indicates the parameter domains where switching dynamics may occur. By analyzing the mean-field model in the thermodynamic limit, we demonstrate that clustering promotes multistability, which gives rise to switching dynamics in a considerably wider parameter region compared to the case of a non-clustered network with sparse random connection topology.
Learning and structure of neuronal networks
Indian Academy of Sciences (India)
We study the effect of learning dynamics on network topology. Firstly, a network of discrete dynamical systems is considered for this purpose and the coupling strengths are made to evolve according to a temporal learning rule that is based on the paradigm of spike-time-dependent plasticity (STDP). This incorporates ...
Connectivities and synchronous firing in cortical neuronal networks
International Nuclear Information System (INIS)
Jia, L.C.; Sano, M.; Lai, P.-Y.; Chan, C.K.
2004-01-01
Network connectivities (k-bar) of cortical neural cultures are studied by synchronized firing and determined from measured correlations between fluorescence intensities of firing neurons. The bursting frequency (f) during synchronized firing of the networks is found to be an increasing function of k-bar. With f taken to be proportional to k-bar, a simple random model with a k-bar-dependent connection probability p(k-bar) has been constructed to explain our experimental findings successfully.
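The random model in this record ties the synchronized bursting frequency to the mean connectivity k-bar. As a minimal sketch (an Erdős–Rényi construction, not the authors' specific p(k-bar) form), the relation k-bar ≈ p(N − 1) between connection probability and mean connectivity can be made explicit:

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_connectivity(n_neurons, p):
    """Mean out-degree of a directed Erdos-Renyi network in which every
    ordered pair of distinct neurons is connected with probability p."""
    adj = rng.random((n_neurons, n_neurons)) < p
    np.fill_diagonal(adj, False)  # no self-connections
    return adj.sum(axis=1).mean()

# For such a network the mean connectivity is approximately p * (N - 1),
# so tuning p directly sets k-bar.
kbar = mean_connectivity(1000, 0.05)
```

With N = 1000 and p = 0.05 the measured mean degree concentrates tightly around 0.05 × 999 ≈ 50, which is the sense in which p controls k-bar in such random models.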
Order-based representation in random networks of cortical neurons.
Directory of Open Access Journals (Sweden)
Goded Shahaf
2008-11-01
The wide range of time scales involved in neural excitability and synaptic transmission might lead to ongoing change in the temporal structure of responses to recurring stimulus presentations on a trial-to-trial basis. This is probably the most severe biophysical constraint on putative time-based primitives of stimulus representation in neuronal networks. Here we show that in spontaneously developing large-scale random networks of cortical neurons in vitro the order in which neurons are recruited following each stimulus is a naturally emerging representation primitive that is invariant to significant temporal changes in spike times. With a relatively small number of randomly sampled neurons, the information about stimulus position is fully retrievable from the recruitment order. The effective connectivity that makes order-based representation invariant to time warping is characterized by the existence of stations through which activity is required to pass in order to propagate further into the network. This study uncovers a simple invariant in a noisy biological network in vitro; its applicability under in vivo constraints remains to be seen.
Error-backpropagation in temporally encoded networks of spiking neurons
S.M. Bohte (Sander); J.A. La Poutré (Han); J.N. Kok (Joost)
2000-01-01
For a network of spiking neurons that encodes information in the timing of individual spike times, we derive a supervised learning rule, SpikeProp, akin to traditional error-backpropagation, and show how to overcome the discontinuities introduced by thresholding. With this algorithm,
Studies on a network of complex neurons
Chakravarthy, Srinivasa V.; Ghosh, Joydeep
1993-09-01
In the last decade, much effort has been directed towards understanding the role of chaos in the brain. Work with rabbits reveals that in the resting state the electrical activity on the surface of the olfactory bulb is chaotic. But, when the animal is involved in a recognition task, the activity shifts to a specific pattern corresponding to the odor that is being recognized. Unstable, quasiperiodic behavior can be found in a class of conservative, deterministic physical systems called the Hamiltonian systems. In this paper, we formulate a complex version of Hopfield's network of real parameters and show that a variation on this model is a conservative system. Conditions under which the complex network can be used as a Content Addressable memory are studied. We also examine the effect of singularities of the complex sigmoid function on the network dynamics. The network exhibits unpredictable behavior at the singularities due to the failure of a uniqueness condition for the solution of the dynamic equations. On incorporating a weight adaptation rule, the structure of the resulting complex network equations is shown to have an interesting similarity with Kosko's Adaptive Bidirectional Associative Memory.
A spiking neuron circuit based on a carbon nanotube transistor
International Nuclear Information System (INIS)
Chen, C-L; Kim, K; Truong, Q; Shen, A; Li, Z; Chen, Y
2012-01-01
A spiking neuron circuit based on a carbon nanotube (CNT) transistor is presented in this paper. The spiking neuron circuit has a crossbar architecture in which the transistor gates are connected to its row electrodes and the transistor sources are connected to its column electrodes. An electrochemical cell is incorporated in the gate of the transistor by sandwiching a hydrogen-doped poly(ethylene glycol)methyl ether (PEG) electrolyte between the CNT channel and the top gate electrode. An input spike applied to the gate triggers a dynamic drift of the hydrogen ions in the PEG electrolyte, resulting in a post-synaptic current (PSC) through the CNT channel. Spikes input into the rows trigger PSCs through multiple CNT transistors, and PSCs accumulate in the columns and integrate into a ‘soma’ circuit to trigger output spikes based on an integrate-and-fire mechanism. The spiking neuron circuit can potentially emulate biological neuron networks and their intelligent functions.
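The ‘soma’ circuit described above fires whenever the accumulated PSCs drive an integrated potential across a threshold. A minimal leaky integrate-and-fire sketch of that mechanism (illustrative parameters, not the reported circuit values):

```python
import numpy as np

def integrate_and_fire(input_current, dt=0.1, tau=10.0, v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire soma: the membrane potential integrates the
    input current with leak time constant tau; a spike is emitted and the
    potential reset whenever the threshold is crossed."""
    v = v_reset
    spike_times = []
    for step, i_in in enumerate(input_current):
        v += dt * (-v / tau + i_in)  # Euler step of the leaky integrator
        if v >= v_thresh:
            spike_times.append(step * dt)
            v = v_reset
    return spike_times

# A constant suprathreshold drive yields regular spiking.
spikes = integrate_and_fire(np.full(2000, 0.5))
```

In the crossbar picture, `input_current` would be the column sum of PSCs; here it is a constant placeholder.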
The Neuronal Network Orchestration behind Motor Behaviors
DEFF Research Database (Denmark)
Petersen, Peter Christian
to motoneurons during rhythmic motor behaviors, and specifically the hypothesis that motoneurons receive concurrent excitatory and inhibitory (E/I) inputs. Berg et al. (2007) presented the concurrent hypothesis, which goes against the classical feed-forward reciprocal model for spinal motor networks that has … gained widespread acceptance. We developed an adult turtle preparation where the spinal motor network was intact, which also allowed us to perform intracellular recordings from motoneurons during rhythmic motor activity. We estimated the synaptic excitatory and inhibitory conductances by two individual … (Buzsáki and Mizuseki, 2014). Roxin et al. (2011) detailed the firing rate distribution in networks in the balanced regime, and found it to be similar to a lognormal distribution and describing the data from the population studies very well. Our experimental observations and analysis are in agreement …
Numerical simulation of coherent resonance in a model network of Rulkov neurons
Andreev, Andrey V.; Runnova, Anastasia E.; Pisarchik, Alexander N.
2018-04-01
In this paper we study the spiking behaviour of a neuronal network consisting of Rulkov elements. We find that the regularity of this behaviour is maximized at a certain level of environmental noise. This effect, referred to as coherence resonance, is demonstrated in a random complex network of Rulkov neurons. An external stimulus added to some of the neurons excites them, which then activates other neurons in the network. The network coherence is also maximized at a certain stimulus amplitude.
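A single Rulkov element can be iterated directly. One common form of the two-dimensional chaotic Rulkov map, with additive noise entering the fast variable, is sketched below (parameter values are illustrative, not those used in the study):

```python
import numpy as np

def rulkov_trace(n_steps, alpha=4.1, mu=0.001, sigma=-1.2, noise=0.01, seed=0):
    """Iterate the two-dimensional Rulkov map: a fast variable x (membrane
    potential) driven by a slowly varying variable y, plus Gaussian noise.
    Map form: x[n+1] = alpha / (1 + x[n]^2) + y[n] + noise,
              y[n+1] = y[n] - mu * (x[n] - sigma)."""
    rng = np.random.default_rng(seed)
    x, y = -1.0, -3.0
    xs = np.empty(n_steps)
    for n in range(n_steps):
        x_next = alpha / (1.0 + x * x) + y + noise * rng.standard_normal()
        y = y - mu * (x - sigma)  # slow subsystem
        x = x_next
        xs[n] = x
    return xs

trace = rulkov_trace(5000)
```

Coupling many such elements on a random graph and sweeping the noise level would expose the coherence-resonance effect the abstract describes; the sketch only shows the single-unit iteration.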
Li, Huiyan; Sun, Xiaojuan; Xiao, Jinghua
2018-01-01
An excitatory-inhibitory recurrent neuronal network is established to numerically study the effect of inhibitory neurons on the synchronization degree of neuronal systems. The obtained results show that, as the number of inhibitory neurons and the coupling strength from inhibitory to excitatory neurons increase, inhibitory neurons can not only reduce the synchronization degree when the synchronization degree of the excitatory population is initially high, but also enhance it when it is initially low. Meanwhile, inhibitory neurons can also help the neuronal networks to maintain moderately synchronized states. In this paper, we call this the modulation effect of inhibitory neurons. The obtained results further reveal that a ratio of excitatory to inhibitory neurons of nearly 4:1 is an economical choice for inhibitory neurons to realize this modulation effect.
Directory of Open Access Journals (Sweden)
Silvia Scarpetta
2010-08-01
We study the storage and retrieval of phase-coded patterns as stable dynamical attractors in recurrent neural networks, for both an analog and an integrate-and-fire spiking model. The synaptic strength is determined by a learning rule based on spike-time-dependent plasticity, with an asymmetric time window depending on the relative timing between pre- and post-synaptic activity. We store multiple patterns and study the network capacity. For the analog model, we find that the network capacity scales linearly with the network size, and that both the capacity and the oscillation frequency of the retrieval state depend on the asymmetry of the learning time window. In addition to fully connected networks, we study sparse networks, where each neuron is connected only to a small number z ≪ N of other neurons. Connections can be short range, between neighboring neurons placed on a regular lattice, or long range, between randomly chosen pairs of neurons. We find that a small fraction of long-range connections is able to amplify the capacity of the network. This implies that a small-world network topology is optimal, as a compromise between the cost of long-range connections and the capacity increase. The crucial result of storage and retrieval of multiple phase-coded patterns is also observed in the spiking integrate-and-fire model. The capacity of the fully connected spiking network is investigated, together with the relation between the oscillation frequency of the retrieval state and the window asymmetry.
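The sparse topologies discussed above mix short-range lattice connections with a small fraction of long-range ones. A sketch of such a construction (a hypothetical helper in the Watts-Strogatz spirit, not the authors' code): each neuron connects to its z nearest neighbours on a ring, and each connection is rewired to a random long-range target with probability `p_long`.

```python
import numpy as np

def small_world_adjacency(n, z, p_long, seed=0):
    """Ring lattice where each neuron connects to its z nearest neighbours
    (z/2 on each side); each connection is independently rewired to a random
    long-range target with probability p_long."""
    rng = np.random.default_rng(seed)
    adj = np.zeros((n, n), dtype=bool)
    for i in range(n):
        for k in range(1, z // 2 + 1):
            for j in ((i + k) % n, (i - k) % n):
                if rng.random() < p_long:
                    # rewire: pick a fresh long-range target
                    j = rng.integers(n)
                    while j == i or adj[i, j]:
                        j = rng.integers(n)
                adj[i, j] = True
    return adj

adj = small_world_adjacency(200, 6, 0.1)
```

Sweeping `p_long` from 0 towards 1 interpolates between the regular lattice and a random graph, which is the regime in which the abstract reports the capacity amplification.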
Mensi, Skander; Hagens, Olivier; Gerstner, Wulfram; Pozzorini, Christian
2016-02-01
The way in which single neurons transform input into output spike trains has fundamental consequences for network coding. Theories and modeling studies based on standard Integrate-and-Fire models implicitly assume that, in response to increasingly strong inputs, neurons modify their coding strategy by progressively reducing their selective sensitivity to rapid input fluctuations. Combining mathematical modeling with in vitro experiments, we demonstrate that, in L5 pyramidal neurons, the firing threshold dynamics adaptively adjust the effective timescale of somatic integration in order to preserve sensitivity to rapid signals over a broad range of input statistics. For that, a new Generalized Integrate-and-Fire model featuring nonlinear firing threshold dynamics and conductance-based adaptation is introduced that outperforms state-of-the-art neuron models in predicting the spiking activity of neurons responding to a variety of in vivo-like fluctuating currents. Our model allows for efficient parameter extraction and can be analytically mapped to a Generalized Linear Model in which both the input filter--describing somatic integration--and the spike-history filter--accounting for spike-frequency adaptation--dynamically adapt to the input statistics, as experimentally observed. Overall, our results provide new insights on the computational role of different biophysical processes known to underlie adaptive coding in single neurons and support previous theoretical findings indicating that the nonlinear dynamics of the firing threshold due to Na+-channel inactivation regulate the sensitivity to rapid input fluctuations.
Distribution of spinal neuronal networks controlling forward and backward locomotion.
Merkulyeva, Natalia; Veshchitskii, Aleksandr; Gorsky, Oleg; Pavlova, Natalia; Zelenin, Pavel V; Gerasimenko, Yury; Deliagina, Tatiana G; Musienko, Pavel
2018-04-20
Higher vertebrates, including humans, are capable not only of forward (FW) locomotion but also of walking in other directions relative to the body axis [backward (BW), sideways, etc.]. While the neural mechanisms responsible for controlling FW locomotion have been studied in considerable detail, the mechanisms controlling steps in other directions are mostly unknown. The aim of the present study was to investigate the distribution of spinal neuronal networks controlling FW and BW locomotion. First, we applied electrical epidural stimulation (ES) to different segments of the spinal cord from L2 to S2 to reveal zones triggering FW and BW locomotion in decerebrate cats of either sex. Second, to determine the location of spinal neurons activated during FW and BW locomotion, we used c-fos immunostaining. We found that the neuronal networks responsible for FW locomotion were distributed broadly in the lumbosacral spinal cord and could be activated by ES of any segment from L3 to S2. By contrast, networks generating BW locomotion were activated by ES of a limited zone from the caudal part of L5 to the caudal part of L7. In the intermediate part of the gray matter within this zone, a significantly higher number of c-fos-positive interneurons was revealed in BW-stepping cats compared with FW-stepping cats. We suggest that this region of the spinal cord contains the network that determines the BW direction of locomotion. Significance Statement Sequential and single steps in various directions relative to the body axis [forward (FW), backward (BW), sideways, etc.] are used during locomotion and to correct for perturbations, respectively. The mechanisms controlling step direction are unknown. In the present study, for the first time we compared the distributions of spinal neuronal networks controlling FW and BW locomotion. Using a marker to visualize active neurons, we demonstrated that in the intermediate part of the gray matter within L6 and L7 spinal segments
Complexity in neuronal noise depends on network interconnectivity.
Serletis, Demitre; Zalay, Osbert C; Valiante, Taufik A; Bardakjian, Berj L; Carlen, Peter L
2011-06-01
"Noise," or noise-like activity (NLA), defines background electrical membrane potential fluctuations at the cellular level of the nervous system, comprising an important aspect of brain dynamics. Using whole-cell voltage recordings from fast-spiking stratum oriens interneurons and stratum pyramidale neurons located in the CA3 region of the intact mouse hippocampus, we applied complexity measures from dynamical systems theory (i.e., 1/f(γ) noise and correlation dimension) and found evidence for complexity in neuronal NLA, ranging from high- to low-complexity dynamics. Importantly, these high- and low-complexity signal features were largely dependent on gap junction and chemical synaptic transmission. Progressive neuronal isolation from the surrounding local network via gap junction blockade (abolishing gap junction-dependent spikelets) and then chemical synaptic blockade (abolishing excitatory and inhibitory post-synaptic potentials), or the reverse order of these treatments, resulted in emergence of high-complexity NLA dynamics. Restoring local network interconnectivity via blockade washout resulted in resolution to low-complexity behavior. These results suggest that the observed increase in background NLA complexity is the result of reduced network interconnectivity, thereby highlighting the potential importance of the NLA signal to the study of network state transitions arising in normal and abnormal brain dynamics (such as in epilepsy, for example).
How the self-coupled neuron can affect the chaotic synchronization of network
International Nuclear Information System (INIS)
Jia Chenhui; Wang Jiang; Deng, Bin
2009-01-01
We calculated the minimum coupling strength required for synchronization in 34 kinds of three-cell neuron networks. From the results, we find that a self-coupled neuron can affect the synchronization of the network. The reason is that self-coupled neurons make the effective number of neurons appear to 'decrease', and they reduce the coupling strength required of the other neurons coupled with them.
Oscillations in the bistable regime of neuronal networks.
Roxin, Alex; Compte, Albert
2016-07-01
Bistability between attracting fixed points in neuronal networks has been hypothesized to underlie persistent activity observed in several cortical areas during working memory tasks. In network models this kind of bistability arises due to strong recurrent excitation, sufficient to generate a state of high activity created in a saddle-node (SN) bifurcation. On the other hand, canonical network models of excitatory and inhibitory neurons (E-I networks) robustly produce oscillatory states via a Hopf (H) bifurcation due to the E-I loop. This mechanism for generating oscillations has been invoked to explain the emergence of brain rhythms in the β to γ bands. Although both bistability and oscillatory activity have been intensively studied in network models, there has not been much focus on the coincidence of the two. Here we show that when oscillations emerge in E-I networks in the bistable regime, their phenomenology can be explained to a large extent by considering coincident SN and H bifurcations, known as a codimension two Takens-Bogdanov bifurcation. In particular, we find that such oscillations are not composed of a stable limit cycle, but rather are due to noise-driven oscillatory fluctuations. Furthermore, oscillations in the bistable regime can, in principle, have arbitrarily low frequency.
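The noise-driven oscillatory fluctuations described here can be illustrated with a minimal stochastic E-I rate model (a Wilson-Cowan-style sketch; the coupling values are illustrative and are not tuned to the Takens-Bogdanov point discussed in the abstract):

```python
import numpy as np

def ei_rate_model(n_steps, dt=0.1, noise=0.05, seed=0):
    """Stochastic E-I rate model: excitatory (e) and inhibitory (i)
    population rates relax toward a sigmoidal function of their net input,
    with additive Gaussian noise on each population."""
    rng = np.random.default_rng(seed)

    def f(x):  # sigmoidal gain, rates bounded in (0, 1)
        return 1.0 / (1.0 + np.exp(-x))

    e, i = 0.1, 0.1
    trace = np.empty(n_steps)
    for n in range(n_steps):
        e_in = 12.0 * e - 10.0 * i + 1.0   # strong recurrent excitation
        i_in = 10.0 * e - 2.0 * i - 2.0    # E-I loop
        e += dt * (-e + f(e_in)) + noise * np.sqrt(dt) * rng.standard_normal()
        i += dt * (-i + f(i_in)) + noise * np.sqrt(dt) * rng.standard_normal()
        trace[n] = e
    return trace

trace = ei_rate_model(5000)
```

In the regime the abstract describes, a deterministic run of such a model would settle onto a fixed point, while the noise terms sustain the oscillatory fluctuations around it.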
Effect of Transcranial Magnetic Stimulation on Neuronal Networks
Unsal, Ahmet; Hadimani, Ravi; Jiles, David
2013-03-01
The human brain contains around 100 billion nerve cells controlling our day-to-day activities. Consequently, brain disorders often result in impairments such as paralysis, loss of coordination, and seizures. It has been said that 1 in 5 Americans suffers from some diagnosable mental disorder. There is an urgent need to understand these disorders, prevent them, and, if possible, develop permanent cures for them. As a result, a significant amount of research activity is being directed towards brain research. Transcranial Magnetic Stimulation (TMS) is a promising tool for diagnosing and treating brain disorders. It is a non-invasive treatment method that produces a current flow in the brain which excites the neurons. Even though TMS has been verified to have advantageous effects on various brain-related disorders, there have not been enough studies on the impact of TMS on cells. In this study, we are investigating the electrophysiological effects of TMS on a one-dimensional neuronal culture grown in a circular pathway. Electrical currents are produced in the neuronal networks depending on the directionality of the applied field. This aids in understanding how neuronal networks react under TMS treatment.
Synchronization stability and pattern selection in a memristive neuronal network.
Wang, Chunni; Lv, Mi; Alsaedi, Ahmed; Ma, Jun
2017-11-01
Spatial pattern formation and selection depend on the intrinsic self-organization and cooperation between nodes in spatiotemporal systems. Based on a memory neuron model, a regular network with electromagnetic induction is proposed to investigate the synchronization and pattern selection. In our model, the memristor is used to bridge the coupling between the magnetic flux and the membrane potential, and the induction current results from the time-varying electromagnetic field contributed by the exchange of ion currents and the distribution of charged ions. The statistical factor of synchronization predicts the transition of synchronization and pattern stability. The bifurcation analysis of the sampled time series for the membrane potential reveals the mode transition in electrical activity and pattern selection. A formation mechanism is outlined to account for the emergence of target waves. Although an external stimulus is imposed on each neuron uniformly, the diversity in the magnetic flux and the induction current leads to emergence of target waves in the studied network.
Memristor-based neural networks: Synaptic versus neuronal stochasticity
Naous, Rawan
2016-11-02
In neuromorphic circuits, stochasticity in the cortex can be mapped onto the synaptic or the neuronal components. The hardware emulation of these stochastic neural networks is currently being extensively studied using resistive memories, or memristors. The ionic process involved in the underlying switching behavior of the memristive elements is considered the main source of stochasticity in their operation. Building on its inherent variability, the memristor is incorporated into abstract models of stochastic neurons and synapses. Two approaches to stochastic neural networks are investigated. Aside from size and area considerations, the main points of comparison between the two approaches are their impact on system performance, in terms of accuracy, recognition rates, and learning, and where the memristor would fall into place.
Mean field methods for cortical network dynamics
DEFF Research Database (Denmark)
Hertz, J.; Lerchner, Alexander; Ahmadi, M.
2004-01-01
We review the use of mean field theory for describing the dynamics of dense, randomly connected cortical circuits. For a simple network of excitatory and inhibitory leaky integrate-and-fire neurons, we can show how the firing irregularity, as measured by the Fano factor, increases … with the strength of the synapses in the network and with the value to which the membrane potential is reset after a spike. Generalizing the model to include conductance-based synapses gives insight into the connection between the firing statistics and the high-conductance state observed experimentally in visual …
Gritsun, T.; Stegenga, J.; le Feber, Jakob; Rutten, Wim
2009-01-01
In this paper we address the issue of spontaneous bursting activity in cortical neuronal cultures and explain what might cause this collective behavior using computer simulations of two different neural network models. While the common approach to activate a passive network is by introducing
Identifying Controlling Nodes in Neuronal Networks in Different Scales
Tang, Yang; Gao, Huijun; Zou, Wei; Kurths, Jürgen
2012-01-01
Recent studies have detected hubs in neuronal networks using degree, betweenness centrality, motifs, and synchronization, and revealed the importance of hubs in their structural and functional roles. In addition, the analysis of complex networks at different scales is widely used in the physics community and can provide detailed insights into the intrinsic properties of networks. In this study, we focus on the identification of controlling regions in cortical networks of the cat brain at microscopic, mesoscopic, and macroscopic scales, based on single-objective evolutionary computation methods. The problem is investigated by considering two measures of controllability separately. The impact of the number of driver nodes on controllability is revealed, and the properties of controlling nodes are shown in a statistical way. Our results show that the statistical properties of the controlling nodes display a concave or convex shape with an increase in the allowed number of controlling nodes, revealing a transition in choosing driver nodes from areas with a large degree to areas with a low degree. Interestingly, the Auditory community in the cat brain, which has sparse connections with other communities, plays an important role in controlling the neuronal networks. PMID:22848475
Noise focusing and the emergence of coherent activity in neuronal cultures
Orlandi, Javier G.; Soriano, Jordi; Alvarez-Lacalle, Enrique; Teller, Sara; Casademunt, Jaume
2013-09-01
At early stages of development, neuronal cultures in vitro spontaneously reach a coherent state of collective firing in a pattern of nearly periodic global bursts. Although understanding the spontaneous activity of neuronal networks is of chief importance in neuroscience, the origin and nature of that pulsation has remained elusive. By combining high-resolution calcium imaging with modelling in silico, we show that this behaviour is controlled by the propagation of waves that nucleate randomly in a set of points that is specific to each culture and is selected by a non-trivial interplay between dynamics and topology. The phenomenon is explained by the noise focusing effect--a strong spatio-temporal localization of the noise dynamics that originates in the complex structure of avalanches of spontaneous activity. Results are relevant to neuronal tissues and to complex networks with integrate-and-fire dynamics and metric correlations, for instance, in rumour spreading on social networks.
Network feedback regulates motor output across a range of modulatory neuron activity.
Spencer, Robert M; Blitz, Dawn M
2016-06-01
Modulatory projection neurons alter network neuron synaptic and intrinsic properties to elicit multiple different outputs. Sensory and other inputs elicit a range of modulatory neuron activity that is further shaped by network feedback, yet little is known regarding how the impact of network feedback on modulatory neurons regulates network output across a physiological range of modulatory neuron activity. Identified network neurons, a fully described connectome, and a well-characterized, identified modulatory projection neuron enabled us to address this issue in the crab (Cancer borealis) stomatogastric nervous system. The modulatory neuron modulatory commissural neuron 1 (MCN1) activates and modulates two networks that generate rhythms via different cellular mechanisms and at distinct frequencies. MCN1 is activated at rates of 5-35 Hz in vivo and in vitro. Additionally, network feedback elicits MCN1 activity time-locked to motor activity. We asked how network activation, rhythm speed, and neuron activity levels are regulated by the presence or absence of network feedback across a physiological range of MCN1 activity rates. There were both similarities and differences in responses of the two networks to MCN1 activity. Many parameters in both networks were sensitive to network feedback effects on MCN1 activity. However, for most parameters, MCN1 activity rate did not determine the extent to which network output was altered by the addition of network feedback. These data demonstrate that the influence of network feedback on modulatory neuron activity is an important determinant of network output and feedback can be effective in shaping network output regardless of the extent of network modulation. Copyright © 2016 the American Physiological Society.
A Neuronal Network Model for Pitch Selectivity and Representation.
Huang, Chengcheng; Rinzel, John
2016-01-01
Pitch is a perceptual correlate of periodicity. Sounds with distinct spectra can elicit the same pitch. Despite the importance of pitch perception, understanding the cellular mechanism of pitch perception is still a major challenge and a mechanistic model of pitch is lacking. A multi-stage neuronal network model is developed for pitch frequency estimation using biophysically-based, high-resolution coincidence detector neurons. The neuronal units respond only to highly coincident input among convergent auditory nerve fibers across frequency channels. Their selectivity for only very fast rising slopes of convergent input enables these slope-detectors to distinguish the most prominent coincidences in multi-peaked input time courses. Pitch can then be estimated from the first-order interspike intervals of the slope-detectors. The regular firing patterns of the slope-detector neurons are similar for sounds sharing the same pitch, despite their distinct timbres. The decoded pitch strengths also correlate well with the salience of pitch perception as reported by human listeners. Therefore, our model can serve as a neural representation for pitch. Our model performs successfully in estimating the pitch of missing fundamental complexes and reproducing the pitch variation with respect to the frequency shift of inharmonic complexes. It also accounts for the phase sensitivity of pitch perception in the cases of Schroeder phase, alternating phase and random phase relationships. Moreover, our model can also be applied to stochastic sound stimuli, iterated-ripple-noise, and account for their multiple pitch perceptions.
Alterations of cortical GABA neurons and network oscillations in schizophrenia.
Gonzalez-Burgos, Guillermo; Hashimoto, Takanori; Lewis, David A
2010-08-01
The hypothesis that alterations of cortical inhibitory gamma-aminobutyric acid (GABA) neurons are a central element in the pathology of schizophrenia has emerged from a series of postmortem studies. How such abnormalities may contribute to the clinical features of schizophrenia has been substantially informed by a convergence with basic neuroscience studies revealing complex details of GABA neuron function in the healthy brain. Importantly, activity of the parvalbumin-containing class of GABA neurons has been linked to the production of cortical network oscillations. Furthermore, growing knowledge supports the concept that gamma band oscillations (30-80 Hz) are an essential mechanism for cortical information transmission and processing. Herein we review recent studies further indicating that inhibition from parvalbumin-positive GABA neurons is necessary to produce gamma oscillations in cortical circuits; provide an update on postmortem studies documenting that deficits in the expression of glutamic acid decarboxylase67, which accounts for most GABA synthesis in the cortex, are widely observed in schizophrenia; and describe studies using novel, noninvasive approaches directly assessing potential relations between alterations in GABA, oscillations, and cognitive function in schizophrenia.
Intermittent synchronization in a network of bursting neurons
Park, Choongseok; Rubchinsky, Leonid L.
2011-09-01
Synchronized oscillations in networks of inhibitory and excitatory coupled bursting neurons are common in a variety of neural systems from central pattern generators to human brain circuits. One example of the latter is the subcortical network of the basal ganglia, formed by excitatory and inhibitory bursters of the subthalamic nucleus and globus pallidus, involved in motor control and affected in Parkinson's disease. Recent experiments have demonstrated the intermittent nature of the phase-locking of neural activity in this network. Here, we explore one potential mechanism to explain the intermittent phase-locking in a network. We simplify the network to obtain a model of two inhibitory coupled elements and explore its dynamics. We used geometric analysis and singular perturbation methods for dynamical systems to reduce the full model to a simpler set of equations. Mathematical analysis was completed using three slow variables with two different time scales. Intermittently synchronous oscillations are generated by overlapped spiking, which crucially depends on the geometry of the slow phase plane and the interplay between slow variables, as well as the strength of synapses. Two slow variables are responsible for the generation of activity patterns with overlapped spiking, and the other, slower variable enhances the robustness of an irregular and intermittent activity pattern. While the analyzed network and the explored mechanism of intermittent synchrony appear to be quite generic, the results of this analysis can be used to trace particular values of biophysical parameters (synaptic strength and parameters of calcium dynamics), which are known to be impacted in Parkinson's disease.
Self-organized criticality in developing neuronal networks.
Directory of Open Access Journals (Sweden)
Christian Tetzlaff
Full Text Available Recently, evidence has accumulated that many neural networks exhibit self-organized criticality. In this state, activity is similar across temporal scales, which is beneficial with respect to information flow. If subcritical, activity can die out; if supercritical, epileptiform patterns may occur. Little is known about how developing networks reach and stabilize criticality. Here we monitor the development between 13 and 95 days in vitro (DIV) of cortical cell cultures (n = 20) and find four different phases, related to their morphological maturation: an initial low-activity state (≈19 DIV) is followed by a supercritical (≈20 DIV) and then a subcritical one (≈36 DIV) until the network finally reaches stable criticality (≈58 DIV). Using network modeling and mathematical analysis we describe the dynamics of the emergent connectivity in such developing systems. Based on physiological observations, the synaptic development in the model is determined by the drive of the neurons to adjust their connectivity to reach, on average, firing rate homeostasis. We predict a specific time course for the maturation of inhibition, with strong onset and delayed pruning, and that total synaptic connectivity should be strongly linked to the relative levels of excitation and inhibition. These results demonstrate that the interplay between activity and connectivity guides developing networks into criticality, suggesting that this may be a generic and stable state of many networks in vivo and in vitro.
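The sub-, super-, and critical phases described above can be illustrated with a toy branching process, a standard abstraction of avalanche propagation in such networks (a generic sketch, not the authors' model; all parameters are invented):

```python
import random

def avalanche_size(sigma, rng, n_max=100_000):
    """Total size of one avalanche in a branching process whose branching
    parameter sigma is the mean number of neurons activated by each active
    neuron: sigma < 1 subcritical, sigma = 1 critical, sigma > 1 supercritical."""
    def offspring():
        if sigma <= 1.0:
            return 1 if rng.random() < sigma else 0
        return 2 if rng.random() < sigma - 1.0 else 1
    active, size = 1, 0
    while active and size < n_max:
        size += active
        active = sum(offspring() for _ in range(active))
    return size

def mean_avalanche_size(sigma, trials=2000, seed=42):
    """Monte Carlo estimate of the mean avalanche size."""
    rng = random.Random(seed)
    return sum(avalanche_size(sigma, rng) for _ in range(trials)) / trials
```

In the subcritical regime the mean size stays near 1/(1 - sigma) and activity dies out, while sigma > 1 lets activity grow without bound, loosely mirroring the epileptiform phase.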
Neuronal network disintegration: common pathways linking neurodegenerative diseases.
Ahmed, Rebekah M; Devenney, Emma M; Irish, Muireann; Ittner, Arne; Naismith, Sharon; Ittner, Lars M; Rohrer, Jonathan D; Halliday, Glenda M; Eisen, Andrew; Hodges, John R; Kiernan, Matthew C
2016-11-01
Neurodegeneration refers to a heterogeneous group of brain disorders that progressively evolve. It has been increasingly appreciated that many neurodegenerative conditions overlap at multiple levels; therefore, traditional clinicopathological correlation approaches to better classify a disease have met with limited success. Neuronal network disintegration is fundamental to neurodegeneration, and concepts based around network disintegration may better explain the overlap between clinical and pathological phenotypes. In this Review, promoters of overlap in neurodegeneration incorporating behavioural, cognitive, metabolic, motor, and extrapyramidal presentations are critically appraised. In addition, evidence that may support the existence of large-scale networks contributing to phenotypic differentiation is considered across a neurodegenerative spectrum. Disintegration of neuronal networks through different pathological processes, such as prion-like spread, may provide a better paradigm of disease and thereby facilitate the identification of novel therapies for neurodegeneration. Published by the BMJ Publishing Group Limited.
Hybrid Scheme for Modeling Local Field Potentials from Point-Neuron Networks
DEFF Research Database (Denmark)
Hagen, Espen; Dahmen, David; Stavrinou, Maria L
2016-01-01
With rapidly advancing multi-electrode recording technology, the local field potential (LFP) has again become a popular measure of neuronal activity in both research and clinical applications. Proper understanding of the LFP requires detailed mathematical modeling incorporating the anatomical and electrophysiological features of neurons near the recording electrode, as well as synaptic inputs from the entire network. Here we propose a hybrid modeling scheme combining efficient point-neuron network models with biophysical principles underlying LFP generation by real neurons. The LFP predictions rely on populations of network-equivalent multicompartment neuron models with layer-specific synaptic connectivity, can be used with an arbitrary number of point-neuron network populations, and allow for a full separation of simulated network dynamics and LFPs. We apply the scheme to a full-scale cortical network model.
Complete and phase synchronization in a heterogeneous small-world neuronal network
International Nuclear Information System (INIS)
Fang, Han; Qi-Shao, Lu; Quan-Bao, Ji; Marian, Wiercigroch
2009-01-01
Synchronous firing of neurons is thought to be important for information communication in neuronal networks. This paper investigates complete and phase synchronization in a heterogeneous small-world chaotic Hindmarsh–Rose neuronal network. The effects of various network parameters on synchronization behaviour are discussed with some biological explanations. Complete synchronization of small-world neuronal networks is studied theoretically by the master stability function method. It is shown that the coupling strength necessary for complete or phase synchronization decreases as the neuron number, the node degree and the connection density increase. The effect of heterogeneity of neuronal networks is also considered, and it is found that network heterogeneity has an adverse effect on synchrony. (general)
Synchronization of the small-world neuronal network with unreliable synapses
International Nuclear Information System (INIS)
Li, Chunguang; Zheng, Qunxian
2010-01-01
As is well known, synchronization phenomena are ubiquitous in neuronal systems, and much work on the synchronization of neuronal networks has been carried out. In these works, the synapses are usually considered reliable, but experimental results show that, in biological neuronal networks, synapses are usually unreliable. In our previous work, we studied the synchronization of the neuronal network with unreliable synapses; however, we did not consider the effect of topology on synchronization. Several recent studies have found that biological neuronal networks have typical properties of small-world networks, characterized by a short path length and a high clustering coefficient. In this work, based mainly on the small-world neuronal network (SWNN) with inhibitory neurons, we study the effect of network topology on the synchronization of the neuronal network with unreliable synapses. Together with the network topology, the effects of the GABAergic reversal potential, time delay and noise are also considered. Interestingly, we found a counter-intuitive phenomenon for the SWNN with a specific shortcut-adding probability: the less reliable the synapses, the better the synchronization performance of the SWNN. We also consider the effects of both local noise and global noise. It is shown that these two types of noise have distinct effects on the synchronization: one is negative and the other is positive.
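Unreliable synapses of the kind studied here are commonly modeled as Bernoulli transmission: each presynaptic spike is delivered only with some release probability. A minimal leaky integrate-and-fire sketch of this idea (illustrative parameters, not taken from the paper):

```python
import random

def run_pair(p_release, t_max=2000, seed=0):
    """Postsynaptic leaky integrate-and-fire neuron driven by a regular
    presynaptic spike train through an unreliable synapse: each spike is
    transmitted with probability p_release (Bernoulli release).
    Returns the number of postsynaptic spikes."""
    rng = random.Random(seed)
    v, v_th, v_reset = 0.0, 1.0, 0.0
    tau, w, isi = 20.0, 0.3, 5   # membrane time constant, weight, presynaptic interval
    n_spikes = 0
    for t in range(t_max):
        v -= v / tau                               # leak
        if t % isi == 0 and rng.random() < p_release:
            v += w                                 # transmitted presynaptic spike
        if v >= v_th:
            v = v_reset
            n_spikes += 1
    return n_spikes
```

With full reliability the postsynaptic neuron fires regularly; lowering the release probability thins the input and reduces the output rate.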
Neuronal network disturbance after focal ischemia in rats
International Nuclear Information System (INIS)
Kataoka, K.; Hayakawa, T.; Yamada, K.; Mushiroi, T.; Kuroda, R.; Mogami, H.
1989-01-01
We studied functional disturbances following left middle cerebral artery occlusion in rats. Neuronal function was evaluated by [14C]2-deoxyglucose autoradiography 1 day after occlusion. We analyzed the mechanisms of change in glucose utilization outside the infarct using Fink-Heimer silver impregnation, axonal transport of wheat germ agglutinin-conjugated horseradish peroxidase, and succinate dehydrogenase histochemistry. One day after occlusion, glucose utilization was remarkably reduced in the areas surrounding the infarct. There were many silver grains indicating degeneration of the synaptic terminals in the cortical areas surrounding the infarct and the ipsilateral cingulate cortex. Moreover, in the left thalamus, where the left middle cerebral artery supplied no blood, glucose utilization significantly decreased compared with sham-operated rats. In the left thalamus, massive silver staining of degenerated synaptic terminals and decreases in succinate dehydrogenase activity were observed 4 and 5 days after occlusion. The absence of succinate dehydrogenase staining may reflect early changes in retrograde degeneration of thalamic neurons after ischemic injury of the thalamocortical pathway. Terminal degeneration even affected areas remote from the infarct: there were silver grains in the contralateral hemisphere transcallosally connected to the infarct and in the ipsilateral substantia nigra. Axonal transport study showed disruption of the corticospinal tract by subcortical ischemia; the transcallosal pathways in the cortex surrounding the infarct were preserved. The relation between neural function and the neuronal network in the area surrounding the focal cerebral infarct is discussed with regard to ischemic penumbra and diaschisis.
Complex Behavior in a Selective Aging Neuron Model Based on Small World Networks
International Nuclear Information System (INIS)
Zhang Guiqing; Chen Tianlun
2008-01-01
Complex behavior in a selective aging simple neuron model based on small-world networks is investigated. The basic elements of the model are endowed with the main features of a neuron function. The structure of the selective aging neuron model is discussed. We also give some properties of the new network and find that the neuron model displays a power-law behavior. If the brain network is a small-world-like network, the mean avalanche size is almost unchanged unless the aging parameter is large enough.
Dynamic Control of Synchronous Activity in Networks of Spiking Neurons.
Directory of Open Access Journals (Sweden)
Hutt, Axel; Mierau, Andreas; Lefebvre, Jérémie
Full Text Available Oscillatory brain activity is believed to play a central role in neural coding. Accumulating evidence shows that features of these oscillations are highly dynamic: power, frequency and phase fluctuate alongside changes in behavior and task demands. The role and mechanism supporting this variability is however poorly understood. We here analyze a network of recurrently connected spiking neurons with time delay displaying stable synchronous dynamics. Using mean-field and stability analyses, we investigate the influence of dynamic inputs on the frequency of firing rate oscillations. We show that afferent noise, mimicking inputs to the neurons, causes smoothing of the system's response function, displacing equilibria and altering the stability of oscillatory states. Our analysis further shows that these noise-induced changes cause a shift of the peak frequency of synchronous oscillations that scales with input intensity, leading the network towards critical states. We lastly discuss the extension of these principles to periodic stimulation, in which externally applied driving signals can trigger analogous phenomena. Our results reveal one possible mechanism involved in shaping oscillatory activity in the brain and associated control principles.
Neuron-Like Networks Between Ribosomal Proteins Within the Ribosome
Poirot, Olivier; Timsit, Youri
2016-05-01
From the brain to the World Wide Web, information-processing networks share common scale-invariant properties. Here, we reveal the existence of neural-like networks at a molecular scale within the ribosome. We show that with their extensions, ribosomal proteins form complex assortative interaction networks through which they communicate via tiny interfaces. The analysis of the crystal structures of 50S eubacterial particles reveals that most of these interfaces involve key phylogenetically conserved residues. The systematic observation of interactions between basic and aromatic amino acids at the interfaces and along the extensions provides new structural insights that may help decipher the molecular mechanisms of signal transmission within or between the ribosomal proteins. Similar to neurons interacting through “molecular synapses”, ribosomal proteins form a network that suggests an analogy with a simple molecular brain, in which the “sensory proteins” innervate the functional ribosomal sites, while the “inter-proteins” interconnect them into circuits suitable to process the information flow that circulates during protein synthesis. It is likely that these circuits have evolved to coordinate both the complex macromolecular motions and the binding of the multiple factors during translation. This opens new perspectives on nanoscale information transfer and processing.
Stochastic resonance in small-world neuronal networks with hybrid electrical–chemical synapses
International Nuclear Information System (INIS)
Wang, Jiang; Guo, Xinmeng; Yu, Haitao; Liu, Chen; Deng, Bin; Wei, Xile; Chen, Yingyuan
2014-01-01
Highlights: •We study stochastic resonance in small-world neural networks with hybrid synapses. •The resonance effect depends largely on the probability of chemical synapse. •An optimal chemical synapse probability exists to evoke network resonance. •Network topology affects the stochastic resonance in hybrid neuronal networks. - Abstract: The dependence of stochastic resonance in small-world neuronal networks with hybrid electrical–chemical synapses on the probability of chemical synapse and the rewiring probability is investigated. A subthreshold periodic signal is imposed on one single neuron within the neuronal network as a pacemaker. It is shown that, irrespective of the probability of chemical synapse, there exists a moderate intensity of external noise optimizing the response of neuronal networks to the pacemaker. Moreover, the effect of pacemaker-driven stochastic resonance of the system depends largely on the probability of chemical synapse. A high probability of chemical synapse needs a lower noise intensity to evoke stochastic resonance in the networked neuronal systems. In addition, for fixed noise intensity, there is an optimal chemical synapse probability, which can promote the propagation of the localized subthreshold pacemaker across neural networks. This optimal chemical synapse probability becomes even larger as the coupling strength decreases. Furthermore, the small-world topology has a significant impact on the stochastic resonance in hybrid neuronal networks. It is found that increasing the rewiring probability always enhances the stochastic resonance until the network approaches the random-network limit.
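The pacemaker effect described above rests on a simple mechanism: a subthreshold periodic signal cannot cross threshold on its own, but added noise produces crossings that cluster around the signal peaks. A bare-bones threshold-unit sketch (illustrative parameters only, not the network model of the paper):

```python
import math, random

def spike_times(noise, amp=0.3, period=100, t_max=5000, seed=3):
    """Threshold unit driven by a subthreshold sinusoid (amp < threshold)
    plus Gaussian noise; returns the time steps at which the summed input
    crosses the threshold."""
    rng = random.Random(seed)
    threshold = 1.0
    times = []
    for t in range(t_max):
        signal = amp * math.sin(2 * math.pi * t / period)
        if signal + noise * rng.gauss(0.0, 1.0) > threshold:
            times.append(t)
    return times

def phase_locking(times, period=100):
    """Fraction of spikes that fall in the positive half-cycle of the signal."""
    if not times:
        return 0.0
    return sum(1 for t in times if (t % period) < period // 2) / len(times)
```

With zero noise no spikes occur at all; with moderate noise the emitted spikes are strongly phase-locked to the subthreshold drive.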
A combined Bodian-Nissl stain for improved network analysis in neuronal cell culture.
Hightower, M; Gross, G W
1985-11-01
Bodian and Nissl procedures were combined to stain dissociated mouse spinal cord cells cultured on coverslips. The Bodian technique stains fine neuronal processes in great detail as well as an intracellular fibrillar network concentrated around the nucleus and in proximal neurites. The Nissl stain clearly delimits neuronal cytoplasm in somata and in large dendrites. A combination of these techniques allows the simultaneous depiction of neuronal perikarya and all afferent and efferent processes. Costaining with little background staining by either procedure suggests high specificity for neurons. This procedure could be exploited for routine network analysis of cultured neurons.
Pecevski, Dejan; Buesing, Lars; Maass, Wolfgang
2011-12-01
An important open problem of computational neuroscience is the generic organization of computations in networks of neurons in the brain. We show here through rigorous theoretical analysis that inherent stochastic features of spiking neurons, in combination with simple nonlinear computational operations in specific network motifs and dendritic arbors, enable networks of spiking neurons to carry out probabilistic inference through sampling in general graphical models. In particular, it enables them to carry out probabilistic inference in Bayesian networks with converging arrows ("explaining away") and with undirected loops, that occur in many real-world tasks. Ubiquitous stochastic features of networks of spiking neurons, such as trial-to-trial variability and spontaneous activity, are necessary ingredients of the underlying computational organization. We demonstrate through computer simulations that this approach can be scaled up to neural emulations of probabilistic inference in fairly large graphical models, yielding some of the most complex computations that have been carried out so far in networks of spiking neurons.
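The kind of inference the abstract refers to can be illustrated with plain Gibbs sampling in a tiny noisy-OR network with converging arrows, rain → wet ← sprinkler, with wet observed. This is a generic textbook example, not the authors' spiking implementation, and all probabilities are invented:

```python
import random

def gibbs_explaining_away(steps=20000, seed=1):
    """Gibbs sampling in a noisy-OR Bayesian network rain -> wet <- sprinkler,
    with wet clamped to 1 (observed).  Each binary unit is resampled from its
    conditional distribution, mimicking how stochastic units could sample a
    posterior.  Returns estimates of P(rain | wet) and P(rain | wet, sprinkler)."""
    rng = random.Random(seed)
    p_r, p_s, q = 0.2, 0.2, 0.9            # priors and noisy-OR strength
    def p_wet(r, s):                        # P(wet=1 | r, s)
        return 1.0 - (1.0 - q * r) * (1.0 - q * s)
    r, s = 1, 1
    n_r = n_s = n_rs = 0
    for _ in range(steps):
        a, b = p_r * p_wet(1, s), (1 - p_r) * p_wet(0, s)
        r = 1 if rng.random() < a / (a + b) else 0
        a, b = p_s * p_wet(r, 1), (1 - p_s) * p_wet(r, 0)
        s = 1 if rng.random() < a / (a + b) else 0
        n_r += r; n_s += s; n_rs += r * s
    post_r = n_r / steps                    # ≈ P(rain | wet=1)
    post_r_given_s = n_rs / max(n_s, 1)     # ≈ P(rain | wet=1, sprinkler=1)
    return post_r, post_r_given_s
```

Conditioning on the sprinkler being on lowers the sampled posterior probability of rain: the "explaining away" effect mentioned in the abstract.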
A real-time hybrid neuron network for highly parallel cognitive systems.
Christiaanse, Gerrit Jan; Zjajo, Amir; Galuzzi, Carlo; van Leuken, Rene
2016-08-01
For a comprehensive understanding of how neurons communicate with each other, new tools need to be developed that can accurately mimic the behaviour of such neurons and neuron networks under `real-time' constraints. In this paper, we propose an easily customisable, highly pipelined neuron network design, which executes optimally scheduled floating-point operations for the maximal number of biophysically plausible neurons per FPGA family type. To reduce the required amount of resources without adverse effect on the calculation latency, a single exponent instance is used for multiple neuron calculation operations. Experimental results indicate that the proposed network design allows the simulation of up to 1188 neurons on a Virtex7 (XC7VX550T) device in brain real-time, yielding a speed-up of x12.4 compared to the state-of-the-art.
Pulsed neural networks consisting of single-flux-quantum spiking neurons
International Nuclear Information System (INIS)
Hirose, T.; Asai, T.; Amemiya, Y.
2007-01-01
An inhibitory pulsed neural network was developed for brain-like information processing, by using single-flux-quantum (SFQ) circuits. It consists of spiking neuron devices that are coupled to each other through all-to-all inhibitory connections. The network selects neural activity. The operation of the neural network was confirmed by computer simulation. SFQ neuron devices can imitate the operation of the inhibition phenomenon of neural networks.
Voltage-sensitive dye recording from networks of cultured neurons
Chien, Chi-Bin
This thesis describes the development and testing of a sensitive apparatus for recording electrical activity from microcultures of rat superior cervical ganglion (SCG) neurons by using voltage-sensitive fluorescent dyes. The apparatus comprises a feedback-regulated mercury arc light source, an inverted epifluorescence microscope, a novel fiber-optic camera with discrete photodiode detectors, and low-noise preamplifiers. Using an NA 0.75 objective and illuminating at 10 W/cm2 with the 546 nm mercury line, a typical SCG neuron stained with the styryl dye RH423 gives a detected photocurrent of 1 nA; the light source and optical detectors are quiet enough that the shot noise in this photocurrent (about 0.03% rms) dominates. The design, theory, and performance of this dye-recording apparatus are discussed in detail. Styryl dyes such as RH423 typically give signals of 1%/100 mV on these cells; the signals are linear in membrane potential, but do not appear to arise from a purely electrochromic mechanism. Given this voltage sensitivity and the noise level of the apparatus, it should be possible to detect both action potentials and subthreshold synaptic potentials from SCG cell bodies. In practice, dye recording can easily detect action potentials from every neuron in an SCG microculture, but small synaptic potentials are obscured by dye signals from the dense network of axons. In another microculture system that does not have such long and complex axons, this dye-recording apparatus should be able to detect synaptic potentials, making it possible to noninvasively map the synaptic connections in a microculture, and thus to study long-term synaptic plasticity.
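The quoted noise floor follows from shot-noise statistics: a photocurrent I measured over bandwidth B has relative rms fluctuation sqrt(2eB/I). A quick check (the 280 Hz bandwidth below is an assumed value chosen to reproduce the quoted ~0.03% figure; the abstract does not state the bandwidth):

```python
import math

E_CHARGE = 1.602e-19  # elementary charge in coulombs

def rel_shot_noise(photocurrent, bandwidth):
    """Relative rms shot noise of a photocurrent: i_rms / I = sqrt(2 e B / I)."""
    return math.sqrt(2 * E_CHARGE * bandwidth / photocurrent)
```

For a 1 nA photocurrent and an assumed 280 Hz bandwidth this gives roughly 3e-4, i.e. about 0.03% rms, and the relative noise falls as the photocurrent grows.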
Hybrid Scheme for Modeling Local Field Potentials from Point-Neuron Networks.
Hagen, Espen; Dahmen, David; Stavrinou, Maria L; Lindén, Henrik; Tetzlaff, Tom; van Albada, Sacha J; Grün, Sonja; Diesmann, Markus; Einevoll, Gaute T
2016-12-01
With rapidly advancing multi-electrode recording technology, the local field potential (LFP) has again become a popular measure of neuronal activity in both research and clinical applications. Proper understanding of the LFP requires detailed mathematical modeling incorporating the anatomical and electrophysiological features of neurons near the recording electrode, as well as synaptic inputs from the entire network. Here we propose a hybrid modeling scheme combining efficient point-neuron network models with biophysical principles underlying LFP generation by real neurons. The LFP predictions rely on populations of network-equivalent multicompartment neuron models with layer-specific synaptic connectivity, can be used with an arbitrary number of point-neuron network populations, and allow for a full separation of simulated network dynamics and LFPs. We apply the scheme to a full-scale cortical network model for a ∼1 mm² patch of primary visual cortex, predict laminar LFPs for different network states, assess the relative LFP contribution from different laminar populations, and investigate effects of input correlations and neuron density on the LFP. The generic nature of the hybrid scheme and its public implementation in hybridLFPy form the basis for LFP predictions from other and larger point-neuron network models, as well as extensions of the current application with additional biological detail. © The Author 2016. Published by Oxford University Press.
Dynamical patterns of calcium signaling in a functional model of neuron-astrocyte networks
DEFF Research Database (Denmark)
Postnov, D.E.; Koreshkov, R.N.; Brazhe, N.A.
2009-01-01
We propose a functional mathematical model for neuron-astrocyte networks. The model incorporates elements of the tripartite synapse and the spatial branching structure of coupled astrocytes. We consider glutamate-induced calcium signaling as a specific mode of excitability and transmission in astrocytic-neuronal networks. We reproduce local and global dynamical patterns observed experimentally.
Kamal, Hassan; Kanhirodan, Rajan; Srinivas, Kalyan V.; Sikdar, Sujit K.
2010-04-01
We study the responses of a cultured neural network exposed to an epileptogenic glutamate injury and to subsequent treatment with phenobarbital by constructing a connectivity map of the neurons from the correlation matrix of their activity. This study is particularly useful for understanding pharmaceutical-drug-induced changes in neuronal network properties, with insights into changes at the systems-biology level.
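A connectivity map of the kind described, built by thresholding the correlation matrix of binned spike counts, can be sketched in a few lines (pure-Python Pearson correlation; the 0.5 threshold and the surrogate data in demo() are arbitrary choices for illustration):

```python
import math, random

def correlation_matrix(trains):
    """Pairwise Pearson correlation between equal-length binned spike-count trains."""
    def corr(x, y):
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sx = math.sqrt(sum((a - mx) ** 2 for a in x))
        sy = math.sqrt(sum((b - my) ** 2 for b in y))
        return sxy / (sx * sy) if sx and sy else 0.0
    n = len(trains)
    return [[corr(trains[i], trains[j]) for j in range(n)] for i in range(n)]

def functional_edges(c, thresh=0.5):
    """Threshold the correlation matrix into an undirected connectivity map."""
    n = len(c)
    return [(i, j) for i in range(n) for j in range(i + 1, n) if c[i][j] > thresh]

def demo():
    """Surrogate data: neurons 0 and 1 are strongly correlated, neuron 2 is not."""
    rng = random.Random(0)
    a = [rng.randint(0, 5) for _ in range(200)]
    b = [x + rng.randint(0, 1) for x in a]
    c = [rng.randint(0, 5) for _ in range(200)]
    return correlation_matrix([a, b, c])
```

Comparing maps before injury, after injury, and after drug treatment would then reduce to comparing the resulting edge lists.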
Efficient computation in networks of spiking neurons: simulations and theory
International Nuclear Information System (INIS)
Natschlaeger, T.
1999-01-01
One of the most prominent features of biological neural systems is that individual neurons communicate via short electrical pulses, the so-called action potentials or spikes. In this thesis we investigate possible mechanisms which can in principle explain how complex computations in spiking neural networks (SNN) can be performed very fast, i.e. within a few tens of milliseconds. Some of these models are based on the assumption that relevant information is encoded by the timing of individual spikes (temporal coding). We will also discuss a model which is based on a population code and still is able to perform fast complex computations. In their natural environment, biological neural systems have to process signals with a rich temporal structure. Hence it is an interesting question how neural systems process time series. In this context we explore possible links between biophysical characteristics of single neurons (refractory behavior, connectivity, time course of postsynaptic potentials) and synapses (unreliability, dynamics) on the one hand and possible computations on time series on the other hand. Furthermore we describe a general model of computation that exploits dynamic synapses. This model provides a general framework for understanding how neural systems process time-varying signals. (author)
The mechanism of synchronization in feed-forward neuronal networks
International Nuclear Information System (INIS)
Goedeke, S; Diesmann, M
2008-01-01
Synchronization in feed-forward subnetworks of the brain has been proposed to explain the precisely timed spike patterns observed in experiments. While the attractor dynamics of these networks is now well understood, the underlying single-neuron mechanisms remain unexplained. Previous attempts have captured the effects of the highly fluctuating membrane potential by relating spike intensity f(U) to the instantaneous voltage U generated by the input. This article shows that f is high during the rise and low during the decay of U(t), demonstrating that the dependence of f on U̇, not refractoriness, is essential for synchronization. Moreover, the bifurcation scenario is quantitatively described by a simple f(U, U̇) relationship. These findings suggest f(U, U̇) as the relevant model class for the investigation of neural synchronization phenomena in a noisy environment.
The synchronization of FitzHugh–Nagumo neuron network coupled by gap junction
International Nuclear Information System (INIS)
Zhan Yong; Zhang Suhua; Zhao Tongjun; An Hailong; Zhang Zhendong; Han Yingrong; Liu Hui; Zhang Yuhong
2008-01-01
It is well known that strong coupling can synchronize a network of nonlinear oscillators, and synchronization provides a basis for the remarkable computational performance of the brain. In this paper, a FitzHugh–Nagumo neuron network is constructed. The dependence of the synchronization on the coupling strength, the noise intensity and the size of the neuron network is discussed. The results indicate that coupling among neurons improves the synchronization, whereas noise increases the random dynamics of the neurons and the local fluctuations; the larger the network, the worse the synchronization. The dependence of the synchronization on the strengths of the electric and chemical synapse couplings is also discussed, which shows that electric synapse coupling can greatly enhance the synchronization of the neuron network.
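The reported effect of coupling strength can be reproduced in miniature with two FitzHugh–Nagumo oscillators joined by a gap junction, modeled as diffusive coupling in the voltage variable (Euler integration; the parameter values are common textbook choices, not those of the paper):

```python
def fhn_pair(g, t_max=4000, dt=0.05):
    """Two FitzHugh-Nagumo neurons with gap-junction (diffusive) coupling g.
    Returns the mean voltage difference |v1 - v2| over the second half of the
    run as a simple synchronization error."""
    a, b, eps, I = 0.7, 0.8, 0.08, 0.5       # parameters in the oscillatory regime
    v1, w1, v2, w2 = -1.0, 1.0, 1.0, -0.5    # distinct initial conditions
    err, n = 0.0, 0
    for step in range(t_max):
        dv1 = v1 - v1 ** 3 / 3 - w1 + I + g * (v2 - v1)
        dw1 = eps * (v1 + a - b * w1)
        dv2 = v2 - v2 ** 3 / 3 - w2 + I + g * (v1 - v2)
        dw2 = eps * (v2 + a - b * w2)
        v1 += dt * dv1; w1 += dt * dw1
        v2 += dt * dv2; w2 += dt * dw2
        if step >= t_max // 2:
            err += abs(v1 - v2); n += 1
    return err / n
```

Without coupling the two oscillators run at independent phases; with sufficient gap-junction strength the voltage difference collapses, the behavior the abstract attributes to electric synapse coupling.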
Neuronal spike sorting based on radial basis function neural networks
Directory of Open Access Journals (Sweden)
Taghavi Kani M
2011-02-01
Full Text Available Background: Studying the behavior of a society of neurons, extracting the communication mechanisms of the brain with other tissues, finding treatments for some nervous system diseases and designing neuroprosthetic devices require an algorithm to sort neural spikes automatically. However, sorting neural spikes is a challenging task because of the low signal-to-noise ratio (SNR) of the spikes. The main purpose of this study was to design an automatic algorithm for classifying neuronal spikes that are emitted from a specific region of the nervous system. Methods: The spike sorting process usually consists of three stages: detection, feature extraction and sorting. We initially used signal statistics to detect neural spikes. Then, we chose a limited number of typical spikes as features and finally used them to train a radial basis function (RBF) neural network to sort the spikes. In most spike sorting devices, these signals are not linearly separable. In order to solve this problem, the aforesaid RBF neural network was used. Results: After the learning process, our proposed algorithm classified any arbitrary spike. The obtained results showed that even though the proposed Radial Basis Spike Sorter (RBSS) reached the same error as the previous methods, the computational costs were much lower compared to other algorithms. Moreover, the competitive points of the proposed algorithm were its good speed and low computational complexity. Conclusion: Regarding the results of this study, the proposed algorithm seems to serve the purpose of procedures that require real-time processing and spike sorting.
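The sorting stage of such a pipeline can be sketched as a minimal RBF classifier: template spikes serve as the centers of Gaussian units, and a waveform is assigned to the class of the most active unit (a deliberate simplification of the trained RBF network in the paper; the template waveforms below are made up):

```python
import math

def rbf_activations(x, centers, sigma=1.0):
    """Gaussian RBF layer: one unit per template spike (center)."""
    return [math.exp(-sum((a - b) ** 2 for a, b in zip(x, c)) / (2 * sigma ** 2))
            for c in centers]

def classify(x, centers, labels, sigma=1.0):
    """Assign a spike waveform to the class of the most active RBF unit."""
    acts = rbf_activations(x, centers, sigma)
    return labels[max(range(len(acts)), key=acts.__getitem__)]
```

A full RBF network would additionally learn output weights over these activations; the nearest-template rule above captures the nonlinear kernel step that makes non-linearly-separable waveforms sortable.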
[Functional organization and structure of the serotonergic neuronal network of terrestrial snail].
Nikitin, E S; Balaban, P M
2011-01-01
Extending our knowledge of how the brain works requires continual improvement of methods for recording neuronal activity and increases in the number of neurons recorded simultaneously, to better understand the collective work of neuronal networks and assemblies. Conventional methods allow simultaneous intracellular recording of up to 2-5 neurons and their membrane potentials, currents or monosynaptic connections, or observation of the spiking of neuronal groups with subsequent discrimination of individual spikes at the cost of losing the details of membrane potential dynamics. We recorded the activity of a compact group of serotonergic neurons (up to 56 simultaneously) in the ganglion of a terrestrial mollusk using optical recording of membrane potential, which allowed us to record individual action potentials in detail, together with their parameters, and to reveal the morphology of the recorded neurons. We demonstrated clear clustering in the group in relation to the dynamics of action potentials and the phasic or tonic components in the neuronal responses to external electrophysiological and tactile stimuli. We also showed that the identified neuron Pd2 could induce activation of a significant number of neurons in the group, whereas neuron Pd4 did not induce any activation; however, its activation is delayed with respect to the activation of the reacting group of neurons. Our data strongly support the concept of possible delegation of the integrative function by the network to a single neuron.
Lelito, Katherine R; Shafer, Orie T
2012-04-01
The relatively simple clock neuron network of Drosophila is a valuable model system for the neuronal basis of circadian timekeeping. Unfortunately, many key neuronal classes of this network are inaccessible to electrophysiological analysis. We have therefore adopted the use of genetically encoded sensors to address the physiology of the fly's circadian clock network. Using genetically encoded Ca(2+) and cAMP sensors, we have investigated the physiological responses of two specific classes of clock neuron, the large and small ventrolateral neurons (l- and s-LN(v)s), to two neurotransmitters implicated in their modulation: acetylcholine (ACh) and γ-aminobutyric acid (GABA). Live imaging of l-LN(v) cAMP and Ca(2+) dynamics in response to cholinergic agonist and GABA application were well aligned with published electrophysiological data, indicating that our sensors were capable of faithfully reporting acute physiological responses to these transmitters within single adult clock neuron soma. We extended these live imaging methods to s-LN(v)s, critical neuronal pacemakers whose physiological properties in the adult brain are largely unknown. Our s-LN(v) experiments revealed the predicted excitatory responses to bath-applied cholinergic agonists and the predicted inhibitory effects of GABA and established that the antagonism of ACh and GABA extends to their effects on cAMP signaling. These data support recently published but physiologically untested models of s-LN(v) modulation and lead to the prediction that cholinergic and GABAergic inputs to s-LN(v)s will have opposing effects on the phase and/or period of the molecular clock within these critical pacemaker neurons.
Obenhaus, Horst A; Rozov, Andrei; Bertocchi, Ilaria; Tang, Wannan; Kirsch, Joachim; Betz, Heinrich; Sprengel, Rolf
2016-01-01
The causal interrogation of neuronal networks involved in specific behaviors requires spatially and temporally controlled modulation of neuronal activity. For long-term manipulation of neuronal activity, chemogenetic tools provide a reasonable alternative to short-term optogenetic approaches. Here we show that virus-mediated gene transfer of the ivermectin (IVM)-activated glycine receptor mutant GlyRα1 (AG) can be used for the selective and reversible silencing of specific neuronal networks in mice. In the striatum, dorsal hippocampus, and olfactory bulb, GlyRα1 (AG) promoted IVM-dependent effects in representative behavioral assays. Moreover, GlyRα1 (AG)-mediated silencing had a strong and reversible impact on neuronal ensemble activity and c-Fos activation in the olfactory bulb. Together, our results demonstrate that long-term, reversible and re-inducible neuronal silencing via GlyRα1 (AG) is a promising tool for the interrogation of network mechanisms underlying the control of behavior and memory formation.
Dislocation Coupling-Induced Transition of Synchronization in Two-Layer Neuronal Networks
International Nuclear Information System (INIS)
Qin Hui-Xin; Ma Jun; Wang Chun-Ni; Jin Wu-Yin
2014-01-01
The mutual coupling between neurons in a realistic neuronal system is highly complex, and a two-layer neuronal network is designed to investigate the transition of the electrical activities of neurons. The Hindmarsh–Rose neuron model is used to describe the local dynamics of each neuron, and neurons in the two layers are coupled in a dislocated fashion. The coupling intensity between the two layers and the coupling ratio (Pro), which defines the percentage of neurons involved in the coupling in each layer, are varied to observe the synchronization transition of collective behaviors in the two-layer network. It is found that the two-layer network of neurons becomes synchronized when the coupling intensity and the coupling ratio (Pro) increase beyond certain thresholds. An ordered wave in the first layer can wake up the resting state in the second layer, or suppress the spatiotemporal state in the second layer under coupling, by generating target waves or spiral waves. The dislocation-coupling scheme can thus be used to suppress spatiotemporal chaos and to excite quiescent neurons.
Directory of Open Access Journals (Sweden)
Harish Babu
2009-09-01
Full Text Available Adult hippocampal neurogenesis is regulated by activity. But how do neural precursor cells in the hippocampus respond to surrounding network activity and translate increased neural activity into a developmental program? Here we show that long-term potentiation (LTP)-like synaptic activity within a cellular network of mature hippocampal neurons promotes neuronal differentiation of newly generated cells. In co-cultures of precursor cells with primary hippocampal neurons, LTP-like synaptic plasticity, induced by the addition of glycine in Mg2+-free media for 5 min, produced synchronous network activity and subsequently increased synaptic strength between neurons. Furthermore, this synchronous network activity led to a significant increase in neuronal differentiation from the co-cultured neural precursor cells. When applied directly to precursor cells, glycine and Mg2+-free solution did not induce neuronal differentiation. Synaptic plasticity-induced neuronal differentiation of precursor cells was observed in the presence of GABAergic neurotransmission blockers but was dependent on NMDA-mediated Ca2+ influx. Most importantly, neuronal differentiation required the release of brain-derived neurotrophic factor (BDNF) from the underlying substrate hippocampal neurons, as well as TrkB receptor phosphorylation in precursor cells. This suggests that activity-dependent stem cell differentiation within the hippocampal network is mediated via synaptically evoked BDNF signaling.
Long-term optical stimulation of channelrhodopsin-expressing neurons to study network plasticity
Lignani, Gabriele; Ferrea, Enrico; Difato, Francesco; Amarù, Jessica; Ferroni, Eleonora; Lugarà, Eleonora; Espinoza, Stefano; Gainetdinov, Raul R.; Baldelli, Pietro; Benfenati, Fabio
2013-01-01
Neuronal plasticity produces changes in excitability, synaptic transmission, and network architecture in response to external stimuli. Network adaptation to environmental conditions takes place on time scales ranging from a few seconds to days, and modulates the entire network dynamics. To study the network response to defined long-term experimental protocols, we set up a system that combines optical and electrophysiological tools embedded in a cell incubator. Primary hippocampal neurons transduced with lentiviruses expressing channelrhodopsin-2/H134R were subjected to various photostimulation protocols over time windows on the order of days. To monitor the effects of light-induced gating of network activity, stimulated transduced neurons were simultaneously recorded using multi-electrode arrays (MEAs). The developed experimental model allows discerning short-term, long-lasting, and adaptive plasticity responses of the same neuronal network to distinct stimulation frequencies applied over different temporal windows. PMID:23970852
Spiking synchronization regulated by noise in three types of Hodgkin—Huxley neuronal networks
International Nuclear Information System (INIS)
Zhang Zheng-Zhen; Zeng Shang-You; Tang Wen-Yan; Hu Jin-Lin; Zeng Shao-Wen; Ning Wei-Lian; Qiu Yi; Wu Hui-Si
2012-01-01
In this paper, we study spiking synchronization in three different types of Hodgkin–Huxley neuronal networks: small-world, regular, and random neuronal networks. All the neurons are subjected to subthreshold stimulus and external noise. It is found that in each of the neuronal networks there is an optimal strength of noise that induces maximal spiking synchronization. We further demonstrate that in each of the neuronal networks there is a range of synaptic conductance over which an optimal strength of noise maximizes the spiking synchronization. Only when the magnitude of the synaptic conductance is moderate is the effect considerable; if the synaptic conductance is small or large, the effect vanishes. As the connections between neurons increase, the synaptic conductance that maximizes the effect decreases. Therefore, we show quantitatively that noise-induced maximal synchronization in Hodgkin–Huxley neuronal networks is a general effect, regardless of the specific type of network.
From Structure to Activity: Using Centrality Measures to Predict Neuronal Activity.
Fletcher, Jack McKay; Wennekers, Thomas
2018-03-01
It is clear that the topological structure of a neural network somehow determines the activity of the neurons within it. In the present work, we ask to what extent it is possible to examine the structural features of a network and learn something about its activity. Specifically, we consider how the centrality (the importance of a node in a network) of a neuron correlates with its firing rate. To investigate, we apply an array of centrality measures, including In-Degree, Closeness, Betweenness, Eigenvector, Katz, PageRank, Hyperlink-Induced Topic Search (HITS) and NeuronRank, to Leaky Integrate-and-Fire neural networks with different connectivity schemes. We find that Katz centrality is the best predictor of firing rate given the network structure, with almost perfect correlation in all cases studied, which include purely excitatory and excitatory-inhibitory networks with either homogeneous connections or a small-world structure. We identify the properties of a network that cause this correlation to hold. We argue that the reason Katz centrality correlates so highly with neuronal activity compared to other centrality measures is that it nicely captures disinhibition in neural networks. In addition, we argue that these theoretical findings are applicable to neuroscientists who apply centrality measures to functional brain networks, and that they offer a neurophysiological justification for high-level cognitive models which use certain centrality measures.
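As a hedged illustration of the kind of structure-to-activity prediction described in this abstract, the sketch below computes Katz centrality directly from a toy directed adjacency matrix via its closed form x = β(I − αAᵀ)⁻¹·1. The graph, damping factor, and variable names are invented for illustration and are not taken from the paper:

```python
import numpy as np

# Toy directed adjacency matrix: A[i, j] = 1 means neuron i projects to j.
A = np.array([[0, 1, 1, 0],
              [0, 0, 1, 0],
              [1, 0, 0, 0],
              [0, 0, 1, 0]], dtype=float)

# Katz centrality of node j counts incoming walks of every length,
# damped by alpha**k; it converges for alpha < 1 / spectral_radius(A).
lam_max = max(abs(np.linalg.eigvals(A)))
alpha, beta = 0.85 / lam_max, 1.0

# Closed form: x = beta * (I - alpha * A^T)^(-1) * 1
n = A.shape[0]
x = beta * np.linalg.solve(np.eye(n) - alpha * A.T, np.ones(n))

# Node 2 receives the most direct and indirect input in this toy graph.
print(x.argmax())  # -> 2
```

A larger α weights long indirect paths more heavily; as α → 0, the ranking reduces to plain in-degree.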
Effects of channel noise on firing coherence of small-world Hodgkin-Huxley neuronal networks
Sun, X. J.; Lei, J. Z.; Perc, M.; Lu, Q. S.; Lv, S. J.
2011-01-01
We investigate the effects of channel noise on the firing coherence of Watts-Strogatz small-world networks consisting of biophysically realistic HH neurons with a fraction of blocked voltage-gated sodium and potassium ion channels embedded in their membranes. The intensity of channel noise is determined by the number of non-blocked ion channels, which depends on the fraction of working ion channels and the membrane patch size, under the assumption of homogeneous ion channel density. We find that the firing coherence of the neuronal network can be either enhanced or reduced depending on the source of channel noise: sodium channel noise reduces the firing coherence of neuronal networks, whereas potassium channel noise enhances it. Furthermore, compared with potassium channel noise, sodium channel noise plays the dominant role in affecting the firing coherence of the neuronal network. Moreover, the observed phenomena are independent of the rewiring probability.
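The premise that channel-noise intensity is set by the number of working channels can be sketched with a minimal, hypothetical model: if each channel gates independently, fluctuations of the open fraction shrink roughly as 1/√N, which is why patch size and the blocked fraction control noise intensity. All parameter values below are illustrative, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(1)

def open_fraction_sd(n_channels, p_open=0.3, steps=5000):
    """Standard deviation of the open-channel fraction when each of
    n_channels gates independently with open probability p_open."""
    open_counts = rng.binomial(n_channels, p_open, size=steps)
    return (open_counts / n_channels).std()

# Quadrupling the channel count roughly halves the fluctuations
# (binomial SD of the fraction is sqrt(p*(1-p)/N)).
sd_small, sd_large = open_fraction_sd(100), open_fraction_sd(400)
print(sd_small > sd_large)  # -> True
```

Blocking a fraction of channels lowers the effective N and therefore raises the relative noise, consistent with the mechanism the abstract describes.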
On Rhythms in Neuronal Networks with Recurrent Excitation.
Börgers, Christoph; Takeuchi, R Melody; Rosebrock, Daniel T
2018-02-01
We investigate rhythms in networks of neurons with recurrent excitation, that is, with excitatory cells exciting each other. Recurrent excitation can sustain activity even when the cells in the network are driven below threshold, too weak to fire on their own. This sort of "reverberating" activity is often thought to be the basis of working memory. Recurrent excitation can also lead to "runaway" transitions, sudden transitions to high-frequency firing; this may be related to epileptic seizures. Not all fundamental questions about these phenomena have been answered with clarity in the literature. We focus on three questions here: (1) How much recurrent excitation is needed to sustain reverberating activity? How does the answer depend on parameters? (2) Is there a positive minimum frequency of reverberating activity, a positive "onset frequency"? How does it depend on parameters? (3) When do runaway transitions occur? For reduced models, we give mathematical answers to these questions. We also examine computationally to which extent our findings are reflected in the behavior of biophysically more realistic model networks. Our main results can be summarized as follows. (1) Reverberating activity can be fueled by extremely weak slow recurrent excitation, but only by sufficiently strong fast recurrent excitation. (2) The onset of reverberating activity, as recurrent excitation is strengthened or external drive is raised, occurs at a positive frequency. It is faster when the external drive is weaker (and the recurrent excitation stronger). It is slower when the recurrent excitation has a longer decay time constant. (3) Runaway transitions occur only with fast, not with slow, recurrent excitation. We also demonstrate that the relation between reverberating activity fueled by recurrent excitation and runaway transitions can be visualized in an instructive way by a (generalized) cusp catastrophe surface.
Effects of network structure on the synchronizability of nonlinearly coupled Hindmarsh–Rose neurons
Energy Technology Data Exchange (ETDEWEB)
Li, Chun-Hsien, E-mail: chli@nknucc.nknu.edu.tw [Department of Mathematics, National Kaohsiung Normal University, Yanchao District, Kaohsiung City 82444, Taiwan (China); Yang, Suh-Yuh, E-mail: syyang@math.ncu.edu.tw [Department of Mathematics, National Central University, Jhongli District, Taoyuan City 32001, Taiwan (China)
2015-10-23
This work investigates the effects of network structure on the synchronizability of a nonlinearly coupled dynamical network of Hindmarsh–Rose neurons with a sigmoidal coupling function. We mainly focus on networks that exhibit the small-world character or the scale-free property. By checking the first nonzero eigenvalue of the outer-coupling matrix, which is closely related to the synchronization threshold, the synchronizabilities of three specific network ensembles with prescribed network structures are compared. Interestingly, we find that networks with more connections will not necessarily exhibit better synchronizability.
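The synchronizability index mentioned here, the first nonzero eigenvalue of a Laplacian-type coupling matrix, can be computed in a few lines. The ring-plus-shortcuts comparison below is an illustrative toy stand-in, not one of the paper's network ensembles:

```python
import numpy as np

def algebraic_connectivity(adj):
    """First nonzero (second-smallest) eigenvalue of the graph
    Laplacian L = D - A; for diffusively coupled networks it is
    closely related to the synchronization threshold."""
    lap = np.diag(adj.sum(axis=1)) - adj
    return np.sort(np.linalg.eigvalsh(lap))[1]

# Ring of 10 nodes, each linked to its two nearest neighbors...
n = 10
ring = np.zeros((n, n))
for i in range(n):
    ring[i, (i + 1) % n] = ring[(i + 1) % n, i] = 1.0

# ...versus the same ring with two long-range shortcuts added.
sw = ring.copy()
for u, v in [(0, 3), (1, 6)]:
    sw[u, v] = sw[v, u] = 1.0

# Here the shortcuts raise the first nonzero eigenvalue, i.e. improve
# this synchronizability index, though (as the paper stresses) more
# connections do not help in every network ensemble.
print(algebraic_connectivity(ring) < algebraic_connectivity(sw))  # -> True
```

For the plain ring the value has the closed form 2(1 − cos(2π/n)), which is a useful sanity check on the numerics.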
Directory of Open Access Journals (Sweden)
Fikret Emre eKapucu
2012-06-01
Full Text Available In this paper we propose a firing-statistics-based burst detection algorithm for neuronal networks exhibiting highly variable action potential dynamics. The electrical activity of neuronal networks is generally analyzed in terms of the occurrences of spikes and bursts in both time and space. Commonly accepted analysis tools employ burst detection algorithms based on predefined criteria. However, maturing neuronal networks, such as those originating from human embryonic stem cells (hESCs), exhibit highly variable network structure and time-varying dynamics. To explore the developing burst/spike activities of such networks, we propose a burst detection algorithm that utilizes firing statistics based on interspike interval (ISI) histograms. The algorithm calculates interspike interval thresholds for burst spikes, as well as for pre-burst spikes and burst tails, by evaluating the cumulative moving average and skewness of the ISI histogram. Because of its adaptive nature, the algorithm's analysis power is not limited by the type of neuronal cell network at hand. We demonstrate the functionality of our algorithm with two different types of microelectrode array (MEA) data recorded from spontaneously active hESC-derived neuronal cell networks. The same data were also analyzed by two commonly employed burst detection algorithms, and the differences in burst detection results are illustrated. The results demonstrate that our method is both adaptive to the firing statistics of the network and yields successful burst detection from the data. In conclusion, the proposed method is a potential tool for analyzing hESC-derived neuronal cell networks and can thus be utilized in studies aiming to understand the development and functioning of human neuronal networks, as well as an analysis tool for in vitro drug screening and neurotoxicity assays.
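A minimal sketch of the ISI-threshold detection step is given below. Note that the published algorithm derives its thresholds adaptively from the cumulative moving average and skewness of the ISI histogram; here a fixed threshold and the function name `detect_bursts` are assumptions for illustration only:

```python
import numpy as np

def detect_bursts(spike_times, isi_threshold, min_spikes=3):
    """Return (start, end) times of runs of >= min_spikes spikes whose
    consecutive inter-spike intervals all fall below isi_threshold."""
    t = np.asarray(spike_times, dtype=float)
    bursts, run_start = [], 0
    for i, gap in enumerate(np.diff(t)):
        if gap >= isi_threshold:               # run of short ISIs broken
            if i - run_start + 1 >= min_spikes:
                bursts.append((float(t[run_start]), float(t[i])))
            run_start = i + 1
    if len(t) - run_start >= min_spikes:       # close the final run
        bursts.append((float(t[run_start]), float(t[-1])))
    return bursts

# Four tightly spaced spikes (one burst) followed by two isolated spikes.
spikes = [0.00, 0.01, 0.02, 0.03, 0.50, 1.00]
print(detect_bursts(spikes, isi_threshold=0.1))  # -> [(0.0, 0.03)]
```

In the adaptive scheme, `isi_threshold` would be recomputed per recording from the ISI histogram rather than fixed, which is what makes the published method robust to highly variable firing statistics.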
The Role of Adult-Born Neurons in the Constantly Changing Olfactory Bulb Network
Directory of Open Access Journals (Sweden)
Sarah Malvaut
2016-01-01
Full Text Available The adult mammalian brain is remarkably plastic and constantly undergoes structurofunctional modifications in response to environmental stimuli. In many regions plasticity is manifested by modifications in the efficacy of existing synaptic connections or by synapse formation and elimination. In a few regions, however, plasticity is brought about by the addition of new neurons that integrate into established neuronal networks. This type of neuronal plasticity is particularly prominent in the olfactory bulb (OB), where thousands of neuronal progenitors are produced on a daily basis in the subventricular zone (SVZ) and migrate along the rostral migratory stream (RMS) towards the OB. In the OB, these neuronal precursors differentiate into local interneurons, mature, and functionally integrate into the bulbar network by establishing output synapses with principal neurons. Despite continuous progress, it is still not well understood how normal functioning of the OB is preserved in the constantly remodelling bulbar network, and what role adult-born neurons play in odor behaviour. In this review we will discuss the different levels of morphofunctional plasticity effected by adult-born neurons and their functional role in the adult OB, and also highlight the possibility that different subpopulations of adult-born cells may fulfill distinct functions in the OB neuronal network and odor behaviour.
Directory of Open Access Journals (Sweden)
Willem de Haan
2017-09-01
Full Text Available Neuronal hyperactivity and hyperexcitability of the cerebral cortex and hippocampal region is an increasingly observed phenomenon in preclinical Alzheimer's disease (AD). In later stages, oscillatory slowing and loss of functional connectivity are ubiquitous. Recent evidence suggests that neuronal dynamics have a prominent role in AD pathophysiology, making it a potentially interesting therapeutic target. However, although neuronal activity can be manipulated by various (non-)pharmacological means, intervening in a highly integrated system that depends on complex dynamics can produce counterintuitive and adverse effects. Computational dynamic network modeling may serve as a virtual test ground for developing effective interventions. To explore this approach, a previously introduced large-scale neural mass network with human brain topology was used to simulate the temporal evolution of AD-like, activity-dependent network degeneration. In addition, six defense strategies that either enhanced or diminished neuronal excitability were tested against the degeneration process, targeting excitatory and inhibitory neurons combined or separately. Outcome measures described oscillatory, connectivity and topological features of the damaged networks. Over time, the various interventions produced diverse large-scale network effects. Contrary to our hypothesis, the most successful strategy was a selective stimulation of all excitatory neurons in the network; it substantially prolonged the preservation of network integrity. The results of this study imply that functional network damage due to pathological neuronal activity can be opposed by targeted adjustment of neuronal excitability levels. The present approach may help to explore therapeutic effects aimed at preserving or restoring neuronal network integrity and contribute to better-informed intervention choices in future clinical trials in AD.
A distance constrained synaptic plasticity model of C. elegans neuronal network
Badhwar, Rahul; Bagler, Ganesh
2017-03-01
Brain research has been driven by enquiry into the principles of brain structure organization and its control mechanisms. The neuronal wiring map of C. elegans, the only complete connectome available to date, presents an incredible opportunity to learn the basic governing principles that drive the structure and function of its neuronal architecture. Despite its apparently simple nervous system, C. elegans is known to possess complex functions. The nervous system forms an important underlying framework which specifies phenotypic features associated with sensation, movement, conditioning and memory. In this study, with the help of graph theoretical models, we investigated the C. elegans neuronal network to identify network features that are critical for its control. The 'driver neurons' are associated with important biological functions such as reproduction, signalling processes and anatomical structural development. We created 1D and 2D network models of the C. elegans neuronal system to probe the role of features that confer controllability and small-world nature. The simple 1D ring model is critically poised with respect to the number of feed-forward motifs, neuronal clustering and characteristic path length in response to synaptic rewiring, indicating optimal rewiring. Using the empirically observed distance constraint in the neuronal network as a guiding principle, we created a distance-constrained synaptic plasticity model that simultaneously explains the small-world nature, the saturation of feed-forward motifs and the observed number of driver neurons. The distance-constrained model suggests optimum long-distance synaptic connections as a key feature specifying control of the network.
Schwalger, Tilo; Deger, Moritz; Gerstner, Wulfram
2017-04-01
Neural population equations such as neural mass or field models are widely used to study brain activity on a large scale. However, the relation of these models to the properties of single neurons is unclear. Here we derive an equation for several interacting populations at the mesoscopic scale starting from a microscopic model of randomly connected generalized integrate-and-fire neuron models. Each population consists of 50-2000 neurons of the same type but different populations account for different neuron types. The stochastic population equations that we find reveal how spike-history effects in single-neuron dynamics such as refractoriness and adaptation interact with finite-size fluctuations on the population level. Efficient integration of the stochastic mesoscopic equations reproduces the statistical behavior of the population activities obtained from microscopic simulations of a full spiking neural network model. The theory describes nonlinear emergent dynamics such as finite-size-induced stochastic transitions in multistable networks and synchronization in balanced networks of excitatory and inhibitory neurons. The mesoscopic equations are employed to rapidly integrate a model of a cortical microcircuit consisting of eight neuron types, which allows us to predict spontaneous population activities as well as evoked responses to thalamic input. Our theory establishes a general framework for modeling finite-size neural population dynamics based on single cell and synapse parameters and offers an efficient approach to analyzing cortical circuits and computations.
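A hedged sketch of the microscopic side of this program, simulating a finite population of noisy leaky integrate-and-fire neurons and recording the population activity that mesoscopic equations aim to reproduce, might look as follows. All parameter values and the Euler-Maruyama discretization are illustrative choices, not the paper's generalized integrate-and-fire model:

```python
import numpy as np

rng = np.random.default_rng(0)
N, dt, steps = 400, 1e-4, 2000          # neurons, time step (s), step count
tau, v_th, v_reset = 0.02, 1.0, 0.0     # membrane time const. (s), threshold, reset
mu, sigma = 1.2, 0.3                    # suprathreshold drive and noise amplitude

v = np.zeros(N)                         # membrane potentials
pop_rate = np.empty(steps)              # population activity A(t)
for k in range(steps):
    # Leaky integration plus independent white noise (Euler-Maruyama).
    noise = sigma * np.sqrt(dt / tau) * rng.standard_normal(N)
    v += dt * (mu - v) / tau + noise
    spiked = v >= v_th                  # threshold crossing -> spike
    v[spiked] = v_reset                 # reset after each spike
    # Fraction of neurons firing per unit time; for finite N this
    # fluctuates around the mesoscopic prediction.
    pop_rate[k] = spiked.mean() / dt

print(round(pop_rate.mean(), 1))        # mean population rate in Hz
```

Shrinking `N` makes the finite-size fluctuations of `pop_rate` visibly larger, which is exactly the regime the mesoscopic population equations are built to capture.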
Directory of Open Access Journals (Sweden)
Yuzo Takayama
Full Text Available Morphology and function of the nervous system is maintained via well-coordinated processes both in central and peripheral nervous tissues, which govern the homeostasis of organs/tissues. Impairments of the nervous system induce neuronal disorders such as peripheral neuropathy or cardiac arrhythmia. Although further investigation is warranted to reveal the molecular mechanisms of progression in such diseases, appropriate model systems mimicking the patient-specific communication between neurons and organs are not established yet. In this study, we reconstructed the neuronal network in vitro either between human induced pluripotent stem (iPS) cell-derived neurons of the peripheral nervous system (PNS) and central nervous system (CNS), or between PNS neurons and cardiac cells, in a morphologically and functionally compartmentalized manner. Networks were constructed in photolithographically microfabricated devices with two culture compartments connected by 20 microtunnels. We confirmed that PNS and CNS neurons connected via synapses and formed a network. Additionally, calcium-imaging experiments showed that the bundles originating from the PNS neurons were functionally active and responded reproducibly to external stimuli. Next, we confirmed that CNS neurons showed an increase in calcium activity during electrical stimulation of networked bundles from PNS neurons in order to demonstrate the formation of functional cell-cell interactions. We also confirmed the formation of synapses between PNS neurons and mature cardiac cells. These results indicate that compartmentalized culture devices are promising tools for reconstructing network-wide connections between PNS neurons and various organs, and might help to understand patient-specific molecular and functional mechanisms under normal and pathological conditions.
Baroni, Fabiano; Burkitt, Anthony N; Grayden, David B
2014-05-01
High-frequency oscillations (above 30 Hz) have been observed in sensory and higher-order brain areas, and are believed to constitute a general hallmark of functional neuronal activation. Fast inhibition in interneuronal networks has been suggested as a general mechanism for the generation of high-frequency oscillations. Certain classes of interneurons exhibit subthreshold oscillations, but the effect of this intrinsic neuronal property on the population rhythm is not completely understood. We study the influence of intrinsic damped subthreshold oscillations in the emergence of collective high-frequency oscillations, and elucidate the dynamical mechanisms that underlie this phenomenon. We simulate neuronal networks composed of either Integrate-and-Fire (IF) or Generalized Integrate-and-Fire (GIF) neurons. The IF model displays purely passive subthreshold dynamics, while the GIF model exhibits subthreshold damped oscillations. Individual neurons receive inhibitory synaptic currents mediated by spiking activity in their neighbors as well as noisy synaptic bombardment, and fire irregularly at a lower rate than population frequency. We identify three factors that affect the influence of single-neuron properties on synchronization mediated by inhibition: i) the firing rate response to the noisy background input, ii) the membrane potential distribution, and iii) the shape of Inhibitory Post-Synaptic Potentials (IPSPs). For hyperpolarizing inhibition, the GIF IPSP profile (factor iii) exhibits post-inhibitory rebound, which induces a coherent spike-mediated depolarization across cells that greatly facilitates synchronous oscillations. This effect dominates the network dynamics, hence GIF networks display stronger oscillations than IF networks. However, the restorative current in the GIF neuron lowers firing rates and narrows the membrane potential distribution (factors i and ii, respectively), which tend to decrease synchrony. If inhibition is shunting instead of
Mapping cortical mesoscopic networks of single spiking cortical or sub-cortical neurons.
Xiao, Dongsheng; Vanni, Matthieu P; Mitelut, Catalin C; Chan, Allen W; LeDue, Jeffrey M; Xie, Yicheng; Chen, Andrew Cn; Swindale, Nicholas V; Murphy, Timothy H
2017-02-04
Understanding the basis of brain function requires knowledge of cortical operations over wide spatial scales, but also within the context of single neurons. In vivo, wide-field GCaMP imaging and sub-cortical/cortical cellular electrophysiology were used in mice to investigate relationships between spontaneous single neuron spiking and mesoscopic cortical activity. We make use of a rich set of cortical activity motifs that are present in spontaneous activity in anesthetized and awake animals. A mesoscale spike-triggered averaging procedure allowed the identification of motifs that are preferentially linked to individual spiking neurons by employing genetically targeted indicators of neuronal activity. Thalamic neurons predicted and reported specific cycles of wide-scale cortical inhibition/excitation. In contrast, spike-triggered maps derived from single cortical neurons yielded spatio-temporal maps expected for regional cortical consensus function. This approach can define network relationships between any point source of neuronal spiking and mesoscale cortical maps.
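The spike-triggered averaging step described above reduces, computationally, to averaging the imaging frames in a fixed window around every spike of the chosen neuron. A minimal sketch (array shapes, frame-index spike times, and the lag window are illustrative assumptions):

```python
import numpy as np

def spike_triggered_average(frames, spike_idx, lags=(-5, 15)):
    """Average mesoscale activity frames around each spike.

    frames:    (T, H, W) array, one image per time bin
    spike_idx: frame indices at which the reference neuron spiked
    lags:      (lo, hi) window of frames around each spike
    Returns an ((hi - lo), H, W) spike-triggered average map.
    """
    lo, hi = lags
    # keep only spikes whose full window fits inside the recording
    valid = [s for s in spike_idx if s + lo >= 0 and s + hi <= frames.shape[0]]
    sta = np.zeros((hi - lo,) + frames.shape[1:])
    for s in valid:
        sta += frames[s + lo:s + hi]
    return sta / max(len(valid), 1)
```

Frame index lo + k inside the returned map corresponds to lag k relative to the spike; structure that appears consistently at a given lag survives the average, while unrelated activity washes out.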
Effects of weak electric fields on the activity of neurons and neuronal networks
International Nuclear Information System (INIS)
Jeffreys, J.G.R.; Deans, J.; Bikson, M.; Fox, J.
2003-01-01
Electric fields applied to brain tissue will affect cellular properties. They will hyperpolarise the ends of cells closest to the positive part of the field, and depolarise ends closest to the negative. In the case of neurons this affects excitability. How these changes in transmembrane potential are distributed depends on the length constant of the neuron, and on its geometry; if the neuron is electrically compact, the change in transmembrane potential becomes an almost linear function of distance in the direction of the field. Neurons from the mammalian hippocampus, maintained in tissue slices in vitro, are significantly affected by fields of around 1-5 V/m. (author)
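The compact-cell approximation above admits a one-line estimate: with the midpoint of the cell at zero potential change, the maximal polarization at the cell's ends is roughly dV ≈ E·L/2. A sketch of the arithmetic (the 500 µm cell length is an assumed illustrative value, not taken from the record):

```python
def max_polarization_mV(field_V_per_m, length_um):
    """Upper-bound polarization of an electrically compact cell of length L
    in a uniform field E: dV ~ E * L / 2, returned in millivolts."""
    length_m = length_um * 1e-6
    return field_V_per_m * length_m / 2.0 * 1e3

# At the 1-5 V/m fields cited in the record, a 500 um cell polarizes by
# at most a fraction of a millivolt to about a millivolt:
print(max_polarization_mV(1.0, 500.0))  # 0.25 (mV)
print(max_polarization_mV(5.0, 500.0))  # 1.25 (mV)
```

These sub-millivolt shifts are small compared with the ~10-20 mV between rest and threshold, which is why such fields modulate excitability rather than directly drive firing.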
Synaptic and intrinsic activation of GABAergic neurons in the cardiorespiratory brainstem network.
Frank, Julie G; Mendelowitz, David
2012-01-01
GABAergic pathways in the brainstem play an essential role in respiratory rhythmogenesis and interactions between the respiratory and cardiovascular neuronal control networks. However, little is known about the identity and function of these GABAergic inhibitory neurons and what determines their activity. In this study we have identified a population of GABAergic neurons in the ventrolateral medulla that receive increased excitatory post-synaptic potentials during inspiration, but also have spontaneous firing in the absence of synaptic input. Using transgenic mice that express GFP under the control of the Gad1 (GAD67) gene promoter, we determined that this population of GABAergic neurons is in close apposition to cardioinhibitory parasympathetic cardiac neurons in the nucleus ambiguus (NA). These neurons fire in synchronization with inspiratory activity. Although they receive excitatory glutamatergic synaptic inputs during inspiration, this excitatory neurotransmission was not altered by blocking nicotinic receptors, and many of these GABAergic neurons continue to fire after synaptic blockade. The spontaneous firing in these GABAergic neurons was not altered by the voltage-gated calcium channel blocker cadmium chloride that blocks both neurotransmission to these neurons and voltage-gated Ca(2+) currents, but spontaneous firing was diminished by riluzole, demonstrating a role of persistent sodium channels in the spontaneous firing in these cardiorespiratory GABAergic neurons that possess a pacemaker phenotype. The spontaneously firing GABAergic neurons identified in this study that increase their activity during inspiration would support respiratory rhythm generation if they acted primarily to inhibit post-inspiratory neurons and thereby release inspiration neurons to increase their activity. This population of inspiratory-modulated GABAergic neurons could also play a role in inhibiting neurons that are most active during expiration and provide a framework for
Xiao, Min; Zheng, Wei Xing; Cao, Jinde
2013-01-01
Recent studies on Hopf bifurcations of neural networks with delays are confined to simplified neural network models consisting of only two, three, four, five, or six neurons. It is well known that neural networks are complex and large-scale nonlinear dynamical systems, so the dynamics of the delayed neural networks are very rich and complicated. Although discussing the dynamics of networks with a few neurons may help us to understand large-scale networks, there are inevitably some complicated problems that may be overlooked if simplified networks are carried over to large-scale networks. In this paper, a general delayed bidirectional associative memory neural network model with n + 1 neurons is considered. By analyzing the associated characteristic equation, the local stability of the trivial steady state is examined, and then the existence of the Hopf bifurcation at the trivial steady state is established. By applying the normal form theory and the center manifold reduction, explicit formulae are derived to determine the direction and stability of the bifurcating periodic solution. Furthermore, the paper highlights situations where the Hopf bifurcations are particularly critical, in the sense that the amplitude and the period of oscillations are very sensitive to errors due to tolerances in the implementation of neuron interconnections. It is shown that the sensitivity is crucially dependent on the delay and also significantly influenced by the number of neurons. Numerical simulations are carried out to illustrate the main results.
Coherent and intermittent ensemble oscillations emerge from networks of irregular spiking neurons.
Hoseini, Mahmood S; Wessel, Ralf
2016-01-01
Local field potential (LFP) recordings from spatially distant cortical circuits reveal episodes of coherent gamma oscillations that are intermittent, and of variable peak frequency and duration. Concurrently, single neuron spiking remains largely irregular and of low rate. The underlying potential mechanisms of this emergent network activity have long been debated. Here we reproduce such intermittent ensemble oscillations in a model network, consisting of excitatory and inhibitory model neurons with the characteristics of regular-spiking (RS) pyramidal neurons, and fast-spiking (FS) and low-threshold spiking (LTS) interneurons. We find that fluctuations in the external inputs trigger reciprocally connected and irregularly spiking RS and FS neurons into episodes of ensemble oscillations, which are terminated by the recruitment of the LTS population with concurrent accumulation of inhibitory conductance in both RS and FS neurons. The model qualitatively reproduces experimentally observed phase drift, oscillation episode duration distributions, variation in the peak frequency, and the concurrent irregular single-neuron spiking at low rate. Furthermore, consistent with previous experimental studies using optogenetic manipulation, periodic activation of FS, but not RS, model neurons causes enhancement of gamma oscillations. In addition, increasing the coupling between two model networks from low to high reveals a transition from independent intermittent oscillations to coherent intermittent oscillations. In conclusion, the model network suggests biologically plausible mechanisms for the generation of episodes of coherent intermittent ensemble oscillations with irregular spiking neurons in cortical circuits. Copyright © 2016 the American Physiological Society.
Dual-mode operation of neuronal networks involved in left-right alternation
DEFF Research Database (Denmark)
Talpalar, Adolfo E.; Bouvier, Julien; Borgius, Lotta
2013-01-01
All forms of locomotion are repetitive motor activities that require coordinated bilateral activation of muscles. The executive elements of locomotor control are networks of spinal neurons that determine gait pattern through the sequential activation of motor-neuron pools on either side of the bo...
Attractor switching in neuron networks and Spatiotemporal filters for motion processing
Subramanian, Easwara Naga
2008-01-01
From a broader perspective, we address two important questions, viz., (a) what kind of mechanism would enable a neuronal network to switch between various tasks or stored patterns? (b) what are the properties of neurons that are used by the visual system in early motion detection? To address (a) we
Hughes, Mark A; Brennan, Paul M; Bunting, Andrew S; Cameron, Katherine; Murray, Alan F; Shipston, Mike J
2014-05-01
Interfacing neurons with silicon semiconductors is a challenge being tackled through various bioengineering approaches. Such constructs inform our understanding of neuronal coding and learning and ultimately guide us toward creating intelligent neuroprostheses. A fundamental prerequisite is to dictate the spatial organization of neuronal cells. We sought to pattern neurons using photolithographically defined arrays of polymer parylene-C, activated with fetal calf serum. We used a purified human neuronal cell line [Lund human mesencephalic (LUHMES)] to establish whether neurons remain viable when isolated on-chip or whether they require a supporting cell substrate. When cultured in isolation, LUHMES neurons failed to pattern and did not show any morphological signs of differentiation. We therefore sought a cell type with which to prepattern parylene regions, hypothesizing that this cellular template would enable secondary neuronal adhesion and network formation. From a range of cell lines tested, human embryonic kidney (HEK) 293 cells patterned with highest accuracy. LUHMES neurons adhered to pre-established HEK 293 cell clusters and this coculture environment promoted morphological differentiation of neurons. Neurites extended between islands of adherent cell somata, creating an orthogonally arranged neuronal network. HEK 293 cells appear to fulfill a role analogous to glia, dictating cell adhesion, and generating an environment conducive to neuronal survival. We next replaced HEK 293 cells with slower growing glioma-derived precursors. These primary human cells patterned accurately on parylene and provided a similarly effective scaffold for neuronal adhesion. These findings advance the use of this microfabrication-compatible platform for neuronal patterning. Copyright © 2013 Wiley Periodicals, Inc.
Dynamics of macro- and microscopic neural networks
DEFF Research Database (Denmark)
Mikkelsen, Kaare
2014-01-01
that the method continues to find use, of which examples are presented. In the second part of the thesis, numerical simulations of networks of neurons are described. To simplify the analysis, a relatively simple neuron model - Leaky Integrate and Fire - is chosen. The strengths of the connections between...... shown that the synchronizing effect of the plasticity disappears when the strengths of the connections are frozen in time. Subsequently, the so-called ``Sisyphus'' mechanism is discussed, which is shown to cause slow fluctuations in both the network synchronization and the strengths...... in groups of high internal synchronization. It is demonstrated that these states are exceedingly robust towards additive noise. Studies of simulations such as these are important, in that simple models grant us an efficient way to investigat...
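The Leaky Integrate-and-Fire model chosen in the thesis can be sketched in a few lines. In dimensionless units its deterministic interspike interval has the closed form T = tau_m * ln(i / (i - v_th)) for suprathreshold drive, which a direct simulation should reproduce (all parameters below are illustrative):

```python
def lif_spike_times(i_ext, tau_m=20.0, v_th=1.0, v_reset=0.0,
                    t_max=200.0, dt=0.01):
    """Leaky integrate-and-fire: tau_m dv/dt = -v + i_ext; when v crosses
    v_th, record a spike time (ms) and reset v to v_reset."""
    v, spikes = 0.0, []
    n = int(t_max / dt)
    for k in range(n):
        v += dt * (-v + i_ext) / tau_m
        if v >= v_th:
            spikes.append(k * dt)
            v = v_reset
    return spikes

print(len(lif_spike_times(2.0)))   # periodic firing for suprathreshold drive
print(lif_spike_times(0.5))        # no spikes: v saturates below threshold
```

For i_ext = 2 the formula gives T = 20 ln 2 ≈ 13.9 ms between spikes; for i_ext = 0.5 the voltage settles at 0.5 < v_th and the neuron stays silent.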
Dynamics of Moment Neuronal Networks with Intra- and Inter-Interactions
Directory of Open Access Journals (Sweden)
Xuyan Xiang
2015-01-01
Full Text Available A framework of moment neuronal networks with intra- and inter-interactions is presented. The aim is to show how spontaneous activity propagates across homogeneous and heterogeneous networks. The input-output firing relationship and the stability are first explored for a homogeneous network. For a heterogeneous network without constraints on the correlation coefficients between neurons, a more sophisticated dynamics is then explored. With random interactions, the network synchronizes easily. However, desynchronization is produced by a lateral interaction such as a Mexican hat function. It is the external intralayer input unit that yields dynamics more sophisticated and unexpected than in earlier models. Hence, the work further opens up the possibility of carrying out stochastic computation in neuronal networks.
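The lateral interaction invoked above, the Mexican hat, is commonly written as a difference of Gaussians: narrow excitation minus broader, weaker inhibition, positive at the center and negative in the surround (widths and amplitudes below are illustrative):

```python
import numpy as np

def mexican_hat(x, sigma_e=1.0, sigma_i=2.0, a_e=2.0, a_i=1.0):
    """Difference-of-Gaussians lateral interaction kernel: short-range
    excitation (sigma_e) minus broader inhibition (sigma_i)."""
    exc = a_e * np.exp(-x**2 / (2.0 * sigma_e**2))
    inh = a_i * np.exp(-x**2 / (2.0 * sigma_i**2))
    return exc - inh
```

Convolving population activity with such a kernel reinforces local bumps while suppressing distant ones, the kind of lateral structure that can desynchronize an otherwise globally synchronized network.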
Communication networks in the brain: neurons, receptors, neurotransmitters, and alcohol.
Lovinger, David M
2008-01-01
Nerve cells (i.e., neurons) communicate via a combination of electrical and chemical signals. Within the neuron, electrical signals driven by charged particles allow rapid conduction from one end of the cell to the other. Communication between neurons occurs at tiny gaps called synapses, where specialized parts of the two cells (i.e., the presynaptic and postsynaptic neurons) come within nanometers of one another to allow for chemical transmission. The presynaptic neuron releases a chemical (i.e., a neurotransmitter) that is received by the postsynaptic neuron's specialized proteins called neurotransmitter receptors. The neurotransmitter molecules bind to the receptor proteins and alter postsynaptic neuronal function. Two types of neurotransmitter receptors exist-ligand-gated ion channels, which permit rapid ion flow directly across the outer cell membrane, and G-protein-coupled receptors, which set into motion chemical signaling events within the cell. Hundreds of molecules are known to act as neurotransmitters in the brain. Neuronal development and function also are affected by peptides known as neurotrophins and by steroid hormones. This article reviews the chemical nature, neuronal actions, receptor subtypes, and therapeutic roles of several transmitters, neurotrophins, and hormones. It focuses on neurotransmitters with important roles in acute and chronic alcohol effects on the brain, such as those that contribute to intoxication, tolerance, dependence, and neurotoxicity, as well as maintained alcohol drinking and addiction.
Self-Organized Criticality in a Simple Neuron Model Based on Scale-Free Networks
International Nuclear Information System (INIS)
Lin Min; Wang Gang; Chen Tianlun
2006-01-01
A simple model for a set of interacting idealized neurons in scale-free networks is introduced. The basic elements of the model are endowed with the main features of a neuron function. We find that our model displays power-law behavior of avalanche sizes and generates long-range temporal correlation. More importantly, we find different dynamical behavior for nodes with different connectivity in the scale-free networks.
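Power-law avalanche statistics of the kind reported here are usually extracted by binning the spiking activity in time and calling each run of consecutive nonzero bins one avalanche, whose size is the total number of events in the run. A model-independent sketch of that bookkeeping:

```python
def avalanche_sizes(activity):
    """Given per-bin event counts, return the size of each avalanche,
    i.e. the summed events in every run of consecutive nonzero bins."""
    sizes, current = [], 0
    for a in activity:
        if a > 0:
            current += a
        elif current > 0:        # a quiet bin closes the current avalanche
            sizes.append(current)
            current = 0
    if current > 0:              # activity may end mid-avalanche
        sizes.append(current)
    return sizes

print(avalanche_sizes([0, 1, 2, 0, 0, 3, 0, 1]))  # [3, 3, 1]
```

A histogram of these sizes on log-log axes is then inspected for the straight-line (power-law) signature of self-organized criticality.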
International Nuclear Information System (INIS)
Liu, Chen; Wang, Jiang; Wang, Lin; Yu, Haitao; Deng, Bin; Wei, Xile; Tsang, Kaiming; Chan, Wailok
2014-01-01
Highlights: • Synchronization transitions in hybrid scale-free neuronal networks are investigated. • Multiple synchronization transitions can be induced by the time delay. • The effect of synchronization transitions depends on the ratio of electrical to chemical synapses. • Coupling strength and the density of inter-neuronal links can enhance the synchronization. -- Abstract: The impacts of information transmission delay on the synchronization transitions in scale-free neuronal networks with electrical and chemical hybrid synapses are investigated. Numerical results show that multiple synchronization transitions can be induced by different information transmission delays. As the time delay increases, the synchronization of neuronal activities can be enhanced or destroyed, irrespective of the probability of chemical synapses in the whole hybrid neuronal network. In particular, for a larger probability of electrical synapses, the regions of synchronous activity appear broader, the synchronizing ability of electrical synapses being stronger than that of chemical ones. Moreover, increasing the coupling strength can promote synchronization monotonically, playing a role similar to that of increasing the probability of electrical synapses. Interestingly, the structure and parameters of the scale-free neuronal networks, especially the structural evolution, play a more subtle role in the synchronization transitions. In the network formation process, it is found that the more existing vertices each new vertex attaches to, the more synchronous activity will emerge
Directory of Open Access Journals (Sweden)
О.С. Якушенко
2010-01-01
Full Text Available The article addresses the automatic recognition of gas turbine engine (GTE) technical state classes from operating parameters using neural networks. One of the main problems in creating such networks is determining their optimal size (the number of layers and the number of neurons in each layer). The article considers a method for optimizing the size of a neural network intended to classify GTE technical states. The optimization takes into account the possibility of overlearning, in which a network loses its ability to generalize and begins merely to reproduce the training data set. To detect the onset of overlearning, the method of three data sets is used, based on comparing changes in recognition quality computed on the training and control data sets. Overlearning is deemed to begin at the moment when recognition quality on the control set starts to deteriorate while quality on the training set is still improving. To locate this moment, the learning process is periodically interrupted and the network is evaluated on the training and control sets. Two-, three-, and four-layer networks were optimized, and some results of the optimization are shown. An extended training set was also created: it describes 16 GTE technical state classes, each represented by 200 points (200 possible realizations of the class) instead of the 20 points used in earlier articles, in order to increase the representativeness of the data set. The article presents the optimization algorithm and some results obtained with it; the experimental results were analyzed to determine the most suitable network structure. This structure provides the highest-quality GTE
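The three-data-set criterion described in this abstract is, in modern terms, early stopping: keep training while the control (validation) error improves, and stop once it has failed to improve for some number of evaluations, even if the training error is still falling. A generic sketch (the two callbacks are an assumed interface for illustration, not the article's code):

```python
def train_with_early_stopping(train_step, eval_error, max_epochs=100,
                              patience=5):
    """Early stopping on a control set.

    train_step():        runs one training epoch (callback)
    eval_error(split):   current error on the named data set (callback)
    Returns (best_epoch, best_control_error).
    """
    best, best_epoch, waited = float('inf'), 0, 0
    for epoch in range(max_epochs):
        train_step()
        err = eval_error('control')
        if err < best:
            best, best_epoch, waited = err, epoch, 0
        else:
            waited += 1
            if waited >= patience:   # control error stopped improving
                break
    return best_epoch, best
```

Applied to the GTE classifier, train_step would run one epoch of network training and eval_error would score recognition quality on the corresponding data set; the network state at best_epoch is the one kept.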
Transition Dynamics of a Dentate Gyrus-CA3 Neuronal Network during Temporal Lobe Epilepsy.
Zhang, Liyuan; Fan, Denggui; Wang, Qingyun
2017-01-01
In temporal lobe epilepsy (TLE), the variation of chemical receptor expression underlies the basis of neural network activity shifts, resulting in neuronal hyperexcitability and epileptiform discharges. However, dynamical mechanisms involved in the transitions of TLE are not fully understood, because of the neuronal diversity and the indeterminacy of network connection. Hence, based on Hodgkin-Huxley (HH) type neurons and Pinsky-Rinzel (PR) type neurons coupling with glutamatergic and GABAergic synaptic connections respectively, we propose a computational framework which contains dentate gyrus (DG) region and CA3 region. By regulating the concentration range of N-methyl-D-aspartate-type glutamate receptor (NMDAR), we demonstrate the pyramidal neuron can generate transitions from interictal to seizure discharges. This suggests that enhanced endogenous activity of NMDAR contributes to excitability in pyramidal neuron. Moreover, we conclude that excitatory discharges in CA3 region vary considerably on account of the excitatory currents produced by the excitatory pyramidal neuron. Interestingly, by changing the backprojection connection, we find that glutamatergic type backprojection can promote the dominant frequency of firings and further motivate excitatory counterpropagation from CA3 region to DG region. However, GABAergic type backprojection can reduce firing rate and block morbid counterpropagation, which may be factored into the terminations of TLE. In addition, neuronal diversity dominated network shows weak correlation with different backprojections. Our modeling and simulation studies provide new insights into the mechanisms of seizures generation and connectionism in local hippocampus, along with the synaptic mechanisms of this disease.
SuperNeurons: Dynamic GPU Memory Management for Training Deep Neural Networks
Wang, Linnan; Ye, Jinmian; Zhao, Yiyang; Wu, Wei; Li, Ang; Song, Shuaiwen Leon; Xu, Zenglin; Kraska, Tim
2018-01-01
Going deeper and wider in neural architectures improves accuracy, but the limited GPU DRAM places an undesired restriction on the network design domain. Deep Learning (DL) practitioners must either switch to less desirable network architectures or nontrivially dissect a network across multiple GPUs. Both options distract DL practitioners from concentrating on their original machine learning tasks. We present SuperNeurons: a dynamic GPU memory scheduling runtime to enable the network training far be...
Versatile networks of simulated spiking neurons displaying winner-take-all behavior.
Chen, Yanqing; McKinstry, Jeffrey L; Edelman, Gerald M
2013-01-01
We describe simulations of large-scale networks of excitatory and inhibitory spiking neurons that can generate dynamically stable winner-take-all (WTA) behavior. The network connectivity is a variant of center-surround architecture that we call center-annular-surround (CAS). In this architecture each neuron is excited by nearby neighbors and inhibited by more distant neighbors in an annular-surround region. The neural units of these networks simulate conductance-based spiking neurons that interact via mechanisms susceptible to both short-term synaptic plasticity and STDP. We show that such CAS networks display robust WTA behavior unlike the center-surround networks and other control architectures that we have studied. We find that a large-scale network of spiking neurons with separate populations of excitatory and inhibitory neurons can give rise to smooth maps of sensory input. In addition, we show that a humanoid brain-based-device (BBD) under the control of a spiking WTA neural network can learn to reach to target positions in its visual field, thus demonstrating the acquisition of sensorimotor coordination.
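The center-annular-surround (CAS) architecture can be expressed as a distance-dependent weight rule: excitation inside a central disc, inhibition inside a surrounding annulus, and no connection elsewhere (all radii and weights below are illustrative choices, not values from the paper):

```python
import numpy as np

def cas_weight(distance, r_exc=2.0, r_inh_in=4.0, r_inh_out=8.0,
               w_exc=1.0, w_inh=-0.5):
    """Center-annular-surround connectivity: a neuron excites neighbors
    within r_exc, inhibits neighbors in the annulus [r_inh_in, r_inh_out],
    and has no connection in the gap between or beyond the annulus."""
    d = np.asarray(distance, dtype=float)
    w = np.zeros_like(d)
    w[d <= r_exc] = w_exc
    w[(d >= r_inh_in) & (d <= r_inh_out)] = w_inh
    return w

print(cas_weight([1.0, 3.0, 5.0, 10.0]))  # [ 1.   0.  -0.5  0. ]
```

Unlike a classical center-surround kernel, the inhibition here is confined to an annulus at intermediate distance, which the paper reports as the ingredient that makes winner-take-all dynamics robust.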
Directory of Open Access Journals (Sweden)
Siebler Mario
2009-08-01
Full Text Available Background: The present work was performed to investigate the ability of two different embryonic stem (ES) cell-derived neural precursor populations to generate functional neuronal networks in vitro. The first ES cell-derived neural precursor population was cultivated as free-floating neural aggregates which are known to form a developmental niche comprising different types of neural cells, including neural precursor cells (NPCs), progenitor cells and even further matured cells. This niche provides by itself a variety of different growth factors and extracellular matrix proteins that influence the proliferation and differentiation of neural precursor and progenitor cells. The second population was cultivated adherently in monolayer cultures to control most stringently the extracellular environment. This population comprises highly homogeneous NPCs which are supposed to represent an attractive way to provide well-defined neuronal progeny. However, the ability of these different ES cell-derived immature neural cell populations to generate functional neuronal networks has not been assessed so far. Results: While both precursor populations were shown to differentiate into sufficient quantities of mature NeuN+ neurons that also express GABA or vesicular-glutamate-transporter-2 (vGlut2), only aggregate-derived neuronal populations exhibited a synchronously oscillating network activity 24 weeks after initiating the differentiation as detected by the microelectrode array technology. Neurons derived from homogeneous NPCs within monolayer cultures merely showed uncorrelated spiking activity even when differentiated for up to 12 weeks. We demonstrated that these neurons exhibited sparsely ramified neurites and an embryonic vGlut2 distribution suggesting an inhibited terminal neuronal maturation. In comparison, neurons derived from heterogeneous populations within neural aggregates appeared as fully mature with a dense neurite network and punctuated
Pacemaker neuron and network oscillations depend on a neuromodulator-regulated linear current
Directory of Open Access Journals (Sweden)
Shunbing Zhao
2010-05-01
Full Text Available Linear leak currents have been implicated in the regulation of neuronal excitability, generation of neuronal and network oscillations, and network state transitions. Yet, few studies have directly tested the dependence of network oscillations on leak currents or explored the role of leak currents on network activity. In the oscillatory pyloric network of decapod crustaceans, neuromodulatory inputs are necessary for pacemaker activity. A large subset of neuromodulators is known to activate a single voltage-gated inward current I_MI, which has been shown to regulate the rhythmic activity of the network and its pacemaker neurons. Using the dynamic clamp technique, we show that the crucial component of I_MI for the generation of oscillatory activity is only a close-to-linear portion of the current-voltage relationship. The nature of this conductance is such that the presence or the absence of neuromodulators effectively regulates the amount of leak current and the input resistance in the pacemaker neurons. When deprived of neuromodulatory inputs, pyloric oscillations are disrupted; yet, a linear reduction of the total conductance in a single neuron within the pacemaker group recovers not only the pacemaker activity in that neuron, but also leads to a recovery of oscillations in the entire pyloric network. The recovered activity produces proper frequency and phasing that is similar to that induced by neuromodulators. These results show that the passive properties of pacemaker neurons can significantly affect their capacity to generate and regulate the oscillatory activity of an entire network, and that this feature is exploited by neuromodulatory inputs.
Buesing, Lars; Bill, Johannes; Nessler, Bernhard; Maass, Wolfgang
2011-11-01
The organization of computations in networks of spiking neurons in the brain is still largely unknown, in particular in view of the inherently stochastic features of their firing activity and the experimentally observed trial-to-trial variability of neural systems in the brain. In principle there exists a powerful computational framework for stochastic computations, probabilistic inference by sampling, which can explain a large number of macroscopic experimental data in neuroscience and cognitive science. But it has turned out to be surprisingly difficult to create a link between these abstract models for stochastic computations and more detailed models of the dynamics of networks of spiking neurons. Here we create such a link and show that under some conditions the stochastic firing activity of networks of spiking neurons can be interpreted as probabilistic inference via Markov chain Monte Carlo (MCMC) sampling. Since common methods for MCMC sampling in distributed systems, such as Gibbs sampling, are inconsistent with the dynamics of spiking neurons, we introduce a different approach based on non-reversible Markov chains that is able to reflect inherent temporal processes of spiking neuronal activity through a suitable choice of random variables. We propose a neural network model and show by a rigorous theoretical analysis that its neural activity implements MCMC sampling of a given distribution, both for the case of discrete and continuous time. This provides a step towards closing the gap between abstract functional models of cortical computation and more detailed models of networks of spiking neurons.
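The sampling idea in the abstract above can be made concrete with the textbook scheme it takes as a baseline. Below is a minimal, illustrative Gibbs sampler over binary "neuron" states for a small Boltzmann-type distribution; the function name, parameters, and toy weights are our assumptions, and the authors' actual model uses non-reversible Markov chains tailored to spiking dynamics rather than this reference update rule.

```python
import math, random

def gibbs_sample(W, b, steps=20000, seed=1):
    """Gibbs sampling over binary states z in {0,1}^n for the Boltzmann
    distribution p(z) ~ exp(sum_k b[k]*z[k] + 0.5*sum_{k,j} W[k][j]*z[k]*z[j]).
    Returns the empirical marginals p(z_k = 1)."""
    rng = random.Random(seed)
    n = len(b)
    z = [0] * n
    counts = [0] * n
    for _ in range(steps):
        k = rng.randrange(n)  # pick one unit and resample it given the rest
        u = b[k] + sum(W[k][j] * z[j] for j in range(n) if j != k)
        z[k] = 1 if rng.random() < 1.0 / (1.0 + math.exp(-u)) else 0
        for i in range(n):
            counts[i] += z[i]
    return [c / steps for c in counts]

# Two symmetrically coupled units: by symmetry each exact marginal is 0.5.
marginals = gibbs_sample(W=[[0.0, 1.0], [1.0, 0.0]], b=[-0.5, -0.5])
```

For this symmetric two-unit example the empirical marginals should settle near the exact value of 0.5, which gives a quick sanity check of the sampler.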
Fluctuations and information filtering in coupled populations of spiking neurons with adaptation.
Deger, Moritz; Schwalger, Tilo; Naud, Richard; Gerstner, Wulfram
2014-12-01
Finite-sized populations of spiking elements are fundamental to brain function but also are used in many areas of physics. Here we present a theory of the dynamics of finite-sized populations of spiking units, based on a quasirenewal description of neurons with adaptation. We derive an integral equation with colored noise that governs the stochastic dynamics of the population activity in response to time-dependent stimulation and calculate the spectral density in the asynchronous state. We show that systems of coupled populations with adaptation can generate a frequency band in which sensory information is preferentially encoded. The theory is applicable to fully as well as randomly connected networks and to leaky integrate-and-fire as well as to generalized spiking neurons with adaptation on multiple time scales.
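As a toy companion to the abstract above, the sketch below simulates a single leaky integrate-and-fire neuron with a spike-triggered adaptation variable and shows that adaptation lowers the steady firing rate. All parameter values and names are illustrative choices of ours; this is a single-neuron caricature, not the quasirenewal population theory of the paper.

```python
def lif_spike_count(I=1.5, adapt_jump=0.0, tau_m=10.0, tau_a=100.0,
                    v_th=1.0, dt=0.1, T=1000.0):
    """Leaky integrate-and-fire neuron with an optional spike-triggered
    adaptation current `a`; returns the number of spikes in time T (ms)."""
    v, a, spikes = 0.0, 0.0, 0
    for _ in range(int(T / dt)):
        v += dt * (-v + I - a) / tau_m  # membrane relaxes toward I - a
        a -= dt * a / tau_a             # adaptation decays between spikes
        if v >= v_th:
            v = 0.0                     # reset after a spike
            a += adapt_jump             # adaptation builds up per spike
            spikes += 1
    return spikes

plain = lif_spike_count(adapt_jump=0.0)    # tonic firing, no adaptation
adapted = lif_spike_count(adapt_jump=0.3)  # adaptation throttles the rate
```

With these parameters the non-adapting neuron fires roughly an order of magnitude more spikes over the same window than the adapting one.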
Weick, Jason P.; Liu, Yan; Zhang, Su-Chun
2011-01-01
Whether hESC-derived neurons can fully integrate with and functionally regulate an existing neural network remains unknown. Here, we demonstrate that hESC-derived neurons receive unitary postsynaptic currents both in vitro and in vivo and adopt the rhythmic firing behavior of mouse cortical networks via synaptic integration. Optical stimulation of hESC-derived neurons expressing Channelrhodopsin-2 elicited both inhibitory and excitatory postsynaptic currents and triggered network bursting in mouse neurons. Furthermore, light stimulation of hESC-derived neurons transplanted to the hippocampus of adult mice triggered postsynaptic currents in host pyramidal neurons in acute slice preparations. Thus, hESC-derived neurons can participate in and modulate neural network activity through functional synaptic integration, suggesting they are capable of contributing to neural network information processing both in vitro and in vivo. PMID:22106298
TETRAMETHRIN AND DDT INHIBIT SPONTANEOUS FIRING IN CORTICAL NEURONAL NETWORKS
The insecticidal and neurotoxic effects of pyrethroids result from prolonged sodium channel inactivation, which causes alterations in neuronal firing and communication. Previously, we determined the relative potencies of 11 type I and type II pyrethroid insecticides using microel...
Endogenous fields enhanced stochastic resonance in a randomly coupled neuronal network
International Nuclear Information System (INIS)
Deng, Bin; Wang, Lin; Wang, Jiang; Wei, Xi-le; Yu, Hai-tao
2014-01-01
Highlights: • We study effects of endogenous fields on stochastic resonance in a neural network. • Stochastic resonance can be notably enhanced by endogenous field feedback. • Endogenous field feedback delay plays a vital role in stochastic resonance. • The parameters of the low-pass filter play a subtle role in SR. - Abstract: Endogenous field, evoked by structured neuronal network activity in vivo, is correlated with many vital neuronal processes. In this paper, the effects of endogenous fields on stochastic resonance (SR) in a randomly connected neuronal network are investigated. The network consists of excitatory and inhibitory neurons, and the axonal conduction delays between neurons are also considered. Numerical results elucidate that endogenous field feedback results in more rhythmic macroscopic activation of the network for proper time delay and feedback coefficient. The response of the network to the weak periodic stimulation can be notably enhanced by endogenous field feedback. Moreover, the endogenous field feedback delay plays a vital role in SR. We reveal that appropriately tuned delays of the feedback can either induce the enhancement of SR, appearing at every integer multiple of the weak input signal's oscillation period, or the depression of SR, appearing at every integer multiple of half the weak input signal's oscillation period for the same feedback coefficient. Interestingly, the parameters of the low-pass filter used to obtain the endogenous field feedback signal play a subtle role in SR.
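Stochastic resonance rests on a simple ingredient: a subthreshold signal becomes detectable only with the help of noise. The sketch below illustrates just that ingredient for a bare threshold-crossing detector; it omits the network, the endogenous-field feedback, and the delays studied in the paper, and all names and parameter values are our illustrative assumptions.

```python
import math, random

def upward_crossings(noise_std, amp=0.5, thresh=1.0, period=200,
                     T=5000, seed=7):
    """Count upward threshold crossings of a subthreshold sinusoid plus
    Gaussian white noise."""
    rng = random.Random(seed)
    prev, count = 0.0, 0
    for t in range(T):
        x = amp * math.sin(2.0 * math.pi * t / period) + rng.gauss(0.0, noise_std)
        if prev < thresh <= x:
            count += 1
        prev = x
    return count

silent = upward_crossings(0.0)  # no noise: the signal never reaches threshold
noisy = upward_crossings(0.4)   # moderate noise: crossings cluster near peaks
```

Without noise the 0.5-amplitude signal never reaches the threshold of 1.0, so no events are detected; moderate noise produces crossings concentrated around the signal peaks, which is the raw material of the SR effect.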
Pecevski, Dejan; Buesing, Lars; Maass, Wolfgang
2011-01-01
An important open problem of computational neuroscience is the generic organization of computations in networks of neurons in the brain. We show here through rigorous theoretical analysis that inherent stochastic features of spiking neurons, in combination with simple nonlinear computational operations in specific network motifs and dendritic arbors, enable networks of spiking neurons to carry out probabilistic inference through sampling in general graphical models. In particular, it enables them to carry out probabilistic inference in Bayesian networks with converging arrows (“explaining away”) and with undirected loops, that occur in many real-world tasks. Ubiquitous stochastic features of networks of spiking neurons, such as trial-to-trial variability and spontaneous activity, are necessary ingredients of the underlying computational organization. We demonstrate through computer simulations that this approach can be scaled up to neural emulations of probabilistic inference in fairly large graphical models, yielding some of the most complex computations that have been carried out so far in networks of spiking neurons. PMID:22219717
Delay-induced diversity of firing behavior and ordered chaotic firing in adaptive neuronal networks
International Nuclear Information System (INIS)
Gong Yubing; Wang Li; Xu Bo
2012-01-01
In this paper, we study the effect of time delay on the firing behavior, temporal coherence, and synchronization in Newman–Watts thermosensitive neuron networks with adaptive coupling. Initially, in the absence of time delay, the neurons exhibit disordered spiking. As time delay is increased, the neurons exhibit a diversity of firing behaviors, including bursting with multiple spikes in a burst, spiking, bursting with four, three and two spikes, firing death, and bursting with increasing amplitude. The spiking is the most ordered, exhibiting coherence resonance (CR)-like behavior, and the firing synchronization is enhanced as the time delay increases. As the growth rate of the coupling strength or the network randomness increases, the CR-like behavior shifts to smaller time delay and the synchronization of firing increases. These results show that time delay can induce a diversity of firing behaviors in adaptive neuronal networks, and can order the chaotic firing by enhancing and optimizing the temporal coherence and enhancing the synchronization of firing. However, the phenomenon of firing death shows that time delay may also inhibit the firing of adaptive neuronal networks. These findings provide new insight into the role of time delay in the firing activity of adaptive neuronal networks, and can help to better understand the complex firing phenomena in neural networks.
Fardet, Tanguy; Bottani, Samuel; Métens, Stéphane; Monceau, Pascal
2018-06-01
The Quorum Percolation model (QP) has been designed in the context of neurobiology to describe the initiation of activity bursts occurring in neuronal cultures from the point of view of statistical physics rather than from a dynamical synchronization approach. This paper investigates an extension of the original QP model that takes into account the presence of inhibitory neurons in the cultures (IQP model). The first part of this paper is focused on an equivalence between the presence of inhibitory neurons and a reduction of the network connectivity. Relying on a simple topological argument, we show that the mean activation behavior of networks containing a fraction η of inhibitory neurons can be mapped onto purely excitatory networks with an appropriately modified wiring, provided that η remains in the range usually observed in neuronal cultures, namely η ⪅ 20%. As a striking result, we show that such a mapping makes it possible to predict the evolution of the critical point of the IQP model with the fraction of inhibitory neurons. In a second part, we bridge the gap between the description of bursts in the framework of percolation and the temporal description of neural network activity by showing how dynamical simulations of bursts with an adaptive exponential integrate-and-fire model lead to a mean description of burst activation which is captured by Quorum Percolation.
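The core Quorum Percolation rule is easy to state: a node activates, irreversibly, once at least a quorum of its inputs is active, and activation is iterated to a fixed point. The following is a minimal sketch of that rule on a toy directed graph; it is our illustration, not the authors' code, and it ignores the inhibitory extension (IQP) the paper introduces.

```python
def quorum_percolation(inputs, quorum, initially_active):
    """Iterate the Quorum Percolation activation rule to a fixed point.

    `inputs[i]` lists the presynaptic neighbors of node i; a node activates
    irreversibly once at least `quorum` of its inputs are active."""
    active = set(initially_active)
    changed = True
    while changed:
        changed = False
        for node, pre in inputs.items():
            if node not in active and sum(1 for j in pre if j in active) >= quorum:
                active.add(node)
                changed = True
    return active

# Toy directed network: activating 0 and 1 ignites the whole chain.
inputs = {0: [], 1: [], 2: [0, 1], 3: [0, 2], 4: [2, 3]}
ignited = quorum_percolation(inputs, quorum=2, initially_active=[0, 1])
```

Seeding only node 0 leaves every other node below quorum, so the burst fails to ignite; seeding 0 and 1 activates the entire network, which is the all-or-none flavor of the percolation transition.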
Complete Neuron-Astrocyte Interaction Model: Digital Multiplierless Design and Networking Mechanism.
Haghiri, Saeed; Ahmadi, Arash; Saif, Mehrdad
2017-02-01
Glial cells, also known as neuroglia or glia, are non-neuronal cells providing support and protection for neurons in the central nervous system (CNS). Among the variety of glial cells, the star-shaped astrocytes are the largest cell population in the brain. Important functions of astrocytes, such as neuronal synchronization, regulation of synaptic information, feedback to neural activity, and extracellular regulation, give them a vital role in brain disease. This paper presents a modified complete neuron-astrocyte interaction model that is more suitable for efficient and large-scale biological neural network realization on digital platforms. Simulation results show that the modified complete interaction model can reproduce biological-like behavior of the original neuron-astrocyte mechanism. The modified interaction model is investigated in terms of digital realization feasibility and cost, targeting a low-cost hardware implementation. Networking behavior of this interaction is investigated and compared between two cases: i) the neuron spiking mechanism without astrocyte effects, and ii) the effect of the astrocyte in regulating the neurons' behavior and synaptic transmission via controlling the LTP and LTD processes. Hardware implementation on FPGA shows that the modified model mimics the main mechanism of neuron-astrocyte communication with higher performance and considerably lower hardware overhead cost compared with the original interaction model.
Robust emergence of small-world structure in networks of spiking neurons.
Kwok, Hoi Fei; Jurica, Peter; Raffone, Antonino; van Leeuwen, Cees
2007-03-01
Spontaneous activity in biological neural networks shows patterns of dynamic synchronization. We propose that these patterns support the formation of a small-world structure, a network connectivity optimal for distributed information processing. We present numerical simulations with connected Hindmarsh-Rose neurons in which, starting from random connection distributions, small-world networks evolve as a result of applying an adaptive rewiring rule. The rule connects pairs of neurons that tend to fire in synchrony, and disconnects those that fail to synchronize. Repeated application of the rule leads to small-world structures. This mechanism is robustly observed for bursting and irregular firing regimes.
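Small-world structure is conventionally diagnosed by a high clustering coefficient combined with a short characteristic path length. The sketch below computes both quantities for an undirected graph given as an adjacency dict; the helper names are ours, and the adaptive rewiring rule itself is not implemented here.

```python
from collections import deque
from itertools import combinations

def avg_clustering(adj):
    """Average local clustering coefficient of an undirected graph
    given as {node: set_of_neighbors}."""
    total = 0.0
    for node, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            continue  # clustering undefined for degree < 2; count as 0
        links = sum(1 for a, b in combinations(sorted(nbrs), 2) if b in adj[a])
        total += 2.0 * links / (k * (k - 1))
    return total / len(adj)

def char_path_length(adj):
    """Characteristic path length via breadth-first search from each node
    (assumes a connected graph)."""
    total, pairs = 0, 0
    for src in adj:
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

triangle = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}  # maximally clustered toy graph
```

A small-world network scores high on the first measure (like a lattice) and low on the second (like a random graph); the triangle is a degenerate case where both equal 1.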
Synchronization in a non-uniform network of excitatory spiking neurons
Echeveste, Rodrigo; Gros, Claudius
Spontaneous synchronization of pulse-coupled elements is ubiquitous in nature and seems to be of vital importance for life. Networks of pacemaker cells in the heart, extended populations of Southeast Asian fireflies, and neuronal oscillations in cortical networks are examples of this. In the present work, a rich repertoire of dynamical states with different degrees of synchronization is found in a network of excitatory-only spiking neurons connected in a non-uniform fashion. In particular, uncorrelated and partially correlated states are found without the need for inhibitory neurons or external currents. The phase transitions between these states, as well as the robustness, stability, and response of the network to external stimuli, are studied.
Mechanism for propagation of rate signals through a 10-layer feedforward neuronal network
International Nuclear Information System (INIS)
Jie, Li; Wan-Qing, Yu; Ding, Xu; Feng, Liu; Wei, Wang
2009-01-01
Using numerical simulations, we explore the mechanism for propagation of rate signals through a 10-layer feedforward network composed of Hodgkin–Huxley (HH) neurons with sparse connectivity. When white noise is afferent to the input layer, neuronal firing becomes progressively more synchronous in successive layers and synchrony is well developed in deeper layers owing to the feedforward connections between neighboring layers. The synchrony ensures the successful propagation of rate signals through the network when the synaptic conductance is weak. As the synaptic time constant τ_syn varies, coherence resonance is observed in the network activity due to the intrinsic property of HH neurons. This makes the output firing rate single-peaked as a function of τ_syn, suggesting that the signal propagation can be modulated by the synaptic time constant. These results are consistent with experimental results and advance our understanding of how information is processed in feedforward networks. (cross-disciplinary physics and related areas of science and technology)
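A drastically reduced caricature of rate propagation through a feedforward chain replaces each HH layer with a sigmoidal gain function: with a suitable gain and bias, a strong input rate is sustained across layers while a weak one fades out. The function and all parameter values below are our illustrative assumptions, not the paper's HH model.

```python
import math

def propagate_rate(r_in, n_layers, weight, bias=-3.0):
    """Propagate a scalar firing rate through a feedforward chain.

    Each layer applies a sigmoidal gain to its weighted input -- a
    caricature of rate transmission, not the HH dynamics of the paper."""
    gain = lambda x: 1.0 / (1.0 + math.exp(-x))
    rates = [r_in]
    for _ in range(n_layers):
        rates.append(gain(weight * rates[-1] + bias))
    return rates

strong = propagate_rate(0.9, 10, weight=6.0)  # high rate is sustained
weak = propagate_rate(0.1, 10, weight=6.0)    # low rate fades toward zero
```

With weight 6 and bias -3 the layer map is bistable (separatrix near 0.5), so the chain acts as a discriminator between strong and weak input rates.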
Heterogeneous delay-induced asynchrony and resonance in a small-world neuronal network system
Yu, Wen-Ting; Tang, Jun; Ma, Jun; Yang, Xianqing
2016-06-01
A neuronal network often involves time delay caused by the finite signal propagation time in a given biological network. In a biological system, this time delay is not homogeneous. The heterogeneous delay-induced asynchrony and resonance in a noisy small-world neuronal network are numerically studied in this work by calculating a synchronization measure and the spike interval distribution. We focus on three different delay conditions: double-valued delay, triple-valued delay, and Gaussian-distributed delay. Our results show the following: 1) the heterogeneity in delay results in asynchronous firing in the neuronal network, and 2) maximum synchronization can be achieved through resonance, provided that the delay values are integer or half-integer multiples of each other.
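One common synchronization measure of the kind used in such studies is the ratio of the variance of the population-averaged signal to the mean variance of the individual signals; the abstract does not specify its exact measure, so the sketch below is a generic, hedged illustration with inputs of our choosing.

```python
import math, random

def population_synchrony(signals):
    """Variance of the population-averaged signal divided by the mean
    variance of the individual signals: ~1 for full synchrony, ~1/N for
    N independent traces."""
    n, T = len(signals), len(signals[0])
    mean_sig = [sum(s[t] for s in signals) / n for t in range(T)]

    def var(x):
        m = sum(x) / len(x)
        return sum((v - m) ** 2 for v in x) / len(x)

    return var(mean_sig) / (sum(var(s) for s in signals) / n)

random.seed(3)
common = [math.sin(0.1 * t) for t in range(500)]
synchronous = [list(common) for _ in range(20)]  # identical traces
independent = [[random.gauss(0.0, 1.0) for _ in range(500)] for _ in range(20)]
```

Identical traces give a synchrony value of exactly 1, while 20 independent noise traces give a value near 1/20, so the measure cleanly separates the two regimes.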
International Nuclear Information System (INIS)
Hao Yinghang; Gong, Yubing; Wang Li; Ma Xiaoguang; Yang Chuanlu
2011-01-01
Research highlights: • Single synchronization transition for gap-junctional coupling. • Multiple synchronization transitions for chemical synaptic coupling. • Gap junctions and chemical synapses have different impacts on synchronization transition. • Chemical synapses may play a dominant role in neurons' information processing. - Abstract: In this paper, we have studied time delay- and coupling strength-induced synchronization transitions in scale-free modified Hodgkin-Huxley (MHH) neuron networks with gap-junctions and chemical synaptic coupling. It is shown that the synchronization transitions are much different for these two coupling types. For gap-junctions, the neurons exhibit a single synchronization transition with time delay and coupling strength, while for chemical synapses, there are multiple synchronization transitions with time delay, and the synchronization transition with coupling strength is dependent on the time delay lengths. For short delays we observe a single synchronization transition, whereas for long delays the neurons exhibit multiple synchronization transitions as the coupling strength is varied. These results show that gap junctions and chemical synapses have different impacts on the pattern formation and synchronization transitions of the scale-free MHH neuronal networks, and chemical synapses, compared to gap junctions, may play a dominant and more active function in the firing activity of the networks. These findings would be helpful for further understanding the roles of gap junctions and chemical synapses in the firing dynamics of neuronal networks.
Directory of Open Access Journals (Sweden)
Sinisa Pajevic
2009-01-01
Full Text Available Cascading activity is commonly found in complex systems with directed interactions such as metabolic networks, neuronal networks, or disease spreading in social networks. Substantial insight into a system's organization can be obtained by reconstructing the underlying functional network architecture from the observed activity cascades. Here we focus on Bayesian approaches and reduce their computational demands by introducing the Iterative Bayesian (IB) and Posterior Weighted Averaging (PWA) methods. We introduce a special case of PWA, cast in nonparametric form, which we call the normalized count (NC) algorithm. NC efficiently reconstructs random and small-world functional network topologies and architectures from subcritical, critical, and supercritical cascading dynamics and yields significant improvements over commonly used correlation methods. With experimental data, NC identified a functional and structural small-world topology and its corresponding traffic in cortical networks with neuronal avalanche dynamics.
Inference of neuronal network spike dynamics and topology from calcium imaging data
Directory of Open Access Journals (Sweden)
Henry eLütcke
2013-12-01
Full Text Available Two-photon calcium imaging enables functional analysis of neuronal circuits by inferring action potential (AP) occurrence ('spike trains') from cellular fluorescence signals. It remains unclear how experimental parameters such as signal-to-noise ratio (SNR) and acquisition rate affect spike inference and whether additional information about network structure can be extracted. Here we present a simulation framework for quantitatively assessing how well spike dynamics and network topology can be inferred from noisy calcium imaging data. For simulated AP-evoked calcium transients in neocortical pyramidal cells, we analyzed the quality of spike inference as a function of SNR and data acquisition rate using a recently introduced peeling algorithm. Given experimentally attainable values of SNR and acquisition rate, neural spike trains could be reconstructed accurately and with up to millisecond precision. We then applied statistical neuronal network models to explore how remaining uncertainties in spike inference affect estimates of network connectivity and topological features of network organization. We define the experimental conditions suitable for inferring whether the network has a scale-free structure and determine how well hub neurons can be identified. Our findings provide a benchmark for future calcium imaging studies that aim to reliably infer neuronal network properties.
International Nuclear Information System (INIS)
Arana, E.; Marti-Bonmati, L.; Bautista, D.; Paredes, R.
1998-01-01
To study the utility of logistic regression and neural networks in the diagnosis of cranial hemangiomas. Fifteen patients presenting hemangiomas were selected from a total of 167 patients with cranial lesions. All were evaluated by plain radiography and computed tomography (CT). Nineteen variables in their medical records were reviewed. Logistic regression and neural network models were constructed and validated by the jackknife (leave-one-out) approach. The yields of the two models were compared by means of ROC curves, using the area under the curve (Az) as parameter. Seven men and 8 women presented hemangiomas. The mean age of these patients was 38.4 ± 15.4 years (mean ± standard deviation). Logistic regression identified as significant variables the shape, soft tissue mass, and periosteal reaction. The neural network lent more importance to the existence of ossified matrix, ruptured cortical vein, and the mixed calcified-blastic (trabeculated) pattern. The neural network showed a greater yield than logistic regression (Az = 0.9409 ± 0.004 versus 0.7211 ± 0.075; p < 0.001). The neural network discloses hidden interactions among the variables, providing a higher yield in the characterization of cranial hemangiomas and constituting a medical diagnostic aid. (Author) 29 refs
Liu, Chen; Wang, Jiang; Yu, Haitao; Deng, Bin; Wei, Xile; Tsang, Kaiming; Chan, Wailok
2013-09-01
The combined effects of the information transmission delay and the ratio of electrical to chemical synapses on the synchronization transitions in a hybrid modular neuronal network are investigated in this paper. Numerical results show that the synchronization of neuron activities can be either promoted or destroyed as the information transmission delay increases, irrespective of the probability of electrical synapses in the hybrid-synaptic network. Interestingly, when the number of electrical synapses exceeds a certain level, further increasing their proportion can obviously enhance the spatiotemporal synchronization transitions. Moreover, the coupling strength has a significant effect on the synchronization transition, and the dominant synapse type always has the more profound effect on the emergence of synchronous behaviors. Furthermore, the results for the modular neuronal network structures demonstrate that excessive partitioning of the modular network may result in a dramatic detriment to neuronal synchronization. Considering that information transmission delays are inevitable in intra- and inter-neuronal network communication, the obtained results may have important implications for the exploration of the synchronization mechanism underlying several neural system diseases such as Parkinson's disease.
Schmitt, Michael
2004-09-01
We study networks of spiking neurons that use the timing of pulses to encode information. Nonlinear interactions model the spatial groupings of synapses on the neural dendrites and describe the computations performed at local branches. Within a theoretical framework of learning we analyze the question of how many training examples these networks must receive to be able to generalize well. Bounds for this sample complexity of learning can be obtained in terms of a combinatorial parameter known as the pseudodimension. This dimension characterizes the computational richness of a neural network and is given in terms of the number of network parameters. Two types of feedforward architectures are considered: constant-depth networks and networks of unconstrained depth. We derive asymptotically tight bounds for each of these network types. Constant-depth networks are shown to have an almost linear pseudodimension, whereas the pseudodimension of general networks is quadratic. Networks of spiking neurons that use temporal coding are becoming increasingly important in practical tasks such as computer vision, speech recognition, and motor control. The question of how well these networks generalize from a given set of training examples is a central issue for their successful application as adaptive systems. The results show that, although coding and computation in these networks are quite different and in many cases more powerful, their generalization capabilities are at least as good as those of traditional neural network models.
Directory of Open Access Journals (Sweden)
O. О. Sudakov
2015-12-01
Full Text Available In the present work, the Ukrainian National Grid (UNG) infrastructure was applied to the investigation of synchronization in large networks of interacting neurons. This application is important for solving modern neuroscience problems related to the mechanisms of nervous system activities (memory, cognition, etc.) and nervous pathologies (epilepsy, Parkinsonism, etc.). Modern non-linear dynamics theories and applications provide a powerful basis for computer simulations of biological neuronal networks and the investigation of phenomena whose mechanisms could hardly be clarified by other approaches. A cubic millimeter of brain tissue contains about 10^5 neurons, so realistic (Hodgkin-Huxley) and phenomenological (Kuramoto-Sakaguchi, FitzHugh-Nagumo, etc.) model simulations require consideration of large numbers of neurons.
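Among the phenomenological models listed above, the Kuramoto model is the simplest to sketch: each oscillator's phase is pulled toward the population mean field, and synchronization is quantified by the order parameter r = |⟨e^{iθ}⟩|. The following toy Euler integration (our parameter choices, far below grid scale) shows coupling driving r toward 1.

```python
import math, cmath, random

def kuramoto_order(n=50, K=2.0, steps=2000, dt=0.01, spread=0.1, seed=0):
    """Euler integration of the all-to-all Kuramoto model in mean-field
    form; returns the final order parameter r = |<exp(i*theta)>|."""
    rng = random.Random(seed)
    theta = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(n)]
    omega = [rng.gauss(0.0, spread) for _ in range(n)]  # natural frequencies
    for _ in range(steps):
        field = sum(cmath.exp(1j * t) for t in theta) / n
        r, psi = abs(field), cmath.phase(field)
        theta = [t + dt * (w + K * r * math.sin(psi - t))
                 for t, w in zip(theta, omega)]
    return abs(sum(cmath.exp(1j * t) for t in theta) / n)

r_coupled = kuramoto_order(K=2.0)    # strong coupling: near-full synchrony
r_uncoupled = kuramoto_order(K=0.0)  # no coupling: phases stay scattered
```

With coupling well above the critical value the order parameter approaches 1, while without coupling it stays at the O(1/√n) level expected for scattered phases.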
Spiral Waves and Multiple Spatial Coherence Resonances Induced by Colored Noise in Neuronal Network
International Nuclear Information System (INIS)
Tang Zhao; Li Yuye; Xi Lei; Jia Bing; Gu Huaguang
2012-01-01
Gaussian colored noise induced spatial patterns and spatial coherence resonances in a square lattice neuronal network composed of Morris-Lecar neurons are studied. Each neuron is at a resting state near a saddle-node bifurcation on an invariant circle, coupled to its nearest neighbors by electrical coupling. Spiral waves with different structures and disordered spatial structures can be alternately induced within a large range of noise intensity. By calculating the spatial structure function and the signal-to-noise ratio (SNR), it is found that SNR values are higher when the spiral structures are simple and lower when the spatial patterns are complex or disordered. The SNR manifests multiple local maxima, indicating that the colored noise can induce multiple spatial coherence resonances. The maximal SNR values decrease as the correlation time of the noise increases. These results not only provide an example of multiple resonances, but also show that Gaussian colored noise plays a constructive role in neuronal networks. (general)
From in silico astrocyte cell models to neuron-astrocyte network models: A review.
Oschmann, Franziska; Berry, Hugues; Obermayer, Klaus; Lenk, Kerstin
2018-01-01
The idea that astrocytes may be active partners in synaptic information processing has recently emerged from abundant experimental reports. Because of their spatial proximity to neurons and their bidirectional communication with them, astrocytes are now considered an important third element of the synapse. Astrocytes integrate and process synaptic information and, by doing so, generate cytosolic calcium signals that are believed to reflect neuronal transmitter release. Moreover, they regulate neuronal information transmission by releasing gliotransmitters into the synaptic cleft, affecting both pre- and postsynaptic receptors. Concurrent with the first experimental reports of the astrocytic impact on neural network dynamics, computational models describing astrocytic functions have been developed. In this review, we give an overview of the published computational models of astrocytic functions, from single-cell dynamics to the tripartite synapse level and network models of astrocytes and neurons. Copyright © 2017 Elsevier Inc. All rights reserved.
Delay-enhanced coherence of spiral waves in noisy Hodgkin-Huxley neuronal networks
International Nuclear Information System (INIS)
Wang Qingyun; Perc, Matjaz; Duan Zhisheng; Chen Guanrong
2008-01-01
We study the spatial dynamics of spiral waves in noisy Hodgkin-Huxley neuronal ensembles evoked by different information transmission delays and network topologies. In classical settings of coherence resonance the intensity of noise is fine-tuned so as to optimize the system's response. Here, we keep the noise intensity constant, and instead vary the length of the information transmission delay amongst coupled neurons. We show that there exists an intermediate transmission delay by which the spiral waves are optimally ordered, hence indicating the existence of delay-enhanced coherence of spatial dynamics in the examined system. Additionally, we examine the robustness of this phenomenon as the diffusive interaction topology changes towards the small-world type, and discover that shortcut links amongst distant neurons hinder the emergence of coherent spiral waves irrespective of the transmission delay length. The presented results thus provide insights that could facilitate the understanding of the effects of information transmission delay in realistic neuronal networks.
The pairwise phase consistency in cortical network and its relationship with neuronal activation
Directory of Open Access Journals (Sweden)
Wang Daming
2017-01-01
Gamma-band neuronal oscillations and synchronization in the range of 30-90 Hz are a ubiquitous phenomenon across numerous brain areas and various species, and are correlated with many cognitive functions. The phase of the oscillation, as one aspect of the CTC (Communication through Coherence) hypothesis, underlies various functions in feature coding, memory processing and behavioural performance. The PPC (Pairwise Phase Consistency), an improved coherence measure, statistically quantifies the strength of phase synchronization. In order to evaluate the PPC and its relationships with the input stimulus, neuronal activation and firing rate, a simplified spiking neuronal network is constructed to simulate orientation columns in primary visual cortex. If the input stimulus matches the preferred orientation of a column, neurons within that column attain a higher firing rate and stronger neuronal activation, which consequently engenders higher PPC values, with higher PPC corresponding to higher firing rate. In addition, we investigate the PPC in a time-resolved analysis with a sliding window.
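The PPC statistic itself is straightforward to compute from a set of spike phases. The sketch below is an illustration (not the authors' code) of the standard definition: the mean cosine of the phase difference over all distinct pairs, evaluated efficiently via the resultant vector; the function name is ours.

```python
import numpy as np

def pairwise_phase_consistency(phases):
    """Pairwise Phase Consistency: mean of cos(theta_i - theta_j) over all
    distinct pairs. Computed via the identity
    sum_{i<j} cos(theta_i - theta_j) = (|sum_k e^{i theta_k}|^2 - n) / 2,
    which avoids the O(n^2) pair loop."""
    phases = np.asarray(phases)
    n = len(phases)
    z = np.exp(1j * phases).sum()          # resultant vector
    return (np.abs(z) ** 2 - n) / (n * (n - 1))

rng = np.random.default_rng(0)
# perfectly locked phases give PPC = 1; uniform phases give PPC near 0
print(pairwise_phase_consistency(np.zeros(100)))               # → 1.0
print(pairwise_phase_consistency(rng.uniform(0, 2 * np.pi, 5000)))
```

Unlike the squared resultant length, this pairwise estimator is not biased by the number of spikes, which is what makes it attractive for comparing conditions with different firing rates.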
Hu, Liang; Wang, Qin; Qin, Zhen; Su, Kaiqi; Huang, Liquan; Hu, Ning; Wang, Ping
2015-04-15
5-hydroxytryptamine (5-HT) is an important neurotransmitter regulating emotions and related behaviors in mammals. Effective and convenient methods to detect and monitor 5-HT are in demand for the investigation of neuronal networks. In this study, hippocampal neuronal networks (HNNs) endogenously expressing 5-HT receptors were employed as sensing elements to build an in vitro neuronal network-based biosensor. The electrophysiological characteristics were analyzed at both the neuron and network levels. Firing rates and amplitudes were derived from the recorded signals to determine the biosensor response characteristics. The experimental results demonstrate a dose-dependent inhibitory effect of 5-HT on hippocampal neuron activities, indicating the effectiveness of this hybrid biosensor in detecting 5-HT over a response range from 0.01 μmol/L to 10 μmol/L. In addition, cross-correlation analysis of HNN activities suggests that 5-HT reversibly weakens HNN connectivity, providing additional specificity for this biosensor in detecting 5-HT. Moreover, 5-HT-induced spatiotemporal firing pattern alterations could be monitored at the neuron and network levels simultaneously in a convenient and direct way. With these merits, this neuronal network-based biosensor promises to be a valuable platform for the study of neurotransmitters in vitro. Copyright © 2014 Elsevier B.V. All rights reserved.
A Note on Some Numerical Approaches to Solve a θ˙ Neuron Networks Model
Directory of Open Access Journals (Sweden)
Samir Kumar Bhowmik
2014-01-01
Space-time integration plays an important role in analyzing scientific and engineering models. In this paper, we consider an integrodifferential equation that arises in modeling θ˙ neuron networks. We investigate various schemes for the time discretization of a theta-neuron model: we use collocation and the midpoint quadrature formula for space integration and then apply various time integration schemes to obtain a fully discrete system. We present some computational results to demonstrate the schemes.
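To illustrate the kind of time stepping discussed here, the sketch below integrates a single theta neuron, dθ/dt = (1 − cos θ) + (1 + cos θ)I, with the explicit midpoint (RK2) rule. This is a toy for a single cell, not the paper's collocation scheme for the network integrodifferential equation; all names and parameters are ours.

```python
import numpy as np

def theta_neuron_midpoint(I=0.1, dt=1e-3, T=50.0):
    """Integrate dθ/dt = (1 - cos θ) + (1 + cos θ)·I with the explicit
    midpoint rule; a spike is registered each time θ crosses π.
    For constant I > 0 the exact firing period is π/√I."""
    f = lambda th: (1 - np.cos(th)) + (1 + np.cos(th)) * I
    theta, spikes = -np.pi, 0
    for _ in range(int(T / dt)):
        half = theta + 0.5 * dt * f(theta)   # midpoint predictor
        new = theta + dt * f(half)           # corrector step
        if theta < np.pi <= new:             # firing phase crossed
            spikes += 1
            new -= 2 * np.pi                 # wrap back to (-π, π]
        theta = new
    return spikes

# period π/√0.1 ≈ 9.9, so roughly 5 spikes are expected over T = 50
print(theta_neuron_midpoint())
```

The midpoint rule is second-order accurate, so with dt = 1e-3 the spike times agree with the analytic period π/√I to well below one time step.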
International Nuclear Information System (INIS)
Liu Jie; Bibari, Olivier; Marchand, Gilles; Benabid, Alim-Louis; Sauter-Starace, Fabien; Appaix, Florence; De Waard, Michel
2011-01-01
Carbon nanotube substrates are promising candidates for biological applications and devices. Interfacing of these carbon nanotubes with neurons can be controlled by chemical modifications. In this study, we investigated how chemical surface functionalization of multi-walled carbon nanotube arrays (MWNT-A) influences neuronal adhesion and network organization. Functionalization of MWNT-A dramatically modifies the length of neurite fascicles, cluster inter-connection success rate, and the percentage of neurites that escape from the clusters. We propose that chemical functionalization represents a method of choice for developing applications in which neuronal patterning on MWNT-A substrates is required.
Cytokines and cytokine networks target neurons to modulate long-term potentiation.
Prieto, G Aleph; Cotman, Carl W
2017-04-01
Cytokines play crucial roles in the communication between brain cells including neurons and glia, as well as in the brain-periphery interactions. In the brain, cytokines modulate long-term potentiation (LTP), a cellular correlate of memory. Whether cytokines regulate LTP by direct effects on neurons or by indirect mechanisms mediated by non-neuronal cells is poorly understood. Elucidating neuron-specific effects of cytokines has been challenging because most brain cells express cytokine receptors. Moreover, cytokines commonly increase the expression of multiple cytokines in their target cells, thus increasing the complexity of brain cytokine networks even after single-cytokine challenges. Here, we review evidence on both direct and indirect-mediated modulation of LTP by cytokines. We also describe novel approaches based on neuron- and synaptosome-enriched systems to identify cytokines able to directly modulate LTP, by targeting neurons and synapses. These approaches can test multiple samples in parallel, thus allowing the study of multiple cytokines simultaneously. Hence, a cytokine networks perspective coupled with neuron-specific analysis may contribute to delineation of maps of the modulation of LTP by cytokines. Copyright © 2017 Elsevier Ltd. All rights reserved.
Directory of Open Access Journals (Sweden)
Benjamin eDummer
2014-09-01
A major source of random variability in cortical networks is the quasi-random arrival of presynaptic action potentials from many other cells. In network studies, as well as in studies of the response properties of single cells embedded in a network, synaptic background input is often approximated by Poissonian spike trains. However, the output statistics of the cells are in most cases far from Poisson. This is inconsistent with the assumption of similar spike-train statistics for pre- and postsynaptic cells in a recurrent network. Here we tackle this problem for the popular class of integrate-and-fire neurons and study self-consistent statistics of the input and output spectra of neural spike trains. Instead of actually using a large network, we use an iterative scheme in which we simulate a single neuron over several generations. In each generation, the neuron is stimulated with surrogate stochastic input that has statistics similar to the output of the previous generation. For the surrogate input, we employ two distinct approximations: (i) a superposition of renewal spike trains with the same interspike-interval density as observed in the previous generation, and (ii) a Gaussian current with a power spectrum proportional to that observed in the previous generation. For input parameters that correspond to balanced input in the network, both the renewal and the Gaussian iteration procedures converge quickly and yield comparable results for the self-consistent spike-train power spectrum. We compare our results to large-scale simulations of a random, sparsely connected network of leaky integrate-and-fire neurons (Brunel, J. Comp. Neurosci. 2000) and show that in the asynchronous regime, close to a state of balanced synaptic input from the network, our iterative schemes provide excellent approximations to the autocorrelation of spike trains in the recurrent network.
Neurotransmitters and Integration in Neuronal-Astroglial Networks
Czech Academy of Sciences Publication Activity Database
Verkhratsky, Alexei; Rodríguez Arellano, Jose Julio; Parpura, V.
2012-01-01
Roč. 37, č. 11 (2012), s. 2326-2338 ISSN 0364-3190 R&D Projects: GA ČR GA309/09/1696; GA ČR GA305/08/1384 Institutional research plan: CEZ:AV0Z50390703 Keywords : astrocyte * calcium * neurone Subject RIV: FH - Neurology Impact factor: 2.125, year: 2012
Predictive coding of dynamical variables in balanced spiking networks.
Boerlin, Martin; Machens, Christian K; Denève, Sophie
2013-01-01
Two observations about the cortex have puzzled neuroscientists for a long time. First, neural responses are highly variable. Second, the level of excitation and inhibition received by each neuron is tightly balanced at all times. Here, we demonstrate that both properties are necessary consequences of neural networks that represent information efficiently in their spikes. We illustrate this insight with spiking networks that represent dynamical variables. Our approach is based on two assumptions: We assume that information about dynamical variables can be read out linearly from neural spike trains, and we assume that neurons only fire a spike if that improves the representation of the dynamical variables. Based on these assumptions, we derive a network of leaky integrate-and-fire neurons that is able to implement arbitrary linear dynamical systems. We show that the membrane voltage of the neurons is equivalent to a prediction error about a common population-level signal. Among other things, our approach allows us to construct an integrator network of spiking neurons that is robust against many perturbations. Most importantly, neural variability in our networks cannot be equated to noise. Despite exhibiting the same single unit properties as widely used population code models (e.g. tuning curves, Poisson distributed spike trains), balanced networks are orders of magnitude more reliable. Our approach suggests that spikes do matter when considering how the brain computes, and that the reliability of cortical representations could have been strongly underestimated.
Claus, Lena; Philippot, Camille; Griemsmann, Stephanie; Timmermann, Aline; Jabs, Ronald; Henneberger, Christian; Kettenmann, Helmut; Steinhäuser, Christian
2018-01-01
The ventral posterior nucleus of the thalamus plays an important role in somatosensory information processing. It contains elongated cellular domains called barreloids, which are the structural basis for the somatotopic organization of vibrissae representation. So far, the organization of glial networks in these barreloid structures and its modulation by neuronal activity has not been studied. We have developed a method to visualize thalamic barreloid fields in acute slices. Combining electrophysiology, immunohistochemistry, and electroporation in transgenic mice with cell type-specific fluorescence labeling, we provide the first structure-function analyses of barreloidal glial gap junction networks. We observed coupled networks, which comprised both astrocytes and oligodendrocytes. The spread of tracers or a fluorescent glucose derivative through these networks was dependent on neuronal activity and limited by the barreloid borders, which were formed by uncoupled or weakly coupled oligodendrocytes. Neuronal somata were distributed homogeneously across barreloid fields with their processes running in parallel to the barreloid borders. Many astrocytes and oligodendrocytes were not part of the panglial networks. Thus, oligodendrocytes are the cellular elements limiting the communicating panglial network to a single barreloid, which might be important to ensure proper metabolic support to active neurons located within a particular vibrissae signaling pathway. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Bi-directional astrocytic regulation of neuronal activity within a network
Directory of Open Access Journals (Sweden)
Susan Yu Gordleeva
2012-11-01
The concept of a tripartite synapse holds that astrocytes can affect both the pre- and postsynaptic compartments through the Ca2+-dependent release of gliotransmitters. Because astrocytic Ca2+ transients usually last for a few seconds, we assumed that astrocytic regulation of synaptic transmission may also occur on the scale of seconds. Here, we considered the basic physiological functions of tripartite synapses and investigated astrocytic regulation at the level of neural network activity. The firing dynamics of individual neurons in a spontaneously firing network was described by the Hodgkin-Huxley model. The neurons received excitatory synaptic input driven by Poisson spike trains with variable frequency. The mean-field concentration of the released neurotransmitter was used to describe the presynaptic dynamics. The amplitudes of the excitatory postsynaptic currents (PSCs) obeyed a gamma distribution. In our model, astrocytes depressed presynaptic release and enhanced postsynaptic currents. As a result, low-frequency synaptic input was suppressed while high-frequency input was amplified. Analysis of the neuron spiking frequency as an indicator of network activity revealed that tripartite synaptic transmission dramatically changed the local network operation compared to bipartite synapses. Specifically, the astrocytes supported homeostatic regulation of network activity by increasing or decreasing the firing of the neurons. Thus, astrocyte activation may drive a transition of the neural network into a bistable regime of activity with two stable firing levels and spontaneous transitions between them.
Li, Huiyan; Sun, Xiaojuan; Xiao, Jinghua
2015-01-01
In this paper, we investigate how clustering factors influence the spiking regularity of a neuronal network composed of subnetworks. To do so, we fix the averaged coupling probability and the averaged coupling strength, and take the cluster number M, the ratio of intra-connection probability to inter-connection probability R, and the ratio of intra-coupling strength to inter-coupling strength S as control parameters. From the obtained simulation results, we find that the spiking regularity of the neuronal networks varies little with changes of R and S when M is fixed. However, the cluster number M can reduce the spiking regularity to a low level when the uniform neuronal network's spiking regularity is at a high level. Combining the obtained results, we see that clustering factors have little influence on the spiking regularity when the total energy is fixed, which can be controlled by the averaged coupling strength and the averaged connection probability.
The influence of hubs in the structure of a neuronal network during an epileptic seizure
Rodrigues, Abner Cardoso; Cerdeira, Hilda A.; Machado, Birajara Soares
2016-02-01
In this work, we propose changes in the structure of a neuronal network with the intention of provoking strong synchronization to simulate episodes of epileptic seizure. Starting with a network of Izhikevich neurons, we slowly increase the number of connections in selected nodes in a controlled way, to produce (or not) hubs. We study how these structures alter the synchronization of spike firing intervals, for individual neurons as well as for mean values, as a function of the concentration of connections for random and non-random (hub) distributions. We also analyze how the post-ictal signal varies for the different distributions. We conclude that a network with hubs is more appropriate to represent an epileptic state.
Yang, Xiaoping; Chen, Xueying; Xia, Riting; Qian, Zhihong
2018-04-19
Aiming at the problem of network congestion caused by the large number of data transmissions in wireless routing nodes of a wireless sensor network (WSN), this paper puts forward an algorithm based on standard particle swarm-neural PID congestion control (PNPID). First, PID control theory was applied to the queue management of wireless sensor nodes. Then, the self-learning and self-organizing ability of neurons was used to achieve online adjustment of the weights, adapting the proportional, integral and differential parameters of the PID controller. Finally, standard particle swarm optimization was used for the online optimization of the initial proportional, integral and differential parameters and the neuron learning rates of the neural PID (NPID) algorithm. This paper describes experiments and simulations which show that the PNPID algorithm effectively stabilized queue length near the expected value. At the same time, network performance such as throughput and packet loss rate was greatly improved, which alleviated network congestion and improved network QoS.
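A single-neuron incremental PID controller of the kind described can be sketched as follows. This is a generic illustration with made-up gains: the particle-swarm initialization and the WSN queue model of the paper are omitted, and a toy first-order plant stands in for the queue dynamics.

```python
class NeuronPID:
    """Single-neuron incremental PID sketch: the neuron's three inputs are
    the classic P/I/D combinations of the error, and its weights (playing
    the roles of Kp, Ki, Kd) are adapted online by a supervised Hebb rule.
    All gains here are illustrative, not from the paper."""
    def __init__(self, K=0.3, eta=(0.4, 0.35, 0.4)):
        self.K, self.eta = K, eta
        self.w = [0.3, 0.3, 0.3]
        self.e = [0.0, 0.0, 0.0]          # e(k), e(k-1), e(k-2)
        self.u = 0.0

    def step(self, error):
        self.e = [error, self.e[0], self.e[1]]
        x = [self.e[0] - self.e[1],                    # proportional part
             self.e[0],                                # integral part
             self.e[0] - 2 * self.e[1] + self.e[2]]    # derivative part
        for i in range(3):                 # supervised Hebb weight update
            self.w[i] += self.eta[i] * error * self.u * x[i]
        s = sum(abs(wi) for wi in self.w) or 1.0       # normalize weights
        self.u += self.K * sum(wi / s * xi for wi, xi in zip(self.w, x))
        return self.u

# toy first-order plant standing in for the queue length dynamics
pid, y, target = NeuronPID(), 0.0, 1.0
for _ in range(400):
    u = pid.step(target - y)
    y = 0.9 * y + 0.1 * u
print(round(y, 3))   # settles near the setpoint 1.0
```

The incremental form (the controller outputs a change in u rather than u itself) combined with weight normalization keeps the effective loop gain bounded while the Hebb rule shifts the balance among the P, I and D terms online.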
Mechanisms of Winner-Take-All and Group Selection in Neuronal Spiking Networks.
Chen, Yanqing
2017-01-01
A major function of central nervous systems is to discriminate different categories or types of sensory input. Neuronal networks accomplish such tasks by learning different sensory maps at several stages of the neural hierarchy, such that different neurons fire selectively to reflect different internal or external patterns and states. The exact mechanisms of such map formation processes in the brain are not completely understood. Here we study the mechanism by which a simple recurrent/reentrant neuronal network accomplishes group selection and discrimination of different inputs in order to generate sensory maps. We describe the conditions and mechanism of the transition from a rhythmic epileptic state (in which all neurons fire synchronously and indiscriminately to any input) to a winner-take-all state in which only a subset of neurons fire for a specific input. We prove an analytic condition under which a stable bump solution and a winner-take-all state can emerge from the local recurrent excitation-inhibition interactions in a three-layer spiking network with distinct excitatory and inhibitory populations, and demonstrate the importance of surround inhibitory connection topology for the stability of dynamic patterns in spiking neural networks.
Spiral Wave in Small-World Networks of Hodgkin-Huxley Neurons
International Nuclear Information System (INIS)
Ma Jun; Zhang Cairong; Yang Lijian; Wu Ying
2010-01-01
The effects of small-world connectivity and noise on the formation and transition of spiral waves in networks of Hodgkin-Huxley neurons are investigated in detail. Some interesting results are found in our numerical studies. i) Quiescent neurons are activated to propagate electric signals to others by the generation and development of a spiral wave from a spiral seed in a small area. ii) A statistical factor is defined to describe the collective properties and the phase transition induced by the topology of the networks and by noise. iii) A stable rotating spiral wave can be generated and remains robust when the rewiring probability is below a certain threshold; otherwise, a spiral wave cannot develop from the spiral seed, and an existing stable rotating spiral wave breaks up. iv) Gaussian white noise is introduced on the membrane of the neurons to study the noise-induced phase transition of spiral waves in small-world networks of neurons. It is confirmed that Gaussian white noise plays an active role in supporting and developing spiral waves in networks of neurons, and the appearance of a smaller synchronization factor indicates a higher likelihood of inducing a spiral wave. (interdisciplinary physics and related areas of science and technology)
Network control principles predict neuron function in the Caenorhabditis elegans connectome
Yan, Gang; Vértes, Petra E.; Towlson, Emma K.; Chew, Yee Lian; Walker, Denise S.; Schafer, William R.; Barabási, Albert-László
2017-10-01
Recent studies on the controllability of complex systems offer a powerful mathematical framework to systematically explore the structure-function relationship in biological, social, and technological networks. Despite theoretical advances, we lack direct experimental proof of the validity of these widely used control principles. Here we fill this gap by applying a control framework to the connectome of the nematode Caenorhabditis elegans, allowing us to predict the involvement of each C. elegans neuron in locomotor behaviours. We predict that control of the muscles or motor neurons requires 12 neuronal classes, which include neuronal groups previously implicated in locomotion by laser ablation, as well as one previously uncharacterized neuron, PDB. We validate this prediction experimentally, finding that the ablation of PDB leads to a significant loss of dorsoventral polarity in large body bends. Importantly, control principles also allow us to investigate the involvement of individual neurons within each neuronal class. For example, we predict that, within the class of DD motor neurons, only three (DD04, DD05, or DD06) should affect locomotion when ablated individually. This prediction is also confirmed; single cell ablations of DD04 or DD05 specifically affect posterior body movements, whereas ablations of DD02 or DD03 do not. Our predictions are robust to deletions of weak connections, missing connections, and rewired connections in the current connectome, indicating the potential applicability of this analytical framework to larger and less well-characterized connectomes.
Directory of Open Access Journals (Sweden)
Alex eRoxin
2011-03-01
Neuronal network models often assume a fixed probability of connection between neurons. This assumption leads to random networks with binomial in-degree and out-degree distributions, which are relatively narrow. Here I study the effect of broad degree distributions on network dynamics by interpolating between a binomial and a truncated power-law distribution for the in-degree and out-degree independently. This is done both for an inhibitory network (I network) as well as for the recurrent excitatory connections in a network of excitatory and inhibitory neurons (EI network). In both cases, increasing the width of the in-degree distribution affects the global state of the network by driving transitions between asynchronous behavior and oscillations. This effect is reproduced in a simplified rate model which includes the heterogeneity in neuronal input due to the in-degree of cells. On the other hand, broadening the out-degree distribution is shown to increase the fraction of common inputs to pairs of neurons. This leads to increases in the amplitude of the cross-correlation (CC) of synaptic currents. In the case of the I network, despite strong oscillatory CCs in the currents, CCs of the membrane potential are low due to filtering and reset effects, leading to very weak CCs of the spike count. In the asynchronous regime of the EI network, broadening the out-degree increases the amplitude of CCs in the recurrent excitatory currents, while the CC of the total current is essentially unaffected, as are pairwise spiking correlations. This is due to a dynamic balance between excitatory and inhibitory synaptic currents. In the oscillatory regime, changes in the out-degree can have a large effect on spiking correlations and even on the qualitative dynamical state of the network.
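The interpolation idea can be illustrated by sampling in-degrees from a mixture of a binomial and a truncated power law. This is a hypothetical parametrization chosen for simplicity (in particular it does not match the means of the two distributions, and the paper's interpolation scheme differs in detail); all names are ours.

```python
import numpy as np

def sample_in_degrees(n=5000, k_mean=100, alpha=0.5, gamma=2.0,
                      k_max=500, seed=3):
    """Draw n in-degrees: with probability 1 - alpha from a binomial
    distribution (the classical fixed-connection-probability case), and
    with probability alpha from a truncated power law ~ k^(-gamma) on
    1 <= k <= k_max. alpha interpolates from narrow to broad."""
    rng = np.random.default_rng(seed)
    binom = rng.binomial(n - 1, k_mean / (n - 1), size=n)
    ks = np.arange(1, k_max + 1)
    p = ks.astype(float) ** -gamma
    plaw = rng.choice(ks, size=n, p=p / p.sum())
    return np.where(rng.random(n) < alpha, plaw, binom)

# the power-law mixture has a markedly wider spread than the binomial case
print(sample_in_degrees(alpha=0.0).std() < sample_in_degrees(alpha=1.0).std())
```

Feeding such degree sequences into a configuration-model wiring step would then give networks whose in-degree heterogeneity can be dialed up continuously, which is the knob the study varies.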
Role of Delays in Shaping Spatiotemporal Dynamics of Neuronal Activity in Large Networks
International Nuclear Information System (INIS)
Roxin, Alex; Brunel, Nicolas; Hansel, David
2005-01-01
We study the effect of delays on the dynamics of large networks of neurons. We show that delays give rise to a wealth of bifurcations and to a rich phase diagram, which includes oscillatory bumps, traveling waves, lurching waves, standing waves arising via a period-doubling bifurcation, aperiodic regimes, and regimes of multistability. We study the existence and the stability of the various dynamical patterns analytically and numerically in a simplified rate model as a function of the interaction parameters. The results derived in that framework allow us to understand the origin of the diversity of dynamical states observed in large networks of spiking neurons.
Connectivity, excitability and activity patterns in neuronal networks
International Nuclear Information System (INIS)
Le Feber, Joost; Stoyanova, Irina I; Chiappalone, Michela
2014-01-01
Extremely synchronized firing patterns, such as those observed in brain diseases like epilepsy, may result from excessive network excitability. Although network excitability is closely related to (excitatory) connectivity, a direct measure for network excitability remains unavailable. Several methods currently exist for estimating network connectivity, most of which are related to cross-correlation. An example is the conditional firing probability (CFP) analysis, which calculates the pairwise probability CFP(i,j) that electrode j records an action potential at time t = τ, given that electrode i recorded a spike at t = 0. However, electrode i often records multiple spikes within the analysis interval, and CFP values are biased by the ongoing dynamic state of the network. Here we show that in a linear approximation this bias may be removed by deconvolving CFP(i,j) with the autocorrelation of i (i.e. CFP(i,i)) to obtain the single pulse response SPR(i,j), the average response at electrode j to a single spike at electrode i. Thus, in a linear system SPRs would be independent of the dynamic network state. Nonlinear components of synaptic transmission, such as facilitation and short-term depression, will however still affect SPRs. Therefore SPRs provide a clean measure of network excitability. We used carbachol and ghrelin to moderately activate cultured cortical networks and thereby affect their dynamic state. Both neuromodulators transformed the bursting firing patterns of the isolated networks into more dispersed firing. We show that the influence of the dynamic state on SPRs is much smaller than the effect on CFPs, but not zero. The remaining difference reflects the alteration in network excitability. We conclude that SPRs are less contaminated by the dynamic network state and that mild excitation may decrease network excitability, possibly through short-term synaptic depression.
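The linear correction described here is a deconvolution, which can be sketched as a regularized division in the Fourier domain. The implementation below is an illustration on synthetic data (function names and the regularization are ours, not the authors' code):

```python
import numpy as np

def single_pulse_response(cfp_ij, cfp_ii, eps=1e-6):
    """Sketch of the linear correction: recover SPR(i,j) by deconvolving
    CFP(i,j) with the autocorrelation CFP(i,i), implemented as a
    Wiener-style regularized division in the Fourier domain (eps avoids
    dividing by near-zero spectral bins)."""
    F_ij = np.fft.rfft(cfp_ij)
    F_ii = np.fft.rfft(cfp_ii)
    H = F_ij * np.conj(F_ii) / (np.abs(F_ii) ** 2 + eps)
    return np.fft.irfft(H, n=len(cfp_ij))

# synthetic check: a known SPR convolved with CFP(i,i) is recovered
true_spr = np.zeros(256)
true_spr[5:20] = np.hanning(15)
cfp_ii = np.zeros(256)
cfp_ii[0], cfp_ii[30] = 1.0, 0.4          # self peak plus a burst "echo"
cfp_ij = np.fft.irfft(np.fft.rfft(true_spr) * np.fft.rfft(cfp_ii), n=256)
rec = single_pulse_response(cfp_ij, cfp_ii)
print(np.max(np.abs(rec - true_spr)) < 1e-3)   # → True
```

In the noiseless linear case the division is exact up to the regularization term; with measured CFPs the same eps plays the role of suppressing noisy spectral bins.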
Sun, Xiaojuan; Perc, Matjaž; Kurths, Jürgen
2017-05-01
In this paper, we study the effects of partial time delays on phase synchronization in Watts-Strogatz small-world neuronal networks. Our focus is on the impact of two parameters, namely the time delay τ and the probability of partial time delay p_delay, where the latter determines the probability with which a connection between two neurons is delayed. Our research reveals that partial time delays significantly affect phase synchronization in this system. In particular, partial time delays can either enhance or decrease phase synchronization, induce synchronization transitions with changes in the mean firing rate of neurons, and induce switching between synchronized neurons with period-1 firing and synchronized neurons with period-2 firing. Moreover, in comparison to a neuronal network in which all connections are delayed, we show that small partial time delay probabilities have a markedly different influence on the phase synchronization of neuronal networks.
Simulating large-scale spiking neuronal networks with NEST
Schücker, Jannis; Eppler, Jochen Martin
2014-01-01
The Neural Simulation Tool NEST [1, www.nest-simulator.org] is the simulator for spiking neural network models of the HBP that focuses on the dynamics, size and structure of neural systems rather than on the exact morphology of individual neurons. Its simulation kernel is written in C++ and it runs on computing hardware ranging from simple laptops to clusters and supercomputers with thousands of processor cores. The development of NEST is coordinated by the NEST Initiative [www.nest-initiative.or...
Traveling wave front solutions in lateral-excitatory neuronal networks
Directory of Open Access Journals (Sweden)
Sittipong Ruktamatakul
2008-05-01
In this paper, we discuss the shape of traveling wave front solutions to a neuronal model with a connection function of lateral-excitation type. This means that closely connected cells have an inhibitory influence, while cells that are more distant have an excitatory influence. We give results on the shape of the wave front solutions, which exhibit different shapes depending on the size of a threshold parameter.
Symbol manipulation and rule learning in spiking neuronal networks.
Fernando, Chrisantha
2011-04-21
It has been claimed that the productivity, systematicity and compositionality of human language and thought necessitate the existence of a physical symbol system (PSS) in the brain. Recent discoveries about temporal coding suggest a novel type of neuronal implementation of a physical symbol system. Furthermore, learning classifier systems provide a plausible algorithmic basis by which symbol re-write rules could be trained to undertake behaviors exhibiting systematicity and compositionality, using a kind of natural selection of re-write rules in the brain. We show how the core operation of a learning classifier system, namely the replication with variation of symbol re-write rules, can be implemented using spike-time dependent plasticity based supervised learning. As a whole, the aim of this paper is to integrate an algorithmic and an implementation level description of a neuronal symbol system capable of sustaining systematic and compositional behaviors. Previously proposed neuronal implementations of symbolic representations are compared with this new proposal. Copyright © 2011 Elsevier Ltd. All rights reserved.
Stochastic resonance on Newman-Watts networks of Hodgkin-Huxley neurons with local periodic driving
Energy Technology Data Exchange (ETDEWEB)
Ozer, Mahmut [Zonguldak Karaelmas University, Engineering Faculty, Department of Electrical and Electronics Engineering, 67100 Zonguldak (Turkey)], E-mail: mahmutozer2002@yahoo.com; Perc, Matjaz [University of Maribor, Faculty of Natural Sciences and Mathematics, Department of Physics, Koroska cesta 160, SI-2000 Maribor (Slovenia); Uzuntarla, Muhammet [Zonguldak Karaelmas University, Engineering Faculty, Department of Electrical and Electronics Engineering, 67100 Zonguldak (Turkey)
2009-03-02
We study the phenomenon of stochastic resonance on Newman-Watts small-world networks consisting of biophysically realistic Hodgkin-Huxley neurons with a tunable intensity of intrinsic noise via voltage-gated ion channels embedded in neuronal membranes. Importantly, the subthreshold periodic driving is introduced to a single neuron of the network, which thus acts as a pacemaker trying to impose its rhythm on the whole ensemble. We show that there exists an optimal intensity of intrinsic ion channel noise by which the outreach of the pacemaker extends optimally across the whole network. This stochastic resonance phenomenon can be further amplified via fine-tuning of the small-world network structure, and it also depends significantly on the coupling strength among neurons and the driving frequency of the pacemaker. In particular, we demonstrate that the noise-induced transmission of weak localized rhythmic activity peaks when the pacemaker frequency matches the intrinsic frequency of subthreshold oscillations. The implications of our findings for weak signal detection and information propagation across neural networks are discussed.
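The Newman-Watts topology used in this study can be sketched in a few lines: start from a regular ring lattice and add random shortcuts without removing any ring edges (a minimal stdlib-Python sketch; the function name, parameter values and dict-of-sets representation are my own assumptions, not the authors' code):

```python
import random

def newman_watts_ring(n, k=2, p=0.1, seed=1):
    """Ring lattice (each node linked to its k nearest neighbors on each side)
    plus random shortcuts added with probability p per lattice edge.
    Returns the adjacency as a dict of sets. Illustrative sketch only."""
    rng = random.Random(seed)
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(1, k + 1):          # regular ring edges
            adj[i].add((i + j) % n)
            adj[(i + j) % n].add(i)
    for i in range(n):                      # add shortcuts; never remove edges
        for j in range(1, k + 1):
            if rng.random() < p:
                t = rng.randrange(n)
                if t != i and t not in adj[i]:
                    adj[i].add(t)
                    adj[t].add(i)
    return adj

g = newman_watts_ring(100, k=2, p=0.2)
n_edges = sum(len(nbrs) for nbrs in g.values()) // 2
```

Unlike Watts-Strogatz rewiring, this construction only adds edges, so the underlying ring (and hence network connectedness) is always preserved.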
Directory of Open Access Journals (Sweden)
Khanh Dao Duc
2015-07-01
The dynamics of neuronal networks coupled by synaptic dynamics can sustain long periods of depolarization, lasting for hundreds of milliseconds, such as the Up states recorded during sleep or anesthesia. Yet the underlying mechanism driving these periods remains unclear. We show here, within a mean-field model, that the residence times of the neuronal membrane potential in cortical Up states do not follow a Poissonian law but present several peaks. Furthermore, the present modeling approach allows some information about the neuronal network connectivity to be extracted from the time-distribution histogram. Based on a synaptic-depression model, we find that these peaks, which can be observed in histograms of patch-clamp recordings, are not artifacts of electrophysiological measurements but rather an inherent property of the network dynamics. Analysis of the equations reveals a stable focus located close to the unstable limit cycle, delimiting a region that defines the Up state. The model further shows that the peaks observed in the Up-state time distribution are due to winding around the focus before escaping from the basin of attraction. Finally, using in vivo recordings of intracellular membrane potential, we recover from the peak distribution some information about the network connectivity. We conclude that it is possible to recover the network connectivity from the distribution of times that the neuronal membrane voltage spends in Up states.
Spiking, Bursting, and Population Dynamics in a Network of Growth Transform Neurons.
Gangopadhyay, Ahana; Chakrabartty, Shantanu
2017-04-27
This paper investigates the dynamical properties of a network of neurons, each of which implements an asynchronous mapping based on polynomial growth transforms. In the first part of this paper, we present a geometric approach for visualizing the dynamics of the network, where each of the neurons traverses a trajectory in a dual optimization space, whereas the network itself traverses a trajectory in an equivalent primal optimization space. We show that as the network learns to solve basic classification tasks, different choices of primal-dual mapping produce unique but interpretable neural dynamics like noise shaping, spiking, and bursting. While the proposed framework is general, in this paper we demonstrate its use for designing support vector machines (SVMs) that exhibit noise-shaping properties similar to those of ΣΔ modulators, and for designing SVMs that learn to encode information using spikes and bursts. It is demonstrated that the emergent switching, spiking, and burst dynamics produced by each neuron encodes its respective margin of separation from a classification hyperplane whose parameters are encoded by the network population dynamics. We believe that the proposed growth transform neuron model and the underlying geometric framework could serve as an important tool to connect well-established machine learning algorithms like SVMs to neuromorphic principles like spiking, bursting, population encoding, and noise shaping.
Efficient transmission of subthreshold signals in complex networks of spiking neurons.
Torres, Joaquin J; Elices, Irene; Marro, J
2015-01-01
We investigate the efficient transmission and processing of weak, subthreshold signals in a realistic neural medium in the presence of different levels of underlying noise. Assuming Hebbian weights for maximal synaptic conductances (which naturally balance the network with excitatory and inhibitory synapses) and considering short-term synaptic plasticity affecting such conductances, we found different dynamic phases in the system. These include a memory phase where populations of neurons remain synchronized, an oscillatory phase where transitions between different synchronized populations of neurons appear, and an asynchronous or noisy phase. When a weak stimulus is applied to each neuron while the level of noise in the medium is increased, we found efficient transmission of such stimuli around the transition and critical points separating the different phases, at well-defined levels of stochasticity in the system. We proved that this intriguing phenomenon is quite robust, as it occurs in different situations including several types of synaptic plasticity, different types and numbers of stored patterns, and diverse network topologies, namely diluted networks and complex topologies such as scale-free and small-world networks. We conclude that the robustness of the phenomenon in different realistic scenarios, including spiking neurons, short-term synaptic plasticity and complex network topologies, makes it very likely that it could also occur in actual neural systems, as recent psychophysical experiments suggest.
Stochastic Wilson–Cowan models of neuronal network dynamics with memory and delay
International Nuclear Information System (INIS)
Goychuk, Igor; Goychuk, Andriy
2015-01-01
We consider a simple Markovian class of stochastic Wilson–Cowan type models of neuronal network dynamics, which incorporates stochastic delay caused by the existence of a refractory period of neurons. From the point of view of the dynamics of the individual elements, we are dealing with a network of non-Markovian stochastic two-state oscillators with memory, which are coupled globally in a mean-field fashion. This interrelation of a higher-dimensional Markovian and a lower-dimensional non-Markovian dynamics is discussed in its relevance to the general problem of the network dynamics of complex elements possessing memory. The simplest model of this class is provided by a three-state Markovian neuron with one refractory state, which causes a firing delay with an exponentially decaying memory within the two-state reduced model. This basic model is used to study critical avalanche dynamics (noise-sustained criticality) in a balanced feedforward network consisting of excitatory and inhibitory neurons. Such avalanches emerge due to network-size-dependent noise (mesoscopic noise). Numerical simulations reveal an intermediate power law in the distribution of avalanche sizes with a critical exponent around −1.16. We show that this power law is robust under variation of the refractory time over several orders of magnitude. However, the avalanche time distribution is biexponential and does not reflect any genuine power-law dependence.
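The three-state Markovian neuron described above (quiescent, firing, refractory, with an exponentially distributed refractory period) can be sketched as a simple Markov chain. This is a generic illustration, not the authors' model; the transition probabilities and the single-step spike duration are my own assumptions:

```python
import random

def simulate_three_state(steps, p_fire=0.05, p_rec=0.2, seed=7):
    """Three-state Markov neuron: quiescent -> firing -> refractory -> quiescent.
    p_fire: per-step firing probability when quiescent;
    p_rec:  per-step recovery probability, giving a geometrically
            (discrete-time exponentially) distributed refractory period,
            i.e. an exponentially decaying memory in the reduced model.
    Returns the list of visited states. Illustrative sketch only."""
    rng = random.Random(seed)
    state, trace = "quiescent", []
    for _ in range(steps):
        trace.append(state)
        if state == "quiescent" and rng.random() < p_fire:
            state = "firing"
        elif state == "firing":            # a spike lasts one step here
            state = "refractory"
        elif state == "refractory" and rng.random() < p_rec:
            state = "quiescent"
    return trace

trace = simulate_three_state(10000)
rate = trace.count("firing") / len(trace)   # mean firing rate (spikes/step)
```

Eliminating the refractory state yields the two-state oscillator with memory that the abstract refers to, at the cost of non-Markovian (history-dependent) transition statistics.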
Self-organized criticality in a network of interacting neurons
Cowan, J.D.; Neuman, J.; Kiewiet, B.; van Drongelen, W.
2013-01-01
This paper contains an analysis of a simple neural network that exhibits self-organized criticality. Such criticality follows from combining a simple neural network that has an excitatory feedback loop generating bistability with an anti-Hebbian synapse in its input pathway.
Alvarellos-González, Alberto; Pazos, Alejandro; Porto-Pazos, Ana B.
2012-01-01
The importance of astrocytes, one part of the glial system, for information processing in the brain has recently been demonstrated. Regarding information processing in multilayer connectionist systems, it has been shown that systems which include artificial neurons and astrocytes (Artificial Neuron-Glia Networks) have well-known advantages over identical systems including only artificial neurons. Since the actual impact of astrocytes on neural network function is unknown, we have investigated, using computational models, different astrocyte-neuron interactions for information processing; different neuron-glia algorithms have been implemented for training and validation of multilayer Artificial Neuron-Glia Networks oriented toward solving classification problems. The results of the tests performed suggest that all the algorithms modelling astrocyte-induced synaptic potentiation improved artificial neural network performance, but their efficacy depended on the complexity of the problem. PMID:22649480
International Nuclear Information System (INIS)
Gong Yubing; Xie Yanhang; Lin Xiu; Hao Yinghang; Ma Xiaoguang
2010-01-01
Research highlights: → Chemical delay and chemical coupling can tame chaotic bursting. → Chemical delay-induced transitions from bursting synchronization to intermittent multiple spiking synchronizations. → Chemical coupling-induced delay-dependent firing transitions. - Abstract: Chemical synaptic connections are more common than electrical ones in neurons, and the information transmission delay is especially significant for synapses of the chemical type. In this paper, we report a phenomenon of ordering spatiotemporal chaos and synchronization transitions by the delays and coupling through chemical synapses of modified Hodgkin-Huxley (MHH) neurons on scale-free networks. As the delay τ is increased, the neurons exhibit transitions from bursting synchronization (BS) to intermittent multiple spiking synchronizations (SS). As the coupling g_syn is increased, the neurons exhibit different types of firing transitions, depending on the value of τ. For a smaller τ, there are transitions from spatiotemporal chaotic bursting (SCB) to BS or SS; for a larger τ, there are transitions from SCB to intermittent multiple SS. These findings show that the delays and coupling through chemical synapses can tame the chaotic firings and repeatedly enhance the firing synchronization of neurons, and hence could play important roles in the firing activity of neurons on scale-free networks.
Directory of Open Access Journals (Sweden)
Wayne Croft
2015-01-01
The capacity of synaptic networks to express activity-dependent changes in strength and connectivity is essential for learning and memory processes. In recent years, glial cells (most notably astrocytes) have been recognized as active participants in the modulation of synaptic transmission and synaptic plasticity, implicating these electrically nonexcitable cells in information processing in the brain. While the concept of bidirectional communication between neurons and glia and the mechanisms by which gliotransmission can modulate neuronal function are well established, less attention has been focused on the computational potential of neuron-glial transmission itself. In particular, whether neuron-glial transmission is itself subject to activity-dependent plasticity, and what the computational properties of such plasticity might be, has not been explored in detail. In this review, we summarize current examples of plasticity in neuron-glial transmission across many brain regions and neurotransmitter pathways. We argue that induction of glial plasticity typically requires repetitive neuronal firing over long time periods (minutes to hours) rather than the short-lived, stereotyped trigger typical of canonical long-term potentiation. We speculate that this equips glia with a mechanism for monitoring average firing rates in the synaptic network, which is suited to the longer-term roles proposed for astrocytes in neurophysiology.
Directory of Open Access Journals (Sweden)
Mark Niedringhaus
Matrix metalloproteinases (MMPs) are zinc-dependent endopeptidases that are released from neurons in an activity-dependent manner. Published studies suggest their activity is important to varied forms of learning and memory. At least one MMP can stimulate an increase in the size of dendritic spines, structures which represent the postsynaptic component for a large number of glutamatergic synapses. This change may be associated with increased synaptic glutamate receptor incorporation and an increased amplitude and/or frequency of α-amino-3-hydroxy-5-methyl-4-isoxazolepropionate (AMPA) miniature excitatory post-synaptic currents (EPSCs). An associated increase in the probability of action potential occurrence would be expected. While the mechanism(s) by which MMPs may influence synaptic structure and function are not completely understood, MMP-dependent shedding of specific cell adhesion molecules (CAMs) could play an important role. CAMs are ideally positioned to be cleaved by synaptically released MMPs, and shed N-terminal domains could potentially interact with previously unengaged integrins to stimulate dendritic actin polymerization with spine expansion. In the present study, we have used multielectrode arrays (MEAs) to investigate MMP- and soluble-CAM-dependent changes in neuronal activity recorded from hippocampal cultures. We have focused on intercellular adhesion molecule-5 (ICAM-5) in particular, as this CAM is expressed on glutamatergic dendrites and shed in an MMP-dependent manner. We show that chemical long-term potentiation (cLTP) evoked changes in recorded activity, and the dynamics of action potential bursts in particular, are altered by MMP inhibition. A blocking antibody to β1 integrins has a similar effect. We also show that the ectodomain of ICAM-5 can stimulate β1 integrin-dependent increases in spike counts and burst number. These results support a growing body of literature suggesting that MMPs have important effects on neuronal
de Santos-Sierra, Daniel; Sendiña-Nadal, Irene; Leyva, Inmaculada; Almendral, Juan A; Ayali, Amir; Anava, Sarit; Sánchez-Ávila, Carmen; Boccaletti, Stefano
2015-06-01
Large-scale phase-contrast images, taken at high resolution throughout the life of a cultured neuronal network, are analyzed by a graph-based unsupervised segmentation algorithm with a very low computational cost, scaling linearly with the image size. The processing automatically retrieves the whole network structure, an object whose mathematical representation is a matrix in which nodes are identified neurons or neuron clusters, and links are the reconstructed connections between them. The algorithm is also able to extract any other relevant morphological information characterizing neurons and neurites. More importantly, and at variance with other segmentation methods that require fluorescence imaging from immunocytochemistry techniques, our non-invasive measurements enable us to perform a longitudinal analysis during the maturation of a single culture. Such an analysis provides a way of identifying the main physical processes underlying the self-organization of the neuron ensemble into a complex network, and drives the formulation of a phenomenological model able to describe qualitatively the overall scenario observed during culture growth. © 2014 International Society for Advancement of Cytometry.
Neuromorphic Silicon Neuron Circuits
Indiveri, Giacomo; Linares-Barranco, Bernabé; Hamilton, Tara Julia; van Schaik, André; Etienne-Cummings, Ralph; Delbruck, Tobi; Liu, Shih-Chii; Dudek, Piotr; Häfliger, Philipp; Renaud, Sylvie; Schemmel, Johannes; Cauwenberghs, Gert; Arthur, John; Hynna, Kai; Folowosele, Fopefolu; Saighi, Sylvain; Serrano-Gotarredona, Teresa; Wijekoon, Jayawan; Wang, Yingxue; Boahen, Kwabena
2011-01-01
Hardware implementations of spiking neurons can be extremely useful for a large variety of applications, ranging from high-speed modeling of large-scale neural systems to real-time behaving systems, to bidirectional brain–machine interfaces. The specific circuit solutions used to implement silicon neurons depend on the application requirements. In this paper we describe the most common building blocks and techniques used to implement these circuits, and present an overview of a wide range of neuromorphic silicon neurons, which implement different computational models, ranging from biophysically realistic and conductance-based Hodgkin–Huxley models to bi-dimensional generalized adaptive integrate and fire models. We compare the different design methodologies used for each silicon neuron design described, and demonstrate their features with experimental results, measured from a wide range of fabricated VLSI chips. PMID:21747754
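The bi-dimensional generalized adaptive integrate-and-fire models mentioned at the end of this survey couple the membrane voltage to a slow adaptation variable that is incremented at each spike. A minimal dimensionless sketch of that idea (not any specific chip's model; all names and parameters are illustrative assumptions):

```python
def simulate_adaptive_if(i_ext, dt=0.1, tau_v=10.0, tau_w=100.0,
                         a=0.0, b=2.0, v_reset=0.0, v_thresh=1.0,
                         t_max=500.0):
    """Two-variable adaptive integrate-and-fire neuron (dimensionless units):
    leaky membrane v, slow adaptation current w incremented by b per spike.
    Illustrative parameters only. Returns spike times."""
    v, w, spikes = 0.0, 0.0, []
    for k in range(int(t_max / dt)):
        v += dt * (-v - w + i_ext) / tau_v   # membrane with leak + adaptation
        w += dt * (a * v - w) / tau_w        # slow adaptation variable
        if v >= v_thresh:
            spikes.append(k * dt)
            v = v_reset
            w += b                           # spike-triggered adaptation
    return spikes

spikes = simulate_adaptive_if(i_ext=2.0)
isis = [t2 - t1 for t1, t2 in zip(spikes, spikes[1:])]
# interspike intervals lengthen over time: spike-frequency adaptation
```

This two-variable structure is what allows such models to reproduce spike-frequency adaptation and bursting behaviors that a plain leaky integrate-and-fire neuron cannot, while remaining far cheaper to implement than conductance-based Hodgkin-Huxley circuits.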
Clustering predicts memory performance in networks of spiking and non-spiking neurons
Directory of Open Access Journals (Sweden)
Weiliang Chen
2011-03-01
The problem we address in this paper is that of finding effective and parsimonious patterns of connectivity in sparse associative memories. This problem must be addressed in real neuronal systems, so results in artificial systems could throw light on real ones. We show that there are efficient patterns of connectivity and that these patterns are effective in models with either spiking or non-spiking neurons, suggesting that there may be some underlying general principles governing good connectivity in such networks. We also show that the clustering of the network, measured by the clustering coefficient, has a strong linear correlation to the performance of the associative memory. This result is important since a purely static measure of network connectivity appears to determine an important dynamic property of the network.
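The static measure in question, the average local clustering coefficient, can be computed directly from an adjacency structure. A minimal sketch (graph representation and example graph are my own choices):

```python
def clustering_coefficient(adj):
    """Average local clustering coefficient of an undirected graph
    given as a dict node -> set of neighbors. Nodes with fewer than
    two neighbors contribute 0. Minimal sketch."""
    total = 0.0
    for node, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            continue
        # count edges among the neighbors of `node`
        links = sum(1 for u in nbrs for v in nbrs if u < v and v in adj[u])
        total += 2.0 * links / (k * (k - 1))
    return total / len(adj)

# A triangle (0-1-2) with a pendant node 3 attached to node 2:
g = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
c = clustering_coefficient(g)   # (1 + 1 + 1/3 + 0) / 4 = 7/12
```

Because it depends only on the wiring, this quantity can be evaluated before any simulation, which is what makes the reported correlation with memory performance (a dynamic property) notable.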
A network of spiking neurons that can represent interval timing: mean field analysis.
Gavornik, Jeffrey P; Shouval, Harel Z
2011-04-01
Despite the vital importance of our ability to accurately process and encode temporal information, the underlying neural mechanisms are largely unknown. We have previously described a theoretical framework that explains how temporal representations, similar to those reported in the visual cortex, can form in locally recurrent cortical networks as a function of reward modulated synaptic plasticity. This framework allows networks of both linear and spiking neurons to learn the temporal interval between a stimulus and paired reward signal presented during training. Here we use a mean field approach to analyze the dynamics of non-linear stochastic spiking neurons in a network trained to encode specific time intervals. This analysis explains how recurrent excitatory feedback allows a network structure to encode temporal representations.
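The idea of recurrent excitatory feedback sustaining network activity, central to the mean-field analysis above, can be sketched with a generic one-population rate model. This is not the paper's model; the logistic gain, weight and time constant are my own illustrative choices:

```python
import math

def mean_field_rate(w, i_ext, tau=10.0, dt=0.1, steps=5000):
    """Mean-field firing-rate dynamics of a recurrent excitatory population:
        tau * dr/dt = -r + f(w*r + i_ext),   f = logistic gain.
    Integrated with forward Euler from r = 0; returns the final rate.
    A generic sketch, not the specific network analyzed in the paper."""
    f = lambda x: 1.0 / (1.0 + math.exp(-x))
    r = 0.0
    for _ in range(steps):
        r += dt * (-r + f(w * r + i_ext)) / tau
    return r

r_low = mean_field_rate(w=6.0, i_ext=-4.0)   # weak drive: low-rate fixed point
r_high = mean_field_rate(w=6.0, i_ext=0.0)   # recurrence sustains high activity
```

The contrast between the two runs shows the qualitative mechanism: with the same recurrent weight, the fixed point the population settles into depends on the external drive, and strong recurrent feedback can hold the rate high without further input.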
Impact of Partial Time Delay on Temporal Dynamics of Watts-Strogatz Small-World Neuronal Networks
Yan, Hao; Sun, Xiaojuan
2017-06-01
In this paper, we mainly discuss the effects of partial time delay on the temporal dynamics of Watts-Strogatz (WS) small-world neuronal networks by controlling two parameters: the time delay τ and the probability of partial time delay p_delay. The temporal dynamics of the WS small-world neuronal networks are characterized with the aid of temporal coherence and mean firing rate. The simulation results reveal that for a small time delay τ, the probability p_delay can weaken temporal coherence and increase the mean firing rate of the networks, indicating that partial time delay enhances neuronal firing while degrading its regularity. For a large time delay τ, temporal coherence and mean firing rate change little with respect to p_delay. The time delay τ always has a strong influence on both temporal coherence and mean firing rate, regardless of the value of p_delay. Moreover, analysis of the spike trains and of the interspike-interval histograms of neurons inside the networks suggests that the effects of partial time delay on temporal coherence and mean firing rate result from locking between the period of the neuronal firing activity and the value of the time delay τ. In brief, partial time delay can strongly influence the temporal dynamics of the neuronal networks.
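The two observables used above can be illustrated on spike trains directly. A common choice for temporal coherence is the inverse coefficient of variation of the interspike intervals; the sketch below uses that convention as an assumption, not necessarily the paper's exact definition:

```python
import statistics

def firing_stats(spike_times, t_max):
    """Mean firing rate (spikes per unit time) and a simple
    temporal-coherence measure: the inverse coefficient of variation
    of the interspike intervals (larger = more regular firing).
    A generic sketch of the two observables."""
    rate = len(spike_times) / t_max
    isis = [b - a for a, b in zip(spike_times, spike_times[1:])]
    cv = statistics.stdev(isis) / statistics.mean(isis)
    coherence = 1.0 / cv if cv > 0 else float("inf")
    return rate, coherence

regular = [10.0 * k for k in range(1, 10)]    # perfectly periodic train
jittered = [10.0, 18.0, 33.0, 39.0, 52.0, 58.0, 74.0, 81.0, 90.0]
r1, c1 = firing_stats(regular, 100.0)         # same rate, infinite coherence
r2, c2 = firing_stats(jittered, 100.0)        # same rate, finite coherence
```

The two example trains have the same mean firing rate, so only the coherence measure distinguishes them, which is why the paper tracks both quantities separately.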
Gunhanlar, N; Shpak, G; van der Kroeg, M; Gouty-Colomer, L A; Munshi, S T; Lendemeijer, B; Ghazvini, M; Dupont, C; Hoogendijk, W J G; Gribnau, J; de Vrij, F M S; Kushner, S A
2017-04-18
Progress in elucidating the molecular and cellular pathophysiology of neuropsychiatric disorders has been hindered by the limited availability of living human brain tissue. The emergence of induced pluripotent stem cells (iPSCs) has offered a unique alternative strategy using patient-derived functional neuronal networks. However, methods for reliably generating iPSC-derived neurons with mature electrophysiological characteristics have been difficult to develop. Here, we report a simplified differentiation protocol that yields electrophysiologically mature iPSC-derived cortical lineage neuronal networks without the need for astrocyte co-culture or specialized media. This protocol generates a consistent 60:40 ratio of neurons and astrocytes that arise from a common forebrain neural progenitor. Whole-cell patch-clamp recordings of 114 neurons derived from three independent iPSC lines confirmed their electrophysiological maturity, including resting membrane potential (-58.2±1.0 mV), capacitance (49.1±2.9 pF), action potential (AP) threshold (-50.9±0.5 mV) and AP amplitude (66.5±1.3 mV). Nearly 100% of neurons were capable of firing APs, of which 79% had sustained trains of mature APs with minimal accommodation (peak AP frequency: 11.9±0.5 Hz) and 74% exhibited spontaneous synaptic activity (amplitude, 16.03±0.82 pA; frequency, 1.09±0.17 Hz). We expect this protocol to be of broad applicability for implementing iPSC-based neuronal network models of neuropsychiatric disorders.Molecular Psychiatry advance online publication, 18 April 2017; doi:10.1038/mp.2017.56.
Spatio-temporal specialization of GABAergic septo-hippocampal neurons for rhythmic network activity.
Unal, Gunes; Crump, Michael G; Viney, Tim J; Éltes, Tímea; Katona, Linda; Klausberger, Thomas; Somogyi, Peter
2018-03-03
Medial septal GABAergic neurons of the basal forebrain innervate the hippocampus and related cortical areas, contributing to the coordination of network activity, such as theta oscillations and sharp wave-ripple events, via a preferential innervation of GABAergic interneurons. Individual medial septal neurons display diverse activity patterns, which may be related to their termination in different cortical areas and/or to the different types of innervated interneurons. To test these hypotheses, we extracellularly recorded and juxtacellularly labeled single medial septal neurons in anesthetized rats in vivo during hippocampal theta and ripple oscillations, traced their axons to distant cortical target areas, and analyzed their postsynaptic interneurons. Medial septal GABAergic neurons exhibiting different hippocampal theta phase preferences and/or sharp wave-ripple related activity terminated in restricted hippocampal regions, and selectively targeted a limited number of interneuron types, as established on the basis of molecular markers. We demonstrate the preferential innervation of bistratified cells in CA1 and of basket cells in CA3 by individual axons. One group of septal neurons was suppressed during sharp wave-ripples, maintained their firing rate across theta and non-theta network states and mainly fired along the descending phase of CA1 theta oscillations. In contrast, neurons that were active during sharp wave-ripples increased their firing significantly during "theta" compared to "non-theta" states, with most firing during the ascending phase of theta oscillations. These results demonstrate that specialized septal GABAergic neurons contribute to the coordination of network activity through parallel, target area- and cell type-selective projections to the hippocampus.
Batista, C A S; Viana, R L; Ferrari, F A S; Lopes, S R; Batista, A M; Coninck, J C P
2013-04-01
Thermally sensitive neurons exhibit bursting activity in certain temperature ranges, characterized by fast repetitive spiking of action potentials followed by a short quiescent period. Synchronization of bursting activity is possible in networks of coupled neurons, and it is sometimes an undesirable feature. Control procedures can totally or partially suppress this collective behavior, with potential applications in deep-brain stimulation techniques. We investigate the control of bursting synchronization in small-world networks of Hodgkin-Huxley-type thermally sensitive neurons with chemical synapses through two different strategies: the application of an external time-periodic electrical signal, and a time-delayed feedback signal. We consider the effectiveness of both strategies in terms of protocols of application suitable for pacemakers.
Li, Yu-Ye; Ding, Xue-Li
2014-12-01
Heterogeneity of the neurons and noise are inevitable in real neuronal networks. In this paper, Gaussian-white-noise-induced spatial patterns, including spiral waves and multiple spatial coherence resonances, are studied in a network composed of Morris–Lecar neurons with heterogeneity characterized by parameter diversity. The relationship between the resonances and the transitions between ordered spiral waves and disordered spatial patterns is established. When parameter diversity is introduced, the maxima of the multiple resonances first increase and then decrease as the diversity strength increases, which implies that the noise-induced coherence is enhanced at an intermediate diversity strength. The synchronization degree of the spatial patterns, including ordered spiral waves and disordered patterns, is identified to be very low. The results suggest that the nervous system can profit from both heterogeneity and noise, and that the multiple spatial coherence resonances arise via the emergence of spiral waves rather than synchronization patterns.
Role of Noise in Complex Networks of FitzHugh-Nagumo Neurons
International Nuclear Information System (INIS)
Fortuna, Luigi; Frasca, Mattia; La Rosa, Manuela
2005-01-01
This paper deals with the open question of the role of noise in complex networks of interconnected FitzHugh-Nagumo neurons. The problem is approached with extensive simulations of different network topologies. The results show that several topologies behave optimally with respect to the range of noise levels leading to an improvement in the stimulus-response coherence, while others do so with respect to the maximum value of the performance index. The best results, in terms of both a suitable noise level and high stimulus-response coherence, were obtained when diversity in the neurons' characteristic parameters was introduced and the neurons were connected in a small-world topology.
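A single FitzHugh-Nagumo unit of the kind composing these networks can be integrated in a few lines. The sketch below uses the standard textbook parameters with a simple additive noise term (Euler-Maruyama); the initial conditions and drive value are my own illustrative choices, not the paper's setup:

```python
import random

def fitzhugh_nagumo(i_ext, noise=0.0, dt=0.01, t_max=200.0,
                    a=0.7, b=0.8, eps=0.08, seed=3):
    """Euler-Maruyama integration of a FitzHugh-Nagumo neuron:
        dv = (v - v^3/3 - w + i_ext) dt + noise dW
        dw = eps * (v + a - b*w) dt
    Returns the voltage trace. Illustrative sketch only."""
    rng = random.Random(seed)
    v, w = -1.0, 1.0
    vs = []
    for _ in range(int(t_max / dt)):
        dv = (v - v**3 / 3.0 - w + i_ext) * dt
        dv += noise * (dt ** 0.5) * rng.gauss(0.0, 1.0)
        w += eps * (v + a - b * w) * dt
        v += dv
        vs.append(v)
    return vs

vs = fitzhugh_nagumo(i_ext=0.5, noise=0.0)   # oscillatory regime: tonic spiking
```

With `noise > 0` and a subthreshold `i_ext`, the same unit fires only in noise-triggered excursions, which is the single-neuron ingredient behind the network-level stimulus-response coherence studied in the paper.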
Anti-correlated cortical networks arise from spontaneous neuronal dynamics at slow timescales.
Kodama, Nathan X; Feng, Tianyi; Ullett, James J; Chiel, Hillel J; Sivakumar, Siddharth S; Galán, Roberto F
2018-01-12
In the highly interconnected architectures of the cerebral cortex, recurrent intracortical loops disproportionately outnumber thalamo-cortical inputs. These networks are also capable of generating neuronal activity without feedforward sensory drive. It is unknown, however, what spatiotemporal patterns may be solely attributed to intrinsic connections of the local cortical network. Using high-density microelectrode arrays, here we show that in the isolated, primary somatosensory cortex of mice, neuronal firing fluctuates on timescales from milliseconds to tens of seconds. Slower firing fluctuations reveal two spatially distinct neuronal ensembles, which correspond to superficial and deeper layers. These ensembles are anti-correlated: when one fires more, the other fires less and vice versa. This interplay is clearest at timescales of several seconds and is therefore consistent with shifts between active sensing and anticipatory behavioral states in mice.
Chimera states in a multilayer network of coupled and uncoupled neurons
Majhi, Soumen; Perc, Matjaž; Ghosh, Dibakar
2017-07-01
We study the emergence of chimera states in a multilayer neuronal network, where one layer is composed of coupled and the other layer of uncoupled neurons. Through the multilayer structure, the layer with coupled neurons acts as the medium by means of which neurons in the uncoupled layer share information in spite of the absence of physical connections among them. Neurons in the coupled layer are connected with electrical synapses, while across the two layers, neurons are connected through chemical synapses. In both layers, the dynamics of each neuron is described by the Hindmarsh-Rose square wave bursting dynamics. We show that the presence of two different types of connecting synapses within and between the two layers, together with the multilayer network structure, plays a key role in the emergence of between-layer synchronous chimera states and patterns of synchronous clusters. In particular, we find that these chimera states can emerge in the coupled layer regardless of the range of electrical synapses. Even in all-to-all and nearest-neighbor coupling within the coupled layer, we observe qualitatively identical between-layer chimera states. Moreover, we show that the role of information transmission delay between the two layers must not be neglected, and we obtain precise parameter bounds at which chimera states can be observed. The expansion of the chimera region and annihilation of cluster and fully coherent states in the parameter plane for increasing values of inter-layer chemical synaptic time delay are illustrated using effective range measurements. These results are discussed in the light of neuronal evolution, where the coexistence of coherent and incoherent dynamics during the developmental stage is particularly likely.
Directory of Open Access Journals (Sweden)
Katerina D Oikonomou
2014-09-01
Spiny neurons of the amygdala, striatum, and cerebral cortex share four interesting features: [1] they are the most abundant cell type within their respective brain area, [2] they are covered by thousands of thorny protrusions (dendritic spines), [3] they possess high levels of dendritic NMDA conductances, and [4] they experience sustained somatic depolarizations in vivo and in vitro (UP states). In all spiny neurons of the forebrain, adequate glutamatergic inputs generate dendritic plateau potentials (dendritic UP states) characterized by (i) a fast rise, (ii) a plateau phase lasting several hundred milliseconds, and (iii) an abrupt decline at the end of the plateau phase. The dendritic plateau potential propagates towards the cell body decrementally to induce a long-lasting (longer than 100 ms, most often 200-800 ms) steady depolarization (~20 mV amplitude), which resembles a neuronal UP state. Based on voltage-sensitive dye imaging, the plateau depolarization in the soma is precisely time-locked to the regenerative plateau potential taking place in the dendrite. The somatic plateau rises after the onset of the dendritic voltage transient and collapses with the breakdown of the dendritic plateau depolarization. We hypothesize that neuronal UP states in vivo reflect the occurrence of dendritic plateau potentials (dendritic UP states). We propose that the somatic voltage waveform during a neuronal UP state is determined by dendritic plateau potentials. A mammalian spiny neuron uses dendritic plateau potentials to detect and transform coherent network activity into a ubiquitous neuronal UP state. The biophysical properties of dendritic plateau potentials allow neurons to quickly attune to the ongoing network activity, as well as secure stable amplitudes of successive UP states.
International Nuclear Information System (INIS)
Ozer, Mahmut; Uzuntarla, Muhammet
2008-01-01
The Hodgkin-Huxley (H-H) neuron model driven by stimuli just above threshold shows a noise-induced delay of the time to the first spike for a certain range of noise strengths, an effect called 'noise-delayed decay' (NDD). We study the response time of a network of coupled H-H neurons and investigate how the NDD is affected by the connection topology of the network and by the coupling strength. We show that the NDD effect exists for weak and intermediate coupling strengths, whereas it disappears for strong coupling regardless of the connection topology. We also show that although the network structure has very little effect on the NDD for weak coupling, it plays a key role for intermediate coupling: the NDD effect decreases with an increasing number of random shortcuts, which provides an additional operating regime, absent in the regular network, in which the neurons may also exploit a spike-time code.
International Nuclear Information System (INIS)
Hajihosseini, Amirhossein; Maleki, Farzaneh; Rokni Lamooki, Gholam Reza
2011-01-01
Highlights: We construct a recurrent neural network by generalizing a specific n-neuron network. Several codimension 1 and 2 bifurcations take place in the newly constructed network. The newly constructed network has a higher capability to learn periodic signals. The normal form theorem is applied to investigate the dynamics of the network. A series of bifurcation diagrams is given to support the theoretical results. - Abstract: A class of recurrent neural networks is constructed by generalizing a specific class of n-neuron networks. It is shown that the newly constructed network undergoes generic pitchfork and Hopf codimension-one bifurcations. It is also proved that generic Bogdanov-Takens, pitchfork-Hopf and Hopf-Hopf codimension-two bifurcation points, as well as the degenerate Bogdanov-Takens bifurcation point, can emerge in the parameter space through intersections of codimension-one bifurcation curves. The occurrence of bifurcations of higher codimension significantly increases the capability of the newly constructed recurrent neural network to learn broader families of periodic signals.
Noise and Synchronization Analysis of the Cold-Receptor Neuronal Network Model
Directory of Open Access Journals (Sweden)
Ying Du
2014-01-01
This paper analyzes the dynamics of a cold-receptor neural network model. First, it examines noise effects on neuronal responses to stimuli in the model. ISI plots show considerable differences between purely deterministic and noisy simulations. The ISI-distance is used to quantify the effect of noise on spike trains. It is found that spike trains can be strongly affected by noise, to a degree that depends on temperature, and that spike-train variability grows as the noise intensity increases. The synchronization of the neuronal network with different connectivity patterns is also studied. It is shown that chaotic and high-period firing patterns are more difficult to synchronize completely than single-spike and low-period patterns. The neuronal network exhibits various patterns of firing synchronization as key parameters, such as the coupling strength, are varied. Different types of firing synchronization are diagnosed by a correlation coefficient and the ISI-distance method. The simulations show that the synchronization status of the neurons is related to the network connectivity patterns.
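The ISI-distance used above to compare spike trains can be sketched as follows. This is a minimal illustration in the spirit of the Kreuz-style measure (instantaneous-ISI ratio averaged over a time grid), not the authors' actual code; the function names and the sampling grid are our own:

```python
import numpy as np

def current_isi(spikes, t):
    """Interspike interval containing time t (spikes: sorted 1-D array)."""
    i = np.searchsorted(spikes, t)
    if i == 0 or i == len(spikes):
        return np.nan  # t lies outside the recorded train
    return spikes[i] - spikes[i - 1]

def isi_distance(train_a, train_b, t_grid):
    """Time-averaged ratio |isi_a - isi_b| / max(isi_a, isi_b); 0 means identical ISI structure."""
    vals = []
    for t in t_grid:
        xa, xb = current_isi(train_a, t), current_isi(train_b, t)
        if np.isnan(xa) or np.isnan(xb):
            continue  # skip times not covered by both trains
        vals.append(abs(xa - xb) / max(xa, xb))
    return float(np.mean(vals))
```

For two regular trains with ISIs of 1 and 2 time units, every instantaneous ratio is |1-2|/2, so the distance is 0.5; identical trains give 0.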
Chimera-like states in a neuronal network model of the cat brain
Santos, M. S.; Szezech, J. D.; Borges, F. S.; Iarosz, K. C.; Caldas, I. L.; Batista, A. M.; Viana, R. L.; Kurths, J.
2017-08-01
Neuronal systems have been modeled by complex networks at different levels of description. Recently, it has been verified that networks can simultaneously exhibit one coherent and one incoherent domain, a state known as a chimera state. In this work, we study the existence of chimera states in a network whose connectivity matrix is based on the cat cerebral cortex. The cerebral cortex of the cat can be separated into 65 cortical areas organised into four cognitive regions: visual, auditory, somatosensory-motor and frontolimbic. We consider a network where the local dynamics is given by the Hindmarsh-Rose model, a well-known model of neuronal activity that has been used to simulate the membrane potential of a neuron. Here, we analyse under which conditions chimera states are present, as well as the effects of the coupling intensity on them. We observe chimera states in which the incoherent structure can be composed of desynchronised spikes or of desynchronised bursts. Moreover, we find that chimera states with desynchronised bursts are more robust to neuronal noise than those with desynchronised spikes.
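A common way to diagnose the coexisting coherent and incoherent domains of a chimera state is a local order parameter computed over a sliding window of neighboring oscillator phases. The sketch below is a generic Kuramoto-style diagnostic, not necessarily the measure used by the authors; the window half-width `delta` is an illustrative choice:

```python
import numpy as np

def local_order(phases, delta=5):
    """Local Kuramoto order parameter |<exp(i*theta)>| in a sliding window
    of 2*delta+1 neighbors (periodic boundary). Near 1 = locally coherent."""
    n = len(phases)
    z = np.exp(1j * np.asarray(phases))
    return np.array([
        np.abs(np.mean(z[np.arange(i - delta, i + delta + 1) % n]))
        for i in range(n)
    ])
```

A fully phase-locked population gives values close to 1 everywhere, while a desynchronized population gives markedly smaller values; a chimera shows both regimes side by side.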
Balance of excitation and inhibition determines 1/f power spectrum in neuronal networks.
Lombardi, F; Herrmann, H J; de Arcangelis, L
2017-04-01
The 1/f-like decay observed in the power spectrum of electrophysiological signals, along with the scale-free statistics of the so-called neuronal avalanches, constitutes evidence of criticality in neuronal systems. Recent in vitro studies have shown that avalanche dynamics at criticality corresponds to a specific balance of excitation and inhibition, thus suggesting that this is a basic feature of the critical state of neuronal networks. In particular, a lack of inhibition significantly alters the temporal structure of the spontaneous avalanche activity and leads to an anomalous abundance of large avalanches. Here, we study the relationship between network inhibition and the scaling exponent β of the power spectral density (PSD) of avalanche activity in a neuronal network model inspired by self-organized criticality. We find that this scaling exponent depends on the percentage of inhibitory synapses and tends to the value β = 1 for a percentage of about 30%. More specifically, β is close to 2, namely, Brownian noise, for purely excitatory networks and decreases towards values in the interval [1, 1.4] as the percentage of inhibitory synapses ranges between 20% and 30%, in agreement with experimental findings. These results indicate that the level of inhibition affects the frequency spectrum of resting brain activity and suggest the analysis of the PSD scaling behavior as a possible tool to study pathological conditions.
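A scaling exponent β with PSD(f) ~ 1/f^β can be estimated by fitting a line to the spectrum on log-log axes. The sketch below uses a single unwindowed periodogram for brevity (real analyses typically use Welch averaging, and the authors' estimator may differ):

```python
import numpy as np

def psd_exponent(signal, fs=1.0):
    """Estimate beta in PSD(f) ~ 1/f**beta via a log-log least-squares fit."""
    n = len(signal)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)[1:]                 # drop the DC bin
    psd = np.abs(np.fft.rfft(signal - np.mean(signal)))[1:] ** 2
    slope, _ = np.polyfit(np.log(freqs), np.log(psd), 1)
    return -slope
```

As a sanity check, white noise should yield β near 0 and its cumulative sum (Brownian noise) a value near 2, consistent with the purely excitatory case discussed above.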
Characterization of a patch-clamp microchannel array towards neuronal networks analysis
DEFF Research Database (Denmark)
Alberti, Massimo; Snakenborg, Detlef; Lopacinska, Joanna M.
2010-01-01
for simultaneous patch clamping of cultured cells or neurons in the same network. A disposable silicon/silicon dioxide (Si/SiO2) chip with a microhole array was integrated in a microfluidic system for cell handling, perfusion and electrical recording. Fluidic characterization showed that our PCµCA can work...
Directory of Open Access Journals (Sweden)
Lorenzo L. Pesce
2013-01-01
Our limited understanding of the relationship between the behavior of individual neurons and large neuronal networks is an important limitation in current epilepsy research and may be one of the main causes of our inadequate ability to treat it. Addressing this problem directly via experiments is impossibly complex; thus, we have been developing and studying medium-large-scale simulations of detailed neuronal networks to guide us. Flexibility in the connection schemas and a complete description of the cortical tissue seem necessary for this purpose. In this paper we examine some of the basic issues encountered in these multiscale simulations. We have determined the detailed behavior of two such simulators on parallel computer systems. The observed memory and computation-time scaling behavior for a distributed memory implementation were very good over the range studied, both in terms of network sizes (2,000 to 400,000 neurons) and processor pool sizes (1 to 256 processors). Our simulations required between a few megabytes and about 150 gigabytes of RAM and lasted between a few minutes and about a week, well within the capability of most multinode clusters. Therefore, simulations of epileptic seizures on networks with millions of cells should be feasible on current supercomputers.
An FPGA Platform for Real-Time Simulation of Spiking Neuronal Networks.
Pani, Danilo; Meloni, Paolo; Tuveri, Giuseppe; Palumbo, Francesca; Massobrio, Paolo; Raffo, Luigi
2017-01-01
In recent years, the idea of dynamically interfacing biological neurons with artificial ones has become increasingly pressing. This is essentially driven by the design of innovative neuroprostheses in which biological cell assemblies of the brain can be replaced by artificial ones. For closed-loop experiments with biological neuronal networks interfaced with in silico modeled networks, several technological challenges need to be faced, from the low-level interfacing between the living tissue and the computational model to the implementation of the latter in a form suitable for real-time processing. Field programmable gate arrays (FPGAs) can improve flexibility when simple neuronal models are required, offering good accuracy, real-time performance, and the possibility to create a hybrid system without any custom hardware, just programming the hardware to achieve the required functionality. In this paper, this possibility is explored by presenting a modular and efficient FPGA design of an in silico spiking neural network exploiting the Izhikevich model. The proposed system, prototypically implemented on a Xilinx Virtex 6 device, is able to simulate a fully connected network counting up to 1,440 neurons, in real-time, at a sampling rate of 10 kHz, which is reasonable for small to medium scale extra-cellular closed-loop experiments.
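The Izhikevich model is well suited to such hardware because one update step needs only a few multiply-adds. A minimal software sketch of a fixed-step update (forward Euler at dt = 0.1 ms, matching the 10 kHz rate quoted above; the parameter values are the standard regular-spiking set, not necessarily those used in the paper):

```python
def izhikevich_step(v, u, I, dt=0.1, a=0.02, b=0.2, c=-65.0, d=8.0):
    """One forward-Euler step of the Izhikevich model.

    v: membrane potential (mV), u: recovery variable, I: input current.
    Returns (v, u, spiked) with the reset applied when v crosses 30 mV.
    """
    v = v + dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
    u = u + dt * a * (b * v - u)
    spiked = v >= 30.0
    if spiked:
        v, u = c, u + d   # spike reset
    return v, u, spiked
```

Driving this neuron with a constant suprathreshold current (e.g., I = 10) produces tonic spiking; 10,000 steps correspond to one second of simulated time at the stated rate.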
Pesce, Lorenzo L; Lee, Hyong C; Hereld, Mark; Visser, Sid; Stevens, Rick L; Wildeman, Albert; van Drongelen, Wim
2013-01-01
Differential Patterns of Dysconnectivity in Mirror Neuron and Mentalizing Networks in Schizophrenia
Schilbach, Leonhard; Derntl, Birgit; Aleman, Andre; Caspers, Svenja; Clos, Mareike; Diederen, Kelly M J; Gruber, Oliver; Kogler, Lydia; Liemburg, Edith J; Sommer, Iris E; Müller, Veronika I; Cieslik, Edna C; Eickhoff, Simon B
Impairments of social cognition are well documented in patients with schizophrenia (SCZ), but the neural basis remains poorly understood. In light of evidence that suggests that the "mirror neuron system" (MNS) and the "mentalizing network" (MENT) are key substrates of intersubjectivity and joint
Repeated Stimulation of Cultured Networks of Rat Cortical Neurons Induces Parallel Memory Traces
le Feber, Joost; Witteveen, Tim; van Veenendaal, Tamar M.; Dijkstra, Jelle
2015-01-01
During systems consolidation, memories are spontaneously replayed favoring information transfer from hippocampus to neocortex. However, at present no empirically supported mechanism to accomplish a transfer of memory from hippocampal to extra-hippocampal sites has been offered. We used cultured neuronal networks on multielectrode arrays and…
Directory of Open Access Journals (Sweden)
Jan Hahne
2015-09-01
Contemporary simulators for networks of point and few-compartment model neurons come with a plethora of ready-to-use neuron and synapse models and support complex network topologies. Recent technological advancements have further broadened the spectrum of applications to the efficient simulation of brain-scale networks on supercomputers. In distributed network simulations, the amount of spike data that accrues per millisecond and per process is typically low, so a common optimization strategy is to communicate spikes at relatively long intervals, where the upper limit is given by the shortest synaptic transmission delay in the network. This approach is well suited for simulations that employ only chemical synapses, but it has so far impeded the incorporation of gap-junction models, which require instantaneous neuronal interactions. Here, we present a numerical algorithm based on a waveform-relaxation technique which allows for network simulations with gap junctions in a way that is compatible with the delayed communication strategy. Using a reference implementation in the NEST simulator, we demonstrate that the algorithm and the required data structures can be smoothly integrated with existing code such that they complement the infrastructure for spiking connections. To show that the unified framework for gap-junction and spiking interactions achieves high performance and delivers high accuracy...
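The waveform-relaxation idea can be illustrated on a toy pair of linearly coupled ODEs standing in for two gap-junction-coupled membrane potentials: each unit is integrated over the whole interval using the other unit's waveform from the previous sweep, and the iteration converges to the jointly integrated solution. This is an illustrative Jacobi-style sketch, not NEST's actual algorithm; the equations, step sizes, and sweep count are our own choices:

```python
import numpy as np

def waveform_relaxation(c=0.5, T=1.0, n=1000, sweeps=30):
    """Jacobi waveform relaxation for x' = -x + c*y, y' = -y + c*x,
    x(0)=1, y(0)=0, each sweep solved with forward Euler over [0, T].
    Each unit sees only the other's waveform from the previous sweep."""
    dt = T / n
    x = np.ones(n + 1)    # initial waveform guesses: constants
    y = np.zeros(n + 1)
    for _ in range(sweeps):
        x_new, y_new = np.empty(n + 1), np.empty(n + 1)
        x_new[0], y_new[0] = 1.0, 0.0
        for i in range(n):
            x_new[i + 1] = x_new[i] + dt * (-x_new[i] + c * y[i])  # uses old y
            y_new[i + 1] = y_new[i] + dt * (-y_new[i] + c * x[i])  # uses old x
        x, y = x_new, y_new
    return x, y
```

On a finite interval this fixed-point iteration converges superlinearly, so after a few tens of sweeps the result is indistinguishable from integrating both equations simultaneously with the same Euler scheme, which is why the decoupled, communication-friendly form can still be accurate.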
Patel, Tapan P; Man, Karen; Firestein, Bonnie L; Meaney, David F
2015-03-30
Recent advances in genetically engineered calcium and membrane potential indicators provide the potential to estimate the activation dynamics of individual neurons within larger, mesoscale networks (hundreds to over 1000 neurons). However, a fully integrated, automated workflow for the analysis and visualization of neural microcircuits from high-speed fluorescence imaging data is lacking. Here we introduce FluoroSNNAP, the Fluorescence Single Neuron and Network Analysis Package. FluoroSNNAP is an open-source, interactive software package developed in MATLAB for automated quantification of numerous biologically relevant features of both the calcium dynamics of single cells and network activity patterns. FluoroSNNAP integrates and improves upon existing tools for spike detection, synchronization analysis, and inference of functional connectivity, making it most useful to experimentalists with little or no programming knowledge. We apply FluoroSNNAP to characterize the activity patterns of neuronal microcircuits undergoing developmental maturation in vitro. Separately, we highlight the utility of single-cell analysis for phenotyping a mixed population of neurons expressing a human mutant variant of the microtubule-associated protein tau and wild-type tau. We show the performance of semi-automated cell segmentation using spatiotemporal independent component analysis and a significant improvement in detecting calcium transients using a template-based algorithm in comparison to peak-based or wavelet-based detection methods. Our software further enables automated analysis of microcircuits, which is an improvement over existing methods. We expect the dissemination of this software to facilitate comprehensive analysis of neuronal networks, promoting the rapid interrogation of circuits in health and disease.
Network dynamics in nociceptive pathways assessed by the neuronal avalanche model
Directory of Open Access Journals (Sweden)
Wu José
2012-04-01
Background: Traditional electroencephalography provides a critical assessment of pain responses. The perception of pain, however, may involve a series of signal transmission pathways in higher cortical function. Recent studies have shown that a mathematical method, the neuronal avalanche model, may be applied to evaluate higher-order network dynamics. A neuronal avalanche is a cascade of neuronal activity whose size distribution can be approximated by a power-law relationship, manifested by the slope of a straight line on log-log axes (i.e., the α value). We investigated whether the neuronal avalanche could be a useful index for nociceptive assessment. Findings: Neuronal activity was recorded with a 4 × 8 multichannel electrode array in the primary somatosensory cortex (S1) and anterior cingulate cortex (ACC). Under light anesthesia, peripheral pinch stimulation increased the α value in both the ACC and S1, whereas brush stimulation increased the α value only in the S1. The increase in α values was blocked in both regions under deep anesthesia. The increase in α values in the ACC induced by peripheral pinch stimulation was blocked by a medial thalamic lesion, but the increase in α values in the S1 induced by brush and pinch stimulation was not affected. Conclusions: The neuronal avalanche model reveals a critical state in the cortical network for noxious-related signal processing. The α value may provide an index of brain network activity that distinguishes responses to somatic stimuli from the control state. These network dynamics may be valuable for the evaluation of acute nociceptive processes and may be applied to chronic pathological pain conditions.
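Estimating a power-law exponent like the α value from a set of avalanche sizes is often done with a maximum-likelihood estimator rather than a regression on the histogram. The sketch below shows the standard Clauset-style MLE for the continuous case; it is a generic illustration, not the authors' fitting procedure:

```python
import numpy as np

def powerlaw_alpha(sizes, s_min=1.0):
    """Maximum-likelihood exponent for P(s) ~ s**(-alpha), s >= s_min:
    alpha = 1 + n / sum(ln(s_i / s_min))."""
    s = np.asarray(sizes, dtype=float)
    s = s[s >= s_min]
    return 1.0 + len(s) / np.sum(np.log(s / s_min))

# synthetic avalanche sizes drawn from a continuous power law with alpha = 1.5
# via inverse-CDF sampling: s = s_min * (1 - u) ** (-1 / (alpha - 1))
rng = np.random.default_rng(1)
u = rng.random(20000)
sizes = (1.0 - u) ** (-1.0 / 0.5)
```

With 20,000 synthetic samples the estimator recovers the true exponent to within a few thousandths, since its standard error is (α - 1)/√n.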
Directory of Open Access Journals (Sweden)
Kjell Fuxe
2012-06-01
Extrasynaptic neurotransmission is an important short-distance form of volume transmission (VT) and describes the extracellular diffusion of transmitters and modulators after synaptic spillover or extrasynaptic release in local circuit regions, binding to and activating mainly extrasynaptic neuronal and glial receptors in the neuroglial networks of the brain. Receptor-receptor interactions in G-protein-coupled receptor (GPCR) heteromers play a major role, on dendritic spines and nerve terminals including glutamate synapses, in the integrative processes of extrasynaptic signaling. Heteromeric complexes between GPCRs and ion-channel receptors play a special role in the integration of synaptic and extrasynaptic signals. Changes in the extracellular concentrations of the classical synaptic neurotransmitters glutamate and GABA found with microdialysis are likely an expression of the activity of the neuron-astrocyte unit of the brain and can be used as an index of VT-mediated actions of these two neurotransmitters in the brain. Thus, the activity of neurons may be functionally linked to the activity of astrocytes, which may release glutamate and GABA to the extracellular space, where extrasynaptic glutamate and GABA receptors exist. Wiring transmission (WT) and VT are fundamental properties of all neurons of the CNS, but the balance between WT and VT varies from one nerve cell population to the other. The focus is on the striatal cellular networks, and the WT and VT and their integration via receptor heteromers are described in the GABA projection neurons, the glutamate, dopamine, 5-hydroxytryptamine (5-HT) and histamine striatal afferents, the cholinergic interneurons, and different types of GABA interneurons. In addition, the role in these networks of VT signaling by the energy-dependent modulator adenosine and by endocannabinoids, mainly formed in the striatal projection neurons, is underlined to understand the communication in the striatal
Simple cortical and thalamic neuron models for digital arithmetic circuit implementation
Directory of Open Access Journals (Sweden)
Takuya Nanami
2016-05-01
The trade-off between the reproducibility of neuronal activities and computational efficiency is one of the crucial subjects in computational neuroscience and neuromorphic engineering. A wide variety of neuronal models have been studied from different viewpoints. The digital spiking silicon neuron (DSSN) model is a qualitative model that focuses on efficient implementation by digital arithmetic circuits. We expanded the DSSN model and found appropriate parameter sets with which it reproduces the dynamical behaviors of the ionic-conductance models of four classes of cortical and thalamic neurons. We first developed a 4-variable model by reducing the number of variables in the ionic-conductance models and elucidated its mathematical structures using bifurcation analysis. Then, expanded DSSN models were constructed that reproduce these mathematical structures and capture the characteristic behavior of each neuron class. We confirmed that the statistics of the neuronal spike sequences are similar in the DSSN and the ionic-conductance models. The computational cost of the DSSN model is larger than that of recent sophisticated Integrate-and-Fire-based models, but smaller than that of the ionic-conductance models. This model is intended to provide another meeting point for the above trade-off, satisfying the demand for large-scale neuronal network simulation with closer-to-biology models.
Directory of Open Access Journals (Sweden)
James Park
2016-10-01
Single-cell heterogeneity confounds efforts to understand how a population of cells organizes into cellular networks that underlie tissue-level function. This complexity is prominent in the mammalian suprachiasmatic nucleus (SCN), where individual neurons exhibit a remarkable amount of asynchronous behavior and transcriptional heterogeneity. However, SCN neurons are able to generate precisely coordinated synaptic and molecular outputs that synchronize the body to a common circadian cycle by organizing into cellular networks. To understand this emergent cellular network property, it is important to reconcile single-neuron heterogeneity with network organization. In light of recent studies suggesting that transcriptionally heterogeneous cells organize into distinct cellular phenotypes, we characterized the transcriptional, spatial, and functional organization of 352 SCN neurons from mice experiencing phase shifts in their circadian cycle. Using a community-structure detection method and multivariate analytical techniques, we identified previously undescribed neuronal phenotypes that are likely to participate in regulatory networks with known SCN cell types. Based on the newly discovered neuronal phenotypes, we developed a data-driven neuronal network structure in which multiple cell types interact through known synaptic and paracrine signaling mechanisms. These results provide a basis from which to interpret the functional variability of SCN neurons and describe methodologies towards understanding how a population of heterogeneous single cells organizes into cellular networks that underlie tissue-level function.
Fast computation with spikes in a recurrent neural network
International Nuclear Information System (INIS)
Jin, Dezhe Z.; Seung, H. Sebastian
2002-01-01
Neural networks with recurrent connections are sometimes regarded as too slow at computation to serve as models of the brain. Here we analytically study a counterexample, a network consisting of N integrate-and-fire neurons with self-excitation, all-to-all inhibition, instantaneous synaptic coupling, and constant external driving inputs. When the inhibition and/or excitation are large enough, the network performs a winner-take-all computation for all possible external inputs and initial states of the network. The computation is done very quickly: as soon as the winner spikes once, the computation is completed, since no other neurons will spike. For some initial states, the winner is the first neuron to spike, and the computation is done at the first spike of the network. In general, there are M potential winners, corresponding to the top M external inputs. When the external inputs are close in magnitude, M tends to be larger. If M>1, the selection of the actual winner is strongly influenced by the initial states. If a special relation between the excitation and inhibition is satisfied, the network always selects the neuron with the maximum external input as the winner.
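The winner-take-all mechanism described above can be sketched with an event-driven simulation of nonleaky integrate-and-fire neurons subject to instantaneous all-to-all inhibition. This is a minimal illustration of the selection principle, with illustrative parameter values rather than those analyzed in the paper:

```python
import numpy as np

def winner_take_all(inputs, w_inh=10.0, theta=1.0, n_spikes=20):
    """Event-driven nonleaky IF network: each neuron integrates its constant
    (strictly positive) input; the first neuron to reach threshold spikes,
    resets, and instantaneously inhibits all others. With strong inhibition
    only one neuron keeps spiking; its index is returned."""
    I = np.asarray(inputs, dtype=float)
    v = np.zeros_like(I)                    # all neurons start at rest
    counts = np.zeros(len(I), dtype=int)
    for _ in range(n_spikes):
        dt = (theta - v) / I                # time to threshold for each neuron
        k = int(np.argmin(dt))              # next neuron to spike
        v += I * dt[k]                      # integrate everyone up to that event
        v[k] = 0.0                          # reset the spiker
        v[np.arange(len(I)) != k] -= w_inh  # instantaneous all-to-all inhibition
        counts[k] += 1
    return int(np.argmax(counts))
```

Starting from rest, the neuron with the largest input reaches threshold first and, through the strong inhibitory kick, keeps the rest of the population permanently below threshold, matching the "first spike decides" behavior described above.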
Neuronal synchrony detection on single-electron neural networks
International Nuclear Information System (INIS)
Oya, Takahide; Asai, Tetsuya; Kagaya, Ryo; Hirose, Tetsuya; Amemiya, Yoshihito
2006-01-01
Synchrony detection between burst and non-burst spikes is known to be one functional example of depressing synapses. Kanazawa et al. demonstrated synchrony detection with MOS depressing-synapse circuits. They found that the performance of a network with depressing synapses that discriminates between burst and random input spikes increases non-monotonically as the static device mismatch is increased. We designed a single-electron depressing synapse and constructed the same network as in Kanazawa's study to develop noise-tolerant single-electron circuits. We examined the temperature characteristics and explored a possible architecture that enables single-electron circuits to operate at T > 0 K.
Dynamical behaviour of neuronal networks iterated with memory
International Nuclear Information System (INIS)
Melatagia, P.M.; Ndoundam, R.; Tchuente, M.
2005-11-01
We study iteration with memory, where the update rule considers a longer history of each site and the set of interaction matrices is palindromic. We analyze two different ways of updating the networks: parallel iteration with memory and sequential iteration with memory, which we introduce in this paper. For parallel iteration, we define a Lyapunov functional which permits us to characterize the periodic behaviour and to explicitly bound the transient lengths of neural networks iterated with memory. For sequential iteration, we use an algebraic invariant to characterize the periodic behaviour of the studied model of neural computation. (author)
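A parallel (synchronous) iteration with memory of the kind analyzed above can be sketched as follows: the new state is a thresholded linear combination of the last m configurations through a family of interaction matrices, which in the palindromic case satisfies A_k = A_{m-1-k}. The sign convention, tie-breaking rule, and dimensions here are illustrative choices, not the paper's exact formulation:

```python
import numpy as np

def parallel_memory_iterate(history, mats, steps):
    """Synchronous iteration x(t+1) = sgn(sum_k A_k x(t-k)).

    history: list [x(t), x(t-1), ...] of +/-1 state vectors, len == len(mats);
    mats: list of interaction matrices, palindromic if A_k == A_{m-1-k}.
    Returns the state after `steps` parallel updates."""
    hist = [np.asarray(x) for x in history]
    for _ in range(steps):
        field = sum(A @ x for A, x in zip(mats, hist))
        new = np.where(field >= 0, 1, -1)   # threshold rule (ties -> +1)
        hist = [new] + hist[:-1]            # shift the memory window
    return hist[0]
```

For example, with memory depth m = 2 and both matrices equal to the identity (trivially palindromic), the iteration quickly settles into the fixed points and short cycles whose existence the Lyapunov-functional argument above guarantees.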
Defects formation and spiral waves in a network of neurons in presence of electromagnetic induction.
Rostami, Zahra; Jafari, Sajad
2018-04-01
The complex anatomical and physiological structure of an excitable tissue in the body (e.g., cardiac tissue) can give rise to different electrical activities through normal or abnormal behavior. Abnormalities of the excitable tissue, arising from various biological causes, can lead to the formation of defects. Such defects can emit successive waves that may result in additional self-organized beating behaviors such as spiral waves or target waves. In this study, the formation of defects and the resulting emitted waves in an excitable tissue are investigated. We consider a square network of neurons with nearest-neighbor connections to describe the excitable tissue. Fundamentally, the electrophysiological properties of ionic currents in the body are responsible for the exhibited electrical spatiotemporal patterns. More precisely, the fluctuation of accumulated ions inside and outside the cell causes variable electrical and magnetic fields. Considering the undeniable mutual effects of the electrical and magnetic fields, we propose a modified Hindmarsh-Rose (HR) neuronal model for the local dynamics of each individual neuron in the network, in which the influence of magnetic flux on the membrane potential is defined. This improved model has additional bifurcation parameters. Moreover, the dynamical behavior of the tissue is investigated in the quiescent, spiking, bursting and even chaotic states. The resulting spatiotemporal patterns are presented, and the time series of some sampled neurons are displayed as well.
Synchronous bursts on scale-free neuronal networks with attractive and repulsive coupling.
Directory of Open Access Journals (Sweden)
Qingyun Wang
Full Text Available This paper investigates the dependence of synchronization transitions of bursting oscillations on the information transmission delay over scale-free neuronal networks with attractive and repulsive coupling. It is shown that for both types of coupling, the delay always plays a subtle role in either promoting or impairing synchronization. In particular, depending on the inherent oscillation period of individual neurons, regions of irregular and regular propagating excitatory fronts appear intermittently as the delay increases. These delay-induced synchronization transitions are manifested as well-expressed minima in the measure for spatiotemporal synchrony. For attractive coupling, the minima appear at every integer multiple of the average oscillation period, while for the repulsive coupling, they appear at every odd multiple of the half of the average oscillation period. The obtained results are robust to the variations of the dynamics of individual neurons, the system size, and the neuronal firing type. Hence, they can be used to characterize attractively or repulsively coupled scale-free neuronal networks with delays.
Extremely Scalable Spiking Neuronal Network Simulation Code: From Laptops to Exascale Computers
Directory of Open Access Journals (Sweden)
Jordan, Jakob; Ippen, Tammo; Helias, Moritz; Kitayama, Itaru; Sato, Mitsuhisa; Igarashi, Jun; Diesmann, Markus; Kunkel, Susanne
2018-02-01
Full Text Available State-of-the-art software tools for neuronal network simulations scale to the largest computing systems available today and enable investigations of large-scale networks of up to 10 % of the human cortex at a resolution of individual neurons and synapses. Due to an upper limit on the number of incoming connections of a single neuron, network connectivity becomes extremely sparse at this scale. To manage computational costs, simulation software ultimately targeting the brain scale needs to fully exploit this sparsity. Here we present a two-tier connection infrastructure and a framework for directed communication among compute nodes accounting for the sparsity of brain-scale networks. We demonstrate the feasibility of this approach by implementing the technology in the NEST simulation code and we investigate its performance in different scaling scenarios of typical network simulations. Our results show that the new data structures and communication scheme prepare the simulation kernel for post-petascale high-performance computing facilities without sacrificing performance in smaller systems.
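The directed-communication idea — each node sends spikes only to the nodes that actually host targets of the firing neuron — can be caricatured in a few lines. This is a schematic sketch; the names (`build_send_tables`, `route_spikes`) are ours and do not correspond to NEST's internal data structures:

```python
from collections import defaultdict

def build_send_tables(connections, neuron_to_node):
    """First tier: for each source neuron, record the set of compute nodes
    hosting at least one of its targets. Spikes then travel only to those
    nodes instead of being broadcast to every node, which is what makes
    sparse brain-scale connectivity tractable.
    `connections` maps source neuron -> list of target neurons."""
    send_to = defaultdict(set)
    for src, targets in connections.items():
        for tgt in targets:
            send_to[src].add(neuron_to_node[tgt])
    return send_to

def route_spikes(spiking_neurons, send_to):
    """Second tier: group this timestep's spikes into per-node messages."""
    messages = defaultdict(list)
    for n in spiking_neurons:
        for node in send_to.get(n, ()):
            messages[node].append(n)
    return messages
```

The payoff is that communication volume scales with the number of actual (source, node) pairs rather than with (neurons x nodes).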
International Nuclear Information System (INIS)
Destexhe, A.
1994-01-01
Various types of spatiotemporal behavior are described for two-dimensional networks of excitatory and inhibitory neurons with time-delayed interactions. It is described how the network behaves as several structural parameters are varied, such as the number of neurons, the connectivity, and the values of synaptic weights. A transition from spatially uniform oscillations to spatiotemporal chaos via intermittent-like behavior is observed. The properties of spatiotemporally chaotic solutions are investigated by evaluating the largest positive Lyapunov exponent and the loss of correlation with distance. Finally, properties of information transport are evaluated during uniform oscillations and spatiotemporal chaos. It is shown that the diffusion coefficient increases significantly in the spatiotemporally chaotic phase, similar to the increase of transport coefficients at the onset of fluid turbulence. It is proposed that such a property should be seen in other media, such as chemical turbulence or networks of oscillators. The possibility of measuring information transport from appropriate experiments is also discussed.
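Estimating the largest Lyapunov exponent, as done in this abstract for the network dynamics, can be illustrated on a one-dimensional chaotic map. The logistic map here is a stand-in chosen because its exponent at r = 4 is known exactly (ln 2); the derivative-averaging method sketched is a standard technique, not the paper's specific procedure:

```python
import math

def largest_lyapunov_logistic(r=4.0, x0=0.1, n_transient=100, n_iter=100000):
    """Estimate the largest Lyapunov exponent of the logistic map
    x -> r*x*(1-x) by averaging log|f'(x)| = log|r*(1-2x)| along the
    trajectory, after discarding a transient. A positive value signals
    exponential divergence of nearby trajectories, i.e. chaos."""
    x = x0
    for _ in range(n_transient):
        x = r * x * (1.0 - x)
    acc = 0.0
    for _ in range(n_iter):
        acc += math.log(abs(r * (1.0 - 2.0 * x)))
        x = r * x * (1.0 - x)
    return acc / n_iter
```

For a high-dimensional network one would instead track the divergence of two nearby trajectories with periodic renormalization (Benettin's method), but the positive-exponent criterion is the same.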
A complex-valued firing-rate model that approximates the dynamics of spiking networks.
Directory of Open Access Journals (Sweden)
Schaffer, Evan S; Ostojic, Srdjan; Abbott, L F
2013-10-01
Full Text Available Firing-rate models provide an attractive approach for studying large neural networks because they can be simulated rapidly and are amenable to mathematical analysis. Traditional firing-rate models assume a simple form in which the dynamics are governed by a single time constant. These models fail to replicate certain dynamic features of populations of spiking neurons, especially those involving synchronization. We present a complex-valued firing-rate model derived from an eigenfunction expansion of the Fokker-Planck equation and apply it to the linear, quadratic and exponential integrate-and-fire models. Despite being almost as simple as a traditional firing-rate description, this model can reproduce firing-rate dynamics due to partial synchronization of the action potentials in a spiking model, and it successfully predicts the transition to spike synchronization in networks of coupled excitatory and inhibitory neurons.
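The qualitative advantage of a complex-valued rate variable can be seen in a toy caricature: a single complex mode relaxes to its steady state with damped ringing, which a real-valued single-time-constant model cannot produce. The equation and parameters below are our illustrative stand-in, not the eigenfunction-expansion model of the paper:

```python
import numpy as np

def complex_rate_trajectory(r_inf, alpha=2.0, omega=12.0, dt=0.001, T=3.0):
    """Integrate dz/dt = (-alpha + i*omega) * (z - r_inf) by Euler steps.
    Re(z) is read out as the firing rate: it overshoots and rings before
    settling at the steady-state rate r_inf, mimicking the transient
    partial synchronization of a spiking population. alpha (decay) and
    omega (oscillation frequency) are illustrative assumptions."""
    n = int(T / dt)
    z = 0.0 + 0.0j
    lam = complex(-alpha, omega)
    rates = np.empty(n)
    for i in range(n):
        z += dt * lam * (z - r_inf)
        rates[i] = z.real
    return rates
```

A traditional rate model corresponds to omega = 0: the same code then relaxes monotonically, with no overshoot.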
Interplay between Graph Topology and Correlations of Third Order in Spiking Neuronal Networks.
Directory of Open Access Journals (Sweden)
Jovanović, Stojan; Rotter, Stefan
2016-06-01
Full Text Available The study of processes evolving on networks has recently become a very popular research field, not only because of the rich mathematical theory that underpins it, but also because of its many possible applications, a number of them in the field of biology. Indeed, molecular signaling pathways, gene regulation, predator-prey interactions and the communication between neurons in the brain can be seen as examples of networks with complex dynamics. The properties of such dynamics depend largely on the topology of the underlying network graph. In this work, we want to answer the following question: Knowing network connectivity, what can be said about the level of third-order correlations that will characterize the network dynamics? We consider a linear point process as a model for pulse-coded, or spiking, activity in a neuronal network. Using recent results from the theory of such processes, we study third-order correlations between spike trains in such a system and explain which features of the network graph (i.e., which topological motifs) are responsible for their emergence. Comparing two different models of network topology (random networks of Erdős-Rényi type and networks with highly interconnected hubs), we find that, in random networks, the average measure of third-order correlations does not depend on the local connectivity properties, but rather on global parameters, such as the connection probability. This, however, ceases to be the case in networks with a geometric out-degree distribution, where topological specificities have a strong impact on average correlations.
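The link between motifs and correlations can be made concrete for one family of third-order motifs, the common-input motif, whose count depends only on the out-degree sequence — which is why a geometric out-degree distribution (hubs) changes the picture. The reduction below is an elementary counting identity; the framing as a code sketch is ours:

```python
import numpy as np

def common_input_motifs(A):
    """Count third-order 'common input' motifs j <- i -> k (j != k) in a
    directed graph with adjacency A[i, j] = 1 for an edge i -> j.
    Each source with out-degree d drives d*(d-1)/2 distinct target pairs,
    so the total depends only on the out-degree sequence. Hub nodes with
    large out-degree therefore contribute quadratically many motifs."""
    out_deg = A.sum(axis=1)
    return int((out_deg * (out_deg - 1) // 2).sum())
```

In an Erdős-Rényi graph the out-degrees concentrate around p(N-1), so this count is set by the global connection probability alone, matching the abstract's finding for random networks.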
Guliyev , Namig; Ismailov , Vugar
2016-01-01
The possibility of approximating a continuous function on a compact subset of the real line by a feedforward single hidden layer neural network with a sigmoidal activation function has been studied in many papers. Such networks can approximate an arbitrary continuous function provided that an unlimited number of neurons in a hidden layer is permitted. In this paper, we consider constructive approximation on any finite interval of $\\mathbb{R}$ by neural networks with only one neuron in the hid...
Analyzing topological characteristics of neuronal functional networks in the rat brain
Energy Technology Data Exchange (ETDEWEB)
Lu, Hu [School of Computer Science and Communication Engineering, Jiangsu University, Jiangsu 212003 (China); School of Computer Science, Fudan University, Shanghai 200433 (China); Yang, Shengtao [Institutes of Brain Science, Fudan University, Shanghai 200433 (China); Song, Yuqing [School of Computer Science and Communication Engineering, Jiangsu University, Jiangsu 212003 (China); Wei, Hui [School of Computer Science, Fudan University, Shanghai 200433 (China)
2014-08-28
In this study, we recorded spike trains from brain cortical neurons of several behaving rats in vivo by using multi-electrode recordings. A neuronal functional network (NFN) was constructed in each trial, yielding a total of 150 NFNs in this study. The topological characteristics of the NFNs were analyzed by using the two most important characteristics of complex networks, namely, small-world structure and community structure. We found that small-world properties exist in the different NFNs constructed in this study. The modularity function Q was used to determine the existence of community structure in NFNs, through which we found that community-structure characteristics, which are related to the recorded spike train data sets, are more evident in the Y-maze task than in the DM-GM task. Our results can also be used to further analyze the relationship between small-world characteristics and the cognitive behavioral responses of rats. - Highlights: • We constructed the neuronal functional networks based on the recorded neurons. • We analyzed the two main complex network characteristics, namely, small-world structure and community structure. • NFNs constructed based on the recorded neurons in this study exhibit small-world properties. • Some NFNs have community-structure characteristics.
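The two ingredients of a small-world analysis — average clustering coefficient and characteristic path length — can be computed directly from an adjacency matrix. A minimal sketch for undirected, unweighted graphs (assumed connected for the path-length routine); the small-world index is then the ratio of these quantities to their values in a degree-matched random graph:

```python
import numpy as np
from collections import deque

def clustering_coefficient(A):
    """Average local clustering coefficient: for each node, the fraction
    of its neighbour pairs that are themselves connected."""
    n = len(A)
    coeffs = []
    for i in range(n):
        nbrs = np.flatnonzero(A[i])
        k = len(nbrs)
        if k < 2:
            coeffs.append(0.0)
            continue
        links = A[np.ix_(nbrs, nbrs)].sum() / 2   # edges among the neighbours
        coeffs.append(links / (k * (k - 1) / 2))
    return float(np.mean(coeffs))

def average_path_length(A):
    """Characteristic path length via breadth-first search from every node
    (assumes the graph is connected)."""
    n = len(A)
    total, pairs = 0, 0
    for s in range(n):
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in np.flatnonzero(A[u]):
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(d for node, d in dist.items() if node != s)
        pairs += n - 1
    return total / pairs
```

A small-world network combines high clustering (like a lattice) with short path lengths (like a random graph).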
Phase transitions and self-organized criticality in networks of stochastic spiking neurons.
Brochini, Ludmila; de Andrade Costa, Ariadne; Abadi, Miguel; Roque, Antônio C; Stolfi, Jorge; Kinouchi, Osame
2016-11-07
Phase transitions and critical behavior are crucial issues both in theoretical and experimental neuroscience. We report analytic and computational results about phase transitions and self-organized criticality (SOC) in networks with general stochastic neurons. The stochastic neuron has a firing probability given by a smooth monotonic function Φ(V) of the membrane potential V, rather than a sharp firing threshold. We find that such networks can operate in several dynamic regimes (phases) depending on the average synaptic weight and the shape of the firing function Φ. In particular, we encounter both continuous and discontinuous phase transitions to absorbing states. At the continuous transition critical boundary, neuronal avalanches occur whose distributions of size and duration are given by power laws, as observed in biological neural networks. We also propose and test a new mechanism to produce SOC: the use of dynamic neuronal gains - a form of short-term plasticity probably located at the axon initial segment (AIS) - instead of depressing synapses at the dendrites (as previously studied in the literature). The new self-organization mechanism produces a slightly supercritical state, which we call SOSC, in accord with some intuitions of Alan Turing.
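A minimal simulation of such a network illustrates the phase transition to an absorbing state. The specific choices below — a piecewise-linear firing function Φ(V) = min(1, γV), the leakless (μ = 0) membrane update, and all parameter values — are illustrative assumptions in the spirit of the model, not the paper's exact setup:

```python
import numpy as np

def simulate_stochastic_net(n=200, w=2.0, gamma=1.0, steps=500, seed=0):
    """Discrete-time network of stochastic neurons: neuron i fires with
    probability Phi(V_i), a smooth saturating function of the membrane
    potential rather than a hard threshold. Firing resets V to 0; in the
    leakless (mu = 0) case, non-firing neurons carry V = w * rho, the
    mean-field input from the fraction rho of neurons that just fired.
    Returns the network activity rho(t)."""
    rng = np.random.default_rng(seed)
    V = rng.uniform(0.0, 1.0, size=n)
    activity = np.empty(steps)
    for t in range(steps):
        p = np.clip(gamma * V, 0.0, 1.0)       # Phi(V) = min(1, gamma*V)
        fired = rng.random(n) < p
        rho = fired.mean()
        activity[t] = rho
        V = np.where(fired, 0.0, w * rho)      # fire -> reset; else mean-field input
    return activity
```

For large average synaptic weight w the activity is self-sustained; for small w the network falls into the absorbing silent state, with the critical avalanche regime in between.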
To Break or to Brake Neuronal Network Accelerated by Ammonium Ions?
Directory of Open Access Journals (Sweden)
Vladimir V Dynnik
Full Text Available The aim of the present study was to investigate the effects of ammonium ions on in vitro neuronal network activity and to search for alternative methods of preventing acute ammonia neurotoxicity. Rat hippocampal neuron and astrocyte co-cultures in vitro, fluorescent microscopy, and perforated patch clamp were used to monitor the changes in intracellular Ca2+ and membrane potential produced by ammonium ions and various modulators in the cells implicated in neural networks. Low concentrations of NH4Cl (0.1-4 mM) produce short temporal effects on network activity. Application of 5-8 mM NH4Cl: invariably transforms diverse network firing regimens into identical burst patterns, characterized by substantial neuronal membrane depolarization at the plateau phase of the potential and high-amplitude Ca2+ oscillations; raises the frequency and the period-averaged Ca2+ level in all cells implicated in the network; results in the appearance of a group of "run out" cells with high intracellular Ca2+ and steadily diminished oscillation amplitudes; and increases astrocyte Ca2+ signalling, characterized by the appearance of groups of cells with increased intracellular Ca2+ levels and/or chaotic Ca2+ oscillations. Accelerated network activity may be suppressed by the blockade of NMDA or AMPA/kainate receptors or by overactivation of AMPA/kainate receptors. Ammonia still activates neuronal firing in the presence of the GABA(A)-receptor antagonist bicuculline, indicating that a "disinhibition phenomenon" is not implicated in the mechanisms of network acceleration. Network activity may also be slowed down by glycine, agonists of metabotropic inhibitory receptors, betaine, L-carnitine, L-arginine, etc. The obtained results demonstrate that ammonium ions accelerate neuronal network firing via ionotropic glutamate receptors while preserving the activities of a group of inhibitory ionotropic and metabotropic receptors. This may mean that ammonia neurotoxicity might be prevented by
Shinya, Ishii; Munetaka, Shidara; Katsunari, Shibata
2006-01-01
In a multi-trial task experiment to obtain a reward, reward expectancy neurons, which responded only in the non-reward trials that are necessary to advance toward the reward, have been observed in the anterior cingulate cortex of monkeys. In this paper, to explain the emergence of the reward expectancy neuron in terms of reinforcement learning theory, a model that consists of a recurrent neural network trained based on reinforcement learning is proposed. The analysis of the hi...
Neuronal oscillations form parietal/frontal networks during contour integration.
Castellano, Marta; Plöchl, Michael; Vicente, Raul; Pipa, Gordon
2014-01-01
The ability to integrate visual features into a global coherent percept that can be further categorized and manipulated is a fundamental capacity of the neural system. While the processing of visual information involves activation of early visual cortices, the recruitment of parietal and frontal cortices has been shown to be crucial for perceptual processes. Yet it is not clear how both cortical and long-range oscillatory activity leads to the integration of visual features into a coherent percept. Here, we investigate perceptual grouping through the analysis of a contour categorization task, where the local elements that form the contour must be linked into a coherent structure, which is then further processed and manipulated to perform the categorization task. The contour formation in our visual stimulus is a dynamic process where, for the first time, visual perception of contours is disentangled from the onset of visual stimulation or from motor preparation, cognitive processes that until now have been behaviorally attached to perceptual processes. Our main finding is that, while local and long-range synchronization at several frequencies seem to be ongoing phenomena, categorization of a contour could only be predicted through local oscillatory activity within parietal/frontal sources, which, in turn, would synchronize at gamma (>30 Hz) frequency. Simultaneously, fronto-parietal beta (13-30 Hz) phase locking forms a network spanning neural sources that are not category specific. Both long-range networks, i.e., the gamma network that is category specific and the beta network that is not, are functionally distinct but spatially overlapping. Altogether, we show that a critical mechanism underlying contour categorization involves oscillatory activity within parietal/frontal cortices, as well as its synchronization across distal cortical sites.
Developmental changes of neuronal networks associated with strategic social decision-making.
Steinmann, Elisabeth; Schmalor, Antonia; Prehn-Kristensen, Alexander; Wolff, Stephan; Galka, Andreas; Möhring, Jan; Gerber, Wolf-Dieter; Petermann, Franz; Stephani, Ulrich; Siniatchkin, Michael
2014-04-01
One of the important prerequisites for successful social interaction is the willingness of each individual to cooperate socially. Using the ultimatum game, several studies have demonstrated that the process of deciding to cooperate or to defect in interaction with a partner is associated with activation of the dorsolateral prefrontal cortex (DLPFC), anterior cingulate cortex (ACC), anterior insula (AI), and inferior frontal cortex (IFC). This study investigates developmental changes in this neuronal network. 15 healthy children (8-12 years), 15 adolescents (13-18 years) and 15 young adults (19-28 years) were investigated using the ultimatum game. Neuronal networks representing decision-making based on strategic thinking were characterized using functional MRI. In all age groups, the process of decision-making in reaction to unfair offers was associated with hemodynamic changes in similar regions. Compared with children, however, healthy adults and adolescents revealed greater activation in the IFC and the fusiform gyrus, as well as the nucleus accumbens. In contrast, healthy children displayed more activation in the AI, the dorsal part of the ACC, and the DLPFC. There were no differences in brain activations between adults and adolescents. The neuronal mechanisms underlying strategic social decision-making are already developed by the age of eight. Decision-making based on strategic thinking is associated with age-dependent involvement of different brain regions. Neuronal networks underlying theory of mind and reward anticipation are more activated in adults and adolescents, consistent with the increase in perspective taking with age. Reflecting emotional reactivity and the corresponding compensatory coping at younger ages, children show higher activations in a neuronal network associated with emotional processing and executive control. Copyright © 2014 Elsevier Ltd. All rights reserved.
Effect of acute lateral hemisection of the spinal cord on spinal neurons of postural networks
Zelenin, P. V.; Lyalka, V. F.; Orlovsky, G. N.; Deliagina, T. G.
2016-01-01
In quadrupeds, acute lateral hemisection of the spinal cord (LHS) severely impairs postural functions, which recover over time. Postural limb reflexes (PLRs) represent a substantial component of postural corrections in intact animals. The aim of the present study was to characterize the effects of acute LHS on two populations of spinal neurons (F and E) mediating PLRs. For this purpose, in decerebrate rabbits, responses of individual neurons from L5 to stimulation causing PLRs were recorded before and during reversible LHS (caused by a temporary cold block of signal transmission in lateral spinal pathways at L1), as well as after acute surgical (Sur) LHS at L1. Results obtained after Sur-LHS were compared to control data obtained in our previous study. We found that acute LHS caused disappearance of PLRs on the affected side. It also changed the proportion of different types of neurons on that side. A significant decrease and increase in the proportion of F- and non-modulated neurons, respectively, was found. LHS caused a significant decrease in most parameters of activity in F-neurons located in the ventral horn on the lesioned side and in E-neurons of the dorsal horn on both sides. These changes were caused by a significant decrease in the efficacy of posture-related sensory input from the ipsilateral limb to F-neurons, and from the contralateral limb to both F- and E-neurons. These distortions in operation of postural networks underlie the impairment of postural control after acute LHS, and represent a starting point for the subsequent recovery of postural functions. PMID:27702647
Directory of Open Access Journals (Sweden)
Taras A Gritsun
Full Text Available A typical property of isolated cultured neuronal networks of dissociated rat cortical cells is synchronized spiking, called bursting, starting about one week after plating, when the dissociated cells have sufficiently extended their neurites and formed enough synaptic connections. This paper is the third in a series of three on simulation models of cultured networks. Our two previous studies [26], [27] have shown that random recurrent network activity models generate intra- and inter-burst patterns similar to experimental data. The networks were noise- or pacemaker-driven and had Izhikevich neuronal elements with only short-term plastic (STP) synapses (so no long-term potentiation, LTP, or depression, LTD, was included). However, elevated pre-phases (burst leaders) and after-phases of the main burst shapes, which usually arise during the development of the network, were not yet simulated in sufficient detail. This lack of detail may be due to the fact that the random models completely lacked network topology and a growth model. Therefore, the present paper adds, for the first time, a growth model to the activity model, to give the network a time-dependent topology and to explain burst shapes in more detail, again without LTP or LTD mechanisms. The integrated growth-activity model yielded realistic bursting patterns. The automatic adjustment of various mutually interdependent network parameters is one of the major advantages of our current approach. Spatio-temporal bursting activity was validated against experiment. Depending on network size, wave reverberation mechanisms were seen along the network boundaries, which may explain the generation of phases of elevated firing before and after the main phase of the burst shape. In summary, the results show that adding topology and growth explains burst shapes in great detail and suggests that young networks still lack/do not need LTP or LTD mechanisms.
Perea, Gertrudis; Gómez, Ricardo; Mederos, Sara; Covelo, Ana; Ballesteros, Jesús J; Schlosser, Laura; Hernández-Vivanco, Alicia; Martín-Fernández, Mario; Quintana, Ruth; Rayan, Abdelrahman; Díez, Adolfo; Fuenzalida, Marco; Agarwal, Amit; Bergles, Dwight E; Bettler, Bernhard; Manahan-Vaughan, Denise; Martín, Eduardo D; Kirchhoff, Frank; Araque, Alfonso
2016-12-24
Interneurons are critical for proper neural network function and can activate Ca2+ signaling in astrocytes. However, the impact of interneuron-astrocyte signaling on neuronal network operation remains unknown. Using the simplest hippocampal astrocyte-neuron network, i.e., a GABAergic interneuron, a pyramidal neuron, a single CA3-CA1 glutamatergic synapse, and astrocytes, we found that interneuron-astrocyte signaling dynamically affected excitatory neurotransmission in an activity- and time-dependent manner, and determined the sign (inhibition vs. potentiation) of the GABA-mediated effects. While synaptic inhibition was mediated by GABA-A receptors, potentiation involved astrocyte GABA-B receptors, astrocytic glutamate release, and presynaptic metabotropic glutamate receptors. Using conditional astrocyte-specific GABA-B receptor (Gabbr1) knockout mice, we confirmed the glial source of the interneuron-induced potentiation, and demonstrated the involvement of astrocytes in hippocampal theta and gamma oscillations in vivo. Therefore, astrocytes decode interneuron activity and transform inhibitory into excitatory signals, contributing to the emergence of novel network properties resulting from the interneuron-astrocyte interplay.
Robust spatial memory maps in flickering neuronal networks: a topological model
Dabaghian, Yuri; Babichev, Andrey; Memoli, Facundo; Chowdhury, Samir; Rice University Collaboration; Ohio State University Collaboration
It is widely accepted that the hippocampal place cells provide a substrate of the neuronal representation of the environment--the ``cognitive map''. However, the hippocampal network, like any other network in the brain, is transient: thousands of hippocampal neurons die every day, and the connections formed by these cells constantly change due to various forms of synaptic plasticity. What, then, explains the remarkable reliability of our spatial memories? We propose a computational approach to answering this question based on a couple of insights. First, we propose that the hippocampal cognitive map is fundamentally topological, and hence amenable to analysis by topological methods. We then apply several novel methods from homology theory to understand how dynamic connections between cells influence the speed and reliability of spatial learning. We simulate the rat's exploratory movements through different environments and study how topological invariants of these environments arise in a network of simulated neurons with ``flickering'' connectivity. We find that, despite the transient connectivity, the network of place cells produces a stable representation of the topology of the environment.
Merrison-Hort, Robert; Soffe, Stephen R; Borisyuk, Roman
2018-01-01
Although brain connectivity varies between individuals in most animals, behaviour is often similar across a species. What fundamental structural properties are shared across individual networks that define this behaviour? We describe a probabilistic model of connectivity in the hatchling Xenopus tadpole spinal cord which, when combined with a spiking model, reliably produces rhythmic activity corresponding to swimming. The probabilistic model allows calculation of structural characteristics that reflect common network properties, independent of individual network realisations. We use the structural characteristics to study examples of neuronal dynamics, in the complete network and various sub-networks, and this allows us to explain the basis for key experimental findings, and make predictions for experiments. We also study how structural and functional features differ between detailed anatomical connectomes and those generated by our new, simpler, model (meta-model). PMID:29589828
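The core idea of a probabilistic connectivity model — a probability matrix from which individual connectomes are sampled, while structural statistics are computed from the probabilities themselves — can be sketched as follows. The distance-dependent exponential form and every parameter here are illustrative assumptions, not the paper's fitted model:

```python
import numpy as np

def sample_connectome(positions, p0=0.6, length_scale=50.0, seed=0):
    """Sample one network realisation from a probabilistic connectivity
    model: neurons at 1-D (e.g. rostro-caudal) positions x_i connect with
    probability p0 * exp(-|x_i - x_j| / length_scale). Returns both the
    sampled adjacency matrix A (one 'individual') and the probability
    matrix P, from which realisation-independent statistics such as the
    expected degree P.sum(axis=1) can be computed directly."""
    x = np.asarray(positions, dtype=float)
    d = np.abs(x[:, None] - x[None, :])
    P = p0 * np.exp(-d / length_scale)
    np.fill_diagonal(P, 0.0)                  # no self-connections
    rng = np.random.default_rng(seed)
    A = (rng.random(P.shape) < P).astype(int)
    return A, P
```

Repeated sampling gives a population of "individuals" with different wiring but shared structural characteristics, mirroring the abstract's distinction between individual networks and species-level behaviour.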
Mean-field analysis of orientation selectivity in inhibition-dominated networks of spiking neurons.
Sadeh, Sadra; Cardanobile, Stefano; Rotter, Stefan
2014-01-01
Mechanisms underlying the emergence of orientation selectivity in the primary visual cortex are highly debated. Here we study the contribution of inhibition-dominated random recurrent networks to orientation selectivity, and more generally to sensory processing. By simulating and analyzing large-scale networks of spiking neurons, we investigate tuning amplification and contrast invariance of orientation selectivity in these networks. In particular, we show how selective attenuation of the common mode and amplification of the modulation component take place in these networks. Selective attenuation of the baseline, which is governed by the exceptional eigenvalue of the connectivity matrix, removes the unspecific, redundant signal component and ensures the invariance of selectivity across different contrasts. Selective amplification of modulation, which is governed by the operating regime of the network and depends on the strength of coupling, amplifies the informative signal component and thus increases the signal-to-noise ratio. Here, we perform a mean-field analysis which accounts for this process.
Effects of neuronal loss in the dynamic model of neural networks
International Nuclear Information System (INIS)
Yoon, B-G; Choi, J; Choi, M Y
2008-01-01
We study the phase transitions and dynamic behavior of the dynamic model of neural networks, with an emphasis on the effects of neuronal loss due to external stress. In the absence of loss the overall results obtained numerically are found to agree excellently with the theoretical ones. When the external stress is turned on, some neurons may deteriorate and die; such loss of neurons, in general, weakens the memory in the system. As the loss increases beyond a critical value, the order parameter measuring the strength of memory decreases to zero either continuously or discontinuously; namely, the system loses its memory via a second- or a first-order transition, depending on the ratio of the refractory period to the duration of the action potential.
Directory of Open Access Journals (Sweden)
Atsushi Saito
2018-03-01
High-intensity and low frequency (1–100 kHz) time-varying electromagnetic fields stimulate the human body through excitation of the nervous system. In the power frequency range (50/60 Hz), a frequency-dependent threshold of the external electric field-induced neuronal modulation in cultured neuronal networks is used as one of the biological indicators in international guidelines; however, the threshold of magnetic field-induced neuronal modulation has not been elucidated. In this study, we exposed rat brain-derived neuronal networks to a high-intensity power frequency magnetic field (hPF-MF), and evaluated the modulation of synchronized bursting activity using a multi-electrode array (MEA)-based extracellular recording technique. As a result of short-term hPF-MF exposure (50–400 mT root-mean-square (rms), 50 Hz, sinusoidal wave, 6 s), the synchronized bursting activity was increased in the 400 mT-exposed group. On the other hand, no change was observed in the 50–200 mT-exposed groups. In order to clarify the mechanisms of the 400 mT hPF-MF exposure-induced neuronal response, we evaluated it after blocking inhibitory synapses using bicuculline methiodide (BMI); subsequently, an increase in bursting activity was observed with BMI application, and the response to 400 mT hPF-MF exposure disappeared. Therefore, it was suggested that the response to hPF-MF exposure involved inhibitory input. Next, we screened for inhibitory pacemaker-like neuronal activity, which showed autonomous 4–10 Hz firing with CNQX and D-AP5 application, and it was confirmed that this activity was reduced after 400 mT hPF-MF exposure. Comparison of these experimental results with estimated values of the induced electric field (E-field) in the culture medium revealed that the change in synchronized bursting activity occurred over 0.3 V/m, which was equivalent to the findings of a previous study that used external electric fields. In addition, the results suggested that
Self-Organization of Microcircuits in Networks of Spiking Neurons with Plastic Synapses.
Directory of Open Access Journals (Sweden)
Ocker, Gabriel Koch; Litwin-Kumar, Ashok; Doiron, Brent
2015-08-01
The synaptic connectivity of cortical networks features an overrepresentation of certain wiring motifs compared to simple random-network models. This structure is shaped, in part, by synaptic plasticity that promotes or suppresses connections between neurons depending on their joint spiking activity. Frequently, theoretical studies focus on how feedforward inputs drive plasticity to create this network structure. We study the complementary scenario of self-organized structure in a recurrent network, with spike timing-dependent plasticity driven by spontaneous dynamics. We develop a self-consistent theory for the evolution of network structure by combining fast spiking covariance with a slow evolution of synaptic weights. Through a finite-size expansion of network dynamics we obtain a low-dimensional set of nonlinear differential equations for the evolution of two-synapse connectivity motifs. With this theory in hand, we explore how the form of the plasticity rule drives the evolution of microcircuits in cortical networks. When potentiation and depression are in approximate balance, synaptic dynamics depend on weighted divergent, convergent, and chain motifs. For additive, Hebbian STDP these motif interactions create instabilities in synaptic dynamics that either promote or suppress the initial network structure. Our work provides a consistent theoretical framework for studying how spiking activity in recurrent networks interacts with synaptic plasticity to determine network structure.
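The additive, Hebbian pair-based STDP rule referred to in the abstract above can be sketched in a few lines. This is an illustrative assumption, not the paper's code: the window amplitudes and time constant below are conventional placeholder values.

```python
import math

# Hypothetical sketch of an additive, Hebbian pair-based STDP rule:
# a synapse is potentiated when the presynaptic spike precedes the
# postsynaptic one, and depressed otherwise, with an exponential
# dependence on the spike-time difference. Parameters are assumptions.
def stdp_dw(dt_post_minus_pre, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Weight change for one pre/post spike pair (times in ms)."""
    if dt_post_minus_pre > 0:      # pre before post: potentiation (Hebbian)
        return a_plus * math.exp(-dt_post_minus_pre / tau)
    else:                          # post before pre: depression
        return -a_minus * math.exp(dt_post_minus_pre / tau)

print(stdp_dw(5.0) > 0, stdp_dw(-5.0) < 0)  # → True True
```

With a_minus slightly larger than a_plus, potentiation and depression are in approximate balance, the regime in which the abstract says motif interactions dominate the weight dynamics.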
Directory of Open Access Journals (Sweden)
Sebastien Naze
2015-05-01
Epileptic seizure dynamics span multiple scales in space and time. Understanding seizure mechanisms requires identifying the relations between seizure components within and across these scales, together with the analysis of their dynamical repertoire. Mathematical models have been developed to reproduce seizure dynamics across scales ranging from the single neuron to the neural population. In this study, we develop a network model of spiking neurons and systematically investigate the conditions, under which the network displays the emergent dynamic behaviors known from the Epileptor, which is a well-investigated abstract model of epileptic neural activity. This approach allows us to study the biophysical parameters and variables leading to epileptiform discharges at cellular and network levels. Our network model is composed of two neuronal populations, characterized by fast excitatory bursting neurons and regular spiking inhibitory neurons, embedded in a common extracellular environment represented by a slow variable. By systematically analyzing the parameter landscape offered by the simulation framework, we reproduce typical sequences of neural activity observed during status epilepticus. We find that exogenous fluctuations from extracellular environment and electro-tonic couplings play a major role in the progression of the seizure, which supports previous studies and further validates our model. We also investigate the influence of chemical synaptic coupling in the generation of spontaneous seizure-like events. Our results argue towards a temporal shift of typical spike waves with fast discharges as synaptic strengths are varied. We demonstrate that spike waves, including interictal spikes, are generated primarily by inhibitory neurons, whereas fast discharges during the wave part are due to excitatory neurons. Simulated traces are compared with in vivo experimental data from rodents at different stages of the disorder. We draw the conclusion
Finite post synaptic potentials cause a fast neuronal response
Directory of Open Access Journals (Sweden)
Moritz eHelias
2011-02-01
A generic property of the communication between neurons is the exchange of pulses at discrete time points, the action potentials. However, the prevalent theory of spiking neuronal networks of integrate-and-fire model neurons relies on two assumptions: the superposition of many afferent synaptic impulses is approximated by Gaussian white noise, equivalent to a vanishing magnitude of the synaptic impulses, and the transfer of time varying signals by neurons is assessable by linearization. Going beyond both approximations, we find that in the presence of synaptic impulses the response to transient inputs differs qualitatively from previous predictions. It is instantaneous rather than exhibiting low-pass characteristics, depends non-linearly on the amplitude of the impulse, is asymmetric for excitation and inhibition and is promoted by a characteristic level of synaptic background noise. These findings resolve contradictions between the earlier theory and experimental observations. Here we review the recent theoretical progress that enabled these insights. We explain why the membrane potential near threshold is sensitive to properties of the afferent noise and show how this shapes the neural response. A further extension of the theory to time evolution in discrete steps quantifies simulation artifacts and yields improved methods to cross check results.
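The contrast drawn in the abstract above, between finite synaptic impulses and the Gaussian white-noise (diffusion) approximation, can be illustrated with a minimal simulation. This is a sketch under assumed parameter values, not the authors' model: a leaky integrate-and-fire neuron receives Poisson impulses of finite amplitude, so the membrane potential makes discrete jumps instead of diffusing.

```python
import random

# Minimal sketch (assumed parameters): a leaky integrate-and-fire neuron
# driven by Poisson synaptic impulses of finite size `psp`, integrated
# with the Euler method. Units: time in ms, voltage in threshold units.
def simulate_lif(n_steps=10000, dt=0.1, tau=10.0, v_th=1.0, v_reset=0.0,
                 rate_in=1.0, psp=0.15, seed=1):
    """Return the spike count of the neuron over n_steps * dt ms."""
    random.seed(seed)
    v, spikes = 0.0, 0
    for _ in range(n_steps):
        v += dt * (-v / tau)               # leak toward rest
        if random.random() < rate_in * dt: # Poisson arrival this step
            v += psp                       # finite jump, not a diffusion step
        if v >= v_th:                      # threshold crossing
            spikes += 1
            v = v_reset
    return spikes

print(simulate_lif())
```

Shrinking psp while raising rate_in so that rate_in * psp stays fixed recovers the diffusion limit the abstract goes beyond.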
International Nuclear Information System (INIS)
Zhao, Zhiguo; Gu, Huaguang
2015-01-01
Highlights: • Time delay-induced multiple synchronous behaviors were simulated in neuronal networks. • Multiple behaviors appear at time delays shorter than a bursting period of neurons. • The more spikes per burst of bursting, the more synchronous regions of time delay. • From regular to random via small-world networks, synchronous degree becomes weak. • An interpretation of the multiple behaviors and the influence of network are provided. - Abstract: Time delay-induced multiple synchronous behaviors are simulated in a neuronal network composed of many inhibitory neurons; they appear at different time delays shorter than a period of endogenous bursting of individual neurons. This differs from previous investigations wherein only one of multiple synchronous behaviors appears at a time delay shorter than a period of endogenous firing and others appear at time delays longer than the period duration. The bursting patterns of the synchronous behaviors are identified based on the dynamics of an individual neuron stimulated by a signal similar to the inhibitory coupling current, which is applied at the decaying branch of a spike and a suitable phase within the quiescent state of the endogenous bursting. If a burst of endogenous bursting contains more spikes, the synchronous behaviors appear at more regions of time delay. As the coupling strength increases, the multiple synchronous behaviors appear in a sequence because different thresholds of coupling current or strength are needed to achieve synchronous behaviors. From regular, to small-world, and to random networks, the synchronous degree of the multiple synchronous behaviors becomes weak, and synchronous bursting patterns with fewer spikes per burst disappear, which is properly interpreted by the difference of coupling current between neurons induced by different degree and the high threshold of coupling current to achieve synchronization for the absent synchronous bursting patterns. The results of the influence of
Biophysical synaptic dynamics in an analog VLSI network of Hodgkin-Huxley neurons.
Yu, Theodore; Cauwenberghs, Gert
2009-01-01
We study synaptic dynamics in a biophysical network of four coupled spiking neurons implemented in an analog VLSI silicon microchip. The four neurons implement a generalized Hodgkin-Huxley model with individually configurable rate-based kinetics of opening and closing of Na+ and K+ ion channels. The twelve synapses implement a rate-based first-order kinetic model of neurotransmitter and receptor dynamics, accounting for NMDA and non-NMDA type chemical synapses. The implemented models on the chip are fully configurable by 384 parameters accounting for conductances, reversal potentials, and pre/post-synaptic voltage-dependence of the channel kinetics. We describe the models and present experimental results from the chip characterizing single neuron dynamics, single synapse dynamics, and multi-neuron network dynamics showing phase-locking behavior as a function of synaptic coupling strength. The 3mm x 3mm microchip consumes 1.29 mW power making it promising for applications including neuromorphic modeling and neural prostheses.
Spiking Regularity and Coherence in Complex Hodgkin–Huxley Neuron Networks
International Nuclear Information System (INIS)
Zhi-Qiang, Sun; Ping, Xie; Wei, Li; Peng-Ye, Wang
2010-01-01
We study the effects of the strength of coupling between neurons on the spiking regularity and coherence in a complex network with randomly connected Hodgkin–Huxley neurons driven by colored noise. It is found that for the given topology realization and colored noise correlation time, there exists an optimal strength of coupling, at which the spiking regularity of the network reaches the best level. Moreover, when the temporal regularity reaches the best level, the spatial coherence of the system has already increased to a relatively high level. In addition, for the given number of neurons and noise correlation time, the values of average regularity and spatial coherence at the optimal strength of coupling are nearly independent of the topology realization. Furthermore, there exists an optimal value of colored noise correlation time at which the spiking regularity can reach its best level. These results may be helpful for understanding the real neuron world. (cross-disciplinary physics and related areas of science and technology)
Dal Maschio, Marco; Donovan, Joseph C; Helmbrecht, Thomas O; Baier, Herwig
2017-05-17
We introduce a flexible method for high-resolution interrogation of circuit function, which combines simultaneous 3D two-photon stimulation of multiple targeted neurons, volumetric functional imaging, and quantitative behavioral tracking. This integrated approach was applied to dissect how an ensemble of premotor neurons in the larval zebrafish brain drives a basic motor program, the bending of the tail. We developed an iterative photostimulation strategy to identify minimal subsets of channelrhodopsin (ChR2)-expressing neurons that are sufficient to initiate tail movements. At the same time, the induced network activity was recorded by multiplane GCaMP6 imaging across the brain. From this dataset, we computationally identified activity patterns associated with distinct components of the elicited behavior and characterized the contributions of individual neurons. Using photoactivatable GFP (paGFP), we extended our protocol to visualize single functionally identified neurons and reconstruct their morphologies. Together, this toolkit enables linking behavior to circuit activity with unprecedented resolution. Copyright © 2017 Elsevier Inc. All rights reserved.
Leader neurons in population bursts of 2D living neural networks
International Nuclear Information System (INIS)
Eckmann, J-P; Zbinden, Cyrille; Jacobi, Shimshon; Moses, Elisha; Marom, Shimon
2008-01-01
Eytan and Marom (2006 J. Neurosci. 26 8465-76) recently showed that the spontaneous bursting activity of rat neuron cultures includes 'first-to-fire' cells that consistently fire earlier than others. Here, we analyze the behavior of these neurons in long-term recordings of spontaneous activity of rat hippocampal and rat cortical neuron cultures from three different laboratories. We identify precursor events that may either subside ('aborted bursts') or can lead to a full-blown burst ('pre-bursts'). We find that the activation in the pre-burst typically has a first neuron ('leader'), followed by a localized response in its neighborhood. Locality is diminished in the bursts themselves. The long-term dynamics of the leaders is relatively robust, evolving with a half-life of 23-34 h. Stimulation of the culture alters the leader distribution, but the distribution stabilizes within about 1 h. We show that the leaders carry information about the identity of the burst, as measured by the signature of the number of spikes per neuron in a burst. The number of spikes from leaders in the first few spikes of a precursor event is furthermore shown to be predictive with regard to the transition into a burst (pre-burst versus aborted burst). We conclude that the leaders play a role in the development of the bursts and conjecture that they are part of an underlying sub-network that is excited first and then acts as a nucleation center for the burst.
Dynamics in a Delayed Neural Network Model of Two Neurons with Inertial Coupling
Directory of Open Access Journals (Sweden)
Changjin Xu
2012-01-01
A delayed neural network model of two neurons with inertial coupling is studied in this paper. The stability is investigated and Hopf bifurcation is demonstrated. Applying the normal form theory and the center manifold argument, we derive the explicit formulas for determining the properties of the bifurcating periodic solutions. An illustrative example is given to demonstrate the effectiveness of the obtained results.
Yilmaz, Ergin; Baysal, Veli; Ozer, Mahmut; Perc, Matjaž
2016-02-01
We study the effects of an autapse, which is mathematically described as a self-feedback loop, on the propagation of weak, localized pacemaker activity across a Newman-Watts small-world network consisting of stochastic Hodgkin-Huxley neurons. We consider that only the pacemaker neuron, which is stimulated by a subthreshold periodic signal, has an electrical autapse that is characterized by a coupling strength and a delay time. We focus on the impact of the coupling strength, the network structure, the properties of the weak periodic stimulus, and the properties of the autapse on the transmission of localized pacemaker activity. Obtained results indicate the existence of optimal channel noise intensity for the propagation of the localized rhythm. Under optimal conditions, the autapse can significantly improve the propagation of pacemaker activity, but only for a specific range of the autaptic coupling strength. Moreover, the autaptic delay time has to be equal to the intrinsic oscillation period of the Hodgkin-Huxley neuron or its integer multiples. We analyze the inter-spike interval histogram and show that the autapse enhances or suppresses the propagation of the localized rhythm by increasing or decreasing the phase locking between the spiking of the pacemaker neuron and the weak periodic signal. In particular, when the autaptic delay time is equal to the intrinsic period of oscillations an optimal phase locking takes place, resulting in a dominant time scale of the spiking activity. We also investigate the effects of the network structure and the coupling strength on the propagation of pacemaker activity. We find that there exist an optimal coupling strength and an optimal network structure that together warrant an optimal propagation of the localized rhythm.
Representation and traversal of documentation space. Data analysis, neuron networks and image banks
International Nuclear Information System (INIS)
Lelu, A.; Rosenblatt, D.
1986-01-01
Improvements in the visual representation of considerable amounts of data for the user are necessary for progress in documentation systems. We review practical implementations in this area, which additionally integrate concepts arising from data analysis in the most general sense. The relationship between data analysis and neuron networks is then established. Following a description of simulation experiments, we finally present software for outputting and traversing image banks which integrates most of the concepts developed in this article [fr]
Rodríguez, Ana R.; O'Neill, Kate M.; Swiatkowski, Przemyslaw; Patel, Mihir V.; Firestein, Bonnie L.
2018-02-01
Objective. This study investigates the effect that overexpression of cytosolic PSD-95 interactor (cypin), a regulator of synaptic PSD-95 protein localization and a core regulator of dendrite branching, exerts on the electrical activity of rat hippocampal neurons and networks. Approach. We cultured rat hippocampal neurons and used lipid-mediated transfection and lentiviral gene transfer to achieve high levels of cypin or cypin mutant (cypinΔPDZ, which cannot bind PSD-95) expression cellularly and network-wide, respectively. Main results. Our analysis revealed that although overexpression of cypin and cypinΔPDZ increases dendrite numbers and decreases spine density, cypin and cypinΔPDZ distinctly regulate neuronal activity. At the single cell level, cypin promotes decreases in bursting activity while cypinΔPDZ reduces sEPSC frequency and further decreases bursting compared to cypin. At the network level, by using the Fano factor as a measure of spike count variability, cypin overexpression results in an increase in variability of spike count, and this effect is abolished when cypin cannot bind PSD-95. This variability is also dependent on baseline activity levels and on mean spike rate over time. Finally, our spike sorting data show that overexpression of cypin results in a more complex distribution of spike waveforms and that binding to PSD-95 is essential for this complexity. Significance. Our data suggest that dendrite morphology does not play a major role in cypin action on electrical activity.
Theoretical Neuroanatomy: Analyzing the Structure, Dynamics, and Function of Neuronal Networks
Seth, Anil K.; Edelman, Gerald M.
The mammalian brain is an extraordinary object: its networks give rise to our conscious experiences as well as to the generation of adaptive behavior for the organism within its environment. Progress in understanding the structure, dynamics and function of the brain faces many challenges. Biological neural networks change over time, their detailed structure is difficult to elucidate, and they are highly heterogeneous both in their neuronal units and synaptic connections. In facing these challenges, graph-theoretic and information-theoretic approaches have yielded a number of useful insights and promise many more.
Liu, Qingshan; Dang, Chuangyin; Cao, Jinde
2010-07-01
In this paper, based on a one-neuron recurrent neural network, a novel k-winners-take-all (k-WTA) network is proposed. Finite time convergence of the proposed neural network is proved using the Lyapunov method. The k-WTA operation is first converted equivalently into a linear programming problem. Then, a one-neuron recurrent neural network is proposed to get the kth or (k+1)th largest inputs of the k-WTA problem. Furthermore, a k-WTA network is designed based on the proposed neural network to perform the k-WTA operation. Compared with the existing k-WTA networks, the proposed network has a simple structure and finite time convergence. In addition, simulation results on numerical examples show the effectiveness and performance of the proposed k-WTA network.
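The k-WTA operation in the abstract above outputs 1 for the k largest of n inputs and 0 for the rest. The sketch below is an assumption about the operation, not the authors' network: it emulates the fixed point the one-neuron recurrent dynamics converges to, a threshold lying between the kth and (k+1)th largest inputs, by simple bisection (distinct inputs assumed).

```python
# Sketch of the k-WTA operation: bisect on a threshold t until exactly
# k inputs lie above it, then report the winner indicator vector.
# Assumes distinct input values; not the paper's recurrent network.
def k_wta(u, k):
    lo, hi = min(u) - 1.0, max(u) + 1.0   # invariant: >k above lo, <=k above hi
    for _ in range(60):                   # bisection on the threshold variable
        t = 0.5 * (lo + hi)
        if sum(1 for x in u if x > t) > k:
            lo = t
        else:
            hi = t
    return [1 if x > hi else 0 for x in u]

print(k_wta([0.3, 1.2, -0.5, 0.9, 0.7], 2))  # → [0, 1, 0, 1, 0]
```

The paper's contribution is realizing this selection with a single recurrent neuron and proving finite-time convergence; the bisection here only illustrates what the converged state computes.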
Matsubara, Takashi; Torikai, Hiroyuki
2016-04-01
Modeling and implementation approaches for the reproduction of input-output relationships in biological nervous tissues contribute to the development of engineering and clinical applications. However, because of high nonlinearity, the traditional modeling and implementation approaches encounter difficulties in terms of generalization ability (i.e., performance when reproducing an unknown data set) and computational resources (i.e., computation time and circuit elements). To overcome these difficulties, asynchronous cellular automaton-based neuron (ACAN) models, which are described as special kinds of cellular automata that can be implemented as small asynchronous sequential logic circuits have been proposed. This paper presents a novel type of such ACAN and a theoretical analysis of its excitability. This paper also presents a novel network of such neurons, which can mimic input-output relationships of biological and nonlinear ordinary differential equation model neural networks. Numerical analyses confirm that the presented network has a higher generalization ability than other major modeling and implementation approaches. In addition, Field-Programmable Gate Array-implementations confirm that the presented network requires lower computational resources.
Yang, Xiaoping; Chen, Xueying; Xia, Riting; Qian, Zhihong
2018-01-01
Aiming at the problem of network congestion caused by the large number of data transmissions in wireless routing nodes of wireless sensor networks (WSN), this paper puts forward an algorithm based on standard particle swarm–neural PID congestion control (PNPID). Firstly, PID control theory was applied to the queue management of wireless sensor nodes. Then, the self-learning and self-organizing ability of neurons was used to achieve online adjustment of weights for the proportion, integral and differential parameters of the PID controller. Finally, standard particle swarm optimization was used to optimize online the initial values of the proportion, integral and differential parameters and the neuron learning rates of the neural PID (NPID) algorithm. This paper describes experiments and simulations which show that the PNPID algorithm effectively stabilized queue length near the expected value. At the same time, network performance, such as throughput and packet loss rate, was greatly improved, which alleviated network congestion and improved network QoS. PMID:29671822
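The control loop that PNPID tunes can be illustrated with a plain discrete PID controller regulating a queue toward a target length. Everything below is an assumed toy plant (constant service rate, admission-rate control), not the paper's implementation; the gains kp, ki, kd are the quantities the paper adapts with neuron-like weights and initializes with particle swarm optimization.

```python
# Illustrative sketch (assumed plant and gains, not the PNPID code):
# an incremental PID loop driving a queue length toward a target.
def run_pid_queue(target=50.0, kp=0.4, ki=0.1, kd=0.05, steps=200):
    q = 0.0                        # current queue length (packets)
    integral, prev_err = 0.0, 0.0
    for _ in range(steps):
        err = target - q
        integral += err
        deriv = err - prev_err
        u = kp * err + ki * integral + kd * deriv  # admitted packet rate
        prev_err = err
        # plant: admissions minus a constant service rate of 10 packets/step
        q = max(0.0, q + max(0.0, u) - 10.0)
    return q

final_q = run_pid_queue()
```

In the paper's scheme the fixed gains above would instead be weights adjusted online by the neuron's learning rule, with their initial values chosen by particle swarm search.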
Sadeh, Sadra; Rotter, Stefan
2014-01-01
Neurons in the primary visual cortex are more or less selective for the orientation of a light bar used for stimulation. A broad distribution of individual grades of orientation selectivity has in fact been reported in all species. A possible reason for emergence of broad distributions is the recurrent network within which the stimulus is being processed. Here we compute the distribution of orientation selectivity in randomly connected model networks that are equipped with different spatial patterns of connectivity. We show that, for a wide variety of connectivity patterns, a linear theory based on firing rates accurately approximates the outcome of direct numerical simulations of networks of spiking neurons. Distance dependent connectivity in networks with a more biologically realistic structure does not compromise our linear analysis, as long as the linearized dynamics, and hence the uniform asynchronous irregular activity state, remain stable. We conclude that linear mechanisms of stimulus processing are indeed responsible for the emergence of orientation selectivity and its distribution in recurrent networks with functionally heterogeneous synaptic connectivity.
Ponzi, Adam; Wickens, Jeff
2010-04-28
The striatum is composed of GABAergic medium spiny neurons with inhibitory collaterals forming a sparse random asymmetric network and receiving an excitatory glutamatergic cortical projection. Because the inhibitory collaterals are sparse and weak, their role in striatal network dynamics is puzzling. However, here we show by simulation of a striatal inhibitory network model composed of spiking neurons that cells form assemblies that fire in sequential coherent episodes and display complex identity-temporal spiking patterns even when cortical excitation is simply constant or fluctuating noisily. Strongly correlated large-scale firing rate fluctuations on slow behaviorally relevant timescales of hundreds of milliseconds are shown by members of the same assembly whereas members of different assemblies show strong negative correlation, and we show how randomly connected spiking networks can generate this activity. Cells display highly irregular spiking with high coefficients of variation, broadly distributed low firing rates, and interspike interval distributions that are consistent with exponentially tailed power laws. Although firing rates vary coherently on slow timescales, precise spiking synchronization is absent in general. Our model only requires the minimal but striatally realistic assumptions of sparse to intermediate random connectivity, weak inhibitory synapses, and sufficient cortical excitation so that some cells are depolarized above the firing threshold during up states. Our results are in good qualitative agreement with experimental studies, consistent with recently determined striatal anatomy and physiology, and support a new view of endogenously generated metastable state switching dynamics of the striatal network underlying its information processing operations.
Response of spiking neurons to correlated inputs
International Nuclear Information System (INIS)
Moreno, Ruben; Rocha, Jaime de la; Renart, Alfonso; Parga, Nestor
2002-01-01
The effect of a temporally correlated afferent current on the firing rate of a leaky integrate-and-fire neuron is studied. This current is characterized in terms of rates, autocorrelations, and cross correlations, and correlation time scale τ_c of excitatory and inhibitory inputs. The output rate ν_out is calculated in the Fokker-Planck formalism in the limit of both small and large τ_c compared to the membrane time constant τ of the neuron. By simulations we check the analytical results, provide an interpolation valid for all τ_c, and study the neuron's response to rapid changes in the correlation magnitude.
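The setup described above, a leaky integrate-and-fire neuron driven by input with correlation time τ_c, can be simulated directly by modeling the correlated current as an Ornstein-Uhlenbeck process. The sketch below is an assumption for illustration (parameter values and the OU reduction are not from the paper, which treats explicit excitatory and inhibitory spike trains).

```python
import math, random

# Minimal sketch (assumed parameters): LIF neuron driven by a mean current
# plus Ornstein-Uhlenbeck noise with correlation time tau_c, so the ratio
# tau_c / tau_m can be varied as in the abstract. Times in ms.
def lif_rate_ou(tau_c, tau_m=20.0, mu=1.1, sigma=0.3, v_th=1.0,
                dt=0.05, t_max=5000.0, seed=2):
    """Return the output firing rate (spikes/s) for correlation time tau_c."""
    random.seed(seed)
    v, i_ou, spikes = 0.0, 0.0, 0
    for _ in range(int(t_max / dt)):
        # OU current: exponentially correlated noise, stationary std sigma
        i_ou += dt * (-i_ou / tau_c) \
                + math.sqrt(2.0 * dt / tau_c) * sigma * random.gauss(0.0, 1.0)
        v += dt * (-v + mu + i_ou) / tau_m    # leaky integration
        if v >= v_th:                          # spike and reset
            spikes += 1
            v = 0.0
    return spikes / (t_max / 1000.0)

rate = lif_rate_ou(tau_c=5.0)
```

Sweeping tau_c from well below to well above tau_m probes the two analytical limits the abstract treats in the Fokker-Planck formalism, with the simulation covering the intermediate regime.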
Synchronizations in small-world networks of spiking neurons: Diffusive versus sigmoid couplings
International Nuclear Information System (INIS)
Hasegawa, Hideo
2005-01-01
By using a semianalytical dynamical mean-field approximation previously proposed by the author [H. Hasegawa, Phys. Rev. E 70, 066107 (2004)], we have studied the synchronization of stochastic, small-world (SW) networks of FitzHugh-Nagumo neurons with diffusive couplings. The difference and similarity between results for diffusive and sigmoid couplings have been discussed. It has been shown that with introducing the weak heterogeneity to regular networks, the synchronization may be slightly increased for diffusive couplings, while it is decreased for sigmoid couplings. This increase in the synchronization for diffusive couplings is shown to be due to their local, negative feedback contributions, but not due to the short average distance in SW networks. Synchronization of SW networks depends not only on their structure but also on the type of couplings
Spiking neuron devices consisting of single-flux-quantum circuits
International Nuclear Information System (INIS)
Hirose, Tetsuya; Asai, Tetsuya; Amemiya, Yoshihito
2006-01-01
Single-flux-quantum (SFQ) circuits can be used for making spiking neuron devices, which are useful elements for constructing intelligent, brain-like computers. The device we propose is based on the leaky integrate-and-fire neuron (IFN) model and uses an SFQ pulse as an action signal or a spike of neurons. The operation of the neuron device is confirmed by computer simulation. It can operate with a short delay of 100 ps or less and is the highest-speed neuron device ever reported.
Diaz-Pier, Sandra; Naveau, Mikaël; Butz-Ostendorf, Markus; Morrison, Abigail
2016-01-01
With the emergence of new high performance computation technology in the last decade, the simulation of large scale neural networks which are able to reproduce the behavior and structure of the brain has finally become an achievable target of neuroscience. Due to the number of synaptic connections between neurons and the complexity of biological networks, most contemporary models have manually defined or static connectivity. However, it is expected that modeling the dynamic generation and deletion of the links among neurons, locally and between different regions of the brain, is crucial to unravel important mechanisms associated with learning, memory and healing. Moreover, for many neural circuits that could potentially be modeled, activity data is more readily and reliably available than connectivity data. Thus, a framework that enables networks to wire themselves on the basis of specified activity targets can be of great value in specifying network models where connectivity data is incomplete or has large error margins. To address these issues, we present here an implementation of a model of structural plasticity in the neural network simulator NEST. In this model, synapses consist of two parts, a pre- and a post-synaptic element. Synapses are created and deleted during the execution of the simulation following local homeostatic rules until a mean level of electrical activity is reached in the network. We assess the scalability of the implementation in order to evaluate its potential usage in the self-generation of connectivity of large scale networks. We show and discuss the results of simulations on simple two population networks and more complex models of the cortical microcircuit involving 8 populations and 4 layers using the new framework.
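The essence of such a local homeostatic rule, grow connections while activity is below a target, retract them above it, can be illustrated with a deliberately crude toy model. Everything here is an assumption for illustration (a linear rate model, a linear growth rule); none of these names or equations are the NEST API or the paper's growth curves.

```python
import numpy as np

def grow_to_target(n=50, target=5.0, nu=0.01, gain=0.5, steps=2000, seed=3):
    """Toy homeostatic structural plasticity: each neuron's rate is
    gain times its excitatory in-degree; neurons grow synaptic elements
    while below the target rate and retract them above it, so the
    population converges onto the activity target."""
    rng = np.random.default_rng(seed)
    k_in = rng.integers(0, 5, size=n).astype(float)   # initial in-degrees
    for _ in range(steps):
        rate = gain * k_in                            # crude rate model
        k_in = np.maximum(k_in + nu * (target - rate), 0.0)
    return gain * k_in

rates = grow_to_target()
```

The fixed point is rate = target (in-degree target/gain), and because the update is a contraction for small nu*gain, every neuron converges regardless of its random starting connectivity, which is exactly why activity targets can substitute for missing connectivity data.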
Husson, Steven J.; Costa, Wagner Steuer; Wabnig, Sebastian; Stirman, Jeffrey N.; Watson, Joseph D.; Spencer, W. Clay; Akerboom, Jasper; Looger, Loren L.; Treinin, Millet; Miller, David M.; Lu, Hang; Gottschalk, Alexander
2012-01-01
Summary Background Nociception generally evokes rapid withdrawal behavior in order to protect the tissue from harmful insults. Most nociceptive neurons responding to mechanical insults display highly branched dendrites, an anatomy shared by Caenorhabditis elegans FLP and PVD neurons, which mediate harsh touch responses. Although several primary molecular nociceptive sensors have been characterized, less is known about modulation and amplification of noxious signals within nociceptor neurons. First, we analyzed the FLP/PVD network by optogenetics and studied integration of signals from these cells in downstream interneurons. Second, we investigated which genes modulate PVD function, based on prior single neuron mRNA profiling of PVD. Results Selectively photoactivating PVD, FLP and downstream interneurons using Channelrhodopsin-2 (ChR2) enabled functionally dissecting this nociceptive network, without interfering signals by other mechanoreceptors. Forward or reverse escape behaviors were determined by PVD and FLP, via integration by command interneurons. To identify mediators of PVD function, acting downstream of primary nocisensor molecules, we knocked down PVD-specific transcripts by RNAi and quantified light-evoked PVD-dependent behavior. Cell-specific disruption of synaptobrevin or voltage-gated Ca2+-channels (VGCCs) showed that PVD signals chemically to command interneurons. Knocking down the DEG/ENaC channel ASIC-1 and the TRPM channel GTL-1 indicated that ASIC-1 may extend PVD’s dynamic range and that GTL-1 may amplify its signals. These channels act cell-autonomously in PVD, downstream of primary mechanosensory molecules. Conclusions Our work implicates TRPM channels in modifying excitability of, and DEG/ENaCs in potentiating signal output from a mechano-nociceptor neuron. ASIC-1 and GTL-1 homologues, if functionally conserved, may denote valid targets for novel analgesics. PMID:22483941
Nicola, Wilten; Tripp, Bryan; Scott, Matthew
2016-01-01
A fundamental question in computational neuroscience is how to connect a network of spiking neurons to produce desired macroscopic or mean field dynamics. One possible approach is through the Neural Engineering Framework (NEF). The NEF approach requires quantities called decoders which are solved through an optimization problem requiring large matrix inversion. Here, we show how a decoder can be obtained analytically for type I and certain type II firing rates as a function of the heterogeneity of its associated neuron. These decoders generate approximants for functions that converge to the desired function in mean-squared error like 1/N, where N is the number of neurons in the network. We refer to these decoders as scale-invariant decoders due to their structure. These decoders generate weights for a network of neurons through the NEF formula for weights. These weights force the spiking network to have arbitrary and prescribed mean field dynamics. The weights generated with scale-invariant decoders all lie on low dimensional hypersurfaces asymptotically. We demonstrate the applicability of these scale-invariant decoders and weight surfaces by constructing networks of spiking theta neurons that replicate the dynamics of various well known dynamical systems such as the neural integrator, Van der Pol system and the Lorenz system. As these decoders are analytically determined and non-unique, the weights are also analytically determined and non-unique. We discuss the implications for measured weights of neuronal networks.
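For context, the standard NEF decoder optimization that the analytical scale-invariant decoders replace is a regularized least-squares problem over sampled tuning curves. A minimal sketch, assuming rectified-linear tuning curves with random gains and biases as a stand-in for the type I firing rates (an illustrative choice, not the paper's setup):

```python
import numpy as np

def solve_decoders(rates, target, reg=0.1):
    """NEF-style decoders d via regularized least squares, so that
    rates @ d approximates the target function over sampled stimuli."""
    gram = rates.T @ rates + reg * np.eye(rates.shape[1])
    return np.linalg.solve(gram, rates.T @ target)

rng = np.random.default_rng(4)
n = 200                                       # number of neurons
x = np.linspace(-1.0, 1.0, 400)               # sampled stimulus values
gains = rng.uniform(0.5, 2.0, n) * rng.choice([-1, 1], n)
biases = rng.uniform(-1.0, 1.0, n)
rates = np.maximum(gains * x[:, None] + biases, 0.0)  # rectified tuning curves
d = solve_decoders(rates, x ** 2)             # decode f(x) = x^2
approx = rates @ d
```

The heterogeneity of gains and biases is what makes the rate matrix well conditioned; as the abstract notes, the approximation error of such decoded functions falls off like 1/N with the number of neurons.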
Directory of Open Access Journals (Sweden)
Wilten eNicola
2016-02-01
Full Text Available A fundamental question in computational neuroscience is how to connect a network of spiking neurons to produce desired macroscopic or mean field dynamics. One possible approach is through the Neural Engineering Framework (NEF). The NEF approach requires quantities called decoders which are solved through an optimization problem requiring large matrix inversion. Here, we show how a decoder can be obtained analytically for type I and certain type II firing rates as a function of the heterogeneity of its associated neuron. These decoders generate approximants for functions that converge to the desired function in mean-squared error like 1/N, where N is the number of neurons in the network. We refer to these decoders as scale-invariant decoders due to their structure. These decoders generate weights for a network of neurons through the NEF formula for weights. These weights force the spiking network to have arbitrary and prescribed mean field dynamics. The weights generated with scale-invariant decoders all lie on low dimensional hypersurfaces asymptotically. We demonstrate the applicability of these scale-invariant decoders and weight surfaces by constructing networks of spiking theta neurons that replicate the dynamics of various well known dynamical systems such as the neural integrator, Van der Pol system and the Lorenz system. As these decoders are analytically determined and non-unique, the weights are also analytically determined and non-unique. We discuss the implications for measured weights of neuronal networks.
Paraskevov, A V; Zendrikov, D K
2017-03-23
We show that in model neuronal cultures, where the probability of interneuronal connection formation decreases exponentially with increasing distance between the neurons, there exists a small number of spatial nucleation centers of a network spike, from which the synchronous spiking activity starts propagating in the network, typically in the form of circular traveling waves. The number of nucleation centers and their spatial locations are unique and unchanged for a given realization of the neuronal network but differ between networks. In contrast, if the probability of interneuronal connection formation is independent of the distance between neurons, then nucleation centers do not arise and the synchronization of spiking activity during a network spike occurs spatially uniformly throughout the network. One can therefore conclude that spatial proximity of connections between neurons is important for the formation of nucleation centers. It is also shown that fluctuations of the spatial density of neurons under the random homogeneous distribution typical of in vitro experiments do not determine the locations of the nucleation centers. The simulation results are qualitatively consistent with the experimental observations.
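The distance-dependent connectivity at the heart of this result is easy to generate. A minimal sketch, assuming neurons scattered uniformly in a unit square and a directed connection probability p(d) = exp(-d/λ); the decay length λ is a hypothetical value, not the paper's fitted parameter:

```python
import numpy as np

def spatial_network(n=500, lam=0.1, seed=5):
    """Random planar network with exponentially distance-dependent
    connection probability, as in the nucleation-center model."""
    rng = np.random.default_rng(seed)
    pos = rng.random((n, 2))                         # positions in unit square
    d = np.linalg.norm(pos[:, None] - pos[None], axis=-1)
    adj = rng.random((n, n)) < np.exp(-d / lam)      # p(d) = exp(-d/lam)
    np.fill_diagonal(adj, False)                     # no self-connections
    return pos, d, adj

pos, d, adj = spatial_network()
off = ~np.eye(len(d), dtype=bool)
mean_link = d[adj].mean()       # mean length of realized connections
mean_pair = d[off].mean()       # mean distance over all pairs
```

Realized connections are systematically shorter than typical inter-neuron distances, giving the spatial proximity of connections that the abstract identifies as necessary for nucleation centers; setting the connection probability to a distance-independent constant removes this bias.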
Magou, George C; Pfister, Bryan J; Berlin, Joshua R
2015-10-22
The basis for acute seizures following traumatic brain injury (TBI) remains unclear. Animal models of TBI have revealed acute hyperexcitability in cortical neurons that could underlie seizure activity, but studying initiating events causing hyperexcitability is difficult in these models. In vitro models of stretch injury with cultured cortical neurons, a surrogate for TBI, allow facile investigation of cellular changes after injury but they have only demonstrated post-injury hypoexcitability. The goal of this study was to determine if neuronal hyperexcitability could be triggered by in vitro stretch injury. Controlled uniaxial stretch injury was delivered to a spatially delimited region of a spontaneously active network of cultured rat cortical neurons, yielding a region of stretch-injured neurons and adjacent regions of non-stretched neurons that did not directly experience stretch injury. Spontaneous electrical activity was measured in non-stretched and stretch-injured neurons, and in control neuronal networks not subjected to stretch injury. Non-stretched neurons in stretch-injured cultures displayed a three-fold increase in action potential firing rate and bursting activity 30-60 min post-injury. Stretch-injured neurons, however, displayed dramatically lower rates of action potential firing and bursting. These results demonstrate that acute hyperexcitability can be observed in non-stretched neurons located in regions adjacent to the site of stretch injury, consistent with reports that seizure activity can arise from regions surrounding the site of localized brain injury. Thus, this in vitro procedure for localized neuronal stretch injury may provide a model to study the earliest cellular changes in neuronal function associated with acute post-traumatic seizures. Copyright © 2015. Published by Elsevier B.V.
Exercise-induced neuronal plasticity in central autonomic networks: role in cardiovascular control.
Michelini, Lisete C; Stern, Javier E
2009-09-01
It is now well established that brain plasticity is an inherent property not only of the developing but also of the adult brain. Numerous beneficial effects of exercise, including improved memory, cognitive function and neuroprotection, have been shown to involve an important neuroplastic component. However, whether major adaptive cardiovascular adjustments during exercise, needed to ensure proper blood perfusion of peripheral tissues, also require brain neuroplasticity, is presently unknown. This review will critically evaluate current knowledge on proposed mechanisms that are likely to underlie the continuous resetting of baroreflex control of heart rate during/after exercise and following exercise training. Accumulating evidence indicates that not only somatosensory afferents (conveyed by skeletal muscle receptors, baroreceptors and/or cardiopulmonary receptors) but also projections arising from central command neurons (in particular, peptidergic hypothalamic pre-autonomic neurons) converge into the nucleus tractus solitarii (NTS) in the dorsal brainstem, to co-ordinate complex cardiovascular adaptations during dynamic exercise. This review focuses in particular on a reciprocally interconnected network between the NTS and the hypothalamic paraventricular nucleus (PVN), which is proposed to act as a pivotal anatomical and functional substrate underlying integrative feedforward and feedback cardiovascular adjustments during exercise. Recent findings supporting neuroplastic adaptive changes within the NTS-PVN reciprocal network (e.g. remodelling of afferent inputs, structural and functional neuronal plasticity and changes in neurotransmitter content) will be discussed within the context of their role as important underlying cellular mechanisms supporting the tonic activation and improved efficacy of these central pathways in response to circulatory demand at rest and during exercise, both in sedentary and in trained individuals. We hope this review will stimulate
De Angelis, Francesco
2017-06-01
SERS investigations and electrical recording of neuronal networks with three-dimensional plasmonic nanoantennas Michele Dipalo, Valeria Caprettini, Anbrea Barbaglia, Laura Lovato, Francesco De Angelis e-mail: francesco.deangelis@iit.it Istituto Italiano di Tecnologia, Via Morego 30, 16163, Genova Biological systems are analysed mainly by optical, chemical or electrical methods. Normally each of these techniques provides only partial information about the environment, while combined investigations could reveal new phenomena occurring in complex systems such as in-vitro neuronal networks. Aiming at the merging of optical and electrical investigations of biological samples, we introduced three-dimensional plasmonic nanoantennas on CMOS-based electrical sensors [1]. The overall device is then capable of enhanced Raman analysis of cultured cells combined with electrical recording of neuronal activity. The Raman measurements show a much higher sensitivity when performed on the tip of the nanoantenna with respect to the flat substrate [2]; this effect is a combination of the high plasmonic field enhancement and of the tight adhesion of cells on the nanoantenna tip. Furthermore, when plasmonic opto-poration is exploited [3] the 3D nanoelectrodes are able to penetrate through the cell membrane thus accessing the intracellular environment. Our latest results (unpublished) show that the technique is completely non-invasive and solves many problems related to state-of-the-art intracellular recording approaches on large neuronal networks. This research received funding from ERC-IDEAS Program: "Neuro-Plasmonics" [Grant n. 616213]. References: [1] M. Dipalo, G. C. Messina, H. Amin, R. La Rocca, V. Shalabaeva, A. Simi, A. Maccione, P. Zilio, L. Berdondini, F. De Angelis, Nanoscale 2015, 7, 3703. [2] R. La Rocca, G. C. Messina, M. Dipalo, V. Shalabaeva, F. De Angelis, Small 2015, 11, 4632. [3] G. C. Messina et al., Spatially, Temporally, and Quantitatively Controlled Delivery of
A unified view on weakly correlated recurrent networks
Directory of Open Access Journals (Sweden)
Dmytro eGrytskyy
2013-10-01
Full Text Available The diversity of neuron models used in contemporary theoretical neuroscience to investigate specific properties of covariances in the spiking activity raises the question how these models relate to each other. In particular it is hard to distinguish between generic properties of covariances and peculiarities due to the abstracted model. Here we present a unified view on pairwise covariances in recurrent networks in the irregular regime. We consider the binary neuron model, the leaky integrate-and-fire model, and the Hawkes process. We show that linear approximation maps each of these models to either of two classes of linear rate models, including the Ornstein-Uhlenbeck process as a special case. The distinction between the two classes is the location of the additive noise in the rate dynamics: on the output side for spiking models and on the input side for the binary model. Both classes allow closed form solutions for the covariance. For output noise, the covariance separates into an echo term and a term due to correlated input. The unified framework enables us to transfer results between models. For example, we generalize the binary model and the Hawkes process to the situation with synaptic conduction delays and simplify derivations for established results. Our approach is applicable to general network structures and suitable for the calculation of population averages. The derived averages are exact for fixed out-degree network architectures and approximate for fixed in-degree. We demonstrate how taking into account fluctuations in the linearization procedure increases the accuracy of the effective theory and we explain the class dependent differences between covariances in the time and the frequency domain. Finally we show that the oscillatory instability emerging in networks of integrate-and-fire models with delayed inhibitory feedback is a model-invariant feature: the same structure of poles in the complex frequency plane determines the
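The Ornstein-Uhlenbeck process named above as the special case of the linear rate models has the closed-form autocovariance c(s) = sigma^2 * exp(-|s|/tau), which a simulated trace can be checked against. A minimal sketch with illustrative parameters (tau, sigma, step size are assumptions, not values from the paper):

```python
import numpy as np

def ou_trace(tau=10.0, sigma=1.0, dt=0.1, steps=500000, seed=7):
    """Sample path of an Ornstein-Uhlenbeck process via its exact
    discrete-time update x[t] = a x[t-1] + noise, a = exp(-dt/tau)."""
    rng = np.random.default_rng(seed)
    a = np.exp(-dt / tau)
    noise = sigma * np.sqrt(1.0 - a * a) * rng.standard_normal(steps)
    x = np.empty(steps)
    x[0] = 0.0
    for t in range(1, steps):
        x[t] = a * x[t - 1] + noise[t]
    return x

x = ou_trace()
dt, tau, sigma = 0.1, 10.0, 1.0
lags = np.array([0, 10, 20, 50])                       # lags in steps
emp = np.array([np.mean(x[:len(x) - l] * x[l:]) for l in lags])
theory = sigma ** 2 * np.exp(-lags * dt / tau)         # c(s) = sigma^2 exp(-|s|/tau)
```

The empirical lagged products match the exponential decay up to sampling noise; the closed-form covariances for the two noise classes in the paper generalize exactly this calculation to networks.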
The chronotron: a neuron that learns to fire temporally precise spike patterns.
Directory of Open Access Journals (Sweden)
Răzvan V Florian
Full Text Available In many cases, neurons process information carried by the precise timings of spikes. Here we show how neurons can learn to generate specific temporally precise output spikes in response to input patterns of spikes having precise timings, thus processing and memorizing information that is entirely temporally coded, both as input and as output. We introduce two new supervised learning rules for spiking neurons with temporal coding of information (chronotrons), one that provides high memory capacity (E-learning), and one that has a higher biological plausibility (I-learning). With I-learning, the neuron learns to fire the target spike trains through synaptic changes that are proportional to the synaptic currents at the timings of real and target output spikes. We study these learning rules in computer simulations where we train integrate-and-fire neurons. Both learning rules allow neurons to fire at the desired timings, with sub-millisecond precision. We show how chronotrons can learn to classify their inputs, by firing identical, temporally precise spike trains for different inputs belonging to the same class. When the input is noisy, the classification also leads to noise reduction. We compute lower bounds for the memory capacity of chronotrons and explore the influence of various parameters on chronotrons' performance. The chronotrons can model neurons that encode information in the time of the first spike relative to the onset of salient stimuli or neurons in oscillatory networks that encode information in the phases of spikes relative to the background oscillation. Our results show that firing one spike per cycle optimizes memory capacity in neurons encoding information in the phase of firing relative to a background rhythm.
International Nuclear Information System (INIS)
Li Yuye; Jia Bing; Gu Huaguang; An Shucheng
2012-01-01
Diversity among neurons and noise are inevitable in real neuronal networks. In this paper, parameter-diversity-induced spiral waves and multiple spatial coherence resonances in a two-dimensional neuronal network without or with noise are simulated. The relationship between the multiple resonances and the multiple transitions between patterns of spiral waves is identified. The coherence degrees induced by the diversity are suppressed when noise is introduced and noise density is increased. The results suggest that the natural nervous system might profit from both parameter diversity and noise, providing a possible approach to control the formation and transition of spiral waves through the cooperation between diversity and noise. (general)
Searching for collective behavior in a large network of sensory neurons.
Directory of Open Access Journals (Sweden)
Gašper Tkačik
2014-01-01
Full Text Available Maximum entropy models are the least structured probability distributions that exactly reproduce a chosen set of statistics measured in an interacting network. Here we use this principle to construct probabilistic models which describe the correlated spiking activity of populations of up to 120 neurons in the salamander retina as it responds to natural movies. Already in groups as small as 10 neurons, interactions between spikes can no longer be regarded as small perturbations in an otherwise independent system; for 40 or more neurons, pairwise interactions need to be supplemented by a global interaction that controls the distribution of synchrony in the population. Here we show that such "K-pairwise" models--being systematic extensions of the previously used pairwise Ising models--provide an excellent account of the data. We explore the properties of the neural vocabulary by: (1) estimating its entropy, which constrains the population's capacity to represent visual information; (2) classifying activity patterns into a small set of metastable collective modes; (3) showing that the neural codeword ensembles are extremely inhomogeneous; (4) demonstrating that the state of individual neurons is highly predictable from the rest of the population, allowing the capacity for error correction.
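The structure of a K-pairwise model, a pairwise Ising model plus a global potential V(K) on the spike count, can be made concrete by exact enumeration for a small population. This is an illustrative sketch: the parameters are random, not fitted to retinal data, and enumeration is only feasible for small n (the paper uses Monte Carlo methods for 40-120 neurons).

```python
import numpy as np
from itertools import product

def kpairwise(h, J, V):
    """Exact K-pairwise maximum-entropy distribution for small n:
    P(s) proportional to exp(h.s + s.J.s/2 + V[K(s)]), where K(s)
    is the number of simultaneously active neurons."""
    n = len(h)
    states = np.array(list(product([0, 1], repeat=n)))
    k = states.sum(axis=1)
    logp = states @ h + 0.5 * np.einsum('si,ij,sj->s', states, J, states) + V[k]
    p = np.exp(logp - logp.max())
    return states, p / p.sum()

rng = np.random.default_rng(8)
n = 8
h = rng.normal(-1.0, 0.3, n)                       # biases favor sparse firing
J = rng.normal(0.0, 0.2, (n, n))
J = 0.5 * (J + J.T)                                # symmetric couplings
np.fill_diagonal(J, 0.0)
states, p_pair = kpairwise(h, J, np.zeros(n + 1))             # plain pairwise
states, p_kpair = kpairwise(h, J, -0.5 * np.arange(n + 1))    # penalize synchrony
mean_k_pair = p_pair @ states.sum(axis=1)
mean_k_kpair = p_kpair @ states.sum(axis=1)
```

The global V(K) term reshapes the synchrony distribution without touching the pairwise couplings, which is exactly the extra degree of freedom the abstract says is needed beyond the pairwise Ising model for 40 or more neurons.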
Effects of Some Neurobiological Factors in a Self-organized Critical Model Based on Neural Networks
International Nuclear Information System (INIS)
Zhou Liming; Zhang Yingyue; Chen Tianlun
2005-01-01
Based on an integrate-and-fire mechanism, we investigate the effects of changing the efficacy of the synapses, the transmission time delay, and the relative refractory period on the self-organized criticality in our neural network model.
Inference of topology and the nature of synapses, and the flow of information in neuronal networks
Borges, F. S.; Lameu, E. L.; Iarosz, K. C.; Protachevicz, P. R.; Caldas, I. L.; Viana, R. L.; Macau, E. E. N.; Batista, A. M.; Baptista, M. S.
2018-02-01
The characterization of neuronal connectivity is one of the most important matters in neuroscience. In this work, we show that a recently proposed informational quantity, the causal mutual information, employed with an appropriate methodology, can be used not only to correctly infer the direction of the underlying physical synapses, but also to identify their excitatory or inhibitory nature, considering bivariate time series that are easy to handle and measure. The success of our approach relies on a surprising property found in neuronal networks by which nonadjacent neurons do "understand" each other (positive mutual information); however, this exchange of information is not capable of causing effect (zero transfer entropy). Remarkably, inhibitory connections, responsible for enhancing synchronization, transfer more information than excitatory connections, known to enhance entropy in the network. We also demonstrate that our methodology can be used to correctly infer directionality of synapses even in the presence of dynamic and observational Gaussian noise, and is also successful in providing the effective directionality of intermodular connectivity, when only mean fields can be measured.
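The distinction between mutual information and transfer entropy invoked here can be illustrated with a minimal plug-in estimator for binary time series. This is a hedged sketch of the generic quantity, assuming one-step histories and histogram estimation; the paper's causal mutual information uses a more careful methodology.

```python
import numpy as np

def entropy_bits(counts):
    p = counts[counts > 0] / counts.sum()
    return -(p * np.log2(p)).sum()

def joint_counts(*series):
    """Histogram of joint binary states, one bit per series."""
    idx = np.zeros(len(series[0]), dtype=int)
    for s in series:
        idx = idx * 2 + s
    return np.bincount(idx, minlength=2 ** len(series)).astype(float)

def transfer_entropy(x, y):
    """Plug-in TE(x -> y) = H(y_t | y_{t-1}) - H(y_t | y_{t-1}, x_{t-1})."""
    yt, yp, xp = y[1:], y[:-1], x[:-1]
    h_cond = entropy_bits(joint_counts(yp, yt)) - entropy_bits(joint_counts(yp))
    h_cond_x = (entropy_bits(joint_counts(xp, yp, yt))
                - entropy_bits(joint_counts(xp, yp)))
    return h_cond - h_cond_x

rng = np.random.default_rng(9)
T = 100000
x = (rng.random(T) < 0.5).astype(int)
noise = (rng.random(T) < 0.1).astype(int)
y = np.zeros(T, dtype=int)
y[1:] = x[:-1] ^ noise[1:]       # y: noisy, one-step-delayed copy of x
te_xy = transfer_entropy(x, y)   # large: x drives y
te_yx = transfer_entropy(y, x)   # near zero: no effect of y on x
```

In this toy pair the instantaneous mutual information between x and y is also large in both directions, yet the transfer entropy is asymmetric, mirroring the "understanding without causing effect" property the abstract exploits.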
Irlbacher, Kerstin; Kraft, Antje; Kehrer, Stefanie; Brandt, Stephan A
2014-10-01
Cognitive control can be reactive or proactive in nature. Reactive control mechanisms, which support the resolution of interference, start after its onset. Conversely, proactive control involves the anticipation and prevention of interference prior to its occurrence. The interrelation of both types of cognitive control is currently under debate: Are they mediated by different neuronal networks? Or are there neuronal structures that have the potential to act in a proactive as well as in a reactive manner? This review illustrates the way in which integrating knowledge gathered from behavioral studies, functional imaging, and human electroencephalography proves useful in answering these questions. We focus on studies that investigate interference resolution at the level of working memory representations. In summary, different mechanisms are instrumental in supporting reactive and proactive control. Distinct neuronal networks are involved, though some brain regions, especially pre-SMA, possess functions that are relevant to both control modes. Therefore, activation of these brain areas could be observed in reactive, as well as proactive control, but at different times during information processing. Copyright © 2014 Elsevier Ltd. All rights reserved.
Impact of sub and supra-threshold adaptation currents in networks of spiking neurons.
Colliaux, David; Yger, Pierre; Kaneko, Kunihiko
2015-12-01
Neuronal adaptation is the intrinsic capacity of the brain to change, by various mechanisms, its dynamical responses as a function of the context. Such a phenomenon, widely observed in vivo and in vitro, is known to be crucial for homeostatic regulation of activity and gain control. The effects of adaptation have already been studied at the single-cell level, resulting from either voltage- or calcium-gated channels, both activated by the spiking activity and modulating the dynamical responses of the neurons. In this study, by disentangling those effects into a linear (sub-threshold) and a non-linear (supra-threshold) part, we focus on the functional role of these two distinct components of adaptation in neuronal activity at various scales, from single-cell responses up to recurrent network dynamics, under stationary and non-stationary stimulation. The effects of slow currents on collective dynamics, such as the modulation of population oscillations and the reliability of spike patterns, are quantified for various types of adaptation in sparse recurrent networks.
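The split into a linear sub-threshold and a non-linear spike-triggered adaptation component can be sketched with an adaptive LIF neuron. The parameter values (a, b, tau_w, the input step) are illustrative assumptions, not the paper's; the point is only the qualitative spike-frequency adaptation.

```python
import numpy as np

def adaptive_lif(i_ext, tau_m=20.0, tau_w=100.0, a=0.1, b=0.05,
                 v_th=1.0, v_reset=0.0, dt=0.1):
    """LIF neuron with both adaptation components: a linear sub-threshold
    current coupled to the voltage (strength a) and a non-linear,
    spike-triggered increment (size b), both decaying with tau_w."""
    v, w = 0.0, 0.0
    spike_times = []
    for step, i in enumerate(i_ext):
        v += dt / tau_m * (-v + i - w)      # membrane with adaptation current
        w += dt / tau_w * (a * v - w)       # sub-threshold (linear) component
        if v >= v_th:
            v = v_reset
            w += b                          # supra-threshold (spike-triggered)
            spike_times.append(step * dt)
    return np.array(spike_times)

spike_times = adaptive_lif(np.full(50000, 1.5))   # constant step input
isis = np.diff(spike_times)
```

Under a step input, the adaptation variable w ratchets up with each spike and the interspike intervals lengthen toward a steady state, the single-cell signature whose sub- versus supra-threshold origin the study disentangles.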
Detection of M-Sequences from Spike Sequence in Neuronal Networks
Directory of Open Access Journals (Sweden)
Yoshi Nishitani
2012-01-01
Full Text Available In circuit theory, it is well known that a linear feedback shift register (LFSR) circuit generates pseudorandom bit sequences (PRBS), including M-sequences, which have the maximum possible period. In this study, we tried to detect M-sequences, known as pseudorandom sequences generated by the LFSR circuit, from time series patterns of stimulated action potentials. Stimulated action potentials were recorded from dissociated cultures of hippocampal neurons grown on a multielectrode array. We could find several M-sequences from a 3-stage LFSR circuit (M3). These results show the possibility of assembling LFSR circuits or their equivalents in a neuronal network. However, since the M3 pattern was composed of only four spike intervals, the possibility of an accidental detection was not zero. Therefore, we detected M-sequences from random spike sequences which were not generated by an LFSR circuit and compared the result with the number of M-sequences in the originally observed raster data. As a result, a significant difference was confirmed: a greater number of "0-1"-reversed 3-stage M-sequences occurred than would have been detected by chance. This result suggests that some LFSR-equivalent circuits are assembled in neuronal networks.
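For reference, a 3-stage Fibonacci LFSR with a primitive feedback polynomial produces the maximum-period M-sequence of length 2^3 - 1 = 7 (the M3 case above). A minimal sketch; the tap choice x^3 + x^2 + 1 is one of the two primitive options and is an illustrative assumption, not necessarily the one used in the study.

```python
def lfsr_m_sequence(n_stages=3, taps=(3, 2), state=0b001):
    """One period of the M-sequence of an n-stage Fibonacci LFSR;
    taps (3, 2) correspond to the primitive polynomial x^3 + x^2 + 1,
    giving the maximum period 2**n_stages - 1."""
    out = []
    for _ in range(2 ** n_stages - 1):
        out.append((state >> (n_stages - 1)) & 1)   # output the MSB
        fb = 0
        for t in taps:
            fb ^= (state >> (t - 1)) & 1            # XOR the tapped stages
        state = ((state << 1) | fb) & (2 ** n_stages - 1)
    return out

seq = lfsr_m_sequence()
```

The 7-bit period contains exactly 2^(n-1) = 4 ones, one of the balance properties that makes accidental matches in short spike-interval patterns non-negligible, hence the shuffle control described in the abstract.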
The energy demand of fast neuronal network oscillations: insights from brain slice preparations
Directory of Open Access Journals (Sweden)
Oliver eKann
2012-01-01
Full Text Available Fast neuronal network oscillations in the gamma range (30-100 Hz) in the cerebral cortex have been implicated in higher cognitive functions such as sensory perception, working memory, and, perhaps, consciousness. However, little is known about the energy demand of gamma oscillations. This is mainly caused by technical limitations that are associated with simultaneous recordings of neuronal activity and energy metabolism in small neuronal networks and at the level of mitochondria in vivo. Thus, recent studies have focused on brain slice preparations to address the energy demand of gamma oscillations in vitro. Here, reports will be summarized and discussed that combined electrophysiological recordings, oxygen sensor microelectrodes and live-cell fluorescence imaging in acutely prepared slices and organotypic slice cultures of the hippocampus from both mouse and rat. These reports consistently show that gamma oscillations can be reliably induced in hippocampal slice preparations by different pharmacological tools. They suggest that gamma oscillations are associated with high energy demand, requiring both rapid adaptation of oxidative energy metabolism and sufficient supply of oxygen and nutrients. These findings might help to explain the exceptional vulnerability of higher cognitive functions during pathological processes of the brain, such as circulatory disturbances, genetic mitochondrial diseases, and neurodegeneration.
Segers, L S; Nuding, S C; Ott, M M; Dean, J B; Bolser, D C; O'Connor, R; Morris, K F; Lindsey, B G
2015-01-01
Models of brain stem ventral respiratory column (VRC) circuits typically emphasize populations of neurons, each active during a particular phase of the respiratory cycle. We have proposed that "tonic" pericolumnar expiratory (t-E) neurons tune breathing during baroreceptor-evoked reductions and central chemoreceptor-evoked enhancements of inspiratory (I) drive. The aims of this study were to further characterize the coordinated activity of t-E neurons and test the hypothesis that peripheral chemoreceptors also modulate drive via inhibition of t-E neurons and disinhibition of their inspiratory neuron targets. Spike trains of 828 VRC neurons were acquired by multielectrode arrays along with phrenic nerve signals from 22 decerebrate, vagotomized, neuromuscularly blocked, artificially ventilated adult cats. Forty-eight of 191 t-E neurons fired synchronously with another t-E neuron as indicated by cross-correlogram central peaks; 32 of the 39 synchronous pairs were elements of groups with mutual pairwise correlations. Gravitational clustering identified fluctuations in t-E neuron synchrony. A network model supported the prediction that inhibitory populations with spike synchrony reduce target neuron firing probabilities, resulting in offset or central correlogram troughs. In five animals, stimulation of carotid chemoreceptors evoked changes in the firing rates of 179 of 240 neurons. Thirty-two neuron pairs had correlogram troughs consistent with convergent and divergent t-E inhibition of I cells and disinhibitory enhancement of drive. Four of 10 t-E neurons that responded to sequential stimulation of peripheral and central chemoreceptors triggered 25 cross-correlograms with offset features. The results support the hypothesis that multiple afferent systems dynamically tune inspiratory drive in part via coordinated t-E neurons. Copyright © 2015 the American Physiological Society.
Input dependent cell assembly dynamics in a model of the striatal medium spiny neuron network
Directory of Open Access Journals (Sweden)
Adam ePonzi
2012-03-01
Full Text Available The striatal medium spiny neuron (MSN) network is sparsely connected with fairly weak GABAergic collaterals receiving an excitatory glutamatergic cortical projection. Peri-stimulus time histograms (PSTH) of MSN population response investigated in various experimental studies display strong firing rate modulations distributed throughout behavioural task epochs. In previous work we have shown by numerical simulation that sparse random networks of inhibitory spiking neurons with characteristics appropriate for UP state MSNs form cell assemblies which fire together coherently in sequences on long behaviourally relevant timescales when the network receives a fixed pattern of constant input excitation. Here we first extend that model to the case where cortical excitation is composed of many independent noisy Poisson processes and demonstrate that cell assembly dynamics is still observed when the input is sufficiently weak. However if cortical excitation strength is increased more regularly firing and completely quiescent cells are found, which depend on the cortical stimulation. Subsequently we further extend previous work to consider what happens when the excitatory input varies as it would when the animal is engaged in behaviour. We investigate how sudden switches in excitation interact with network generated patterned activity. We show that sequences of cell assembly activations can be locked to the excitatory input sequence and delineate the range of parameters where this behaviour is shown. Model cell population PSTH display both stimulus and temporal specificity, with large population firing rate modulations locked to elapsed time from task events. Thus the random network can generate a large diversity of temporally evolving stimulus dependent responses even though the input is fixed between switches. We suggest the MSN network is well suited to the generation of such slow coherent task dependent responses which could be utilized by the animal in behaviour.
Input dependent cell assembly dynamics in a model of the striatal medium spiny neuron network.
Ponzi, Adam; Wickens, Jeff
2012-01-01
The striatal medium spiny neuron (MSN) network is sparsely connected with fairly weak GABAergic collaterals receiving an excitatory glutamatergic cortical projection. Peri-stimulus time histograms (PSTHs) of the MSN population response investigated in various experimental studies display strong firing rate modulations distributed throughout behavioral task epochs. In previous work we have shown by numerical simulation that sparse random networks of inhibitory spiking neurons with characteristics appropriate for UP state MSNs form cell assemblies which fire together coherently in sequences on long behaviorally relevant timescales when the network receives a fixed pattern of constant input excitation. Here we first extend that model to the case where cortical excitation is composed of many independent noisy Poisson processes and demonstrate that cell assembly dynamics is still observed when the input is sufficiently weak. However, if cortical excitation strength is increased, more regularly firing and completely quiescent cells are found, which depend on the cortical stimulation. Subsequently we further extend previous work to consider what happens when the excitatory input varies as it would when the animal is engaged in behavior. We investigate how sudden switches in excitation interact with network generated patterned activity. We show that sequences of cell assembly activations can be locked to the excitatory input sequence and outline the range of parameters where this behavior is shown. Model cell population PSTHs display both stimulus and temporal specificity, with large population firing rate modulations locked to elapsed time from task events. Thus the random network can generate a large diversity of temporally evolving stimulus dependent responses even though the input is fixed between switches. We suggest the MSN network is well suited to the generation of such slow coherent task dependent response which could be utilized by the animal in behavior.
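The class of model this abstract describes, a sparse random inhibitory spiking network driven by independent noisy Poisson excitation, can be sketched in a few lines. The following is a minimal illustrative toy, not the authors' model; all parameter values (`g`, `rate`, `w_ext`, time constants) are assumptions chosen only so that the toy network fires.

```python
import numpy as np

def simulate_inhibitory_lif(n=100, p=0.1, g=0.2, rate=3000.0, w_ext=0.02,
                            t_max=1.0, dt=1e-3, tau=0.02, v_th=1.0,
                            v_reset=0.0, seed=0):
    """Sparse random inhibitory LIF network driven by independent Poisson
    excitation (illustrative parameters, not taken from the paper).

    Returns a boolean spike raster of shape (n, steps).
    """
    rng = np.random.default_rng(seed)
    # Sparse inhibitory collaterals with connection probability p, no autapses
    conn = -g * (rng.random((n, n)) < p)
    np.fill_diagonal(conn, 0.0)
    v = rng.random(n) * v_th
    steps = int(t_max / dt)
    spikes = np.zeros((n, steps), dtype=bool)
    for t in range(steps):
        # Each cell receives its own noisy Poisson excitation
        drive = w_ext * rng.poisson(rate * dt, n) / dt
        v += dt * (-v / tau + drive)
        if t > 0:
            v += conn @ spikes[:, t - 1].astype(float)  # recurrent inhibition
        fired = v >= v_th
        spikes[:, t] = fired
        v[fired] = v_reset
    return spikes
```

Increasing `rate` in this sketch mimics stronger cortical excitation; increasing `g` strengthens the inhibitory collaterals that shape which cells stay active.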
Decorrelation of Neural-Network Activity by Inhibitory Feedback
Einevoll, Gaute T.; Diesmann, Markus
2012-01-01
Correlations in spike-train ensembles can seriously impair the encoding of information by their spatio-temporal structure. An inevitable source of correlation in finite neural networks is common presynaptic input to pairs of neurons. Recent studies demonstrate that spike correlations in recurrent neural networks are considerably smaller than expected based on the amount of shared presynaptic input. Here, we explain this observation by means of a linear network model and simulations of networks of leaky integrate-and-fire neurons. We show that inhibitory feedback efficiently suppresses pairwise correlations and, hence, population-rate fluctuations, thereby assigning inhibitory neurons the new role of active decorrelation. We quantify this decorrelation by comparing the responses of the intact recurrent network (feedback system) and systems where the statistics of the feedback channel is perturbed (feedforward system). Manipulations of the feedback statistics can lead to a significant increase in the power and coherence of the population response. In particular, neglecting correlations within the ensemble of feedback channels or between the external stimulus and the feedback amplifies population-rate fluctuations by orders of magnitude. The fluctuation suppression in homogeneous inhibitory networks is explained by a negative feedback loop in the one-dimensional dynamics of the compound activity. Similarly, a change of coordinates exposes an effective negative feedback loop in the compound dynamics of stable excitatory-inhibitory networks. The suppression of input correlations in finite networks is explained by the population averaged correlations in the linear network model: In purely inhibitory networks, shared-input correlations are canceled by negative spike-train correlations. In excitatory-inhibitory networks, spike-train correlations are typically positive. Here, the suppression of input correlations is not a result of the mere existence of correlations between
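The linear-network part of the argument can be illustrated directly: with steady state x = (I − W)⁻¹ h and shared presynaptic input, inhibitory feedback attenuates the common population mode and with it the average pairwise correlation. The sketch below is a hedged toy with illustrative values (the coupling normalization and input covariance are assumptions, not the paper's network).

```python
import numpy as np

def avg_pairwise_corr(n=100, p=0.2, g=0.0, c_shared=0.3, seed=0):
    """Linear network sketch of decorrelation by inhibitory feedback.

    Each unit obeys x = h + W x with input covariance
    C_h = (1 - c) I + c 11^T (shared presynaptic input), so the steady state
    is x = M h with M = (I - W)^{-1} and C_x = M C_h M^T.
    W has inhibitory entries -g/n on random links (illustrative choice).
    Returns the population-averaged pairwise correlation coefficient.
    """
    rng = np.random.default_rng(seed)
    W = -(g / n) * (rng.random((n, n)) < p)
    np.fill_diagonal(W, 0.0)
    M = np.linalg.inv(np.eye(n) - W)
    C_h = (1.0 - c_shared) * np.eye(n) + c_shared * np.ones((n, n))
    C_x = M @ C_h @ M.T
    d = np.sqrt(np.diag(C_x))
    R = C_x / np.outer(d, d)
    return float(R[~np.eye(n, dtype=bool)].mean())
```

Without feedback (`g=0`) the output correlation equals the shared-input correlation; with inhibitory feedback the negative loop suppresses the population mode and the average correlation drops well below it.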
Multi-channels coupling-induced pattern transition in a tri-layer neuronal network
Wu, Fuqiang; Wang, Ya; Ma, Jun; Jin, Wuyin; Hobiny, Aatef
2018-03-01
Neurons in the nervous system show complex electrical behaviors due to the variety of connection types and the diversity in excitability. A tri-layer network is constructed to investigate signal propagation and pattern formation by selecting different coupling channels between layers. Each layer is set to a different state, and the local kinetics is described by the Hindmarsh-Rose neuron model. By changing the number of coupling channels between layers and the state of the first layer, the collective behaviors of each layer and the synchronization pattern of the network are investigated. A statistical factor of synchronization is calculated for each layer. It is found that the quiescent state in the second layer can be excited and the disordered state in the third layer suppressed when the first layer is controlled by a pacemaker, and the developed state depends on the number of coupling channels. Furthermore, a collapse in the first layer can cause breakdown of the other layers in the network; the mechanism is that the disordered state in the third layer is enhanced when sampled signals from the collapsed layer impose a continuous disturbance on the next layer.
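The two ingredients of this study, the Hindmarsh-Rose local kinetics and a statistical factor of synchronization, can both be written down compactly. The sketch below uses the standard Hindmarsh-Rose parameters and a common variance-ratio definition of the synchronization factor; the drive `I` and the specific factor definition are assumptions, as the abstract does not give them.

```python
import numpy as np

def hindmarsh_rose(I=1.5, t_max=500.0, dt=0.01, a=1.0, b=3.0, c=1.0, d=5.0,
                   r=0.006, s=4.0, x_rest=-1.6):
    """Integrate a single Hindmarsh-Rose neuron with a simple Euler scheme.

    Standard parameter values; I is the external drive selecting the firing
    mode. Returns the membrane-variable trace x(t).
    """
    steps = int(t_max / dt)
    x, y, z = -1.0, 0.0, 0.0
    xs = np.empty(steps)
    for t in range(steps):
        dx = y - a * x**3 + b * x**2 - z + I
        dy = c - d * x**2 - y
        dz = r * (s * (x - x_rest) - z)
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        xs[t] = x
    return xs

def sync_factor(X):
    """Statistical factor of synchronization R = Var(mean field) / mean(Var(x_i)).

    X has shape (n_neurons, time); R -> 1 for perfect synchrony and -> 0 for
    asynchronous activity in large populations (one common convention).
    """
    mean_field = X.mean(axis=0)
    return mean_field.var() / X.var(axis=1).mean()
```

In a layered network, `sync_factor` would be evaluated per layer on the matrix of membrane traces to quantify how pacemaker control or collapse in one layer changes order in the next.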
The NEST Dry-Run Mode: Efficient Dynamic Analysis of Neuronal Network Simulation Code
Directory of Open Access Journals (Sweden)
Susanne Kunkel
2017-06-01
Full Text Available NEST is a simulator for spiking neuronal networks that commits to a general purpose approach: It allows for high flexibility in the design of network models, and its applications range from small-scale simulations on laptops to brain-scale simulations on supercomputers. Hence, developers need to test their code for various use cases and ensure that changes to code do not impair scalability. However, running a full set of benchmarks on a supercomputer takes up precious compute-time resources and can entail long queuing times. Here, we present the NEST dry-run mode, which enables comprehensive dynamic code analysis without requiring access to high-performance computing facilities. A dry-run simulation is carried out by a single process, which performs all simulation steps except communication as if it was part of a parallel environment with many processes. We show that measurements of memory usage and runtime of neuronal network simulations closely match the corresponding dry-run data. Furthermore, we demonstrate the successful application of the dry-run mode in the areas of profiling and performance modeling.
Local excitation-inhibition ratio for synfire chain propagation in feed-forward neuronal networks
Guo, Xinmeng; Yu, Haitao; Wang, Jiang; Liu, Jing; Cao, Yibin; Deng, Bin
2017-09-01
A leading hypothesis holds that spiking activity propagates along neuronal sub-populations which are connected in a feed-forward manner, and that propagation efficiency is affected by the dynamics of the sub-populations. In this paper, we investigate how the interaction between local excitation and inhibition affects synfire chain propagation in feed-forward networks (FFNs). The simulation results show that there is an appropriate excitation-inhibition (EI) ratio maximizing the performance of synfire chain propagation. The optimal EI ratio can significantly enhance the selectivity of the FFN to synchronous signals, which thereby increases stability against background noise. Moreover, the effect of network topology on synfire chain propagation is also investigated. It is found that synfire chain propagation can be maximized by an optimal interlayer linking probability. We also find that external noise is detrimental to synchrony propagation by inducing spiking jitter. The results presented in this paper may provide insights into the effects of network dynamics on neuronal computations.
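The all-or-none character of synfire propagation can be seen even in a binary-neuron caricature: a pulse packet either saturates the chain or dies out, depending on the initial packet size and the EI balance. The following sketch is an illustrative toy, not the paper's model; `w_exc`, `ei_ratio`, the mean-field inhibition, and the unit threshold are all assumptions.

```python
import numpy as np

def propagate_synfire(n_layers=10, n=100, p=0.5, w_exc=0.12, ei_ratio=4.0,
                      a0=80, noise=0.0, seed=0):
    """Pulse-packet propagation through a feed-forward chain of binary units.

    Each neuron sums feed-forward excitation and subtracts local inhibition
    proportional to the previous layer's activity (scaled by 1/ei_ratio),
    then fires on crossing a unit threshold. Illustrative parameters.
    Returns the number of active neurons in each layer.
    """
    rng = np.random.default_rng(seed)
    spikes = np.zeros(n, dtype=bool)
    spikes[:a0] = True
    activity = [a0]
    for _ in range(n_layers - 1):
        links = rng.random((n, n)) < p                  # interlayer connectivity
        drive = w_exc * (links @ spikes.astype(float))  # feed-forward excitation
        inh = (w_exc / ei_ratio) * p * spikes.sum()     # mean-field local inhibition
        v = drive - inh + noise * rng.standard_normal(n)
        spikes = v >= 1.0
        activity.append(int(spikes.sum()))
    return activity
```

A large synchronous packet propagates and saturates, while a small one dies out within a layer or two, which is the selectivity to synchronous input that the abstract describes.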
Conceptual Network Model From Sensory Neurons to Astrocytes of the Human Nervous System.
Yang, Yiqun; Yeo, Chai Kiat
2015-07-01
From single-celled animals like the paramecium to vertebrates like the ape, the nervous system plays an important role in responding to variations in the environment. Compared to other animals, the nervous system in the human body possesses more intricate organization and utility. The anatomy of the nervous system has been progressively understood, yet an explanation at the cell level of the complete information transmission is still lacking. Along the signal pathway toward the brain, an external stimulus first activates action potentials in the sensing neuron, and these electric pulses transmit along the spinal nerve or cranial nerve to the neurons in the brain. Second, calcium elevation is triggered in the branch of the astrocyte at the tripartite synapse. Third, the local calcium wave expands to the entire territory of the astrocyte. Finally, the calcium wave propagates to the neighboring astrocyte via gap junction channels. In our study, we integrate existing mathematical models and biological experiments for each step of the signal transduction to establish a conceptual network model for the human nervous system. The network is composed of four layers, and the communication protocols of each layer could be adapted to entities with different characterizations. We verify our simulation results against the available biological experiments and mathematical models and provide a test case of the integrated network. As the production of conscious episodes in the human nervous system is still under intense research, our model serves as a useful tool to facilitate, complement and verify current and future study in human cognition.
Directory of Open Access Journals (Sweden)
Marc Ebner
2011-01-01
Full Text Available Cognitive brain functions, for example, sensory perception, motor control and learning, are understood as computation by axonal-dendritic chemical synapses in networks of integrate-and-fire neurons. Cognitive brain functions may occur either consciously or nonconsciously (on “autopilot”). Conscious cognition is marked by gamma synchrony EEG, mediated largely by dendritic-dendritic gap junctions, sideways connections in input/integration layers. Gap-junction-connected neurons define a sub-network within a larger neural network. A theoretical model (the “conscious pilot”) suggests that as gap junctions open and close, a gamma-synchronized subnetwork, or zone, moves through the brain as an executive agent, converting nonconscious “auto-pilot” cognition to consciousness, and enhancing computation by coherent processing and collective integration. In this study we implemented sideways “gap junctions” in a single-layer artificial neural network to perform figure/ground separation. The set of neurons connected through gap junctions forms a reconfigurable resistive grid or sub-network zone. In the model, outgoing spikes are temporally integrated and spatially averaged using the fixed resistive grid set up by neurons of similar function which are connected through gap junctions. This spatial average, essentially a feedback signal from the neuron's output, determines whether particular gap junctions between neurons will open or close. Neurons connected through open gap junctions synchronize their output spikes. We have tested our gap-junction-defined sub-network in a one-layer neural network on artificial retinal inputs using real-world images. Our system is able to perform figure/ground separation where the laterally connected sub-network of neurons represents a perceived object. Even though we only show results for visual stimuli, our approach should generalize to other modalities. The system demonstrates a moving sub-network zone of
Stochastic Resonance in Neuronal Network Motifs with Ornstein-Uhlenbeck Colored Noise
Directory of Open Access Journals (Sweden)
Xuyang Lou
2014-01-01
Full Text Available We consider here the effect of Ornstein-Uhlenbeck colored noise on the stochastic resonance of the feed-forward-loop (FFL) network motif. The FFL motif is modeled with FitzHugh-Nagumo neurons and chemical coupling. Our results show that the noise intensity and the correlation time of the noise process serve as control parameters, which have a great impact on the stochastic dynamics of the FFL motif. We find that, with a proper choice of noise intensity and correlation time, the signal-to-noise ratio (SNR) can display more than one peak.
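The two building blocks here, Ornstein-Uhlenbeck colored noise with a tunable correlation time and a FitzHugh-Nagumo neuron driven by a weak periodic signal, can be sketched as below. This is an illustrative single-neuron fragment, not the coupled FFL motif; the drive amplitude, noise parameters, and FHN constants are assumptions.

```python
import numpy as np

def ou_noise(steps, dt, sigma=0.3, tau_c=0.5, seed=0):
    """Ornstein-Uhlenbeck noise with standard deviation sigma and correlation
    time tau_c, generated with the exact one-step update."""
    rng = np.random.default_rng(seed)
    alpha = np.exp(-dt / tau_c)
    beta = sigma * np.sqrt(1.0 - alpha ** 2)
    xi = rng.standard_normal(steps)
    eta = np.empty(steps)
    eta[0] = sigma * xi[0]          # start in the stationary distribution
    for t in range(1, steps):
        eta[t] = alpha * eta[t - 1] + beta * xi[t]
    return eta

def fhn_with_noise(A=0.1, omega=0.3, sigma=0.3, tau_c=0.5, t_max=200.0,
                   dt=0.01, a=0.7, b=0.8, eps=0.08, seed=0):
    """FitzHugh-Nagumo neuron driven by a subthreshold sinusoid plus OU noise.

    Classic FHN form: dv/dt = v - v^3/3 - w + I, dw/dt = eps (v + a - b w).
    Illustrative parameters; returns the membrane-variable trace.
    """
    steps = int(t_max / dt)
    eta = ou_noise(steps, dt, sigma, tau_c, seed)
    v, w = -1.0, -0.5
    vs = np.empty(steps)
    for t in range(steps):
        I = A * np.sin(omega * t * dt) + eta[t]
        v += dt * (v - v ** 3 / 3.0 - w + I)   # fast membrane variable
        w += dt * eps * (v + a - b * w)        # slow recovery variable
        vs[t] = v
    return vs
```

An SNR analysis would then compare the spectral power of the spike output at `omega` against the noise floor while sweeping `sigma` and `tau_c`, the two control parameters named in the abstract.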
International Nuclear Information System (INIS)
Yoshioka, Masahiko
2002-01-01
We study associative memory neural networks of the Hodgkin-Huxley type of spiking neurons in which multiple periodic spatiotemporal patterns of spike timing are memorized as limit-cycle-type attractors. In encoding the spatiotemporal patterns, we assume spike-timing-dependent synaptic plasticity with an asymmetric time window. Analysis of the periodic solution of the retrieval state reveals that if the area of the negative part of the time window is equal to that of the positive part, then crosstalk among encoded patterns vanishes. A phase transition due to the loss of stability of the periodic solution is observed when we assume a fast α function for the direct interaction among neurons. In order to evaluate the critical point of this phase transition, we employ Floquet theory, in which the stability problem of an infinite number of spiking neurons interacting via α functions is reduced to an eigenvalue problem for a finite-size matrix. Numerical integration of the single-body dynamics yields the explicit value of the matrix, which enables us to determine the critical point of the phase transition with a high degree of precision.
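The balance condition on the asymmetric STDP window is easy to make concrete with the common double-exponential form W(Δt): the net area vanishes exactly when the potentiation and depression sides satisfy a₊τ₊ = a₋τ₋. The sketch below uses that standard form with illustrative amplitudes and time constants (the paper's exact window is not given in the abstract).

```python
import numpy as np

def stdp_window(dt, a_plus=1.0, tau_plus=20.0, a_minus=0.5, tau_minus=40.0):
    """Asymmetric STDP time window W(dt), with dt = t_post - t_pre in ms:
    potentiation for pre-before-post (dt >= 0), depression otherwise.
    Illustrative double-exponential form and parameters."""
    dt = np.asarray(dt, dtype=float)
    return np.where(dt >= 0.0,
                    a_plus * np.exp(-np.abs(dt) / tau_plus),
                    -a_minus * np.exp(-np.abs(dt) / tau_minus))

def window_area(**kwargs):
    """Net area under the window, computed by the trapezoid rule. It vanishes
    when a_plus*tau_plus equals a_minus*tau_minus, the balance condition
    under which crosstalk among encoded patterns is reported to vanish."""
    t = np.linspace(-500.0, 500.0, 200001)
    w = stdp_window(t, **kwargs)
    return float(((w[:-1] + w[1:]) * np.diff(t)).sum() / 2.0)
```

With the defaults, a₊τ₊ = 1.0·20 equals a₋τ₋ = 0.5·40, so the positive and negative lobes cancel; halving `a_minus` leaves a net positive area of a₊τ₊ − a₋τ₋ = 10.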
Action observation and mirror neuron network: a tool for motor stroke rehabilitation.
Sale, P; Franceschini, M
2012-06-01
Mirror neurons are a specific class of neurons that are activated and discharge both during observation of the same or similar motor act performed by another individual and during the execution of a motor act. Different studies based on noninvasive neuroelectrophysiological assessment or functional brain imaging techniques have demonstrated the presence of mirror neurons and their mechanism in humans. Various authors have demonstrated that in humans these networks are activated when individuals learn motor actions via execution (as in traditional motor learning), imitation, observation (as in observational learning) and motor imagery. Activation of these brain areas (the inferior parietal lobe and the ventral premotor cortex, as well as the caudal part of the inferior frontal gyrus [IFG]) following observation or motor imagery may thereby facilitate subsequent movement execution by directly matching the observed or imagined action to the internal simulation of that action. It is therefore believed that this multi-sensory action-observation system enables individuals to (re)learn impaired motor functions through the activation of these internal action-related representations. In humans, the mirror mechanism is also located in various brain regions: in Broca's area, which is involved in language processing and speech production, and not only in centres that mediate voluntary movement but also in cortical areas that mediate visceromotor emotion-related behaviours. On the basis of these findings, various studies have been carried out over the last 10 years on the clinical use of action observation for motor rehabilitation of sub-acute and chronic stroke patients.
Directory of Open Access Journals (Sweden)
Rainer Engelken
2016-08-01
Full Text Available Neuronal activity in the central nervous system varies strongly in time and across neuronal populations. It is a longstanding proposal that such fluctuations generically arise from chaotic network dynamics. Various theoretical studies predict that the rich dynamics of rate models operating in the chaotic regime can subserve circuit computation and learning. Neurons in the brain, however, communicate via spikes, and it is a theoretical challenge to obtain similar rate fluctuations in networks of spiking neuron models. A recent study investigated spiking balanced networks of leaky integrate-and-fire (LIF) neurons and compared their dynamics to a matched rate network with identical topology, where single unit input-output functions were chosen from isolated LIF neurons receiving Gaussian white noise input. A mathematical analogy between the chaotic instability in networks of rate units and the spiking network dynamics was proposed. Here we revisit the behavior of the spiking LIF networks and these matched rate networks. We find the expected hallmarks of a chaotic instability in the rate network: for supercritical coupling strengths near the transition point, the autocorrelation time diverges; for subcritical coupling strengths, we observe critical slowing down in response to small external perturbations. In the spiking network, by contrast, we found that the timescale of the autocorrelations is insensitive to the coupling strength and that rate deviations resulting from small input perturbations rapidly decay. The decay speed even accelerates for increasing coupling strength. In conclusion, our reanalysis demonstrates fundamental differences between the behavior of pulse-coupled spiking LIF networks and rate networks with matched topology and input-output function. In particular, there is no indication of a corresponding chaotic instability in the spiking network.
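The rate-network side of this comparison is the classic random network dx/dt = −x + J tanh(x) with Gaussian coupling of gain g, which transitions from a stable zero fixed point (g < 1) to chaotic self-sustained fluctuations (g > 1). The sketch below is that textbook setup (Sompolinsky-Crisanti-Sommers style), used here only as an illustration; the network size, gain values, and Euler step are assumptions.

```python
import numpy as np

def rate_network_activity(n=200, g=1.5, t_max=100.0, dt=0.1, seed=0):
    """Random rate network dx/dt = -x + J @ tanh(x), with J_ij ~ N(0, g^2/n).

    For g < 1 the fixed point x = 0 is stable; for g > 1 the network enters
    the chaotic regime with self-sustained rate fluctuations.
    Returns the time course of the population standard deviation of x.
    """
    rng = np.random.default_rng(seed)
    J = rng.standard_normal((n, n)) * g / np.sqrt(n)
    x = 0.5 * rng.standard_normal(n)
    steps = int(t_max / dt)
    amp = np.empty(steps)
    for t in range(steps):
        x += dt * (-x + J @ np.tanh(x))   # Euler step of the rate dynamics
        amp[t] = x.std()
    return amp
```

Subcritical gain lets the activity decay to zero, while supercritical gain sustains irregular fluctuations; the paper's point is that the matched spiking LIF network shows no analogue of this transition.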
DEFF Research Database (Denmark)
Bader, Benjamin M; Steder, Anne; Klein, Anders Bue
2017-01-01
The numerous γ-aminobutyric acid type A receptor (GABAAR) subtypes are differentially expressed and mediate distinct functions at the neuronal level. In this study we have investigated GABAAR-mediated modulation of the spontaneous activity patterns of primary neuronal networks from murine frontal cortex by characterizing the effects induced by a wide selection of pharmacological tools at a plethora of activity parameters in microelectrode array (MEA) recordings. The basic characteristics of the primary cortical neurons used in the recordings were studied in some detail, and the expression levels … of the information extractable from the MEA recordings offers interesting insights into the contributions of various GABAAR subtypes/subgroups to cortical network activity and the putative functional interplay between these receptors in these neurons.
Cortical neurons and networks are dormant but fully responsive during isoelectric brain state.
Altwegg-Boussac, Tristan; Schramm, Adrien E; Ballestero, Jimena; Grosselin, Fanny; Chavez, Mario; Lecas, Sarah; Baulac, Michel; Naccache, Lionel; Demeret, Sophie; Navarro, Vincent; Mahon, Séverine; Charpier, Stéphane
2017-09-01
A continuous isoelectric electroencephalogram reflects an interruption of endogenously-generated activity in cortical networks and systematically results in a complete dissolution of conscious processes. This electro-cerebral inactivity occurs during various brain disorders, including hypothermia, drug intoxication, long-lasting anoxia and brain trauma. It can also be induced in a therapeutic context, following the administration of high doses of barbiturate-derived compounds, to interrupt a hyper-refractory status epilepticus. Although altered sensory responses can be occasionally observed on an isoelectric electroencephalogram, the electrical membrane properties and synaptic responses of individual neurons during this cerebral state remain largely unknown. The aim of the present study was to characterize the intracellular correlates of a barbiturate-induced isoelectric electroencephalogram and to analyse the sensory-evoked synaptic responses that can emerge from a brain deprived of spontaneous electrical activity. We first examined sensory responsiveness in patients suffering from intractable status epilepticus treated by administration of thiopental. Multimodal sensory responses could be evoked on the flat electroencephalogram, including visually-evoked potentials that were significantly amplified and delayed, with a high trial-to-trial reproducibility compared to awake healthy subjects. Using an analogous pharmacological procedure to induce prolonged electro-cerebral inactivity in the rat, we could describe its cortical and subcortical intracellular counterparts. Neocortical, hippocampal and thalamo-cortical neurons were all silent during the isoelectric state and displayed a flat membrane potential significantly hyperpolarized compared with spontaneously active control states. Nonetheless, all recorded neurons could fire action potentials in response to intracellularly injected depolarizing current pulses and their specific intrinsic
Adaptive coupling optimized spiking coherence and synchronization in Newman-Watts neuronal networks.
Gong, Yubing; Xu, Bo; Wu, Ya'nan
2013-09-01
In this paper, we have numerically studied the effect of adaptive coupling on the temporal coherence and synchronization of spiking activity in Newman-Watts Hodgkin-Huxley neuronal networks. It is found that random shortcuts can enhance spiking synchronization more rapidly when the increment speed of the adaptive coupling is increased, and can optimize the temporal coherence of spikes only when the increment speed of the adaptive coupling is appropriate. It is also found that the adaptive coupling strength can enhance the synchronization of spikes and can optimize their temporal coherence when the random shortcuts are appropriate. These results show that adaptive coupling strongly influences spiking activity related to random shortcuts and can enhance and optimize the temporal coherence and synchronization of the spiking activity of the network. These findings can help better understand the role of adaptive coupling in improving information processing and transmission in neural systems.
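The Newman-Watts substrate used in this study is a ring lattice to which random shortcuts are added (never rewired, unlike Watts-Strogatz). A minimal construction sketch follows; the network size, neighbourhood `k`, and shortcut probability are illustrative, not the paper's values.

```python
import numpy as np

def newman_watts(n=60, k=2, p_add=0.1, seed=0):
    """Newman-Watts small-world adjacency matrix: a ring in which each node
    links to its k nearest neighbours on each side, plus random shortcuts
    added independently with probability p_add per node pair.
    Edges are only added on top of the ring, never rewired."""
    rng = np.random.default_rng(seed)
    adj = np.zeros((n, n), dtype=bool)
    for i in range(n):
        for j in range(1, k + 1):
            adj[i, (i + j) % n] = adj[(i + j) % n, i] = True
    # Random shortcuts drawn on the strict upper triangle, then symmetrized
    shortcuts = np.triu(rng.random((n, n)) < p_add, 1)
    adj |= shortcuts | shortcuts.T
    return adj
```

In the paper's setting, each `True` entry would carry a coupling weight that grows in time according to the adaptive-coupling rule, with Hodgkin-Huxley units at the nodes.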
Merolla, Paul A; Arthur, John V; Alvarez-Icaza, Rodrigo; Cassidy, Andrew S; Sawada, Jun; Akopyan, Filipp; Jackson, Bryan L; Imam, Nabil; Guo, Chen; Nakamura, Yutaka; Brezzo, Bernard; Vo, Ivan; Esser, Steven K; Appuswamy, Rathinakumar; Taba, Brian; Amir, Arnon; Flickner, Myron D; Risk, William P; Manohar, Rajit; Modha, Dharmendra S
2014-08-08
Inspired by the brain's structure, we have developed an efficient, scalable, and flexible non-von Neumann architecture that leverages contemporary silicon technology. To demonstrate, we built a 5.4-billion-transistor chip with 4096 neurosynaptic cores interconnected via an intrachip network that integrates 1 million programmable spiking neurons and 256 million configurable synapses. Chips can be tiled in two dimensions via an interchip communication interface, seamlessly scaling the architecture to a cortexlike sheet of arbitrary size. The architecture is well suited to many applications that use complex neural networks in real time, for example, multiobject detection and classification. With 400-pixel-by-240-pixel video input at 30 frames per second, the chip consumes 63 milliwatts. Copyright © 2014, American Association for the Advancement of Science.
A Markov model for the temporal dynamics of balanced random networks of finite size
Lagzi, Fereshteh; Rotter, Stefan
2014-01-01
The balanced state of recurrent networks of excitatory and inhibitory spiking neurons is characterized by fluctuations of population activity about an attractive fixed point. Numerical simulations show that these dynamics are essentially nonlinear, and the intrinsic noise (self-generated fluctuations) in networks of finite size is state-dependent. Therefore, stochastic differential equations with additive noise of fixed amplitude cannot provide an adequate description of the stochastic dynamics. The noise model should, rather, result from a self-consistent description of the network dynamics. Here, we consider a two-state Markovian neuron model, where spikes correspond to transitions from the active state to the refractory state. Excitatory and inhibitory input to this neuron affects the transition rates between the two states. The corresponding nonlinear dependencies can be identified directly from numerical simulations of networks of leaky integrate-and-fire neurons, discretized at a time resolution in the sub-millisecond range. Deterministic mean-field equations, and a noise component that depends on the dynamic state of the network, are obtained from this model. The resulting stochastic model reflects the behavior observed in numerical simulations quite well, irrespective of the size of the network. In particular, a strong temporal correlation between the two populations, a hallmark of the balanced state in random recurrent networks, is well represented by our model. Numerical simulations of such networks show that a log-normal distribution of short-term spike counts is a property of balanced random networks with fixed in-degree that has not been considered before, and our model shares this statistical property. Furthermore, the reconstruction of the flow from simulated time series suggests that the mean-field dynamics of finite-size networks are essentially of Wilson-Cowan type. We expect that this novel nonlinear stochastic model of the interaction between
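The two-state Markov idea can be sketched for a single population: neurons switch between active and refractory states, a spike is the active-to-refractory transition, and the transition rate depends on the current population state, which makes the intrinsic noise state-dependent (binomial counts with state-dependent probabilities). The rate functions below are illustrative assumptions, not the nonlinear dependencies identified in the paper.

```python
import numpy as np

def markov_population(n=1000, t_max=10.0, dt=0.0005, seed=0):
    """Two-state Markov sketch of one population: each neuron is either
    active (A) or refractory (R); a spike is the A -> R transition.

    The A -> R rate grows with the active fraction (recurrent excitation),
    the R -> A recovery rate is constant; both are illustrative.
    Returns the fraction of active neurons over time.
    """
    rng = np.random.default_rng(seed)
    n_active = n // 2
    frac = np.empty(int(t_max / dt))
    for t in range(frac.size):
        a = n_active / n
        rate_ar = 20.0 + 40.0 * a        # spiking rate, input dependent
        rate_ra = 50.0                   # recovery rate
        # Binomial transition counts per time bin: state-dependent noise
        spikes = rng.binomial(n_active, min(1.0, rate_ar * dt))
        recoveries = rng.binomial(n - n_active, min(1.0, rate_ra * dt))
        n_active += recoveries - spikes
        frac[t] = n_active / n
    return frac
```

With these rates the mean-field fixed point solves a(20 + 40a) = (1 − a)·50, i.e. a ≈ 0.545, and the simulated fraction fluctuates around it with binomial finite-size noise.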
Network-state modulation of power-law frequency-scaling in visual cortical neurons.
El Boustani, Sami; Marre, Olivier; Béhuret, Sébastien; Baudot, Pierre; Yger, Pierre; Bal, Thierry; Destexhe, Alain; Frégnac, Yves
2009-09-01
Various types of neural-based signals, such as EEG, local field potentials and intracellular synaptic potentials, integrate multiple sources of activity distributed across large assemblies. They have in common a power-law frequency-scaling structure at high frequencies, but it is still unclear whether this scaling property is dominated by intrinsic neuronal properties or by network activity. The latter case is particularly interesting because if frequency-scaling reflects the network state it could be used to characterize the functional impact of the connectivity. In intracellularly recorded neurons of cat primary visual cortex in vivo, the power spectral density of V(m) activity displays a power-law structure at high frequencies with a fractional scaling exponent. We show that this exponent is not constant, but depends on the visual statistics used to drive the network. To investigate the determinants of this frequency-scaling, we considered a generic recurrent model of cortex receiving a retinotopically organized external input. Similarly to the in vivo case, our in computo simulations show that the scaling exponent reflects the correlation level imposed in the input. This systematic dependence was also replicated at the single cell level, by controlling independently, in a parametric way, the strength and the temporal decay of the pairwise correlation between presynaptic inputs. This last model was implemented in vitro by imposing the correlation control in artificial presynaptic spike trains through dynamic-clamp techniques. These in vitro manipulations induced a modulation of the scaling exponent, similar to that observed in vivo and predicted in computo. We conclude that the frequency-scaling exponent of the V(m) reflects stimulus-driven correlations in the cortical network activity. Therefore, we propose that the scaling exponent could be used to read-out the "effective" connectivity responsible for the dynamical signature of the population signals measured
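The central quantity here, a fractional power-law scaling exponent of the high-frequency PSD, can be estimated by a log-log least-squares fit on the periodogram. The sketch below generates a surrogate signal with a known 1/f^α spectrum by spectral shaping and recovers α; the frequency band and signal length are arbitrary assumptions, and this is a generic estimator, not the paper's analysis pipeline.

```python
import numpy as np

def powerlaw_signal(alpha, n=2**16, seed=0):
    """Gaussian surrogate signal with power spectrum S(f) ~ 1/f^alpha,
    built by shaping complex white noise in the frequency domain."""
    rng = np.random.default_rng(seed)
    freqs = np.fft.rfftfreq(n, d=1.0)
    spectrum = rng.standard_normal(freqs.size) + 1j * rng.standard_normal(freqs.size)
    scale = np.zeros_like(freqs)
    scale[1:] = freqs[1:] ** (-alpha / 2.0)   # amplitude ~ f^(-alpha/2), no DC
    return np.fft.irfft(spectrum * scale, n)

def scaling_exponent(x, f_lo=0.05, f_hi=0.4):
    """Estimate the power-law exponent of the PSD at high frequencies by a
    log-log least-squares fit to the periodogram (sampling rate 1)."""
    freqs = np.fft.rfftfreq(x.size, d=1.0)
    psd = np.abs(np.fft.rfft(x)) ** 2
    band = (freqs >= f_lo) & (freqs <= f_hi)
    slope, _ = np.polyfit(np.log(freqs[band]), np.log(psd[band]), 1)
    return -slope
```

Applied to intracellular V(m) traces, the fitted exponent is the quantity the authors report as varying with the statistics of the visual drive.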
Kilpatrick, Zachary P.; Bressloff, Paul C.
2009-01-01
We study the spatiotemporal dynamics of a two-dimensional excitatory neuronal network with synaptic depression. Coupling between populations of neurons is taken to be nonlocal, while depression is taken to be local and presynaptic. We show that the network supports a wide range of spatially structured oscillations, which are suggestive of phenomena seen in cortical slice experiments and in vivo. The particular form of the oscillations depends on initial conditions and the level of background noise. Given an initial, spatially localized stimulus, activity evolves to a spatially localized oscillating core that periodically emits target waves. Low levels of noise can spontaneously generate several pockets of oscillatory activity that interact via their target patterns. Periodic activity in space can also organize into spiral waves, provided that there is some source of rotational symmetry breaking due to external stimuli or noise. In the high gain limit, no oscillatory behavior exists, but a transient stimulus can lead to a single, outward propagating target wave. © Springer Science + Business Media, LLC 2009.
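The model class is a neural field with nonlocal excitatory coupling and local presynaptic depression, u_t = −u + w ∗ (q f(u)) with resource dynamics q_t = (1 − q)/α − β q f(u). The sketch below is a one-dimensional caricature of the paper's two-dimensional model, with an assumed Gaussian kernel, a steep sigmoid for f, and illustrative parameter values.

```python
import numpy as np

def neural_field_with_depression(n=200, L=10.0, t_max=30.0, dt=0.01,
                                 beta=2.0, alpha=0.5, theta=0.1):
    """1D neural field with presynaptic depression:
        u_t = -u + w * (q f(u)),   q_t = (1 - q)/alpha - beta q f(u),
    with a nonlocal Gaussian excitatory kernel w and steep sigmoidal rate f.
    A 1D caricature with illustrative parameters; returns the final
    activity profile after a localized initial stimulus.
    """
    x = np.linspace(-L / 2, L / 2, n)
    dx = x[1] - x[0]
    # Nonlocal excitatory kernel, precomputed as a dense matrix
    w = np.exp(-(x[:, None] - x[None, :]) ** 2) * dx
    f = lambda u: 1.0 / (1.0 + np.exp(-50.0 * (u - theta)))  # steep sigmoid
    u = 0.5 * np.exp(-4.0 * x ** 2)   # spatially localized initial stimulus
    q = np.ones(n)                    # synaptic resources, fully recovered
    for _ in range(int(t_max / dt)):
        fu = f(u)
        u += dt * (-u + w @ (q * fu))
        q += dt * ((1.0 - q) / alpha - beta * q * fu)
    return u
```

In two dimensions, the same interplay of nonlocal excitation and slow local resource depletion is what supports the oscillating cores, target waves, and spirals described in the abstract.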
Optimization behavior of brainstem respiratory neurons. A cerebral neural network model.
Poon, C S
1991-01-01
A recent model of respiratory control suggested that the steady-state respiratory responses to CO2 and exercise may be governed by an optimal control law in the brainstem respiratory neurons. It was not certain, however, whether such complex optimization behavior could be accomplished by a realistic biological neural network. To test this hypothesis, we developed a hybrid computer-neural model in which the dynamics of the lung, brain and other tissue compartments were simulated on a digital computer. Mimicking the "controller" was a human subject who pedalled on a bicycle with varying speed (analog of ventilatory output) with a view to minimize an analog signal of the total cost of breathing (chemical and mechanical) which was computed interactively and displayed on an oscilloscope. In this manner, the visuomotor cortex served as a proxy (homolog) of the brainstem respiratory neurons in the model. Results in 4 subjects showed a linear steady-state ventilatory CO2 response to arterial PCO2 during simulated CO2 inhalation and a nearly isocapnic steady-state response during simulated exercise. Thus, neural optimization is a plausible mechanism for respiratory control during exercise and can be achieved by a neural network with cognitive computational ability without the need for an exercise stimulus.
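The optimization principle in the abstract, minimizing a total (chemical plus mechanical) cost of breathing, can be sketched numerically. The cost forms and all coefficients below are made up for illustration (only the steady-state alveolar gas relation is standard); the point is that minimizing such a cost reproduces a ventilation that rises with metabolic CO2 load while arterial PCO2 stays nearly isocapnic, as in the simulated exercise results.

```python
import math

def paco2(ve, vco2, dead_frac=0.3):
    """Steady-state alveolar gas relation (mmHg); ve = total
    ventilation (L/min), vco2 = CO2 production (L/min)."""
    va = ve * (1.0 - dead_frac)          # alveolar ventilation
    return 863.0 * vco2 / va

def total_cost(ve, vco2, a=0.06, b=2.0):
    """Illustrative chemical + mechanical cost of breathing;
    the quadratic chemical term and logarithmic work term are
    loosely inspired by optimization models, coefficients invented."""
    chem = (a * paco2(ve, vco2) - b) ** 2    # chemical (CO2) cost
    mech = math.log(ve ** 2)                 # mechanical (work) cost
    return chem + mech

def optimal_ve(vco2):
    """Grid search for the cost-minimizing ventilation, 2-100 L/min."""
    ves = [0.1 * k for k in range(20, 1000)]
    return min(ves, key=lambda v: total_cost(v, vco2))

ve_rest, ve_exercise = optimal_ve(0.2), optimal_ve(0.8)
```

With these coefficients the optimum sits near a PaCO2 of about 40 mmHg at both workloads, so ventilation scales with VCO2 and the response is nearly isocapnic.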
Dong, Jing; Gao, Lingqi; Han, Junde; Zhang, Junjie; Zheng, Jijian
2017-07-01
Deprivation of spontaneous rhythmic electrical activity in early development by anesthesia administration, among other interventions, induces neuronal apoptosis. However, it is unclear whether enhancement of neuronal electrical activity attenuates neuronal apoptosis in either normal development or after anesthesia exposure. The present study investigated the effects of dopamine, an enhancer of spontaneous rhythmic electrical activity, on ketamine-induced neuronal apoptosis in the developing rat retina. TUNEL and immunohistochemical assays indicated that ketamine time- and dose-dependently aggravated physiological and ketamine-induced apoptosis and inhibited early-synchronized spontaneous network activity. Dopamine administration reversed ketamine-induced neuronal apoptosis, but did not reverse the inhibitory effects of ketamine on early synchronized spontaneous network activity despite enhancing it in controls. Blockade of D1, D2, and A2A receptors and inhibition of cAMP/PKA signaling partially antagonized the protective effect of dopamine against ketamine-induced apoptosis. Together, these data indicate that dopamine attenuates ketamine-induced neuronal apoptosis in the developing rat retina by activating the D1, D2, and A2A receptors, and upregulating cAMP/PKA signaling, rather than through modulation of early synchronized spontaneous network activity.
Atypical cross talk between mentalizing and mirror neuron networks in autism spectrum disorder.
Fishman, Inna; Keown, Christopher L; Lincoln, Alan J; Pineda, Jaime A; Müller, Ralph-Axel
2014-07-01
Converging evidence indicates that brain abnormalities in autism spectrum disorder (ASD) involve atypical network connectivity, but it is unclear whether altered connectivity is especially prominent in brain networks that participate in social cognition. To investigate whether adolescents with ASD show altered functional connectivity in 2 brain networks putatively impaired in ASD and involved in social processing, theory of mind (ToM) and mirror neuron system (MNS). Cross-sectional study using resting-state functional magnetic resonance imaging involving 25 adolescents with ASD between the ages of 11 and 18 years and 25 typically developing adolescents matched for age, handedness, and nonverbal IQ. Statistical parametric maps testing the degree of whole-brain functional connectivity and social functioning measures. Relative to typically developing controls, participants with ASD showed a mixed pattern of both over- and underconnectivity in the ToM network, which was associated with greater social impairment. Increased connectivity in the ASD group was detected primarily between the regions of the MNS and ToM, and was correlated with sociocommunicative measures, suggesting that excessive ToM-MNS cross talk might be associated with social impairment. In a secondary analysis comparing a subset of the 15 participants with ASD with the most severe symptomology and a tightly matched subset of 15 typically developing controls, participants with ASD showed exclusive overconnectivity effects in both ToM and MNS networks, which were also associated with greater social dysfunction. Adolescents with ASD showed atypically increased functional connectivity involving the mentalizing and mirror neuron systems, largely reflecting greater cross talk between the 2. This finding is consistent with emerging evidence of reduced network segregation in ASD and challenges the prevailing theory of general long-distance underconnectivity in ASD. This excess ToM-MNS connectivity may reflect
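The connectivity analyses summarized above rest on resting-state functional connectivity, conventionally computed as the Pearson correlation between ROI time series. A minimal sketch with toy data (the three-ROI setup is invented, not from the study):

```python
import numpy as np

def fc_matrix(ts):
    """Functional connectivity as the pairwise Pearson correlation
    of ROI time series; ts has shape (n_rois, n_timepoints)."""
    return np.corrcoef(ts)

# toy data: ROIs 0 and 1 share a driving signal, ROI 2 is independent
rng = np.random.default_rng(0)
shared = rng.standard_normal(200)
ts = np.vstack([shared + 0.1 * rng.standard_normal(200),
                shared + 0.1 * rng.standard_normal(200),
                rng.standard_normal(200)])
fc = fc_matrix(ts)
```

Over- and underconnectivity statements then amount to group differences in entries of such matrices (after the usual preprocessing, which is omitted here).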
International Nuclear Information System (INIS)
Huang Xu-Hui; Hu Gang
2014-01-01
Phase transitions widely exist in nature and occur when some control parameters are changed. In neural systems, macroscopic states are represented by the activity states of neuron populations, and phase transitions between different activity states are closely related to corresponding functions in the brain. In particular, phase transitions to some rhythmic synchronous firing states play significant roles in diverse brain functions and dysfunctions, such as encoding rhythmical external stimuli, epileptic seizure, etc. However, in previous studies, phase transitions in neuronal networks have almost always been driven by network parameters (e.g., external stimuli), and there has been no investigation of transitions between typical activity states of neuronal networks occurring in a self-organized way through plastic connection weights. In this paper, we discuss phase transitions in electrically coupled, lattice-based small-world neuronal networks (LBSW networks) under spike-timing-dependent plasticity (STDP). By applying STDP to all electrical synapses, various known and novel phase transitions emerge in LBSW networks, in particular the phenomenon of self-organized phase transitions (SOPTs): repeated transitions between synchronous and asynchronous firing states. We further explore the mechanism generating SOPTs on the basis of synaptic weight dynamics.
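The synaptic weight dynamics referred to above are driven by the standard pair-based STDP window; a sketch of that update rule (amplitudes and time constant are illustrative, not the paper's values):

```python
import math

def stdp_dw(dt_ms, a_plus=0.01, a_minus=0.012, tau_ms=20.0):
    """Pair-based STDP window: dt_ms = t_post - t_pre.
    Pre-before-post (dt_ms > 0) potentiates, post-before-pre
    depresses; both effects decay exponentially with |dt_ms|."""
    if dt_ms > 0:
        return a_plus * math.exp(-dt_ms / tau_ms)
    return -a_minus * math.exp(dt_ms / tau_ms)
```

In a simulation the weight of each plastic connection would be updated as w = min(max(w + stdp_dw(dt), 0.0), w_max) for every pre/post spike pair, and it is the slow drift of these weights that moves the network back and forth across the synchronization transition.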
Liu, Jianbo; Khalil, Hassan K.; Oweiss, Karim G.
2011-08-01
Controlling the spatiotemporal firing pattern of an intricately connected network of neurons through microstimulation is highly desirable in many applications. We investigated in this paper the feasibility of using a model-based approach to the analysis and control of a basal ganglia (BG) network model of Hodgkin-Huxley (HH) spiking neurons through microstimulation. Detailed analysis of this network model suggests that it can reproduce the experimentally observed characteristics of BG neurons under a normal and a pathological Parkinsonian state. A simplified neuronal firing rate model, identified from the detailed HH network model, is shown to capture the essential network dynamics. Mathematical analysis of the simplified model reveals the presence of a systematic relationship between the network's structure and its dynamic response to spatiotemporally patterned microstimulation. We show that both the network synaptic organization and the local mechanism of microstimulation can impose tight constraints on the possible spatiotemporal firing patterns that can be generated by the microstimulated network, which may hinder the effectiveness of microstimulation to achieve a desired objective under certain conditions. Finally, we demonstrate that the feedback control design aided by the mathematical analysis of the simplified model is indeed effective in driving the BG network in the normal and Parkinsonian states to follow a prescribed spatiotemporal firing pattern. We further show that the rhythmic/oscillatory patterns that characterize a dopamine-depleted BG network can be suppressed as a direct consequence of controlling the spatiotemporal pattern of a subpopulation of the output Globus Pallidus internalis (GPi) neurons in the network. This work may provide plausible explanations for the mechanisms underlying the therapeutic effects of deep brain stimulation (DBS) in Parkinson's disease and pave the way towards a model-based, network level analysis and closed
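The control idea above, a simplified firing-rate model plus feedback stimulation that drives rates toward a prescribed pattern, can be sketched in a few lines. The linear dynamics, coupling matrix, target, and gain below are all invented stand-ins, not the paper's identified model:

```python
import numpy as np

def run_controlled(W, target, K=20.0, steps=500, dt=0.01):
    """Linear firing-rate dynamics  r' = -r + W r + u  with
    proportional feedback 'microstimulation'  u = K (target - r)
    driving population rates toward a prescribed pattern."""
    r = np.zeros(len(target))
    for _ in range(steps):
        u = K * (target - r)          # feedback control input
        r = r + dt * (-r + W @ r + u)
    return r

W = np.array([[0.0, -0.5],
              [0.8,  0.0]])           # toy 2-population coupling
target = np.array([1.0, 0.5])
r_final = run_controlled(W, target)
```

Pure proportional control leaves a small steady-state offset from the target, which shrinks as the gain K grows; the network coupling W constrains how closely arbitrary patterns can be tracked, mirroring the constraint analysis in the abstract.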
Theta rhythm-like bidirectional cycling dynamics of living neuronal networks in vitro.
Gladkov, Arseniy; Grinchuk, Oleg; Pigareva, Yana; Mukhina, Irina; Kazantsev, Victor; Pimashkin, Alexey
2018-01-01
The phenomena of synchronization, rhythmogenesis and coherence observed in brain networks are believed to be a dynamic substrate for cognitive functions such as learning and memory. However, researchers are still debating whether the rhythmic activity emerges from the network morphology that developed during neurogenesis or as a result of neuronal dynamics achieved under certain conditions. In the present study, we observed self-organized spiking activity that converged to long, complex and rhythmically repeated superbursts in neural networks formed by mature hippocampal cultures with a high cellular density. The superburst lasted for tens of seconds and consisted of hundreds of short (50-100 ms) small bursts with a high spiking rate of 139.0 ± 78.6 Hz that is associated with high-frequency oscillations in the hippocampus. In turn, the bursting frequency represents a theta rhythm (11.2 ± 1.5 Hz). The distribution of spikes within the bursts was non-random, representing a set of well-defined spatio-temporal base patterns or motifs. The long superburst was classified into two types. Each type was associated with a unique direction of spike propagation and, hence, was encoded by a binary sequence with random switching between the two "functional" states. The precisely structured bidirectional rhythmic activity that developed in self-organizing cultured networks was quite similar to the activity observed in the in vivo experiments.
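Identifying the short bursts that make up a superburst is typically done by grouping spikes whose inter-spike intervals fall below a threshold. A minimal sketch (thresholds are illustrative, not the study's detection parameters):

```python
def detect_bursts(spike_times, max_isi=0.1, min_spikes=3):
    """Group sorted spike times (s) into bursts: consecutive spikes
    closer than max_isi belong to one burst; keep bursts with at
    least min_spikes spikes. Returns (start, end) tuples."""
    bursts, current = [], [spike_times[0]]
    for prev, t in zip(spike_times, spike_times[1:]):
        if t - prev <= max_isi:
            current.append(t)
        else:
            if len(current) >= min_spikes:
                bursts.append((current[0], current[-1]))
            current = [t]
    if len(current) >= min_spikes:
        bursts.append((current[0], current[-1]))
    return bursts

spikes = [0.00, 0.02, 0.05, 0.07, 1.00, 2.00, 2.03, 2.06, 2.09]
bursts = detect_bursts(spikes)
```

On the toy train this yields two bursts and discards the isolated spike at 1.00 s; burst onset times then give the bursting (theta-like) frequency, and within-burst spike counts give the high-frequency rate.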
Reward-dependent learning in neuronal networks for planning and decision making.
Dehaene, S; Changeux, J P
2000-01-01
Neuronal network models have been proposed for the organization of evaluation and decision processes in prefrontal circuitry and their putative neuronal and molecular bases. The models all include an implementation and simulation of an elementary reward mechanism. Their central hypothesis is that tentative rules of behavior, which are coded by clusters of active neurons in prefrontal cortex, are selected or rejected based on an evaluation by this reward signal, which may be conveyed, for instance, by the mesencephalic dopaminergic neurons with which the prefrontal cortex is densely interconnected. At the molecular level, the reward signal is postulated to be a neurotransmitter such as dopamine, which exerts a global modulatory action on prefrontal synaptic efficacies, either via volume transmission or via targeted synaptic triads. Negative reinforcement has the effect of destabilizing the currently active rule-coding clusters; subsequently, spontaneous activity varies again from one cluster to another, giving the organism the chance to discover and learn a new rule. Thus, reward signals function as effective selection signals that either maintain or suppress currently active prefrontal representations as a function of their current adequacy. Simulations of this variation-selection have successfully accounted for the main features of several major tasks that depend on prefrontal cortex integrity, such as the delayed-response test, the Wisconsin card sorting test, the Tower of London test and the Stroop test. For the more complex tasks, we have found it necessary to supplement the external reward input with a second mechanism that supplies an internal reward; it consists of an auto-evaluation loop which short-circuits the reward input from the exterior. This allows for an internal evaluation of covert motor intentions without actualizing them as behaviors, by simply testing them covertly by comparison with memorized former experiences. This element of architecture
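The variation-selection principle described above, keep the active rule-coding cluster while rewarded, destabilize it on negative reinforcement, can be sketched as a toy learner (the rule set and trial structure are invented for illustration):

```python
import random

def learn_rule(correct_rule, n_rules=4, trials=200, seed=1):
    """Variation-selection sketch: a currently active 'rule cluster'
    is kept while rewarded and destabilized (randomly replaced)
    after negative reward, as in the prefrontal model."""
    rng = random.Random(seed)
    active = rng.randrange(n_rules)
    history = []
    for _ in range(trials):
        reward = (active == correct_rule)
        history.append(reward)
        if not reward:                    # destabilize on error
            active = rng.randrange(n_rules)
    return active, history

active, history = learn_rule(correct_rule=2)
```

Once the correct rule is stumbled upon, reward stabilizes it indefinitely; a rule switch (as in Wisconsin card sorting) simply makes the reward signal negative again, restarting the random search.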
le Feber, Joost; Postma, Wybren; de Weerd, Eddy; Weusthof, Marcel; Rutten, Wim L. C.
2015-01-01
Cultured neurons on multi electrode arrays (MEAs) have been widely used to study various aspects of neuronal (network) functioning. A possible drawback of this approach is the lack of structure in these networks. At the single cell level, several solutions have been proposed to enable directed connectivity, and promising results were obtained. At the level of connected sub-populations, a few attempts have been made with promising results. First assessment of the designs' functionality, however, suggested room for further improvement. We designed a two chamber MEA aiming to create a unidirectional connection between the networks in both chambers (“emitting” and “receiving”). To achieve this unidirectionality, all interconnecting channels contained barbs that hindered axon growth in the opposite direction (from receiving to emitting chamber). Visual inspection showed that axons predominantly grew through the channels in the promoted direction. This observation was confirmed by spontaneous activity recordings. Cross-correlation between the signals from two electrodes inside the channels suggested signal propagation at ≈2 m/s from emitting to receiving chamber. Cross-correlation between the firing patterns in both chambers indicated that most correlated activity was initiated in the emitting chamber, which was also reflected by a significantly lower fraction of partial bursts (i.e., a one-chamber-only burst) in the emitting chamber. Finally, electrical stimulation in the emitting chamber induced a fast response in that chamber, and a slower response in the receiving chamber. Stimulation in the receiving chamber evoked a fast response in that chamber, but no response in the emitting chamber. These results confirm the predominantly unidirectional nature of the connecting channels from emitting to receiving chamber. PMID:26578869
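The propagation-speed estimate quoted above comes from the lag of the cross-correlation peak between two in-channel electrodes. A sketch with a synthetic pulse (sampling rate, pulse shape, and electrode spacing are invented; only the method is from the abstract):

```python
import numpy as np

def propagation_speed(sig_a, sig_b, fs, distance_m):
    """Estimate the delay of sig_b relative to sig_a from the peak
    of their cross-correlation, then speed = distance / delay."""
    xc = np.correlate(sig_b, sig_a, mode="full")
    lag = np.argmax(xc) - (len(sig_a) - 1)   # samples; > 0: b lags a
    return distance_m / (lag / fs)

fs = 10_000.0                                 # 10 kHz sampling
t = np.arange(0, 0.1, 1 / fs)
pulse = np.exp(-((t - 0.02) ** 2) / (2 * 0.001 ** 2))  # spike at 20 ms
delayed = np.roll(pulse, 5)                   # same spike 0.5 ms later
v = propagation_speed(pulse, delayed, fs, distance_m=0.001)
```

With 1 mm between electrodes and a 0.5 ms lag, the estimate is 2 m/s, the order of magnitude reported for the channels.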
International Nuclear Information System (INIS)
Yu, Haitao; Guo, Xinmeng; Wang, Jiang; Deng, Bin; Wei, Xile
2014-01-01
The phenomenon of stochastic resonance in Newman-Watts small-world neuronal networks is investigated when the strength of synaptic connections between neurons is adaptively adjusted by spike-time-dependent plasticity (STDP). It is shown that, irrespective of whether the synaptic connectivity is fixed or adaptive, the phenomenon of stochastic resonance occurs. The efficiency of network stochastic resonance can be largely enhanced by STDP in the coupling process. In particular, the resonance for adaptive coupling can reach a much larger value than that for fixed coupling when the noise intensity is small or intermediate. STDP with dominant depression and a small temporal window ratio is more efficient for the transmission of a weak external signal in small-world neuronal networks. In addition, we demonstrate that the effect of stochastic resonance can be further improved via fine-tuning of the average coupling strength of the adaptive network. Furthermore, the small-world topology can significantly affect stochastic resonance in excitable neuronal networks. It is found that there exists an optimal probability of adding links at which the noise-induced transmission of the weak periodic signal peaks.
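The core stochastic-resonance effect, noise enabling a threshold element to transmit a subthreshold periodic signal, can be shown with a bare threshold detector (no network, no STDP; all parameters invented). The resonance measure here is the spectral power of the event train at the signal frequency:

```python
import numpy as np

def spectral_power(noise_std, seed=0, f=5.0, fs=1000.0, dur=20.0,
                   amp=0.3, theta=1.0):
    """Threshold detector driven by a subthreshold sinusoid plus
    Gaussian noise; returns the normalized power of the resulting
    event train at the signal frequency f."""
    rng = np.random.default_rng(seed)
    t = np.arange(0, dur, 1 / fs)
    x = amp * np.sin(2 * np.pi * f * t) \
        + noise_std * rng.standard_normal(t.size)
    events = x > theta                     # threshold crossings
    phasor = np.exp(-2j * np.pi * f * t)
    return np.abs(np.sum(events * phasor)) / t.size

p_low = spectral_power(0.0)    # no noise: signal never crosses
p_mid = spectral_power(0.5)    # intermediate noise: resonance
p_high = spectral_power(20.0)  # strong noise drowns the signal
```

With no noise the subthreshold signal produces no events at all; intermediate noise makes crossings cluster at the signal peaks; very strong noise makes crossings nearly phase-independent, so the signature at f fades again.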
Self-processing and the default mode network: Interactions with the mirror neuron system
Directory of Open Access Journals (Sweden)
Istvan eMolnar-Szakacs
2013-09-01
Recent evidence for the fractionation of the default mode network (DMN) into functionally distinguishable subdivisions with unique patterns of connectivity calls for a reconceptualization of the relationship between this network and self-referential processing. Advances in resting-state functional connectivity analyses are beginning to reveal increasingly complex patterns of organization within the key nodes of the DMN, the medial prefrontal cortex (MPFC) and posterior cingulate cortex (PCC), as well as between these nodes and other brain systems. Here we review recent examinations of the relationships between the DMN and various aspects of self-relevant and social-cognitive processing in light of emerging evidence for heterogeneity within this network. Drawing from a rapidly evolving social cognitive neuroscience literature, we propose that embodied simulation and mentalizing are processes which allow us to gain insight into another's physical and mental state by providing privileged access to our own physical and mental states. Embodiment implies that the same neural systems are engaged for self- and other-understanding through a simulation mechanism, while mentalizing refers to the use of high-level conceptual information to make inferences about the mental states of self and others. These mechanisms work together to provide a coherent representation of the self and, by extension, of others. Nodes of the DMN selectively interact with brain systems for embodiment and mentalizing, including the mirror neuron system, to produce appropriate mappings in the service of social cognitive demands.
Artificial neuron-glia networks learning approach based on cooperative coevolution.
Mesejo, Pablo; Ibáñez, Oscar; Fernández-Blanco, Enrique; Cedrón, Francisco; Pazos, Alejandro; Porto-Pazos, Ana B
2015-06-01
Artificial Neuron-Glia Networks (ANGNs) are a novel bio-inspired machine learning approach. They extend classical Artificial Neural Networks (ANNs) by incorporating recent findings and suppositions about the way information is processed by neural and astrocytic networks in the most evolved living organisms. Although ANGNs are not a consolidated method, their performance against the traditional approach, i.e. without artificial astrocytes, has already been demonstrated on classification problems. However, the corresponding learning algorithms developed so far strongly depend on a set of glial parameters which are manually tuned for each specific problem. As a consequence, preliminary experimental tests have to be run in order to determine an adequate set of values, making such manual parameter configuration time-consuming, error-prone, biased, and problem dependent. Thus, in this paper, we propose a novel learning approach for ANGNs that fully automates the learning process and makes it possible to test any reasonable parameter configuration for each specific problem. This new learning algorithm, based on coevolutionary genetic algorithms, is able to properly learn all the ANGN parameters. Its performance is tested on five classification problems, achieving significantly better results than previous ANGNs and competitive results with ANN approaches.
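The cooperative-coevolution idea, separate subpopulations each evolving one subset of parameters and being evaluated jointly with representatives of the other, can be sketched on a toy problem (the two-coordinate sphere function, population sizes, and mutation scheme below are invented; the actual work evolves network weights and glial parameters):

```python
import random

def coevolve(n_gens=100, pop_size=20, seed=0):
    """Cooperative coevolution sketch: two subpopulations each evolve
    one coordinate of a 2-D parameter vector; an individual is scored
    by pairing it with the current best of the other subpopulation.
    The toy fitness (to be minimized) is the sphere function."""
    rng = random.Random(seed)
    fitness = lambda x, y: x * x + y * y
    pops = [[rng.uniform(-5.0, 5.0) for _ in range(pop_size)]
            for _ in range(2)]
    best = [pops[0][0], pops[1][0]]
    for _ in range(n_gens):
        for s in (0, 1):
            partner = best[1 - s]
            pops[s].sort(key=lambda v: fitness(v, partner))
            best[s] = pops[s][0]
            # elitism + Gaussian mutation of the top five
            pops[s] = [best[s]] + [pops[s][i % 5] + rng.gauss(0.0, 0.3)
                                   for i in range(pop_size - 1)]
    return best, fitness(best[0], best[1])

best, final_fitness = coevolve()
```

Because neither subpopulation can be evaluated alone, credit assignment happens through the pairing with the partner's best individual, which is what lets weights and glial parameters be learned together.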
Wei, Yawei; Venayagamoorthy, Ganesh Kumar
2017-09-01
To prevent a large interconnected power system from a cascading failure, brownout, or even blackout, grid operators require access to faster-than-real-time information to make appropriate just-in-time control decisions. However, the communication and computational limitations of the currently used supervisory control and data acquisition (SCADA) system mean that it can only deliver delayed information. The deployment of synchrophasor measurement devices makes it possible to capture and visualize, in near-real-time, grid operational data with extra granularity. In this paper, a cellular computational network (CCN) approach for frequency situational intelligence (FSI) in a power system is presented. The distributed and scalable computing unit of the CCN framework makes it particularly flexible for customization to a particular set of prediction requirements. Two soft-computing algorithms have been implemented in the CCN framework: a cellular generalized neuron network (CCGNN) and a cellular multi-layer perceptron network (CCMLPN), for the purpose of providing multi-timescale frequency predictions, ranging from 16.67 ms to 2 s. These two CCGNN and CCMLPN systems were then implemented on two power systems of different scales, one of which included a large photovoltaic plant. A real-time power system simulator and a weather station within the Real-Time Power and Intelligent Systems (RTPIS) laboratory at Clemson, SC, were then used to derive typical FSI results. Copyright © 2017 Elsevier Ltd. All rights reserved.
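The defining feature of a CCN is that each cell runs its own small predictor on local measurements plus those of its neighbors. As a minimal stand-in for the CCGNN/CCMLPN cells, the sketch below fits one linear predictor per cell on a ring (the ring topology, lag order, and toy data are invented):

```python
import numpy as np

def fit_cell_predictors(series, order=3):
    """CCN-style sketch: one linear least-squares predictor per cell,
    trained on the cell's own lagged values plus those of its two
    ring neighbours; series has shape (n_cells, n_timesteps)."""
    n_cells, T = series.shape
    models = []
    for c in range(n_cells):
        left, right = (c - 1) % n_cells, (c + 1) % n_cells
        feats = lambda t, c=c, l=left, r=right: np.concatenate(
            [series[c, t - order:t],
             series[l, t - order:t],
             series[r, t - order:t]])
        A = np.array([feats(t) for t in range(order, T)])
        b = series[c, order:T]
        w, *_ = np.linalg.lstsq(A, b, rcond=None)
        models.append((w, feats))
    return models

def predict(models, c, t):
    w, feats = models[c]
    return float(w @ feats(t))

# toy data: phase-shifted oscillations on a ring of 8 cells
steps = np.arange(400)
series = np.array([np.sin(2 * np.pi * (steps / 50.0 + c / 8.0))
                   for c in range(8)])
models = fit_cell_predictors(series)
```

Each cell trains only on local data, so the scheme scales with system size and parallelizes naturally, which is the property the CCN framework exploits.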
Visual language recognition with a feed-forward network of spiking neurons
Energy Technology Data Exchange (ETDEWEB)
Rasmussen, Craig E [Los Alamos National Laboratory]; Garrett, Kenyan [Los Alamos National Laboratory]; Sottile, Matthew [GALOIS]; Shreyas, Ns [INDIANA UNIV.]
2010-01-01
An analogy is made and exploited between the recognition of visual objects and language parsing. A subset of regular languages is used to define a one-dimensional 'visual' language, in which the words are translational and scale invariant. This allows an exploration of the viewpoint invariant languages that can be solved by a network of concurrent, hierarchically connected processors. A language family is defined that is hierarchically tiling system recognizable (HREC). As inspired by nature, an algorithm is presented that constructs a cellular automaton that recognizes strings from a language in the HREC family. It is demonstrated how a language recognizer can be implemented from the cellular automaton using a feed-forward network of spiking neurons. This parser recognizes fixed-length strings from the language in parallel and as the computation is pipelined, a new string can be parsed in each new interval of time. The analogy with formal language theory allows inferences to be drawn regarding what class of objects can be recognized by visual cortex operating in purely feed-forward fashion and what class of objects requires a more complicated network architecture.
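The hierarchical, pipelined recognition described above can be illustrated by a standard trick: map each input symbol to its DFA transition function, then compose adjacent functions pairwise, layer by layer, so a fixed-length string is accepted or rejected in logarithmic depth. This is a sequential Python stand-in for the parallel tiling/spiking parser, not the paper's construction:

```python
def recognize(s, transitions, n_states, start=0, accepting=(0,)):
    """Log-depth recognition of a regular language: each symbol is a
    transition map on DFA states; adjacent maps are composed pairwise
    until one map remains, then it is applied to the start state."""
    if not s:
        return start in accepting
    maps = [transitions[ch] for ch in s]   # level 0: one map per symbol
    while len(maps) > 1:
        nxt = [tuple(g[f[q]] for q in range(n_states))   # f then g
               for f, g in zip(maps[::2], maps[1::2])]
        if len(maps) % 2:
            nxt.append(maps[-1])           # odd leftover passes through
        maps = nxt
    return maps[0][start] in accepting

# DFA for the regular language (ab)*: state 0 = start/accept, 2 = dead
trans = {"a": (1, 2, 2), "b": (2, 0, 2)}
```

Each layer of compositions corresponds to one level of the hierarchy (or one layer of the feed-forward network), and because layers only combine neighboring results, new strings can be streamed in pipelined fashion.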
The advantage of flexible neuronal tunings in neural network models for motor learning
Marongelli, Ellisha N.; Thoroughman, Kurt A.
2013-01-01
Human motor adaptation to novel environments is often modeled by a basis function network that transforms desired movement properties into estimated forces. This network employs a layer of nodes that have fixed broad tunings that generalize across the input domain. Learning is achieved by updating the weights of these nodes in response to training experience. This conventional model is unable to account for rapid flexibility observed in human spatial generalization during motor adaptation. However, added plasticity in the widths of the basis function tunings can achieve this flexibility, and several neurophysiological experiments have revealed flexibility in tunings of sensorimotor neurons. We found a model, Locally Weighted Projection Regression (LWPR), which uniquely possesses the structure of a basis function network in which both the weights and tuning widths of the nodes are updated incrementally during adaptation. We presented this LWPR model with training functions of different spatial complexities and monitored incremental updates to receptive field widths. An inverse pattern of dependence of receptive field adaptation on experienced error became evident, underlying both a relationship between generalization and complexity, and a unique behavior in which generalization always narrows after a sudden switch in environmental complexity. These results implicate a model that is flexible in both basis function widths and weights, like LWPR, as a viable alternative model for human motor adaptation that can account for previously observed plasticity in spatial generalization. This theory can be tested by using the behaviors observed in our experiments as novel hypotheses in human studies. PMID:23888141
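The key architectural point above, updating both the output weights and the tuning widths of basis-function nodes, can be sketched with plain gradient descent on a radial-basis network. This is not the actual LWPR update (which uses locally weighted incremental regression); the toy target, learning rate, and centers are invented:

```python
import numpy as np

def train_rbf(x, y, centers, n_iter=2000, lr=0.05, seed=0):
    """Basis-function network in which both the output weights w and
    the per-node tuning widths s are adapted by gradient descent on
    the squared error; widths are parameterized as log(s) to stay
    positive."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(len(centers)) * 0.1
    log_s = np.zeros(len(centers))               # widths start at 1.0
    for _ in range(n_iter):
        s = np.exp(log_s)
        d2 = (x[:, None] - centers[None, :]) ** 2
        phi = np.exp(-d2 / (2 * s ** 2))         # node activations
        err = phi @ w - y
        grad_w = phi.T @ err / len(x)
        grad_s = ((phi * d2 / s ** 2) * err[:, None]).sum(0) * w / len(x)
        w -= lr * grad_w
        log_s -= lr * grad_s
    return w, np.exp(log_s)

x = np.linspace(-3, 3, 50)
y = np.sin(x)
centers = np.linspace(-3, 3, 7)
w, s = train_rbf(x, y, centers)
```

After training, the widths have moved away from their initial value, i.e., the network reshapes its receptive fields to match the complexity of the trained function, which is the flexibility the fixed-tuning model lacks.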
Autapse-induced multiple stochastic resonances in a modular neuronal network
Yang, XiaoLi; Yu, YanHu; Sun, ZhongKui
2017-08-01
This study investigates the nontrivial effects of an autapse on stochastic resonance in a modular neuronal network subjected to bounded noise. The resonance effect of the autapse is detected by imposing a self-feedback loop with a given autaptic strength and autaptic time delay on each constituent neuron. Numerical simulations have demonstrated that bounded noise with the proper level of amplitude can induce stochastic resonance; moreover, the noise-induced resonance dynamics can be significantly shaped by the autapse. In detail, for a specific range of autaptic strength, multiple stochastic resonances can be induced when the autaptic time delays are appropriately adjusted. These appropriately adjusted delays are found to nearly approach integer multiples of the period of the external weak signal when the autaptic strength is very near zero; otherwise, they do not match the period of the external weak signal when the autaptic strength is slightly greater than zero. Surprisingly, in both cases, the differences between any two adjacent adjusted autaptic delays are always approximately equal to the period of the weak signal. The phenomenon of autaptic-delay-induced multiple stochastic resonances is further confirmed to be robust against the period of the external weak signal and the intra-module connection probability of the subnetworks. These findings could have important implications for weak signal detection and information propagation in realistic neural systems.
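The basic ingredient above, a neuron whose own spikes feed back onto it after a delay, can be sketched with a leaky integrate-and-fire unit; all parameter values are illustrative, and the instantaneous current kick is a crude stand-in for an autaptic synaptic current:

```python
import numpy as np

def lif_autapse(g_aut=0.5, delay=20.0, T=500.0, dt=0.1,
                i_ext=1.6, tau=10.0, v_th=1.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron with an autapse: each spike
    deposits a voltage kick of size g_aut back onto the neuron
    'delay' ms later. Returns the list of spike times (ms)."""
    n_steps = int(T / dt)
    d_steps = int(delay / dt)
    kicks = np.zeros(n_steps + d_steps + 1)   # scheduled feedback
    v, spikes = 0.0, []
    for i in range(n_steps):
        v += dt / tau * (-v + i_ext) + kicks[i]
        if v >= v_th:
            spikes.append(i * dt)
            v = v_reset
            kicks[i + d_steps] += g_aut       # autaptic self-feedback
    return spikes

n_no_autapse = len(lif_autapse(g_aut=0.0))
n_autapse = len(lif_autapse(g_aut=0.5))
```

An excitatory autapse advances upcoming threshold crossings, and the interplay between the autaptic delay and the period of an external signal is what produces the multiple resonances described in the abstract.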
Gonzalez-Burgos, Guillermo; Lewis, David A
2008-09-01
Synchronization of neuronal activity in the neocortex may underlie the coordination of neural representations and thus is critical for optimal cognitive function. Because cognitive deficits are the major determinant of functional outcome in schizophrenia, identifying their neural basis is important for the development of new therapeutic interventions. Here we review the data suggesting that phasic synaptic inhibition mediated by specific subtypes of cortical gamma-aminobutyric acid (GABA) neurons is essential for the production of synchronized network oscillations. We also discuss evidence indicating that GABA neurotransmission is altered in schizophrenia and propose mechanisms by which such alterations can decrease the strength of inhibitory connections in a cell-type-specific manner. We suggest that some alterations observed in the neocortex of schizophrenia subjects may be compensatory responses that partially restore inhibitory synaptic efficacy. The findings of altered neural synchrony and impaired cognitive function in schizophrenia suggest that such compensatory responses are insufficient and that interventions aimed at augmenting the efficacy of GABA neurotransmission might be of therapeutic value.
Noise in attractor networks in the brain produced by graded firing rate representations.
Directory of Open Access Journals (Sweden)
Tristan J Webb
Representations in the cortex are often distributed with graded firing rates in the neuronal populations. The firing rate probability distribution of each neuron to a set of stimuli is often exponential or gamma. In processes in the brain, such as decision-making, that are influenced by the noise produced by the close to random spike timings of each neuron for a given mean rate, the noise with this graded type of representation may be larger than with the binary firing rate distribution that is usually investigated. In integrate-and-fire simulations of an attractor decision-making network, we show that the noise is indeed greater, for a given sparseness of the representation, for graded (exponential) than for binary firing rate distributions. The greater noise was measured by faster escape times from the spontaneous firing rate state when the decision cues are applied, which corresponds to faster decision or reaction times. The greater noise was also evident as less stability of the spontaneous firing state before the decision cues are applied. The implication is that spiking-related noise will continue to be a factor that influences processes such as decision-making, signal detection, short-term memory, and memory recall even with the quite large networks found in the cerebral cortex, in which there are several thousand recurrent collateral synapses onto each neuron. The greater noise with graded firing rate distributions has the advantage that it can increase the speed of operation of cortical circuitry.
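The sparseness of a representation referred to above is commonly quantified by the Treves-Rolls measure a = ⟨r⟩² / ⟨r²⟩. A quick sketch on synthetic rates (assumed values throughout) shows how binary and graded (exponential) firing-rate distributions differ under this measure:

```python
import numpy as np

rng = np.random.default_rng(0)

def sparseness(r):
    """Treves-Rolls sparseness a = <r>^2 / <r^2> (a = 1 is fully distributed)."""
    return r.mean() ** 2 / (r ** 2).mean()

N = 10_000
# binary representation: a fixed fraction of neurons fire at one rate
binary = np.zeros(N)
binary[:2500] = 20.0                     # 25% of neurons active at 20 Hz

# graded representation: exponential rates with the same mean rate
graded = rng.exponential(binary.mean(), size=N)

# binary gives a = active fraction (0.25); an exponential distribution
# gives a = 1/2 regardless of its mean
print(round(sparseness(binary), 2), round(sparseness(graded), 2))
```

Matching the sparseness of the two representations, as the simulations above require, therefore means adjusting the active fraction of the binary code rather than its rate.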
Complex dynamics of a delayed discrete neural network of two nonidentical neurons
Energy Technology Data Exchange (ETDEWEB)
Chen, Yuanlong [Mathematics Department, GuangDong University of Finance, Guangzhou 510521 (China); Huang, Tingwen [Mathematics Department, Texas A&M University at Qatar, P. O. Box 23874, Doha (Qatar); Huang, Yu, E-mail: stshyu@mail.sysu.edu.cn [Mathematics Department, Sun Yat-Sen University, Guangzhou 510275 (China)
2014-03-15
In this paper, we show that a delayed discrete Hopfield neural network of two nonidentical neurons, with or without self-connections, can exhibit chaotic behavior. To this end, we first transform the model, in a novel way, into an equivalent system with some useful properties. Then, we identify the chaotic invariant set for this system and show that the dynamics of this system within this set is topologically conjugate to the dynamics of the full shift map with two symbols. This confirms chaos in the sense of Devaney. Our main results generalize the relevant results of Huang and Zou [J. Nonlinear Sci. 15, 291–303 (2005)], Kaslik and Balint [J. Nonlinear Sci. 18, 415–432 (2008)], and Chen et al. [Sci. China Math. 56(9), 1869–1878 (2013)]. We also give some numerical simulations to verify our theoretical results.
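The class of model analyzed above, a delayed discrete Hopfield-type network of two nonidentical neurons, can be iterated as follows. The parameter values here are illustrative assumptions only; they are not the coefficients for which the paper proves Devaney chaos:

```python
import numpy as np

# Delayed discrete Hopfield-type network of two non-identical neurons:
#   x(n+1) = b*x(n) + A @ f(x(n-k))
# with leak b, delay k, non-identical coupling matrix A, and sigmoid f.
b, k = 0.3, 2
A = np.array([[2.0, -1.2],
              [1.5,  0.8]])            # assumed, non-identical couplings
f = np.tanh

steps = 500
x = np.zeros((steps, 2))
x[: k + 1] = [0.1, -0.05]              # initial history required by the delay
for n in range(k, steps - 1):
    x[n + 1] = b * x[n] + A @ f(x[n - k])

print(x[-1])
```

Because |f| ≤ 1, every trajectory stays inside the box of radius max-row-sum(A)/(1-b), which is the kind of a priori bound used when locating an invariant set.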
Coexistence of intermittencies in the neuronal network of the epileptic brain.
Koronovskii, Alexey A; Hramov, Alexander E; Grubov, Vadim V; Moskalenko, Olga I; Sitnikova, Evgenia; Pavlov, Alexey N
2016-03-01
Intermittent behavior occurs widely in nature. At present, several types of intermittency are known and well studied. However, consideration of intermittency has usually been limited to cases in which only one type of intermittency takes place. In this paper, we report on the temporal behavior of the complex neuronal network in the epileptic brain when two types of intermittent behavior coexist and alternate with each other. We demonstrate this phenomenon in physiological experiments with WAG/Rij rats, a genetic animal model of absence epilepsy. The theoretical law we deduce for the distribution of laminar-phase lengths, a power law with exponent -2, agrees well with the experimental neurophysiological data.
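The -2 power law for laminar-phase lengths mentioned above can be checked on synthetic data: sample lengths with density p(l) ∝ l⁻² by inverse-CDF sampling and fit the slope of the log-log histogram. This is a self-contained numerical sketch, unrelated to the experimental data:

```python
import numpy as np

rng = np.random.default_rng(2)

# Sample laminar-phase lengths with density p(l) ~ l^(-2), l >= 1,
# via inverse-CDF sampling: if u ~ Uniform(0, 1), then l = 1/u.
lengths = 1.0 / rng.random(200_000)

# Fit the slope of the log-log histogram over a well-populated range.
bins = np.logspace(0.0, 2.0, 15)
hist, edges = np.histogram(lengths, bins=bins, density=True)
centers = np.sqrt(edges[:-1] * edges[1:])       # geometric bin centers
slope, _ = np.polyfit(np.log(centers), np.log(hist), 1)
print(round(slope, 1))
```

The fitted slope recovers the exponent of the underlying law, which is how an empirical distribution of laminar phases is compared against the theoretical -2 prediction.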
Anatomical differences in the mirror neuron system and social cognition network in autism.
Hadjikhani, Nouchine; Joseph, Robert M; Snyder, Josh; Tager-Flusberg, Helen
2006-09-01
Autism spectrum disorder (ASD) is a neurodevelopmental disorder associated with impaired social and emotional skills, the anatomical substrate of which is still unknown. In this study, we compared a group of 14 high-functioning ASD adults with a group of controls matched for sex, age, intelligence quotient, and handedness. We used an automated technique of analysis that accurately measures the thickness of the cerebral cortex and generates cross-subject statistics in a coordinate system based on cortical anatomy. We found local decreases of gray matter in the ASD group in areas belonging to the mirror neuron system (MNS), argued to be the basis of empathic behavior. Cortical thinning of the MNS was correlated with ASD symptom severity. Cortical thinning was also observed in areas involved in emotion recognition and social cognition. These findings suggest that the social and emotional deficits characteristic of autism may reflect abnormal thinning of the MNS and the broader network of cortical areas subserving social cognition.
Stability and bifurcation in a simplified four-neuron BAM neural network with multiple delays
Directory of Open Access Journals (Sweden)
2006-01-01
We first study the distribution of the zeros of a fourth-degree exponential polynomial. Then we apply the obtained results to a simplified bidirectional associative memory (BAM) neural network with four neurons and multiple time delays. Taking the sum of the delays as the bifurcation parameter, it is shown that under certain assumptions the steady state is absolutely stable. Under another set of conditions, there are critical values of the delay; when the delay crosses these critical values, a Hopf bifurcation occurs. Furthermore, some explicit formulae determining the stability and the direction of periodic solutions bifurcating from the Hopf bifurcations are obtained by applying normal form theory and center manifold reduction. Numerical simulations supporting the theoretical analysis are also included.
International Nuclear Information System (INIS)
Wang, Baoying; Gong, Yubing; Xie, Huijuan; Wang, Qi
2016-01-01
Highlights: • Optimal autaptic delay enhanced synchronization transitions induced by synaptic delay in neuronal networks. • Optimal synaptic delay enhanced synchronization transitions induced by autaptic delay. • Optimal coupling strength enhanced synchronization transitions induced by autaptic or synaptic delay. - Abstract: In this paper, we numerically study the effect of electrical autaptic and synaptic delays on synchronization transitions induced by each other in Newman–Watts Hodgkin–Huxley neuronal networks. It is found that the synchronization transitions induced by synaptic delay vary with autaptic delay and become strongest when the autaptic delay is optimal. Similarly, the synchronization transitions induced by autaptic delay vary with synaptic delay and become strongest at an optimal synaptic delay. Also, there is an optimal coupling strength at which the synchronization transitions induced by either synaptic or autaptic delay become strongest. These results show that electrical autaptic and synaptic delays can enhance the synchronization transitions induced by each other in neuronal networks. This implies that electrical autaptic and synaptic delays can cooperate with each other and more efficiently regulate the synchrony state of the neuronal networks. These findings may have implications for information transmission in neural systems.
Optimal control of directional deep brain stimulation in the parkinsonian neuronal network
Fan, Denggui; Wang, Zhihui; Wang, Qingyun
2016-07-01
The effect of conventional deep brain stimulation (DBS) on the debilitating symptoms of Parkinson's disease can be limited because it yields only a spherical stimulation field, and side effects are induced by its influence on adjacent ganglia. Recent experimental evidence from patients with Parkinson's disease has shown that a novel DBS electrode with 32 independent stimulation source contacts can effectively optimize the clinical therapy by enlarging the therapeutic window when applied to the subthalamic nucleus (STN). This is due to the selective activation of clusters of various stimulation contacts, which can be steered directionally and accurately onto the targeted regions of interest. In addition, because of the serious damage it causes to neural tissue, charge-unbalanced stimulation is not typically indicated, and real DBS uses charge-balanced bi-phasic (CBBP) pulses. Inspired by this, we computationally investigate the optimal control of directional CBBP-DBS in a proposed parkinsonian neuronal network of the basal ganglia-thalamocortical circuit. By appropriately tuning the stimulation for different neuronal populations, we find that directionally steered CBBP-DBS paradigms are superior to the spherical case in improving parkinsonian dynamical properties, including the synchronization of neuronal populations and the reliability with which the thalamus relays information from the cortex, in good agreement with physiological experiments. Furthermore, directionally steered stimulation can increase the optimal stimulation intensity for desynchronization by more than 1 mA compared to the spherical case. This is consistent with the experimental result showing that there exists at least one steering direction that allows increasing the threshold of side effects by 1 mA. In addition, we also simulate the local field potential (LFP) and dominant frequency (DF) of the STN neuronal population induced by the activation
Self-Consistent Scheme for Spike-Train Power Spectra in Heterogeneous Sparse Networks.
Pena, Rodrigo F O; Vellmer, Sebastian; Bernardi, Davide; Roque, Antonio C; Lindner, Benjamin
2018-01-01
Recurrent networks of spiking neurons can be in an asynchronous state characterized by low or absent cross-correlations and spike statistics which resemble those of cortical neurons. Although spatial correlations are negligible in this state, neurons can show pronounced temporal correlations in their spike trains that can be quantified by the autocorrelation function or the spike-train power spectrum. Depending on cellular and network parameters, correlations display diverse patterns (ranging from simple refractory-period effects and stochastic oscillations to slow fluctuations) and it is generally not well-understood how these dependencies come about. Previous work has explored how the single-cell correlations in a homogeneous network (excitatory and inhibitory integrate-and-fire neurons with nearly balanced mean recurrent input) can be determined numerically from an iterative single-neuron simulation. Such a scheme is based on the fact that every neuron is driven by the network noise (i.e., the input currents from all its presynaptic partners) but also contributes to the network noise, leading to a self-consistency condition for the input and output spectra. Here we first extend this scheme to homogeneous networks with strong recurrent inhibition and a synaptic filter, in which instabilities of the previous scheme are avoided by an averaging procedure. We then extend the scheme to heterogeneous networks in which (i) different neural subpopulations (e.g., excitatory and inhibitory neurons) have different cellular or connectivity parameters; (ii) the number and strength of the input connections are random (Erdős-Rényi topology) and thus different among neurons. In all heterogeneous cases, neurons are lumped in different classes each of which is represented by a single neuron in the iterative scheme; in addition, we make a Gaussian approximation of the input current to the neuron. These approximations seem to be justified over a broad range of parameters as
Cheung, Kit; Schultz, Simon R; Luk, Wayne
2015-01-01
NeuroFlow is a scalable spiking neural network simulation platform for off-the-shelf high-performance computing systems using customizable hardware processors such as Field-Programmable Gate Arrays (FPGAs). Unlike multi-core processors and application-specific integrated circuits, the processor architecture of NeuroFlow can be redesigned and reconfigured to suit a particular simulation and deliver optimized performance, for example in the degree of parallelism to employ. The compilation process supports using PyNN, a simulator-independent neural network description language, to configure the processor. NeuroFlow supports a number of commonly used current- or conductance-based neuronal models, such as the integrate-and-fire and Izhikevich models, and the spike-timing-dependent plasticity (STDP) rule for learning. A 6-FPGA system can simulate a network of up to ~600,000 neurons and can achieve real-time performance for up to 400,000 neurons. Using one FPGA, NeuroFlow delivers a speedup of up to 33.6 times over an 8-core processor, or 2.83 times over GPU-based platforms. With high flexibility and throughput, NeuroFlow provides a viable environment for large-scale neural network simulation.
De Cegli, Rossella; Iacobacci, Simona; Flore, Gemma; Gambardella, Gennaro; Mao, Lei; Cutillo, Luisa; Lauria, Mario; Klose, Joachim; Illingworth, Elizabeth; Banfi, Sandro; di Bernardo, Diego
2013-01-01
Gene expression profiles can be used to infer previously unknown transcriptional regulatory interaction among thousands of genes, via systems biology 'reverse engineering' approaches. We 'reverse engineered' an embryonic stem (ES)-specific transcriptional network from 171 gene expression profiles, measured in ES cells, to identify master regulators of gene expression ('hubs'). We discovered that E130012A19Rik (E13), highly expressed in mouse ES cells as compared with differentiated cells, was a central 'hub' of the network. We demonstrated that E13 is a protein-coding gene implicated in regulating the commitment towards the different neuronal subtypes and glia cells. The overexpression and knock-down of E13 in ES cell lines, undergoing differentiation into neurons and glia cells, caused a strong up-regulation of the glutamatergic neurons marker Vglut2 and a strong down-regulation of the GABAergic neurons marker GAD65 and of the radial glia marker Blbp. We confirmed E13 expression in the cerebral cortex of adult mice and during development. By immuno-based affinity purification, we characterized protein partners of E13, involved in the Polycomb complex. Our results suggest a role of E13 in regulating the division between glutamatergic projection neurons and GABAergic interneurons and glia cells possibly by epigenetic-mediated transcriptional regulation.
Tanaka, Takuma; Aoyagi, Toshio; Kaneko, Takeshi
2012-10-01
We propose a new principle for replicating the receptive field properties of neurons in the primary visual cortex. We derive a learning rule for a feedforward network which maintains a low firing rate for the output neurons (resulting in temporal sparseness) and allows only a small subset of the neurons in the network to fire at any given time (resulting in population sparseness). Our learning rule also sets the firing rates of the output neurons at each time step to near-maximum or near-minimum levels, resulting in neuronal reliability. The learning rule is simple enough to be written in spatially and temporally local forms. After the learning stage is performed using input image patches of natural scenes, output neurons in the model network are found to exhibit simple-cell-like receptive field properties. When the outputs of these simple-cell-like neurons are fed into another model layer using the same learning rule, the second-layer output neurons after learning become less sensitive to the phase of gratings than the simple-cell-like input neurons. In particular, some of the second-layer output neurons become completely phase invariant, owing to the convergence of connections from first-layer neurons with similar orientation selectivity onto second-layer neurons in the model network. We examine the parameter dependencies of the receptive field properties of the model neurons after learning and discuss their biological implications. We also show that the localized learning rule is consistent with experimental results concerning neuronal plasticity and can replicate the receptive fields of simple and complex cells.
Bayesian population decoding of spiking neurons.
Gerwinn, Sebastian; Macke, Jakob; Bethge, Matthias
2009-01-01
The timing of action potentials in spiking neurons depends on the temporal dynamics of their inputs and contains information about temporal fluctuations in the stimulus. Leaky integrate-and-fire neurons constitute a popular class of encoding models, in which spike times depend directly on the temporal structure of the inputs. However, optimal decoding rules for these models have only been studied explicitly in the noiseless case. Here, we study decoding rules for probabilistic inference of a continuous stimulus from the spike times of a population of leaky integrate-and-fire neurons with threshold noise. We derive three algorithms for approximating the posterior distribution over stimuli as a function of the observed spike trains. In addition to a reconstruction of the stimulus we thus obtain an estimate of the uncertainty as well. Furthermore, we derive a 'spike-by-spike' online decoding scheme that recursively updates the posterior with the arrival of each new spike. We use these decoding rules to reconstruct time-varying stimuli represented by a Gaussian process from spike trains of single neurons as well as neural populations.
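The recursive, spike-by-spike posterior update described above can be illustrated with a deliberately simplified encoder: a Poisson neuron with an assumed rate function g(s) standing in for the leaky integrate-and-fire model. Because the Poisson likelihood factorizes over time, multiplying the posterior by g(s) at each spike and by exp(-g(s)Δt) over silent intervals accumulates to exactly the batch product computed below; all numbers are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

# Simplified stand-in encoder: Poisson spiking with assumed rate g(s).
s_grid = np.linspace(-2.0, 2.0, 401)     # discretized stimulus values
g = lambda s: 5.0 * np.exp(s)            # assumed rate/tuning function (Hz)
true_s, T = 0.5, 20.0                    # hidden stimulus, observation time (s)
n_spikes = rng.poisson(g(true_s) * T)    # observed spike count in [0, T]

# Posterior = Gaussian prior x Poisson likelihood, computed in log space
# for numerical stability; the spike-by-spike scheme yields the same result.
log_post = -0.5 * s_grid ** 2 + n_spikes * np.log(g(s_grid)) - g(s_grid) * T
log_post -= log_post.max()
posterior = np.exp(log_post)
posterior /= posterior.sum()

s_hat = float(s_grid @ posterior)        # posterior-mean estimate
print(round(s_hat, 2))
```

The full posterior also delivers an uncertainty estimate (e.g. its standard deviation), which is the point made in the abstract about obtaining more than a reconstruction.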
Djurfeldt, Mikael
2012-07-01
The connection-set algebra (CSA) is a novel and general formalism for the description of connectivity in neuronal network models, from small-scale to large-scale structure. The algebra provides operators to form more complex sets of connections from simpler ones and also provides parameterization of such sets. CSA is expressive enough to describe a wide range of connection patterns, including multiple types of random and/or geometrically dependent connectivity, and can serve as a concise notation for network structure in scientific writing. CSA implementations allow for scalable and efficient representation of connectivity in parallel neuronal network simulators and could even allow for avoiding explicit representation of connections in computer memory. The expressiveness of CSA makes prototyping of network structure easy. A C++ version of the algebra has been implemented and used in a large-scale neuronal network simulation (Djurfeldt et al., IBM J Res Dev 52(1/2):31-42, 2008b) and an implementation in Python has been publicly released.
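The algebraic idea behind CSA, building complex connectivity from elementary sets of (source, target) pairs with set operators, can be illustrated with toy operators. This is our own minimal version for illustration, not the API of the published CSA implementation:

```python
import itertools
import random

# A connection set is a set of (source, target) index pairs; richer
# connectivity is built from elementary sets with ordinary set algebra.
def full(n_src, n_tgt):
    """All-to-all connectivity."""
    return set(itertools.product(range(n_src), range(n_tgt)))

def one_to_one(n):
    """Diagonal connectivity (here used as the set of self-connections)."""
    return {(i, i) for i in range(n)}

def random_conns(n_src, n_tgt, p, seed=0):
    """Each possible connection is included independently with probability p."""
    r = random.Random(seed)
    return {c for c in full(n_src, n_tgt) if r.random() < p}

# "random connectivity at p = 0.1, excluding self-connections":
conns = random_conns(100, 100, 0.1) - one_to_one(100)
print(len(conns))
```

The set difference expresses the constraint declaratively; a real CSA implementation additionally evaluates such expressions lazily so the full Cartesian product never needs to be materialized.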
Fathiazar, Elham; Anemuller, Jorn; Kretzberg, Jutta
2016-08-01
Voltage-Sensitive Dye (VSD) imaging is an optical imaging method that allows measuring the graded voltage changes of multiple neurons simultaneously. In neuroscience, this method is used to reveal networks of neurons involved in certain tasks. However, the recorded relative dye fluorescence changes are usually low and signals are superimposed by noise and artifacts. Therefore, establishing a reliable method to identify which cells are activated by specific stimulus conditions is the first step to identify functional networks. In this paper, we present a statistical method to identify stimulus-activated network nodes as cells, whose activities during sensory network stimulation differ significantly from the un-stimulated control condition. This method is demonstrated based on voltage-sensitive dye recordings from up to 100 neurons in a ganglion of the medicinal leech responding to tactile skin stimulation. Without relying on any prior physiological knowledge, the network nodes identified by our statistical analysis were found to match well with published cell types involved in tactile stimulus processing and to be consistent across stimulus conditions and preparations.
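The statistical approach described above, flagging cells whose activity during stimulation differs significantly from the un-stimulated control condition, can be sketched as a per-cell permutation test on synthetic data. All numbers here (cell and trial counts, effect size, significance threshold) are hypothetical and need not match the study's actual statistics:

```python
import numpy as np

rng = np.random.default_rng(4)

def perm_test(a, b, n_perm=3000):
    """Two-sided permutation test on the difference of trial means."""
    obs = a.mean() - b.mean()
    pooled = np.concatenate([a, b])
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)
        diff = perm[: a.size].mean() - perm[a.size:].mean()
        if abs(diff) >= abs(obs):
            count += 1
    return (count + 1) / (n_perm + 1)

# synthetic recording: 100 cells x 20 trials per condition;
# cells 0-9 truly respond to the stimulus (assumed effect size)
n_cells, n_trials = 100, 20
ctrl = rng.normal(0.0, 1.0, (n_cells, n_trials))
stim = rng.normal(0.0, 1.0, (n_cells, n_trials))
stim[:10] += 2.0

p_vals = np.array([perm_test(stim[i], ctrl[i]) for i in range(n_cells)])
active = np.flatnonzero(p_vals < 0.05 / n_cells)   # Bonferroni correction
print(active)
```

The Bonferroni correction controls the chance of falsely flagging any of the many simultaneously recorded cells, which matters when identifying network nodes without prior physiological knowledge.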
Directory of Open Access Journals (Sweden)
Ryan T Canolty
Understanding the principles governing the dynamic coordination of functional brain networks remains an important unmet goal within neuroscience. How do distributed ensembles of neurons transiently coordinate their activity across a variety of spatial and temporal scales? While a complete mechanistic account of this process remains elusive, evidence suggests that neuronal oscillations may play a key role in this process, with different rhythms influencing both local computation and long-range communication. To investigate this question, we recorded multiple single unit and local field potential (LFP) activity from microelectrode arrays implanted bilaterally in macaque motor areas. Monkeys performed a delayed center-out reach task either manually using their natural arm (Manual Control, MC) or under direct neural control through a brain-machine interface (Brain Control, BC). In accord with prior work, we found that the spiking activity of individual neurons is coupled to multiple aspects of the ongoing motor beta rhythm (10-45 Hz) during both MC and BC, with neurons exhibiting a diversity of coupling preferences. However, here we show that for identified single neurons, this beta-to-rate mapping can change in a reversible and task-dependent way. For example, as beta power increases, a given neuron may increase spiking during MC but decrease spiking during BC, or exhibit a reversible shift in the preferred phase of firing. The within-task stability of coupling, combined with the reversible cross-task changes in coupling, suggest that task-dependent changes in the beta-to-rate mapping play a role in the transient functional reorganization of neural ensembles. We characterize the range of task-dependent changes in the mapping from beta amplitude, phase, and inter-hemispheric phase differences to the spike rates of an ensemble of simultaneously-recorded neurons, and discuss the potential implications that dynamic remapping from oscillatory activity to
Zhang, Xu; Foderaro, Greg; Henriquez, Craig; Ferrari, Silvia
2018-03-01
Recent developments in neural stimulation and recording technologies are providing scientists with the ability of recording and controlling the activity of individual neurons in vitro or in vivo, with very high spatial and temporal resolution. Tools such as optogenetics, for example, are having a significant impact in the neuroscience fi