WorldWideScience

Sample records for neuronal network consisting

  1. Pulsed neural networks consisting of single-flux-quantum spiking neurons

    International Nuclear Information System (INIS)

    Hirose, T.; Asai, T.; Amemiya, Y.

    2007-01-01

An inhibitory pulsed neural network was developed for brain-like information processing by using single-flux-quantum (SFQ) circuits. It consists of spiking neuron devices that are coupled to each other through all-to-all inhibitory connections, and the network selects neural activity through this mutual inhibition. The operation of the neural network was confirmed by computer simulation. SFQ neuron devices can imitate the inhibition phenomenon of neural networks.

  2. The pairwise phase consistency in cortical network and its relationship with neuronal activation

    Directory of Open Access Journals (Sweden)

    Wang Daming

    2017-01-01

Gamma-band neuronal oscillations and synchronization in the 30-90 Hz range are a ubiquitous phenomenon across numerous brain areas and various species, and are correlated with many cognitive functions. The phase of the oscillation, as one aspect of the CTC (Communication through Coherence) hypothesis, underlies various functions in feature coding, memory processing and behavioural performance. The PPC (Pairwise Phase Consistency), an improved coherence measure, statistically quantifies the strength of phase synchronization. In order to evaluate the PPC and its relationships with input stimulus, neuronal activation and firing rate, a simplified spiking neuronal network is constructed to simulate orientation columns in primary visual cortex. If the input orientation stimulus is preferred by a certain orientation column, neurons within this column obtain a higher firing rate and stronger neuronal activation, which consequently engender higher PPC values, with higher PPC corresponding to higher firing rate. In addition, we investigate the PPC in a time-resolved analysis with a sliding window.
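    As a concrete illustration of the measure, below is a minimal sketch of the standard unbiased PPC estimator applied to spike phases (in radians) relative to a gamma-band reference; the function name and the toy phase samples are our own and are not taken from the article.

```python
import numpy as np

def pairwise_phase_consistency(phases):
    """Unbiased pairwise phase consistency (PPC) of spike phases (radians).

    Uses the identity that the average cosine of all pairwise phase
    differences equals (|sum exp(i*theta)|^2 - N) / (N*(N-1)).
    """
    phases = np.asarray(phases, dtype=float)
    n = phases.size
    if n < 2:
        return np.nan
    resultant_sq = np.abs(np.exp(1j * phases).sum()) ** 2
    return (resultant_sq - n) / (n * (n - 1))

# Toy check: tightly clustered phases give PPC near 1, uniform phases near 0.
rng = np.random.default_rng(0)
print(pairwise_phase_consistency(rng.normal(0.0, 0.1, 500)))        # ~1
print(pairwise_phase_consistency(rng.uniform(0, 2 * np.pi, 500)))   # ~0
```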

  3. Coherence resonance in globally coupled neuronal networks with different neuron numbers

    International Nuclear Information System (INIS)

    Ning Wei-Lian; Zhang Zheng-Zhen; Zeng Shang-You; Luo Xiao-Shu; Hu Jin-Lin; Zeng Shao-Wen; Qiu Yi; Wu Hui-Si

    2012-01-01

Because a brain consists of tremendous numbers of neuronal networks with neuron numbers ranging from tens to tens of thousands, we study the coherence resonance due to ion channel noise in globally coupled neuronal networks with different neuron numbers. We confirm that for all neuronal networks with different neuron numbers there exist array-enhanced coherence resonance and an optimal synaptic conductance that causes maximal spiking coherence. Furthermore, the enhancement effects of coupling on spiking coherence and on the optimal synaptic conductance are almost the same, regardless of the neuron numbers in the networks. Therefore, for all the neuronal networks with different neuron numbers in the brain, a relatively weak synaptic conductance (0.1 mS/cm²) is sufficient to induce the maximal spiking coherence and the best sub-threshold signal encoding. (interdisciplinary physics and related areas of science and technology)

  4. Autapse-induced synchronization in a coupled neuronal network

    International Nuclear Information System (INIS)

    Ma, Jun; Song, Xinlin; Jin, Wuyin; Wang, Chuni

    2015-01-01

Highlights: • The functional effect of an autapse on neuronal activity is examined. • Autapse driving plays an active role in regulating electrical activities as a pacemaker. • The results confirm biological experimental results on rhythm synchronization between heterogeneous cells. - Abstract: The effect of an autapse on a coupled neuronal network is investigated. In our study, three identical neurons are connected in a ring, and an autapse is attached to one neuron of the network. The autapse imposes closed-loop, time-delayed feedback on that neuron, and the dynamics of the membrane potentials can thereby be changed. Firstly, the effect of autapse driving on a single neuron is confirmed: negative feedback calms down the neuronal activity while positive feedback excites it. Secondly, the collective electrical behaviours of the neurons are regulated by a pacemaker associated with the autapse forcing. By using an appropriate gain and time delay in the autapse, the neurons can reach synchronization and the membrane potentials of all neurons can oscillate with the same rhythm under mutual coupling. This indicates that autapse forcing plays an important role in changing the collective electrical activities of the neuronal network, and that appropriate electrical modes can be selected by switching the feedback type (positive or negative) of the autapse. The autapse-induced synchronization in the network is also consistent with biological experiments on synchronization between nonidentical neurons.

  5. Numerical simulation of coherent resonance in a model network of Rulkov neurons

    Science.gov (United States)

    Andreev, Andrey V.; Runnova, Anastasia E.; Pisarchik, Alexander N.

    2018-04-01

In this paper we study the spiking behaviour of a neuronal network consisting of Rulkov elements. We find that the regularity of this behaviour is maximal at a certain level of environmental noise. This effect, referred to as coherence resonance, is demonstrated in a random complex network of Rulkov neurons. An external stimulus added to some of the neurons excites them, and then activates other neurons in the network. The network coherence is also maximized at a certain stimulus amplitude.
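    For readers unfamiliar with the model, the following sketch iterates one commonly used form of the chaotic Rulkov map for a population of uncoupled neurons; the parameter values, the added noise level and the spike-threshold convention are illustrative assumptions and do not reproduce the network, coupling or stimulus used in the paper.

```python
import numpy as np

def rulkov_step(x, y, alpha=4.3, mu=0.001, sigma=0.001, noise_std=0.0, rng=None):
    """One iteration of a commonly used chaotic Rulkov map.

    x is the fast (membrane-potential-like) variable, y the slow variable;
    `noise_std` adds Gaussian noise to the fast variable to mimic a noisy
    environment. Parameter values are illustrative only.
    """
    if rng is None:
        rng = np.random.default_rng()
    x_new = alpha / (1.0 + x * x) + y + noise_std * rng.standard_normal(np.shape(x))
    y_new = y - mu * (x + 1.0 - sigma)
    return x_new, y_new

# Iterate 100 uncoupled Rulkov neurons and count spikes as upward crossings of 0.
rng = np.random.default_rng(1)
n_neurons, n_steps, threshold = 100, 20000, 0.0
x = rng.uniform(-1.5, -0.5, n_neurons)
y = np.full(n_neurons, -3.0)
n_spikes = 0
for _ in range(n_steps):
    x_prev = x
    x, y = rulkov_step(x, y, noise_std=0.01, rng=rng)
    n_spikes += np.count_nonzero((x_prev < threshold) & (x >= threshold))
print("spikes per neuron per 1000 steps:", 1000.0 * n_spikes / (n_neurons * n_steps))
```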

  6. Neurons from the adult human dentate nucleus: neural networks in the neuron classification.

    Science.gov (United States)

    Grbatinić, Ivan; Marić, Dušica L; Milošević, Nebojša T

    2015-04-07

We perform topological (central vs. border neuron type) and morphological classification of adult human dentate nucleus neurons according to their quantified histomorphological properties, using neural networks on real and virtual neuron samples. In the real sample, 53.1% and 14.1% of central and border neurons, respectively, are classified correctly, with a total of 32.8% of neurons misclassified. The most important result is the 62.2% of misclassified neurons in the border-neuron group, which exceeds the number of correctly classified neurons (37.8%) in that group and shows a clear failure of the network to classify these neurons correctly on the basis of the computational parameters used in our study. On the virtual sample, 97.3% of border neurons are misclassified, far more than the 2.7% classified correctly in that group, again confirming the failure of the network to classify these neurons correctly. Statistical analysis shows no statistically significant difference between central and border neurons for any measured parameter (p>0.05). In total, 96.74% of neurons are morphologically classified correctly by the neural networks, each belonging to one of four histomorphological types: (a) neurons with small soma and short dendrites, (b) neurons with small soma and long dendrites, (c) neurons with large soma and short dendrites, and (d) neurons with large soma and long dendrites. Statistical analysis supports these results: neurons can be classified into four types according to their quantitative histomorphological properties. These types form two sets, small and large with respect to their perikarya, with subtypes differing in dendrite length, i.e. neurons with short vs. long dendrites. Besides confirming the classification into small and large neurons already reported in the literature, we found two new subtypes, i.e. neurons with small soma and long dendrites and with large soma and short dendrites. These neurons are

  7. Connectivity and dynamics of neuronal networks as defined by the shape of individual neurons

    International Nuclear Information System (INIS)

    Ahnert, Sebastian E; A N Travencolo, Bruno; Costa, Luciano da Fontoura

    2009-01-01

    Biological neuronal networks constitute a special class of dynamical systems, as they are formed by individual geometrical components, namely the neurons. In the existing literature, relatively little attention has been given to the influence of neuron shape on the overall connectivity and dynamics of the emerging networks. The current work addresses this issue by considering simplified neuronal shapes consisting of circular regions (soma/axons) with spokes (dendrites). Networks are grown by placing these patterns randomly in the two-dimensional (2D) plane and establishing connections whenever a piece of dendrite falls inside an axon. Several topological and dynamical properties of the resulting graph are measured, including the degree distribution, clustering coefficients, symmetry of connections, size of the largest connected component, as well as three hierarchical measurements of the local topology. By varying the number of processes of the individual basic patterns, we can quantify relationships between the individual neuronal shape and the topological and dynamical features of the networks. Integrate-and-fire dynamics on these networks is also investigated with respect to transient activation from a source node, indicating that long-range connections play an important role in the propagation of avalanches.

  8. Growth of cortical neuronal network in vitro: Modeling and analysis

    International Nuclear Information System (INIS)

    Lai, P.-Y.; Jia, L. C.; Chan, C. K.

    2006-01-01

We present a detailed analysis and theoretical growth models to account for recent experimental data on the growth of cortical neuronal networks in vitro [Phys. Rev. Lett. 93, 088101 (2004)]. The experimentally observed synchronized firing frequency of a well-connected neuronal network is shown to be proportional to the mean network connectivity. The growth of the network is consistent with a model of early enhanced connection growth followed by retarded growth once the synchronized cluster is formed. Microscopic models with dominant excluded-volume interactions are consistent with the observed exponential decay of the mean connection probability as a function of the mean network connectivity. The biological implications of the growth model are also discussed.

  9. Self-Consistent Scheme for Spike-Train Power Spectra in Heterogeneous Sparse Networks.

    Science.gov (United States)

    Pena, Rodrigo F O; Vellmer, Sebastian; Bernardi, Davide; Roque, Antonio C; Lindner, Benjamin

    2018-01-01

    Recurrent networks of spiking neurons can be in an asynchronous state characterized by low or absent cross-correlations and spike statistics which resemble those of cortical neurons. Although spatial correlations are negligible in this state, neurons can show pronounced temporal correlations in their spike trains that can be quantified by the autocorrelation function or the spike-train power spectrum. Depending on cellular and network parameters, correlations display diverse patterns (ranging from simple refractory-period effects and stochastic oscillations to slow fluctuations) and it is generally not well-understood how these dependencies come about. Previous work has explored how the single-cell correlations in a homogeneous network (excitatory and inhibitory integrate-and-fire neurons with nearly balanced mean recurrent input) can be determined numerically from an iterative single-neuron simulation. Such a scheme is based on the fact that every neuron is driven by the network noise (i.e., the input currents from all its presynaptic partners) but also contributes to the network noise, leading to a self-consistency condition for the input and output spectra. Here we first extend this scheme to homogeneous networks with strong recurrent inhibition and a synaptic filter, in which instabilities of the previous scheme are avoided by an averaging procedure. We then extend the scheme to heterogeneous networks in which (i) different neural subpopulations (e.g., excitatory and inhibitory neurons) have different cellular or connectivity parameters; (ii) the number and strength of the input connections are random (Erdős-Rényi topology) and thus different among neurons. In all heterogeneous cases, neurons are lumped in different classes each of which is represented by a single neuron in the iterative scheme; in addition, we make a Gaussian approximation of the input current to the neuron. These approximations seem to be justified over a broad range of parameters as
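    A heavily simplified sketch of the Gaussian-current variant of such an iterative scheme is given below: a single leaky integrate-and-fire neuron is driven, generation after generation, by surrogate Gaussian noise whose power spectrum is proportional to the spike-train spectrum estimated in the previous generation. All parameter values, the Euler discretization and the crude periodogram estimator are our own illustrative choices, not the authors' implementation.

```python
import numpy as np

dt, T, n_trials = 1e-3, 10.0, 10           # time step [s], trial length [s], trials
n = int(T / dt)
freqs = np.fft.rfftfreq(n, dt)
mu, tau_m, v_th, v_reset = 0.95, 0.02, 1.0, 0.0   # illustrative LIF parameters
sigma_in = 1.0                                     # strength of the surrogate input

def colored_noise(spectrum, rng):
    """Gaussian noise whose power spectrum is proportional to `spectrum`."""
    white = rng.standard_normal(n)
    shaped = np.fft.irfft(np.fft.rfft(white) * np.sqrt(spectrum / spectrum.mean()), n)
    return sigma_in * shaped

def lif_spike_train(current, rng):
    """Euler integration of an LIF neuron; returns the spike train (in 1/dt units)."""
    v, spikes = 0.0, np.zeros(n)
    for i in range(n):
        v += dt / tau_m * (mu - v + current[i])
        if v >= v_th:
            v = v_reset
            spikes[i] = 1.0 / dt
    return spikes

rng = np.random.default_rng(2)
spectrum = np.ones_like(freqs)             # generation 0: flat (white) input spectrum
for generation in range(6):
    trial_spectra, rates = [], []
    for _ in range(n_trials):
        s = lif_spike_train(colored_noise(spectrum, rng), rng)
        rates.append(s.sum() * dt / T)                     # spikes per second
        s_fft = np.fft.rfft((s - s.mean()) * dt)
        trial_spectra.append(np.abs(s_fft) ** 2 / T)
    spectrum = np.mean(trial_spectra, axis=0)
    spectrum[0] = spectrum[1]              # avoid a zero DC bin when shaping the next input
    print(f"generation {generation}: mean rate {np.mean(rates):.1f} Hz")
```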

  10. Self-Consistent Scheme for Spike-Train Power Spectra in Heterogeneous Sparse Networks

    Directory of Open Access Journals (Sweden)

    Rodrigo F. O. Pena

    2018-03-01

Recurrent networks of spiking neurons can be in an asynchronous state characterized by low or absent cross-correlations and spike statistics which resemble those of cortical neurons. Although spatial correlations are negligible in this state, neurons can show pronounced temporal correlations in their spike trains that can be quantified by the autocorrelation function or the spike-train power spectrum. Depending on cellular and network parameters, correlations display diverse patterns (ranging from simple refractory-period effects and stochastic oscillations to slow fluctuations) and it is generally not well-understood how these dependencies come about. Previous work has explored how the single-cell correlations in a homogeneous network (excitatory and inhibitory integrate-and-fire neurons with nearly balanced mean recurrent input) can be determined numerically from an iterative single-neuron simulation. Such a scheme is based on the fact that every neuron is driven by the network noise (i.e., the input currents from all its presynaptic partners) but also contributes to the network noise, leading to a self-consistency condition for the input and output spectra. Here we first extend this scheme to homogeneous networks with strong recurrent inhibition and a synaptic filter, in which instabilities of the previous scheme are avoided by an averaging procedure. We then extend the scheme to heterogeneous networks in which (i) different neural subpopulations (e.g., excitatory and inhibitory neurons) have different cellular or connectivity parameters; (ii) the number and strength of the input connections are random (Erdős-Rényi topology) and thus different among neurons. In all heterogeneous cases, neurons are lumped in different classes each of which is represented by a single neuron in the iterative scheme; in addition, we make a Gaussian approximation of the input current to the neuron. These approximations seem to be justified over a broad range of

  11. Network reconfiguration and neuronal plasticity in rhythm-generating networks.

    Science.gov (United States)

    Koch, Henner; Garcia, Alfredo J; Ramirez, Jan-Marino

    2011-12-01

Neuronal networks are highly plastic and reconfigure in a state-dependent manner. Plasticity at the network level emerges through multiple intrinsic and synaptic membrane properties that imbue neurons and their interactions with numerous nonlinear properties. These properties are continuously regulated by neuromodulators and homeostatic mechanisms that are critical not only to maintain network stability but also to adapt networks, over short and long timescales, to changes in behavioral, developmental, metabolic, and environmental conditions. This review provides concrete examples from neuronal networks in invertebrates and vertebrates, and illustrates that the concepts and rules that govern neuronal networks and behaviors are universal.

  12. Developmental time windows for axon growth influence neuronal network topology.

    Science.gov (United States)

    Lim, Sol; Kaiser, Marcus

    2015-04-01

Early brain connectivity development consists of multiple stages: birth of neurons, their migration and the subsequent growth of axons and dendrites. Each stage occurs within a certain period of time, depending on the type of neuron and cortical layer. Forming synapses between neurons by growing axons that start at similar times for all neurons (much-overlapped time windows) or at different time points (less-overlapped) may affect the topological and spatial properties of neuronal networks. Here, we explore the extreme cases of axon formation during early development: axon growth either starting at the same time for all neurons (parallel, i.e., maximally overlapped time windows) or occurring for each neuron separately, one neuron after another (serial, i.e., no overlap in time windows). For both cases, the number of potential and established synapses remained comparable. Topological and spatial properties, however, differed. First, neurons that started axon growth early in serial growth achieved higher out-degrees, higher local efficiency and longer axon lengths, whereas connectivity patterns were more homogeneous across neurons for parallel growth. Second, connection probability decreased more rapidly with distance between neurons for parallel growth than for serial growth. Third, bidirectional connections were more numerous for parallel growth. Finally, we tested our predictions with C. elegans data. Together, this indicates that time windows for axon growth influence the topological and spatial properties of neuronal networks, opening up the possibility of estimating developmental mechanisms a posteriori from the network properties of a developed network.

  13. Simulating synchronization in neuronal networks

    Science.gov (United States)

    Fink, Christian G.

    2016-06-01

    We discuss several techniques used in simulating neuronal networks by exploring how a network's connectivity structure affects its propensity for synchronous spiking. Network connectivity is generated using the Watts-Strogatz small-world algorithm, and two key measures of network structure are described. These measures quantify structural characteristics that influence collective neuronal spiking, which is simulated using the leaky integrate-and-fire model. Simulations show that adding a small number of random connections to an otherwise lattice-like connectivity structure leads to a dramatic increase in neuronal synchronization.
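    The sketch below, based on the networkx library, generates Watts-Strogatz networks over a range of rewiring probabilities and prints the clustering coefficient and mean shortest path length, the two standard small-world measures; the node count, neighbourhood size and probabilities are illustrative choices, and the article's own structural measures may differ in detail.

```python
import networkx as nx

# Clustering coefficient and mean shortest path length of Watts-Strogatz
# networks for several rewiring probabilities p. A handful of random
# shortcuts (small p) already collapses the path length while clustering
# stays high, the regime in which synchronization rises sharply.
n_nodes, k_neighbors = 200, 8
for p in (0.0, 0.01, 0.05, 0.1, 1.0):
    g = nx.connected_watts_strogatz_graph(n_nodes, k_neighbors, p, seed=42)
    print(f"p={p:<5} clustering={nx.average_clustering(g):.3f} "
          f"mean path length={nx.average_shortest_path_length(g):.2f}")
```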

  14. NT2 derived neuronal and astrocytic network signalling.

    Directory of Open Access Journals (Sweden)

    Eric J Hill

A major focus of stem cell research is the generation of neurons that may then be implanted to treat neurodegenerative diseases. However, a picture is emerging where astrocytes are partners to neurons in sustaining and modulating brain function. We therefore investigated the functional properties of NT2 derived astrocytes and neurons using electrophysiological and calcium imaging approaches. NT2 neurons (NT2Ns) expressed sodium dependent action potentials, as well as responses to depolarisation and the neurotransmitter glutamate. NT2Ns exhibited spontaneous and coordinated calcium elevations in clusters and in extended processes, indicating local and long distance signalling. Tetrodotoxin sensitive network activity could also be evoked by electrical stimulation. Similarly, NT2 astrocytes (NT2As) exhibited morphology and functional properties consistent with this glial cell type. NT2As responded to neuronal activity and to exogenously applied neurotransmitters with calcium elevations, and in contrast to neurons, also exhibited spontaneous rhythmic calcium oscillations. NT2As also generated propagating calcium waves that were gap junction and purinergic signalling dependent. Our results show that NT2 derived astrocytes exhibit appropriate functionality and that NT2N networks interact with NT2A networks in co-culture. These findings underline the utility of such cultures to investigate human brain cell type signalling under controlled conditions. Furthermore, since stem cell derived neuron function and survival is of great importance therapeutically, our findings suggest that the presence of complementary astrocytes may be valuable in supporting stem cell derived neuronal networks. Indeed, this also supports the intriguing possibility of selective therapeutic replacement of astrocytes in diseases where these cells are either lost or lose functionality.

  15. Coherent and intermittent ensemble oscillations emerge from networks of irregular spiking neurons.

    Science.gov (United States)

    Hoseini, Mahmood S; Wessel, Ralf

    2016-01-01

    Local field potential (LFP) recordings from spatially distant cortical circuits reveal episodes of coherent gamma oscillations that are intermittent, and of variable peak frequency and duration. Concurrently, single neuron spiking remains largely irregular and of low rate. The underlying potential mechanisms of this emergent network activity have long been debated. Here we reproduce such intermittent ensemble oscillations in a model network, consisting of excitatory and inhibitory model neurons with the characteristics of regular-spiking (RS) pyramidal neurons, and fast-spiking (FS) and low-threshold spiking (LTS) interneurons. We find that fluctuations in the external inputs trigger reciprocally connected and irregularly spiking RS and FS neurons in episodes of ensemble oscillations, which are terminated by the recruitment of the LTS population with concurrent accumulation of inhibitory conductance in both RS and FS neurons. The model qualitatively reproduces experimentally observed phase drift, oscillation episode duration distributions, variation in the peak frequency, and the concurrent irregular single-neuron spiking at low rate. Furthermore, consistent with previous experimental studies using optogenetic manipulation, periodic activation of FS, but not RS, model neurons causes enhancement of gamma oscillations. In addition, increasing the coupling between two model networks from low to high reveals a transition from independent intermittent oscillations to coherent intermittent oscillations. In conclusion, the model network suggests biologically plausible mechanisms for the generation of episodes of coherent intermittent ensemble oscillations with irregular spiking neurons in cortical circuits. Copyright © 2016 the American Physiological Society.

  16. Extracting functionally feedforward networks from a population of spiking neurons.

    Science.gov (United States)

    Vincent, Kathleen; Tauskela, Joseph S; Thivierge, Jean-Philippe

    2012-01-01

    Neuronal avalanches are a ubiquitous form of activity characterized by spontaneous bursts whose size distribution follows a power-law. Recent theoretical models have replicated power-law avalanches by assuming the presence of functionally feedforward connections (FFCs) in the underlying dynamics of the system. Accordingly, avalanches are generated by a feedforward chain of activation that persists despite being embedded in a larger, massively recurrent circuit. However, it is unclear to what extent networks of living neurons that exhibit power-law avalanches rely on FFCs. Here, we employed a computational approach to reconstruct the functional connectivity of cultured cortical neurons plated on multielectrode arrays (MEAs) and investigated whether pharmacologically induced alterations in avalanche dynamics are accompanied by changes in FFCs. This approach begins by extracting a functional network of directed links between pairs of neurons, and then evaluates the strength of FFCs using Schur decomposition. In a first step, we examined the ability of this approach to extract FFCs from simulated spiking neurons. The strength of FFCs obtained in strictly feedforward networks diminished monotonically as links were gradually rewired at random. Next, we estimated the FFCs of spontaneously active cortical neuron cultures in the presence of either a control medium, a GABA(A) receptor antagonist (PTX), or an AMPA receptor antagonist combined with an NMDA receptor antagonist (APV/DNQX). The distribution of avalanche sizes in these cultures was modulated by this pharmacology, with a shallower power-law under PTX (due to the prominence of larger avalanches) and a steeper power-law under APV/DNQX (due to avalanches recruiting fewer neurons) relative to control cultures. The strength of FFCs increased in networks after application of PTX, consistent with an amplification of feedforward activity during avalanches. Conversely, FFCs decreased after application of APV
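    To illustrate how Schur decomposition can expose feedforward structure, the sketch below computes, for a directed weight matrix, the fraction of its Frobenius norm carried by the strictly upper-triangular part of its complex Schur form; this particular ratio, the matrix size and the connection probabilities are illustrative assumptions and not necessarily the exact FFC measure used in the study.

```python
import numpy as np
from scipy.linalg import schur

def feedforward_fraction(w):
    """Fraction of a weight matrix's Frobenius norm carried by the strictly
    upper-triangular (feedforward, non-normal) part of its complex Schur form."""
    t, _ = schur(w.astype(complex), output='complex')
    return np.linalg.norm(np.triu(t, k=1)) / np.linalg.norm(t)

rng = np.random.default_rng(3)
n = 60
feedforward = np.triu(rng.random((n, n)) < 0.2, k=1).astype(float)  # strictly feedforward graph
recurrent = (rng.random((n, n)) < 0.2).astype(float)                # Erdos-Renyi-like recurrent graph
np.fill_diagonal(recurrent, 0.0)
print("purely feedforward :", round(feedforward_fraction(feedforward), 2))  # -> 1.0
print("random recurrent   :", round(feedforward_fraction(recurrent), 2))    # noticeably smaller
```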

  17. An FPGA-based silicon neuronal network with selectable excitability silicon neurons

    Directory of Open Access Journals (Sweden)

Jing Li

    2012-12-01

This paper presents a digital silicon neuronal network which emulates the nervous system of living creatures and can execute intelligent tasks such as associative memory. Two essential elements, the mathematical-structure-based digital spiking silicon neuron (DSSN) and the transmitter-release-based silicon synapse, allow the network to show rich dynamic behaviors and are computationally efficient for hardware implementation. We adopt a mixed pipeline and parallel structure and shift operations to design a sufficiently large and complex network without excessive hardware resource cost. The network, with 256 fully connected neurons, is built on a Digilent Atlys board equipped with a Xilinx Spartan-6 LX45 FPGA. In addition, a memory control block and a USB control block are designed to handle data communication between the network and the host PC. This paper also describes the mechanism of associative memory performed in the silicon neuronal network. The network is capable of retrieving stored patterns if the inputs contain enough information about them. The retrieval probability increases with the similarity between the input and the stored pattern. Synchronization of neurons is observed when a stored pattern is successfully retrieved.

  18. Orientation selectivity in inhibition-dominated networks of spiking neurons: effect of single neuron properties and network dynamics.

    Science.gov (United States)

    Sadeh, Sadra; Rotter, Stefan

    2015-01-01

    The neuronal mechanisms underlying the emergence of orientation selectivity in the primary visual cortex of mammals are still elusive. In rodents, visual neurons show highly selective responses to oriented stimuli, but neighboring neurons do not necessarily have similar preferences. Instead of a smooth map, one observes a salt-and-pepper organization of orientation selectivity. Modeling studies have recently confirmed that balanced random networks are indeed capable of amplifying weakly tuned inputs and generating highly selective output responses, even in absence of feature-selective recurrent connectivity. Here we seek to elucidate the neuronal mechanisms underlying this phenomenon by resorting to networks of integrate-and-fire neurons, which are amenable to analytic treatment. Specifically, in networks of perfect integrate-and-fire neurons, we observe that highly selective and contrast invariant output responses emerge, very similar to networks of leaky integrate-and-fire neurons. We then demonstrate that a theory based on mean firing rates and the detailed network topology predicts the output responses, and explains the mechanisms underlying the suppression of the common-mode, amplification of modulation, and contrast invariance. Increasing inhibition dominance in our networks makes the rectifying nonlinearity more prominent, which in turn adds some distortions to the otherwise essentially linear prediction. An extension of the linear theory can account for all the distortions, enabling us to compute the exact shape of every individual tuning curve in our networks. We show that this simple form of nonlinearity adds two important properties to orientation selectivity in the network, namely sharpening of tuning curves and extra suppression of the modulation. The theory can be further extended to account for the nonlinearity of the leaky model by replacing the rectifier by the appropriate smooth input-output transfer function. These results are robust and do not

  19. Orientation selectivity in inhibition-dominated networks of spiking neurons: effect of single neuron properties and network dynamics.

    Directory of Open Access Journals (Sweden)

    Sadra Sadeh

    2015-01-01

The neuronal mechanisms underlying the emergence of orientation selectivity in the primary visual cortex of mammals are still elusive. In rodents, visual neurons show highly selective responses to oriented stimuli, but neighboring neurons do not necessarily have similar preferences. Instead of a smooth map, one observes a salt-and-pepper organization of orientation selectivity. Modeling studies have recently confirmed that balanced random networks are indeed capable of amplifying weakly tuned inputs and generating highly selective output responses, even in absence of feature-selective recurrent connectivity. Here we seek to elucidate the neuronal mechanisms underlying this phenomenon by resorting to networks of integrate-and-fire neurons, which are amenable to analytic treatment. Specifically, in networks of perfect integrate-and-fire neurons, we observe that highly selective and contrast invariant output responses emerge, very similar to networks of leaky integrate-and-fire neurons. We then demonstrate that a theory based on mean firing rates and the detailed network topology predicts the output responses, and explains the mechanisms underlying the suppression of the common-mode, amplification of modulation, and contrast invariance. Increasing inhibition dominance in our networks makes the rectifying nonlinearity more prominent, which in turn adds some distortions to the otherwise essentially linear prediction. An extension of the linear theory can account for all the distortions, enabling us to compute the exact shape of every individual tuning curve in our networks. We show that this simple form of nonlinearity adds two important properties to orientation selectivity in the network, namely sharpening of tuning curves and extra suppression of the modulation. The theory can be further extended to account for the nonlinearity of the leaky model by replacing the rectifier by the appropriate smooth input-output transfer function. These results are

  20. Endogenous fields enhanced stochastic resonance in a randomly coupled neuronal network

    International Nuclear Information System (INIS)

    Deng, Bin; Wang, Lin; Wang, Jiang; Wei, Xi-le; Yu, Hai-tao

    2014-01-01

Highlights: • We study the effects of endogenous fields on stochastic resonance in a neural network. • Stochastic resonance can be notably enhanced by endogenous field feedback. • The endogenous field feedback delay plays a vital role in stochastic resonance. • The parameters of the low-pass filter play a subtle role in SR. - Abstract: The endogenous field, evoked by structured neuronal network activity in vivo, is correlated with many vital neuronal processes. In this paper, the effects of endogenous fields on stochastic resonance (SR) in a randomly connected neuronal network are investigated. The network consists of excitatory and inhibitory neurons, and the axonal conduction delays between neurons are also considered. Numerical results show that endogenous field feedback results in more rhythmic macroscopic activation of the network for appropriate time delays and feedback coefficients. The response of the network to weak periodic stimulation can be notably enhanced by endogenous field feedback. Moreover, the endogenous field feedback delay plays a vital role in SR. We reveal that appropriately tuned feedback delays can either induce the enhancement of SR, appearing at every integer multiple of the weak input signal's oscillation period, or the depression of SR, appearing at every integer multiple of half the weak input signal's oscillation period, for the same feedback coefficient. Interestingly, the parameters of the low-pass filter used to obtain the endogenous field feedback signal also play a subtle role in SR.

  1. Understanding the Generation of Network Bursts by Adaptive Oscillatory Neurons

    Directory of Open Access Journals (Sweden)

    Tanguy Fardet

    2018-02-01

Experimental and numerical studies have revealed that isolated populations of oscillatory neurons can spontaneously synchronize and generate periodic bursts involving the whole network. Such behavior has notably been observed for cultured neurons from rodent cortex or hippocampus. We show here that a sufficient condition for this network bursting is the presence of an excitatory population of oscillatory neurons which displays spike-driven adaptation. We provide an analytic model to analyze network bursts generated by coupled adaptive exponential integrate-and-fire neurons. We show that, for strong synaptic coupling, intrinsically tonic spiking neurons evolve to reach a synchronized intermittent bursting state. The presence of inhibitory neurons or plastic synapses can then modulate this dynamics in many ways but is not necessary for its appearance. Thanks to a simple self-consistent equation, our model gives an intuitive and semi-quantitative tool to understand the bursting behavior. Furthermore, it suggests that after-hyperpolarization currents are sufficient to explain bursting termination. Through a thorough mapping between the theoretical parameters and ion-channel properties, we discuss the biological mechanisms that could be involved and the relevance of the explored parameter space. Such insight enables us to propose experimentally testable predictions regarding how blocking fast, medium or slow after-hyperpolarization channels would affect the firing rate and burst duration, as well as the interburst interval.
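    For reference, the sketch below integrates a single adaptive exponential integrate-and-fire (AdEx) neuron; the equations are the standard AdEx model, but the parameter values are commonly quoted tonic-spiking values rather than those fitted in the paper, and the injected current, time step and Euler scheme are illustrative assumptions.

```python
import numpy as np

def adex_spike_times(i_ext=1.0e-9, t_max=0.5, dt=1e-5):
    """Euler integration of a single adaptive exponential integrate-and-fire neuron.

    C dV/dt = -g_L (V - E_L) + g_L Delta_T exp((V - V_T)/Delta_T) - w + I
    tau_w dw/dt = a (V - E_L) - w;   on a spike: V -> V_r, w -> w + b
    """
    C, g_L, E_L = 281e-12, 30e-9, -70.6e-3           # capacitance, leak conductance, rest
    V_T, Delta_T, V_peak, V_r = -50.4e-3, 2e-3, 0.0, -70.6e-3
    a, b, tau_w = 4e-9, 80.5e-12, 144e-3             # subthreshold and spike-triggered adaptation
    v, w, spike_times = E_L, 0.0, []
    for i in range(int(t_max / dt)):
        dv = (-g_L * (v - E_L) + g_L * Delta_T * np.exp((v - V_T) / Delta_T)
              - w + i_ext) / C
        dw = (a * (v - E_L) - w) / tau_w
        v, w = v + dt * dv, w + dt * dw
        if v >= V_peak:                              # spike: reset and increment adaptation
            v = V_r
            w += b
            spike_times.append(i * dt)
    return np.array(spike_times)

spikes = adex_spike_times()
print(f"{spikes.size} spikes; inter-spike intervals (ms):",
      np.round(np.diff(spikes) * 1e3, 1))           # ISIs lengthen as adaptation builds up
```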

  2. Constructing Precisely Computing Networks with Biophysical Spiking Neurons.

    Science.gov (United States)

    Schwemmer, Michael A; Fairhall, Adrienne L; Denéve, Sophie; Shea-Brown, Eric T

    2015-07-15

    While spike timing has been shown to carry detailed stimulus information at the sensory periphery, its possible role in network computation is less clear. Most models of computation by neural networks are based on population firing rates. In equivalent spiking implementations, firing is assumed to be random such that averaging across populations of neurons recovers the rate-based approach. Recently, however, Denéve and colleagues have suggested that the spiking behavior of neurons may be fundamental to how neuronal networks compute, with precise spike timing determined by each neuron's contribution to producing the desired output (Boerlin and Denéve, 2011; Boerlin et al., 2013). By postulating that each neuron fires to reduce the error in the network's output, it was demonstrated that linear computations can be performed by networks of integrate-and-fire neurons that communicate through instantaneous synapses. This left open, however, the possibility that realistic networks, with conductance-based neurons with subthreshold nonlinearity and the slower timescales of biophysical synapses, may not fit into this framework. Here, we show how the spike-based approach can be extended to biophysically plausible networks. We then show that our network reproduces a number of key features of cortical networks including irregular and Poisson-like spike times and a tight balance between excitation and inhibition. Lastly, we discuss how the behavior of our model scales with network size or with the number of neurons "recorded" from a larger computing network. These results significantly increase the biological plausibility of the spike-based approach to network computation. We derive a network of neurons with standard spike-generating currents and synapses with realistic timescales that computes based upon the principle that the precise timing of each spike is important for the computation. We then show that our network reproduces a number of key features of cortical networks

  3. Effects of channel noise on firing coherence of small-world Hodgkin-Huxley neuronal networks

    Science.gov (United States)

    Sun, X. J.; Lei, J. Z.; Perc, M.; Lu, Q. S.; Lv, S. J.

    2011-01-01

We investigate the effects of channel noise on firing coherence of Watts-Strogatz small-world networks consisting of biophysically realistic HH neurons having a fraction of blocked voltage-gated sodium and potassium ion channels embedded in their neuronal membranes. The intensity of channel noise is determined by the number of non-blocked ion channels, which depends on the fraction of working ion channels and the membrane patch size, with the assumption of homogeneous ion channel density. We find that firing coherence of the neuronal network can be either enhanced or reduced depending on the source of channel noise. As shown in this paper, sodium channel noise reduces firing coherence of neuronal networks; in contrast, potassium channel noise enhances it. Furthermore, compared with potassium channel noise, sodium channel noise plays a dominant role in affecting firing coherence of the neuronal network. Moreover, we find that the observed phenomena are independent of the rewiring probability.
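    As a back-of-the-envelope illustration of how channel-noise intensity follows from the patch size and the fraction of working channels, the snippet below assumes the commonly used densities of 60 Na+ and 18 K+ channels per square micrometre and the 1/sqrt(N) scaling of channel noise; the patch area and blocking fractions are arbitrary examples.

```python
# Number of working channels in a membrane patch and the resulting relative
# channel-noise level (which scales as 1/sqrt(N)). Densities of 60 Na+ and
# 18 K+ channels per um^2 are assumed; x_na, x_k are non-blocked fractions.
rho_na, rho_k = 60.0, 18.0            # channels per square micrometre
area = 10.0                           # membrane patch size in square micrometres
for x_na, x_k in [(1.0, 1.0), (0.5, 1.0), (1.0, 0.5)]:
    n_na = rho_na * area * x_na       # non-blocked sodium channels
    n_k = rho_k * area * x_k          # non-blocked potassium channels
    print(f"x_Na={x_na} x_K={x_k}: N_Na={n_na:.0f} (noise ~{n_na ** -0.5:.3f}), "
          f"N_K={n_k:.0f} (noise ~{n_k ** -0.5:.3f})")
```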

  4. Stages of neuronal network formation

    International Nuclear Information System (INIS)

    Woiterski, Lydia; Käs, Josef A; Claudepierre, Thomas; Luxenhofer, Robert; Jordan, Rainer

    2013-01-01

Graph theoretical approaches have become a powerful tool for investigating the architecture and dynamics of complex networks. The topology of network graphs has revealed small-world properties for very different real systems, among them neuronal networks. In this study, we observed the early development of mouse retinal ganglion cell (RGC) networks in vitro using time-lapse video microscopy. By means of a time-resolved graph theoretical analysis of the connectivity, shortest path length and edge length, we were able to identify distinct stages of network formation. Starting from single cells, in the first stage neurons connected to each other, ending up in a network with maximum complexity. Subsequently, we observed a simplification of the network, which manifested in changes of relevant network parameters such as a minimization of the path length. Moreover, we found that RGC networks self-organized as small-world networks at both stages; however, the optimization occurred only in the second stage. (paper)
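    One common way to make the small-world claim quantitative is the coefficient sigma = (C/C_rand)/(L/L_rand), which compares clustering C and path length L against a size-matched random graph; the networkx-based sketch below sets up such a test, with an illustrative Watts-Strogatz graph standing in for a measured RGC connectivity graph. The criterion actually used in the study may differ.

```python
import networkx as nx

def small_world_sigma(g, seed=0):
    """Small-world coefficient sigma = (C / C_rand) / (L / L_rand).

    C is the average clustering coefficient, L the mean shortest path length,
    and the reference is a random graph with the same number of nodes and
    edges (restricted to its largest component if it is disconnected).
    Values clearly above 1 indicate small-world organization.
    """
    rand = nx.gnm_random_graph(g.number_of_nodes(), g.number_of_edges(), seed=seed)
    if not nx.is_connected(rand):
        rand = rand.subgraph(max(nx.connected_components(rand), key=len))
    c, c_rand = nx.average_clustering(g), nx.average_clustering(rand)
    l, l_rand = (nx.average_shortest_path_length(g),
                 nx.average_shortest_path_length(rand))
    return (c / c_rand) / (l / l_rand)

# A lattice with a few shortcuts (a stand-in for a measured connectivity graph)
# scores well above 1, i.e. it is small-world.
g = nx.connected_watts_strogatz_graph(120, 6, 0.05, seed=1)
print("sigma =", round(small_world_sigma(g), 2))
```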

  5. Bursting synchronization in clustered neuronal networks

    International Nuclear Information System (INIS)

    Yu Hai-Tao; Wang Jiang; Deng Bin; Wei Xi-Le

    2013-01-01

Neuronal networks in the brain exhibit the modular (clustered) property, i.e., they are composed of certain subnetworks with differential internal and external connectivity. We investigate bursting synchronization in a clustered neuronal network. A transition to mutual-phase synchronization takes place on the bursting time scale of coupled neurons, while on the spiking time scale, they behave asynchronously. This synchronization transition can be induced by the variations of inter- and intracoupling strengths, as well as the probability of random links between different subnetworks. Considering that some pathological conditions are related to the synchronization of bursting neurons in the brain, we analyze the control of bursting synchronization by using a time-periodic external signal in the clustered neuronal network. Simulation results show a frequency locking tongue in the driving parameter plane, where bursting synchronization is maintained, even in the presence of external driving. Hence, effective synchronization suppression can be realized with the driving parameters outside the frequency locking region. (interdisciplinary physics and related areas of science and technology)

  6. Doubly stochastic coherence in complex neuronal networks

    Science.gov (United States)

    Gao, Yang; Wang, Jianjun

    2012-11-01

A system composed of coupled FitzHugh-Nagumo neurons with various topological structures is investigated under the co-presence of two independent additive and multiplicative Gaussian white noises, with particular attention paid to the spiking regularity of the neuronal network. As the additive noise intensity and the multiplicative noise intensity are simultaneously adjusted to optimal values, the temporal periodicity of the output of the system reaches a maximum, indicating the occurrence of doubly stochastic coherence. The randomness of the network topology exerts different influences on the temporal coherence of the spiking oscillation in different coupling strength regimes. At small coupling strength, the spiking regularity shows nearly no difference between the regular, small-world, and completely random networks. At intermediate coupling strength, the temporal periodicity in a small-world neuronal network can be improved slightly by adding a small fraction of long-range connections. At large coupling strength, the dynamical behavior of the neurons completely loses its resonance property with regard to the additive or multiplicative noise intensity, and the spiking regularity decreases considerably as the randomness of the network topology increases. Overall, network topology randomness plays more of a suppressive than a facilitating role in improving the temporal coherence of the spiking oscillations in the neuronal networks studied here.

  7. Synaptic Plasticity and Spike Synchronisation in Neuronal Networks

    Science.gov (United States)

    Borges, Rafael R.; Borges, Fernando S.; Lameu, Ewandson L.; Protachevicz, Paulo R.; Iarosz, Kelly C.; Caldas, Iberê L.; Viana, Ricardo L.; Macau, Elbert E. N.; Baptista, Murilo S.; Grebogi, Celso; Batista, Antonio M.

    2017-12-01

Brain plasticity, also known as neuroplasticity, is a fundamental mechanism of neuronal adaptation in response to changes in the environment or to brain injury. In this review, we present our results on the effects of synaptic plasticity on neuronal networks composed of Hodgkin-Huxley neurons. We show that the final topology of the evolved network depends crucially on the ratio between the strengths of the inhibitory and excitatory synapses. Excitation of the same order as inhibition reveals an evolved network that presents the rich-club phenomenon, well known to exist in the brain. For initial networks with considerably larger inhibitory strengths, we observe the emergence of a complex evolved topology in which neurons are sparsely connected to other neurons, also a topology typical of the brain. The presence of noise enhances the strength of both types of synapses, but only if the initial network has synapses of both kinds with similar strengths. Finally, we show how the synchronous behaviour of the evolved network reflects its evolved topology.
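    As a point of reference for the kind of plasticity rule typically considered in such studies, here is a minimal sketch of the pair-based exponential STDP window; the amplitudes and time constants are illustrative and this is not necessarily the exact rule analysed in the review.

```python
import numpy as np

def stdp_update(delta_t, a_plus=0.1, a_minus=0.12, tau_plus=20.0, tau_minus=20.0):
    """Pair-based exponential STDP window.

    delta_t = t_post - t_pre in ms. Pre-before-post (delta_t > 0) potentiates,
    post-before-pre depresses. Parameter values are illustrative only.
    """
    delta_t = np.asarray(delta_t, dtype=float)
    return np.where(delta_t >= 0,
                    a_plus * np.exp(-delta_t / tau_plus),
                    -a_minus * np.exp(delta_t / tau_minus))

# Depression for negative lags, potentiation for positive lags.
print(stdp_update([-40, -10, 5, 30]))
```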

  8. Identification of neuronal network properties from the spectral analysis of calcium imaging signals in neuronal cultures.

    Science.gov (United States)

    Tibau, Elisenda; Valencia, Miguel; Soriano, Jordi

    2013-01-01

Neuronal networks in vitro are prominent systems for studying the development of connections in living neuronal networks and the interplay between connectivity, activity and function. These cultured networks show a rich spontaneous activity that evolves concurrently with the connectivity of the underlying network. In this work we monitor the development of neuronal cultures and record their activity using calcium fluorescence imaging. We use spectral analysis to characterize global dynamical and structural traits of the neuronal cultures. We first observe that the power spectrum can be used as a signature of the state of the network, for instance when inhibition is active or silent, as well as a measure of the network's connectivity strength. Second, the power spectrum identifies prominent developmental changes in the network such as the GABAA switch. And third, the analysis of the spatial distribution of the spectral density, in experiments with a controlled disintegration of the network through CNQX, an AMPA-glutamate receptor antagonist in excitatory neurons, reveals the existence of communities of strongly connected, highly active neurons that display synchronous oscillations. Our work illustrates the value of spectral analysis for the study of in vitro networks and its potential use as a network-state indicator, for instance to compare healthy and diseased neuronal networks.

  9. Self-consistent determination of the spike-train power spectrum in a neural network with sparse connectivity

    Directory of Open Access Journals (Sweden)

Benjamin Dummer

    2014-09-01

A major source of random variability in cortical networks is the quasi-random arrival of presynaptic action potentials from many other cells. In network studies as well as in the study of the response properties of single cells embedded in a network, synaptic background input is often approximated by Poissonian spike trains. However, the output statistics of the cells is in most cases far from being Poisson. This is inconsistent with the assumption of similar spike-train statistics for pre- and postsynaptic cells in a recurrent network. Here we tackle this problem for the popular class of integrate-and-fire neurons and study a self-consistent statistics of input and output spectra of neural spike trains. Instead of actually using a large network, we use an iterative scheme, in which we simulate a single neuron over several generations. In each of these generations, the neuron is stimulated with surrogate stochastic input that has a similar statistics as the output of the previous generation. For the surrogate input, we employ two distinct approximations: (i) a superposition of renewal spike trains with the same interspike interval density as observed in the previous generation and (ii) a Gaussian current with a power spectrum proportional to that observed in the previous generation. For input parameters that correspond to balanced input in the network, both the renewal and the Gaussian iteration procedure converge quickly and yield comparable results for the self-consistent spike-train power spectrum. We compare our results to large-scale simulations of a random sparsely connected network of leaky integrate-and-fire neurons (Brunel, J. Comp. Neurosci. 2000) and show that in the asynchronous regime close to a state of balanced synaptic input from the network, our iterative schemes provide excellent approximations to the autocorrelation of spike trains in the recurrent network.

  10. Inverse stochastic resonance in networks of spiking neurons.

    Science.gov (United States)

    Uzuntarla, Muhammet; Barreto, Ernest; Torres, Joaquin J

    2017-07-01

    Inverse Stochastic Resonance (ISR) is a phenomenon in which the average spiking rate of a neuron exhibits a minimum with respect to noise. ISR has been studied in individual neurons, but here, we investigate ISR in scale-free networks, where the average spiking rate is calculated over the neuronal population. We use Hodgkin-Huxley model neurons with channel noise (i.e., stochastic gating variable dynamics), and the network connectivity is implemented via electrical or chemical connections (i.e., gap junctions or excitatory/inhibitory synapses). We find that the emergence of ISR depends on the interplay between each neuron's intrinsic dynamical structure, channel noise, and network inputs, where the latter in turn depend on network structure parameters. We observe that with weak gap junction or excitatory synaptic coupling, network heterogeneity and sparseness tend to favor the emergence of ISR. With inhibitory coupling, ISR is quite robust. We also identify dynamical mechanisms that underlie various features of this ISR behavior. Our results suggest possible ways of experimentally observing ISR in actual neuronal systems.

  11. Solving Constraint Satisfaction Problems with Networks of Spiking Neurons.

    Science.gov (United States)

    Jonke, Zeno; Habenschuss, Stefan; Maass, Wolfgang

    2016-01-01

Networks of neurons in the brain apply, unlike processors in our current generation of computer hardware, an event-based processing strategy, in which short pulses (spikes) are emitted sparsely by neurons to signal the occurrence of an event at a particular point in time. Such spike-based computations promise to be substantially more power-efficient than traditional clocked processing schemes. However, it turns out to be surprisingly difficult to design networks of spiking neurons that can solve difficult computational problems on the level of single spikes, rather than rates of spikes. We present here a new method for designing networks of spiking neurons via an energy function. Furthermore, we show how the energy function of a network of stochastically firing neurons can be shaped in a transparent manner by composing the network out of simple stereotypical network motifs. We show that this design approach enables networks of spiking neurons to produce approximate solutions to difficult (NP-hard) constraint satisfaction problems from the domains of planning/optimization and verification/logical inference. The resulting networks employ noise as a computational resource. Nevertheless, the timing of spikes plays an essential role in their computations. Furthermore, for the Traveling Salesman Problem, networks of spiking neurons carry out a more efficient stochastic search for good solutions than stochastic artificial neural networks (Boltzmann machines) and Gibbs sampling.

  12. Recurrently connected and localized neuronal communities initiate coordinated spontaneous activity in neuronal networks

    Science.gov (United States)

    Amin, Hayder; Maccione, Alessandro; Nieus, Thierry

    2017-01-01

    Developing neuronal systems intrinsically generate coordinated spontaneous activity that propagates by involving a large number of synchronously firing neurons. In vivo, waves of spikes transiently characterize the activity of developing brain circuits and are fundamental for activity-dependent circuit formation. In vitro, coordinated spontaneous spiking activity, or network bursts (NBs), interleaved within periods of asynchronous spikes emerge during the development of 2D and 3D neuronal cultures. Several studies have investigated this type of activity and its dynamics, but how a neuronal system generates these coordinated events remains unclear. Here, we investigate at a cellular level the generation of network bursts in spontaneously active neuronal cultures by exploiting high-resolution multielectrode array recordings and computational network modelling. Our analysis reveals that NBs are generated in specialized regions of the network (functional neuronal communities) that feature neuronal links with high cross-correlation peak values, sub-millisecond lags and that share very similar structural connectivity motifs providing recurrent interactions. We show that the particular properties of these local structures enable locally amplifying spontaneous asynchronous spikes and that this mechanism can lead to the initiation of NBs. Through the analysis of simulated and experimental data, we also show that AMPA currents drive the coordinated activity, while NMDA and GABA currents are only involved in shaping the dynamics of NBs. Overall, our results suggest that the presence of functional neuronal communities with recurrent local connections allows a neuronal system to generate spontaneous coordinated spiking activity events. As suggested by the rules used for implementing our computational model, such functional communities might naturally emerge during network development by following simple constraints on distance-based connectivity. PMID:28749937

  13. Recurrently connected and localized neuronal communities initiate coordinated spontaneous activity in neuronal networks.

    Directory of Open Access Journals (Sweden)

    Davide Lonardoni

    2017-07-01

Developing neuronal systems intrinsically generate coordinated spontaneous activity that propagates by involving a large number of synchronously firing neurons. In vivo, waves of spikes transiently characterize the activity of developing brain circuits and are fundamental for activity-dependent circuit formation. In vitro, coordinated spontaneous spiking activity, or network bursts (NBs), interleaved within periods of asynchronous spikes emerge during the development of 2D and 3D neuronal cultures. Several studies have investigated this type of activity and its dynamics, but how a neuronal system generates these coordinated events remains unclear. Here, we investigate at a cellular level the generation of network bursts in spontaneously active neuronal cultures by exploiting high-resolution multielectrode array recordings and computational network modelling. Our analysis reveals that NBs are generated in specialized regions of the network (functional neuronal communities) that feature neuronal links with high cross-correlation peak values, sub-millisecond lags and that share very similar structural connectivity motifs providing recurrent interactions. We show that the particular properties of these local structures enable locally amplifying spontaneous asynchronous spikes and that this mechanism can lead to the initiation of NBs. Through the analysis of simulated and experimental data, we also show that AMPA currents drive the coordinated activity, while NMDA and GABA currents are only involved in shaping the dynamics of NBs. Overall, our results suggest that the presence of functional neuronal communities with recurrent local connections allows a neuronal system to generate spontaneous coordinated spiking activity events. As suggested by the rules used for implementing our computational model, such functional communities might naturally emerge during network development by following simple constraints on distance-based connectivity.

  14. Synchronization of the small-world neuronal network with unreliable synapses

    International Nuclear Information System (INIS)

    Li, Chunguang; Zheng, Qunxian

    2010-01-01

As is well known, synchronization phenomena are ubiquitous in neuronal systems. Recently, a great deal of work on the synchronization of neuronal networks has been carried out. In these works, the synapses are usually considered reliable, but experimental results show that, in biological neuronal networks, synapses are usually unreliable. In our previous work, we studied the synchronization of a neuronal network with unreliable synapses; however, we did not consider the effect of topology on the synchronization of the neuronal network. Several recent studies have found that biological neuronal networks have typical properties of small-world networks, characterized by a short path length and a high clustering coefficient. In this work, mainly based on a small-world neuronal network (SWNN) with inhibitory neurons, we study the effect of network topology on the synchronization of the neuronal network with unreliable synapses. Together with the network topology, the effects of the GABAergic reversal potential, time delay and noise are also considered. Interestingly, we found a counter-intuitive phenomenon for the SWNN with a specific shortcut-adding probability: the less reliable the synapses, the better the synchronization performance of the SWNN. We also consider the effects of both local noise and global noise in this work. It is shown that these two different types of noise have distinct effects on the synchronization: one is negative and the other is positive.
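    A minimal sketch of an unreliable synapse of the kind discussed here is given below: each presynaptic spike is delivered independently with a fixed release probability and dropped otherwise; the spike statistics and probabilities are illustrative and unrelated to the SWNN model of the paper.

```python
import numpy as np

def transmit(spike_train, p_release, rng):
    """Unreliable synapse: each presynaptic spike is delivered independently
    with release probability p_release, otherwise it is dropped."""
    spike_train = np.asarray(spike_train, dtype=bool)
    return spike_train & (rng.random(spike_train.shape) < p_release)

rng = np.random.default_rng(4)
pre = rng.random(10000) < 0.02                      # ~2% spike probability per bin
for p in (1.0, 0.5, 0.1):
    print(f"p={p}: {transmit(pre, p, rng).sum()} of {pre.sum()} spikes delivered")
```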

  15. Solving constraint satisfaction problems with networks of spiking neurons

    Directory of Open Access Journals (Sweden)

Zeno Jonke

    2016-03-01

Networks of neurons in the brain apply, unlike processors in our current generation of computer hardware, an event-based processing strategy, where short pulses (spikes) are emitted sparsely by neurons to signal the occurrence of an event at a particular point in time. Such spike-based computations promise to be substantially more power-efficient than traditional clocked processing schemes. However it turned out to be surprisingly difficult to design networks of spiking neurons that can solve difficult computational problems on the level of single spikes (rather than rates of spikes). We present here a new method for designing networks of spiking neurons via an energy function. Furthermore we show how the energy function of a network of stochastically firing neurons can be shaped in a quite transparent manner by composing the networks of simple stereotypical network motifs. We show that this design approach enables networks of spiking neurons to produce approximate solutions to difficult (NP-hard) constraint satisfaction problems from the domains of planning/optimization and verification/logical inference. The resulting networks employ noise as a computational resource. Nevertheless the timing of spikes (rather than just spike rates) plays an essential role in their computations. Furthermore, networks of spiking neurons carry out for the Traveling Salesman Problem a more efficient stochastic search for good solutions compared with stochastic artificial neural networks (Boltzmann machines) and Gibbs sampling.

  16. Stochastic resonance on Newman-Watts networks of Hodgkin-Huxley neurons with local periodic driving

    Energy Technology Data Exchange (ETDEWEB)

    Ozer, Mahmut [Zonguldak Karaelmas University, Engineering Faculty, Department of Electrical and Electronics Engineering, 67100 Zonguldak (Turkey)], E-mail: mahmutozer2002@yahoo.com; Perc, Matjaz [University of Maribor, Faculty of Natural Sciences and Mathematics, Department of Physics, Koroska cesta 160, SI-2000 Maribor (Slovenia); Uzuntarla, Muhammet [Zonguldak Karaelmas University, Engineering Faculty, Department of Electrical and Electronics Engineering, 67100 Zonguldak (Turkey)

    2009-03-02

    We study the phenomenon of stochastic resonance on Newman-Watts small-world networks consisting of biophysically realistic Hodgkin-Huxley neurons with a tunable intensity of intrinsic noise via voltage-gated ion channels embedded in neuronal membranes. Importantly thereby, the subthreshold periodic driving is introduced to a single neuron of the network, thus acting as a pacemaker trying to impose its rhythm on the whole ensemble. We show that there exists an optimal intensity of intrinsic ion channel noise by which the outreach of the pacemaker extends optimally across the whole network. This stochastic resonance phenomenon can be further amplified via fine-tuning of the small-world network structure, and depends significantly also on the coupling strength among neurons and the driving frequency of the pacemaker. In particular, we demonstrate that the noise-induced transmission of weak localized rhythmic activity peaks when the pacemaker frequency matches the intrinsic frequency of subthreshold oscillations. The implications of our findings for weak signal detection and information propagation across neural networks are discussed.

  17. Management of synchronized network activity by highly active neurons

    International Nuclear Information System (INIS)

    Shein, Mark; Raichman, Nadav; Ben-Jacob, Eshel; Volman, Vladislav; Hanein, Yael

    2008-01-01

    Increasing evidence supports the idea that spontaneous brain activity may have an important functional role. Cultured neuronal networks provide a suitable model system to search for the mechanisms by which neuronal spontaneous activity is maintained and regulated. This activity is marked by synchronized bursting events (SBEs)—short time windows (hundreds of milliseconds) of rapid neuronal firing separated by long quiescent periods (seconds). However, there exists a special subset of rapidly firing neurons whose activity also persists between SBEs. It has been proposed that these highly active (HA) neurons play an important role in the management (i.e. establishment, maintenance and regulation) of the synchronized network activity. Here, we studied the dynamical properties and the functional role of HA neurons in homogeneous and engineered networks, during early network development, upon recovery from chemical inhibition and in response to electrical stimulations. We found that their sequences of inter-spike intervals (ISI) exhibit long time correlations and a unimodal distribution. During the network's development and under intense inhibition, the observed activity follows a transition period during which mostly HA neurons are active. Studying networks with engineered geometry, we found that HA neurons are precursors (the first to fire) of the spontaneous SBEs and are more responsive to electrical stimulations

  18. Population coding in sparsely connected networks of noisy neurons.

    Science.gov (United States)

    Tripp, Bryan P; Orchard, Jeff

    2012-01-01

    This study examines the relationship between population coding and spatial connection statistics in networks of noisy neurons. Encoding of sensory information in the neocortex is thought to require coordinated neural populations, because individual cortical neurons respond to a wide range of stimuli, and exhibit highly variable spiking in response to repeated stimuli. Population coding is rooted in network structure, because cortical neurons receive information only from other neurons, and because the information they encode must be decoded by other neurons, if it is to affect behavior. However, population coding theory has often ignored network structure, or assumed discrete, fully connected populations (in contrast with the sparsely connected, continuous sheet of the cortex). In this study, we modeled a sheet of cortical neurons with sparse, primarily local connections, and found that a network with this structure could encode multiple internal state variables with high signal-to-noise ratio. However, we were unable to create high-fidelity networks by instantiating connections at random according to spatial connection probabilities. In our models, high-fidelity networks required additional structure, with higher cluster factors and correlations between the inputs to nearby neurons.

  19. Mechanism for propagation of rate signals through a 10-layer feedforward neuronal network

    International Nuclear Information System (INIS)

    Jie, Li; Wan-Qing, Yu; Ding, Xu; Feng, Liu; Wei, Wang

    2009-01-01

    Using numerical simulations, we explore the mechanism for propagation of rate signals through a 10-layer feedforward network composed of Hodgkin–Huxley (HH) neurons with sparse connectivity. When white noise is afferent to the input layer, neuronal firing becomes progressively more synchronous in successive layers and synchrony is well developed in deeper layers owing to the feedforward connections between neighboring layers. The synchrony ensures the successful propagation of rate signals through the network when the synaptic conductance is weak. As the synaptic time constant τ syn varies, coherence resonance is observed in the network activity due to the intrinsic property of HH neurons. This makes the output firing rate single-peaked as a function of τ syn , suggesting that the signal propagation can be modulated by the synaptic time constant. These results are consistent with experimental results and advance our understanding of how information is processed in feedforward networks. (cross-disciplinary physics and related areas of science and technology)

  20. Pattern formation and firing synchronization in networks of map neurons

    International Nuclear Information System (INIS)

    Wang Qingyun; Duan Zhisheng; Huang Lin; Chen Guanrong; Lu Qishao

    2007-01-01

    Patterns and collective phenomena such as firing synchronization are studied in networks of nonhomogeneous oscillatory neurons and mixtures of oscillatory and excitable neurons, with dynamics of each neuron described by a two-dimensional (2D) Rulkov map neuron. It is shown that as the coupling strength is increased, typical patterns emerge spatially, which propagate through the networks in the form of beautiful target waves or parallel ones depending on the size of networks. Furthermore, we investigate the transitions of firing synchronization characterized by the rate of firing when the coupling strength is increased. It is found that there exists an intermediate coupling strength at which firing synchronization is minimal, irrespective of the size of networks. As the coupling strength is further increased, synchronization is enhanced. Since noise is inevitable in real neurons, we also investigate the effects of white noise on firing synchronization for different networks. For the networks of oscillatory neurons, it is shown that firing synchronization decreases when the noise level increases. For the mixed networks, firing synchronization is robust under the noise conditions considered in this paper. Results presented in this paper should prove to be valuable for understanding the properties of collective dynamics in real neuronal networks.
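
    For readers unfamiliar with the model, a small sketch of a ring of diffusively coupled Rulkov map neurons follows (assumptions: the common chaotic two-dimensional Rulkov map with illustrative parameters alpha, sigma, mu and nearest-neighbour coupling; not the exact networks or parameter values of the paper). The spatial spread of the fast variable gives a crude measure of firing synchronization versus coupling strength.

```python
import numpy as np

# Minimal sketch of a ring of diffusively coupled Rulkov map neurons
# (one commonly used chaotic parameterization; not the paper's exact setup).
def simulate(eps, N=50, steps=5000, alpha=4.3, sigma=0.1, mu=0.001, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1.5, -0.5, N)
    y = np.full(N, -2.9)
    sync = []
    for n in range(steps):
        coupling = eps * (np.roll(x, 1) + np.roll(x, -1) - 2 * x)   # nearest neighbours on a ring
        x_new = alpha / (1.0 + x ** 2) + y + coupling               # fast variable (membrane-like)
        y = y - mu * (x + 1.0) + mu * sigma                          # slow variable (adaptation-like)
        x = x_new
        if n > steps // 2:                                           # discard the transient
            sync.append(x.std())                                     # spread across neurons
    return np.mean(sync)

for eps in (0.0, 0.05, 0.2):
    print(f"coupling eps={eps:4.2f}  mean spatial spread of x: {simulate(eps):.3f}")
```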

  1. The Hypocretin/Orexin Neuronal Networks in Zebrafish.

    Science.gov (United States)

    Elbaz, Idan; Levitas-Djerbi, Talia; Appelbaum, Lior

    2017-01-01

    The hypothalamic Hypocretin/Orexin (Hcrt) neurons secrete two Hcrt neuropeptides. These neurons and peptides play a major role in the regulation of feeding, the sleep-wake cycle, reward-seeking, addiction, and stress. Loss of Hcrt neurons causes the sleep disorder narcolepsy. The zebrafish has become an attractive model to study the Hcrt neuronal network because it is a transparent vertebrate that enables simple genetic manipulation, imaging of the structure and function of neuronal circuits in live animals, and high-throughput monitoring of behavioral performance during both day and night. The zebrafish Hcrt network comprises ~16-60 neurons, which, similar to mammals, are located in the hypothalamus and widely innervate the brain and spinal cord, and regulate various fundamental behaviors such as feeding, sleep, and wakefulness. Here we review how the zebrafish contributes to the study of the Hcrt neuronal system molecularly, anatomically, physiologically, and pathologically.

  2. Visualizing neuronal network connectivity with connectivity pattern tables

    Directory of Open Access Journals (Sweden)

    Eilen Nordlie

    2010-01-01

    Full Text Available Complex ideas are best conveyed through well-designed illustrations. Up to now, computational neuroscientists have mostly relied on box-and-arrow diagrams of even complex neuronal networks, often using ad hoc notations with conflicting use of symbols from paper to paper. This significantly impedes the communication of ideas in neuronal network modeling. We present here Connectivity Pattern Tables (CPTs) as a clutter-free visualization of connectivity in large neuronal networks containing two-dimensional populations of neurons. CPTs can be generated automatically from the same script code used to create the actual network in the NEST simulator. Through aggregation, CPTs can be viewed at different levels, providing either full detail or summary information. We also provide the open source ConnPlotter tool as a means to create connectivity pattern tables.

  3. Stochastic Wilson–Cowan models of neuronal network dynamics with memory and delay

    International Nuclear Information System (INIS)

    Goychuk, Igor; Goychuk, Andriy

    2015-01-01

    We consider a simple Markovian class of the stochastic Wilson–Cowan type models of neuronal network dynamics, which incorporates stochastic delay caused by the existence of a refractory period of neurons. From the point of view of the dynamics of the individual elements, we are dealing with a network of non-Markovian stochastic two-state oscillators with memory, which are coupled globally in a mean-field fashion. This interrelation of a higher-dimensional Markovian and lower-dimensional non-Markovian dynamics is discussed in its relevance to the general problem of the network dynamics of complex elements possessing memory. The simplest model of this class is provided by a three-state Markovian neuron with one refractory state, which causes firing delay with an exponentially decaying memory within the two-state reduced model. This basic model is used to study critical avalanche dynamics (the noise sustained criticality) in a balanced feedforward network consisting of the excitatory and inhibitory neurons. Such avalanches emerge due to the network size dependent noise (mesoscopic noise). Numerical simulations reveal an intermediate power law in the distribution of avalanche sizes with the critical exponent around −1.16. We show that this power law is robust upon a variation of the refractory time over several orders of magnitude. However, the avalanche time distribution is biexponential. It does not reflect any genuine power law dependence. (paper)
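
    Avalanche statistics of the kind described above can be sketched with a crude branching caricature (assumptions: quiescent/active/refractory units, a uniform activation probability tuned so the expected number of offspring per spike is near one, a single seeded spike per avalanche; this is not the balanced excitatory-inhibitory Wilson-Cowan model of the paper and will not reproduce the reported exponent).

```python
import numpy as np

# Caricature of avalanche generation in a network of three-state neurons
# (quiescent / active / refractory), tuned near criticality.
rng = np.random.default_rng(2)
N, tau_ref = 1000, 5                  # network size and refractory period (steps)
p = 1.0 / N                           # activation probability per (active, quiescent) pair

def avalanche_size():
    state = np.zeros(N, dtype=int)    # 0 quiescent, >0 remaining refractory steps, -1 active
    state[rng.integers(N)] = -1       # seed a single spike
    size = 1
    while np.any(state == -1):
        active = np.flatnonzero(state == -1)
        quiescent = np.flatnonzero(state == 0)
        # Each quiescent neuron is activated with probability 1 - (1-p)^n_active.
        p_act = 1.0 - (1.0 - p) ** len(active)
        newly = quiescent[rng.random(len(quiescent)) < p_act]
        state[active] = tau_ref       # active -> refractory
        state[state > 0] -= 1         # refractory neurons count down towards quiescence
        state[newly] = -1
        size += len(newly)
    return size

sizes = np.array([avalanche_size() for _ in range(5000)])
print("mean avalanche size:", sizes.mean())
print("fraction of avalanches larger than 10:", np.mean(sizes > 10))
```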

  4. Anti-correlated cortical networks arise from spontaneous neuronal dynamics at slow timescales.

    Science.gov (United States)

    Kodama, Nathan X; Feng, Tianyi; Ullett, James J; Chiel, Hillel J; Sivakumar, Siddharth S; Galán, Roberto F

    2018-01-12

    In the highly interconnected architectures of the cerebral cortex, recurrent intracortical loops disproportionately outnumber thalamo-cortical inputs. These networks are also capable of generating neuronal activity without feedforward sensory drive. It is unknown, however, what spatiotemporal patterns may be solely attributed to intrinsic connections of the local cortical network. Using high-density microelectrode arrays, here we show that in the isolated, primary somatosensory cortex of mice, neuronal firing fluctuates on timescales from milliseconds to tens of seconds. Slower firing fluctuations reveal two spatially distinct neuronal ensembles, which correspond to superficial and deeper layers. These ensembles are anti-correlated: when one fires more, the other fires less and vice versa. This interplay is clearest at timescales of several seconds and is therefore consistent with shifts between active sensing and anticipatory behavioral states in mice.

  5. Complete and phase synchronization in a heterogeneous small-world neuronal network

    International Nuclear Information System (INIS)

    Fang, Han; Qi-Shao, Lu; Quan-Bao, Ji; Marian, Wiercigroch

    2009-01-01

    Synchronous firing of neurons is thought to be important for information communication in neuronal networks. This paper investigates the complete and phase synchronization in a heterogeneous small-world chaotic Hindmarsh–Rose neuronal network. The effects of various network parameters on synchronization behaviour are discussed with some biological explanations. Complete synchronization of small-world neuronal networks is studied theoretically by the master stability function method. It is shown that the coupling strength necessary for complete or phase synchronization decreases as the neuron number, the node degree and the connection density are increased. The effect of heterogeneity of neuronal networks is also considered and it is found that the network heterogeneity has an adverse effect on synchrony. (general)

  6. Results on a Binding Neuron Model and Their Implications for Modified Hourglass Model for Neuronal Network

    Directory of Open Access Journals (Sweden)

    Viswanathan Arunachalam

    2013-01-01

    Full Text Available The classical models of a single neuron, like the Hodgkin-Huxley point neuron or the leaky integrate-and-fire neuron, assume the influence of postsynaptic potentials to last till the neuron fires. Vidybida (2008), in a refreshing departure, has proposed models for binding neurons in which the trace of an input is remembered only for a finite fixed period of time after which it is forgotten. The binding neurons conform to the behaviour of real neurons and are applicable in constructing fast recurrent networks for computer modeling. This paper develops explicitly several useful results for a binding neuron, such as the firing time distribution and other statistical characteristics. We also discuss the applicability of the developed results in constructing a modified hourglass network model in which there are interconnected neurons with excitatory as well as inhibitory inputs. Limited simulation results of the hourglass network are presented.
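
    A minimal sketch of the binding-neuron mechanism described above (assumptions: Poisson input of rate lam, memory span tau, threshold N0, all values illustrative): each impulse is stored for a fixed time and then forgotten, and the neuron fires once N0 impulses are simultaneously in store, which makes the firing-time distribution easy to estimate by simulation.

```python
import numpy as np

# Minimal sketch of a binding neuron: each input impulse is remembered for a fixed time tau
# and then forgotten; the neuron fires when N0 impulses are simultaneously stored.
# Input is a Poisson spike train of rate lam (parameters are illustrative).
rng = np.random.default_rng(3)
lam, tau, N0 = 50.0, 0.05, 4          # input rate (Hz), memory span (s), threshold

def first_firing_time(max_inputs=100000):
    t, stored = 0.0, []
    for _ in range(max_inputs):
        t += rng.exponential(1.0 / lam)              # next Poisson input impulse
        stored = [s for s in stored if t - s < tau]  # forget impulses older than tau
        stored.append(t)
        if len(stored) >= N0:                        # threshold reached -> output spike
            return t
    return np.nan

samples = np.array([first_firing_time() for _ in range(2000)])
print(f"mean firing time: {samples.mean()*1000:.1f} ms")
print(f"coefficient of variation: {samples.std()/samples.mean():.2f}")
```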

  7. Network feedback regulates motor output across a range of modulatory neuron activity.

    Science.gov (United States)

    Spencer, Robert M; Blitz, Dawn M

    2016-06-01

    Modulatory projection neurons alter network neuron synaptic and intrinsic properties to elicit multiple different outputs. Sensory and other inputs elicit a range of modulatory neuron activity that is further shaped by network feedback, yet little is known regarding how the impact of network feedback on modulatory neurons regulates network output across a physiological range of modulatory neuron activity. Identified network neurons, a fully described connectome, and a well-characterized, identified modulatory projection neuron enabled us to address this issue in the crab (Cancer borealis) stomatogastric nervous system. The modulatory neuron modulatory commissural neuron 1 (MCN1) activates and modulates two networks that generate rhythms via different cellular mechanisms and at distinct frequencies. MCN1 is activated at rates of 5-35 Hz in vivo and in vitro. Additionally, network feedback elicits MCN1 activity time-locked to motor activity. We asked how network activation, rhythm speed, and neuron activity levels are regulated by the presence or absence of network feedback across a physiological range of MCN1 activity rates. There were both similarities and differences in responses of the two networks to MCN1 activity. Many parameters in both networks were sensitive to network feedback effects on MCN1 activity. However, for most parameters, MCN1 activity rate did not determine the extent to which network output was altered by the addition of network feedback. These data demonstrate that the influence of network feedback on modulatory neuron activity is an important determinant of network output and feedback can be effective in shaping network output regardless of the extent of network modulation. Copyright © 2016 the American Physiological Society.

  8. Hopf bifurcation of an (n + 1) -neuron bidirectional associative memory neural network model with delays.

    Science.gov (United States)

    Xiao, Min; Zheng, Wei Xing; Cao, Jinde

    2013-01-01

    Recent studies on Hopf bifurcations of neural networks with delays are confined to simplified neural network models consisting of only two, three, four, five, or six neurons. It is well known that neural networks are complex and large-scale nonlinear dynamical systems, so the dynamics of the delayed neural networks are very rich and complicated. Although discussing the dynamics of networks with a few neurons may help us to understand large-scale networks, there are inevitably some complicated problems that may be overlooked if simplified networks are carried over to large-scale networks. In this paper, a general delayed bidirectional associative memory neural network model with n + 1 neurons is considered. By analyzing the associated characteristic equation, the local stability of the trivial steady state is examined, and then the existence of the Hopf bifurcation at the trivial steady state is established. By applying the normal form theory and the center manifold reduction, explicit formulae are derived to determine the direction and stability of the bifurcating periodic solution. Furthermore, the paper highlights situations where the Hopf bifurcations are particularly critical, in the sense that the amplitude and the period of oscillations are very sensitive to errors due to tolerances in the implementation of neuron interconnections. It is shown that the sensitivity is crucially dependent on the delay and also significantly influenced by the feature of the number of neurons. Numerical simulations are carried out to illustrate the main results.
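
    A toy numerical illustration of delay-induced oscillation onset (assumptions: a two-neuron delayed negative-feedback loop with tanh activation, gains a = 2 and b = -2, and Euler integration; this is a caricature of the delayed feedback mechanism, not the full (n + 1)-neuron BAM model analysed in the paper): for these toy gains the rest state is stable at small delay, and sustained oscillations appear once the delay exceeds roughly 0.3.

```python
import numpy as np

# Two-neuron delayed feedback loop:
#   u'(t) = -u(t) + a*tanh(v(t - tau)),   v'(t) = -v(t) + b*tanh(u(t - tau)).
# The steady-state oscillation amplitude of u is reported for several delays.
def amplitude(tau, a=2.0, b=-2.0, dt=0.001, T=60.0):
    n_delay = max(1, int(round(tau / dt)))
    steps = int(T / dt)
    hist_u = np.full(n_delay, 0.01)            # constant history as a small perturbation
    hist_v = np.full(n_delay, -0.01)
    u, v = 0.01, -0.01
    trace = []
    for k in range(steps):
        u_del, v_del = hist_u[k % n_delay], hist_v[k % n_delay]   # values ~tau in the past
        du = -u + a * np.tanh(v_del)
        dv = -v + b * np.tanh(u_del)
        u, v = u + dt * du, v + dt * dv
        hist_u[k % n_delay], hist_v[k % n_delay] = u, v
        if k > steps // 2:                      # record only after the transient
            trace.append(u)
    trace = np.array(trace)
    return trace.max() - trace.min()

for tau in (0.1, 0.3, 0.6):
    print(f"delay tau={tau:.1f}  steady-state amplitude of u: {amplitude(tau):.3f}")
```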

  9. Network and neuronal membrane properties in hybrid networks reciprocally regulate selectivity to rapid thalamocortical inputs.

    Science.gov (United States)

    Pesavento, Michael J; Pinto, David J

    2012-11-01

    Rapidly changing environments require rapid processing from sensory inputs. Varying deflection velocities of a rodent's primary facial vibrissa cause varying temporal neuronal activity profiles within the ventral posteromedial thalamic nucleus. Local neuron populations in a single somatosensory layer 4 barrel transform sparsely coded input into a spike count based on the input's temporal profile. We investigate this transformation by creating a barrel-like hybrid network with whole cell recordings of in vitro neurons from a cortical slice preparation, embedding the biological neuron in the simulated network by presenting virtual synaptic conductances via a conductance clamp. Utilizing the hybrid network, we examine the reciprocal network properties (local excitatory and inhibitory synaptic convergence) and neuronal membrane properties (input resistance) by altering the barrel population response to diverse thalamic input. In the presence of local network input, neurons are more selective to thalamic input timing; this arises from strong feedforward inhibition. Strongly inhibitory (damping) network regimes are more selective to timing and less selective to the magnitude of input but require stronger initial input. Input selectivity relies heavily on the different membrane properties of excitatory and inhibitory neurons. When inhibitory and excitatory neurons had identical membrane properties, the sensitivity of in vitro neurons to temporal vs. magnitude features of input was substantially reduced. Increasing the mean leak conductance of the inhibitory cells decreased the network's temporal sensitivity, whereas increasing excitatory leak conductance enhanced magnitude sensitivity. Local network synapses are essential in shaping thalamic input, and differing membrane properties of functional classes reciprocally modulate this effect.

  10. Computing with Spiking Neuron Networks

    NARCIS (Netherlands)

    H. Paugam-Moisy; S.M. Bohte (Sander); G. Rozenberg; T.H.W. Baeck (Thomas); J.N. Kok (Joost)

    2012-01-01

    Spiking Neuron Networks (SNNs) are often referred to as the 3rd generation of neural networks. Highly inspired from natural computing in the brain and recent advances in neurosciences, they derive their strength and interest from an accurate modeling of synaptic interactions

  11. Hybrid Scheme for Modeling Local Field Potentials from Point-Neuron Networks

    DEFF Research Database (Denmark)

    Hagen, Espen; Dahmen, David; Stavrinou, Maria L

    2016-01-01

    With rapidly advancing multi-electrode recording technology, the local field potential (LFP) has again become a popular measure of neuronal activity in both research and clinical applications. Proper understanding of the LFP requires detailed mathematical modeling incorporating the anatomical and electrophysiological features of neurons near the recording electrode, as well as synaptic inputs from the entire network. Here we propose a hybrid modeling scheme combining efficient point-neuron network models with biophysical principles underlying LFP generation by real neurons. The LFP predictions rely on populations of network-equivalent multicompartment neuron models with layer-specific synaptic connectivity, can be used with an arbitrary number of point-neuron network populations, and allows for a full separation of simulated network dynamics and LFPs. We apply the scheme to a full-scale cortical network...

  12. Energy-efficient neural information processing in individual neurons and neuronal networks.

    Science.gov (United States)

    Yu, Lianchun; Yu, Yuguo

    2017-11-01

    Brains are composed of networks of an enormous number of neurons interconnected with synapses. Neural information is carried by the electrical signals within neurons and the chemical signals among neurons. Generating these electrical and chemical signals is metabolically expensive. The fundamental issue raised here is whether brains have evolved efficient ways of developing an energy-efficient neural code from the molecular level to the circuit level. Here, we summarize the factors and biophysical mechanisms that could contribute to the energy-efficient neural code for processing input signals. The factors range from ion channel kinetics, body temperature, axonal propagation of action potentials, low-probability release of synaptic neurotransmitters, optimal input and noise, the size of neurons and neuronal clusters, excitation/inhibition balance, coding strategy, cortical wiring, and the organization of functional connectivity. Both experimental and computational evidence suggests that neural systems may use these factors to maximize the efficiency of energy consumption in processing neural signals. Studies indicate that efficient energy utilization may be universal in neuronal systems as an evolutionary consequence of the pressure of limited energy. As a result, neuronal connections may be wired in a highly economical manner to lower energy costs and space. Individual neurons within a network may encode independent stimulus components to allow a minimal number of neurons to represent whole stimulus characteristics efficiently. This basic principle may fundamentally change our view of how billions of neurons organize themselves into complex circuits to operate and generate the most powerful intelligent cognition in nature. © 2017 Wiley Periodicals, Inc.

  13. Population coding in sparsely connected networks of noisy neurons

    OpenAIRE

    Tripp, Bryan P.; Orchard, Jeff

    2012-01-01

    This study examines the relationship between population coding and spatial connection statistics in networks of noisy neurons. Encoding of sensory information in the neocortex is thought to require coordinated neural populations, because individual cortical neurons respond to a wide range of stimuli, and exhibit highly variable spiking in response to repeated stimuli. Population coding is rooted in network structure, because cortical neurons receive information only from other neurons, and be...

  14. A spatially resolved network spike in model neuronal cultures reveals nucleation centers, circular traveling waves and drifting spiral waves.

    Science.gov (United States)

    Paraskevov, A V; Zendrikov, D K

    2017-03-23

    We show that in model neuronal cultures, where the probability of interneuronal connection formation decreases exponentially with increasing distance between the neurons, there exists a small number of spatial nucleation centers of a network spike, from where the synchronous spiking activity starts propagating in the network typically in the form of circular traveling waves. The number of nucleation centers and their spatial locations are unique and unchanged for a given realization of the neuronal network but are different for different networks. In contrast, if the probability of interneuronal connection formation is independent of the distance between neurons, then the nucleation centers do not arise and the synchronization of spiking activity during a network spike occurs spatially uniformly throughout the network. Therefore, one can conclude that spatial proximity of connections between neurons is important for the formation of nucleation centers. It is also shown that fluctuations of the spatial density of neurons at their random homogeneous distribution typical for the experiments in vitro do not determine the locations of the nucleation centers. The simulation results are qualitatively consistent with the experimental observations.

  15. Population Coding in Sparsely Connected Networks of Noisy Neurons

    Directory of Open Access Journals (Sweden)

    Bryan Patrick Tripp

    2012-05-01

    Full Text Available This study examines the relationship between population coding and spatial connection statistics in networks of noisy neurons. Encoding of sensory information in the neocortex is thought to require coordinated neural populations, because individual cortical neurons respond to a wide range of stimuli, and exhibit highly variable spiking in response to repeated stimuli. Population coding is rooted in network structure, because cortical neurons receive information only from other neurons, and because the information they encode must be decoded by other neurons, if it is to affect behaviour. However, population coding theory has often ignored network structure, or assumed discrete, fully-connected populations (in contrast with the sparsely connected, continuous sheet of the cortex). In this study, we model a sheet of cortical neurons with sparse, primarily local connections, and find that a network with this structure can encode multiple internal state variables with high signal-to-noise ratio. However, in our model, although connection probability varies with the distance between neurons, we find that the connections cannot be instantiated at random according to these probabilities, but must have additional structure if information is to be encoded with high fidelity.
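
    The baseline construction mentioned in the abstract, random instantiation from a distance-dependent connection probability, can be sketched as follows (assumptions: a Gaussian fall-off of connection probability with distance and illustrative parameters; the authors report that such purely random instantiation was not sufficient for high-fidelity coding and that additional clustering and input correlations were needed, which this sketch does not add).

```python
import numpy as np
import networkx as nx

# Baseline sketch: neurons on a 2D sheet connected at random according to a
# distance-dependent probability (illustrative parameters only).
rng = np.random.default_rng(4)
N, sheet_size = 400, 1.0                     # neurons, sheet side length (arbitrary units)
p_max, sigma_d = 0.6, 0.1                    # peak connection probability, spatial length scale

pos = rng.uniform(0, sheet_size, size=(N, 2))
d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
p_conn = p_max * np.exp(-d ** 2 / (2 * sigma_d ** 2))
np.fill_diagonal(p_conn, 0.0)                # no self-connections
A = rng.random((N, N)) < p_conn              # directed adjacency sampled independently

G = nx.from_numpy_array(A.astype(int), create_using=nx.DiGraph)
print(f"connection density: {A.mean():.3f}")
print(f"average clustering coefficient: {nx.average_clustering(G):.3f}")
```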

  16. Bistability induces episodic spike communication by inhibitory neurons in neuronal networks.

    Science.gov (United States)

    Kazantsev, V B; Asatryan, S Yu

    2011-09-01

    Bistability is one of the important features of nonlinear dynamical systems. In neurodynamics, bistability has been found in basic Hodgkin-Huxley equations describing the cell membrane dynamics. When the neuron is clamped near its threshold, the stable rest potential may coexist with the stable limit cycle describing periodic spiking. However, this effect is often neglected in network computations where the neurons are typically reduced to threshold firing units (e.g., integrate-and-fire models). We found that the bistability may induce spike communication by inhibitory coupled neurons in the spiking network. The communication is realized in the form of episodic discharges with synchronous (correlated) spikes during the episodes. A spiking phase map is constructed to describe the synchronization and to estimate basic spike phase locking modes.

  17. Dislocation Coupling-Induced Transition of Synchronization in Two-Layer Neuronal Networks

    International Nuclear Information System (INIS)

    Qin Hui-Xin; Ma Jun; Wang Chun-Ni; Jin Wu-Yin

    2014-01-01

    The mutual coupling between neurons in a realistic neuronal system is highly complex, and a two-layer neuronal network is designed to investigate the transition of electric activities of neurons. The Hindmarsh—Rose neuron model is used to describe the local dynamics of each neuron, and neurons in the two-layer networks are coupled in a dislocated way. The coupling intensity between the two-layer networks, and the coupling ratio (Pro), which defines the percentage of neurons involved in the coupling in each layer, are changed to observe the synchronization transition of collective behaviors in the two-layer networks. It is found that the two-layer networks of neurons become synchronized when the coupling intensity and coupling ratio (Pro) are increased beyond certain thresholds. An ordered wave in the first layer can wake up the resting state in the second layer, or suppress the spatiotemporal state in the second layer under coupling, by generating target waves or spiral waves. The scheme of dislocation coupling can thus be used to suppress spatiotemporal chaos and excite quiescent neurons. (interdisciplinary physics and related areas of science and technology)

  18. Towards reproducible descriptions of neuronal network models.

    Directory of Open Access Journals (Sweden)

    Eilen Nordlie

    2009-08-01

    Full Text Available Progress in science depends on the effective exchange of ideas among scientists. New ideas can be assessed and criticized in a meaningful manner only if they are formulated precisely. This applies to simulation studies as well as to experiments and theories. But after more than 50 years of neuronal network simulations, we still lack a clear and common understanding of the role of computational models in neuroscience as well as established practices for describing network models in publications. This hinders the critical evaluation of network models as well as their re-use. We analyze here 14 research papers proposing neuronal network models of different complexity and find widely varying approaches to model descriptions, with regard to both the means of description and the ordering and placement of material. We further observe great variation in the graphical representation of networks and the notation used in equations. Based on our observations, we propose a good model description practice, composed of guidelines for the organization of publications, a checklist for model descriptions, templates for tables presenting model structure, and guidelines for diagrams of networks. The main purpose of this good practice is to trigger a debate about the communication of neuronal network models in a manner comprehensible to humans, as opposed to machine-readable model description languages. We believe that the good model description practice proposed here, together with a number of other recent initiatives on data-, model-, and software-sharing, may lead to a deeper and more fruitful exchange of ideas among computational neuroscientists in years to come. We further hope that work on standardized ways of describing--and thinking about--complex neuronal networks will lead the scientific community to a clearer understanding of high-level concepts in network dynamics, and will thus lead to deeper insights into the function of the brain.

  19. Hybrid Scheme for Modeling Local Field Potentials from Point-Neuron Networks.

    Science.gov (United States)

    Hagen, Espen; Dahmen, David; Stavrinou, Maria L; Lindén, Henrik; Tetzlaff, Tom; van Albada, Sacha J; Grün, Sonja; Diesmann, Markus; Einevoll, Gaute T

    2016-12-01

    With rapidly advancing multi-electrode recording technology, the local field potential (LFP) has again become a popular measure of neuronal activity in both research and clinical applications. Proper understanding of the LFP requires detailed mathematical modeling incorporating the anatomical and electrophysiological features of neurons near the recording electrode, as well as synaptic inputs from the entire network. Here we propose a hybrid modeling scheme combining efficient point-neuron network models with biophysical principles underlying LFP generation by real neurons. The LFP predictions rely on populations of network-equivalent multicompartment neuron models with layer-specific synaptic connectivity, can be used with an arbitrary number of point-neuron network populations, and allows for a full separation of simulated network dynamics and LFPs. We apply the scheme to a full-scale cortical network model for a ∼1 mm 2 patch of primary visual cortex, predict laminar LFPs for different network states, assess the relative LFP contribution from different laminar populations, and investigate effects of input correlations and neuron density on the LFP. The generic nature of the hybrid scheme and its public implementation in hybridLFPy form the basis for LFP predictions from other and larger point-neuron network models, as well as extensions of the current application with additional biological detail. © The Author 2016. Published by Oxford University Press.

  20. Control of bursting synchronization in networks of Hodgkin-Huxley-type neurons with chemical synapses.

    Science.gov (United States)

    Batista, C A S; Viana, R L; Ferrari, F A S; Lopes, S R; Batista, A M; Coninck, J C P

    2013-04-01

    Thermally sensitive neurons present bursting activity for certain temperature ranges, characterized by fast repetitive spiking of action potential followed by a short quiescent period. Synchronization of bursting activity is possible in networks of coupled neurons, and it is sometimes an undesirable feature. Control procedures can suppress totally or partially this collective behavior, with potential applications in deep-brain stimulation techniques. We investigate the control of bursting synchronization in small-world networks of Hodgkin-Huxley-type thermally sensitive neurons with chemical synapses through two different strategies. One is the application of an external time-periodic electrical signal and another consists of a time-delayed feedback signal. We consider the effectiveness of both strategies in terms of protocols of applications suitable to be applied by pacemakers.

  1. Effect of correlating adjacent neurons for identifying communications: Feasibility experiment in a cultured neuronal network

    OpenAIRE

    Yoshi Nishitani; Chie Hosokawa; Yuko Mizuno-Matsumoto; Tomomitsu Miyoshi; Shinichi Tamura

    2017-01-01

    Neuronal networks have fluctuating characteristics, unlike the stable characteristics seen in computers. The underlying mechanisms that drive reliable communication among neuronal networks and their ability to perform intelligible tasks remain unknown. Recently, in an attempt to resolve this issue, we showed that stimulated neurons communicate via spikes that propagate temporally, in the form of spike trains. We named this phenomenon “spike wave propagation”. In these previous studies, using ...

  2. Bayesian Inference and Online Learning in Poisson Neuronal Networks.

    Science.gov (United States)

    Huang, Yanping; Rao, Rajesh P N

    2016-08-01

    Motivated by the growing evidence for Bayesian computation in the brain, we show how a two-layer recurrent network of Poisson neurons can perform both approximate Bayesian inference and learning for any hidden Markov model. The lower-layer sensory neurons receive noisy measurements of hidden world states. The higher-layer neurons infer a posterior distribution over world states via Bayesian inference from inputs generated by sensory neurons. We demonstrate how such a neuronal network with synaptic plasticity can implement a form of Bayesian inference similar to Monte Carlo methods such as particle filtering. Each spike in a higher-layer neuron represents a sample of a particular hidden world state. The spiking activity across the neural population approximates the posterior distribution over hidden states. In this model, variability in spiking is regarded not as a nuisance but as an integral feature that provides the variability necessary for sampling during inference. We demonstrate how the network can learn the likelihood model, as well as the transition probabilities underlying the dynamics, using a Hebbian learning rule. We present results illustrating the ability of the network to perform inference and learning for arbitrary hidden Markov models.

  3. The Dynamics of Networks of Identical Theta Neurons.

    Science.gov (United States)

    Laing, Carlo R

    2018-02-05

    We consider finite and infinite all-to-all coupled networks of identical theta neurons. Two types of synaptic interactions are investigated: instantaneous and delayed (via first-order synaptic processing). Extensive use is made of the Watanabe/Strogatz (WS) ansatz for reducing the dimension of networks of identical sinusoidally-coupled oscillators. As well as the degeneracy associated with the constants of motion of the WS ansatz, we also find continuous families of solutions for instantaneously coupled neurons, resulting from the reversibility of the reduced model and the form of the synaptic input. We also investigate a number of similar related models. We conclude that the dynamics of networks of all-to-all coupled identical neurons can be surprisingly complicated.
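
    A minimal simulation sketch of an all-to-all network of identical theta neurons (assumptions: a single first-order synaptic variable shared by the population, identical positive drive eta so each neuron is intrinsically oscillatory, illustrative parameters; the paper's analysis uses the Watanabe/Strogatz reduction rather than brute-force simulation). The Kuramoto order parameter summarizes how coherent the phases are.

```python
import numpy as np

# All-to-all network of identical theta neurons with a first-order synapse.
def run(k_syn, N=200, eta=0.5, tau_s=1.0, dt=0.001, T=100.0, seed=5):
    rng = np.random.default_rng(seed)
    theta = rng.uniform(-np.pi, np.pi, N)        # random initial phases
    s = 0.0                                       # population synaptic activity
    order, steps = [], int(T / dt)
    for n in range(steps):
        I = eta + k_syn * s                       # common drive: intrinsic + synaptic
        dtheta = (1.0 - np.cos(theta)) + (1.0 + np.cos(theta)) * I
        theta = theta + dt * dtheta
        fired = theta > np.pi                     # a spike is a passage of theta through pi
        s += fired.sum() / (N * tau_s)            # each spike kicks the synaptic variable
        s -= dt * s / tau_s                       # first-order decay of the synaptic drive
        theta = np.where(fired, theta - 2.0 * np.pi, theta)
        if n > steps // 2:
            order.append(np.abs(np.mean(np.exp(1j * theta))))
    return np.mean(order)

for k_syn in (0.0, 1.0, 3.0):
    print(f"synaptic coupling k={k_syn:.1f}  mean order parameter: {run(k_syn):.2f}")
```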

  4. Statistical identification of stimulus-activated network nodes in multi-neuron voltage-sensitive dye optical recordings.

    Science.gov (United States)

    Fathiazar, Elham; Anemuller, Jorn; Kretzberg, Jutta

    2016-08-01

    Voltage-Sensitive Dye (VSD) imaging is an optical imaging method that allows measuring the graded voltage changes of multiple neurons simultaneously. In neuroscience, this method is used to reveal networks of neurons involved in certain tasks. However, the recorded relative dye fluorescence changes are usually low and signals are superimposed by noise and artifacts. Therefore, establishing a reliable method to identify which cells are activated by specific stimulus conditions is the first step to identify functional networks. In this paper, we present a statistical method to identify stimulus-activated network nodes as cells, whose activities during sensory network stimulation differ significantly from the un-stimulated control condition. This method is demonstrated based on voltage-sensitive dye recordings from up to 100 neurons in a ganglion of the medicinal leech responding to tactile skin stimulation. Without relying on any prior physiological knowledge, the network nodes identified by our statistical analysis were found to match well with published cell types involved in tactile stimulus processing and to be consistent across stimulus conditions and preparations.

  5. Effects of extracellular potassium diffusion on electrically coupled neuron networks

    Science.gov (United States)

    Wu, Xing-Xing; Shuai, Jianwei

    2015-02-01

    Potassium accumulation and diffusion during neuronal epileptiform activity have been observed experimentally, and potassium lateral diffusion has been suggested to play an important role in nonsynaptic neuron networks. We adopt a hippocampal CA1 pyramidal neuron network in a zero-calcium condition to better understand the influence of extracellular potassium dynamics on the stimulus-induced activity. The potassium concentration in the interstitial space for each neuron is regulated by potassium currents, Na+-K+ pumps, glial buffering, and ion diffusion. In addition to potassium diffusion, nearby neurons are also coupled through gap junctions. Our results reveal that the latency of the first spike responding to stimulus monotonically decreases with increasing gap-junction conductance but is insensitive to potassium diffusive coupling. The duration of network oscillations shows a bell-like shape with increasing potassium diffusive coupling at weak gap-junction coupling. For modest electrical coupling, there is an optimal K+ diffusion strength, at which the flow of potassium ions among the network neurons appropriately modulates interstitial potassium concentrations in a degree that provides the most favorable environment for the generation and continuance of the action potential waves in the network.

  6. Versatile Networks of Simulated Spiking Neurons Displaying Winner-Take-All Behavior

    Directory of Open Access Journals (Sweden)

    Yanqing eChen

    2013-03-01

    Full Text Available We describe simulations of large-scale networks of excitatory and inhibitory spiking neurons that can generate dynamically stable winner-take-all (WTA) behavior. The network connectivity is a variant of center-surround architecture that we call center-annular-surround (CAS). In this architecture each neuron is excited by nearby neighbors and inhibited by more distant neighbors in an annular-surround region. The neural units of these networks simulate conductance-based spiking neurons that interact via mechanisms susceptible to both short-term synaptic plasticity and STDP. We show that such CAS networks display robust WTA behavior, unlike the center-surround networks and other control architectures that we have studied. We find that a large-scale network of spiking neurons with separate populations of excitatory and inhibitory neurons can give rise to smooth maps of sensory input. In addition, we show that a humanoid Brain-Based-Device (BBD) under the control of a spiking WTA neural network can learn to reach to target positions in its visual field, thus demonstrating the acquisition of sensorimotor coordination.

  7. Versatile networks of simulated spiking neurons displaying winner-take-all behavior.

    Science.gov (United States)

    Chen, Yanqing; McKinstry, Jeffrey L; Edelman, Gerald M

    2013-01-01

    We describe simulations of large-scale networks of excitatory and inhibitory spiking neurons that can generate dynamically stable winner-take-all (WTA) behavior. The network connectivity is a variant of center-surround architecture that we call center-annular-surround (CAS). In this architecture each neuron is excited by nearby neighbors and inhibited by more distant neighbors in an annular-surround region. The neural units of these networks simulate conductance-based spiking neurons that interact via mechanisms susceptible to both short-term synaptic plasticity and STDP. We show that such CAS networks display robust WTA behavior unlike the center-surround networks and other control architectures that we have studied. We find that a large-scale network of spiking neurons with separate populations of excitatory and inhibitory neurons can give rise to smooth maps of sensory input. In addition, we show that a humanoid brain-based-device (BBD) under the control of a spiking WTA neural network can learn to reach to target positions in its visual field, thus demonstrating the acquisition of sensorimotor coordination.

  8. Dynamics of Moment Neuronal Networks with Intra- and Inter-Interactions

    Directory of Open Access Journals (Sweden)

    Xuyan Xiang

    2015-01-01

    Full Text Available A framework of moment neuronal networks with intra- and inter-interactions is presented. It shows how spontaneous activity propagates across homogeneous and heterogeneous networks. The input-output firing relationship and the stability are first explored for a homogeneous network. For a heterogeneous network without constraints on the correlation coefficients between neurons, more sophisticated dynamics are then explored. With random interactions, the network gets easily synchronized. However, desynchronization is produced by a lateral interaction such as a Mexican hat function. It is the external intralayer input unit that offers more sophisticated and unexpected dynamics than its predecessors. Hence, the work further opens up the possibility of carrying out stochastic computation in neuronal networks.

  9. A real-time hybrid neuron network for highly parallel cognitive systems.

    Science.gov (United States)

    Christiaanse, Gerrit Jan; Zjajo, Amir; Galuzzi, Carlo; van Leuken, Rene

    2016-08-01

    For a comprehensive understanding of how neurons communicate with each other, new tools need to be developed that can accurately mimic the behaviour of such neurons and neuron networks under 'real-time' constraints. In this paper, we propose an easily customisable, highly pipelined, neuron network design, which executes optimally scheduled floating-point operations for a maximal number of biophysically plausible neurons per FPGA family type. To reduce the required amount of resources without adverse effect on the calculation latency, a single exponent instance is used for multiple neuron calculation operations. Experimental results indicate that the proposed network design allows the simulation of up to 1188 neurons on a Virtex7 (XC7VX550T) device in brain real time, yielding a speed-up of 12.4x compared to the state of the art.

  10. Stochastic resonance in small-world neuronal networks with hybrid electrical–chemical synapses

    International Nuclear Information System (INIS)

    Wang, Jiang; Guo, Xinmeng; Yu, Haitao; Liu, Chen; Deng, Bin; Wei, Xile; Chen, Yingyuan

    2014-01-01

    Highlights: •We study stochastic resonance in small-world neural networks with hybrid synapses. •The resonance effect depends largely on the probability of chemical synapse. •An optimal chemical synapse probability exists to evoke network resonance. •Network topology affects the stochastic resonance in hybrid neuronal networks. - Abstract: The dependence of stochastic resonance in small-world neuronal networks with hybrid electrical–chemical synapses on the probability of chemical synapse and the rewiring probability is investigated. A subthreshold periodic signal is imposed on one single neuron within the neuronal network as a pacemaker. It is shown that, irrespective of the probability of chemical synapse, there exists a moderate intensity of external noise optimizing the response of neuronal networks to the pacemaker. Moreover, the effect of pacemaker driven stochastic resonance of the system depends largely on the probability of chemical synapse. A high probability of chemical synapse will need lower noise intensity to evoke the phenomenon of stochastic resonance in the networked neuronal systems. In addition, for fixed noise intensity, there is an optimal chemical synapse probability, which can promote the propagation of the localized subthreshold pacemaker across neural networks. And the optimal chemical synapses probability turns even larger as the coupling strength decreases. Furthermore, the small-world topology has a significant impact on the stochastic resonance in hybrid neuronal networks. It is found that increasing the rewiring probability can always enhance the stochastic resonance until it approaches the random network limit

  11. Spiking synchronization regulated by noise in three types of Hodgkin—Huxley neuronal networks

    International Nuclear Information System (INIS)

    Zhang Zheng-Zhen; Zeng Shang-You; Tang Wen-Yan; Hu Jin-Lin; Zeng Shao-Wen; Ning Wei-Lian; Qiu Yi; Wu Hui-Si

    2012-01-01

    In this paper, we study spiking synchronization in three different types of Hodgkin—Huxley neuronal networks, which are the small-world, regular, and random neuronal networks. All the neurons are subjected to subthreshold stimulus and external noise. It is found that in each of all the neuronal networks there is an optimal strength of noise to induce the maximal spiking synchronization. We further demonstrate that in each of the neuronal networks there is a range of synaptic conductance to induce the effect that an optimal strength of noise maximizes the spiking synchronization. Only when the magnitude of the synaptic conductance is moderate, will the effect be considerable. However, if the synaptic conductance is small or large, the effect vanishes. As the connections between neurons increase, the synaptic conductance to maximize the effect decreases. Therefore, we show quantitatively that the noise-induced maximal synchronization in the Hodgkin—Huxley neuronal network is a general effect, regardless of the specific type of neuronal network

  12. Associative memory in phasing neuron networks

    Energy Technology Data Exchange (ETDEWEB)

    Nair, Niketh S [ORNL; Bochove, Erik J. [United States Air Force Research Laboratory, Kirtland Air Force Base; Braiman, Yehuda [ORNL

    2014-01-01

    We studied pattern formation in a network of coupled Hindmarsh-Rose model neurons and introduced a new model for associative memory retrieval using networks of Kuramoto oscillators. Hindmarsh-Rose Neural Networks can exhibit a rich set of collective dynamics that can be controlled by their connectivity. Specifically, we showed an instance of Hebb's rule where spiking was correlated with network topology. Based on this, we presented a simple model of associative memory in coupled phase oscillators.
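
    The Kuramoto-based associative memory idea can be sketched as follows (assumptions: zero natural frequencies, a Hebbian coupling matrix built from random binary patterns, retrieval measured by the phase overlap with each stored pattern; parameters are illustrative and this is not the specific model introduced in the paper).

```python
import numpy as np

# Associative memory with Kuramoto phase oscillators: Hebbian couplings store binary
# patterns, and a corrupted cue relaxes towards the nearest stored pattern.
rng = np.random.default_rng(6)
N, P = 200, 3
patterns = rng.choice([-1.0, 1.0], size=(P, N))            # stored binary patterns
K = patterns.T @ patterns / N                               # Hebbian coupling matrix

def retrieve(cue, coupling=1.0, dt=0.05, steps=1500):
    # Encode the cue as phases 0 / pi (plus a little jitter) and let the network relax.
    theta = np.where(cue > 0, 0.0, np.pi) + 0.3 * rng.standard_normal(N)
    for _ in range(steps):
        dtheta = coupling * np.sum(K * np.sin(theta[None, :] - theta[:, None]), axis=1)
        theta = theta + dt * dtheta
    return theta

cue = patterns[0].copy()
cue[rng.choice(N, size=N // 5, replace=False)] *= -1        # flip 20% of the cue entries
theta = retrieve(cue)
overlaps = np.abs(patterns @ np.exp(1j * theta)) / N        # |m_mu| for each stored pattern
print("overlap with each stored pattern:", np.round(overlaps, 2))
```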

  13. Performance of networks of artificial neurons: The role of clustering

    International Nuclear Information System (INIS)

    Kim, Beom Jun

    2004-01-01

    The performance of the Hopfield neural network model is numerically studied on various complex networks, such as the Watts-Strogatz network, the Barabasi-Albert network, and the neuronal network of Caenorhabditis elegans. Through the use of a systematic way of controlling the clustering coefficient, with the degree of each neuron kept unchanged, we find that networks with lower clustering exhibit much better performance. The results are discussed from the practical viewpoint of application, and the biological implications are also suggested.
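
    A rough way to probe the role of topology numerically is a diluted Hopfield network on a Watts-Strogatz graph (assumptions: Hebbian weights restricted to existing edges, asynchronous deterministic updates, illustrative sizes and loading; the paper's systematic control of clustering at fixed degree is not reproduced here, only the rewiring knob of the WS graph, which changes the clustering coefficient).

```python
import numpy as np
import networkx as nx

# Diluted Hopfield network on a Watts-Strogatz graph: Hebbian weights only on existing
# edges; retrieval overlap of a corrupted cue is compared across rewiring probabilities
# (more rewiring -> lower clustering coefficient). Illustrative sizes only.
rng = np.random.default_rng(7)
N, k, P = 400, 24, 3                                     # neurons, degree, stored patterns
patterns = rng.choice([-1.0, 1.0], size=(P, N))

def retrieval_overlap(p_rewire, flips=80, sweeps=20):
    G = nx.watts_strogatz_graph(N, k, p_rewire, seed=0)
    A = nx.to_numpy_array(G)
    W = (patterns.T @ patterns) * A / k                  # Hebbian rule on existing edges only
    np.fill_diagonal(W, 0.0)
    s = patterns[0].copy()
    s[rng.choice(N, size=flips, replace=False)] *= -1    # corrupted cue of pattern 0
    for _ in range(sweeps):                              # asynchronous deterministic updates
        for i in rng.permutation(N):
            s[i] = 1.0 if W[i] @ s >= 0 else -1.0
    return float(patterns[0] @ s) / N, nx.average_clustering(G)

for p_rewire in (0.0, 0.1, 1.0):
    m, C = retrieval_overlap(p_rewire)
    print(f"rewiring p={p_rewire:3.1f}  clustering C={C:.2f}  retrieval overlap m={m:.2f}")
```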

  14. Order-based representation in random networks of cortical neurons.

    Directory of Open Access Journals (Sweden)

    Goded Shahaf

    2008-11-01

    Full Text Available The wide range of time scales involved in neural excitability and synaptic transmission might lead to ongoing change in the temporal structure of responses to recurring stimulus presentations on a trial-to-trial basis. This is probably the most severe biophysical constraint on putative time-based primitives of stimulus representation in neuronal networks. Here we show that in spontaneously developing large-scale random networks of cortical neurons in vitro the order in which neurons are recruited following each stimulus is a naturally emerging representation primitive that is invariant to significant temporal changes in spike times. With a relatively small number of randomly sampled neurons, the information about stimulus position is fully retrievable from the recruitment order. The effective connectivity that makes order-based representation invariant to time warping is characterized by the existence of stations through which activity is required to pass in order to propagate further into the network. This study uncovers a simple invariant in a noisy biological network in vitro; its applicability under in vivo constraints remains to be seen.

  15. The synchronization of FitzHugh–Nagumo neuron network coupled by gap junction

    International Nuclear Information System (INIS)

    Zhan Yong; Zhang Suhua; Zhao Tongjun; An Hailong; Zhang Zhendong; Han Yingrong; Liu Hui; Zhang Yuhong

    2008-01-01

    It is well known that the strong coupling can synchronize a network of nonlinear oscillators. Synchronization provides the basis of the remarkable computational performance of the brain. In this paper the FitzHugh–Nagumo neuron network is constructed. The dependence of the synchronization on the coupling strength, the noise intensity and the size of the neuron network has been discussed. The results indicate that the coupling among neurons works to improve the synchronization, and noise increases the neuron random dynamics and the local fluctuations; the larger the size of network, the worse the synchronization. The dependence of the synchronization on the strength of the electric synapse coupling and chemical synapse coupling has also been discussed, which proves that electric synapse coupling can enhance the synchronization of the neuron network largely
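
    A minimal sketch of electrically coupled FitzHugh-Nagumo neurons (assumptions: all-to-all diffusive coupling as a stand-in for gap junctions, additive white noise, an oscillatory parameter regime, all values illustrative): the spread of membrane variables across neurons serves as a crude synchronization error that shrinks as the coupling grows.

```python
import numpy as np

# FitzHugh-Nagumo network with electrical (diffusive) coupling and additive noise.
def sync_error(g, N=50, D=0.05, dt=0.01, T=500.0, seed=8):
    rng = np.random.default_rng(seed)
    v = rng.uniform(-1.0, 1.0, N)
    w = rng.uniform(-0.5, 0.5, N)
    a, b, eps, I = 0.7, 0.8, 0.08, 0.5            # classic FHN parameters, constant drive
    err, steps = [], int(T / dt)
    for n in range(steps):
        coupling = g * (v.mean() - v)              # all-to-all electrical coupling
        dv = v - v ** 3 / 3.0 - w + I + coupling
        dw = eps * (v + a - b * w)
        v = v + dt * dv + np.sqrt(2.0 * D * dt) * rng.standard_normal(N)
        w = w + dt * dw
        if n > steps // 2:
            err.append(v.std())                    # spread across neurons after the transient
    return np.mean(err)

for g in (0.0, 0.1, 0.5):
    print(f"electrical coupling g={g:.1f}  mean synchronization error: {sync_error(g):.3f}")
```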

  16. How the self-coupled neuron can affect the chaotic synchronization of network

    International Nuclear Information System (INIS)

    Jia Chenhui; Wang Jiang; Deng, Bin

    2009-01-01

    We have calculated the minimum coupling strength for 34 kinds of three-cell neuron networks. From the results, we find that a self-coupled neuron can have some effect on the synchronization of the network. The reason is that self-coupled neurons make the effective number of neurons appear to decrease, and they reduce the coupling strength of the other neurons that are coupled with them.

  17. Synchronization in a non-uniform network of excitatory spiking neurons

    Science.gov (United States)

    Echeveste, Rodrigo; Gros, Claudius

    Spontaneous synchronization of pulse coupled elements is ubiquitous in nature and seems to be of vital importance for life. Networks of pacemaker cells in the heart, extended populations of southeast asian fireflies, and neuronal oscillations in cortical networks, are examples of this. In the present work, a rich repertoire of dynamical states with different degrees of synchronization are found in a network of excitatory-only spiking neurons connected in a non-uniform fashion. In particular, uncorrelated and partially correlated states are found without the need for inhibitory neurons or external currents. The phase transitions between these states, as well the robustness, stability, and response of the network to external stimulus are studied.

  18. A distance constrained synaptic plasticity model of C. elegans neuronal network

    Science.gov (United States)

    Badhwar, Rahul; Bagler, Ganesh

    2017-03-01

    Brain research has been driven by enquiry for principles of brain structure organization and its control mechanisms. The neuronal wiring map of C. elegans, the only complete connectome available till date, presents an incredible opportunity to learn basic governing principles that drive structure and function of its neuronal architecture. Despite its apparently simple nervous system, C. elegans is known to possess complex functions. The nervous system forms an important underlying framework which specifies phenotypic features associated to sensation, movement, conditioning and memory. In this study, with the help of graph theoretical models, we investigated the C. elegans neuronal network to identify network features that are critical for its control. The 'driver neurons' are associated with important biological functions such as reproduction, signalling processes and anatomical structural development. We created 1D and 2D network models of C. elegans neuronal system to probe the role of features that confer controllability and small world nature. The simple 1D ring model is critically poised for the number of feed forward motifs, neuronal clustering and characteristic path-length in response to synaptic rewiring, indicating optimal rewiring. Using empirically observed distance constraint in the neuronal network as a guiding principle, we created a distance constrained synaptic plasticity model that simultaneously explains small world nature, saturation of feed forward motifs as well as observed number of driver neurons. The distance constrained model suggests optimum long distance synaptic connections as a key feature specifying control of the network.

  19. Phase-locking and bistability in neuronal networks with synaptic depression

    Science.gov (United States)

    Akcay, Zeynep; Huang, Xinxian; Nadim, Farzan; Bose, Amitabha

    2018-02-01

    We consider a recurrent network of two oscillatory neurons that are coupled with inhibitory synapses. We use the phase response curves of the neurons and the properties of short-term synaptic depression to define Poincaré maps for the activity of the network. The fixed points of these maps correspond to phase-locked modes of the network. Using these maps, we analyze the conditions that allow short-term synaptic depression to lead to the existence of bistable phase-locked, periodic solutions. We show that bistability arises when either the phase response curve of the neuron or the short-term depression profile changes steeply enough. The results apply to any Type I oscillator and we illustrate our findings using the Quadratic Integrate-and-Fire and Morris-Lecar neuron models.
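
    A generic sketch of the map-based approach described above, using illustrative functions rather than the authors' equations: the relative phase of one neuron advances each cycle by an intrinsic offset minus an inhibitory phase response scaled by a hypothetical short-term depression profile, and phase-locked modes appear as zeros of that advance, with stability judged from the map slope.

```python
# Generic sketch (illustrative PRC, depression profile and parameters; not the
# paper's model): phase-locked modes are fixed points of a 1D Poincare map.
import numpy as np

OMEGA = 0.3   # hypothetical per-cycle phase offset
G = 0.5       # hypothetical synaptic strength

def prc(phi):
    # Illustrative Type I phase response curve (non-negative, zero at 0 and 1).
    return np.sin(np.pi * phi) ** 2

def depression(phi, tau_d=0.3):
    # Hypothetical depression recovery: longer intervals -> stronger synapse.
    return 1.0 - np.exp(-phi / tau_d)

def phase_advance(phi):
    return OMEGA - G * depression(phi) * prc(phi)

def poincare_map(phi):
    return (phi + phase_advance(phi)) % 1.0

def phase_locked_modes(n_grid=2000):
    phis = np.linspace(1e-3, 1.0, n_grid)
    adv = phase_advance(phis)
    crossings = np.where(np.diff(np.sign(adv)) != 0)[0]   # zeros of the advance
    fixed = phis[crossings]
    eps = 1e-5
    slopes = (poincare_map(fixed + eps) - poincare_map(fixed - eps)) / (2 * eps)
    return list(zip(fixed, np.abs(slopes) < 1.0))         # (phase, is_stable)

print(phase_locked_modes())
```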

  20. Dynamical Encoding by Networks of Competing Neuron Groups: Winnerless Competition

    International Nuclear Information System (INIS)

    Rabinovich, M.; Volkovskii, A.; Lecanda, P.; Huerta, R.; Abarbanel, H. D. I.; Laurent, G.

    2001-01-01

    Following studies of olfactory processing in insects and fish, we investigate neural networks whose dynamics in phase space is represented by orbits near the heteroclinic connections between saddle regions (fixed points or limit cycles). These networks encode input information as trajectories along the heteroclinic connections. If there are N neurons in the network, the capacity is approximately e(N−1)!, i.e., much larger than that of most traditional network structures. We show that a small winnerless competition network composed of FitzHugh-Nagumo spiking neurons efficiently transforms input information into a spatiotemporal output.

  1. Spiral Wave in Small-World Networks of Hodgkin-Huxley Neurons

    International Nuclear Information System (INIS)

    Ma Jun; Zhang Cairong; Yang Lijian; Wu Ying

    2010-01-01

    The effects of small-world connectivity and noise on the formation and transition of spiral waves in networks of Hodgkin-Huxley neurons are investigated in detail. Some interesting results are found in our numerical studies. i) Quiescent neurons are activated to propagate electric signals to others through the generation and development of a spiral wave from a spiral seed in a small area. ii) A statistical factor is defined to describe the collective properties and the phase transition induced by the topology of the network and by noise. iii) A stable rotating spiral wave can be generated and remains robust when the rewiring probability is below a certain threshold; otherwise, a spiral wave cannot develop from the spiral seed, and breakup occurs for a stable rotating spiral wave. iv) Gaussian white noise is introduced on the membrane of the neurons to study the noise-induced phase transition of spiral waves in small-world networks of neurons. It is confirmed that Gaussian white noise plays an active role in supporting and developing spiral waves in networks of neurons, and a smaller factor of synchronization indicates a higher possibility of inducing a spiral wave. (interdisciplinary physics and related areas of science and technology)
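
    The abstract does not give the formula for its statistical factor of synchronization; a commonly used definition, assumed here rather than taken from the paper, is the variance of the network-averaged membrane potential divided by the mean single-neuron variance, which approaches 1 for synchronous and roughly 1/N for independent activity.

```python
# Hedged sketch of a commonly used synchronization factor (assumed definition,
# not necessarily the paper's): variance of the mean field over time divided
# by the average single-neuron variance.
import numpy as np

def synchronization_factor(v):
    """v: array of shape (time_steps, n_neurons) of membrane potentials."""
    mean_field = v.mean(axis=1)            # F(t) = (1/N) sum_i V_i(t)
    numerator = mean_field.var()           # <F^2> - <F>^2 over time
    denominator = v.var(axis=0).mean()     # average single-cell variance
    return numerator / denominator

# Usage on synthetic data: identical traces -> ~1, independent noise -> ~1/N.
rng = np.random.default_rng(1)
common = np.sin(np.linspace(0, 20 * np.pi, 5000))
synchronous = np.tile(common[:, None], (1, 50))
asynchronous = rng.standard_normal((5000, 50))
print(synchronization_factor(synchronous))   # ~1.0
print(synchronization_factor(asynchronous))  # ~0.02
```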

  2. Complex Behavior in a Selective Aging Neuron Model Based on Small World Networks

    International Nuclear Information System (INIS)

    Zhang Guiqing; Chen Tianlun

    2008-01-01

    Complex behavior in a selective-aging simple neuron model based on small-world networks is investigated. The basic elements of the model are endowed with the main features of neuron function. The structure of the selective-aging neuron model is discussed. We also give some properties of the new network and find that the neuron model displays power-law behavior. If the brain network is a small-world-like network, the mean avalanche size is almost the same unless the aging parameter is large enough.

  3. Pacemaker neuron and network oscillations depend on a neuromodulator-regulated linear current

    Directory of Open Access Journals (Sweden)

    Shunbing Zhao

    2010-05-01

    Linear leak currents have been implicated in the regulation of neuronal excitability, the generation of neuronal and network oscillations, and network state transitions. Yet, few studies have directly tested the dependence of network oscillations on leak currents or explored the role of leak currents in network activity. In the oscillatory pyloric network of decapod crustaceans, neuromodulatory inputs are necessary for pacemaker activity. A large subset of neuromodulators is known to activate a single voltage-gated inward current, IMI, which has been shown to regulate the rhythmic activity of the network and its pacemaker neurons. Using the dynamic clamp technique, we show that the crucial component of IMI for the generation of oscillatory activity is only a close-to-linear portion of its current-voltage relationship. The nature of this conductance is such that the presence or absence of neuromodulators effectively regulates the amount of leak current and the input resistance in the pacemaker neurons. When deprived of neuromodulatory inputs, pyloric oscillations are disrupted; yet, a linear reduction of the total conductance in a single neuron within the pacemaker group recovers not only the pacemaker activity in that neuron, but also leads to a recovery of oscillations in the entire pyloric network. The recovered activity produces proper frequency and phasing similar to that induced by neuromodulators. These results show that the passive properties of pacemaker neurons can significantly affect their capacity to generate and regulate the oscillatory activity of an entire network, and that this feature is exploited by neuromodulatory inputs.
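
    The core of the dynamic clamp manipulation referenced here is a per-sample current command computed from the measured voltage; the sketch below shows that update for an artificial linear conductance, with parameter values that are illustrative assumptions rather than the study's settings.

```python
# Minimal dynamic clamp sketch (illustrative values, not the study's): each
# sampling step, inject a current implementing an artificial linear conductance
# g_art with reversal potential e_rev; a negative g_art subtracts an equivalent
# amount of intrinsic leak-like conductance from the cell.
def dynamic_clamp_current(v_measured_mv, g_art_ns=-5.0, e_rev_mv=-50.0):
    """Return the command current in pA for one sample: I = g * (E_rev - V)."""
    return g_art_ns * (e_rev_mv - v_measured_mv)

# Example: at V = -65 mV, a -5 nS artificial conductance commands -75 pA.
print(dynamic_clamp_current(-65.0))
```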

  4. Long-term optical stimulation of channelrhodopsin-expressing neurons to study network plasticity

    Science.gov (United States)

    Lignani, Gabriele; Ferrea, Enrico; Difato, Francesco; Amarù, Jessica; Ferroni, Eleonora; Lugarà, Eleonora; Espinoza, Stefano; Gainetdinov, Raul R.; Baldelli, Pietro; Benfenati, Fabio

    2013-01-01

    Neuronal plasticity produces changes in excitability, synaptic transmission, and network architecture in response to external stimuli. Network adaptation to environmental conditions takes place on time scales ranging from a few seconds to days, and modulates the entire network dynamics. To study the network response to defined long-term experimental protocols, we set up a system that combines optical and electrophysiological tools embedded in a cell incubator. Primary hippocampal neurons transduced with lentiviruses expressing channelrhodopsin-2/H134R were subjected to various photostimulation protocols over time windows on the order of days. To monitor the effects of light-induced gating of network activity, stimulated transduced neurons were simultaneously recorded using multi-electrode arrays (MEAs). The developed experimental model allows discerning short-term, long-lasting, and adaptive plasticity responses of the same neuronal network to distinct stimulation frequencies applied over different temporal windows.

  5. Long-term optical stimulation of channelrhodopsin-expressing neurons to study network plasticity.

    Science.gov (United States)

    Lignani, Gabriele; Ferrea, Enrico; Difato, Francesco; Amarù, Jessica; Ferroni, Eleonora; Lugarà, Eleonora; Espinoza, Stefano; Gainetdinov, Raul R; Baldelli, Pietro; Benfenati, Fabio

    2013-01-01

    Neuronal plasticity produces changes in excitability, synaptic transmission, and network architecture in response to external stimuli. Network adaptation to environmental conditions takes place on time scales ranging from a few seconds to days, and modulates the entire network dynamics. To study the network response to defined long-term experimental protocols, we set up a system that combines optical and electrophysiological tools embedded in a cell incubator. Primary hippocampal neurons transduced with lentiviruses expressing channelrhodopsin-2/H134R were subjected to various photostimulation protocols over time windows on the order of days. To monitor the effects of light-induced gating of network activity, stimulated transduced neurons were simultaneously recorded using multi-electrode arrays (MEAs). The developed experimental model allows discerning short-term, long-lasting, and adaptive plasticity responses of the same neuronal network to distinct stimulation frequencies applied over different temporal windows.

  6. Attractor dynamics in local neuronal networks

    Directory of Open Access Journals (Sweden)

    Jean-Philippe eThivierge

    2014-03-01

    Patterns of synaptic connectivity in various regions of the brain are characterized by the presence of synaptic motifs, defined as unidirectional and bidirectional synaptic contacts that follow a particular configuration and link together small groups of neurons. Recent computational work proposes that a relay network (two populations communicating via a third, relay population of neurons) can generate precise patterns of neural synchronization. Here, we employ two distinct models of neuronal dynamics and show that simulated neural circuits designed in this way are caught in a global attractor of activity that prevents neurons from modulating their response on the basis of incoming stimuli. To circumvent the emergence of a fixed global attractor, we propose a mechanism of selective gain inhibition that promotes flexible responses to external stimuli. We suggest that local neuronal circuits may employ this mechanism to generate precise patterns of neural synchronization whose transient nature delimits the occurrence of a brief stimulus.

  7. Spike Code Flow in Cultured Neuronal Networks.

    Science.gov (United States)

    Tamura, Shinichi; Nishitani, Yoshi; Hosokawa, Chie; Miyoshi, Tomomitsu; Sawai, Hajime; Kamimura, Takuya; Yagi, Yasushi; Mizuno-Matsumoto, Yuko; Chen, Yen-Wei

    2016-01-01

    We observed spike trains produced by one-shot electrical stimulation with 8 × 8 multielectrodes in cultured neuronal networks. Each electrode accepted spikes from several neurons. We extracted short codes from the spike trains and obtained a code spectrum with a nominal time accuracy of 1%. We then constructed code flow maps as movies of the electrode array to observe the flow of the codes "1101" and "1011," which are typical pseudorandom sequences of the kind often encountered in the literature and in our experiments. They seemed to flow from one electrode to a neighboring one and maintained their shape to some extent. To quantify the flow, we calculated the "maximum cross-correlations" among neighboring electrodes to find the direction of maximum flow of the codes with lengths less than 8. Normalized maximum cross-correlations were almost constant irrespective of the code. Furthermore, if the spike trains were shuffled in interval order or across electrodes, the correlations became significantly smaller. Thus, the analysis suggested that local codes of approximately constant shape propagated and conveyed information across the network. Hence, the codes can serve as visible and trackable marks of propagating spike waves as well as a means of evaluating information flow in the neuronal network.
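
    A hedged sketch of the "maximum cross-correlation" idea (the study's exact procedure may differ): binary time series mark when a given code is detected on each electrode, and the peak of the normalized cross-correlation over positive lags indicates flow from one electrode toward its neighbor.

```python
# Sketch of lagged cross-correlation between code-occurrence series on two
# electrodes; the electrode names, rates and lag used below are synthetic.
import numpy as np

def max_cross_correlation(code_a, code_b, max_lag=50):
    a = (code_a - code_a.mean()) / (code_a.std() + 1e-12)
    b = (code_b - code_b.mean()) / (code_b.std() + 1e-12)
    n = len(a)
    corrs = [np.dot(a[:n - lag], b[lag:]) / (n - lag) for lag in range(1, max_lag)]
    best = int(np.argmax(corrs))
    return corrs[best], best + 1   # peak correlation and its lag (in samples)

# Synthetic usage: electrode B sees the same code occurrences 7 samples later.
rng = np.random.default_rng(0)
occurrences_a = (rng.random(2000) < 0.05).astype(float)
occurrences_b = np.roll(occurrences_a, 7)
print(max_cross_correlation(occurrences_a, occurrences_b))  # peak near lag 7
```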

  8. Extracting neuronal functional network dynamics via adaptive Granger causality analysis.

    Science.gov (United States)

    Sheikhattar, Alireza; Miran, Sina; Liu, Ji; Fritz, Jonathan B; Shamma, Shihab A; Kanold, Patrick O; Babadi, Behtash

    2018-04-24

    Quantifying the functional relations between the nodes in a network based on local observations is a key challenge in studying complex systems. Most existing time series analysis techniques for this purpose provide static estimates of the network properties, pertain to stationary Gaussian data, or do not take into account the ubiquitous sparsity in the underlying functional networks. When applied to spike recordings from neuronal ensembles undergoing rapid task-dependent dynamics, they thus hinder a precise statistical characterization of the dynamic neuronal functional networks underlying adaptive behavior. We develop a dynamic estimation and inference paradigm for extracting functional neuronal network dynamics in the sense of Granger, by integrating techniques from adaptive filtering, compressed sensing, point process theory, and high-dimensional statistics. We demonstrate the utility of our proposed paradigm through theoretical analysis, algorithm development, and application to synthetic and real data. Application of our techniques to two-photon Ca2+ imaging experiments from the mouse auditory cortex reveals unique features of the functional neuronal network structures underlying spontaneous activity at unprecedented spatiotemporal resolution. Our analysis of simultaneous recordings from the ferret auditory and prefrontal cortical areas suggests evidence for the role of rapid top-down and bottom-up functional dynamics across these areas involved in robust attentive behavior.

  9. Self-Organization of Microcircuits in Networks of Spiking Neurons with Plastic Synapses.

    Directory of Open Access Journals (Sweden)

    Gabriel Koch Ocker

    2015-08-01

    The synaptic connectivity of cortical networks features an overrepresentation of certain wiring motifs compared to simple random-network models. This structure is shaped, in part, by synaptic plasticity that promotes or suppresses connections between neurons depending on their joint spiking activity. Frequently, theoretical studies focus on how feedforward inputs drive plasticity to create this network structure. We study the complementary scenario of self-organized structure in a recurrent network, with spike timing-dependent plasticity driven by spontaneous dynamics. We develop a self-consistent theory for the evolution of network structure by combining fast spiking covariance with a slow evolution of synaptic weights. Through a finite-size expansion of network dynamics we obtain a low-dimensional set of nonlinear differential equations for the evolution of two-synapse connectivity motifs. With this theory in hand, we explore how the form of the plasticity rule drives the evolution of microcircuits in cortical networks. When potentiation and depression are in approximate balance, synaptic dynamics depend on weighted divergent, convergent, and chain motifs. For additive, Hebbian STDP these motif interactions create instabilities in synaptic dynamics that either promote or suppress the initial network structure. Our work provides a consistent theoretical framework for studying how spiking activity in recurrent networks interacts with synaptic plasticity to determine network structure.

  10. Self-Organization of Microcircuits in Networks of Spiking Neurons with Plastic Synapses.

    Science.gov (United States)

    Ocker, Gabriel Koch; Litwin-Kumar, Ashok; Doiron, Brent

    2015-08-01

    The synaptic connectivity of cortical networks features an overrepresentation of certain wiring motifs compared to simple random-network models. This structure is shaped, in part, by synaptic plasticity that promotes or suppresses connections between neurons depending on their joint spiking activity. Frequently, theoretical studies focus on how feedforward inputs drive plasticity to create this network structure. We study the complementary scenario of self-organized structure in a recurrent network, with spike timing-dependent plasticity driven by spontaneous dynamics. We develop a self-consistent theory for the evolution of network structure by combining fast spiking covariance with a slow evolution of synaptic weights. Through a finite-size expansion of network dynamics we obtain a low-dimensional set of nonlinear differential equations for the evolution of two-synapse connectivity motifs. With this theory in hand, we explore how the form of the plasticity rule drives the evolution of microcircuits in cortical networks. When potentiation and depression are in approximate balance, synaptic dynamics depend on weighted divergent, convergent, and chain motifs. For additive, Hebbian STDP these motif interactions create instabilities in synaptic dynamics that either promote or suppress the initial network structure. Our work provides a consistent theoretical framework for studying how spiking activity in recurrent networks interacts with synaptic plasticity to determine network structure.

  11. Burst analysis tool for developing neuronal networks exhibiting highly varying action potential dynamics

    Directory of Open Access Journals (Sweden)

    Fikret Emre eKapucu

    2012-06-01

    In this paper we propose a firing-statistics-based neuronal network burst detection algorithm for neuronal networks exhibiting highly variable action potential dynamics. The electrical activity of neuronal networks is generally analyzed by the occurrences of spikes and bursts both in time and space. Commonly accepted analysis tools employ burst detection algorithms based on predefined criteria. However, maturing neuronal networks, such as those originating from human embryonic stem cells (hESC), exhibit highly variable network structure and time-varying dynamics. To explore the developing burst/spike activities of such networks, we propose a burst detection algorithm which utilizes firing statistics based on interspike interval (ISI) histograms. Moreover, the algorithm calculates interspike interval thresholds for burst spikes as well as for pre-burst spikes and burst tails by evaluating the cumulative moving average and skewness of the ISI histogram. Because of the adaptive nature of the proposed algorithm, its analysis power is not limited by the type of neuronal cell network at hand. We demonstrate the functionality of our algorithm with two different types of microelectrode array (MEA) data recorded from spontaneously active hESC-derived neuronal cell networks. The same data were also analyzed by two commonly employed burst detection algorithms and the differences in burst detection results are illustrated. The results demonstrate that our method is both adaptive to the firing statistics of the network and yields successful burst detection from the data. In conclusion, the proposed method is a potential tool for analyzing hESC-derived neuronal cell networks and can thus be utilized in studies aiming to understand the development and functioning of human neuronal networks and as an analysis tool for in vitro drug screening and neurotoxicity assays.
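
    A simplified sketch of an ISI-histogram-based burst threshold inspired by the description above; the cumulative-moving-average peak and the skewness-dependent scaling rule used here are assumptions for illustration, and the published algorithm's exact rules (including pre-burst and burst-tail thresholds) may differ.

```python
# Simplified adaptive burst detection sketch (assumed rules, not the published
# algorithm): threshold from the CMA peak of the ISI histogram, scaled by a
# skewness-dependent factor, then ISIs below threshold are grouped into bursts.
import numpy as np
from scipy.stats import skew

def isi_burst_threshold(spike_times_s, bin_ms=1.0, max_isi_ms=1000.0):
    isis_ms = np.diff(np.sort(spike_times_s)) * 1000.0
    bins = np.arange(0.0, max_isi_ms + bin_ms, bin_ms)
    hist, edges = np.histogram(isis_ms, bins=bins)
    cma = np.cumsum(hist) / np.arange(1, len(hist) + 1)  # cumulative moving average
    peak_idx = int(np.argmax(cma))
    alpha = 1.0 if skew(isis_ms) < 4 else 0.5            # assumed skewness rule
    return edges[peak_idx + 1] * alpha                   # threshold in ms

def detect_bursts(spike_times_s, threshold_ms, min_spikes=3):
    """Group spikes whose ISIs fall below the threshold into bursts."""
    t = np.sort(spike_times_s)
    bursts, current = [], [t[0]]
    for prev, nxt in zip(t[:-1], t[1:]):
        if (nxt - prev) * 1000.0 <= threshold_ms:
            current.append(nxt)
        else:
            if len(current) >= min_spikes:
                bursts.append((current[0], current[-1]))
            current = [nxt]
    if len(current) >= min_spikes:
        bursts.append((current[0], current[-1]))
    return bursts
```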

  12. Autaptic pacemaker mediated propagation of weak rhythmic activity across small-world neuronal networks

    Science.gov (United States)

    Yilmaz, Ergin; Baysal, Veli; Ozer, Mahmut; Perc, Matjaž

    2016-02-01

    We study the effects of an autapse, which is mathematically described as a self-feedback loop, on the propagation of weak, localized pacemaker activity across a Newman-Watts small-world network consisting of stochastic Hodgkin-Huxley neurons. We consider that only the pacemaker neuron, which is stimulated by a subthreshold periodic signal, has an electrical autapse that is characterized by a coupling strength and a delay time. We focus on the impact of the coupling strength, the network structure, the properties of the weak periodic stimulus, and the properties of the autapse on the transmission of localized pacemaker activity. Obtained results indicate the existence of optimal channel noise intensity for the propagation of the localized rhythm. Under optimal conditions, the autapse can significantly improve the propagation of pacemaker activity, but only for a specific range of the autaptic coupling strength. Moreover, the autaptic delay time has to be equal to the intrinsic oscillation period of the Hodgkin-Huxley neuron or its integer multiples. We analyze the inter-spike interval histogram and show that the autapse enhances or suppresses the propagation of the localized rhythm by increasing or decreasing the phase locking between the spiking of the pacemaker neuron and the weak periodic signal. In particular, when the autaptic delay time is equal to the intrinsic period of oscillations an optimal phase locking takes place, resulting in a dominant time scale of the spiking activity. We also investigate the effects of the network structure and the coupling strength on the propagation of pacemaker activity. We find that there exist an optimal coupling strength and an optimal network structure that together warrant an optimal propagation of the localized rhythm.
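
    A minimal sketch of the key ingredient above, the electrical autapse as a delayed self-feedback current I_aut(t) = g_aut (V(t − τ) − V(t)); for brevity the host cell is an illustrative leaky integrate-and-fire neuron rather than the stochastic Hodgkin-Huxley model of the study, and all parameter values are assumptions.

```python
# Autapse as delayed self-feedback on a toy LIF neuron (illustrative units and
# parameters; not the study's Hodgkin-Huxley implementation).
import numpy as np
from collections import deque

def lif_with_autapse(g_aut=0.2, tau_delay=20.0, t_max=500.0, dt=0.1, i_ext=1.6):
    steps = int(t_max / dt)
    delay_steps = int(tau_delay / dt)
    v, v_rest, v_th, v_reset, tau_m = -65.0, -65.0, -50.0, -65.0, 10.0
    history = deque([v] * delay_steps, maxlen=delay_steps)  # buffer of past V
    spikes = []
    for step in range(steps):
        v_delayed = history[0]
        i_aut = g_aut * (v_delayed - v)        # delayed electrical self-feedback
        dv = (-(v - v_rest) + 10.0 * i_ext + i_aut) / tau_m
        v += dt * dv
        history.append(v)
        if v >= v_th:
            spikes.append(step * dt)
            v = v_reset
    return spikes

print(len(lif_with_autapse(g_aut=0.0)), "spikes without autapse")
print(len(lif_with_autapse(g_aut=0.5)), "spikes with autapse")
```

    Varying g_aut and tau_delay in such a sketch is one way to see how self-feedback strength and delay reshape the spiking rhythm.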

  13. Altering neuronal excitability to preserve network connectivity in a computational model of Alzheimer's disease.

    Directory of Open Access Journals (Sweden)

    Willem de Haan

    2017-09-01

    Neuronal hyperactivity and hyperexcitability of the cerebral cortex and hippocampal region is an increasingly observed phenomenon in preclinical Alzheimer's disease (AD). In later stages, oscillatory slowing and loss of functional connectivity are ubiquitous. Recent evidence suggests that neuronal dynamics have a prominent role in AD pathophysiology, making them a potentially interesting therapeutic target. However, although neuronal activity can be manipulated by various (non-)pharmacological means, intervening in a highly integrated system that depends on complex dynamics can produce counterintuitive and adverse effects. Computational dynamic network modeling may serve as a virtual test ground for developing effective interventions. To explore this approach, a previously introduced large-scale neural mass network with human brain topology was used to simulate the temporal evolution of AD-like, activity-dependent network degeneration. In addition, six defense strategies that either enhanced or diminished neuronal excitability were tested against the degeneration process, targeting excitatory and inhibitory neurons combined or separately. Outcome measures described oscillatory, connectivity and topological features of the damaged networks. Over time, the various interventions produced diverse large-scale network effects. Contrary to our hypothesis, the most successful strategy was a selective stimulation of all excitatory neurons in the network; it substantially prolonged the preservation of network integrity. The results of this study imply that functional network damage due to pathological neuronal activity can be opposed by targeted adjustment of neuronal excitability levels. The present approach may help to explore therapeutic effects aimed at preserving or restoring neuronal network integrity and contribute to better-informed intervention choices in future clinical trials in AD.

  14. Detection of 5-hydroxytryptamine (5-HT) in vitro using a hippocampal neuronal network-based biosensor with extracellular potential analysis of neurons.

    Science.gov (United States)

    Hu, Liang; Wang, Qin; Qin, Zhen; Su, Kaiqi; Huang, Liquan; Hu, Ning; Wang, Ping

    2015-04-15

    5-hydroxytryptamine (5-HT) is an important neurotransmitter regulating emotions and related behaviors in mammals. Effective and convenient methods to detect and monitor 5-HT are in demand for the investigation of neuronal networks. In this study, hippocampal neuronal networks (HNNs) endogenously expressing 5-HT receptors were employed as sensing elements to build an in vitro neuronal network-based biosensor. The electrophysiological characteristics were analyzed at both the neuron and network levels. The firing rates and amplitudes were derived from the signal to determine the biosensor response characteristics. The experimental results demonstrate a dose-dependent inhibitory effect of 5-HT on hippocampal neuron activities, indicating the effectiveness of this hybrid biosensor in detecting 5-HT over a response range from 0.01 μmol/L to 10 μmol/L. In addition, cross-correlation analysis of HNN activities suggests that 5-HT can weaken HNN connectivity reversibly, providing more specificity of this biosensor in detecting 5-HT. Moreover, 5-HT-induced spatiotemporal firing pattern alterations can be monitored at the neuron and network levels simultaneously by this hybrid biosensor in a convenient and direct way. With these merits, this neuronal network-based biosensor promises to be a valuable and practical platform for the study of neurotransmitters in vitro.

  15. Building functional networks of spiking model neurons.

    Science.gov (United States)

    Abbott, L F; DePasquale, Brian; Memmesheimer, Raoul-Martin

    2016-03-01

    Most of the networks used by computer scientists and many of those studied by modelers in neuroscience represent unit activities as continuous variables. Neurons, however, communicate primarily through discontinuous spiking. We review methods for transferring our ability to construct interesting networks that perform relevant tasks from the artificial continuous domain to more realistic spiking network models. These methods raise a number of issues that warrant further theoretical and experimental study.

  16. Sequentially switching cell assemblies in random inhibitory networks of spiking neurons in the striatum.

    Science.gov (United States)

    Ponzi, Adam; Wickens, Jeff

    2010-04-28

    The striatum is composed of GABAergic medium spiny neurons with inhibitory collaterals forming a sparse random asymmetric network and receiving an excitatory glutamatergic cortical projection. Because the inhibitory collaterals are sparse and weak, their role in striatal network dynamics is puzzling. However, here we show by simulation of a striatal inhibitory network model composed of spiking neurons that cells form assemblies that fire in sequential coherent episodes and display complex identity-temporal spiking patterns even when cortical excitation is simply constant or fluctuating noisily. Strongly correlated large-scale firing rate fluctuations on slow behaviorally relevant timescales of hundreds of milliseconds are shown by members of the same assembly whereas members of different assemblies show strong negative correlation, and we show how randomly connected spiking networks can generate this activity. Cells display highly irregular spiking with high coefficients of variation, broadly distributed low firing rates, and interspike interval distributions that are consistent with exponentially tailed power laws. Although firing rates vary coherently on slow timescales, precise spiking synchronization is absent in general. Our model only requires the minimal but striatally realistic assumptions of sparse to intermediate random connectivity, weak inhibitory synapses, and sufficient cortical excitation so that some cells are depolarized above the firing threshold during up states. Our results are in good qualitative agreement with experimental studies, consistent with recently determined striatal anatomy and physiology, and support a new view of endogenously generated metastable state switching dynamics of the striatal network underlying its information processing operations.

  17. Short-term memory in networks of dissociated cortical neurons.

    Science.gov (United States)

    Dranias, Mark R; Ju, Han; Rajaram, Ezhilarasan; VanDongen, Antonius M J

    2013-01-30

    Short-term memory refers to the ability to store small amounts of stimulus-specific information for a short period of time. It is supported by both fading and hidden memory processes. Fading memory relies on recurrent activity patterns in a neuronal network, whereas hidden memory is encoded using synaptic mechanisms, such as facilitation, which persist even when neurons fall silent. We have used a novel computational and optogenetic approach to investigate whether these same memory processes hypothesized to support pattern recognition and short-term memory in vivo, exist in vitro. Electrophysiological activity was recorded from primary cultures of dissociated rat cortical neurons plated on multielectrode arrays. Cultures were transfected with ChannelRhodopsin-2 and optically stimulated using random dot stimuli. The pattern of neuronal activity resulting from this stimulation was analyzed using classification algorithms that enabled the identification of stimulus-specific memories. Fading memories for different stimuli, encoded in ongoing neural activity, persisted and could be distinguished from each other for as long as 1 s after stimulation was terminated. Hidden memories were detected by altered responses of neurons to additional stimulation, and this effect persisted longer than 1 s. Interestingly, network bursts seem to eliminate hidden memories. These results are similar to those that have been reported from similar experiments in vivo and demonstrate that mechanisms of information processing and short-term memory can be studied using cultured neuronal networks, thereby setting the stage for therapeutic applications using this platform.

  18. A simplified protocol for differentiation of electrophysiologically mature neuronal networks from human induced pluripotent stem cells.

    Science.gov (United States)

    Gunhanlar, N; Shpak, G; van der Kroeg, M; Gouty-Colomer, L A; Munshi, S T; Lendemeijer, B; Ghazvini, M; Dupont, C; Hoogendijk, W J G; Gribnau, J; de Vrij, F M S; Kushner, S A

    2017-04-18

    Progress in elucidating the molecular and cellular pathophysiology of neuropsychiatric disorders has been hindered by the limited availability of living human brain tissue. The emergence of induced pluripotent stem cells (iPSCs) has offered a unique alternative strategy using patient-derived functional neuronal networks. However, methods for reliably generating iPSC-derived neurons with mature electrophysiological characteristics have been difficult to develop. Here, we report a simplified differentiation protocol that yields electrophysiologically mature iPSC-derived cortical lineage neuronal networks without the need for astrocyte co-culture or specialized media. This protocol generates a consistent 60:40 ratio of neurons and astrocytes that arise from a common forebrain neural progenitor. Whole-cell patch-clamp recordings of 114 neurons derived from three independent iPSC lines confirmed their electrophysiological maturity, including resting membrane potential (-58.2±1.0 mV), capacitance (49.1±2.9 pF), action potential (AP) threshold (-50.9±0.5 mV) and AP amplitude (66.5±1.3 mV). Nearly 100% of neurons were capable of firing APs, of which 79% had sustained trains of mature APs with minimal accommodation (peak AP frequency: 11.9±0.5 Hz) and 74% exhibited spontaneous synaptic activity (amplitude, 16.03±0.82 pA; frequency, 1.09±0.17 Hz). We expect this protocol to be of broad applicability for implementing iPSC-based neuronal network models of neuropsychiatric disorders.

  19. Spike Code Flow in Cultured Neuronal Networks

    Directory of Open Access Journals (Sweden)

    Shinichi Tamura

    2016-01-01

    We observed spike trains produced by one-shot electrical stimulation with 8 × 8 multielectrodes in cultured neuronal networks. Each electrode accepted spikes from several neurons. We extracted short codes from the spike trains and obtained a code spectrum with a nominal time accuracy of 1%. We then constructed code flow maps as movies of the electrode array to observe the flow of the codes “1101” and “1011,” which are typical pseudorandom sequences of the kind often encountered in the literature and in our experiments. They seemed to flow from one electrode to a neighboring one and maintained their shape to some extent. To quantify the flow, we calculated the “maximum cross-correlations” among neighboring electrodes to find the direction of maximum flow of the codes with lengths less than 8. Normalized maximum cross-correlations were almost constant irrespective of the code. Furthermore, if the spike trains were shuffled in interval order or across electrodes, the correlations became significantly smaller. Thus, the analysis suggested that local codes of approximately constant shape propagated and conveyed information across the network. Hence, the codes can serve as visible and trackable marks of propagating spike waves as well as a means of evaluating information flow in the neuronal network.

  20. A Model to Explain the Emergence of Reward Expectancy neurons using Reinforcement Learning and Neural Network

    OpenAIRE

    Shinya, Ishii; Munetaka, Shidara; Katsunari, Shibata

    2006-01-01

    In an experiment with a multi-trial task to obtain a reward, reward expectancy neurons, which responded only in the non-reward trials that are necessary to advance toward the reward, have been observed in the anterior cingulate cortex of monkeys. In this paper, to explain the emergence of the reward expectancy neuron in terms of reinforcement learning theory, a model that consists of a recurrent neural network trained based on reinforcement learning is proposed. The analysis of the hi...

  1. Complexity in neuronal noise depends on network interconnectivity.

    Science.gov (United States)

    Serletis, Demitre; Zalay, Osbert C; Valiante, Taufik A; Bardakjian, Berj L; Carlen, Peter L

    2011-06-01

    "Noise," or noise-like activity (NLA), defines background electrical membrane potential fluctuations at the cellular level of the nervous system, comprising an important aspect of brain dynamics. Using whole-cell voltage recordings from fast-spiking stratum oriens interneurons and stratum pyramidale neurons located in the CA3 region of the intact mouse hippocampus, we applied complexity measures from dynamical systems theory (i.e., 1/f(γ) noise and correlation dimension) and found evidence for complexity in neuronal NLA, ranging from high- to low-complexity dynamics. Importantly, these high- and low-complexity signal features were largely dependent on gap junction and chemical synaptic transmission. Progressive neuronal isolation from the surrounding local network via gap junction blockade (abolishing gap junction-dependent spikelets) and then chemical synaptic blockade (abolishing excitatory and inhibitory post-synaptic potentials), or the reverse order of these treatments, resulted in emergence of high-complexity NLA dynamics. Restoring local network interconnectivity via blockade washout resulted in resolution to low-complexity behavior. These results suggest that the observed increase in background NLA complexity is the result of reduced network interconnectivity, thereby highlighting the potential importance of the NLA signal to the study of network state transitions arising in normal and abnormal brain dynamics (such as in epilepsy, for example).

  2. Neuronal network analyses: premises, promises and uncertainties

    OpenAIRE

    Parker, David

    2010-01-01

    Neuronal networks assemble the cellular components needed for sensory, motor and cognitive functions. Any rational intervention in the nervous system will thus require an understanding of network function. Obtaining this understanding is widely considered to be one of the major tasks facing neuroscience today. Network analyses have been performed for some years in relatively simple systems. In addition to the direct insights these systems have provided, they also illustrate some of the diffic...

  3. Reciprocal cholinergic and GABAergic modulation of the small ventrolateral pacemaker neurons of Drosophila's circadian clock neuron network.

    Science.gov (United States)

    Lelito, Katherine R; Shafer, Orie T

    2012-04-01

    The relatively simple clock neuron network of Drosophila is a valuable model system for the neuronal basis of circadian timekeeping. Unfortunately, many key neuronal classes of this network are inaccessible to electrophysiological analysis. We have therefore adopted the use of genetically encoded sensors to address the physiology of the fly's circadian clock network. Using genetically encoded Ca(2+) and cAMP sensors, we have investigated the physiological responses of two specific classes of clock neuron, the large and small ventrolateral neurons (l- and s-LN(v)s), to two neurotransmitters implicated in their modulation: acetylcholine (ACh) and γ-aminobutyric acid (GABA). Live imaging of l-LN(v) cAMP and Ca(2+) dynamics in response to cholinergic agonist and GABA application were well aligned with published electrophysiological data, indicating that our sensors were capable of faithfully reporting acute physiological responses to these transmitters within single adult clock neuron soma. We extended these live imaging methods to s-LN(v)s, critical neuronal pacemakers whose physiological properties in the adult brain are largely unknown. Our s-LN(v) experiments revealed the predicted excitatory responses to bath-applied cholinergic agonists and the predicted inhibitory effects of GABA and established that the antagonism of ACh and GABA extends to their effects on cAMP signaling. These data support recently published but physiologically untested models of s-LN(v) modulation and lead to the prediction that cholinergic and GABAergic inputs to s-LN(v)s will have opposing effects on the phase and/or period of the molecular clock within these critical pacemaker neurons.

  4. The Role of Adult-Born Neurons in the Constantly Changing Olfactory Bulb Network

    Directory of Open Access Journals (Sweden)

    Sarah Malvaut

    2016-01-01

    The adult mammalian brain is remarkably plastic and constantly undergoes structurofunctional modifications in response to environmental stimuli. In many regions plasticity is manifested by modifications in the efficacy of existing synaptic connections or synapse formation and elimination. In a few regions, however, plasticity is brought about by the addition of new neurons that integrate into established neuronal networks. This type of neuronal plasticity is particularly prominent in the olfactory bulb (OB), where thousands of neuronal progenitors are produced on a daily basis in the subventricular zone (SVZ) and migrate along the rostral migratory stream (RMS) towards the OB. In the OB, these neuronal precursors differentiate into local interneurons, mature, and functionally integrate into the bulbar network by establishing output synapses with principal neurons. Despite continuous progress, it is still not well understood how normal functioning of the OB is preserved in the constantly remodelling bulbar network and what role adult-born neurons play in odor behaviour. In this review we will discuss different levels of morphofunctional plasticity effected by adult-born neurons and their functional role in the adult OB and also highlight the possibility that different subpopulations of adult-born cells may fulfill distinct functions in the OB neuronal network and odor behaviour.

  5. The Role of Adult-Born Neurons in the Constantly Changing Olfactory Bulb Network

    Science.gov (United States)

    Malvaut, Sarah; Saghatelyan, Armen

    2016-01-01

    The adult mammalian brain is remarkably plastic and constantly undergoes structurofunctional modifications in response to environmental stimuli. In many regions plasticity is manifested by modifications in the efficacy of existing synaptic connections or synapse formation and elimination. In a few regions, however, plasticity is brought about by the addition of new neurons that integrate into established neuronal networks. This type of neuronal plasticity is particularly prominent in the olfactory bulb (OB) where thousands of neuronal progenitors are produced on a daily basis in the subventricular zone (SVZ) and migrate along the rostral migratory stream (RMS) towards the OB. In the OB, these neuronal precursors differentiate into local interneurons, mature, and functionally integrate into the bulbar network by establishing output synapses with principal neurons. Despite continuous progress, it is still not well understood how normal functioning of the OB is preserved in the constantly remodelling bulbar network and what role adult-born neurons play in odor behaviour. In this review we will discuss different levels of morphofunctional plasticity effected by adult-born neurons and their functional role in the adult OB and also highlight the possibility that different subpopulations of adult-born cells may fulfill distinct functions in the OB neuronal network and odor behaviour.

  6. How adaptation shapes spike rate oscillations in recurrent neuronal networks

    Directory of Open Access Journals (Sweden)

    Moritz eAugustin

    2013-02-01

    Neural mass signals from in vivo recordings often show oscillations with frequencies ranging from <1 Hz to 100 Hz. Fast rhythmic activity in the beta and gamma range can be generated by network-based mechanisms such as recurrent synaptic excitation-inhibition loops. Slower oscillations might instead depend on neuronal adaptation currents whose timescales range from tens of milliseconds to seconds. Here we investigate how the dynamics of such adaptation currents contribute to spike rate oscillations and resonance properties in recurrent networks of excitatory and inhibitory neurons. Based on a network of sparsely coupled spiking model neurons with two types of adaptation current and conductance-based synapses with heterogeneous strengths and delays, we use a mean-field approach to analyze oscillatory network activity. For constant external input, we find that spike-triggered adaptation currents provide a mechanism to generate slow oscillations over a wide range of adaptation timescales as long as recurrent synaptic excitation is sufficiently strong. Faster rhythms occur when recurrent inhibition is slower than excitation, and the oscillation frequency increases with the strength of inhibition. Adaptation facilitates such network-based oscillations for fast synaptic inhibition and leads to decreased frequencies. For oscillatory external input, adaptation currents amplify a narrow band of frequencies and cause phase advances for low frequencies in addition to phase delays at higher frequencies. Our results therefore identify the different key roles of neuronal adaptation dynamics for rhythmogenesis and selective signal propagation in recurrent networks.

  7. Bi-directional astrocytic regulation of neuronal activity within a network

    Directory of Open Access Journals (Sweden)

    Susan Yu Gordleeva

    2012-11-01

    The concept of a tripartite synapse holds that astrocytes can affect both the pre- and postsynaptic compartments through the Ca2+-dependent release of gliotransmitters. Because astrocytic Ca2+ transients usually last for a few seconds, we assumed that astrocytic regulation of synaptic transmission may also occur on the scale of seconds. Here, we considered the basic physiological functions of tripartite synapses and investigated astrocytic regulation at the level of neural network activity. The firing dynamics of individual neurons in a spontaneously firing network was described by the Hodgkin-Huxley model. The neurons received excitatory synaptic input driven by a Poisson spike train with variable frequency. The mean-field concentration of the released neurotransmitter was used to describe the presynaptic dynamics. The amplitudes of the excitatory postsynaptic currents (PSCs) obeyed the gamma distribution law. In our model, astrocytes depressed the presynaptic release and enhanced the postsynaptic currents. As a result, low-frequency synaptic input was suppressed while high-frequency input was amplified. The analysis of the neuron spiking frequency as an indicator of network activity revealed that tripartite synaptic transmission dramatically changed the local network operation compared to bipartite synapses. Specifically, the astrocytes supported homeostatic regulation of the network activity by increasing or decreasing the firing of the neurons. Thus, astrocyte activation may modulate the transition of the neural network into a bistable regime of activity with two stable firing levels and spontaneous transitions between them.

  8. Diagnosis of cranial hemangioma: Comparison between logistic regression analysis and neuronal network

    International Nuclear Information System (INIS)

    Arana, E.; Marti-Bonmati, L.; Bautista, D.; Paredes, R.

    1998-01-01

    To study the utility of logistic regression and a neuronal network in the diagnosis of cranial hemangiomas. Fifteen patients presenting hemangiomas were selected from a total of 167 patients with cranial lesions. All were evaluated by plain radiography and computed tomography (CT). Nineteen variables in their medical records were reviewed. Logistic regression and neuronal network models were constructed and validated by the jackknife (leave-one-out) approach. The yields of the two models were compared by means of ROC curves, using the area under the curve as the parameter. Seven men and 8 women presented hemangiomas. The mean age of these patients was 38.4 ± 15.4 years (mean ± standard deviation). Logistic regression identified shape, soft tissue mass and periosteal reaction as significant variables. The neuronal network lent more importance to the existence of an ossified matrix, a ruptured cortical vein and the mixed calcified-blastic (trabeculated) pattern. The neuronal network showed a greater yield than logistic regression (Az, 0.9409 ± 0.004 versus 0.7211 ± 0.075; p<0.001). The neuronal network discloses hidden interactions among the variables, providing a higher yield in the characterization of cranial hemangiomas and constituting a medical diagnostic aid.
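
    An illustrative sketch of the comparison described, with scikit-learn standing in for the original statistics software and a purely hypothetical feature matrix of the same shape (167 lesions, 19 variables, 15 hemangiomas); leave-one-out predictions from logistic regression and a small neural network are compared by the area under the ROC curve.

```python
# Hedged sketch: leave-one-out AUC comparison of two classifiers on synthetic
# data shaped like the study's (the real radiographic variables are not used).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import LeaveOneOut
from sklearn.metrics import roc_auc_score

def loo_auc(model, X, y):
    scores = np.zeros(len(y))
    for train_idx, test_idx in LeaveOneOut().split(X):
        model.fit(X[train_idx], y[train_idx])
        scores[test_idx] = model.predict_proba(X[test_idx])[:, 1]
    return roc_auc_score(y, scores)

rng = np.random.default_rng(0)
X = rng.standard_normal((167, 19))        # hypothetical lesion features
y = np.zeros(167, dtype=int)
y[:15] = 1                                # 15 hemangiomas
X[y == 1] += 0.8                          # make the positive class separable-ish

print("logistic regression AUC:", loo_auc(LogisticRegression(max_iter=1000), X, y))
print("neural network AUC:", loo_auc(MLPClassifier(hidden_layer_sizes=(8,), max_iter=1000), X, y))
```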

  9. Synaptic network activity induces neuronal differentiation of adult hippocampal precursor cells through BDNF signaling

    Directory of Open Access Journals (Sweden)

    Harish Babu

    2009-09-01

    Adult hippocampal neurogenesis is regulated by activity. But how do neural precursor cells in the hippocampus respond to surrounding network activity and translate increased neural activity into a developmental program? Here we show that long-term potentiation (LTP)-like synaptic activity within a cellular network of mature hippocampal neurons promotes neuronal differentiation of newly generated cells. In co-cultures of precursor cells with primary hippocampal neurons, LTP-like synaptic plasticity induced by the addition of glycine in Mg2+-free media for 5 min produced synchronous network activity and subsequently increased synaptic strength between neurons. Furthermore, this synchronous network activity led to a significant increase in neuronal differentiation from the co-cultured neural precursor cells. When applied directly to precursor cells, glycine and Mg2+-free solution did not induce neuronal differentiation. Synaptic plasticity-induced neuronal differentiation of precursor cells was observed in the presence of GABAergic neurotransmission blockers but was dependent on NMDA-mediated Ca2+ influx. Most importantly, neuronal differentiation required the release of brain-derived neurotrophic factor (BDNF) from the underlying substrate hippocampal neurons as well as TrkB receptor phosphorylation in precursor cells. This suggests that activity-dependent stem cell differentiation within the hippocampal network is mediated via synaptically evoked BDNF signaling.

  10. Causal Interrogation of Neuronal Networks and Behavior through Virally Transduced Ivermectin Receptors.

    Science.gov (United States)

    Obenhaus, Horst A; Rozov, Andrei; Bertocchi, Ilaria; Tang, Wannan; Kirsch, Joachim; Betz, Heinrich; Sprengel, Rolf

    2016-01-01

    The causal interrogation of neuronal networks involved in specific behaviors requires the spatially and temporally controlled modulation of neuronal activity. For long-term manipulation of neuronal activity, chemogenetic tools provide a reasonable alternative to short-term optogenetic approaches. Here we show that virus mediated gene transfer of the ivermectin (IVM) activated glycine receptor mutant GlyRα1 (AG) can be used for the selective and reversible silencing of specific neuronal networks in mice. In the striatum, dorsal hippocampus, and olfactory bulb, GlyRα1 (AG) promoted IVM dependent effects in representative behavioral assays. Moreover, GlyRα1 (AG) mediated silencing had a strong and reversible impact on neuronal ensemble activity and c-Fos activation in the olfactory bulb. Together our results demonstrate that long-term, reversible and re-inducible neuronal silencing via GlyRα1 (AG) is a promising tool for the interrogation of network mechanisms underlying the control of behavior and memory formation.

  11. Degree of synchronization modulated by inhibitory neurons in clustered excitatory-inhibitory recurrent networks

    Science.gov (United States)

    Li, Huiyan; Sun, Xiaojuan; Xiao, Jinghua

    2018-01-01

    An excitatory-inhibitory recurrent neuronal network is established to numerically study the effect of inhibitory neurons on the synchronization degree of neuronal systems. The obtained results show that, as the number of inhibitory neurons and the coupling strength from inhibitory to excitatory neurons increase, inhibitory neurons can not only reduce the synchronization degree when the synchronization degree of the excitatory population is initially high, but also enhance it when it is initially low. Meanwhile, inhibitory neurons can also help the neuronal network maintain moderately synchronized states. In this paper, we call this effect the modulation effect of inhibitory neurons. With the obtained results, it is further revealed that a ratio of excitatory to inhibitory neurons of nearly 4:1 is an economical and affordable choice for inhibitory neurons to realize this modulation effect.

  12. Consistence of Network Filtering Rules

    Institute of Scientific and Technical Information of China (English)

    SHE Kun; WU Yuancheng; HUANG Juncai; ZHOU Mingtian

    2004-01-01

    The inconsistency of firewall/VPN (Virtual Private Network) rules incurs a huge maintenance cost. With the development of multinational companies, SOHO offices and e-government, the number of firewalls/VPNs will increase rapidly, and rule tables, whether stand-alone or networked, will grow in geometric progression accordingly. Checking the consistency of rule tables manually is inadequate. A formal approach can define semantic consistency and provide a theoretical foundation for the intelligent management of rule tables. In this paper, a formalization of host rules and network rules for automatic rule validation based on set theory is proposed, and a rule validation scheme is defined. The analysis results show the superior performance of the methods and demonstrate their potential for intelligent management based on rule tables.
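
    A toy sketch of the set-based idea (not the paper's formalism): each filtering rule is modelled as a set of (source, destination, port) tuples plus an action, and two rules are flagged as inconsistent when their match sets overlap but their actions differ. All hosts, ports and rule names below are hypothetical.

```python
# Minimal set-theoretic consistency check for two filtering rules.
from itertools import product

def rule_set(src_hosts, dst_hosts, ports):
    return set(product(src_hosts, dst_hosts, ports))

def inconsistent(rule_a, rule_b):
    (set_a, action_a), (set_b, action_b) = rule_a, rule_b
    return action_a != action_b and bool(set_a & set_b)

allow_web = (rule_set({"10.0.0.1", "10.0.0.2"}, {"192.168.1.5"}, {80, 443}), "ALLOW")
deny_80 = (rule_set({"10.0.0.2"}, {"192.168.1.5"}, {80}), "DENY")

print(inconsistent(allow_web, deny_80))   # True: overlapping matches, opposite actions
```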

  13. Probabilistic Inference in General Graphical Models through Sampling in Stochastic Networks of Spiking Neurons

    Science.gov (United States)

    Pecevski, Dejan; Buesing, Lars; Maass, Wolfgang

    2011-01-01

    An important open problem of computational neuroscience is the generic organization of computations in networks of neurons in the brain. We show here through rigorous theoretical analysis that inherent stochastic features of spiking neurons, in combination with simple nonlinear computational operations in specific network motifs and dendritic arbors, enable networks of spiking neurons to carry out probabilistic inference through sampling in general graphical models. In particular, it enables them to carry out probabilistic inference in Bayesian networks with converging arrows (“explaining away”) and with undirected loops, that occur in many real-world tasks. Ubiquitous stochastic features of networks of spiking neurons, such as trial-to-trial variability and spontaneous activity, are necessary ingredients of the underlying computational organization. We demonstrate through computer simulations that this approach can be scaled up to neural emulations of probabilistic inference in fairly large graphical models, yielding some of the most complex computations that have been carried out so far in networks of spiking neurons.

  14. Probabilistic inference in general graphical models through sampling in stochastic networks of spiking neurons.

    Directory of Open Access Journals (Sweden)

    Dejan Pecevski

    2011-12-01

    An important open problem of computational neuroscience is the generic organization of computations in networks of neurons in the brain. We show here through rigorous theoretical analysis that inherent stochastic features of spiking neurons, in combination with simple nonlinear computational operations in specific network motifs and dendritic arbors, enable networks of spiking neurons to carry out probabilistic inference through sampling in general graphical models. In particular, it enables them to carry out probabilistic inference in Bayesian networks with converging arrows ("explaining away") and with undirected loops, that occur in many real-world tasks. Ubiquitous stochastic features of networks of spiking neurons, such as trial-to-trial variability and spontaneous activity, are necessary ingredients of the underlying computational organization. We demonstrate through computer simulations that this approach can be scaled up to neural emulations of probabilistic inference in fairly large graphical models, yielding some of the most complex computations that have been carried out so far in networks of spiking neurons.

  15. Probabilistic inference in general graphical models through sampling in stochastic networks of spiking neurons.

    Science.gov (United States)

    Pecevski, Dejan; Buesing, Lars; Maass, Wolfgang

    2011-12-01

    An important open problem of computational neuroscience is the generic organization of computations in networks of neurons in the brain. We show here through rigorous theoretical analysis that inherent stochastic features of spiking neurons, in combination with simple nonlinear computational operations in specific network motifs and dendritic arbors, enable networks of spiking neurons to carry out probabilistic inference through sampling in general graphical models. In particular, it enables them to carry out probabilistic inference in Bayesian networks with converging arrows ("explaining away") and with undirected loops, that occur in many real-world tasks. Ubiquitous stochastic features of networks of spiking neurons, such as trial-to-trial variability and spontaneous activity, are necessary ingredients of the underlying computational organization. We demonstrate through computer simulations that this approach can be scaled up to neural emulations of probabilistic inference in fairly large graphical models, yielding some of the most complex computations that have been carried out so far in networks of spiking neurons.

  16. Noise and Synchronization Analysis of the Cold-Receptor Neuronal Network Model

    Directory of Open Access Journals (Sweden)

    Ying Du

    2014-01-01

    This paper analyzes the dynamics of the cold-receptor neural network model. First, it examines noise effects on neuronal stimulus in the model. From ISI plots, it is shown that there are considerable differences between purely deterministic simulations and noisy ones. The ISI-distance is used to measure the noise effects on spike trains quantitatively. It is found that the spike trains of the model can be affected by noise more strongly at some temperatures than at others; meanwhile, spike trains show greater variability as the noise intensity increases. The synchronization of the neuronal network with different connectivity patterns is also studied. It is shown that complete synchronization is more difficult to achieve for chaotic and high-period patterns than for single-spike and low-period patterns. The neuronal network exhibits various patterns of firing synchronization as key parameters such as the coupling strength are varied. Different types of firing synchronization are diagnosed by a correlation coefficient and the ISI-distance method. The simulations show that the synchronization status of the neurons is related to the network connectivity patterns.
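
    A hedged sketch of the ISI-distance used above (after Kreuz and colleagues; the study's exact implementation may differ): at each sample time the current interspike intervals of two trains are compared and the normalized mismatch is averaged over time, giving 0 for identical interval structure.

```python
# ISI-distance sketch with synthetic spike trains (regular vs. jittered).
import numpy as np

def current_isi(spike_times, t_grid):
    """For each time point, the length of the ISI containing it."""
    spikes = np.asarray(spike_times)
    idx = np.searchsorted(spikes, t_grid, side="right")
    idx = np.clip(idx, 1, len(spikes) - 1)
    return spikes[idx] - spikes[idx - 1]

def isi_distance(train_a, train_b, t_start, t_end, n_samples=10000):
    t = np.linspace(t_start, t_end, n_samples)
    isi_a, isi_b = current_isi(train_a, t), current_isi(train_b, t)
    ratio = np.where(isi_a <= isi_b, isi_a / isi_b - 1.0, -(isi_b / isi_a - 1.0))
    return np.mean(np.abs(ratio))

# Usage: jittering one regular train increases its distance from the original.
rng = np.random.default_rng(0)
regular = np.arange(0.0, 10.0, 0.1)
jittered = np.sort(regular + rng.normal(0.0, 0.02, regular.size))
print(isi_distance(regular, regular, 0.5, 9.5))    # 0.0
print(isi_distance(regular, jittered, 0.5, 9.5))   # > 0
```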

  17. Novel transcriptional networks regulated by CLOCK in human neurons.

    Science.gov (United States)

    Fontenot, Miles R; Berto, Stefano; Liu, Yuxiang; Werthmann, Gordon; Douglas, Connor; Usui, Noriyoshi; Gleason, Kelly; Tamminga, Carol A; Takahashi, Joseph S; Konopka, Genevieve

    2017-11-01

    The molecular mechanisms underlying human brain evolution are not fully understood; however, previous work suggested that expression of the transcription factor CLOCK in the human cortex might be relevant to human cognition and disease. In this study, we investigated this novel transcriptional role for CLOCK in human neurons by performing chromatin immunoprecipitation sequencing for endogenous CLOCK in adult neocortices and RNA sequencing following CLOCK knockdown in differentiated human neurons in vitro. These data suggested that CLOCK regulates the expression of genes involved in neuronal migration, and a functional assay showed that CLOCK knockdown increased neuronal migratory distance. Furthermore, dysregulation of CLOCK disrupts coexpressed networks of genes implicated in neuropsychiatric disorders, and the expression of these networks is driven by hub genes with human-specific patterns of expression. These data support a role for CLOCK-regulated transcriptional cascades involved in human brain evolution and function.

  18. Human embryonic stem cell-derived neurons adopt and regulate the activity of an established neural network

    Science.gov (United States)

    Weick, Jason P.; Liu, Yan; Zhang, Su-Chun

    2011-01-01

    Whether hESC-derived neurons can fully integrate with and functionally regulate an existing neural network remains unknown. Here, we demonstrate that hESC-derived neurons receive unitary postsynaptic currents both in vitro and in vivo and adopt the rhythmic firing behavior of mouse cortical networks via synaptic integration. Optical stimulation of hESC-derived neurons expressing Channelrhodopsin-2 elicited both inhibitory and excitatory postsynaptic currents and triggered network bursting in mouse neurons. Furthermore, light stimulation of hESC-derived neurons transplanted to the hippocampus of adult mice triggered postsynaptic currents in host pyramidal neurons in acute slice preparations. Thus, hESC-derived neurons can participate in and modulate neural network activity through functional synaptic integration, suggesting they are capable of contributing to neural network information processing both in vitro and in vivo. PMID:22106298

  19. Effect of Transcranial Magnetic Stimulation on Neuronal Networks

    Science.gov (United States)

    Unsal, Ahmet; Hadimani, Ravi; Jiles, David

    2013-03-01

    The human brain contains around 100 billion nerve cells controlling our day-to-day activities. Consequently, brain disorders often result in impairments such as paralysis, loss of coordination and seizures. It has been estimated that 1 in 5 Americans suffers from a diagnosable mental disorder. There is an urgent need to understand these disorders, prevent them and, if possible, develop permanent cures for them. As a result, a significant amount of research activity is being directed toward brain research. Transcranial Magnetic Stimulation (TMS) is a promising tool for diagnosing and treating brain disorders. It is a non-invasive treatment method that produces a current flow in the brain which excites the neurons. Even though TMS has been verified to have beneficial effects on various brain-related disorders, there have not been enough studies on the impact of TMS on cells. In this study, we investigate the electrophysiological effects of TMS on one-dimensional neuronal cultures grown in a circular pathway. Electrical currents are induced in the neuronal networks depending on the directionality of the applied field. This aids in understanding how neuronal networks react under TMS treatment.

  20. Neuronal Networks on Nanocellulose Scaffolds.

    Science.gov (United States)

    Jonsson, Malin; Brackmann, Christian; Puchades, Maja; Brattås, Karoline; Ewing, Andrew; Gatenholm, Paul; Enejder, Annika

    2015-11-01

    Proliferation, integration, and neurite extension of PC12 cells, a widely used culture model for cholinergic neurons, were studied in nanocellulose scaffolds biosynthesized by Gluconacetobacter xylinus to allow a three-dimensional (3D) extension of neurites better mimicking neuronal networks in tissue. The interaction with control scaffolds was compared with cationized nanocellulose (trimethyl ammonium betahydroxy propyl [TMAHP] cellulose) to investigate the impact of surface charges on the cell interaction mechanisms. Furthermore, coatings with extracellular matrix proteins (collagen, fibronectin, and laminin) were investigated to determine the importance of integrin-mediated cell attachment. Cell proliferation was evaluated by a cellular proliferation assay, while cell integration and neurite propagation were studied by simultaneous label-free Coherent anti-Stokes Raman Scattering and second harmonic generation microscopy, providing 3D images of PC12 cells and arrangement of nanocellulose fibrils, respectively. Cell attachment and proliferation were enhanced by TMAHP modification, but not by protein coating. Protein coating instead promoted active interaction between the cells and the scaffold, hence lateral cell migration and integration. Irrespective of surface modification, deepest cell integration measured was one to two cell layers, whereas neurites have a capacity to integrate deeper than the cell bodies in the scaffold due to their fine dimensions and amoeba-like migration pattern. Neurites with lengths of >50 μm were observed, successfully connecting individual cells and cell clusters. In conclusion, TMAHP-modified nanocellulose scaffolds promote initial cellular scaffold adhesion, which combined with additional cell-scaffold treatments enables further formation of 3D neuronal networks.

  1. A Neuronal Network Model for Pitch Selectivity and Representation

    OpenAIRE

    Huang, Chengcheng; Rinzel, John

    2016-01-01

    Pitch is a perceptual correlate of periodicity. Sounds with distinct spectra can elicit the same pitch. Despite the importance of pitch perception, understanding the cellular mechanism of pitch perception is still a major challenge and a mechanistic model of pitch is lacking. A multi-stage neuronal network model is developed for pitch frequency estimation using biophysically-based, high-resolution coincidence detector neurons. The neuronal units respond only to highly coincident input among c...

  2. How structure determines correlations in neuronal networks.

    Directory of Open Access Journals (Sweden)

    Volker Pernice

    2011-05-01

    Full Text Available Networks are becoming a ubiquitous metaphor for the understanding of complex biological systems, spanning the range between molecular signalling pathways, neural networks in the brain, and interacting species in a food web. In many models, we face an intricate interplay between the topology of the network and the dynamics of the system, which is generally very hard to disentangle. A dynamical feature that has been the subject of intense research in various fields is the correlation between the noisy activity of nodes in a network. We consider a class of systems in which discrete signals are sent along the links of the network. Such systems are of particular relevance in neuroscience, because they provide models for networks of neurons that use action potentials for communication. We study correlations in dynamic networks with arbitrary topology, assuming linear pulse coupling. With our novel approach, we are able to understand in detail how specific structural motifs affect pairwise correlations. Based on a power series decomposition of the covariance matrix, we describe the conditions under which very indirect interactions will have a pronounced effect on correlations and population dynamics. In random networks, we find that indirect interactions may lead to a broad distribution of activation levels with low average but highly variable correlations. This phenomenon is even more pronounced in networks with distance-dependent connectivity. In contrast, networks with highly connected hubs or patchy connections often exhibit strong average correlations. Our results are particularly relevant in view of new experimental techniques that enable the parallel recording of spiking activity from a large number of neurons, an appropriate interpretation of which is hampered by the currently limited understanding of structure-dynamics relations in complex networks.

  3. The influence of single neuron dynamics and network topology on time delay-induced multiple synchronous behaviors in inhibitory coupled network

    International Nuclear Information System (INIS)

    Zhao, Zhiguo; Gu, Huaguang

    2015-01-01

    Highlights: • Time delay-induced multiple synchronous behaviors were simulated in neuronal networks. • Multiple behaviors appear at time delays shorter than a bursting period of the neurons. • The more spikes per burst of bursting, the more synchronous regions of time delay. • From regular to random via small-world networks, the synchronous degree becomes weak. • An interpretation of the multiple behaviors and the influence of the network are provided. - Abstract: Time delay-induced multiple synchronous behaviors are simulated in a neuronal network composed of many inhibitory neurons and appear at different time delays shorter than a period of the endogenous bursting of the individual neurons. This differs from previous investigations, wherein only one of the multiple synchronous behaviors appears at a time delay shorter than a period of endogenous firing and the others appear at time delays longer than the period duration. The bursting patterns of the synchronous behaviors are identified based on the dynamics of an individual neuron stimulated by a signal similar to the inhibitory coupling current, which is applied at the decaying branch of a spike and at a suitable phase within the quiescent state of the endogenous bursting. If a burst of endogenous bursting contains more spikes, the synchronous behaviors appear in more regions of time delay. As the coupling strength increases, the multiple synchronous behaviors appear in a sequence because different thresholds of coupling current or strength are needed to achieve synchronous behaviors. From regular, to small-world, to random networks, the synchronous degree of the multiple synchronous behaviors becomes weak, and synchronous bursting patterns with fewer spikes per burst disappear, which is properly interpreted by the difference of coupling current between neurons induced by different node degrees and the high threshold of coupling current needed to achieve synchronization for the absent synchronous bursting patterns. The results of the influence of

  4. Inference of neuronal network spike dynamics and topology from calcium imaging data

    Directory of Open Access Journals (Sweden)

    Henry eLütcke

    2013-12-01

    Full Text Available Two-photon calcium imaging enables functional analysis of neuronal circuits by inferring action potential (AP) occurrence ('spike trains') from cellular fluorescence signals. It remains unclear how experimental parameters such as signal-to-noise ratio (SNR) and acquisition rate affect spike inference and whether additional information about network structure can be extracted. Here we present a simulation framework for quantitatively assessing how well spike dynamics and network topology can be inferred from noisy calcium imaging data. For simulated AP-evoked calcium transients in neocortical pyramidal cells, we analyzed the quality of spike inference as a function of SNR and data acquisition rate using a recently introduced peeling algorithm. Given experimentally attainable values of SNR and acquisition rate, neural spike trains could be reconstructed accurately and with up to millisecond precision. We then applied statistical neuronal network models to explore how remaining uncertainties in spike inference affect estimates of network connectivity and topological features of network organization. We define the experimental conditions suitable for inferring whether the network has a scale-free structure and determine how well hub neurons can be identified. Our findings provide a benchmark for future calcium imaging studies that aim to reliably infer neuronal network properties.
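
    The simulation framework can be pictured with the toy sketch below: a Poisson spike train is convolved with an exponential calcium kernel, noise is added at a chosen SNR, and events are recovered with a crude difference-threshold detector. The kernel parameters, firing rate and detector are illustrative assumptions; the study itself uses a dedicated peeling algorithm rather than this simple detector.

```python
import numpy as np

rng = np.random.default_rng(2)

rate_hz   = 30.0      # acquisition rate (frames per second)
duration  = 20.0      # seconds
tau_decay = 0.5       # indicator decay time constant (s), assumed
amp       = 1.0       # dF/F amplitude per action potential, assumed
snr       = 5.0       # peak amplitude / noise standard deviation

t = np.arange(0.0, duration, 1.0 / rate_hz)
spikes = rng.random(t.size) < 1.0 / rate_hz          # ~1 Hz ground-truth train

kernel = amp * np.exp(-np.arange(0.0, 5 * tau_decay, 1.0 / rate_hz) / tau_decay)
dff = np.convolve(spikes.astype(float), kernel)[: t.size]
dff += rng.normal(0.0, amp / snr, t.size)            # noise level set by the SNR

# Crude detector: threshold the frame-to-frame increase of dF/F at ~3 s.d.
# of the differenced noise (the paper's peeling algorithm is more elaborate).
d = np.diff(dff, prepend=dff[0])
detected = d > 3.0 * np.sqrt(2.0) * (amp / snr)
print("true spikes:", int(spikes.sum()), " detected events:", int(detected.sum()))
```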

  5. Network bursts in cortical neuronal cultures: 'noise - versus pacemaker'- driven neural network simulations

    NARCIS (Netherlands)

    Gritsun, T.; Stegenga, J.; le Feber, Jakob; Rutten, Wim

    2009-01-01

    In this paper we address the issue of spontaneous bursting activity in cortical neuronal cultures and explain what might cause this collective behavior using computer simulations of two different neural network models. While the common approach to activating a passive network is by introducing

  6. Impact of Partial Time Delay on Temporal Dynamics of Watts-Strogatz Small-World Neuronal Networks

    Science.gov (United States)

    Yan, Hao; Sun, Xiaojuan

    2017-06-01

    In this paper, we mainly discuss the effects of partial time delay on the temporal dynamics of Watts-Strogatz (WS) small-world neuronal networks by controlling two parameters. One is the time delay τ and the other is the probability of partial time delay pdelay. Temporal dynamics of the WS small-world neuronal networks are discussed with the aid of temporal coherence and mean firing rate. With the obtained simulation results, it is revealed that for small time delay τ, the probability pdelay could weaken temporal coherence and increase the mean firing rate of the neuronal networks, which indicates that it could promote neuronal firing while degrading firing regularity. For large time delay τ, temporal coherence and mean firing rate do not change greatly with respect to pdelay. The time delay τ always has a great influence on both temporal coherence and mean firing rate, regardless of the value of pdelay. Moreover, from the analysis of spike trains and histograms of interspike intervals of neurons inside the neuronal networks, it is found that the effects of partial time delay on temporal coherence and mean firing rate could be the result of locking between the period of neuronal firing activities and the value of the time delay τ. In brief, partial time delay can have a great influence on the temporal dynamics of the neuronal networks.
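
    Two of the summary statistics used above can be computed as in the sketch below: the population mean firing rate and a temporal-coherence proxy taken here as the inverse coefficient of variation of the inter-spike intervals. The exact coherence measure in the study may be defined differently, and the Poisson-like test trains are purely illustrative.

```python
import numpy as np

def mean_firing_rate(spike_trains, duration):
    """Average number of spikes per second across all neurons."""
    return np.mean([len(s) / duration for s in spike_trains])

def temporal_coherence(spike_trains):
    """Proxy for firing regularity: mean of <ISI>/std(ISI) over neurons."""
    lams = []
    for s in spike_trains:
        isi = np.diff(np.sort(s))
        if isi.size > 1 and isi.std() > 0:
            lams.append(isi.mean() / isi.std())
    return np.mean(lams) if lams else np.nan

# Illustrative Poisson-like spike trains (100 neurons, ~5 Hz, 10 s).
rng = np.random.default_rng(3)
duration = 10.0
trains = [np.sort(rng.uniform(0.0, duration, rng.poisson(50))) for _ in range(100)]
print("mean rate:", mean_firing_rate(trains, duration), "Hz")
print("temporal coherence:", temporal_coherence(trains))
```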

  7. To Break or to Brake Neuronal Network Accelerated by Ammonium Ions?

    Directory of Open Access Journals (Sweden)

    Vladimir V Dynnik

    Full Text Available The aim of the present study was to investigate the effects of ammonium ions on in vitro neuronal network activity and to search for alternative methods of preventing acute ammonia neurotoxicity. Rat hippocampal neuron and astrocyte co-cultures in vitro, fluorescent microscopy and perforated patch clamp were used to monitor the changes in intracellular Ca2+ and membrane potential produced by ammonium ions and various modulators in the cells implicated in neural networks. Low concentrations of NH4Cl (0.1-4 mM) produce short temporal effects on network activity. Application of 5-8 mM NH4Cl: invariably transforms diverse network firing regimens into identical burst patterns, characterized by substantial neuronal membrane depolarization at the plateau phase of the potential and high-amplitude Ca2+ oscillations; raises the frequency of oscillations and the period-averaged Ca2+ level in all cells implicated in the network; results in the appearance of a group of «run out» cells with high intracellular Ca2+ and steadily diminished oscillation amplitudes; and increases astrocyte Ca2+ signalling, characterized by the appearance of groups of cells with increased intracellular Ca2+ levels and/or chaotic Ca2+ oscillations. Accelerated network activity may be suppressed by blockade of NMDA or AMPA/kainate receptors or by overactivation of AMPA/kainate receptors. Ammonia still activates neuronal firing in the presence of the GABA(A) receptor antagonist bicuculline, indicating that the «disinhibition phenomenon» is not implicated in the mechanisms of network acceleration. Network activity may also be slowed down by glycine, agonists of metabotropic inhibitory receptors, betaine, L-carnitine, L-arginine, etc. The obtained results demonstrate that ammonium ions accelerate neuronal network firing via ionotropic glutamate receptors while the activities of a group of inhibitory ionotropic and metabotropic receptors are preserved. This may mean that ammonia neurotoxicity might be prevented by

  8. Clustering promotes switching dynamics in networks of noisy neurons

    Science.gov (United States)

    Franović, Igor; Klinshov, Vladimir

    2018-02-01

    Macroscopic variability is an emergent property of neural networks, typically manifested in spontaneous switching between the episodes of elevated neuronal activity and the quiescent episodes. We investigate the conditions that facilitate switching dynamics, focusing on the interplay between the different sources of noise and heterogeneity of the network topology. We consider clustered networks of rate-based neurons subjected to external and intrinsic noise and derive an effective model where the network dynamics is described by a set of coupled second-order stochastic mean-field systems representing each of the clusters. The model provides an insight into the different contributions to effective macroscopic noise and qualitatively indicates the parameter domains where switching dynamics may occur. By analyzing the mean-field model in the thermodynamic limit, we demonstrate that clustering promotes multistability, which gives rise to switching dynamics in a considerably wider parameter region compared to the case of a non-clustered network with sparse random connection topology.

  9. Distribution of spinal neuronal networks controlling forward and backward locomotion.

    Science.gov (United States)

    Merkulyeva, Natalia; Veshchitskii, Aleksandr; Gorsky, Oleg; Pavlova, Natalia; Zelenin, Pavel V; Gerasimenko, Yury; Deliagina, Tatiana G; Musienko, Pavel

    2018-04-20

    Higher vertebrates, including humans, are capable not only of forward (FW) locomotion but also of walking in other directions relative to the body axis [backward (BW), sideways, etc.]. While the neural mechanisms responsible for controlling FW locomotion have been studied in considerable detail, the mechanisms controlling steps in other directions are mostly unknown. The aim of the present study was to investigate the distribution of spinal neuronal networks controlling FW and BW locomotion. First, we applied electrical epidural stimulation (ES) to different segments of the spinal cord from L2 to S2 to reveal zones triggering FW and BW locomotion in decerebrate cats of either sex. Second, to determine the location of spinal neurons activated during FW and BW locomotion, we used c-fos immunostaining. We found that the neuronal networks responsible for FW locomotion were distributed broadly in the lumbosacral spinal cord and could be activated by ES of any segment from L3 to S2. By contrast, networks generating BW locomotion were activated by ES of a limited zone from the caudal part of L5 to the caudal part of L7. In the intermediate part of the gray matter within this zone, a significantly higher number of c-fos-positive interneurons was revealed in BW-stepping cats compared with FW-stepping cats. We suggest that this region of the spinal cord contains the network that determines the BW direction of locomotion. Significance Statement: Sequential and single steps in various directions relative to the body axis [forward (FW), backward (BW), sideways, etc.] are used during locomotion and to correct for perturbations, respectively. The mechanisms controlling step direction are unknown. In the present study, for the first time we compared the distributions of spinal neuronal networks controlling FW and BW locomotion. Using a marker to visualize active neurons, we demonstrated that in the intermediate part of the gray matter within L6 and L7 spinal segments

  10. Effect of acute stretch injury on action potential and network activity of rat neocortical neurons in culture.

    Science.gov (United States)

    Magou, George C; Pfister, Bryan J; Berlin, Joshua R

    2015-10-22

    The basis for acute seizures following traumatic brain injury (TBI) remains unclear. Animal models of TBI have revealed acute hyperexcitability in cortical neurons that could underlie seizure activity, but studying initiating events causing hyperexcitability is difficult in these models. In vitro models of stretch injury with cultured cortical neurons, a surrogate for TBI, allow facile investigation of cellular changes after injury but they have only demonstrated post-injury hypoexcitability. The goal of this study was to determine if neuronal hyperexcitability could be triggered by in vitro stretch injury. Controlled uniaxial stretch injury was delivered to a spatially delimited region of a spontaneously active network of cultured rat cortical neurons, yielding a region of stretch-injured neurons and adjacent regions of non-stretched neurons that did not directly experience stretch injury. Spontaneous electrical activity was measured in non-stretched and stretch-injured neurons, and in control neuronal networks not subjected to stretch injury. Non-stretched neurons in stretch-injured cultures displayed a three-fold increase in action potential firing rate and bursting activity 30-60 min post-injury. Stretch-injured neurons, however, displayed dramatically lower rates of action potential firing and bursting. These results demonstrate that acute hyperexcitability can be observed in non-stretched neurons located in regions adjacent to the site of stretch injury, consistent with reports that seizure activity can arise from regions surrounding the site of localized brain injury. Thus, this in vitro procedure for localized neuronal stretch injury may provide a model to study the earliest cellular changes in neuronal function associated with acute post-traumatic seizures.

  11. Mechanisms of Winner-Take-All and Group Selection in Neuronal Spiking Networks.

    Science.gov (United States)

    Chen, Yanqing

    2017-01-01

    A major function of central nervous systems is to discriminate different categories or types of sensory input. Neuronal networks accomplish such tasks by learning different sensory maps at several stages of the neural hierarchy, such that different neurons fire selectively to reflect different internal or external patterns and states. The exact mechanisms of such map formation processes in the brain are not completely understood. Here we study the mechanism by which a simple recurrent/reentrant neuronal network accomplishes group selection and discrimination of different inputs in order to generate sensory maps. We describe the conditions and mechanism of the transition from a rhythmic epileptic state (in which all neurons fire synchronously and indiscriminately to any input) to a winner-take-all state in which only a subset of neurons fire for a specific input. We prove an analytic condition under which a stable bump solution and a winner-take-all state can emerge from the local recurrent excitation-inhibition interactions in a three-layer spiking network with distinct excitatory and inhibitory populations, and demonstrate the importance of surround inhibitory connection topology for the stability of dynamic patterns in spiking neural networks.

  12. A combined Bodian-Nissl stain for improved network analysis in neuronal cell culture.

    Science.gov (United States)

    Hightower, M; Gross, G W

    1985-11-01

    Bodian and Nissl procedures were combined to stain dissociated mouse spinal cord cells cultured on coverslips. The Bodian technique stains fine neuronal processes in great detail as well as an intracellular fibrillar network concentrated around the nucleus and in proximal neurites. The Nissl stain clearly delimits neuronal cytoplasm in somata and in large dendrites. A combination of these techniques allows the simultaneous depiction of neuronal perikarya and all afferent and efferent processes. Costaining with little background staining by either procedure suggests high specificity for neurons. This procedure could be exploited for routine network analysis of cultured neurons.

  13. Oscillations in the bistable regime of neuronal networks.

    Science.gov (United States)

    Roxin, Alex; Compte, Albert

    2016-07-01

    Bistability between attracting fixed points in neuronal networks has been hypothesized to underlie persistent activity observed in several cortical areas during working memory tasks. In network models this kind of bistability arises due to strong recurrent excitation, sufficient to generate a state of high activity created in a saddle-node (SN) bifurcation. On the other hand, canonical network models of excitatory and inhibitory neurons (E-I networks) robustly produce oscillatory states via a Hopf (H) bifurcation due to the E-I loop. This mechanism for generating oscillations has been invoked to explain the emergence of brain rhythms in the β to γ bands. Although both bistability and oscillatory activity have been intensively studied in network models, there has not been much focus on the coincidence of the two. Here we show that when oscillations emerge in E-I networks in the bistable regime, their phenomenology can be explained to a large extent by considering coincident SN and H bifurcations, known as a codimension two Takens-Bogdanov bifurcation. In particular, we find that such oscillations are not composed of a stable limit cycle, but rather are due to noise-driven oscillatory fluctuations. Furthermore, oscillations in the bistable regime can, in principle, have arbitrarily low frequency.
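
    The phenomenology described above can be explored with a minimal noise-driven E-I rate model of Wilson-Cowan form, sketched below. The coupling weights, inputs and time constants are illustrative assumptions and are not taken from the paper; they may need adjustment to land exactly in the bistable regime, where noise then produces oscillatory fluctuations rather than a stable limit cycle.

```python
import numpy as np

def f(x):
    """Sigmoidal population transfer function."""
    return 1.0 / (1.0 + np.exp(-x))

def simulate(T=20.0, dt=1e-3, sigma=0.05, seed=4):
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    rE = np.zeros(n)
    rI = np.zeros(n)
    # Assumed parameters: strong recurrent excitation plus an E-I loop.
    wEE, wEI, wIE, wII = 16.0, 12.0, 15.0, 3.0
    IE, II = -3.0, -6.0
    tauE, tauI = 0.010, 0.020
    for k in range(1, n):
        drE = dt / tauE * (-rE[k-1] + f(wEE * rE[k-1] - wEI * rI[k-1] + IE))
        drI = dt / tauI * (-rI[k-1] + f(wIE * rE[k-1] - wII * rI[k-1] + II))
        rE[k] = rE[k-1] + drE + sigma * np.sqrt(dt) * rng.normal()  # noise on E
        rI[k] = rI[k-1] + drI
    return rE, rI

rE, rI = simulate()
print("E-rate range over the run: %.3f to %.3f" % (rE.min(), rE.max()))
```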

  14. Replicating receptive fields of simple and complex cells in primary visual cortex in a neuronal network model with temporal and population sparseness and reliability.

    Science.gov (United States)

    Tanaka, Takuma; Aoyagi, Toshio; Kaneko, Takeshi

    2012-10-01

    We propose a new principle for replicating receptive field properties of neurons in the primary visual cortex. We derive a learning rule for a feedforward network, which maintains a low firing rate for the output neurons (resulting in temporal sparseness) and allows only a small subset of the neurons in the network to fire at any given time (resulting in population sparseness). Our learning rule also sets the firing rates of the output neurons at each time step to near-maximum or near-minimum levels, resulting in neuronal reliability. The learning rule is simple enough to be written in spatially and temporally local forms. After the learning stage is performed using input image patches of natural scenes, output neurons in the model network are found to exhibit simple-cell-like receptive field properties. When the outputs of these simple-cell-like neurons are input to another model layer using the same learning rule, the second-layer output neurons after learning become less sensitive to the phase of gratings than the simple-cell-like input neurons. In particular, some of the second-layer output neurons become completely phase invariant, owing to the convergence of the connections from first-layer neurons with similar orientation selectivity to second-layer neurons in the model network. We examine the parameter dependencies of the receptive field properties of the model neurons after learning and discuss their biological implications. We also show that the localized learning rule is consistent with experimental results concerning neuronal plasticity and can replicate the receptive fields of simple and complex cells.

  15. The energy demand of fast neuronal network oscillations: insights from brain slice preparations

    Directory of Open Access Journals (Sweden)

    Oliver eKann

    2012-01-01

    Full Text Available Fast neuronal network oscillations in the gamma range (30-100 Hz) in the cerebral cortex have been implicated in higher cognitive functions such as sensory perception, working memory, and, perhaps, consciousness. However, little is known about the energy demand of gamma oscillations. This is mainly caused by technical limitations that are associated with simultaneous recordings of neuronal activity and energy metabolism in small neuronal networks and at the level of mitochondria in vivo. Thus, recent studies have focused on brain slice preparations to address the energy demand of gamma oscillations in vitro. Here, reports will be summarized and discussed that combined electrophysiological recordings, oxygen sensor microelectrodes and live-cell fluorescence imaging in acutely prepared slices and organotypic slice cultures of the hippocampus from both mouse and rat. These reports consistently show that gamma oscillations can be reliably induced in hippocampal slice preparations by different pharmacological tools. They suggest that gamma oscillations are associated with high energy demand, requiring both rapid adaptation of oxidative energy metabolism and sufficient supply of oxygen and nutrients. These findings might help to explain the exceptional vulnerability of higher cognitive functions during pathological processes of the brain, such as circulatory disturbances, genetic mitochondrial diseases, and neurodegeneration.

  16. Modulation of neuronal network activity with ghrelin

    NARCIS (Netherlands)

    Stoyanova, Irina; Rutten, Wim; le Feber, Jakob

    2012-01-01

    Ghrelin is a neuropeptide regulating multiple physiological processes, including high brain functions such as learning and memory formation. However, the effect of ghrelin on network activity patterns and developments has not been studied yet. Therefore, we used dissociated cortical neurons plated

  17. Efficient transmission of subthreshold signals in complex networks of spiking neurons.

    Science.gov (United States)

    Torres, Joaquin J; Elices, Irene; Marro, J

    2015-01-01

    We investigate the efficient transmission and processing of weak, subthreshold signals in a realistic neural medium in the presence of different levels of the underlying noise. Assuming Hebbian weights for maximal synaptic conductances--which naturally balance the network of excitatory and inhibitory synapses--and considering short-term synaptic plasticity affecting such conductances, we found different dynamic phases in the system. These include a memory phase where populations of neurons remain synchronized, an oscillatory phase where transitions between different synchronized populations of neurons appear, and an asynchronous or noisy phase. When a weak stimulus input is applied to each neuron while the level of noise in the medium is increased, we found efficient transmission of such stimuli around the transition and critical points separating the different phases, at well-defined levels of stochasticity in the system. We proved that this intriguing phenomenon is quite robust, as it occurs in different situations including several types of synaptic plasticity, different types and numbers of stored patterns, and diverse network topologies, namely diluted networks and complex topologies such as scale-free and small-world networks. We conclude that the robustness of the phenomenon in different realistic scenarios, including spiking neurons, short-term synaptic plasticity and complex network topologies, makes it very likely that it could also occur in actual neural systems, as recent psychophysical experiments suggest.

  18. Efficient transmission of subthreshold signals in complex networks of spiking neurons.

    Directory of Open Access Journals (Sweden)

    Joaquin J Torres

    Full Text Available We investigate the efficient transmission and processing of weak, subthreshold signals in a realistic neural medium in the presence of different levels of the underlying noise. Assuming Hebbian weights for maximal synaptic conductances--which naturally balance the network of excitatory and inhibitory synapses--and considering short-term synaptic plasticity affecting such conductances, we found different dynamic phases in the system. These include a memory phase where populations of neurons remain synchronized, an oscillatory phase where transitions between different synchronized populations of neurons appear, and an asynchronous or noisy phase. When a weak stimulus input is applied to each neuron while the level of noise in the medium is increased, we found efficient transmission of such stimuli around the transition and critical points separating the different phases, at well-defined levels of stochasticity in the system. We proved that this intriguing phenomenon is quite robust, as it occurs in different situations including several types of synaptic plasticity, different types and numbers of stored patterns, and diverse network topologies, namely diluted networks and complex topologies such as scale-free and small-world networks. We conclude that the robustness of the phenomenon in different realistic scenarios, including spiking neurons, short-term synaptic plasticity and complex network topologies, makes it very likely that it could also occur in actual neural systems, as recent psychophysical experiments suggest.

  19. Niche-dependent development of functional neuronal networks from embryonic stem cell-derived neural populations

    Directory of Open Access Journals (Sweden)

    Siebler Mario

    2009-08-01

    Full Text Available Abstract Background: The present work was performed to investigate the ability of two different embryonic stem (ES) cell-derived neural precursor populations to generate functional neuronal networks in vitro. The first ES cell-derived neural precursor population was cultivated as free-floating neural aggregates, which are known to form a developmental niche comprising different types of neural cells, including neural precursor cells (NPCs), progenitor cells and even further matured cells. This niche provides by itself a variety of different growth factors and extracellular matrix proteins that influence the proliferation and differentiation of neural precursor and progenitor cells. The second population was cultivated adherently in monolayer cultures to control the extracellular environment most stringently. This population comprises highly homogeneous NPCs which are supposed to represent an attractive way to provide well-defined neuronal progeny. However, the ability of these different ES cell-derived immature neural cell populations to generate functional neuronal networks has not been assessed so far. Results: While both precursor populations were shown to differentiate into sufficient quantities of mature NeuN+ neurons that also express GABA or vesicular-glutamate-transporter-2 (vGlut2), only aggregate-derived neuronal populations exhibited a synchronously oscillating network activity 2–4 weeks after initiating the differentiation, as detected by microelectrode array technology. Neurons derived from homogeneous NPCs within monolayer cultures merely showed uncorrelated spiking activity, even when differentiated for up to 12 weeks. We demonstrated that these neurons exhibited sparsely ramified neurites and an embryonic vGlut2 distribution, suggesting an inhibited terminal neuronal maturation. In comparison, neurons derived from heterogeneous populations within neural aggregates appeared fully mature, with a dense neurite network and punctuated

  20. A Neuron- and a Synapse Chip for Artificial Neural Networks

    DEFF Research Database (Denmark)

    Lansner, John; Lehmann, Torsten

    1992-01-01

    A cascadable, analog, CMOS chip set has been developed for hardware implementations of artificial neural networks (ANNs): I) a neuron chip containing an array of neurons with hyperbolic tangent activation functions and adjustable gains, and II) a synapse chip (or a matrix-vector multiplier) where...

  1. Memristor-based neural networks: Synaptic versus neuronal stochasticity

    KAUST Repository

    Naous, Rawan

    2016-11-02

    In neuromorphic circuits, stochasticity in the cortex can be mapped onto the synaptic or neuronal components. The hardware emulation of these stochastic neural networks is currently being extensively studied using resistive memories or memristors. The ionic process involved in the underlying switching behavior of the memristive elements is considered the main source of stochasticity in their operation. Building on its inherent variability, the memristor is incorporated into abstract models of stochastic neurons and synapses. Two approaches to stochastic neural networks are investigated. Aside from size and area considerations, the main points of comparison between these two approaches are the impact on system performance, in terms of accuracy, recognition rates, and learning, and where the memristor best fits into place.
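
    The two placements of stochasticity compared above can be caricatured as in the sketch below: either an ensemble of binary memristive synapses whose switching probability encodes an analog weight, or deterministic synapses feeding a neuron that fires with a sigmoidal probability. The switching probabilities and the sigmoid gain are illustrative assumptions, not device-level models.

```python
import numpy as np

rng = np.random.default_rng(5)

def stochastic_synapses(weight_target, n_devices=100):
    """Binary memristive synapses: the ON-probability encodes the analog weight."""
    return (rng.random(n_devices) < weight_target).astype(float)

def stochastic_neuron(net_input, gain=2.0):
    """Deterministic synapses, stochastic (escape-noise) neuron."""
    p_fire = 1.0 / (1.0 + np.exp(-gain * net_input))
    return float(rng.random() < p_fire)

x = np.ones(100)                          # presynaptic activity
w = stochastic_synapses(0.3)              # ~30% of the devices switch ON
print("synaptic stochasticity, effective weight:", (w * x).mean())
print("neuronal stochasticity, spike emitted:", stochastic_neuron(0.3))
```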

  2. Chimera-like states in a neuronal network model of the cat brain

    Science.gov (United States)

    Santos, M. S.; Szezech, J. D.; Borges, F. S.; Iarosz, K. C.; Caldas, I. L.; Batista, A. M.; Viana, R. L.; Kurths, J.

    2017-08-01

    Neuronal systems have been modeled by complex networks at different description levels. Recently, it has been verified that networks can simultaneously exhibit one coherent and one incoherent domain, known as chimera states. In this work, we study the existence of chimera states in a network whose connectivity matrix is based on the cat cerebral cortex. The cerebral cortex of the cat can be separated into 65 cortical areas organised into four cognitive regions: visual, auditory, somatosensory-motor and frontolimbic. We consider a network where the local dynamics is given by the Hindmarsh-Rose model. The Hindmarsh-Rose equations are a well known model of neuronal activity that has been used to simulate the membrane potential of neurons. Here, we analyse under which conditions chimera states are present, as well as the effects induced by the coupling intensity on them. We observe the existence of chimera states in which the incoherent structure can be composed of desynchronised spikes or desynchronised bursts. Moreover, we find that chimera states with desynchronised bursts are more robust to neuronal noise than those with desynchronised spikes.
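
    The local dynamics used for each node of the cat-cortex network above is the Hindmarsh-Rose model, sketched below for a single neuron with standard textbook parameters. The coupling scheme and any network-specific parameter values are not included here and would have to be added for an actual chimera-state study.

```python
import numpy as np

# Standard Hindmarsh-Rose parameters (bursting regime).
a, b, c, d = 1.0, 3.0, 1.0, 5.0
r, s, xR, I = 0.006, 4.0, -1.6, 3.25

def hr_step(x, y, z, dt=0.01):
    """One explicit Euler step of the three Hindmarsh-Rose equations."""
    dx = y - a * x**3 + b * x**2 - z + I
    dy = c - d * x**2 - y
    dz = r * (s * (x - xR) - z)
    return x + dt * dx, y + dt * dy, z + dt * dz

x, y, z = -1.0, 0.0, 0.0
trace = []
for _ in range(200000):                   # long run so the bursting is visible
    x, y, z = hr_step(x, y, z)
    trace.append(x)
print("membrane-like variable range: %.2f to %.2f" % (min(trace), max(trace)))
```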

  3. Self-organized criticality occurs in non-conservative neuronal networks during `up' states

    Science.gov (United States)

    Millman, Daniel; Mihalas, Stefan; Kirkwood, Alfredo; Niebur, Ernst

    2010-10-01

    During sleep, under anaesthesia and in vitro, cortical neurons in sensory, motor, association and executive areas fluctuate between so-called up and down states, which are characterized by distinct membrane potentials and spike rates. Another phenomenon observed in preparations similar to those that exhibit up and down states (such as anaesthetized rats, brain slices and cultures devoid of sensory input, as well as awake monkey cortex) is self-organized criticality (SOC). SOC is characterized by activity 'avalanches' with a branching parameter near unity and a size distribution that obeys a power law with a critical exponent of about -3/2. Recent work has demonstrated SOC in conservative neuronal network models, but critical behaviour breaks down when biologically realistic 'leaky' neurons are introduced. Here, we report robust SOC behaviour in networks of non-conservative leaky integrate-and-fire neurons with short-term synaptic depression. We show analytically and numerically that these networks typically have two stable activity levels, corresponding to up and down states, that the networks switch spontaneously between these states and that up states are critical and down states are subcritical.
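
    Criticality in this sense is usually quantified from binned population activity, as in the sketch below: contiguous runs of non-empty bins form avalanches, whose sizes should follow the roughly -3/2 power law, and the bin-to-bin activity ratio gives the branching parameter (near 1 at criticality). The synthetic activity used here is illustrative and is not produced by the up/down-state network of the paper.

```python
import numpy as np

def avalanches(binned):
    """Split binned activity into avalanches separated by empty bins."""
    sizes, branching, current = [], [], []
    for n in binned:
        if n > 0:
            current.append(n)
        elif current:
            sizes.append(sum(current))
            branching += [current[i + 1] / current[i] for i in range(len(current) - 1)]
            current = []
    if current:
        sizes.append(sum(current))
    return np.array(sizes), (np.mean(branching) if branching else np.nan)

# Toy binned activity: Poisson counts with random silent bins (illustrative).
rng = np.random.default_rng(6)
activity = rng.poisson(0.9, 100000) * (rng.random(100000) < 0.5)
sizes, sigma = avalanches(activity)
print("avalanches:", len(sizes), " branching parameter:", sigma)
```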

  4. Heterogeneous delay-induced asynchrony and resonance in a small-world neuronal network system

    Science.gov (United States)

    Yu, Wen-Ting; Tang, Jun; Ma, Jun; Yang, Xianqing

    2016-06-01

    A neuronal network often involves time delays caused by the finite signal propagation time in a given biological network. These time delays are not homogeneous in a biological system. The heterogeneous delay-induced asynchrony and resonance in a noisy small-world neuronal network system are numerically studied in this work by calculating a synchronization measure and the spike interval distribution. We focus on three different delay conditions: double-valued delay, triple-valued delay, and Gaussian-distributed delay. Our results show the following: 1) the heterogeneity in delay results in asynchronous firing in the neuronal network, and 2) maximum synchronization can be achieved through resonance given that the delay values are integer or half-integer multiples of each other.

  5. Impact of delays on the synchronization transitions of modular neuronal networks with hybrid synapses

    Science.gov (United States)

    Liu, Chen; Wang, Jiang; Yu, Haitao; Deng, Bin; Wei, Xile; Tsang, Kaiming; Chan, Wailok

    2013-09-01

    The combined effects of the information transmission delay and the ratio of electrical to chemical synapses on the synchronization transitions in a hybrid modular neuronal network are investigated in this paper. Numerical results show that the synchronization of neuronal activities can be either promoted or destroyed as the information transmission delay increases, irrespective of the probability of electrical synapses in the hybrid-synaptic network. Interestingly, when the number of electrical synapses exceeds a certain level, further increasing their proportion can obviously enhance the spatiotemporal synchronization transitions. Moreover, the coupling strength has a significant effect on the synchronization transition. The dominant type of synapse always has a more profound effect on the emergence of synchronous behaviors. Furthermore, the results for the modular neuronal network structures demonstrate that excessive partitioning of the modular network may dramatically impair neuronal synchronization. Considering that information transmission delays are inevitable in intra- and inter-neuronal network communication, the obtained results may have important implications for the exploration of the synchronization mechanism underlying several neural system diseases such as Parkinson's disease.

  6. In Vitro Reconstruction of Neuronal Networks Derived from Human iPS Cells Using Microfabricated Devices.

    Directory of Open Access Journals (Sweden)

    Yuzo Takayama

    Full Text Available Morphology and function of the nervous system is maintained via well-coordinated processes both in central and peripheral nervous tissues, which govern the homeostasis of organs/tissues. Impairments of the nervous system induce neuronal disorders such as peripheral neuropathy or cardiac arrhythmia. Although further investigation is warranted to reveal the molecular mechanisms of progression in such diseases, appropriate model systems mimicking the patient-specific communication between neurons and organs are not established yet. In this study, we reconstructed the neuronal network in vitro either between neurons of the human induced pluripotent stem (iPS) cell-derived peripheral nervous system (PNS) and central nervous system (CNS), or between PNS neurons and cardiac cells in a morphologically and functionally compartmentalized manner. Networks were constructed in photolithographically microfabricated devices with two culture compartments connected by 20 microtunnels. We confirmed that PNS and CNS neurons connected via synapses and formed a network. Additionally, calcium-imaging experiments showed that the bundles originating from the PNS neurons were functionally active and responded reproducibly to external stimuli. Next, we confirmed that CNS neurons showed an increase in calcium activity during electrical stimulation of networked bundles from PNS neurons in order to demonstrate the formation of functional cell-cell interactions. We also confirmed the formation of synapses between PNS neurons and mature cardiac cells. These results indicate that compartmentalized culture devices are promising tools for reconstructing network-wide connections between PNS neurons and various organs, and might help to understand patient-specific molecular and functional mechanisms under normal and pathological conditions.

  7. Multiple synchronization transitions in scale-free neuronal networks with electrical and chemical hybrid synapses

    International Nuclear Information System (INIS)

    Liu, Chen; Wang, Jiang; Wang, Lin; Yu, Haitao; Deng, Bin; Wei, Xile; Tsang, Kaiming; Chan, Wailok

    2014-01-01

    Highlights: • Synchronization transitions in hybrid scale-free neuronal networks are investigated. • Multiple synchronization transitions can be induced by the time delay. • The effect of synchronization transitions depends on the ratio of electrical to chemical synapses. • Coupling strength and the density of inter-neuronal links can enhance the synchronization. -- Abstract: The impacts of information transmission delay on the synchronization transitions in scale-free neuronal networks with electrical and chemical hybrid synapses are investigated. Numerical results show that multiple synchronization transition regions can be induced by different information transmission delays. As the time delay increases, the synchronization of neuronal activities can be enhanced or destroyed, irrespective of the probability of chemical synapses in the whole hybrid neuronal network. In particular, for a larger probability of electrical synapses, the regions of synchronous activity become broader, owing to the stronger synchronization ability of electrical synapses compared with chemical ones. Moreover, it can be found that increasing the coupling strength promotes synchronization monotonously, playing a role similar to increasing the probability of electrical synapses. Interestingly, the structure and parameters of the scale-free neuronal networks, especially the structural evolution, play a more subtle role in the synchronization transitions. In the network formation process, it is found that the more a new vertex attaches to old vertices already present in the network, the more synchronous activity will emerge

  8. Automatic Generation of Connectivity for Large-Scale Neuronal Network Models through Structural Plasticity.

    Science.gov (United States)

    Diaz-Pier, Sandra; Naveau, Mikaël; Butz-Ostendorf, Markus; Morrison, Abigail

    2016-01-01

    With the emergence of new high performance computation technology in the last decade, the simulation of large scale neural networks which are able to reproduce the behavior and structure of the brain has finally become an achievable target of neuroscience. Due to the number of synaptic connections between neurons and the complexity of biological networks, most contemporary models have manually defined or static connectivity. However, it is expected that modeling the dynamic generation and deletion of the links among neurons, locally and between different regions of the brain, is crucial to unravel important mechanisms associated with learning, memory and healing. Moreover, for many neural circuits that could potentially be modeled, activity data is more readily and reliably available than connectivity data. Thus, a framework that enables networks to wire themselves on the basis of specified activity targets can be of great value in specifying network models where connectivity data is incomplete or has large error margins. To address these issues, in the present work we present an implementation of a model of structural plasticity in the neural network simulator NEST. In this model, synapses consist of two parts, a pre- and a post-synaptic element. Synapses are created and deleted during the execution of the simulation following local homeostatic rules until a mean level of electrical activity is reached in the network. We assess the scalability of the implementation in order to evaluate its potential usage in the self generation of connectivity of large scale networks. We show and discuss the results of simulations on simple two population networks and more complex models of the cortical microcircuit involving 8 populations and 4 layers using the new framework.
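
    A stripped-down version of such a structural-plasticity rule is sketched below: every neuron grows or retracts free pre- and post-synaptic elements according to a linear homeostatic growth curve, and free elements are paired at random into synapses until activity approaches its target. The growth rate, target, toy activity model and the omission of element deletion are all simplifying assumptions and do not reproduce the NEST implementation.

```python
import numpy as np

rng = np.random.default_rng(7)
n, target, nu, dt = 50, 5.0, 0.1, 1.0
rate = rng.uniform(0.0, 10.0, n)       # stand-in for the measured activity
axonal = np.zeros(n)                   # free pre-synaptic elements per neuron
dendritic = np.zeros(n)                # free post-synaptic elements per neuron
synapses = []                          # list of (pre, post) connections

for step in range(200):
    growth = nu * (1.0 - rate / target) * dt   # >0 below target, <0 above it
    axonal = np.maximum(axonal + growth, 0.0)
    dendritic = np.maximum(dendritic + growth, 0.0)
    # Pair whole free elements at random into new synapses.
    pre = np.repeat(np.arange(n), axonal.astype(int))
    post = np.repeat(np.arange(n), dendritic.astype(int))
    rng.shuffle(pre)
    rng.shuffle(post)
    k = min(pre.size, post.size)
    for i, j in zip(pre[:k], post[:k]):
        synapses.append((int(i), int(j)))
    axonal -= np.bincount(pre[:k], minlength=n)
    dendritic -= np.bincount(post[:k], minlength=n)
    # Toy activity model: input from incoming synapses raises the rate.
    indegree = np.bincount(np.array([j for _, j in synapses], dtype=int), minlength=n)
    rate += dt * (0.05 * indegree - 0.1 * rate)

print("synapses created:", len(synapses), " mean rate:", rate.mean())
```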

  9. Network dynamics in nociceptive pathways assessed by the neuronal avalanche model

    Directory of Open Access Journals (Sweden)

    Wu José

    2012-04-01

    Full Text Available Abstract Background: Traditional electroencephalography provides a critical assessment of pain responses. The perception of pain, however, may involve a series of signal transmission pathways in higher cortical function. Recent studies have shown that a mathematical method, the neuronal avalanche model, may be applied to evaluate higher-order network dynamics. The neuronal avalanche is a cascade of neuronal activity, the size distribution of which can be approximated by a power law relationship manifested by the slope of a straight line (i.e., the α value). We investigated whether the neuronal avalanche could be a useful index for nociceptive assessment. Findings: Neuronal activity was recorded with a 4 × 8 multichannel electrode array in the primary somatosensory cortex (S1) and anterior cingulate cortex (ACC). Under light anesthesia, peripheral pinch stimulation increased the slope of the α value in both the ACC and S1, whereas brush stimulation increased the α value only in the S1. The increase in α values was blocked in both regions under deep anesthesia. The increase in α values in the ACC induced by peripheral pinch stimulation was blocked by medial thalamic lesion, but the increase in α values in the S1 induced by brush and pinch stimulation was not affected. Conclusions: The neuronal avalanche model shows a critical state in the cortical network for noxious-related signal processing. The α value may provide an index of brain network activity that distinguishes the responses to somatic stimuli from the control state. These network dynamics may be valuable for the evaluation of acute nociceptive processes and may be applied to chronic pathological pain conditions.
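
    The α value can be estimated as in the sketch below by fitting a line to the avalanche size distribution on log-log axes; maximum-likelihood estimators are usually preferred in practice, but the regression mirrors the "slope of a straight line" description in the abstract. The synthetic power-law sample stands in for recorded avalanche sizes.

```python
import numpy as np

def alpha_from_sizes(sizes, n_bins=30):
    """Slope of the size distribution on log-log axes (the alpha value)."""
    sizes = np.asarray(sizes, dtype=float)
    bins = np.logspace(np.log10(sizes.min()), np.log10(sizes.max()), n_bins)
    counts, edges = np.histogram(sizes, bins=bins, density=True)
    centers = np.sqrt(edges[:-1] * edges[1:])        # geometric bin centers
    mask = counts > 0
    slope, _ = np.polyfit(np.log10(centers[mask]), np.log10(counts[mask]), 1)
    return slope

# Synthetic power-law distributed sizes with exponent ~ -1.5,
# drawn by inverting the survival function S(s) = s^(-0.5).
rng = np.random.default_rng(8)
u = rng.random(20000)
sizes = (1.0 - u) ** (-2.0)
print("estimated alpha:", alpha_from_sizes(sizes))
```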

  10. Cytokines and cytokine networks target neurons to modulate long-term potentiation.

    Science.gov (United States)

    Prieto, G Aleph; Cotman, Carl W

    2017-04-01

    Cytokines play crucial roles in the communication between brain cells including neurons and glia, as well as in the brain-periphery interactions. In the brain, cytokines modulate long-term potentiation (LTP), a cellular correlate of memory. Whether cytokines regulate LTP by direct effects on neurons or by indirect mechanisms mediated by non-neuronal cells is poorly understood. Elucidating neuron-specific effects of cytokines has been challenging because most brain cells express cytokine receptors. Moreover, cytokines commonly increase the expression of multiple cytokines in their target cells, thus increasing the complexity of brain cytokine networks even after single-cytokine challenges. Here, we review evidence on both direct and indirect-mediated modulation of LTP by cytokines. We also describe novel approaches based on neuron- and synaptosome-enriched systems to identify cytokines able to directly modulate LTP, by targeting neurons and synapses. These approaches can test multiple samples in parallel, thus allowing the study of multiple cytokines simultaneously. Hence, a cytokine networks perspective coupled with neuron-specific analysis may contribute to delineation of maps of the modulation of LTP by cytokines.

  11. Extremely Scalable Spiking Neuronal Network Simulation Code: From Laptops to Exascale Computers

    Science.gov (United States)

    Jordan, Jakob; Ippen, Tammo; Helias, Moritz; Kitayama, Itaru; Sato, Mitsuhisa; Igarashi, Jun; Diesmann, Markus; Kunkel, Susanne

    2018-01-01

    State-of-the-art software tools for neuronal network simulations scale to the largest computing systems available today and enable investigations of large-scale networks of up to 10 % of the human cortex at a resolution of individual neurons and synapses. Due to an upper limit on the number of incoming connections of a single neuron, network connectivity becomes extremely sparse at this scale. To manage computational costs, simulation software ultimately targeting the brain scale needs to fully exploit this sparsity. Here we present a two-tier connection infrastructure and a framework for directed communication among compute nodes accounting for the sparsity of brain-scale networks. We demonstrate the feasibility of this approach by implementing the technology in the NEST simulation code and we investigate its performance in different scaling scenarios of typical network simulations. Our results show that the new data structures and communication scheme prepare the simulation kernel for post-petascale high-performance computing facilities without sacrificing performance in smaller systems. PMID:29503613

  12. Extremely Scalable Spiking Neuronal Network Simulation Code: From Laptops to Exascale Computers.

    Science.gov (United States)

    Jordan, Jakob; Ippen, Tammo; Helias, Moritz; Kitayama, Itaru; Sato, Mitsuhisa; Igarashi, Jun; Diesmann, Markus; Kunkel, Susanne

    2018-01-01

    State-of-the-art software tools for neuronal network simulations scale to the largest computing systems available today and enable investigations of large-scale networks of up to 10 % of the human cortex at a resolution of individual neurons and synapses. Due to an upper limit on the number of incoming connections of a single neuron, network connectivity becomes extremely sparse at this scale. To manage computational costs, simulation software ultimately targeting the brain scale needs to fully exploit this sparsity. Here we present a two-tier connection infrastructure and a framework for directed communication among compute nodes accounting for the sparsity of brain-scale networks. We demonstrate the feasibility of this approach by implementing the technology in the NEST simulation code and we investigate its performance in different scaling scenarios of typical network simulations. Our results show that the new data structures and communication scheme prepare the simulation kernel for post-petascale high-performance computing facilities without sacrificing performance in smaller systems.

  13. Extremely Scalable Spiking Neuronal Network Simulation Code: From Laptops to Exascale Computers

    Directory of Open Access Journals (Sweden)

    Jakob Jordan

    2018-02-01

    Full Text Available State-of-the-art software tools for neuronal network simulations scale to the largest computing systems available today and enable investigations of large-scale networks of up to 10 % of the human cortex at a resolution of individual neurons and synapses. Due to an upper limit on the number of incoming connections of a single neuron, network connectivity becomes extremely sparse at this scale. To manage computational costs, simulation software ultimately targeting the brain scale needs to fully exploit this sparsity. Here we present a two-tier connection infrastructure and a framework for directed communication among compute nodes accounting for the sparsity of brain-scale networks. We demonstrate the feasibility of this approach by implementing the technology in the NEST simulation code and we investigate its performance in different scaling scenarios of typical network simulations. Our results show that the new data structures and communication scheme prepare the simulation kernel for post-petascale high-performance computing facilities without sacrificing performance in smaller systems.

  14. An FPGA Platform for Real-Time Simulation of Spiking Neuronal Networks.

    Science.gov (United States)

    Pani, Danilo; Meloni, Paolo; Tuveri, Giuseppe; Palumbo, Francesca; Massobrio, Paolo; Raffo, Luigi

    2017-01-01

    In recent years, the idea of dynamically interfacing biological neurons with artificial ones has become more and more pressing. The reason is essentially the design of innovative neuroprostheses in which biological cell assemblies of the brain can be substituted by artificial ones. For closed-loop experiments with biological neuronal networks interfaced with in silico modeled networks, several technological challenges need to be faced, from the low-level interfacing between the living tissue and the computational model to the implementation of the latter in a suitable form for real-time processing. Field programmable gate arrays (FPGAs) can improve flexibility when simple neuronal models are required, obtaining good accuracy, real-time performance, and the possibility of creating a hybrid system without any custom hardware, just by programming the hardware to achieve the required functionality. In this paper, this possibility is explored by presenting a modular and efficient FPGA design of an in silico spiking neural network exploiting the Izhikevich model. The proposed system, prototypically implemented on a Xilinx Virtex 6 device, is able to simulate a fully connected network counting up to 1,440 neurons, in real time, at a sampling rate of 10 kHz, which is reasonable for small- to medium-scale extra-cellular closed-loop experiments.
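
    The per-neuron arithmetic that such a design iterates at 10 kHz is compact, as the sketch below shows for a single regular-spiking Izhikevich neuron in floating point; an FPGA implementation would typically use fixed-point arithmetic and update many neurons in parallel. The constant input current is an illustrative assumption.

```python
# Standard Izhikevich parameters for a regular-spiking cortical neuron.
a, b, c, d = 0.02, 0.2, -65.0, 8.0
dt = 0.1                                 # ms, matching a 10 kHz update rate

def izhikevich_step(v, u, I):
    """One Euler step of the Izhikevich model, with spike-and-reset."""
    v = v + dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
    u = u + dt * a * (b * v - u)
    fired = v >= 30.0
    if fired:
        v, u = c, u + d
    return v, u, fired

v, u, spikes = -65.0, b * -65.0, 0
for step in range(10000):                # 1 s of simulated time
    v, u, fired = izhikevich_step(v, u, I=10.0)
    spikes += fired
print("spike count in 1 s:", spikes)
```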

  15. Spiking Regularity and Coherence in Complex Hodgkin–Huxley Neuron Networks

    International Nuclear Information System (INIS)

    Zhi-Qiang, Sun; Ping, Xie; Wei, Li; Peng-Ye, Wang

    2010-01-01

    We study the effects of the strength of coupling between neurons on the spiking regularity and coherence in a complex network with randomly connected Hodgkin–Huxley neurons driven by colored noise. It is found that for the given topology realization and colored noise correlation time, there exists an optimal strength of coupling, at which the spiking regularity of the network reaches the best level. Moreover, when the temporal regularity reaches the best level, the spatial coherence of the system has already increased to a relatively high level. In addition, for the given number of neurons and noise correlation time, the values of average regularity and spatial coherence at the optimal strength of coupling are nearly independent of the topology realization. Furthermore, there exists an optimal value of colored noise correlation time at which the spiking regularity can reach its best level. These results may be helpful for understanding the real neuronal world. (cross-disciplinary physics and related areas of science and technology)
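
    One common way to quantify the spiking regularity discussed above is the coefficient of variation (CV) of interspike intervals, where smaller values indicate more regular spiking. The Python snippet below is a generic illustration of that measure; it is not necessarily the exact regularity statistic used in the cited study.

        import numpy as np

        def isi_cv(spike_times):
            """Coefficient of variation of interspike intervals for one spike train."""
            isi = np.diff(np.sort(np.asarray(spike_times, dtype=float)))
            return isi.std() / isi.mean()

        def population_regularity(spike_trains):
            """Average regularity over a population, expressed as 1/CV so that larger means more regular."""
            return np.mean([1.0 / isi_cv(st) for st in spike_trains if len(st) > 2])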

  16. Leader neurons in population bursts of 2D living neural networks

    International Nuclear Information System (INIS)

    Eckmann, J-P; Zbinden, Cyrille; Jacobi, Shimshon; Moses, Elisha; Marom, Shimon

    2008-01-01

    Eytan and Marom (2006 J. Neurosci. 26 8465-76) recently showed that the spontaneous bursting activity of rat neuron cultures includes 'first-to-fire' cells that consistently fire earlier than others. Here, we analyze the behavior of these neurons in long-term recordings of spontaneous activity of rat hippocampal and rat cortical neuron cultures from three different laboratories. We identify precursor events that may either subside ('aborted bursts') or can lead to a full-blown burst ('pre-bursts'). We find that the activation in the pre-burst typically has a first neuron ('leader'), followed by a localized response in its neighborhood. Locality is diminished in the bursts themselves. The long-term dynamics of the leaders is relatively robust, evolving with a half-life of 23-34 h. Stimulation of the culture alters the leader distribution, but the distribution stabilizes within about 1 h. We show that the leaders carry information about the identity of the burst, as measured by the signature of the number of spikes per neuron in a burst. The number of spikes from leaders in the first few spikes of a precursor event is furthermore shown to be predictive with regard to the transition into a burst (pre-burst versus aborted burst). We conclude that the leaders play a role in the development of the bursts and conjecture that they are part of an underlying sub-network that is excited first and then acts as a nucleation center for the burst

  17. Decoding spikes in a spiking neuronal network

    Energy Technology Data Exchange (ETDEWEB)

    Feng Jianfeng [Department of Informatics, University of Sussex, Brighton BN1 9QH (United Kingdom); Ding, Mingzhou [Department of Mathematics, Florida Atlantic University, Boca Raton, FL 33431 (United States)

    2004-06-04

    We investigate how to reliably decode the input information from the output of a spiking neuronal network. A maximum likelihood estimator of the input signal, together with its Fisher information, is rigorously calculated. The advantage of the maximum likelihood estimation over the 'brute-force rate coding' estimate is clearly demonstrated. It is pointed out that the ergodic assumption in neuroscience, i.e. a temporal average is equivalent to an ensemble average, is in general not true. Averaging over an ensemble of neurons usually gives a biased estimate of the input information. A method on how to compensate for the bias is proposed. Reconstruction of dynamical input signals with a group of spiking neurons is extensively studied and our results show that less than a spike is sufficient to accurately decode dynamical inputs.
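
    The Python sketch below illustrates the general idea of maximum-likelihood decoding of an input from spike counts, contrasted with a simple rate-coding inversion. It assumes, purely for illustration, Poisson spike counts whose rate depends linearly on the input; this is not the spiking model analyzed in the paper, and the gain and baseline parameters are hypothetical.

        import numpy as np
        from scipy.optimize import minimize_scalar

        def ml_decode(spike_counts, gain, baseline, T):
            """ML estimate of a constant input s, assuming (for illustration only) Poisson
            counts with rate = baseline + gain * s over an observation window of length T."""
            counts = np.asarray(spike_counts, dtype=float)

            def neg_log_likelihood(s):
                rate = np.maximum(baseline + gain * s, 1e-12) * T
                return np.sum(rate - counts * np.log(rate))

            return minimize_scalar(neg_log_likelihood, bounds=(-10.0, 10.0), method="bounded").x

        def rate_decode(spike_counts, gain, baseline, T):
            """'Brute-force rate coding' estimate: invert the population-averaged firing rate."""
            return (np.mean(spike_counts) / T - baseline) / gain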

  18. Decoding spikes in a spiking neuronal network

    International Nuclear Information System (INIS)

    Feng Jianfeng; Ding, Mingzhou

    2004-01-01

    We investigate how to reliably decode the input information from the output of a spiking neuronal network. A maximum likelihood estimator of the input signal, together with its Fisher information, is rigorously calculated. The advantage of the maximum likelihood estimation over the 'brute-force rate coding' estimate is clearly demonstrated. It is pointed out that the ergodic assumption in neuroscience, i.e. a temporal average is equivalent to an ensemble average, is in general not true. Averaging over an ensemble of neurons usually gives a biased estimate of the input information. A method on how to compensate for the bias is proposed. Reconstruction of dynamical input signals with a group of spiking neurons is extensively studied and our results show that less than a spike is sufficient to accurately decode dynamical inputs

  19. [Functional organization and structure of the serotonergic neuronal network of terrestrial snail].

    Science.gov (United States)

    Nikitin, E S; Balaban, P M

    2011-01-01

    Extending our knowledge of how the brain works requires continual improvement of methods for recording neuronal activity and an increase in the number of neurons recorded simultaneously, in order to better understand the collective operation of neuronal networks and assemblies. Conventional methods allow simultaneous intracellular recording of up to 2-5 neurons (membrane potentials, currents, or monosynaptic connections), or observation of spiking in neuronal groups with subsequent discrimination of individual spikes, at the cost of losing details of the membrane potential dynamics. We recorded the activity of a compact group of serotonergic neurons (up to 56 simultaneously) in the ganglion of a terrestrial mollusk using optical recording of membrane potential, which allowed us to record individual action potentials in detail, together with their parameters, and to reveal the morphology of the recorded neurons. We demonstrated clear clustering within the group with respect to action potential dynamics and the phasic or tonic components of the neuronal responses to external electrophysiological and tactile stimuli. We also showed that the identified neuron Pd2 could induce activation of a significant number of neurons in the group, whereas neuron Pd4 did not induce any activation; moreover, its own activation was delayed relative to that of the reacting group of neurons. Our data strongly support the concept that the network may delegate its integrative function to a single neuron.

  20. Effects of the network structure and coupling strength on the noise-induced response delay of a neuronal network

    International Nuclear Information System (INIS)

    Ozer, Mahmut; Uzuntarla, Muhammet

    2008-01-01

    The Hodgkin-Huxley (H-H) neuron model driven by stimuli just above threshold shows a noise-induced response delay with respect to time to the first spike for a certain range of noise strengths, an effect called 'noise delayed decay' (NDD). We study the response time of a network of coupled H-H neurons, and investigate how the NDD can be affected by the connection topology of the network and the coupling strength. We show that the NDD effect exists for weak and intermediate coupling strengths, whereas it disappears for strong coupling strength regardless of the connection topology. We also show that although the network structure has very little effect on the NDD for a weak coupling strength, the network structure plays a key role for an intermediate coupling strength by decreasing the NDD effect with the increasing number of random shortcuts, and thus provides an additional operating regime, that is absent in the regular network, in which the neurons may also exploit a spike time code

  1. Computational Models of Neuron-Astrocyte Interactions Lead to Improved Efficacy in the Performance of Neural Networks

    Science.gov (United States)

    Alvarellos-González, Alberto; Pazos, Alejandro; Porto-Pazos, Ana B.

    2012-01-01

    The importance of astrocytes, one part of the glial system, for information processing in the brain has recently been demonstrated. Regarding information processing in multilayer connectionist systems, it has been shown that systems which include artificial neurons and astrocytes (Artificial Neuron-Glia Networks) have well-known advantages over identical systems including only artificial neurons. Since the actual impact of astrocytes in neural network function is unknown, we have investigated, using computational models, different astrocyte-neuron interactions for information processing; different neuron-glia algorithms have been implemented for training and validation of multilayer Artificial Neuron-Glia Networks oriented toward classification problem resolution. The results of the tests performed suggest that all the algorithms modelling astrocyte-induced synaptic potentiation improved artificial neural network performance, but their efficacy depended on the complexity of the problem. PMID:22649480

  2. Consistency analysis of network traffic repositories

    NARCIS (Netherlands)

    Lastdrager, Elmer; Lastdrager, E.E.H.; Pras, Aiko

    Traffic repositories with TCP/IP header information are very important for network analysis. Researchers often assume that such repositories reliably represent all traffic that has been flowing over the network; little thought is given to the consistency of these repositories. Still, for

  3. Robust emergence of small-world structure in networks of spiking neurons.

    Science.gov (United States)

    Kwok, Hoi Fei; Jurica, Peter; Raffone, Antonino; van Leeuwen, Cees

    2007-03-01

    Spontaneous activity in biological neural networks shows patterns of dynamic synchronization. We propose that these patterns support the formation of a small-world structure: network connectivity optimal for distributed information processing. We present numerical simulations with connected Hindmarsh-Rose neurons in which, starting from random connection distributions, small-world networks evolve as a result of applying an adaptive rewiring rule. The rule connects pairs of neurons that tend to fire in synchrony, and disconnects those that fail to synchronize. Repeated application of the rule leads to small-world structures. This mechanism is robustly observed for bursting and irregular firing regimes.
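
    A minimal sketch of one adaptive rewiring step of the kind described above is given below. The synchrony matrix and the one-pair-per-step update schedule are simplified placeholders for illustration; they are not the exact rule applied to the Hindmarsh-Rose simulations in the paper.

        import numpy as np

        def rewire_step(adj, sync):
            """Connect the most synchronized unconnected pair and disconnect the least
            synchronized connected pair; adj is a symmetric 0/1 adjacency matrix and
            sync a precomputed pairwise synchrony matrix (e.g., spike-train correlations)."""
            iu = np.triu_indices(adj.shape[0], k=1)
            connected = adj[iu] > 0
            best = np.argmax(np.where(~connected, sync[iu], -np.inf))   # pair to connect
            worst = np.argmin(np.where(connected, sync[iu], np.inf))    # pair to disconnect
            for idx, val in ((best, 1), (worst, 0)):
                i, j = iu[0][idx], iu[1][idx]
                adj[i, j] = adj[j, i] = val
            return adj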

  4. Memristor-based neural networks: Synaptic versus neuronal stochasticity

    KAUST Repository

    Naous, Rawan; Alshedivat, Maruan; Neftci, Emre; Cauwenberghs, Gert; Salama, Khaled N.

    2016-01-01

    In neuromorphic circuits, stochasticity in the cortex can be mapped into the synaptic or neuronal components. The hardware emulation of these stochastic neural networks is currently being extensively studied using resistive memories or memristors

  5. Delay-induced diversity of firing behavior and ordered chaotic firing in adaptive neuronal networks

    International Nuclear Information System (INIS)

    Gong Yubing; Wang Li; Xu Bo

    2012-01-01

    In this paper, we study the effect of time delay on the firing behavior, temporal coherence, and synchronization in Newman–Watts thermosensitive neuron networks with adaptive coupling. Initially, in the absence of time delay, the firing exhibits disordered spiking. As the time delay is increased, the neurons exhibit a diversity of firing behaviors, including bursting with multiple spikes in a burst, spiking, bursting with four, three and two spikes, firing death, and bursting with increasing amplitude. The spiking is the most ordered, exhibiting coherence resonance (CR)-like behavior, and the firing synchronization is enhanced with increasing time delay. As the growth rate of the coupling strength or the network randomness increases, the CR-like behavior shifts to smaller time delays and the synchronization of firing increases. These results show that time delay can induce a diversity of firing behaviors in adaptive neuronal networks, and can order the chaotic firing by enhancing and optimizing the temporal coherence and enhancing the synchronization of firing. However, the phenomenon of firing death shows that time delay may also inhibit the firing of adaptive neuronal networks. These findings provide new insight into the role of time delay in the firing activity of adaptive neuronal networks, and can help to better understand the complex firing phenomena in neural networks.

  6. Robust spatial memory maps in flickering neuronal networks: a topological model

    Science.gov (United States)

    Dabaghian, Yuri; Babichev, Andrey; Memoli, Facundo; Chowdhury, Samir; Rice University Collaboration; Ohio State University Collaboration

    It is widely accepted that the hippocampal place cells provide a substrate for the neuronal representation of the environment--the ``cognitive map''. However, the hippocampal network, like any other network in the brain, is transient: thousands of hippocampal neurons die every day, and the connections formed by these cells constantly change due to various forms of synaptic plasticity. What then explains the remarkable reliability of our spatial memories? We propose a computational approach to answering this question based on a couple of insights. First, we propose that the hippocampal cognitive map is fundamentally topological, and hence amenable to analysis by topological methods. We then apply several novel methods from homology theory to understand how dynamic connections between cells influence the speed and reliability of spatial learning. We simulate the rat's exploratory movements through different environments and study how topological invariants of these environments arise in a network of simulated neurons with ``flickering'' connectivity. We find that despite the transient connectivity the network of place cells produces a stable representation of the topology of the environment.

  7. Effects of partial time delays on phase synchronization in Watts-Strogatz small-world neuronal networks

    Science.gov (United States)

    Sun, Xiaojuan; Perc, Matjaž; Kurths, Jürgen

    2017-05-01

    In this paper, we study effects of partial time delays on phase synchronization in Watts-Strogatz small-world neuronal networks. Our focus is on the impact of two parameters, namely the time delay τ and the probability of partial time delay pdelay, whereby the latter determines the probability with which a connection between two neurons is delayed. Our research reveals that partial time delays significantly affect phase synchronization in this system. In particular, partial time delays can either enhance or decrease phase synchronization and induce synchronization transitions with changes in the mean firing rate of neurons, as well as induce switching between synchronized neurons with period-1 firing to synchronized neurons with period-2 firing. Moreover, in comparison to a neuronal network where all connections are delayed, we show that small partial time delay probabilities have especially different influences on phase synchronization of neuronal networks.

  8. Turbofan engine diagnostics neuron network size optimization method which takes into account overlearning effect

    Directory of Open Access Journals (Sweden)

    О.С. Якушенко

    2010-01-01

    Full Text Available The article addresses the problem of automatic recognition of gas turbine engine (GTE) technical state classes from operating parameters using neural networks. One of the main problems in creating such networks is determining their optimal structure (the number of layers in the network and the number of neurons in each layer). The article considers a method for optimizing the size of a neural network intended for classification of the GTE technical state. The optimization takes into account the possibility of the overlearning effect, whereby a network in training loses its ability to generalize and begins merely describing the training data set. To determine the moment when overlearning appears during training, the method of three data sets is used. The method is based on comparing changes in the recognition quality parameters computed on the training and control data sets. Overlearning is deemed to appear at the moment when recognition quality on the control data set begins to deteriorate while recognition quality on the training data set continues to improve; to detect this moment, the training process is periodically interrupted and the network is evaluated on both the training and control data sets. Two-, three- and four-layer networks are optimized and some optimization results are shown. An extended training set is also created and presented; it describes 16 GTE technical state classes, with each class represented by 200 points (200 possible realizations of the technical state class, instead of the 20 points used in earlier articles), in order to increase the representativeness of the data set. The article describes the optimization algorithm and presents results obtained with it. The experimental results were analyzed to determine the most suitable neural network structure. This structure provides the highest-quality GTE
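
    The "method of three data sets" described above is essentially an early-stopping scheme: training is periodically interrupted and recognition quality is compared on the training and control sets, with a third, test, set reserved for the final assessment. The Python sketch below shows one way such a loop could look; train_epoch, accuracy, and the network object are hypothetical placeholders, not code from the article.

        import copy

        def train_with_overlearning_check(net, train_set, control_set,
                                          max_epochs=1000, check_every=10, patience=3):
            """Stop when control-set quality starts to deteriorate while training-set
            quality keeps improving; returns the best network state seen so far."""
            best_quality, best_state, bad_checks = -1.0, copy.deepcopy(net), 0
            for epoch in range(1, max_epochs + 1):
                train_epoch(net, train_set)                 # hypothetical: one pass over the training set
                if epoch % check_every == 0:                # periodically interrupt training
                    quality = accuracy(net, control_set)    # hypothetical: control-set recognition quality
                    if quality > best_quality:
                        best_quality, best_state, bad_checks = quality, copy.deepcopy(net), 0
                    else:
                        bad_checks += 1                     # control quality no longer improving
                        if bad_checks >= patience:          # treat this as the overlearning point
                            break
            return best_state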

  9. Role of Noise in Complex Networks of FitzHugh-Nagumo Neurons

    International Nuclear Information System (INIS)

    Fortuna, Luigi; Frasca, Mattia; La Rosa, Manuela

    2005-01-01

    This paper deals with the open question related to the role of noise in complex networks of interconnected FitzHugh-Nagumo neurons. The problem is faced with extensive simulations of different network topologies. The results show that several topologies behave in an optimal way with respect to the range of noise levels leading to an improvement in the stimulus-response coherence, while others are optimal with respect to the maximum value of the performance index. The best results, in terms of both a suitable noise level and high stimulus-response coherence, have been obtained when diversity in the neurons' characteristic parameters was introduced and the neurons were connected in a small-world topology

  10. Complete Neuron-Astrocyte Interaction Model: Digital Multiplierless Design and Networking Mechanism.

    Science.gov (United States)

    Haghiri, Saeed; Ahmadi, Arash; Saif, Mehrdad

    2017-02-01

    Glial cells, also known as neuroglia or glia, are non-neuronal cells providing support and protection for neurons in the central nervous system (CNS). They also act as supportive cells in the brain. Among the variety of glial cells, the star-shaped glial cells, i.e., astrocytes, are the largest cell population in the brain. Important astrocyte functions such as neuronal synchronization, regulation of synaptic information transfer, feedback to neural activity, and extracellular regulation give astrocytes a vital role in brain disease. This paper presents a modified complete neuron-astrocyte interaction model that is more suitable for efficient and large-scale biological neural network realization on digital platforms. Simulation results show that the modified complete interaction model can reproduce biological-like behavior of the original neuron-astrocyte mechanism. The modified interaction model is investigated in terms of digital realization feasibility and cost, targeting a low-cost hardware implementation. The networking behavior of this interaction is investigated and compared between two cases: i) the neuron spiking mechanism without astrocyte effects, and ii) the effect of the astrocyte in regulating neuronal behavior and synaptic transmission via controlling the LTP and LTD processes. Hardware implementation on FPGA shows that the modified model mimics the main mechanism of neuron-astrocyte communication with higher performance and considerably lower hardware overhead cost compared with the original interaction model.

  11. Analyzing topological characteristics of neuronal functional networks in the rat brain

    International Nuclear Information System (INIS)

    Lu, Hu; Yang, Shengtao; Song, Yuqing; Wei, Hui

    2014-01-01

    In this study, we recorded spike trains from brain cortical neurons of several behaving rats in vivo by using multi-electrode recordings. A neuronal functional network (NFN) was constructed in each trial, yielding a total of 150 NFNs in this study. The topological characteristics of the NFNs were analyzed by using the two most important characteristics of complex networks, namely, small-world structure and community structure. We found that small-world properties exist in the different NFNs constructed in this study. The modularity function Q was used to determine the existence of community structure in the NFNs, through which we found that community-structure characteristics, which are related to the recorded spike train data sets, are more evident in the Y-maze task than in the DM-GM task. Our results can also be used to further analyze the relationship between small-world characteristics and the cognitive behavioral responses of rats. - Highlights: • We constructed the neuronal functional networks based on the recorded neurons. • We analyzed the two main complex network characteristics, namely, small-world structure and community structure. • NFNs which were constructed based on the recorded neurons in this study exhibit small-world properties. • Some NFNs have community structure characteristics
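
    For readers who want to reproduce this style of analysis on their own data, the sketch below computes the two characteristics mentioned above (small-world indicators and the modularity Q of a detected community partition) for a binary undirected functional network using the networkx library. It is a generic illustration under the assumption of a connected graph, not the authors' analysis code; assessing small-worldness additionally requires comparison against matched random networks.

        import networkx as nx
        from networkx.algorithms import community

        def network_characteristics(adj_matrix):
            """Clustering coefficient, characteristic path length, and modularity Q
            for a binary, undirected, connected functional network."""
            g = nx.from_numpy_array(adj_matrix)
            clustering = nx.average_clustering(g)
            path_length = nx.average_shortest_path_length(g)
            partition = community.greedy_modularity_communities(g)
            q = community.modularity(g, partition)
            return clustering, path_length, q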

  12. Analyzing topological characteristics of neuronal functional networks in the rat brain

    Energy Technology Data Exchange (ETDEWEB)

    Lu, Hu [School of Computer Science and Communication Engineering, Jiangsu University, Jiangsu 212003 (China); School of Computer Science, Fudan University, Shanghai 200433 (China); Yang, Shengtao [Institutes of Brain Science, Fudan University, Shanghai 200433 (China); Song, Yuqing [School of Computer Science and Communication Engineering, Jiangsu University, Jiangsu 212003 (China); Wei, Hui [School of Computer Science, Fudan University, Shanghai 200433 (China)

    2014-08-28

    In this study, we recorded spike trains from brain cortical neurons of several behaving rats in vivo by using multi-electrode recordings. A neuronal functional network (NFN) was constructed in each trial, yielding a total of 150 NFNs in this study. The topological characteristics of the NFNs were analyzed by using the two most important characteristics of complex networks, namely, small-world structure and community structure. We found that small-world properties exist in the different NFNs constructed in this study. The modularity function Q was used to determine the existence of community structure in the NFNs, through which we found that community-structure characteristics, which are related to the recorded spike train data sets, are more evident in the Y-maze task than in the DM-GM task. Our results can also be used to further analyze the relationship between small-world characteristics and the cognitive behavioral responses of rats. - Highlights: • We constructed the neuronal functional networks based on the recorded neurons. • We analyzed the two main complex network characteristics, namely, small-world structure and community structure. • NFNs which were constructed based on the recorded neurons in this study exhibit small-world properties. • Some NFNs have community structure characteristics.

  13. Dynamical patterns of calcium signaling in a functional model of neuron-astrocyte networks

    DEFF Research Database (Denmark)

    Postnov, D.E.; Koreshkov, R.N.; Brazhe, N.A.

    2009-01-01

    We propose a functional mathematical model for neuron-astrocyte networks. The model incorporates elements of the tripartite synapse and the spatial branching structure of coupled astrocytes. We consider glutamate-induced calcium signaling as a specific mode of excitability and transmission in astrocytic-neuronal networks. We reproduce local and global dynamical patterns observed experimentally.

  14. Neuronal network disintegration: common pathways linking neurodegenerative diseases.

    Science.gov (United States)

    Ahmed, Rebekah M; Devenney, Emma M; Irish, Muireann; Ittner, Arne; Naismith, Sharon; Ittner, Lars M; Rohrer, Jonathan D; Halliday, Glenda M; Eisen, Andrew; Hodges, John R; Kiernan, Matthew C

    2016-11-01

    Neurodegeneration refers to a heterogeneous group of brain disorders that progressively evolve. It has been increasingly appreciated that many neurodegenerative conditions overlap at multiple levels and therefore traditional clinicopathological correlation approaches to better classify a disease have met with limited success. Neuronal network disintegration is fundamental to neurodegeneration, and concepts based around such a concept may better explain the overlap between their clinical and pathological phenotypes. In this Review, promoters of overlap in neurodegeneration incorporating behavioural, cognitive, metabolic, motor, and extrapyramidal presentations will be critically appraised. In addition, evidence that may support the existence of large-scale networks that might be contributing to phenotypic differentiation will be considered across a neurodegenerative spectrum. Disintegration of neuronal networks through different pathological processes, such as prion-like spread, may provide a better paradigm of disease and thereby facilitate the identification of novel therapies for neurodegeneration. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.

  15. Identifying Controlling Nodes in Neuronal Networks in Different Scales

    Science.gov (United States)

    Tang, Yang; Gao, Huijun; Zou, Wei; Kurths, Jürgen

    2012-01-01

    Recent studies have detected hubs in neuronal networks using degree, betweenness centrality, motif and synchronization, and revealed the importance of hubs in their structural and functional roles. In addition, the analysis of complex networks at different scales is widely used in the physics community and can provide detailed insights into the intrinsic properties of networks. In this study, we focus on the identification of controlling regions in cortical networks of the cat brain at microscopic, mesoscopic and macroscopic scales, based on single-objective evolutionary computation methods. The problem is investigated by considering two measures of controllability separately. The impact of the number of driver nodes on controllability is revealed and the properties of controlling nodes are shown in a statistical way. Our results show that the statistical properties of the controlling nodes display a concave or convex shape with an increase of the allowed number of controlling nodes, revealing a transition in choosing driver nodes from areas with a large degree to areas with a low degree. Interestingly, the Auditory community in the cat brain, which has sparse connections with other communities, plays an important role in controlling the neuronal networks. PMID:22848475

  16. Single-cell Transcriptional Analysis Reveals Novel Neuronal Phenotypes and Interaction Networks involved In the Central Circadian Clock

    Directory of Open Access Journals (Sweden)

    James Park

    2016-10-01

    Full Text Available Single-cell heterogeneity confounds efforts to understand how a population of cells organizes into cellular networks that underlie tissue-level function. This complexity is prominent in the mammalian suprachiasmatic nucleus (SCN. Here, individual neurons exhibit a remarkable amount of asynchronous behavior and transcriptional heterogeneity. However, SCN neurons are able to generate precisely coordinated synaptic and molecular outputs that synchronize the body to a common circadian cycle by organizing into cellular networks. To understand this emergent cellular network property, it is important to reconcile single-neuron heterogeneity with network organization. In light of recent studies suggesting that transcriptionally heterogeneous cells organize into distinct cellular phenotypes, we characterized the transcriptional, spatial, and functional organization of 352 SCN neurons from mice experiencing phase-shifts in their circadian cycle. Using the community structure detection method and multivariate analytical techniques, we identified previously undescribed neuronal phenotypes that are likely to participate in regulatory networks with known SCN cell types. Based on the newly discovered neuronal phenotypes, we developed a data-driven neuronal network structure in which multiple cell types interact through known synaptic and paracrine signaling mechanisms. These results provide a basis from which to interpret the functional variability of SCN neurons and describe methodologies towards understanding how a population of heterogeneous single cells organizes into cellular networks that underlie tissue-level function.

  17. A reanalysis of "Two types of asynchronous activity in networks of excitatory and inhibitory spiking neurons".

    Science.gov (United States)

    Engelken, Rainer; Farkhooi, Farzad; Hansel, David; van Vreeswijk, Carl; Wolf, Fred

    2016-01-01

    Neuronal activity in the central nervous system varies strongly in time and across neuronal populations. It is a longstanding proposal that such fluctuations generically arise from chaotic network dynamics. Various theoretical studies predict that the rich dynamics of rate models operating in the chaotic regime can subserve circuit computation and learning. Neurons in the brain, however, communicate via spikes and it is a theoretical challenge to obtain similar rate fluctuations in networks of spiking neuron models. A recent study investigated spiking balanced networks of leaky integrate and fire (LIF) neurons and compared their dynamics to a matched rate network with identical topology, where single unit input-output functions were chosen from isolated LIF neurons receiving Gaussian white noise input. A mathematical analogy between the chaotic instability in networks of rate units and the spiking network dynamics was proposed. Here we revisit the behavior of the spiking LIF networks and these matched rate networks. We find expected hallmarks of a chaotic instability in the rate network: For supercritical coupling strength near the transition point, the autocorrelation time diverges. For subcritical coupling strengths, we observe critical slowing down in response to small external perturbations. In the spiking network, we found in contrast that the timescale of the autocorrelations is insensitive to the coupling strength and that rate deviations resulting from small input perturbations rapidly decay. The decay speed even accelerates for increasing coupling strength. In conclusion, our reanalysis demonstrates fundamental differences between the behavior of pulse-coupled spiking LIF networks and rate networks with matched topology and input-output function. In particular there is no indication of a corresponding chaotic instability in the spiking network.

  18. Spiking, Bursting, and Population Dynamics in a Network of Growth Transform Neurons.

    Science.gov (United States)

    Gangopadhyay, Ahana; Chakrabartty, Shantanu

    2017-04-27

    This paper investigates the dynamical properties of a network of neurons, each of which implements an asynchronous mapping based on polynomial growth transforms. In the first part of this paper, we present a geometric approach for visualizing the dynamics of the network where each of the neurons traverses a trajectory in a dual optimization space, whereas the network itself traverses a trajectory in an equivalent primal optimization space. We show that as the network learns to solve basic classification tasks, different choices of primal-dual mapping produce unique but interpretable neural dynamics like noise shaping, spiking, and bursting. While the proposed framework is general enough, in this paper, we demonstrate its use for designing support vector machines (SVMs) that exhibit noise-shaping properties similar to those of ΣΔ modulators, and for designing SVMs that learn to encode information using spikes and bursts. It is demonstrated that the emergent switching, spiking, and burst dynamics produced by each neuron encodes its respective margin of separation from a classification hyperplane whose parameters are encoded by the network population dynamics. We believe that the proposed growth transform neuron model and the underlying geometric framework could serve as an important tool to connect well-established machine learning algorithms like SVMs to neuromorphic principles like spiking, bursting, population encoding, and noise shaping.

  19. Graph-based unsupervised segmentation algorithm for cultured neuronal networks' structure characterization and modeling.

    Science.gov (United States)

    de Santos-Sierra, Daniel; Sendiña-Nadal, Irene; Leyva, Inmaculada; Almendral, Juan A; Ayali, Amir; Anava, Sarit; Sánchez-Ávila, Carmen; Boccaletti, Stefano

    2015-06-01

    Large scale phase-contrast images taken at high resolution throughout the life of a cultured neuronal network are analyzed by a graph-based unsupervised segmentation algorithm with a very low computational cost, scaling linearly with the image size. The processing automatically retrieves the whole network structure, an object whose mathematical representation is a matrix in which nodes are identified neurons or neuron clusters, and links are the reconstructed connections between them. The algorithm is also able to extract any other relevant morphological information characterizing neurons and neurites. More importantly, and in contrast to other segmentation methods that require fluorescence imaging from immunocytochemistry techniques, our non-invasive measures allow us to perform a longitudinal analysis during the maturation of a single culture. Such an analysis provides a way of identifying the main physical processes underlying the self-organization of the neuronal ensemble into a complex network, and drives the formulation of a phenomenological model that is able to describe qualitatively the overall scenario observed during culture growth. © 2014 International Society for Advancement of Cytometry.

  20. Dual-mode operation of neuronal networks involved in left-right alternation

    DEFF Research Database (Denmark)

    Talpalar, Adolfo E.; Bouvier, Julien; Borgius, Lotta

    2013-01-01

    All forms of locomotion are repetitive motor activities that require coordinated bilateral activation of muscles. The executive elements of locomotor control are networks of spinal neurons that determine gait pattern through the sequential activation of motor-neuron pools on either side of the body...

  1. Analysis of connectivity map: Control to glutamate injured and phenobarbital treated neuronal network

    Science.gov (United States)

    Kamal, Hassan; Kanhirodan, Rajan; Srinivas, Kalyan V.; Sikdar, Sujit K.

    2010-04-01

    We study the responses of a cultured neuronal network exposed to an epileptogenic glutamate injury, which induces epileptiform activity, and to subsequent treatment with phenobarbital, by constructing a connectivity map of the neurons from the correlation matrix of their activity. This study is particularly useful for understanding drug-induced changes in neuronal network properties, with insights into changes at the systems biology level.
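
    A minimal sketch of the correlation-matrix approach mentioned above is shown below: pairwise correlations between neuronal activity traces are thresholded to obtain a binary connectivity map. The threshold value and the absence of any preprocessing are illustrative assumptions, not choices taken from the study.

        import numpy as np

        def connectivity_map(activity, threshold=0.5):
            """activity has shape (n_neurons, n_samples); returns the correlation matrix
            and a binary adjacency matrix obtained by thresholding its absolute values."""
            corr = np.corrcoef(activity)
            adj = (np.abs(corr) >= threshold).astype(int)
            np.fill_diagonal(adj, 0)        # discard self-connections
            return corr, adj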

  2. Developmental changes of neuronal networks associated with strategic social decision-making.

    Science.gov (United States)

    Steinmann, Elisabeth; Schmalor, Antonia; Prehn-Kristensen, Alexander; Wolff, Stephan; Galka, Andreas; Möhring, Jan; Gerber, Wolf-Dieter; Petermann, Franz; Stephani, Ulrich; Siniatchkin, Michael

    2014-04-01

    One of the important prerequisites for successful social interaction is the willingness of each individual to cooperate socially. Using the ultimatum game, several studies have demonstrated that the process of deciding whether to cooperate or to defect in interaction with a partner is associated with activation of the dorsolateral prefrontal cortex (DLPFC), anterior cingulate cortex (ACC), anterior insula (AI), and inferior frontal cortex (IFC). This study investigates developmental changes in this neuronal network. 15 healthy children (8-12 years), 15 adolescents (13-18 years) and 15 young adults (19-28 years) were investigated using the ultimatum game. Neuronal networks representing decision-making based on strategic thinking were characterized using functional MRI. In all age groups, the process of decision-making in reaction to unfair offers was associated with hemodynamic changes in similar regions. Compared with children, however, healthy adults and adolescents revealed greater activation in the IFC and the fusiform gyrus, as well as the nucleus accumbens. In contrast, healthy children displayed more activation in the AI, the dorsal part of the ACC, and the DLPFC. There were no differences in brain activations between adults and adolescents. The neuronal mechanisms underlying strategic social decision-making are already developed by the age of eight. Decision-making based on strategic thinking is associated with age-dependent involvement of different brain regions. Neuronal networks underlying theory of mind and reward anticipation are more strongly activated in adults and adolescents, consistent with increasing perspective taking with age. Reflecting greater emotional reactivity and the corresponding compensatory coping at younger ages, children show higher activation in a neuronal network associated with emotional processing and executive control. Copyright © 2014 Elsevier Ltd. All rights reserved.

  3. Spatiotemporal dynamics on small-world neuronal networks: The roles of two types of time-delayed coupling

    Energy Technology Data Exchange (ETDEWEB)

    Wu Hao; Jiang Huijun [Hefei National Laboratory for Physical Sciences at the Microscale and Department of Chemical Physics, University of Science and Technology of China, Hefei, Anhui 230026 (China); Hou Zhonghuai, E-mail: hzhlj@ustc.edu.cn [Hefei National Laboratory for Physical Sciences at the Microscale and Department of Chemical Physics, University of Science and Technology of China, Hefei, Anhui 230026 (China)

    2011-10-15

    Highlights: • We compare neuronal dynamics in dependence on two types of delayed coupling. • Distinct results induced by different delayed coupling can be achieved. • Time delays in type 1 coupling can induce a most spatiotemporal ordered state. • For type 2 coupling, the systems exhibit synchronization transitions with delay. - Abstract: We investigate temporal coherence and spatial synchronization on small-world networks consisting of noisy Terman-Wang (TW) excitable neurons in dependence on two types of time-delayed coupling: {x_j(t - τ) - x_i(t)} and {x_j(t - τ) - x_i(t - τ)}. For the former case, we show that time delay in the coupling can dramatically enhance temporal coherence and spatial synchrony of the noise-induced spike trains. In addition, if the delay time τ is tuned to nearly match the intrinsic spike period of the neuronal network, the system dynamics reaches a most ordered state, which is both periodic in time and nearly synchronized in space, demonstrating an interesting resonance phenomenon with delay. For the latter case, however, we cannot achieve a similar spatiotemporal ordered state, but the neuronal dynamics exhibits interesting synchronization transitions with time delay from zigzag fronts of excitations to dynamic clustering anti-phase synchronization (APS), and further to clustered chimera states which have spatially distributed anti-phase coherence separated by incoherence. Furthermore, we also show how these findings are influenced by the change of the noise intensity and the rewiring probability of the small-world networks. Finally, qualitative analysis is given to illustrate the numerical results.
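
    Written schematically, with the single-neuron dynamics f, the adjacency matrix a_{ij}, the node degree k_i, and the coupling strength ε left as generic placeholders (this record does not spell them out), the two delayed-coupling schemes compared above read

        \dot{x}_i = f(x_i, y_i) + \frac{\varepsilon}{k_i} \sum_j a_{ij} \left[ x_j(t-\tau) - x_i(t) \right] \qquad \text{(type 1)}

        \dot{x}_i = f(x_i, y_i) + \frac{\varepsilon}{k_i} \sum_j a_{ij} \left[ x_j(t-\tau) - x_i(t-\tau) \right] \qquad \text{(type 2)}

    so the two schemes differ only in whether the local term is evaluated at the current time or at the delayed time.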

  4. Spatiotemporal dynamics on small-world neuronal networks: The roles of two types of time-delayed coupling

    International Nuclear Information System (INIS)

    Wu Hao; Jiang Huijun; Hou Zhonghuai

    2011-01-01

    Highlights: → We compare neuronal dynamics in dependence on two types of delayed coupling. → Distinct results induced by different delayed coupling can be achieved. → Time delays in type 1 coupling can induce a most spatiotemporal ordered state. → For type 2 coupling, the systems exhibit synchronization transitions with delay. - Abstract: We investigate temporal coherence and spatial synchronization on small-world networks consisting of noisy Terman-Wang (TW) excitable neurons in dependence on two types of time-delayed coupling: {x_j(t - τ) - x_i(t)} and {x_j(t - τ) - x_i(t - τ)}. For the former case, we show that time delay in the coupling can dramatically enhance temporal coherence and spatial synchrony of the noise-induced spike trains. In addition, if the delay time τ is tuned to nearly match the intrinsic spike period of the neuronal network, the system dynamics reaches a most ordered state, which is both periodic in time and nearly synchronized in space, demonstrating an interesting resonance phenomenon with delay. For the latter case, however, we cannot achieve a similar spatiotemporal ordered state, but the neuronal dynamics exhibits interesting synchronization transitions with time delay from zigzag fronts of excitations to dynamic clustering anti-phase synchronization (APS), and further to clustered chimera states which have spatially distributed anti-phase coherence separated by incoherence. Furthermore, we also show how these findings are influenced by the change of the noise intensity and the rewiring probability of the small-world networks. Finally, qualitative analysis is given to illustrate the numerical results.

  5. Hidden neuronal correlations in cultured networks

    International Nuclear Information System (INIS)

    Segev, Ronen; Baruchi, Itay; Hulata, Eyal; Ben-Jacob, Eshel

    2004-01-01

    Utilization of a clustering algorithm on neuronal spatiotemporal correlation matrices recorded during a spontaneous activity of in vitro networks revealed the existence of hidden correlations: the sequence of synchronized bursting events (SBEs) is composed of statistically distinguishable subgroups each with its own distinct pattern of interneuron spatiotemporal correlations. These findings hint that each of the SBE subgroups can serve as a template for coding, storage, and retrieval of a specific information

  6. Synchronous bursts on scale-free neuronal networks with attractive and repulsive coupling.

    Directory of Open Access Journals (Sweden)

    Qingyun Wang

    Full Text Available This paper investigates the dependence of synchronization transitions of bursting oscillations on the information transmission delay over scale-free neuronal networks with attractive and repulsive coupling. It is shown that for both types of coupling, the delay always plays a subtle role in either promoting or impairing synchronization. In particular, depending on the inherent oscillation period of individual neurons, regions of irregular and regular propagating excitatory fronts appear intermittently as the delay increases. These delay-induced synchronization transitions are manifested as well-expressed minima in the measure for spatiotemporal synchrony. For attractive coupling, the minima appear at every integer multiple of the average oscillation period, while for the repulsive coupling, they appear at every odd multiple of the half of the average oscillation period. The obtained results are robust to the variations of the dynamics of individual neurons, the system size, and the neuronal firing type. Hence, they can be used to characterize attractively or repulsively coupled scale-free neuronal networks with delays.

  7. Temporal sequence learning in winner-take-all networks of spiking neurons demonstrated in a brain-based device.

    Science.gov (United States)

    McKinstry, Jeffrey L; Edelman, Gerald M

    2013-01-01

    Animal behavior often involves a temporally ordered sequence of actions learned from experience. Here we describe simulations of interconnected networks of spiking neurons that learn to generate patterns of activity in correct temporal order. The simulation consists of large-scale networks of thousands of excitatory and inhibitory neurons that exhibit short-term synaptic plasticity and spike-timing dependent synaptic plasticity. The neural architecture within each area is arranged to evoke winner-take-all (WTA) patterns of neural activity that persist for tens of milliseconds. In order to generate and switch between consecutive firing patterns in correct temporal order, a reentrant exchange of signals between these areas was necessary. To demonstrate the capacity of this arrangement, we used the simulation to train a brain-based device responding to visual input by autonomously generating temporal sequences of motor actions.

  8. Spatio-temporal specialization of GABAergic septo-hippocampal neurons for rhythmic network activity.

    Science.gov (United States)

    Unal, Gunes; Crump, Michael G; Viney, Tim J; Éltes, Tímea; Katona, Linda; Klausberger, Thomas; Somogyi, Peter

    2018-03-03

    Medial septal GABAergic neurons of the basal forebrain innervate the hippocampus and related cortical areas, contributing to the coordination of network activity, such as theta oscillations and sharp wave-ripple events, via a preferential innervation of GABAergic interneurons. Individual medial septal neurons display diverse activity patterns, which may be related to their termination in different cortical areas and/or to the different types of innervated interneurons. To test these hypotheses, we extracellularly recorded and juxtacellularly labeled single medial septal neurons in anesthetized rats in vivo during hippocampal theta and ripple oscillations, traced their axons to distant cortical target areas, and analyzed their postsynaptic interneurons. Medial septal GABAergic neurons exhibiting different hippocampal theta phase preferences and/or sharp wave-ripple related activity terminated in restricted hippocampal regions, and selectively targeted a limited number of interneuron types, as established on the basis of molecular markers. We demonstrate the preferential innervation of bistratified cells in CA1 and of basket cells in CA3 by individual axons. One group of septal neurons was suppressed during sharp wave-ripples, maintained their firing rate across theta and non-theta network states and mainly fired along the descending phase of CA1 theta oscillations. In contrast, neurons that were active during sharp wave-ripples increased their firing significantly during "theta" compared to "non-theta" states, with most firing during the ascending phase of theta oscillations. These results demonstrate that specialized septal GABAergic neurons contribute to the coordination of network activity through parallel, target area- and cell type-selective projections to the hippocampus.

  9. Effects of network structure on the synchronizability of nonlinearly coupled Hindmarsh–Rose neurons

    International Nuclear Information System (INIS)

    Li, Chun-Hsien; Yang, Suh-Yuh

    2015-01-01

    This work is devoted to investigating the effects of network structure on the synchronizability of a nonlinearly coupled dynamical network of Hindmarsh–Rose neurons with a sigmoidal coupling function. We mainly focus on networks that exhibit the small-world character or scale-free property. By checking the first nonzero eigenvalue of the outer-coupling matrix, which is closely related to the synchronization threshold, the synchronizabilities of three specific network ensembles with prescribed network structures are compared. Interestingly, we find that networks with more connections will not necessarily result in better synchronizability. - Highlights: • We investigate the effects of network structure on the synchronizability of nonlinearly coupled Hindmarsh–Rose neurons. • We mainly consider networks that exhibit the small-world character or scale-free property. • The synchronizabilities of three specific network ensembles with prescribed network structures are compared. • Networks with more connections will not necessarily result in better synchronizability
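
    The eigenvalue criterion mentioned above can be checked numerically with a few lines of linear algebra. The sketch below uses the graph Laplacian built from a symmetric adjacency matrix as a stand-in for the outer-coupling matrix; the paper's own matrix and sign convention may differ, so this is an illustration of the idea rather than a reproduction of the analysis.

        import numpy as np

        def first_nonzero_eigenvalue(adj):
            """Second-smallest eigenvalue of the graph Laplacian of a connected network
            given by a symmetric adjacency matrix; in diffusive-coupling conventions this
            eigenvalue is the one closely related to the synchronization threshold."""
            laplacian = np.diag(adj.sum(axis=1)) - adj
            return np.sort(np.linalg.eigvalsh(laplacian))[1]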

  10. Effects of network structure on the synchronizability of nonlinearly coupled Hindmarsh–Rose neurons

    Energy Technology Data Exchange (ETDEWEB)

    Li, Chun-Hsien, E-mail: chli@nknucc.nknu.edu.tw [Department of Mathematics, National Kaohsiung Normal University, Yanchao District, Kaohsiung City 82444, Taiwan (China); Yang, Suh-Yuh, E-mail: syyang@math.ncu.edu.tw [Department of Mathematics, National Central University, Jhongli District, Taoyuan City 32001, Taiwan (China)

    2015-10-23

    This work is devoted to investigating the effects of network structure on the synchronizability of a nonlinearly coupled dynamical network of Hindmarsh–Rose neurons with a sigmoidal coupling function. We mainly focus on networks that exhibit the small-world character or scale-free property. By checking the first nonzero eigenvalue of the outer-coupling matrix, which is closely related to the synchronization threshold, the synchronizabilities of three specific network ensembles with prescribed network structures are compared. Interestingly, we find that networks with more connections will not necessarily result in better synchronizability. - Highlights: • We investigate the effects of network structure on the synchronizability of nonlinearly coupled Hindmarsh–Rose neurons. • We mainly consider networks that exhibit the small-world character or scale-free property. • The synchronizabilities of three specific network ensembles with prescribed network structures are compared. • Networks with more connections will not necessarily result in better synchronizability.

  11. Connectivities and synchronous firing in cortical neuronal networks

    International Nuclear Information System (INIS)

    Jia, L.C.; Sano, M.; Lai, P.-Y.; Chan, C.K.

    2004-01-01

    Network connectivities (k̄) of cortical neural cultures are studied by synchronized firing and determined from measured correlations between fluorescence intensities of firing neurons. The bursting frequency (f) during synchronized firing of the networks is found to be an increasing function of k̄. With f taken to be proportional to k̄, a simple random model with a k̄-dependent connection probability p(k̄) has been constructed to explain our experimental findings successfully

  12. Attractor switching in neuron networks and Spatiotemporal filters for motion processing

    NARCIS (Netherlands)

    Subramanian, Easwara Naga

    2008-01-01

    From a broader perspective, we address two important questions, viz., (a) what kind of mechanism would enable a neuronal network to switch between various tasks or stored patterns? (b) what are the properties of neurons that are used by the visual system in early motion detection? To address (a) we

  13. Automated quantification of neuronal networks and single-cell calcium dynamics using calcium imaging.

    Science.gov (United States)

    Patel, Tapan P; Man, Karen; Firestein, Bonnie L; Meaney, David F

    2015-03-30

    Recent advances in genetically engineered calcium and membrane potential indicators provide the potential to estimate the activation dynamics of individual neurons within larger, mesoscale networks (100s-1000+ neurons). However, a fully integrated automated workflow for the analysis and visualization of neural microcircuits from high speed fluorescence imaging data is lacking. Here we introduce FluoroSNNAP, Fluorescence Single Neuron and Network Analysis Package. FluoroSNNAP is an open-source, interactive software package developed in MATLAB for automated quantification of numerous biologically relevant features of both the calcium dynamics of single cells and network activity patterns. FluoroSNNAP integrates and improves upon existing tools for spike detection, synchronization analysis, and inference of functional connectivity, making it most useful to experimentalists with little or no programming knowledge. We apply FluoroSNNAP to characterize the activity patterns of neuronal microcircuits undergoing developmental maturation in vitro. Separately, we highlight the utility of single-cell analysis for phenotyping a mixed population of neurons expressing a human mutant variant of the microtubule associated protein tau and wild-type tau. We show the performance of semi-automated cell segmentation using spatiotemporal independent component analysis and significant improvement in detecting calcium transients using a template-based algorithm in comparison to peak-based or wavelet-based detection methods. Our software further enables automated analysis of microcircuits, which is an improvement over existing methods. We expect the dissemination of this software will facilitate a comprehensive analysis of neuronal networks, promoting the rapid interrogation of circuits in health and disease. Copyright © 2015. Published by Elsevier B.V.

  14. Soft chitosan microbeads scaffold for 3D functional neuronal networks.

    Science.gov (United States)

    Tedesco, Maria Teresa; Di Lisa, Donatella; Massobrio, Paolo; Colistra, Nicolò; Pesce, Mattia; Catelani, Tiziano; Dellacasa, Elena; Raiteri, Roberto; Martinoia, Sergio; Pastorino, Laura

    2018-02-01

    The availability of 3D biomimetic in vitro neuronal networks of mammalian neurons represents a pivotal step for the development of brain-on-a-chip experimental models to study neuronal (dys)functions and particularly neuronal connectivity. The use of hydrogel-based scaffolds for 3D cell cultures has been extensively studied in the last years. However, limited work on biomimetic 3D neuronal cultures has been carried out to date. In this respect, here we investigated the use of a widely popular polysaccharide, chitosan (CHI), for the fabrication of a microbead based 3D scaffold to be coupled to primary neuronal cells. CHI microbeads were characterized by optical and atomic force microscopies. The cell/scaffold interaction was deeply characterized by transmission electron microscopy and by immunocytochemistry using confocal microscopy. Finally, a preliminary electrophysiological characterization by micro-electrode arrays was carried out. Copyright © 2017 Elsevier Ltd. All rights reserved.

  15. Self-Organized Criticality in a Simple Neuron Model Based on Scale-Free Networks

    International Nuclear Information System (INIS)

    Lin Min; Wang Gang; Chen Tianlun

    2006-01-01

    A simple model for a set of interacting idealized neurons in scale-free networks is introduced. The basic elements of the model are endowed with the main features of a neuron function. We find that our model displays power-law behavior of avalanche sizes and generates long-range temporal correlation. More importantly, we find different dynamical behavior for nodes with different connectivity in the scale-free networks.

  16. Dynamics of Competition between Subnetworks of Spiking Neuronal Networks in the Balanced State

    Science.gov (United States)

    Lagzi, Fereshteh; Rotter, Stefan

    2015-01-01

    We explore and analyze the nonlinear switching dynamics of neuronal networks with non-homogeneous connectivity. The general significance of such transient dynamics for brain function is unclear; however, for instance decision-making processes in perception and cognition have been implicated with it. The network under study here is comprised of three subnetworks of either excitatory or inhibitory leaky integrate-and-fire neurons, of which two are of the same type. The synaptic weights are arranged to establish and maintain a balance between excitation and inhibition in case of a constant external drive. Each subnetwork is randomly connected, where all neurons belonging to a particular population have the same in-degree and the same out-degree. Neurons in different subnetworks are also randomly connected with the same probability; however, depending on the type of the pre-synaptic neuron, the synaptic weight is scaled by a factor. We observed that for a certain range of the “within” versus “between” connection weights (bifurcation parameter), the network activation spontaneously switches between the two sub-networks of the same type. This kind of dynamics has been termed “winnerless competition”, which also has a random component here. In our model, this phenomenon is well described by a set of coupled stochastic differential equations of Lotka-Volterra type that imply a competition between the subnetworks. The associated mean-field model shows the same dynamical behavior as observed in simulations of large networks comprising thousands of spiking neurons. The deterministic phase portrait is characterized by two attractors and a saddle node, its stochastic component is essentially given by the multiplicative inherent noise of the system. We find that the dwell time distribution of the active states is exponential, indicating that the noise drives the system randomly from one attractor to the other. A similar model for a larger number of populations might
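
    A toy version of the coupled stochastic Lotka-Volterra description referred to above can be simulated with a simple Euler-Maruyama scheme, as sketched below. The competition coefficient, noise amplitude, and initial conditions are placeholders chosen only to illustrate noise-driven switching between two attractors, not parameters fitted to the spiking-network simulations in the paper.

        import numpy as np

        def lotka_volterra_competition(T=200.0, dt=0.01, r=1.0, alpha=1.2, sigma=0.1, seed=0):
            """Two competing populations with multiplicative noise:
            dx_i = x_i (r - x_i - alpha * x_j) dt + sigma * x_i dW_i."""
            rng = np.random.default_rng(seed)
            steps = int(T / dt)
            x = np.empty((steps, 2))
            x[0] = (0.6, 0.4)
            for t in range(1, steps):
                xi = x[t - 1]
                xj = xi[::-1]                                  # each population competes with the other
                drift = xi * (r - xi - alpha * xj)
                noise = sigma * xi * rng.normal(size=2) * np.sqrt(dt)
                x[t] = np.maximum(xi + drift * dt + noise, 0.0)
            return x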

  17. Dynamics of Competition between Subnetworks of Spiking Neuronal Networks in the Balanced State.

    Science.gov (United States)

    Lagzi, Fereshteh; Rotter, Stefan

    2015-01-01

    We explore and analyze the nonlinear switching dynamics of neuronal networks with non-homogeneous connectivity. The general significance of such transient dynamics for brain function is unclear; however, decision-making processes in perception and cognition, for instance, have been implicated in it. The network under study here is composed of three subnetworks of either excitatory or inhibitory leaky integrate-and-fire neurons, of which two are of the same type. The synaptic weights are arranged to establish and maintain a balance between excitation and inhibition in case of a constant external drive. Each subnetwork is randomly connected, where all neurons belonging to a particular population have the same in-degree and the same out-degree. Neurons in different subnetworks are also randomly connected with the same probability; however, depending on the type of the pre-synaptic neuron, the synaptic weight is scaled by a factor. We observed that for a certain range of the "within" versus "between" connection weights (bifurcation parameter), the network activation spontaneously switches between the two subnetworks of the same type. This kind of dynamics has been termed "winnerless competition", which also has a random component here. In our model, this phenomenon is well described by a set of coupled stochastic differential equations of Lotka-Volterra type that imply a competition between the subnetworks. The associated mean-field model shows the same dynamical behavior as observed in simulations of large networks comprising thousands of spiking neurons. The deterministic phase portrait is characterized by two attractors and a saddle node; its stochastic component is essentially given by the multiplicative inherent noise of the system. We find that the dwell time distribution of the active states is exponential, indicating that the noise drives the system randomly from one attractor to the other. A similar model for a larger number of populations might suggest a

  18. Spiking Neural Networks with Unsupervised Learning Based on STDP Using Resistive Synaptic Devices and Analog CMOS Neuron Circuit.

    Science.gov (United States)

    Kwon, Min-Woo; Baek, Myung-Hyun; Hwang, Sungmin; Kim, Sungjun; Park, Byung-Gook

    2018-09-01

    We designed a CMOS analog integrate-and-fire (I&F) neuron circuit that can drive resistive synaptic devices. The neuron circuit consists of a current mirror for spatial integration, a capacitor for temporal integration, an asymmetric negative and positive pulse generation part, a refractory part, and finally a back-propagation pulse generation part for learning of the synaptic devices. The resistive synaptic devices were fabricated using an HfOx switching layer deposited by atomic layer deposition (ALD). The resistive synaptic device had gradual set and reset characteristics, and the conductance was adjusted by a spike-timing-dependent-plasticity (STDP) learning rule. We carried out circuit simulations of the synaptic device and the CMOS neuron circuit, and we developed an unsupervised spiking neural network (SNN) for 5 × 5 pattern recognition and classification using the neuron circuit and synaptic devices. The hardware-based SNN can autonomously and efficiently control the weight updates of the synapses between neurons, without the aid of software calculations.
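
    The abstract above relies on a pair-based STDP rule realized in hardware. A minimal software sketch of such a rule, assuming exponential learning windows; the time constants and learning rates are illustrative and not those of the fabricated devices.

        import numpy as np

        def stdp_update(w, t_pre, t_post, a_plus=0.01, a_minus=0.012,
                        tau_plus=20.0, tau_minus=20.0, w_min=0.0, w_max=1.0):
            """Pair-based STDP: potentiate when the pre-spike precedes the post-spike."""
            dt = t_post - t_pre  # ms
            if dt > 0:
                dw = a_plus * np.exp(-dt / tau_plus)    # causal pairing -> potentiation
            else:
                dw = -a_minus * np.exp(dt / tau_minus)  # acausal pairing -> depression
            return float(np.clip(w + dw, w_min, w_max))

        # example: a pre-spike 5 ms before the post-spike strengthens the synapse
        w_new = stdp_update(w=0.5, t_pre=10.0, t_post=15.0)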

  19. Phase transitions and self-organized criticality in networks of stochastic spiking neurons.

    Science.gov (United States)

    Brochini, Ludmila; de Andrade Costa, Ariadne; Abadi, Miguel; Roque, Antônio C; Stolfi, Jorge; Kinouchi, Osame

    2016-11-07

    Phase transitions and critical behavior are crucial issues both in theoretical and experimental neuroscience. We report analytic and computational results about phase transitions and self-organized criticality (SOC) in networks with general stochastic neurons. The stochastic neuron has a firing probability given by a smooth monotonic function Φ(V) of the membrane potential V, rather than a sharp firing threshold. We find that such networks can operate in several dynamic regimes (phases) depending on the average synaptic weight and the shape of the firing function Φ. In particular, we encounter both continuous and discontinuous phase transitions to absorbing states. At the continuous transition critical boundary, neuronal avalanches occur whose distributions of size and duration are given by power laws, as observed in biological neural networks. We also propose and test a new mechanism to produce SOC: the use of dynamic neuronal gains - a form of short-term plasticity probably located at the axon initial segment (AIS) - instead of depressing synapses at the dendrites (as previously studied in the literature). The new self-organization mechanism produces a slightly supercritical state, which we call SOSC, in accord with some intuitions of Alan Turing.
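
    A minimal sketch of a network of such stochastic neurons, assuming a sigmoidal firing function Φ(V), uniform all-to-all coupling with average weight w_mean, and reset-plus-leak dynamics; the function and all parameters are illustrative, not the exact choices of the paper.

        import numpy as np

        def simulate_stochastic_network(n=1000, steps=2000, w_mean=1.0,
                                        leak=0.5, gain=1.0, threshold=1.0, seed=0):
            """Discrete-time network in which each neuron fires with probability Phi(V)."""
            rng = np.random.default_rng(seed)
            phi = lambda v: 1.0 / (1.0 + np.exp(-gain * (v - threshold)))  # firing function
            V = rng.uniform(0.0, 1.0, n)
            activity = np.empty(steps)
            for t in range(steps):
                spikes = rng.random(n) < phi(V)
                activity[t] = spikes.mean()
                # spiking neurons reset to zero; the rest leak and receive recurrent input
                recurrent = w_mean * spikes.sum() / n
                V = np.where(spikes, 0.0, leak * V + recurrent)
            return activity

        rho = simulate_stochastic_network()
        # sweeping w_mean exposes the transition between the absorbing (silent)
        # phase and the active phase discussed in the abstract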

  20. Activity-dependent switch of GABAergic inhibition into glutamatergic excitation in astrocyte-neuron networks.

    Science.gov (United States)

    Perea, Gertrudis; Gómez, Ricardo; Mederos, Sara; Covelo, Ana; Ballesteros, Jesús J; Schlosser, Laura; Hernández-Vivanco, Alicia; Martín-Fernández, Mario; Quintana, Ruth; Rayan, Abdelrahman; Díez, Adolfo; Fuenzalida, Marco; Agarwal, Amit; Bergles, Dwight E; Bettler, Bernhard; Manahan-Vaughan, Denise; Martín, Eduardo D; Kirchhoff, Frank; Araque, Alfonso

    2016-12-24

    Interneurons are critical for proper neural network function and can activate Ca2+ signaling in astrocytes. However, the impact of interneuron-astrocyte signaling on neuronal network operation remains unknown. Using the simplest hippocampal astrocyte-neuron network, i.e., GABAergic interneuron, pyramidal neuron, single CA3-CA1 glutamatergic synapse, and astrocytes, we found that interneuron-astrocyte signaling dynamically affected excitatory neurotransmission in an activity- and time-dependent manner, and determined the sign (inhibition vs potentiation) of the GABA-mediated effects. While synaptic inhibition was mediated by GABAA receptors, potentiation involved astrocyte GABAB receptors, astrocytic glutamate release, and presynaptic metabotropic glutamate receptors. Using conditional astrocyte-specific GABAB receptor (Gabbr1) knockout mice, we confirmed the glial source of the interneuron-induced potentiation, and demonstrated the involvement of astrocytes in hippocampal theta and gamma oscillations in vivo. Therefore, astrocytes decode interneuron activity and transform inhibitory into excitatory signals, contributing to the emergence of novel network properties resulting from the interneuron-astrocyte interplay.

  1. Delay-enhanced coherence of spiral waves in noisy Hodgkin-Huxley neuronal networks

    International Nuclear Information System (INIS)

    Wang Qingyun; Perc, Matjaz; Duan Zhisheng; Chen Guanrong

    2008-01-01

    We study the spatial dynamics of spiral waves in noisy Hodgkin-Huxley neuronal ensembles evoked by different information transmission delays and network topologies. In classical settings of coherence resonance the intensity of noise is fine-tuned so as to optimize the system's response. Here, we keep the noise intensity constant, and instead, vary the length of information transmission delay amongst coupled neurons. We show that there exists an intermediate transmission delay by which the spiral waves are optimally ordered, hence indicating the existence of delay-enhanced coherence of spatial dynamics in the examined system. Additionally, we examine the robustness of this phenomenon as the diffusive interaction topology changes towards the small-world type, and discover that shortcut links amongst distant neurons hinder the emergence of coherent spiral waves irrespective of transmission delay length. The presented results thus provide insights that could facilitate the understanding of the effects of information transmission delay in realistic neuronal networks

  2. Effects of spike-time-dependent plasticity on the stochastic resonance of small-world neuronal networks

    Energy Technology Data Exchange (ETDEWEB)

    Yu, Haitao; Guo, Xinmeng; Wang, Jiang, E-mail: jiangwang@tju.edu.cn; Deng, Bin; Wei, Xile [School of Electrical Engineering and Automation, Tianjin University, Tianjin 300072 (China)

    2014-09-01

    The phenomenon of stochastic resonance in Newman-Watts small-world neuronal networks is investigated when the strength of synaptic connections between neurons is adaptively adjusted by spike-time-dependent plasticity (STDP). It is shown that the phenomenon of stochastic resonance occurs irrespective of whether the synaptic connectivity is fixed or adaptive. The efficiency of network stochastic resonance can be largely enhanced by STDP in the coupling process. Particularly, the resonance for adaptive coupling can reach a much larger value than that for the fixed one when the noise intensity is small or intermediate. STDP with dominant depression and a small temporal window ratio is more efficient for the transmission of weak external signals in small-world neuronal networks. In addition, we demonstrate that the effect of stochastic resonance can be further improved via fine-tuning of the average coupling strength of the adaptive network. Furthermore, the small-world topology can significantly affect stochastic resonance of excitable neuronal networks. It is found that there exists an optimal probability of adding links by which the noise-induced transmission of a weak periodic signal peaks.
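
    A minimal sketch of the Newman-Watts topology named above, assuming the networkx library; the ring size, neighborhood, and shortcut probability are illustrative.

        import networkx as nx

        def newman_watts(n=200, k=4, p=0.1, seed=0):
            """Ring lattice with k nearest neighbors plus randomly added shortcuts.

            Unlike Watts-Strogatz rewiring, the Newman-Watts construction only adds
            shortcut links, so the underlying ring stays intact.
            """
            return nx.newman_watts_strogatz_graph(n, k, p, seed=seed)

        g = newman_watts()
        # the abstract reports an optimal probability of adding links; one could probe
        # it by sweeping p and measuring the response of the simulated network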

  3. Effects of spike-time-dependent plasticity on the stochastic resonance of small-world neuronal networks

    International Nuclear Information System (INIS)

    Yu, Haitao; Guo, Xinmeng; Wang, Jiang; Deng, Bin; Wei, Xile

    2014-01-01

    The phenomenon of stochastic resonance in Newman-Watts small-world neuronal networks is investigated when the strength of synaptic connections between neurons is adaptively adjusted by spike-time-dependent plasticity (STDP). It is shown that the phenomenon of stochastic resonance occurs irrespective of whether the synaptic connectivity is fixed or adaptive. The efficiency of network stochastic resonance can be largely enhanced by STDP in the coupling process. Particularly, the resonance for adaptive coupling can reach a much larger value than that for the fixed one when the noise intensity is small or intermediate. STDP with dominant depression and a small temporal window ratio is more efficient for the transmission of weak external signals in small-world neuronal networks. In addition, we demonstrate that the effect of stochastic resonance can be further improved via fine-tuning of the average coupling strength of the adaptive network. Furthermore, the small-world topology can significantly affect stochastic resonance of excitable neuronal networks. It is found that there exists an optimal probability of adding links by which the noise-induced transmission of a weak periodic signal peaks

  4. Neural dynamics as sampling: a model for stochastic computation in recurrent networks of spiking neurons.

    Science.gov (United States)

    Buesing, Lars; Bill, Johannes; Nessler, Bernhard; Maass, Wolfgang

    2011-11-01

    The organization of computations in networks of spiking neurons in the brain is still largely unknown, in particular in view of the inherently stochastic features of their firing activity and the experimentally observed trial-to-trial variability of neural systems in the brain. In principle there exists a powerful computational framework for stochastic computations, probabilistic inference by sampling, which can explain a large number of macroscopic experimental data in neuroscience and cognitive science. But it has turned out to be surprisingly difficult to create a link between these abstract models for stochastic computations and more detailed models of the dynamics of networks of spiking neurons. Here we create such a link and show that under some conditions the stochastic firing activity of networks of spiking neurons can be interpreted as probabilistic inference via Markov chain Monte Carlo (MCMC) sampling. Since common methods for MCMC sampling in distributed systems, such as Gibbs sampling, are inconsistent with the dynamics of spiking neurons, we introduce a different approach based on non-reversible Markov chains that is able to reflect inherent temporal processes of spiking neuronal activity through a suitable choice of random variables. We propose a neural network model and show by a rigorous theoretical analysis that its neural activity implements MCMC sampling of a given distribution, both for the case of discrete and continuous time. This provides a step towards closing the gap between abstract functional models of cortical computation and more detailed models of networks of spiking neurons.

  5. Consistency of Network Traffic Repositories: An Overview

    NARCIS (Netherlands)

    Lastdrager, E.; Lastdrager, E.E.H.; Pras, Aiko

    2009-01-01

    Traffic repositories with TCP/IP header information are very important for network analysis. Researchers often assume that such repositories reliably represent all traffic that has been flowing over the network; little thought is given to the consistency of these repositories. Still, for

  6. Spiking neuron devices consisting of single-flux-quantum circuits

    International Nuclear Information System (INIS)

    Hirose, Tetsuya; Asai, Tetsuya; Amemiya, Yoshihito

    2006-01-01

    Single-flux-quantum (SFQ) circuits can be used for making spiking neuron devices, which are useful elements for constructing intelligent, brain-like computers. The device we propose is based on the leaky integrate-and-fire neuron (IFN) model and uses an SFQ pulse as an action signal or a spike of neurons. The operation of the neuron device is confirmed by computer simulation. It can operate with a short delay of 100 ps or less and is the highest-speed neuron device ever reported
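
    The device is based on the leaky integrate-and-fire model; for reference, a minimal software sketch of that model with illustrative (not device-specific) parameters.

        import numpy as np

        def lif_neuron(input_current, dt=0.1, tau=10.0, v_rest=0.0,
                       v_reset=0.0, v_thresh=1.0, r_m=1.0):
            """Leaky integrate-and-fire: integrate the input, emit a spike at threshold."""
            v = v_rest
            spike_times = []
            for i, I in enumerate(input_current):
                v += dt / tau * (-(v - v_rest) + r_m * I)
                if v >= v_thresh:
                    spike_times.append(i * dt)  # spike time in ms
                    v = v_reset                 # reset after firing
            return spike_times

        spikes = lif_neuron(np.full(1000, 1.2))  # constant supra-threshold drive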

  7. Transition Dynamics of a Dentate Gyrus-CA3 Neuronal Network during Temporal Lobe Epilepsy

    Directory of Open Access Journals (Sweden)

    Liyuan Zhang

    2017-07-01

    Full Text Available In temporal lobe epilepsy (TLE), the variation of chemical receptor expression underlies the basis of neural network activity shifts, resulting in neuronal hyperexcitability and epileptiform discharges. However, dynamical mechanisms involved in the transitions of TLE are not fully understood, because of the neuronal diversity and the indeterminacy of network connection. Hence, based on Hodgkin–Huxley (HH) type neurons and Pinsky–Rinzel (PR) type neurons coupling with glutamatergic and GABAergic synaptic connections respectively, we propose a computational framework which contains the dentate gyrus (DG) region and the CA3 region. By regulating the concentration range of N-methyl-D-aspartate-type glutamate receptor (NMDAR), we demonstrate that the pyramidal neuron can generate transitions from interictal to seizure discharges. This suggests that enhanced endogenous activity of NMDAR contributes to excitability in the pyramidal neuron. Moreover, we conclude that excitatory discharges in the CA3 region vary considerably on account of the excitatory currents produced by the excitatory pyramidal neuron. Interestingly, by changing the backprojection connection, we find that glutamatergic type backprojection can promote the dominant frequency of firings and further motivate excitatory counterpropagation from the CA3 region to the DG region. However, GABAergic type backprojection can reduce the firing rate and block morbid counterpropagation, which may be factored into the terminations of TLE. In addition, the neuronal diversity dominated network shows weak correlation with different backprojections. Our modeling and simulation studies provide new insights into the mechanisms of seizure generation and connectionism in the local hippocampus, along with the synaptic mechanisms of this disease.

  8. Transition Dynamics of a Dentate Gyrus-CA3 Neuronal Network during Temporal Lobe Epilepsy.

    Science.gov (United States)

    Zhang, Liyuan; Fan, Denggui; Wang, Qingyun

    2017-01-01

    In temporal lobe epilepsy (TLE), the variation of chemical receptor expression underlies the basis of neural network activity shifts, resulting in neuronal hyperexcitability and epileptiform discharges. However, dynamical mechanisms involved in the transitions of TLE are not fully understood, because of the neuronal diversity and the indeterminacy of network connection. Hence, based on Hodgkin-Huxley (HH) type neurons and Pinsky-Rinzel (PR) type neurons coupling with glutamatergic and GABAergic synaptic connections respectively, we propose a computational framework which contains dentate gyrus (DG) region and CA3 region. By regulating the concentration range of N-methyl-D-aspartate-type glutamate receptor (NMDAR), we demonstrate the pyramidal neuron can generate transitions from interictal to seizure discharges. This suggests that enhanced endogenous activity of NMDAR contributes to excitability in pyramidal neuron. Moreover, we conclude that excitatory discharges in CA3 region vary considerably on account of the excitatory currents produced by the excitatory pyramidal neuron. Interestingly, by changing the backprojection connection, we find that glutamatergic type backprojection can promote the dominant frequency of firings and further motivate excitatory counterpropagation from CA3 region to DG region. However, GABAergic type backprojection can reduce firing rate and block morbid counterpropagation, which may be factored into the terminations of TLE. In addition, neuronal diversity dominated network shows weak correlation with different backprojections. Our modeling and simulation studies provide new insights into the mechanisms of seizures generation and connectionism in local hippocampus, along with the synaptic mechanisms of this disease.

  9. Neuronal spike sorting based on radial basis function neural networks

    Directory of Open Access Journals (Sweden)

    Taghavi Kani M

    2011-02-01

    Full Text Available Background: Studying the behavior of a society of neurons, extracting the communication mechanisms of the brain with other tissues, finding treatments for some nervous system diseases, and designing neuroprosthetic devices require an algorithm to sort neural spikes automatically. However, sorting neural spikes is a challenging task because of the low signal-to-noise ratio (SNR) of the spikes. The main purpose of this study was to design an automatic algorithm for classifying neuronal spikes that are emitted from a specific region of the nervous system. Methods: The spike sorting process usually consists of three stages: detection, feature extraction and sorting. We initially used signal statistics to detect neural spikes. Then, we chose a limited number of typical spikes as features and finally used them to train a radial basis function (RBF) neural network to sort the spikes. In most spike sorting devices, these signals are not linearly discriminable. In order to solve this problem, the aforesaid RBF neural network was used. Results: After the learning process, our proposed algorithm classified any arbitrary spike. The obtained results showed that even though the proposed Radial Basis Spike Sorter (RBSS) reached the same error rate as previous methods, its computational costs were much lower compared to other algorithms. Moreover, the competitive points of the proposed algorithm were its good speed and low computational complexity. Conclusion: Regarding the results of this study, the proposed algorithm seems to serve the purpose of procedures that require real-time processing and spike sorting.
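
    A minimal sketch of the three-stage pipeline described above (threshold detection, waveforms as features, RBF-based classification), using scikit-learn's RBF-kernel SVC as a stand-in for the paper's RBF network; all names, thresholds, and window sizes are illustrative.

        import numpy as np
        from sklearn.svm import SVC  # RBF-kernel classifier as a stand-in for an RBF network

        def detect_spikes(signal, thresh_sd=4.0, window=32):
            """Threshold crossings on the filtered signal; return waveform snippets."""
            thresh = thresh_sd * np.median(np.abs(signal)) / 0.6745  # robust noise estimate
            idx = np.flatnonzero((signal[1:] > thresh) & (signal[:-1] <= thresh)) + 1
            half = window // 2
            snippets = [signal[i - half:i + half] for i in idx
                        if half <= i < len(signal) - half]
            return np.array(snippets)

        def train_sorter(waveforms, labels):
            """Train an RBF classifier on labeled spike waveforms."""
            clf = SVC(kernel="rbf", gamma="scale")
            clf.fit(waveforms, labels)
            return clf

        # usage with hypothetical data:
        #   sorter = train_sorter(snippets, unit_ids)
        #   assigned = sorter.predict(new_snippets)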

  10. Clustering predicts memory performance in networks of spiking and non-spiking neurons

    Directory of Open Access Journals (Sweden)

    Weiliang eChen

    2011-03-01

    Full Text Available The problem we address in this paper is that of finding effective and parsimonious patterns of connectivity in sparse associative memories. This problem must be addressed in real neuronal systems, so that results in artificial systems could throw light on real systems. We show that there are efficient patterns of connectivity and that these patterns are effective in models with either spiking or non-spiking neurons. This suggests that there may be some underlying general principles governing good connectivity in such networks. We also show that the clustering of the network, measured by Clustering Coefficient, has a strong linear correlation to the performance of associative memory. This result is important since a purely static measure of network connectivity appears to determine an important dynamic property of the network.
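
    A minimal sketch of the static measure highlighted above, assuming the networkx library: the average clustering coefficient of a sparse random connectivity pattern, which the paper correlates with associative-memory performance; the network size and density are illustrative.

        import networkx as nx

        def average_clustering_of_random_net(n=500, p=0.05, seed=0):
            """Clustering coefficient of a sparse Erdos-Renyi connectivity pattern."""
            g = nx.gnp_random_graph(n, p, seed=seed)
            return nx.average_clustering(g)

        print(average_clustering_of_random_net())  # close to p for an Erdos-Renyi graph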

  11. Chimera states in a multilayer network of coupled and uncoupled neurons

    Science.gov (United States)

    Majhi, Soumen; Perc, Matjaž; Ghosh, Dibakar

    2017-07-01

    We study the emergence of chimera states in a multilayer neuronal network, where one layer is composed of coupled and the other layer of uncoupled neurons. Through the multilayer structure, the layer with coupled neurons acts as the medium by means of which neurons in the uncoupled layer share information in spite of the absence of physical connections among them. Neurons in the coupled layer are connected with electrical synapses, while across the two layers, neurons are connected through chemical synapses. In both layers, the dynamics of each neuron is described by the Hindmarsh-Rose square wave bursting dynamics. We show that the presence of two different types of connecting synapses within and between the two layers, together with the multilayer network structure, plays a key role in the emergence of between-layer synchronous chimera states and patterns of synchronous clusters. In particular, we find that these chimera states can emerge in the coupled layer regardless of the range of electrical synapses. Even in all-to-all and nearest-neighbor coupling within the coupled layer, we observe qualitatively identical between-layer chimera states. Moreover, we show that the role of information transmission delay between the two layers must not be neglected, and we obtain precise parameter bounds at which chimera states can be observed. The expansion of the chimera region and annihilation of cluster and fully coherent states in the parameter plane for increasing values of inter-layer chemical synaptic time delay are illustrated using effective range measurements. These results are discussed in the light of neuronal evolution, where the coexistence of coherent and incoherent dynamics during the developmental stage is particularly likely.

  12. Innate Synchronous Oscillations in Freely-Organized Small Neuronal Circuits

    Science.gov (United States)

    Shein Idelson, Mark; Ben-Jacob, Eshel; Hanein, Yael

    2010-01-01

    Background: Information processing in neuronal networks relies on the network's ability to generate temporal patterns of action potentials. Although the nature of neuronal network activity has been intensively investigated in the past several decades at the individual neuron level, the underlying principles of the collective network activity, such as the synchronization and coordination between neurons, are largely unknown. Here we focus on isolated neuronal clusters in culture and address the following simple, yet fundamental questions: What is the minimal number of cells needed to exhibit collective dynamics? What are the internal temporal characteristics of such dynamics and how do the temporal features of network activity alternate upon crossover from minimal networks to large networks? Methodology/Principal Findings: We used network engineering techniques to induce self-organization of cultured networks into neuronal clusters of different sizes. We found that small clusters made of as few as 40 cells already exhibit spontaneous collective events characterized by innate synchronous network oscillations in the range of 25 to 100 Hz. The oscillation frequency of each network appeared to be independent of cluster size. The duration and rate of the network events scale with cluster size but converge to that of large uniform networks. Finally, the investigation of two coupled clusters revealed clear activity propagation with master/slave asymmetry. Conclusions/Significance: The nature of the activity patterns observed in small networks, namely the consistent emergence of similar activity across networks of different size and morphology, suggests that neuronal clusters self-regulate their activity to sustain network bursts with internal oscillatory features. We therefore suggest that clusters of as few as tens of cells can serve as a minimal but sufficient functional network, capable of sustaining oscillatory activity. Interestingly, the frequencies of these

  13. Nicotinic modulation of neuronal networks: from receptors to cognition

    NARCIS (Netherlands)

    Mansvelder, H.D.; van Aerde, K.I.; Couey, J.J.; Brussaard, A.B.

    2006-01-01

    Rationale: Nicotine affects many aspects of human cognition, including attention and memory. Activation of nicotinic acetylcholine receptors (nAChRs) in neuronal networks modulates activity and information processing during cognitive tasks, which can be observed in electroencephalograms (EEGs) and

  14. Biophysical synaptic dynamics in an analog VLSI network of Hodgkin-Huxley neurons.

    Science.gov (United States)

    Yu, Theodore; Cauwenberghs, Gert

    2009-01-01

    We study synaptic dynamics in a biophysical network of four coupled spiking neurons implemented in an analog VLSI silicon microchip. The four neurons implement a generalized Hodgkin-Huxley model with individually configurable rate-based kinetics of opening and closing of Na+ and K+ ion channels. The twelve synapses implement a rate-based first-order kinetic model of neurotransmitter and receptor dynamics, accounting for NMDA and non-NMDA type chemical synapses. The implemented models on the chip are fully configurable by 384 parameters accounting for conductances, reversal potentials, and pre/post-synaptic voltage-dependence of the channel kinetics. We describe the models and present experimental results from the chip characterizing single neuron dynamics, single synapse dynamics, and multi-neuron network dynamics showing phase-locking behavior as a function of synaptic coupling strength. The 3 mm × 3 mm microchip consumes 1.29 mW of power, making it promising for applications including neuromorphic modeling and neural prostheses.

  15. Motif statistics and spike correlations in neuronal networks

    International Nuclear Information System (INIS)

    Hu, Yu; Shea-Brown, Eric; Trousdale, James; Josić, Krešimir

    2013-01-01

    Motifs are patterns of subgraphs of complex networks. We studied the impact of such patterns of connectivity on the level of correlated, or synchronized, spiking activity among pairs of cells in a recurrent network of integrate and fire neurons. For a range of network architectures, we find that the pairwise correlation coefficients, averaged across the network, can be closely approximated using only three statistics of network connectivity. These are the overall network connection probability and the frequencies of two second order motifs: diverging motifs, in which one cell provides input to two others, and chain motifs, in which two cells are connected via a third intermediary cell. Specifically, the prevalence of diverging and chain motifs tends to increase correlation. Our method is based on linear response theory, which enables us to express spiking statistics using linear algebra, and a resumming technique, which extrapolates from second order motifs to predict the overall effect of coupling on network correlation. Our motif-based results seek to isolate the effect of network architecture perturbatively from a known network state. (paper)
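
    A minimal sketch of the second-order motif statistics named above (diverging and chain motifs), assuming a binary adjacency matrix with W[i, j] = 1 when neuron j projects to neuron i; the convention and the test network are illustrative.

        import numpy as np

        def second_order_motifs(W):
            """Count diverging (one source, two targets) and chain (A->B->C) motifs."""
            W = np.asarray(W, dtype=float)
            n = W.shape[0]
            out_deg = W.sum(axis=0)                         # projections made by each neuron
            diverging = (out_deg * (out_deg - 1) / 2).sum()
            two_paths = W @ W                               # [i, j] counts paths j -> m -> i
            chains = two_paths.sum() - np.trace(two_paths)  # exclude loops back to the source
            p_conn = W.sum() / (n * (n - 1))
            return {"connection_prob": p_conn, "diverging": diverging, "chains": chains}

        rng = np.random.default_rng(0)
        W = (rng.random((100, 100)) < 0.1).astype(float)
        np.fill_diagonal(W, 0)
        print(second_order_motifs(W))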

  16. Efficient network reconstruction from dynamical cascades identifies small-world topology of neuronal avalanches.

    Directory of Open Access Journals (Sweden)

    Sinisa Pajevic

    2009-01-01

    Full Text Available Cascading activity is commonly found in complex systems with directed interactions such as metabolic networks, neuronal networks, or disease spreading in social networks. Substantial insight into a system's organization can be obtained by reconstructing the underlying functional network architecture from the observed activity cascades. Here we focus on Bayesian approaches and reduce their computational demands by introducing the Iterative Bayesian (IB) and Posterior Weighted Averaging (PWA) methods. We introduce a special case of PWA, cast in nonparametric form, which we call the normalized count (NC) algorithm. NC efficiently reconstructs random and small-world functional network topologies and architectures from subcritical, critical, and supercritical cascading dynamics and yields significant improvements over commonly used correlation methods. With experimental data, NC identified a functional and structural small-world topology and its corresponding traffic in cortical networks with neuronal avalanche dynamics.

  17. Single or multiple synchronization transitions in scale-free neuronal networks with electrical or chemical coupling

    International Nuclear Information System (INIS)

    Hao Yinghang; Gong, Yubing; Wang Li; Ma Xiaoguang; Yang Chuanlu

    2011-01-01

    Research highlights: → Single synchronization transition for gap-junctional coupling. → Multiple synchronization transitions for chemical synaptic coupling. → Gap junctions and chemical synapses have different impacts on synchronization transition. → Chemical synapses may play a dominant role in neurons' information processing. - Abstract: In this paper, we have studied time delay- and coupling strength-induced synchronization transitions in scale-free modified Hodgkin-Huxley (MHH) neuron networks with gap-junctions and chemical synaptic coupling. It is shown that the synchronization transitions are much different for these two coupling types. For gap-junctions, the neurons exhibit a single synchronization transition with time delay and coupling strength, while for chemical synapses, there are multiple synchronization transitions with time delay, and the synchronization transition with coupling strength is dependent on the time delay lengths. For short delays we observe a single synchronization transition, whereas for long delays the neurons exhibit multiple synchronization transitions as the coupling strength is varied. These results show that gap junctions and chemical synapses have different impacts on the pattern formation and synchronization transitions of the scale-free MHH neuronal networks, and chemical synapses, compared to gap junctions, may play a dominant and more active function in the firing activity of the networks. These findings would be helpful for further understanding the roles of gap junctions and chemical synapses in the firing dynamics of neuronal networks.

  18. Single or multiple synchronization transitions in scale-free neuronal networks with electrical or chemical coupling

    Energy Technology Data Exchange (ETDEWEB)

    Hao Yinghang [School of Physics, Ludong University, Yantai 264025 (China); Gong, Yubing, E-mail: gongyubing09@hotmail.co [School of Physics, Ludong University, Yantai 264025 (China); Wang Li; Ma Xiaoguang; Yang Chuanlu [School of Physics, Ludong University, Yantai 264025 (China)

    2011-04-15

    Research highlights: Single synchronization transition for gap-junctional coupling. Multiple synchronization transitions for chemical synaptic coupling. Gap junctions and chemical synapses have different impacts on synchronization transition. Chemical synapses may play a dominant role in neurons' information processing. - Abstract: In this paper, we have studied time delay- and coupling strength-induced synchronization transitions in scale-free modified Hodgkin-Huxley (MHH) neuron networks with gap-junctions and chemical synaptic coupling. It is shown that the synchronization transitions are much different for these two coupling types. For gap-junctions, the neurons exhibit a single synchronization transition with time delay and coupling strength, while for chemical synapses, there are multiple synchronization transitions with time delay, and the synchronization transition with coupling strength is dependent on the time delay lengths. For short delays we observe a single synchronization transition, whereas for long delays the neurons exhibit multiple synchronization transitions as the coupling strength is varied. These results show that gap junctions and chemical synapses have different impacts on the pattern formation and synchronization transitions of the scale-free MHH neuronal networks, and chemical synapses, compared to gap junctions, may play a dominant and more active function in the firing activity of the networks. These findings would be helpful for further understanding the roles of gap junctions and chemical synapses in the firing dynamics of neuronal networks.

  19. SuperNeurons: Dynamic GPU Memory Management for Training Deep Neural Networks

    OpenAIRE

    Wang, Linnan; Ye, Jinmian; Zhao, Yiyang; Wu, Wei; Li, Ang; Song, Shuaiwen Leon; Xu, Zenglin; Kraska, Tim

    2018-01-01

    Going deeper and wider in neural architectures improves accuracy, while the limited GPU DRAM places an undesired restriction on the network design domain. Deep Learning (DL) practitioners either need to change to less desirable network architectures, or nontrivially dissect a network across multiple GPUs. These distractions keep DL practitioners from concentrating on their original machine learning tasks. We present SuperNeurons: a dynamic GPU memory scheduling runtime to enable the network training far be...

  20. Atypical cross talk between mentalizing and mirror neuron networks in autism spectrum disorder.

    Science.gov (United States)

    Fishman, Inna; Keown, Christopher L; Lincoln, Alan J; Pineda, Jaime A; Müller, Ralph-Axel

    2014-07-01

    Converging evidence indicates that brain abnormalities in autism spectrum disorder (ASD) involve atypical network connectivity, but it is unclear whether altered connectivity is especially prominent in brain networks that participate in social cognition. To investigate whether adolescents with ASD show altered functional connectivity in 2 brain networks putatively impaired in ASD and involved in social processing, theory of mind (ToM) and mirror neuron system (MNS). Cross-sectional study using resting-state functional magnetic resonance imaging involving 25 adolescents with ASD between the ages of 11 and 18 years and 25 typically developing adolescents matched for age, handedness, and nonverbal IQ. Statistical parametric maps testing the degree of whole-brain functional connectivity and social functioning measures. Relative to typically developing controls, participants with ASD showed a mixed pattern of both over- and underconnectivity in the ToM network, which was associated with greater social impairment. Increased connectivity in the ASD group was detected primarily between the regions of the MNS and ToM, and was correlated with sociocommunicative measures, suggesting that excessive ToM-MNS cross talk might be associated with social impairment. In a secondary analysis comparing a subset of the 15 participants with ASD with the most severe symptomology and a tightly matched subset of 15 typically developing controls, participants with ASD showed exclusive overconnectivity effects in both ToM and MNS networks, which were also associated with greater social dysfunction. Adolescents with ASD showed atypically increased functional connectivity involving the mentalizing and mirror neuron systems, largely reflecting greater cross talk between the 2. This finding is consistent with emerging evidence of reduced network segregation in ASD and challenges the prevailing theory of general long-distance underconnectivity in ASD. This excess ToM-MNS connectivity may reflect

  1. Rich-Club Organization in Effective Connectivity among Cortical Neurons

    Science.gov (United States)

    Shimono, Masanori; Ito, Shinya; Yeh, Fang-Chin; Timme, Nicholas; Myroshnychenko, Maxym; Lapish, Christopher C.; Tosi, Zachary; Hottowy, Pawel; Smith, Wesley C.; Masmanidis, Sotiris C.; Litke, Alan M.; Sporns, Olaf; Beggs, John M.

    2016-01-01

    The performance of complex networks, like the brain, depends on how effectively their elements communicate. Despite the importance of communication, it is virtually unknown how information is transferred in local cortical networks, consisting of hundreds of closely spaced neurons. To address this, it is important to record simultaneously from hundreds of neurons at a spacing that matches typical axonal connection distances, and at a temporal resolution that matches synaptic delays. We used a 512-electrode array (60 μm spacing) to record spontaneous activity at 20 kHz from up to 500 neurons simultaneously in slice cultures of mouse somatosensory cortex for 1 h at a time. We applied a previously validated version of transfer entropy to quantify information transfer. Similar to in vivo reports, we found an approximately lognormal distribution of firing rates. Pairwise information transfer strengths also were nearly lognormally distributed, similar to reports of synaptic strengths. Some neurons transferred and received much more information than others, which is consistent with previous predictions. Neurons with the highest outgoing and incoming information transfer were more strongly connected to each other than chance, thus forming a “rich club.” We found similar results in networks recorded in vivo from rodent cortex, suggesting the generality of these findings. A rich-club structure has been found previously in large-scale human brain networks and is thought to facilitate communication between cortical regions. The discovery of a small, but information-rich, subset of neurons within cortical regions suggests that this population will play a vital role in communication, learning, and memory. SIGNIFICANCE STATEMENT Many studies have focused on communication networks between cortical brain regions. In contrast, very few studies have examined communication networks within a cortical region. This is the first study to combine such a large number of neurons (several

  2. Rich-Club Organization in Effective Connectivity among Cortical Neurons.

    Science.gov (United States)

    Nigam, Sunny; Shimono, Masanori; Ito, Shinya; Yeh, Fang-Chin; Timme, Nicholas; Myroshnychenko, Maxym; Lapish, Christopher C; Tosi, Zachary; Hottowy, Pawel; Smith, Wesley C; Masmanidis, Sotiris C; Litke, Alan M; Sporns, Olaf; Beggs, John M

    2016-01-20

    The performance of complex networks, like the brain, depends on how effectively their elements communicate. Despite the importance of communication, it is virtually unknown how information is transferred in local cortical networks, consisting of hundreds of closely spaced neurons. To address this, it is important to record simultaneously from hundreds of neurons at a spacing that matches typical axonal connection distances, and at a temporal resolution that matches synaptic delays. We used a 512-electrode array (60 μm spacing) to record spontaneous activity at 20 kHz from up to 500 neurons simultaneously in slice cultures of mouse somatosensory cortex for 1 h at a time. We applied a previously validated version of transfer entropy to quantify information transfer. Similar to in vivo reports, we found an approximately lognormal distribution of firing rates. Pairwise information transfer strengths also were nearly lognormally distributed, similar to reports of synaptic strengths. Some neurons transferred and received much more information than others, which is consistent with previous predictions. Neurons with the highest outgoing and incoming information transfer were more strongly connected to each other than chance, thus forming a "rich club." We found similar results in networks recorded in vivo from rodent cortex, suggesting the generality of these findings. A rich-club structure has been found previously in large-scale human brain networks and is thought to facilitate communication between cortical regions. The discovery of a small, but information-rich, subset of neurons within cortical regions suggests that this population will play a vital role in communication, learning, and memory. Significance statement: Many studies have focused on communication networks between cortical brain regions. In contrast, very few studies have examined communication networks within a cortical region. This is the first study to combine such a large number of neurons (several
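
    A minimal sketch of the rich-club analysis described above, assuming the networkx library and an undirected graph obtained by thresholding the pairwise information-transfer matrix; the normalization against degree-preserving randomizations follows the standard definition and is not necessarily the exact procedure of the paper.

        import networkx as nx

        def normalized_rich_club(g, n_rand=20, seed=0):
            """Rich-club coefficients normalized by degree-preserving random graphs."""
            rc = nx.rich_club_coefficient(g, normalized=False)
            rand = []
            for i in range(n_rand):
                r = g.copy()
                nx.double_edge_swap(r, nswap=4 * r.number_of_edges(),
                                    max_tries=100 * r.number_of_edges(), seed=seed + i)
                rand.append(nx.rich_club_coefficient(r, normalized=False))
            return {k: v / max(1e-9, sum(rr.get(k, 0.0) for rr in rand) / n_rand)
                    for k, v in rc.items()}

        g = nx.erdos_renyi_graph(200, 0.05, seed=1)  # stand-in for a thresholded TE graph
        phi = normalized_rich_club(g)
        # phi[k] > 1 at high degree k indicates that well-connected (information-rich)
        # nodes link to each other more than expected by chance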

  3. APPLICATION OF UKRAINIAN GRID INFRASTRUCTURE FOR INVESTIGATION OF NONLINEAR DYNAMICS IN LARGE NEURONAL NETWORKS

    Directory of Open Access Journals (Sweden)

    O. О. Sudakov

    2015-12-01

    Full Text Available In present work the Ukrainian National Grid (UNG infrastructure was applied for investigation of synchronization in large networks of interacting neurons. This application is important for solving of modern neuroscience problems related to mechanisms of nervous system activities (memory, cognition etc. and nervous pathologies (epilepsy, Parkinsonism, etc.. Modern non-linear dynamics theories and applications provides powerful basis for computer simulations of biological neuronal networks and investigation of phenomena which mechanisms hardly could be clarified by other approaches. Cubic millimeter of brain tissue contains about 105 neurons, so realistic (Hodgkin-Huxley model and phenomenological (Kuramoto-Sakaguchi, FitzHugh-Nagumo, etc. models simulations require consideration of large neurons numbers.

  4. Computational modeling of seizure dynamics using coupled neuronal networks: factors shaping epileptiform activity.

    Directory of Open Access Journals (Sweden)

    Sebastien Naze

    2015-05-01

    Full Text Available Epileptic seizure dynamics span multiple scales in space and time. Understanding seizure mechanisms requires identifying the relations between seizure components within and across these scales, together with the analysis of their dynamical repertoire. Mathematical models have been developed to reproduce seizure dynamics across scales ranging from the single neuron to the neural population. In this study, we develop a network model of spiking neurons and systematically investigate the conditions, under which the network displays the emergent dynamic behaviors known from the Epileptor, which is a well-investigated abstract model of epileptic neural activity. This approach allows us to study the biophysical parameters and variables leading to epileptiform discharges at cellular and network levels. Our network model is composed of two neuronal populations, characterized by fast excitatory bursting neurons and regular spiking inhibitory neurons, embedded in a common extracellular environment represented by a slow variable. By systematically analyzing the parameter landscape offered by the simulation framework, we reproduce typical sequences of neural activity observed during status epilepticus. We find that exogenous fluctuations from extracellular environment and electro-tonic couplings play a major role in the progression of the seizure, which supports previous studies and further validates our model. We also investigate the influence of chemical synaptic coupling in the generation of spontaneous seizure-like events. Our results argue towards a temporal shift of typical spike waves with fast discharges as synaptic strengths are varied. We demonstrate that spike waves, including interictal spikes, are generated primarily by inhibitory neurons, whereas fast discharges during the wave part are due to excitatory neurons. Simulated traces are compared with in vivo experimental data from rodents at different stages of the disorder. We draw the conclusion

  5. Balance of excitation and inhibition determines 1/f power spectrum in neuronal networks.

    Science.gov (United States)

    Lombardi, F; Herrmann, H J; de Arcangelis, L

    2017-04-01

    The 1/f-like decay observed in the power spectrum of electrophysiological signals, along with scale-free statistics of the so-called neuronal avalanches, constitutes evidence of criticality in neuronal systems. Recent in vitro studies have shown that avalanche dynamics at criticality corresponds to some specific balance of excitation and inhibition, thus suggesting that this is a basic feature of the critical state of neuronal networks. In particular, a lack of inhibition significantly alters the temporal structure of the spontaneous avalanche activity and leads to an anomalous abundance of large avalanches. Here, we study the relationship between network inhibition and the scaling exponent β of the power spectral density (PSD) of avalanche activity in a neuronal network model inspired by Self-Organized Criticality. We find that this scaling exponent depends on the percentage of inhibitory synapses and tends to the value β = 1 for a percentage of about 30%. More specifically, β is close to 2, namely, Brownian noise, for purely excitatory networks and decreases towards values in the interval [1, 1.4] as the percentage of inhibitory synapses ranges between 20% and 30%, in agreement with experimental findings. These results indicate that the level of inhibition affects the frequency spectrum of resting brain activity and suggest the analysis of the PSD scaling behavior as a possible tool to study pathological conditions.
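
    A minimal sketch of how the PSD scaling exponent β discussed above can be estimated from an activity time series, assuming scipy's Welch estimator and a least-squares fit in log-log coordinates; the frequency band and all parameters are illustrative.

        import numpy as np
        from scipy.signal import welch

        def psd_exponent(signal, fs, f_lo=1.0, f_hi=50.0):
            """Fit S(f) ~ 1/f**beta over [f_lo, f_hi] and return beta."""
            f, pxx = welch(signal, fs=fs, nperseg=4096)
            band = (f >= f_lo) & (f <= f_hi)
            slope, _ = np.polyfit(np.log10(f[band]), np.log10(pxx[band]), 1)
            return -slope  # beta is minus the log-log slope

        # sanity check on synthetic Brownian noise, which should give beta close to 2
        rng = np.random.default_rng(0)
        brown = np.cumsum(rng.normal(size=200_000))
        print(psd_exponent(brown, fs=1000.0))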

  6. A network of spiking neurons that can represent interval timing: mean field analysis.

    Science.gov (United States)

    Gavornik, Jeffrey P; Shouval, Harel Z

    2011-04-01

    Despite the vital importance of our ability to accurately process and encode temporal information, the underlying neural mechanisms are largely unknown. We have previously described a theoretical framework that explains how temporal representations, similar to those reported in the visual cortex, can form in locally recurrent cortical networks as a function of reward modulated synaptic plasticity. This framework allows networks of both linear and spiking neurons to learn the temporal interval between a stimulus and paired reward signal presented during training. Here we use a mean field approach to analyze the dynamics of non-linear stochastic spiking neurons in a network trained to encode specific time intervals. This analysis explains how recurrent excitatory feedback allows a network structure to encode temporal representations.

  7. The influence of hubs in the structure of a neuronal network during an epileptic seizure

    Science.gov (United States)

    Rodrigues, Abner Cardoso; Cerdeira, Hilda A.; Machado, Birajara Soares

    2016-02-01

    In this work, we propose changes in the structure of a neuronal network with the intention to provoke strong synchronization to simulate episodes of epileptic seizure. Starting with a network of Izhikevich neurons, we slowly increase the number of connections in selected nodes in a controlled way, to produce (or not) hubs. We study how these structures alter the synchronization of the spike firing intervals, for individual neurons as well as for mean values, as a function of the concentration of connections for random and non-random (hub) distributions. We also analyze how the post-ictal signal varies for the different distributions. We conclude that a network with hubs is more appropriate to represent an epileptic state.
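
    A minimal sketch of the Izhikevich neuron used as the node model above, with the standard regular-spiking parameters from Izhikevich (2003); the noisy drive is illustrative, and the network coupling and hub construction are omitted.

        import numpy as np

        def izhikevich(I, dt=0.5, a=0.02, b=0.2, c=-65.0, d=8.0):
            """Izhikevich neuron model; returns spike times for an input current trace."""
            v, u = -65.0, b * -65.0
            spike_times = []
            for k, i_ext in enumerate(I):
                v += dt * (0.04 * v * v + 5 * v + 140 - u + i_ext)
                u += dt * a * (b * v - u)
                if v >= 30.0:                  # spike cutoff
                    spike_times.append(k * dt)
                    v, u = c, u + d
            return spike_times

        rng = np.random.default_rng(0)
        print(len(izhikevich(10.0 + 2.0 * rng.normal(size=2000))))  # spikes in ~1 s of drive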

  8. Patterning human neuronal networks on photolithographically engineered silicon dioxide substrates functionalized with glial analogues.

    Science.gov (United States)

    Hughes, Mark A; Brennan, Paul M; Bunting, Andrew S; Cameron, Katherine; Murray, Alan F; Shipston, Mike J

    2014-05-01

    Interfacing neurons with silicon semiconductors is a challenge being tackled through various bioengineering approaches. Such constructs inform our understanding of neuronal coding and learning and ultimately guide us toward creating intelligent neuroprostheses. A fundamental prerequisite is to dictate the spatial organization of neuronal cells. We sought to pattern neurons using photolithographically defined arrays of polymer parylene-C, activated with fetal calf serum. We used a purified human neuronal cell line [Lund human mesencephalic (LUHMES)] to establish whether neurons remain viable when isolated on-chip or whether they require a supporting cell substrate. When cultured in isolation, LUHMES neurons failed to pattern and did not show any morphological signs of differentiation. We therefore sought a cell type with which to prepattern parylene regions, hypothesizing that this cellular template would enable secondary neuronal adhesion and network formation. From a range of cell lines tested, human embryonal kidney (HEK) 293 cells patterned with highest accuracy. LUHMES neurons adhered to pre-established HEK 293 cell clusters and this coculture environment promoted morphological differentiation of neurons. Neurites extended between islands of adherent cell somata, creating an orthogonally arranged neuronal network. HEK 293 cells appear to fulfill a role analogous to glia, dictating cell adhesion, and generating an environment conducive to neuronal survival. We next replaced HEK 293 cells with slower growing glioma-derived precursors. These primary human cells patterned accurately on parylene and provided a similarly effective scaffold for neuronal adhesion. These findings advance the use of this microfabrication-compatible platform for neuronal patterning. Copyright © 2013 Wiley Periodicals, Inc.

  9. Error-backpropagation in temporally encoded networks of spiking neurons

    NARCIS (Netherlands)

    S.M. Bohte (Sander); J.A. La Poutré (Han); J.N. Kok (Joost)

    2000-01-01

    For a network of spiking neurons that encodes information in the timing of individual spike-times, we derive a supervised learning rule, SpikeProp, akin to traditional error-backpropagation, and show how to overcome the discontinuities introduced by thresholding. With this algorithm,

  10. Peripheral chemoreceptors tune inspiratory drive via tonic expiratory neuron hubs in the medullary ventral respiratory column network.

    Science.gov (United States)

    Segers, L S; Nuding, S C; Ott, M M; Dean, J B; Bolser, D C; O'Connor, R; Morris, K F; Lindsey, B G

    2015-01-01

    Models of brain stem ventral respiratory column (VRC) circuits typically emphasize populations of neurons, each active during a particular phase of the respiratory cycle. We have proposed that "tonic" pericolumnar expiratory (t-E) neurons tune breathing during baroreceptor-evoked reductions and central chemoreceptor-evoked enhancements of inspiratory (I) drive. The aims of this study were to further characterize the coordinated activity of t-E neurons and test the hypothesis that peripheral chemoreceptors also modulate drive via inhibition of t-E neurons and disinhibition of their inspiratory neuron targets. Spike trains of 828 VRC neurons were acquired by multielectrode arrays along with phrenic nerve signals from 22 decerebrate, vagotomized, neuromuscularly blocked, artificially ventilated adult cats. Forty-eight of 191 t-E neurons fired synchronously with another t-E neuron as indicated by cross-correlogram central peaks; 32 of the 39 synchronous pairs were elements of groups with mutual pairwise correlations. Gravitational clustering identified fluctuations in t-E neuron synchrony. A network model supported the prediction that inhibitory populations with spike synchrony reduce target neuron firing probabilities, resulting in offset or central correlogram troughs. In five animals, stimulation of carotid chemoreceptors evoked changes in the firing rates of 179 of 240 neurons. Thirty-two neuron pairs had correlogram troughs consistent with convergent and divergent t-E inhibition of I cells and disinhibitory enhancement of drive. Four of 10 t-E neurons that responded to sequential stimulation of peripheral and central chemoreceptors triggered 25 cross-correlograms with offset features. The results support the hypothesis that multiple afferent systems dynamically tune inspiratory drive in part via coordinated t-E neurons. Copyright © 2015 the American Physiological Society.

  11. From in silico astrocyte cell models to neuron-astrocyte network models: A review.

    Science.gov (United States)

    Oschmann, Franziska; Berry, Hugues; Obermayer, Klaus; Lenk, Kerstin

    2018-01-01

    The idea that astrocytes may be active partners in synaptic information processing has recently emerged from abundant experimental reports. Because of their spatial proximity to neurons and their bidirectional communication with them, astrocytes are now considered as an important third element of the synapse. Astrocytes integrate and process synaptic information and by doing so generate cytosolic calcium signals that are believed to reflect neuronal transmitter release. Moreover, they regulate neuronal information transmission by releasing gliotransmitters into the synaptic cleft affecting both pre- and postsynaptic receptors. Concurrent with the first experimental reports of the astrocytic impact on neural network dynamics, computational models describing astrocytic functions have been developed. In this review, we give an overview of the published computational models of astrocytic functions, from single-cell dynamics to the tripartite synapse level and network models of astrocytes and neurons. Copyright © 2017 Elsevier Inc. All rights reserved.

  12. Barreloid Borders and Neuronal Activity Shape Panglial Gap Junction-Coupled Networks in the Mouse Thalamus.

    Science.gov (United States)

    Claus, Lena; Philippot, Camille; Griemsmann, Stephanie; Timmermann, Aline; Jabs, Ronald; Henneberger, Christian; Kettenmann, Helmut; Steinhäuser, Christian

    2018-01-01

    The ventral posterior nucleus of the thalamus plays an important role in somatosensory information processing. It contains elongated cellular domains called barreloids, which are the structural basis for the somatotopic organization of vibrissae representation. So far, the organization of glial networks in these barreloid structures and its modulation by neuronal activity has not been studied. We have developed a method to visualize thalamic barreloid fields in acute slices. Combining electrophysiology, immunohistochemistry, and electroporation in transgenic mice with cell type-specific fluorescence labeling, we provide the first structure-function analyses of barreloidal glial gap junction networks. We observed coupled networks, which comprised both astrocytes and oligodendrocytes. The spread of tracers or a fluorescent glucose derivative through these networks was dependent on neuronal activity and limited by the barreloid borders, which were formed by uncoupled or weakly coupled oligodendrocytes. Neuronal somata were distributed homogeneously across barreloid fields with their processes running in parallel to the barreloid borders. Many astrocytes and oligodendrocytes were not part of the panglial networks. Thus, oligodendrocytes are the cellular elements limiting the communicating panglial network to a single barreloid, which might be important to ensure proper metabolic support to active neurons located within a particular vibrissae signaling pathway. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  13. Bifurcation analysis on a generalized recurrent neural network with two interconnected three-neuron components

    International Nuclear Information System (INIS)

    Hajihosseini, Amirhossein; Maleki, Farzaneh; Rokni Lamooki, Gholam Reza

    2011-01-01

    Highlights: → We construct a recurrent neural network by generalizing a specific n-neuron network. → Several codimension 1 and 2 bifurcations take place in the newly constructed network. → The newly constructed network has higher capabilities to learn periodic signals. → The normal form theorem is applied to investigate dynamics of the network. → A series of bifurcation diagrams is given to support theoretical results. - Abstract: A class of recurrent neural networks is constructed by generalizing a specific class of n-neuron networks. It is shown that the newly constructed network experiences generic pitchfork and Hopf codimension one bifurcations. It is also proved that the emergence of generic Bogdanov-Takens, pitchfork-Hopf and Hopf-Hopf codimension two, and the degenerate Bogdanov-Takens bifurcation points in the parameter space is possible due to the intersections of codimension one bifurcation curves. The occurrence of bifurcations of higher codimensions significantly increases the capability of the newly constructed recurrent neural network to learn broader families of periodic signals.

  14. Optimal Detection of a Localized Perturbation in Random Networks of Integrate-and-Fire Neurons

    Science.gov (United States)

    Bernardi, Davide; Lindner, Benjamin

    2017-06-01

    Experimental and theoretical studies suggest that cortical networks are chaotic and coding relies on averages over large populations. However, there is evidence that rats can respond to the short stimulation of a single cortical cell, a theoretically unexplained fact. We study effects of single-cell stimulation on a large recurrent network of integrate-and-fire neurons and propose a simple way to detect the perturbation. Detection rates obtained from simulations and analytical estimates are similar to experimental response rates if the readout is slightly biased towards specific neurons. Near-optimal detection is attained for a broad range of intermediate values of the mean coupling between neurons.

  15. Mapping cortical mesoscopic networks of single spiking cortical or sub-cortical neurons.

    Science.gov (United States)

    Xiao, Dongsheng; Vanni, Matthieu P; Mitelut, Catalin C; Chan, Allen W; LeDue, Jeffrey M; Xie, Yicheng; Chen, Andrew Cn; Swindale, Nicholas V; Murphy, Timothy H

    2017-02-04

    Understanding the basis of brain function requires knowledge of cortical operations over wide-spatial scales, but also within the context of single neurons. In vivo, wide-field GCaMP imaging and sub-cortical/cortical cellular electrophysiology were used in mice to investigate relationships between spontaneous single neuron spiking and mesoscopic cortical activity. We make use of a rich set of cortical activity motifs that are present in spontaneous activity in anesthetized and awake animals. A mesoscale spike-triggered averaging procedure allowed the identification of motifs that are preferentially linked to individual spiking neurons by employing genetically targeted indicators of neuronal activity. Thalamic neurons predicted and reported specific cycles of wide-scale cortical inhibition/excitation. In contrast, spike-triggered maps derived from single cortical neurons yielded spatio-temporal maps expected for regional cortical consensus function. This approach can define network relationships between any point source of neuronal spiking and mesoscale cortical maps.
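
    The mesoscale spike-triggered averaging procedure described above reduces, at its core, to averaging imaging frames in a window around each spike of the reference neuron. A minimal sketch of that step, assuming spike times and a wide-field frame stack are already available (array names and the window length are illustrative, not taken from the study):

    import numpy as np

    def spike_triggered_map(frames, frame_times, spike_times, window=(-0.2, 0.2)):
        """Average imaging frames around each spike to obtain a mesoscale map.

        frames      : array of shape (n_frames, H, W), e.g. wide-field GCaMP images
        frame_times : array of shape (n_frames,), acquisition time of each frame (s)
        spike_times : spike times of one reference neuron (s)
        window      : time window around each spike to average over (s)
        """
        pre, post = window
        accum = np.zeros_like(frames[0], dtype=float)
        count = 0
        for t in spike_times:
            mask = (frame_times >= t + pre) & (frame_times <= t + post)
            if mask.any():
                accum += frames[mask].mean(axis=0)   # mean map around this spike
                count += 1
        return accum / max(count, 1)                 # spike-triggered average map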

  16. Three-dimensional chimera patterns in networks of spiking neuron oscillators

    Science.gov (United States)

    Kasimatis, T.; Hizanidis, J.; Provata, A.

    2018-05-01

    We study the stable spatiotemporal patterns that arise in a three-dimensional (3D) network of neuron oscillators, whose dynamics is described by the leaky integrate-and-fire (LIF) model. More specifically, we investigate the form of the chimera states induced by a 3D coupling matrix with nonlocal topology. The observed patterns are in many cases direct generalizations of the corresponding two-dimensional (2D) patterns, e.g., spheres, layers, and cylinder grids. We also find cylindrical and "cross-layered" chimeras that do not have an equivalent in 2D systems. Quantitative measures are calculated, such as the ratio of synchronized and unsynchronized neurons as a function of the coupling range, the mean phase velocities, and the distribution of neurons in mean phase velocities. Based on these measures, the chimeras are categorized in two families. The first family of patterns is observed for weaker coupling and exhibits higher mean phase velocities for the unsynchronized areas of the network. The opposite holds for the second family, where the unsynchronized areas have lower mean phase velocities. The various measures demonstrate discontinuities, indicating criticality as the parameters cross from the first family of patterns to the second.

  17. A unified framework for spiking and gap-junction interactions in distributed neuronal network simulations

    Directory of Open Access Journals (Sweden)

    Jan Hahne

    2015-09-01

    Contemporary simulators for networks of point and few-compartment model neurons come with a plethora of ready-to-use neuron and synapse models and support complex network topologies. Recent technological advancements have broadened the spectrum of application further to the efficient simulation of brain-scale networks on supercomputers. In distributed network simulations the amount of spike data that accrues per millisecond and process is typically low, such that a common optimization strategy is to communicate spikes at relatively long intervals, where the upper limit is given by the shortest synaptic transmission delay in the network. This approach is well-suited for simulations that employ only chemical synapses but it has so far impeded the incorporation of gap-junction models, which require instantaneous neuronal interactions. Here, we present a numerical algorithm based on a waveform-relaxation technique which allows for network simulations with gap junctions in a way that is compatible with the delayed communication strategy. Using a reference implementation in the NEST simulator, we demonstrate that the algorithm and the required data structures can be smoothly integrated with existing code such that they complement the infrastructure for spiking connections. To show that the unified framework for gap-junction and spiking interactions achieves high performance and delivers high accuracy...
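
    The waveform-relaxation idea behind this algorithm can be illustrated with a toy example outside any particular simulator: each cell is repeatedly integrated over a communication interval against its partner's waveform from the previous sweep, until the waveforms stop changing; this is what makes instantaneous gap-junction coupling compatible with delayed communication. The sketch below shows the numerical technique only, not the NEST implementation; the cell model and parameters are invented for illustration.

    import numpy as np

    def relax_interval(v1, v2, g_gap=0.1, tau=10.0, e_leak=-65.0, dt=0.1,
                       tol=1e-6, max_iter=50):
        """Waveform relaxation for two leaky cells coupled by a gap junction.

        v1, v2 : membrane-potential waveforms (mV) over one communication
                 interval; entry 0 is the known initial condition.
        """
        for _ in range(max_iter):
            v1_prev, v2_prev = v1.copy(), v2.copy()
            # integrate each cell against the partner's waveform from the previous sweep
            for k in range(1, len(v1)):
                v1[k] = v1[k-1] + dt / tau * ((e_leak - v1[k-1])
                                              + g_gap * (v2_prev[k-1] - v1[k-1]))
                v2[k] = v2[k-1] + dt / tau * ((e_leak - v2[k-1])
                                              + g_gap * (v1_prev[k-1] - v2[k-1]))
            # stop once successive sweeps agree: the waveforms have "relaxed"
            if max(np.abs(v1 - v1_prev).max(), np.abs(v2 - v2_prev).max()) < tol:
                break
        return v1, v2

    # usage: iterate a 5 ms interval starting from two different initial guesses
    va, vb = relax_interval(np.full(50, -70.0), np.full(50, -55.0))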

  18. Communication through resonance in spiking neuronal networks.

    Science.gov (United States)

    Hahn, Gerald; Bujan, Alejandro F; Frégnac, Yves; Aertsen, Ad; Kumar, Arvind

    2014-08-01

    The cortex processes stimuli through a distributed network of specialized brain areas. This processing requires mechanisms that can route neuronal activity across weakly connected cortical regions. Routing models proposed thus far are either limited to propagation of spiking activity across strongly connected networks or require distinct mechanisms that create local oscillations and establish their coherence between distant cortical areas. Here, we propose a novel mechanism which explains how synchronous spiking activity propagates across weakly connected brain areas supported by oscillations. In our model, oscillatory activity unleashes network resonance that amplifies feeble synchronous signals and promotes their propagation along weak connections ("communication through resonance"). The emergence of coherent oscillations is a natural consequence of synchronous activity propagation and therefore the assumption of different mechanisms that create oscillations and provide coherence is not necessary. Moreover, the phase-locking of oscillations is a side effect of communication rather than its requirement. Finally, we show how the state of ongoing activity could affect the communication through resonance and propose that modulations of the ongoing activity state could influence information processing in distributed cortical networks.

  19. Integrated workflows for spiking neuronal network simulations

    Directory of Open Access Journals (Sweden)

    Ján Antolík

    2013-12-01

    The increasing availability of computational resources is enabling more detailed, realistic modelling in computational neuroscience, resulting in a shift towards more heterogeneous models of neuronal circuits, and employment of complex experimental protocols. This poses a challenge for existing tool chains, as the set of tools involved in a typical modeller's workflow is expanding concomitantly, with growing complexity in the metadata flowing between them. For many parts of the workflow, a range of tools is available; however, numerous areas lack dedicated tools, while integration of existing tools is limited. This forces modellers to either handle the workflow manually, leading to errors, or to write substantial amounts of code to automate parts of the workflow, in both cases reducing their productivity. To address these issues, we have developed Mozaik: a workflow system for spiking neuronal network simulations written in Python. Mozaik integrates model, experiment and stimulation specification, simulation execution, data storage, data analysis and visualisation into a single automated workflow, ensuring that all relevant metadata are available to all workflow components. It is based on several existing tools, including PyNN, Neo and Matplotlib. It offers a declarative way to specify models and recording configurations using hierarchically organised configuration files. Mozaik automatically records all data together with all relevant metadata about the experimental context, allowing automation of the analysis and visualisation stages. Mozaik has a modular architecture, and the existing modules are designed to be extensible with minimal programming effort. Mozaik increases the productivity of running virtual experiments on highly structured neuronal networks by automating the entire experimental cycle, while increasing the reliability of modelling studies by relieving the user from manual handling of the flow of metadata between the individual workflow components.

  20. An Asynchronous Recurrent Network of Cellular Automaton-Based Neurons and Its Reproduction of Spiking Neural Network Activities.

    Science.gov (United States)

    Matsubara, Takashi; Torikai, Hiroyuki

    2016-04-01

    Modeling and implementation approaches for the reproduction of input-output relationships in biological nervous tissues contribute to the development of engineering and clinical applications. However, because of high nonlinearity, the traditional modeling and implementation approaches encounter difficulties in terms of generalization ability (i.e., performance when reproducing an unknown data set) and computational resources (i.e., computation time and circuit elements). To overcome these difficulties, asynchronous cellular automaton-based neuron (ACAN) models, which are described as special kinds of cellular automata that can be implemented as small asynchronous sequential logic circuits, have been proposed. This paper presents a novel type of such ACAN and a theoretical analysis of its excitability. This paper also presents a novel network of such neurons, which can mimic input-output relationships of biological and nonlinear ordinary differential equation model neural networks. Numerical analyses confirm that the presented network has a higher generalization ability than other major modeling and implementation approaches. In addition, Field-Programmable Gate Array implementations confirm that the presented network requires lower computational resources.

  1. Memristors Empower Spiking Neurons With Stochasticity

    KAUST Repository

    Al-Shedivat, Maruan

    2015-06-01

    Recent theoretical studies have shown that probabilistic spiking can be interpreted as learning and inference in cortical microcircuits. This interpretation creates new opportunities for building neuromorphic systems driven by probabilistic learning algorithms. However, such systems must have two crucial features: 1) the neurons should follow a specific behavioral model, and 2) stochastic spiking should be implemented efficiently for it to be scalable. This paper proposes a memristor-based stochastically spiking neuron that fulfills these requirements. First, the analytical model of the memristor is enhanced so it can capture the behavioral stochasticity consistent with experimentally observed phenomena. The switching behavior of the memristor model is demonstrated to be akin to the firing of the stochastic spike response neuron model, the primary building block for probabilistic algorithms in spiking neural networks. Furthermore, the paper proposes a neural soma circuit that utilizes the intrinsic nondeterminism of memristive switching for efficient spike generation. The simulations and analysis of the behavior of a single stochastic neuron and a winner-take-all network built of such neurons and trained on handwritten digits confirm that the circuit can be used for building probabilistic sampling and pattern adaptation machinery in spiking networks. The findings constitute an important step towards scalable and efficient probabilistic neuromorphic platforms. © 2011 IEEE.

  2. The role of degree distribution in shaping the dynamics in networks of sparsely connected spiking neurons

    Directory of Open Access Journals (Sweden)

    Alex Roxin

    2011-03-01

    Neuronal network models often assume a fixed probability of connection between neurons. This assumption leads to random networks with binomial in-degree and out-degree distributions which are relatively narrow. Here I study the effect of broad degree distributions on network dynamics by interpolating between a binomial and a truncated power-law distribution for the in-degree and out-degree independently. This is done both for an inhibitory network (I network) as well as for the recurrent excitatory connections in a network of excitatory and inhibitory neurons (EI network). In both cases increasing the width of the in-degree distribution affects the global state of the network by driving transitions between asynchronous behavior and oscillations. This effect is reproduced in a simplified rate model which includes the heterogeneity in neuronal input due to the in-degree of cells. On the other hand, broadening the out-degree distribution is shown to increase the fraction of common inputs to pairs of neurons. This leads to increases in the amplitude of the cross-correlation (CC) of synaptic currents. In the case of the I network, despite strong oscillatory CCs in the currents, CCs of the membrane potential are low due to filtering and reset effects, leading to very weak CCs of the spike count. In the asynchronous regime of the EI network, broadening the out-degree increases the amplitude of CCs in the recurrent excitatory currents, while the CC of the total current is essentially unaffected, as are pairwise spiking correlations. This is due to a dynamic balance between excitatory and inhibitory synaptic currents. In the oscillatory regime, changes in the out-degree can have a large effect on spiking correlations and even on the qualitative dynamical state of the network.
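
    A sketch of the kind of degree sampling the study interpolates between, drawing in-degrees either from a binomial distribution or from a truncated power law and then picking presynaptic sources at random; the exponent and cut-off are illustrative and, unlike in the study, the mean degree is not explicitly held fixed here.

    import numpy as np

    rng = np.random.default_rng(0)

    def sample_in_degrees(n_neurons, k_mean, gamma=2.5, k_max=None, powerlaw=False):
        """Draw an in-degree for every neuron.

        powerlaw=False -> binomial in-degrees (fixed connection probability)
        powerlaw=True  -> truncated power law p(k) ~ k**(-gamma), k in [1, k_max]
        (the study keeps the mean degree fixed while interpolating; matching
        k_mean here would require adjusting gamma or k_max)
        """
        if not powerlaw:
            return rng.binomial(n_neurons - 1, k_mean / (n_neurons - 1), size=n_neurons)
        k_max = k_max or n_neurons - 1
        ks = np.arange(1, k_max + 1)
        p = ks.astype(float) ** (-gamma)
        p /= p.sum()
        return rng.choice(ks, size=n_neurons, p=p)

    def build_adjacency(in_degrees):
        """For each neuron, pick its presynaptic partners uniformly at random."""
        n = len(in_degrees)
        adj = np.zeros((n, n), dtype=bool)           # adj[i, j]: connection j -> i
        for i, k in enumerate(in_degrees):
            sources = rng.choice([j for j in range(n) if j != i],
                                 size=min(int(k), n - 1), replace=False)
            adj[i, sources] = True
        return adj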

  3. Intrinsically active and pacemaker neurons in pluripotent stem cell-derived neuronal populations.

    Science.gov (United States)

    Illes, Sebastian; Jakab, Martin; Beyer, Felix; Gelfert, Renate; Couillard-Despres, Sébastien; Schnitzler, Alfons; Ritter, Markus; Aigner, Ludwig

    2014-03-11

    Neurons generated from pluripotent stem cells (PSCs) self-organize into functional neuronal assemblies in vitro, generating synchronous network activities. Intriguingly, PSC-derived neuronal assemblies develop spontaneous activities that are independent of external stimulation, suggesting the presence of thus far undetected intrinsically active neurons (IANs). Here, by using mouse embryonic stem cells, we provide evidence for the existence of IANs in PSC-neuronal networks based on extracellular multielectrode array and intracellular patch-clamp recordings. IANs remain active after pharmacological inhibition of fast synaptic communication and possess intrinsic mechanisms required for autonomous neuronal activity. PSC-derived IANs are functionally integrated in PSC-neuronal populations, contribute to synchronous network bursting, and exhibit pacemaker properties. The intrinsic activity and pacemaker properties of the neuronal subpopulation identified herein may be particularly relevant for interventions involving transplantation of neural tissues. IANs may be a key element in the regulation of the functional activity of grafted as well as preexisting host neuronal networks.

  4. Assessing neuronal networks: understanding Alzheimer's disease.

    LENUS (Irish Health Repository)

    Bokde, Arun L W

    2012-02-01

    Findings derived from neuroimaging of the structural and functional organization of the human brain have led to the widely supported hypothesis that neuronal networks of temporally coordinated brain activity across different regional brain structures underpin cognitive function. Failure of integration within a network leads to cognitive dysfunction. The current discussion on Alzheimer's disease (AD) argues that it presents in part a disconnection syndrome. Studies using functional magnetic resonance imaging, positron emission tomography and electroencephalography demonstrate that synchronicity of brain activity is altered in AD and correlates with cognitive deficits. Moreover, recent advances in diffusion tensor imaging have made it possible to track axonal projections across the brain, revealing substantial regional impairment in fiber-tract integrity in AD. Accumulating evidence points towards a network breakdown reflecting disconnection at both the structural and functional system level. The exact relationship among these multiple mechanistic variables and their contribution to cognitive alterations and ultimately decline is yet unknown. Focused research efforts aimed at the integration of both function and structure hold great promise not only in improving our understanding of cognition but also of its characteristic progressive metamorphosis in complex chronic neurodegenerative disorders such as AD.

  5. Large-scale modeling of epileptic seizures: scaling properties of two parallel neuronal network simulation algorithms.

    Science.gov (United States)

    Pesce, Lorenzo L; Lee, Hyong C; Hereld, Mark; Visser, Sid; Stevens, Rick L; Wildeman, Albert; van Drongelen, Wim

    2013-01-01

    Our limited understanding of the relationship between the behavior of individual neurons and large neuronal networks is an important limitation in current epilepsy research and may be one of the main causes of our inadequate ability to treat it. Addressing this problem directly via experiments is impossibly complex; thus, we have been developing and studying medium-large-scale simulations of detailed neuronal networks to guide us. Flexibility in the connection schemas and a complete description of the cortical tissue seem necessary for this purpose. In this paper we examine some of the basic issues encountered in these multiscale simulations. We have determined the detailed behavior of two such simulators on parallel computer systems. The observed memory and computation-time scaling behavior for a distributed memory implementation were very good over the range studied, both in terms of network sizes (2,000 to 400,000 neurons) and processor pool sizes (1 to 256 processors). Our simulations required between a few megabytes and about 150 gigabytes of RAM and lasted between a few minutes and about a week, well within the capability of most multinode clusters. Therefore, simulations of epileptic seizures on networks with millions of cells should be feasible on current supercomputers.

  6. Large-Scale Modeling of Epileptic Seizures: Scaling Properties of Two Parallel Neuronal Network Simulation Algorithms

    Directory of Open Access Journals (Sweden)

    Lorenzo L. Pesce

    2013-01-01

    Our limited understanding of the relationship between the behavior of individual neurons and large neuronal networks is an important limitation in current epilepsy research and may be one of the main causes of our inadequate ability to treat it. Addressing this problem directly via experiments is impossibly complex; thus, we have been developing and studying medium-large-scale simulations of detailed neuronal networks to guide us. Flexibility in the connection schemas and a complete description of the cortical tissue seem necessary for this purpose. In this paper we examine some of the basic issues encountered in these multiscale simulations. We have determined the detailed behavior of two such simulators on parallel computer systems. The observed memory and computation-time scaling behavior for a distributed memory implementation were very good over the range studied, both in terms of network sizes (2,000 to 400,000 neurons) and processor pool sizes (1 to 256 processors). Our simulations required between a few megabytes and about 150 gigabytes of RAM and lasted between a few minutes and about a week, well within the capability of most multinode clusters. Therefore, simulations of epileptic seizures on networks with millions of cells should be feasible on current supercomputers.

  7. Detection of M-Sequences from Spike Sequence in Neuronal Networks

    Directory of Open Access Journals (Sweden)

    Yoshi Nishitani

    2012-01-01

    In circuit theory, it is well known that a linear feedback shift register (LFSR) circuit generates pseudorandom bit sequences (PRBS), including the M-sequence with the maximum period length. In this study, we tried to detect M-sequences, known as pseudorandom sequences generated by LFSR circuits, in time-series patterns of stimulated action potentials. Stimulated action potentials were recorded from dissociated cultures of hippocampal neurons grown on a multielectrode array. We could find several M-sequences from a 3-stage LFSR circuit (M3). These results show the possibility of assembling LFSR circuits or their equivalents in a neuronal network. However, since the M3 pattern was composed of only four spike intervals, the possibility of an accidental detection was not zero. We therefore detected M-sequences from random spike sequences which were not generated from an LFSR circuit and compared the result with the number of M-sequences found in the originally observed raster data. A significant difference was confirmed: a greater number of "0–1"-reversed 3-stage M-sequences occurred than would have been detected accidentally. This result suggests that some LFSR-equivalent circuits are assembled in neuronal networks.
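
    For reference, a 3-stage Fibonacci LFSR with feedback polynomial x^3 + x^2 + 1 (a primitive polynomial) emits an M-sequence of maximal period 2^3 - 1 = 7. The following minimal sketch uses one common tap convention, which is not necessarily the circuit convention assumed in the study.

    def lfsr_m_sequence(n_stages=3, taps=(3, 2), seed=0b001):
        """One period of an M-sequence from a Fibonacci LFSR (here x^3 + x^2 + 1)."""
        state = seed
        period = 2 ** n_stages - 1
        bits = []
        for _ in range(period):
            bits.append(state & 1)                           # output bit = stage 1
            fb = 0
            for t in taps:                                   # XOR the tapped stages
                fb ^= (state >> (n_stages - t)) & 1
            state = (state >> 1) | (fb << (n_stages - 1))    # shift and feed back
        return bits

    print(lfsr_m_sequence())   # -> [1, 0, 0, 1, 0, 1, 1], period 2**3 - 1 = 7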

  8. Plasticity-induced characteristic changes of pattern dynamics and the related phase transitions in small-world neuronal networks

    International Nuclear Information System (INIS)

    Huang Xu-Hui; Hu Gang

    2014-01-01

    Phase transitions widely exist in nature and occur when some control parameters are changed. In neural systems, macroscopic states are represented by the activity states of neuron populations, and phase transitions between different activity states are closely related to corresponding functions in the brain. In particular, phase transitions to some rhythmic synchronous firing states play significant roles in diverse brain functions and dysfunctions, such as encoding rhythmical external stimuli, epileptic seizure, etc. However, in previous studies, phase transitions in neuronal networks have almost always been driven by network parameters (e.g., external stimuli), and there has been no investigation of transitions between typical activity states of neuronal networks occurring in a self-organized way through plastic connection weights. In this paper, we discuss phase transitions in electrically coupled and lattice-based small-world neuronal networks (LBSW networks) under spike-timing-dependent plasticity (STDP). By applying STDP to all electrical synapses, various known and novel phase transitions could emerge in LBSW networks, in particular the phenomenon of self-organized phase transitions (SOPTs): repeated transitions between synchronous and asynchronous firing states. We further explore the mechanism generating SOPTs on the basis of synaptic weight dynamics. (interdisciplinary physics and related areas of science and technology)
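
    The canonical pair-based STDP rule that such weight dynamics build on can be sketched as follows; the exponential windows and parameter values are generic textbook choices rather than those of the study, which applies a plasticity rule of this type to electrical coupling strengths.

    import numpy as np

    def stdp_dw(delta_t, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
        """Weight change for a spike pair with delta_t = t_post - t_pre (ms).

        Pre-before-post (delta_t > 0) potentiates, post-before-pre depresses.
        """
        if delta_t > 0:
            return a_plus * np.exp(-delta_t / tau_plus)
        return -a_minus * np.exp(delta_t / tau_minus)

    def apply_stdp(w, pre_spikes, post_spikes, w_min=0.0, w_max=1.0):
        """Accumulate pair-based updates over all pre/post spike pairs and clip."""
        for t_pre in pre_spikes:
            for t_post in post_spikes:
                w += stdp_dw(t_post - t_pre)
        return float(np.clip(w, w_min, w_max))

    # usage: a weight repeatedly driven by pre-before-post pairings grows
    w = apply_stdp(0.5, pre_spikes=[10.0, 30.0, 50.0], post_spikes=[12.0, 32.0, 52.0])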

  9. Collective stochastic coherence in recurrent neuronal networks

    Science.gov (United States)

    Sancristóbal, Belén; Rebollo, Beatriz; Boada, Pol; Sanchez-Vives, Maria V.; Garcia-Ojalvo, Jordi

    2016-09-01

    Recurrent networks of dynamic elements frequently exhibit emergent collective oscillations, which can show substantial regularity even when the individual elements are considerably noisy. How noise-induced dynamics at the local level coexists with regular oscillations at the global level is still unclear. Here we show that a combination of stochastic recurrence-based initiation with deterministic refractoriness in an excitable network can reconcile these two features, leading to maximum collective coherence for an intermediate noise level. We report this behaviour in the slow oscillation regime exhibited by a cerebral cortex network under dynamical conditions resembling slow-wave sleep and anaesthesia. Computational analysis of a biologically realistic network model reveals that an intermediate level of background noise leads to quasi-regular dynamics. We verify this prediction experimentally in cortical slices subject to varying amounts of extracellular potassium, which modulates neuronal excitability and thus synaptic noise. The model also predicts that this effectively regular state should exhibit noise-induced memory of the spatial propagation profile of the collective oscillations, which is also verified experimentally. Taken together, these results allow us to construe the high regularity observed experimentally in the brain as an instance of collective stochastic coherence.

  10. Model-based analysis and control of a network of basal ganglia spiking neurons in the normal and Parkinsonian states

    Science.gov (United States)

    Liu, Jianbo; Khalil, Hassan K.; Oweiss, Karim G.

    2011-08-01

    Controlling the spatiotemporal firing pattern of an intricately connected network of neurons through microstimulation is highly desirable in many applications. We investigated in this paper the feasibility of using a model-based approach to the analysis and control of a basal ganglia (BG) network model of Hodgkin-Huxley (HH) spiking neurons through microstimulation. Detailed analysis of this network model suggests that it can reproduce the experimentally observed characteristics of BG neurons under a normal and a pathological Parkinsonian state. A simplified neuronal firing rate model, identified from the detailed HH network model, is shown to capture the essential network dynamics. Mathematical analysis of the simplified model reveals the presence of a systematic relationship between the network's structure and its dynamic response to spatiotemporally patterned microstimulation. We show that both the network synaptic organization and the local mechanism of microstimulation can impose tight constraints on the possible spatiotemporal firing patterns that can be generated by the microstimulated network, which may hinder the effectiveness of microstimulation to achieve a desired objective under certain conditions. Finally, we demonstrate that the feedback control design aided by the mathematical analysis of the simplified model is indeed effective in driving the BG network in the normal and Parkinsonian states to follow a prescribed spatiotemporal firing pattern. We further show that the rhythmic/oscillatory patterns that characterize a dopamine-depleted BG network can be suppressed as a direct consequence of controlling the spatiotemporal pattern of a subpopulation of the output Globus Pallidus internalis (GPi) neurons in the network. This work may provide plausible explanations for the mechanisms underlying the therapeutic effects of deep brain stimulation (DBS) in Parkinson's disease and pave the way towards a model-based, network-level analysis and closed-loop control of DBS.

  11. Spiral Waves and Multiple Spatial Coherence Resonances Induced by Colored Noise in Neuronal Network

    International Nuclear Information System (INIS)

    Tang Zhao; Li Yuye; Xi Lei; Jia Bing; Gu Huaguang

    2012-01-01

    Gaussian colored noise induced spatial patterns and spatial coherence resonances in a square lattice neuronal network composed of Morris-Lecar neurons are studied. Each neuron is at a resting state near a saddle-node bifurcation on an invariant circle, coupled to its nearest neighbors by electrical coupling. Spiral waves with different structures and disordered spatial structures can be alternately induced within a large range of noise intensity. By calculating the spatial structure function and the signal-to-noise ratio (SNR), it is found that SNR values are higher when the spiral structures are simple and lower when the spatial patterns are complex or disordered. The SNR manifests multiple local maximal peaks, indicating that the colored noise can induce multiple spatial coherence resonances. The maximal SNR values decrease as the correlation time of the noise increases. These results not only provide an example of multiple resonances, but also show that Gaussian colored noise plays a constructive role in the neuronal network. (general)

  12. Synchronization stability and pattern selection in a memristive neuronal network

    Science.gov (United States)

    Wang, Chunni; Lv, Mi; Alsaedi, Ahmed; Ma, Jun

    2017-11-01

    Spatial pattern formation and selection depend on the intrinsic self-organization and cooperation between nodes in spatiotemporal systems. Based on a memory neuron model, a regular network with electromagnetic induction is proposed to investigate the synchronization and pattern selection. In our model, the memristor is used to bridge the coupling between the magnetic flux and the membrane potential, and the induction current results from the time-varying electromagnetic field contributed by the exchange of ion currents and the distribution of charged ions. The statistical factor of synchronization predicts the transition of synchronization and pattern stability. The bifurcation analysis of the sampled time series for the membrane potential reveals the mode transition in electrical activity and pattern selection. A formation mechanism is outlined to account for the emergence of target waves. Although an external stimulus is imposed on each neuron uniformly, the diversity in the magnetic flux and the induction current leads to emergence of target waves in the studied network.

  13. Synchronization stability and pattern selection in a memristive neuronal network.

    Science.gov (United States)

    Wang, Chunni; Lv, Mi; Alsaedi, Ahmed; Ma, Jun

    2017-11-01

    Spatial pattern formation and selection depend on the intrinsic self-organization and cooperation between nodes in spatiotemporal systems. Based on a memory neuron model, a regular network with electromagnetic induction is proposed to investigate the synchronization and pattern selection. In our model, the memristor is used to bridge the coupling between the magnetic flux and the membrane potential, and the induction current results from the time-varying electromagnetic field contributed by the exchange of ion currents and the distribution of charged ions. The statistical factor of synchronization predicts the transition of synchronization and pattern stability. The bifurcation analysis of the sampled time series for the membrane potential reveals the mode transition in electrical activity and pattern selection. A formation mechanism is outlined to account for the emergence of target waves. Although an external stimulus is imposed on each neuron uniformly, the diversity in the magnetic flux and the induction current leads to emergence of target waves in the studied network.
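
    The statistical factor of synchronization mentioned above is commonly computed as the variance of the instantaneous network-averaged membrane potential divided by the mean variance of the individual potentials; a sketch under that assumption (values near 1 indicate synchronization, values near 0 its absence).

    import numpy as np

    def synchronization_factor(v):
        """Statistical factor of synchronization.

        v : array of shape (n_neurons, n_timesteps) of membrane potentials.
        """
        mean_field = v.mean(axis=0)            # F(t): instantaneous average over neurons
        var_mean_field = mean_field.var()      # variance of the mean field over time
        mean_var = v.var(axis=1).mean()        # average of single-neuron variances
        return var_mean_field / mean_var if mean_var > 0 else 0.0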

  14. Plasticity of Neuron-Glial Transmission: Equipping Glia for Long-Term Integration of Network Activity

    Directory of Open Access Journals (Sweden)

    Wayne Croft

    2015-01-01

    The capacity of synaptic networks to express activity-dependent changes in strength and connectivity is essential for learning and memory processes. In recent years, glial cells (most notably astrocytes) have been recognized as active participants in the modulation of synaptic transmission and synaptic plasticity, implicating these electrically nonexcitable cells in information processing in the brain. While the concept of bidirectional communication between neurons and glia and the mechanisms by which gliotransmission can modulate neuronal function are well established, less attention has been focussed on the computational potential of neuron-glial transmission itself. In particular, whether neuron-glial transmission is itself subject to activity-dependent plasticity and what the computational properties of such plasticity might be has not been explored in detail. In this review, we summarize current examples of plasticity in neuron-glial transmission, in many brain regions and neurotransmitter pathways. We argue that induction of glial plasticity typically requires repetitive neuronal firing over long time periods (minutes to hours) rather than the short-lived, stereotyped trigger typical of canonical long-term potentiation. We speculate that this equips glia with a mechanism for monitoring average firing rates in the synaptic network, which is suited to the longer term roles proposed for astrocytes in neurophysiology.

  15. Functional characterization of GABAA receptor-mediated modulation of cortical neuron network activity in microelectrode array recordings

    DEFF Research Database (Denmark)

    Bader, Benjamin M; Steder, Anne; Klein, Anders Bue

    2017-01-01

    The numerous γ-aminobutyric acid type A receptor (GABAAR) subtypes are differentially expressed and mediate distinct functions at neuronal level. In this study we have investigated GABAAR-mediated modulation of the spontaneous activity patterns of primary neuronal networks from murine frontal cortex by characterizing the effects induced by a wide selection of pharmacological tools at a plethora of activity parameters in microelectrode array (MEA) recordings. The basic characteristics of the primary cortical neurons used in the recordings were studied in some detail, and the expression levels ... of the information extractable from the MEA recordings offers interesting insights into the contributions of various GABAAR subtypes/subgroups to cortical network activity and the putative functional interplay between these receptors in these neurons.

  16. Consistency and diversity of spike dynamics in the neurons of bed nucleus of stria terminalis of the rat: a dynamic clamp study.

    Directory of Open Access Journals (Sweden)

    Attila Szücs

    Neurons display a high degree of variability and diversity in the expression and regulation of their voltage-dependent ionic channels. Under a low level of synaptic background a number of physiologically distinct cell types can be identified in most brain areas that display different responses to standard forms of intracellular current stimulation. Nevertheless, it is not well understood how biophysically different neurons process synaptic inputs in natural conditions, i.e., when experiencing intense synaptic bombardment in vivo. While distinct cell types might process synaptic inputs into different patterns of action potentials representing specific "motifs" of network activity, standard methods of electrophysiology are not well suited to resolve such questions. In the current paper we performed dynamic clamp experiments with simulated synaptic inputs that were presented to three types of neurons in the juxtacapsular bed nucleus of stria terminalis (jcBNST) of the rat. Our analysis of the temporal structure of firing showed that the three types of jcBNST neurons did not produce qualitatively different spike responses under identical patterns of input. However, we observed consistent, cell type dependent variations in the fine structure of firing, at the level of single spikes. At the millisecond-resolution structure of firing we found a high degree of diversity across the entire spectrum of neurons irrespective of their type. Additionally, we identified a new cell type with intrinsic oscillatory properties that produced a rhythmic and regular firing under synaptic stimulation that distinguishes it from the previously described jcBNST cell types. Our findings suggest a sophisticated, cell type dependent regulation of spike dynamics of neurons when experiencing a complex synaptic background. The high degree of their dynamical diversity has implications for their cooperative dynamics and synchronization.

  17. Theta rhythm-like bidirectional cycling dynamics of living neuronal networks in vitro.

    Science.gov (United States)

    Gladkov, Arseniy; Grinchuk, Oleg; Pigareva, Yana; Mukhina, Irina; Kazantsev, Victor; Pimashkin, Alexey

    2018-01-01

    The phenomena of synchronization, rhythmogenesis and coherence observed in brain networks are believed to be a dynamic substrate for cognitive functions such as learning and memory. However, researchers are still debating whether the rhythmic activity emerges from the network morphology that developed during neurogenesis or as a result of neuronal dynamics achieved under certain conditions. In the present study, we observed self-organized spiking activity that converged to long, complex and rhythmically repeated superbursts in neural networks formed by mature hippocampal cultures with a high cellular density. The superburst lasted for tens of seconds and consisted of hundreds of short (50-100 ms) small bursts with a high spiking rate of 139.0 ± 78.6 Hz that is associated with high-frequency oscillations in the hippocampus. In turn, the bursting frequency represents a theta rhythm (11.2 ± 1.5 Hz). The distribution of spikes within the bursts was non-random, representing a set of well-defined spatio-temporal base patterns or motifs. The long superburst was classified into two types. Each type was associated with a unique direction of spike propagation and, hence, was encoded by a binary sequence with random switching between the two "functional" states. The precisely structured bidirectional rhythmic activity that developed in self-organizing cultured networks was quite similar to the activity observed in the in vivo experiments.

  18. Theta rhythm-like bidirectional cycling dynamics of living neuronal networks in vitro.

    Directory of Open Access Journals (Sweden)

    Arseniy Gladkov

    The phenomena of synchronization, rhythmogenesis and coherence observed in brain networks are believed to be a dynamic substrate for cognitive functions such as learning and memory. However, researchers are still debating whether the rhythmic activity emerges from the network morphology that developed during neurogenesis or as a result of neuronal dynamics achieved under certain conditions. In the present study, we observed self-organized spiking activity that converged to long, complex and rhythmically repeated superbursts in neural networks formed by mature hippocampal cultures with a high cellular density. The superburst lasted for tens of seconds and consisted of hundreds of short (50-100 ms) small bursts with a high spiking rate of 139.0 ± 78.6 Hz that is associated with high-frequency oscillations in the hippocampus. In turn, the bursting frequency represents a theta rhythm (11.2 ± 1.5 Hz). The distribution of spikes within the bursts was non-random, representing a set of well-defined spatio-temporal base patterns or motifs. The long superburst was classified into two types. Each type was associated with a unique direction of spike propagation and, hence, was encoded by a binary sequence with random switching between the two "functional" states. The precisely structured bidirectional rhythmic activity that developed in self-organizing cultured networks was quite similar to the activity observed in the in vivo experiments.

  19. Transgenic tools to characterize neuronal properties of discrete populations of zebrafish neurons.

    Science.gov (United States)

    Satou, Chie; Kimura, Yukiko; Hirata, Hiromi; Suster, Maximiliano L; Kawakami, Koichi; Higashijima, Shin-ichi

    2013-09-01

    The developing nervous system consists of a variety of cell types. Transgenic animals expressing reporter genes in specific classes of neuronal cells are powerful tools for the study of neuronal network formation. We generated a wide variety of transgenic zebrafish that expressed reporter genes in specific classes of neurons or neuronal progenitors. These include lines in which neurons of specific neurotransmitter phenotypes expressed fluorescent proteins or Gal4, and lines in which specific subsets of the dorsal progenitor domain in the spinal cord expressed fluorescent proteins. Using these, we examined domain organization in the developing dorsal spinal cord, and found that there are six progenitor domains in zebrafish, which is similar to the domain organization in mice. We also systematically characterized neurotransmitter properties of the neurons that are produced from each domain. Given that reporter gene expression occurs in a wide area of the nervous system in the lines generated, these transgenic fish should serve as powerful tools for the investigation of not only the neurons in the dorsal spinal cord but also neuronal structures and functions in many other regions of the nervous system.

  20. Extrasynaptic neurotransmission in the modulation of brain function. Focus on the striatal neuronal-glial networks

    Directory of Open Access Journals (Sweden)

    Kjell Fuxe

    2012-06-01

    Extrasynaptic neurotransmission is an important short distance form of volume transmission (VT) and describes the extracellular diffusion of transmitters and modulators after synaptic spillover or extrasynaptic release in the local circuit regions, binding to and activating mainly extrasynaptic neuronal and glial receptors in the neuroglial networks of the brain. Receptor-receptor interactions in G protein-coupled receptor (GPCR) heteromers play a major role, on dendritic spines and nerve terminals including glutamate synapses, in the integrative processes of the extrasynaptic signaling. Heteromeric complexes between GPCR and ion-channel receptors play a special role in the integration of the synaptic and extrasynaptic signals. Changes in extracellular concentrations of the classical synaptic neurotransmitters glutamate and GABA found with microdialysis are likely an expression of the activity of the neuron-astrocyte unit of the brain and can be used as an index of VT-mediated actions of these two neurotransmitters in the brain. Thus, the activity of neurons may be functionally linked to the activity of astrocytes, which may release glutamate and GABA to the extracellular space where extrasynaptic glutamate and GABA receptors do exist. Wiring transmission (WT) and VT are fundamental properties of all neurons of the CNS but the balance between WT and VT varies from one nerve cell population to the other. The focus is on the striatal cellular networks, and the WT and VT and their integration via receptor heteromers are described in the GABA projection neurons, the glutamate, dopamine, 5-hydroxytryptamine (5-HT) and histamine striatal afferents, the cholinergic interneurons and different types of GABA interneurons. In addition, the role in these networks of VT signaling of the energy-dependent modulator adenosine and of endocannabinoids mainly formed in the striatal projection neurons will be underlined to understand the communication in the striatal neuronal-glial networks.

  1. Impacts of clustering on noise-induced spiking regularity in the excitatory neuronal networks of subnetworks.

    Science.gov (United States)

    Li, Huiyan; Sun, Xiaojuan; Xiao, Jinghua

    2015-01-01

    In this paper, we investigate how clustering factors influence the spiking regularity of neuronal networks of subnetworks. In order to do so, we fix the averaged coupling probability and the averaged coupling strength, and take the cluster number M, the ratio of intra-connection probability to inter-connection probability R, and the ratio of intra-coupling strength to inter-coupling strength S as control parameters. From the obtained simulation results, we find that the spiking regularity of the neuronal networks varies little with changes in R and S when M is fixed. However, the cluster number M can reduce the spiking regularity to a low level when the spiking regularity of the corresponding uniform neuronal network is at a high level. Taken together, these results show that clustering factors have little influence on spiking regularity when the overall energy, which is controlled by the averaged coupling strength and the averaged connection probability, is fixed.

  2. Opposite effects of low and high doses of Abeta42 on electrical network and neuronal excitability in the rat prefrontal cortex.

    Science.gov (United States)

    Wang, Yun; Zhang, Guangping; Zhou, Hongwei; Barakat, Amey; Querfurth, Henry

    2009-12-21

    Changes in neuronal synchronization have been found in patients and animal models of Alzheimer's disease (AD). Synchronized behaviors within neuronal networks are important to such complex cognitive processes as working memory. The mechanisms behind these changes are not understood but may involve the action of soluble beta-amyloid (Abeta) on electrical networks. In order to determine if Abeta can induce changes in neuronal synchronization, the activities of pyramidal neurons were recorded in rat prefrontal cortical (PFC) slices under calcium-free conditions using multi-neuron patch clamp technique. Electrical network activities and synchronization among neurons were significantly inhibited by low dose Abeta42 (1 nM) and initially by high dose Abeta42 (500 nM). However, prolonged application of high dose Abeta42 resulted in network activation and tonic firing. Underlying these observations, we discovered that prolonged application of low and high doses of Abeta42 induced opposite changes in action potential (AP)-threshold and after-hyperpolarization (AHP) of neurons. Accordingly, low dose Abeta42 significantly increased the AP-threshold and deepened the AHP, making neurons less excitable. In contrast, high dose Abeta42 significantly reduced the AP-threshold and shallowed the AHP, making neurons more excitable. These results support a model that low dose Abeta42 released into the interstitium has a physiologic feedback role to dampen electrical network activity by reducing neuronal excitability. Higher concentrations of Abeta42 over time promote supra-synchronization between individual neurons by increasing their excitability. The latter may disrupt frontal-based cognitive processing and in some cases lead to epileptiform discharges.

  3. Opposite effects of low and high doses of Abeta42 on electrical network and neuronal excitability in the rat prefrontal cortex.

    Directory of Open Access Journals (Sweden)

    Yun Wang

    Changes in neuronal synchronization have been found in patients and animal models of Alzheimer's disease (AD). Synchronized behaviors within neuronal networks are important to such complex cognitive processes as working memory. The mechanisms behind these changes are not understood but may involve the action of soluble beta-amyloid (Abeta) on electrical networks. In order to determine if Abeta can induce changes in neuronal synchronization, the activities of pyramidal neurons were recorded in rat prefrontal cortical (PFC) slices under calcium-free conditions using multi-neuron patch clamp technique. Electrical network activities and synchronization among neurons were significantly inhibited by low dose Abeta42 (1 nM) and initially by high dose Abeta42 (500 nM). However, prolonged application of high dose Abeta42 resulted in network activation and tonic firing. Underlying these observations, we discovered that prolonged application of low and high doses of Abeta42 induced opposite changes in action potential (AP)-threshold and after-hyperpolarization (AHP) of neurons. Accordingly, low dose Abeta42 significantly increased the AP-threshold and deepened the AHP, making neurons less excitable. In contrast, high dose Abeta42 significantly reduced the AP-threshold and shallowed the AHP, making neurons more excitable. These results support a model that low dose Abeta42 released into the interstitium has a physiologic feedback role to dampen electrical network activity by reducing neuronal excitability. Higher concentrations of Abeta42 over time promote supra-synchronization between individual neurons by increasing their excitability. The latter may disrupt frontal-based cognitive processing and in some cases lead to epileptiform discharges.

  4. Obtaining Arbitrary Prescribed Mean Field Dynamics for Recurrently Coupled Networks of Type-I Spiking Neurons with Analytically Determined Weights

    Directory of Open Access Journals (Sweden)

    Wilten Nicola

    2016-02-01

    A fundamental question in computational neuroscience is how to connect a network of spiking neurons to produce desired macroscopic or mean field dynamics. One possible approach is through the Neural Engineering Framework (NEF). The NEF approach requires quantities called decoders which are solved through an optimization problem requiring large matrix inversion. Here, we show how a decoder can be obtained analytically for type I and certain type II firing rates as a function of the heterogeneity of its associated neuron. These decoders generate approximants for functions that converge to the desired function in mean-squared error like 1/N, where N is the number of neurons in the network. We refer to these decoders as scale-invariant decoders due to their structure. These decoders generate weights for a network of neurons through the NEF formula for weights. These weights force the spiking network to have arbitrary and prescribed mean field dynamics. The weights generated with scale-invariant decoders all lie on low dimensional hypersurfaces asymptotically. We demonstrate the applicability of these scale-invariant decoders and weight surfaces by constructing networks of spiking theta neurons that replicate the dynamics of various well known dynamical systems such as the neural integrator, Van der Pol system and the Lorenz system. As these decoders are analytically determined and non-unique, the weights are also analytically determined and non-unique. We discuss the implications for measured weights of neuronal networks.

  5. Obtaining Arbitrary Prescribed Mean Field Dynamics for Recurrently Coupled Networks of Type-I Spiking Neurons with Analytically Determined Weights.

    Science.gov (United States)

    Nicola, Wilten; Tripp, Bryan; Scott, Matthew

    2016-01-01

    A fundamental question in computational neuroscience is how to connect a network of spiking neurons to produce desired macroscopic or mean field dynamics. One possible approach is through the Neural Engineering Framework (NEF). The NEF approach requires quantities called decoders which are solved through an optimization problem requiring large matrix inversion. Here, we show how a decoder can be obtained analytically for type I and certain type II firing rates as a function of the heterogeneity of its associated neuron. These decoders generate approximants for functions that converge to the desired function in mean-squared error like 1/N, where N is the number of neurons in the network. We refer to these decoders as scale-invariant decoders due to their structure. These decoders generate weights for a network of neurons through the NEF formula for weights. These weights force the spiking network to have arbitrary and prescribed mean field dynamics. The weights generated with scale-invariant decoders all lie on low dimensional hypersurfaces asymptotically. We demonstrate the applicability of these scale-invariant decoders and weight surfaces by constructing networks of spiking theta neurons that replicate the dynamics of various well known dynamical systems such as the neural integrator, Van der Pol system and the Lorenz system. As these decoders are analytically determined and non-unique, the weights are also analytically determined and non-unique. We discuss the implications for measured weights of neuronal networks.
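
    For context, the standard NEF decoder optimization that the analytical, scale-invariant decoders replace is a regularized least-squares fit of the neurons' tuning-curve activities to the target function, requiring the inversion of an N-by-N Gram matrix. A minimal sketch with made-up rectified-linear tuning curves and an arbitrary regularization constant:

    import numpy as np

    rng = np.random.default_rng(1)

    # assumed rectified-linear tuning curves a_i(x) = max(0, alpha_i * x + b_i)
    n_neurons, n_samples = 100, 500
    x = np.linspace(-1.0, 1.0, n_samples)
    alpha = rng.uniform(0.5, 2.0, n_neurons) * rng.choice([-1, 1], n_neurons)
    bias = rng.uniform(-1.0, 1.0, n_neurons)
    A = np.maximum(0.0, np.outer(alpha, x) + bias[:, None])   # (n_neurons, n_samples)

    target = x                              # decode the identity function as an example
    reg = 0.1 * n_samples                   # Tikhonov regularization (arbitrary scale)
    gram = A @ A.T + reg * np.eye(n_neurons)
    decoders = np.linalg.solve(gram, A @ target)   # least-squares NEF decoders

    x_hat = decoders @ A                    # reconstruction; error shrinks roughly like 1/N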

  6. Synaptic Dynamics and Neuronal Network Connectivity are reflected in the Distribution of Times in Up states

    Directory of Open Access Journals (Sweden)

    Khanh Dao Duc

    2015-07-01

    The dynamics of neuronal networks connected by synaptic dynamics can sustain long periods of depolarization that can last for hundreds of milliseconds, such as the Up states recorded during sleep or anesthesia. Yet the underlying mechanism driving these periods remains unclear. We show here, within a mean-field model, that the residence times of the neuronal membrane potential in cortical Up states do not follow a Poissonian law, but present several peaks. Furthermore, the present modeling approach allows extracting some information about the neuronal network connectivity from the time distribution histogram. Based on a synaptic-depression model, we find that these peaks, which can be observed in histograms of patch-clamp recordings, are not artifacts of electrophysiological measurements, but rather are an inherent property of the network dynamics. Analysis of the equations reveals a stable focus located close to the unstable limit cycle, delimiting a region that defines the Up state. The model further shows that the peaks observed in the Up state time distribution are due to winding around the focus before escaping from the basin of attraction. Finally, we use in vivo recordings of intracellular membrane potential and recover, from the peak distribution, some information about the network connectivity. We conclude that it is possible to recover the network connectivity from the distribution of times that the neuronal membrane voltage spends in Up states.

  7. The connection-set algebra--a novel formalism for the representation of connectivity structure in neuronal network models.

    Science.gov (United States)

    Djurfeldt, Mikael

    2012-07-01

    The connection-set algebra (CSA) is a novel and general formalism for the description of connectivity in neuronal network models, from small-scale to large-scale structure. The algebra provides operators to form more complex sets of connections from simpler ones and also provides parameterization of such sets. CSA is expressive enough to describe a wide range of connection patterns, including multiple types of random and/or geometrically dependent connectivity, and can serve as a concise notation for network structure in scientific writing. CSA implementations allow for scalable and efficient representation of connectivity in parallel neuronal network simulators and could even allow for avoiding explicit representation of connections in computer memory. The expressiveness of CSA makes prototyping of network structure easy. A C++ version of the algebra has been implemented and used in a large-scale neuronal network simulation (Djurfeldt et al., IBM J Res Dev 52(1/2):31-42, 2008b) and an implementation in Python has been publicly released.
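
    The flavour of the algebra can be conveyed by a toy implementation in which a connection set is a predicate over (source, target) index pairs and operators build new sets from existing ones. This is an illustrative sketch only; it does not reproduce the API of the released C++ or Python CSA implementations.

    import random

    # a connection set is represented here as a predicate over (source, target) pairs
    def one_to_one():
        return lambda i, j: i == j

    def full():
        return lambda i, j: True

    def random_set(p, seed=0):
        rng = random.Random(seed)
        cache = {}
        def cs(i, j):
            if (i, j) not in cache:            # memoize so each pair is decided once
                cache[(i, j)] = rng.random() < p
            return cache[(i, j)]
        return cs

    def intersection(cs1, cs2):
        return lambda i, j: cs1(i, j) and cs2(i, j)

    def difference(cs1, cs2):
        return lambda i, j: cs1(i, j) and not cs2(i, j)

    def connections(cs, sources, targets):
        """Enumerate the pairs of a connection set over finite index ranges."""
        return [(i, j) for i in sources for j in targets if cs(i, j)]

    # e.g. random connectivity at p = 0.1 with self-connections excluded
    cs = difference(random_set(0.1), one_to_one())
    pairs = connections(cs, range(50), range(50))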

  8. Role of Delays in Shaping Spatiotemporal Dynamics of Neuronal Activity in Large Networks

    International Nuclear Information System (INIS)

    Roxin, Alex; Brunel, Nicolas; Hansel, David

    2005-01-01

    We study the effect of delays on the dynamics of large networks of neurons. We show that delays give rise to a wealth of bifurcations and to a rich phase diagram, which includes oscillatory bumps, traveling waves, lurching waves, standing waves arising via a period-doubling bifurcation, aperiodic regimes, and regimes of multistability. We study the existence and the stability of the various dynamical patterns analytically and numerically in a simplified rate model as a function of the interaction parameters. The results derived in that framework allow us to understand the origin of the diversity of dynamical states observed in large networks of spiking neurons
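
    The qualitative role of the delay can already be seen in a one-population delayed rate equation, tau dr/dt = -r + phi(J r(t - D) + I); a minimal Euler sketch with illustrative parameters, in which sufficiently strong delayed inhibition produces oscillations:

    import numpy as np

    def simulate_delayed_rate(J=-10.0, D=5.0, I=2.0, tau=10.0, dt=0.1, t_max=500.0):
        """Single-population rate model with delayed recurrent feedback.

        tau dr/dt = -r + phi(J * r(t - D) + I), with phi threshold-linear.
        With strong delayed inhibition (J < 0, D > 0) the fixed point destabilizes
        and the rate oscillates.
        """
        phi = lambda x: np.maximum(x, 0.0)
        n_steps = int(t_max / dt)
        d_steps = int(D / dt)
        r = np.zeros(n_steps)
        r[0] = 0.1
        for k in range(1, n_steps):
            r_delayed = r[k - 1 - d_steps] if k - 1 - d_steps >= 0 else r[0]
            r[k] = r[k - 1] + dt / tau * (-r[k - 1] + phi(J * r_delayed + I))
        return r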

  9. Alterations of cortical GABA neurons and network oscillations in schizophrenia.

    Science.gov (United States)

    Gonzalez-Burgos, Guillermo; Hashimoto, Takanori; Lewis, David A

    2010-08-01

    The hypothesis that alterations of cortical inhibitory gamma-aminobutyric acid (GABA) neurons are a central element in the pathology of schizophrenia has emerged from a series of postmortem studies. How such abnormalities may contribute to the clinical features of schizophrenia has been substantially informed by a convergence with basic neuroscience studies revealing complex details of GABA neuron function in the healthy brain. Importantly, activity of the parvalbumin-containing class of GABA neurons has been linked to the production of cortical network oscillations. Furthermore, growing knowledge supports the concept that gamma band oscillations (30-80 Hz) are an essential mechanism for cortical information transmission and processing. Herein we review recent studies further indicating that inhibition from parvalbumin-positive GABA neurons is necessary to produce gamma oscillations in cortical circuits; provide an update on postmortem studies documenting that deficits in the expression of glutamic acid decarboxylase67, which accounts for most GABA synthesis in the cortex, are widely observed in schizophrenia; and describe studies using novel, noninvasive approaches directly assessing potential relations between alterations in GABA, oscillations, and cognitive function in schizophrenia.

  10. Structural covariance networks across healthy young adults and their consistency.

    Science.gov (United States)

    Guo, Xiaojuan; Wang, Yan; Guo, Taomei; Chen, Kewei; Zhang, Jiacai; Li, Ke; Jin, Zhen; Yao, Li

    2015-08-01

    To investigate structural covariance networks (SCNs) as measured by regional gray matter volumes with structural magnetic resonance imaging (MRI) from healthy young adults, and to examine their consistency and stability. Two independent cohorts were included in this study: Group 1 (82 healthy subjects aged 18-28 years) and Group 2 (109 healthy subjects aged 20-28 years). Structural MRI data were acquired at 3.0T and 1.5T using a magnetization prepared rapid-acquisition gradient echo sequence for these two groups, respectively. We applied independent component analysis (ICA) to construct SCNs and further applied the spatial overlap ratio and correlation coefficient to evaluate the spatial consistency of the SCNs between these two datasets. Seven and six independent components were identified for Group 1 and Group 2, respectively. Moreover, six SCNs including the posterior default mode network, the visual and auditory networks consistently existed across the two datasets. The overlap ratios and correlation coefficients of the visual network reached the maximums of 72% and 0.71. This study demonstrates the existence of consistent SCNs corresponding to general functional networks. These structural covariance findings may provide insight into the underlying organizational principles of brain anatomy. © 2014 Wiley Periodicals, Inc.
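
    A minimal sketch of the two consistency measures named in the abstract, a spatial overlap ratio between thresholded component maps and a spatial correlation coefficient, applied here to synthetic arrays standing in for real ICA component maps. The Dice-style overlap definition and the z-threshold are assumptions made for illustration.

        # Sketch of the two consistency measures: a Dice-style overlap ratio of
        # supra-threshold voxels and the voxel-wise Pearson correlation. The
        # arrays below are synthetic stand-ins for ICA component z-maps.
        import numpy as np

        rng = np.random.default_rng(0)
        map1 = rng.normal(size=(40, 48, 40))
        map2 = 0.7 * map1 + 0.3 * rng.normal(size=map1.shape)

        def overlap_ratio(a, b, z=2.0):
            """Dice overlap of the voxels exceeding threshold z in both maps."""
            a_bin, b_bin = a > z, b > z
            return 2.0 * np.logical_and(a_bin, b_bin).sum() / (a_bin.sum() + b_bin.sum())

        def spatial_correlation(a, b):
            """Pearson correlation computed over all voxels."""
            return np.corrcoef(a.ravel(), b.ravel())[0, 1]

        print("overlap ratio      :", round(overlap_ratio(map1, map2), 2))
        print("spatial correlation:", round(spatial_correlation(map1, map2), 2))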

  11. Information in a Network of Neuronal Cells: Effect of Cell Density and Short-Term Depression

    KAUST Repository

    Onesto, Valentina; Cosentino, Carlo; Di Fabrizio, Enzo M.; Cesarelli, Mario; Amato, Francesco; Gentile, Francesco

    2016-01-01

    Neurons are specialized, electrically excitable cells which use electrical to chemical signals to transmit and elaborate information. Understanding how the cooperation of a great many of neurons in a grid may modify and perhaps improve the information quality, in contrast to few neurons in isolation, is critical for the rational design of cell-materials interfaces for applications in regenerative medicine, tissue engineering, and personalized lab-on-a-chips. In the present paper, we couple an integrate-and-fire model with information theory variables to analyse the extent of information in a network of nerve cells. We provide an estimate of the information in the network in bits as a function of cell density and short-term depression time. In the model, neurons are connected through a Delaunay triangulation of not-intersecting edges; in doing so, the number of connecting synapses per neuron is approximately constant to reproduce the early time of network development in planar neural cell cultures. In simulations where the number of nodes is varied, we observe an optimal value of cell density for which information in the grid is maximized. In simulations in which the posttransmission latency time is varied, we observe that information increases as the latency time decreases and, for specific configurations of the grid, it is largely enhanced in a resonance effect.

  12. Information in a Network of Neuronal Cells: Effect of Cell Density and Short-Term Depression

    KAUST Repository

    Onesto, Valentina

    2016-05-10

    Neurons are specialized, electrically excitable cells which use electrical to chemical signals to transmit and elaborate information. Understanding how the cooperation of a great many of neurons in a grid may modify and perhaps improve the information quality, in contrast to few neurons in isolation, is critical for the rational design of cell-materials interfaces for applications in regenerative medicine, tissue engineering, and personalized lab-on-a-chips. In the present paper, we couple an integrate-and-fire model with information theory variables to analyse the extent of information in a network of nerve cells. We provide an estimate of the information in the network in bits as a function of cell density and short-term depression time. In the model, neurons are connected through a Delaunay triangulation of not-intersecting edges; in doing so, the number of connecting synapses per neuron is approximately constant to reproduce the early time of network development in planar neural cell cultures. In simulations where the number of nodes is varied, we observe an optimal value of cell density for which information in the grid is maximized. In simulations in which the posttransmission latency time is varied, we observe that information increases as the latency time decreases and, for specific configurations of the grid, it is largely enhanced in a resonance effect.

  13. Information in a Network of Neuronal Cells: Effect of Cell Density and Short-Term Depression

    Directory of Open Access Journals (Sweden)

    Valentina Onesto

    2016-01-01

    Full Text Available Neurons are specialized, electrically excitable cells which use electrical to chemical signals to transmit and elaborate information. Understanding how the cooperation of a great many of neurons in a grid may modify and perhaps improve the information quality, in contrast to few neurons in isolation, is critical for the rational design of cell-materials interfaces for applications in regenerative medicine, tissue engineering, and personalized lab-on-a-chips. In the present paper, we couple an integrate-and-fire model with information theory variables to analyse the extent of information in a network of nerve cells. We provide an estimate of the information in the network in bits as a function of cell density and short-term depression time. In the model, neurons are connected through a Delaunay triangulation of not-intersecting edges; in doing so, the number of connecting synapses per neuron is approximately constant to reproduce the early time of network development in planar neural cell cultures. In simulations where the number of nodes is varied, we observe an optimal value of cell density for which information in the grid is maximized. In simulations in which the posttransmission latency time is varied, we observe that information increases as the latency time decreases and, for specific configurations of the grid, it is largely enhanced in a resonance effect.
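
    The connectivity rule described above can be sketched with scipy's Delaunay triangulation: neurons placed at random planar positions are linked along triangulation edges, which keeps the mean number of synapses per neuron roughly constant as cell density changes. The point counts and the unit-square domain are illustrative assumptions, not the paper's setup.

        # Delaunay-triangulation connectivity among randomly placed planar cells:
        # the mean number of synapses per neuron stays roughly constant (close to
        # six, minus boundary effects) as the number of cells grows.
        import numpy as np
        from scipy.spatial import Delaunay

        def delaunay_degrees(n_cells, seed=0):
            rng = np.random.default_rng(seed)
            pts = rng.uniform(0.0, 1.0, size=(n_cells, 2))    # planar culture positions
            tri = Delaunay(pts)
            edges = set()
            for simplex in tri.simplices:                     # each triangle gives 3 edges
                for a in range(3):
                    i, j = sorted((simplex[a], simplex[(a + 1) % 3]))
                    edges.add((i, j))
            degree = np.zeros(n_cells, dtype=int)
            for i, j in edges:
                degree[i] += 1
                degree[j] += 1
            return degree

        for n in (50, 200, 800):
            print(n, "cells -> mean synapses per neuron:", round(delaunay_degrees(n).mean(), 2))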

  14. Self-organized criticality in developing neuronal networks.

    Directory of Open Access Journals (Sweden)

    Christian Tetzlaff

    Full Text Available Recently evidence has accumulated that many neural networks exhibit self-organized criticality. In this state, activity is similar across temporal scales and this is beneficial with respect to information flow. If subcritical, activity can die out; if supercritical, epileptiform patterns may occur. Little is known about how developing networks will reach and stabilize criticality. Here we monitor the development between 13 and 95 days in vitro (DIV) of cortical cell cultures (n = 20) and find four different phases, related to their morphological maturation: An initial low-activity state (≈19 DIV) is followed by a supercritical (≈20 DIV) and then a subcritical one (≈36 DIV) until the network finally reaches stable criticality (≈58 DIV). Using network modeling and mathematical analysis we describe the dynamics of the emergent connectivity in such developing systems. Based on physiological observations, the synaptic development in the model is determined by the drive of the neurons to adjust their connectivity for reaching, on average, firing rate homeostasis. We predict a specific time course for the maturation of inhibition, with strong onset and delayed pruning, and that total synaptic connectivity should be strongly linked to the relative levels of excitation and inhibition. These results demonstrate that the interplay between activity and connectivity guides developing networks into criticality, suggesting that this may be a generic and stable state of many networks in vivo and in vitro.

  15. Effects of neuronal loss in the dynamic model of neural networks

    International Nuclear Information System (INIS)

    Yoon, B-G; Choi, J; Choi, M Y

    2008-01-01

    We study the phase transitions and dynamic behavior of the dynamic model of neural networks, with an emphasis on the effects of neuronal loss due to external stress. In the absence of loss the overall results obtained numerically are found to agree excellently with the theoretical ones. When the external stress is turned on, some neurons may deteriorate and die; such loss of neurons, in general, weakens the memory in the system. As the loss increases beyond a critical value, the order parameter measuring the strength of memory decreases to zero either continuously or discontinuously, namely, the system loses its memory via a second- or a first-order transition, depending on the ratio of the refractory period to the duration of action potential

  16. Response sensitivity of barrel neuron subpopulations to simulated thalamic input.

    Science.gov (United States)

    Pesavento, Michael J; Rittenhouse, Cynthia D; Pinto, David J

    2010-06-01

    Our goal is to examine the relationship between neuron- and network-level processing in the context of a well-studied cortical function, the processing of thalamic input by whisker-barrel circuits in rodent neocortex. Here we focus on neuron-level processing and investigate the responses of excitatory and inhibitory barrel neurons to simulated thalamic inputs applied using the dynamic clamp method in brain slices. Simulated inputs are modeled after real thalamic inputs recorded in vivo in response to brief whisker deflections. Our results suggest that inhibitory neurons require more input to reach firing threshold, but then fire earlier, with less variability, and respond to a broader range of inputs than do excitatory neurons. Differences in the responses of barrel neuron subtypes depend on their intrinsic membrane properties. Neurons with a low input resistance require more input to reach threshold but then fire earlier than neurons with a higher input resistance, regardless of the neuron's classification. Our results also suggest that the response properties of excitatory versus inhibitory barrel neurons are consistent with the response sensitivities of the ensemble barrel network. The short response latency of inhibitory neurons may serve to suppress ensemble barrel responses to asynchronous thalamic input. Correspondingly, whereas neurons acting as part of the barrel circuit in vivo are highly selective for temporally correlated thalamic input, excitatory barrel neurons acting alone in vitro are less so. These data suggest that network-level processing of thalamic input in barrel cortex depends on neuron-level processing of the same input by excitatory and inhibitory barrel neurons.

  17. A novel recurrent neural network with one neuron and finite-time convergence for k-winners-take-all operation.

    Science.gov (United States)

    Liu, Qingshan; Dang, Chuangyin; Cao, Jinde

    2010-07-01

    In this paper, based on a one-neuron recurrent neural network, a novel k-winners-take-all (k-WTA) network is proposed. Finite-time convergence of the proposed neural network is proved using the Lyapunov method. The k-WTA operation is first converted equivalently into a linear programming problem. Then, a one-neuron recurrent neural network is proposed to get the kth or (k+1)th largest inputs of the k-WTA problem. Furthermore, a k-WTA network is designed based on the proposed neural network to perform the k-WTA operation. Compared with the existing k-WTA networks, the proposed network has a simple structure and finite-time convergence. In addition, simulation results on numerical examples show the effectiveness and performance of the proposed k-WTA network.
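
    The LP equivalence stated in the abstract can be checked directly: the k-WTA operation corresponds to maximizing v.x subject to sum(x) = k and 0 <= x_i <= 1, whose optimum marks the k largest inputs. The sketch below solves that linear program with scipy; it illustrates only the equivalence, not the paper's one-neuron recurrent network.

        # k-WTA as the linear program mentioned in the abstract:
        #     maximize v.x   subject to   sum(x) = k,  0 <= x_i <= 1.
        # For generic inputs the optimum sets x_i = 1 exactly on the k largest
        # entries of v. This checks the equivalence only; it is not the paper's
        # one-neuron recurrent network.
        import numpy as np
        from scipy.optimize import linprog

        def k_wta(v, k):
            n = len(v)
            res = linprog(c=-np.asarray(v),               # linprog minimizes, so negate
                          A_eq=np.ones((1, n)), b_eq=[k],
                          bounds=[(0, 1)] * n, method="highs")
            return (res.x > 0.5).astype(int)              # indicator of the k winners

        v = [0.3, 2.1, -0.4, 1.7, 0.9]
        print(k_wta(v, k=2))                              # expected: [0 1 0 1 0]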

  18. Dopamine Attenuates Ketamine-Induced Neuronal Apoptosis in the Developing Rat Retina Independent of Early Synchronized Spontaneous Network Activity.

    Science.gov (United States)

    Dong, Jing; Gao, Lingqi; Han, Junde; Zhang, Junjie; Zheng, Jijian

    2017-07-01

    Deprivation of spontaneous rhythmic electrical activity in early development by anesthesia administration, among other interventions, induces neuronal apoptosis. However, it is unclear whether enhancement of neuronal electrical activity attenuates neuronal apoptosis in either normal development or after anesthesia exposure. The present study investigated the effects of dopamine, an enhancer of spontaneous rhythmic electrical activity, on ketamine-induced neuronal apoptosis in the developing rat retina. TUNEL and immunohistochemical assays indicated that ketamine time- and dose-dependently aggravated physiological and ketamine-induced apoptosis and inhibited early-synchronized spontaneous network activity. Dopamine administration reversed ketamine-induced neuronal apoptosis, but did not reverse the inhibitory effects of ketamine on early synchronized spontaneous network activity despite enhancing it in controls. Blockade of D1, D2, and A2A receptors and inhibition of cAMP/PKA signaling partially antagonized the protective effect of dopamine against ketamine-induced apoptosis. Together, these data indicate that dopamine attenuates ketamine-induced neuronal apoptosis in the developing rat retina by activating the D1, D2, and A2A receptors, and upregulating cAMP/PKA signaling, rather than through modulation of early synchronized spontaneous network activity.

  19. Network control principles predict neuron function in the Caenorhabditis elegans connectome

    Science.gov (United States)

    Yan, Gang; Vértes, Petra E.; Towlson, Emma K.; Chew, Yee Lian; Walker, Denise S.; Schafer, William R.; Barabási, Albert-László

    2017-10-01

    Recent studies on the controllability of complex systems offer a powerful mathematical framework to systematically explore the structure-function relationship in biological, social, and technological networks. Despite theoretical advances, we lack direct experimental proof of the validity of these widely used control principles. Here we fill this gap by applying a control framework to the connectome of the nematode Caenorhabditis elegans, allowing us to predict the involvement of each C. elegans neuron in locomotor behaviours. We predict that control of the muscles or motor neurons requires 12 neuronal classes, which include neuronal groups previously implicated in locomotion by laser ablation, as well as one previously uncharacterized neuron, PDB. We validate this prediction experimentally, finding that the ablation of PDB leads to a significant loss of dorsoventral polarity in large body bends. Importantly, control principles also allow us to investigate the involvement of individual neurons within each neuronal class. For example, we predict that, within the class of DD motor neurons, only three (DD04, DD05, or DD06) should affect locomotion when ablated individually. This prediction is also confirmed; single cell ablations of DD04 or DD05 specifically affect posterior body movements, whereas ablations of DD02 or DD03 do not. Our predictions are robust to deletions of weak connections, missing connections, and rewired connections in the current connectome, indicating the potential applicability of this analytical framework to larger and less well-characterized connectomes.

  20. Network control principles predict neuron function in the Caenorhabditis elegans connectome.

    Science.gov (United States)

    Yan, Gang; Vértes, Petra E; Towlson, Emma K; Chew, Yee Lian; Walker, Denise S; Schafer, William R; Barabási, Albert-László

    2017-10-26

    Recent studies on the controllability of complex systems offer a powerful mathematical framework to systematically explore the structure-function relationship in biological, social, and technological networks. Despite theoretical advances, we lack direct experimental proof of the validity of these widely used control principles. Here we fill this gap by applying a control framework to the connectome of the nematode Caenorhabditis elegans, allowing us to predict the involvement of each C. elegans neuron in locomotor behaviours. We predict that control of the muscles or motor neurons requires 12 neuronal classes, which include neuronal groups previously implicated in locomotion by laser ablation, as well as one previously uncharacterized neuron, PDB. We validate this prediction experimentally, finding that the ablation of PDB leads to a significant loss of dorsoventral polarity in large body bends. Importantly, control principles also allow us to investigate the involvement of individual neurons within each neuronal class. For example, we predict that, within the class of DD motor neurons, only three (DD04, DD05, or DD06) should affect locomotion when ablated individually. This prediction is also confirmed; single cell ablations of DD04 or DD05 specifically affect posterior body movements, whereas ablations of DD02 or DD03 do not. Our predictions are robust to deletions of weak connections, missing connections, and rewired connections in the current connectome, indicating the potential applicability of this analytical framework to larger and less well-characterized connectomes.
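
    The control framework applied in this work builds on structural controllability, where a standard recipe identifies a minimum set of driver nodes from a maximum matching on the bipartite representation of the directed graph (Liu et al. 2011). The sketch below applies that generic recipe to a toy directed graph; it is not the authors' pipeline and does not use the actual C. elegans connectome data.

        # Generic structural-controllability recipe related to the framework above:
        # nodes whose "incoming" copy is left unmatched by a maximum matching on
        # the bipartite representation of a directed graph form a minimum driver
        # set (Liu et al. 2011). Toy graph only, not the real connectome.
        import networkx as nx
        from networkx.algorithms import bipartite

        edges = [(0, 1), (0, 2), (1, 3), (2, 3), (3, 4)]      # toy directed wiring
        nodes = range(5)

        B = nx.Graph()
        B.add_nodes_from((f"out{u}" for u in nodes), bipartite=0)
        B.add_nodes_from((f"in{v}" for v in nodes), bipartite=1)
        B.add_edges_from((f"out{u}", f"in{v}") for u, v in edges)

        top = {f"out{u}" for u in nodes}
        matching = bipartite.maximum_matching(B, top_nodes=top)
        matched_in = {m for m in matching if m.startswith("in")}
        drivers = [v for v in nodes if f"in{v}" not in matched_in]
        print("driver nodes:", drivers)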

  1. Optimal autaptic and synaptic delays enhanced synchronization transitions induced by each other in Newman–Watts neuronal networks

    International Nuclear Information System (INIS)

    Wang, Baoying; Gong, Yubing; Xie, Huijuan; Wang, Qi

    2016-01-01

    Highlights: • Optimal autaptic delay enhanced synchronization transitions induced by synaptic delay in neuronal networks. • Optimal synaptic delay enhanced synchronization transitions induced by autaptic delay. • Optimal coupling strength enhanced synchronization transitions induced by autaptic or synaptic delay. - Abstract: In this paper, we numerically study the effect of electrical autaptic and synaptic delays on synchronization transitions induced by each other in Newman–Watts Hodgkin–Huxley neuronal networks. It is found that the synchronization transitions induced by synaptic delay vary with varying autaptic delay and become strongest when the autaptic delay is optimal. Similarly, the synchronization transitions induced by autaptic delay vary with varying synaptic delay and become strongest at an optimal synaptic delay. Also, there is an optimal coupling strength at which the synchronization transitions induced by either synaptic or autaptic delay become strongest. These results show that electrical autaptic and synaptic delays can enhance synchronization transitions induced by each other in the neuronal networks. This implies that electrical autaptic and synaptic delays can cooperate with each other and more efficiently regulate the synchrony state of the neuronal networks. These findings may have implications for information transmission in neural systems.

  2. Synaptic and intrinsic activation of GABAergic neurons in the cardiorespiratory brainstem network.

    Science.gov (United States)

    Frank, Julie G; Mendelowitz, David

    2012-01-01

    GABAergic pathways in the brainstem play an essential role in respiratory rhythmogenesis and interactions between the respiratory and cardiovascular neuronal control networks. However, little is known about the identity and function of these GABAergic inhibitory neurons and what determines their activity. In this study we have identified a population of GABAergic neurons in the ventrolateral medulla that receive increased excitatory post-synaptic potentials during inspiration, but also have spontaneous firing in the absence of synaptic input. Using transgenic mice that express GFP under the control of the Gad1 (GAD67) gene promoter, we determined that this population of GABAergic neurons is in close apposition to cardioinhibitory parasympathetic cardiac neurons in the nucleus ambiguus (NA). These neurons fire in synchronization with inspiratory activity. Although they receive excitatory glutamatergic synaptic inputs during inspiration, this excitatory neurotransmission was not altered by blocking nicotinic receptors, and many of these GABAergic neurons continue to fire after synaptic blockade. The spontaneous firing in these GABAergic neurons was not altered by the voltage-gated calcium channel blocker cadmium chloride that blocks both neurotransmission to these neurons and voltage-gated Ca(2+) currents, but spontaneous firing was diminished by riluzole, demonstrating a role of persistent sodium channels in the spontaneous firing in these cardiorespiratory GABAergic neurons that possess a pacemaker phenotype. The spontaneously firing GABAergic neurons identified in this study that increase their activity during inspiration would support respiratory rhythm generation if they acted primarily to inhibit post-inspiratory neurons and thereby release inspiration neurons to increase their activity. This population of inspiratory-modulated GABAergic neurons could also play a role in inhibiting neurons that are most active during expiration and provide a framework for

  3. Synaptic and intrinsic activation of GABAergic neurons in the cardiorespiratory brainstem network.

    Directory of Open Access Journals (Sweden)

    Julie G Frank

    Full Text Available GABAergic pathways in the brainstem play an essential role in respiratory rhythmogenesis and interactions between the respiratory and cardiovascular neuronal control networks. However, little is known about the identity and function of these GABAergic inhibitory neurons and what determines their activity. In this study we have identified a population of GABAergic neurons in the ventrolateral medulla that receive increased excitatory post-synaptic potentials during inspiration, but also have spontaneous firing in the absence of synaptic input. Using transgenic mice that express GFP under the control of the Gad1 (GAD67) gene promoter, we determined that this population of GABAergic neurons is in close apposition to cardioinhibitory parasympathetic cardiac neurons in the nucleus ambiguus (NA). These neurons fire in synchronization with inspiratory activity. Although they receive excitatory glutamatergic synaptic inputs during inspiration, this excitatory neurotransmission was not altered by blocking nicotinic receptors, and many of these GABAergic neurons continue to fire after synaptic blockade. The spontaneous firing in these GABAergic neurons was not altered by the voltage-gated calcium channel blocker cadmium chloride that blocks both neurotransmission to these neurons and voltage-gated Ca(2+) currents, but spontaneous firing was diminished by riluzole, demonstrating a role of persistent sodium channels in the spontaneous firing in these cardiorespiratory GABAergic neurons that possess a pacemaker phenotype. The spontaneously firing GABAergic neurons identified in this study that increase their activity during inspiration would support respiratory rhythm generation if they acted primarily to inhibit post-inspiratory neurons and thereby release inspiration neurons to increase their activity. This population of inspiratory-modulated GABAergic neurons could also play a role in inhibiting neurons that are most active during expiration and provide a

  4. Interplay between Graph Topology and Correlations of Third Order in Spiking Neuronal Networks.

    Directory of Open Access Journals (Sweden)

    Stojan Jovanović

    2016-06-01

    Full Text Available The study of processes evolving on networks has recently become a very popular research field, not only because of the rich mathematical theory that underpins it, but also because of its many possible applications, a number of them in the field of biology. Indeed, molecular signaling pathways, gene regulation, predator-prey interactions and the communication between neurons in the brain can be seen as examples of networks with complex dynamics. The properties of such dynamics depend largely on the topology of the underlying network graph. In this work, we want to answer the following question: Knowing network connectivity, what can be said about the level of third-order correlations that will characterize the network dynamics? We consider a linear point process as a model for pulse-coded, or spiking activity in a neuronal network. Using recent results from theory of such processes, we study third-order correlations between spike trains in such a system and explain which features of the network graph (i.e. which topological motifs) are responsible for their emergence. Comparing two different models of network topology-random networks of Erdős-Rényi type and networks with highly interconnected hubs-we find that, in random networks, the average measure of third-order correlations does not depend on the local connectivity properties, but rather on global parameters, such as the connection probability. This, however, ceases to be the case in networks with a geometric out-degree distribution, where topological specificities have a strong impact on average correlations.

  5. Interplay between Graph Topology and Correlations of Third Order in Spiking Neuronal Networks.

    Science.gov (United States)

    Jovanović, Stojan; Rotter, Stefan

    2016-06-01

    The study of processes evolving on networks has recently become a very popular research field, not only because of the rich mathematical theory that underpins it, but also because of its many possible applications, a number of them in the field of biology. Indeed, molecular signaling pathways, gene regulation, predator-prey interactions and the communication between neurons in the brain can be seen as examples of networks with complex dynamics. The properties of such dynamics depend largely on the topology of the underlying network graph. In this work, we want to answer the following question: Knowing network connectivity, what can be said about the level of third-order correlations that will characterize the network dynamics? We consider a linear point process as a model for pulse-coded, or spiking activity in a neuronal network. Using recent results from theory of such processes, we study third-order correlations between spike trains in such a system and explain which features of the network graph (i.e. which topological motifs) are responsible for their emergence. Comparing two different models of network topology-random networks of Erdős-Rényi type and networks with highly interconnected hubs-we find that, in random networks, the average measure of third-order correlations does not depend on the local connectivity properties, but rather on global parameters, such as the connection probability. This, however, ceases to be the case in networks with a geometric out-degree distribution, where topological specificities have a strong impact on average correlations.
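
    One way to get a feel for the motif dependence discussed above is to count a simple "common drive" motif (a single node projecting to three distinct targets), which contributes to third-order correlations in linear point-process models, in an Erdős-Rényi graph versus a hub-dominated graph with the same number of edges. The motif choice, graph sizes and hub construction below are illustrative assumptions, not the paper's analysis.

        # Counting one motif relevant for third-order correlations (a single node
        # driving three distinct targets, i.e. "common drive") in an Erdos-Renyi
        # graph versus a hub-dominated graph with the same number of edges.
        # Motif choice and graph construction are illustrative only.
        import math
        import random
        import networkx as nx

        def common_drive_triplets(G):
            """Number of (source, {3 distinct targets}) motifs in a directed graph."""
            return sum(math.comb(G.out_degree(v), 3) for v in G)

        n, p = 200, 0.05
        er = nx.gnp_random_graph(n, p, seed=1, directed=True)

        random.seed(1)
        hub = nx.DiGraph()
        hub.add_nodes_from(range(n))
        while hub.number_of_edges() < er.number_of_edges():
            src = random.randrange(20)                 # out-edges concentrated on 20 hubs
            tgt = random.randrange(n)
            if src != tgt:
                hub.add_edge(src, tgt)

        print("Erdos-Renyi  common-drive triplets:", common_drive_triplets(er))
        print("hub network  common-drive triplets:", common_drive_triplets(hub))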

  6. Electrical responses and spontaneous activity of human iPS-derived neuronal networks characterized for three-month culture with 4096-electrode arrays

    Directory of Open Access Journals (Sweden)

    Hayder eAmin

    2016-03-01

    Full Text Available The recent availability of human induced pluripotent stem cells (hiPSCs) holds great promise as a novel source of human-derived neurons for cell and tissue therapies as well as for in vitro drug screenings that might replace the use of animal models. However, there is still a considerable lack of knowledge on the functional properties of hiPSC-derived neuronal networks, thus limiting their application. Here, upon optimization of cell culture protocols, we demonstrate that both spontaneous and evoked electrical spiking activities of these networks can be characterized on-chip by taking advantage of the resolution provided by CMOS multielectrode arrays (CMOS-MEAs). These devices feature a large and closely-spaced array of 4096 simultaneously recording electrodes and multi-site on-chip electrical stimulation. Our results show that networks of human-derived neurons can respond to electrical stimulation with a physiological repertoire of spike waveforms after three months of cell culture, a period of time during which the network undergoes the expression of developing patterns of spontaneous spiking activity. To achieve this, we have investigated the impact on the network formation and on the emerging network-wide functional properties induced by different biochemical substrates, i.e. poly-dl-ornithine (PDLO), poly-l-ornithine (PLO), and polyethylenimine (PEI), that were used as adhesion promoters for the cell culture. Interestingly, we found that neuronal networks grown on PDLO coated substrates show significantly higher spontaneous firing activity, reliable responses to low-frequency electrical stimuli, and an appropriate level of PSD-95 that may denote a physiological neuronal maturation profile and synapse stabilization. However, our results also suggest that even three-month culture might not be sufficient for human-derived neuronal network maturation. Taken together, our results highlight the tight relationship existing between substrate coatings

  7. Brainmapping Neuronal Networks in Children with Continuous Spikes and Waves during Slow Sleep as revealed by DICS and RPDC

    OpenAIRE

    Dierck, Carina

    2018-01-01

    CSWS is an age-related epileptic encephalopathy consisting of the triad of seizures, neuropsychological impairment and a specific EEG-pattern. This EEG-pattern is characterized by spike-and-wave-discharges emphasized during non-REM sleep. Until now, little has been known about the pathophysiologic processes. So far research approaches on the underlying neuronal network have been based on techniques with a good spatial but poor temporal resolution like fMRI and FDG-PET. In this study the se...

  8. Timing control by redundant inhibitory neuronal circuits

    International Nuclear Information System (INIS)

    Tristan, I.; Rulkov, N. F.; Huerta, R.; Rabinovich, M.

    2014-01-01

    Rhythms and timing control of sequential activity in the brain is fundamental to cognition and behavior. Although experimental and theoretical studies support the understanding that neuronal circuits are intrinsically capable of generating different time intervals, the dynamical origin of the phenomenon of functionally dependent timing control is still unclear. Here, we consider a new mechanism that is related to the multi-neuronal cooperative dynamics in inhibitory brain motifs consisting of a few clusters. It is shown that redundancy and diversity of neurons within each cluster enhances the sensitivity of the timing control with the level of neuronal excitation of the whole network. The generality of the mechanism is shown to work on two different neuronal models: a conductance-based model and a map-based model

  9. Timing control by redundant inhibitory neuronal circuits

    Energy Technology Data Exchange (ETDEWEB)

    Tristan, I., E-mail: itristan@ucsd.edu; Rulkov, N. F.; Huerta, R.; Rabinovich, M. [BioCircuits Institute, University of California, San Diego, La Jolla, California 92093-0402 (United States)

    2014-03-15

    Rhythms and timing control of sequential activity in the brain is fundamental to cognition and behavior. Although experimental and theoretical studies support the understanding that neuronal circuits are intrinsically capable of generating different time intervals, the dynamical origin of the phenomenon of functionally dependent timing control is still unclear. Here, we consider a new mechanism that is related to the multi-neuronal cooperative dynamics in inhibitory brain motifs consisting of a few clusters. It is shown that redundancy and diversity of neurons within each cluster enhances the sensitivity of the timing control with the level of neuronal excitation of the whole network. The generality of the mechanism is shown to work on two different neuronal models: a conductance-based model and a map-based model.

  10. Bifurcation software in Matlab with applications in neuronal modeling.

    Science.gov (United States)

    Govaerts, Willy; Sautois, Bart

    2005-02-01

    Many biological phenomena, notably in neuroscience, can be modeled by dynamical systems. We describe a recent improvement of a Matlab software package for dynamical systems with applications to modeling single neurons and all-to-all connected networks of neurons. The new software features consist of an object-oriented approach to bifurcation computations and the partial inclusion of C-code to speed up the computation. As an application, we study the origin of the spiking behaviour of neurons when the equilibrium state is destabilized by an incoming current. We show that Class II behaviour, i.e. firing with a finite frequency, is possible even if the destabilization occurs through a saddle-node bifurcation. Furthermore, we show that synchronization of an all-to-all connected network of such neurons with only excitatory connections is also possible in this case.

  11. Dynamic Control of Synchronous Activity in Networks of Spiking Neurons.

    Directory of Open Access Journals (Sweden)

    Axel Hutt

    Full Text Available Oscillatory brain activity is believed to play a central role in neural coding. Accumulating evidence shows that features of these oscillations are highly dynamic: power, frequency and phase fluctuate alongside changes in behavior and task demands. The role and mechanism supporting this variability is however poorly understood. We here analyze a network of recurrently connected spiking neurons with time delay displaying stable synchronous dynamics. Using mean-field and stability analyses, we investigate the influence of dynamic inputs on the frequency of firing rate oscillations. We show that afferent noise, mimicking inputs to the neurons, causes smoothing of the system's response function, displacing equilibria and altering the stability of oscillatory states. Our analysis further shows that these noise-induced changes cause a shift of the peak frequency of synchronous oscillations that scales with input intensity, leading the network towards critical states. We lastly discuss the extension of these principles to periodic stimulation, in which externally applied driving signals can trigger analogous phenomena. Our results reveal one possible mechanism involved in shaping oscillatory activity in the brain and associated control principles.

  12. Dynamic Control of Synchronous Activity in Networks of Spiking Neurons.

    Science.gov (United States)

    Hutt, Axel; Mierau, Andreas; Lefebvre, Jérémie

    Oscillatory brain activity is believed to play a central role in neural coding. Accumulating evidence shows that features of these oscillations are highly dynamic: power, frequency and phase fluctuate alongside changes in behavior and task demands. The role and mechanism supporting this variability is however poorly understood. We here analyze a network of recurrently connected spiking neurons with time delay displaying stable synchronous dynamics. Using mean-field and stability analyses, we investigate the influence of dynamic inputs on the frequency of firing rate oscillations. We show that afferent noise, mimicking inputs to the neurons, causes smoothing of the system's response function, displacing equilibria and altering the stability of oscillatory states. Our analysis further shows that these noise-induced changes cause a shift of the peak frequency of synchronous oscillations that scales with input intensity, leading the network towards critical states. We lastly discuss the extension of these principles to periodic stimulation, in which externally applied driving signals can trigger analogous phenomena. Our results reveal one possible mechanism involved in shaping oscillatory activity in the brain and associated control principles.

  13. Ordering chaos and synchronization transitions by chemical delay and coupling on scale-free neuronal networks

    International Nuclear Information System (INIS)

    Gong Yubing; Xie Yanhang; Lin Xiu; Hao Yinghang; Ma Xiaoguang

    2010-01-01

    Research highlights: → Chemical delay and chemical coupling can tame chaotic bursting. → Chemical delay-induced transitions from bursting synchronization to intermittent multiple spiking synchronizations. → Chemical coupling-induced different types of delay-dependent firing transitions. - Abstract: Chemical synaptic connections are more common than electric ones in neurons, and information transmission delay is especially significant for the synapses of chemical type. In this paper, we report a phenomenon of ordering spatiotemporal chaos and synchronization transitions by the delays and coupling through chemical synapses of modified Hodgkin-Huxley (MHH) neurons on scale-free networks. As the delay τ is increased, the neurons exhibit transitions from bursting synchronization (BS) to intermittent multiple spiking synchronizations (SS). As the coupling g_syn is increased, the neurons exhibit different types of firing transitions, depending on the values of τ. For a smaller τ, there are transitions from spatiotemporal chaotic bursting (SCB) to BS or SS; while for a larger τ, there are transitions from SCB to intermittent multiple SS. These findings show that the delays and coupling through chemical synapses can tame the chaotic firings and repeatedly enhance the firing synchronization of neurons, and hence could play important roles in the firing activity of the neurons on scale-free networks.

  14. Exercise-induced neuronal plasticity in central autonomic networks: role in cardiovascular control.

    Science.gov (United States)

    Michelini, Lisete C; Stern, Javier E

    2009-09-01

    It is now well established that brain plasticity is an inherent property not only of the developing but also of the adult brain. Numerous beneficial effects of exercise, including improved memory, cognitive function and neuroprotection, have been shown to involve an important neuroplastic component. However, whether major adaptive cardiovascular adjustments during exercise, needed to ensure proper blood perfusion of peripheral tissues, also require brain neuroplasticity, is presently unknown. This review will critically evaluate current knowledge on proposed mechanisms that are likely to underlie the continuous resetting of baroreflex control of heart rate during/after exercise and following exercise training. Accumulating evidence indicates that not only somatosensory afferents (conveyed by skeletal muscle receptors, baroreceptors and/or cardiopulmonary receptors) but also projections arising from central command neurons (in particular, peptidergic hypothalamic pre-autonomic neurons) converge into the nucleus tractus solitarii (NTS) in the dorsal brainstem, to co-ordinate complex cardiovascular adaptations during dynamic exercise. This review focuses in particular on a reciprocally interconnected network between the NTS and the hypothalamic paraventricular nucleus (PVN), which is proposed to act as a pivotal anatomical and functional substrate underlying integrative feedforward and feedback cardiovascular adjustments during exercise. Recent findings supporting neuroplastic adaptive changes within the NTS-PVN reciprocal network (e.g. remodelling of afferent inputs, structural and functional neuronal plasticity and changes in neurotransmitter content) will be discussed within the context of their role as important underlying cellular mechanisms supporting the tonic activation and improved efficacy of these central pathways in response to circulatory demand at rest and during exercise, both in sedentary and in trained individuals. We hope this review will stimulate

  15. Distribution of orientation selectivity in recurrent networks of spiking neurons with different random topologies.

    Science.gov (United States)

    Sadeh, Sadra; Rotter, Stefan

    2014-01-01

    Neurons in the primary visual cortex are more or less selective for the orientation of a light bar used for stimulation. A broad distribution of individual grades of orientation selectivity has in fact been reported in all species. A possible reason for emergence of broad distributions is the recurrent network within which the stimulus is being processed. Here we compute the distribution of orientation selectivity in randomly connected model networks that are equipped with different spatial patterns of connectivity. We show that, for a wide variety of connectivity patterns, a linear theory based on firing rates accurately approximates the outcome of direct numerical simulations of networks of spiking neurons. Distance dependent connectivity in networks with a more biologically realistic structure does not compromise our linear analysis, as long as the linearized dynamics, and hence the uniform asynchronous irregular activity state, remain stable. We conclude that linear mechanisms of stimulus processing are indeed responsible for the emergence of orientation selectivity and its distribution in recurrent networks with functionally heterogeneous synaptic connectivity.
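
    A minimal version of the linear, rate-based calculation referred to above: steady states of tau dr/dt = -r + W r + h(theta) are r(theta) = (I - W)^-1 h(theta), from which a per-neuron orientation selectivity index can be computed. The random Gaussian connectivity, input tuning and OSI definition used below are illustrative assumptions rather than the paper's exact model.

        # Linear rate calculation: steady states of tau*dr/dt = -r + W r + h(theta)
        # are r(theta) = (I - W)^-1 h(theta); an orientation selectivity index (OSI)
        # is then computed per neuron. Connectivity and input tuning are illustrative.
        import numpy as np

        rng = np.random.default_rng(0)
        n = 400
        W = 0.8 / np.sqrt(n) * rng.standard_normal((n, n))    # spectral radius ~0.8 (stable)
        theta_pref = rng.uniform(0.0, np.pi, n)               # preferred orientation of input
        orientations = np.linspace(0.0, np.pi, 8, endpoint=False)

        A = np.linalg.inv(np.eye(n) - W)                      # linear response operator
        rates = np.empty((len(orientations), n))
        for k, th in enumerate(orientations):
            h = 10.0 + 1.0 * np.cos(2.0 * (th - theta_pref))  # baseline + weak tuning
            rates[k] = A @ h

        z = (rates * np.exp(2j * orientations)[:, None]).sum(axis=0)
        osi = np.abs(z) / rates.sum(axis=0)                   # per-neuron selectivity index
        print("OSI mean %.3f, std %.3f" % (osi.mean(), osi.std()))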

  16. Multi-channels coupling-induced pattern transition in a tri-layer neuronal network

    Science.gov (United States)

    Wu, Fuqiang; Wang, Ya; Ma, Jun; Jin, Wuyin; Hobiny, Aatef

    2018-03-01

    Neurons in the nervous system show complex electrical behaviors due to complex connection types and diversity in excitability. A tri-layer network is constructed to investigate the signal propagation and pattern formation by selecting different coupling channels between layers. Each layer is set to a different state, and the local kinetics is described by the Hindmarsh-Rose neuron model. By changing the number of coupling channels between layers and the state of the first layer, the collective behaviors of each layer and the synchronization pattern of the network are investigated. A statistical factor of synchronization on each layer is calculated. It is found that the quiescent state in the second layer can be excited and the disordered state in the third layer suppressed when the first layer is controlled by a pacemaker, and the developed state is dependent on the number of coupling channels. Furthermore, the collapse of the first layer can cause breakdown of the other layers in the network, and the mechanism is that the disordered state in the third layer is enhanced when sampled signals from the collapsed layer impose continuous disturbance on the next layer.
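
    The statistical factor of synchronization used in studies of this kind is commonly defined as the variance of the population-averaged signal divided by the mean variance of the individual signals, approaching 1 for complete synchrony and 0 for independent activity. The sketch below computes it for synthetic traces standing in for the Hindmarsh-Rose membrane potentials; the definition is a common convention assumed here, not quoted from the paper.

        # Statistical factor of synchronization: variance of the population-averaged
        # signal divided by the mean variance of the individual signals (R near 1 for
        # complete synchrony, R near 0 for independent activity). Synthetic traces
        # stand in for Hindmarsh-Rose membrane potentials.
        import numpy as np

        def sync_factor(x):
            """x: array of shape (n_neurons, n_timesteps)."""
            F = x.mean(axis=0)                        # population-averaged signal
            return F.var() / x.var(axis=1).mean()

        rng = np.random.default_rng(0)
        t = np.linspace(0.0, 10.0, 2000)
        common = np.sin(2.0 * np.pi * t)              # shared oscillation
        traces = common + 0.8 * rng.standard_normal((50, t.size))  # plus private noise

        print("R =", round(sync_factor(traces), 3))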

  17. Wireless Sensor Network Congestion Control Based on Standard Particle Swarm Optimization and Single Neuron PID.

    Science.gov (United States)

    Yang, Xiaoping; Chen, Xueying; Xia, Riting; Qian, Zhihong

    2018-04-19

    Aiming at the problem of network congestion caused by the large number of data transmissions in wireless routing nodes of wireless sensor network (WSN), this paper puts forward an algorithm based on standard particle swarm–neural PID congestion control (PNPID). Firstly, PID control theory was applied to the queue management of wireless sensor nodes. Then, the self-learning and self-organizing ability of neurons was used to achieve online adjustment of the weights that set the proportional, integral and differential parameters of the PID controller. Finally, standard particle swarm optimization was used for online optimization of the initial values of the proportional, integral and differential parameters and the neuron learning rates of the neural PID (NPID) algorithm. This paper describes experiments and simulations which show that the PNPID algorithm effectively stabilized queue length near the expected value. At the same time, network performance, such as throughput and packet loss rate, was greatly improved, which alleviated network congestion and improved network QoS.

  18. Wireless Sensor Network Congestion Control Based on Standard Particle Swarm Optimization and Single Neuron PID

    Science.gov (United States)

    Yang, Xiaoping; Chen, Xueying; Xia, Riting; Qian, Zhihong

    2018-01-01

    Aiming at the problem of network congestion caused by the large number of data transmissions in wireless routing nodes of wireless sensor network (WSN), this paper puts forward an algorithm based on standard particle swarm–neural PID congestion control (PNPID). Firstly, PID control theory was applied to the queue management of wireless sensor nodes. Then, the self-learning and self-organizing ability of neurons was used to achieve online adjustment of the weights that set the proportional, integral and differential parameters of the PID controller. Finally, standard particle swarm optimization was used for online optimization of the initial values of the proportional, integral and differential parameters and the neuron learning rates of the neural PID (NPID) algorithm. This paper describes experiments and simulations which show that the PNPID algorithm effectively stabilized queue length near the expected value. At the same time, network performance, such as throughput and packet loss rate, was greatly improved, which alleviated network congestion and improved network QoS. PMID:29671822
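
    A minimal sketch of the single-neuron adaptive PID idea the abstract builds on: the neuron's three inputs are the incremental proportional, integral and derivative error terms, and its weights are adjusted online with a supervised Hebb-like rule. The PSO initialization step and the WSN queue model are omitted, and the toy first-order plant, gains and learning rates are illustrative assumptions, not the paper's PNPID.

        # Single-neuron adaptive PID sketch: inputs are the incremental P, I and D
        # error terms, the output increments the control signal, and the weights
        # are adapted online by a supervised Hebb-like rule. The plant, gains and
        # learning rates are toy values; PSO initialisation is omitted.
        import numpy as np

        def run(setpoint=1.0, steps=200):
            w = np.array([0.3, 0.3, 0.3])             # P, I, D weights (PSO would set these)
            eta = np.array([0.2, 0.2, 0.2])           # per-weight learning rates
            K = 0.5                                   # overall neuron gain
            y, u, e_prev, e_prev2 = 0.0, 0.0, 0.0, 0.0
            for _ in range(steps):
                e = setpoint - y
                x = np.array([e - e_prev, e, e - 2.0 * e_prev + e_prev2])  # P, I, D inputs
                wn = w / np.abs(w).sum()              # normalised weights
                u = u + K * wn @ x                    # incremental control law
                w = w + eta * e * u * x               # supervised Hebb-like update
                y = 0.9 * y + 0.1 * u                 # toy first-order plant
                e_prev2, e_prev = e_prev, e
            return y

        print("plant output after 200 steps:", round(run(), 3))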

  19. Local excitation-inhibition ratio for synfire chain propagation in feed-forward neuronal networks

    Science.gov (United States)

    Guo, Xinmeng; Yu, Haitao; Wang, Jiang; Liu, Jing; Cao, Yibin; Deng, Bin

    2017-09-01

    A leading hypothesis holds that spiking activity propagates along neuronal sub-populations which are connected in a feed-forward manner, and the propagation efficiency would be affected by the dynamics of sub-populations. In this paper, we investigate how the interaction between local excitation and inhibition affects synfire chain propagation in a feed-forward network (FFN). The simulation results show that there is an appropriate excitation-inhibition (EI) ratio maximizing the performance of synfire chain propagation. The optimal EI ratio can significantly enhance the selectivity of the FFN to synchronous signals, which thereby increases the stability to background noise. Moreover, the effect of network topology on synfire chain propagation is also investigated. It is found that synfire chain propagation can be maximized by an optimal interlayer linking probability. We also find that external noise is detrimental to synchrony propagation by inducing spiking jitter. The results presented in this paper may provide insights into the effects of network dynamics on neuronal computations.

  20. A Scalable Weight-Free Learning Algorithm for Regulatory Control of Cell Activity in Spiking Neuronal Networks.

    Science.gov (United States)

    Zhang, Xu; Foderaro, Greg; Henriquez, Craig; Ferrari, Silvia

    2018-03-01

    Recent developments in neural stimulation and recording technologies are providing scientists with the ability to record and control the activity of individual neurons in vitro or in vivo, with very high spatial and temporal resolution. Tools such as optogenetics, for example, are having a significant impact in the neuroscience field by delivering optical firing control with the precision and spatiotemporal resolution required for investigating information processing and plasticity in biological brains. While a number of training algorithms have been developed to date for spiking neural network (SNN) models of biological neuronal circuits, existing methods rely on learning rules that adjust the synaptic strengths (or weights) directly, in order to obtain the desired network-level (or functional-level) performance. As such, they are not applicable to modifying plasticity in biological neuronal circuits, in which synaptic strengths only change as a result of pre- and post-synaptic neuron firings or biological mechanisms beyond our control. This paper presents a weight-free training algorithm that relies solely on adjusting the spatiotemporal delivery of neuron firings in order to optimize the network performance. The proposed weight-free algorithm does not require any knowledge of the SNN model or its plasticity mechanisms. As a result, this training approach is potentially realizable in vitro or in vivo via neural stimulation and recording technologies, such as optogenetics and multielectrode arrays, and could be utilized to control plasticity at multiple scales of biological neuronal circuits. The approach is demonstrated by training SNNs with hundreds of units to control a virtual insect navigating in an unknown environment.

  1. Parameter Diversity Induced Multiple Spatial Coherence Resonances and Spiral Waves in Neuronal Network with and Without Noise

    International Nuclear Information System (INIS)

    Li Yuye; Jia Bing; Gu Huaguang; An Shucheng

    2012-01-01

    Diversity in the neurons and noise are inevitable in the real neuronal network. In this paper, parameter diversity induced spiral waves and multiple spatial coherence resonances in a two-dimensional neuronal network without or with noise are simulated. The relationship between the multiple resonances and the multiple transitions between patterns of spiral waves is identified. The coherence degrees induced by the diversity are suppressed when noise is introduced and noise density is increased. The results suggest that the natural nervous system might profit from both parameter diversity and noise, providing a possible approach to controlling the formation and transition of spiral waves through the cooperation between diversity and noise. (general)

  2. Impact of sub and supra-threshold adaptation currents in networks of spiking neurons.

    Science.gov (United States)

    Colliaux, David; Yger, Pierre; Kaneko, Kunihiko

    2015-12-01

    Neuronal adaptation is the intrinsic capacity of the brain to change, by various mechanisms, its dynamical responses as a function of the context. Such a phenomenon, widely observed in vivo and in vitro, is known to be crucial in homeostatic regulation of the activity and gain control. The effects of adaptation have already been studied at the single-cell level, resulting from either voltage- or calcium-gated channels, both activated by the spiking activity and modulating the dynamical responses of the neurons. In this study, by disentangling those effects into a linear (sub-threshold) and a non-linear (supra-threshold) part, we focus on the functional role of those two distinct components of adaptation on the neuronal activity at various scales, starting from single-cell responses up to recurrent network dynamics, and under stationary or non-stationary stimulations. The effects of slow currents on collective dynamics, like modulation of population oscillation and reliability of spike patterns, are quantified for various types of adaptation in sparse recurrent networks.
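
    The sub-threshold versus spike-triggered decomposition discussed above can be illustrated with a single adaptive integrate-and-fire neuron, in which the parameter a drives a linear, voltage-dependent adaptation current and b adds a fixed increment at each spike. The parameter values below are illustrative single-neuron choices and do not reproduce the paper's network-level analysis.

        # Adaptive integrate-and-fire neuron separating the two adaptation terms:
        # a sub-threshold (linear) current governed by a, and a spike-triggered
        # (supra-threshold) increment b. Illustrative single-neuron parameters.
        def spike_count(a=0.0, b=0.0, I=500.0, T=1000.0, dt=0.1):
            C, gL, EL, VT, Vr = 200.0, 10.0, -70.0, -50.0, -60.0   # pF, nS, mV, mV, mV
            tau_w = 100.0                                          # adaptation time constant (ms)
            V, w, spikes = EL, 0.0, 0
            for _ in range(int(T / dt)):
                dV = (-gL * (V - EL) - w + I) / C                  # membrane equation (pA / pF)
                dw = (a * (V - EL) - w) / tau_w                    # sub-threshold adaptation
                V += dt * dV
                w += dt * dw
                if V >= VT:                                        # spike: reset and adapt
                    V = Vr
                    w += b                                         # spike-triggered adaptation
                    spikes += 1
            return spikes

        for a, b in [(0.0, 0.0), (4.0, 0.0), (0.0, 60.0)]:
            print("a=%4.1f nS, b=%5.1f pA -> %d spikes in 1 s" % (a, b, spike_count(a=a, b=b)))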

  3. Cortical neurons and networks are dormant but fully responsive during isoelectric brain state.

    Science.gov (United States)

    Altwegg-Boussac, Tristan; Schramm, Adrien E; Ballestero, Jimena; Grosselin, Fanny; Chavez, Mario; Lecas, Sarah; Baulac, Michel; Naccache, Lionel; Demeret, Sophie; Navarro, Vincent; Mahon, Séverine; Charpier, Stéphane

    2017-09-01

    electrophysiological features were preserved. Manipulations of the membrane potential and intracellular injection of chloride in neocortical neurons failed to reveal an augmented synaptic inhibition during the isoelectric condition. Consistent with the sensory responses recorded from comatose patients, large and highly reproducible somatosensory-evoked potentials could be generated on the inactive electrocorticogram in rats. Intracellular recordings revealed that the underlying neocortical pyramidal cells responded to sensory stimuli by complex synaptic potentials able to trigger action potentials. As in patients, sensory responses in the isoelectric state were delayed compared to control responses and exhibited an elevated reliability during repeated stimuli. Our findings demonstrate that during prolonged isoelectric brain state neurons and synaptic networks are dormant rather than excessively inhibited, conserving their intrinsic properties and their ability to integrate and propagate environmental stimuli. © The Author (2017). Published by Oxford University Press on behalf of the Guarantors of Brain. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  4. Multiple Spatial Coherence Resonances and Spatial Patterns in a Noise-Driven Heterogeneous Neuronal Network

    International Nuclear Information System (INIS)

    Li Yu-Ye; Ding Xue-Li

    2014-01-01

    Heterogeneity of the neurons and noise are inevitable in the real neuronal network. In this paper, Gaussian white noise induced spatial patterns including spiral waves and multiple spatial coherence resonances are studied in a network composed of Morris-Lecar neurons with heterogeneity characterized by parameter diversity. The relationship between the resonances and the transitions between ordered spiral waves and disordered spatial patterns is established. When parameter diversity is introduced, the maxima of the multiple resonances increase first, and then decrease as diversity strength increases, which implies that the coherence degrees induced by noise are enhanced at an intermediate diversity strength. The synchronization degree of spatial patterns including ordered spiral waves and disordered patterns is found to remain at a very low level. The results suggest that the nervous system can profit from both heterogeneity and noise, and the multiple spatial coherence resonances are achieved via the emergence of spiral waves instead of synchronization patterns. (interdisciplinary physics and related areas of science and technology)

  5. Multiple Spatial Coherence Resonances and Spatial Patterns in a Noise-Driven Heterogeneous Neuronal Network

    Science.gov (United States)

    Li, Yu-Ye; Ding, Xue-Li

    2014-12-01

    Heterogeneity of the neurons and noise are inevitable in the real neuronal network. In this paper, Gaussian white noise induced spatial patterns including spiral waves and multiple spatial coherence resonances are studied in a network composed of Morris-Lecar neurons with heterogeneity characterized by parameter diversity. The relationship between the resonances and the transitions between ordered spiral waves and disordered spatial patterns is established. When parameter diversity is introduced, the maxima of the multiple resonances increase first, and then decrease as diversity strength increases, which implies that the coherence degrees induced by noise are enhanced at an intermediate diversity strength. The synchronization degree of spatial patterns including ordered spiral waves and disordered patterns is found to remain at a very low level. The results suggest that the nervous system can profit from both heterogeneity and noise, and the multiple spatial coherence resonances are achieved via the emergence of spiral waves instead of synchronization patterns.

  6. Characterization of a patch-clamp microchannel array towards neuronal networks analysis

    DEFF Research Database (Denmark)

    Alberti, Massimo; Snakenborg, Detlef; Lopacinska, Joanna M.

    2010-01-01

    for simultaneous patch clamping of cultured cells or neurons in the same network. A disposable silicon/silicon dioxide (Si/SiO2) chip with a microhole array was integrated in a microfluidic system for cell handling, perfusion and electrical recording. Fluidic characterization showed that our PCµCA can work...

  7. Oscillatory neuronal activity reflects lexical-semantic feature integration within and across sensory modalities in distributed cortical networks.

    Science.gov (United States)

    van Ackeren, Markus J; Schneider, Till R; Müsch, Kathrin; Rueschemeyer, Shirley-Ann

    2014-10-22

    Research from the previous decade suggests that word meaning is partially stored in distributed modality-specific cortical networks. However, little is known about the mechanisms by which semantic content from multiple modalities is integrated into a coherent multisensory representation. Therefore we aimed to characterize differences between integration of lexical-semantic information from a single modality compared with two sensory modalities. We used magnetoencephalography in humans to investigate changes in oscillatory neuronal activity while participants verified two features for a given target word (e.g., "bus"). Feature pairs consisted of either two features from the same modality (visual: "red," "big") or different modalities (auditory and visual: "red," "loud"). The results suggest that integrating modality-specific features of the target word is associated with enhanced high-frequency power (80-120 Hz), while integrating features from different modalities is associated with a sustained increase in low-frequency power (2-8 Hz). Source reconstruction revealed a peak in the anterior temporal lobe for low-frequency and high-frequency effects. These results suggest that integrating lexical-semantic knowledge at different cortical scales is reflected in frequency-specific oscillatory neuronal activity in unisensory and multisensory association networks. Copyright © 2014 the authors 0270-6474/14/3314318-06$15.00/0.

  8. A single hidden layer feedforward network with only one neuron in the hidden layer can approximate any univariate function

    OpenAIRE

    Guliyev , Namig; Ismailov , Vugar

    2016-01-01

    The possibility of approximating a continuous function on a compact subset of the real line by a feedforward single hidden layer neural network with a sigmoidal activation function has been studied in many papers. Such networks can approximate an arbitrary continuous function provided that an unlimited number of neurons in a hidden layer is permitted. In this paper, we consider constructive approximation on any finite interval of $\mathbb{R}$ by neural networks with only one neuron in the hid...
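
    The record above concerns networks of the form N(x) = c·σ(wx + b) + d with a single hidden neuron. The sketch below is not the authors' construction: it only illustrates this one-neuron architecture with a placeholder logistic activation fitted to a toy target, whereas the paper's point is that a specially constructed activation σ makes this form a universal approximator on any finite interval.

```python
# Minimal sketch (not the authors' construction): the generic form of a
# single-hidden-layer network with ONE hidden neuron,
#     N(x) = c * sigma(w*x + b) + d,
# fitted here by crude random search on a toy target.  The logistic sigma
# below is only a placeholder to show the architecture.
import numpy as np

def sigma(z):
    return 1.0 / (1.0 + np.exp(-z))          # placeholder activation

def network(x, w, b, c, d):
    return c * sigma(w * x + b) + d           # one hidden neuron, linear output

x = np.linspace(0.0, 1.0, 200)                # toy target on [0, 1]
target = np.sin(2 * np.pi * x)

rng = np.random.default_rng(0)
best_err = np.inf
for _ in range(20000):                        # illustration only
    w, b, c, d = rng.uniform(-10, 10, size=4)
    err = np.mean((network(x, w, b, c, d) - target) ** 2)
    best_err = min(best_err, err)

print("best mean-squared error with one hidden neuron:", best_err)
```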

  9. Differential Patterns of Dysconnectivity in Mirror Neuron and Mentalizing Networks in Schizophrenia

    NARCIS (Netherlands)

    Schilbach, Leonhard; Derntl, Birgit; Aleman, Andre; Caspers, Svenja; Clos, Mareike; Diederen, Kelly M J; Gruber, Oliver; Kogler, Lydia; Liemburg, Edith J; Sommer, Iris E; Müller, Veronika I; Cieslik, Edna C; Eickhoff, Simon B

    Impairments of social cognition are well documented in patients with schizophrenia (SCZ), but the neural basis remains poorly understood. In light of evidence that suggests that the "mirror neuron system" (MNS) and the "mentalizing network" (MENT) are key substrates of intersubjectivity and joint

  10. A decaying factor accounts for contained activity in neuronal networks with no need of hierarchical or modular organization

    International Nuclear Information System (INIS)

    Amancio, Diego R; Oliveira Jr, Osvaldo N; Costa, Luciano da F

    2012-01-01

    The mechanisms responsible for containing activity in systems represented by networks are crucial in various phenomena, for example, in diseases such as epilepsy that affect the neuronal networks and for information dissemination in social networks. The first models to account for contained activity included triggering and inhibition processes, but they cannot be applied to social networks where inhibition is clearly absent. A recent model showed that contained activity can be achieved with no need of inhibition processes provided that the network is subdivided into modules (communities). In this paper, we introduce a new concept inspired by Hebbian theory, through which containment of activity is achieved by incorporating a decaying-activity dynamics into a random walk mechanism preferential to node activity. Upon selecting the decay coefficient within a proper range, we observed sustained activity in all the networks tested, namely, random, Barabási–Albert and geographical networks. The generality of this finding was confirmed by showing that modularity is no longer needed if the integrate-and-fire dynamics incorporates the decay factor. Taken together, these results provide a proof of principle that persistent, restrained network activation might occur in the absence of any particular topological structure. This may be the reason why neuronal activity does not spread out to the entire neuronal network, even when no special topological organization exists. (paper)

  11. Implementing Signature Neural Networks with Spiking Neurons.

    Science.gov (United States)

    Carrillo-Medina, José Luis; Latorre, Roberto

    2016-01-01

    Spiking Neural Networks constitute the most promising approach to develop realistic Artificial Neural Networks (ANNs). Unlike traditional firing rate-based paradigms, information coding in spiking models is based on the precise timing of individual spikes. It has been demonstrated that spiking ANNs can be successfully and efficiently applied to multiple realistic problems solvable with traditional strategies (e.g., data classification or pattern recognition). In recent years, major breakthroughs in neuroscience research have discovered new relevant computational principles in different living neural systems. Could ANNs benefit from some of these recent findings providing novel elements of inspiration? This is an intriguing question for the research community and the development of spiking ANNs including novel bio-inspired information coding and processing strategies is gaining attention. From this perspective, in this work, we adapt the core concepts of the recently proposed Signature Neural Network paradigm (i.e., neural signatures to identify each unit in the network, local information contextualization during processing, and multicoding strategies for information propagation regarding the origin and the content of the data) to be employed in a spiking neural network. To the best of our knowledge, none of these mechanisms have been used yet in the context of ANNs of spiking neurons. This paper provides a proof-of-concept for their applicability in such networks. Computer simulations show that a simple network model like the one discussed here exhibits complex self-organizing properties. The combination of multiple simultaneous encoding schemes allows the network to generate coexisting spatio-temporal patterns of activity encoding information in different spatio-temporal spaces. As a function of the network and/or intra-unit parameters shaping the corresponding encoding modality, different forms of competition among the evoked patterns can emerge even in the absence

  12. Signal transfer within a cultured asymmetric cortical neuron circuit.

    Science.gov (United States)

    Isomura, Takuya; Shimba, Kenta; Takayama, Yuzo; Takeuchi, Akimasa; Kotani, Kiyoshi; Jimbo, Yasuhiko

    2015-12-01

    Simplified neuronal circuits are required for investigating information representation in nervous systems and for validating theoretical neural network models. Here, we developed patterned neuronal circuits using microfabricated devices, comprising a micro-well array bonded to a microelectrode-array substrate. The micro-well array consisted of micrometre-scale wells connected by tunnels, all contained within a silicone slab called a micro-chamber. The design of the micro-chamber confined somata to the wells and allowed axons to grow through the tunnels bidirectionally but with a designed, unidirectional bias. We guided axons into the point of the arrow structure where one of the two tunnel entrances is located, making that the preferred direction. When rat cortical neurons were cultured in the wells, their axons grew through the tunnels and connected to neurons in adjoining wells. Unidirectional burst transfers and other asymmetric signal-propagation phenomena were observed via the substrate-embedded electrodes. Seventy-nine percent of burst transfers were in the forward direction. We also observed rapid propagation of activity from sites of local electrical stimulation, and significant effects of inhibitory synapse blockade on bursting activity. These results suggest that this simple, substrate-controlled neuronal circuit can be applied to develop in vitro models of the function of cortical microcircuits or deep neural networks, to better elucidate the laws governing the dynamics of neuronal networks.

  13. Signal transfer within a cultured asymmetric cortical neuron circuit

    Science.gov (United States)

    Isomura, Takuya; Shimba, Kenta; Takayama, Yuzo; Takeuchi, Akimasa; Kotani, Kiyoshi; Jimbo, Yasuhiko

    2015-12-01

    Objective. Simplified neuronal circuits are required for investigating information representation in nervous systems and for validating theoretical neural network models. Here, we developed patterned neuronal circuits using microfabricated devices, comprising a micro-well array bonded to a microelectrode-array substrate. Approach. The micro-well array consisted of micrometre-scale wells connected by tunnels, all contained within a silicone slab called a micro-chamber. The design of the micro-chamber confined somata to the wells and allowed axons to grow through the tunnels bidirectionally but with a designed, unidirectional bias. We guided axons into the point of the arrow structure where one of the two tunnel entrances is located, making that the preferred direction. Main results. When rat cortical neurons were cultured in the wells, their axons grew through the tunnels and connected to neurons in adjoining wells. Unidirectional burst transfers and other asymmetric signal-propagation phenomena were observed via the substrate-embedded electrodes. Seventy-nine percent of burst transfers were in the forward direction. We also observed rapid propagation of activity from sites of local electrical stimulation, and significant effects of inhibitory synapse blockade on bursting activity. Significance. These results suggest that this simple, substrate-controlled neuronal circuit can be applied to develop in vitro models of the function of cortical microcircuits or deep neural networks, to better elucidate the laws governing the dynamics of neuronal networks.

  14. A Neuronal Network Model for Pitch Selectivity and Representation.

    Science.gov (United States)

    Huang, Chengcheng; Rinzel, John

    2016-01-01

    Pitch is a perceptual correlate of periodicity. Sounds with distinct spectra can elicit the same pitch. Despite the importance of pitch perception, understanding the cellular mechanism of pitch perception is still a major challenge and a mechanistic model of pitch is lacking. A multi-stage neuronal network model is developed for pitch frequency estimation using biophysically-based, high-resolution coincidence detector neurons. The neuronal units respond only to highly coincident input among convergent auditory nerve fibers across frequency channels. Their selectivity for only very fast rising slopes of convergent input enables these slope-detectors to distinguish the most prominent coincidences in multi-peaked input time courses. Pitch can then be estimated from the first-order interspike intervals of the slope-detectors. The regular firing patterns of the slope-detector neurons are similar for sounds sharing the same pitch despite their distinct timbres. The decoded pitch strengths also correlate well with the salience of pitch perception as reported by human listeners. Therefore, our model can serve as a neural representation for pitch. Our model performs successfully in estimating the pitch of missing fundamental complexes and reproducing the pitch variation with respect to the frequency shift of inharmonic complexes. It also accounts for the phase sensitivity of pitch perception in the cases of Schroeder phase, alternating phase and random phase relationships. Moreover, our model can also be applied to stochastic sound stimuli, iterated-ripple-noise, and account for their multiple pitch perceptions.
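
    The decoding step described above, estimating pitch from the first-order interspike intervals of the slope-detector neurons, can be conveyed with a hedged sketch: the spike train below is a synthetic stand-in rather than output of the biophysical model, and the estimate is simply the reciprocal of the modal interspike interval.

```python
# Hedged sketch of the decoding step only: estimate pitch as the reciprocal
# of the modal first-order interspike interval (ISI).  The spike times are
# synthetic stand-ins, not output of the authors' coincidence-detector model.
import numpy as np

def pitch_from_spikes(spike_times, n_bins=50):
    """Return a pitch estimate (Hz) as the reciprocal of the modal ISI."""
    isis = np.diff(np.sort(spike_times))
    hist, edges = np.histogram(isis, bins=n_bins)
    i = np.argmax(hist)
    modal_isi = 0.5 * (edges[i] + edges[i + 1])
    return 1.0 / modal_isi

# synthetic spike train locked to a 200 Hz periodicity with timing jitter
rng = np.random.default_rng(1)
spikes = np.arange(0.0, 0.5, 1.0 / 200.0) + rng.normal(0.0, 2e-4, size=100)
print("estimated pitch: %.1f Hz" % pitch_from_spikes(spikes))
```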

  15. Theoretical Neuroanatomy: Analyzing the Structure, Dynamics, and Function of Neuronal Networks

    Science.gov (United States)

    Seth, Anil K.; Edelman, Gerald M.

    The mammalian brain is an extraordinary object: its networks give rise to our conscious experiences as well as to the generation of adaptive behavior for the organism within its environment. Progress in understanding the structure, dynamics and function of the brain faces many challenges. Biological neural networks change over time, their detailed structure is difficult to elucidate, and they are highly heterogeneous both in their neuronal units and synaptic connections. In facing these challenges, graph-theoretic and information-theoretic approaches have yielded a number of useful insights and promise many more.

  16. Neuron array with plastic synapses and programmable dendrites.

    Science.gov (United States)

    Ramakrishnan, Shubha; Wunderlich, Richard; Hasler, Jennifer; George, Suma

    2013-10-01

    We describe a novel neuromorphic chip architecture that models neurons for efficient computation. Traditional architectures of neuron array chips consist of large scale systems that are interfaced with AER for implementing intra- or inter-chip connectivity. We present a chip that uses AER for inter-chip communication but uses fast, reconfigurable FPGA-style routing with local memory for intra-chip connectivity. We model neurons with biologically realistic channel models, synapses and dendrites. This chip is suitable for small-scale network simulations and can also be used for sequence detection, utilizing directional selectivity properties of dendrites, ultimately for use in word recognition.

  17. On the sample complexity of learning for networks of spiking neurons with nonlinear synaptic interactions.

    Science.gov (United States)

    Schmitt, Michael

    2004-09-01

    We study networks of spiking neurons that use the timing of pulses to encode information. Nonlinear interactions model the spatial groupings of synapses on the neural dendrites and describe the computations performed at local branches. Within a theoretical framework of learning we analyze the question of how many training examples these networks must receive to be able to generalize well. Bounds for this sample complexity of learning can be obtained in terms of a combinatorial parameter known as the pseudodimension. This dimension characterizes the computational richness of a neural network and is given in terms of the number of network parameters. Two types of feedforward architectures are considered: constant-depth networks and networks of unconstrained depth. We derive asymptotically tight bounds for each of these network types. Constant depth networks are shown to have an almost linear pseudodimension, whereas the pseudodimension of general networks is quadratic. Networks of spiking neurons that use temporal coding are becoming increasingly more important in practical tasks such as computer vision, speech recognition, and motor control. The question of how well these networks generalize from a given set of training examples is a central issue for their successful application as adaptive systems. The results show that, although coding and computation in these networks is quite different and in many cases more powerful, their generalization capabilities are at least as good as those of traditional neural network models.
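
    A standard way to turn such a pseudodimension bound into a sample-size requirement (a generic uniform-convergence estimate, not the paper's exact statement; constants and logarithmic factors differ between formulations) is

    $$ m \;=\; O\!\left(\frac{1}{\varepsilon^{2}}\left(d_{P}\log\frac{1}{\varepsilon}+\log\frac{1}{\delta}\right)\right), $$

    where $d_P$ denotes the pseudodimension of the network class, $\varepsilon$ the desired accuracy and $\delta$ the confidence parameter; the paper's contribution is to pin down how $d_P$ scales with the number of network parameters for the two architectures.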

  18. Towards a theory of cortical columns: From spiking neurons to interacting neural populations of finite size.

    Science.gov (United States)

    Schwalger, Tilo; Deger, Moritz; Gerstner, Wulfram

    2017-04-01

    Neural population equations such as neural mass or field models are widely used to study brain activity on a large scale. However, the relation of these models to the properties of single neurons is unclear. Here we derive an equation for several interacting populations at the mesoscopic scale starting from a microscopic model of randomly connected generalized integrate-and-fire neuron models. Each population consists of 50-2000 neurons of the same type but different populations account for different neuron types. The stochastic population equations that we find reveal how spike-history effects in single-neuron dynamics such as refractoriness and adaptation interact with finite-size fluctuations on the population level. Efficient integration of the stochastic mesoscopic equations reproduces the statistical behavior of the population activities obtained from microscopic simulations of a full spiking neural network model. The theory describes nonlinear emergent dynamics such as finite-size-induced stochastic transitions in multistable networks and synchronization in balanced networks of excitatory and inhibitory neurons. The mesoscopic equations are employed to rapidly integrate a model of a cortical microcircuit consisting of eight neuron types, which allows us to predict spontaneous population activities as well as evoked responses to thalamic input. Our theory establishes a general framework for modeling finite-size neural population dynamics based on single cell and synapse parameters and offers an efficient approach to analyzing cortical circuits and computations.
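
    As a rough illustration of the microscopic side of this programme, the sketch below simulates a small population of leaky integrate-and-fire neurons and records the population activity A(t), the quantity the mesoscopic equations are designed to reproduce together with its finite-size fluctuations. All parameters are illustrative and the mesoscopic theory itself is not implemented here.

```python
# Toy microscopic reference simulation (not the authors' mesoscopic theory):
# N leaky integrate-and-fire (LIF) neurons driven by noisy input; the
# population activity A(t) -- the fraction of neurons firing per time step --
# is the quantity a mesoscopic description must capture, including its
# finite-size fluctuations.  All parameter values are illustrative.
import numpy as np

N, T, dt = 500, 1.0, 1e-3                 # neurons, duration (s), step (s)
tau_m, v_reset, v_th = 20e-3, 0.0, 1.0    # membrane time constant, reset, threshold
mu, sigma = 1.2, 0.5                      # mean drive and noise strength

rng = np.random.default_rng(0)
v = rng.uniform(v_reset, v_th, N)         # random initial membrane potentials
steps = int(T / dt)
A = np.zeros(steps)                       # population activity A(t)

for t in range(steps):
    noise = sigma * np.sqrt(dt / tau_m) * rng.standard_normal(N)
    v += dt / tau_m * (-v + mu) + noise
    spiking = v >= v_th
    v[spiking] = v_reset                  # reset after a spike
    A[t] = spiking.mean() / dt            # instantaneous rate estimate (Hz)

print("mean population rate: %.1f Hz" % A.mean())
```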

  19. The NEST Dry-Run Mode: Efficient Dynamic Analysis of Neuronal Network Simulation Code

    Directory of Open Access Journals (Sweden)

    Susanne Kunkel

    2017-06-01

    NEST is a simulator for spiking neuronal networks that commits to a general-purpose approach: It allows for high flexibility in the design of network models, and its applications range from small-scale simulations on laptops to brain-scale simulations on supercomputers. Hence, developers need to test their code for various use cases and ensure that changes to code do not impair scalability. However, running a full set of benchmarks on a supercomputer takes up precious compute-time resources and can entail long queuing times. Here, we present the NEST dry-run mode, which enables comprehensive dynamic code analysis without requiring access to high-performance computing facilities. A dry-run simulation is carried out by a single process, which performs all simulation steps except communication as if it were part of a parallel environment with many processes. We show that measurements of memory usage and runtime of neuronal network simulations closely match the corresponding dry-run data. Furthermore, we demonstrate the successful application of the dry-run mode in the areas of profiling and performance modeling.

  20. Searching for collective behavior in a large network of sensory neurons.

    Directory of Open Access Journals (Sweden)

    Gašper Tkačik

    2014-01-01

    Maximum entropy models are the least structured probability distributions that exactly reproduce a chosen set of statistics measured in an interacting network. Here we use this principle to construct probabilistic models which describe the correlated spiking activity of populations of up to 120 neurons in the salamander retina as it responds to natural movies. Already in groups as small as 10 neurons, interactions between spikes can no longer be regarded as small perturbations in an otherwise independent system; for 40 or more neurons pairwise interactions need to be supplemented by a global interaction that controls the distribution of synchrony in the population. Here we show that such "K-pairwise" models, which are systematic extensions of the previously used pairwise Ising models, provide an excellent account of the data. We explore the properties of the neural vocabulary by: (1) estimating its entropy, which constrains the population's capacity to represent visual information; (2) classifying activity patterns into a small set of metastable collective modes; (3) showing that the neural codeword ensembles are extremely inhomogeneous; (4) demonstrating that the state of individual neurons is highly predictable from the rest of the population, allowing the capacity for error correction.
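
    The pairwise maximum-entropy form referred to above can be written down and evaluated by brute force for a small group of neurons, as in the hedged sketch below; the fields and couplings are random placeholders rather than parameters fitted to retinal data, and the global synchrony term of the "K-pairwise" extension is omitted.

```python
# Hedged sketch of the pairwise maximum-entropy (Ising) form,
#   P(sigma) proportional to exp(sum_i h_i sigma_i + sum_{i<j} J_ij sigma_i sigma_j),
# evaluated by exhaustive enumeration for a small group of binary neurons.
# h and J are random placeholders, not fitted to data.
import itertools
import numpy as np

N = 10                                    # small enough to enumerate 2**N states
rng = np.random.default_rng(0)
h = rng.normal(0, 0.5, N)                 # biases (firing tendencies)
J = np.triu(rng.normal(0, 0.1, (N, N)), 1)  # keep only i<j couplings

states = np.array(list(itertools.product([0, 1], repeat=N)))
energies = states @ h + np.einsum('si,ij,sj->s', states, J, states)
p = np.exp(energies)
p /= p.sum()                              # normalise: the partition function

entropy = -np.sum(p * np.log2(p))
print("model entropy: %.2f bits for %d binary neurons" % (entropy, N))
```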

  1. Oscillations, complex spatiotemporal behavior, and information transport in networks of excitatory and inhibitory neurons

    International Nuclear Information System (INIS)

    Destexhe, A.

    1994-01-01

    Various types of spatiotemporal behavior are described for two-dimensional networks of excitatory and inhibitory neurons with time delayed interactions. It is described how the network behaves as several structural parameters are varied, such as the number of neurons, the connectivity, and the values of synaptic weights. A transition from spatially uniform oscillations to spatiotemporal chaos via intermittentlike behavior is observed. The properties of spatiotemporally chaotic solutions are investigated by evaluating the largest positive Lyapunov exponent and the loss of correlation with distance. Finally, properties of information transport are evaluated during uniform oscillations and spatiotemporal chaos. It is shown that the diffusion coefficient increases significantly in the spatiotemporal phase similar to the increase of transport coefficients at the onset of fluid turbulence. It is proposed that such a property should be seen in other media, such as chemical turbulence or networks of oscillators. The possibility of measuring information transport from appropriate experiments is also discussed

  2. Spiking neural network for recognizing spatiotemporal sequences of spikes

    International Nuclear Information System (INIS)

    Jin, Dezhe Z.

    2004-01-01

    Sensory neurons in many brain areas spike with precise timing to stimuli with temporal structures, and encode temporally complex stimuli into spatiotemporal spikes. How the downstream neurons read out such neural code is an important unsolved problem. In this paper, we describe a decoding scheme using a spiking recurrent neural network. The network consists of excitatory neurons that form a synfire chain, and two globally inhibitory interneurons of different types that provide delayed feedforward and fast feedback inhibition, respectively. The network signals recognition of a specific spatiotemporal sequence when the last excitatory neuron down the synfire chain spikes, which happens if and only if that sequence was present in the input spike stream. The recognition scheme is invariant to variations in the intervals between input spikes within some range. The computation of the network can be mapped into that of a finite state machine. Our network provides a simple way to decode spatiotemporal spikes with diverse types of neurons
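
    The finite-state-machine reading of this recognition scheme can be illustrated in software, as in the hedged sketch below: a stage advances only if the expected neuron spikes within an allowed interval after the previous stage. This is not a simulation of the spiking network itself, and the sequence, intervals and tolerances are illustrative.

```python
# Software sketch of the recognition logic: recognition is signalled when the
# last stage of the chain is reached, which requires the target sequence of
# neuron ids with successive delays inside an allowed window.
def recognises(spike_stream, target_sequence, t_min=2.0, t_max=8.0):
    """spike_stream: iterable of (time, neuron_id).  Returns True if the
    target sequence occurs with successive delays in [t_min, t_max]."""
    stage, t_last = 0, None
    for t, nid in sorted(spike_stream):
        if nid == target_sequence[stage] and (
                stage == 0 or t_min <= t - t_last <= t_max):
            stage, t_last = stage + 1, t
            if stage == len(target_sequence):
                return True                        # last unit in the chain fired
        # spikes that do not match the current stage are simply ignored
    return False

stream = [(1.0, 'A'), (5.5, 'B'), (9.0, 'C'), (14.0, 'D')]
print(recognises(stream, ['A', 'B', 'C', 'D']))    # True
print(recognises(stream, ['A', 'C', 'B', 'D']))    # False
```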

  3. Intermittent synchronization in a network of bursting neurons

    Science.gov (United States)

    Park, Choongseok; Rubchinsky, Leonid L.

    2011-09-01

    Synchronized oscillations in networks of inhibitory and excitatory coupled bursting neurons are common in a variety of neural systems from central pattern generators to human brain circuits. One example of the latter is the subcortical network of the basal ganglia, formed by excitatory and inhibitory bursters of the subthalamic nucleus and globus pallidus, involved in motor control and affected in Parkinson's disease. Recent experiments have demonstrated the intermittent nature of the phase-locking of neural activity in this network. Here, we explore one potential mechanism to explain the intermittent phase-locking in a network. We simplify the network to obtain a model of two inhibitory coupled elements and explore its dynamics. We used geometric analysis and singular perturbation methods for dynamical systems to reduce the full model to a simpler set of equations. Mathematical analysis was completed using three slow variables with two different time scales. Intermittently, synchronous oscillations are generated by overlapped spiking which crucially depends on the geometry of the slow phase plane and the interplay between slow variables as well as the strength of synapses. Two slow variables are responsible for the generation of activity patterns with overlapped spiking, and the other slower variable enhances the robustness of an irregular and intermittent activity pattern. While the analyzed network and the explored mechanism of intermittent synchrony appear to be quite generic, the results of this analysis can be used to trace particular values of biophysical parameters (synaptic strength and parameters of calcium dynamics), which are known to be impacted in Parkinson's disease.

  4. Consistently Trained Artificial Neural Network for Automatic Ship Berthing Control

    Directory of Open Access Journals (Sweden)

    Y.A. Ahmed

    2015-09-01

    In this paper, a consistently trained Artificial Neural Network controller for automatic ship berthing is discussed. A minimum-time course-changing manoeuvre is utilised to ensure such consistency and a new concept named ‘virtual window’ is introduced. Such consistent teaching data are then used to train two separate multi-layered feed-forward neural networks for command rudder and propeller revolution output. After proper training, several known and unknown conditions are tested to judge the effectiveness of the proposed controller using Monte Carlo simulations. After getting acceptable percentages of success, the trained networks are implemented in the free-running experiment system to judge the network’s real-time response for the Esso Osaka 3-m model ship. The network’s behaviour during such experiments is also investigated for possible effects of initial conditions as well as wind disturbances. Moreover, since the final goal point of the proposed controller is set at some distance from the actual pier to ensure safety, a study on automatic tug assistance is also presented for the final alignment of the ship with the actual pier.

  5. Neuronal network disturbance after focal ischemia in rats

    International Nuclear Information System (INIS)

    Kataoka, K.; Hayakawa, T.; Yamada, K.; Mushiroi, T.; Kuroda, R.; Mogami, H.

    1989-01-01

    We studied functional disturbances following left middle cerebral artery occlusion in rats. Neuronal function was evaluated by [14C]2-deoxyglucose autoradiography 1 day after occlusion. We analyzed the mechanisms of change in glucose utilization outside the infarct using Fink-Heimer silver impregnation, axonal transport of wheat germ agglutinin-conjugated-horseradish peroxidase, and succinate dehydrogenase histochemistry. One day after occlusion, glucose utilization was remarkably reduced in the areas surrounding the infarct. There were many silver grains indicating degeneration of the synaptic terminals in the cortical areas surrounding the infarct and the ipsilateral cingulate cortex. Moreover, in the left thalamus where the left middle cerebral artery supplied no blood, glucose utilization significantly decreased compared with sham-operated rats. In the left thalamus, massive silver staining of degenerated synaptic terminals and decreases in succinate dehydrogenase activity were observed 4 and 5 days after occlusion. The absence of succinate dehydrogenase staining may reflect early changes in retrograde degeneration of thalamic neurons after ischemic injury of the thalamocortical pathway. Terminal degeneration even affected areas remote from the infarct: there were silver grains in the contralateral hemisphere transcallosally connected to the infarct and in the ipsilateral substantia nigra. Axonal transport study showed disruption of the corticospinal tract by subcortical ischemia; the transcallosal pathways in the cortex surrounding the infarct were preserved. The relation between neural function and the neuronal network in the area surrounding the focal cerebral infarct is discussed with regard to ischemic penumbra and diaschisis

  6. Defects formation and spiral waves in a network of neurons in presence of electromagnetic induction.

    Science.gov (United States)

    Rostami, Zahra; Jafari, Sajad

    2018-04-01

    The complex anatomical and physiological structure of an excitable tissue (e.g., cardiac tissue) in the body can produce different electrical activities through normal or abnormal behavior. Abnormalities of the excitable tissue arising from different biological causes can lead to the formation of defects. Such defects can emit successive waves that may give rise to additional reorganizing beating behaviors such as spiral waves or target waves. In this study, the formation of defects and the resulting emitted waves in an excitable tissue are investigated. We consider a square array network of neurons with nearest-neighbor connections to describe the excitable tissue. Fundamentally, the electrophysiological properties of ion currents in the body are responsible for the exhibition of electrical spatiotemporal patterns. More precisely, the fluctuation of accumulated ions inside and outside the cell produces variable electrical and magnetic fields. Considering the undeniable mutual effects of electrical and magnetic fields, we propose a new Hindmarsh-Rose (HR) neuronal model for the local dynamics of each individual neuron in the network. In this new neuronal model, the influence of magnetic flux on the membrane potential is defined. This improved model holds more bifurcation parameters. Moreover, the dynamical behavior of the tissue is investigated in the quiescent, spiking, bursting and even chaotic states. The resulting spatiotemporal patterns are presented and the time series of some sampled neurons are displayed as well.
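
    A hedged sketch of the kind of local dynamics referred to above is given below: a Hindmarsh-Rose neuron whose membrane potential is additionally modulated by a flux-dependent memristive term, a form commonly used to model electromagnetic induction. Parameter names and values are illustrative and not necessarily those of the cited study.

```python
# Hedged sketch of a flux-coupled Hindmarsh-Rose (HR) neuron: the membrane
# potential x is modulated by a memristive term k*rho(phi)*x, where the
# magnetic-flux variable phi is itself driven by the membrane potential.
# Parameters are illustrative, not those of the cited study.
import numpy as np

a, b, c, d = 1.0, 3.0, 1.0, 5.0           # classical HR parameters
r, s, x0, I = 0.006, 4.0, -1.6, 3.0
k, k1, k2 = 0.5, 0.9, 0.5                 # flux-coupling parameters
alpha, beta = 0.4, 0.02                   # memductance rho(phi) = alpha + 3*beta*phi**2

def hr_flux(state):
    x, y, z, phi = state
    rho = alpha + 3.0 * beta * phi ** 2
    dx = y - a * x**3 + b * x**2 - z + I - k * rho * x
    dy = c - d * x**2 - y
    dz = r * (s * (x - x0) - z)
    dphi = k1 * x - k2 * phi
    return np.array([dx, dy, dz, dphi])

dt, steps = 0.01, 100000
state = np.array([0.1, 0.0, 0.0, 0.0])
trace = np.empty(steps)
for i in range(steps):                    # simple Euler integration
    state = state + dt * hr_flux(state)
    trace[i] = state[0]

print("membrane potential range:", trace.min(), trace.max())
```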

  7. External pallidal stimulation improves parkinsonian motor signs and modulates neuronal activity throughout the basal ganglia thalamic network.

    Science.gov (United States)

    Vitek, Jerrold L; Zhang, Jianyu; Hashimoto, Takao; Russo, Gary S; Baker, Kenneth B

    2012-01-01

    Deep brain stimulation (DBS) of the internal segment of the globus pallidus (GPi) and the subthalamic nucleus (STN) are effective for the treatment of advanced Parkinson's disease (PD). We have shown previously that DBS of the external segment of the globus pallidus (GPe) is associated with improvements in parkinsonian motor signs; however, the mechanism of this effect is not known. In this study, we extend our findings on the effect of STN and GPi DBS on neuronal activity in the basal ganglia thalamic network to include GPe DBS using the 1-methyl-4-phenyl-1.2.3.6-tetrahydropyridine (MPTP) monkey model. Stimulation parameters that improved bradykinesia were associated with changes in the pattern and mean discharge rate of neuronal activity in the GPi, STN, and the pallidal [ventralis lateralis pars oralis (VLo) and ventralis anterior (VA)] and cerebellar [ventralis lateralis posterior pars oralis (VPLo)] receiving areas of the motor thalamus. Population post-stimulation time histograms revealed a complex pattern of stimulation-related inhibition and excitation for the GPi and VA/VLo, with a more consistent pattern of inhibition in STN and excitation in VPLo. Mean discharge rate was reduced in the GPi and STN and increased in the VPLo. Effective GPe DBS also reduced bursting in the STN and GPi. These data support the hypothesis that therapeutic DBS activates output from the stimulated structure and changes the temporal pattern of neuronal activity throughout the basal ganglia thalamic network and provide further support for GPe as a potential therapeutic target for DBS in the treatment of PD. Copyright © 2011 Elsevier Inc. All rights reserved.

  8. Neuron-Like Networks Between Ribosomal Proteins Within the Ribosome

    Science.gov (United States)

    Poirot, Olivier; Timsit, Youri

    2016-05-01

    From the brain to the World Wide Web, information-processing networks share common scale-invariant properties. Here, we reveal the existence of neural-like networks at a molecular scale within the ribosome. We show that with their extensions, ribosomal proteins form complex assortative interaction networks through which they communicate via tiny interfaces. The analysis of the crystal structures of 50S eubacterial particles reveals that most of these interfaces involve key phylogenetically conserved residues. The systematic observation of interactions between basic and aromatic amino acids at the interfaces and along the extensions provides new structural insights that may help decipher the molecular mechanisms of signal transmission within or between the ribosomal proteins. Similar to neurons interacting through “molecular synapses”, ribosomal proteins form a network that suggests an analogy with a simple molecular brain in which the “sensory-proteins” innervate the functional ribosomal sites, while the “inter-proteins” interconnect them into circuits suitable to process the information flow that circulates during protein synthesis. It is likely that these circuits have evolved to coordinate both the complex macromolecular motions and the binding of the multiple factors during translation. This opens new perspectives on nanoscale information transfer and processing.

  9. A Note on Some Numerical Approaches to Solve a θ˙ Neuron Networks Model

    Directory of Open Access Journals (Sweden)

    Samir Kumar Bhowmik

    2014-01-01

    Space-time integration plays an important role in analyzing scientific and engineering models. In this paper, we consider an integro-differential equation that arises from modeling θ˙ neuron networks. Here, we investigate various schemes for the time discretization of a theta-neuron model. We use collocation and a midpoint quadrature formula for the space integration and then apply various time integration schemes to obtain a fully discrete system. We present some computational results to demonstrate the schemes.
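
    One ingredient mentioned above, explicit midpoint time stepping, can be illustrated on a single theta-neuron; the hedged sketch below omits the network and the space-integration part of the model, and the drive and step size are illustrative.

```python
# Hedged sketch: explicit midpoint time stepping for a single theta-neuron,
#     d(theta)/dt = (1 - cos(theta)) + (1 + cos(theta)) * I,
# with a spike registered when the phase crosses pi.  The spatial/integral
# part of the model is omitted; values are illustrative.
import numpy as np

def f(theta, I):
    return (1.0 - np.cos(theta)) + (1.0 + np.cos(theta)) * I

I_drive, dt, steps = 0.1, 1e-3, 50000
theta, spikes = 0.0, 0
for n in range(steps):
    half = theta + 0.5 * dt * f(theta, I_drive)      # midpoint predictor
    theta_new = theta + dt * f(half, I_drive)        # midpoint corrector
    # count a spike when the phase passes pi (modulo 2*pi)
    if (theta % (2 * np.pi)) < np.pi <= (theta_new % (2 * np.pi)):
        spikes += 1
    theta = theta_new

print("spikes in %.1f time units: %d" % (steps * dt, spikes))
```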

  10. Mirrored STDP Implements Autoencoder Learning in a Network of Spiking Neurons.

    Science.gov (United States)

    Burbank, Kendra S

    2015-12-01

    The autoencoder algorithm is a simple but powerful unsupervised method for training neural networks. Autoencoder networks can learn sparse distributed codes similar to those seen in cortical sensory areas such as visual area V1, but they can also be stacked to learn increasingly abstract representations. Several computational neuroscience models of sensory areas, including Olshausen & Field's Sparse Coding algorithm, can be seen as autoencoder variants, and autoencoders have seen extensive use in the machine learning community. Despite their power and versatility, autoencoders have been difficult to implement in a biologically realistic fashion. The challenges include their need to calculate differences between two neuronal activities and their requirement for learning rules which lead to identical changes at feedforward and feedback connections. Here, we study a biologically realistic network of integrate-and-fire neurons with anatomical connectivity and synaptic plasticity that closely matches that observed in cortical sensory areas. Our choice of synaptic plasticity rules is inspired by recent experimental and theoretical results suggesting that learning at feedback connections may have a different form from learning at feedforward connections, and our results depend critically on this novel choice of plasticity rules. Specifically, we propose that plasticity rules at feedforward versus feedback connections are temporally opposed versions of spike-timing dependent plasticity (STDP), leading to a symmetric combined rule we call Mirrored STDP (mSTDP). We show that with mSTDP, our network follows a learning rule that approximately minimizes an autoencoder loss function. When trained with whitened natural image patches, the learned synaptic weights resemble the receptive fields seen in V1. Our results use realistic synaptic plasticity rules to show that the powerful autoencoder learning algorithm could be within the reach of real biological networks.
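
    The flavour of the plasticity rule described above can be conveyed with a hedged sketch of a pair-based additive STDP window and a temporally mirrored counterpart for feedback synapses; amplitudes and time constants are illustrative and the full network model is not reproduced here.

```python
# Hedged sketch: a pair-based additive STDP window and a temporally reversed
# ("mirrored") window for feedback synapses.  Because pre/post roles swap at
# the feedback synapse, applying the reversed window there makes feedforward
# and feedback weights change in the same direction for a given spike
# pairing.  Amplitudes and time constants are illustrative.
import numpy as np

A_plus, A_minus = 0.01, 0.012
tau_plus, tau_minus = 20.0, 20.0          # ms

def stdp(dt):
    """Feedforward weight change for dt = t_post - t_pre (ms)."""
    return A_plus * np.exp(-dt / tau_plus) if dt >= 0 \
        else -A_minus * np.exp(dt / tau_minus)

def mirrored_stdp(dt):
    """Feedback weight change: the temporally reversed window."""
    return stdp(-dt)

for dt_ff in (-10.0, 10.0):               # dt_ff = t_hidden - t_input at the feedforward synapse
    dt_fb = -dt_ff                        # seen from the feedback synapse, pre/post roles swap
    print("pairing dt = %+5.1f ms: feedforward %+0.4f, feedback %+0.4f"
          % (dt_ff, stdp(dt_ff), mirrored_stdp(dt_fb)))
```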

  11. Spiking Neurons for Analysis of Patterns

    Science.gov (United States)

    Huntsberger, Terrance

    2008-01-01

    Artificial neural networks comprising spiking neurons of a novel type have been conceived as improved pattern-analysis and pattern-recognition computational systems. These neurons are represented by a mathematical model denoted the state-variable model (SVM), which among other things, exploits a computational parallelism inherent in spiking-neuron geometry. Networks of SVM neurons offer advantages of speed and computational efficiency, relative to traditional artificial neural networks. The SVM also overcomes some of the limitations of prior spiking-neuron models. There are numerous potential pattern-recognition, tracking, and data-reduction (data preprocessing) applications for these SVM neural networks on Earth and in exploration of remote planets. Spiking neurons imitate biological neurons more closely than do the neurons of traditional artificial neural networks. A spiking neuron includes a central cell body (soma) surrounded by a tree-like interconnection network (dendrites). Spiking neurons are so named because they generate trains of output pulses (spikes) in response to inputs received from sensors or from other neurons. They gain their speed advantage over traditional neural networks by using the timing of individual spikes for computation, whereas traditional artificial neurons use averages of activity levels over time. Moreover, spiking neurons use the delays inherent in dendritic processing in order to efficiently encode the information content of incoming signals. Because traditional artificial neurons fail to capture this encoding, they have less processing capability, and so it is necessary to use more gates when implementing traditional artificial neurons in electronic circuitry. Such higher-order functions as dynamic tasking are effected by use of pools (collections) of spiking neurons interconnected by spike-transmitting fibers. The SVM includes adaptive thresholds and submodels of transport of ions (in imitation of such transport in biological

  12. Reward-dependent learning in neuronal networks for planning and decision making.

    Science.gov (United States)

    Dehaene, S; Changeux, J P

    2000-01-01

    Neuronal network models have been proposed for the organization of evaluation and decision processes in prefrontal circuitry and their putative neuronal and molecular bases. The models all include an implementation and simulation of an elementary reward mechanism. Their central hypothesis is that tentative rules of behavior, which are coded by clusters of active neurons in prefrontal cortex, are selected or rejected based on an evaluation by this reward signal, which may be conveyed, for instance, by the mesencephalic dopaminergic neurons with which the prefrontal cortex is densely interconnected. At the molecular level, the reward signal is postulated to be a neurotransmitter such as dopamine, which exerts a global modulatory action on prefrontal synaptic efficacies, either via volume transmission or via targeted synaptic triads. Negative reinforcement has the effect of destabilizing the currently active rule-coding clusters; subsequently, spontaneous activity varies again from one cluster to another, giving the organism the chance to discover and learn a new rule. Thus, reward signals function as effective selection signals that either maintain or suppress currently active prefrontal representations as a function of their current adequacy. Simulations of this variation-selection have successfully accounted for the main features of several major tasks that depend on prefrontal cortex integrity, such as the delayed-response test, the Wisconsin card sorting test, the Tower of London test and the Stroop test. For the more complex tasks, we have found it necessary to supplement the external reward input with a second mechanism that supplies an internal reward; it consists of an auto-evaluation loop which short-circuits the reward input from the exterior. This allows for an internal evaluation of covert motor intentions without actualizing them as behaviors, by simply testing them covertly by comparison with memorized former experiences. This element of architecture

  13. Spiny Neurons of Amygdala, Striatum and Cortex Use Dendritic Plateau Potentials to Detect Network UP States

    Directory of Open Access Journals (Sweden)

    Katerina D Oikonomou

    2014-09-01

    Spiny neurons of amygdala, striatum, and cerebral cortex share four interesting features: [1] they are the most abundant cell type within their respective brain area, [2] they are covered by thousands of thorny protrusions (dendritic spines), [3] they possess high levels of dendritic NMDA conductances, and [4] they experience sustained somatic depolarizations in vivo and in vitro (UP states). In all spiny neurons of the forebrain, adequate glutamatergic inputs generate dendritic plateau potentials (dendritic UP states) characterized by (i) a fast rise, (ii) a plateau phase lasting several hundred milliseconds and (iii) an abrupt decline at the end of the plateau phase. The dendritic plateau potential propagates towards the cell body decrementally to induce a long-lasting (longer than 100 ms, most often 200–800 ms) steady depolarization (~20 mV amplitude), which resembles a neuronal UP state. Based on voltage-sensitive dye imaging, the plateau depolarization in the soma is precisely time-locked to the regenerative plateau potential taking place in the dendrite. The somatic plateau rises after the onset of the dendritic voltage transient and collapses with the breakdown of the dendritic plateau depolarization. We hypothesize that neuronal UP states in vivo reflect the occurrence of dendritic plateau potentials (dendritic UP states). We propose that the somatic voltage waveform during a neuronal UP state is determined by dendritic plateau potentials. A mammalian spiny neuron uses dendritic plateau potentials to detect and transform coherent network activity into a ubiquitous neuronal UP state. The biophysical properties of dendritic plateau potentials allow neurons to quickly attune to the ongoing network activity, as well as secure the stable amplitudes of successive UP states.

  14. Control of neuronal network organization by chemical surface functionalization of multi-walled carbon nanotube arrays

    International Nuclear Information System (INIS)

    Liu Jie; Bibari, Olivier; Marchand, Gilles; Benabid, Alim-Louis; Sauter-Starace, Fabien; Appaix, Florence; De Waard, Michel

    2011-01-01

    Carbon nanotube substrates are promising candidates for biological applications and devices. Interfacing of these carbon nanotubes with neurons can be controlled by chemical modifications. In this study, we investigated how chemical surface functionalization of multi-walled carbon nanotube arrays (MWNT-A) influences neuronal adhesion and network organization. Functionalization of MWNT-A dramatically modifies the length of neurite fascicles, cluster inter-connection success rate, and the percentage of neurites that escape from the clusters. We propose that chemical functionalization represents a method of choice for developing applications in which neuronal patterning on MWNT-A substrates is required.

  15. Control of neuronal network organization by chemical surface functionalization of multi-walled carbon nanotube arrays

    Energy Technology Data Exchange (ETDEWEB)

    Liu Jie; Bibari, Olivier; Marchand, Gilles; Benabid, Alim-Louis; Sauter-Starace, Fabien [CEA, LETI-Minatec, 17 Rue des Martyrs, 38054 Grenoble Cedex 9 (France); Appaix, Florence; De Waard, Michel, E-mail: fabien.sauter@cea.fr, E-mail: michel.dewaard@ujf-grenoble.fr [Inserm U836, Grenoble Institute of Neuroscience, Site Sante la Tronche, Batiment Edmond J Safra, Chemin Fortune Ferrini, BP170, 38042 Grenoble Cedex 09 (France)

    2011-05-13

    Carbon nanotube substrates are promising candidates for biological applications and devices. Interfacing of these carbon nanotubes with neurons can be controlled by chemical modifications. In this study, we investigated how chemical surface functionalization of multi-walled carbon nanotube arrays (MWNT-A) influences neuronal adhesion and network organization. Functionalization of MWNT-A dramatically modifies the length of neurite fascicles, cluster inter-connection success rate, and the percentage of neurites that escape from the clusters. We propose that chemical functionalization represents a method of choice for developing applications in which neuronal patterning on MWNT-A substrates is required.

  16. A neuronal network model with simplified tonotopicity for tinnitus generation and its relief by sound therapy.

    Science.gov (United States)

    Nagashino, Hirofumi; Kinouchi, Yohsuke; Danesh, Ali A; Pandya, Abhijit S

    2013-01-01

    Tinnitus is the perception of sound in the ears or in the head when no external source is present. Sound therapy is one of the most effective techniques proposed for tinnitus treatment. In order to investigate the mechanisms of tinnitus generation and the clinical effects of sound therapy, we have proposed conceptual and computational models with plasticity using a neural oscillator or a neuronal network model. In the present paper, we propose a neuronal network model with a simplified tonotopicity of the auditory system as a more detailed structure. In this model an integrate-and-fire neuron model is employed and homeostatic plasticity is incorporated. The computer simulation results show that the present model can reproduce the generation of oscillation and its cessation by external input. This suggests that the present framework is promising as a model of tinnitus generation and of the effects of sound therapy.

  17. Towards a theory of cortical columns: From spiking neurons to interacting neural populations of finite size

    Science.gov (United States)

    Gerstner, Wulfram

    2017-01-01

    Neural population equations such as neural mass or field models are widely used to study brain activity on a large scale. However, the relation of these models to the properties of single neurons is unclear. Here we derive an equation for several interacting populations at the mesoscopic scale starting from a microscopic model of randomly connected generalized integrate-and-fire neuron models. Each population consists of 50–2000 neurons of the same type but different populations account for different neuron types. The stochastic population equations that we find reveal how spike-history effects in single-neuron dynamics such as refractoriness and adaptation interact with finite-size fluctuations on the population level. Efficient integration of the stochastic mesoscopic equations reproduces the statistical behavior of the population activities obtained from microscopic simulations of a full spiking neural network model. The theory describes nonlinear emergent dynamics such as finite-size-induced stochastic transitions in multistable networks and synchronization in balanced networks of excitatory and inhibitory neurons. The mesoscopic equations are employed to rapidly integrate a model of a cortical microcircuit consisting of eight neuron types, which allows us to predict spontaneous population activities as well as evoked responses to thalamic input. Our theory establishes a general framework for modeling finite-size neural population dynamics based on single cell and synapse parameters and offers an efficient approach to analyzing cortical circuits and computations. PMID:28422957

  18. Healthy human CSF promotes glial differentiation of hESC-derived neural cells while retaining spontaneous activity in existing neuronal networks

    Directory of Open Access Journals (Sweden)

    Heikki Kiiski

    2013-05-01

    The possibilities of human pluripotent stem cell-derived neural cells, from a basic research tool to a treatment option in regenerative medicine, have been well recognized. These cells also offer an interesting tool for in vitro models of neuronal networks to be used for drug screening and neurotoxicological studies and for patient/disease-specific in vitro models. Here, aiming to develop a reductionistic in vitro human neuronal network model, we tested whether human embryonic stem cell (hESC)-derived neural cells could be cultured in human cerebrospinal fluid (CSF) in order to better mimic the in vivo conditions. Our results showed that CSF altered the differentiation of hESC-derived neural cells towards glial cells at the expense of neuronal differentiation. The proliferation rate was reduced in CSF cultures. However, even though the use of CSF as the culture medium altered the glial vs. neuronal differentiation rate, the pre-existing spontaneous activity of the neuronal networks persisted throughout the study. These results suggest that it is possible to develop fully human cell and culture-based environments that can further be modified for various in vitro modeling purposes.

  19. Repeated Stimulation of Cultured Networks of Rat Cortical Neurons Induces Parallel Memory Traces

    Science.gov (United States)

    le Feber, Joost; Witteveen, Tim; van Veenendaal, Tamar M.; Dijkstra, Jelle

    2015-01-01

    During systems consolidation, memories are spontaneously replayed favoring information transfer from hippocampus to neocortex. However, at present no empirically supported mechanism to accomplish a transfer of memory from hippocampal to extra-hippocampal sites has been offered. We used cultured neuronal networks on multielectrode arrays and…

  20. Synchronizations in small-world networks of spiking neurons: Diffusive versus sigmoid couplings

    International Nuclear Information System (INIS)

    Hasegawa, Hideo

    2005-01-01

    By using a semianalytical dynamical mean-field approximation previously proposed by the author [H. Hasegawa, Phys. Rev. E 70, 066107 (2004)], we have studied the synchronization of stochastic, small-world (SW) networks of FitzHugh-Nagumo neurons with diffusive couplings. The differences and similarities between the results for diffusive and sigmoid couplings are discussed. It is shown that introducing weak heterogeneity into regular networks may slightly increase the synchronization for diffusive couplings, while it decreases the synchronization for sigmoid couplings. This increase in synchronization for diffusive couplings is shown to be due to their local, negative-feedback contributions, and not to the short average distance in SW networks. The synchronization of SW networks depends not only on their structure but also on the type of couplings.
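
    A hedged sketch of the kind of model compared in this record is given below: FitzHugh-Nagumo units on a crudely rewired small-world graph with diffusive coupling D·Σ_j A_ij (v_j − v_i), together with a naive synchrony measure; the construction, parameters and measure are illustrative rather than those of the cited study.

```python
# Hedged sketch: FitzHugh-Nagumo (FHN) units on a Watts-Strogatz-like
# small-world graph with DIFFUSIVE coupling D * sum_j A_ij (v_j - v_i).
# Graph construction, parameters and the synchrony measure are illustrative.
import numpy as np

rng = np.random.default_rng(0)
N, k, p_rewire = 100, 2, 0.1              # ring with k neighbours per side, rewiring prob.

A = np.zeros((N, N))                      # build a small-world adjacency matrix
for i in range(N):
    for d in range(1, k + 1):
        A[i, (i + d) % N] = A[(i + d) % N, i] = 1
for i in range(N):
    for j in np.nonzero(A[i])[0]:
        if j > i and rng.random() < p_rewire:
            new = rng.integers(N)
            if new != i and new != j:     # crude rewiring, avoid self-loops
                A[i, j] = A[j, i] = 0
                A[i, new] = A[new, i] = 1

a, b, eps, I, D = 0.7, 0.8, 0.08, 0.5, 0.1
v = rng.uniform(-1, 1, N)
w = np.zeros(N)
dt, steps = 0.05, 4000
sync = []

for t in range(steps):
    coupling = D * (A @ v - A.sum(1) * v)  # diffusive term D*sum_j A_ij (v_j - v_i)
    dv = v - v**3 / 3 - w + I + coupling
    dw = eps * (v + a - b * w)
    v, w = v + dt * dv, w + dt * dw
    sync.append(v.var())                   # low variance across neurons = high synchrony

print("mean variance of membrane potentials:", np.mean(sync[steps // 2:]))
```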

  1. Decentralized Consistent Network Updates in SDN with ez-Segway

    KAUST Repository

    Nguyen, Thanh Dang; Chiesa, Marco; Canini, Marco

    2017-01-01

    We present ez-Segway, a decentralized mechanism to consistently and quickly update the network state while preventing forwarding anomalies (loops and black-holes) and avoiding link congestion. In our design, the centralized SDN controller only pre-computes

  2. Computational model of neuron-astrocyte interactions during focal seizure generation

    Directory of Open Access Journals (Sweden)

    Davide eReato

    2012-10-01

    Empirical research in the last decade has revealed that astrocytes can respond to neurotransmitters with Ca2+ elevations and generate feedback signals to neurons which modulate synaptic transmission and neuronal excitability. This discovery changed our basic understanding of brain function and provided new perspectives on how astrocytes can participate not only in information processing but also in the genesis of brain disorders, such as epilepsy. Epilepsy is a neurological disorder characterized by recurrent seizures that can arise focally at restricted areas and propagate throughout the brain. Studies in brain slice models suggest that astrocytes contribute to epileptiform activity by increasing neuronal excitability through a Ca2+-dependent release of glutamate. The underlying mechanism remains, however, unclear. In this study, we implemented a parsimonious network model of neurons and astrocytes. The model consists of excitatory and inhibitory neurons described by Izhikevich's neuron dynamics. The experimentally observed Ca2+ change in astrocytes in response to neuronal activity was modeled with linear equations. We considered that glutamate is released from astrocytes above a certain intracellular Ca2+ concentration, thus providing a non-linear positive feedback signal to neurons. Propagating seizure-like ictal discharges (IDs) were reliably evoked in our computational model by repeatedly exciting a small area of the network, which replicates experimental results in a slice model of focal ID in entorhinal cortex. We found that the threshold of focal ID generation was lowered when an excitatory feedback-loop between astrocytes and neurons was included. Simulations show that astrocytes can contribute to ID generation by directly affecting the excitatory/inhibitory balance of the neuronal network. Our model can be used to obtain mechanistic insights into the distinct contributions of the different signaling pathways to the generation and
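
    A hedged caricature of this model class is sketched below: a single Izhikevich neuron whose input gains an extra depolarising current once an astrocytic calcium variable, driven linearly by the neuron's own firing, exceeds a threshold. Everything beyond the standard Izhikevich equations, and all parameter values, are illustrative stand-ins rather than the published model.

```python
# Hedged caricature: one Izhikevich neuron with an astrocyte-like feedback.
# A calcium variable rises with each spike and decays slowly; once it exceeds
# a threshold, an extra "glutamate" current is injected into the neuron.
# Everything beyond the standard Izhikevich neuron is an illustrative stand-in.
a, b, c, d = 0.02, 0.2, -65.0, 8.0        # Izhikevich regular-spiking parameters
dt, steps = 0.5, 4000                     # ms (2000 ms total)
v, u = -65.0, b * -65.0
ca, ca_thresh = 0.0, 2.0                  # astrocytic calcium variable and threshold
I_base, I_glia = 10.0, 4.0
spike_count = 0

for t in range(steps):
    I = I_base + (I_glia if ca > ca_thresh else 0.0)   # Ca2+-gated feedback current
    v += dt * (0.04 * v**2 + 5 * v + 140 - u + I)
    u += dt * a * (b * v - u)
    fired = v >= 30.0
    if fired:
        v, u = c, u + d                   # spike reset
        spike_count += 1
    # calcium: +1 per spike, slow exponential decay (time constant ~500 ms)
    ca = ca - dt * 0.002 * ca + (1.0 if fired else 0.0)

print("spikes: %d   final astrocytic Ca2+ variable: %.2f" % (spike_count, ca))
```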

  3. Ablation of NMDA receptors enhances the excitability of hippocampal CA3 neurons.

    Directory of Open Access Journals (Sweden)

    Fumiaki Fukushima

    Full Text Available Synchronized discharges in the hippocampal CA3 recurrent network are supposed to underlie network oscillations, memory formation and seizure generation. In the hippocampal CA3 network, NMDA receptors are abundant at the recurrent synapses but scarce at the mossy fiber synapses. We generated mutant mice in which NMDA receptors were abolished in hippocampal CA3 pyramidal neurons by postnatal day 14. The histological and cytological organizations of the hippocampal CA3 region were indistinguishable between control and mutant mice. We found that mutant mice lacking NMDA receptors selectively in CA3 pyramidal neurons became more susceptible to kainate-induced seizures. Consistently, mutant mice showed characteristic large EEG spikes associated with multiple unit activities (MUA, suggesting enhanced synchronous firing of CA3 neurons. The electrophysiological balance between fast excitatory and inhibitory synaptic transmission was comparable between control and mutant pyramidal neurons in the hippocampal CA3 region, while the NMDA receptor-slow AHP coupling was diminished in the mutant neurons. In the adult brain, inducible ablation of NMDA receptors in the hippocampal CA3 region by the viral expression vector for Cre recombinase also induced similar large EEG spikes. Furthermore, pharmacological blockade of CA3 NMDA receptors enhanced the susceptibility to kainate-induced seizures. These results raise an intriguing possibility that hippocampal CA3 NMDA receptors may suppress the excitability of the recurrent network as a whole in vivo by restricting synchronous firing of CA3 neurons.

  4. Delay selection by spike-timing-dependent plasticity in recurrent networks of spiking neurons receiving oscillatory inputs.

    Directory of Open Access Journals (Sweden)

    Robert R Kerr

    Learning rules, such as spike-timing-dependent plasticity (STDP), change the structure of networks of neurons based on the firing activity. A network-level understanding of these mechanisms can help infer how the brain learns patterns and processes information. Previous studies have shown that STDP selectively potentiates feed-forward connections that have specific axonal delays, and that this underlies behavioral functions such as sound localization in the auditory brainstem of the barn owl. In this study, we investigate how STDP leads to the selective potentiation of recurrent connections with different axonal and dendritic delays during oscillatory activity. We develop analytical models of learning with additive STDP in recurrent networks driven by oscillatory inputs, and support the results using simulations with leaky integrate-and-fire neurons. Our results show selective potentiation of connections with specific axonal delays, which depends on the input frequency. In addition, we demonstrate how this can lead to a network becoming selective in the amplitude of its oscillatory response to this frequency. We extend this model of axonal delay selection within a single recurrent network in two ways. First, we show the selective potentiation of connections with a range of both axonal and dendritic delays. Second, we show axonal delay selection between multiple groups receiving out-of-phase, oscillatory inputs. We discuss the application of these models to the formation and activation of neuronal ensembles or cell assemblies in the cortex, and also to missing fundamental pitch perception in the auditory brainstem.

  5. Bistability Analysis of Excitatory-Inhibitory Neural Networks in Limited-Sustained-Activity Regime

    International Nuclear Information System (INIS)

    Ni Yun; Wu Liang; Wu Dan; Zhu Shiqun

    2011-01-01

    Bistable behavior of neuronal complex networks is investigated in the limited-sustained-activity regime when the network is composed of excitatory and inhibitory neurons. The standard stability analysis is performed on the two metastable states separately. Both theoretical analysis and numerical simulations consistently show that the difference between the time scales of the excitatory and inhibitory populations can dramatically influence the dynamical behaviors of the neuronal networks, leading to a transition from bistable behavior with memory effects to the collapse of bistability. These results suggest one possible mode of neuronal information processing achieved solely by tuning time scales. (interdisciplinary physics and related areas of science and technology)

  6. Response of Cultured Neuronal Network Activity After High-Intensity Power Frequency Magnetic Field Exposure

    Directory of Open Access Journals (Sweden)

    Atsushi Saito

    2018-03-01

    High-intensity, low-frequency (1–100 kHz) time-varying electromagnetic fields stimulate the human body through excitation of the nervous system. In the power frequency range (50/60 Hz), a frequency-dependent threshold of the external electric field-induced neuronal modulation in cultured neuronal networks was used as one of the biological indicators in international guidelines; however, the threshold of the magnetic field-induced neuronal modulation has not been elucidated. In this study, we exposed rat brain-derived neuronal networks to a high-intensity power frequency magnetic field (hPF-MF), and evaluated the modulation of synchronized bursting activity using a multi-electrode array (MEA)-based extracellular recording technique. As a result of short-term hPF-MF exposure (50–400 mT root-mean-square (rms), 50 Hz, sinusoidal wave, 6 s), the synchronized bursting activity was increased in the 400 mT-exposed group. On the other hand, no change was observed in the 50–200 mT-exposed groups. In order to clarify the mechanisms of the 400 mT hPF-MF exposure-induced neuronal response, we evaluated it after blocking inhibitory synapses using bicuculline methiodide (BMI); subsequently, an increase in bursting activity was observed with BMI application, and the response to 400 mT hPF-MF exposure disappeared. Therefore, it was suggested that the response to hPF-MF exposure involved inhibitory input. Next, we screened the inhibitory pacemaker-like neuronal activity which showed autonomous 4–10 Hz firing with CNQX and D-AP5 application, and it was confirmed that the activity was reduced after 400 mT hPF-MF exposure. Comparison of these experimental results with estimated values of the induced electric field (E-field) in the culture medium revealed that the change in synchronized bursting activity occurred over 0.3 V/m, which was equivalent to the findings of a previous study that used external electric fields. In addition, the results suggested that

  7. Reverse engineering a mouse embryonic stem cell-specific transcriptional network reveals a new modulator of neuronal differentiation.

    Science.gov (United States)

    De Cegli, Rossella; Iacobacci, Simona; Flore, Gemma; Gambardella, Gennaro; Mao, Lei; Cutillo, Luisa; Lauria, Mario; Klose, Joachim; Illingworth, Elizabeth; Banfi, Sandro; di Bernardo, Diego

    2013-01-01

    Gene expression profiles can be used to infer previously unknown transcriptional regulatory interaction among thousands of genes, via systems biology 'reverse engineering' approaches. We 'reverse engineered' an embryonic stem (ES)-specific transcriptional network from 171 gene expression profiles, measured in ES cells, to identify master regulators of gene expression ('hubs'). We discovered that E130012A19Rik (E13), highly expressed in mouse ES cells as compared with differentiated cells, was a central 'hub' of the network. We demonstrated that E13 is a protein-coding gene implicated in regulating the commitment towards the different neuronal subtypes and glia cells. The overexpression and knock-down of E13 in ES cell lines, undergoing differentiation into neurons and glia cells, caused a strong up-regulation of the glutamatergic neurons marker Vglut2 and a strong down-regulation of the GABAergic neurons marker GAD65 and of the radial glia marker Blbp. We confirmed E13 expression in the cerebral cortex of adult mice and during development. By immuno-based affinity purification, we characterized protein partners of E13, involved in the Polycomb complex. Our results suggest a role of E13 in regulating the division between glutamatergic projection neurons and GABAergic interneurons and glia cells possibly by epigenetic-mediated transcriptional regulation.

  8. Linking Neurons to Network Function and Behavior by Two-Photon Holographic Optogenetics and Volumetric Imaging.

    Science.gov (United States)

    Dal Maschio, Marco; Donovan, Joseph C; Helmbrecht, Thomas O; Baier, Herwig

    2017-05-17

    We introduce a flexible method for high-resolution interrogation of circuit function, which combines simultaneous 3D two-photon stimulation of multiple targeted neurons, volumetric functional imaging, and quantitative behavioral tracking. This integrated approach was applied to dissect how an ensemble of premotor neurons in the larval zebrafish brain drives a basic motor program, the bending of the tail. We developed an iterative photostimulation strategy to identify minimal subsets of channelrhodopsin (ChR2)-expressing neurons that are sufficient to initiate tail movements. At the same time, the induced network activity was recorded by multiplane GCaMP6 imaging across the brain. From this dataset, we computationally identified activity patterns associated with distinct components of the elicited behavior and characterized the contributions of individual neurons. Using photoactivatable GFP (paGFP), we extended our protocol to visualize single functionally identified neurons and reconstruct their morphologies. Together, this toolkit enables linking behavior to circuit activity with unprecedented resolution. Copyright © 2017 Elsevier Inc. All rights reserved.

  9. Mirror Neurons in Humans: Consisting or Confounding Evidence?

    Science.gov (United States)

    Turella, Luca; Pierno, Andrea C.; Tubaldi, Federico; Castiello, Umberto

    2009-01-01

    The widely known discovery of mirror neurons in macaques shows that premotor and parietal cortical areas are not only involved in executing one's own movement, but are also active when observing the action of others. The goal of this essay is to critically evaluate the substance of functional magnetic resonance imaging (fMRI) and positron emission…

  10. Synchronization in node of complex networks consist of complex chaotic system

    Energy Technology Data Exchange (ETDEWEB)

    Wei, Qiang, E-mail: qiangweibeihua@163.com [Beihua University computer and technology College, BeiHua University, Jilin, 132021, Jilin (China); Digital Images Processing Institute of Beihua University, BeiHua University, Jilin, 132011, Jilin (China); Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian, 116024 (China); Xie, Cheng-jun [Beihua University computer and technology College, BeiHua University, Jilin, 132021, Jilin (China); Digital Images Processing Institute of Beihua University, BeiHua University, Jilin, 132011, Jilin (China); Liu, Hong-jun [School of Information Engineering, Weifang Vocational College, Weifang, 261041 (China); Li, Yan-hui [The Library, Weifang Vocational College, Weifang, 261041 (China)

    2014-07-15

    A new synchronization method is investigated for nodes of complex networks consisting of complex chaotic systems. When the complex networks realize synchronization, different components of the complex state variable synchronize up to different scaling complex functions via a designed complex feedback controller. This paper extends the synchronization scaling function from the real field to the complex field for synchronization in nodes of complex networks with complex chaotic systems. Synchronization in complex networks with constant coupling delay and with time-varying coupling delay is investigated, respectively. Numerical simulations are provided to show the effectiveness of the proposed method.

  11. Optimization behavior of brainstem respiratory neurons. A cerebral neural network model.

    Science.gov (United States)

    Poon, C S

    1991-01-01

    A recent model of respiratory control suggested that the steady-state respiratory responses to CO2 and exercise may be governed by an optimal control law in the brainstem respiratory neurons. It was not certain, however, whether such complex optimization behavior could be accomplished by a realistic biological neural network. To test this hypothesis, we developed a hybrid computer-neural model in which the dynamics of the lung, brain and other tissue compartments were simulated on a digital computer. Mimicking the "controller" was a human subject who pedalled on a bicycle with varying speed (analog of ventilatory output) with a view to minimize an analog signal of the total cost of breathing (chemical and mechanical) which was computed interactively and displayed on an oscilloscope. In this manner, the visuomotor cortex served as a proxy (homolog) of the brainstem respiratory neurons in the model. Results in 4 subjects showed a linear steady-state ventilatory CO2 response to arterial PCO2 during simulated CO2 inhalation and a nearly isocapnic steady-state response during simulated exercise. Thus, neural optimization is a plausible mechanism for respiratory control during exercise and can be achieved by a neural network with cognitive computational ability without the need for an exercise stimulus.

  12. Decentralized Consistent Network Updates in SDN with ez-Segway

    KAUST Repository

    Nguyen, Thanh Dang

    2017-03-06

    We present ez-Segway, a decentralized mechanism to consistently and quickly update the network state while preventing forwarding anomalies (loops and black-holes) and avoiding link congestion. In our design, the centralized SDN controller only pre-computes information needed by the switches during the update execution. This information is distributed to the switches, which use partial knowledge and direct message passing to efficiently realize the update. This separation of concerns has the key benefit of improving update performance as the communication and computation bottlenecks at the controller are removed. Our evaluations via network emulations and large-scale simulations demonstrate the efficiency of ez-Segway, which compared to a centralized approach, improves network update times by up to 45% and 57% at the median and the 99th percentile, respectively. A deployment of a system prototype in a real OpenFlow switch and an implementation in P4 demonstrate the feasibility and low overhead of implementing simple network update functionality within switches.

  13. Relationship between neuronal network architecture and naming performance in temporal lobe epilepsy: A connectome based approach using machine learning.

    Science.gov (United States)

    Munsell, B C; Wu, G; Fridriksson, J; Thayer, K; Mofrad, N; Desisto, N; Shen, D; Bonilha, L

    2017-09-09

    Impaired confrontation naming is a common symptom of temporal lobe epilepsy (TLE). The neurobiological mechanisms underlying this impairment are poorly understood but may indicate a structural disorganization of broadly distributed neuronal networks that support naming ability. Importantly, naming is frequently impaired in other neurological disorders and by contrasting the neuronal structures supporting naming in TLE with other diseases, it will become possible to elucidate the common systems supporting naming. We aimed to evaluate the neuronal networks that support naming in TLE by using a machine learning algorithm intended to predict naming performance in subjects with medication refractory TLE using only the structural brain connectome reconstructed from diffusion tensor imaging. A connectome-based prediction framework was developed using network properties from anatomically defined brain regions across the entire brain, which were used in a multi-task machine learning algorithm followed by support vector regression. Nodal eigenvector centrality, a measure of regional network integration, predicted approximately 60% of the variance in naming. The nodes with the highest regression weight were bilaterally distributed among perilimbic sub-networks involving mainly the medial and lateral temporal lobe regions. In the context of emerging evidence regarding the role of large structural networks that support language processing, our results suggest intact naming relies on the integration of sub-networks, as opposed to being dependent on isolated brain areas. In the case of TLE, these sub-networks may be disproportionately indicative of naming processes that are dependent on semantic integration from memory and lexical retrieval, as opposed to multi-modal perception or motor speech production. Copyright © 2017. Published by Elsevier Inc.
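
    The core of the prediction pipeline described above can be sketched as follows, under simplifying assumptions: nodal eigenvector centrality is computed from each subject's (here synthetic) connectome and fed into support vector regression with cross-validation. The multi-task feature-selection stage of the paper is omitted, and the "naming score" is a toy linear function of the centralities.

```python
# Conceptual sketch only: eigenvector-centrality features from synthetic connectomes
# regressed against a toy score with support vector regression.  The paper's multi-task
# feature selection and real diffusion-MRI connectomes are not reproduced here.
import numpy as np
import networkx as nx
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(1)
n_subjects, n_nodes = 40, 30
beta = rng.standard_normal(n_nodes)              # toy ground-truth weights for the score

X, y = [], []
for _ in range(n_subjects):
    W = rng.random((n_nodes, n_nodes))           # synthetic weighted connectome
    W = (W + W.T) / 2.0
    np.fill_diagonal(W, 0.0)
    G = nx.from_numpy_array(W)
    cent = nx.eigenvector_centrality_numpy(G, weight="weight")
    feats = np.array([cent[i] for i in range(n_nodes)])
    X.append(feats)
    y.append(100.0 * feats @ beta + rng.standard_normal())   # synthetic "naming score"

X, y = np.array(X), np.array(y)
pred = cross_val_predict(SVR(kernel="linear", C=1.0), X, y, cv=5)
print("cross-validated R^2:", round(1.0 - np.var(y - pred) / np.var(y), 3))
```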

  14. The caudate : a key node in the neuronal network imbalance of insomnia?

    NARCIS (Netherlands)

    Stoffers, Diederick; Altena, Ellemarije; van der Werf, Ysbrand D; Sanz-Arigita, Ernesto J; Voorn, Thom A; Astill, Rebecca G; Strijers, Rob L M; Waterman, Dé; Van Someren, Eus J W

    Insomnia is prevalent, severe and partially heritable. Unfortunately, its neuronal correlates remain enigmatic, hampering the development of mechanistic models and rational treatments. Consistently reported impairments concern fragmented sleep, hyper-arousal and executive dysfunction. Because

  15. The caudate: a key node in the neuronal network imbalance of insomnia?

    NARCIS (Netherlands)

    Stoffers, D.; Altena, E.; van der Werf, Y.D.; Sanz-Arigita, E.J.; Voorn, T.A.; Astill, R.G.; Strijers, R.L.M.; Waterman, D.; van Someren, E.J.W.

    2014-01-01

    Insomnia is prevalent, severe and partially heritable. Unfortunately, its neuronal correlates remain enigmatic, hampering the development of mechanistic models and rational treatments. Consistently reported impairments concern fragmented sleep, hyper-arousal and executive dysfunction. Because

  16. SERS investigations and electrical recording of neuronal networks with three-dimensional plasmonic nanoantennas (Conference Presentation)

    Science.gov (United States)

    De Angelis, Francesco

    2017-06-01

    SERS investigations and electrical recording of neuronal networks with three-dimensional plasmonic nanoantennas Michele Dipalo, Valeria Caprettini, Anbrea Barbaglia, Laura Lovato, Francesco De Angelis e-mail: francesco.deangelis@iit.it Istituto Italiano di Tecnologia, Via Morego 30, 16163, Genova Biological systems are analysed mainly by optical, chemical or electrical methods. Normally each of these techniques provides only partial information about the environment, while combined investigations could reveal new phenomena occurring in complex systems such as in-vitro neuronal networks. Aiming at the merging of optical and electrical investigations of biological samples, we introduced three-dimensional plasmonic nanoantennas on CMOS-based electrical sensors [1]. The overall device is then capable of enhanced Raman Analysis of cultured cells combined with electrical recording of neuronal activity. The Raman measurements show a much higher sensitivity when performed on the tip of the nanoantenna in respect to the flat substrate [2]; this effect is a combination of the high plasmonic field enhancement and of the tight adhesion of cells on the nanoantenna tip. Furthermore, when plasmonic opto-poration is exploited [3] the 3D nanoelectrodes are able to penetrate through the cell membrane thus accessing the intracellular environment. Our latest results (unpublished) show that the technique is completely non-invasive and solves many problems related to state-of-the-art intracellular recording approaches on large neuronal networks. This research received funding from ERC-IDEAS Program: "Neuro-Plasmonics" [Grant n. 616213]. References: [1] M. Dipalo, G. C. Messina, H. Amin, R. La Rocca, V. Shalabaeva, A. Simi, A. Maccione, P. Zilio, L. Berdondini, F. De Angelis, Nanoscale 2015, 7, 3703. [2] R. La Rocca, G. C. Messina, M. Dipalo, V. Shalabaeva, F. De Angelis, Small 2015, 11, 4632. [3] G. C. Messina et al., Spatially, Temporally, and Quantitatively Controlled Delivery of

  17. Networks of VTA Neurons Encode Real-Time Information about Uncertain Numbers of Actions Executed to Earn a Reward

    Directory of Open Access Journals (Sweden)

    Jesse Wood

    2017-08-01

    Full Text Available Multiple and unpredictable numbers of actions are often required to achieve a goal. In order to organize behavior and allocate effort so that optimal behavioral policies can be selected, it is necessary to continually monitor ongoing actions. Real-time processing of information related to actions and outcomes is typically assigned to the prefrontal cortex and basal ganglia, but also depends on midbrain regions, especially the ventral tegmental area (VTA. We were interested in how individual VTA neurons, as well as networks within the VTA, encode salient events when an unpredictable number of serial actions are required to obtain a reward. We recorded from ensembles of putative dopamine and non-dopamine neurons in the VTA as animals performed multiple cued trials in a recording session where, in each trial, serial actions were randomly rewarded. While averaging population activity did not reveal a response pattern, we observed that different neurons were selectively tuned to low, medium, or high numbered actions in a trial. This preferential tuning of putative dopamine and non-dopamine VTA neurons to different subsets of actions in a trial allowed information about binned action number to be decoded from the ensemble activity. At the network level, tuning curve similarity was positively associated with action-evoked noise correlations, suggesting that action number selectivity reflects functional connectivity within these networks. Analysis of phasic responses to cue and reward revealed that the requirement to execute multiple and uncertain numbers of actions weakens both cue-evoked responses and cue-reward response correlation. The functional connectivity and ensemble coding scheme that we observe here may allow VTA neurons to cooperatively provide a real-time account of ongoing behavior. These computations may be critical to cognitive and motivational functions that have long been associated with VTA dopamine neurons.

  18. Stochastic multiresonance in coupled excitable FHN neurons

    Science.gov (United States)

    Li, Huiyan; Sun, Xiaojuan; Xiao, Jinghua

    2018-04-01

    In this paper, effects of noise on Watts-Strogatz small-world neuronal networks, which are stimulated by a subthreshold signal, have been investigated. With the numerical simulations, it is surprisingly found that there exist several optimal noise intensities at which the subthreshold signal can be detected efficiently. This indicates the occurrence of stochastic multiresonance in the studied neuronal networks. Moreover, it is revealed that the occurrence of stochastic multiresonance has close relationship with the period of the subthreshold signal, Te, and the noise-induced mean period of the neuronal networks, T0. In detail, we find that noise could induce the neuronal networks to generate stochastic resonance M times if Te is not very large and falls into the interval (M × T0, (M + 1) × T0), with M being a positive integer. In real neuronal system, subthreshold signal detection is very meaningful. Thus, the obtained results in this paper could give some important implications on detecting subthreshold signal and propagating neuronal information in neuronal systems.

  19. Overexpression of cypin alters dendrite morphology, single neuron activity, and network properties via distinct mechanisms

    Science.gov (United States)

    Rodríguez, Ana R.; O'Neill, Kate M.; Swiatkowski, Przemyslaw; Patel, Mihir V.; Firestein, Bonnie L.

    2018-02-01

    Objective. This study investigates the effect that overexpression of cytosolic PSD-95 interactor (cypin), a regulator of synaptic PSD-95 protein localization and a core regulator of dendrite branching, exerts on the electrical activity of rat hippocampal neurons and networks. Approach. We cultured rat hippocampal neurons and used lipid-mediated transfection and lentiviral gene transfer to achieve high levels of cypin or cypin mutant (cypinΔPDZ PSD-95 non-binding) expression cellularly and network-wide, respectively. Main results. Our analysis revealed that although overexpression of cypin and cypinΔPDZ increase dendrite numbers and decrease spine density, cypin and cypinΔPDZ distinctly regulate neuronal activity. At the single cell level, cypin promotes decreases in bursting activity while cypinΔPDZ reduces sEPSC frequency and further decreases bursting compared to cypin. At the network level, by using the Fano factor as a measure of spike count variability, cypin overexpression results in an increase in variability of spike count, and this effect is abolished when cypin cannot bind PSD-95. This variability is also dependent on baseline activity levels and on mean spike rate over time. Finally, our spike sorting data show that overexpression of cypin results in a more complex distribution of spike waveforms and that binding to PSD-95 is essential for this complexity. Significance. Our data suggest that dendrite morphology does not play a major role in cypin action on electrical activity.
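
    The Fano factor used above as the network-level variability measure is simply the variance-to-mean ratio of spike counts in fixed windows; the short sketch below computes it for a synthetic regular train and a synthetic Poisson train (the window length and firing rates are arbitrary choices, not values from the study).

```python
# Minimal sketch: Fano factor = var/mean of spike counts in fixed windows,
# computed for a synthetic clock-like train and a synthetic Poisson train.
import numpy as np

rng = np.random.default_rng(2)

def fano_factor(spike_times, t_max, window):
    """Variance-to-mean ratio of spike counts in consecutive windows."""
    counts, _ = np.histogram(spike_times, bins=np.arange(0.0, t_max + window, window))
    return counts.var() / counts.mean()

t_max = 600.0                                                       # 10 minutes (s)
regular = np.arange(0.0, t_max, 0.2)                                # clock-like 5 Hz firing
poisson = np.sort(rng.uniform(0.0, t_max, size=int(5 * t_max)))     # 5 Hz Poisson firing

print("regular train FF :", round(fano_factor(regular, t_max, 1.0), 3))   # well below 1
print("Poisson train FF :", round(fano_factor(poisson, t_max, 1.0), 3))   # close to 1
```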

  20. Inference of topology and the nature of synapses, and the flow of information in neuronal networks

    Science.gov (United States)

    Borges, F. S.; Lameu, E. L.; Iarosz, K. C.; Protachevicz, P. R.; Caldas, I. L.; Viana, R. L.; Macau, E. E. N.; Batista, A. M.; Baptista, M. S.

    2018-02-01

    The characterization of neuronal connectivity is one of the most important matters in neuroscience. In this work, we show that a recently proposed informational quantity, the causal mutual information, employed with an appropriate methodology, can be used not only to correctly infer the direction of the underlying physical synapses, but also to identify their excitatory or inhibitory nature, considering easy to handle and measure bivariate time series. The success of our approach relies on a surprising property found in neuronal networks by which nonadjacent neurons do "understand" each other (positive mutual information), however, this exchange of information is not capable of causing effect (zero transfer entropy). Remarkably, inhibitory connections, responsible for enhancing synchronization, transfer more information than excitatory connections, known to enhance entropy in the network. We also demonstrate that our methodology can be used to correctly infer directionality of synapses even in the presence of dynamic and observational Gaussian noise, and is also successful in providing the effective directionality of intermodular connectivity, when only mean fields can be measured.
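
    A simplified stand-in for the directionality analysis described above is the plug-in transfer-entropy estimator sketched below for binary (binned) spike trains with one-step histories; the paper's causal mutual information differs in detail, and the two test sequences are synthetic.

```python
# Simplified sketch: histogram ("plug-in") transfer-entropy estimator for binary
# binned spike trains with history length 1; not the paper's causal mutual information.
import numpy as np
from collections import Counter

def transfer_entropy(x, y):
    """TE(x -> y) in bits for binary sequences, history length 1."""
    triples = Counter(zip(y[1:], y[:-1], x[:-1]))     # (y_{t+1}, y_t, x_t)
    pairs_yy = Counter(zip(y[1:], y[:-1]))            # (y_{t+1}, y_t)
    pairs_yx = Counter(zip(y[:-1], x[:-1]))           # (y_t, x_t)
    singles_y = Counter(y[:-1])                       # y_t
    n = len(y) - 1
    te = 0.0
    for (y1, y0, x0), c in triples.items():
        p_joint = c / n
        p_y1_given_yx = c / pairs_yx[(y0, x0)]
        p_y1_given_y = pairs_yy[(y1, y0)] / singles_y[y0]
        te += p_joint * np.log2(p_y1_given_yx / p_y1_given_y)
    return te

rng = np.random.default_rng(3)
x = rng.integers(0, 2, 20000)
y = np.roll(x, 1)                                     # y copies x with a one-bin lag
y[rng.random(y.size) < 0.2] ^= 1                      # add noise to the target train
print("TE(x->y):", round(transfer_entropy(x, y), 3))  # clearly positive
print("TE(y->x):", round(transfer_entropy(y, x), 3))  # near zero
```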

  1. Mechanisms and neuronal networks involved in reactive and proactive cognitive control of interference in working memory.

    Science.gov (United States)

    Irlbacher, Kerstin; Kraft, Antje; Kehrer, Stefanie; Brandt, Stephan A

    2014-10-01

    Cognitive control can be reactive or proactive in nature. Reactive control mechanisms, which support the resolution of interference, start after its onset. Conversely, proactive control involves the anticipation and prevention of interference prior to its occurrence. The interrelation of both types of cognitive control is currently under debate: Are they mediated by different neuronal networks? Or are there neuronal structures that have the potential to act in a proactive as well as in a reactive manner? This review illustrates the way in which integrating knowledge gathered from behavioral studies, functional imaging, and human electroencephalography proves useful in answering these questions. We focus on studies that investigate interference resolution at the level of working memory representations. In summary, different mechanisms are instrumental in supporting reactive and proactive control. Distinct neuronal networks are involved, though some brain regions, especially pre-SMA, possess functions that are relevant to both control modes. Therefore, activation of these brain areas could be observed in reactive, as well as proactive control, but at different times during information processing. Copyright © 2014 Elsevier Ltd. All rights reserved.

  2. Spatially structured oscillations in a two-dimensional excitatory neuronal network with synaptic depression

    KAUST Repository

    Kilpatrick, Zachary P.

    2009-10-29

    We study the spatiotemporal dynamics of a two-dimensional excitatory neuronal network with synaptic depression. Coupling between populations of neurons is taken to be nonlocal, while depression is taken to be local and presynaptic. We show that the network supports a wide range of spatially structured oscillations, which are suggestive of phenomena seen in cortical slice experiments and in vivo. The particular form of the oscillations depends on initial conditions and the level of background noise. Given an initial, spatially localized stimulus, activity evolves to a spatially localized oscillating core that periodically emits target waves. Low levels of noise can spontaneously generate several pockets of oscillatory activity that interact via their target patterns. Periodic activity in space can also organize into spiral waves, provided that there is some source of rotational symmetry breaking due to external stimuli or noise. In the high gain limit, no oscillatory behavior exists, but a transient stimulus can lead to a single, outward propagating target wave. © Springer Science + Business Media, LLC 2009.

  3. Spatially structured oscillations in a two-dimensional excitatory neuronal network with synaptic depression

    KAUST Repository

    Kilpatrick, Zachary P.; Bressloff, Paul C.

    2009-01-01

    We study the spatiotemporal dynamics of a two-dimensional excitatory neuronal network with synaptic depression. Coupling between populations of neurons is taken to be nonlocal, while depression is taken to be local and presynaptic. We show that the network supports a wide range of spatially structured oscillations, which are suggestive of phenomena seen in cortical slice experiments and in vivo. The particular form of the oscillations depends on initial conditions and the level of background noise. Given an initial, spatially localized stimulus, activity evolves to a spatially localized oscillating core that periodically emits target waves. Low levels of noise can spontaneously generate several pockets of oscillatory activity that interact via their target patterns. Periodic activity in space can also organize into spiral waves, provided that there is some source of rotational symmetry breaking due to external stimuli or noise. In the high gain limit, no oscillatory behavior exists, but a transient stimulus can lead to a single, outward propagating target wave. © Springer Science + Business Media, LLC 2009.

  4. Neuronal avalanches and learning

    Energy Technology Data Exchange (ETDEWEB)

    Arcangelis, Lucilla de, E-mail: dearcangelis@na.infn.it [Department of Information Engineering and CNISM, Second University of Naples, 81031 Aversa (Italy)

    2011-05-01

    Networks of living neurons represent one of the most fascinating systems of biology. If the physical and chemical mechanisms at the basis of the functioning of a single neuron are quite well understood, the collective behaviour of a system of many neurons is an extremely intriguing subject. Crucial ingredient of this complex behaviour is the plasticity property of the network, namely the capacity to adapt and evolve depending on the level of activity. This plastic ability is believed, nowadays, to be at the basis of learning and memory in real brains. Spontaneous neuronal activity has recently shown features in common to other complex systems. Experimental data have, in fact, shown that electrical information propagates in a cortex slice via an avalanche mode. These avalanches are characterized by a power law distribution for the size and duration, features found in other problems in the context of the physics of complex systems and successful models have been developed to describe their behaviour. In this contribution we discuss a statistical mechanical model for the complex activity in a neuronal network. The model implements the main physiological properties of living neurons and is able to reproduce recent experimental results. Then, we discuss the learning abilities of this neuronal network. Learning occurs via plastic adaptation of synaptic strengths by a non-uniform negative feedback mechanism. The system is able to learn all the tested rules, in particular the exclusive OR (XOR) and a random rule with three inputs. The learning dynamics exhibits universal features as function of the strength of plastic adaptation. Any rule could be learned provided that the plastic adaptation is sufficiently slow.
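
    The power-law statistics mentioned above are commonly quantified with a maximum-likelihood exponent fit; the sketch below uses the standard Clauset-style estimator (continuous approximation, s_min = 1) on synthetic avalanche sizes drawn with exponent 1.5, and is not the statistical-mechanical model discussed in this record.

```python
# Illustrative sketch: discrete power-law sampling of avalanche sizes plus a
# Clauset-style maximum-likelihood exponent estimate (continuous approximation).
import numpy as np

rng = np.random.default_rng(4)

def sample_power_law(alpha, s_max, n):
    """Draw integer avalanche sizes from p(s) ~ s^-alpha on [1, s_max]."""
    s = np.arange(1, s_max + 1)
    p = s.astype(float) ** -alpha
    return rng.choice(s, size=n, p=p / p.sum())

def mle_exponent(sizes, s_min=1):
    """alpha_hat = 1 + n / sum(log(s / (s_min - 0.5))) for sizes >= s_min."""
    s = sizes[sizes >= s_min]
    return 1.0 + s.size / np.log(s / (s_min - 0.5)).sum()

sizes = sample_power_law(1.5, 10_000, 50_000)
print("fitted avalanche-size exponent:", round(mle_exponent(sizes), 3))   # roughly 1.5
```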

  5. Neuronal avalanches and learning

    International Nuclear Information System (INIS)

    Arcangelis, Lucilla de

    2011-01-01

    Networks of living neurons represent one of the most fascinating systems of biology. If the physical and chemical mechanisms at the basis of the functioning of a single neuron are quite well understood, the collective behaviour of a system of many neurons is an extremely intriguing subject. Crucial ingredient of this complex behaviour is the plasticity property of the network, namely the capacity to adapt and evolve depending on the level of activity. This plastic ability is believed, nowadays, to be at the basis of learning and memory in real brains. Spontaneous neuronal activity has recently shown features in common to other complex systems. Experimental data have, in fact, shown that electrical information propagates in a cortex slice via an avalanche mode. These avalanches are characterized by a power law distribution for the size and duration, features found in other problems in the context of the physics of complex systems and successful models have been developed to describe their behaviour. In this contribution we discuss a statistical mechanical model for the complex activity in a neuronal network. The model implements the main physiological properties of living neurons and is able to reproduce recent experimental results. Then, we discuss the learning abilities of this neuronal network. Learning occurs via plastic adaptation of synaptic strengths by a non-uniform negative feedback mechanism. The system is able to learn all the tested rules, in particular the exclusive OR (XOR) and a random rule with three inputs. The learning dynamics exhibits universal features as function of the strength of plastic adaptation. Any rule could be learned provided that the plastic adaptation is sufficiently slow.

  6. Anxiogenic drug administration and elevated plus-maze exposure in rats activate populations of relaxin-3 neurons in the nucleus incertus and serotonergic neurons in the dorsal raphe nucleus.

    Science.gov (United States)

    Lawther, A J; Clissold, M L; Ma, S; Kent, S; Lowry, C A; Gundlach, A L; Hale, M W

    2015-09-10

    Anxiety is a complex and adaptive emotional state controlled by a distributed and interconnected network of brain regions, and disruption of these networks is thought to give rise to the behavioral symptoms associated with anxiety disorders in humans. The dorsal raphe nucleus (DR), which contains the majority of forebrain-projecting serotonergic neurons, is implicated in the control of anxiety states and anxiety-related behavior via neuromodulatory effects on these networks. Relaxin-3 is the native neuropeptide ligand for the Gi/o-protein-coupled receptor, RXFP3, and is primarily expressed in the nucleus incertus (NI), a tegmental region immediately caudal to the DR. RXFP3 activation has been shown to modulate anxiety-related behavior in rodents, and RXFP3 mRNA is expressed in the DR. In this study, we examined the response of relaxin-3-containing neurons in the NI and serotonergic neurons in the DR following pharmacologically induced anxiety and exposure to an aversive environment. We administered the anxiogenic drug FG-7142 or vehicle to adult male Wistar rats and, 30 min later, exposed them to either the elevated plus-maze or home cage control conditions. Immunohistochemical detection of c-Fos was used to determine activation of serotonergic neurons in the DR and relaxin-3 neurons in the NI, measured 2h following drug injection. Analysis revealed that FG-7142 administration and exposure to the elevated plus-maze are both associated with an increase in c-Fos expression in relaxin-3-containing neurons in the NI and in serotonergic neurons in dorsal and ventrolateral regions of the DR. These data are consistent with the hypothesis that relaxin-3 systems in the NI and serotonin systems in the DR interact to form part of a network involved in the control of anxiety-related behavior. Copyright © 2015 IBRO. Published by Elsevier Ltd. All rights reserved.

  7. Criticality in Neuronal Networks

    Science.gov (United States)

    Friedman, Nir; Ito, Shinya; Brinkman, Braden A. W.; Shimono, Masanori; Deville, R. E. Lee; Beggs, John M.; Dahmen, Karin A.; Butler, Tom C.

    2012-02-01

    In recent years, experiments detecting the electrical firing patterns in slices of in vitro brain tissue have been analyzed to suggest the presence of scale invariance and possibly criticality in the brain. Much of the work done however has been limited in two ways: 1) the data collected is from local field potentials that do not represent the firing of individual neurons; 2) the analysis has been primarily limited to histograms. In our work we examine data based on the firing of individual neurons (spike data), and greatly extend the analysis by considering shape collapse and exponents. Our results strongly suggest that the brain operates near a tuned critical point of a highly distinctive universality class.

  8. Consistent Steering System using SCTP for Bluetooth Scatternet Sensor Network

    Science.gov (United States)

    Dhaya, R.; Sadasivam, V.; Kanthavel, R.

    2012-12-01

    Wireless communication is a flexible and mobile way to convey information from source to destination, and Bluetooth is a wireless technology suited to short distances. On the other hand, a wireless sensor network (WSN) consists of spatially distributed autonomous sensors that cooperatively monitor physical or environmental conditions, such as temperature, sound, vibration, pressure, motion or pollutants. Using the Bluetooth piconet wireless technique in sensor nodes creates limitations in network depth and placement. The introduction of the Scatternet removes these network restrictions, but at the cost of reliability in data transmission, and as the depth of the network increases, routing becomes more difficult. No authors have so far focused on the reliability of routing in Scatternet sensor networks. This paper illustrates the proposed system architecture and routing mechanism to increase reliability. Another objective is to use a reliable transport protocol that exploits the multi-homing concept and supports multiple streams to prevent head-of-line blocking. The results show that the Scatternet sensor network has lower packet loss than the existing system, even in a congested environment, making it suitable for surveillance applications.

  9. Neuronal survival in the brain: neuron type-specific mechanisms

    DEFF Research Database (Denmark)

    Pfisterer, Ulrich Gottfried; Khodosevich, Konstantin

    2017-01-01

    Neurogenic regions of mammalian brain produce many more neurons than will eventually survive and reach a mature stage. Developmental cell death affects both embryonically produced immature neurons and those immature neurons that are generated in regions of adult neurogenesis. Removal of substantial... numbers of neurons that are not yet completely integrated into the local circuits helps to ensure that maturation and homeostatic function of neuronal networks in the brain proceed correctly. External signals from brain microenvironment together with intrinsic signaling pathways determine whether... for survival in a certain brain region. This review focuses on how immature neurons survive during normal and impaired brain development, both in the embryonic/neonatal brain and in brain regions associated with adult neurogenesis, and emphasizes neuron type-specific mechanisms that help to survive for various...

  10. Engineering connectivity by multiscale micropatterning of individual populations of neurons.

    Science.gov (United States)

    Albers, Jonas; Toma, Koji; Offenhäusser, Andreas

    2015-02-01

    Functional networks are the basis of information processing in the central nervous system. Essential for their formation are guided neuronal growth as well as controlled connectivity and information flow. The basis of neuronal development is generated by guiding cues and geometric constraints. To investigate the neuronal growth and connectivity of adjacent neuronal networks, two-dimensional protein patterns were created. A mixture of poly-L-lysine and laminin was transferred onto a silanized glass surface by microcontact printing. The structures were populated with dissociated primary cortical embryonic rat neurons. Triangular structures with diverse opening angles, height, and design were chosen as two-dimensional structures to allow network formation with constricted gateways. Neuronal development was observed by immunohistochemistry to pursue the influence of the chosen structures on the neuronal outgrowth. Neurons were stained for MAP2, while poly-L-lysine was FITC labeled. With this study we present an easy-to-use technique to engineer two-dimensional networks in vitro with defined gateways. The presented micropatterning method is used to generate daisy-chained neuronal networks with predefined connectivity. Signal propagation among geometrically constrained networks can easily be monitored by calcium-sensitive dyes, providing insights into network communication in vitro. Copyright © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  11. Prediction of rat behavior outcomes in memory tasks using functional connections among neurons.

    Directory of Open Access Journals (Sweden)

    Hu Lu

    Full Text Available BACKGROUND: Analyzing the neuronal organizational structures and studying the changes in the behavior of the organism is key to understanding cognitive functions of the brain. Although some studies have indicated that spatiotemporal firing patterns of neuronal populations have a certain relationship with the behavioral responses, the issues of whether there are any relationships between the functional networks comprised of these cortical neurons and behavioral tasks and whether it is possible to take advantage of these networks to predict correct and incorrect outcomes of single trials of animals are still unresolved. METHODOLOGY/PRINCIPAL FINDINGS: This paper presents a new method of analyzing the structures of whole-recorded neuronal functional networks (WNFNs and local neuronal circuit groups (LNCGs. The activity of these neurons was recorded in several rats. The rats performed two different behavioral tasks, the Y-maze task and the U-maze task. Using the results of the assessment of the WNFNs and LNCGs, this paper describes a realization procedure for predicting the behavioral outcomes of single trials. The methodology consists of four main parts: construction of WNFNs from recorded neuronal spike trains, partitioning the WNFNs into the optimal LNCGs using social community analysis, unsupervised clustering of all trials from each dataset into two different clusters, and predicting the behavioral outcomes of single trials. The results show that WNFNs and LNCGs correlate with the behavior of the animal. The U-maze datasets show higher accuracy for unsupervised clustering results than those from the Y-maze task, and these datasets can be used to predict behavioral responses effectively. CONCLUSIONS/SIGNIFICANCE: The results of the present study suggest that a methodology proposed in this paper is suitable for analysis of the characteristics of neuronal functional networks and the prediction of rat behavior. These types of structures in cortical

  12. Optimal control of directional deep brain stimulation in the parkinsonian neuronal network

    Science.gov (United States)

    Fan, Denggui; Wang, Zhihui; Wang, Qingyun

    2016-07-01

    The effect of conventional deep brain stimulation (DBS) on debilitating symptoms of Parkinson's disease can be limited because it can only yield the spherical field. And, some side effects are clearly induced with influencing their adjacent ganglia. Recent experimental evidence for patients with Parkinson's disease has shown that a novel DBS electrode with 32 independent stimulation source contacts can effectively optimize the clinical therapy by enlarging the therapeutic windows, when it is applied on the subthalamic nucleus (STN). This is due to the selective activation in clusters of various stimulation contacts which can be steered directionally and accurately on the targeted regions of interest. In addition, because of the serious damage to the neural tissues, the charge-unbalanced stimulation is not typically indicated and the real DBS utilizes charge-balanced bi-phasic (CBBP) pulses. Inspired by this, we computationally investigate the optimal control of directional CBBP-DBS from the proposed parkinsonian neuronal network of basal ganglia-thalamocortical circuit. By appropriately tuning stimulation for different neuronal populations, it can be found that directional steering CBBP-DBS paradigms are superior to the spherical case in improving parkinsonian dynamical properties including the synchronization of neuronal populations and the reliability of thalamus relaying the information from cortex, which is in a good agreement with the physiological experiments. Furthermore, it can be found that directional steering stimulations can increase the optimal stimulation intensity of desynchronization by more than 1 mA compared to the spherical case. This is consistent with the experimental result with showing that there exists at least one steering direction that can allow increasing the threshold of side effects by 1 mA. In addition, we also simulate the local field potential (LFP) and dominant frequency (DF) of the STN neuronal population induced by the activation

  13. Beyond Critical Exponents in Neuronal Avalanches

    Science.gov (United States)

    Friedman, Nir; Butler, Tom; Deville, Robert; Beggs, John; Dahmen, Karin

    2011-03-01

    Neurons form a complex network in the brain, where they interact with one another by firing electrical signals. Neurons firing can trigger other neurons to fire, potentially causing avalanches of activity in the network. In many cases these avalanches have been found to be scale independent, similar to critical phenomena in diverse systems such as magnets and earthquakes. We discuss models for neuronal activity that allow for the extraction of testable, statistical predictions. We compare these models to experimental results, and go beyond critical exponents.

  14. Neuronal synchrony: peculiarity and generality.

    Science.gov (United States)

    Nowotny, Thomas; Huerta, Ramon; Rabinovich, Mikhail I

    2008-09-01

    Synchronization in neuronal systems is a new and intriguing application of dynamical systems theory. Why are neuronal systems different as a subject for synchronization? (1) Neurons in themselves are multidimensional nonlinear systems that are able to exhibit a wide variety of different activity patterns. Their "dynamical repertoire" includes regular or chaotic spiking, regular or chaotic bursting, multistability, and complex transient regimes. (2) Usually, neuronal oscillations are the result of the cooperative activity of many synaptically connected neurons (a neuronal circuit). Thus, it is necessary to consider synchronization between different neuronal circuits as well. (3) The synapses that implement the coupling between neurons are also dynamical elements and their intrinsic dynamics influences the process of synchronization or entrainment significantly. In this review we will focus on four new problems: (i) the synchronization in minimal neuronal networks with plastic synapses (synchronization with activity dependent coupling), (ii) synchronization of bursts that are generated by a group of nonsymmetrically coupled inhibitory neurons (heteroclinic synchronization), (iii) the coordination of activities of two coupled neuronal networks (partial synchronization of small composite structures), and (iv) coarse grained synchronization in larger systems (synchronization on a mesoscopic scale). (c) 2008 American Institute of Physics.

  15. Mean-field analysis of orientation selectivity in inhibition-dominated networks of spiking neurons.

    Science.gov (United States)

    Sadeh, Sadra; Cardanobile, Stefano; Rotter, Stefan

    2014-01-01

    Mechanisms underlying the emergence of orientation selectivity in the primary visual cortex are highly debated. Here we study the contribution of inhibition-dominated random recurrent networks to orientation selectivity, and more generally to sensory processing. By simulating and analyzing large-scale networks of spiking neurons, we investigate tuning amplification and contrast invariance of orientation selectivity in these networks. In particular, we show how selective attenuation of the common mode and amplification of the modulation component take place in these networks. Selective attenuation of the baseline, which is governed by the exceptional eigenvalue of the connectivity matrix, removes the unspecific, redundant signal component and ensures the invariance of selectivity across different contrasts. Selective amplification of modulation, which is governed by the operating regime of the network and depends on the strength of coupling, amplifies the informative signal component and thus increases the signal-to-noise ratio. Here, we perform a mean-field analysis which accounts for this process.
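
    The spectral argument above can be illustrated with a toy random connectivity matrix: for inhibition-dominated weights, the bulk of eigenvalues stays within a circle while one "exceptional" eigenvalue, set by the common (negative) mean input, lies well outside it and governs the attenuation of the baseline mode. Network size, connection probability, and weights in the sketch are assumed toy values, not those of the study.

```python
# Toy sketch of the spectral mechanism: an inhibition-dominated random connectivity
# matrix has a bulk of eigenvalues inside a circle plus one "exceptional" eigenvalue
# near the (negative) mean row sum, the mode that attenuates the common input.
import numpy as np

rng = np.random.default_rng(5)
n, p = 1000, 0.1                     # neurons, connection probability
j_exc, g = 0.1, 6.0                  # excitatory weight and relative inhibition strength
n_exc = int(0.8 * n)                 # 80% excitatory, 20% inhibitory columns

W = np.zeros((n, n))
mask = rng.random((n, n)) < p
W[:, :n_exc][mask[:, :n_exc]] = j_exc
W[:, n_exc:][mask[:, n_exc:]] = -g * j_exc

eig = np.linalg.eigvals(W)
i_out = int(np.argmin(eig.real))     # index of the exceptional (most negative) eigenvalue
bulk = np.delete(eig, i_out)
print("bulk spectral radius          :", round(float(np.abs(bulk).max()), 2))
print("exceptional eigenvalue        :", round(float(eig[i_out].real), 2))
print("prediction n*p*j*(0.8 - 0.2*g):", round(n * p * j_exc * (0.8 - 0.2 * g), 2))
```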

  16. Generalized activity equations for spiking neural network dynamics

    Directory of Open Access Journals (Sweden)

    Michael A Buice

    2013-11-01

    Full Text Available Much progress has been made in uncovering the computational capabilities of spiking neural networks. However, spiking neurons will always be more expensive to simulate compared to rate neurons because of the inherent disparity in time scales - the spike duration time is much shorter than the inter-spike time, which is much shorter than any learning time scale. In numerical analysis, this is a classic stiff problem. Spiking neurons are also much more difficult to study analytically. One possible approach to making spiking networks more tractable is to augment mean field activity models with some information about spiking correlations. For example, such a generalized activity model could carry information about spiking rates and correlations between spikes self-consistently. Here, we will show how this can be accomplished by constructing a complete formal probabilistic description of the network and then expanding around a small parameter such as the inverse of the number of neurons in the network. The mean field theory of the system gives a rate-like description. The first order terms in the perturbation expansion keep track of covariances.

  17. Large-scale modelling of neuronal systems

    International Nuclear Information System (INIS)

    Castellani, G.; Verondini, E.; Giampieri, E.; Bersani, F.; Remondini, D.; Milanesi, L.; Zironi, I.

    2009-01-01

    The brain is, without any doubt, the most complex system of the human body. Its complexity is also due to the extremely high number of neurons, as well as the huge number of synapses connecting them. Each neuron is capable of performing complex tasks, like learning and memorizing a large class of patterns. The simulation of large neuronal systems is challenging for both technological and computational reasons, and can open new perspectives for the comprehension of brain functioning. A well-known and widely accepted model of bidirectional synaptic plasticity, the BCM model, is formulated through a differential equation approach based on bistability and selectivity properties. We have modified the BCM model, extending it from a single-neuron to a whole-network model. This new model is capable of generating interesting network topologies starting from a small number of local parameters, describing the interaction between incoming and outgoing links from each neuron. We have characterized this model in terms of complex network theory, showing how this learning rule can support network generation.

  18. Fast computation with spikes in a recurrent neural network

    International Nuclear Information System (INIS)

    Jin, Dezhe Z.; Seung, H. Sebastian

    2002-01-01

    Neural networks with recurrent connections are sometimes regarded as too slow at computation to serve as models of the brain. Here we analytically study a counterexample, a network consisting of N integrate-and-fire neurons with self excitation, all-to-all inhibition, instantaneous synaptic coupling, and constant external driving inputs. When the inhibition and/or excitation are large enough, the network performs a winner-take-all computation for all possible external inputs and initial states of the network. The computation is done very quickly: As soon as the winner spikes once, the computation is completed since no other neurons will spike. For some initial states, the winner is the first neuron to spike, and the computation is done at the first spike of the network. In general, there are M potential winners, corresponding to the top M external inputs. When the external inputs are close in magnitude, M tends to be larger. If M>1, the selection of the actual winner is strongly influenced by the initial states. If a special relation between the excitation and inhibition is satisfied, the network always selects the neuron with the maximum external input as the winner
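
    A simplified simulation of the winner-take-all behaviour described above is sketched below: leaky integrate-and-fire neurons with constant supra-threshold drives, instantaneous self-excitation, and all-to-all inhibition applied at spike times. Parameter values are illustrative assumptions; with them, the inhibition delivered by the first neuron to spike keeps every other neuron below threshold.

```python
# Simplified sketch of a winner-take-all network of integrate-and-fire neurons with
# constant drives, instantaneous self-excitation and all-to-all inhibition at spike times.
import numpy as np

rng = np.random.default_rng(6)
n, dt, t_max = 10, 1e-4, 1.0
tau, v_th, v_reset = 0.02, 1.0, 0.0
w_self, w_inh = 0.6, 1.5                       # instantaneous excitation / inhibition

drive = 1.2 + 0.3 * rng.random(n)              # constant external inputs (above threshold)
v = 0.5 * rng.random(n)                        # random sub-threshold initial potentials
spikes = [[] for _ in range(n)]

for step in range(int(t_max / dt)):
    v += dt * (drive - v) / tau                # leaky integration toward the drive
    fired = np.where(v >= v_th)[0]
    if fired.size:
        for i in fired:
            spikes[i].append(step * dt)
        v -= w_inh * fired.size                # all-to-all inhibition from each spike
        v[fired] = v_reset + w_self            # reset spikers and apply self-excitation

counts = [len(s) for s in spikes]
print("spike counts per neuron  :", counts)
print("neuron with largest input:", int(np.argmax(drive)))
print("neurons that ever spiked :", [i for i, c in enumerate(counts) if c > 0])
```

    Note that which neuron wins depends on both the constant inputs and the initial membrane potentials, consistent with the dependence on initial states described in the abstract.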

  19. Network-state modulation of power-law frequency-scaling in visual cortical neurons.

    Science.gov (United States)

    El Boustani, Sami; Marre, Olivier; Béhuret, Sébastien; Baudot, Pierre; Yger, Pierre; Bal, Thierry; Destexhe, Alain; Frégnac, Yves

    2009-09-01

    Various types of neural-based signals, such as EEG, local field potentials and intracellular synaptic potentials, integrate multiple sources of activity distributed across large assemblies. They have in common a power-law frequency-scaling structure at high frequencies, but it is still unclear whether this scaling property is dominated by intrinsic neuronal properties or by network activity. The latter case is particularly interesting because if frequency-scaling reflects the network state it could be used to characterize the functional impact of the connectivity. In intracellularly recorded neurons of cat primary visual cortex in vivo, the power spectral density of V(m) activity displays a power-law structure at high frequencies with a fractional scaling exponent. We show that this exponent is not constant, but depends on the visual statistics used to drive the network. To investigate the determinants of this frequency-scaling, we considered a generic recurrent model of cortex receiving a retinotopically organized external input. Similarly to the in vivo case, our in computo simulations show that the scaling exponent reflects the correlation level imposed in the input. This systematic dependence was also replicated at the single cell level, by controlling independently, in a parametric way, the strength and the temporal decay of the pairwise correlation between presynaptic inputs. This last model was implemented in vitro by imposing the correlation control in artificial presynaptic spike trains through dynamic-clamp techniques. These in vitro manipulations induced a modulation of the scaling exponent, similar to that observed in vivo and predicted in computo. We conclude that the frequency-scaling exponent of the V(m) reflects stimulus-driven correlations in the cortical network activity. Therefore, we propose that the scaling exponent could be used to read-out the "effective" connectivity responsible for the dynamical signature of the population signals measured
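
    Estimating such a frequency-scaling exponent typically amounts to fitting a line to the log-log power spectral density over a high-frequency band; the sketch below does this for a synthetic Ornstein-Uhlenbeck trace (whose exponent is roughly 2) as a stand-in for the membrane potential, with the fitting band chosen arbitrarily rather than taken from the study.

```python
# Rough sketch: estimate the high-frequency scaling exponent of a power spectrum by a
# linear fit of log-PSD vs log-frequency.  A synthetic Ornstein-Uhlenbeck trace
# (PSD ~ 1/f^2 above its corner frequency) stands in for the membrane potential.
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(7)
fs, dur, tau = 1000.0, 200.0, 0.02          # sampling rate (Hz), duration (s), OU time constant (s)
x = np.zeros(int(fs * dur))
for t in range(1, x.size):                  # Euler-Maruyama integration of the OU process
    x[t] = x[t - 1] - (x[t - 1] / tau) / fs + rng.standard_normal() / np.sqrt(fs)

f, pxx = welch(x, fs=fs, nperseg=4096)
band = (f >= 75) & (f <= 200)               # high-frequency fitting band (arbitrary choice)
slope, _ = np.polyfit(np.log10(f[band]), np.log10(pxx[band]), 1)
print("estimated scaling exponent:", round(-slope, 2))   # roughly 2 for an OU process
```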

  20. Growth dynamics explain the development of spatiotemporal burst activity of young cultured neuronal networks in detail.

    Directory of Open Access Journals (Sweden)

    Taras A Gritsun

    Full Text Available A typical property of isolated cultured neuronal networks of dissociated rat cortical cells is synchronized spiking, called bursting, starting about one week after plating, when the dissociated cells have sufficiently sent out their neurites and formed enough synaptic connections. This paper is the third in a series of three on simulation models of cultured networks. Our two previous studies [26], [27] have shown that random recurrent network activity models generate intra- and inter-bursting patterns similar to experimental data. The networks were noise or pacemaker-driven and had Izhikevich neuronal elements with only short-term plastic (STP) synapses (so no long-term potentiation, LTP, or depression, LTD, was included). However, elevated pre-phases (burst leaders) and after-phases of burst main shapes, which usually arise during the development of the network, were not yet simulated in sufficient detail. This lack of detail may be due to the fact that the random models completely missed network topology and a growth model. Therefore, the present paper adds, for the first time, a growth model to the activity model, to give the network a time-dependent topology and to explain burst shapes in more detail. Again, without LTP or LTD mechanisms. The integrated growth-activity model yielded realistic bursting patterns. The automatic adjustment of various mutually interdependent network parameters is one of the major advantages of our current approach. Spatio-temporal bursting activity was validated against experiment. Depending on network size, wave reverberation mechanisms were seen along the network boundaries, which may explain the generation of phases of elevated firing before and after the main phase of the burst shape. In summary, the results show that adding topology and growth explains burst shapes in great detail and suggests that young networks still lack/do not need LTP or LTD mechanisms.

  1. Task-dependent changes in cross-level coupling between single neurons and oscillatory activity in multiscale networks.

    Directory of Open Access Journals (Sweden)

    Ryan T Canolty

    Full Text Available Understanding the principles governing the dynamic coordination of functional brain networks remains an important unmet goal within neuroscience. How do distributed ensembles of neurons transiently coordinate their activity across a variety of spatial and temporal scales? While a complete mechanistic account of this process remains elusive, evidence suggests that neuronal oscillations may play a key role in this process, with different rhythms influencing both local computation and long-range communication. To investigate this question, we recorded multiple single unit and local field potential (LFP) activity from microelectrode arrays implanted bilaterally in macaque motor areas. Monkeys performed a delayed center-out reach task either manually using their natural arm (Manual Control, MC) or under direct neural control through a brain-machine interface (Brain Control, BC). In accord with prior work, we found that the spiking activity of individual neurons is coupled to multiple aspects of the ongoing motor beta rhythm (10-45 Hz) during both MC and BC, with neurons exhibiting a diversity of coupling preferences. However, here we show that for identified single neurons, this beta-to-rate mapping can change in a reversible and task-dependent way. For example, as beta power increases, a given neuron may increase spiking during MC but decrease spiking during BC, or exhibit a reversible shift in the preferred phase of firing. The within-task stability of coupling, combined with the reversible cross-task changes in coupling, suggest that task-dependent changes in the beta-to-rate mapping play a role in the transient functional reorganization of neural ensembles. We characterize the range of task-dependent changes in the mapping from beta amplitude, phase, and inter-hemispheric phase differences to the spike rates of an ensemble of simultaneously-recorded neurons, and discuss the potential implications that dynamic remapping from oscillatory activity to

  2. Statistics of Visual Responses to Image Object Stimuli from Primate AIT Neurons to DNN Neurons.

    Science.gov (United States)

    Dong, Qiulei; Wang, Hong; Hu, Zhanyi

    2018-02-01

    Under the goal-driven paradigm, Yamins et al. (2014; Yamins & DiCarlo, 2016) have shown that by optimizing only the final eight-way categorization performance of a four-layer hierarchical network, not only can its top output layer quantitatively predict IT neuron responses but its penultimate layer can also automatically predict V4 neuron responses. Currently, deep neural networks (DNNs) in the field of computer vision have reached image object categorization performance comparable to that of human beings on ImageNet, a data set that contains 1.3 million training images of 1000 categories. We explore whether the DNN neurons (units in DNNs) possess image object representational statistics similar to monkey IT neurons, particularly when the network becomes deeper and the number of image categories becomes larger, using VGG19, a typical and widely used deep network of 19 layers in the computer vision field. Following Lehky, Kiani, Esteky, and Tanaka (2011, 2014), where the response statistics of 674 IT neurons to 806 image stimuli are analyzed using three measures (kurtosis, Pareto tail index, and intrinsic dimensionality), we investigate the three issues in this letter using the same three measures: (1) the similarities and differences of the neural response statistics between VGG19 and primate IT cortex, (2) the variation trends of the response statistics of VGG19 neurons at different layers from low to high, and (3) the variation trends of the response statistics of VGG19 neurons when the numbers of stimuli and neurons increase. We find that the response statistics on both single-neuron selectivity and population sparseness of VGG19 neurons are fundamentally different from those of IT neurons in most cases; by increasing the number of neurons in different layers and the number of stimuli, the response statistics of neurons at different layers from low to high do not substantially change; and the estimated intrinsic dimensionality values at the low
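
    The three measures named above can be approximated with simple stand-ins, sketched below for a synthetic neurons-by-stimuli response matrix: excess kurtosis per neuron, a Hill-type Pareto tail index on the pooled responses, and a participation-ratio estimate of dimensionality. These are common proxies and not necessarily the exact estimators used in the cited studies.

```python
# Hedged sketch: three response-statistics measures computed on a synthetic
# neurons-by-stimuli matrix -- excess kurtosis (selectivity), a Hill-type Pareto tail
# index, and a participation-ratio dimensionality.  Simple proxies only.
import numpy as np
from scipy.stats import kurtosis

rng = np.random.default_rng(8)
responses = rng.lognormal(mean=0.0, sigma=1.0, size=(200, 800))   # neurons x stimuli

# 1) single-neuron selectivity: mean excess kurtosis across neurons
k = kurtosis(responses, axis=1, fisher=True).mean()

# 2) Hill estimator of the Pareto tail index on the top 5% of pooled responses
tail = np.sort(responses.ravel())[int(0.95 * responses.size):]
hill = 1.0 / np.mean(np.log(tail / tail[0]))

# 3) participation-ratio ("effective") dimensionality from the covariance spectrum
lam = np.linalg.eigvalsh(np.cov(responses))
dim = lam.sum() ** 2 / (lam ** 2).sum()

print(f"mean kurtosis {k:.1f}   tail index {hill:.2f}   effective dimensionality {dim:.1f}")
```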

  3. Analysis and modeling of ensemble recordings from respiratory pre-motor neurons indicate changes in functional network architecture after acute hypoxia

    Directory of Open Access Journals (Sweden)

    Roberto F Galán

    2010-09-01

    Full Text Available We have combined neurophysiologic recording, statistical analysis, and computational modeling to investigate the dynamics of the respiratory network in the brainstem. Using a multielectrode array, we recorded ensembles of respiratory neurons in perfused in situ rat preparations that produce spontaneous breathing patterns, focusing on inspiratory pre-motor neurons. We compared firing rates and neuronal synchronization among these neurons before and after a brief hypoxic stimulus. We observed a significant decrease in the number of spikes after stimulation, in part due to a transient slowing of the respiratory pattern. However, the median interspike interval did not change, suggesting that the firing threshold of the neurons was not affected but rather the synaptic input was. A bootstrap analysis of synchrony between spike trains revealed that, both before and after brief hypoxia, up to 45% (but typically less than 5%) of coincident spikes across neuronal pairs were not explained by chance. Most likely, this synchrony resulted from common synaptic input to the pre-motor population, an example of stochastic synchronization. After brief hypoxia most pairs were less synchronized, although some were more, suggesting that the respiratory network was “rewired” transiently after the stimulus. To investigate this hypothesis, we created a simple computational model with feed-forward divergent connections along the inspiratory pathway. Assuming that (1) the number of divergent projections was not the same for all presynaptic cells but rather spanned a wide range, and (2) the stimulus increased inhibition at the top of the network, this model reproduced the reduction in firing rate and bootstrap-corrected synchrony subsequent to hypoxic stimulation observed in our experimental data.

  4. Neuronal Differentiation Modulated by Polymeric Membrane Properties.

    Science.gov (United States)

    Morelli, Sabrina; Piscioneri, Antonella; Drioli, Enrico; De Bartolo, Loredana

    2017-01-01

    In this study, different collagen-blend membranes were successfully constructed by blending collagen with chitosan (CHT) or poly(lactic-co-glycolic acid) (PLGA) to enhance their properties and thus create new biofunctional materials with great potential use for neuronal tissue engineering and regeneration. Collagen blending strongly affected membrane properties in the following ways: (i) it improved the surface hydrophilicity of both pure CHT and PLGA membranes, (ii) it reduced the stiffness of CHT membranes, but (iii) it did not modify the good mechanical properties of PLGA membranes. We then investigated the effect of the different collagen concentrations on the behavior of neurons cultured on the developed membranes. Morphological observations, immunocytochemistry, and morphometric measures demonstrated that the developed membranes, especially CHT/Col30, PLGA, and PLGA/Col1, provided suitable microenvironments for neuronal growth owing to their enhanced properties. The most consistent neuronal differentiation was obtained in neurons cultured on PLGA-based membranes, where a well-developed neuronal network was achieved due to their improved mechanical properties. Our findings suggest that tensile strength and elongation at break are key material parameters that have potential influence on both axonal elongation and neuronal structure and organization, which are of fundamental importance for the maintenance of efficient neuronal growth. Hence, our study has provided new insights regarding the effects of membrane mechanical properties on neuronal behavior, and thus it may help to design and improve novel instructive biomaterials for neuronal tissue engineering. © 2017 S. Karger AG, Basel.

  5. Hysteretic recurrent neural networks: a tool for modeling hysteretic materials and systems

    International Nuclear Information System (INIS)

    Veeramani, Arun S; Crews, John H; Buckner, Gregory D

    2009-01-01

    This paper introduces a novel recurrent neural network, the hysteretic recurrent neural network (HRNN), that is ideally suited to modeling hysteretic materials and systems. This network incorporates a hysteretic neuron consisting of conjoined sigmoid activation functions. Although similar hysteretic neurons have been explored previously, the HRNN is unique in its utilization of simple recurrence to 'self-select' relevant activation functions. Furthermore, training is facilitated by placing the network weights on the output side, allowing standard backpropagation of error training algorithms to be used. We present two- and three-phase versions of the HRNN for modeling hysteretic materials with distinct phases. These models are experimentally validated using data collected from shape memory alloys and ferromagnetic materials. The results demonstrate the HRNN's ability to accurately generalize hysteretic behavior with a relatively small number of neurons. Additional benefits lie in the network's ability to identify statistical information concerning the macroscopic material by analyzing the weights of the individual neurons

  6. Chimera patterns in two-dimensional networks of coupled neurons

    Science.gov (United States)

    Schmidt, Alexander; Kasimatis, Theodoros; Hizanidis, Johanne; Provata, Astero; Hövel, Philipp

    2017-03-01

    We discuss synchronization patterns in networks of FitzHugh-Nagumo and leaky integrate-and-fire oscillators coupled in a two-dimensional toroidal geometry. A common feature between the two models is the presence of fast and slow dynamics, a typical characteristic of neurons. Earlier studies have demonstrated that both models when coupled nonlocally in one-dimensional ring networks produce chimera states for a large range of parameter values. In this study, we give evidence of a plethora of two-dimensional chimera patterns of various shapes, including spots, rings, stripes, and grids, observed in both models, as well as additional patterns found mainly in the FitzHugh-Nagumo system. Both systems exhibit multistability: For the same parameter values, different initial conditions give rise to different dynamical states. Transitions occur between various patterns when the parameters (coupling range, coupling strength, refractory period, and coupling phase) are varied. Many patterns observed in the two models follow similar rules. For example, the diameter of the rings grows linearly with the coupling radius.

  7. State and Training Effects of Mindfulness Meditation on Brain Networks Reflect Neuronal Mechanisms of Its Antidepressant Effect

    Directory of Open Access Journals (Sweden)

    Chuan-Chih Yang

    2016-01-01

    Full Text Available How mindfulness meditation training produces antidepressant effects via plastic changes in both resting-state and meditation-state brain activity is an important question in the rapidly emerging field of neuroplasticity. In the present study, we used a longitudinal design investigating resting state fMRI both before and after 40 days of meditation training in 13 novices. After training, we compared differences in network connectivity between rest and meditation using common resting state functional connectivity methods. Interregional methods were paired with local measures such as Regional Homogeneity. As expected, significant differences in functional connectivity both between states (rest versus meditation) and between time points (before versus after training) were observed. During meditation, the internal consistency in the precuneus and the temporoparietal junction increased, while the internal consistency of frontal brain regions decreased. A follow-up analysis of regional connectivity of the dorsal anterior cingulate cortex further revealed reduced connectivity with anterior insula during meditation. After meditation training, reduced resting state functional connectivity between the pregenual anterior cingulate and dorsal medial prefrontal cortex was observed. Most importantly, significantly reduced depression/anxiety scores were observed after training. Hence, these findings suggest that mindfulness meditation might be of therapeutic use by inducing plasticity-related network changes altering the neuronal basis of affective disorders such as depression.

  8. Voltage-sensitive dye recording from networks of cultured neurons

    Science.gov (United States)

    Chien, Chi-Bin

    This thesis describes the development and testing of a sensitive apparatus for recording electrical activity from microcultures of rat superior cervical ganglion (SCG) neurons by using voltage-sensitive fluorescent dyes. The apparatus comprises a feedback-regulated mercury arc light source, an inverted epifluorescence microscope, a novel fiber-optic camera with discrete photodiode detectors, and low-noise preamplifiers. Using an NA 0.75 objective and illuminating at 10 W/cm2 with the 546 nm mercury line, a typical SCG neuron stained with the styryl dye RH423 gives a detected photocurrent of 1 nA; the light source and optical detectors are quiet enough that the shot noise in this photocurrent (about 0.03% rms) dominates. The design, theory, and performance of this dye-recording apparatus are discussed in detail. Styryl dyes such as RH423 typically give signals of 1%/100 mV on these cells; the signals are linear in membrane potential, but do not appear to arise from a purely electrochromic mechanism. Given this voltage sensitivity and the noise level of the apparatus, it should be possible to detect both action potentials and subthreshold synaptic potentials from SCG cell bodies. In practice, dye recording can easily detect action potentials from every neuron in an SCG microculture, but small synaptic potentials are obscured by dye signals from the dense network of axons. In another microculture system that does not have such long and complex axons, this dye-recording apparatus should be able to detect synaptic potentials, making it possible to noninvasively map the synaptic connections in a microculture, and thus to study long-term synaptic plasticity.

  9. Artificial neuron-glia networks learning approach based on cooperative coevolution.

    Science.gov (United States)

    Mesejo, Pablo; Ibáñez, Oscar; Fernández-Blanco, Enrique; Cedrón, Francisco; Pazos, Alejandro; Porto-Pazos, Ana B

    2015-06-01

    Artificial Neuron-Glia Networks (ANGNs) are a novel bio-inspired machine learning approach. They extend classical Artificial Neural Networks (ANNs) by incorporating recent findings and suppositions about the way information is processed by neural and astrocytic networks in the most evolved living organisms. Although ANGNs are not a consolidated method, their performance against the traditional approach, i.e. without artificial astrocytes, was already demonstrated on classification problems. However, the corresponding learning algorithms developed so far depend strongly on a set of glial parameters which are manually tuned for each specific problem. As a consequence, preliminary experimental tests have to be done in order to determine an adequate set of values, making such manual parameter configuration time-consuming, error-prone, biased and problem dependent. Thus, in this paper, we propose a novel learning approach for ANGNs that fully automates the learning process, and gives the possibility of testing any kind of reasonable parameter configuration for each specific problem. This new learning algorithm, based on coevolutionary genetic algorithms, is able to properly learn all the ANGN parameters. Its performance is tested on five classification problems, achieving significantly better results than ANGN and competitive results with ANN approaches.

  10. Optogenetic analysis of a nociceptor neuron and network reveals ion channels acting downstream of primary sensors

    Science.gov (United States)

    Husson, Steven J.; Costa, Wagner Steuer; Wabnig, Sebastian; Stirman, Jeffrey N.; Watson, Joseph D.; Spencer, W. Clay; Akerboom, Jasper; Looger, Loren L.; Treinin, Millet; Miller, David M.; Lu, Hang; Gottschalk, Alexander

    2012-01-01

    Background: Nociception generally evokes rapid withdrawal behavior in order to protect the tissue from harmful insults. Most nociceptive neurons responding to mechanical insults display highly branched dendrites, an anatomy shared by Caenorhabditis elegans FLP and PVD neurons, which mediate harsh touch responses. Although several primary molecular nociceptive sensors have been characterized, less is known about modulation and amplification of noxious signals within nociceptor neurons. First, we analyzed the FLP/PVD network by optogenetics and studied integration of signals from these cells in downstream interneurons. Second, we investigated which genes modulate PVD function, based on prior single neuron mRNA profiling of PVD. Results: Selectively photoactivating PVD, FLP and downstream interneurons using Channelrhodopsin-2 (ChR2) enabled functionally dissecting this nociceptive network, without interfering signals from other mechanoreceptors. Forward or reverse escape behaviors were determined by PVD and FLP, via integration by command interneurons. To identify mediators of PVD function, acting downstream of primary nocisensor molecules, we knocked down PVD-specific transcripts by RNAi and quantified light-evoked PVD-dependent behavior. Cell-specific disruption of synaptobrevin or voltage-gated Ca2+-channels (VGCCs) showed that PVD signals chemically to command interneurons. Knocking down the DEG/ENaC channel ASIC-1 and the TRPM channel GTL-1 indicated that ASIC-1 may extend PVD’s dynamic range and that GTL-1 may amplify its signals. These channels act cell-autonomously in PVD, downstream of primary mechanosensory molecules. Conclusions: Our work implicates TRPM channels in modifying excitability of, and DEG/ENaCs in potentiating signal output from, a mechano-nociceptor neuron. ASIC-1 and GTL-1 homologues, if functionally conserved, may denote valid targets for novel analgesics. PMID:22483941

  11. Dynamics in a Delayed Neural Network Model of Two Neurons with Inertial Coupling

    Directory of Open Access Journals (Sweden)

    Changjin Xu

    2012-01-01

    Full Text Available A delayed neural network model of two neurons with inertial coupling is dealt with in this paper. The stability is investigated and Hopf bifurcation is demonstrated. Applying the normal form theory and the center manifold argument, we derive the explicit formulas for determining the properties of the bifurcating periodic solutions. An illustrative example is given to demonstrate the effectiveness of the obtained results.

  12. Effect of acute lateral hemisection of the spinal cord on spinal neurons of postural networks

    Science.gov (United States)

    Zelenin, P. V.; Lyalka, V. F.; Orlovsky, G. N.; Deliagina, T. G.

    2016-01-01

    In quadrupeds, acute lateral hemisection of the spinal cord (LHS) severely impairs postural functions, which recover over time. Postural limb reflexes (PLRs) represent a substantial component of postural corrections in intact animals. The aim of the present study was to characterize the effects of acute LHS on two populations of spinal neurons (F and E) mediating PLRs. For this purpose, in decerebrate rabbits, responses of individual neurons from L5 to stimulation causing PLRs were recorded before and during reversible LHS (caused by temporary cold block of signal transmission in lateral spinal pathways at L1), as well as after acute surgical (Sur) LHS at L1. Results obtained after Sur-LHS were compared to control data obtained in our previous study. We found that acute LHS caused disappearance of PLRs on the affected side. It also changed the proportions of different types of neurons on that side: a significant decrease and increase in the proportion of F-neurons and non-modulated neurons, respectively, was found. LHS caused a significant decrease in most parameters of activity in F-neurons located in the ventral horn on the lesioned side and in E-neurons of the dorsal horn on both sides. These changes were caused by a significant decrease in the efficacy of posture-related sensory input from the ipsilateral limb to F-neurons, and from the contralateral limb to both F- and E-neurons. These distortions in the operation of postural networks underlie the impairment of postural control after acute LHS, and represent a starting point for the subsequent recovery of postural functions. PMID:27702647

  13. High-Degree Neurons Feed Cortical Computations.

    Directory of Open Access Journals (Sweden)

    Nicholas M Timme

    2016-05-01

    Full Text Available Recent work has shown that functional connectivity among cortical neurons is highly varied, with a small percentage of neurons having many more connections than others. Also, recent theoretical developments now make it possible to quantify how neurons modify information from the connections they receive. Therefore, it is now possible to investigate how information modification, or computation, depends on the number of connections a neuron receives (in-degree) or sends out (out-degree). To do this, we recorded the simultaneous spiking activity of hundreds of neurons in cortico-hippocampal slice cultures using a high-density 512-electrode array. This preparation and recording method combination produced large numbers of neurons recorded at temporal and spatial resolutions that are not currently available in any in vivo recording system. We utilized transfer entropy (a well-established method for detecting linear and nonlinear interactions in time series) and the partial information decomposition (a powerful, recently developed tool for dissecting multivariate information processing into distinct parts) to quantify computation between neurons where information flows converged. We found that computations did not occur equally in all neurons throughout the networks. Surprisingly, neurons that computed large amounts of information tended to receive connections from high out-degree neurons. However, the in-degree of a neuron was not related to the amount of information it computed. To gain insight into these findings, we developed a simple feedforward network model. We found that a degree-modified Hebbian wiring rule best reproduced the pattern of computation and degree correlation results seen in the real data. Interestingly, this rule also maximized signal propagation in the presence of network-wide correlations, suggesting a mechanism by which cortex could deal with common random background input. These are the first results to show that the extent to

  14. The circadian rhythm induced by the heterogeneous network structure of the suprachiasmatic nucleus

    Science.gov (United States)

    Gu, Changgui; Yang, Huijie

    2016-05-01

    In mammals, the master clock is located in the suprachiasmatic nucleus (SCN), which is composed of about 20 000 nonidentical neuronal oscillators expressing different intrinsic periods. These neurons are coupled through neurotransmitters to form a network consisting of two subgroups, i.e., a ventrolateral (VL) subgroup and a dorsomedial (DM) subgroup. The VL contains about 25% of the SCN neurons, which receive photic input from the retina, and the DM comprises the remaining 75% of the SCN neurons, which are coupled to the VL. The synapses from the VL to the DM are evidently denser than those from the DM to the VL, so that the VL dominates the DM. Therefore, the SCN is a heterogeneous network where the neurons of the VL are linked with a large number of SCN neurons. In the present study, we mimicked the SCN network based on the Goodwin model, considering four types of networks: an all-to-all network, a Newman-Watts (NW) small world network, an Erdös-Rényi (ER) random network, and a Barabási-Albert (BA) scale free network. We found that the circadian rhythm was induced in the BA, ER, and NW networks, while it was absent in the all-to-all network with weak cellular coupling; the amplitude of the circadian rhythm was largest in the BA network, which is the most heterogeneous in network structure. Our finding provides an alternative explanation for the induction or enhancement of circadian rhythm by the heterogeneity of the network structure.
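
    A rough sense of this kind of simulation can be conveyed with a minimal Python sketch of Goodwin-type oscillators coupled through a local mean field on an arbitrary adjacency matrix. The equations, parameter values, and coupling form below are illustrative assumptions, not the exact formulation used in the study; the adjacency matrix A would be generated from the chosen topology (for example a Barabási-Albert graph).

        import numpy as np

        def simulate_goodwin_network(A, T=600.0, dt=0.01, g=0.5, seed=0):
            # Goodwin-type oscillators (x: mRNA, y: protein, z: repressor) coupled
            # through the local mean of x over the neighbours given by adjacency A.
            # Heterogeneous degradation rates b give non-identical intrinsic periods.
            rng = np.random.default_rng(seed)
            N = A.shape[0]
            b = rng.uniform(0.35, 0.45, N)
            x, y, z = (rng.uniform(0.1, 1.0, N) for _ in range(3))
            degree = A.sum(axis=1).clip(min=1.0)          # avoid division by zero
            trace = np.empty((int(T / dt), N))
            for k in range(trace.shape[0]):
                mean_field = (A @ x) / degree             # neighbourhood coupling
                dx = 1.0 / (1.0 + z**10) - b * x + g * mean_field
                dy = x - b * y
                dz = y - b * z
                x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
                trace[k] = x
            return trace

        # Usage (adjacency from networkx, e.g. a Barabasi-Albert graph):
        # import networkx as nx
        # A = nx.to_numpy_array(nx.barabasi_albert_graph(200, 3, seed=1))
        # population_rhythm = simulate_goodwin_network(A).mean(axis=1)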

  15. The role of propriospinal neuronal network in transmitting the alternating muscular activities of flexor and extensor in parkinsonian tremor.

    Science.gov (United States)

    Hao, M; He, X; Lan, N

    2012-01-01

    It has been shown that normal cyclic movement of the human arm and resting limb tremor in Parkinson's disease (PD) are associated with oscillatory neuronal activities in different cerebral networks, which are transmitted to the antagonistic muscles via the same spinal pathway. There are mono-synaptic and multi-synaptic corticospinal pathways for conveying motor commands. This study investigates the plausible role of the propriospinal neuronal (PN) network at the C3-C4 levels in multi-synaptic transmission of cortical commands for oscillatory movements. A PN network model is constructed based on known neurophysiological connections, and is hypothesized to achieve the conversion of cortical oscillations into alternating antagonistic muscle bursts. Simulations performed with a virtual arm (VA) model indicate that without the PN network, the alternating bursts of antagonistic muscle EMG could not be reliably generated, whereas with the PN network, the alternating pattern of bursts was naturally displayed in the three pairs of antagonist muscles. Thus, it is suggested that oscillations at single and double tremor frequencies in the primary motor cortex (M1) are processed at the PN network to compute the alternating burst pattern in the flexor and extensor muscles.

  16. Single-cell axotomy of cultured hippocampal neurons integrated in neuronal circuits.

    Science.gov (United States)

    Gomis-Rüth, Susana; Stiess, Michael; Wierenga, Corette J; Meyn, Liane; Bradke, Frank

    2014-05-01

    An understanding of the molecular mechanisms of axon regeneration after injury is key for the development of potential therapies. Single-cell axotomy of dissociated neurons enables the study of the intrinsic regenerative capacities of injured axons. This protocol describes how to perform single-cell axotomy on dissociated hippocampal neurons containing synapses. Furthermore, to axotomize hippocampal neurons integrated in neuronal circuits, we describe how to set up coculture with a few fluorescently labeled neurons. This approach allows axotomy of single cells in a complex neuronal network and the observation of morphological and molecular changes during axon regeneration. Thus, single-cell axotomy of mature neurons is a valuable tool for gaining insights into cell intrinsic axon regeneration and the plasticity of neuronal polarity of mature neurons. Dissociation of the hippocampus and plating of hippocampal neurons takes ∼2 h. Neurons are then left to grow for 2 weeks, during which time they integrate into neuronal circuits. Subsequent axotomy takes 10 min per neuron and further imaging takes 10 min per neuron.

  17. Emergent synchronous bursting of oxytocin neuronal network.

    Directory of Open Access Journals (Sweden)

    Enrico Rossoni

    2008-07-01

    Full Text Available When young suckle, they are rewarded intermittently with a let-down of milk that results from reflex secretion of the hormone oxytocin; without oxytocin, newly born young will die unless they are fostered. Oxytocin is made by magnocellular hypothalamic neurons, and is secreted from their nerve endings in the pituitary in response to action potentials (spikes that are generated in the cell bodies and which are propagated down their axons to the nerve endings. Normally, oxytocin cells discharge asynchronously at 1-3 spikes/s, but during suckling, every 5 min or so, each discharges a brief, intense burst of spikes that release a pulse of oxytocin into the circulation. This reflex was the first, and is perhaps the best, example of a physiological role for peptide-mediated communication within the brain: it is coordinated by the release of oxytocin from the dendrites of oxytocin cells; it can be facilitated by injection of tiny amounts of oxytocin into the hypothalamus, and it can be blocked by injection of tiny amounts of oxytocin antagonist. Here we show how synchronized bursting can arise in a neuronal network model that incorporates basic observations of the physiology of oxytocin cells. In our model, bursting is an emergent behaviour of a complex system, involving both positive and negative feedbacks, between many sparsely connected cells. The oxytocin cells are regulated by independent afferent inputs, but they interact by local release of oxytocin and endocannabinoids. Oxytocin released from the dendrites of these cells has a positive-feedback effect, while endocannabinoids have an inhibitory effect by suppressing the afferent input to the cells.

  18. Population activity structure of excitatory and inhibitory neurons.

    Science.gov (United States)

    Bittner, Sean R; Williamson, Ryan C; Snyder, Adam C; Litwin-Kumar, Ashok; Doiron, Brent; Chase, Steven M; Smith, Matthew A; Yu, Byron M

    2017-01-01

    Many studies use population analysis approaches, such as dimensionality reduction, to characterize the activity of large groups of neurons. To date, these methods have treated each neuron equally, without taking into account whether neurons are excitatory or inhibitory. We studied population activity structure as a function of neuron type by applying factor analysis to spontaneous activity from spiking networks with balanced excitation and inhibition. Throughout the study, we characterized population activity structure by measuring its dimensionality and the percentage of overall activity variance that is shared among neurons. First, by sampling only excitatory or only inhibitory neurons, we found that the activity structures of these two populations in balanced networks are measurably different. We also found that the population activity structure is dependent on the ratio of excitatory to inhibitory neurons sampled. Finally we classified neurons from extracellular recordings in the primary visual cortex of anesthetized macaques as putative excitatory or inhibitory using waveform classification, and found similarities with the neuron type-specific population activity structure of a balanced network with excitatory clustering. These results imply that knowledge of neuron type is important, and allows for stronger statistical tests, when interpreting population activity structure.
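
    For readers who want to reproduce this style of analysis, the following Python sketch (using scikit-learn) fits factor analysis to a trials-by-neurons matrix of spike counts and reports the two summary statistics used above: percent shared variance and the dimensionality of the shared space. The 95% variance criterion for dimensionality and the synthetic Poisson data in the usage comment are assumptions for illustration, not the exact conventions of the study.

        import numpy as np
        from sklearn.decomposition import FactorAnalysis

        def shared_variance_stats(spike_counts, n_factors=10, threshold=0.95):
            # spike_counts: trials x neurons matrix of spike counts.
            fa = FactorAnalysis(n_components=n_factors).fit(spike_counts)
            loadings = fa.components_.T                   # neurons x factors
            shared = loadings @ loadings.T                # shared covariance
            private = np.diag(fa.noise_variance_)         # independent (private) variance
            pct_shared = 100.0 * np.trace(shared) / np.trace(shared + private)
            # dimensionality: eigenmodes needed to capture `threshold` of shared variance
            eigvals = np.sort(np.linalg.eigvalsh(shared))[::-1]
            dim = int(np.searchsorted(np.cumsum(eigvals) / eigvals.sum(), threshold)) + 1
            return pct_shared, dim

        # Usage with synthetic data (real spike counts would replace this):
        # counts = np.random.poisson(5.0, size=(400, 60))
        # print(shared_variance_stats(counts))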

  19. Population activity structure of excitatory and inhibitory neurons.

    Directory of Open Access Journals (Sweden)

    Sean R Bittner

    Full Text Available Many studies use population analysis approaches, such as dimensionality reduction, to characterize the activity of large groups of neurons. To date, these methods have treated each neuron equally, without taking into account whether neurons are excitatory or inhibitory. We studied population activity structure as a function of neuron type by applying factor analysis to spontaneous activity from spiking networks with balanced excitation and inhibition. Throughout the study, we characterized population activity structure by measuring its dimensionality and the percentage of overall activity variance that is shared among neurons. First, by sampling only excitatory or only inhibitory neurons, we found that the activity structures of these two populations in balanced networks are measurably different. We also found that the population activity structure is dependent on the ratio of excitatory to inhibitory neurons sampled. Finally we classified neurons from extracellular recordings in the primary visual cortex of anesthetized macaques as putative excitatory or inhibitory using waveform classification, and found similarities with the neuron type-specific population activity structure of a balanced network with excitatory clustering. These results imply that knowledge of neuron type is important, and allows for stronger statistical tests, when interpreting population activity structure.

  20. Population activity structure of excitatory and inhibitory neurons

    Science.gov (United States)

    Doiron, Brent

    2017-01-01

    Many studies use population analysis approaches, such as dimensionality reduction, to characterize the activity of large groups of neurons. To date, these methods have treated each neuron equally, without taking into account whether neurons are excitatory or inhibitory. We studied population activity structure as a function of neuron type by applying factor analysis to spontaneous activity from spiking networks with balanced excitation and inhibition. Throughout the study, we characterized population activity structure by measuring its dimensionality and the percentage of overall activity variance that is shared among neurons. First, by sampling only excitatory or only inhibitory neurons, we found that the activity structures of these two populations in balanced networks are measurably different. We also found that the population activity structure is dependent on the ratio of excitatory to inhibitory neurons sampled. Finally we classified neurons from extracellular recordings in the primary visual cortex of anesthetized macaques as putative excitatory or inhibitory using waveform classification, and found similarities with the neuron type-specific population activity structure of a balanced network with excitatory clustering. These results imply that knowledge of neuron type is important, and allows for stronger statistical tests, when interpreting population activity structure. PMID:28817581

  1. A principled dimension-reduction method for the population density approach to modeling networks of neurons with synaptic dynamics.

    Science.gov (United States)

    Ly, Cheng

    2013-10-01

    The population density approach to neural network modeling has been utilized in a variety of contexts. The idea is to group many similar noisy neurons into populations and track the probability density function for each population that encompasses the proportion of neurons with a particular state rather than simulating individual neurons (i.e., Monte Carlo). It is commonly used for both analytic insight and as a time-saving computational tool. The main shortcoming of this method is that when realistic attributes are incorporated in the underlying neuron model, the dimension of the probability density function increases, leading to intractable equations or, at best, computationally intensive simulations. Thus, developing principled dimension-reduction methods is essential if these powerful approaches are to remain robust, and as a pragmatic tool such methods would be of great value to the larger theoretical neuroscience community. For exposition of this method, we consider a single uncoupled population of leaky integrate-and-fire neurons receiving external excitatory synaptic input only. We present a dimension-reduction method that reduces a two-dimensional partial integro-differential equation to a computationally efficient one-dimensional system and gives qualitatively accurate results in both the steady-state and nonequilibrium regimes. The method, termed the modified mean-field method, is based entirely on the governing equations and not on any auxiliary variables or parameters, and it does not require fine-tuning. The principles of the modified mean-field method have potential applicability to more realistic (i.e., higher-dimensional) neural networks.

  2. A neural network model of the relativistic electron flux at geosynchronous orbit

    International Nuclear Information System (INIS)

    Koons, H.C.; Gorney, D.J.

    1991-01-01

    A neural network has been developed to model the temporal variations of relativistic (>3 MeV) electrons at geosynchronous orbit based on model inputs consisting of 10 consecutive days of the daily sum of the planetary magnetic index ΣKp. The neural network consists of three layers of neurons, containing 10 neurons in the input layer, 6 neurons in a hidden layer, and 1 output neuron. The output is a prediction of the daily-averaged electron flux for the tenth day. The neural network was trained using 62 days of data from July 1, 1984, through August 31, 1984, from the SEE spectrometer on the geosynchronous spacecraft 1982-019. The performance of the model was measured by comparing model outputs with measured fluxes over a 6-year period from April 19, 1982, to June 4, 1988. For the entire data set the rms logarithmic error of the neural network is 0.76, and the average logarithmic error is 0.58. The neural network is essentially zero biased, and for accumulation intervals of 3 days or longer the average logarithmic error is less than 0.1. The neural network provides results that are significantly more accurate than those from linear prediction filters. The model has been used to simulate conditions which are rarely observed in nature, such as long periods of quiet (ΣKp = 0) and ideal impulses. It has also been used to make reasonably accurate day-ahead forecasts of the relativistic electron flux at geosynchronous orbit
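
    The reported architecture (10 inputs of daily-summed Kp, one hidden layer of 6 neurons, one output predicting the day-10 flux) is small enough to sketch directly. The Python snippet below uses scikit-learn's MLPRegressor with that layout; the training data are random placeholders standing in for the historical Kp and flux records, so it only illustrates the shape of the model, not the published result.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)
        X = rng.uniform(0, 60, size=(500, 10))                # 10 days of daily-summed Kp (placeholder)
        y = 0.05 * X.sum(axis=1) + rng.normal(0, 0.3, 500)    # stand-in for log10 of the day-10 flux

        # 10 inputs -> 6 hidden sigmoid neurons -> 1 output, mirroring the reported layout
        model = MLPRegressor(hidden_layer_sizes=(6,), activation="logistic",
                             max_iter=5000, random_state=0).fit(X, y)
        rms_log_error = float(np.sqrt(np.mean((model.predict(X) - y) ** 2)))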

  3. Efficient computation in networks of spiking neurons: simulations and theory

    International Nuclear Information System (INIS)

    Natschlaeger, T.

    1999-01-01

    One of the most prominent features of biological neural systems is that individual neurons communicate via short electrical pulses, the so-called action potentials or spikes. In this thesis we investigate possible mechanisms which can in principle explain how complex computations in spiking neural networks (SNN) can be performed very fast, i.e. within a few tens of milliseconds. Some of these models are based on the assumption that relevant information is encoded by the timing of individual spikes (temporal coding). We will also discuss a model which is based on a population code and still is able to perform fast complex computations. In their natural environment biological neural systems have to process signals with a rich temporal structure. Hence it is an interesting question how neural systems process time series. In this context we explore possible links between biophysical characteristics of single neurons (refractory behavior, connectivity, time course of postsynaptic potentials) and synapses (unreliability, dynamics) on the one hand and possible computations on time series on the other hand. Furthermore we describe a general model of computation that exploits dynamic synapses. This model provides a general framework for understanding how neural systems process time-varying signals. (author)

  4. Effects of weak electric fields on the activity of neurons and neuronal networks

    International Nuclear Information System (INIS)

    Jeffreys, J.G.R.; Deans, J.; Bikson, M.; Fox, J.

    2003-01-01

    Electric fields applied to brain tissue will affect cellular properties. They will hyperpolarise the ends of cells closest to the positive part of the field, and depolarise ends closest to the negative. In the case of neurons this affects excitability. How these changes in transmembrane potential are distributed depends on the length constant of the neuron, and on its geometry; if the neuron is electrically compact, the change in transmembrane potential becomes an almost linear function of distance in the direction of the field. Neurons from the mammalian hippocampus, maintained in tissue slices in vitro, are significantly affected by fields of around 1-5 V/m. (author)

  5. Identifying Chaotic FitzHugh–Nagumo Neurons Using Compressive Sensing

    Directory of Open Access Journals (Sweden)

    Ri-Qi Su

    2014-07-01

    Full Text Available We develop a completely data-driven approach to reconstructing coupled neuronal networks that contain a small subset of chaotic neurons. Such chaotic elements can be the result of parameter shift in their individual dynamical systems and may lead to abnormal functions of the network. Accurately identifying the chaotic neurons may thus be necessary and important, for example, for applying appropriate controls to bring the network back to a normal state. However, due to couplings among the nodes, the measured time series, even from non-chaotic neurons, would appear random, rendering inapplicable traditional nonlinear time-series analysis, such as the delay-coordinate embedding method, which yields information about the global dynamics of the entire network. Our method is based on compressive sensing. In particular, we demonstrate that identifying chaotic elements can be formulated as a general problem of reconstructing the nodal dynamical systems, network connections and all coupling functions, as well as their weights. The workings and efficiency of the method are illustrated by using networks of non-identical FitzHugh–Nagumo neurons with randomly-distributed coupling weights.
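
    The core idea (expand the unknown nodal dynamics in a basis and recover the few non-zero coefficients by sparse regression) can be sketched in Python for a single FitzHugh-Nagumo unit. The parameter values, the polynomial library, and the use of scikit-learn's Lasso in place of a dedicated compressive-sensing solver are all simplifying assumptions; the full method would include coupling terms in the basis and repeat the fit for every node.

        import numpy as np
        from sklearn.linear_model import Lasso

        # Integrate one FitzHugh-Nagumo unit (illustrative parameters).
        def fhn_derivs(v, w, I=0.5, a=0.7, b=0.8, eps=0.08):
            return v - v**3 / 3.0 - w + I, eps * (v + a - b * w)

        dt, n = 0.01, 20000
        v, w = -1.0, 1.0
        V, W = np.empty(n), np.empty(n)
        for i in range(n):
            dv, dw = fhn_derivs(v, w)
            v, w = v + dt * dv, w + dt * dw
            V[i], W[i] = v, w

        # Sparse recovery: regress the measured derivative on a polynomial library.
        dVdt = np.gradient(V, dt)
        library = np.column_stack([np.ones(n), V, W, V**2, V * W, W**2, V**3])
        coef = Lasso(alpha=1e-3, fit_intercept=False, max_iter=50000).fit(library, dVdt).coef_
        # The nonzero entries should approximately recover the constant drive I and the
        # terms +V, -W, -V**3/3 of the true v-equation.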

  6. Scaling Properties of Dimensionality Reduction for Neural Populations and Network Models.

    Directory of Open Access Journals (Sweden)

    Ryan C Williamson

    2016-12-01

    Full Text Available Recent studies have applied dimensionality reduction methods to understand how the multi-dimensional structure of neural population activity gives rise to brain function. It is unclear, however, how the results obtained from dimensionality reduction generalize to recordings with larger numbers of neurons and trials or how these results relate to the underlying network structure. We address these questions by applying factor analysis to recordings in the visual cortex of non-human primates and to spiking network models that self-generate irregular activity through a balance of excitation and inhibition. We compared the scaling trends of two key outputs of dimensionality reduction (shared dimensionality and percent shared variance) with neuron and trial count. We found that the scaling properties of networks with non-clustered and clustered connectivity differed, and that the in vivo recordings were more consistent with the clustered network. Furthermore, recordings from tens of neurons were sufficient to identify the dominant modes of shared variability that generalize to larger portions of the network. These findings can help guide the interpretation of dimensionality reduction outputs in regimes of limited neuron and trial sampling and help relate these outputs to the underlying network structure.

  7. Input dependent cell assembly dynamics in a model of the striatal medium spiny neuron network

    Directory of Open Access Journals (Sweden)

    Adam ePonzi

    2012-03-01

    Full Text Available The striatal medium spiny neuron (MSN) network is sparsely connected with fairly weak GABAergic collaterals receiving an excitatory glutamatergic cortical projection. Peri-stimulus time histograms (PSTH) of MSN population response investigated in various experimental studies display strong firing rate modulations distributed throughout behavioural task epochs. In previous work we have shown by numerical simulation that sparse random networks of inhibitory spiking neurons with characteristics appropriate for UP state MSNs form cell assemblies which fire together coherently in sequences on long behaviourally relevant timescales when the network receives a fixed pattern of constant input excitation. Here we first extend that model to the case where cortical excitation is composed of many independent noisy Poisson processes and demonstrate that cell assembly dynamics is still observed when the input is sufficiently weak. However if cortical excitation strength is increased more regularly firing and completely quiescent cells are found, which depend on the cortical stimulation. Subsequently we further extend previous work to consider what happens when the excitatory input varies as it would when the animal is engaged in behaviour. We investigate how sudden switches in excitation interact with network generated patterned activity. We show that sequences of cell assembly activations can be locked to the excitatory input sequence and delineate the range of parameters where this behaviour is shown. Model cell population PSTH display both stimulus and temporal specificity, with large population firing rate modulations locked to elapsed time from task events. Thus the random network can generate a large diversity of temporally evolving stimulus dependent responses even though the input is fixed between switches. We suggest the MSN network is well suited to the generation of such slow coherent task dependent response

  8. Input dependent cell assembly dynamics in a model of the striatal medium spiny neuron network.

    Science.gov (United States)

    Ponzi, Adam; Wickens, Jeff

    2012-01-01

    The striatal medium spiny neuron (MSN) network is sparsely connected with fairly weak GABAergic collaterals receiving an excitatory glutamatergic cortical projection. Peri-stimulus time histograms (PSTH) of MSN population response investigated in various experimental studies display strong firing rate modulations distributed throughout behavioral task epochs. In previous work we have shown by numerical simulation that sparse random networks of inhibitory spiking neurons with characteristics appropriate for UP state MSNs form cell assemblies which fire together coherently in sequences on long behaviorally relevant timescales when the network receives a fixed pattern of constant input excitation. Here we first extend that model to the case where cortical excitation is composed of many independent noisy Poisson processes and demonstrate that cell assembly dynamics is still observed when the input is sufficiently weak. However if cortical excitation strength is increased more regularly firing and completely quiescent cells are found, which depend on the cortical stimulation. Subsequently we further extend previous work to consider what happens when the excitatory input varies as it would when the animal is engaged in behavior. We investigate how sudden switches in excitation interact with network generated patterned activity. We show that sequences of cell assembly activations can be locked to the excitatory input sequence and outline the range of parameters where this behavior is shown. Model cell population PSTH display both stimulus and temporal specificity, with large population firing rate modulations locked to elapsed time from task events. Thus the random network can generate a large diversity of temporally evolving stimulus dependent responses even though the input is fixed between switches. We suggest the MSN network is well suited to the generation of such slow coherent task dependent response which could be utilized by the animal in behavior.

  9. Network-state modulation of power-law frequency-scaling in visual cortical neurons.

    Directory of Open Access Journals (Sweden)

    Sami El Boustani

    2009-09-01

    Full Text Available Various types of neural-based signals, such as EEG, local field potentials and intracellular synaptic potentials, integrate multiple sources of activity distributed across large assemblies. They have in common a power-law frequency-scaling structure at high frequencies, but it is still unclear whether this scaling property is dominated by intrinsic neuronal properties or by network activity. The latter case is particularly interesting because if frequency-scaling reflects the network state it could be used to characterize the functional impact of the connectivity. In intracellularly recorded neurons of cat primary visual cortex in vivo, the power spectral density of V(m) activity displays a power-law structure at high frequencies with a fractional scaling exponent. We show that this exponent is not constant, but depends on the visual statistics used to drive the network. To investigate the determinants of this frequency-scaling, we considered a generic recurrent model of cortex receiving a retinotopically organized external input. Similarly to the in vivo case, our in computo simulations show that the scaling exponent reflects the correlation level imposed in the input. This systematic dependence was also replicated at the single cell level, by controlling independently, in a parametric way, the strength and the temporal decay of the pairwise correlation between presynaptic inputs. This last model was implemented in vitro by imposing the correlation control in artificial presynaptic spike trains through dynamic-clamp techniques. These in vitro manipulations induced a modulation of the scaling exponent, similar to that observed in vivo and predicted in computo. We conclude that the frequency-scaling exponent of the V(m) reflects stimulus-driven correlations in the cortical network activity. Therefore, we propose that the scaling exponent could be used to read-out the "effective" connectivity responsible for the dynamical signature of the population
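
    A scaling exponent of the kind discussed above is typically estimated by fitting a straight line to the power spectral density on log-log axes over a high-frequency band. The Python helper below does exactly that with a Welch spectrum; the band edges and segment length are illustrative assumptions, not the values used in the study.

        import numpy as np
        from scipy.signal import welch

        def psd_scaling_exponent(vm, fs, fmin=75.0, fmax=200.0):
            # Fit log10(PSD) against log10(f) over a high-frequency band and return
            # alpha, where PSD ~ 1/f**alpha.  Band edges are illustrative choices.
            f, pxx = welch(vm, fs=fs, nperseg=4096)
            band = (f >= fmin) & (f <= fmax)
            slope, _ = np.polyfit(np.log10(f[band]), np.log10(pxx[band]), 1)
            return -slope

        # Usage: alpha = psd_scaling_exponent(vm_trace, fs=10000.0)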

  10. Impact of Bounded Noise and Rewiring on the Formation and Instability of Spiral Waves in a Small-World Network of Hodgkin-Huxley Neurons.

    Science.gov (United States)

    Yao, Yuangen; Deng, Haiyou; Ma, Chengzhang; Yi, Ming; Ma, Jun

    2017-01-01

    Spiral waves are observed in chemical, physical and biological systems, and the emergence of spiral waves in cardiac tissue is linked to diseases such as heart ventricular fibrillation and epilepsy; the topic thus has importance in theoretical studies and potential medical applications. Noise is inevitable in neuronal systems and can change the electrical activities of neurons in different ways. Many previous theoretical studies of the impact of noise on spiral waves focus on unbounded Gaussian noise or colored noise. In this paper, the impacts of bounded noise and rewiring of the network on the formation and instability of spiral waves are discussed in a small-world (SW) network of Hodgkin-Huxley (HH) neurons through numerical simulations, together with statistical analysis. Firstly, we present a SW network of HH neurons subjected to bounded noise. Then, it is numerically demonstrated that bounded noise with proper intensity σ, amplitude A, or frequency f can facilitate the formation of spiral waves when the rewiring probability p is below certain thresholds. In other words, bounded noise-induced resonant behavior can occur in the SW network of neurons. In addition, the rewiring probability p always impairs spiral waves, while spiral waves are confirmed to be robust for small p; thus a shortcut-induced phase transition of the spiral wave is induced as p increases. Furthermore, statistical factors of synchronization are calculated to discern the phase transition of the spatial pattern, and it is confirmed that a larger factor of synchronization is approached with increasing rewiring probability p, and the stability of the spiral wave is destroyed.
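
    One common way to generate bounded noise with an intensity, amplitude, and frequency parameter is the sine-Wiener form, sketched in Python below. Treating this as the noise used in the study is an assumption for illustration; the paper's exact definition may differ in detail.

        import numpy as np

        def sine_wiener_noise(T, dt, A=1.0, f=0.5, sigma=1.0, seed=0):
            # Bounded noise of the sine-Wiener form A*sin(2*pi*f*t + sigma*W(t)),
            # confined to [-A, A]; W(t) is a standard Wiener process.
            rng = np.random.default_rng(seed)
            n = int(T / dt)
            t = np.arange(n) * dt
            W = np.cumsum(rng.normal(0.0, np.sqrt(dt), n))
            return A * np.sin(2.0 * np.pi * f * t + sigma * W)

        # Usage: xi = sine_wiener_noise(T=1000.0, dt=0.01, A=2.0, f=0.02, sigma=0.4)
        # xi would then be injected as an additional current into each HH neuron.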

  11. Neuronal replacement therapy: previous achievements and challenges ahead

    Science.gov (United States)

    Grade, Sofia; Götz, Magdalena

    2017-10-01

    Lifelong neurogenesis and the incorporation of newborn neurons into mature neuronal circuits operate in specialized niches of the mammalian brain and serve as a role model for neuronal replacement strategies. However, to what extent can the remaining brain parenchyma, which never incorporates new neurons during adulthood, be as plastic and readily accommodate neurons in networks that suffered neuronal loss due to injury or neurological disease? Which microenvironment is permissive for neuronal replacement and synaptic integration and which cells perform best? Can lost function be restored and how adequate is the participation in the pre-existing circuitry? Could aberrant connections cause malfunction especially in networks dominated by excitatory neurons, such as the cerebral cortex? These questions show how important connectivity and circuitry aspects are for regenerative medicine, which is the focus of this review. We will discuss the impressive advances in neuronal replacement strategies and success from exogenous as well as endogenous cell sources. Both have seen key novel technologies, like the groundbreaking discovery of induced pluripotent stem cells and direct neuronal reprogramming, offering alternatives to the transplantation of fetal neurons, and both herald great expectations. For these to become reality, neuronal circuitry analysis is key now. As our understanding of neuronal circuits increases, neuronal replacement therapy should fulfill those prerequisites in network structure and function, in brain-wide input and output. Now is the time to incorporate neural circuitry research into regenerative medicine if we ever want to truly repair brain injury.

  12. Conceptual Network Model From Sensory Neurons to Astrocytes of the Human Nervous System.

    Science.gov (United States)

    Yang, Yiqun; Yeo, Chai Kiat

    2015-07-01

    From single-cell animals like the paramecium to vertebrates like the ape, the nervous system plays an important role in responding to variations in the environment. Compared with that of other animals, the nervous system of the human body possesses more intricate organization and utility. The anatomy of the nervous system has been understood progressively, yet an explanation at the cell level of complete information transmission is still lacking. Along the signal pathway toward the brain, an external stimulus first activates action potentials in the sensing neuron, and these electric pulses transmit along the spinal nerve or cranial nerve to the neurons in the brain. Second, calcium elevation is triggered in the branch of an astrocyte at the tripartite synapse. Third, the local calcium wave expands to the entire territory of the astrocyte. Finally, the calcium wave propagates to the neighboring astrocyte via gap junction channels. In our study, we integrate existing mathematical models and biological experiments for each step of the signal transduction to establish a conceptual network model for the human nervous system. The network is composed of four layers, and the communication protocols of each layer can be adapted to entities with different characterizations. We verify our simulation results against the available biological experiments and mathematical models and provide a test case of the integrated network. As the production of conscious episodes in the human nervous system is still under intense research, our model serves as a useful tool to facilitate, complement and verify current and future study in human cognition.

  13. Cellullar insights into cerebral cortical development: focusing on the locomotion mode of neuronal migration

    Directory of Open Access Journals (Sweden)

    Takeshi eKawauchi

    2015-10-01

    Full Text Available The mammalian brain consists of numerous compartments that are closely connected with each other via neural networks, comprising the basis of higher order brain functions. The highly specialized structure originates from simple pseudostratified neuroepithelium-derived neural progenitors located near the ventricle. A long journey by neurons from the ventricular side is essential for the formation of a sophisticated brain structure, including a mammalian-specific six-layered cerebral cortex. Neuronal migration consists of several contiguous steps, but the locomotion mode comprises a large part of the migration. The locomoting neurons exhibit unique features: a radial glial fiber-dependent migration requiring the endocytic recycling of N-cadherin, and a neuron-specific migration mode with dilation/swelling formation that requires the actin and microtubule organization possibly regulated by cyclin-dependent kinase 5 (Cdk5), Dcx, p27kip1, Rac1 and POSH. Here I will introduce the roles of various cellular events, such as cytoskeletal organization, cell adhesion and membrane trafficking, in the regulation of neuronal migration, with particular focus on the locomotion mode.

  14. A neuron-astrocyte transistor-like model for neuromorphic dressed neurons.

    Science.gov (United States)

    Valenza, G; Pioggia, G; Armato, A; Ferro, M; Scilingo, E P; De Rossi, D

    2011-09-01

    Experimental evidence on the role of the synaptic glia as an active partner, together with the bold synapse, in neuronal signaling and the dynamics of neural tissue strongly suggests investigating a more realistic neuron-glia model for better understanding of human brain processing. Among the glial cells, the astrocytes play a crucial role in the tripartite synapse, i.e. the dressed neuron. A well-known two-way astrocyte-neuron interaction can be found in the literature, completely revising the purely supportive role for the glia. The aim of this study is to provide a computationally efficient model for neuron-glia interaction. The neuron-glia interactions were simulated by implementing the Li-Rinzel model for an astrocyte and the Izhikevich model for a neuron. Assuming the dressed-neuron dynamics to be similar to the nonlinear input-output characteristics of a bipolar junction transistor, we derived our computationally efficient model. This model may represent the fundamental computational unit for the development of real-time artificial neuron-glia networks, opening new perspectives in pattern recognition systems and in brain neurophysiology. Copyright © 2011 Elsevier Ltd. All rights reserved.
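
    The neuronal half of that pairing is compact enough to sketch. The Python snippet below implements a standard Izhikevich neuron with regular-spiking parameters; the coupling to a Li-Rinzel astrocyte (not shown) would enter through the input current. Parameter values and the constant-current drive are illustrative assumptions, not the configuration used in the paper.

        import numpy as np

        def izhikevich(I=10.0, T=1000.0, dt=0.5, a=0.02, b=0.2, c=-65.0, d=8.0):
            # Standard Izhikevich neuron with regular-spiking parameters; the input
            # current I is where astrocytic (Li-Rinzel) feedback would be injected.
            n = int(T / dt)
            v, u = -70.0, b * -70.0
            vs, spikes = np.empty(n), []
            for k in range(n):
                v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
                u += dt * a * (b * v - u)
                if v >= 30.0:            # spike: reset membrane and recovery variable
                    spikes.append(k * dt)
                    v, u = c, u + d
                vs[k] = v
            return vs, spikes

        # Usage: trace, spike_times = izhikevich(I=10.0)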

  15. Time Delay and Long-Range Connection Induced Synchronization Transitions in Newman-Watts Small-World Neuronal Networks

    Science.gov (United States)

    Qian, Yu

    2014-01-01

    The synchronization transitions in Newman-Watts small-world neuronal networks (SWNNs) induced by time delay and long-range connection (LRC) probability have been investigated by means of a synchronization parameter and space-time plots. Four distinct parameter regions, that is, an asynchronous region, a transition region, a synchronous region, and an oscillatory region, have been discovered at a given LRC probability as the time delay is increased. Interestingly, desynchronization is observed in the oscillatory region. More importantly, we consider that the spatiotemporal patterns obtained in delayed Newman-Watts SWNNs result from the competition between long-range drivings (LRDs) and neighboring interactions. In addition, for moderate time delay, the synchronization of the neuronal network can be enhanced remarkably by increasing the LRC probability. Furthermore, lag synchronization has been found between weak synchronization and complete synchronization when the LRC probability is a little less than 1.0. Finally, the two necessary conditions for synchronization in delayed Newman-Watts SWNNs, moderate time delay and large numbers of LRCs, are identified explicitly. PMID:24810595
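
    A synchronization parameter of the kind referred to above is commonly computed as the ratio of the temporal variance of the population mean field to the average single-neuron variance. The Python helper below implements that common definition; treating it as the exact measure used in the paper is an assumption made for illustration.

        import numpy as np

        def synchronization_parameter(x):
            # x: time x neurons array of membrane potentials.  Returns the ratio of
            # the temporal variance of the mean field to the mean single-neuron
            # variance: close to 1 for complete synchrony, about 1/N for asynchrony.
            mean_field = x.mean(axis=1)
            return float(mean_field.var() / x.var(axis=0).mean())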

  16. Segmentation of isolated MR images: development and comparison of neuronal networks

    International Nuclear Information System (INIS)

    Paredes, R.; Robles, M.; Marti-Bonmati, L.; Masia, L.

    1998-01-01

    Segmentation defines the capacity to differentiate among types of tissues. In MR, it is frequently applied to volumetric determinations. Digital images can be segmented in a number of ways; neuronal networks (NN) can be employed for this purpose. Our objective was to develop algorithms for automatic segmentation using NN and apply them to central nervous system MR images. The segmentation obtained with NN was compared with that resulting from other procedures (region growing and K-means). Each NN consisted of two layers: one based on unsupervised training, which was utilized to segment the image into K clusters, and a second layer associating each cluster obtained by the preceding layer with the real class in the corresponding previously segmented target image. This NN was trained with images previously segmented with supervised region-growing algorithms and automatic K-means. Thus, 4 different segmentations were obtained: region growing, K-means, NN with region growing, and NN with K-means. The tissue volumes corresponding to cerebrospinal fluid, gray matter and white matter obtained with the 4 techniques were compared, and the most representative segmented image was selected qualitatively by averaging the visual perception of 3 radiologists. The segmentation that best corresponded to the visual perception of the radiologists was that produced by the NN trained with region growing. In comparison, the other 3 algorithms presented low percentage differences (mean, 3.44%). The mean percentage error for the 3 tissues from these algorithms was lower for region-growing segmentation (2.34%) than for the NN trained with K-means (3.31%) and for automatic K-means segmentation (4.66%). Thus, NN are reliable for the automation of isolated MR image segmentation. (Author) 12 refs
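
    As a point of reference for the comparison described above, intensity-based K-means segmentation of an MR slice can be sketched in a few lines of Python with scikit-learn. This is only the baseline clustering step, not the two-layer NN pipeline of the study; the choice of k=3 (CSF, gray matter, white matter) follows the tissues listed above, and the function name is hypothetical.

        import numpy as np
        from sklearn.cluster import KMeans

        def kmeans_segmentation(image, k=3, seed=0):
            # Cluster voxel intensities of a 2-D MR slice into k tissue classes
            # (e.g. CSF, gray matter, white matter) and return a label map.
            intensities = image.reshape(-1, 1).astype(float)
            labels = KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(intensities)
            return labels.reshape(image.shape)

        # Usage: label_map = kmeans_segmentation(mr_slice, k=3)
        # Tissue volumes then follow from voxel counts per label times the voxel size.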

  17. Synchronization and Inter-Layer Interactions of Noise-Driven Neural Networks.

    Science.gov (United States)

    Yuniati, Anis; Mai, Te-Lun; Chen, Chi-Ming

    2017-01-01

    In this study, we used the Hodgkin-Huxley (HH) model of neurons to investigate the phase diagram of a developing single-layer neural network and that of a network consisting of two weakly coupled neural layers. These networks are noise driven and learn through the spike-timing-dependent plasticity (STDP) or the inverse STDP rules. We described how these networks transited from a non-synchronous background activity state (BAS) to a synchronous firing state (SFS) by varying the network connectivity and the learning efficacy. In particular, we studied the interaction between a SFS layer and a BAS layer, and investigated how synchronous firing dynamics was induced in the BAS layer. We further investigated the effect of the inter-layer interaction on a BAS to SFS repair mechanism by considering three types of neuron positioning (random, grid, and lognormal distributions) and two types of inter-layer connections (random and preferential connections). Among these scenarios, we concluded that the repair mechanism has the largest effect for a network with the lognormal neuron positioning and the preferential inter-layer connections.
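
    Both layers in this record learn through STDP or its inverse. As a reference, the snippet below implements the standard pair-based STDP window (pre-before-post potentiates, post-before-pre depresses); the amplitudes and time constants are common textbook values rather than the ones used in the study, and the inverse rule would simply flip the signs.

```python
import math

def stdp_dw(delta_t, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Pair-based STDP window. delta_t = t_post - t_pre in ms:
    delta_t >= 0 (pre before post) potentiates, delta_t < 0 depresses."""
    if delta_t >= 0:
        return a_plus * math.exp(-delta_t / tau_plus)
    return -a_minus * math.exp(delta_t / tau_minus)

print(stdp_dw(5.0), stdp_dw(-5.0))   # small potentiation, small depression
```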

  18. An improved superconducting neural circuit and its application for a neural network solving a combinatorial optimization problem

    International Nuclear Information System (INIS)

    Onomi, T; Nakajima, K

    2014-01-01

    We have proposed a superconducting Hopfield-type neural network for solving the N-Queens problem, which is one of the combinatorial optimization problems. The sigmoid-shaped neuron output function is represented by the output of a coupled-SQUID gate consisting of a single-junction and a double-junction SQUID. One important factor in improving network performance is improving the threshold characteristic of the neuron circuit. In this paper, we report an improved design of coupled-SQUID gates for a superconducting neural network. A step-like function with a steep threshold at the rising edge is desirable for a neuron circuit solving a combinatorial optimization problem. To obtain such characteristics, the neuron circuit is composed of two coupled-SQUID gates in a cascade connection. The designed neuron circuit was fabricated in a 2.5 kA/cm² Nb/AlOx/Nb process, and the operation of the fabricated neuron circuit is experimentally demonstrated. Moreover, we discuss the performance of the neural network using the improved neuron circuits and delayed negative self-connections.

  19. Consistent initial conditions for the Saint-Venant equations in river network modeling

    Directory of Open Access Journals (Sweden)

    C.-W. Yu

    2017-09-01

    Full Text Available Initial conditions for flows and depths (cross-sectional areas) throughout a river network are required for any time-marching (unsteady) solution of the one-dimensional (1-D) hydrodynamic Saint-Venant equations. For a river network modeled with several Strahler orders of tributaries, comprehensive and consistent synoptic data are typically lacking and synthetic starting conditions are needed. Because of underlying nonlinearity, poorly defined or inconsistent initial conditions can lead to convergence problems and long spin-up times in an unsteady solver. Two new approaches are defined and demonstrated herein for computing flows and cross-sectional areas (or depths). These methods can produce an initial condition data set that is consistent with modeled landscape runoff and river geometry boundary conditions at the initial time. These new methods are (1) the pseudo time-marching method (PTM) that iterates toward a steady-state initial condition using an unsteady Saint-Venant solver and (2) the steady-solution method (SSM) that makes use of graph theory for initial flow rates and solution of a steady-state 1-D momentum equation for the channel cross-sectional areas. The PTM is shown to be adequate for short river reaches but is significantly slower and has occasional non-convergent behavior for large river networks. The SSM approach is shown to provide a rapid solution of consistent initial conditions for both small and large networks, albeit with the requirement that additional code must be written rather than applying an existing unsteady Saint-Venant solver.
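
    The SSM step that assigns initial flow rates amounts to accumulating landscape runoff down the network graph. The sketch below shows that accumulation idea on a toy tree-shaped network; the data layout and function names are illustrative assumptions, and the paper's second step (solving a steady 1-D momentum equation for cross-sectional areas) is not reproduced.

```python
def initial_flows(downstream, lateral_inflow):
    """Steady-state starting flow per reach: its own runoff plus everything
    draining into it from upstream.  `downstream` maps reach -> receiving
    reach (None at the outlet); `lateral_inflow` maps reach -> local runoff."""
    flows = dict(lateral_inflow)
    pending = {r: 0 for r in downstream}        # upstream reaches not yet added
    for r, d in downstream.items():
        if d is not None:
            pending[d] += 1
    ready = [r for r, c in pending.items() if c == 0]   # headwater reaches first
    while ready:
        r = ready.pop()
        d = downstream[r]
        if d is not None:
            flows[d] += flows[r]
            pending[d] -= 1
            if pending[d] == 0:
                ready.append(d)
    return flows

# two headwaters joining into reach "C", which drains to the outlet
net = {"A": "C", "B": "C", "C": None}
print(initial_flows(net, {"A": 1.0, "B": 2.0, "C": 0.5}))  # C carries 3.5
```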

  20. Complex Behavior in an Integrate-and-Fire Neuron Model Based on Small World Networks

    International Nuclear Information System (INIS)

    Lin Min; Chen Tianlun

    2005-01-01

    Based on our previous pulse-coupled integrate-and-fire neuron model on small-world networks, we investigate the complex behavior of the electroencephalographic (EEG)-like activities produced by such a model. We find that the EEG-like activities have clear chaotic characteristics. We also analyze the EEG-like signals using methods such as spectral analysis, phase-space reconstruction, and the correlation dimension.
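
    The building block referred to here is the integrate-and-fire neuron. The fragment below is a plain leaky integrate-and-fire cell with threshold-and-reset dynamics, included only as a reminder of the model class; the pulse coupling, small-world wiring and parameter choices of the cited work are not reproduced, and all values are assumed for illustration.

```python
def lif(i_ext=1.5, T=200.0, dt=0.1, tau=10.0, v_reset=0.0, v_th=1.0):
    """Leaky integrate-and-fire neuron: tau * dV/dt = -V + I, spike when V >= v_th."""
    v, spike_times = 0.0, []
    for k in range(int(T / dt)):
        v += dt * (-v + i_ext) / tau
        if v >= v_th:                 # threshold crossing: emit a spike and reset
            spike_times.append(k * dt)
            v = v_reset
    return spike_times

print(len(lif()))   # spike count for a constant suprathreshold input
```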

  1. Sequentially firing neurons confer flexible timing in neural pattern generators

    International Nuclear Information System (INIS)

    Urban, Alexander; Ermentrout, Bard

    2011-01-01

    Neuronal networks exhibit a variety of complex spatiotemporal patterns that include sequential activity, synchrony, and wave-like dynamics. Inhibition is the primary means through which such patterns are implemented. This behavior depends both on the intrinsic dynamics of the individual neurons and on the connectivity patterns. Many neural circuits consist of networks of smaller subcircuits (motifs) that are coupled together to form the larger system. In this paper, we consider a particularly simple motif, comprising purely inhibitory interactions, which generates sequential periodic dynamics. We first describe the dynamics of the single motif, for general balanced coupling (all cells receive the same number and strength of inputs) and then for a specific class of balanced networks: circulant systems. We then couple these motifs together to form larger networks. Using the theory of weak coupling, we derive phase models which themselves have a certain structure and symmetry. We show that this structure endows the coupled system with the ability to produce arbitrary timing relationships between symmetrically coupled motifs and that the phase relationships are robust over a wide range of frequencies. The theory is applicable to many other systems in biology and physics.

  2. Linear stability analysis of retrieval state in associative memory neural networks of spiking neurons

    International Nuclear Information System (INIS)

    Yoshioka, Masahiko

    2002-01-01

    We study associative memory neural networks of Hodgkin-Huxley-type spiking neurons in which multiple periodic spatiotemporal patterns of spike timing are memorized as limit-cycle-type attractors. In encoding the spatiotemporal patterns, we assume spike-timing-dependent synaptic plasticity with an asymmetric time window. Analysis of the periodic solution of the retrieval state reveals that if the area of the negative part of the time window equals that of the positive part, crosstalk among encoded patterns vanishes. A phase transition due to the loss of stability of the periodic solution is observed when we assume a fast α function for the direct interaction among neurons. To evaluate the critical point of this phase transition, we employ Floquet theory, in which the stability problem for an infinite number of spiking neurons interacting through α functions is reduced to an eigenvalue problem for a finite-size matrix. Numerical integration of the single-body dynamics yields the explicit value of the matrix, which enables us to determine the critical point of the phase transition with a high degree of precision

  3. Visual language recognition with a feed-forward network of spiking neurons

    Energy Technology Data Exchange (ETDEWEB)

    Rasmussen, Craig E [Los Alamos National Laboratory]; Garrett, Kenyan [Los Alamos National Laboratory]; Sottile, Matthew [GALOIS]; Shreyas, Ns [INDIANA UNIV.]

    2010-01-01

    An analogy is made and exploited between the recognition of visual objects and language parsing. A subset of regular languages is used to define a one-dimensional 'visual' language, in which the words are translation and scale invariant. This allows an exploration of the viewpoint-invariant languages that can be solved by a network of concurrent, hierarchically connected processors. A language family is defined that is hierarchically tiling-system recognizable (HREC). Inspired by nature, an algorithm is presented that constructs a cellular automaton recognizing strings from a language in the HREC family. It is demonstrated how a language recognizer can be implemented from the cellular automaton using a feed-forward network of spiking neurons. This parser recognizes fixed-length strings from the language in parallel and, as the computation is pipelined, a new string can be parsed in each new time interval. The analogy with formal language theory allows inferences to be drawn regarding what class of objects can be recognized by visual cortex operating in a purely feed-forward fashion and what class of objects requires a more complicated network architecture.

  4. Chimera states in bursting neurons

    OpenAIRE

    Bera, Bidesh K.; Ghosh, Dibakar; Lakshmanan, M.

    2015-01-01

    We study the existence of chimera states in pulse-coupled networks of bursting Hindmarsh-Rose neurons with nonlocal, global, and local (nearest neighbor) couplings. Through a linear stability analysis, we discuss the behavior of the stability function in the incoherent (i.e. disordered), coherent, chimera, and multi-chimera states. Surprisingly, we find that chimera and multi-chimera states occur even with local nearest-neighbor interaction in a network of identical bursting neurons alone. This is i...

  5. Connectome-scale group-wise consistent resting-state network analysis in autism spectrum disorder

    Directory of Open Access Journals (Sweden)

    Yu Zhao

    2016-01-01

    Full Text Available Understanding the organizational architecture of human brain function and its alteration patterns in diseased brains, such as those of Autism Spectrum Disorder (ASD) patients, is of great interest. In-vivo functional magnetic resonance imaging (fMRI) offers a unique window to investigate the mechanisms of brain function and to identify functional network components of the human brain. Previously, we have shown that multiple concurrent functional networks can be derived from fMRI signals using whole-brain sparse representation. Yet it is still an open question how to derive group-wise consistent networks featured in ASD patients and controls. Here we propose an effective volumetric network descriptor, named the connectivity map, to compactly describe the spatial patterns of brain network maps, and we implement a fast framework in the Apache Spark environment that can effectively identify group-wise consistent networks in big fMRI datasets. Our experimental results identified 144 group-wise common intrinsic connectivity networks (ICNs) shared between ASD patients and healthy control subjects, where some ICNs are substantially different between the two groups. Moreover, further analysis of the functional connectivity and spatial overlap between these 144 common ICNs reveals connectomics signatures characterizing ASD patients and controls. In particular, the computing time of our Spark-enabled functional connectomics framework is significantly reduced from 240 hours (C++ code, single core) to 20 hours, showing great potential for handling fMRI big data in the future.

  6. Single neuron computation

    CERN Document Server

    McKenna, Thomas M; Zornetzer, Steven F

    1992-01-01

    This book contains twenty-two original contributions that provide a comprehensive overview of computational approaches to understanding a single neuron structure. The focus on cellular-level processes is twofold. From a computational neuroscience perspective, a thorough understanding of the information processing performed by single neurons leads to an understanding of circuit- and systems-level activity. From the standpoint of artificial neural networks (ANNs), a single real neuron is as complex an operational unit as an entire ANN, and formalizing the complex computations performed by real n

  7. Representation and traversal of documentation space. Data analysis, neuron networks and image banks

    International Nuclear Information System (INIS)

    Lelu, A.; Rosenblatt, D.

    1986-01-01

    Improvements in the visual representation of considerable amounts of data for the user are necessary for progress in documentation systems. We review practical implementations in this area, which additionally integrate concepts arising from data analysis in the most general sense. The relationship between data analysis and neuron networks is then established. Following a description of simulation experiments, we finally present software for displaying and traversing image banks that integrates most of the concepts developed in this article.

  8. Inhibitory neurons modulate spontaneous signaling in cultured cortical neurons: density-dependent regulation of excitatory neuronal signaling

    International Nuclear Information System (INIS)

    Serra, Michael; Guaraldi, Mary; Shea, Thomas B

    2010-01-01

    Cortical neuronal activity depends on a balance between excitatory and inhibitory influences. Culturing of neurons on multi-electrode arrays (MEAs) has provided insight into the development and maintenance of neuronal networks. Herein, we seeded MEAs with murine embryonic cortical/hippocampal neurons at different densities (1000 cells mm⁻²) and monitored the resultant spontaneous signaling. Sparsely seeded cultures displayed a large number of bipolar, rapid, high-amplitude individual signals with no apparent temporal regularity. By contrast, densely seeded cultures instead displayed clusters of signals at regular intervals. These patterns were observed even within thinner and thicker areas of the same culture. GABAergic neurons (25% of total neurons in our cultures) mediated the differential signal patterns observed above, since addition of the inhibitory antagonist bicuculline to dense cultures and hippocampal slice cultures induced the signal pattern characteristic of sparse cultures. Sparsely seeded cultures likely lacked sufficient inhibitory neurons to modulate excitatory activity. Differential seeding of MEAs can provide a unique model for analyses of perturbation of the interaction between excitatory and inhibitory function during aging and in neuropathological conditions where dysregulation of GABAergic neurons is a significant component.

  9. Functional clustering in hippocampal cultures: relating network structure and dynamics

    International Nuclear Information System (INIS)

    Feldt, S; Dzakpasu, R; Olariu, E; Żochowski, M; Wang, J X; Shtrahman, E

    2010-01-01

    In this work we investigate the relationship between gross anatomic structural network properties, neuronal dynamics and the resultant functional structure in dissociated rat hippocampal cultures. Specifically, we studied cultures as they developed under two conditions: the first supporting glial cell growth (high glial group), and the second one inhibiting it (low glial group). We then compared structural network properties and the spatio-temporal activity patterns of the neurons. Differences in dynamics between the two groups could be linked to the impact of the glial network on the neuronal network as the cultures developed. We also implemented a recently developed algorithm called the functional clustering algorithm (FCA) to obtain the resulting functional network structure. We show that this new algorithm is useful for capturing changes in functional network structure as the networks evolve over time. The FCA detects changes in functional structure that are consistent with expected dynamical differences due to the impact of the glial network. Cultures in the high glial group show an increase in global synchronization as the cultures age, while those in the low glial group remain locally synchronized. We additionally use the FCA to quantify the amount of synchronization present in the cultures and show that the total level of synchronization in the high glial group is stronger than in the low glial group. These results indicate an interdependence between the glial and neuronal networks present in dissociated cultures

  10. Development and application of an optogenetic platform for controlling and imaging a large number of individual neurons

    Science.gov (United States)

    Mohammed, Ali Ibrahim Ali

    The understanding and treatment of brain disorders, as well as the development of intelligent machines, are hampered by the lack of knowledge of how the brain fundamentally functions. Over the past century, we have learned much about how individual neurons and neural networks behave, but new tools are critically needed to interrogate how neural networks give rise to complex brain processes and disease conditions. Recent innovations in molecular techniques, such as optogenetics, have given neuroscientists unprecedented precision to excite, inhibit and record defined neurons. The impressive sensitivity of currently available optogenetic sensors and actuators has now made it possible to analyze a large number of individual neurons in the brains of behaving animals. To promote the use of these optogenetic tools, this thesis integrates cutting-edge optogenetic molecular sensors, which are ultrasensitive for imaging neuronal activity, with a custom wide-field optical microscope to analyze a large number of individual neurons in living brains. Wide-field microscopy provides a large field of view and a spatial resolution approaching the Abbe diffraction limit of the fluorescence microscope. To demonstrate the advantages of this optical platform, we imaged a deep brain structure, the hippocampus, and tracked hundreds of neurons over time while the mouse performed a memory task, to investigate how those individual neurons related to behavior. In addition, we tested our optical platform by investigating transient neural network changes upon mechanical perturbation related to blast injuries. In this experiment, all blasted mice showed a consistent change in the neural network: a small portion of neurons showed a sustained calcium increase for an extended period of time, whereas the majority lost their activity. Finally, using an optogenetic silencer to control selected motor cortex neurons, we examined their contributions to the network pathology of basal ganglia related to

  11. Autapse-induced multiple stochastic resonances in a modular neuronal network

    Science.gov (United States)

    Yang, XiaoLi; Yu, YanHu; Sun, ZhongKui

    2017-08-01

    This study investigates the nontrivial effects of autapse on stochastic resonance in a modular neuronal network subjected to bounded noise. The resonance effect of autapse is detected by imposing a self-feedback loop with autaptic strength and autaptic time delay to each constituent neuron. Numerical simulations have demonstrated that bounded noise with the proper level of amplitude can induce stochastic resonance; moreover, the noise induced resonance dynamics can be significantly shaped by the autapse. In detail, for a specific range of autaptic strength, multiple stochastic resonances can be induced when the autaptic time delays are appropriately adjusted. These appropriately adjusted delays are detected to nearly approach integer multiples of the period of the external weak signal when the autaptic strength is very near zero; otherwise, they do not match the period of the external weak signal when the autaptic strength is slightly greater than zero. Surprisingly, in both cases, the differences between arbitrary two adjacent adjusted autaptic delays are always approximately equal to the period of the weak signal. The phenomenon of autaptic delay induced multiple stochastic resonances is further confirmed to be robust against the period of the external weak signal and the intramodule probability of subnetwork. These findings could have important implications for weak signal detection and information propagation in realistic neural systems.

  12. Unsupervised discrimination of patterns in spiking neural networks with excitatory and inhibitory synaptic plasticity.

    Science.gov (United States)

    Srinivasa, Narayan; Cho, Youngkwan

    2014-01-01

    A spiking neural network model is described for learning to discriminate among spatial patterns in an unsupervised manner. The network anatomy consists of source neurons that are activated by external inputs, a reservoir that resembles a generic cortical layer with an excitatory-inhibitory (EI) network, and a sink layer of neurons for readout. Synaptic plasticity in the form of STDP is imposed on all the excitatory and inhibitory synapses at all times. While long-term excitatory STDP enables sparse and efficient learning of the salient features in inputs, inhibitory STDP makes this learning stable by establishing a balance between excitatory and inhibitory currents at each neuron in the network. The synaptic weights between source and reservoir neurons form a basis set for the input patterns. The neural trajectories generated in the reservoir due to input stimulation and lateral connections between reservoir neurons can be read out by the sink layer neurons. This activity is used for adaptation of the synapses between reservoir and sink layer neurons. A new measure, called the discriminability index (DI), is introduced to assess whether the network can discriminate between old patterns already presented in an initial training session. The DI is also used to assess whether the network adapts to new patterns without losing its ability to discriminate among old patterns. The final outcome is that the network is able to correctly discriminate between all patterns, both old and new. This result holds as long as inhibitory synapses employ STDP to continuously maintain the current balance in the network. The results suggest a possible direction for future investigation into how spiking neural networks could address the stability-plasticity question despite having continuous synaptic plasticity.

  13. Intraspinal serotonergic neurons consist of two, temporally distinct populations in developing zebrafish.

    Science.gov (United States)

    Montgomery, Jacob E; Wiggin, Timothy D; Rivera-Perez, Luis M; Lillesaar, Christina; Masino, Mark A

    2016-06-01

    Zebrafish intraspinal serotonergic neuron (ISN) morphology and distribution have been examined in detail at different ages; however, some aspects of the development of these cells remain unclear. Although antibodies to serotonin (5-HT) have detected ISNs in the ventral spinal cord of embryos, larvae, and adults, the only tryptophan hydroxylase (tph) transcript that has been described in the spinal cord is tph1a. Paradoxically, spinal tph1a is only expressed transiently in embryos, which brings the source of 5-HT in the ISNs of larvae and adults into question. Because the pet1 and tph2 promoters drive transgene expression in the spinal cord, we hypothesized that tph2 is expressed in spinal cords of zebrafish larvae. We confirmed this hypothesis through in situ hybridization. Next, we used 5-HT antibody labeling and transgenic markers of tph2-expressing neurons to identify a transient population of ISNs in embryos that was distinct from ISNs that appeared later in development. The existence of separate ISN populations may not have been recognized previously due to their shared location in the ventral spinal cord. Finally, we used transgenic markers and immunohistochemical labeling to identify the transient ISN population as GABAergic Kolmer-Agduhr double-prime (KA″) neurons. Altogether, this study revealed a novel developmental paradigm in which KA″ neurons are transiently serotonergic before the appearance of a stable population of tph2-expressing ISNs. © 2015 Wiley Periodicals, Inc.

  14. Neural-network-based depth computation for blind navigation

    Science.gov (United States)

    Wong, Farrah; Nagarajan, Ramachandran R.; Yaacob, Sazali

    2004-12-01

    Research undertaken to help blind people navigate autonomously or with minimum assistance is termed "Blind Navigation". In this research, an aid that could help blind people in their navigation is proposed. Distance serves as an important cue during navigation. A stereovision navigation aid, implemented with two digital video cameras that are spaced apart and fixed on a headgear to obtain distance information, is presented. In this paper, a neural network methodology is used to obtain the required parameters of the cameras, a step known as camera calibration. These parameters are not known a priori but are obtained by adjusting the weights of the network. The inputs to the network consist of the matching features in the stereo image pair. A back-propagation network with 16 input neurons, 3 hidden neurons and 1 output neuron, which gives the depth, is created. The distance information is incorporated into the final processed image as four gray levels: white, light gray, dark gray and black. Preliminary results have shown that the percentage errors fall below 10%. It is envisaged that the distance provided by the neural network will enable blind individuals to go near and pick up an object of interest.
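
    The 16-3-1 back-propagation network described here is small enough to sketch directly. The fragment below trains such a network on synthetic data with plain gradient descent; the input features, targets, learning rate and epoch count are placeholders, not the calibration data or settings of the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# 16 matching-feature inputs -> 3 sigmoid hidden units -> 1 linear depth output
X = rng.random((200, 16))
y = (X @ rng.random((16, 1))) / 16.0          # scaled synthetic "depth" targets

W1, b1 = rng.standard_normal((16, 3)) * 0.1, np.zeros((1, 3))
W2, b2 = rng.standard_normal((3, 1)) * 0.1, np.zeros((1, 1))
sig = lambda z: 1.0 / (1.0 + np.exp(-z))

lr = 0.05
for epoch in range(2000):
    h = sig(X @ W1 + b1)                      # hidden layer activations
    out = h @ W2 + b2                         # linear depth estimate
    err = out - y
    # backpropagation of the mean squared error
    dW2 = h.T @ err / len(X)
    db2 = err.mean(axis=0, keepdims=True)
    dh = (err @ W2.T) * h * (1 - h)
    dW1 = X.T @ dh / len(X)
    db1 = dh.mean(axis=0, keepdims=True)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

print(float(np.mean((out - y) ** 2)))         # final training error
```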

  15. Structural and functional properties of a probabilistic model of neuronal connectivity in a simple locomotor network

    Science.gov (United States)

    Merrison-Hort, Robert; Soffe, Stephen R; Borisyuk, Roman

    2018-01-01

    Although, in most animals, brain connectivity varies between individuals, behaviour is often similar across a species. What fundamental structural properties are shared across individual networks that define this behaviour? We describe a probabilistic model of connectivity in the hatchling Xenopus tadpole spinal cord which, when combined with a spiking model, reliably produces rhythmic activity corresponding to swimming. The probabilistic model allows calculation of structural characteristics that reflect common network properties, independent of individual network realisations. We use the structural characteristics to study examples of neuronal dynamics, in the complete network and various sub-networks, and this allows us to explain the basis for key experimental findings, and make predictions for experiments. We also study how structural and functional features differ between detailed anatomical connectomes and those generated by our new, simpler, model (meta-model). PMID:29589828

  16. Dynamical analysis of Parkinsonian state emulated by hybrid Izhikevich neuron models

    Science.gov (United States)

    Liu, Chen; Wang, Jiang; Yu, Haitao; Deng, Bin; Wei, Xile; Li, Huiyan; Loparo, Kenneth A.; Fietkiewicz, Chris

    2015-11-01

    Computational models play a significant role in exploring novel theories to complement the findings of physiological experiments. Various computational models have been developed to reveal the mechanisms underlying brain functions. In particular, in the development of therapies to modulate behavioral and pathological abnormalities, computational models provide the basic foundation for exhibiting transitions between physiological and pathological conditions. Considering the significant roles of the intrinsic properties of the globus pallidus and of the coupling connections between neurons in determining the firing patterns and dynamical activities of the basal ganglia neuronal network, we propose the hypothesis that pathological behaviors in the Parkinsonian state may originate from combined effects of the intrinsic properties of globus pallidus neurons and the synaptic conductances in the whole neuronal network. In order to establish a computationally efficient network model, the hybrid Izhikevich neuron model is used because of its capacity to capture the dynamical characteristics of biological neuronal activity. Detailed analysis of the individual Izhikevich neuron model assists in understanding the roles of the model parameters, which then facilitates the establishment of the basal ganglia-thalamic network model and contributes to further exploration of the underlying mechanisms of the Parkinsonian state. Simulation results show that the hybrid Izhikevich neuron model is capable of capturing many of the dynamical properties of the basal ganglia-thalamic neuronal network, such as variations of the firing rates and the emergence of synchronous oscillations under the Parkinsonian condition, despite the simplicity of the two-dimensional neuron model. This suggests that the computationally efficient hybrid Izhikevich neuron model can be used to explore normal and abnormal basal ganglia functions. In particular, it provides an efficient way of emulating the large-scale neuron network

  17. From Neurons to Brain: Adaptive Self-Wiring of Neurons

    OpenAIRE

    Segev, Ronen; Ben-Jacob, Eshel

    1998-01-01

    During embryonic morphogenesis, a collection of individual neurons turns into a functioning network with unique capabilities. Only recently has this most staggering example of an emergent process in the natural world begun to be studied. Here we propose a navigational strategy for neurite growth cones, based on sophisticated chemical signaling. We further propose that the embryonic environment (the neurons and the glia cells) acts as an excitable medium in which concentric and spiral chemical ...

  18. Using complex networks to quantify consistency in the use of words

    International Nuclear Information System (INIS)

    Amancio, D R; Oliveira Jr, O N; Costa, L da F

    2012-01-01

    In this paper we have quantified the consistency of word usage in written texts represented by complex networks, where words were taken as nodes, by measuring the degree of preservation of the node neighborhood. Words were considered highly consistent if the authors used them with the same neighborhood. When ranked according to the consistency of use, the words obeyed a log-normal distribution, in contrast to Zipf's law that applies to the frequency of use. Consistency correlated positively with the familiarity and frequency of use, and negatively with ambiguity and age of acquisition. An inspection of some highly consistent words confirmed that they are used in very limited semantic contexts. A comparison of consistency indices for eight authors indicated that these indices may be employed for author recognition. Indeed, as expected, authors of novels could be distinguished from those who wrote scientific texts. Our analysis demonstrated the suitability of the consistency indices, which can now be applied in other tasks, such as emotion recognition
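
    A rough way to reproduce the idea of neighborhood preservation is to build a co-occurrence network for each text and compare a word's neighbor sets with a Jaccard overlap. The sketch below does exactly that on toy sentences; it is a simplified stand-in under assumed definitions, not the consistency index defined in the paper.

```python
from collections import defaultdict

def neighbourhoods(tokens, window=1):
    """Co-occurrence network: each word is a node whose neighbours are the
    words appearing within `window` positions of it."""
    nb = defaultdict(set)
    for i, w in enumerate(tokens):
        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
            if i != j:
                nb[w].add(tokens[j])
    return nb

def consistency(tokens_a, tokens_b):
    """Jaccard overlap of each shared word's neighbourhood in two texts."""
    na, nb = neighbourhoods(tokens_a), neighbourhoods(tokens_b)
    return {w: len(na[w] & nb[w]) / len(na[w] | nb[w]) for w in set(na) & set(nb)}

print(consistency("the cat sat on the mat".split(),
                  "the cat lay on the mat".split()))
```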

  19. Emergent properties of interacting populations of spiking neurons.

    Science.gov (United States)

    Cardanobile, Stefano; Rotter, Stefan

    2011-01-01

    Dynamic neuronal networks are a key paradigm of increasing importance in brain research, concerned with the functional analysis of biological neuronal networks and, at the same time, with the synthesis of artificial brain-like systems. In this context, neuronal network models serve as mathematical tools to understand the function of brains, but they might as well develop into future tools for enhancing certain functions of our nervous system. Here, we present and discuss our recent achievements in developing multiplicative point processes into a viable mathematical framework for spiking network modeling. The perspective is that the dynamic behavior of these neuronal networks is faithfully reflected by a set of non-linear rate equations, describing all interactions on the population level. These equations are similar in structure to Lotka-Volterra equations, well known by their use in modeling predator-prey relations in population biology, but abundant applications to economic theory have also been described. We present a number of biologically relevant examples for spiking network function, which can be studied with the help of the aforementioned correspondence between spike trains and specific systems of non-linear coupled ordinary differential equations. We claim that, enabled by the use of multiplicative point processes, we can make essential contributions to a more thorough understanding of the dynamical properties of interacting neuronal populations.
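
    The rate equations mentioned here are described as "similar in structure to Lotka-Volterra equations". As a generic illustration of that equation class, the sketch below integrates a two-population Lotka-Volterra-type system with forward Euler; the coupling matrix, drives and initial rates are invented values, and the paper's own derivation from multiplicative point processes is not reproduced.

```python
import numpy as np

def lv_rates(W, b, r0, T=50.0, dt=0.01):
    """Euler integration of a Lotka-Volterra-type rate system:
        dr_i/dt = r_i * (b_i + sum_j W_ij r_j)."""
    r = np.array(r0, dtype=float)
    for _ in range(int(T / dt)):
        r += dt * r * (b + W @ r)
        r = np.clip(r, 0.0, None)        # population rates stay non-negative
    return r

# two populations with assumed self-inhibition and cross-coupling
W = np.array([[-0.5, -0.4], [0.6, -0.5]])
b = np.array([1.0, 0.2])
print(lv_rates(W, b, [0.1, 0.1]))        # approaches the fixed-point rates
```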

  20. Barbed channels enhance unidirectional connectivity between neuronal networks cultured on multi electrode arrays

    Science.gov (United States)

    le Feber, Joost; Postma, Wybren; de Weerd, Eddy; Weusthof, Marcel; Rutten, Wim L. C.

    2015-01-01

    Cultured neurons on multi electrode arrays (MEAs) have been widely used to study various aspects of neuronal (network) functioning. A possible drawback of this approach is the lack of structure in these networks. At the single cell level, several solutions have been proposed to enable directed connectivity, and promising results were obtained. At the level of connected sub-populations, a few attempts have been made with promising results. First assessment of the designs' functionality, however, suggested room for further improvement. We designed a two chamber MEA aiming to create a unidirectional connection between the networks in both chambers (“emitting” and “receiving”). To achieve this unidirectionality, all interconnecting channels contained barbs that hindered axon growth in the opposite direction (from receiving to emitting chamber). Visual inspection showed that axons predominantly grew through the channels in the promoted direction. This observation was confirmed by spontaneous activity recordings. Cross-correlation between the signals from two electrodes inside the channels suggested signal propagation at ≈2 m/s from emitting to receiving chamber. Cross-correlation between the firing patterns in both chambers indicated that most correlated activity was initiated in the emitting chamber, which was also reflected by a significantly lower fraction of partial bursts (i.e., a one-chamber-only burst) in the emitting chamber. Finally, electrical stimulation in the emitting chamber induced a fast response in that chamber, and a slower response in the receiving chamber. Stimulation in the receiving chamber evoked a fast response in that chamber, but no response in the emitting chamber. These results confirm the predominantly unidirectional nature of the connecting channels from emitting to receiving chamber. PMID:26578869

  2. The simplest maximum entropy model for collective behavior in a neural network

    International Nuclear Information System (INIS)

    Tkačik, Gašper; Marre, Olivier; Mora, Thierry; Amodei, Dario; Bialek, William; Berry II, Michael J

    2013-01-01

    Recent work emphasizes that the maximum entropy principle provides a bridge between statistical mechanics models for collective behavior in neural networks and experiments on networks of real neurons. Most of this work has focused on capturing the measured correlations among pairs of neurons. Here we suggest an alternative, constructing models that are consistent with the distribution of global network activity, i.e. the probability that K out of N cells in the network generate action potentials in the same small time bin. The inverse problem that we need to solve in constructing the model is analytically tractable, and provides a natural ‘thermodynamics’ for the network in the limit of large N. We analyze the responses of neurons in a small patch of the retina to naturalistic stimuli, and find that the implied thermodynamics is very close to an unusual critical point, in which the entropy (in proper units) is exactly equal to the energy. (paper)
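
    The model class described here is the maximum-entropy distribution consistent with the global activity count K: every binary word with the same K receives the same probability, P(word) = P(K)/C(N,K). The sketch below estimates the empirical P(K) from a toy spike raster and converts it into the corresponding per-word "energies"; the raster is random surrogate data, not retinal recordings.

```python
import numpy as np
from math import comb, log

def global_activity_model(raster):
    """raster: binary array (time bins x N cells).  Returns the empirical P(K)
    and the energies E(K) = -log[P(K) / C(N, K)] of the max-ent model that is
    consistent with the distribution of the population count K."""
    T, N = raster.shape
    K = raster.sum(axis=1).astype(int)
    pK = np.bincount(K, minlength=N + 1) / T
    energy = np.array([(-log(pK[k] / comb(N, k)) if pK[k] > 0 else np.inf)
                       for k in range(N + 1)])
    return pK, energy

# toy raster: 10 cells firing independently with probability 0.1 per bin
raster = (np.random.default_rng(1).random((5000, 10)) < 0.1).astype(int)
pK, E = global_activity_model(raster)
print(np.round(pK[:4], 3))
```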

  3. Nonlinear Bayesian filtering and learning: a neuronal dynamics for perception.

    Science.gov (United States)

    Kutschireiter, Anna; Surace, Simone Carlo; Sprekeler, Henning; Pfister, Jean-Pascal

    2017-08-18

    The robust estimation of dynamical hidden features, such as the position of prey, based on sensory inputs is one of the hallmarks of perception. This dynamical estimation can be rigorously formulated by nonlinear Bayesian filtering theory. Recent experimental and behavioral studies have shown that animals' performance in many tasks is consistent with such a Bayesian statistical interpretation. However, it is presently unclear how a nonlinear Bayesian filter can be efficiently implemented in a network of neurons that satisfies some minimum constraints of biological plausibility. Here, we propose the Neural Particle Filter (NPF), a sampling-based nonlinear Bayesian filter, which does not rely on importance weights. We show that this filter can be interpreted as the neuronal dynamics of a recurrently connected rate-based neural network receiving feed-forward input from sensory neurons. Further, it captures properties of temporal and multi-sensory integration that are crucial for perception, and it allows for online parameter learning with a maximum likelihood approach. The NPF holds the promise to avoid the 'curse of dimensionality', and we demonstrate numerically its capability to outperform weighted particle filters in higher dimensions and when the number of particles is limited.

  4. On Rhythms in Neuronal Networks with Recurrent Excitation.

    Science.gov (United States)

    Börgers, Christoph; Takeuchi, R Melody; Rosebrock, Daniel T

    2018-02-01

    We investigate rhythms in networks of neurons with recurrent excitation, that is, with excitatory cells exciting each other. Recurrent excitation can sustain activity even when the cells in the network are driven below threshold, too weak to fire on their own. This sort of "reverberating" activity is often thought to be the basis of working memory. Recurrent excitation can also lead to "runaway" transitions, sudden transitions to high-frequency firing; this may be related to epileptic seizures. Not all fundamental questions about these phenomena have been answered with clarity in the literature. We focus on three questions here: (1) How much recurrent excitation is needed to sustain reverberating activity? How does the answer depend on parameters? (2) Is there a positive minimum frequency of reverberating activity, a positive "onset frequency"? How does it depend on parameters? (3) When do runaway transitions occur? For reduced models, we give mathematical answers to these questions. We also examine computationally to which extent our findings are reflected in the behavior of biophysically more realistic model networks. Our main results can be summarized as follows. (1) Reverberating activity can be fueled by extremely weak slow recurrent excitation, but only by sufficiently strong fast recurrent excitation. (2) The onset of reverberating activity, as recurrent excitation is strengthened or external drive is raised, occurs at a positive frequency. It is faster when the external drive is weaker (and the recurrent excitation stronger). It is slower when the recurrent excitation has a longer decay time constant. (3) Runaway transitions occur only with fast, not with slow, recurrent excitation. We also demonstrate that the relation between reverberating activity fueled by recurrent excitation and runaway transitions can be visualized in an instructive way by a (generalized) cusp catastrophe surface.

  5. Context-specific metabolic networks are consistent with experiments.

    Directory of Open Access Journals (Sweden)

    Scott A Becker

    2008-05-01

    Full Text Available Reconstructions of cellular metabolism are publicly available for a variety of different microorganisms and some mammalian genomes. To date, these reconstructions are "genome-scale" and strive to include all reactions implied by the genome annotation, as well as those with direct experimental evidence. Clearly, many of the reactions in a genome-scale reconstruction will not be active under particular conditions or in a particular cell type. Methods to tailor these comprehensive genome-scale reconstructions into context-specific networks will aid predictive in silico modeling for a particular situation. We present a method called Gene Inactivity Moderated by Metabolism and Expression (GIMME) to achieve this goal. The GIMME algorithm uses quantitative gene expression data and one or more presupposed metabolic objectives to produce the context-specific reconstruction that is most consistent with the available data. Furthermore, the algorithm provides a quantitative inconsistency score indicating how consistent a set of gene expression data is with a particular metabolic objective. We show that this algorithm produces results consistent with biological experiments and intuition for adaptive evolution of bacteria, rational design of metabolic engineering strains, and human skeletal muscle cells. This work represents progress towards producing constraint-based models of metabolism that are specific to the conditions where the expression profiling data is available.

  6. Motor Neurons

    DEFF Research Database (Denmark)

    Hounsgaard, Jorn

    2017-01-01

    Motor neurons translate synaptic input from widely distributed premotor networks into patterns of action potentials that orchestrate motor unit force and motor behavior. Intercalated between the CNS and muscles, motor neurons add to and adjust the final motor command. The identity and functional...... in in vitro preparations is far from complete. Nevertheless, a foundation has been provided for pursuing functional significance of intrinsic response properties in motoneurons in vivo during motor behavior at levels from molecules to systems....

  7. Stochastic Resonance in Neuronal Network Motifs with Ornstein-Uhlenbeck Colored Noise

    Directory of Open Access Journals (Sweden)

    Xuyang Lou

    2014-01-01

    Full Text Available We consider here the effect of the Ornstein-Uhlenbeck colored noise on the stochastic resonance of the feed-forward-loop (FFL) network motif. The FFL motif is modeled through the FitzHugh-Nagumo neuron model as well as the chemical coupling. Our results show that the noise intensity and the correlation time of the noise process serve as the control parameters, which have great impacts on the stochastic dynamics of the FFL motif. We find that, with a proper choice of noise intensities and the correlation time of the noise process, the signal-to-noise ratio (SNR) can display more than one peak.
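
    The control parameters in this record are the intensity and correlation time of Ornstein-Uhlenbeck colored noise. The snippet below generates such a noise trace with Euler-Maruyama under one common parameterization (stationary variance D/tau); the convention and parameter values are assumptions for illustration and may differ from those of the paper.

```python
import numpy as np

def ou_noise(T=200.0, dt=0.01, tau=1.0, D=0.5, seed=0):
    """Ornstein-Uhlenbeck colored noise:
        dx = -(x / tau) dt + (sqrt(2 D) / tau) dW."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    x = np.empty(n)
    x[0] = 0.0
    for k in range(1, n):
        x[k] = (x[k - 1] - (x[k - 1] / tau) * dt
                + (np.sqrt(2.0 * D) / tau) * np.sqrt(dt) * rng.standard_normal())
    return x

xs = ou_noise()
print(xs.var())   # should approach the stationary variance D / tau
```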

  8. The mechanism of synchronization in feed-forward neuronal networks

    International Nuclear Information System (INIS)

    Goedeke, S; Diesmann, M

    2008-01-01

    Synchronization in feed-forward subnetworks of the brain has been proposed to explain the precisely timed spike patterns observed in experiments. While the attractor dynamics of these networks is now well understood, the underlying single neuron mechanisms remain unexplained. Previous attempts have captured the effects of the highly fluctuating membrane potential by relating spike intensity f(U) to the instantaneous voltage U generated by the input. This article shows that f is high during the rise and low during the decay of U(t), demonstrating that the U̇-dependence of f, not refractoriness, is essential for synchronization. Moreover, the bifurcation scenario is quantitatively described by a simple f(U, U̇) relationship. These findings suggest f(U, U̇) as the relevant model class for the investigation of neural synchronization phenomena in a noisy environment

  9. Network inference from functional experimental data (Conference Presentation)

    Science.gov (United States)

    Desrosiers, Patrick; Labrecque, Simon; Tremblay, Maxime; Bélanger, Mathieu; De Dorlodot, Bertrand; Côté, Daniel C.

    2016-03-01

    Functional connectivity maps of neuronal networks are critical tools for understanding how neurons form circuits, how information is encoded and processed by neurons, how memory is shaped, and how these basic processes are altered under pathological conditions. Current light microscopy allows the calcium or electrical activity of thousands of neurons to be observed simultaneously, yet assessing comprehensive connectivity maps directly from such data remains a non-trivial analytical task. There exist simple statistical methods, such as cross-correlation and Granger causality, but they only detect linear interactions between neurons. Other, more involved inference methods inspired by information theory, such as mutual information and transfer entropy, identify connections between neurons more accurately but also require more computational resources. We carried out a comparative study of common connectivity inference methods. The relative accuracy and computational cost of each method were determined via simulated fluorescence traces generated with realistic computational models of interacting neurons in networks of different topologies (clustered or non-clustered) and sizes (10-1000 neurons). To bridge the computational and experimental work, we observed the intracellular calcium activity of live hippocampal neuronal cultures infected with the fluorescent calcium marker GCaMP6f. The spontaneous activity of the networks, consisting of 50-100 neurons per field of view, was recorded at 20 to 50 Hz on a microscope controlled by homemade software. We implemented all connectivity inference methods in the software, which rapidly loads calcium fluorescence movies, segments the images, extracts the fluorescence traces, and assesses the functional connections (with strengths and directions) between each pair of neurons. We used this software to assess, in real time, the functional connectivity from real calcium imaging data in basal conditions, under plasticity protocols, and epileptic
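
    The simplest of the inference methods compared here is lagged cross-correlation. The sketch below estimates a directed connectivity matrix from activity traces by taking the peak of the cross-correlation over positive lags; the trace data are synthetic, the lag range is arbitrary, and, as the record notes, this captures only linear interactions.

```python
import numpy as np

def xcorr_connectivity(traces, max_lag=5):
    """Pairwise connectivity estimate from activity traces (cells x time):
    C[i, j] is the peak lagged correlation of cell i leading cell j."""
    z = traces - traces.mean(axis=1, keepdims=True)
    z /= z.std(axis=1, keepdims=True) + 1e-12
    n, T = z.shape
    C = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            best = 0.0
            for lag in range(1, max_lag + 1):
                c = np.dot(z[i, :-lag], z[j, lag:]) / (T - lag)
                best = max(best, abs(c))
            C[i, j] = best        # putative influence of cell i on cell j
    return C

traces = np.random.default_rng(2).standard_normal((5, 2000))
traces[1, 3:] += 0.8 * traces[0, :-3]     # cell 0 drives cell 1 with a 3-bin lag
print(np.round(xcorr_connectivity(traces)[0, 1], 2))
```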

  10. Adaptive coupling optimized spiking coherence and synchronization in Newman-Watts neuronal networks.

    Science.gov (United States)

    Gong, Yubing; Xu, Bo; Wu, Ya'nan

    2013-09-01

    In this paper, we have numerically studied the effect of adaptive coupling on the temporal coherence and synchronization of spiking activity in Newman-Watts Hodgkin-Huxley neuronal networks. It is found that random shortcuts can enhance the spiking synchronization more rapidly when the increment speed of the adaptive coupling is increased, and can optimize the temporal coherence of spikes only when the increment speed of the adaptive coupling is appropriate. It is also found that the adaptive coupling strength can enhance the synchronization of spikes and can optimize their temporal coherence when the random shortcuts are appropriate. These results show that adaptive coupling strongly influences the spiking activity related to random shortcuts and can enhance and optimize the temporal coherence and synchronization of the spiking activity of the network. These findings can help better understand the role of adaptive coupling in improving information processing and transmission in neural systems.

  11. MicroRNA-128 governs neuronal excitability and motor behavior in mice

    DEFF Research Database (Denmark)

    Tan, Chan Lek; Plotkin, Joshua L.; Venø, Morten Trillingsgaard

    2013-01-01

    The control of motor behavior in animals and humans requires constant adaptation of neuronal networks to signals of various types and strengths. We found that microRNA-128 (miR-128), which is expressed in adult neurons, regulates motor behavior by modulating neuronal signaling networks and excita...

  12. Extraction of Inter-Aural Time Differences Using a Spiking Neuron Network Model of the Medial Superior Olive

    Directory of Open Access Journals (Sweden)

    Jörg Encke

    2018-03-01

    Full Text Available The mammalian auditory system is able to extract temporal and spectral features from sound signals at the two ears. One important cue for the localization of low-frequency sound sources in the horizontal plane is the inter-aural time difference (ITD), which is first analyzed in the medial superior olive (MSO) in the brainstem. Neural recordings of ITD tuning curves at various stages along the auditory pathway suggest that ITDs in the mammalian brainstem are not represented in the form of a Jeffress-type place code. An alternative is the hemispheric opponent-channel code, according to which ITDs are encoded as the difference between the responses of the MSO nuclei in the two hemispheres. In this study, we present a physiologically plausible, spiking neuron network model of the mammalian MSO circuit and apply two different methods of extracting ITDs from arbitrary sound signals. The network model is driven by a functional model of the auditory periphery and physiological models of the cochlear nucleus and the MSO. Using a linear opponent-channel decoder, we show that the network is able to detect changes in ITD with a precision down to 10 μs and that the sensitivity of the decoder depends on the slope of the ITD-rate functions. A second approach uses an artificial neural network to predict ITDs directly from the spiking output of the MSO and ANF models. Using this predictor, we show that the MSO network is able to reliably encode static and time-dependent ITDs over a large frequency range, also for complex signals like speech.

  13. Observability and synchronization of neuron models

    Science.gov (United States)

    Aguirre, Luis A.; Portes, Leonardo L.; Letellier, Christophe

    2017-10-01

    Observability is the property that enables recovering the state of a dynamical system from a reduced number of measured variables. In high-dimensional systems, it is therefore important to make sure that the variable recorded to perform the analysis conveys good observability of the system dynamics. The observability of a network of neuron models depends nontrivially on the observability of the node dynamics and on the topology of the network. The aim of this paper is twofold. First, to perform a study of observability using four well-known neuron models by computing three different observability coefficients. This not only clarifies observability properties of the models but also shows the limitations of applicability of each type of coefficients in the context of such models. Second, to study the emergence of phase synchronization in networks composed of neuron models. This is done performing multivariate singular spectrum analysis which, to the best of the authors' knowledge, has not been used in the context of networks of neuron models. It is shown that it is possible to detect phase synchronization: (i) without having to measure all the state variables, but only one (that provides greatest observability) from each node and (ii) without having to estimate the phase.

  14. Studying the Behaviour of Model of Mirror Neuron System in Case of Autism

    Directory of Open Access Journals (Sweden)

    Shikha Anirban

    2012-04-01

    Full Text Available Several experiments conducted by researchers have suggested that autism is caused by a dysfunctional mirror neuron system and that the dysfunction of the mirror neuron system is proportional to the symptom severity of autism. In the present work those experiments were studied, as well as a model of the mirror neuron system called MNS2 developed by a research group. This research examined the behavior of the model in the case of autism and compared the result with the studies relating mirror neuron system dysfunction to autism. To perform this, a neural network employing the model was developed which recognized three types of grasping (faster, normal and slower). The network was implemented with the backpropagation-through-time learning algorithm. The whole grasping process was divided into 30 time steps, and the different hand and object states at each time step were used as the input of the network. Normally the network successfully recognized all three types of grasp. The network required more time as the number of inactive neurons increased, and with the maximum number of inactive neurons in the mirror neuron system the network became unable to recognize the type of grasp. As the time to recognize the type of grasp is proportional to the number of inactive neurons, the experimental results support the hypothesis that dysfunction of the MNS is proportional to the symptom severity of autism. Keywords: Autism, MNS, mirror neuron, neural network, BPTT

  15. Deep Spiking Networks

    NARCIS (Netherlands)

    O'Connor, P.; Welling, M.

    2016-01-01

    We introduce an algorithm to do backpropagation on a spiking network. Our network is "spiking" in the sense that our neurons accumulate their activation into a potential over time, and only send out a signal (a "spike") when this potential crosses a threshold and the neuron is reset. Neurons only

  16. Attractive target wave patterns in complex networks consisting of excitable nodes

    International Nuclear Information System (INIS)

    Zhang Li-Sheng; Mi Yuan-Yuan; Liao Xu-Hong; Qian Yu; Hu Gang

    2014-01-01

    This review describes investigations of oscillatory complex networks consisting of excitable nodes, focusing on target wave patterns, that is, target wave attractors. A method of dominant phase-advanced driving (DPAD) is introduced to reveal the dynamic structures in the networks supporting oscillations, such as the oscillation sources and the main excitation propagation paths from the sources to the whole network. The target center nodes and their drivers are regarded as the key nodes, which can completely determine the corresponding target wave patterns. Therefore, the center (say node A) and its driver (say node B) of a target wave can be used as a label, (A,B), of the given target pattern. The label gives a clue for conveniently retrieving, suppressing, and controlling the target waves. Statistical investigations, both theoretical from the label analysis and numerical from direct simulations of the network dynamics, show that there exist huge numbers of target wave attractors in excitable complex networks if the system size is large, and all these attractors can be labeled and easily controlled based on the information given by the labels. The possible applications of these physical ideas and mathematical methods about the multiplicity and labelability of attractors to memory problems in neural networks are briefly discussed. (topical review - statistical physics and complex systems)

  17. Modular networks with delayed coupling: Synchronization and frequency control

    Science.gov (United States)

    Maslennikov, Oleg V.; Nekorkin, Vladimir I.

    2014-07-01

    We study the collective dynamics of modular networks consisting of map-based neurons which generate irregular spike sequences. Three types of intramodule topology are considered: a random Erdős-Rényi network, a small-world Watts-Strogatz network, and a scale-free Barabási-Albert network. The interaction between the neurons of different modules is organized by relatively sparse connections with time delay. For all the types of network topology considered, we found that with increasing delay two regimes of module synchronization alternate with each other: in-phase and antiphase. At the same time, the average rate of collective oscillations decreases within each of the time-delay intervals corresponding to a particular synchronization regime. The time delay thus plays a dual role: it controls both the synchronization mode and degree, and the average network frequency. Furthermore, we investigate the influence of other parameters on modular synchronization: the strength of intermodule coupling and the individual firing rate.

  18. Autaptic effects on synchrony of neurons coupled by electrical synapses

    Science.gov (United States)

    Kim, Youngtae

    2017-07-01

    In this paper, we numerically study the effects of a special synapse known as an autapse on the synchronization of populations of Morris-Lecar (ML) neurons coupled by electrical synapses. Several configurations of the ML neuronal populations, such as a pair, a ring, or a globally coupled network with and without autapses, are examined. While most papers on autaptic effects on synchronization have used networks of neurons with the same spiking rate, we use networks of neurons with different spiking rates. We find that the optimal autaptic coupling strength and autaptic time delay enhance synchronization in our neural networks. We use phase response curve analysis to explain the enhanced synchronization by autapses. Our findings reveal the important relationship between the intraneuronal feedback loop and the interneuronal coupling.
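
    One commonly used way to write the delayed autaptic feedback for an electrically coupled neuron i, given here as an assumption about the general model class rather than the paper's exact equations:

```latex
% Electrical (gap-junction) coupling plus a delayed electrical autapse on neuron i.
% g_c: coupling strength, g_a: autaptic strength, tau: autaptic delay,
% delta_i = 1 if neuron i carries an autapse and 0 otherwise (all assumptions).
C \frac{dV_i}{dt} = I_{\mathrm{ion}}(V_i, w_i) + I_{\mathrm{ext}}
    + g_c \sum_{j \in \mathcal{N}(i)} (V_j - V_i)
    + \delta_i \, g_a \bigl( V_i(t-\tau) - V_i(t) \bigr)
```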

  19. Fetal brain extracellular matrix boosts neuronal network formation in 3D bioengineered model of cortical brain tissue.

    Science.gov (United States)

    Sood, Disha; Chwalek, Karolina; Stuntz, Emily; Pouli, Dimitra; Du, Chuang; Tang-Schomer, Min; Georgakoudi, Irene; Black, Lauren D; Kaplan, David L

    2016-01-01

    The extracellular matrix (ECM), constituting up to 20% of the organ volume, is a significant component of the brain due to its instructive role in the compartmentalization of functional microdomains in every brain structure. The composition, quantity and structure of the ECM change dramatically during the development of an organism, greatly contributing to the remarkably sophisticated architecture and function of the brain. Since the fetal brain is highly plastic, we hypothesize that the fetal brain ECM may contain cues promoting neural growth and differentiation, which are highly desired in regenerative medicine. Thus, we studied the effect of brain-derived fetal and adult ECM, complemented with matricellular proteins, on cortical neurons using an in vitro 3D bioengineered model of cortical brain tissue. The tested parameters included neuronal network density, cell viability, calcium signaling and electrophysiology. Both adult and fetal brain ECM, as well as matricellular proteins, significantly improved neural network formation compared to a single-component collagen I matrix. Additionally, the brain ECM improved cell viability and lowered glutamate release. The fetal brain ECM induced superior neural network formation, calcium signaling and spontaneous spiking activity over adult brain ECM. This study highlights the difference in the neuroinductive properties of fetal and adult brain ECM and suggests that delineating the basis for this divergence may have implications for regenerative medicine.

  20. [Hardware Implementation of Numerical Simulation Function of Hodgkin-Huxley Model Neurons Action Potential Based on Field Programmable Gate Array].

    Science.gov (United States)

    Wang, Jinlong; Lu, Mai; Hu, Yanwen; Chen, Xiaoqiang; Pan, Qiangqiang

    2015-12-01

    The neuron is the basic unit of the biological nervous system. The Hodgkin-Huxley (HH) model is one of the most realistic neuron models in terms of describing the electrophysiological characteristics of a neuron. Hardware implementation of neurons could provide new research ideas for the clinical treatment of spinal cord injury, for bionics and for artificial intelligence. Based on the HH model neuron and DSP Builder technology, in the present study a single HH model neuron was implemented in hardware on a Field Programmable Gate Array (FPGA). The neuron implemented in the FPGA was stimulated by different types of current, the action potential response characteristics were analyzed, and the correlation coefficients between the numerical simulation results and the hardware implementation results were calculated. The results showed that the action potential response of the FPGA neuron was highly consistent with the numerical simulation. This work lays the foundation for the hardware implementation of neural networks.
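
    A minimal sketch of the kind of numerical Hodgkin-Huxley simulation such a hardware implementation is validated against, using standard textbook parameters; the integration step, stimulus and initial values are illustrative, not taken from the paper.

```python
# Sketch only: forward-Euler integration of the classical HH point neuron.
import numpy as np

C, gNa, gK, gL = 1.0, 120.0, 36.0, 0.3           # uF/cm^2, mS/cm^2
ENa, EK, EL = 50.0, -77.0, -54.387                # mV

def a_n(V): return 0.01*(V+55)/(1-np.exp(-(V+55)/10))
def b_n(V): return 0.125*np.exp(-(V+65)/80)
def a_m(V): return 0.1*(V+40)/(1-np.exp(-(V+40)/10))
def b_m(V): return 4.0*np.exp(-(V+65)/18)
def a_h(V): return 0.07*np.exp(-(V+65)/20)
def b_h(V): return 1.0/(1+np.exp(-(V+35)/10))

dt, T, I_ext = 0.01, 50.0, 10.0                   # ms, ms, uA/cm^2
V, n, m, h = -65.0, 0.317, 0.053, 0.596           # resting-state initial values
trace = []
for _ in range(int(T/dt)):
    I_Na = gNa*m**3*h*(V-ENa)
    I_K  = gK*n**4*(V-EK)
    I_L  = gL*(V-EL)
    V += dt*(I_ext - I_Na - I_K - I_L)/C
    n += dt*(a_n(V)*(1-n) - b_n(V)*n)
    m += dt*(a_m(V)*(1-m) - b_m(V)*m)
    h += dt*(a_h(V)*(1-h) - b_h(V)*h)
    trace.append(V)
print(max(trace))   # action-potential peak, typically around +40 mV
```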

  1. The mirror neuron system.

    Science.gov (United States)

    Cattaneo, Luigi; Rizzolatti, Giacomo

    2009-05-01

    Mirror neurons are a class of neurons, originally discovered in the premotor cortex of monkeys, that discharge both when individuals perform a given motor act and when they observe others perform that same motor act. Ample evidence demonstrates the existence of a cortical network with the properties of mirror neurons (mirror system) in humans. The human mirror system is involved in understanding others' actions and the intentions behind them, and it underlies mechanisms of observational learning. Herein, we will discuss the clinical implications of the mirror system.

  2. Creation of defined single cell resolution neuronal circuits on microelectrode arrays

    Science.gov (United States)

    Pirlo, Russell Kirk

    2009-12-01

    The way cell-cell organization of neuronal networks influences activity and facilitates function is not well understood. Microelectrode arrays (MEAs) and advancing cell patterning technologies have enabled access to and control of in vitro neuronal networks spawning much new research in neuroscience and neuroengineering. We propose that small, simple networks of neurons with defined circuitry may serve as valuable research models where every connection can be analyzed, controlled and manipulated. Towards the goal of creating such neuronal networks we have applied microfabricated elastomeric membranes, surface modification and our unique laser cell patterning system to create defined neuronal circuits with single-cell precision on MEAs. Definition of synaptic connectivity was imposed by the 3D physical constraints of polydimethylsiloxane elastomeric membranes. The membranes had 20 μm clear-through holes and 2-3 μm deep channels which when applied to the surface of the MEA formed microwells to confine neurons to electrodes connected via shallow tunnels to direct neurite outgrowth. Tapering and turning of channels was used to influence neurite polarity. Biocompatibility of the membranes was increased by vacuum baking, oligomer extraction, and autoclaving. Membranes were bound to the MEA by oxygen plasma treatment and heated pressure. The MEA/membrane surface was treated with oxygen plasma, poly-D-lysine and laminin to improve neuron attachment, survival and neurite outgrowth. Prior to cell patterning the outer edge of culture area was seeded with 5×10^5 cells per cm and incubated for 2 days. Single embryonic day 7 chick forebrain neurons were then patterned into the microwells and onto the electrodes using our laser cell patterning system. Patterned neurons successfully attached to and were confined to the electrodes. Neurites extended through the interconnecting channels and connected with adjacent neurons. These results demonstrate that neuronal circuits can be

  3. Multiple coherence resonances and synchronization transitions by time delay in adaptive scale-free neuronal networks with spike-timing-dependent plasticity

    International Nuclear Information System (INIS)

    Xie, Huijuan; Gong, Yubing

    2017-01-01

    In this paper, we numerically study the effect of spike-timing-dependent plasticity (STDP) on multiple coherence resonances (MCR) and synchronization transitions (ST) induced by time delay in adaptive scale-free Hodgkin–Huxley neuronal networks. It is found that STDP has a big influence on MCR and ST induced by time delay and on the effect of network average degree on the MCR and ST. MCR is enhanced or suppressed as the adjusting rate A_p of STDP decreases or increases, and there is optimal A_p by which ST becomes strongest. As network average degree 〈k〉 increases, ST is enhanced and there is optimal 〈k〉 at which MCR becomes strongest. Moreover, for a larger A_p value, ST is enhanced more rapidly with increasing 〈k〉 and the optimal 〈k〉 for MCR increases. These results show that STDP can either enhance or suppress MCR, and there is optimal STDP that can most strongly enhance ST induced by time delay in the adaptive neuronal networks. These findings could find potential implication for the information processing and transmission in neural systems.
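
    For reference, an illustrative sketch of the standard additive exponential STDP window in which A_p plays the role of the potentiation adjusting rate; the exact rule and parameter values used in the paper are not reproduced here and the numbers below are assumptions.

```python
# Sketch only: additive STDP window with an adjusting rate A_p.
import numpy as np

A_p, tau_p = 0.005, 20.0      # potentiation adjusting rate and time constant (ms)
A_d, tau_d = 0.00525, 20.0    # depression amplitude; a ratio A_d = 1.05 * A_p is common

def stdp_dw(delta_t):
    """Weight change for a spike-time difference delta_t = t_post - t_pre (ms)."""
    if delta_t > 0:                          # pre before post: potentiation
        return A_p * np.exp(-delta_t / tau_p)
    else:                                    # post before (or with) pre: depression
        return -A_d * np.exp(delta_t / tau_d)

for dt_ in (-20.0, -5.0, 5.0, 20.0):
    print(dt_, stdp_dw(dt_))
```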

  4. Fragility in dynamic networks: application to neural networks in the epileptic cortex.

    Science.gov (United States)

    Sritharan, Duluxan; Sarma, Sridevi V

    2014-10-01

    Epilepsy is a network phenomenon characterized by atypical activity at the neuronal and population levels during seizures, including tonic spiking, increased heterogeneity in spiking rates, and synchronization. The etiology of epilepsy is unclear, but a common theme among proposed mechanisms is that structural connectivity between neurons is altered. It is hypothesized that epilepsy arises not from random changes in connectivity, but from specific structural changes to the most fragile nodes or neurons in the network. In this letter, the minimum energy perturbation on functional connectivity required to destabilize linear networks is derived. Perturbation results are then applied to a probabilistic nonlinear neural network model that operates at a stable fixed point. That is, if a small stimulus is applied to the network, the activation probabilities of each neuron respond transiently but eventually recover to their baseline values. When the perturbed network is destabilized, the activation probabilities shift to larger or smaller values or oscillate when a small stimulus is applied. Finally, the structural modifications to the neural network that achieve the functional perturbation are derived. Simulations of the unperturbed and perturbed networks qualitatively reflect neuronal activity observed in epilepsy patients, suggesting that the changes in network dynamics due to destabilizing perturbations, including the emergence of an unstable manifold or a stable limit cycle, may be indicative of neuronal or population dynamics during seizure. That is, the epileptic cortex is always on the brink of instability and minute changes in the synaptic weights associated with the most fragile node can suddenly destabilize the network to cause seizures. Finally, the theory developed here and its interpretation of epileptic networks enables the design of a straightforward feedback controller that first detects when the network has destabilized and then applies linear state
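
    One standard way to quantify how close a stable linear network x' = Ax is to instability, sketched below as background rather than the letter's own derivation: the norm of the smallest destabilizing perturbation equals the minimum over frequencies of the smallest singular value of (A - iwI), approximated here on a frequency grid with an illustrative random matrix.

```python
# Sketch only: distance-to-instability of a stable linear system x' = A x.
import numpy as np

rng = np.random.default_rng(0)
n = 10
A = rng.normal(0, 1/np.sqrt(n), (n, n)) - 1.5*np.eye(n)   # shifted to make A stable
assert np.max(np.linalg.eigvals(A).real) < 0

omegas = np.linspace(-10, 10, 2001)                        # frequency grid (approximation)
dist = min(np.linalg.svd(A - 1j*w*np.eye(n), compute_uv=False)[-1] for w in omegas)
print(dist)   # size of the smallest perturbation pushing an eigenvalue to the imaginary axis
```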

  5. Firing Patterns and Transitions in Coupled Neurons Controlled by a Pacemaker

    International Nuclear Information System (INIS)

    Mei-Sheng, Li; Qi-Shao, Lu; Li-Xia, Duan; Qing-Yun, Wang

    2008-01-01

    To reveal the dynamics of neuronal networks with pacemakers, the firing patterns and their transitions are investigated in a ring Hindmarsh-Rose (HR) neuronal network with gap junctions under the control of a pacemaker. Compared with the situation without a pacemaker, the neurons in the network can exhibit various firing patterns as the external current is applied or the coupling strength of the pacemaker varies. The results are beneficial for understanding the complex cooperative behaviour of large neural assemblies under pacemaker control.

  6. Organization of left-right coordination of neuronal activity in the mammalian spinal cord

    DEFF Research Database (Denmark)

    Shevtsova, Natalia A.; Talpalar, Adolfo E.; Markin, Sergey N.

    2015-01-01

    In this study, we construct and analyse two computational models of spinal locomotor circuits consisting of left and right rhythm generators interacting bilaterally via several neuronal pathways mediated by different commissural interneurons (CINs). The CIN populations incorporated in the models include the genetically identified inhibitory (V0D) and excitatory (V0V) subtypes of V0 CINs and excitatory V3 CINs. The model also includes the ipsilaterally projecting excitatory V2a interneurons mediating excitatory drive to the V0V CINs. The proposed network architectures and CIN connectivity allow the models to closely reproduce ... the left-right synchronous hopping-like pattern in mutants lacking specific neuron classes, and speed-dependent asymmetric changes of flexor and extensor phase durations. The models provide insights into the architecture of the spinal network and the organization of parallel inhibitory and excitatory CIN ...

  7. The Laplacian spectrum of neural networks

    Science.gov (United States)

    de Lange, Siemon C.; de Reus, Marcel A.; van den Heuvel, Martijn P.

    2014-01-01

    The brain is a complex network of neural interactions, both at the microscopic and macroscopic level. Graph theory is well suited to examine the global network architecture of these neural networks. Many popular graph metrics, however, encode average properties of individual network elements. Complementing these “conventional” graph metrics, the eigenvalue spectrum of the normalized Laplacian describes a network's structure directly at a systems level, without referring to individual nodes or connections. In this paper, the Laplacian spectra of the macroscopic anatomical neuronal networks of the macaque and cat, and the microscopic network of the Caenorhabditis elegans were examined. Consistent with conventional graph metrics, analysis of the Laplacian spectra revealed an integrative community structure in neural brain networks. Extending previous findings of overlap of network attributes across species, similarity of the Laplacian spectra across the cat, macaque and C. elegans neural networks suggests a certain level of consistency in the overall architecture of the anatomical neural networks of these species. Our results further suggest a specific network class for neural networks, distinct from conceptual small-world and scale-free models as well as several empirical networks. PMID:24454286
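
    A short sketch of the analysis described above: the eigenvalue spectrum of the normalized Laplacian L = I - D^(-1/2) A D^(-1/2) of an undirected connectome, computed here for a random adjacency matrix as a stand-in for the macaque, cat or C. elegans data.

```python
# Sketch only: normalized Laplacian spectrum of an undirected graph.
import numpy as np

rng = np.random.default_rng(0)
A = (rng.random((50, 50)) < 0.1).astype(float)
A = np.triu(A, 1); A = A + A.T                       # symmetric adjacency, no self-loops

deg = A.sum(axis=1)
deg[deg == 0] = 1.0                                  # guard against isolated nodes
D_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
L_norm = np.eye(len(A)) - D_inv_sqrt @ A @ D_inv_sqrt

eigenvalues = np.linalg.eigvalsh(L_norm)             # spectrum lies in [0, 2]
print(eigenvalues[:5], eigenvalues[-5:])
```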

  8. Superior Generalization Capability of Hardware-Learning Algorithm Developed for Self-Learning Neuron-MOS Neural Networks

    Science.gov (United States)

    Kondo, Shuhei; Shibata, Tadashi; Ohmi, Tadahiro

    1995-02-01

    We have investigated the learning performance of the hardware backpropagation (HBP) algorithm, a hardware-oriented learning algorithm developed for the self-learning architecture of neural networks constructed using neuron MOS (metal-oxide-semiconductor) transistors. The solution to finding a mirror symmetry axis in a 4×4 binary pixel array was tested by computer simulation based on the HBP algorithm. Despite the inherent restrictions imposed on the hardware-learning algorithm, HBP exhibits equivalent learning performance to that of the original backpropagation (BP) algorithm when all the pertinent parameters are optimized. Very importantly, we have found that HBP has a superior generalization capability over BP; namely, HBP exhibits higher performance in solving problems that the network has not yet learnt.

  9. Emergent properties of interacting populations of spiking neurons

    Directory of Open Access Journals (Sweden)

    Stefano eCardanobile

    2011-12-01

    Full Text Available Dynamic neuronal networks are a key paradigm of increasing importance in brain research, concerned with the functional analysis of biological neuronal networks and, at the same time, with the synthesis of artificial brain-like systems. In this context, neuronal network models serve as mathematical tools to understand the function of brains, but they might as well develop into future tools for enhancing certain functions of our nervous system. Here, we discuss our recent achievements in developing multiplicative point processes into a viable mathematical framework for spiking network modeling. The perspective is that the dynamic behavior of these neuronal networks on the population level is faithfully reflected by a set of non-linear rate equations, describing all interactions on this level. These equations, in turn, are similar in structure to the Lotka-Volterra equations, well known by their use in modeling predator-prey relationships in population biology, but abundant applications to economic theory have also been described. We present a number of biologically relevant examples for spiking network function, which can be studied with the help of the aforementioned correspondence between spike trains and specific systems of non-linear coupled ordinary differential equations. We claim that, enabled by the use of multiplicative point processes, we can make essential contributions to a more thorough understanding of the dynamical properties of neural populations.
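
    A generic two-population example of such Lotka-Volterra-type rate equations, dr_i/dt = r_i (g_i + sum_j W_ij r_j), integrated numerically; the coupling matrix and gains below are illustrative choices, not values taken from the paper.

```python
# Sketch only: Lotka-Volterra-style rate equations for two interacting populations.
import numpy as np

W = np.array([[ 0.0, -0.8],        # excitatory population inhibited by the inhibitory one
              [ 0.6, -0.1]])       # inhibitory population driven by the excitatory one
g = np.array([1.0, -0.5])          # intrinsic growth/decay of each population rate

r, dt = np.array([0.5, 0.5]), 0.001
for _ in range(100000):
    r = r + dt * r * (g + W @ r)   # multiplicative (Lotka-Volterra) form
    r = np.clip(r, 0.0, None)      # firing rates stay non-negative
print(r)                           # approximate fixed point of the population rates
```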

  10. The Energy Coding of a Structural Neural Network Based on the Hodgkin-Huxley Model.

    Science.gov (United States)

    Zhu, Zhenyu; Wang, Rubin; Zhu, Fengyun

    2018-01-01

    Based on the Hodgkin-Huxley model, the present study established a fully connected structural neural network to simulate the neural activity and energy consumption of the network by neural energy coding theory. The numerical simulation results showed that the periodicity of the network energy distribution was positively correlated with the number of neurons and the coupling strength, but negatively correlated with the signal transmission delay. Moreover, a relationship was established between the energy distribution features and the synchronous oscillation of the neural network, which showed that when the proportion of negative energy in the power consumption curve was high, the synchronous oscillation of the neural network was apparent. In addition, comparison with the simulation results of a structural neural network based on the Wang-Zhang biophysical neuron model showed that both models were essentially consistent.

  11. How pattern formation in ring networks of excitatory and inhibitory spiking neurons depends on the input current regime

    Directory of Open Access Journals (Sweden)

    Birgit eKriener

    2014-01-01

    Full Text Available Pattern formation, i.e., the generation of an inhomogeneous spatial activity distribution in a dynamical system with translation invariant structure, is a well-studied phenomenon in neuronal network dynamics, specifically in neural field models. These are population models to describe the spatio-temporal dynamics of large groups of neurons in terms of macroscopic variables such as population firing rates. Though neural field models are often deduced from and equipped with biophysically meaningful properties, a direct mapping to simulations of individual spiking neuron populations is rarely considered. Neurons have a distinct identity defined by their action on their postsynaptic targets. In its simplest form they act either excitatorily or inhibitorily. When the distribution of neuron identities is assumed to be periodic, pattern formation can be observed, given the coupling strength is supercritical, i.e., larger than a critical weight. We find that this critical weight is strongly dependent on the characteristics of the neuronal input, i.e., depends on whether neurons are mean- or fluctuation-driven, and different limits in linearizing the full non-linear system apply in order to assess stability. In particular, if neurons are mean-driven, the linearization has a very simple form and becomes independent of both the fixed point firing rate and the variance of the input current, while in the very strongly fluctuation-driven regime the fixed point rate, as well as the input mean and variance, are important parameters in the determination of the critical weight. We demonstrate that interestingly even in "intermediate" regimes, when the system is technically fluctuation-driven, the simple linearization neglecting the variance of the input can yield the better prediction of the critical coupling strength. We moreover analyze the effects of structural randomness by rewiring individual synapses or redistributing weights, as well as coarse-graining on pattern

  12. Probing the dynamics of identified neurons with a data-driven modeling approach.

    Directory of Open Access Journals (Sweden)

    Thomas Nowotny

    2008-07-01

    Full Text Available In controlling animal behavior the nervous system has to perform within the operational limits set by the requirements of each specific behavior. The implications for the corresponding range of suitable network, single neuron, and ion channel properties have remained elusive. In this article we approach the question of how well-constrained properties of neuronal systems may be on the neuronal level. We used large data sets of the activity of isolated invertebrate identified cells and built an accurate conductance-based model for this cell type using customized automated parameter estimation techniques. By direct inspection of the data we found that the variability of the neurons is larger when they are isolated from the circuit than when in the intact system. Furthermore, the responses of the neurons to perturbations appear to be more consistent than their autonomous behavior under stationary conditions. In the developed model, the constraints on different parameters that enforce appropriate model dynamics vary widely from some very tightly controlled parameters to others that are almost arbitrary. The model also allows predictions for the effect of blocking selected ionic currents and to prove that the origin of irregular dynamics in the neuron model is proper chaoticity and that this chaoticity is typical in an appropriate sense. Our results indicate that data driven models are useful tools for the in-depth analysis of neuronal dynamics. The better consistency of responses to perturbations, in the real neurons as well as in the model, suggests a paradigm shift away from measuring autonomous dynamics alone towards protocols of controlled perturbations. Our predictions for the impact of channel blockers on the neuronal dynamics and the proof of chaoticity underscore the wide scope of our approach.

  13. Self-consistent signal-to-noise analysis of the statistical behavior of analog neural networks and enhancement of the storage capacity

    Science.gov (United States)

    Shiino, Masatoshi; Fukai, Tomoki

    1993-08-01

    Based on the self-consistent signal-to-noise analysis (SCSNA), capable of dealing with analog neural networks with a wide class of transfer functions, enhancement of the storage capacity of associative memory and the related statistical properties of neural networks are studied for random memory patterns. Two types of transfer functions with the threshold parameter θ are considered, which are derived from the sigmoidal one to represent the output of three-state neurons. Neural networks having a monotonically increasing transfer function F_M, F_M(u) = sgn u (|u| > θ), F_M(u) = 0 (|u| < θ) ... memory patterns), implying the reduction of the number of spurious states. The behavior of the storage capacity with changing θ is qualitatively the same as that of the Ising spin neural networks with varying temperature. On the other hand, the nonmonotonic transfer function F_NM, F_NM(u) = sgn u (|u| ... θ), gives rise to remarkable features in several respects. First, it yields a large enhancement of the storage capacity compared with the Amit-Gutfreund-Sompolinsky (AGS) value: with decreasing θ from θ = ∞, the storage capacity α_c of such a network is increased from the AGS value (≈ 0.14) to attain its maximum value of ≈ 0.42 at θ ≈ 0.7 and afterwards is decreased to vanish at θ = 0. Whereas for θ ≳ 1 the storage capacity α_c coincides with the value α_c~ determined by the SCSNA as the upper bound of α ensuring the existence of retrieval solutions, for θ ... r ≠ 0 (i.e., finite width of the local field distribution), which is implied by the order-parameter equations of the SCSNA, disappears at a certain critical loading rate α_0, and for α ... r = 0+). As a consequence, memory retrieval without errors becomes possible even in the saturation limit α ≠ 0. Results of the computer simulations on the statistical properties of the novel phase with α ... storage capacity is also analyzed for the two types of networks. It is conspicuous for the networks with F_NM, where the self-couplings increase the stability of

  14. Neurons in the human amygdala selective for perceived emotion

    Science.gov (United States)

    Wang, Shuo; Tudusciuc, Oana; Mamelak, Adam N.; Ross, Ian B.; Adolphs, Ralph; Rutishauser, Ueli

    2014-01-01

    The human amygdala plays a key role in recognizing facial emotions and neurons in the monkey and human amygdala respond to the emotional expression of faces. However, it remains unknown whether these responses are driven primarily by properties of the stimulus or by the perceptual judgments of the perceiver. We investigated these questions by recording from over 200 single neurons in the amygdalae of 7 neurosurgical patients with implanted depth electrodes. We presented degraded fear and happy faces and asked subjects to discriminate their emotion by button press. During trials where subjects responded correctly, we found neurons that distinguished fear vs. happy emotions as expressed by the displayed faces. During incorrect trials, these neurons indicated the patients’ subjective judgment. Additional analysis revealed that, on average, all neuronal responses were modulated most by increases or decreases in response to happy faces, and driven predominantly by judgments about the eye region of the face stimuli. Following the same analyses, we showed that hippocampal neurons, unlike amygdala neurons, only encoded emotions but not subjective judgment. Our results suggest that the amygdala specifically encodes the subjective judgment of emotional faces, but that it plays less of a role in simply encoding aspects of the image array. The conscious percept of the emotion shown in a face may thus arise from interactions between the amygdala and its connections within a distributed cortical network, a scheme also consistent with the long response latencies observed in human amygdala recordings. PMID:24982200

  15. Action observation and mirror neuron network: a tool for motor stroke rehabilitation.

    Science.gov (United States)

    Sale, P; Franceschini, M

    2012-06-01

    Mirror neurons are a specific class of neurons that are activated and discharge both during observation of the same or a similar motor act performed by another individual and during the execution of a motor act. Different studies based on noninvasive neuroelectrophysiological assessment or functional brain imaging techniques have demonstrated the presence of mirror neurons and their mechanism in humans. Various authors have demonstrated that in humans these networks are activated when individuals learn motor actions via execution (as in traditional motor learning), imitation, observation (as in observational learning) and motor imagery. Activation of these brain areas (the inferior parietal lobe and the ventral premotor cortex, as well as the caudal part of the inferior frontal gyrus [IFG]) following observation or motor imagery may thereby facilitate subsequent movement execution by directly matching the observed or imagined action to the internal simulation of that action. It is therefore believed that this multi-sensory action-observation system enables individuals to (re)learn impaired motor functions through the activation of these internal action-related representations. In humans, the mirror mechanism is also found in various other brain regions: in Broca's area, which is involved in language processing and speech production, and not only in centres that mediate voluntary movement but also in cortical areas that mediate visceromotor emotion-related behaviours. On the basis of these findings, during the last 10 years various studies have been carried out regarding the clinical use of action observation for the motor rehabilitation of sub-acute and chronic stroke patients.

  16. Parkin Mutations Reduce the Complexity of Neuronal Processes in iPSC-derived Human Neurons

    Science.gov (United States)

    Ren, Yong; Jiang, Houbo; Hu, Zhixing; Fan, Kevin; Wang, Jun; Janoschka, Stephen; Wang, Xiaomin; Ge, Shaoyu; Feng, Jian

    2015-01-01

    Parkinson’s disease (PD) is characterized by the degeneration of nigral dopaminergic (DA) neurons and non-DA neurons in many parts of the brain. Mutations of parkin, an E3 ubiquitin ligase that strongly binds to microtubules, are the most frequent cause of recessively inherited Parkinson’s disease. The lack of robust PD phenotype in parkin knockout mice suggests a unique vulnerability of human neurons to parkin mutations. Here, we show that the complexity of neuronal processes as measured by total neurite length, number of terminals, number of branch points and Sholl analysis, was greatly reduced in induced pluripotent stem cell (iPSC)-derived TH+ or TH− neurons from PD patients with parkin mutations. Consistent with these, microtubule stability was significantly decreased by parkin mutations in iPSC-derived neurons. Overexpression of parkin, but not its PD-linked mutant nor GFP, restored the complexity of neuronal processes and the stability of microtubules. Consistent with these, the microtubule-depolymerizing agent colchicine mimicked the effect of parkin mutations by decreasing neurite length and complexity in control neurons while the microtubule-stabilizing drug taxol mimicked the effect of parkin overexpression by enhancing the morphology of parkin-deficient neurons. The results suggest that parkin maintains the morphological complexity of human neurons by stabilizing microtubules. PMID:25332110

  17. MRI Study on the Functional and Spatial Consistency of Resting State-Related Independent Components of the Brain Network

    Energy Technology Data Exchange (ETDEWEB)

    Jeong, Bum Seok [Korea Advanced Institute of Science and Technology, Daejeon (Korea, Republic of)]; Choi, Jee Wook [Daejeon St. Mary's Hospital, The Catholic University of Korea College of Medicine, Daejeon (Korea, Republic of)]; Kim, Ji Woong [College of Medical Science, Konyang University, Daejeon (Korea, Republic of)]

    2012-06-15

    Resting-state networks (RSNs), including the default mode network (DMN), have been considered as markers of brain status such as consciousness, developmental change, and treatment effects. The consistency of functional connectivity among RSNs has not been fully explored, especially among resting-state-related independent components (RSICs). This resting-state fMRI study addressed the consistency of functional connectivity among RSICs as well as their spatial consistency between 'at day 1' and 'after 4 weeks' in 13 healthy volunteers. We found that most RSICs, especially the DMN, are reproducible across time, whereas some RSICs were variable in either their spatial characteristics or their functional connectivity. Relatively low spatial consistency was found in the basal ganglia, a parietal region of the left frontoparietal network, and the supplementary motor area. The functional connectivity between two independent components, the bilateral angular/supramarginal gyri/intraparietal lobule and the bilateral middle temporal/occipital gyri, was decreased across time regardless of the correlation analysis method employed (Pearson's or partial correlation). The RSICs showing variable consistency differ between their spatial characteristics and their functional connectivity. To understand the brain as a dynamic network, we recommend further investigation of both changes in the activation of specific regions and the modulation of functional connectivity in the brain network.

  18. MRI Study on the Functional and Spatial Consistency of Resting State-Related Independent Components of the Brain Network

    International Nuclear Information System (INIS)

    Jeong, Bum Seok; Choi, Jee Wook; Kim, Ji Woong

    2012-01-01

    Resting-state networks (RSNs), including the default mode network (DMN), have been considered as markers of brain status such as consciousness, developmental change, and treatment effects. The consistency of functional connectivity among RSNs has not been fully explored, especially among resting-state-related independent components (RSICs). This resting-state fMRI study addressed the consistency of functional connectivity among RSICs as well as their spatial consistency between 'at day 1' and 'after 4 weeks' in 13 healthy volunteers. We found that most RSICs, especially the DMN, are reproducible across time, whereas some RSICs were variable in either their spatial characteristics or their functional connectivity. Relatively low spatial consistency was found in the basal ganglia, a parietal region of the left frontoparietal network, and the supplementary motor area. The functional connectivity between two independent components, the bilateral angular/supramarginal gyri/intraparietal lobule and the bilateral middle temporal/occipital gyri, was decreased across time regardless of the correlation analysis method employed (Pearson's or partial correlation). The RSICs showing variable consistency differ between their spatial characteristics and their functional connectivity. To understand the brain as a dynamic network, we recommend further investigation of both changes in the activation of specific regions and the modulation of functional connectivity in the brain network.

  19. Dynamics of a structured neuron population

    International Nuclear Information System (INIS)

    Pakdaman, Khashayar; Salort, Delphine; Perthame, Benoît

    2010-01-01

    We study the dynamics of assemblies of interacting neurons. For large fully connected networks, the dynamics of the system can be described by a partial differential equation reminiscent of age-structure models used in mathematical ecology, where the 'age' of a neuron represents the time elapsed since its last discharge. The nonlinearity arises from the connectivity J of the network. We prove some mathematical properties of the model that are directly related to qualitative properties. On the one hand, we prove that it is well-posed and that it admits stationary states which, depending upon the connectivity, can be unique or not. On the other hand, we study the long time behaviour of solutions; both for small and large J, we prove the relaxation to the steady state describing asynchronous firing of the neurons. In the middle range, numerical experiments show that periodic solutions appear expressing re-synchronization of the network and asynchronous firing
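
    One common form of such an age-structured population equation, in which the structuring variable s is the time elapsed since a neuron's last discharge, is sketched below; the precise firing-rate function p and the way the network input X(t) is built from past activity vary between papers and are assumptions here:

```latex
% n(s,t): density of neurons whose last spike occurred s time units ago;
% p(s,X): instantaneous firing rate; N(t): population activity; J: connectivity.
\begin{aligned}
&\partial_t n(s,t) + \partial_s n(s,t) + p\bigl(s, X(t)\bigr)\, n(s,t) = 0,\\
&N(t) = n(0,t) = \int_0^{\infty} p\bigl(s, X(t)\bigr)\, n(s,t)\, \mathrm{d}s,\qquad
X(t) = J \int_0^{t} h(u)\, N(t-u)\, \mathrm{d}u .
\end{aligned}
```

    The boundary condition re-injects neurons that have just fired at elapsed time s = 0, and the nonlinearity enters through the dependence of the firing rate on the connectivity-weighted activity X(t).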

  20. 18F-FDG PET imaging on the neuronal network of Parkinson's disease patients following deep brain stimulation of bilateral subthalamic nucleus

    International Nuclear Information System (INIS)

    Zuo Chuantao; Huang Zhemin; Zhao Jun; Guan Yihui; Lin Xiangtong; Li Dianyou; Sun Bomin

    2007-01-01

    Objective: There is evidence that the cause and progression of Parkinson's disease (PD) may be attributed to subthalamic nucleus (STN) dysfunction and that external electrical stimulation of the STN may improve the underlying neuronal network. This study aimed at using 18F-FDG PET to monitor the functional status of the neuronal network of advanced PD patients following deep brain stimulation (DBS) of the bilateral STN. Methods: Five PD patients in an advanced stage, rated according to the unified PD rating scale (UPDRS) motion score, underwent bilateral STN DBS implantation. Six months after the implantation, each patient was studied with 18F-FDG PET scans under stimulation turned 'on' and 'off' conditions. Statistical parametric mapping 2 (SPM2) was applied for data analyses. Results: Bilateral STN DBS reduced glucose utilization in the lentiform nucleus (globus pallidus), bilateral thalamus, cerebellum, as well as the distal parietal cortex. However, glucose utilization in the midbrain and pons was increased. The PD-related pattern (PDRP) scores were significantly different during the 'on' status (2.12 ± 15.24) and 'off' status (4.93 ± 13.01), which corresponded to the clinical improvement of PD symptoms as PDRP scores decreased. Conclusion: 18F-FDG PET may be useful in monitoring and mapping the metabolism of the neuronal network during bilateral STN DBS, thus supporting its therapeutic impact on PD patients. (authors)

  1. Investigation of synapse formation and function in a glutamatergic-GABAergic two-neuron microcircuit.

    Science.gov (United States)

    Chang, Chia-Ling; Trimbuch, Thorsten; Chao, Hsiao-Tuan; Jordan, Julia-Christine; Herman, Melissa A; Rosenmund, Christian

    2014-01-15

    Neural circuits are composed of mainly glutamatergic and GABAergic neurons, which communicate through synaptic connections. Many factors instruct the formation and function of these synapses; however, it is difficult to dissect the contribution of intrinsic cell programs from that of extrinsic environmental effects in an intact network. Here, we perform paired recordings from two-neuron microculture preparations of mouse hippocampal glutamatergic and GABAergic neurons to investigate how synaptic input and output of these two principal cells develop. In our reduced preparation, we found that glutamatergic neurons showed no change in synaptic output or input regardless of partner neuron cell type or neuronal activity level. In contrast, we found that glutamatergic input caused the GABAergic neuron to modify its output by way of an increase in synapse formation and a decrease in synaptic release efficiency. These findings are consistent with aspects of GABAergic synapse maturation observed in many brain regions. In addition, changes in GABAergic output are cell wide and not target-cell specific. We also found that glutamatergic neuronal activity determined the AMPA receptor properties of synapses on the partner GABAergic neuron. All modifications of GABAergic input and output required activity of the glutamatergic neuron. Because our system has reduced extrinsic factors, the changes we saw in the GABAergic neuron due to glutamatergic input may reflect initiation of maturation programs that underlie the formation and function of in vivo neural circuits.

  2. Impact of weak excitatory synapses on chaotic transients in a diffusively coupled Morris-Lecar neuronal network

    Energy Technology Data Exchange (ETDEWEB)

    Lafranceschina, Jacopo, E-mail: jlafranceschina@alaska.edu; Wackerbauer, Renate, E-mail: rawackerbauer@alaska.edu [Department of Physics, University of Alaska, Fairbanks, Alaska 99775-5920 (United States)

    2015-01-15

    Spatiotemporal chaos collapses to either a rest state or a propagating pulse solution in a ring network of diffusively coupled, excitable Morris-Lecar neurons. Weak excitatory synapses can increase the Lyapunov exponent, expedite the collapse, and promote the collapse to the rest state rather than the pulse state. A single traveling pulse solution may no longer be asymptotic for certain combinations of network topology and (weak) coupling strengths, and initiate spatiotemporal chaos. Multiple pulses can cause chaos initiation due to diffusive and synaptic pulse-pulse interaction. In the presence of chaos initiation, intermittent spatiotemporal chaos exists until typically a collapse to the rest state.

  3. Impact of weak excitatory synapses on chaotic transients in a diffusively coupled Morris-Lecar neuronal network

    International Nuclear Information System (INIS)

    Lafranceschina, Jacopo; Wackerbauer, Renate

    2015-01-01

    Spatiotemporal chaos collapses to either a rest state or a propagating pulse solution in a ring network of diffusively coupled, excitable Morris-Lecar neurons. Weak excitatory synapses can increase the Lyapunov exponent, expedite the collapse, and promote the collapse to the rest state rather than the pulse state. A single traveling pulse solution may no longer be asymptotic for certain combinations of network topology and (weak) coupling strengths, and initiate spatiotemporal chaos. Multiple pulses can cause chaos initiation due to diffusive and synaptic pulse-pulse interaction. In the presence of chaos initiation, intermittent spatiotemporal chaos exists until typically a collapse to the rest state

  4. Communication networks in the brain: neurons, receptors, neurotransmitters, and alcohol.

    Science.gov (United States)

    Lovinger, David M

    2008-01-01

    Nerve cells (i.e., neurons) communicate via a combination of electrical and chemical signals. Within the neuron, electrical signals driven by charged particles allow rapid conduction from one end of the cell to the other. Communication between neurons occurs at tiny gaps called synapses, where specialized parts of the two cells (i.e., the presynaptic and postsynaptic neurons) come within nanometers of one another to allow for chemical transmission. The presynaptic neuron releases a chemical (i.e., a neurotransmitter) that is received by the postsynaptic neuron's specialized proteins called neurotransmitter receptors. The neurotransmitter molecules bind to the receptor proteins and alter postsynaptic neuronal function. Two types of neurotransmitter receptors exist: ligand-gated ion channels, which permit rapid ion flow directly across the outer cell membrane, and G-protein-coupled receptors, which set into motion chemical signaling events within the cell. Hundreds of molecules are known to act as neurotransmitters in the brain. Neuronal development and function also are affected by peptides known as neurotrophins and by steroid hormones. This article reviews the chemical nature, neuronal actions, receptor subtypes, and therapeutic roles of several transmitters, neurotrophins, and hormones. It focuses on neurotransmitters with important roles in acute and chronic alcohol effects on the brain, such as those that contribute to intoxication, tolerance, dependence, and neurotoxicity, as well as maintained alcohol drinking and addiction.

  5. Time-warp invariant pattern detection with bursting neurons

    International Nuclear Information System (INIS)

    Gollisch, Tim

    2008-01-01

    Sound patterns are defined by the temporal relations of their constituents, individual acoustic cues. Auditory systems need to extract these temporal relations to detect or classify sounds. In various cases, ranging from human speech to communication signals of grasshoppers, this pattern detection has been found to display invariance to temporal stretching or compression of the sound signal ('linear time-warp invariance'). In this work, a four-neuron network model is introduced, designed to solve such a detection task for the example of grasshopper courtship songs. As an essential ingredient, the network contains neurons with intrinsic bursting dynamics, which allow them to encode durations between acoustic events in short, rapid sequences of spikes. As shown by analytical calculations and computer simulations, these neuronal dynamics result in a powerful mechanism for temporal integration. Finally, the network reads out the encoded temporal information by detecting equal activity of two such bursting neurons. This leads to the recognition of rhythmic patterns independent of temporal stretching or compression

  6. Coexistence of intermittencies in the neuronal network of the epileptic brain.

    Science.gov (United States)

    Koronovskii, Alexey A; Hramov, Alexander E; Grubov, Vadim V; Moskalenko, Olga I; Sitnikova, Evgenia; Pavlov, Alexey N

    2016-03-01

    Intermittent behavior occurs widely in nature. At present, several types of intermittency are known and well studied. However, consideration of intermittency has usually been limited to the analysis of cases when only one certain type of intermittency takes place. In this paper, we report on the temporal behavior of the complex neuronal network in the epileptic brain, when two types of intermittent behavior coexist and alternate with each other. We prove the presence of this phenomenon in physiological experiments with WAG/Rij rats, a model living system of absence epilepsy. The theoretical law deduced in our paper for the distributions of the lengths of laminar phases, a power law with exponent -2, agrees well with the experimental neurophysiological data.
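
    A short sketch of the statistic referred to above: given a binary laminar/turbulent labelling of a signal, collect the lengths of consecutive laminar runs and fit a log-log slope to their distribution for comparison with the predicted L^(-2) law. The labelling below is a random placeholder; detecting the intermittent epochs in real EEG is a separate, assumption-laden step not shown here.

```python
# Sketch only: laminar-phase length distribution and its log-log slope.
import numpy as np

rng = np.random.default_rng(0)
laminar = rng.random(100000) < 0.9            # placeholder labelling, True = laminar sample

lengths, run = [], 0
for flag in laminar:                          # collect lengths of consecutive laminar runs
    if flag:
        run += 1
    elif run:
        lengths.append(run); run = 0
if run:
    lengths.append(run)

lengths = np.array(lengths)
hist, edges = np.histogram(lengths, bins=np.arange(1, lengths.max() + 2))
mask = hist > 0
slope = np.polyfit(np.log(edges[:-1][mask]), np.log(hist[mask]), 1)[0]
print(slope)   # for the epileptic-brain data the reported exponent is close to -2
```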

  7. A network architecture supporting consistent rich behavior in collaborative interactive applications.

    Science.gov (United States)

    Marsh, James; Glencross, Mashhuda; Pettifer, Steve; Hubbold, Roger

    2006-01-01

    Network architectures for collaborative virtual reality have traditionally been dominated by client-server and peer-to-peer approaches, with peer-to-peer strategies typically being favored where minimizing latency is a priority, and client-server where consistency is key. With increasingly sophisticated behavior models and the demand for better support for haptics, we argue that neither approach provides sufficient support for these scenarios and, thus, a hybrid architecture is required. We discuss the relative performance of different distribution strategies in the face of real network conditions and illustrate the problems they face. Finally, we present an architecture that successfully meets many of these challenges and demonstrate its use in a distributed virtual prototyping application which supports simultaneous collaboration for assembly, maintenance, and training applications utilizing haptics.

  8. Detecting dependencies between spike trains of pairs of neurons through copulas

    DEFF Research Database (Denmark)

    Sacerdote, Laura; Tamborrino, Massimiliano; Zucca, Cristina

    2011-01-01

    The dynamics of a neuron are influenced by the connections with the network where it lies. Recorded spike trains exhibit patterns due to the interactions between neurons. However, the structure of the network is not known. A challenging task is to investigate it from the analysis of simultaneously...... the two neurons. Furthermore, the method recognizes the presence of delays in the spike propagation....

  9. Collective firing regularity of a scale-free Hodgkin–Huxley neuronal network in response to a subthreshold signal

    Energy Technology Data Exchange (ETDEWEB)

    Yilmaz, Ergin, E-mail: erginyilmaz@yahoo.com [Department of Biomedical Engineering, Engineering Faculty, Bülent Ecevit University, 67100 Zonguldak (Turkey); Ozer, Mahmut [Department of Electrical and Electronics Engineering, Engineering Faculty, Bülent Ecevit University, 67100 Zonguldak (Turkey)

    2013-08-01

    We consider a scale-free network of stochastic HH neurons driven by a subthreshold periodic stimulus and investigate how the collective spiking regularity or the collective temporal coherence changes with the stimulus frequency, the intrinsic noise (or the cell size), the network average degree and the coupling strength. We show that the best temporal coherence is obtained for a certain level of the intrinsic noise when the frequencies of the external stimulus and the subthreshold oscillations of the network elements match. We also find that the collective regularity exhibits a resonance-like behavior depending on both the coupling strength and the network average degree at the optimal values of the stimulus frequency and the cell size, indicating that the best temporal coherence also requires an optimal coupling strength and an optimal average degree of the connectivity.
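
    A sketch of a commonly used collective-regularity measure, the inverse coefficient of variation of interspike intervals averaged over the population; whether this exactly matches the coherence measure used in the paper is an assumption, and the spike trains below are random placeholders rather than output of the stochastic HH network.

```python
# Sketch only: population-averaged firing regularity from spike times.
import numpy as np

rng = np.random.default_rng(0)
spike_times = [np.sort(rng.uniform(0, 1000, size=rng.integers(20, 40)))
               for _ in range(200)]            # one spike-time array per neuron (ms)

def regularity(times):
    isi = np.diff(times)                       # interspike intervals
    return isi.mean() / isi.std()              # large value = temporally coherent firing

lam = np.mean([regularity(t) for t in spike_times])
print(lam)
```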

  10. Regulatory Mechanisms Controlling Maturation of Serotonin Neuron Identity and Function.

    Science.gov (United States)

    Spencer, William C; Deneris, Evan S

    2017-01-01

    The brain serotonin (5-hydroxytryptamine; 5-HT) system has been extensively studied for its role in normal physiology and behavior, as well as, neuropsychiatric disorders. The broad influence of 5-HT on brain function, is in part due to the vast connectivity pattern of 5-HT-producing neurons throughout the CNS. 5-HT neurons are born and terminally specified midway through embryogenesis, then enter a protracted period of maturation, where they functionally integrate into CNS circuitry and then are maintained throughout life. The transcriptional regulatory networks controlling progenitor cell generation and terminal specification of 5-HT neurons are relatively well-understood, yet the factors controlling 5-HT neuron maturation are only recently coming to light. In this review, we first provide an update on the regulatory network controlling 5-HT neuron development, then delve deeper into the properties and regulatory strategies governing 5-HT neuron maturation. In particular, we discuss the role of the 5-HT neuron terminal selector transcription factor (TF) Pet-1 as a key regulator of 5-HT neuron maturation. Pet-1 was originally shown to positively regulate genes needed for 5-HT synthesis, reuptake and vesicular transport, hence 5-HT neuron-type transmitter identity. It has now been shown to regulate, both positively and negatively, many other categories of genes in 5-HT neurons including ion channels, GPCRs, transporters, neuropeptides, and other transcription factors. Its function as a terminal selector results in the maturation of 5-HT neuron excitability, firing characteristics, and synaptic modulation by several neurotransmitters. Furthermore, there is a temporal requirement for Pet-1 in the control of postmitotic gene expression trajectories thus indicating a direct role in 5-HT neuron maturation. Proper regulation of the maturation of cellular identity is critical for normal neuronal functioning and perturbations in the gene regulatory networks controlling

  11. Human Brain Networks: Spiking Neuron Models, Multistability, Synchronization, Thermodynamics, Maximum Entropy Production, and Anesthetic Cascade Mechanisms

    Directory of Open Access Journals (Sweden)

    Wassim M. Haddad

    2014-07-01

    Full Text Available Advances in neuroscience have been closely linked to mathematical modeling beginning with the integrate-and-fire model of Lapicque and proceeding through the modeling of the action potential by Hodgkin and Huxley to the current era. The fundamental building block of the central nervous system, the neuron, may be thought of as a dynamic element that is “excitable”, and can generate a pulse or spike whenever the electrochemical potential across the cell membrane of the neuron exceeds a threshold. A key application of nonlinear dynamical systems theory to the neurosciences is to study phenomena of the central nervous system that exhibit nearly discontinuous transitions between macroscopic states. A very challenging and clinically important problem exhibiting this phenomenon is the induction of general anesthesia. In any specific patient, the transition from consciousness to unconsciousness as the concentration of anesthetic drugs increases is very sharp, resembling a thermodynamic phase transition. This paper focuses on multistability theory for continuous and discontinuous dynamical systems having a set of multiple isolated equilibria and/or a continuum of equilibria. Multistability is the property whereby the solutions of a dynamical system can alternate between two or more mutually exclusive Lyapunov stable and convergent equilibrium states under asymptotically slowly changing inputs or system parameters. In this paper, we extend the theory of multistability to continuous, discontinuous, and stochastic nonlinear dynamical systems. In particular, Lyapunov-based tests for multistability and synchronization of dynamical systems with continuously differentiable and absolutely continuous flows are established. The results are then applied to excitatory and inhibitory biological neuronal networks to explain the underlying mechanism of action for anesthesia and consciousness from a multistable dynamical system perspective, thereby providing a

  12. A Heuristic Approach to Intra-Brain Communications Using Chaos in a Recurrent Neural Network Model

    Science.gov (United States)

    Soma, Ken-ichiro; Mori, Ryota; Sato, Ryuichi; Nara, Shigetoshi

    2011-09-01

    To approach the functional roles of chaos in the brain, a heuristic model of the mechanisms of intra-brain communication is proposed. The key idea is to use chaos in the firing-pattern dynamics of a recurrent neural network consisting of binary state neurons as a propagation medium for pulse signals. Computer experiments and numerical methods are introduced to evaluate signal transport characteristics by calculating correlation functions between the sending neurons and the receiving neurons of pulse signals.
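
    An illustrative sketch of this idea, not the authors' model: a recurrent network of binary threshold neurons with random couplings, a slow pulse train injected at a "sending" neuron, and a cross-correlation function between the sender's and a "receiving" neuron's activities as the transport measure. All sizes, couplings and the pulse protocol are assumptions.

```python
# Sketch only: pulse propagation through a binary recurrent network, read out by correlation.
import numpy as np

rng = np.random.default_rng(0)
N, T = 200, 2000
W = rng.normal(0, 1 / np.sqrt(N), (N, N))        # random recurrent couplings
np.fill_diagonal(W, 0.0)
sender, receiver = 0, 123

x = (rng.random(N) < 0.5).astype(float)          # binary states in {0, 1}
states = np.empty((T, N))
for t in range(T):
    drive = W @ (2 * x - 1)                      # map {0,1} -> {-1,+1} before coupling
    drive[sender] += 3.0 if (t % 200) < 100 else -3.0   # slow pulse train at the sender
    x = (drive > 0).astype(float)
    states[t] = x

s = states[:, sender] - states[:, sender].mean()
r = states[:, receiver] - states[:, receiver].mean()
corr = np.correlate(s, r, mode="full") / (s.std() * r.std() * T)
lags = np.arange(-T + 1, T)
best = np.argmax(np.abs(corr))
print(lags[best], corr[best])                    # lag and strength of the strongest correlation
```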

  13. Dynamic neural networking as a basis for plasticity in the control of heart rate.

    Science.gov (United States)

    Kember, G; Armour, J A; Zamir, M

    2013-01-21

    A model is proposed in which the relationship between individual neurons within a neural network is dynamically changing to the effect of providing a measure of "plasticity" in the control of heart rate. The neural network on which the model is based consists of three populations of neurons residing in the central nervous system, the intrathoracic extracardiac nervous system, and the intrinsic cardiac nervous system. This hierarchy of neural centers is used to challenge the classical view that the control of heart rate, a key clinical index, resides entirely in central neuronal command (spinal cord, medulla oblongata, and higher centers). Our results indicate that dynamic networking allows for the possibility of an interplay among the three populations of neurons to the effect of altering the order of control of heart rate among them. This interplay among the three levels of control allows for different neural pathways for the control of heart rate to emerge under different blood flow demands or disease conditions and, as such, it has significant clinical implications because current understanding and treatment of heart rate anomalies are based largely on a single level of control and on neurons acting in unison as a single entity rather than individually within a (plastically) interconnected network. Copyright © 2012 Elsevier Ltd. All rights reserved.

  14. Lactate rescues neuronal sodium homeostasis during impaired energy metabolism.

    Science.gov (United States)

    Karus, Claudia; Ziemens, Daniel; Rose, Christine R

    2015-01-01

    Recently, we established that recurrent activity evokes network sodium oscillations in neurons and astrocytes in hippocampal tissue slices. Interestingly, metabolic integrity of astrocytes was essential for the neurons' capacity to maintain low sodium and to recover from sodium loads, indicating an intimate metabolic coupling between the 2 cell types. Here, we studied if lactate can support neuronal sodium homeostasis during impaired energy metabolism by analyzing whether glucose removal, pharmacological inhibition of glycolysis and/or addition of lactate affect cellular sodium regulation. Furthermore, we studied the effect of lactate on sodium regulation during recurrent network activity and upon inhibition of the glial Krebs cycle by sodium-fluoroacetate. Our results indicate that lactate is preferentially used by neurons. They demonstrate that lactate supports neuronal sodium homeostasis and rescues the effects of glial poisoning by sodium-fluoroacetate. Altogether, they are in line with the proposed transfer of lactate from astrocytes to neurons, the so-called astrocyte-neuron-lactate shuttle.

  16. Artificial brains. A million spiking-neuron integrated circuit with a scalable communication network and interface.

    Science.gov (United States)

    Merolla, Paul A; Arthur, John V; Alvarez-Icaza, Rodrigo; Cassidy, Andrew S; Sawada, Jun; Akopyan, Filipp; Jackson, Bryan L; Imam, Nabil; Guo, Chen; Nakamura, Yutaka; Brezzo, Bernard; Vo, Ivan; Esser, Steven K; Appuswamy, Rathinakumar; Taba, Brian; Amir, Arnon; Flickner, Myron D; Risk, William P; Manohar, Rajit; Modha, Dharmendra S

    2014-08-08

    Inspired by the brain's structure, we have developed an efficient, scalable, and flexible non-von Neumann architecture that leverages contemporary silicon technology. To demonstrate, we built a 5.4-billion-transistor chip with 4096 neurosynaptic cores interconnected via an intrachip network that integrates 1 million programmable spiking neurons and 256 million configurable synapses. Chips can be tiled in two dimensions via an interchip communication interface, seamlessly scaling the architecture to a cortexlike sheet of arbitrary size. The architecture is well suited to many applications that use complex neural networks in real time, for example, multiobject detection and classification. With 400-pixel-by-240-pixel video input at 30 frames per second, the chip consumes 63 milliwatts. Copyright © 2014, American Association for the Advancement of Science.
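
    A quick back-of-the-envelope check of the figures quoted in the abstract (4096 cores, roughly 1 million neurons, 256 million synapses, 63 milliwatts at 30 frames per second). The assumption of 256 neurons and a 256 x 256 synaptic crossbar per core is inferred from the rounded totals rather than stated in the abstract itself.

```python
cores = 4096
neurons_per_core = 256                  # assumed: 4096 * 256 = 1,048,576 ~ "1 million neurons"
synapses_per_core = 256 * 256           # assumed 256 x 256 crossbar per core

total_neurons = cores * neurons_per_core
total_synapses = cores * synapses_per_core
print(f"{total_neurons:,} neurons, {total_synapses:,} synapses")
# -> 1,048,576 neurons, 268,435,456 synapses (the rounded 1 M / 256 M quoted in the abstract)

# Energy per video frame at the quoted 63 mW and 30 frames per second.
power_w, fps = 63e-3, 30
print(f"{power_w / fps * 1e3:.1f} mJ per frame")   # ~2.1 mJ per frame
```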

  17. Two cell circuits of oriented adult hippocampal neurons on self-assembled monolayers for use in the study of neuronal communication in a defined system.

    Science.gov (United States)

    Edwards, Darin; Stancescu, Maria; Molnar, Peter; Hickman, James J

    2013-08-21

    In this study, we demonstrate the directed formation of small circuits of electrically active, synaptically connected neurons derived from the hippocampus of adult rats through the use of engineered, chemically modified culture surfaces that orient the polarity of the neuronal processes. Although synaptogenesis, synaptic communication, synaptic plasticity, and brain disease pathophysiology can be studied using brain slice or dissociated embryonic neuronal culture systems, the complex elements found in neuronal synapses make specific studies difficult in these random cultures. Synaptic transmission in mature adult neurons, and the factors affecting it, are generally studied in organotypic cultures, in brain slices, or in vivo. However, engineered neuronal networks would allow these studies to be performed instead on simple functional neuronal circuits derived from adult brain tissue. Photolithographically patterned self-assembled monolayers (SAMs) were used to create the two-cell "bidirectional polarity" circuit patterns. This pattern consisted of a cell-permissive SAM, N-1[3-(trimethoxysilyl)propyl] diethylenetriamine (DETA), and was composed of two 25 μm somal adhesion sites connected with 5 μm lines acting as surface cues for guided axonal and dendritic regeneration. Surrounding the DETA pattern was a background of a non-cell-permissive poly(ethylene glycol) (PEG) SAM. Adult hippocampal neurons were first cultured on coverslips coated with DETA monolayers and were later passaged onto the PEG-DETA bidirectional polarity patterns in serum-free medium. These neurons followed the surface cues, attaching and regenerating only along the DETA substrate to form small engineered neuronal circuits. These circuits were stable for more than 21 days in vitro (DIV), during which synaptic connectivity was evaluated using basic electrophysiological methods.

  18. A Markov model for the temporal dynamics of balanced random networks of finite size

    Science.gov (United States)

    Lagzi, Fereshteh; Rotter, Stefan

    2014-01-01

    The balanced state of recurrent networks of excitatory and inhibitory spiking neurons is characterized by fluctuations of population activity about an attractive fixed point. Numerical simulations show that these dynamics are essentially nonlinear, and the intrinsic noise (self-generated fluctuations) in networks of finite size is state-dependent. Therefore, stochastic differential equations with additive noise of fixed amplitude cannot provide an adequate description of the stochastic dynamics. The noise model should, rather, result from a self-consistent description of the network dynamics. Here, we consider a two-state Markovian neuron model, where spikes correspond to transitions from the active state to the refractory state. Excitatory and inhibitory input to this neuron affects the transition rates between the two states. The corresponding nonlinear dependencies can be identified directly from numerical simulations of networks of leaky integrate-and-fire neurons, discretized at a time resolution in the sub-millisecond range. Deterministic mean-field equations, and a noise component that depends on the dynamic state of the network, are obtained from this model. The resulting stochastic model reflects the behavior observed in numerical simulations quite well, irrespective of the size of the network. In particular, the strong temporal correlation between the two populations, a hallmark of the balanced state in random recurrent networks, is well represented by our model. Numerical simulations of such networks also show that a log-normal distribution of short-term spike counts is a property of balanced random networks with fixed in-degree, one that has not been considered before; our model shares this statistical property. Furthermore, the reconstruction of the flow from simulated time series suggests that the mean-field dynamics of finite-size networks are essentially of Wilson-Cowan type. We expect that this novel nonlinear stochastic model of the interaction between
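
    As a rough illustration of the kind of two-state Markov population model described above (not the authors' fitted model), the following simulates excitatory and inhibitory pools in which each neuron is either "active" or "refractory"; a spike is the active-to-refractory transition, with a hazard that depends nonlinearly on the recent activity of both pools. All rates, weights, and pool sizes are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
dt = 0.1e-3                      # sub-millisecond time step, as in the abstract
N_E, N_I = 800, 200              # assumed pool sizes

def spike_rate(a_E, a_I, base=20.0, w_E=80.0, w_I=160.0):
    """Spiking hazard (1/s), a nonlinear function of recent E and I activity."""
    return base * np.exp(w_E * a_E - w_I * a_I)

def step(refr_E, refr_I):
    a_E, a_I = refr_E / N_E, refr_I / N_I            # recently spiked (refractory) fractions
    p_spike = 1.0 - np.exp(-spike_rate(a_E, a_I) * dt)   # active -> refractory (a spike)
    p_recover = 1.0 - np.exp(-500.0 * dt)                # refractory -> active, ~2 ms mean
    refr_E = refr_E - rng.binomial(refr_E, p_recover) + rng.binomial(N_E - refr_E, p_spike)
    refr_I = refr_I - rng.binomial(refr_I, p_recover) + rng.binomial(N_I - refr_I, p_spike)
    return refr_E, refr_I

refr_E, refr_I = 40, 10
trace = [(refr_E, refr_I)]
for _ in range(2000):
    refr_E, refr_I = step(refr_E, refr_I)
    trace.append((refr_E, refr_I))
print("mean recently-spiked counts (E, I):", np.mean(trace, axis=0))
```

    Because the spiking probability is a shared nonlinear function of both pools' activities, the two population traces fluctuate together, which is the kind of E-I correlation the abstract highlights as a hallmark of the balanced state.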

  19. Cultured Neural Networks: Optimization of Patterned Network Adhesiveness and Characterization of their Neural Activity

    Directory of Open Access Journals (Sweden)

    W. L. C. Rutten

    2006-01-01

    One type of future, improved neural interface is the “cultured probe”. It is a hybrid type of neural information transducer or prosthesis, for stimulation and/or recording of neural activity. It would consist of a microelectrode array (MEA) on a planar substrate, each electrode being covered and surrounded by a local, circularly confined network (“island”) of cultured neurons. The main purpose of the local networks is that they act as biofriendly intermediates for collateral sprouts from the in vivo system, thus allowing for an effective and selective neuron–electrode interface. As a secondary purpose, one may envisage future information-processing applications of these intermediary networks. In this paper, first, progress is shown on how substrates can be chemically modified to confine developing networks, cultured from dissociated rat cortex cells, to “islands” surrounding an electrode site. Additional coating of the neurophobic, polyimide-coated substrate with a triblock copolymer enhances the neurophilic–neurophobic adhesion contrast. Secondly, results are given on neuronal activity in patterned circular “island” networks, both unconnected and connected. For connected islands, the larger the island diameter (50, 100, or 150 μm), the more spontaneous activity is seen. Also, activity may show a very high degree of synchronization between two islands. For unconnected islands, activity may start at 22 days in vitro (DIV), which is two weeks later than in unpatterned networks.

  20. MIRNAS in Astrocyte-Derived Exosomes as Possible Mediators of Neuronal Plasticity

    Directory of Open Access Journals (Sweden)

    Carlos Lafourcade

    2016-01-01

    Astrocytes use gliotransmitters to modulate neuronal function and plasticity. However, the role of small extracellular vesicles, called exosomes, in astrocyte-to-neuron signaling is mostly unknown. Exosomes originate in multivesicular bodies of parent cells and are secreted by fusion of the multivesicular body limiting membrane with the plasma membrane. Their molecular cargo, consisting of RNA species, proteins, and lipids, is in part cell type and cell state specific. Among the RNA species transported by exosomes, microRNAs (miRNAs) are able to modify gene expression in recipient cells. Several miRNAs present in astrocytes are regulated under pathological conditions, and this may have far-reaching consequences if they are loaded into exosomes. We propose that astrocyte-derived exosomes loaded with miRNAs such as miR-26a are dysregulated in several central nervous system diseases, thus potentially controlling neuronal morphology and synaptic transmission through validated and predicted targets. Unraveling the contribution of this new signaling mechanism to the maintenance and plasticity of neuronal networks will impact our understanding of the physiology and pathophysiology of the central nervous system.