WorldWideScience

Sample records for biological neuronal networks

  1. Functional model of biological neural networks.

    Science.gov (United States)

    Lo, James Ting-Ho

    2010-12-01

    A functional model of biological neural networks, called temporal hierarchical probabilistic associative memory (THPAM), is proposed in this paper. THPAM comprises functional models of dendritic trees for encoding inputs to neurons, a first type of neuron for generating spike trains, a second type of neuron for generating graded signals to modulate neurons of the first type, supervised and unsupervised Hebbian learning mechanisms for easy learning and retrieval, an arrangement of dendritic trees for maximizing generalization, hardwiring for rotation-translation-scaling invariance, and feedback connections with different delay durations that allow neurons to make full use of present and past information generated by neurons in the same and higher layers. These functional models and their processing operations reproduce many functions of biological neural networks that have not been achieved by other models in the open literature and provide logically coherent answers to many long-standing neuroscientific questions. However, biological justifications of these functional models and their processing operations are required for THPAM to qualify as a macroscopic model (or low-order approximation) of biological neural networks.

  2. Constructing Precisely Computing Networks with Biophysical Spiking Neurons.

    Science.gov (United States)

    Schwemmer, Michael A; Fairhall, Adrienne L; Denéve, Sophie; Shea-Brown, Eric T

    2015-07-15

    While spike timing has been shown to carry detailed stimulus information at the sensory periphery, its possible role in network computation is less clear. Most models of computation by neural networks are based on population firing rates. In equivalent spiking implementations, firing is assumed to be random such that averaging across populations of neurons recovers the rate-based approach. Recently, however, Denéve and colleagues have suggested that the spiking behavior of neurons may be fundamental to how neuronal networks compute, with precise spike timing determined by each neuron's contribution to producing the desired output (Boerlin and Denéve, 2011; Boerlin et al., 2013). By postulating that each neuron fires to reduce the error in the network's output, it was demonstrated that linear computations can be performed by networks of integrate-and-fire neurons that communicate through instantaneous synapses. This left open, however, the possibility that realistic networks, with conductance-based neurons with subthreshold nonlinearity and the slower timescales of biophysical synapses, may not fit into this framework. Here, we show how the spike-based approach can be extended to biophysically plausible networks. We then show that our network reproduces a number of key features of cortical networks including irregular and Poisson-like spike times and a tight balance between excitation and inhibition. Lastly, we discuss how the behavior of our model scales with network size or with the number of neurons "recorded" from a larger computing network. These results significantly increase the biological plausibility of the spike-based approach to network computation. We derive a network of neurons with standard spike-generating currents and synapses with realistic timescales that computes based upon the principle that the precise timing of each spike is important for the computation. We then show that our network reproduces a number of key features of cortical networks

  3. Autapse-induced synchronization in a coupled neuronal network

    International Nuclear Information System (INIS)

    Ma, Jun; Song, Xinlin; Jin, Wuyin; Wang, Chuni

    2015-01-01

    Highlights: • The functional effect of an autapse on neuronal activity is detected. • Autapse driving plays an active role in regulating electrical activities as a pacemaker. • It confirms biological experimental results on rhythm synchronization between heterogeneous cells. - Abstract: The effect of an autapse on a coupled neuronal network is investigated. In our study, three identical neurons are connected in a ring and an autapse is attached to one neuron of the network. The autapse imposes time-delayed feedback on that neuron in a closed loop, so the dynamics of the membrane potentials can be changed. Firstly, the effect of autapse driving on a single neuron is confirmed: negative feedback calms down the neuronal activity while positive feedback excites it. Secondly, the collective electrical behaviors of the neurons are regulated by a pacemaker associated with the autapse forcing. By using an appropriate gain and time delay in the autapse, the neurons can reach synchronization and the membrane potentials of all neurons can oscillate with the same rhythm under mutual coupling. This indicates that autapse forcing plays an important role in changing the collective electrical activities of the neuronal network, and that appropriate electrical modes can be selected by switching the feedback type (positive or negative) in the autapse. The autapse-induced synchronization in the network is also consistent with biological experiments on synchronization between nonidentical neurons.
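
    A rough numerical sketch of the setup described above is given below in Python: three Hindmarsh-Rose neurons coupled in a ring, with a time-delayed autaptic current g*(x0(t-tau) - x0(t)) applied to one of them. The coupling form, gain, delay and all parameter values are illustrative assumptions, not values taken from the paper.

      import numpy as np

      # Hindmarsh-Rose parameters (standard bursting regime; values are illustrative)
      a, b, c, d = 1.0, 3.0, 1.0, 5.0
      r, s, x0, I_ext = 0.006, 4.0, -1.6, 3.0

      N     = 3          # three identical neurons coupled in a ring
      k     = 0.5        # gap-junction-like ring coupling strength (assumed)
      g_aut = 1.0        # autapse feedback gain (assumed)
      tau   = 20.0       # autapse feedback delay (assumed)
      dt    = 0.01
      steps = 200_000
      delay = int(tau / dt)

      rng = np.random.default_rng(0)
      x = np.full(N, -1.6) + 0.01 * rng.standard_normal(N)
      y = np.zeros(N)
      z = np.zeros(N)
      x_hist = np.full(delay, x[0])        # circular buffer holding x_0(t - tau)

      trace = np.empty((steps, N))
      for t in range(steps):
          left, right = np.roll(x, 1), np.roll(x, -1)
          I_ring = k * (left - x) + k * (right - x)

          # time-delayed autaptic current, applied only to neuron 0
          I_aut = np.zeros(N)
          I_aut[0] = g_aut * (x_hist[t % delay] - x[0])

          dx = y - a * x**3 + b * x**2 - z + I_ext + I_ring + I_aut
          dy = c - d * x**2 - y
          dz = r * (s * (x - x0) - z)

          x_hist[t % delay] = x[0]         # store x_0(t) before the state is advanced
          x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
          trace[t] = x

      # crude synchrony check: mean absolute membrane-potential difference to neuron 0
      err = np.mean(np.abs(trace[:, 0:1] - trace[:, 1:]))
      print(f"mean |x_0 - x_j| over the run: {err:.3f}")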

  4. Synchronization of the small-world neuronal network with unreliable synapses

    International Nuclear Information System (INIS)

    Li, Chunguang; Zheng, Qunxian

    2010-01-01

    As is well known, synchronization phenomena are ubiquitous in neuronal systems. Recently a lot of work concerning the synchronization of the neuronal network has been accomplished. In these works, the synapses are usually considered reliable, but experimental results show that, in biological neuronal networks, synapses are usually unreliable. In our previous work, we have studied the synchronization of the neuronal network with unreliable synapses; however, we have not paid attention to the effect of topology on the synchronization of the neuronal network. Several recent studies have found that biological neuronal networks have typical properties of small-world networks, characterized by a short path length and high clustering coefficient. In this work, mainly based on the small-world neuronal network (SWNN) with inhibitory neurons, we study the effect of network topology on the synchronization of the neuronal network with unreliable synapses. Together with the network topology, the effects of the GABAergic reversal potential, time delay and noise are also considered. Interestingly, we found a counter-intuitive phenomenon for the SWNN with specific shortcut adding probability, that is, the less reliable the synapses, the better the synchronization performance of the SWNN. We also consider the effects of both local noise and global noise in this work. It is shown that these two different types of noise have distinct effects on the synchronization: one is negative and the other is positive

  5. Complete and phase synchronization in a heterogeneous small-world neuronal network

    International Nuclear Information System (INIS)

    Fang, Han; Qi-Shao, Lu; Quan-Bao, Ji; Marian, Wiercigroch

    2009-01-01

    Synchronous firing of neurons is thought to be important for information communication in neuronal networks. This paper investigates complete and phase synchronization in a heterogeneous small-world chaotic Hindmarsh–Rose neuronal network. The effects of various network parameters on synchronization behaviour are discussed with some biological explanations. Complete synchronization of small-world neuronal networks is studied theoretically by the master stability function method. It is shown that the coupling strength necessary for complete or phase synchronization decreases as the neuron number, the node degree and the connection density increase. The effect of heterogeneity of neuronal networks is also considered, and it is found that network heterogeneity has an adverse effect on synchrony. (general)

  6. Performance of networks of artificial neurons: The role of clustering

    International Nuclear Information System (INIS)

    Kim, Beom Jun

    2004-01-01

    The performance of the Hopfield neural network model is numerically studied on various complex networks, such as the Watts-Strogatz network, the Barabási-Albert network, and the neuronal network of Caenorhabditis elegans. Through the use of a systematic way of controlling the clustering coefficient, with the degree of each neuron kept unchanged, we find that networks with lower clustering exhibit much better performance. The results are discussed from the practical viewpoint of application, and the biological implications are also suggested.
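
    The abstract does not give the simulation protocol, so the following Python sketch only illustrates the kind of experiment involved: Hebbian Hopfield weights are masked by a Watts-Strogatz adjacency matrix whose rewiring probability tunes the clustering coefficient, and recall quality is measured as the overlap with a stored pattern. Network size, degree, pattern count and noise level are assumed values, not the paper's.

      import numpy as np
      import networkx as nx

      rng = np.random.default_rng(0)
      N, P = 400, 4                        # neurons, stored patterns (illustrative)

      # Sparse topology: Watts-Strogatz graph; lower rewiring p gives higher clustering
      G = nx.watts_strogatz_graph(N, k=40, p=0.1, seed=0)
      A = nx.to_numpy_array(G)             # 0/1 adjacency mask

      # Hebbian weights, restricted to existing links
      xi = rng.choice([-1, 1], size=(P, N))            # random +/-1 patterns
      W = (xi.T @ xi) / N
      np.fill_diagonal(W, 0.0)
      W *= A                                           # keep only network edges

      # Recall: start from a noisy version of pattern 0, run asynchronous updates
      state = xi[0] * np.where(rng.random(N) < 0.10, -1, 1)    # 10% flipped bits
      for _ in range(20):                                      # update sweeps
          for i in rng.permutation(N):
              h = W[i] @ state
              if h != 0:
                  state[i] = 1 if h > 0 else -1

      overlap = np.abs(state @ xi[0]) / N      # 1.0 = perfect retrieval
      print(f"overlap with stored pattern: {overlap:.3f}")
      print(f"clustering coefficient: {nx.average_clustering(G):.3f}")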

  7. Connectivity and dynamics of neuronal networks as defined by the shape of individual neurons

    International Nuclear Information System (INIS)

    Ahnert, Sebastian E; A N Travencolo, Bruno; Costa, Luciano da Fontoura

    2009-01-01

    Biological neuronal networks constitute a special class of dynamical systems, as they are formed by individual geometrical components, namely the neurons. In the existing literature, relatively little attention has been given to the influence of neuron shape on the overall connectivity and dynamics of the emerging networks. The current work addresses this issue by considering simplified neuronal shapes consisting of circular regions (soma/axons) with spokes (dendrites). Networks are grown by placing these patterns randomly in the two-dimensional (2D) plane and establishing connections whenever a piece of dendrite falls inside an axon. Several topological and dynamical properties of the resulting graph are measured, including the degree distribution, clustering coefficients, symmetry of connections, size of the largest connected component, as well as three hierarchical measurements of the local topology. By varying the number of processes of the individual basic patterns, we can quantify relationships between the individual neuronal shape and the topological and dynamical features of the networks. Integrate-and-fire dynamics on these networks is also investigated with respect to transient activation from a source node, indicating that long-range connections play an important role in the propagation of avalanches.

  8. Heterogeneous delay-induced asynchrony and resonance in a small-world neuronal network system

    Science.gov (United States)

    Yu, Wen-Ting; Tang, Jun; Ma, Jun; Yang, Xianqing

    2016-06-01

    A neuronal network often involves time delays caused by the finite signal propagation time in a given biological network. Such delays are not homogeneous in a biological system. The heterogeneous delay-induced asynchrony and resonance in a noisy small-world neuronal network are numerically studied in this work by calculating a synchronization measure and the spike interval distribution. We focus on three different delay conditions: double-valued delay, triple-valued delay, and Gaussian-distributed delay. Our results show the following: 1) heterogeneity in the delays results in asynchronous firing in the neuronal network, and 2) maximum synchronization can be achieved through resonance provided that the delay values are integer or half-integer multiples of one another.

  9. An FPGA Platform for Real-Time Simulation of Spiking Neuronal Networks.

    Science.gov (United States)

    Pani, Danilo; Meloni, Paolo; Tuveri, Giuseppe; Palumbo, Francesca; Massobrio, Paolo; Raffo, Luigi

    2017-01-01

    In recent years, the idea of dynamically interfacing biological neurons with artificial ones has become more and more urgent. The reason is essentially the design of innovative neuroprostheses in which biological cell assemblies of the brain can be substituted by artificial ones. For closed-loop experiments with biological neuronal networks interfaced with in silico modeled networks, several technological challenges need to be faced, from the low-level interfacing between the living tissue and the computational model to the implementation of the latter in a suitable form for real-time processing. Field programmable gate arrays (FPGAs) can improve flexibility when simple neuronal models are required, obtaining good accuracy, real-time performance, and the possibility to create a hybrid system without any custom hardware, just by programming the hardware to achieve the required functionality. In this paper, this possibility is explored by presenting a modular and efficient FPGA design of an in silico spiking neural network exploiting the Izhikevich model. The proposed system, prototypically implemented on a Xilinx Virtex 6 device, is able to simulate a fully connected network of up to 1,440 neurons, in real time, at a sampling rate of 10 kHz, which is reasonable for small to medium scale extracellular closed-loop experiments.
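
    As a software-level illustration of the neuron model named in the abstract, the sketch below advances a fully connected Izhikevich network with a 0.1 ms (10 kHz) fixed step. The synaptic weights, noise drive and excitatory/inhibitory split are generic assumptions; the fixed-point arithmetic and architecture of the actual FPGA design are not reproduced.

      import numpy as np

      rng = np.random.default_rng(1)
      N  = 1440                  # network size quoted in the abstract
      dt = 0.1                   # ms, i.e. a 10 kHz update rate

      # Izhikevich parameters: 80% regular-spiking excitatory, 20% fast-spiking inhibitory
      Ne = int(0.8 * N)
      a = np.r_[0.02 * np.ones(Ne), 0.10 * np.ones(N - Ne)]
      b = np.r_[0.20 * np.ones(Ne), 0.20 * np.ones(N - Ne)]
      c = np.full(N, -65.0)
      d = np.r_[8.0  * np.ones(Ne), 2.0  * np.ones(N - Ne)]

      # fully connected random weights, sign set by the presynaptic population (assumed scaling)
      W = np.hstack([0.5 * rng.random((N, Ne)), -1.0 * rng.random((N, N - Ne))])

      v = np.full(N, -65.0)
      u = b * v
      fired_prev = np.zeros(N, dtype=bool)
      total_spikes = 0

      for step in range(2_000):                      # 200 ms of simulated time
          I = 15.0 * rng.standard_normal(N)          # noisy thalamic-like drive (assumed level)
          I += W @ fired_prev                        # delta-pulse synapses from last step's spikes
          v += dt * (0.04 * v**2 + 5.0 * v + 140.0 - u + I)
          u += dt * a * (b * v - u)
          fired_prev = v >= 30.0                     # spike detection and reset
          v[fired_prev] = c[fired_prev]
          u[fired_prev] += d[fired_prev]
          total_spikes += fired_prev.sum()

      print(f"mean firing rate: {total_spikes / (N * 0.2):.1f} Hz")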

  10. Dynamic neuronal ensembles: Issues in representing structure change in object-oriented, biologically-based brain models

    Energy Technology Data Exchange (ETDEWEB)

    Vahie, S.; Zeigler, B.P.; Cho, H. [Univ. of Arizona, Tucson, AZ (United States)

    1996-12-31

    This paper describes the structure of dynamic neuronal ensembles (DNEs). DNEs represent a new paradigm for learning, based on biological neural networks that use variable structures. We present a computational neural element that demonstrates biological neuron functionality such as neurotransmitter feedback, absolute refractory period, and multiple output potentials. More specifically, we develop a network of neural elements that have the ability to dynamically strengthen, weaken, add, and remove interconnections. We demonstrate that the DNE is capable of performing dynamic modifications to neuron connections and exhibiting biological neuron functionality. In addition to its applications for learning, DNEs provide an excellent environment for testing and analysis of biological neural systems. An example of habituation and hyper-sensitization in biological systems, using a neural circuit from a snail, is presented and discussed. This paper provides insight into the DNE paradigm using models developed and simulated in DEVS.

  11. Network and neuronal membrane properties in hybrid networks reciprocally regulate selectivity to rapid thalamocortical inputs.

    Science.gov (United States)

    Pesavento, Michael J; Pinto, David J

    2012-11-01

    Rapidly changing environments require rapid processing from sensory inputs. Varying deflection velocities of a rodent's primary facial vibrissa cause varying temporal neuronal activity profiles within the ventral posteromedial thalamic nucleus. Local neuron populations in a single somatosensory layer 4 barrel transform sparsely coded input into a spike count based on the input's temporal profile. We investigate this transformation by creating a barrel-like hybrid network with whole cell recordings of in vitro neurons from a cortical slice preparation, embedding the biological neuron in the simulated network by presenting virtual synaptic conductances via a conductance clamp. Utilizing the hybrid network, we examine the reciprocal network properties (local excitatory and inhibitory synaptic convergence) and neuronal membrane properties (input resistance) by altering the barrel population response to diverse thalamic input. In the presence of local network input, neurons are more selective to thalamic input timing; this arises from strong feedforward inhibition. Strongly inhibitory (damping) network regimes are more selective to timing and less selective to the magnitude of input but require stronger initial input. Input selectivity relies heavily on the different membrane properties of excitatory and inhibitory neurons. When inhibitory and excitatory neurons had identical membrane properties, the sensitivity of in vitro neurons to temporal vs. magnitude features of input was substantially reduced. Increasing the mean leak conductance of the inhibitory cells decreased the network's temporal sensitivity, whereas increasing excitatory leak conductance enhanced magnitude sensitivity. Local network synapses are essential in shaping thalamic input, and differing membrane properties of functional classes reciprocally modulate this effect.

  12. Hybrid Scheme for Modeling Local Field Potentials from Point-Neuron Networks.

    Science.gov (United States)

    Hagen, Espen; Dahmen, David; Stavrinou, Maria L; Lindén, Henrik; Tetzlaff, Tom; van Albada, Sacha J; Grün, Sonja; Diesmann, Markus; Einevoll, Gaute T

    2016-12-01

    With rapidly advancing multi-electrode recording technology, the local field potential (LFP) has again become a popular measure of neuronal activity in both research and clinical applications. Proper understanding of the LFP requires detailed mathematical modeling incorporating the anatomical and electrophysiological features of neurons near the recording electrode, as well as synaptic inputs from the entire network. Here we propose a hybrid modeling scheme combining efficient point-neuron network models with biophysical principles underlying LFP generation by real neurons. The LFP predictions rely on populations of network-equivalent multicompartment neuron models with layer-specific synaptic connectivity, can be used with an arbitrary number of point-neuron network populations, and allow for a full separation of simulated network dynamics and LFPs. We apply the scheme to a full-scale cortical network model for a ∼1 mm² patch of primary visual cortex, predict laminar LFPs for different network states, assess the relative LFP contribution from different laminar populations, and investigate effects of input correlations and neuron density on the LFP. The generic nature of the hybrid scheme and its public implementation in hybridLFPy form the basis for LFP predictions from other and larger point-neuron network models, as well as extensions of the current application with additional biological detail.

  13. [A novel biologic electricity signal measurement based on neuron chip].

    Science.gov (United States)

    Lei, Yinsheng; Wang, Mingshi; Sun, Tongjing; Zhu, Qiang; Qin, Ran

    2006-06-01

    The neuron chip is a multiprocessor with three pipelined CPUs; its communication protocol and control processor are integrated so that it can carry out communication, control, scheduling, I/O, and related functions. A novel biological electrical signal measurement network system is composed of intelligent measurement nodes with a neuron chip at the core. In this study, electrical signals such as the ECG, EEG, EMG and BOS are measured jointly by these intelligent nodes, and some diagnostically valuable information is obtained. The wavelet transform is employed in this system to analyze the various biological electrical signals because of its strong time-frequency ability to resolve local signal characteristics, and good results are obtained. This paper introduces the hardware structure of the network and the intelligent measurement nodes, the measurement principle, and the signal flow of data acquisition and processing.

  14. Order-based representation in random networks of cortical neurons.

    Directory of Open Access Journals (Sweden)

    Goded Shahaf

    2008-11-01

    Full Text Available The wide range of time scales involved in neural excitability and synaptic transmission might lead to ongoing change in the temporal structure of responses to recurring stimulus presentations on a trial-to-trial basis. This is probably the most severe biophysical constraint on putative time-based primitives of stimulus representation in neuronal networks. Here we show that in spontaneously developing large-scale random networks of cortical neurons in vitro the order in which neurons are recruited following each stimulus is a naturally emerging representation primitive that is invariant to significant temporal changes in spike times. With a relatively small number of randomly sampled neurons, the information about stimulus position is fully retrievable from the recruitment order. The effective connectivity that makes order-based representation invariant to time warping is characterized by the existence of stations through which activity is required to pass in order to propagate further into the network. This study uncovers a simple invariant in a noisy biological network in vitro; its applicability under in vivo constraints remains to be seen.
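
    A minimal Python sketch of the order-based analysis is given below: the recruitment order is taken as the rank of each neuron's first post-stimulus spike, and its trial-to-trial stability is quantified with a Spearman rank correlation, here on surrogate latencies that include a trial-specific time warp. The surrogate data and the particular stability measure are illustrative choices, not the paper's exact analysis.

      import numpy as np
      from scipy.stats import spearmanr

      def recruitment_order(first_spike_times):
          """Rank of each neuron's first spike after the stimulus (0 = recruited first)."""
          return np.argsort(np.argsort(first_spike_times))

      rng = np.random.default_rng(4)
      n_neurons, n_trials = 30, 40

      # surrogate data: a fixed latency profile plus trial-to-trial jitter and a
      # trial-specific 'time warp', standing in for recorded first-spike latencies
      base_latency = np.sort(rng.uniform(5, 80, n_neurons))           # ms
      trials = np.array([
          base_latency * rng.uniform(0.7, 1.4) + rng.normal(0, 3.0, n_neurons)
          for _ in range(n_trials)
      ])

      orders = np.array([recruitment_order(t) for t in trials])

      # stability of the order across trials (mean pairwise Spearman rank correlation)
      rhos = [spearmanr(orders[i], orders[j])[0]
              for i in range(n_trials) for j in range(i + 1, n_trials)]
      print(f"mean pairwise rank correlation of recruitment order: {np.mean(rhos):.2f}")
      print(f"latency spread across trials (ms): {trials.std(axis=0).mean():.1f}")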

  15. Growth of cortical neuronal network in vitro: Modeling and analysis

    International Nuclear Information System (INIS)

    Lai, P.-Y.; Jia, L. C.; Chan, C. K.

    2006-01-01

    We present a detailed analysis and theoretical growth models to account for recent experimental data on the growth of cortical neuronal networks in vitro [Phys. Rev. Lett. 93, 088101 (2004)]. The experimentally observed synchronized firing frequency of a well-connected neuronal network is shown to be proportional to the mean network connectivity. The growth of the network is consistent with the model of an early enhanced growth of connection, but followed by a retarded growth once the synchronized cluster is formed. Microscopic models with dominant excluded volume interactions are consistent with the observed exponential decay of the mean connection probability as a function of the mean network connectivity. The biological implications of the growth model are also discussed

  16. Coherence resonance in globally coupled neuronal networks with different neuron numbers

    International Nuclear Information System (INIS)

    Ning Wei-Lian; Zhang Zheng-Zhen; Zeng Shang-You; Luo Xiao-Shu; Hu Jin-Lin; Zeng Shao-Wen; Qiu Yi; Wu Hui-Si

    2012-01-01

    Because a brain consists of a tremendous number of neuronal networks with neuron numbers ranging from tens to tens of thousands, we study the coherence resonance due to ion channel noise in globally coupled neuronal networks with different neuron numbers. We confirm that for all neuronal networks with different neuron numbers there exist array-enhanced coherence resonance and an optimal synaptic conductance that causes maximal spiking coherence. Furthermore, the enhancement effects of coupling on spiking coherence and on the optimal synaptic conductance are almost the same, regardless of the number of neurons in the network. Therefore, for all the neuronal networks with different neuron numbers in the brain, a relatively weak synaptic conductance (0.1 mS/cm²) is sufficient to induce the maximal spiking coherence and the best sub-threshold signal encoding. (interdisciplinary physics and related areas of science and technology)

  17. Analysis of connectivity map: Control to glutamate injured and phenobarbital treated neuronal network

    Science.gov (United States)

    Kamal, Hassan; Kanhirodan, Rajan; Srinivas, Kalyan V.; Sikdar, Sujit K.

    2010-04-01

    We study the responses of a cultured neural network exposed to an epileptogenic glutamate injury that causes epilepsy, and its subsequent treatment with phenobarbital, by constructing a connectivity map of the neurons using the correlation matrix. This study is particularly useful in understanding pharmaceutical drug induced changes in neuronal network properties, with insights into changes at the systems biology level.
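
    A generic version of the correlation-matrix construction reads as follows (Python/numpy): spike trains are binned, the Pearson correlation matrix is computed, and a functional adjacency map is obtained by thresholding. Bin width and threshold are assumed values, and the toy spike data stand in for the recorded culture activity.

      import numpy as np

      def connectivity_map(spike_times, n_neurons, t_max, bin_ms=50.0, threshold=0.3):
          """Functional connectivity from pairwise correlations of binned spike counts.

          spike_times : list of (neuron_id, time_ms) tuples
          Returns the correlation matrix and a thresholded 0/1 adjacency map.
          """
          n_bins = int(np.ceil(t_max / bin_ms))
          counts = np.zeros((n_neurons, n_bins))
          for nid, t in spike_times:
              counts[nid, int(t // bin_ms)] += 1

          C = np.corrcoef(counts)                 # Pearson correlation of spike counts
          C = np.nan_to_num(C)                    # guard: silent neurons give NaN rows
          A = (np.abs(C) > threshold).astype(int) # edge wherever the correlation is strong
          np.fill_diagonal(A, 0)
          return C, A

      # toy usage with random spikes (placeholder for recorded culture data)
      rng = np.random.default_rng(0)
      spikes = [(rng.integers(0, 60), rng.uniform(0, 60_000)) for _ in range(20_000)]
      C, A = connectivity_map(spikes, n_neurons=60, t_max=60_000)
      print("mean functional degree:", A.sum(axis=1).mean())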

  18. Network reconfiguration and neuronal plasticity in rhythm-generating networks.

    Science.gov (United States)

    Koch, Henner; Garcia, Alfredo J; Ramirez, Jan-Marino

    2011-12-01

    Neuronal networks are highly plastic and reconfigure in a state-dependent manner. Plasticity at the network level emerges through multiple intrinsic and synaptic membrane properties that imbue neurons and their interactions with numerous nonlinear properties. These properties are continuously regulated by neuromodulators and homeostatic mechanisms that are critical not only for maintaining network stability but also for adapting networks, in both the short and the long term, to changes in behavioral, developmental, metabolic, and environmental conditions. This review provides concrete examples from neuronal networks in invertebrates and vertebrates, and illustrates that the concepts and rules that govern neuronal networks and behaviors are universal.

  19. Complete Neuron-Astrocyte Interaction Model: Digital Multiplierless Design and Networking Mechanism.

    Science.gov (United States)

    Haghiri, Saeed; Ahmadi, Arash; Saif, Mehrdad

    2017-02-01

    Glial cells, also known as neuroglia or glia, are non-neuronal cells providing support and protection for neurons in the central nervous system (CNS). They also act as supportive cells in the brain. Among the variety of glial cells, the star-shaped astrocytes form the largest cell population in the brain. Important astrocytic functions, such as neuronal synchronization, regulation of synaptic information, feedback to neural activity and extracellular regulation, give astrocytes a vital role in brain disease. This paper presents a modified complete neuron-astrocyte interaction model that is more suitable for efficient and large-scale biological neural network realization on digital platforms. Simulation results show that the modified complete interaction model can reproduce biologically plausible behavior of the original neuron-astrocyte mechanism. The modified interaction model is investigated in terms of digital realization feasibility and cost, targeting a low-cost hardware implementation. The networking behavior of this interaction is investigated and compared between two cases: i) the neuron spiking mechanism without astrocyte effects, and ii) the effect of the astrocyte in regulating the neurons' behavior and synaptic transmission via control of the LTP and LTD processes. Hardware implementation on an FPGA shows that the modified model mimics the main mechanism of neuron-astrocyte communication with higher performance and considerably lower hardware overhead cost compared with the original interaction model.

  20. A distance constrained synaptic plasticity model of C. elegans neuronal network

    Science.gov (United States)

    Badhwar, Rahul; Bagler, Ganesh

    2017-03-01

    Brain research has been driven by enquiry into the principles of brain structure organization and its control mechanisms. The neuronal wiring map of C. elegans, the only complete connectome available to date, presents an incredible opportunity to learn the basic governing principles that drive the structure and function of its neuronal architecture. Despite its apparently simple nervous system, C. elegans is known to possess complex functions. The nervous system forms an important underlying framework which specifies phenotypic features associated with sensation, movement, conditioning and memory. In this study, with the help of graph theoretical models, we investigated the C. elegans neuronal network to identify network features that are critical for its control. The 'driver neurons' are associated with important biological functions such as reproduction, signalling processes and anatomical structural development. We created 1D and 2D network models of the C. elegans neuronal system to probe the role of features that confer controllability and small-world nature. The simple 1D ring model is critically poised with respect to the number of feed-forward motifs, neuronal clustering and characteristic path length in response to synaptic rewiring, indicating optimal rewiring. Using the empirically observed distance constraint in the neuronal network as a guiding principle, we created a distance-constrained synaptic plasticity model that simultaneously explains the small-world nature, the saturation of feed-forward motifs, and the observed number of driver neurons. The distance-constrained model suggests that optimal long-distance synaptic connections are a key feature specifying control of the network.

  1. Robust emergence of small-world structure in networks of spiking neurons.

    Science.gov (United States)

    Kwok, Hoi Fei; Jurica, Peter; Raffone, Antonino; van Leeuwen, Cees

    2007-03-01

    Spontaneous activity in biological neural networks shows patterns of dynamic synchronization. We propose that these patterns support the formation of a small-world structure: network connectivity that is optimal for distributed information processing. We present numerical simulations with connected Hindmarsh-Rose neurons in which, starting from random connection distributions, small-world networks evolve as a result of applying an adaptive rewiring rule. The rule connects pairs of neurons that tend to fire in synchrony, and disconnects ones that fail to synchronize. Repeated application of the rule leads to small-world structures. This mechanism is robustly observed for bursting and irregular firing regimes.
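
    The sketch below illustrates the rewiring loop in Python: at each step a randomly chosen neuron is connected to the unit it is most 'synchronous' with and disconnected from its least synchronous neighbour. Because re-simulating Hindmarsh-Rose dynamics at every step is beyond a short example, a placeholder coherence matrix based on shared neighbours is used; in the actual model the synchrony would be re-estimated from the network dynamics after each rewiring.

      import numpy as np
      import networkx as nx

      rng = np.random.default_rng(3)
      N, p_edge, n_steps = 100, 0.1, 2000

      G = nx.gnp_random_graph(N, p_edge, seed=3)             # random initial wiring

      def coherence_matrix(G):
          """Placeholder for a synchrony measure estimated from network dynamics.
          In the actual model this would come from simulating Hindmarsh-Rose neurons
          on G; here nodes sharing many neighbours are simply assumed to be more
          coherent, which is enough to illustrate the rewiring loop."""
          A = nx.to_numpy_array(G)
          S = A @ A + 0.01 * rng.random((len(G), len(G)))    # shared-neighbour count + noise
          np.fill_diagonal(S, 0.0)
          return S

      for step in range(n_steps):
          S = coherence_matrix(G)
          i = rng.integers(N)
          nbrs = list(G.neighbors(i))
          non_nbrs = [j for j in range(N) if j != i and not G.has_edge(i, j)]
          if not nbrs or not non_nbrs:
              continue
          j = max(non_nbrs, key=lambda n: S[i, n])   # most synchronous non-neighbour
          k = min(nbrs, key=lambda n: S[i, n])       # least synchronous neighbour
          G.remove_edge(i, k)                        # rewire: drop the weak link,
          G.add_edge(i, j)                           # connect the coherent pair

      comp = G.subgraph(max(nx.connected_components(G), key=len))
      print("clustering:", round(nx.average_clustering(G), 3),
            "  path length (largest component):",
            round(nx.average_shortest_path_length(comp), 3))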

  2. Simulating synchronization in neuronal networks

    Science.gov (United States)

    Fink, Christian G.

    2016-06-01

    We discuss several techniques used in simulating neuronal networks by exploring how a network's connectivity structure affects its propensity for synchronous spiking. Network connectivity is generated using the Watts-Strogatz small-world algorithm, and two key measures of network structure are described. These measures quantify structural characteristics that influence collective neuronal spiking, which is simulated using the leaky integrate-and-fire model. Simulations show that adding a small number of random connections to an otherwise lattice-like connectivity structure leads to a dramatic increase in neuronal synchronization.
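
    In the spirit of the teaching-oriented description above, the following Python sketch builds a Watts-Strogatz graph, runs leaky integrate-and-fire neurons with instantaneous excitatory kicks on it, and reports a population-voltage synchrony measure for several rewiring probabilities. The parameter values and the particular synchrony measure (a Golomb-Rinzel-style variance ratio) are illustrative choices rather than the paper's.

      import numpy as np
      import networkx as nx

      def simulate(p_rewire, N=200, k=8, T=2000.0, dt=0.1, seed=0):
          """LIF network on a Watts-Strogatz graph; returns a synchrony measure."""
          rng = np.random.default_rng(seed)
          A = nx.to_numpy_array(nx.watts_strogatz_graph(N, k, p_rewire, seed=seed))

          tau_m, v_th, v_reset = 20.0, 1.0, 0.0     # membrane time constant (ms), threshold, reset
          I_ext, w_syn = 1.05, 0.02                 # suprathreshold drive, synaptic kick (assumed)
          v = rng.random(N)                         # random initial phases
          V = np.empty((int(T / dt), N))

          for t in range(V.shape[0]):
              v += dt * (-v + I_ext) / tau_m
              spiking = v >= v_th
              v[spiking] = v_reset
              v += w_syn * (A @ spiking)            # instantaneous excitatory kicks
              np.clip(v, None, v_th, out=v)         # kicks cannot overshoot the threshold in one step
              V[t] = v

          # Golomb-Rinzel-style synchrony: variance of the population-mean voltage
          # relative to the mean single-neuron variance (1 = full synchrony).
          return V.mean(axis=1).var() / V.var(axis=0).mean()

      for p in (0.0, 0.05, 0.2, 1.0):
          print(f"rewiring p = {p:4.2f}  synchrony = {simulate(p):.3f}")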

  3. A Scalable Weight-Free Learning Algorithm for Regulatory Control of Cell Activity in Spiking Neuronal Networks.

    Science.gov (United States)

    Zhang, Xu; Foderaro, Greg; Henriquez, Craig; Ferrari, Silvia

    2018-03-01

    Recent developments in neural stimulation and recording technologies are providing scientists with the ability to record and control the activity of individual neurons in vitro or in vivo with very high spatial and temporal resolution. Tools such as optogenetics, for example, are having a significant impact in the neuroscience field by delivering optical firing control with the precision and spatiotemporal resolution required for investigating information processing and plasticity in biological brains. While a number of training algorithms have been developed to date for spiking neural network (SNN) models of biological neuronal circuits, existing methods rely on learning rules that adjust the synaptic strengths (or weights) directly, in order to obtain the desired network-level (or functional-level) performance. As such, they are not applicable to modifying plasticity in biological neuronal circuits, in which synaptic strengths only change as a result of pre- and post-synaptic neuron firings or biological mechanisms beyond our control. This paper presents a weight-free training algorithm that relies solely on adjusting the spatiotemporal delivery of neuron firings in order to optimize the network performance. The proposed weight-free algorithm does not require any knowledge of the SNN model or its plasticity mechanisms. As a result, this training approach is potentially realizable in vitro or in vivo via neural stimulation and recording technologies, such as optogenetics and multielectrode arrays, and could be utilized to control plasticity at multiple scales of biological neuronal circuits. The approach is demonstrated by training SNNs with hundreds of units to control a virtual insect navigating in an unknown environment.

  4. Organization of excitable dynamics in hierarchical biological networks.

    Directory of Open Access Journals (Sweden)

    Mark Müller-Linow

    Full Text Available This study investigates the contributions of network topology features to the dynamic behavior of hierarchically organized excitable networks. Representatives of different types of hierarchical networks as well as two biological neural networks are explored with a three-state model of node activation for systematically varying levels of random background network stimulation. The results demonstrate that two principal topological aspects of hierarchical networks, node centrality and network modularity, correlate with the network activity patterns at different levels of spontaneous network activation. The approach also shows that the dynamic behavior of the cerebral cortical systems network in the cat is dominated by the network's modular organization, while the activation behavior of the cellular neuronal network of Caenorhabditis elegans is strongly influenced by hub nodes. These findings indicate the interaction of multiple topological features and dynamic states in the function of complex biological networks.
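
    The abstract does not specify the update rules of the three-state model, so the Python sketch below uses a generic susceptible-excited-refractory cycle with random background stimulation, run here on a modular toy topology standing in for the hierarchical networks of the study.

      import numpy as np
      import networkx as nx

      rng = np.random.default_rng(7)

      # three node states of a generic excitable model
      SUSCEPTIBLE, EXCITED, REFRACTORY = 0, 1, 2

      def run_excitable(A, f=0.001, steps=2000):
          """Synchronous update of a susceptible -> excited -> refractory cycle.
          A susceptible node becomes excited if it has at least one excited
          neighbour, or spontaneously with probability f (background stimulation)."""
          N = A.shape[0]
          state = np.zeros(N, dtype=int)
          activity = np.empty(steps)
          for t in range(steps):
              excited_input = (A @ (state == EXCITED)) > 0
              spontaneous = rng.random(N) < f
              new_state = state.copy()
              new_state[(state == SUSCEPTIBLE) & (excited_input | spontaneous)] = EXCITED
              new_state[state == EXCITED] = REFRACTORY
              new_state[state == REFRACTORY] = SUSCEPTIBLE
              state = new_state
              activity[t] = np.mean(state == EXCITED)
          return activity

      # a modular, hierarchical-like toy topology: dense blocks, sparse links between them
      G = nx.random_partition_graph([50, 50, 50, 50], p_in=0.2, p_out=0.005, seed=7)
      act = run_excitable(nx.to_numpy_array(G))
      print(f"mean fraction of excited nodes: {act.mean():.4f}")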

  5. Understanding the Generation of Network Bursts by Adaptive Oscillatory Neurons

    Directory of Open Access Journals (Sweden)

    Tanguy Fardet

    2018-02-01

    Full Text Available Experimental and numerical studies have revealed that isolated populations of oscillatory neurons can spontaneously synchronize and generate periodic bursts involving the whole network. Such a behavior has notably been observed for cultured neurons in rodent's cortex or hippocampus. We show here that a sufficient condition for this network bursting is the presence of an excitatory population of oscillatory neurons which displays spike-driven adaptation. We provide an analytic model to analyze network bursts generated by coupled adaptive exponential integrate-and-fire neurons. We show that, for strong synaptic coupling, intrinsically tonic spiking neurons evolve to reach a synchronized intermittent bursting state. The presence of inhibitory neurons or plastic synapses can then modulate this dynamics in many ways but is not necessary for its appearance. Thanks to a simple self-consistent equation, our model gives an intuitive and semi-quantitative tool to understand the bursting behavior. Furthermore, it suggests that after-hyperpolarization currents are sufficient to explain bursting termination. Through a thorough mapping between the theoretical parameters and ion-channel properties, we discuss the biological mechanisms that could be involved and the relevance of the explored parameter-space. Such an insight enables us to propose experimentally-testable predictions regarding how blocking fast, medium or slow after-hyperpolarization channels would affect the firing rate and burst duration, as well as the interburst interval.
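
    The key ingredient identified above, spike-driven adaptation in adaptive exponential integrate-and-fire (AdEx) neurons, can be illustrated with a single-neuron Python sketch: each spike increments the adaptation current w by b, which progressively lengthens the interspike intervals. Parameter values are illustrative, and the coupled network and self-consistent analysis of the paper are not reproduced.

      import numpy as np

      # AdEx parameters (illustrative values in pF, nS, mV, pA, ms)
      C, g_L, E_L      = 200.0, 10.0, -70.0
      V_T, Delta_T     = -50.0, 2.0
      a, b, tau_w      = 2.0, 60.0, 120.0      # b: spike-triggered adaptation increment
      V_reset, V_spike = -58.0, 0.0
      I_ext            = 500.0                 # constant suprathreshold drive
      dt, T            = 0.1, 2000.0

      V, w = E_L, 0.0
      spikes = []
      for step in range(int(T / dt)):
          dV = (-g_L * (V - E_L) + g_L * Delta_T * np.exp((V - V_T) / Delta_T) - w + I_ext) / C
          dw = (a * (V - E_L) - w) / tau_w
          V += dt * dV
          w += dt * dw
          if V >= V_spike:                     # spike: reset and increment adaptation
              V = V_reset
              w += b
              spikes.append(step * dt)

      isi = np.diff(spikes)
      print(f"{len(spikes)} spikes; first ISI {isi[0]:.1f} ms, last ISI {isi[-1]:.1f} ms"
            " (lengthening ISIs reflect spike-driven adaptation)")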

  6. Color encoding in biologically-inspired convolutional neural networks.

    Science.gov (United States)

    Rafegas, Ivet; Vanrell, Maria

    2018-05-11

    Convolutional Neural Networks have been proposed as suitable frameworks to model biological vision. Some of these artificial networks have shown representational properties that rival primate performance in object recognition. In this paper we explore how color is encoded in a trained artificial network. This is done by estimating a color selectivity index for each neuron, which describes the neuron's activity in response to color input stimuli. The index allows us to classify neurons as color selective or not, and as selective to a single color or to a pair of colors. We determined that all five convolutional layers of the network have a large number of color selective neurons. Color opponency clearly emerges in the first layer, presenting 4 main axes (Black-White, Red-Cyan, Blue-Yellow and Magenta-Green), but this is reduced and rotated as we go deeper into the network. In layer 2 we find a denser hue sampling of color neurons, and opponency is reduced almost to one new main axis, the Bluish-Orangish, coinciding with the dataset bias. In layers 3, 4 and 5 color neurons are similar amongst themselves, presenting different types of neurons that detect specific colored objects (e.g., orangish faces), specific surrounds (e.g., blue sky) or specific colored or contrasted object-surround configurations (e.g., a blue blob in a green surround). Overall, our work concludes that color and shape representation are successively entangled through all the layers of the studied network, revealing certain parallelisms with the reported evidence in primate brains that can provide useful insight into intermediate hierarchical spatio-chromatic representations.

  7. An Asynchronous Recurrent Network of Cellular Automaton-Based Neurons and Its Reproduction of Spiking Neural Network Activities.

    Science.gov (United States)

    Matsubara, Takashi; Torikai, Hiroyuki

    2016-04-01

    Modeling and implementation approaches for the reproduction of input-output relationships in biological nervous tissues contribute to the development of engineering and clinical applications. However, because of high nonlinearity, traditional modeling and implementation approaches encounter difficulties in terms of generalization ability (i.e., performance when reproducing an unknown data set) and computational resources (i.e., computation time and circuit elements). To overcome these difficulties, asynchronous cellular automaton-based neuron (ACAN) models, which are described as special kinds of cellular automata that can be implemented as small asynchronous sequential logic circuits, have been proposed. This paper presents a novel type of such an ACAN and a theoretical analysis of its excitability. This paper also presents a novel network of such neurons, which can mimic the input-output relationships of biological neural networks and of nonlinear ordinary differential equation model neural networks. Numerical analyses confirm that the presented network has a higher generalization ability than other major modeling and implementation approaches. In addition, field-programmable gate array implementations confirm that the presented network requires lower computational resources.

  8. Self-organized criticality occurs in non-conservative neuronal networks during `up' states

    Science.gov (United States)

    Millman, Daniel; Mihalas, Stefan; Kirkwood, Alfredo; Niebur, Ernst

    2010-10-01

    During sleep, under anaesthesia and in vitro, cortical neurons in sensory, motor, association and executive areas fluctuate between so-called up and down states, which are characterized by distinct membrane potentials and spike rates. Another phenomenon observed in preparations similar to those that exhibit up and down states (such as anaesthetized rats, brain slices and cultures devoid of sensory input, as well as awake monkey cortex) is self-organized criticality (SOC). SOC is characterized by activity 'avalanches' with a branching parameter near unity and a size distribution that obeys a power law with a critical exponent of about -3/2. Recent work has demonstrated SOC in conservative neuronal network models, but critical behaviour breaks down when biologically realistic 'leaky' neurons are introduced. Here, we report robust SOC behaviour in networks of non-conservative leaky integrate-and-fire neurons with short-term synaptic depression. We show analytically and numerically that these networks typically have two stable activity levels, corresponding to up and down states, that the networks switch spontaneously between these states, and that up states are critical while down states are subcritical.
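
    A small Python sketch of the avalanche analysis implied above: population activity is binned, avalanches are defined as runs of non-empty bins, and the size distribution is compared against the s^(-3/2) prediction. The surrogate activity comes from a critical branching process rather than from the integrate-and-fire network of the paper, and the seeding rate and fit range are assumed.

      import numpy as np

      def avalanche_sizes(spike_counts):
          """Split a binned population spike count into avalanches: maximal runs of
          non-empty bins bounded by empty bins; the size is the total spike count."""
          sizes, current = [], 0
          for c in spike_counts:
              if c > 0:
                  current += c
              elif current > 0:
                  sizes.append(current)
                  current = 0
          if current > 0:
              sizes.append(current)
          return np.array(sizes)

      # surrogate raster from a critical branching process (branching parameter = 1),
      # standing in for spikes recorded during 'up' states
      rng = np.random.default_rng(0)
      counts, active = [], 0
      for t in range(500_000):
          if active == 0 and rng.random() < 0.02:          # external seeding of an avalanche
              active = 1
          counts.append(active)
          # each active unit spawns Binomial(2, 0.5) offspring, i.e. one on average
          active = rng.binomial(active, 0.5) + rng.binomial(active, 0.5) if active else 0

      sizes = avalanche_sizes(np.array(counts))
      vals, cnts = np.unique(sizes, return_counts=True)
      mask = (vals >= 1) & (vals <= 50)                    # quick fit over an intermediate range
      slope = np.polyfit(np.log(vals[mask]), np.log(cnts[mask]), 1)[0]
      print(f"{len(sizes)} avalanches; log-log slope approx {slope:.2f} (SOC prediction: -1.5)")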

  9. An FPGA-based silicon neuronal network with selectable excitability silicon neurons

    Directory of Open Access Journals (Sweden)

    Jing Li

    2012-12-01

    Full Text Available This paper presents a digital silicon neuronal network which simulates the nervous system in living creatures and has the ability to execute intelligent tasks, such as associative memory. Two essential elements, the mathematical-structure-based digital spiking silicon neuron (DSSN) and the transmitter-release-based silicon synapse, allow the network to show rich dynamic behaviors and are computationally efficient for hardware implementation. We adopt a mixed pipeline and parallel structure and shift operations to design a sufficiently large and complex network without excessive hardware resource cost. The network, with 256 fully connected neurons, is built on a Digilent Atlys board equipped with a Xilinx Spartan-6 LX45 FPGA. In addition, a memory control block and a USB control block are designed to accomplish the task of data communication between the network and the host PC. This paper also describes the mechanism of associative memory performed in the silicon neuronal network. The network is capable of retrieving stored patterns if the inputs contain enough information about them. The retrieval probability increases as the similarity between the input and the stored pattern increases. Synchronization of neurons is observed when successful retrieval of a stored pattern occurs.

  10. Towards the understanding of network information processing in biology

    Science.gov (United States)

    Singh, Vijay

    Living organisms perform incredibly well in detecting a signal present in the environment. This information processing is achieved near optimally and quite reliably, even though the sources of signals are highly variable and complex. The work in the last few decades has given us a fair understanding of how individual signal processing units like neurons and cell receptors process signals, but the principles of collective information processing on biological networks are far from clear. Information processing in biological networks, like the brain, metabolic circuits, cellular-signaling circuits, etc., involves complex interactions among a large number of units (neurons, receptors). The combinatorially large number of states such a system can exist in makes it impossible to study these systems from the first principles, starting from the interactions between the basic units. The principles of collective information processing on such complex networks can be identified using coarse graining approaches. This could provide insights into the organization and function of complex biological networks. Here I study models of biological networks using continuum dynamics, renormalization, maximum likelihood estimation and information theory. Such coarse graining approaches identify features that are essential for certain processes performed by underlying biological networks. We find that long-range connections in the brain allow for global scale feature detection in a signal. These also suppress the noise and remove any gaps present in the signal. Hierarchical organization with long-range connections leads to large-scale connectivity at low synapse numbers. Time delays can be utilized to separate a mixture of signals with temporal scales. Our observations indicate that the rules in multivariate signal processing are quite different from traditional single unit signal processing.

  11. Learning and coding in biological neural networks

    Science.gov (United States)

    Fiete, Ila Rani

    How can large groups of neurons that locally modify their activities learn to collectively perform a desired task? Do studies of learning in small networks tell us anything about learning in the fantastically large collection of neurons that make up a vertebrate brain? What factors do neurons optimize by encoding sensory inputs or motor commands in the way they do? In this thesis I present a collection of four theoretical works: each of the projects was motivated by specific constraints and complexities of biological neural networks, as revealed by experimental studies; together, they aim to partially address some of the central questions of neuroscience posed above. We first study the role of sparse neural activity, as seen in the coding of sequential commands in a premotor area responsible for birdsong. We show that the sparse coding of temporal sequences in the songbird brain can, in a network where the feedforward plastic weights must translate the sparse sequential code into a time-varying muscle code, facilitate learning by minimizing synaptic interference. Next, we propose a biologically plausible synaptic plasticity rule that can perform goal-directed learning in recurrent networks of voltage-based spiking neurons that interact through conductances. Learning is based on the correlation of noisy local activity with a global reward signal; we prove that this rule performs stochastic gradient ascent on the reward. Thus, if the reward signal quantifies network performance on some desired task, the plasticity rule provably drives goal-directed learning in the network. To assess the convergence properties of the learning rule, we compare it with a known example of learning in the brain. Song-learning in finches is a clear example of a learned behavior, with detailed available neurophysiological data. With our learning rule, we train an anatomically accurate model birdsong network that drives a sound source to mimic an actual zebrafinch song. Simulation and
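
    The learning principle described above (correlating noisy local activity with a global reward signal) can be illustrated with a toy node-perturbation example in Python: a linear readout learns a target mapping purely from a scalar reward, using a three-factor update of the form eta * (reward - baseline) * noise * presynaptic activity. The toy setup, parameter values and the linear (non-spiking) unit are assumptions for illustration; the thesis works with conductance-based spiking networks.

      import numpy as np

      rng = np.random.default_rng(0)
      n_in = 20
      w_target = rng.standard_normal(n_in)       # "correct" readout the unit should learn
      w = np.zeros(n_in)

      eta, sigma = 0.002, 0.5                    # learning rate, exploration noise amplitude
      reward_baseline = 0.0

      for trial in range(10_000):
          x = rng.standard_normal(n_in)          # presynaptic activity pattern
          xi = sigma * rng.standard_normal()     # noisy perturbation of the output unit
          y = w @ x + xi                         # noisy "local activity"
          reward = -(y - w_target @ x) ** 2      # global scalar reward (higher is better)

          # three-factor rule: (reward - baseline) x injected noise x presynaptic activity
          w += eta * (reward - reward_baseline) * xi * x
          reward_baseline += 0.05 * (reward - reward_baseline)   # running-average baseline

      err = np.linalg.norm(w - w_target) / np.linalg.norm(w_target)
      print(f"relative weight error after learning: {err:.3f}")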

  12. Coherent and intermittent ensemble oscillations emerge from networks of irregular spiking neurons.

    Science.gov (United States)

    Hoseini, Mahmood S; Wessel, Ralf

    2016-01-01

    Local field potential (LFP) recordings from spatially distant cortical circuits reveal episodes of coherent gamma oscillations that are intermittent, and of variable peak frequency and duration. Concurrently, single neuron spiking remains largely irregular and of low rate. The underlying potential mechanisms of this emergent network activity have long been debated. Here we reproduce such intermittent ensemble oscillations in a model network, consisting of excitatory and inhibitory model neurons with the characteristics of regular-spiking (RS) pyramidal neurons, and fast-spiking (FS) and low-threshold spiking (LTS) interneurons. We find that fluctuations in the external inputs trigger reciprocally connected and irregularly spiking RS and FS neurons in episodes of ensemble oscillations, which are terminated by the recruitment of the LTS population with concurrent accumulation of inhibitory conductance in both RS and FS neurons. The model qualitatively reproduces experimentally observed phase drift, oscillation episode duration distributions, variation in the peak frequency, and the concurrent irregular single-neuron spiking at low rate. Furthermore, consistent with previous experimental studies using optogenetic manipulation, periodic activation of FS, but not RS, model neurons causes enhancement of gamma oscillations. In addition, increasing the coupling between two model networks from low to high reveals a transition from independent intermittent oscillations to coherent intermittent oscillations. In conclusion, the model network suggests biologically plausible mechanisms for the generation of episodes of coherent intermittent ensemble oscillations with irregular spiking neurons in cortical circuits.

  13. A Reconfigurable and Biologically Inspired Paradigm for Computation Using Network-On-Chip and Spiking Neural Networks

    Directory of Open Access Journals (Sweden)

    Jim Harkin

    2009-01-01

    Full Text Available FPGA devices have emerged as a popular platform for the rapid prototyping of biological Spiking Neural Network (SNN) applications, offering the key requirement of reconfigurability. However, FPGAs do not efficiently realise the biologically plausible neuron and synaptic models of SNNs, and current FPGA routing structures cannot accommodate the high levels of interneuron connectivity inherent in complex SNNs. This paper highlights and discusses the current challenges of implementing scalable SNNs on reconfigurable FPGAs. The paper proposes a novel field programmable neural network architecture (EMBRACE), incorporating low-power analogue spiking neurons interconnected using a Network-on-Chip architecture. Results on the evaluation of the EMBRACE architecture using the XOR benchmark problem are presented, and the performance of the architecture is discussed. The paper also discusses the adaptability of the EMBRACE architecture in supporting fault tolerant computing.

  14. Biological Networks Entropies: Examples in Neural Memory Networks, Genetic Regulation Networks and Social Epidemic Networks

    Directory of Open Access Journals (Sweden)

    Jacques Demongeot

    2018-01-01

    Full Text Available Networks used in biological applications at different scales (molecule, cell and population) are of different types: neuronal, genetic, and social, but they share the same dynamical concepts, in their continuous differential versions (e.g., the non-linear Wilson-Cowan system) as well as in their discrete Boolean versions (e.g., the non-linear Hopfield system); in both cases, the notion of the interaction graph G(J) associated with the Jacobian matrix J, and also the concepts of frustrated nodes, positive or negative circuits of G(J), kinetic energy, entropy, attractors, structural stability, etc., are relevant and useful for studying the dynamics and the robustness of these systems. We give some general results available for both continuous and discrete biological networks, and then study some specific applications of three new notions of entropy: (i) attractor entropy, (ii) isochronal entropy and (iii) entropy centrality, in three domains: a neural network involved in memory evocation, a genetic network responsible for iron control and a social network accounting for the spread of obesity in a high-school environment.

  15. Orientation selectivity in inhibition-dominated networks of spiking neurons: effect of single neuron properties and network dynamics.

    Science.gov (United States)

    Sadeh, Sadra; Rotter, Stefan

    2015-01-01

    The neuronal mechanisms underlying the emergence of orientation selectivity in the primary visual cortex of mammals are still elusive. In rodents, visual neurons show highly selective responses to oriented stimuli, but neighboring neurons do not necessarily have similar preferences. Instead of a smooth map, one observes a salt-and-pepper organization of orientation selectivity. Modeling studies have recently confirmed that balanced random networks are indeed capable of amplifying weakly tuned inputs and generating highly selective output responses, even in absence of feature-selective recurrent connectivity. Here we seek to elucidate the neuronal mechanisms underlying this phenomenon by resorting to networks of integrate-and-fire neurons, which are amenable to analytic treatment. Specifically, in networks of perfect integrate-and-fire neurons, we observe that highly selective and contrast invariant output responses emerge, very similar to networks of leaky integrate-and-fire neurons. We then demonstrate that a theory based on mean firing rates and the detailed network topology predicts the output responses, and explains the mechanisms underlying the suppression of the common-mode, amplification of modulation, and contrast invariance. Increasing inhibition dominance in our networks makes the rectifying nonlinearity more prominent, which in turn adds some distortions to the otherwise essentially linear prediction. An extension of the linear theory can account for all the distortions, enabling us to compute the exact shape of every individual tuning curve in our networks. We show that this simple form of nonlinearity adds two important properties to orientation selectivity in the network, namely sharpening of tuning curves and extra suppression of the modulation. The theory can be further extended to account for the nonlinearity of the leaky model by replacing the rectifier by the appropriate smooth input-output transfer function. These results are robust and do not

  16. Orientation selectivity in inhibition-dominated networks of spiking neurons: effect of single neuron properties and network dynamics.

    Directory of Open Access Journals (Sweden)

    Sadra Sadeh

    2015-01-01

    Full Text Available The neuronal mechanisms underlying the emergence of orientation selectivity in the primary visual cortex of mammals are still elusive. In rodents, visual neurons show highly selective responses to oriented stimuli, but neighboring neurons do not necessarily have similar preferences. Instead of a smooth map, one observes a salt-and-pepper organization of orientation selectivity. Modeling studies have recently confirmed that balanced random networks are indeed capable of amplifying weakly tuned inputs and generating highly selective output responses, even in absence of feature-selective recurrent connectivity. Here we seek to elucidate the neuronal mechanisms underlying this phenomenon by resorting to networks of integrate-and-fire neurons, which are amenable to analytic treatment. Specifically, in networks of perfect integrate-and-fire neurons, we observe that highly selective and contrast invariant output responses emerge, very similar to networks of leaky integrate-and-fire neurons. We then demonstrate that a theory based on mean firing rates and the detailed network topology predicts the output responses, and explains the mechanisms underlying the suppression of the common-mode, amplification of modulation, and contrast invariance. Increasing inhibition dominance in our networks makes the rectifying nonlinearity more prominent, which in turn adds some distortions to the otherwise essentially linear prediction. An extension of the linear theory can account for all the distortions, enabling us to compute the exact shape of every individual tuning curve in our networks. We show that this simple form of nonlinearity adds two important properties to orientation selectivity in the network, namely sharpening of tuning curves and extra suppression of the modulation. The theory can be further extended to account for the nonlinearity of the leaky model by replacing the rectifier by the appropriate smooth input-output transfer function. These results are

  17. How structure determines correlations in neuronal networks.

    Directory of Open Access Journals (Sweden)

    Volker Pernice

    2011-05-01

    Full Text Available Networks are becoming a ubiquitous metaphor for the understanding of complex biological systems, spanning the range between molecular signalling pathways, neural networks in the brain, and interacting species in a food web. In many models, we face an intricate interplay between the topology of the network and the dynamics of the system, which is generally very hard to disentangle. A dynamical feature that has been subject of intense research in various fields are correlations between the noisy activity of nodes in a network. We consider a class of systems, where discrete signals are sent along the links of the network. Such systems are of particular relevance in neuroscience, because they provide models for networks of neurons that use action potentials for communication. We study correlations in dynamic networks with arbitrary topology, assuming linear pulse coupling. With our novel approach, we are able to understand in detail how specific structural motifs affect pairwise correlations. Based on a power series decomposition of the covariance matrix, we describe the conditions under which very indirect interactions will have a pronounced effect on correlations and population dynamics. In random networks, we find that indirect interactions may lead to a broad distribution of activation levels with low average but highly variable correlations. This phenomenon is even more pronounced in networks with distance dependent connectivity. In contrast, networks with highly connected hubs or patchy connections often exhibit strong average correlations. Our results are particularly relevant in view of new experimental techniques that enable the parallel recording of spiking activity from a large number of neurons, an appropriate interpretation of which is hampered by the currently limited understanding of structure-dynamics relations in complex networks.
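
    The covariance calculation referred to above can be illustrated for a purely linear model in which stationary fluctuations obey x = Wx + xi: the covariance is then (I - W)^-1 D (I - W)^-T, and expanding (I - W)^-1 as a power series in W shows how direct links and longer indirect paths build up correlations. The network statistics and noise model in the Python sketch below are generic assumptions, not the paper's point-process formulation.

      import numpy as np

      rng = np.random.default_rng(2)
      N, p, g = 80, 0.1, 0.1                         # network size, connection prob., synaptic gain

      W = g * (rng.random((N, N)) < p)               # random excitatory coupling matrix
      np.fill_diagonal(W, 0.0)
      assert np.max(np.abs(np.linalg.eigvals(W))) < 1, "spectral radius must stay below 1"

      D = np.eye(N)                                  # intrinsic (private) noise of each unit

      # Exact linear-response covariance: C = (I - W)^-1 D (I - W)^-T
      B = np.linalg.inv(np.eye(N) - W)
      C_exact = B @ D @ B.T

      # Power-series (path) expansion: B = I + W + W^2 + ...; truncating it shows how
      # direct links (first order) and shared or indirect inputs (higher orders)
      # successively build up the pairwise correlations.
      def C_order(k):
          Bk = sum(np.linalg.matrix_power(W, m) for m in range(k + 1))
          return Bk @ D @ Bk.T

      for k in (1, 2, 4, 8):
          err = np.linalg.norm(C_order(k) - C_exact) / np.linalg.norm(C_exact)
          print(f"terms up to W^{k}: relative error {err:.3f}")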

  18. APPLICATION OF UKRAINIAN GRID INFRASTRUCTURE FOR INVESTIGATION OF NONLINEAR DYNAMICS IN LARGE NEURONAL NETWORKS

    Directory of Open Access Journals (Sweden)

    O. О. Sudakov

    2015-12-01

    Full Text Available In the present work the Ukrainian National Grid (UNG) infrastructure was applied to the investigation of synchronization in large networks of interacting neurons. This application is important for solving modern neuroscience problems related to the mechanisms of nervous system activities (memory, cognition, etc.) and nervous pathologies (epilepsy, Parkinsonism, etc.). Modern non-linear dynamics theories and applications provide a powerful basis for computer simulations of biological neuronal networks and for the investigation of phenomena whose mechanisms could hardly be clarified by other approaches. A cubic millimeter of brain tissue contains about 10^5 neurons, so simulations with realistic (Hodgkin-Huxley) and phenomenological (Kuramoto-Sakaguchi, FitzHugh-Nagumo, etc.) models require consideration of large numbers of neurons.
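
    Of the phenomenological models listed above, the Kuramoto-Sakaguchi system is the simplest to sketch; the Python example below integrates its mean-field form and reports the order parameter r as a synchronization measure. Oscillator number, coupling strength and phase lag are illustrative values, far below the network sizes targeted by the grid infrastructure.

      import numpy as np

      rng = np.random.default_rng(5)
      N, K, alpha = 1000, 2.0, 0.3        # oscillators, coupling, Sakaguchi phase lag (illustrative)
      dt, steps = 0.01, 20_000

      omega = rng.standard_normal(N)      # natural frequencies (unit-variance Gaussian)
      theta = rng.uniform(0, 2 * np.pi, N)

      for t in range(steps):
          z = np.mean(np.exp(1j * theta))                        # complex order parameter r e^{i psi}
          r, psi = np.abs(z), np.angle(z)
          # mean-field form of the Kuramoto-Sakaguchi coupling:
          # dtheta_i/dt = omega_i + K r sin(psi - theta_i - alpha)
          theta += dt * (omega + K * r * np.sin(psi - theta - alpha))
          theta %= 2 * np.pi

      print(f"final order parameter r = {np.abs(np.mean(np.exp(1j * theta))):.3f}")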

  19. Neurons from the adult human dentate nucleus: neural networks in the neuron classification.

    Science.gov (United States)

    Grbatinić, Ivan; Marić, Dušica L; Milošević, Nebojša T

    2015-04-07

    Topological (central vs. border neuron type) and morphological classification of adult human dentate nucleus neurons according to their quantified histomorphological properties, using neural networks on real and virtual neuron samples. In the real sample, 53.1% and 14.1% of central and border neurons, respectively, are classified correctly, with a total of 32.8% of neurons misclassified. The most important result is the 62.2% of misclassified neurons in the border neuron group, which is even greater than the number of correctly classified neurons (37.8%) in that group, showing an obvious failure of the network to classify neurons correctly based on the computational parameters used in our study. On the virtual sample, 97.3% of neurons in the border neuron group are misclassified, which is much greater than the number of correctly classified neurons (2.7%) in that group, again confirming the failure of the network to classify these neurons correctly. Statistical analysis shows that there is no statistically significant difference between central and border neurons for any measured parameter (p>0.05). A total of 96.74% of neurons are morphologically classified correctly by neural networks, each belonging to one of four histomorphological types: (a) neurons with small soma and short dendrites, (b) neurons with small soma and long dendrites, (c) neurons with large soma and short dendrites, (d) neurons with large soma and long dendrites. Statistical analysis supports these results. Neurons can therefore be classified into four types according to their quantitative histomorphological properties. These neuron types consist of two neuron sets, small and large with respect to their perikarya, with subtypes differing in dendrite length, i.e. neurons with short vs. long dendrites. Besides confirming the classification of neurons into small and large ones, already shown in the literature, we found two new subtypes, i.e. neurons with small soma and long dendrites and neurons with large soma and short dendrites. These neurons are ...
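
    The morphological classification described here can be mimicked on synthetic data with a small feedforward network; the sketch below is purely illustrative (invented morphometric clusters, not the study's dataset, features or network architecture).

```python
# Illustrative sketch only (hypothetical data, not the study's dataset or code):
# classifying neurons into morphological types from quantitative morphometric
# features using a small feedforward neural network.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)

# synthetic morphometry: soma area (um^2) and total dendrite length (um),
# drawn from four assumed clusters (small/large soma x short/long dendrites)
centers = [(150, 400), (150, 1200), (400, 400), (400, 1200)]
X = np.vstack([rng.normal(c, (40, 150), size=(100, 2)) for c in centers])
y = np.repeat(np.arange(4), 100)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")
```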

  20. [Network structures in biological systems].

    Science.gov (United States)

    Oleskin, A V

    2013-01-01

    Network structures (networks) that have been extensively studied in the humanities are characterized by cohesion, a lack of a central control unit, and predominantly fractal properties. They are contrasted with structures that contain a single centre (hierarchies) as well as with those whose elements predominantly compete with one another (market-type structures). As far as biological systems are concerned, their network structures can be subdivided into a number of types involving different organizational mechanisms. Network organization is characteristic of various structural levels of biological systems ranging from single cells to integrated societies. These networks can be classified into two main subgroups: (i) flat (leaderless) network structures typical of systems that are composed of uniform elements and represent modular organisms or at least possess manifest integral properties and (ii) three-dimensional, partly hierarchical structures characterized by significant individual and/or intergroup (intercaste) differences between their elements. All network structures include an element that performs structural, protective, and communication-promoting functions. By analogy to cell structures, this element is denoted as the matrix of a network structure. The matrix includes a material and an immaterial component. The material component comprises various structures that belong to the whole structure and not to any of its elements per se. The immaterial (ideal) component of the matrix includes social norms and rules regulating network elements' behavior. These behavioral rules can be described in terms of algorithms. Algorithmization enables modeling the behavior of various network structures, particularly of neuron networks and their artificial analogs.

  1. Stages of neuronal network formation

    International Nuclear Information System (INIS)

    Woiterski, Lydia; Käs, Josef A; Claudepierre, Thomas; Luxenhofer, Robert; Jordan, Rainer

    2013-01-01

    Graph theoretical approaches have become a powerful tool for investigating the architecture and dynamics of complex networks. The topology of network graphs has revealed small-world properties for very different real systems, among them neuronal networks. In this study, we observed the early development of mouse retinal ganglion cell (RGC) networks in vitro using time-lapse video microscopy. By means of a time-resolved graph theoretical analysis of the connectivity, shortest path length and edge length, we were able to identify the different stages of network formation. Starting from single cells, neurons in the first stage connected to each other, ending up in a network of maximum complexity. In the further course, we observed a simplification of the network, which manifested itself in a change of relevant network parameters such as the minimization of the path length. Moreover, we found that RGC networks self-organized as small-world networks at both stages; however, the optimization occurred only in the second stage. (paper)
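
    The graph measures mentioned here (shortest path length, clustering, small-world organization) can be computed with standard tools; the sketch below uses a random geometric graph as a stand-in for a culture snapshot, with all parameters assumed for illustration.

```python
# Illustrative sketch (assumed random geometric graph, not the RGC data):
# graph-theoretic measures used to track network formation, e.g. average
# shortest path length, clustering, and a simple small-world index.
import networkx as nx

G = nx.random_geometric_graph(100, radius=0.2, seed=0)   # stand-in for a culture snapshot
G = G.subgraph(max(nx.connected_components(G), key=len)).copy()

L = nx.average_shortest_path_length(G)
C = nx.average_clustering(G)

# reference random graph with the same number of nodes and edges
R = nx.gnm_random_graph(G.number_of_nodes(), G.number_of_edges(), seed=0)
R = R.subgraph(max(nx.connected_components(R), key=len)).copy()
L_rand = nx.average_shortest_path_length(R)
C_rand = nx.average_clustering(R)

sigma = (C / C_rand) / (L / L_rand)   # > 1 suggests small-world organization
print(f"L={L:.2f}, C={C:.2f}, small-world index sigma={sigma:.2f}")
```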

  2. Bursting synchronization in clustered neuronal networks

    International Nuclear Information System (INIS)

    Yu Hai-Tao; Wang Jiang; Deng Bin; Wei Xi-Le

    2013-01-01

    Neuronal networks in the brain exhibit the modular (clustered) property, i.e., they are composed of certain subnetworks with differential internal and external connectivity. We investigate bursting synchronization in a clustered neuronal network. A transition to mutual-phase synchronization takes place on the bursting time scale of coupled neurons, while on the spiking time scale, they behave asynchronously. This synchronization transition can be induced by the variations of inter- and intracoupling strengths, as well as the probability of random links between different subnetworks. Considering that some pathological conditions are related to the synchronization of bursting neurons in the brain, we analyze the control of bursting synchronization by using a time-periodic external signal in the clustered neuronal network. Simulation results show a frequency locking tongue in the driving parameter plane, where bursting synchronization is maintained, even in the presence of external driving. Hence, effective synchronization suppression can be realized with the driving parameters outside the frequency locking region. (interdisciplinary physics and related areas of science and technology)

  3. Doubly stochastic coherence in complex neuronal networks

    Science.gov (United States)

    Gao, Yang; Wang, Jianjun

    2012-11-01

    A system composed of coupled FitzHugh-Nagumo neurons with various topological structures is investigated under the co-presence of two independent additive and multiplicative Gaussian white noises, with particular attention paid to the spiking regularity of the neuronal networks. As the additive noise intensity and the multiplicative noise intensity are simultaneously adjusted to optimal values, the temporal periodicity of the output of the system reaches a maximum, indicating the occurrence of doubly stochastic coherence. The network topology randomness exerts different influences on the temporal coherence of the spiking oscillation for dissimilar coupling strength regimes. At a small coupling strength, the spiking regularity shows nearly no difference in the regular, small-world, and completely random networks. At an intermediate coupling strength, the temporal periodicity in a small-world neuronal network can be improved slightly by adding a small fraction of long-range connections. At a large coupling strength, the dynamical behavior of the neurons completely loses the resonance property with regard to the additive noise intensity or the multiplicative noise intensity, and the spiking regularity decreases considerably with the increase of the network topology randomness. The network topology randomness thus plays more of a depressing role than a favorable one in improving the temporal coherence of the spiking oscillation in the neuronal network.
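
    The setup described above can be sketched with an Euler-Maruyama integration of a ring of FitzHugh-Nagumo neurons driven by one additive and one multiplicative noise source. The noise entry points, coupling topology and all parameter values below are assumptions for illustration, not the paper's exact model; spiking regularity is summarized here by the coefficient of variation of interspike intervals.

```python
# Illustrative sketch (assumed noise entry points and parameters): a ring of
# coupled FitzHugh-Nagumo neurons with additive and multiplicative Gaussian
# white noise; regularity is measured by the CV of pooled interspike intervals.
import numpy as np

rng = np.random.default_rng(4)
N, k = 50, 2                      # neurons, neighbors per side on the ring (assumed)
eps, a, b = 0.08, 0.7, 0.8        # excitable FHN parameters
g = 0.1                           # coupling strength (assumed)
D_add, D_mul = 0.1, 0.02          # additive / multiplicative noise intensities (assumed)
dt, T = 0.05, 2000.0

v = rng.uniform(-1.5, -1.0, N)
w = rng.uniform(-0.5, 0.0, N)
spike_times = [[] for _ in range(N)]
above = np.zeros(N, dtype=bool)

for step in range(int(T / dt)):
    lap = sum(np.roll(v, s) for s in range(-k, k + 1) if s != 0) - 2 * k * v
    xi_a = rng.normal(0.0, 1.0, N)
    xi_m = rng.normal(0.0, 1.0, N)
    dv = (v - v ** 3 / 3 - w + g * lap) * dt \
         + np.sqrt(2 * D_add * dt) * xi_a + np.sqrt(2 * D_mul * dt) * v * xi_m
    dw = eps * (v + a - b * w) * dt
    v, w = v + dv, w + dw
    crossed = (v > 1.0) & ~above            # upward threshold crossing = spike
    above = v > 1.0
    for i in np.where(crossed)[0]:
        spike_times[i].append(step * dt)

isi_list = [np.diff(ts) for ts in spike_times if len(ts) > 1]
if isi_list:
    isis = np.concatenate(isi_list)
    print(f"pooled ISI CV = {isis.std() / isis.mean():.2f} (lower = more regular)")
else:
    print("no spikes detected for these noise intensities")
```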

  4. Phase transitions and self-organized criticality in networks of stochastic spiking neurons.

    Science.gov (United States)

    Brochini, Ludmila; de Andrade Costa, Ariadne; Abadi, Miguel; Roque, Antônio C; Stolfi, Jorge; Kinouchi, Osame

    2016-11-07

    Phase transitions and critical behavior are crucial issues both in theoretical and experimental neuroscience. We report analytic and computational results about phase transitions and self-organized criticality (SOC) in networks with general stochastic neurons. The stochastic neuron has a firing probability given by a smooth monotonic function Φ(V) of the membrane potential V, rather than a sharp firing threshold. We find that such networks can operate in several dynamic regimes (phases) depending on the average synaptic weight and the shape of the firing function Φ. In particular, we encounter both continuous and discontinuous phase transitions to absorbing states. At the continuous transition critical boundary, neuronal avalanches occur whose distributions of size and duration are given by power laws, as observed in biological neural networks. We also propose and test a new mechanism to produce SOC: the use of dynamic neuronal gains - a form of short-term plasticity probably located at the axon initial segment (AIS) - instead of depressing synapses at the dendrites (as previously studied in the literature). The new self-organization mechanism produces a slightly supercritical state, which we call SOSC, in accord with some intuitions of Alan Turing.
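
    A toy version of such a stochastic-neuron network can be simulated directly; the sketch below measures avalanche sizes in a slightly subcritical regime. The discrete-time update, the choice Phi(V) = clip(V, 0, 1) and all weights are assumptions for illustration, not the model equations or self-organization mechanism of the paper.

```python
# Illustrative sketch (assumed discrete-time dynamics and parameters, not the
# paper's model or code): a network of stochastic neurons that fire with
# probability Phi(V) instead of at a hard threshold; avalanche sizes are
# measured by seeding a single unit of input and letting activity die out.
import numpy as np

rng = np.random.default_rng(5)
N = 300
W_mean = 0.9                                   # total outgoing weight per neuron (assumed, slightly subcritical)
W = rng.exponential(W_mean / N, size=(N, N))   # weak, dense random weights
np.fill_diagonal(W, 0.0)

def phi(V):
    """Smooth, monotone, saturating firing probability."""
    return np.clip(V, 0.0, 1.0)

def avalanche_size(max_steps=10_000):
    V = np.zeros(N)
    V[rng.integers(N)] = 1.0                   # seed one unit of input
    size = 0
    for _ in range(max_steps):
        fired = rng.random(N) < phi(V)
        if not fired.any():
            return size
        size += int(fired.sum())
        V = W.T @ fired                        # memoryless propagation of spikes (assumed)
        V[fired] = 0.0                         # reset neurons that just fired
    return size

sizes = np.array([avalanche_size() for _ in range(500)])
print(f"mean avalanche size: {sizes.mean():.1f}, max: {sizes.max()}")
```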

  5. Synaptic Plasticity and Spike Synchronisation in Neuronal Networks

    Science.gov (United States)

    Borges, Rafael R.; Borges, Fernando S.; Lameu, Ewandson L.; Protachevicz, Paulo R.; Iarosz, Kelly C.; Caldas, Iberê L.; Viana, Ricardo L.; Macau, Elbert E. N.; Baptista, Murilo S.; Grebogi, Celso; Batista, Antonio M.

    2017-12-01

    Brain plasticity, also known as neuroplasticity, is a fundamental mechanism of neuronal adaptation in response to changes in the environment or due to brain injury. In this review, we show our results about the effects of synaptic plasticity on neuronal networks composed of Hodgkin-Huxley neurons. We show that the final topology of the evolved network depends crucially on the ratio between the strengths of the inhibitory and excitatory synapses. Excitation of the same order as inhibition reveals an evolved network that presents the rich-club phenomenon, well known to exist in the brain. For initial networks with considerably larger inhibitory strengths, we observe the emergence of a complex evolved topology, where neurons are sparsely connected to other neurons, also a typical topology of the brain. The presence of noise enhances the strength of both types of synapses, but only if the initial network has synapses of both natures with similar strengths. Finally, we show how the synchronous behaviour of the evolved network reflects its evolved topology.

  6. Identification of neuronal network properties from the spectral analysis of calcium imaging signals in neuronal cultures.

    Science.gov (United States)

    Tibau, Elisenda; Valencia, Miguel; Soriano, Jordi

    2013-01-01

    Neuronal networks in vitro are prominent systems to study the development of connections in living neuronal networks and the interplay between connectivity, activity and function. These cultured networks show a rich spontaneous activity that evolves concurrently with the connectivity of the underlying network. In this work we monitor the development of neuronal cultures, and record their activity using calcium fluorescence imaging. We use spectral analysis to characterize global dynamical and structural traits of the neuronal cultures. We first observe that the power spectrum can be used as a signature of the state of the network, for instance when inhibition is active or silent, as well as a measure of the network's connectivity strength. Second, the power spectrum identifies prominent developmental changes in the network such as the GABAA switch. And third, the analysis of the spatial distribution of the spectral density, in experiments with a controlled disintegration of the network through CNQX, an AMPA-glutamate receptor antagonist in excitatory neurons, reveals the existence of communities of strongly connected, highly active neurons that display synchronous oscillations. Our work illustrates the value of spectral analysis for the study of in vitro networks, and its potential use as a network-state indicator, for instance to compare healthy and diseased neuronal networks.

  7. Inverse stochastic resonance in networks of spiking neurons.

    Science.gov (United States)

    Uzuntarla, Muhammet; Barreto, Ernest; Torres, Joaquin J

    2017-07-01

    Inverse Stochastic Resonance (ISR) is a phenomenon in which the average spiking rate of a neuron exhibits a minimum with respect to noise. ISR has been studied in individual neurons, but here, we investigate ISR in scale-free networks, where the average spiking rate is calculated over the neuronal population. We use Hodgkin-Huxley model neurons with channel noise (i.e., stochastic gating variable dynamics), and the network connectivity is implemented via electrical or chemical connections (i.e., gap junctions or excitatory/inhibitory synapses). We find that the emergence of ISR depends on the interplay between each neuron's intrinsic dynamical structure, channel noise, and network inputs, where the latter in turn depend on network structure parameters. We observe that with weak gap junction or excitatory synaptic coupling, network heterogeneity and sparseness tend to favor the emergence of ISR. With inhibitory coupling, ISR is quite robust. We also identify dynamical mechanisms that underlie various features of this ISR behavior. Our results suggest possible ways of experimentally observing ISR in actual neuronal systems.

  8. Solving Constraint Satisfaction Problems with Networks of Spiking Neurons.

    Science.gov (United States)

    Jonke, Zeno; Habenschuss, Stefan; Maass, Wolfgang

    2016-01-01

    Networks of neurons in the brain apply, unlike processors in our current generation of computer hardware, an event-based processing strategy, where short pulses (spikes) are emitted sparsely by neurons to signal the occurrence of an event at a particular point in time. Such spike-based computations promise to be substantially more power-efficient than traditional clocked processing schemes. However, it turns out to be surprisingly difficult to design networks of spiking neurons that can solve difficult computational problems on the level of single spikes, rather than rates of spikes. We present here a new method for designing networks of spiking neurons via an energy function. Furthermore, we show how the energy function of a network of stochastically firing neurons can be shaped in a transparent manner by composing the networks of simple stereotypical network motifs. We show that this design approach enables networks of spiking neurons to produce approximate solutions to difficult (NP-hard) constraint satisfaction problems from the domains of planning/optimization and verification/logical inference. The resulting networks employ noise as a computational resource. Nevertheless, the timing of spikes plays an essential role in their computations. Furthermore, networks of spiking neurons carry out for the Traveling Salesman Problem a more efficient stochastic search for good solutions compared with stochastic artificial neural networks (Boltzmann machines) and Gibbs sampling.
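
    The core idea of shaping an energy function whose minima are the valid solutions can be illustrated without spiking neurons at all; the sketch below runs a plain Glauber (Boltzmann-machine-style) stochastic search over such an energy function for a tiny graph 3-coloring instance. The instance, penalty weights and annealing schedule are assumptions for illustration and are far simpler than the spiking construction of the paper.

```python
# Illustrative sketch: stochastic search over an energy function for a tiny
# constraint satisfaction problem (graph 3-coloring), in the spirit of, but
# far simpler than, the spiking-network construction described above.
import numpy as np

rng = np.random.default_rng(6)
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]   # small graph (assumed)
n_vars, n_vals = 4, 3
A, B = 2.0, 2.0                                    # penalty weights (assumed)

def energy(x):
    """x[i, c] = 1 if variable i takes colour c; penalize non-one-hot rows and conflicts."""
    e = A * np.sum((x.sum(axis=1) - 1.0) ** 2)
    e += B * sum(np.dot(x[i], x[j]) for i, j in edges)
    return e

x = (rng.random((n_vars, n_vals)) < 0.5).astype(float)
T = 1.0
for sweep in range(2000):
    i, c = rng.integers(n_vars), rng.integers(n_vals)
    x_flip = x.copy()
    x_flip[i, c] = 1.0 - x_flip[i, c]
    dE = energy(x_flip) - energy(x)
    if rng.random() < 1.0 / (1.0 + np.exp(dE / T)):   # Glauber acceptance rule
        x = x_flip
    T = max(0.05, T * 0.999)                          # slow annealing (optional)

print("final energy:", energy(x))
print("assignment (rows = variables, one-hot over colours):\n", x.astype(int))
```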

  9. Recurrently connected and localized neuronal communities initiate coordinated spontaneous activity in neuronal networks

    Science.gov (United States)

    Amin, Hayder; Maccione, Alessandro; Nieus, Thierry

    2017-01-01

    Developing neuronal systems intrinsically generate coordinated spontaneous activity that propagates by involving a large number of synchronously firing neurons. In vivo, waves of spikes transiently characterize the activity of developing brain circuits and are fundamental for activity-dependent circuit formation. In vitro, coordinated spontaneous spiking activity, or network bursts (NBs), interleaved within periods of asynchronous spikes emerge during the development of 2D and 3D neuronal cultures. Several studies have investigated this type of activity and its dynamics, but how a neuronal system generates these coordinated events remains unclear. Here, we investigate at a cellular level the generation of network bursts in spontaneously active neuronal cultures by exploiting high-resolution multielectrode array recordings and computational network modelling. Our analysis reveals that NBs are generated in specialized regions of the network (functional neuronal communities) that feature neuronal links with high cross-correlation peak values, sub-millisecond lags and that share very similar structural connectivity motifs providing recurrent interactions. We show that the particular properties of these local structures enable locally amplifying spontaneous asynchronous spikes and that this mechanism can lead to the initiation of NBs. Through the analysis of simulated and experimental data, we also show that AMPA currents drive the coordinated activity, while NMDA and GABA currents are only involved in shaping the dynamics of NBs. Overall, our results suggest that the presence of functional neuronal communities with recurrent local connections allows a neuronal system to generate spontaneous coordinated spiking activity events. As suggested by the rules used for implementing our computational model, such functional communities might naturally emerge during network development by following simple constraints on distance-based connectivity. PMID:28749937

  10. Recurrently connected and localized neuronal communities initiate coordinated spontaneous activity in neuronal networks.

    Directory of Open Access Journals (Sweden)

    Davide Lonardoni

    2017-07-01

    Full Text Available Developing neuronal systems intrinsically generate coordinated spontaneous activity that propagates by involving a large number of synchronously firing neurons. In vivo, waves of spikes transiently characterize the activity of developing brain circuits and are fundamental for activity-dependent circuit formation. In vitro, coordinated spontaneous spiking activity, or network bursts (NBs), interleaved within periods of asynchronous spikes emerge during the development of 2D and 3D neuronal cultures. Several studies have investigated this type of activity and its dynamics, but how a neuronal system generates these coordinated events remains unclear. Here, we investigate at a cellular level the generation of network bursts in spontaneously active neuronal cultures by exploiting high-resolution multielectrode array recordings and computational network modelling. Our analysis reveals that NBs are generated in specialized regions of the network (functional neuronal communities) that feature neuronal links with high cross-correlation peak values, sub-millisecond lags and that share very similar structural connectivity motifs providing recurrent interactions. We show that the particular properties of these local structures enable locally amplifying spontaneous asynchronous spikes and that this mechanism can lead to the initiation of NBs. Through the analysis of simulated and experimental data, we also show that AMPA currents drive the coordinated activity, while NMDA and GABA currents are only involved in shaping the dynamics of NBs. Overall, our results suggest that the presence of functional neuronal communities with recurrent local connections allows a neuronal system to generate spontaneous coordinated spiking activity events. As suggested by the rules used for implementing our computational model, such functional communities might naturally emerge during network development by following simple constraints on distance-based connectivity.

  11. Solving constraint satisfaction problems with networks of spiking neurons

    Directory of Open Access Journals (Sweden)

    Zeno eJonke

    2016-03-01

    Full Text Available Networks of neurons in the brain apply, unlike processors in our current generation of computer hardware, an event-based processing strategy, where short pulses (spikes) are emitted sparsely by neurons to signal the occurrence of an event at a particular point in time. Such spike-based computations promise to be substantially more power-efficient than traditional clocked processing schemes. However, it turned out to be surprisingly difficult to design networks of spiking neurons that can solve difficult computational problems on the level of single spikes (rather than rates of spikes). We present here a new method for designing networks of spiking neurons via an energy function. Furthermore, we show how the energy function of a network of stochastically firing neurons can be shaped in a quite transparent manner by composing the networks of simple stereotypical network motifs. We show that this design approach enables networks of spiking neurons to produce approximate solutions to difficult (NP-hard) constraint satisfaction problems from the domains of planning/optimization and verification/logical inference. The resulting networks employ noise as a computational resource. Nevertheless, the timing of spikes (rather than just spike rates) plays an essential role in their computations. Furthermore, networks of spiking neurons carry out for the Traveling Salesman Problem a more efficient stochastic search for good solutions compared with stochastic artificial neural networks (Boltzmann machines) and Gibbs sampling.

  12. Mirrored STDP Implements Autoencoder Learning in a Network of Spiking Neurons.

    Science.gov (United States)

    Burbank, Kendra S

    2015-12-01

    The autoencoder algorithm is a simple but powerful unsupervised method for training neural networks. Autoencoder networks can learn sparse distributed codes similar to those seen in cortical sensory areas such as visual area V1, but they can also be stacked to learn increasingly abstract representations. Several computational neuroscience models of sensory areas, including Olshausen & Field's Sparse Coding algorithm, can be seen as autoencoder variants, and autoencoders have seen extensive use in the machine learning community. Despite their power and versatility, autoencoders have been difficult to implement in a biologically realistic fashion. The challenges include their need to calculate differences between two neuronal activities and their requirement for learning rules which lead to identical changes at feedforward and feedback connections. Here, we study a biologically realistic network of integrate-and-fire neurons with anatomical connectivity and synaptic plasticity that closely matches that observed in cortical sensory areas. Our choice of synaptic plasticity rules is inspired by recent experimental and theoretical results suggesting that learning at feedback connections may have a different form from learning at feedforward connections, and our results depend critically on this novel choice of plasticity rules. Specifically, we propose that plasticity rules at feedforward versus feedback connections are temporally opposed versions of spike-timing dependent plasticity (STDP), leading to a symmetric combined rule we call Mirrored STDP (mSTDP). We show that with mSTDP, our network follows a learning rule that approximately minimizes an autoencoder loss function. When trained with whitened natural image patches, the learned synaptic weights resemble the receptive fields seen in V1. Our results use realistic synaptic plasticity rules to show that the powerful autoencoder learning algorithm could be within the reach of real biological networks.
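
    The combination of a standard STDP window on feedforward synapses with a temporally mirrored window on the corresponding feedback synapses can be sketched with simple exponential spike traces. The trace-based implementation, rates and learning-rate values below are assumptions for illustration and are not the paper's network or code.

```python
# Illustrative sketch (assumed trace-based implementation): pair-based STDP on a
# feedforward weight and a temporally mirrored rule on the matching feedback
# weight, so that both weights receive equivalent updates on average.
import numpy as np

rng = np.random.default_rng(7)
tau, A_plus, A_minus = 20.0, 0.01, 0.012    # ms, learning rates (assumed)
dt, T = 1.0, 10_000.0

w_ff = 0.5          # feedforward weight  (input -> hidden)
w_fb = 0.5          # feedback weight     (hidden -> input)
x_pre = x_post = 0.0                        # exponential spike traces

for _ in range(int(T / dt)):
    pre = rng.random() < 0.02               # Poisson-like spikes at ~20 Hz
    post = rng.random() < 0.02
    x_pre += -dt / tau * x_pre + (1.0 if pre else 0.0)
    x_post += -dt / tau * x_post + (1.0 if post else 0.0)

    # classical STDP on the feedforward weight:
    #   post after pre -> potentiation, pre after post -> depression
    if post:
        w_ff += A_plus * x_pre
    if pre:
        w_ff -= A_minus * x_post

    # mirrored STDP on the feedback weight: the temporal window is flipped
    # relative to that synapse's own pre/post, which, because the connection
    # runs in the opposite direction, yields a matching update
    if pre:
        w_fb += A_plus * x_post
    if post:
        w_fb -= A_minus * x_pre

    w_ff = float(np.clip(w_ff, 0.0, 1.0))
    w_fb = float(np.clip(w_fb, 0.0, 1.0))

print(f"w_ff = {w_ff:.3f}, w_fb = {w_fb:.3f} (the two weights drift together on average)")
```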

  13. Perceptron-like computation based on biologically-inspired neurons with heterosynaptic mechanisms

    Science.gov (United States)

    Kaluza, Pablo; Urdapilleta, Eugenio

    2014-10-01

    Perceptrons are one of the fundamental paradigms in artificial neural networks and a key processing scheme in supervised classification tasks. However, the algorithm they provide is given in terms of unrealistically simple processing units and connections; therefore, its implementation in real neural networks is hard to achieve. In this work, we present a neural circuit able to perform the perceptron's computation based on realistic models of neurons and synapses. The model uses Wang-Buzsáki neurons with coupling provided by axodendritic and axoaxonic synapses (heterosynapsis). The main characteristics of the feedforward perceptron operation are conserved, which allows both approaches to be combined: whereas the classical artificial system can be used to learn a particular problem, its solution can be directly implemented in this neural circuit. As a result, we propose a biologically-inspired system able to work appropriately in a wide range of frequencies and system parameters, while remaining robust to noise and error.

  14. Management of synchronized network activity by highly active neurons

    International Nuclear Information System (INIS)

    Shein, Mark; Raichman, Nadav; Ben-Jacob, Eshel; Volman, Vladislav; Hanein, Yael

    2008-01-01

    Increasing evidence supports the idea that spontaneous brain activity may have an important functional role. Cultured neuronal networks provide a suitable model system to search for the mechanisms by which neuronal spontaneous activity is maintained and regulated. This activity is marked by synchronized bursting events (SBEs): short time windows (hundreds of milliseconds) of rapid neuronal firing separated by long quiescent periods (seconds). However, there exists a special subset of rapidly firing neurons whose activity also persists between SBEs. It has been proposed that these highly active (HA) neurons play an important role in the management (i.e. establishment, maintenance and regulation) of the synchronized network activity. Here, we studied the dynamical properties and the functional role of HA neurons in homogeneous and engineered networks, during early network development, upon recovery from chemical inhibition and in response to electrical stimulations. We found that their sequences of inter-spike intervals (ISIs) exhibit long time correlations and a unimodal distribution. During the network's development and under intense inhibition, the observed activity follows a transition period during which mostly HA neurons are active. Studying networks with engineered geometry, we found that HA neurons are precursors (the first to fire) of the spontaneous SBEs and are more responsive to electrical stimulations.

  15. Automated quantification of neuronal networks and single-cell calcium dynamics using calcium imaging.

    Science.gov (United States)

    Patel, Tapan P; Man, Karen; Firestein, Bonnie L; Meaney, David F

    2015-03-30

    Recent advances in genetically engineered calcium and membrane potential indicators provide the potential to estimate the activation dynamics of individual neurons within larger, mesoscale networks (100s to 1000+ neurons). However, a fully integrated automated workflow for the analysis and visualization of neural microcircuits from high-speed fluorescence imaging data is lacking. Here we introduce FluoroSNNAP, the Fluorescence Single Neuron and Network Analysis Package. FluoroSNNAP is an open-source, interactive software package developed in MATLAB for automated quantification of numerous biologically relevant features of both the calcium dynamics of single cells and network activity patterns. FluoroSNNAP integrates and improves upon existing tools for spike detection, synchronization analysis, and inference of functional connectivity, making it most useful to experimentalists with little or no programming knowledge. We apply FluoroSNNAP to characterize the activity patterns of neuronal microcircuits undergoing developmental maturation in vitro. Separately, we highlight the utility of single-cell analysis for phenotyping a mixed population of neurons expressing a human mutant variant of the microtubule-associated protein tau and wild-type tau. We show the performance of semi-automated cell segmentation using spatiotemporal independent component analysis and significant improvement in detecting calcium transients using a template-based algorithm in comparison to peak-based or wavelet-based detection methods. Our software further enables automated analysis of microcircuits, which is an improvement over existing methods. We expect the dissemination of this software will facilitate a comprehensive analysis of neuronal networks, promoting the rapid interrogation of circuits in health and disease. Copyright © 2015. Published by Elsevier B.V.
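
    Template-based transient detection, mentioned above as outperforming peak- or wavelet-based methods, can be sketched generically as follows. This is not FluoroSNNAP's MATLAB implementation; the trace is synthetic and the template shape, threshold and sampling rate are assumptions.

```python
# Illustrative sketch (synthetic trace, generic algorithm): detecting calcium
# transients by sliding a stereotyped transient template over a dF/F trace and
# thresholding the normalized correlation.
import numpy as np

rng = np.random.default_rng(8)
fs = 10.0                                   # imaging rate in Hz (assumed)
t = np.arange(0, 120, 1 / fs)
trace = 0.02 * rng.normal(size=t.size)      # baseline noise (dF/F units)

true_events = [10.0, 35.2, 61.7, 90.3]      # assumed ground-truth event times (s)
tau_decay = 1.5                             # s, indicator decay constant (assumed)
for ev in true_events:
    idx = t >= ev
    trace[idx] += 0.3 * np.exp(-(t[idx] - ev) / tau_decay)

# template: instantaneous rise, exponential decay
tt = np.arange(0, 5 * tau_decay, 1 / fs)
template = np.exp(-tt / tau_decay)
template = (template - template.mean()) / template.std()

# sliding normalized correlation between trace windows and the template
n = template.size
scores = np.zeros(t.size - n)
for i in range(scores.size):
    w = trace[i:i + n]
    scores[i] = np.dot((w - w.mean()) / (w.std() + 1e-12), template) / n

peaks = np.where((scores[1:-1] > 0.6) &
                 (scores[1:-1] >= scores[:-2]) &
                 (scores[1:-1] >= scores[2:]))[0] + 1
print("detected event times (s):", np.round(t[:-n][peaks], 1))
```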

  16. Population coding in sparsely connected networks of noisy neurons.

    Science.gov (United States)

    Tripp, Bryan P; Orchard, Jeff

    2012-01-01

    This study examines the relationship between population coding and spatial connection statistics in networks of noisy neurons. Encoding of sensory information in the neocortex is thought to require coordinated neural populations, because individual cortical neurons respond to a wide range of stimuli, and exhibit highly variable spiking in response to repeated stimuli. Population coding is rooted in network structure, because cortical neurons receive information only from other neurons, and because the information they encode must be decoded by other neurons, if it is to affect behavior. However, population coding theory has often ignored network structure, or assumed discrete, fully connected populations (in contrast with the sparsely connected, continuous sheet of the cortex). In this study, we modeled a sheet of cortical neurons with sparse, primarily local connections, and found that a network with this structure could encode multiple internal state variables with high signal-to-noise ratio. However, we were unable to create high-fidelity networks by instantiating connections at random according to spatial connection probabilities. In our models, high-fidelity networks required additional structure, with higher cluster factors and correlations between the inputs to nearby neurons.

  17. Pattern formation and firing synchronization in networks of map neurons

    International Nuclear Information System (INIS)

    Wang Qingyun; Duan Zhisheng; Huang Lin; Chen Guanrong; Lu Qishao

    2007-01-01

    Patterns and collective phenomena such as firing synchronization are studied in networks of nonhomogeneous oscillatory neurons and mixtures of oscillatory and excitable neurons, with the dynamics of each neuron described by a two-dimensional (2D) Rulkov map neuron. It is shown that as the coupling strength is increased, typical patterns emerge spatially, which propagate through the networks in the form of beautiful target waves or parallel ones depending on the size of the networks. Furthermore, we investigate the transitions of firing synchronization, characterized by the firing rate, as the coupling strength is increased. It is found that there exists an intermediate coupling strength at which firing synchronization is minimal, irrespective of the size of the networks. For further increases of the coupling strength, synchronization is enhanced. Since noise is inevitable in real neurons, we also investigate the effects of white noise on firing synchronization for different networks. For the networks of oscillatory neurons, it is shown that firing synchronization decreases when the noise level increases. For the mixed networks, firing synchronization is robust under the noise conditions considered in this paper. Results presented in this paper should prove to be valuable for understanding the properties of collective dynamics in real neuronal networks

  18. The Hypocretin/Orexin Neuronal Networks in Zebrafish.

    Science.gov (United States)

    Elbaz, Idan; Levitas-Djerbi, Talia; Appelbaum, Lior

    2017-01-01

    The hypothalamic Hypocretin/Orexin (Hcrt) neurons secrete two Hcrt neuropeptides. These neurons and peptides play a major role in the regulation of feeding, the sleep-wake cycle, reward-seeking, addiction, and stress. Loss of Hcrt neurons causes the sleep disorder narcolepsy. The zebrafish has become an attractive model to study the Hcrt neuronal network because it is a transparent vertebrate that enables simple genetic manipulation, imaging of the structure and function of neuronal circuits in live animals, and high-throughput monitoring of behavioral performance during both day and night. The zebrafish Hcrt network comprises ~16-60 neurons, which, similar to mammals, are located in the hypothalamus, widely innervate the brain and spinal cord, and regulate various fundamental behaviors such as feeding, sleep, and wakefulness. Here we review how the zebrafish contributes to the study of the Hcrt neuronal system molecularly, anatomically, physiologically, and pathologically.

  19. Visualizing neuronal network connectivity with connectivity pattern tables

    Directory of Open Access Journals (Sweden)

    Eilen Nordlie

    2010-01-01

    Full Text Available Complex ideas are best conveyed through well-designed illustrations. Up to now, computational neuroscientists have mostly relied on box-and-arrow diagrams of even complex neuronal networks, often using ad hoc notations with conflicting use of symbols from paper to paper. This significantly impedes the communication of ideas in neuronal network modeling. We present here Connectivity Pattern Tables (CPTs) as a clutter-free visualization of connectivity in large neuronal networks containing two-dimensional populations of neurons. CPTs can be generated automatically from the same script code used to create the actual network in the NEST simulator. Through aggregation, CPTs can be viewed at different levels, providing either full detail or summary information. We also provide the open source ConnPlotter tool as a means to create connectivity pattern tables.

  20. Theoretical Neuroanatomy: Analyzing the Structure, Dynamics, and Function of Neuronal Networks

    Science.gov (United States)

    Seth, Anil K.; Edelman, Gerald M.

    The mammalian brain is an extraordinary object: its networks give rise to our conscious experiences as well as to the generation of adaptive behavior for the organism within its environment. Progress in understanding the structure, dynamics and function of the brain faces many challenges. Biological neural networks change over time, their detailed structure is difficult to elucidate, and they are highly heterogeneous both in their neuronal units and synaptic connections. In facing these challenges, graph-theoretic and information-theoretic approaches have yielded a number of useful insights and promise many more.

  1. Results on a Binding Neuron Model and Their Implications for Modified Hourglass Model for Neuronal Network

    Directory of Open Access Journals (Sweden)

    Viswanathan Arunachalam

    2013-01-01

    Full Text Available Classical models of a single neuron, like the Hodgkin-Huxley point neuron or the leaky integrate-and-fire neuron, assume that the influence of postsynaptic potentials lasts until the neuron fires. Vidybida (2008), in a refreshing departure, has proposed models for binding neurons in which the trace of an input is remembered only for a finite fixed period of time, after which it is forgotten. The binding neurons conform to the behaviour of real neurons and are applicable in constructing fast recurrent networks for computer modeling. This paper develops explicitly several useful results for a binding neuron, such as the firing time distribution and other statistical characteristics. We also discuss the applicability of the developed results in constructing a modified hourglass network model in which there are interconnected neurons with excitatory as well as inhibitory inputs. Limited simulation results of the hourglass network are presented.
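
    The binding-neuron idea described here (inputs remembered only for a fixed time tau, firing when enough inputs coexist) is easy to simulate; the sketch below estimates the first-firing-time distribution empirically under Poisson input. The input rate, memory time and coincidence threshold are assumptions, and this Monte Carlo estimate merely stands in for the closed-form results derived in the paper.

```python
# Illustrative sketch (assumed parameters, not the paper's derivation): a
# binding neuron driven by Poisson input remembers each input spike only for a
# fixed time tau and fires when k inputs coexist within that window; the
# distribution of first firing times is estimated by Monte Carlo.
import numpy as np

rng = np.random.default_rng(9)
lam = 30.0        # input rate, spikes/s (assumed)
tau = 0.05        # memory of each input, s (assumed)
k = 3             # number of coincident inputs needed to fire (assumed)

def first_firing_time(t_max=10.0):
    t, stored = 0.0, []
    while t < t_max:
        t += rng.exponential(1.0 / lam)          # next Poisson input
        stored = [s for s in stored if t - s < tau] + [t]
        if len(stored) >= k:
            return t                              # binding neuron fires
    return np.nan

times = np.array([first_firing_time() for _ in range(5000)])
times = times[~np.isnan(times)]
hist, edges = np.histogram(times, bins=30, density=True)
print(f"mean first firing time: {times.mean():.3f} s")
print("estimated firing-time density (first 5 bins):", np.round(hist[:5], 2))
```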

  2. Network feedback regulates motor output across a range of modulatory neuron activity.

    Science.gov (United States)

    Spencer, Robert M; Blitz, Dawn M

    2016-06-01

    Modulatory projection neurons alter network neuron synaptic and intrinsic properties to elicit multiple different outputs. Sensory and other inputs elicit a range of modulatory neuron activity that is further shaped by network feedback, yet little is known regarding how the impact of network feedback on modulatory neurons regulates network output across a physiological range of modulatory neuron activity. Identified network neurons, a fully described connectome, and a well-characterized, identified modulatory projection neuron enabled us to address this issue in the crab (Cancer borealis) stomatogastric nervous system. The modulatory neuron modulatory commissural neuron 1 (MCN1) activates and modulates two networks that generate rhythms via different cellular mechanisms and at distinct frequencies. MCN1 is activated at rates of 5-35 Hz in vivo and in vitro. Additionally, network feedback elicits MCN1 activity time-locked to motor activity. We asked how network activation, rhythm speed, and neuron activity levels are regulated by the presence or absence of network feedback across a physiological range of MCN1 activity rates. There were both similarities and differences in responses of the two networks to MCN1 activity. Many parameters in both networks were sensitive to network feedback effects on MCN1 activity. However, for most parameters, MCN1 activity rate did not determine the extent to which network output was altered by the addition of network feedback. These data demonstrate that the influence of network feedback on modulatory neuron activity is an important determinant of network output and feedback can be effective in shaping network output regardless of the extent of network modulation. Copyright © 2016 the American Physiological Society.

  3. Computing with Spiking Neuron Networks

    NARCIS (Netherlands)

    H. Paugam-Moisy; S.M. Bohte (Sander); G. Rozenberg; T.H.W. Baeck (Thomas); J.N. Kok (Joost)

    2012-01-01

    Spiking Neuron Networks (SNNs) are often referred to as the 3rd generation of neural networks. Highly inspired by natural computing in the brain and recent advances in neurosciences, they derive their strength and interest from an accurate modeling of synaptic interactions ...

  4. Hybrid Scheme for Modeling Local Field Potentials from Point-Neuron Networks

    DEFF Research Database (Denmark)

    Hagen, Espen; Dahmen, David; Stavrinou, Maria L

    2016-01-01

    With rapidly advancing multi-electrode recording technology, the local field potential (LFP) has again become a popular measure of neuronal activity in both research and clinical applications. Proper understanding of the LFP requires detailed mathematical modeling incorporating the anatomical and electrophysiological features of neurons near the recording electrode, as well as synaptic inputs from the entire network. Here we propose a hybrid modeling scheme combining efficient point-neuron network models with biophysical principles underlying LFP generation by real neurons. The LFP predictions rely on populations of network-equivalent multicompartment neuron models with layer-specific synaptic connectivity; the scheme can be used with an arbitrary number of point-neuron network populations, and allows for a full separation of simulated network dynamics and LFPs. We apply the scheme to a full-scale cortical network ...

  5. Energy-efficient neural information processing in individual neurons and neuronal networks.

    Science.gov (United States)

    Yu, Lianchun; Yu, Yuguo

    2017-11-01

    Brains are composed of networks of an enormous number of neurons interconnected with synapses. Neural information is carried by the electrical signals within neurons and the chemical signals among neurons. Generating these electrical and chemical signals is metabolically expensive. The fundamental issue raised here is whether brains have evolved efficient ways of developing an energy-efficient neural code from the molecular level to the circuit level. Here, we summarize the factors and biophysical mechanisms that could contribute to the energy-efficient neural code for processing input signals. The factors range from ion channel kinetics, body temperature, axonal propagation of action potentials, low-probability release of synaptic neurotransmitters, optimal input and noise, the size of neurons and neuronal clusters, excitation/inhibition balance, coding strategy, cortical wiring, and the organization of functional connectivity. Both experimental and computational evidence suggests that neural systems may use these factors to maximize the efficiency of energy consumption in processing neural signals. Studies indicate that efficient energy utilization may be universal in neuronal systems as an evolutionary consequence of the pressure of limited energy. As a result, neuronal connections may be wired in a highly economical manner to lower energy costs and space. Individual neurons within a network may encode independent stimulus components to allow a minimal number of neurons to represent whole stimulus characteristics efficiently. This basic principle may fundamentally change our view of how billions of neurons organize themselves into complex circuits to operate and generate the most powerful intelligent cognition in nature. © 2017 Wiley Periodicals, Inc.

  6. Collective stochastic coherence in recurrent neuronal networks

    Science.gov (United States)

    Sancristóbal, Belén; Rebollo, Beatriz; Boada, Pol; Sanchez-Vives, Maria V.; Garcia-Ojalvo, Jordi

    2016-09-01

    Recurrent networks of dynamic elements frequently exhibit emergent collective oscillations, which can show substantial regularity even when the individual elements are considerably noisy. How noise-induced dynamics at the local level coexists with regular oscillations at the global level is still unclear. Here we show that a combination of stochastic recurrence-based initiation with deterministic refractoriness in an excitable network can reconcile these two features, leading to maximum collective coherence for an intermediate noise level. We report this behaviour in the slow oscillation regime exhibited by a cerebral cortex network under dynamical conditions resembling slow-wave sleep and anaesthesia. Computational analysis of a biologically realistic network model reveals that an intermediate level of background noise leads to quasi-regular dynamics. We verify this prediction experimentally in cortical slices subject to varying amounts of extracellular potassium, which modulates neuronal excitability and thus synaptic noise. The model also predicts that this effectively regular state should exhibit noise-induced memory of the spatial propagation profile of the collective oscillations, which is also verified experimentally. Taken together, these results allow us to construe the high regularity observed experimentally in the brain as an instance of collective stochastic coherence.

  7. Population coding in sparsely connected networks of noisy neurons

    OpenAIRE

    Tripp, Bryan P.; Orchard, Jeff

    2012-01-01

    This study examines the relationship between population coding and spatial connection statistics in networks of noisy neurons. Encoding of sensory information in the neocortex is thought to require coordinated neural populations, because individual cortical neurons respond to a wide range of stimuli, and exhibit highly variable spiking in response to repeated stimuli. Population coding is rooted in network structure, because cortical neurons receive information only from other neurons, and be...

  8. Population Coding in Sparsely Connected Networks of Noisy Neurons

    Directory of Open Access Journals (Sweden)

    Bryan Patrick Tripp

    2012-05-01

    Full Text Available This study examines the relationship between population coding and spatial connection statistics in networks of noisy neurons. Encoding of sensory information in the neocortex is thought to require coordinated neural populations, because individual cortical neurons respond to a wide range of stimuli, and exhibit highly variable spiking in response to repeated stimuli. Population coding is rooted in network structure, because cortical neurons receive information only from other neurons, and because the information they encode must be decoded by other neurons, if it is to affect behaviour. However, population coding theory has often ignored network structure, or assumed discrete, fully-connected populations (in contrast with the sparsely connected, continuous sheet of the cortex). In this study, we model a sheet of cortical neurons with sparse, primarily local connections, and find that a network with this structure can encode multiple internal state variables with high signal-to-noise ratio. However, in our model, although connection probability varies with the distance between neurons, we find that the connections cannot be instantiated at random according to these probabilities, but must have additional structure if information is to be encoded with high fidelity.

  9. Bistability induces episodic spike communication by inhibitory neurons in neuronal networks.

    Science.gov (United States)

    Kazantsev, V B; Asatryan, S Yu

    2011-09-01

    Bistability is one of the important features of nonlinear dynamical systems. In neurodynamics, bistability has been found in basic Hodgkin-Huxley equations describing the cell membrane dynamics. When the neuron is clamped near its threshold, the stable rest potential may coexist with the stable limit cycle describing periodic spiking. However, this effect is often neglected in network computations where the neurons are typically reduced to threshold firing units (e.g., integrate-and-fire models). We found that the bistability may induce spike communication by inhibitory coupled neurons in the spiking network. The communication is realized in the form of episodic discharges with synchronous (correlated) spikes during the episodes. A spiking phase map is constructed to describe the synchronization and to estimate basic spike phase locking modes.

  10. Analysis of complex networks from biology to linguistics

    CERN Document Server

    Dehmer, Matthias

    2009-01-01

    Mathematical problems such as graph theory problems are of increasing importance for the analysis of modelling data in biomedical research such as in systems biology, neuronal network modelling etc. This book follows a new approach of including graph theory from a mathematical perspective with specific applications of graph theory in biomedical and computational sciences. The book is written by renowned experts in the field and offers valuable background information for a wide audience.

  11. Dislocation Coupling-Induced Transition of Synchronization in Two-Layer Neuronal Networks

    International Nuclear Information System (INIS)

    Qin Hui-Xin; Ma Jun; Wang Chun-Ni; Jin Wu-Yin

    2014-01-01

    The mutual coupling between neurons in a realistic neuronal system is very complex, and a two-layer neuronal network is designed to investigate the transition of the electrical activities of neurons. The Hindmarsh-Rose neuron model is used to describe the local dynamics of each neuron, and neurons in the two layers are coupled in a dislocated fashion. The coupling intensity between the two layers, and the coupling ratio (Pro), which defines the percentage of neurons involved in the coupling in each layer, are changed to observe the synchronization transition of collective behaviors in the two-layer network. It is found that the two layers of neurons become synchronized when the coupling intensity and coupling ratio (Pro) are increased beyond certain thresholds. An ordered wave in the first layer can wake up the resting state in the second layer, or suppress the spatiotemporal state in the second layer, by generating target waves or spiral waves under coupling. The scheme of dislocation coupling can thus be used to suppress spatiotemporal chaos and excite quiescent neurons. (interdisciplinary physics and related areas of science and technology)
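
    Dislocated inter-layer coupling (neuron i of one layer coupled to neuron i + shift of the other) can be sketched for Hindmarsh-Rose neurons as below. The intra-layer ring topology, coupling intensities and integration scheme are assumptions for illustration, not the network studied in the paper.

```python
# Illustrative sketch (assumed parameters, coupling form and intra-layer
# topology): two rings of Hindmarsh-Rose neurons with "dislocated" inter-layer
# coupling, i.e. neuron i of layer 1 is coupled to neuron (i + shift) mod N of
# layer 2; a simple synchronization error between the pairs is reported.
import numpy as np

rng = np.random.default_rng(10)
N, shift = 100, 7                     # neurons per layer, dislocation offset (assumed)
g_inter, g_intra = 0.5, 0.2           # coupling intensities (assumed)
I_ext, r, s, x_r = 3.0, 0.006, 4.0, -1.6   # bursting Hindmarsh-Rose parameters
dt, T = 0.02, 2000.0

def hr_rhs(x, y, z, drive):
    dx = y + 3.0 * x**2 - x**3 - z + I_ext + drive
    dy = 1.0 - 5.0 * x**2 - y
    dz = r * (s * (x - x_r) - z)
    return dx, dy, dz

x1, y1, z1 = (rng.uniform(-1.0, 1.0, N) for _ in range(3))
x2, y2, z2 = (rng.uniform(-1.0, 1.0, N) for _ in range(3))
idx12 = (np.arange(N) + shift) % N    # partner in layer 2 of layer-1 neuron i
idx21 = (np.arange(N) - shift) % N    # partner in layer 1 of layer-2 neuron j

def ring(x):                          # diffusive nearest-neighbour coupling on a ring
    return np.roll(x, 1) + np.roll(x, -1) - 2.0 * x

err, steps = 0.0, int(T / dt)
for step in range(steps):
    drive1 = g_intra * ring(x1) + g_inter * (x2[idx12] - x1)
    drive2 = g_intra * ring(x2) + g_inter * (x1[idx21] - x2)
    d1x, d1y, d1z = hr_rhs(x1, y1, z1, drive1)
    d2x, d2y, d2z = hr_rhs(x2, y2, z2, drive2)
    x1, y1, z1 = x1 + dt * d1x, y1 + dt * d1y, z1 + dt * d1z
    x2, y2, z2 = x2 + dt * d2x, y2 + dt * d2y, z2 + dt * d2z
    if step > steps // 2:             # time-average over the second half of the run
        err += np.mean(np.abs(x1 - x2[idx12])) / (steps - steps // 2)

print(f"mean synchronization error between dislocated pairs: {err:.3f}")
```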

  12. Control of neuronal network organization by chemical surface functionalization of multi-walled carbon nanotube arrays

    International Nuclear Information System (INIS)

    Liu Jie; Bibari, Olivier; Marchand, Gilles; Benabid, Alim-Louis; Sauter-Starace, Fabien; Appaix, Florence; De Waard, Michel

    2011-01-01

    Carbon nanotube substrates are promising candidates for biological applications and devices. Interfacing of these carbon nanotubes with neurons can be controlled by chemical modifications. In this study, we investigated how chemical surface functionalization of multi-walled carbon nanotube arrays (MWNT-A) influences neuronal adhesion and network organization. Functionalization of MWNT-A dramatically modifies the length of neurite fascicles, cluster inter-connection success rate, and the percentage of neurites that escape from the clusters. We propose that chemical functionalization represents a method of choice for developing applications in which neuronal patterning on MWNT-A substrates is required.

  13. Control of neuronal network organization by chemical surface functionalization of multi-walled carbon nanotube arrays

    Energy Technology Data Exchange (ETDEWEB)

    Liu Jie; Bibari, Olivier; Marchand, Gilles; Benabid, Alim-Louis; Sauter-Starace, Fabien [CEA, LETI-Minatec, 17 Rue des Martyrs, 38054 Grenoble Cedex 9 (France); Appaix, Florence; De Waard, Michel, E-mail: fabien.sauter@cea.fr, E-mail: michel.dewaard@ujf-grenoble.fr [Inserm U836, Grenoble Institute of Neuroscience, Site Sante la Tronche, Batiment Edmond J Safra, Chemin Fortune Ferrini, BP170, 38042 Grenoble Cedex 09 (France)

    2011-05-13

    Carbon nanotube substrates are promising candidates for biological applications and devices. Interfacing of these carbon nanotubes with neurons can be controlled by chemical modifications. In this study, we investigated how chemical surface functionalization of multi-walled carbon nanotube arrays (MWNT-A) influences neuronal adhesion and network organization. Functionalization of MWNT-A dramatically modifies the length of neurite fascicles, cluster inter-connection success rate, and the percentage of neurites that escape from the clusters. We propose that chemical functionalization represents a method of choice for developing applications in which neuronal patterning on MWNT-A substrates is required.

  14. Towards reproducible descriptions of neuronal network models.

    Directory of Open Access Journals (Sweden)

    Eilen Nordlie

    2009-08-01

    Full Text Available Progress in science depends on the effective exchange of ideas among scientists. New ideas can be assessed and criticized in a meaningful manner only if they are formulated precisely. This applies to simulation studies as well as to experiments and theories. But after more than 50 years of neuronal network simulations, we still lack a clear and common understanding of the role of computational models in neuroscience as well as established practices for describing network models in publications. This hinders the critical evaluation of network models as well as their re-use. We analyze here 14 research papers proposing neuronal network models of different complexity and find widely varying approaches to model descriptions, with regard to both the means of description and the ordering and placement of material. We further observe great variation in the graphical representation of networks and the notation used in equations. Based on our observations, we propose a good model description practice, composed of guidelines for the organization of publications, a checklist for model descriptions, templates for tables presenting model structure, and guidelines for diagrams of networks. The main purpose of this good practice is to trigger a debate about the communication of neuronal network models in a manner comprehensible to humans, as opposed to machine-readable model description languages. We believe that the good model description practice proposed here, together with a number of other recent initiatives on data-, model-, and software-sharing, may lead to a deeper and more fruitful exchange of ideas among computational neuroscientists in years to come. We further hope that work on standardized ways of describing--and thinking about--complex neuronal networks will lead the scientific community to a clearer understanding of high-level concepts in network dynamics, and will thus lead to deeper insights into the function of the brain.

  15. Effect of correlating adjacent neurons for identifying communications: Feasibility experiment in a cultured neuronal network

    OpenAIRE

    Yoshi Nishitani; Chie Hosokawa; Yuko Mizuno-Matsumoto; Tomomitsu Miyoshi; Shinichi Tamura

    2017-01-01

    Neuronal networks have fluctuating characteristics, unlike the stable characteristics seen in computers. The underlying mechanisms that drive reliable communication among neuronal networks and their ability to perform intelligible tasks remain unknown. Recently, in an attempt to resolve this issue, we showed that stimulated neurons communicate via spikes that propagate temporally, in the form of spike trains. We named this phenomenon “spike wave propagation”. In these previous studies, using ...

  16. NT2 derived neuronal and astrocytic network signalling.

    Directory of Open Access Journals (Sweden)

    Eric J Hill

    Full Text Available A major focus of stem cell research is the generation of neurons that may then be implanted to treat neurodegenerative diseases. However, a picture is emerging where astrocytes are partners to neurons in sustaining and modulating brain function. We therefore investigated the functional properties of NT2 derived astrocytes and neurons using electrophysiological and calcium imaging approaches. NT2 neurons (NT2Ns) expressed sodium dependent action potentials, as well as responses to depolarisation and the neurotransmitter glutamate. NT2Ns exhibited spontaneous and coordinated calcium elevations in clusters and in extended processes, indicating local and long distance signalling. Tetrodotoxin sensitive network activity could also be evoked by electrical stimulation. Similarly, NT2 astrocytes (NT2As) exhibited morphology and functional properties consistent with this glial cell type. NT2As responded to neuronal activity and to exogenously applied neurotransmitters with calcium elevations, and in contrast to neurons, also exhibited spontaneous rhythmic calcium oscillations. NT2As also generated propagating calcium waves that were gap junction and purinergic signalling dependent. Our results show that NT2 derived astrocytes exhibit appropriate functionality and that NT2N networks interact with NT2A networks in co-culture. These findings underline the utility of such cultures to investigate human brain cell type signalling under controlled conditions. Furthermore, since stem cell derived neuron function and survival is of great importance therapeutically, our findings suggest that the presence of complementary astrocytes may be valuable in supporting stem cell derived neuronal networks. Indeed, this also supports the intriguing possibility of selective therapeutic replacement of astrocytes in diseases where these cells are either lost or lose functionality.

  17. Bayesian Inference and Online Learning in Poisson Neuronal Networks.

    Science.gov (United States)

    Huang, Yanping; Rao, Rajesh P N

    2016-08-01

    Motivated by the growing evidence for Bayesian computation in the brain, we show how a two-layer recurrent network of Poisson neurons can perform both approximate Bayesian inference and learning for any hidden Markov model. The lower-layer sensory neurons receive noisy measurements of hidden world states. The higher-layer neurons infer a posterior distribution over world states via Bayesian inference from inputs generated by sensory neurons. We demonstrate how such a neuronal network with synaptic plasticity can implement a form of Bayesian inference similar to Monte Carlo methods such as particle filtering. Each spike in a higher-layer neuron represents a sample of a particular hidden world state. The spiking activity across the neural population approximates the posterior distribution over hidden states. In this model, variability in spiking is regarded not as a nuisance but as an integral feature that provides the variability necessary for sampling during inference. We demonstrate how the network can learn the likelihood model, as well as the transition probabilities underlying the dynamics, using a Hebbian learning rule. We present results illustrating the ability of the network to perform inference and learning for arbitrary hidden Markov models.

  18. Distribution of orientation selectivity in recurrent networks of spiking neurons with different random topologies.

    Science.gov (United States)

    Sadeh, Sadra; Rotter, Stefan

    2014-01-01

    Neurons in the primary visual cortex are more or less selective for the orientation of a light bar used for stimulation. A broad distribution of individual grades of orientation selectivity has in fact been reported in all species. A possible reason for emergence of broad distributions is the recurrent network within which the stimulus is being processed. Here we compute the distribution of orientation selectivity in randomly connected model networks that are equipped with different spatial patterns of connectivity. We show that, for a wide variety of connectivity patterns, a linear theory based on firing rates accurately approximates the outcome of direct numerical simulations of networks of spiking neurons. Distance dependent connectivity in networks with a more biologically realistic structure does not compromise our linear analysis, as long as the linearized dynamics, and hence the uniform asynchronous irregular activity state, remain stable. We conclude that linear mechanisms of stimulus processing are indeed responsible for the emergence of orientation selectivity and its distribution in recurrent networks with functionally heterogeneous synaptic connectivity.
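
    The linear rate-based theory referred to above can be sketched in a few lines: the steady-state rates of a randomly connected linear network follow r = (I - J)^(-1) h for a weakly tuned input h, and an orientation selectivity index is read off from each neuron's tuning curve. The network size, coupling scale, and modulation depth below are assumptions chosen only to keep the linearized dynamics stable; this is an illustration, not the paper's model.

      import numpy as np

      rng = np.random.default_rng(1)
      N, eps, J0 = 500, 0.1, -0.1            # size, connection probability, weight (kept small for stability)
      J = J0 * (rng.random((N, N)) < eps)    # random (Erdos-Renyi-like) connectivity
      np.fill_diagonal(J, 0.0)

      thetas = np.linspace(0, np.pi, 8, endpoint=False)   # stimulus orientations
      pref = rng.uniform(0, np.pi, N)                     # preferred orientation of each neuron's input
      m = 0.1                                             # input modulation depth

      A = np.linalg.inv(np.eye(N) - J)       # linear steady state: r = (I - J)^(-1) h
      rates = np.empty((len(thetas), N))
      for k, th in enumerate(thetas):
          h = 1.0 + m * np.cos(2 * (th - pref))           # weakly tuned feedforward input
          rates[k] = A @ h                                # rates left unrectified (purely linear analysis)

      # orientation selectivity index from the tuning curve of each neuron
      osi = np.abs((rates * np.exp(2j * thetas[:, None])).sum(axis=0)) / np.clip(rates.sum(axis=0), 1e-9, None)
      print("OSI distribution: mean %.3f, std %.3f" % (osi.mean(), osi.std()))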

  19. Developmental time windows for axon growth influence neuronal network topology.

    Science.gov (United States)

    Lim, Sol; Kaiser, Marcus

    2015-04-01

    Early brain connectivity development consists of multiple stages: birth of neurons, their migration and the subsequent growth of axons and dendrites. Each stage occurs within a certain period of time depending on types of neurons and cortical layers. Forming synapses between neurons either by growing axons starting at similar times for all neurons (much-overlapped time windows) or at different time points (less-overlapped) may affect the topological and spatial properties of neuronal networks. Here, we explore the extreme cases of axon formation during early development, either starting at the same time for all neurons (parallel, i.e., maximally overlapped time windows) or occurring for each neuron separately one neuron after another (serial, i.e., no overlaps in time windows). For both cases, the number of potential and established synapses remained comparable. Topological and spatial properties, however, differed: Neurons that started axon growth early on in serial growth achieved higher out-degrees, higher local efficiency and longer axon lengths while neurons demonstrated more homogeneous connectivity patterns for parallel growth. Second, connection probability decreased more rapidly with distance between neurons for parallel growth than for serial growth. Third, bidirectional connections were more numerous for parallel growth. Finally, we tested our predictions with C. elegans data. Together, this indicates that time windows for axon growth influence the topological and spatial properties of neuronal networks opening up the possibility to a posteriori estimate developmental mechanisms based on network properties of a developed network.

  20. The Dynamics of Networks of Identical Theta Neurons.

    Science.gov (United States)

    Laing, Carlo R

    2018-02-05

    We consider finite and infinite all-to-all coupled networks of identical theta neurons. Two types of synaptic interactions are investigated: instantaneous and delayed (via first-order synaptic processing). Extensive use is made of the Watanabe/Strogatz (WS) ansatz for reducing the dimension of networks of identical sinusoidally-coupled oscillators. As well as the degeneracy associated with the constants of motion of the WS ansatz, we also find continuous families of solutions for instantaneously coupled neurons, resulting from the reversibility of the reduced model and the form of the synaptic input. We also investigate a number of similar related models. We conclude that the dynamics of networks of all-to-all coupled identical neurons can be surprisingly complicated.
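
    A minimal numerical sketch of an all-to-all network of identical theta neurons is given below; it uses simple Euler integration and a smooth pulse-like coupling through (1 - cos theta), and it does not attempt the Watanabe-Strogatz reduction used in the paper. The excitability and coupling values are assumptions.

      import numpy as np

      N, dt, T = 200, 1e-3, 50.0
      eta, k = -0.2, 1.5                       # excitability and all-to-all coupling (assumed values)
      rng = np.random.default_rng(2)
      theta = rng.uniform(-np.pi, np.pi, N)    # identical neurons, random initial phases
      n_spikes = 0

      for step in range(int(T / dt)):
          # smooth pulse-like synaptic drive from the whole population (a common choice, not the paper's exact form)
          s = k * np.mean((1 - np.cos(theta)) ** 2)
          dtheta = (1 - np.cos(theta)) + (1 + np.cos(theta)) * (eta + s)
          new_theta = theta + dt * dtheta
          # a theta neuron "fires" when its phase crosses pi from below
          n_spikes += int(np.count_nonzero((theta < np.pi) & (new_theta >= np.pi)))
          theta = np.mod(new_theta + np.pi, 2 * np.pi) - np.pi   # wrap phases to (-pi, pi]

      print("population firing rate: %.2f spikes per neuron per unit time" % (n_spikes / (N * T)))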

  1. Effects of extracellular potassium diffusion on electrically coupled neuron networks

    Science.gov (United States)

    Wu, Xing-Xing; Shuai, Jianwei

    2015-02-01

    Potassium accumulation and diffusion during neuronal epileptiform activity have been observed experimentally, and potassium lateral diffusion has been suggested to play an important role in nonsynaptic neuron networks. We adopt a hippocampal CA1 pyramidal neuron network in a zero-calcium condition to better understand the influence of extracellular potassium dynamics on the stimulus-induced activity. The potassium concentration in the interstitial space for each neuron is regulated by potassium currents, Na+-K+ pumps, glial buffering, and ion diffusion. In addition to potassium diffusion, nearby neurons are also coupled through gap junctions. Our results reveal that the latency of the first spike responding to stimulus monotonically decreases with increasing gap-junction conductance but is insensitive to potassium diffusive coupling. The duration of network oscillations shows a bell-like shape with increasing potassium diffusive coupling at weak gap-junction coupling. For modest electrical coupling, there is an optimal K+ diffusion strength, at which the flow of potassium ions among the network neurons appropriately modulates interstitial potassium concentrations in a degree that provides the most favorable environment for the generation and continuance of the action potential waves in the network.

  2. Discriminative topological features reveal biological network mechanisms

    Directory of Open Access Journals (Sweden)

    Levovitz Chaya

    2004-11-01

    Full Text Available Abstract Background Recent genomic and bioinformatic advances have motivated the development of numerous network models intending to describe graphs of biological, technological, and sociological origin. In most cases the success of a model has been evaluated by how well it reproduces a few key features of the real-world data, such as degree distributions, mean geodesic lengths, and clustering coefficients. Often pairs of models can reproduce these features with indistinguishable fidelity despite being generated by vastly different mechanisms. In such cases, these few target features are insufficient to distinguish which of the different models best describes real world networks of interest; moreover, it is not clear a priori that any of the presently-existing algorithms for network generation offers a predictive description of the networks inspiring them. Results We present a method to assess systematically which of a set of proposed network generation algorithms gives the most accurate description of a given biological network. To derive discriminative classifiers, we construct a mapping from the set of all graphs to a high-dimensional (in principle infinite-dimensional) "word space". This map defines an input space for classification schemes which allow us to state unambiguously which models are most descriptive of a given network of interest. Our training sets include networks generated from 17 models either drawn from the literature or introduced in this work. We show that different duplication-mutation schemes best describe the E. coli genetic network, the S. cerevisiae protein interaction network, and the C. elegans neuronal network, out of a set of network models including a linear preferential attachment model and a small-world model. Conclusions Our method is a first step towards systematizing network models and assessing their predictability, and we anticipate its usefulness for a number of communities.
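
    The general strategy (map each graph to a feature vector, then train a classifier to tell generating mechanisms apart) can be sketched as follows. The features used here are simple summary statistics rather than the paper's subgraph "word" counts, and the two candidate generators are assumed stand-ins, not the 17 models of the study.

      import networkx as nx
      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import cross_val_score

      def features(G):
          """Simple topological summaries (stand-ins for the richer 'word' counts of the paper)."""
          degs = np.array([d for _, d in G.degree()])
          return [degs.mean(), degs.std(), nx.average_clustering(G), nx.transitivity(G), nx.density(G)]

      # two assumed stand-in generating mechanisms: preferential attachment vs. small-world rewiring
      n, X, y = 200, [], []
      for _ in range(40):
          X.append(features(nx.barabasi_albert_graph(n, 3))); y.append(0)
          X.append(features(nx.watts_strogatz_graph(n, 6, 0.1))); y.append(1)

      clf = RandomForestClassifier(n_estimators=100, random_state=0)
      print("cross-validated accuracy:", cross_val_score(clf, np.array(X), y, cv=5).mean())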

  3. Spiking Neurons for Analysis of Patterns

    Science.gov (United States)

    Huntsberger, Terrance

    2008-01-01

    Artificial neural networks comprising spiking neurons of a novel type have been conceived as improved pattern-analysis and pattern-recognition computational systems. These neurons are represented by a mathematical model denoted the state-variable model (SVM), which among other things, exploits a computational parallelism inherent in spiking-neuron geometry. Networks of SVM neurons offer advantages of speed and computational efficiency, relative to traditional artificial neural networks. The SVM also overcomes some of the limitations of prior spiking-neuron models. There are numerous potential pattern-recognition, tracking, and data-reduction (data preprocessing) applications for these SVM neural networks on Earth and in exploration of remote planets. Spiking neurons imitate biological neurons more closely than do the neurons of traditional artificial neural networks. A spiking neuron includes a central cell body (soma) surrounded by a tree-like interconnection network (dendrites). Spiking neurons are so named because they generate trains of output pulses (spikes) in response to inputs received from sensors or from other neurons. They gain their speed advantage over traditional neural networks by using the timing of individual spikes for computation, whereas traditional artificial neurons use averages of activity levels over time. Moreover, spiking neurons use the delays inherent in dendritic processing in order to efficiently encode the information content of incoming signals. Because traditional artificial neurons fail to capture this encoding, they have less processing capability, and so it is necessary to use more gates when implementing traditional artificial neurons in electronic circuitry. Such higher-order functions as dynamic tasking are effected by use of pools (collections) of spiking neurons interconnected by spike-transmitting fibers. The SVM includes adaptive thresholds and submodels of transport of ions (in imitation of such transport in biological

  4. Emergent properties of interacting populations of spiking neurons.

    Science.gov (United States)

    Cardanobile, Stefano; Rotter, Stefan

    2011-01-01

    Dynamic neuronal networks are a key paradigm of increasing importance in brain research, concerned with the functional analysis of biological neuronal networks and, at the same time, with the synthesis of artificial brain-like systems. In this context, neuronal network models serve as mathematical tools to understand the function of brains, but they might as well develop into future tools for enhancing certain functions of our nervous system. Here, we present and discuss our recent achievements in developing multiplicative point processes into a viable mathematical framework for spiking network modeling. The perspective is that the dynamic behavior of these neuronal networks is faithfully reflected by a set of non-linear rate equations, describing all interactions on the population level. These equations are similar in structure to Lotka-Volterra equations, well known by their use in modeling predator-prey relations in population biology, but abundant applications to economic theory have also been described. We present a number of biologically relevant examples for spiking network function, which can be studied with the help of the aforementioned correspondence between spike trains and specific systems of non-linear coupled ordinary differential equations. We claim that, enabled by the use of multiplicative point processes, we can make essential contributions to a more thorough understanding of the dynamical properties of interacting neuronal populations.
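
    The Lotka-Volterra-type rate equations mentioned above have the form dr_i/dt = r_i (b_i + sum_j W_ij r_j). The sketch below integrates a small example with two excitatory populations and one inhibitory population; the parameter values are assumptions, and the derivation from multiplicative point processes is not reproduced here.

      import numpy as np

      # Lotka-Volterra-type rate equations: dr_i/dt = r_i * (b_i + sum_j W_ij r_j)
      b = np.array([1.0, 0.8, -0.5])              # baseline gains: two excitatory groups, one inhibitory
      W = np.array([[-0.5,  0.0, -1.0],           # self-limitation plus inhibitory feedback
                    [ 0.0, -0.5, -1.0],
                    [ 0.8,  0.8, -0.2]])          # the inhibitory population is driven by both others

      r, dt = np.array([0.1, 0.2, 0.1]), 1e-3
      for _ in range(int(200 / dt)):              # simple forward-Euler integration
          r = r + dt * r * (b + W @ r)

      print("steady-state population rates:", np.round(r, 3))   # for these parameters the fixed point is (0.6, 0.2, 0.7)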

  5. Numerical simulation of coherent resonance in a model network of Rulkov neurons

    Science.gov (United States)

    Andreev, Andrey V.; Runnova, Anastasia E.; Pisarchik, Alexander N.

    2018-04-01

    In this paper we study the spiking behaviour of a neuronal network consisting of Rulkov elements. We find that the regularity of this behaviour is maximized at a certain level of environmental noise. This effect, referred to as coherence resonance, is demonstrated in a random complex network of Rulkov neurons. An external stimulus added to some of the neurons excites them, and then activates other neurons in the network. The network coherence is also maximized at a certain stimulus amplitude.
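
    A rough sketch of this kind of numerical experiment is given below: a random network of two-dimensional Rulkov map neurons in a silent (excitable) regime, driven by Gaussian noise, with spiking regularity quantified by the coefficient of variation (CV) of interspike intervals. All parameter values, the diffusive form of the coupling, and the spike-detection threshold are assumptions rather than the settings of the paper; coherence resonance would show up as a CV minimum at intermediate noise.

      import numpy as np

      rng = np.random.default_rng(3)
      N, steps = 100, 60000
      alpha, sigma, mu, eps = 4.1, -1.2, 0.001, 0.05   # silent (excitable) regime; all values assumed
      A = (rng.random((N, N)) < 0.05).astype(float)    # random coupling graph
      np.fill_diagonal(A, 0.0)
      deg = np.maximum(A.sum(1), 1.0)

      def isi_cv(noise):
          """Simulate the map network and return the coefficient of variation of interspike intervals."""
          x = -2.2 + 0.05 * rng.standard_normal(N)     # start near the resting state
          y = np.full(N, -2.9)
          last, above, isis = np.full(N, -1), np.zeros(N, bool), []
          for n in range(steps):
              coupling = eps * (A @ x / deg - x)       # diffusive (electrical-like) coupling, a simplification
              x_new = alpha / (1 + x ** 2) + y + coupling + noise * rng.standard_normal(N)
              y = y - mu * (x + 1.0) + mu * sigma      # slow variable
              x = x_new
              crossed = (x > 0.0) & ~above             # spike = upward crossing of x = 0
              above = x > 0.0
              for i in np.flatnonzero(crossed):
                  if last[i] >= 0:
                      isis.append(n - last[i])
                  last[i] = n
          isis = np.asarray(isis, float)
          return isis.std() / isis.mean() if isis.size > 2 else float("nan")   # nan: too few spikes at this noise level

      for D in [0.1, 0.2, 0.4, 0.8]:
          print("noise %.1f -> ISI CV %.2f" % (D, isi_cv(D)))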

  6. Versatile Networks of Simulated Spiking Neurons Displaying Winner-Take-All Behavior

    Directory of Open Access Journals (Sweden)

    Yanqing Chen

    2013-03-01

    Full Text Available We describe simulations of large-scale networks of excitatory and inhibitory spiking neurons that can generate dynamically stable winner-take-all (WTA) behavior. The network connectivity is a variant of center-surround architecture that we call center-annular-surround (CAS). In this architecture each neuron is excited by nearby neighbors and inhibited by more distant neighbors in an annular-surround region. The neural units of these networks simulate conductance-based spiking neurons that interact via mechanisms susceptible to both short-term synaptic plasticity and STDP. We show that such CAS networks display robust WTA behavior unlike the center-surround networks and other control architectures that we have studied. We find that a large-scale network of spiking neurons with separate populations of excitatory and inhibitory neurons can give rise to smooth maps of sensory input. In addition, we show that a humanoid Brain-Based-Device (BBD) under the control of a spiking WTA neural network can learn to reach to target positions in its visual field, thus demonstrating the acquisition of sensorimotor coordination.

  7. Versatile networks of simulated spiking neurons displaying winner-take-all behavior.

    Science.gov (United States)

    Chen, Yanqing; McKinstry, Jeffrey L; Edelman, Gerald M

    2013-01-01

    We describe simulations of large-scale networks of excitatory and inhibitory spiking neurons that can generate dynamically stable winner-take-all (WTA) behavior. The network connectivity is a variant of center-surround architecture that we call center-annular-surround (CAS). In this architecture each neuron is excited by nearby neighbors and inhibited by more distant neighbors in an annular-surround region. The neural units of these networks simulate conductance-based spiking neurons that interact via mechanisms susceptible to both short-term synaptic plasticity and STDP. We show that such CAS networks display robust WTA behavior unlike the center-surround networks and other control architectures that we have studied. We find that a large-scale network of spiking neurons with separate populations of excitatory and inhibitory neurons can give rise to smooth maps of sensory input. In addition, we show that a humanoid brain-based-device (BBD) under the control of a spiking WTA neural network can learn to reach to target positions in its visual field, thus demonstrating the acquisition of sensorimotor coordination.

  8. Dynamics of Moment Neuronal Networks with Intra- and Inter-Interactions

    Directory of Open Access Journals (Sweden)

    Xuyan Xiang

    2015-01-01

    Full Text Available A framework of moment neuronal networks with intra- and inter-interactions is presented, showing how spontaneous activity propagates across homogeneous and heterogeneous networks. The input-output firing relationship and the stability are first explored for a homogeneous network. For a heterogeneous network without constraints on the correlation coefficients between neurons, a more sophisticated dynamics is then explored. With random interactions, the network synchronizes easily. However, desynchronization is produced by a lateral interaction such as a Mexican hat function. It is the external intralayer input unit that gives rise to richer and more unexpected dynamics than in earlier models. Hence, the work further opens up the possibility of carrying out stochastic computation in neuronal networks.

  9. A real-time hybrid neuron network for highly parallel cognitive systems.

    Science.gov (United States)

    Christiaanse, Gerrit Jan; Zjajo, Amir; Galuzzi, Carlo; van Leuken, Rene

    2016-08-01

    For a comprehensive understanding of how neurons communicate with each other, new tools need to be developed that can accurately mimic the behaviour of such neurons and neuron networks under `real-time' constraints. In this paper, we propose an easily customisable, highly pipelined, neuron network design, which executes optimally scheduled floating-point operations for a maximal number of biophysically plausible neurons per FPGA family type. To reduce the required amount of resources without adverse effect on the calculation latency, a single exponent instance is used for multiple neuron calculation operations. Experimental results indicate that the proposed network design allows the simulation of up to 1188 neurons on a Virtex7 (XC7VX550T) device in brain real-time, yielding a speed-up of 12.4x compared to the state of the art.

  10. Stochastic resonance in small-world neuronal networks with hybrid electrical–chemical synapses

    International Nuclear Information System (INIS)

    Wang, Jiang; Guo, Xinmeng; Yu, Haitao; Liu, Chen; Deng, Bin; Wei, Xile; Chen, Yingyuan

    2014-01-01

    Highlights: •We study stochastic resonance in small-world neural networks with hybrid synapses. •The resonance effect depends largely on the probability of chemical synapse. •An optimal chemical synapse probability exists to evoke network resonance. •Network topology affects the stochastic resonance in hybrid neuronal networks. - Abstract: The dependence of stochastic resonance in small-world neuronal networks with hybrid electrical–chemical synapses on the probability of chemical synapse and the rewiring probability is investigated. A subthreshold periodic signal is imposed on one single neuron within the neuronal network as a pacemaker. It is shown that, irrespective of the probability of chemical synapse, there exists a moderate intensity of external noise optimizing the response of neuronal networks to the pacemaker. Moreover, the effect of pacemaker driven stochastic resonance of the system depends largely on the probability of chemical synapse. A high probability of chemical synapse requires a lower noise intensity to evoke the phenomenon of stochastic resonance in the networked neuronal systems. In addition, for fixed noise intensity, there is an optimal chemical synapse probability, which can promote the propagation of the localized subthreshold pacemaker across neural networks. The optimal chemical synapse probability becomes even larger as the coupling strength decreases. Furthermore, the small-world topology has a significant impact on the stochastic resonance in hybrid neuronal networks. It is found that increasing the rewiring probability can always enhance the stochastic resonance until it approaches the random network limit.

  11. Spiking synchronization regulated by noise in three types of Hodgkin—Huxley neuronal networks

    International Nuclear Information System (INIS)

    Zhang Zheng-Zhen; Zeng Shang-You; Tang Wen-Yan; Hu Jin-Lin; Zeng Shao-Wen; Ning Wei-Lian; Qiu Yi; Wu Hui-Si

    2012-01-01

    In this paper, we study spiking synchronization in three different types of Hodgkin—Huxley neuronal networks, which are the small-world, regular, and random neuronal networks. All the neurons are subjected to subthreshold stimulus and external noise. It is found that in each of all the neuronal networks there is an optimal strength of noise to induce the maximal spiking synchronization. We further demonstrate that in each of the neuronal networks there is a range of synaptic conductance to induce the effect that an optimal strength of noise maximizes the spiking synchronization. Only when the magnitude of the synaptic conductance is moderate, will the effect be considerable. However, if the synaptic conductance is small or large, the effect vanishes. As the connections between neurons increase, the synaptic conductance to maximize the effect decreases. Therefore, we show quantitatively that the noise-induced maximal synchronization in the Hodgkin—Huxley neuronal network is a general effect, regardless of the specific type of neuronal network

  12. Network control principles predict neuron function in the Caenorhabditis elegans connectome

    Science.gov (United States)

    Yan, Gang; Vértes, Petra E.; Towlson, Emma K.; Chew, Yee Lian; Walker, Denise S.; Schafer, William R.; Barabási, Albert-László

    2017-10-01

    Recent studies on the controllability of complex systems offer a powerful mathematical framework to systematically explore the structure-function relationship in biological, social, and technological networks. Despite theoretical advances, we lack direct experimental proof of the validity of these widely used control principles. Here we fill this gap by applying a control framework to the connectome of the nematode Caenorhabditis elegans, allowing us to predict the involvement of each C. elegans neuron in locomotor behaviours. We predict that control of the muscles or motor neurons requires 12 neuronal classes, which include neuronal groups previously implicated in locomotion by laser ablation, as well as one previously uncharacterized neuron, PDB. We validate this prediction experimentally, finding that the ablation of PDB leads to a significant loss of dorsoventral polarity in large body bends. Importantly, control principles also allow us to investigate the involvement of individual neurons within each neuronal class. For example, we predict that, within the class of DD motor neurons, only three (DD04, DD05, or DD06) should affect locomotion when ablated individually. This prediction is also confirmed; single cell ablations of DD04 or DD05 specifically affect posterior body movements, whereas ablations of DD02 or DD03 do not. Our predictions are robust to deletions of weak connections, missing connections, and rewired connections in the current connectome, indicating the potential applicability of this analytical framework to larger and less well-characterized connectomes.

  13. Network control principles predict neuron function in the Caenorhabditis elegans connectome.

    Science.gov (United States)

    Yan, Gang; Vértes, Petra E; Towlson, Emma K; Chew, Yee Lian; Walker, Denise S; Schafer, William R; Barabási, Albert-László

    2017-10-26

    Recent studies on the controllability of complex systems offer a powerful mathematical framework to systematically explore the structure-function relationship in biological, social, and technological networks. Despite theoretical advances, we lack direct experimental proof of the validity of these widely used control principles. Here we fill this gap by applying a control framework to the connectome of the nematode Caenorhabditis elegans, allowing us to predict the involvement of each C. elegans neuron in locomotor behaviours. We predict that control of the muscles or motor neurons requires 12 neuronal classes, which include neuronal groups previously implicated in locomotion by laser ablation, as well as one previously uncharacterized neuron, PDB. We validate this prediction experimentally, finding that the ablation of PDB leads to a significant loss of dorsoventral polarity in large body bends. Importantly, control principles also allow us to investigate the involvement of individual neurons within each neuronal class. For example, we predict that, within the class of DD motor neurons, only three (DD04, DD05, or DD06) should affect locomotion when ablated individually. This prediction is also confirmed; single cell ablations of DD04 or DD05 specifically affect posterior body movements, whereas ablations of DD02 or DD03 do not. Our predictions are robust to deletions of weak connections, missing connections, and rewired connections in the current connectome, indicating the potential applicability of this analytical framework to larger and less well-characterized connectomes.
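
    The controllability framework applied in these studies is commonly formulated through maximum matching: nodes left unmatched in a maximum matching of the network's bipartite representation must receive independent control inputs. The sketch below illustrates that criterion on a small invented directed graph with networkx; it is not the authors' pipeline, and the example network is hypothetical.

      import networkx as nx

      # A small, invented directed network (stand-in for a connectome)
      G = nx.DiGraph([("A", "B"), ("A", "C"), ("B", "D"), ("C", "D"), ("D", "E"), ("E", "B")])

      # Bipartite representation: an edge u -> v links the "out-copy" of u to the "in-copy" of v
      B = nx.Graph()
      out_nodes = [("out", u) for u in G.nodes()]
      in_nodes = [("in", v) for v in G.nodes()]
      B.add_nodes_from(out_nodes, bipartite=0)
      B.add_nodes_from(in_nodes, bipartite=1)
      B.add_edges_from((("out", u), ("in", v)) for u, v in G.edges())

      # Nodes whose in-copy is unmatched in a maximum matching must receive their own control input
      matching = nx.algorithms.bipartite.hopcroft_karp_matching(B, top_nodes=out_nodes)
      matched_in = {node for node in matching if node[0] == "in"}
      drivers = [v for tag, v in in_nodes if (tag, v) not in matched_in]
      if not drivers:                      # a perfect matching still needs at least one driver node
          drivers = [next(iter(G.nodes()))]
      print("one minimum driver-node set:", drivers)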

  14. Pulsed neural networks consisting of single-flux-quantum spiking neurons

    International Nuclear Information System (INIS)

    Hirose, T.; Asai, T.; Amemiya, Y.

    2007-01-01

    An inhibitory pulsed neural network was developed for brain-like information processing, by using single-flux-quantum (SFQ) circuits. It consists of spiking neuron devices that are coupled to each other through all-to-all inhibitory connections. The network selects neural activity. The operation of the neural network was confirmed by computer simulation. SFQ neuron devices can imitate the operation of the inhibition phenomenon of neural networks

  15. Associative memory in phasing neuron networks

    Energy Technology Data Exchange (ETDEWEB)

    Nair, Niketh S [ORNL; Bochove, Erik J. [United States Air Force Research Laboratory, Kirtland Air Force Base; Braiman, Yehuda [ORNL

    2014-01-01

    We studied pattern formation in a network of coupled Hindmarsh-Rose model neurons and introduced a new model for associative memory retrieval using networks of Kuramoto oscillators. Hindmarsh-Rose Neural Networks can exhibit a rich set of collective dynamics that can be controlled by their connectivity. Specifically, we showed an instance of Hebb's rule where spiking was correlated with network topology. Based on this, we presented a simple model of associative memory in coupled phase oscillators.
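
    The phase-oscillator associative memory mentioned above can be sketched with Kuramoto dynamics and Hebbian couplings: patterns are stored in the coupling matrix, and retrieval is measured by the overlap between the final phase configuration and each stored pattern. The pattern count, network size, and noisy cue below are assumptions, not the authors' setup.

      import numpy as np

      rng = np.random.default_rng(4)
      N, P = 100, 3
      xi = rng.choice([-1, 1], size=(P, N))          # stored binary patterns
      K = (xi.T @ xi) / N                            # Hebbian coupling matrix

      # initialise near a corrupted version of pattern 0; phases 0 / pi encode +1 / -1
      cue = xi[0] * np.where(rng.random(N) < 0.85, 1, -1)
      theta = np.where(cue > 0, 0.0, np.pi) + 0.3 * rng.standard_normal(N)

      dt = 0.05
      for _ in range(2000):
          # Kuramoto dynamics with Hebbian couplings (identical natural frequencies assumed)
          theta = theta + dt * np.sum(K * np.sin(theta[None, :] - theta[:, None]), axis=1)

      overlaps = np.abs(xi @ np.exp(1j * theta)) / N   # overlap of final phases with each stored pattern
      print("overlaps with stored patterns:", np.round(overlaps, 2))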

  16. Efficient computation in networks of spiking neurons: simulations and theory

    International Nuclear Information System (INIS)

    Natschlaeger, T.

    1999-01-01

    One of the most prominent features of biological neural systems is that individual neurons communicate via short electrical pulses, the so-called action potentials or spikes. In this thesis we investigate possible mechanisms which can in principle explain how complex computations in spiking neural networks (SNN) can be performed very fast, i.e. within a few tens of milliseconds. Some of these models are based on the assumption that relevant information is encoded by the timing of individual spikes (temporal coding). We will also discuss a model which is based on a population code and is still able to perform fast complex computations. In their natural environment biological neural systems have to process signals with a rich temporal structure. Hence it is an interesting question how neural systems process time series. In this context we explore possible links between biophysical characteristics of single neurons (refractory behavior, connectivity, time course of postsynaptic potentials) and synapses (unreliability, dynamics) on the one hand and possible computations on time series on the other hand. Furthermore we describe a general model of computation that exploits dynamic synapses. This model provides a general framework for understanding how neural systems process time-varying signals. (author)

  17. SERS investigations and electrical recording of neuronal networks with three-dimensional plasmonic nanoantennas (Conference Presentation)

    Science.gov (United States)

    De Angelis, Francesco

    2017-06-01

    SERS investigations and electrical recording of neuronal networks with three-dimensional plasmonic nanoantennas Michele Dipalo, Valeria Caprettini, Anbrea Barbaglia, Laura Lovato, Francesco De Angelis e-mail: francesco.deangelis@iit.it Istituto Italiano di Tecnologia, Via Morego 30, 16163, Genova Biological systems are analysed mainly by optical, chemical or electrical methods. Normally each of these techniques provides only partial information about the environment, while combined investigations could reveal new phenomena occurring in complex systems such as in-vitro neuronal networks. Aiming at the merging of optical and electrical investigations of biological samples, we introduced three-dimensional plasmonic nanoantennas on CMOS-based electrical sensors [1]. The overall device is then capable of enhanced Raman Analysis of cultured cells combined with electrical recording of neuronal activity. The Raman measurements show a much higher sensitivity when performed on the tip of the nanoantenna in respect to the flat substrate [2]; this effect is a combination of the high plasmonic field enhancement and of the tight adhesion of cells on the nanoantenna tip. Furthermore, when plasmonic opto-poration is exploited [3] the 3D nanoelectrodes are able to penetrate through the cell membrane thus accessing the intracellular environment. Our latest results (unpublished) show that the technique is completely non-invasive and solves many problems related to state-of-the-art intracellular recording approaches on large neuronal networks. This research received funding from ERC-IDEAS Program: "Neuro-Plasmonics" [Grant n. 616213]. References: [1] M. Dipalo, G. C. Messina, H. Amin, R. La Rocca, V. Shalabaeva, A. Simi, A. Maccione, P. Zilio, L. Berdondini, F. De Angelis, Nanoscale 2015, 7, 3703. [2] R. La Rocca, G. C. Messina, M. Dipalo, V. Shalabaeva, F. De Angelis, Small 2015, 11, 4632. [3] G. C. Messina et al., Spatially, Temporally, and Quantitatively Controlled Delivery of

  18. Interplay between Graph Topology and Correlations of Third Order in Spiking Neuronal Networks.

    Directory of Open Access Journals (Sweden)

    Stojan Jovanović

    2016-06-01

    Full Text Available The study of processes evolving on networks has recently become a very popular research field, not only because of the rich mathematical theory that underpins it, but also because of its many possible applications, a number of them in the field of biology. Indeed, molecular signaling pathways, gene regulation, predator-prey interactions and the communication between neurons in the brain can be seen as examples of networks with complex dynamics. The properties of such dynamics depend largely on the topology of the underlying network graph. In this work, we want to answer the following question: Knowing network connectivity, what can be said about the level of third-order correlations that will characterize the network dynamics? We consider a linear point process as a model for pulse-coded, or spiking activity in a neuronal network. Using recent results from theory of such processes, we study third-order correlations between spike trains in such a system and explain which features of the network graph (i.e. which topological motifs) are responsible for their emergence. Comparing two different models of network topology-random networks of Erdős-Rényi type and networks with highly interconnected hubs-we find that, in random networks, the average measure of third-order correlations does not depend on the local connectivity properties, but rather on global parameters, such as the connection probability. This, however, ceases to be the case in networks with a geometric out-degree distribution, where topological specificities have a strong impact on average correlations.

  19. Interplay between Graph Topology and Correlations of Third Order in Spiking Neuronal Networks.

    Science.gov (United States)

    Jovanović, Stojan; Rotter, Stefan

    2016-06-01

    The study of processes evolving on networks has recently become a very popular research field, not only because of the rich mathematical theory that underpins it, but also because of its many possible applications, a number of them in the field of biology. Indeed, molecular signaling pathways, gene regulation, predator-prey interactions and the communication between neurons in the brain can be seen as examples of networks with complex dynamics. The properties of such dynamics depend largely on the topology of the underlying network graph. In this work, we want to answer the following question: Knowing network connectivity, what can be said about the level of third-order correlations that will characterize the network dynamics? We consider a linear point process as a model for pulse-coded, or spiking activity in a neuronal network. Using recent results from theory of such processes, we study third-order correlations between spike trains in such a system and explain which features of the network graph (i.e. which topological motifs) are responsible for their emergence. Comparing two different models of network topology-random networks of Erdős-Rényi type and networks with highly interconnected hubs-we find that, in random networks, the average measure of third-order correlations does not depend on the local connectivity properties, but rather on global parameters, such as the connection probability. This, however, ceases to be the case in networks with a geometric out-degree distribution, where topological specificities have a strong impact on average correlations.

  20. The synchronization of FitzHugh–Nagumo neuron network coupled by gap junction

    International Nuclear Information System (INIS)

    Zhan Yong; Zhang Suhua; Zhao Tongjun; An Hailong; Zhang Zhendong; Han Yingrong; Liu Hui; Zhang Yuhong

    2008-01-01

    It is well known that strong coupling can synchronize a network of nonlinear oscillators. Synchronization provides the basis of the remarkable computational performance of the brain. In this paper the FitzHugh–Nagumo neuron network is constructed. The dependence of the synchronization on the coupling strength, the noise intensity and the size of the neuron network has been discussed. The results indicate that the coupling among neurons works to improve the synchronization, and noise increases the neuron random dynamics and the local fluctuations; the larger the size of the network, the worse the synchronization. The dependence of the synchronization on the strength of the electric synapse coupling and chemical synapse coupling has also been discussed, which proves that electric synapse coupling can greatly enhance the synchronization of the neuron network.
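
    A minimal sketch of this type of experiment is shown below: FitzHugh-Nagumo oscillators diffusively coupled through a random graph (a stand-in for gap junctions), with synchronization quantified by the average dispersion of the membrane variables. The parameter values, network statistics, and noise model are assumptions, not those of the paper.

      import numpy as np

      rng = np.random.default_rng(5)
      N, dt, steps = 50, 0.01, 100000
      a, b, tau, I, D = 0.7, 0.8, 12.5, 0.5, 0.04      # classic FitzHugh-Nagumo constants; drive and noise assumed

      adj = (rng.random((N, N)) < 0.2).astype(float)   # random undirected coupling graph
      adj = np.triu(adj, 1)
      adj = adj + adj.T
      deg = np.maximum(adj.sum(1), 1.0)

      for g in [0.0, 0.02, 0.1, 0.5]:                  # gap-junction (diffusive) coupling strengths
          v = 0.5 * rng.standard_normal(N)
          w = 0.5 * rng.standard_normal(N)
          err = 0.0
          for n in range(steps):
              gap = g * (adj @ v / deg - v)            # electrical coupling pulls v toward the neighbours' mean
              dv = v - v ** 3 / 3 - w + I + gap
              dw = (v + a - b * w) / tau
              v = v + dt * dv + np.sqrt(dt) * D * rng.standard_normal(N)
              w = w + dt * dw
              if n >= steps // 2:                      # discard the transient
                  err += v.std()
          print("g = %.2f  mean synchronization error = %.3f" % (g, err / (steps - steps // 2)))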

  1. How the self-coupled neuron can affect the chaotic synchronization of network

    International Nuclear Information System (INIS)

    Jia Chenhui; Wang Jiang; Deng, Bin

    2009-01-01

    We have calculated the minimum coupling strength for synchronization in 34 kinds of three-cell neuron networks. From the results, we find that a self-coupled neuron can have some effect on the synchronization of the network. The reason is that self-coupled neurons make the effective number of neurons appear to 'decrease', and they decrease the coupling strength of the other neurons that are coupled with them.

  2. Towards building hybrid biological/in silico neural networks for motor neuroprosthetic control

    Directory of Open Access Journals (Sweden)

    Mehmet Kocaturk

    2015-08-01

    Full Text Available In this article, we introduce the Bioinspired Neuroprosthetic Design Environment (BNDE) as a practical platform for the development of novel brain machine interface (BMI) controllers which are based on spiking model neurons. We built the BNDE around a hard real-time system so that it is capable of creating simulated synapses from extracellularly recorded neurons to model neurons. In order to evaluate the practicality of the BNDE for neuroprosthetic control experiments, a novel, adaptive BMI controller was developed and tested using real-time closed-loop simulations. The present controller consists of two in silico medium spiny neurons which receive simulated synaptic inputs from recorded motor cortical neurons. In the closed-loop simulations, the recordings from the cortical neurons were imitated using an external, hardware-based neural signal synthesizer. By implementing a reward-modulated spike timing-dependent plasticity rule, the controller achieved perfect target reach accuracy for a two target reaching task in one dimensional space. The BNDE combines the flexibility of software-based spiking neural network (SNN) simulations with powerful online data visualization tools and is a low-cost, PC-based and all-in-one solution for developing neurally-inspired BMI controllers. We believe the BNDE is the first implementation which is capable of creating hybrid biological/in silico neural networks for motor neuroprosthetic control and utilizes multiple CPU cores for computationally intensive real-time SNN simulations.
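
    The reward-modulated spike timing-dependent plasticity rule mentioned above combines an STDP-driven eligibility trace per synapse with a global reward signal that gates the actual weight change. The sketch below is a deliberately crude, hedged illustration of that scheme with surrogate spike trains; the time constants, the postsynaptic model, and the reward schedule are assumptions and do not reproduce the controller used in the BNDE.

      import numpy as np

      rng = np.random.default_rng(6)
      n_in, tau_e, a_plus, a_minus, lr = 20, 200.0, 1.0, -0.6, 0.005   # assumed constants (times in ms)
      w = 0.1 * np.ones(n_in)            # synapses from surrogate "recorded" units onto one model neuron
      elig = np.zeros(n_in)              # one eligibility trace per synapse
      last_pre = np.full(n_in, -1e9)     # last presynaptic spike times
      last_post = -1e9                   # last postsynaptic spike time
      dt, T, t = 1.0, 20000.0, 0.0

      while t < T:
          t += dt
          elig *= np.exp(-dt / tau_e)                       # eligibility decays between events
          pre = rng.random(n_in) < 0.02                     # surrogate presynaptic spikes (about 20 Hz)
          last_pre[pre] = t
          if rng.random() < min(0.9, 0.05 * float(w @ pre)):   # crude probabilistic postsynaptic model
              elig += a_plus * np.exp(-(t - last_pre) / 20.0)  # pre-before-post pairings potentiate the trace
              last_post = t
          elig[pre] += a_minus * np.exp(-(t - last_post) / 20.0)  # post-before-pre pairings depress it
          if int(t) % 1000 == 0:                            # a reward signal arrives once per second
              reward = rng.choice([0.0, 1.0])               # placeholder for task-performance feedback
              w = np.clip(w + lr * reward * elig, 0.0, 1.0) # weight change = rate * reward * eligibility

      print("mean synaptic weight after training:", round(float(w.mean()), 3))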

  3. Synchronization in a non-uniform network of excitatory spiking neurons

    Science.gov (United States)

    Echeveste, Rodrigo; Gros, Claudius

    Spontaneous synchronization of pulse coupled elements is ubiquitous in nature and seems to be of vital importance for life. Networks of pacemaker cells in the heart, extended populations of Southeast Asian fireflies, and neuronal oscillations in cortical networks are examples of this. In the present work, a rich repertoire of dynamical states with different degrees of synchronization is found in a network of excitatory-only spiking neurons connected in a non-uniform fashion. In particular, uncorrelated and partially correlated states are found without the need for inhibitory neurons or external currents. The phase transitions between these states, as well as the robustness, stability, and response of the network to external stimuli, are studied.

  4. From biological neural networks to thinking machines: Transitioning biological organizational principles to computer technology

    Science.gov (United States)

    Ross, Muriel D.

    1991-01-01

    The three-dimensional organization of the vestibular macula is under study by computer assisted reconstruction and simulation methods as a model for more complex neural systems. One goal of this research is to transition knowledge of biological neural network architecture and functioning to computer technology, to contribute to the development of thinking computers. Maculas are organized as weighted neural networks for parallel distributed processing of information. The network is characterized by non-linearity of its terminal/receptive fields. Wiring appears to develop through constrained randomness. A further property is the presence of two main circuits, highly channeled and distributed modifying, that are connected through feedforward-feedback collaterals and a biasing subcircuit. Computer simulations demonstrate that differences in geometry of the feedback (afferent) collaterals affect the timing and the magnitude of voltage changes delivered to the spike initiation zone. Feedforward (efferent) collaterals act as voltage followers and likely inhibit neurons of the distributed modifying circuit. These results illustrate the importance of feedforward-feedback loops, of timing, and of inhibition in refining neural network output. They also suggest that it is the distributed modifying network that is most involved in adaptation, memory, and learning. Tests of macular adaptation, through hyper- and microgravitational studies, support this hypothesis since synapses in the distributed modifying circuit, but not the channeled circuit, are altered. Transitioning knowledge of biological systems to computer technology, however, remains problematical.

  5. Dominating biological networks.

    Directory of Open Access Journals (Sweden)

    Tijana Milenković

    Full Text Available Proteins are essential macromolecules of life that carry out most cellular processes. Since proteins aggregate to perform function, and since protein-protein interaction (PPI) networks model these aggregations, one would expect to uncover new biology from PPI network topology. Hence, using PPI networks to predict protein function and role of protein pathways in disease has received attention. A debate remains open about whether network properties of "biologically central (BC)" genes (i.e., their protein products), such as those involved in aging, cancer, infectious diseases, or signaling and drug-targeted pathways, exhibit some topological centrality compared to the rest of the proteins in the human PPI network. To help resolve this debate, we design new network-based approaches and apply them to get new insight into biological function and disease. We hypothesize that BC genes have a topologically central (TC) role in the human PPI network. We propose two different concepts of topological centrality. We design a new centrality measure to capture complex wirings of proteins in the network that identifies as TC those proteins that reside in dense extended network neighborhoods. Also, we use the notion of domination and find dominating sets (DSs) in the PPI network, i.e., sets of proteins such that every protein is either in the DS or is a neighbor of the DS. Clearly, a DS has a TC role, as it enables efficient communication between different network parts. We find statistically significant enrichment in BC genes of TC nodes and outperform the existing methods indicating that genes involved in key biological processes occupy topologically complex and dense regions of the network and correspond to its "spine" that connects all other network parts and can thus pass cellular signals efficiently throughout the network. To our knowledge, this is the first study that explores domination in the context of PPI networks.
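
    The domination idea is easy to state in code: a dominating set is a set of nodes such that every node is either in the set or adjacent to it. The sketch below uses networkx's greedy dominating_set on a small invented interaction graph; it returns a valid (not necessarily minimum) dominating set and only illustrates the concept, not the paper's centrality analysis.

      import networkx as nx

      # A small, invented protein-protein interaction graph
      G = nx.Graph([("p53", "MDM2"), ("p53", "BRCA1"), ("BRCA1", "RAD51"), ("MDM2", "AKT1"),
                    ("AKT1", "MTOR"), ("MTOR", "RPS6"), ("RAD51", "RPS6")])

      # Greedy dominating set: every protein is in the set or adjacent to a member of it
      ds = nx.dominating_set(G)
      assert all(v in ds or any(u in ds for u in G[v]) for v in G)
      print("dominating set:", ds)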

  6. Emergent properties of interacting populations of spiking neurons

    Directory of Open Access Journals (Sweden)

    Stefano Cardanobile

    2011-12-01

    Full Text Available Dynamic neuronal networks are a key paradigm of increasing importance in brain research, concerned with the functional analysis of biological neuronal networks and, at the same time, with the synthesis of artificial brain-like systems. In this context, neuronal network models serve as mathematical tools to understand the function of brains, but they might as well develop into future tools for enhancing certain functions of our nervous system. Here, we discuss our recent achievements in developing multiplicative point processes into a viable mathematical framework for spiking network modeling. The perspective is that the dynamic behavior of these neuronal networks on the population level is faithfully reflected by a set of non-linear rate equations, describing all interactions on this level. These equations, in turn, are similar in structure to the Lotka-Volterra equations, well known by their use in modeling predator-prey relationships in population biology, but abundant applications to economic theory have also been described. We present a number of biologically relevant examples for spiking network function, which can be studied with the help of the aforementioned correspondence between spike trains and specific systems of non-linear coupled ordinary differential equations. We claim that, enabled by the use of multiplicative point processes, we can make essential contributions to a more thorough understanding of the dynamical properties of neural populations.

  7. Deep Neural Networks: A New Framework for Modeling Biological Vision and Brain Information Processing.

    Science.gov (United States)

    Kriegeskorte, Nikolaus

    2015-11-24

    Recent advances in neural network modeling have enabled major strides in computer vision and other artificial intelligence applications. Human-level visual recognition abilities are coming within reach of artificial systems. Artificial neural networks are inspired by the brain, and their computations could be implemented in biological neurons. Convolutional feedforward networks, which now dominate computer vision, take further inspiration from the architecture of the primate visual hierarchy. However, the current models are designed with engineering goals, not to model brain computations. Nevertheless, initial studies comparing internal representations between these models and primate brains find surprisingly similar representational spaces. With human-level performance no longer out of reach, we are entering an exciting new era, in which we will be able to build biologically faithful feedforward and recurrent computational models of how biological brains perform high-level feats of intelligence, including vision.

  8. Phase-locking and bistability in neuronal networks with synaptic depression

    Science.gov (United States)

    Akcay, Zeynep; Huang, Xinxian; Nadim, Farzan; Bose, Amitabha

    2018-02-01

    We consider a recurrent network of two oscillatory neurons that are coupled with inhibitory synapses. We use the phase response curves of the neurons and the properties of short-term synaptic depression to define Poincaré maps for the activity of the network. The fixed points of these maps correspond to phase-locked modes of the network. Using these maps, we analyze the conditions that allow short-term synaptic depression to lead to the existence of bistable phase-locked, periodic solutions. We show that bistability arises when either the phase response curve of the neuron or the short-term depression profile changes steeply enough. The results apply to any Type I oscillator and we illustrate our findings using the Quadratic Integrate-and-Fire and Morris-Lecar neuron models.

  9. Dynamical Encoding by Networks of Competing Neuron Groups: Winnerless Competition

    International Nuclear Information System (INIS)

    Rabinovich, M.; Volkovskii, A.; Lecanda, P.; Huerta, R.; Abarbanel, H. D. I.; Laurent, G.

    2001-01-01

    Following studies of olfactory processing in insects and fish, we investigate neural networks whose dynamics in phase space is represented by orbits near the heteroclinic connections between saddle regions (fixed points or limit cycles). These networks encode input information as trajectories along the heteroclinic connections. If there are N neurons in the network, the capacity is approximately e·(N-1)!, i.e., much larger than that of most traditional network structures. We show that a small winnerless competition network composed of FitzHugh-Nagumo spiking neurons efficiently transforms input information into a spatiotemporal output.

  10. Network biology: Describing biological systems by complex networks. Comment on "Network science of biological systems at different scales: A review" by M. Gosak et al.

    Science.gov (United States)

    Jalili, Mahdi

    2018-03-01

    I enjoyed reading Gosak et al.'s review on analysing biological systems from a network science perspective [1]. Network science, which first started within the physics community, is now a mature multidisciplinary field of science with many applications ranging from ecology to biology, medicine, social sciences, engineering and computer science. Gosak et al. discussed how biological systems can be modelled and described by complex network theory, which is an important application of network science. Although there has been considerable progress in network biology over the past two decades, this is just the beginning, and network science has a great deal to offer to biology and medical sciences.

  11. Spiral Wave in Small-World Networks of Hodgkin-Huxley Neurons

    International Nuclear Information System (INIS)

    Ma Jun; Zhang Cairong; Yang Lijian; Wu Ying

    2010-01-01

    The effects of small-world connections and noise on the formation and transition of spiral waves in networks of Hodgkin-Huxley neurons are investigated in detail. Some interesting results are found in our numerical studies. i) Quiescent neurons are activated to propagate electrical signals to others by generating and developing a spiral wave from a spiral seed in a small area. ii) A statistical factor is defined to describe the collective properties and the phase transition induced by the network topology and noise. iii) A stable rotating spiral wave can be generated and remains robust when the rewiring probability is below a certain threshold; otherwise, a spiral wave cannot be developed from the spiral seed and breakup occurs for a stable rotating spiral wave. iv) Gaussian white noise is introduced on the membrane of neurons to study the noise-induced phase transition of spiral waves in small-world networks of neurons. It is confirmed that Gaussian white noise plays an active role in supporting and developing spiral waves in the networks of neurons, and the appearance of a smaller synchronization factor indicates a high possibility of inducing a spiral wave. (interdisciplinary physics and related areas of science and technology)
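
    The rewiring probability referred to above is the standard Watts-Strogatz parameter; the sketch below only illustrates how it reshapes the network topology (clustering stays high at small rewiring while path lengths drop quickly), without reproducing the Hodgkin-Huxley membrane dynamics or the spiral-wave measurements. Network size and degree are assumptions.

      import networkx as nx

      # Effect of the rewiring probability p in a Watts-Strogatz small-world graph
      n, k = 500, 6
      for p in [0.0, 0.01, 0.05, 0.2, 1.0]:
          G = nx.connected_watts_strogatz_graph(n, k, p, seed=0)
          print("p = %.2f   clustering = %.3f   mean path length = %.2f"
                % (p, nx.average_clustering(G), nx.average_shortest_path_length(G)))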

  12. Complex Behavior in a Selective Aging Neuron Model Based on Small World Networks

    International Nuclear Information System (INIS)

    Zhang Guiqing; Chen Tianlun

    2008-01-01

    Complex behavior in a selective aging simple neuron model based on small world networks is investigated. The basic elements of the model are endowed with the main features of a neuron function. The structure of the selective aging neuron model is discussed. We also give some properties of the new network and find that the neuron model displays a power-law behavior. If the brain network is a small-world-like network, the mean avalanche size is almost the same unless the aging parameter is large enough.

  13. Synthetic biological networks

    International Nuclear Information System (INIS)

    Archer, Eric; Süel, Gürol M

    2013-01-01

    Despite their obvious relationship and overlap, the field of physics is blessed with many insightful laws, while such laws are sadly absent in biology. Here we aim to discuss how the rise of a more recent field known as synthetic biology may allow us to more directly test hypotheses regarding the possible design principles of natural biological networks and systems. In particular, this review focuses on synthetic gene regulatory networks engineered to perform specific functions or exhibit particular dynamic behaviors. Advances in synthetic biology may set the stage to uncover the relationship of potential biological principles to those developed in physics. (review article)

  14. Pacemaker neuron and network oscillations depend on a neuromodulator-regulated linear current

    Directory of Open Access Journals (Sweden)

    Shunbing Zhao

    2010-05-01

    Full Text Available Linear leak currents have been implicated in the regulation of neuronal excitability, generation of neuronal and network oscillations, and network state transitions. Yet, few studies have directly tested the dependence of network oscillations on leak currents or explored the role of leak currents on network activity. In the oscillatory pyloric network of decapod crustaceans neuromodulatory inputs are necessary for pacemaker activity. A large subset of neuromodulators is known to activate a single voltage-gated inward current IMI, which has been shown to regulate the rhythmic activity of the network and its pacemaker neurons. Using the dynamic clamp technique, we show that the crucial component of IMI for the generation of oscillatory activity is only a close-to-linear portion of the current-voltage relationship. The nature of this conductance is such that the presence or the absence of neuromodulators effectively regulates the amount of leak current and the input resistance in the pacemaker neurons. When deprived of neuromodulatory inputs, pyloric oscillations are disrupted; yet, a linear reduction of the total conductance in a single neuron within the pacemaker group recovers not only the pacemaker activity in that neuron, but also leads to a recovery of oscillations in the entire pyloric network. The recovered activity produces proper frequency and phasing that is similar to that induced by neuromodulators. These results show that the passive properties of pacemaker neurons can significantly affect their capacity to generate and regulate the oscillatory activity of an entire network, and that this feature is exploited by neuromodulatory inputs.

  15. Long-term optical stimulation of channelrhodopsin-expressing neurons to study network plasticity

    Science.gov (United States)

    Lignani, Gabriele; Ferrea, Enrico; Difato, Francesco; Amarù, Jessica; Ferroni, Eleonora; Lugarà, Eleonora; Espinoza, Stefano; Gainetdinov, Raul R.; Baldelli, Pietro; Benfenati, Fabio

    2013-01-01

    Neuronal plasticity produces changes in excitability, synaptic transmission, and network architecture in response to external stimuli. Network adaptation to environmental conditions takes place on time scales ranging from a few seconds to days, and modulates the entire network dynamics. To study the network response to defined long-term experimental protocols, we set up a system that combines optical and electrophysiological tools embedded in a cell incubator. Primary hippocampal neurons transduced with lentiviruses expressing channelrhodopsin-2/H134R were subjected to various photostimulation protocols in a time window on the order of days. To monitor the effects of light-induced gating of network activity, stimulated transduced neurons were simultaneously recorded using multi-electrode arrays (MEAs). The developed experimental model allows discerning short-term, long-lasting, and adaptive plasticity responses of the same neuronal network to distinct stimulation frequencies applied over different temporal windows. PMID:23970852

  16. Long-term optical stimulation of channelrhodopsin-expressing neurons to study network plasticity.

    Science.gov (United States)

    Lignani, Gabriele; Ferrea, Enrico; Difato, Francesco; Amarù, Jessica; Ferroni, Eleonora; Lugarà, Eleonora; Espinoza, Stefano; Gainetdinov, Raul R; Baldelli, Pietro; Benfenati, Fabio

    2013-01-01

    Neuronal plasticity produces changes in excitability, synaptic transmission, and network architecture in response to external stimuli. Network adaptation to environmental conditions takes place on time scales ranging from a few seconds to days, and modulates the entire network dynamics. To study the network response to defined long-term experimental protocols, we set up a system that combines optical and electrophysiological tools embedded in a cell incubator. Primary hippocampal neurons transduced with lentiviruses expressing channelrhodopsin-2/H134R were subjected to various photostimulation protocols in a time window on the order of days. To monitor the effects of light-induced gating of network activity, stimulated transduced neurons were simultaneously recorded using multi-electrode arrays (MEAs). The developed experimental model allows discerning short-term, long-lasting, and adaptive plasticity responses of the same neuronal network to distinct stimulation frequencies applied over different temporal windows.

  17. Attractor dynamics in local neuronal networks

    Directory of Open Access Journals (Sweden)

    Jean-Philippe Thivierge

    2014-03-01

    Full Text Available Patterns of synaptic connectivity in various regions of the brain are characterized by the presence of synaptic motifs, defined as unidirectional and bidirectional synaptic contacts that follow a particular configuration and link together small groups of neurons. Recent computational work proposes that a relay network (two populations communicating via a third, relay population of neurons) can generate precise patterns of neural synchronization. Here, we employ two distinct models of neuronal dynamics and show that simulated neural circuits designed in this way are caught in a global attractor of activity that prevents neurons from modulating their response on the basis of incoming stimuli. To circumvent the emergence of a fixed global attractor, we propose a mechanism of selective gain inhibition that promotes flexible responses to external stimuli. We suggest that local neuronal circuits may employ this mechanism to generate precise patterns of neural synchronization whose transient nature delimits the occurrence of a brief stimulus.

  18. Spike Code Flow in Cultured Neuronal Networks.

    Science.gov (United States)

    Tamura, Shinichi; Nishitani, Yoshi; Hosokawa, Chie; Miyoshi, Tomomitsu; Sawai, Hajime; Kamimura, Takuya; Yagi, Yasushi; Mizuno-Matsumoto, Yuko; Chen, Yen-Wei

    2016-01-01

    We observed spike trains produced by one-shot electrical stimulation with 8 × 8 multielectrodes in cultured neuronal networks. Each electrode accepted spikes from several neurons. We extracted the short codes from spike trains and obtained a code spectrum with a nominal time accuracy of 1%. We then constructed code flow maps as movies of the electrode array to observe the code flow of "1101" and "1011," which are typical pseudorandom sequences such as those we often encountered in the literature and in our experiments. They seemed to flow from one electrode to the neighboring one and maintained their shape to some extent. To quantify the flow, we calculated the "maximum cross-correlations" among neighboring electrodes, to find the direction of maximum flow of the codes with lengths less than 8. Normalized maximum cross-correlations were almost constant irrespective of code. Furthermore, if the spike trains were shuffled in interval orders or in electrodes, they became significantly smaller. Thus, the analysis suggested that local codes of approximately constant shape propagated and conveyed information across the network. Hence, the codes can serve as visible and trackable marks of propagating spike waves as well as for evaluating information flow in the neuronal network.

  19. Extracting neuronal functional network dynamics via adaptive Granger causality analysis.

    Science.gov (United States)

    Sheikhattar, Alireza; Miran, Sina; Liu, Ji; Fritz, Jonathan B; Shamma, Shihab A; Kanold, Patrick O; Babadi, Behtash

    2018-04-24

    Quantifying the functional relations between the nodes in a network based on local observations is a key challenge in studying complex systems. Most existing time series analysis techniques for this purpose provide static estimates of the network properties, pertain to stationary Gaussian data, or do not take into account the ubiquitous sparsity in the underlying functional networks. When applied to spike recordings from neuronal ensembles undergoing rapid task-dependent dynamics, they thus hinder a precise statistical characterization of the dynamic neuronal functional networks underlying adaptive behavior. We develop a dynamic estimation and inference paradigm for extracting functional neuronal network dynamics in the sense of Granger, by integrating techniques from adaptive filtering, compressed sensing, point process theory, and high-dimensional statistics. We demonstrate the utility of our proposed paradigm through theoretical analysis, algorithm development, and application to synthetic and real data. Application of our techniques to two-photon Ca2+ imaging experiments from the mouse auditory cortex reveals unique features of the functional neuronal network structures underlying spontaneous activity at unprecedented spatiotemporal resolution. Our analysis of simultaneous recordings from the ferret auditory and prefrontal cortical areas suggests evidence for the role of rapid top-down and bottom-up functional dynamics across these areas involved in robust attentive behavior.
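
    The paper's estimator is adaptive, sparse, and built on point-process likelihoods; the sketch below only illustrates the underlying Granger idea in its simplest static, Gaussian, pairwise form: a signal y is said to Granger-cause x if including y's past reduces the error of predicting x from its own past. Model order, data, and the variance-ratio score are illustrative choices.

```python
# Minimal static, pairwise Granger-causality sketch on rate time series.
# The paper's adaptive, sparse, point-process estimator is far more involved;
# this only shows the core idea: y "Granger-causes" x if adding y's past
# reduces the prediction error of x beyond what x's own past achieves.
import numpy as np

def lagged(z, order, n):
    # design matrix whose row for time t holds z[t-1], ..., z[t-order]
    return np.column_stack([z[order - k:n - k] for k in range(1, order + 1)])

def granger_score(x, y, order=3):
    """Log ratio of residual variances: restricted (x past) vs. full (x + y past)."""
    n = len(x)
    target = x[order:n]
    X_restricted = lagged(x, order, n)
    X_full = np.hstack([X_restricted, lagged(y, order, n)])
    def rss(A, b):
        coef, *_ = np.linalg.lstsq(A, b, rcond=None)
        return np.sum((b - A @ coef) ** 2)
    return float(np.log(rss(X_restricted, target) / rss(X_full, target)))

# Toy example: y drives x with a one-step delay, but not the other way round.
rng = np.random.default_rng(1)
y = rng.standard_normal(500)
x = np.zeros(500)
for t in range(1, 500):
    x[t] = 0.8 * y[t - 1] + 0.1 * rng.standard_normal()
print("score y -> x:", round(granger_score(x, y), 3))   # clearly positive
print("score x -> y:", round(granger_score(y, x), 3))   # near zero
```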

  20. Burst analysis tool for developing neuronal networks exhibiting highly varying action potential dynamics

    Directory of Open Access Journals (Sweden)

    Fikret Emre eKapucu

    2012-06-01

    Full Text Available In this paper we propose a firing-statistics-based burst detection algorithm for neuronal networks exhibiting highly variable action potential dynamics. Electrical activity of neuronal networks is generally analyzed through the occurrences of spikes and bursts both in time and space. Commonly accepted analysis tools employ burst detection algorithms based on predefined criteria. However, maturing neuronal networks, such as those originating from human embryonic stem cells (hESC), exhibit highly variable network structure and time-varying dynamics. To explore the developing burst/spike activities of such networks, we propose a burst detection algorithm which utilizes firing statistics based on interspike interval (ISI) histograms. Moreover, the algorithm calculates interspike interval thresholds for burst spikes as well as for pre-burst spikes and burst tails by evaluating the cumulative moving average and skewness of the ISI histogram. Because of the adaptive nature of the proposed algorithm, its analysis power is not limited by the type of neuronal cell network at hand. We demonstrate the functionality of our algorithm with two different types of microelectrode array (MEA) data recorded from spontaneously active hESC-derived neuronal cell networks. The same data were also analyzed by two commonly employed burst detection algorithms, and the differences in burst detection results are illustrated. The results demonstrate that our method is both adaptive to the firing statistics of the network and yields successful burst detection from the data. In conclusion, the proposed method is a potential tool for analyzing hESC-derived neuronal cell networks and can thus be utilized in studies aiming to understand the development and functioning of human neuronal networks and as an analysis tool for in vitro drug screening and neurotoxicity assays.
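
    A heavily simplified sketch of the adaptive thresholding idea is given below: the ISI threshold is read off the peak of the cumulative moving average of the ISI histogram and then used to group spikes into bursts. The published algorithm additionally adjusts the threshold by the skewness of the histogram and derives separate thresholds for pre-burst spikes and burst tails, which is omitted here; the toy spike train and the minimum of three spikes per burst are assumptions for illustration.

```python
# Simplified ISI-histogram burst detector in the spirit of the method above.
# The published algorithm also scales the threshold by histogram skewness and
# derives extra thresholds for pre-burst spikes and burst tails; this sketch
# uses only the cumulative-moving-average peak and a toy spike train.
import numpy as np

def adaptive_isi_threshold(spike_times, n_bins=100):
    """ISI threshold taken at the peak of the cumulative moving average (CMA)
    of the ISI histogram."""
    isis = np.diff(np.sort(spike_times))
    counts, edges = np.histogram(isis, bins=n_bins)
    cma = np.cumsum(counts) / np.arange(1, n_bins + 1)
    return edges[int(np.argmax(cma)) + 1]

def detect_bursts(spike_times, threshold, min_spikes=3):
    """Group runs of spikes whose ISIs stay below the threshold."""
    bursts, current = [], [spike_times[0]]
    for prev, nxt in zip(spike_times[:-1], spike_times[1:]):
        if nxt - prev <= threshold:
            current.append(nxt)
        else:
            if len(current) >= min_spikes:
                bursts.append((current[0], current[-1]))
            current = [nxt]
    if len(current) >= min_spikes:
        bursts.append((current[0], current[-1]))
    return bursts

# Toy spike train: sparse background activity plus three dense bursts.
rng = np.random.default_rng(2)
background = rng.uniform(0.0, 30.0, 40)
bursts_in = [t0 + np.cumsum(rng.exponential(0.004, 20)) for t0 in (5.0, 15.0, 25.0)]
train = np.sort(np.concatenate([background] + bursts_in))
thr = adaptive_isi_threshold(train)
print(f"ISI threshold ~{thr:.3f} s, bursts detected: {len(detect_bursts(train, thr))}")
```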

  1. Defects formation and spiral waves in a network of neurons in presence of electromagnetic induction.

    Science.gov (United States)

    Rostami, Zahra; Jafari, Sajad

    2018-04-01

    The complex anatomical and physiological structure of an excitable tissue (e.g., cardiac tissue) in the body can exhibit different electrical activities through normal or abnormal behavior. Abnormalities of the excitable tissue arising from different biological causes can lead to the formation of defects. Such defects can cause successive waves that may give rise to additional reorganized beating behaviors such as spiral waves or target waves. In this study, the formation of defects and the resulting emitted waves in an excitable tissue are investigated. We have considered a square array network of neurons with nearest-neighbor connections to describe the excitable tissue. Fundamentally, the electrophysiological properties of ion currents in the body are responsible for the exhibition of electrical spatiotemporal patterns. More precisely, the fluctuation of accumulated ions inside and outside the cell produces time-varying electric and magnetic fields. Considering the undeniable mutual effects of the electric and magnetic fields, we have proposed a new Hindmarsh-Rose (HR) neuronal model for the local dynamics of each individual neuron in the network. In this new neuronal model, the influence of magnetic flux on the membrane potential is defined, and the improved model has more bifurcation parameters. Moreover, the dynamical behavior of the tissue is investigated in the quiescent, spiking, bursting, and even chaotic states. The resulting spatiotemporal patterns are presented, and the time series of some sampled neurons are displayed as well.
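
    A minimal sketch of one commonly used flux-coupled extension of the Hindmarsh-Rose model is shown below, in which a memductance term feeds the magnetic flux back onto the membrane potential. The parameter values, the forward Euler integration, and the crude spike count are illustrative assumptions, not the paper's exact formulation or settings.

```python
# One commonly used flux-coupled extension of the Hindmarsh-Rose neuron: a
# memductance rho(phi) = alpha + 3*beta*phi**2 feeds the magnetic flux back
# onto the membrane potential. Parameters and integration are illustrative.
import numpy as np

def hr_flux_step(state, dt, I_ext, a=1.0, b=3.0, c=1.0, d=5.0,
                 r=0.006, s=4.0, x0=-1.6, k0=0.5, k1=0.5,
                 alpha=0.1, beta=0.02):
    x, y, z, phi = state
    rho = alpha + 3.0 * beta * phi**2             # flux-controlled memductance
    dx = y - a * x**3 + b * x**2 - z + I_ext - k0 * rho * x
    dy = c - d * x**2 - y
    dz = r * (s * (x - x0) - z)
    dphi = x - k1 * phi                           # flux driven by membrane potential
    return state + dt * np.array([dx, dy, dz, dphi])

# Integrate one neuron with forward Euler and count membrane-potential peaks.
dt, n_steps = 0.01, 200_000
state = np.array([0.1, 0.0, 0.0, 0.0])
x_trace = np.empty(n_steps)
for i in range(n_steps):
    state = hr_flux_step(state, dt, I_ext=3.0)
    x_trace[i] = state[0]
peaks = (x_trace[1:-1] > 1.0) & (x_trace[1:-1] > x_trace[:-2]) & (x_trace[1:-1] >= x_trace[2:])
print(f"spikes (peaks above 1.0): {int(peaks.sum())}")
```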

  2. Conceptual Network Model From Sensory Neurons to Astrocytes of the Human Nervous System.

    Science.gov (United States)

    Yang, Yiqun; Yeo, Chai Kiat

    2015-07-01

    From single-celled animals such as paramecia to vertebrates such as apes, the nervous system plays an important role in responding to variations in the environment. Compared to that of other animals, the nervous system in the human body possesses a more intricate organization and greater utility. The anatomy of the nervous system has been progressively elucidated, yet a cell-level explanation of complete information transmission is still lacking. Along the signal pathway toward the brain, an external stimulus first activates action potentials in the sensing neuron, and these electric pulses travel along the spinal nerve or cranial nerve to neurons in the brain. Second, calcium elevation is triggered in the astrocyte branch at the tripartite synapse. Third, the local calcium wave expands to the entire territory of the astrocyte. Finally, the calcium wave propagates to neighboring astrocytes via gap junction channels. In our study, we integrate existing mathematical models and biological experiments for each step of the signal transduction to establish a conceptual network model of the human nervous system. The network is composed of four layers, and the communication protocols of each layer can be adapted to entities with different characterizations. We verify our simulation results against the available biological experiments and mathematical models and provide a test case of the integrated network. As the production of conscious episodes in the human nervous system is still under intense investigation, our model serves as a useful tool to facilitate, complement and verify current and future studies in human cognition.

  3. Altering neuronal excitability to preserve network connectivity in a computational model of Alzheimer's disease.

    Directory of Open Access Journals (Sweden)

    Willem de Haan

    2017-09-01

    Full Text Available Neuronal hyperactivity and hyperexcitability of the cerebral cortex and hippocampal region is an increasingly observed phenomenon in preclinical Alzheimer's disease (AD). In later stages, oscillatory slowing and loss of functional connectivity are ubiquitous. Recent evidence suggests that neuronal dynamics have a prominent role in AD pathophysiology, making it a potentially interesting therapeutic target. However, although neuronal activity can be manipulated by various (non-)pharmacological means, intervening in a highly integrated system that depends on complex dynamics can produce counterintuitive and adverse effects. Computational dynamic network modeling may serve as a virtual test ground for developing effective interventions. To explore this approach, a previously introduced large-scale neural mass network with human brain topology was used to simulate the temporal evolution of AD-like, activity-dependent network degeneration. In addition, six defense strategies that either enhanced or diminished neuronal excitability were tested against the degeneration process, targeting excitatory and inhibitory neurons combined or separately. Outcome measures described oscillatory, connectivity and topological features of the damaged networks. Over time, the various interventions produced diverse large-scale network effects. Contrary to our hypothesis, the most successful strategy was a selective stimulation of all excitatory neurons in the network; it substantially prolonged the preservation of network integrity. The results of this study imply that functional network damage due to pathological neuronal activity can be opposed by targeted adjustment of neuronal excitability levels. The present approach may help to explore therapeutic effects aimed at preserving or restoring neuronal network integrity and contribute to better-informed intervention choices in future clinical trials in AD.

  4. Replicating receptive fields of simple and complex cells in primary visual cortex in a neuronal network model with temporal and population sparseness and reliability.

    Science.gov (United States)

    Tanaka, Takuma; Aoyagi, Toshio; Kaneko, Takeshi

    2012-10-01

    We propose a new principle for replicating receptive field properties of neurons in the primary visual cortex. We derive a learning rule for a feedforward network, which maintains a low firing rate for the output neurons (resulting in temporal sparseness) and allows only a small subset of the neurons in the network to fire at any given time (resulting in population sparseness). Our learning rule also sets the firing rates of the output neurons at each time step to near-maximum or near-minimum levels, resulting in neuronal reliability. The learning rule is simple enough to be written in spatially and temporally local forms. After the learning stage is performed using input image patches of natural scenes, output neurons in the model network are found to exhibit simple-cell-like receptive field properties. When the outputs of these simple-cell-like neurons are input to another model layer using the same learning rule, the second-layer output neurons after learning become less sensitive to the phase of gratings than the simple-cell-like input neurons. In particular, some of the second-layer output neurons become completely phase invariant, owing to the convergence of the connections from first-layer neurons with similar orientation selectivity to second-layer neurons in the model network. We examine the parameter dependencies of the receptive field properties of the model neurons after learning and discuss their biological implications. We also show that the localized learning rule is consistent with experimental results concerning neuronal plasticity and can replicate the receptive fields of simple and complex cells.

  5. Detection of 5-hydroxytryptamine (5-HT) in vitro using a hippocampal neuronal network-based biosensor with extracellular potential analysis of neurons.

    Science.gov (United States)

    Hu, Liang; Wang, Qin; Qin, Zhen; Su, Kaiqi; Huang, Liquan; Hu, Ning; Wang, Ping

    2015-04-15

    5-hydroxytryptamine (5-HT) is an important neurotransmitter in regulating emotions and related behaviors in mammals. To detect and monitor 5-HT, effective and convenient methods are needed for the investigation of neuronal networks. In this study, hippocampal neuronal networks (HNNs) endogenously expressing 5-HT receptors were employed as sensing elements to build an in vitro neuronal network-based biosensor. The electrophysiological characteristics were analyzed at both the neuron and network levels. The firing rates and amplitudes were derived from the recorded signals to determine the biosensor response characteristics. The experimental results demonstrate a dose-dependent inhibitory effect of 5-HT on hippocampal neuron activities, indicating the effectiveness of this hybrid biosensor in detecting 5-HT with a response range from 0.01 μmol/L to 10 μmol/L. In addition, cross-correlation analysis of HNN activities suggests that 5-HT can weaken HNN connectivity reversibly, providing additional specificity of this biosensor in detecting 5-HT. Moreover, 5-HT-induced spatiotemporal firing pattern alterations could be monitored at the neuron and network levels simultaneously by this hybrid biosensor in a convenient and direct way. With these merits, this neuronal network-based biosensor promises to be a valuable and practical platform for the study of neurotransmitters in vitro. Copyright © 2014 Elsevier B.V. All rights reserved.

  6. Building functional networks of spiking model neurons.

    Science.gov (United States)

    Abbott, L F; DePasquale, Brian; Memmesheimer, Raoul-Martin

    2016-03-01

    Most of the networks used by computer scientists and many of those studied by modelers in neuroscience represent unit activities as continuous variables. Neurons, however, communicate primarily through discontinuous spiking. We review methods for transferring our ability to construct interesting networks that perform relevant tasks from the artificial continuous domain to more realistic spiking network models. These methods raise a number of issues that warrant further theoretical and experimental study.

  7. Short-term memory in networks of dissociated cortical neurons.

    Science.gov (United States)

    Dranias, Mark R; Ju, Han; Rajaram, Ezhilarasan; VanDongen, Antonius M J

    2013-01-30

    Short-term memory refers to the ability to store small amounts of stimulus-specific information for a short period of time. It is supported by both fading and hidden memory processes. Fading memory relies on recurrent activity patterns in a neuronal network, whereas hidden memory is encoded using synaptic mechanisms, such as facilitation, which persist even when neurons fall silent. We have used a novel computational and optogenetic approach to investigate whether these same memory processes, hypothesized to support pattern recognition and short-term memory in vivo, exist in vitro. Electrophysiological activity was recorded from primary cultures of dissociated rat cortical neurons plated on multielectrode arrays. Cultures were transfected with ChannelRhodopsin-2 and optically stimulated using random dot stimuli. The pattern of neuronal activity resulting from this stimulation was analyzed using classification algorithms that enabled the identification of stimulus-specific memories. Fading memories for different stimuli, encoded in ongoing neural activity, persisted and could be distinguished from each other for as long as 1 s after stimulation was terminated. Hidden memories were detected by altered responses of neurons to additional stimulation, and this effect persisted longer than 1 s. Interestingly, network bursts seem to eliminate hidden memories. These results are similar to those that have been reported from similar experiments in vivo and demonstrate that mechanisms of information processing and short-term memory can be studied using cultured neuronal networks, thereby setting the stage for therapeutic applications using this platform.

  8. Biological neural networks as model systems for designing future parallel processing computers

    Science.gov (United States)

    Ross, Muriel D.

    1991-01-01

    One of the more interesting debates of the present day centers on whether human intelligence can be simulated by computer. The author works under the premise that neurons individually are not smart at all. Rather, they are physical units which are impinged upon continuously by other matter that influences the direction of voltage shifts across the units' membranes. It is only through the action of a great many neurons, billions in the case of the human nervous system, that intelligent behavior emerges. What is required to understand even the simplest neural system is painstaking analysis, bit by bit, of the architecture and the physiological functioning of its various parts. The biological neural networks studied, the vestibular utricular and saccular maculas of the inner ear, are among the simplest of the mammalian neural networks to understand and model. While there is still a long way to go to understand even this simplest of neural networks in sufficient detail for extrapolation to computers and robots, a start has been made. Moreover, the insights obtained and the technologies developed help advance the understanding of the more complex neural networks that underlie human intelligence.

  9. Spike Code Flow in Cultured Neuronal Networks

    Directory of Open Access Journals (Sweden)

    Shinichi Tamura

    2016-01-01

    Full Text Available We observed spike trains produced by one-shot electrical stimulation with 8 × 8 multielectrodes in cultured neuronal networks. Each electrode accepted spikes from several neurons. We extracted short codes from the spike trains and obtained a code spectrum with a nominal time accuracy of 1%. We then constructed code flow maps as movies of the electrode array to observe the flow of the codes “1101” and “1011,” which are typical pseudorandom sequences such as those we often encountered in the literature and in our experiments. They seemed to flow from one electrode to a neighboring one and maintained their shape to some extent. To quantify the flow, we calculated the “maximum cross-correlations” among neighboring electrodes to find the direction of maximum flow of the codes with lengths less than 8. Normalized maximum cross-correlations were almost constant irrespective of the code. Furthermore, when the spike trains were shuffled, either by interval order or across electrodes, the correlations became significantly smaller. Thus, the analysis suggested that local codes of approximately constant shape propagated and conveyed information across the network. Hence, the codes can serve as visible and trackable marks of propagating spike waves as well as a means of evaluating information flow in the neuronal network.

  10. Leaky Integrate and Fire Neuron by Charge-Discharge Dynamics in Floating-Body MOSFET.

    Science.gov (United States)

    Dutta, Sangya; Kumar, Vinay; Shukla, Aditya; Mohapatra, Nihar R; Ganguly, Udayan

    2017-08-15

    Neurobiology-inspired spiking neural networks (SNNs) enable efficient learning and recognition tasks. To achieve a large-scale network akin to biology, a power- and area-efficient electronic neuron is essential. Earlier, we demonstrated an LIF neuron based on a novel 4-terminal impact-ionization-based n+/p/n+ device with an extended gate (gated-INPN) through physics simulation. Excellent improvement in area and power compared to conventional analog circuit implementations was observed. In this paper, we propose and experimentally demonstrate a compact conventional 3-terminal partially depleted (PD) SOI-MOSFET (100 nm gate length) to replace the 4-terminal gated-INPN device. The impact ionization (II) induced floating-body effect in the SOI-MOSFET is used to capture LIF neuron behavior and to demonstrate the dependence of spiking frequency on input. MHz operation enables attractive hardware acceleration compared to biology. Overall, conventional PD-SOI-CMOS technology enables very-large-scale integration (VLSI), which is essential for biology-scale (~10^11 neuron) large neural networks.
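
    The leaky integrate-and-fire behavior the device emulates can be summarized in a few lines: the membrane charges toward an input-dependent level, fires when it crosses a threshold, and is discharged by a reset. The parameter values below are illustrative circuit-level stand-ins, not measurements of the reported device.

```python
# Minimal leaky integrate-and-fire neuron showing the charge-discharge cycle
# and the dependence of spiking frequency on input current. Parameter values
# are illustrative, not measurements of the reported device.
import numpy as np

def lif_rate(i_in, tau=20e-3, r_m=1e7, v_th=0.5, v_reset=0.0, dt=1e-4, t_sim=1.0):
    """Output spike rate (Hz) of an LIF neuron driven by a constant current (A)."""
    v, n_spikes = v_reset, 0
    for _ in range(int(t_sim / dt)):
        v += dt / tau * (-(v - v_reset) + r_m * i_in)   # leaky charging
        if v >= v_th:                                   # threshold crossing -> spike
            n_spikes += 1
            v = v_reset                                 # discharge (reset)
    return n_spikes / t_sim

for i_in in (40e-9, 60e-9, 80e-9, 100e-9):
    print(f"I = {i_in * 1e9:3.0f} nA -> {lif_rate(i_in):5.1f} Hz")
```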

  11. Complexity in neuronal noise depends on network interconnectivity.

    Science.gov (United States)

    Serletis, Demitre; Zalay, Osbert C; Valiante, Taufik A; Bardakjian, Berj L; Carlen, Peter L

    2011-06-01

    "Noise," or noise-like activity (NLA), defines background electrical membrane potential fluctuations at the cellular level of the nervous system, comprising an important aspect of brain dynamics. Using whole-cell voltage recordings from fast-spiking stratum oriens interneurons and stratum pyramidale neurons located in the CA3 region of the intact mouse hippocampus, we applied complexity measures from dynamical systems theory (i.e., 1/f(γ) noise and correlation dimension) and found evidence for complexity in neuronal NLA, ranging from high- to low-complexity dynamics. Importantly, these high- and low-complexity signal features were largely dependent on gap junction and chemical synaptic transmission. Progressive neuronal isolation from the surrounding local network via gap junction blockade (abolishing gap junction-dependent spikelets) and then chemical synaptic blockade (abolishing excitatory and inhibitory post-synaptic potentials), or the reverse order of these treatments, resulted in emergence of high-complexity NLA dynamics. Restoring local network interconnectivity via blockade washout resulted in resolution to low-complexity behavior. These results suggest that the observed increase in background NLA complexity is the result of reduced network interconnectivity, thereby highlighting the potential importance of the NLA signal to the study of network state transitions arising in normal and abnormal brain dynamics (such as in epilepsy, for example).

  12. Reverse engineering a mouse embryonic stem cell-specific transcriptional network reveals a new modulator of neuronal differentiation.

    Science.gov (United States)

    De Cegli, Rossella; Iacobacci, Simona; Flore, Gemma; Gambardella, Gennaro; Mao, Lei; Cutillo, Luisa; Lauria, Mario; Klose, Joachim; Illingworth, Elizabeth; Banfi, Sandro; di Bernardo, Diego

    2013-01-01

    Gene expression profiles can be used to infer previously unknown transcriptional regulatory interactions among thousands of genes, via systems biology 'reverse engineering' approaches. We 'reverse engineered' an embryonic stem (ES)-specific transcriptional network from 171 gene expression profiles, measured in ES cells, to identify master regulators of gene expression ('hubs'). We discovered that E130012A19Rik (E13), highly expressed in mouse ES cells as compared with differentiated cells, was a central 'hub' of the network. We demonstrated that E13 is a protein-coding gene implicated in regulating the commitment towards the different neuronal subtypes and glia cells. The overexpression and knock-down of E13 in ES cell lines, undergoing differentiation into neurons and glia cells, caused a strong up-regulation of the glutamatergic neuron marker Vglut2 and a strong down-regulation of the GABAergic neuron marker GAD65 and of the radial glia marker Blbp. We confirmed E13 expression in the cerebral cortex of adult mice and during development. By immuno-based affinity purification, we characterized protein partners of E13 involved in the Polycomb complex. Our results suggest a role of E13 in regulating the division between glutamatergic projection neurons and GABAergic interneurons and glia cells, possibly via epigenetically mediated transcriptional regulation.

  13. Neuronal network analyses: premises, promises and uncertainties

    OpenAIRE

    Parker, David

    2010-01-01

    Neuronal networks assemble the cellular components needed for sensory, motor and cognitive functions. Any rational intervention in the nervous system will thus require an understanding of network function. Obtaining this understanding is widely considered to be one of the major tasks facing neuroscience today. Network analyses have been performed for some years in relatively simple systems. In addition to the direct insights these systems have provided, they also illustrate some of the diffic...

  14. Automatic Generation of Connectivity for Large-Scale Neuronal Network Models through Structural Plasticity.

    Science.gov (United States)

    Diaz-Pier, Sandra; Naveau, Mikaël; Butz-Ostendorf, Markus; Morrison, Abigail

    2016-01-01

    With the emergence of new high-performance computing technology in the last decade, the simulation of large-scale neural networks able to reproduce the behavior and structure of the brain has finally become an achievable target of neuroscience. Due to the number of synaptic connections between neurons and the complexity of biological networks, most contemporary models have manually defined or static connectivity. However, it is expected that modeling the dynamic generation and deletion of links among neurons, locally and between different regions of the brain, is crucial to unravel important mechanisms associated with learning, memory and healing. Moreover, for many neural circuits that could potentially be modeled, activity data is more readily and reliably available than connectivity data. Thus, a framework that enables networks to wire themselves on the basis of specified activity targets can be of great value in specifying network models where connectivity data is incomplete or has large error margins. To address these issues, we present an implementation of a model of structural plasticity in the neural network simulator NEST. In this model, synapses consist of two parts, a pre- and a post-synaptic element. Synapses are created and deleted during the execution of the simulation following local homeostatic rules until a mean level of electrical activity is reached in the network. We assess the scalability of the implementation in order to evaluate its potential use in the self-generation of connectivity in large-scale networks. We show and discuss the results of simulations on simple two-population networks and on more complex models of the cortical microcircuit involving 8 populations and 4 layers using the new framework.
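
    The homeostatic structural plasticity rule described above can be caricatured in a few lines: each neuron grows synaptic elements while its activity is below a target and retracts them when above it, and free pre- and post-synaptic elements are paired at random into synapses. This is not the NEST interface; the activity proxy, growth rate, and bookkeeping below are deliberately simplified assumptions.

```python
# Toy homeostatic structural plasticity (not the NEST interface): neurons grow
# synaptic elements while their activity is below a target rate and retract
# them when above it; free elements are paired at random into synapses.
# The linear activity proxy and all constants are simplifying assumptions.
import numpy as np

rng = np.random.default_rng(4)
n, target, gain, growth = 50, 5.0, 0.05, 0.3
conn = np.zeros((n, n), dtype=int)        # conn[i, j]: number of synapses i -> j
free_ax = np.zeros(n)                     # free axonal (presynaptic) elements
free_de = np.zeros(n)                     # free dendritic (postsynaptic) elements

def rates(conn, baseline=2.0):
    # crude stand-in for network activity: rate grows with the number of inputs
    return baseline + gain * conn.sum(axis=0)

for step in range(300):
    drive = growth * (target - rates(conn))          # grow below / retract above target
    free_ax += drive
    free_de += drive
    # retraction: a strongly negative dendritic count deletes a random input synapse
    for j in np.where(free_de <= -1)[0]:
        inputs = np.where(conn[:, j] > 0)[0]
        if inputs.size:
            conn[rng.choice(inputs), j] -= 1
            free_de[j] += 1
    # formation: pair free axonal and dendritic elements at random into synapses
    while free_ax.max() >= 1 and free_de.max() >= 1:
        i = rng.choice(np.where(free_ax >= 1)[0])
        j = rng.choice(np.where(free_de >= 1)[0])
        conn[i, j] += 1
        free_ax[i] -= 1
        free_de[j] -= 1

print(f"mean rate {rates(conn).mean():.2f} (target {target}), synapses {conn.sum()}")
```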

  15. Reciprocal cholinergic and GABAergic modulation of the small ventrolateral pacemaker neurons of Drosophila's circadian clock neuron network.

    Science.gov (United States)

    Lelito, Katherine R; Shafer, Orie T

    2012-04-01

    The relatively simple clock neuron network of Drosophila is a valuable model system for the neuronal basis of circadian timekeeping. Unfortunately, many key neuronal classes of this network are inaccessible to electrophysiological analysis. We have therefore adopted the use of genetically encoded sensors to address the physiology of the fly's circadian clock network. Using genetically encoded Ca(2+) and cAMP sensors, we have investigated the physiological responses of two specific classes of clock neuron, the large and small ventrolateral neurons (l- and s-LN(v)s), to two neurotransmitters implicated in their modulation: acetylcholine (ACh) and γ-aminobutyric acid (GABA). Live imaging of l-LN(v) cAMP and Ca(2+) dynamics in response to cholinergic agonist and GABA application was well aligned with published electrophysiological data, indicating that our sensors were capable of faithfully reporting acute physiological responses to these transmitters within single adult clock neuron soma. We extended these live imaging methods to s-LN(v)s, critical neuronal pacemakers whose physiological properties in the adult brain are largely unknown. Our s-LN(v) experiments revealed the predicted excitatory responses to bath-applied cholinergic agonists and the predicted inhibitory effects of GABA and established that the antagonism of ACh and GABA extends to their effects on cAMP signaling. These data support recently published but physiologically untested models of s-LN(v) modulation and lead to the prediction that cholinergic and GABAergic inputs to s-LN(v)s will have opposing effects on the phase and/or period of the molecular clock within these critical pacemaker neurons.

  16. Endogenous fields enhanced stochastic resonance in a randomly coupled neuronal network

    International Nuclear Information System (INIS)

    Deng, Bin; Wang, Lin; Wang, Jiang; Wei, Xi-le; Yu, Hai-tao

    2014-01-01

    Highlights: • We study effects of endogenous fields on stochastic resonance in a neural network. • Stochastic resonance can be notably enhanced by endogenous field feedback. • Endogenous field feedback delay plays a vital role in stochastic resonance. • The parameters of the low-pass filter play a subtle role in SR. - Abstract: The endogenous field, evoked by structured neuronal network activity in vivo, is correlated with many vital neuronal processes. In this paper, the effects of endogenous fields on stochastic resonance (SR) in a randomly connected neuronal network are investigated. The network consists of excitatory and inhibitory neurons, and the axonal conduction delays between neurons are also considered. Numerical results elucidate that endogenous field feedback results in more rhythmic macroscopic activation of the network for a proper time delay and feedback coefficient. The response of the network to weak periodic stimulation can be notably enhanced by endogenous field feedback. Moreover, the endogenous field feedback delay plays a vital role in SR. We reveal that appropriately tuned delays of the feedback can either induce the enhancement of SR, appearing at every integer multiple of the weak input signal’s oscillation period, or the depression of SR, appearing at every integer multiple of half the weak input signal’s oscillation period for the same feedback coefficient. Interestingly, the parameters of the low-pass filter used to obtain the endogenous field feedback signal play a subtle role in SR.

  17. The Role of Adult-Born Neurons in the Constantly Changing Olfactory Bulb Network

    Directory of Open Access Journals (Sweden)

    Sarah Malvaut

    2016-01-01

    Full Text Available The adult mammalian brain is remarkably plastic and constantly undergoes structurofunctional modifications in response to environmental stimuli. In many regions plasticity is manifested by modifications in the efficacy of existing synaptic connections or synapse formation and elimination. In a few regions, however, plasticity is brought about by the addition of new neurons that integrate into established neuronal networks. This type of neuronal plasticity is particularly prominent in the olfactory bulb (OB) where thousands of neuronal progenitors are produced on a daily basis in the subventricular zone (SVZ) and migrate along the rostral migratory stream (RMS) towards the OB. In the OB, these neuronal precursors differentiate into local interneurons, mature, and functionally integrate into the bulbar network by establishing output synapses with principal neurons. Despite continuous progress, it is still not well understood how normal functioning of the OB is preserved in the constantly remodelling bulbar network and what role adult-born neurons play in odor behaviour. In this review we will discuss different levels of morphofunctional plasticity effected by adult-born neurons and their functional role in the adult OB and also highlight the possibility that different subpopulations of adult-born cells may fulfill distinct functions in the OB neuronal network and odor behaviour.

  18. The Role of Adult-Born Neurons in the Constantly Changing Olfactory Bulb Network

    Science.gov (United States)

    Malvaut, Sarah; Saghatelyan, Armen

    2016-01-01

    The adult mammalian brain is remarkably plastic and constantly undergoes structurofunctional modifications in response to environmental stimuli. In many regions plasticity is manifested by modifications in the efficacy of existing synaptic connections or synapse formation and elimination. In a few regions, however, plasticity is brought about by the addition of new neurons that integrate into established neuronal networks. This type of neuronal plasticity is particularly prominent in the olfactory bulb (OB) where thousands of neuronal progenitors are produced on a daily basis in the subventricular zone (SVZ) and migrate along the rostral migratory stream (RMS) towards the OB. In the OB, these neuronal precursors differentiate into local interneurons, mature, and functionally integrate into the bulbar network by establishing output synapses with principal neurons. Despite continuous progress, it is still not well understood how normal functioning of the OB is preserved in the constantly remodelling bulbar network and what role adult-born neurons play in odor behaviour. In this review we will discuss different levels of morphofunctional plasticity effected by adult-born neurons and their functional role in the adult OB and also highlight the possibility that different subpopulations of adult-born cells may fulfill distinct functions in the OB neuronal network and odor behaviour. PMID:26839709

  19. Optimization behavior of brainstem respiratory neurons. A cerebral neural network model.

    Science.gov (United States)

    Poon, C S

    1991-01-01

    A recent model of respiratory control suggested that the steady-state respiratory responses to CO2 and exercise may be governed by an optimal control law in the brainstem respiratory neurons. It was not certain, however, whether such complex optimization behavior could be accomplished by a realistic biological neural network. To test this hypothesis, we developed a hybrid computer-neural model in which the dynamics of the lung, brain and other tissue compartments were simulated on a digital computer. Mimicking the "controller" was a human subject who pedalled on a bicycle with varying speed (analog of ventilatory output) with a view to minimizing an analog signal of the total cost of breathing (chemical and mechanical), which was computed interactively and displayed on an oscilloscope. In this manner, the visuomotor cortex served as a proxy (homolog) of the brainstem respiratory neurons in the model. Results in 4 subjects showed a linear steady-state ventilatory CO2 response to arterial PCO2 during simulated CO2 inhalation and a nearly isocapnic steady-state response during simulated exercise. Thus, neural optimization is a plausible mechanism for respiratory control during exercise and can be achieved by a neural network with cognitive computational ability without the need for an exercise stimulus.

  20. How adaptation shapes spike rate oscillations in recurrent neuronal networks

    Directory of Open Access Journals (Sweden)

    Moritz eAugustin

    2013-02-01

    Full Text Available Neural mass signals from in-vivo recordings often show oscillations with frequencies ranging from <1 Hz to 100 Hz. Fast rhythmic activity in the beta and gamma range can be generated by network-based mechanisms such as recurrent synaptic excitation-inhibition loops. Slower oscillations might instead depend on neuronal adaptation currents whose timescales range from tens of milliseconds to seconds. Here we investigate how the dynamics of such adaptation currents contribute to spike rate oscillations and resonance properties in recurrent networks of excitatory and inhibitory neurons. Based on a network of sparsely coupled spiking model neurons with two types of adaptation current and conductance-based synapses with heterogeneous strengths and delays, we use a mean-field approach to analyze oscillatory network activity. For constant external input, we find that spike-triggered adaptation currents provide a mechanism to generate slow oscillations over a wide range of adaptation timescales as long as recurrent synaptic excitation is sufficiently strong. Faster rhythms occur when recurrent inhibition is slower than excitation and oscillation frequency increases with the strength of inhibition. Adaptation facilitates such network-based oscillations for fast synaptic inhibition and leads to decreased frequencies. For oscillatory external input, adaptation currents amplify a narrow band of frequencies and cause phase advances for low frequencies in addition to phase delays at higher frequencies. Our results therefore identify the different key roles of neuronal adaptation dynamics for rhythmogenesis and selective signal propagation in recurrent networks.
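
    The spike-triggered adaptation mechanism discussed above is easy to see in a single adaptive exponential integrate-and-fire neuron: each spike increments a slow adaptation current that lengthens subsequent interspike intervals. The sketch below uses textbook-style parameter values and forward Euler integration, not the network model or parameter set of the paper.

```python
# Single adaptive exponential integrate-and-fire (aEIF) neuron with a
# spike-triggered adaptation current; textbook-style parameters, not the
# network model of the paper. Adaptation lengthens successive ISIs.
import numpy as np

def aeif_spike_times(I_pA, t_sim=1000.0, dt=0.05):        # times in ms
    C, gL, EL, VT, DT = 281.0, 30.0, -70.6, -50.4, 2.0    # pF, nS, mV, mV, mV
    tau_w, a, b = 144.0, 4.0, 80.5                        # ms, nS, pA
    V_reset, V_spike = -70.6, 0.0
    V, w, spikes = EL, 0.0, []
    for step in range(int(t_sim / dt)):
        dV = (-gL * (V - EL) + gL * DT * np.exp((V - VT) / DT) - w + I_pA) / C
        dw = (a * (V - EL) - w) / tau_w
        V += dt * dV
        w += dt * dw
        if V >= V_spike:                                  # spike: reset and adapt
            spikes.append(step * dt)
            V = V_reset
            w += b                                        # spike-triggered increment
    return np.array(spikes)

isis = np.diff(aeif_spike_times(I_pA=1000.0))
print(f"first ISI {isis[0]:.1f} ms -> last ISI {isis[-1]:.1f} ms (spike-rate adaptation)")
```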

  1. Bi-directional astrocytic regulation of neuronal activity within a network

    Directory of Open Access Journals (Sweden)

    Susan Yu Gordleeva

    2012-11-01

    Full Text Available The concept of a tripartite synapse holds that astrocytes can affect both the pre- and postsynaptic compartments through the Ca2+-dependent release of gliotransmitters. Because astrocytic Ca2+ transients usually last for a few seconds, we assumed that astrocytic regulation of synaptic transmission may also occur on the scale of seconds. Here, we considered the basic physiological functions of tripartite synapses and investigated astrocytic regulation at the level of neural network activity. The firing dynamics of individual neurons in a spontaneously firing network were described by the Hodgkin-Huxley model. The neurons received excitatory synaptic input driven by Poisson spike trains with variable frequency. The mean field concentration of the released neurotransmitter was used to describe the presynaptic dynamics. The amplitudes of the excitatory postsynaptic currents (PSCs) obeyed the gamma distribution law. In our model, astrocytes depressed the presynaptic release and enhanced the postsynaptic currents. As a result, low frequency synaptic input was suppressed while high frequency input was amplified. The analysis of the neuron spiking frequency as an indicator of network activity revealed that tripartite synaptic transmission dramatically changed the local network operation compared to bipartite synapses. Specifically, the astrocytes supported homeostatic regulation of the network activity by increasing or decreasing firing of the neurons. Thus, astrocyte activation may modulate a transition of the neural network into a bistable regime of activity with two stable firing levels and spontaneous transitions between them.

  2. Diagnosis of cranial hemangioma: Comparison between logistic regression analysis and neuronal network

    International Nuclear Information System (INIS)

    Arana, E.; Marti-Bonmati, L.; Bautista, D.; Paredes, R.

    1998-01-01

    To study the utility of logistic regression and the neuronal network in the diagnosis of cranial hemangiomas. Fifteen patients presenting hemangiomas were selected from a total of 167 patients with cranial lesions. All were evaluated by plain radiography and computed tomography (CT). Nineteen variables in their medical records were reviewed. Logistic regression and neuronal network models were constructed and validated by the jackknife (leave-one-out) approach. The yields of the two models were compared by means of ROC curves, using the area under the curve as the parameter. Seven men and 8 women presented hemangiomas. The mean age of these patients was 38.4 ± 15.4 years (mean ± standard deviation). Logistic regression identified shape, soft-tissue mass and periosteal reaction as significant variables. The neuronal network lent more importance to the existence of an ossified matrix, a ruptured cortical vein and the mixed calcified-blastic (trabeculated) pattern. The neuronal network showed a greater yield than logistic regression (Az = 0.9409 ± 0.004 versus 0.7211 ± 0.075; p < 0.001). The neuronal network discloses hidden interactions among the variables, providing a higher yield in the characterization of cranial hemangiomas and constituting a medical diagnostic aid. (Author) 29 refs
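
    The study design, comparing logistic regression with a small neural network under leave-one-out (jackknife) validation and scoring both by the area under the ROC curve, can be sketched as follows. The data here are synthetic stand-ins for the 19 radiological variables; class balance and the network architecture are assumptions.

```python
# Sketch of the study design: logistic regression vs. a small neural network,
# validated leave-one-out (jackknife) and scored by ROC AUC. Synthetic data
# stand in for the 19 radiological variables; 15 of 167 cases are positive.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import LeaveOneOut
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(5)
n_cases, n_vars = 167, 19
y = np.zeros(n_cases, dtype=int)
y[:15] = 1                                   # 15 hemangiomas among 167 lesions
X = rng.standard_normal((n_cases, n_vars))
X[y == 1, :3] += 1.5                         # make three variables informative

def loo_auc(make_model):
    scores = np.empty(n_cases)
    for train, test in LeaveOneOut().split(X):
        model = make_model()
        model.fit(X[train], y[train])
        scores[test] = model.predict_proba(X[test])[:, 1]
    return roc_auc_score(y, scores)

print("logistic regression AUC:", round(loo_auc(lambda: LogisticRegression(max_iter=1000)), 3))
print("neural network AUC:    ", round(loo_auc(lambda: MLPClassifier(
    hidden_layer_sizes=(8,), max_iter=500, random_state=0)), 3))
```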

  3. Synaptic network activity induces neuronal differentiation of adult hippocampal precursor cells through BDNF signaling

    Directory of Open Access Journals (Sweden)

    Harish Babu

    2009-09-01

    Full Text Available Adult hippocampal neurogenesis is regulated by activity. But how do neural precursor cells in the hippocampus respond to surrounding network activity and translate increased neural activity into a developmental program? Here we show that long-term potentiation (LTP)-like synaptic activity within a cellular network of mature hippocampal neurons promotes neuronal differentiation of newly generated cells. In co-cultures of precursor cells with primary hippocampal neurons, LTP-like synaptic plasticity induced by addition of glycine in Mg2+-free media for 5 min produced synchronous network activity and subsequently increased synaptic strength between neurons. Furthermore, this synchronous network activity led to a significant increase in neuronal differentiation from the co-cultured neural precursor cells. When applied directly to precursor cells, glycine and Mg2+-free solution did not induce neuronal differentiation. Synaptic plasticity-induced neuronal differentiation of precursor cells was observed in the presence of GABAergic neurotransmission blockers but was dependent on NMDA-mediated Ca2+ influx. Most importantly, neuronal differentiation required the release of brain-derived neurotrophic factor (BDNF) from the underlying substrate hippocampal neurons as well as TrkB receptor phosphorylation in precursor cells. This suggests that activity-dependent stem cell differentiation within the hippocampal network is mediated via synaptically evoked BDNF signaling.

  4. Causal Interrogation of Neuronal Networks and Behavior through Virally Transduced Ivermectin Receptors.

    Science.gov (United States)

    Obenhaus, Horst A; Rozov, Andrei; Bertocchi, Ilaria; Tang, Wannan; Kirsch, Joachim; Betz, Heinrich; Sprengel, Rolf

    2016-01-01

    The causal interrogation of neuronal networks involved in specific behaviors requires the spatially and temporally controlled modulation of neuronal activity. For long-term manipulation of neuronal activity, chemogenetic tools provide a reasonable alternative to short-term optogenetic approaches. Here we show that virus-mediated gene transfer of the ivermectin (IVM)-activated glycine receptor mutant GlyRα1(AG) can be used for the selective and reversible silencing of specific neuronal networks in mice. In the striatum, dorsal hippocampus, and olfactory bulb, GlyRα1(AG) promoted IVM-dependent effects in representative behavioral assays. Moreover, GlyRα1(AG)-mediated silencing had a strong and reversible impact on neuronal ensemble activity and c-Fos activation in the olfactory bulb. Together our results demonstrate that long-term, reversible and re-inducible neuronal silencing via GlyRα1(AG) is a promising tool for the interrogation of network mechanisms underlying the control of behavior and memory formation.

  5. Degree of synchronization modulated by inhibitory neurons in clustered excitatory-inhibitory recurrent networks

    Science.gov (United States)

    Li, Huiyan; Sun, Xiaojuan; Xiao, Jinghua

    2018-01-01

    An excitatory-inhibitory recurrent neuronal network is established to numerically study the effect of inhibitory neurons on the degree of synchronization of neuronal systems. The results show that, as the number of inhibitory neurons and the coupling strength from inhibitory to excitatory neurons increase, inhibitory neurons can not only reduce the degree of synchronization when it is initially high in the excitatory population, but also enhance it when it is initially low. Meanwhile, inhibitory neurons can also help the neuronal network to maintain moderately synchronized states. In this paper, we call this effect the modulation effect of inhibitory neurons. It is further revealed that a ratio of excitatory to inhibitory neurons of nearly 4:1 is an economical choice for inhibitory neurons to realize this modulation effect.
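
    The degree of synchronization referred to above can be quantified, for example, by the ratio of the variance of the population-averaged signal to the mean variance of the individual signals, which approaches 1 for fully synchronous and 0 for asynchronous activity. The measure chosen and the surrogate membrane potentials below are illustrative assumptions, not the paper's exact index or simulation.

```python
# Population synchrony index: variance of the population-averaged signal
# divided by the mean variance of the individual signals (close to 1 for
# synchronous, close to 0 for asynchronous activity). Surrogate signals only.
import numpy as np

def synchrony_degree(V):
    """V: array of shape (n_neurons, n_timepoints)."""
    return V.mean(axis=0).var() / V.var(axis=1).mean()

rng = np.random.default_rng(10)
t = np.linspace(0.0, 10.0, 5000)
common = np.sin(2 * np.pi * 3.0 * t)             # shared 3 Hz drive
noise = rng.standard_normal((100, t.size))       # independent fluctuations
print("synchronous :", round(synchrony_degree(common + 0.3 * noise), 2))
print("asynchronous:", round(synchrony_degree(0.3 * noise), 2))
```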

  6. Probabilistic Inference in General Graphical Models through Sampling in Stochastic Networks of Spiking Neurons

    Science.gov (United States)

    Pecevski, Dejan; Buesing, Lars; Maass, Wolfgang

    2011-01-01

    An important open problem of computational neuroscience is the generic organization of computations in networks of neurons in the brain. We show here through rigorous theoretical analysis that inherent stochastic features of spiking neurons, in combination with simple nonlinear computational operations in specific network motifs and dendritic arbors, enable networks of spiking neurons to carry out probabilistic inference through sampling in general graphical models. In particular, it enables them to carry out probabilistic inference in Bayesian networks with converging arrows (“explaining away”) and with undirected loops, that occur in many real-world tasks. Ubiquitous stochastic features of networks of spiking neurons, such as trial-to-trial variability and spontaneous activity, are necessary ingredients of the underlying computational organization. We demonstrate through computer simulations that this approach can be scaled up to neural emulations of probabilistic inference in fairly large graphical models, yielding some of the most complex computations that have been carried out so far in networks of spiking neurons. PMID:22219717
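
    The kind of inference these networks are shown to perform can be illustrated on the classic explaining-away motif: two rare causes converging on one observed effect. The sketch below uses plain Gibbs sampling on a noisy-OR Bayesian network as a stand-in for the stochastic spiking dynamics analyzed in the paper; the priors, leak, and cause strengths are illustrative.

```python
# Explaining away by sampling: two rare causes A, B of an observed effect E
# (noisy-OR likelihood). Plain Gibbs sampling stands in for the stochastic
# spiking dynamics of the paper; all parameters are illustrative.
import numpy as np

rng = np.random.default_rng(6)
p_a, p_b, leak, w = 0.1, 0.1, 0.01, 0.9       # priors, leak, cause strengths

def p_effect(a, b):
    """Noisy-OR likelihood P(E = 1 | A = a, B = b)."""
    return 1.0 - (1.0 - leak) * (1.0 - w) ** a * (1.0 - w) ** b

def gibbs_posterior(n_samples=50_000, burn_in=1_000):
    a, b = 1, 1
    samples = np.zeros((n_samples, 2), dtype=int)
    for t in range(burn_in + n_samples):
        # resample A given (B, E=1), then B given (A, E=1)
        on, off = p_a * p_effect(1, b), (1 - p_a) * p_effect(0, b)
        a = int(rng.random() < on / (on + off))
        on, off = p_b * p_effect(a, 1), (1 - p_b) * p_effect(a, 0)
        b = int(rng.random() < on / (on + off))
        if t >= burn_in:
            samples[t - burn_in] = (a, b)
    return samples

s = gibbs_posterior()
print("P(A=1 | E=1)      ~", round(s[:, 0].mean(), 3))             # about 0.5
print("P(A=1 | E=1, B=1) ~", round(s[s[:, 1] == 1, 0].mean(), 3))  # explained away, ~0.1
```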

  7. Probabilistic inference in general graphical models through sampling in stochastic networks of spiking neurons.

    Directory of Open Access Journals (Sweden)

    Dejan Pecevski

    2011-12-01

    Full Text Available An important open problem of computational neuroscience is the generic organization of computations in networks of neurons in the brain. We show here through rigorous theoretical analysis that inherent stochastic features of spiking neurons, in combination with simple nonlinear computational operations in specific network motifs and dendritic arbors, enable networks of spiking neurons to carry out probabilistic inference through sampling in general graphical models. In particular, it enables them to carry out probabilistic inference in Bayesian networks with converging arrows ("explaining away") and with undirected loops, that occur in many real-world tasks. Ubiquitous stochastic features of networks of spiking neurons, such as trial-to-trial variability and spontaneous activity, are necessary ingredients of the underlying computational organization. We demonstrate through computer simulations that this approach can be scaled up to neural emulations of probabilistic inference in fairly large graphical models, yielding some of the most complex computations that have been carried out so far in networks of spiking neurons.

  8. Probabilistic inference in general graphical models through sampling in stochastic networks of spiking neurons.

    Science.gov (United States)

    Pecevski, Dejan; Buesing, Lars; Maass, Wolfgang

    2011-12-01

    An important open problem of computational neuroscience is the generic organization of computations in networks of neurons in the brain. We show here through rigorous theoretical analysis that inherent stochastic features of spiking neurons, in combination with simple nonlinear computational operations in specific network motifs and dendritic arbors, enable networks of spiking neurons to carry out probabilistic inference through sampling in general graphical models. In particular, it enables them to carry out probabilistic inference in Bayesian networks with converging arrows ("explaining away") and with undirected loops, that occur in many real-world tasks. Ubiquitous stochastic features of networks of spiking neurons, such as trial-to-trial variability and spontaneous activity, are necessary ingredients of the underlying computational organization. We demonstrate through computer simulations that this approach can be scaled up to neural emulations of probabilistic inference in fairly large graphical models, yielding some of the most complex computations that have been carried out so far in networks of spiking neurons.

  9. Extracting functionally feedforward networks from a population of spiking neurons.

    Science.gov (United States)

    Vincent, Kathleen; Tauskela, Joseph S; Thivierge, Jean-Philippe

    2012-01-01

    Neuronal avalanches are a ubiquitous form of activity characterized by spontaneous bursts whose size distribution follows a power-law. Recent theoretical models have replicated power-law avalanches by assuming the presence of functionally feedforward connections (FFCs) in the underlying dynamics of the system. Accordingly, avalanches are generated by a feedforward chain of activation that persists despite being embedded in a larger, massively recurrent circuit. However, it is unclear to what extent networks of living neurons that exhibit power-law avalanches rely on FFCs. Here, we employed a computational approach to reconstruct the functional connectivity of cultured cortical neurons plated on multielectrode arrays (MEAs) and investigated whether pharmacologically induced alterations in avalanche dynamics are accompanied by changes in FFCs. This approach begins by extracting a functional network of directed links between pairs of neurons, and then evaluates the strength of FFCs using Schur decomposition. In a first step, we examined the ability of this approach to extract FFCs from simulated spiking neurons. The strength of FFCs obtained in strictly feedforward networks diminished monotonically as links were gradually rewired at random. Next, we estimated the FFCs of spontaneously active cortical neuron cultures in the presence of either a control medium, a GABA(A) receptor antagonist (PTX), or an AMPA receptor antagonist combined with an NMDA receptor antagonist (APV/DNQX). The distribution of avalanche sizes in these cultures was modulated by this pharmacology, with a shallower power-law under PTX (due to the prominence of larger avalanches) and a steeper power-law under APV/DNQX (due to avalanches recruiting fewer neurons) relative to control cultures. The strength of FFCs increased in networks after application of PTX, consistent with an amplification of feedforward activity during avalanches. Conversely, FFCs decreased after application of APV
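
    The Schur-decomposition step can be sketched as follows: decompose the estimated connectivity matrix into a (quasi-)triangular form and take the relative weight of its strictly upper-triangular part as a rough feedforward-strength score. The normalization and the toy comparison between a feedforward chain and a symmetric recurrent matrix are illustrative assumptions, not the exact measure used in the study.

```python
# Feedforward strength via Schur decomposition: the strictly upper-triangular
# part of the Schur form carries the functionally feedforward (non-normal)
# structure; its relative norm is used here as a rough score. Normalization
# and the toy matrices are illustrative assumptions.
import numpy as np
from scipy.linalg import schur

def feedforward_strength(W):
    T, _ = schur(W, output="real")            # W = Z T Z^T with T quasi-triangular
    return np.linalg.norm(np.triu(T, k=1)) / np.linalg.norm(T)

rng = np.random.default_rng(7)
n = 40
chain = np.diag(np.ones(n - 1), k=1)          # purely feedforward chain
sym = rng.standard_normal((n, n)) / np.sqrt(n)
sym = (sym + sym.T) / 2                       # symmetric (normal) recurrent matrix
for name, W in [("feedforward chain  ", chain),
                ("symmetric recurrent", sym),
                ("chain + symmetric  ", chain + sym)]:
    print(f"{name} FFC strength ~ {feedforward_strength(W):.2f}")
```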

  10. Noise and Synchronization Analysis of the Cold-Receptor Neuronal Network Model

    Directory of Open Access Journals (Sweden)

    Ying Du

    2014-01-01

    Full Text Available This paper analyzes the dynamics of the cold-receptor neural network model. First, it examines noise effects on neuronal responses to stimuli in the model. From ISI plots, it is shown that there are considerable differences between purely deterministic simulations and noisy ones. The ISI-distance is used to measure the noise effects on spike trains quantitatively. It is found that spike trains observed in neural models can be more strongly affected by noise at different temperatures in some respects; meanwhile, spike trains show greater variability as the noise intensity increases. The synchronization of the neuronal network with different connectivity patterns is also studied. It is shown that complete synchronization is more difficult to achieve for chaotic and high-period patterns than for single-spike and low-period patterns. The neuronal network exhibits various patterns of firing synchronization as key parameters such as the coupling strength are varied. Different types of firing synchronization are diagnosed by a correlation coefficient and the ISI-distance method. The simulations show that the synchronization status of the neurons is related to the network connectivity patterns.
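
    The ISI-distance used above compares, at each point in time, the interspike intervals in which the two spike trains currently sit and averages the normalized mismatch; a sketch is given below. The sampling grid, surrogate spike trains, and jitter level are illustrative choices.

```python
# Sketch of the ISI-distance between two spike trains: at each time point,
# compare the interspike intervals the two trains are currently in and
# average the normalized mismatch. Grid resolution and surrogate trains are
# illustrative choices.
import numpy as np

def current_isi(spikes, t_grid):
    """Length of the interspike interval containing each time in t_grid."""
    idx = np.searchsorted(spikes, t_grid)
    valid = (idx > 0) & (idx < len(spikes))
    isi = np.full(t_grid.shape, np.nan)
    isi[valid] = spikes[idx[valid]] - spikes[idx[valid] - 1]
    return isi

def isi_distance(spikes_a, spikes_b, dt=0.001):
    t_grid = np.arange(max(spikes_a[0], spikes_b[0]),
                       min(spikes_a[-1], spikes_b[-1]), dt)
    ia, ib = current_isi(spikes_a, t_grid), current_isi(spikes_b, t_grid)
    ratio = np.where(ia <= ib, ia / ib, ib / ia)     # in (0, 1], 1 = identical ISIs
    return float(np.nanmean(1.0 - ratio))            # 0 identical, larger = dissimilar

rng = np.random.default_rng(8)
base = np.cumsum(rng.exponential(0.05, 400))         # ~20 Hz Poisson-like train
jittered = np.sort(base + rng.normal(0.0, 0.002, base.size))
independent = np.cumsum(rng.exponential(0.05, 400))
print("jittered copy     :", round(isi_distance(base, jittered), 3))
print("independent train :", round(isi_distance(base, independent), 3))
```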

  11. Introduction to Concepts in Artificial Neural Networks

    Science.gov (United States)

    Niebur, Dagmar

    1995-01-01

    This introduction to artificial neural networks summarizes some basic concepts of computational neuroscience and the resulting models of artificial neurons. The terminology of biological and artificial neurons, biological and machine learning and neural processing is introduced. The concepts of supervised and unsupervised learning are explained with examples from the power system area. Finally, a taxonomy of different types of neurons and different classes of artificial neural networks is presented.
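
    As a companion to the concepts summarized above, a minimal artificial neuron is sketched below: a weighted sum of inputs passed through a sigmoid activation, trained by a gradient-style update on a toy logical-AND task. The task, learning rate, and epoch count are arbitrary illustrative choices.

```python
# A minimal artificial neuron: weighted sum of inputs through a sigmoid
# activation, trained with a gradient-style update on a toy logical-AND task.
# Learning rate, epochs, and the task itself are arbitrary illustrative choices.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)   # inputs
y = np.array([0, 0, 0, 1], dtype=float)                       # logical AND targets

rng = np.random.default_rng(9)
w, b, lr = 0.1 * rng.standard_normal(2), 0.0, 1.0
for _ in range(5000):
    out = sigmoid(X @ w + b)                  # neuron output for all four patterns
    grad = (y - out) * out * (1.0 - out)      # gradient of squared error wrt net input
    w += lr * X.T @ grad                      # weight update
    b += lr * grad.sum()                      # bias update

print("learned outputs:", np.round(sigmoid(X @ w + b), 2))     # approaches [0 0 0 1]
```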

  12. Novel transcriptional networks regulated by CLOCK in human neurons.

    Science.gov (United States)

    Fontenot, Miles R; Berto, Stefano; Liu, Yuxiang; Werthmann, Gordon; Douglas, Connor; Usui, Noriyoshi; Gleason, Kelly; Tamminga, Carol A; Takahashi, Joseph S; Konopka, Genevieve

    2017-11-01

    The molecular mechanisms underlying human brain evolution are not fully understood; however, previous work suggested that expression of the transcription factor CLOCK in the human cortex might be relevant to human cognition and disease. In this study, we investigated this novel transcriptional role for CLOCK in human neurons by performing chromatin immunoprecipitation sequencing for endogenous CLOCK in adult neocortices and RNA sequencing following CLOCK knockdown in differentiated human neurons in vitro. These data suggested that CLOCK regulates the expression of genes involved in neuronal migration, and a functional assay showed that CLOCK knockdown increased neuronal migratory distance. Furthermore, dysregulation of CLOCK disrupts coexpressed networks of genes implicated in neuropsychiatric disorders, and the expression of these networks is driven by hub genes with human-specific patterns of expression. These data support a role for CLOCK-regulated transcriptional cascades involved in human brain evolution and function. © 2017 Fontenot et al.; Published by Cold Spring Harbor Laboratory Press.

  13. Human embryonic stem cell-derived neurons adopt and regulate the activity of an established neural network

    Science.gov (United States)

    Weick, Jason P.; Liu, Yan; Zhang, Su-Chun

    2011-01-01

    Whether hESC-derived neurons can fully integrate with and functionally regulate an existing neural network remains unknown. Here, we demonstrate that hESC-derived neurons receive unitary postsynaptic currents both in vitro and in vivo and adopt the rhythmic firing behavior of mouse cortical networks via synaptic integration. Optical stimulation of hESC-derived neurons expressing Channelrhodopsin-2 elicited both inhibitory and excitatory postsynaptic currents and triggered network bursting in mouse neurons. Furthermore, light stimulation of hESC-derived neurons transplanted to the hippocampus of adult mice triggered postsynaptic currents in host pyramidal neurons in acute slice preparations. Thus, hESC-derived neurons can participate in and modulate neural network activity through functional synaptic integration, suggesting they are capable of contributing to neural network information processing both in vitro and in vivo. PMID:22106298

  14. Mapping biological systems to network systems

    CERN Document Server

    Rathore, Heena

    2016-01-01

    The book presents the challenges inherent in the paradigm shift of network systems from static to highly dynamic distributed systems – it proposes solutions, drawn from the symbiotic nature of biological systems, for altering networking systems to adapt to these changes. The author discusses how biological systems – which have the inherent capabilities of evolving, self-organizing, self-repairing and flourishing with time – are inspiring researchers to take opportunities from the biology domain and map them onto the problems faced in the networking domain. The book revolves around the central idea of bio-inspired systems – it begins by exploring why biology and computer network research are such a natural match. This is followed by a broad overview of biologically inspired research in network systems, classified by the biological field that inspired each topic and by the area of networking in which that topic lies. Each case elucidates how biological concepts have been most successfully ...

  15. Effect of Transcranial Magnetic Stimulation on Neuronal Networks

    Science.gov (United States)

    Unsal, Ahmet; Hadimani, Ravi; Jiles, David

    2013-03-01

    The human brain contains around 100 billion nerve cells controlling our day-to-day activities. Consequently, brain disorders often result in impairments such as paralysis, loss of coordination and seizures. It has been said that 1 in 5 Americans suffer some diagnosable mental disorder. There is an urgent need to understand these disorders, prevent them and, if possible, develop permanent cures for them. As a result, a significant amount of research activity is being directed towards brain research. Transcranial Magnetic Stimulation (TMS) is a promising tool for diagnosing and treating brain disorders. It is a non-invasive treatment method that produces a current flow in the brain which excites the neurons. Even though TMS has been verified to have advantageous effects on various brain-related disorders, there have not been enough studies on the impact of TMS on cells. In this study, we are investigating the electrophysiological effects of TMS on one-dimensional neuronal cultures grown in a circular pathway. Electrical currents are produced in the neuronal networks depending on the directionality of the applied field. This aids in understanding how neuronal networks react under TMS treatment.

  16. Neuronal Networks on Nanocellulose Scaffolds.

    Science.gov (United States)

    Jonsson, Malin; Brackmann, Christian; Puchades, Maja; Brattås, Karoline; Ewing, Andrew; Gatenholm, Paul; Enejder, Annika

    2015-11-01

    Proliferation, integration, and neurite extension of PC12 cells, a widely used culture model for cholinergic neurons, were studied in nanocellulose scaffolds biosynthesized by Gluconacetobacter xylinus to allow a three-dimensional (3D) extension of neurites, better mimicking neuronal networks in tissue. The interaction with control scaffolds was compared with cationized nanocellulose (trimethyl ammonium betahydroxy propyl [TMAHP] cellulose) to investigate the impact of surface charges on the cell interaction mechanisms. Furthermore, coatings with extracellular matrix proteins (collagen, fibronectin, and laminin) were investigated to determine the importance of integrin-mediated cell attachment. Cell proliferation was evaluated by a cellular proliferation assay, while cell integration and neurite propagation were studied by simultaneous label-free coherent anti-Stokes Raman scattering and second-harmonic generation microscopy, providing 3D images of PC12 cells and the arrangement of nanocellulose fibrils, respectively. Cell attachment and proliferation were enhanced by TMAHP modification, but not by protein coating. Protein coating instead promoted active interaction between the cells and the scaffold, hence lateral cell migration and integration. Irrespective of surface modification, the deepest cell integration measured was one to two cell layers, whereas neurites have a capacity to integrate deeper than the cell bodies in the scaffold due to their fine dimensions and amoeba-like migration pattern. Neurites with lengths of >50 μm were observed, successfully connecting individual cells and cell clusters. In conclusion, TMAHP-modified nanocellulose scaffolds promote initial cellular scaffold adhesion, which combined with additional cell-scaffold treatments enables further formation of 3D neuronal networks.

  17. A Neuronal Network Model for Pitch Selectivity and Representation

    OpenAIRE

    Huang, Chengcheng; Rinzel, John

    2016-01-01

    Pitch is a perceptual correlate of periodicity. Sounds with distinct spectra can elicit the same pitch. Despite the importance of pitch perception, understanding the cellular mechanism of pitch perception is still a major challenge and a mechanistic model of pitch is lacking. A multi-stage neuronal network model is developed for pitch frequency estimation using biophysically-based, high-resolution coincidence detector neurons. The neuronal units respond only to highly coincident input among c...

  18. The influence of single neuron dynamics and network topology on time delay-induced multiple synchronous behaviors in inhibitory coupled network

    International Nuclear Information System (INIS)

    Zhao, Zhiguo; Gu, Huaguang

    2015-01-01

    Highlights: • Time delay-induced multiple synchronous behaviors were simulated in neuronal networks. • Multiple behaviors appear at time delays shorter than a bursting period of the neurons. • The more spikes per burst, the more synchronous regions of time delay. • From regular to random via small-world networks, the degree of synchrony becomes weaker. • An interpretation of the multiple behaviors and of the influence of the network is provided. - Abstract: Time delay-induced multiple synchronous behaviors are simulated in a neuronal network composed of many inhibitory neurons and appear at different time delays shorter than the period of the endogenous bursting of the individual neurons. This differs from previous investigations, wherein only one of the multiple synchronous behaviors appears at a time delay shorter than the period of endogenous firing and the others appear at time delays longer than that period. The bursting patterns of the synchronous behaviors are identified based on the dynamics of an individual neuron stimulated by a signal similar to the inhibitory coupling current, which is applied at the decaying branch of a spike and at a suitable phase within the quiescent state of the endogenous bursting. If a burst of the endogenous bursting contains more spikes, the synchronous behaviors appear in more regions of time delay. As the coupling strength increases, the multiple synchronous behaviors appear in a sequence, because different thresholds of coupling current or strength are needed to achieve the different synchronous behaviors. From regular, to small-world, and to random networks, the synchronization degree of the multiple synchronous behaviors becomes weaker, and synchronous bursting patterns with fewer spikes per burst disappear; this is interpreted through the difference in coupling current between neurons induced by the different node degrees and the high threshold of coupling current needed to achieve synchronization for the absent synchronous bursting patterns. The results of the influence of

  19. Inference of neuronal network spike dynamics and topology from calcium imaging data

    Directory of Open Access Journals (Sweden)

    Henry eLütcke

    2013-12-01

    Full Text Available Two-photon calcium imaging enables functional analysis of neuronal circuits by inferring action potential (AP) occurrence ('spike trains') from cellular fluorescence signals. It remains unclear how experimental parameters such as signal-to-noise ratio (SNR) and acquisition rate affect spike inference and whether additional information about network structure can be extracted. Here we present a simulation framework for quantitatively assessing how well spike dynamics and network topology can be inferred from noisy calcium imaging data. For simulated AP-evoked calcium transients in neocortical pyramidal cells, we analyzed the quality of spike inference as a function of SNR and data acquisition rate using a recently introduced peeling algorithm. Given experimentally attainable values of SNR and acquisition rate, neural spike trains could be reconstructed accurately and with up to millisecond precision. We then applied statistical neuronal network models to explore how remaining uncertainties in spike inference affect estimates of network connectivity and topological features of network organization. We define the experimental conditions suitable for inferring whether the network has a scale-free structure and determine how well hub neurons can be identified. Our findings provide a benchmark for future calcium imaging studies that aim to reliably infer neuronal network properties.
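    As a rough illustration of the idea described above (not the peeling algorithm used by the authors), the Python sketch below simulates AP-evoked calcium transients at an assumed SNR and acquisition rate and then greedily subtracts a single-AP template from the trace to recover spike times; all parameter values (kernel amplitude, decay time, noise level, firing rate) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate AP-evoked calcium fluorescence (all parameters hypothetical)
dt, T = 0.05, 60.0                       # 20 Hz acquisition, 60 s recording
t = np.arange(0.0, T, dt)
spikes = rng.random(t.size) < 0.2 * dt   # ~0.2 Hz Poisson-like spike train
amp, tau = 0.2, 1.0                      # dF/F per AP and decay constant (s)
kernel = amp * np.exp(-np.arange(0.0, 5 * tau, dt) / tau)
trace = np.convolve(spikes.astype(float), kernel)[: t.size]
noise_sd = amp / 5.0                     # sets the SNR of the recording
trace += rng.normal(0.0, noise_sd, t.size)

# Naive "peeling": repeatedly subtract the single-AP template at the largest
# residual peak until nothing above the detection threshold remains.
residual = trace.copy()
inferred = np.zeros(t.size, dtype=bool)
threshold = 3.0 * noise_sd
while residual.max() >= threshold:
    i = int(np.argmax(residual))
    inferred[i] = True
    n = min(kernel.size, t.size - i)
    residual[i : i + n] -= kernel[:n]

print("true spikes:", int(spikes.sum()), "inferred events:", int(inferred.sum()))
```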

  20. Network bursts in cortical neuronal cultures: 'noise - versus pacemaker'- driven neural network simulations

    NARCIS (Netherlands)

    Gritsun, T.; Stegenga, J.; le Feber, Jakob; Rutten, Wim

    2009-01-01

    In this paper we address the issue of spontaneous bursting activity in cortical neuronal cultures and explain what might cause this collective behavior using computer simulations of two different neural network models. While the common approach to activating a passive network is done by introducing

  1. Impact of Partial Time Delay on Temporal Dynamics of Watts-Strogatz Small-World Neuronal Networks

    Science.gov (United States)

    Yan, Hao; Sun, Xiaojuan

    2017-06-01

    In this paper, we mainly discuss the effects of partial time delay on the temporal dynamics of Watts-Strogatz (WS) small-world neuronal networks by controlling two parameters. One is the time delay τ and the other is the probability of partial time delay pdelay. Temporal dynamics of the WS small-world neuronal networks are discussed with the aid of temporal coherence and mean firing rate. The simulation results reveal that for small time delay τ, the probability pdelay can weaken temporal coherence and increase the mean firing rate of the neuronal networks, which indicates that it can enhance neuronal firing while degrading firing regularity. For large time delay τ, temporal coherence and mean firing rate do not change greatly with respect to pdelay. Time delay τ always has a great influence on both temporal coherence and mean firing rate, regardless of the value of pdelay. Moreover, from the analysis of spike trains and histograms of interspike intervals of neurons inside the neuronal networks, it is found that the effects of partial time delay on temporal coherence and mean firing rate could be the result of locking between the period of neuronal firing activities and the value of the time delay τ. In brief, partial time delay can have a great influence on the temporal dynamics of the neuronal networks.
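    As a minimal sketch of the setup (not the authors' full simulation), the snippet below builds a Watts-Strogatz small-world graph with networkx and marks a fraction pdelay of its links as delayed by τ; the network size, degree, rewiring probability and delay value are assumptions for illustration only.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(1)

# Hypothetical Watts-Strogatz parameters: N nodes, k nearest neighbours, rewiring p
N, k, p = 100, 4, 0.1
G = nx.watts_strogatz_graph(N, k, p, seed=1)

# Partial time delay: only a fraction p_delay of the links carries a delay tau (ms)
tau, p_delay = 5.0, 0.3
delays = {edge: (tau if rng.random() < p_delay else 0.0) for edge in G.edges()}

n_delayed = sum(1 for d in delays.values() if d > 0)
print(f"{G.number_of_edges()} links in total, {n_delayed} of them delayed by {tau} ms")
```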

  2. To Break or to Brake Neuronal Network Accelerated by Ammonium Ions?

    Directory of Open Access Journals (Sweden)

    Vladimir V Dynnik

    Full Text Available The aim of the present study was to investigate the effects of ammonium ions on in vitro neuronal network activity and to search for alternative methods of preventing acute ammonia neurotoxicity. Rat hippocampal neuron and astrocyte co-cultures in vitro, fluorescence microscopy and perforated patch clamp were used to monitor the changes in intracellular Ca2+ and membrane potential produced by ammonium ions and various modulators in the cells implicated in neural networks. Low concentrations of NH4Cl (0.1-4 mM) produce short temporal effects on network activity. Application of 5-8 mM NH4Cl: invariably transforms diverse network firing regimens into identical burst patterns, characterized by substantial neuronal membrane depolarization at the plateau phase of the potential and high-amplitude Ca2+-oscillations; raises the frequency and the period-averaged Ca2+ level of oscillations in all cells implicated in the network; results in the appearance of a group of «run out» cells with high intracellular Ca2+ and steadily diminishing oscillation amplitudes; and increases astrocyte Ca2+-signalling, characterized by the appearance of groups of cells with increased intracellular Ca2+ levels and/or chaotic Ca2+-oscillations. The accelerated network activity can be suppressed by blockade of NMDA or AMPA/kainate receptors or by overactivation of AMPA/kainate receptors. Ammonium still activates neuronal firing in the presence of the GABA(A) receptor antagonist bicuculline, indicating that the «disinhibition phenomenon» is not implicated in the mechanism of network acceleration. Network activity can also be slowed down by glycine, agonists of metabotropic inhibitory receptors, betaine, L-carnitine, L-arginine, etc. The obtained results demonstrate that ammonium ions accelerate neuronal network firing via ionotropic glutamate receptors while the activities of a group of inhibitory ionotropic and metabotropic receptors are preserved. This may mean that ammonia neurotoxicity might be prevented by

  3. Clustering promotes switching dynamics in networks of noisy neurons

    Science.gov (United States)

    Franović, Igor; Klinshov, Vladimir

    2018-02-01

    Macroscopic variability is an emergent property of neural networks, typically manifested in spontaneous switching between the episodes of elevated neuronal activity and the quiescent episodes. We investigate the conditions that facilitate switching dynamics, focusing on the interplay between the different sources of noise and heterogeneity of the network topology. We consider clustered networks of rate-based neurons subjected to external and intrinsic noise and derive an effective model where the network dynamics is described by a set of coupled second-order stochastic mean-field systems representing each of the clusters. The model provides an insight into the different contributions to effective macroscopic noise and qualitatively indicates the parameter domains where switching dynamics may occur. By analyzing the mean-field model in the thermodynamic limit, we demonstrate that clustering promotes multistability, which gives rise to switching dynamics in a considerably wider parameter region compared to the case of a non-clustered network with sparse random connection topology.

  4. Comparing biological networks via graph compression

    Directory of Open Access Journals (Sweden)

    Hayashida Morihiro

    2010-09-01

    Full Text Available Abstract Background Comparison of various kinds of biological data is one of the main problems in bioinformatics and systems biology. Data compression methods have been applied to comparison of large sequence data and protein structure data. Since it is still difficult to compare global structures of large biological networks, it is reasonable to try to apply data compression methods to comparison of biological networks. In existing compression methods, the uniqueness of compression results is not guaranteed because there is some ambiguity in selection of overlapping edges. Results This paper proposes novel efficient methods, CompressEdge and CompressVertices, for comparing large biological networks. In the proposed methods, an original network structure is compressed by iteratively contracting identical edges and sets of connected edges. Then, the similarity of two networks is measured by a compression ratio of the concatenated networks. The proposed methods are applied to comparison of metabolic networks of several organisms, H. sapiens, M. musculus, A. thaliana, D. melanogaster, C. elegans, E. coli, S. cerevisiae, and B. subtilis, and are compared with an existing method. These results suggest that our methods can efficiently measure the similarities between metabolic networks. Conclusions Our proposed algorithms, which compress node-labeled networks, are useful for measuring the similarity of large biological networks.
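    The details of CompressEdge and CompressVertices are in the paper; as a loose illustration of the general idea of compression-based network similarity (not the authors' algorithms), the sketch below serializes two node-labelled edge lists, compresses them with zlib, and scores similarity by how much the concatenation compresses, in the spirit of a normalized compression distance. The toy networks and node labels are invented for the example.

```python
import zlib

def edge_bytes(network):
    """Serialize a node-labelled edge list in a canonical order."""
    return "\n".join(sorted(f"{a}--{b}" for a, b in network)).encode()

def compression_similarity(net1, net2):
    """Smaller values indicate more shared structure (NCD-style score)."""
    c1 = len(zlib.compress(edge_bytes(net1)))
    c2 = len(zlib.compress(edge_bytes(net2)))
    c12 = len(zlib.compress(edge_bytes(net1 + net2)))
    return (c12 - min(c1, c2)) / max(c1, c2)

# Toy metabolic-style networks with labelled nodes (purely illustrative)
netA = [("glucose", "g6p"), ("g6p", "f6p"), ("f6p", "fbp")]
netB = [("glucose", "g6p"), ("g6p", "f6p"), ("f6p", "e4p")]
print(compression_similarity(netA, netB))
```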

  5. Distribution of spinal neuronal networks controlling forward and backward locomotion.

    Science.gov (United States)

    Merkulyeva, Natalia; Veshchitskii, Aleksandr; Gorsky, Oleg; Pavlova, Natalia; Zelenin, Pavel V; Gerasimenko, Yury; Deliagina, Tatiana G; Musienko, Pavel

    2018-04-20

    Higher vertebrates, including humans, are capable not only of forward (FW) locomotion but also of walking in other directions relative to the body axis [backward (BW), sideways, etc.]. While the neural mechanisms responsible for controlling FW locomotion have been studied in considerable detail, the mechanisms controlling steps in other directions are mostly unknown. The aim of the present study was to investigate the distribution of spinal neuronal networks controlling FW and BW locomotion. First, we applied electrical epidural stimulation (ES) to different segments of the spinal cord from L2 to S2 to reveal zones triggering FW and BW locomotion in decerebrate cats of either sex. Second, to determine the location of spinal neurons activated during FW and BW locomotion, we used c-fos immunostaining. We found that the neuronal networks responsible for FW locomotion were distributed broadly in the lumbosacral spinal cord and could be activated by ES of any segment from L3 to S2. By contrast, networks generating BW locomotion were activated by ES of a limited zone from the caudal part of L5 to the caudal part of L7. In the intermediate part of the gray matter within this zone, a significantly higher number of c-fos-positive interneurons was revealed in BW-stepping cats compared with FW-stepping cats. We suggest that this region of the spinal cord contains the network that determines the BW direction of locomotion. Significance Statement Sequential and single steps in various directions relative to the body axis [forward (FW), backward (BW), sideways, etc.] are used during locomotion and to correct for perturbations, respectively. The mechanisms controlling step direction are unknown. In the present study, for the first time we compared the distributions of spinal neuronal networks controlling FW and BW locomotion. Using a marker to visualize active neurons, we demonstrated that in the intermediate part of the gray matter within L6 and L7 spinal segments

  6. Mechanisms of Winner-Take-All and Group Selection in Neuronal Spiking Networks.

    Science.gov (United States)

    Chen, Yanqing

    2017-01-01

    A major function of central nervous systems is to discriminate different categories or types of sensory input. Neuronal networks accomplish such tasks by learning different sensory maps at several stages of the neural hierarchy, such that different neurons fire selectively to reflect different internal or external patterns and states. The exact mechanisms of such map formation processes in the brain are not completely understood. Here we study the mechanism by which a simple recurrent/reentrant neuronal network accomplishes group selection and discrimination of different inputs in order to generate sensory maps. We describe the conditions and mechanism of the transition from a rhythmic epileptic state (in which all neurons fire synchronously and indiscriminately to any input) to a winner-take-all state in which only a subset of neurons fires for a specific input. We prove an analytic condition under which a stable bump solution and a winner-take-all state can emerge from the local recurrent excitation-inhibition interactions in a three-layer spiking network with distinct excitatory and inhibitory populations, and demonstrate the importance of the surround inhibitory connection topology for the stability of dynamic patterns in spiking neural networks.
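    The paper analyzes a three-layer spiking network; the rate-based sketch below is a much simplified stand-in (not the authors' model) that shows the basic winner-take-all mechanism of self-excitation combined with lateral inhibition: the unit receiving the largest input suppresses the others. The weights and time step are arbitrary illustrative values.

```python
import numpy as np

def winner_take_all(inputs, w_exc=0.5, w_inh=1.0, steps=200, dt=0.1):
    """Minimal rate-based winner-take-all: each unit excites itself and inhibits
    all others, so the unit with the largest input ends up suppressing the rest."""
    r = np.zeros_like(inputs, dtype=float)
    for _ in range(steps):
        recurrent = w_exc * r - w_inh * (r.sum() - r)   # self-excitation minus lateral inhibition
        drive = inputs + recurrent
        r += dt * (-r + np.maximum(drive, 0.0))          # rectified-linear rate dynamics
    return r

# The middle unit receives the slightly larger input and wins the competition.
print(winner_take_all(np.array([1.0, 1.1, 0.9])))
```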

  7. A combined Bodian-Nissl stain for improved network analysis in neuronal cell culture.

    Science.gov (United States)

    Hightower, M; Gross, G W

    1985-11-01

    Bodian and Nissl procedures were combined to stain dissociated mouse spinal cord cells cultured on coverslips. The Bodian technique stains fine neuronal processes in great detail as well as an intracellular fibrillar network concentrated around the nucleus and in proximal neurites. The Nissl stain clearly delimits neuronal cytoplasm in somata and in large dendrites. A combination of these techniques allows the simultaneous depiction of neuronal perikarya and all afferent and efferent processes. Costaining with little background staining by either procedure suggests high specificity for neurons. This procedure could be exploited for routine network analysis of cultured neurons.

  8. Modular analysis of biological networks.

    Science.gov (United States)

    Kaltenbach, Hans-Michael; Stelling, Jörg

    2012-01-01

    The analysis of complex biological networks has traditionally relied on decomposition into smaller, semi-autonomous units such as individual signaling pathways. With the increased scope of systems biology (models), rational approaches to modularization have become an important topic. With increasing acceptance of de facto modularity in biology, widely different definitions of what constitutes a module have sparked controversies. Here, we therefore review prominent classes of modular approaches based on formal network representations. Despite some promising research directions, several important theoretical challenges remain open on the way to formal, function-centered modular decompositions for dynamic biological networks.

  9. Oscillations in the bistable regime of neuronal networks.

    Science.gov (United States)

    Roxin, Alex; Compte, Albert

    2016-07-01

    Bistability between attracting fixed points in neuronal networks has been hypothesized to underlie persistent activity observed in several cortical areas during working memory tasks. In network models this kind of bistability arises due to strong recurrent excitation, sufficient to generate a state of high activity created in a saddle-node (SN) bifurcation. On the other hand, canonical network models of excitatory and inhibitory neurons (E-I networks) robustly produce oscillatory states via a Hopf (H) bifurcation due to the E-I loop. This mechanism for generating oscillations has been invoked to explain the emergence of brain rhythms in the β to γ bands. Although both bistability and oscillatory activity have been intensively studied in network models, there has not been much focus on the coincidence of the two. Here we show that when oscillations emerge in E-I networks in the bistable regime, their phenomenology can be explained to a large extent by considering coincident SN and H bifurcations, known as a codimension two Takens-Bogdanov bifurcation. In particular, we find that such oscillations are not composed of a stable limit cycle, but rather are due to noise-driven oscillatory fluctuations. Furthermore, oscillations in the bistable regime can, in principle, have arbitrarily low frequency.
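    As a rough, hedged sketch of the kind of E-I rate model discussed above (a generic Wilson-Cowan-style system, not the authors' network), the code below integrates coupled excitatory and inhibitory populations; the coupling and gain values are textbook-style numbers chosen for illustration, and depending on the external drive P the system settles to a fixed point or shows sustained E-I oscillations.

```python
import numpy as np

def sigmoid(x, a, theta):
    """Sigmoidal population response function."""
    return 1.0 / (1.0 + np.exp(-a * (x - theta)))

def simulate_ei(P, Q=0.0, T=100.0, dt=0.01):
    """Wilson-Cowan-style E-I rate model; parameter values are illustrative only."""
    c_ee, c_ei, c_ie, c_ii = 16.0, 12.0, 15.0, 3.0
    tau_e, tau_i = 1.0, 1.0
    n = int(T / dt)
    E, I = np.zeros(n), np.zeros(n)
    for k in range(n - 1):
        dE = (-E[k] + sigmoid(c_ee * E[k] - c_ei * I[k] + P, a=1.3, theta=4.0)) / tau_e
        dI = (-I[k] + sigmoid(c_ie * E[k] - c_ii * I[k] + Q, a=2.0, theta=3.7)) / tau_i
        E[k + 1] = E[k] + dt * dE
        I[k + 1] = I[k] + dt * dI
    return E, I

# Sweep the external drive P and report the range of E over the second half of
# the run: a narrow range indicates a fixed point, a wide range an oscillation.
for P in (0.5, 1.25, 2.5):
    E, _ = simulate_ei(P)
    tail = E[len(E) // 2 :]
    print(f"P={P}: E range [{tail.min():.3f}, {tail.max():.3f}]")
```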

  10. Modulation of neuronal network activity with ghrelin

    NARCIS (Netherlands)

    Stoyanova, Irina; Rutten, Wim; le Feber, Jakob

    2012-01-01

    Ghrelin is a neuropeptide regulating multiple physiological processes, including high brain functions such as learning and memory formation. However, the effect of ghrelin on network activity patterns and developments has not been studied yet. Therefore, we used dissociated cortical neurons plated

  11. Toward Petascale Biologically Plausible Neural Networks

    Science.gov (United States)

    Long, Lyle

    This talk will describe an approach to achieving petascale neural networks. Artificial intelligence has been oversold for many decades. Computers in the beginning could only do about 16,000 operations per second. Computer processing power, however, has been doubling every two years thanks to Moore's law, and growing even faster due to massively parallel architectures. Finally, 60 years after the first AI conference we have computers on the order of the performance of the human brain (10^16 operations per second). The main issues now are algorithms, software, and learning. We have excellent models of neurons, such as the Hodgkin-Huxley model, but we do not know how the human neurons are wired together. With careful attention to efficient parallel computing, event-driven programming, table lookups, and memory minimization, massive scale simulations can be performed. The code that will be described was written in C++ and uses the Message Passing Interface (MPI). It uses the full Hodgkin-Huxley neuron model, not a simplified model. It also allows arbitrary network structures (deep, recurrent, convolutional, all-to-all, etc.). The code is scalable, and has, so far, been tested on up to 2,048 processor cores using 10^7 neurons and 10^9 synapses.
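    The talk centers on simulating full Hodgkin-Huxley neurons at scale; as a point of reference only, a single HH neuron with the standard squid-axon parameters can be integrated in a few lines (forward Euler here for brevity; large-scale codes use more careful integrators and MPI communication, which this sketch does not attempt).

```python
import numpy as np

# Standard Hodgkin-Huxley parameters (mV, ms, mS/cm^2, uA/cm^2)
C, g_Na, g_K, g_L = 1.0, 120.0, 36.0, 0.3
E_Na, E_K, E_L = 50.0, -77.0, -54.4

def a_m(V): return 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
def b_m(V): return 4.0 * np.exp(-(V + 65.0) / 18.0)
def a_h(V): return 0.07 * np.exp(-(V + 65.0) / 20.0)
def b_h(V): return 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
def a_n(V): return 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
def b_n(V): return 0.125 * np.exp(-(V + 65.0) / 80.0)

dt, T, I_ext = 0.01, 100.0, 10.0        # time step (ms), duration (ms), input current
V, m, h, n = -65.0, 0.05, 0.6, 0.32     # resting state
spikes, above = 0, False
for _ in range(int(T / dt)):
    I_ion = (g_Na * m**3 * h * (V - E_Na) + g_K * n**4 * (V - E_K) + g_L * (V - E_L))
    V += dt * (I_ext - I_ion) / C
    m += dt * (a_m(V) * (1.0 - m) - b_m(V) * m)
    h += dt * (a_h(V) * (1.0 - h) - b_h(V) * h)
    n += dt * (a_n(V) * (1.0 - n) - b_n(V) * n)
    if V > 0.0 and not above:           # count upward threshold crossings as spikes
        spikes += 1
    above = V > 0.0
print(f"{spikes} spikes in {T} ms")
```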

  12. Efficient transmission of subthreshold signals in complex networks of spiking neurons.

    Science.gov (United States)

    Torres, Joaquin J; Elices, Irene; Marro, J

    2015-01-01

    We investigate the efficient transmission and processing of weak, subthreshold signals in a realistic neural medium in the presence of different levels of underlying noise. Assuming Hebbian weights for maximal synaptic conductances--which naturally balance the network with excitatory and inhibitory synapses--and considering short-term synaptic plasticity affecting such conductances, we found different dynamic phases in the system. These include a memory phase where populations of neurons remain synchronized, an oscillatory phase where transitions between different synchronized populations of neurons appear, and an asynchronous or noisy phase. When a weak stimulus is applied to each neuron and the level of noise in the medium is increased, we found efficient transmission of such stimuli around the transition and critical points separating the different phases, at well-defined levels of stochasticity in the system. We proved that this intriguing phenomenon is quite robust, as it occurs in different situations, including several types of synaptic plasticity, different types and numbers of stored patterns, and diverse network topologies, namely diluted networks and complex topologies such as scale-free and small-world networks. We conclude that the robustness of the phenomenon in different realistic scenarios, including spiking neurons, short-term synaptic plasticity and complex network topologies, makes it very likely that it could also occur in actual neural systems, as recent psychophysical experiments suggest.
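    As a toy, hedged illustration of the setting (a single leaky integrate-and-fire neuron rather than the paper's network with Hebbian weights and short-term plasticity), the sketch below drives the neuron with a subthreshold sinusoid plus white noise and reports the spike count and how strongly the spikes lock to the stimulus; without noise the signal is never transmitted, while moderate noise lets it through. All parameter values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def lif_with_subthreshold_signal(noise_sd, f_sig=5.0, A=0.2, I0=0.8,
                                 tau=0.02, v_th=1.0, T=20.0, dt=1e-3):
    """LIF neuron driven by a subthreshold sinusoid plus white noise.
    Returns spike count and vector strength (phase locking to the signal)."""
    t = np.arange(0.0, T, dt)
    v, spike_times = 0.0, []
    for tk in t:
        I = I0 + A * np.sin(2.0 * np.pi * f_sig * tk)
        v += dt * (-v + I) / tau + noise_sd * np.sqrt(dt) * rng.normal() / tau
        if v >= v_th:
            v = 0.0
            spike_times.append(tk)
    if not spike_times:
        return 0, 0.0
    phases = np.exp(2j * np.pi * f_sig * np.array(spike_times))
    return len(spike_times), float(np.abs(phases.mean()))

for sd in (0.0, 0.01, 0.02, 0.05, 0.1):
    count, vs = lif_with_subthreshold_signal(sd)
    print(f"noise {sd:>5}: {count:5d} spikes, vector strength {vs:.2f}")
```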

  13. Efficient transmission of subthreshold signals in complex networks of spiking neurons.

    Directory of Open Access Journals (Sweden)

    Joaquin J Torres

    Full Text Available We investigate the efficient transmission and processing of weak, subthreshold signals in a realistic neural medium in the presence of different levels of underlying noise. Assuming Hebbian weights for maximal synaptic conductances--which naturally balance the network with excitatory and inhibitory synapses--and considering short-term synaptic plasticity affecting such conductances, we found different dynamic phases in the system. These include a memory phase where populations of neurons remain synchronized, an oscillatory phase where transitions between different synchronized populations of neurons appear, and an asynchronous or noisy phase. When a weak stimulus is applied to each neuron and the level of noise in the medium is increased, we found efficient transmission of such stimuli around the transition and critical points separating the different phases, at well-defined levels of stochasticity in the system. We proved that this intriguing phenomenon is quite robust, as it occurs in different situations, including several types of synaptic plasticity, different types and numbers of stored patterns, and diverse network topologies, namely diluted networks and complex topologies such as scale-free and small-world networks. We conclude that the robustness of the phenomenon in different realistic scenarios, including spiking neurons, short-term synaptic plasticity and complex network topologies, makes it very likely that it could also occur in actual neural systems, as recent psychophysical experiments suggest.

  14. Niche-dependent development of functional neuronal networks from embryonic stem cell-derived neural populations

    Directory of Open Access Journals (Sweden)

    Siebler Mario

    2009-08-01

    Full Text Available Abstract Background The present work was performed to investigate the ability of two different embryonic stem (ES) cell-derived neural precursor populations to generate functional neuronal networks in vitro. The first ES cell-derived neural precursor population was cultivated as free-floating neural aggregates, which are known to form a developmental niche comprising different types of neural cells, including neural precursor cells (NPCs), progenitor cells and even further matured cells. This niche by itself provides a variety of different growth factors and extracellular matrix proteins that influence the proliferation and differentiation of neural precursor and progenitor cells. The second population was cultivated adherently in monolayer cultures to control the extracellular environment most stringently. This population comprises highly homogeneous NPCs which are supposed to represent an attractive way to provide well-defined neuronal progeny. However, the ability of these different ES cell-derived immature neural cell populations to generate functional neuronal networks has not been assessed so far. Results While both precursor populations were shown to differentiate into sufficient quantities of mature NeuN+ neurons that also express GABA or vesicular-glutamate-transporter-2 (vGlut2), only aggregate-derived neuronal populations exhibited a synchronously oscillating network activity 2–4 weeks after initiating the differentiation, as detected by microelectrode array technology. Neurons derived from homogeneous NPCs within monolayer cultures merely showed uncorrelated spiking activity, even when differentiated for up to 12 weeks. We demonstrated that these neurons exhibited sparsely ramified neurites and an embryonic vGlut2 distribution suggesting an inhibited terminal neuronal maturation. In comparison, neurons derived from heterogeneous populations within neural aggregates appeared as fully mature with a dense neurite network and punctuated

  15. A Neuron- and a Synapse Chip for Artificial Neural Networks

    DEFF Research Database (Denmark)

    Lansner, John; Lehmann, Torsten

    1992-01-01

    A cascadable, analog, CMOS chip set has been developed for hardware implementations of artificial neural networks (ANNs): I) a neuron chip containing an array of neurons with hyperbolic tangent activation functions and adjustable gains, and II) a synapse chip (or a matrix-vector multiplier) where...

  16. Memristor-based neural networks: Synaptic versus neuronal stochasticity

    KAUST Repository

    Naous, Rawan

    2016-11-02

    In neuromorphic circuits, stochasticity in the cortex can be mapped onto the synaptic or neuronal components. The hardware emulation of these stochastic neural networks is currently being extensively studied using resistive memories or memristors. The ionic process involved in the underlying switching behavior of the memristive elements is considered the main source of stochasticity in their operation. Building on this inherent variability, the memristor is incorporated into abstract models of stochastic neurons and synapses. Two approaches to stochastic neural networks are investigated. Aside from the size and area perspective, the main points of comparison are the impact on system performance, in terms of accuracy, recognition rates, and learning, between these two approaches, and where the memristor would fall into place in each.

  17. Chimera-like states in a neuronal network model of the cat brain

    Science.gov (United States)

    Santos, M. S.; Szezech, J. D.; Borges, F. S.; Iarosz, K. C.; Caldas, I. L.; Batista, A. M.; Viana, R. L.; Kurths, J.

    2017-08-01

    Neuronal systems have been modeled by complex networks at different levels of description. Recently, it has been verified that networks can simultaneously exhibit one coherent and one incoherent domain, a state known as a chimera state. In this work, we study the existence of chimera states in a network whose connectivity matrix is based on the cat cerebral cortex. The cerebral cortex of the cat can be separated into 65 cortical areas organised into four cognitive regions: visual, auditory, somatosensory-motor and frontolimbic. We consider a network where the local dynamics is given by the Hindmarsh-Rose model. The Hindmarsh-Rose equations are a well-known model of neuronal activity that has been used to simulate the membrane potential of neurons. Here, we analyse under which conditions chimera states are present, as well as the effects induced on them by the coupling intensity. We observe the existence of chimera states in which the incoherent domain can be composed of desynchronised spikes or desynchronised bursts. Moreover, we find that chimera states with desynchronised bursts are more robust to neuronal noise than those with desynchronised spikes.
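    The local dynamics in the cat-cortex network above is the Hindmarsh-Rose model; as a reference sketch (the coupled 65-area network itself is not reproduced here), the code below integrates a single HR neuron with the standard parameter set, where an external current around I ≈ 3 produces the bursting regime referred to in the abstract.

```python
import numpy as np

def hindmarsh_rose(I=3.2, T=2000.0, dt=0.01, x0=-1.6, y0=-10.0, z0=2.0):
    """Single Hindmarsh-Rose neuron with the standard parameters
    a=1, b=3, c=1, d=5, r=0.006, s=4, x_R=-1.6 (forward Euler integration)."""
    a, b, c, d, r, s, xR = 1.0, 3.0, 1.0, 5.0, 0.006, 4.0, -1.6
    n = int(T / dt)
    x, y, z = x0, y0, z0
    xs = np.empty(n)
    for k in range(n):
        dx = y - a * x**3 + b * x**2 - z + I
        dy = c - d * x**2 - y
        dz = r * (s * (x - xR) - z)
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        xs[k] = x                      # membrane-potential-like variable
    return xs

xs = hindmarsh_rose()
print("membrane variable range:", float(xs.min()), float(xs.max()))
```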

  18. Impact of delays on the synchronization transitions of modular neuronal networks with hybrid synapses

    Science.gov (United States)

    Liu, Chen; Wang, Jiang; Yu, Haitao; Deng, Bin; Wei, Xile; Tsang, Kaiming; Chan, Wailok

    2013-09-01

    The combined effects of the information transmission delay and the ratio of electrical to chemical synapses on the synchronization transitions in a hybrid modular neuronal network are investigated in this paper. Numerical results show that the synchronization of neuronal activities can be either promoted or destroyed as the information transmission delay increases, irrespective of the probability of electrical synapses in the hybrid-synaptic network. Interestingly, when the number of electrical synapses exceeds a certain level, further increasing their proportion can markedly enhance the spatiotemporal synchronization transitions. Moreover, the coupling strength has a significant effect on the synchronization transition. The dominant type of synapse always has a more profound effect on the emergence of synchronous behaviors. Furthermore, the results for the modular neuronal network structures demonstrate that excessive partitioning of the modular network may dramatically impair neuronal synchronization. Considering that information transmission delays are inevitable in intra- and inter-neuronal network communication, the obtained results may have important implications for the exploration of the synchronization mechanism underlying several neural system diseases such as Parkinson's disease.

  19. Arrays of microLEDs and astrocytes: biological amplifiers to optogenetically modulate neuronal networks reducing light requirement.

    Directory of Open Access Journals (Sweden)

    Rolando Berlinguer-Palmini

    Full Text Available In the modern view of synaptic transmission, astrocytes are no longer confined to the role of merely supportive cells. Although they do not generate action potentials, they nonetheless exhibit electrical activity and can influence surrounding neurons through gliotransmitter release. In this work, we explored whether optogenetic activation of glial cells could act as an amplification mechanism for optical neural stimulation via gliotransmission to the neural network. We studied the modulation of gliotransmission by selective photo-activation of channelrhodopsin-2 (ChR2) and by means of a matrix of individually addressable super-bright microLEDs (μLEDs) with an excitation peak at 470 nm. We combined Ca2+ imaging techniques and concurrent patch-clamp electrophysiology to obtain subsequent glial/neural activity. First, we tested the efficacy of the μLEDs in stimulating ChR2-transfected astrocytes. The ChR2-induced astrocytic current did not desensitize over time, and was linearly increased and prolonged by increasing μLED irradiance in terms of intensity and surface illumination. Subsequently, ChR2 astrocytic stimulation by broad-field LED illumination with the same spectral profile increased both glial and neuronal calcium transient frequency and sEPSCs, suggesting that a few ChR2-transfected astrocytes were able to excite surrounding non-ChR2-transfected astrocytes and neurons. Finally, by using the μLED array to selectively light-stimulate ChR2-positive astrocytes, we were able to increase the synaptic activity of single neurons surrounding them. In conclusion, ChR2-transfected astrocytes combined with the μLED system were shown to act as an amplifier of synaptic activity in mixed cortical neuronal and glial cell cultures.

  20. Arrays of microLEDs and astrocytes: biological amplifiers to optogenetically modulate neuronal networks reducing light requirement.

    Science.gov (United States)

    Berlinguer-Palmini, Rolando; Narducci, Roberto; Merhan, Kamyar; Dilaghi, Arianna; Moroni, Flavio; Masi, Alessio; Scartabelli, Tania; Landucci, Elisa; Sili, Maria; Schettini, Antonio; McGovern, Brian; Maskaant, Pleun; Degenaar, Patrick; Mannaioni, Guido

    2014-01-01

    In the modern view of synaptic transmission, astrocytes are no longer confined to the role of merely supportive cells. Although they do not generate action potentials, they nonetheless exhibit electrical activity and can influence surrounding neurons through gliotransmitter release. In this work, we explored whether optogenetic activation of glial cells could act as an amplification mechanism for optical neural stimulation via gliotransmission to the neural network. We studied the modulation of gliotransmission by selective photo-activation of channelrhodopsin-2 (ChR2) and by means of a matrix of individually addressable super-bright microLEDs (μLEDs) with an excitation peak at 470 nm. We combined Ca2+ imaging techniques and concurrent patch-clamp electrophysiology to obtain subsequent glial/neural activity. First, we tested the efficacy of the μLEDs in stimulating ChR2-transfected astrocytes. The ChR2-induced astrocytic current did not desensitize over time, and was linearly increased and prolonged by increasing μLED irradiance in terms of intensity and surface illumination. Subsequently, ChR2 astrocytic stimulation by broad-field LED illumination with the same spectral profile increased both glial and neuronal calcium transient frequency and sEPSCs, suggesting that a few ChR2-transfected astrocytes were able to excite surrounding non-ChR2-transfected astrocytes and neurons. Finally, by using the μLED array to selectively light-stimulate ChR2-positive astrocytes, we were able to increase the synaptic activity of single neurons surrounding them. In conclusion, ChR2-transfected astrocytes combined with the μLED system were shown to act as an amplifier of synaptic activity in mixed cortical neuronal and glial cell cultures.

  1. In Vitro Reconstruction of Neuronal Networks Derived from Human iPS Cells Using Microfabricated Devices.

    Directory of Open Access Journals (Sweden)

    Yuzo Takayama

    Full Text Available Morphology and function of the nervous system are maintained via well-coordinated processes in both central and peripheral nervous tissues, which govern the homeostasis of organs and tissues. Impairments of the nervous system induce neuronal disorders such as peripheral neuropathy or cardiac arrhythmia. Although further investigation is warranted to reveal the molecular mechanisms of progression in such diseases, appropriate model systems mimicking the patient-specific communication between neurons and organs have not been established yet. In this study, we reconstructed neuronal networks in vitro either between neurons of the human induced pluripotent stem (iPS) cell-derived peripheral nervous system (PNS) and central nervous system (CNS), or between PNS neurons and cardiac cells, in a morphologically and functionally compartmentalized manner. Networks were constructed in photolithographically microfabricated devices with two culture compartments connected by 20 microtunnels. We confirmed that PNS and CNS neurons connected via synapses and formed a network. Additionally, calcium-imaging experiments showed that the bundles originating from the PNS neurons were functionally active and responded reproducibly to external stimuli. Next, we confirmed that CNS neurons showed an increase in calcium activity during electrical stimulation of networked bundles from PNS neurons, demonstrating the formation of functional cell-cell interactions. We also confirmed the formation of synapses between PNS neurons and mature cardiac cells. These results indicate that compartmentalized culture devices are promising tools for reconstructing network-wide connections between PNS neurons and various organs, and might help to understand patient-specific molecular and functional mechanisms under normal and pathological conditions.

  2. Multiple synchronization transitions in scale-free neuronal networks with electrical and chemical hybrid synapses

    International Nuclear Information System (INIS)

    Liu, Chen; Wang, Jiang; Wang, Lin; Yu, Haitao; Deng, Bin; Wei, Xile; Tsang, Kaiming; Chan, Wailok

    2014-01-01

    Highlights: • Synchronization transitions in hybrid scale-free neuronal networks are investigated. • Multiple synchronization transitions can be induced by the time delay. • The effect of the synchronization transitions depends on the ratio of electrical to chemical synapses. • Coupling strength and the density of inter-neuronal links can enhance synchronization. -- Abstract: The impact of information transmission delay on the synchronization transitions in scale-free neuronal networks with electrical and chemical hybrid synapses is investigated. Numerical results show that multiple synchronization transition regions can be induced by different information transmission delays. As the time delay increases, the synchronization of neuronal activities can be enhanced or destroyed, irrespective of the probability of chemical synapses in the whole hybrid neuronal network. In particular, for a larger probability of electrical synapses, the regions of synchronous activity become broader, reflecting the stronger synchronizing ability of electrical synapses compared with chemical ones. Moreover, it is found that increasing the coupling strength promotes synchronization monotonically, playing a role similar to increasing the probability of electrical synapses. Interestingly, the structure and parameters of the scale-free neuronal networks, especially the structural evolution, play a more subtle role in the synchronization transitions. In the network formation process, it is found that the more existing vertices a new vertex is attached to, the more synchronous activity will emerge

  3. Network dynamics in nociceptive pathways assessed by the neuronal avalanche model

    Directory of Open Access Journals (Sweden)

    Wu José

    2012-04-01

    Full Text Available Abstract Background Traditional electroencephalography provides a critical assessment of pain responses. The perception of pain, however, may involve a series of signal transmission pathways in higher cortical function. Recent studies have shown that a mathematical method, the neuronal avalanche model, may be applied to evaluate higher-order network dynamics. The neuronal avalanche is a cascade of neuronal activity, the size distribution of which can be approximated by a power law relationship manifested by the slope of a straight line (i.e., the α value). We investigated whether the neuronal avalanche could be a useful index for nociceptive assessment. Findings Neuronal activity was recorded with a 4 × 8 multichannel electrode array in the primary somatosensory cortex (S1) and anterior cingulate cortex (ACC). Under light anesthesia, peripheral pinch stimulation increased the slope of the α value in both the ACC and S1, whereas brush stimulation increased the α value only in the S1. The increase in α values was blocked in both regions under deep anesthesia. The increase in α values in the ACC induced by peripheral pinch stimulation was blocked by medial thalamic lesion, but the increase in α values in the S1 induced by brush and pinch stimulation was not affected. Conclusions The neuronal avalanche model shows a critical state in the cortical network for noxious-related signal processing. The α value may provide an index of brain network activity that distinguishes the responses to somatic stimuli from the control state. These network dynamics may be valuable for the evaluation of acute nociceptive processes and may be applied to chronic pathological pain conditions.
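    As a small sketch of how an α value of this kind can be computed from data (a generic implementation of avalanche-size analysis, not the authors' pipeline), the code below defines avalanches as runs of consecutive non-empty time bins, builds the size distribution, and fits the slope of a straight line on log-log axes. The surrogate Poisson data used here is purely illustrative and will not follow a true power law.

```python
import numpy as np

def avalanche_sizes(binned_counts):
    """Avalanches = runs of consecutive non-empty time bins; size = total events."""
    sizes, current = [], 0
    for c in binned_counts:
        if c > 0:
            current += c
        elif current > 0:
            sizes.append(current)
            current = 0
    if current > 0:
        sizes.append(current)
    return np.array(sizes)

def alpha_slope(sizes, smax=50):
    """Slope of the size distribution on log-log axes (the 'alpha value')."""
    s = np.arange(1, smax + 1)
    counts = np.array([(sizes == k).sum() for k in s], dtype=float)
    keep = counts > 0
    return np.polyfit(np.log(s[keep]), np.log(counts[keep]), 1)[0]

# Toy surrogate data: sparse random multi-unit activity (illustration only)
rng = np.random.default_rng(3)
counts = rng.poisson(0.8, size=200_000)
sizes = avalanche_sizes(counts)
print("number of avalanches:", sizes.size, " fitted slope:", round(alpha_slope(sizes), 2))
```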

  4. Cytokines and cytokine networks target neurons to modulate long-term potentiation.

    Science.gov (United States)

    Prieto, G Aleph; Cotman, Carl W

    2017-04-01

    Cytokines play crucial roles in the communication between brain cells including neurons and glia, as well as in the brain-periphery interactions. In the brain, cytokines modulate long-term potentiation (LTP), a cellular correlate of memory. Whether cytokines regulate LTP by direct effects on neurons or by indirect mechanisms mediated by non-neuronal cells is poorly understood. Elucidating neuron-specific effects of cytokines has been challenging because most brain cells express cytokine receptors. Moreover, cytokines commonly increase the expression of multiple cytokines in their target cells, thus increasing the complexity of brain cytokine networks even after single-cytokine challenges. Here, we review evidence on both direct and indirect-mediated modulation of LTP by cytokines. We also describe novel approaches based on neuron- and synaptosome-enriched systems to identify cytokines able to directly modulate LTP, by targeting neurons and synapses. These approaches can test multiple samples in parallel, thus allowing the study of multiple cytokines simultaneously. Hence, a cytokine networks perspective coupled with neuron-specific analysis may contribute to delineation of maps of the modulation of LTP by cytokines. Copyright © 2017 Elsevier Ltd. All rights reserved.

  5. Extremely Scalable Spiking Neuronal Network Simulation Code: From Laptops to Exascale Computers

    Science.gov (United States)

    Jordan, Jakob; Ippen, Tammo; Helias, Moritz; Kitayama, Itaru; Sato, Mitsuhisa; Igarashi, Jun; Diesmann, Markus; Kunkel, Susanne

    2018-01-01

    State-of-the-art software tools for neuronal network simulations scale to the largest computing systems available today and enable investigations of large-scale networks of up to 10 % of the human cortex at a resolution of individual neurons and synapses. Due to an upper limit on the number of incoming connections of a single neuron, network connectivity becomes extremely sparse at this scale. To manage computational costs, simulation software ultimately targeting the brain scale needs to fully exploit this sparsity. Here we present a two-tier connection infrastructure and a framework for directed communication among compute nodes accounting for the sparsity of brain-scale networks. We demonstrate the feasibility of this approach by implementing the technology in the NEST simulation code and we investigate its performance in different scaling scenarios of typical network simulations. Our results show that the new data structures and communication scheme prepare the simulation kernel for post-petascale high-performance computing facilities without sacrificing performance in smaller systems. PMID:29503613

  6. Extremely Scalable Spiking Neuronal Network Simulation Code: From Laptops to Exascale Computers.

    Science.gov (United States)

    Jordan, Jakob; Ippen, Tammo; Helias, Moritz; Kitayama, Itaru; Sato, Mitsuhisa; Igarashi, Jun; Diesmann, Markus; Kunkel, Susanne

    2018-01-01

    State-of-the-art software tools for neuronal network simulations scale to the largest computing systems available today and enable investigations of large-scale networks of up to 10 % of the human cortex at a resolution of individual neurons and synapses. Due to an upper limit on the number of incoming connections of a single neuron, network connectivity becomes extremely sparse at this scale. To manage computational costs, simulation software ultimately targeting the brain scale needs to fully exploit this sparsity. Here we present a two-tier connection infrastructure and a framework for directed communication among compute nodes accounting for the sparsity of brain-scale networks. We demonstrate the feasibility of this approach by implementing the technology in the NEST simulation code and we investigate its performance in different scaling scenarios of typical network simulations. Our results show that the new data structures and communication scheme prepare the simulation kernel for post-petascale high-performance computing facilities without sacrificing performance in smaller systems.

  7. Extremely Scalable Spiking Neuronal Network Simulation Code: From Laptops to Exascale Computers

    Directory of Open Access Journals (Sweden)

    Jakob Jordan

    2018-02-01

    Full Text Available State-of-the-art software tools for neuronal network simulations scale to the largest computing systems available today and enable investigations of large-scale networks of up to 10 % of the human cortex at a resolution of individual neurons and synapses. Due to an upper limit on the number of incoming connections of a single neuron, network connectivity becomes extremely sparse at this scale. To manage computational costs, simulation software ultimately targeting the brain scale needs to fully exploit this sparsity. Here we present a two-tier connection infrastructure and a framework for directed communication among compute nodes accounting for the sparsity of brain-scale networks. We demonstrate the feasibility of this approach by implementing the technology in the NEST simulation code and we investigate its performance in different scaling scenarios of typical network simulations. Our results show that the new data structures and communication scheme prepare the simulation kernel for post-petascale high-performance computing facilities without sacrificing performance in smaller systems.

  8. Spiking Regularity and Coherence in Complex Hodgkin–Huxley Neuron Networks

    International Nuclear Information System (INIS)

    Zhi-Qiang, Sun; Ping, Xie; Wei, Li; Peng-Ye, Wang

    2010-01-01

    We study the effects of the strength of coupling between neurons on the spiking regularity and coherence in a complex network with randomly connected Hodgkin–Huxley neurons driven by colored noise. It is found that for a given topology realization and colored noise correlation time, there exists an optimal strength of coupling at which the spiking regularity of the network reaches its best level. Moreover, when the temporal regularity reaches its best level, the spatial coherence of the system has already increased to a relatively high level. In addition, for a given number of neurons and noise correlation time, the values of average regularity and spatial coherence at the optimal coupling strength are nearly independent of the topology realization. Furthermore, there exists an optimal value of colored noise correlation time at which the spiking regularity can reach its best level. These results may be helpful for understanding real neuronal networks. (cross-disciplinary physics and related areas of science and technology)

  9. Effects of channel noise on firing coherence of small-world Hodgkin-Huxley neuronal networks

    Science.gov (United States)

    Sun, X. J.; Lei, J. Z.; Perc, M.; Lu, Q. S.; Lv, S. J.

    2011-01-01

    We investigate the effects of channel noise on firing coherence of Watts-Strogatz small-world networks consisting of biophysically realistic HH neurons having a fraction of blocked voltage-gated sodium and potassium ion channels embedded in their neuronal membranes. The intensity of channel noise is determined by the number of non-blocked ion channels, which depends on the fraction of working ion channels and the membrane patch size under the assumption of homogeneous ion channel density. We find that firing coherence of the neuronal network can be either enhanced or reduced depending on the source of channel noise. As shown in this paper, sodium channel noise reduces firing coherence of neuronal networks; in contrast, potassium channel noise enhances it. Furthermore, compared with potassium channel noise, sodium channel noise plays a dominant role in affecting firing coherence of the neuronal network. Moreover, we show that the observed phenomena are independent of the rewiring probability.

  10. Decoding spikes in a spiking neuronal network

    Energy Technology Data Exchange (ETDEWEB)

    Feng Jianfeng [Department of Informatics, University of Sussex, Brighton BN1 9QH (United Kingdom); Ding, Mingzhou [Department of Mathematics, Florida Atlantic University, Boca Raton, FL 33431 (United States)

    2004-06-04

    We investigate how to reliably decode the input information from the output of a spiking neuronal network. A maximum likelihood estimator of the input signal, together with its Fisher information, is rigorously calculated. The advantage of the maximum likelihood estimation over the 'brute-force rate coding' estimate is clearly demonstrated. It is pointed out that the ergodic assumption in neuroscience, i.e. a temporal average is equivalent to an ensemble average, is in general not true. Averaging over an ensemble of neurons usually gives a biased estimate of the input information. A method on how to compensate for the bias is proposed. Reconstruction of dynamical input signals with a group of spiking neurons is extensively studied and our results show that less than a spike is sufficient to accurately decode dynamical inputs.
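    A stripped-down, hedged illustration of the maximum likelihood versus Cramér-Rao bound comparison (for homogeneous Poisson spiking rather than the paper's spiking-network model): the ML estimate of a common input rate from an ensemble of spike counts, with its variance checked against 1/Fisher information. All numbers are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical setup: N independent neurons emit Poisson spikes at a common input
# rate lam (Hz) during a window of T seconds; we decode lam from the spike counts.
lam_true, N, T, trials = 20.0, 50, 0.1, 2000

counts = rng.poisson(lam_true * T, size=(trials, N))
lam_ml = counts.sum(axis=1) / (N * T)        # maximum likelihood estimate of the rate

# For this Poisson model the Fisher information is I(lam) = N*T/lam, so the
# Cramer-Rao bound on the estimator variance is lam/(N*T), which the ML attains.
print("empirical variance of ML estimate:", float(lam_ml.var()))
print("Cramer-Rao bound (1/Fisher info) :", lam_true / (N * T))
```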

  11. Decoding spikes in a spiking neuronal network

    International Nuclear Information System (INIS)

    Feng Jianfeng; Ding, Mingzhou

    2004-01-01

    We investigate how to reliably decode the input information from the output of a spiking neuronal network. A maximum likelihood estimator of the input signal, together with its Fisher information, is rigorously calculated. The advantage of the maximum likelihood estimation over the 'brute-force rate coding' estimate is clearly demonstrated. It is pointed out that the ergodic assumption in neuroscience, i.e. a temporal average is equivalent to an ensemble average, is in general not true. Averaging over an ensemble of neurons usually gives a biased estimate of the input information. A method on how to compensate for the bias is proposed. Reconstruction of dynamical input signals with a group of spiking neurons is extensively studied and our results show that less than a spike is sufficient to accurately decode dynamical inputs

  12. [Functional organization and structure of the serotonergic neuronal network of terrestrial snail].

    Science.gov (United States)

    Nikitin, E S; Balaban, P M

    2011-01-01

    Extending our knowledge of how the brain works requires continual improvement of the methods for recording neuronal activity and an increase in the number of neurons recorded simultaneously, in order to better understand the collective work of neuronal networks and assemblies. Conventional methods allow simultaneous intracellular recording of up to 2-5 neurons (their membrane potentials, currents or monosynaptic connections) or observation of the spiking of neuronal groups with subsequent discrimination of individual spikes, at the cost of losing the details of the membrane potential dynamics. We recorded the activity of a compact group of serotonergic neurons (up to 56 simultaneously) in the ganglion of a terrestrial mollusk using optical recording of membrane potential, which allowed individual action potentials and their parameters to be recorded in detail and the morphology of the recorded neurons to be revealed. We demonstrated clear clustering within the group in relation to the dynamics of action potentials and the phasic or tonic components of the neuronal responses to external electrophysiological and tactile stimuli. We also showed that the identified neuron Pd2 could induce activation of a significant number of neurons in the group, whereas neuron Pd4 did not induce any activation; however, its activation is delayed with respect to the activation of the reacting group of neurons. Our data strongly support the concept of possible delegation of the integrative function by the network to a single neuron.

  13. Effects of the network structure and coupling strength on the noise-induced response delay of a neuronal network

    International Nuclear Information System (INIS)

    Ozer, Mahmut; Uzuntarla, Muhammet

    2008-01-01

    The Hodgkin-Huxley (H-H) neuron model driven by stimuli just above threshold shows a noise-induced delay in the time to the first spike for a certain range of noise strengths, an effect called 'noise delayed decay' (NDD). We study the response time of a network of coupled H-H neurons, and investigate how the NDD can be affected by the connection topology of the network and the coupling strength. We show that the NDD effect exists for weak and intermediate coupling strengths, whereas it disappears for strong coupling strength regardless of the connection topology. We also show that although the network structure has very little effect on the NDD for a weak coupling strength, the network structure plays a key role for an intermediate coupling strength by decreasing the NDD effect as the number of random shortcuts increases, and thus provides an additional operating regime, absent in the regular network, in which the neurons may also exploit a spike-time code
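    The sketch below illustrates only the measurement behind the NDD effect, i.e. the mean time to the first spike as a function of noise strength, using a leaky integrate-and-fire neuron driven just above threshold instead of the Hodgkin-Huxley model of the record; whether the latency actually increases with noise depends on the neuron model, so the output should not be read as reproducing the NDD itself. Parameters are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(5)

def first_spike_latency(noise_sd, I0=1.05, tau=0.02, v_th=1.0, dt=1e-4, t_max=1.0):
    """One trial: time of the first threshold crossing of a leaky integrate-and-fire
    neuron driven by a constant input just above threshold plus white noise."""
    v = 0.0
    for k in range(int(t_max / dt)):
        v += dt * (-v + I0) / tau + noise_sd * np.sqrt(dt) * rng.normal() / tau
        if v >= v_th:
            return (k + 1) * dt
    return t_max  # no spike within t_max

for sd in (0.0, 0.01, 0.03, 0.1):
    latencies = [first_spike_latency(sd) for _ in range(200)]
    print(f"noise {sd:>4}: mean first-spike latency {1e3 * np.mean(latencies):6.1f} ms")
```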

  14. Computational Models of Neuron-Astrocyte Interactions Lead to Improved Efficacy in the Performance of Neural Networks

    Science.gov (United States)

    Alvarellos-González, Alberto; Pazos, Alejandro; Porto-Pazos, Ana B.

    2012-01-01

    The importance of astrocytes, one part of the glial system, for information processing in the brain has recently been demonstrated. Regarding information processing in multilayer connectionist systems, it has been shown that systems which include artificial neurons and astrocytes (Artificial Neuron-Glia Networks) have well-known advantages over identical systems including only artificial neurons. Since the actual impact of astrocytes in neural network function is unknown, we have investigated, using computational models, different astrocyte-neuron interactions for information processing; different neuron-glia algorithms have been implemented for training and validation of multilayer Artificial Neuron-Glia Networks oriented toward classification problem resolution. The results of the tests performed suggest that all the algorithms modelling astrocyte-induced synaptic potentiation improved artificial neural network performance, but their efficacy depended on the complexity of the problem. PMID:22649480

  15. Neuronal avalanches and learning

    Energy Technology Data Exchange (ETDEWEB)

    Arcangelis, Lucilla de, E-mail: dearcangelis@na.infn.it [Department of Information Engineering and CNISM, Second University of Naples, 81031 Aversa (Italy)

    2011-05-01

    Networks of living neurons represent one of the most fascinating systems of biology. While the physical and chemical mechanisms underlying the functioning of a single neuron are quite well understood, the collective behaviour of a system of many neurons is an extremely intriguing subject. A crucial ingredient of this complex behaviour is the plasticity of the network, namely its capacity to adapt and evolve depending on the level of activity. This plastic ability is believed, nowadays, to be at the basis of learning and memory in real brains. Spontaneous neuronal activity has recently been shown to share features with other complex systems. Experimental data have, in fact, shown that electrical information propagates in a cortex slice via an avalanche mode. These avalanches are characterized by power law distributions for their size and duration, features found in other problems in the context of the physics of complex systems, and successful models have been developed to describe their behaviour. In this contribution we discuss a statistical mechanical model for the complex activity in a neuronal network. The model implements the main physiological properties of living neurons and is able to reproduce recent experimental results. We then discuss the learning abilities of this neuronal network. Learning occurs via plastic adaptation of synaptic strengths by a non-uniform negative feedback mechanism. The system is able to learn all the tested rules, in particular the exclusive OR (XOR) and a random rule with three inputs. The learning dynamics exhibits universal features as a function of the strength of plastic adaptation. Any rule can be learned provided that the plastic adaptation is sufficiently slow.

  16. Neuronal avalanches and learning

    International Nuclear Information System (INIS)

    Arcangelis, Lucilla de

    2011-01-01

    Networks of living neurons represent one of the most fascinating systems of biology. While the physical and chemical mechanisms underlying the functioning of a single neuron are quite well understood, the collective behaviour of a system of many neurons is an extremely intriguing subject. A crucial ingredient of this complex behaviour is the plasticity of the network, namely its capacity to adapt and evolve depending on the level of activity. This plastic ability is believed, nowadays, to be at the basis of learning and memory in real brains. Spontaneous neuronal activity has recently been shown to share features with other complex systems. Experimental data have, in fact, shown that electrical information propagates in a cortex slice via an avalanche mode. These avalanches are characterized by power law distributions for their size and duration, features found in other problems in the context of the physics of complex systems, and successful models have been developed to describe their behaviour. In this contribution we discuss a statistical mechanical model for the complex activity in a neuronal network. The model implements the main physiological properties of living neurons and is able to reproduce recent experimental results. We then discuss the learning abilities of this neuronal network. Learning occurs via plastic adaptation of synaptic strengths by a non-uniform negative feedback mechanism. The system is able to learn all the tested rules, in particular the exclusive OR (XOR) and a random rule with three inputs. The learning dynamics exhibits universal features as a function of the strength of plastic adaptation. Any rule can be learned provided that the plastic adaptation is sufficiently slow.
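
    The avalanche statistics described above are typically extracted from spiking data by binning spikes in time: an avalanche is a maximal run of non-empty bins, and its size is the total number of spikes in the run. The Python sketch below shows this extraction and a standard maximum-likelihood estimate of the power-law exponent (Clauset et al. 2009); the Poisson surrogate data and the bin statistics are illustrative assumptions and do not reproduce the statistical mechanical model of the contribution.

        import numpy as np

        def avalanche_sizes(spike_counts):
            """Avalanche sizes from a 1D array of spike counts per time bin:
            an avalanche is a maximal run of non-empty bins."""
            sizes, current = [], 0
            for c in spike_counts:
                if c > 0:
                    current += c
                elif current > 0:
                    sizes.append(current)
                    current = 0
            if current > 0:
                sizes.append(current)
            return np.array(sizes)

        def powerlaw_exponent(sizes, s_min=1):
            """Maximum-likelihood estimate of the exponent of P(s) ~ s^(-alpha)
            (discrete approximation of Clauset et al. 2009)."""
            s = sizes[sizes >= s_min].astype(float)
            return 1.0 + len(s) / np.sum(np.log(s / (s_min - 0.5)))

        # Surrogate raster: Poisson counts, used only to exercise the functions.
        rng = np.random.default_rng(1)
        counts = rng.poisson(lam=0.9, size=100_000)
        sizes = avalanche_sizes(counts)
        print("number of avalanches:", len(sizes))
        print("estimated exponent  :", round(powerlaw_exponent(sizes), 2))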

  17. Memristor-based neural networks: Synaptic versus neuronal stochasticity

    KAUST Repository

    Naous, Rawan; Alshedivat, Maruan; Neftci, Emre; Cauwenberghs, Gert; Salama, Khaled N.

    2016-01-01

    In neuromorphic circuits, stochasticity in the cortex can be mapped into the synaptic or neuronal components. The hardware emulation of these stochastic neural networks are currently being extensively studied using resistive memories or memristors

  18. Delay-induced diversity of firing behavior and ordered chaotic firing in adaptive neuronal networks

    International Nuclear Information System (INIS)

    Gong Yubing; Wang Li; Xu Bo

    2012-01-01

    In this paper, we study the effect of time delay on the firing behavior, temporal coherence and synchronization in Newman–Watts thermosensitive neuron networks with adaptive coupling. In the absence of time delay, the neurons initially exhibit disordered spiking. As the time delay is increased, the neurons exhibit a diversity of firing behaviors, including bursting with multiple spikes in a burst, spiking, bursting with four, three and two spikes, firing death, and bursting with increasing amplitude. The spiking is the most ordered, exhibiting coherence resonance (CR)-like behavior, and the firing synchronization is enhanced with increasing time delay. As the growth rate of the coupling strength or the network randomness increases, the CR-like behavior shifts to smaller time delays and the synchronization of firing increases. These results show that time delay can induce a diversity of firing behaviors in adaptive neuronal networks and can order chaotic firing by enhancing and optimizing the temporal coherence and enhancing the synchronization of firing. However, the phenomenon of firing death shows that time delay may also inhibit the firing of adaptive neuronal networks. These findings provide new insight into the role of time delay in the firing activity of adaptive neuronal networks and can help to better understand the complex firing phenomena in neural networks.

  19. Identification of important nodes in directed biological networks: a network motif approach.

    Directory of Open Access Journals (Sweden)

    Pei Wang

    Full Text Available Identification of important nodes in complex networks has attracted increasing attention over the last decade. Various measures have been proposed to characterize the importance of nodes in complex networks, such as the degree, betweenness and PageRank. Different measures consider different aspects of complex networks. Although there are numerous results reported on undirected complex networks, few results have been reported on directed biological networks. Based on network motifs and principal component analysis (PCA), this paper aims at introducing a new measure to characterize node importance in directed biological networks. Investigations on five real-world biological networks indicate that the proposed method can robustly identify genuinely important nodes in different networks, for example finding command interneurons, global regulators, and non-hub but evolutionarily conserved important nodes in biological networks. Receiver Operating Characteristic (ROC) curves for the five networks indicate remarkable prediction accuracy of the proposed measure. The proposed index provides an alternative complex network metric. Potential implications of the related investigations include identifying network control and regulation targets, biological network modeling and analysis, as well as network medicine.
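
    For readers who want to experiment with node-importance measures on a directed network, the Python sketch below computes several of the standard centralities named in the abstract with networkx and combines their z-scores through a principal component analysis done with numpy. It does not implement the motif-based measure proposed in the paper, and the toy network is an arbitrary assumption.

        import numpy as np
        import networkx as nx

        # Toy directed network (illustrative assumption, not one of the paper's networks).
        G = nx.gnp_random_graph(60, 0.08, seed=2, directed=True)

        # Standard importance measures mentioned in the abstract.
        measures = {
            "in_degree":   nx.in_degree_centrality(G),
            "out_degree":  nx.out_degree_centrality(G),
            "betweenness": nx.betweenness_centrality(G),
            "pagerank":    nx.pagerank(G),
        }

        # Assemble a (nodes x measures) matrix and z-score each column.
        nodes = list(G.nodes())
        X = np.array([[measures[m][n] for m in measures] for n in nodes])
        X = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-12)

        # First principal component of the z-scored measures as a combined score
        # (the overall sign of a principal component is arbitrary).
        _, _, Vt = np.linalg.svd(X, full_matrices=False)
        score = X @ Vt[0]

        top = sorted(zip(nodes, score), key=lambda kv: -abs(kv[1]))[:5]
        print("top nodes by combined (PCA) importance:", top)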

  20. Robust spatial memory maps in flickering neuronal networks: a topological model

    Science.gov (United States)

    Dabaghian, Yuri; Babichev, Andrey; Memoli, Facundo; Chowdhury, Samir; Rice University Collaboration; Ohio State University Collaboration

    It is widely accepted that the hippocampal place cells provide a substrate for the neuronal representation of the environment, the "cognitive map". However, the hippocampal network, like any other network in the brain, is transient: thousands of hippocampal neurons die every day, and the connections formed by these cells constantly change due to various forms of synaptic plasticity. What then explains the remarkable reliability of our spatial memories? We propose a computational approach to answering this question based on two insights. First, we propose that the hippocampal cognitive map is fundamentally topological, and hence amenable to analysis by topological methods. We then apply several novel methods from homology theory to understand how dynamic connections between cells influence the speed and reliability of spatial learning. We simulate the rat's exploratory movements through different environments and study how topological invariants of these environments arise in a network of simulated neurons with "flickering" connectivity. We find that despite the transient connectivity the network of place cells produces a stable representation of the topology of the environment.

  1. Effects of partial time delays on phase synchronization in Watts-Strogatz small-world neuronal networks

    Science.gov (United States)

    Sun, Xiaojuan; Perc, Matjaž; Kurths, Jürgen

    2017-05-01

    In this paper, we study the effects of partial time delays on phase synchronization in Watts-Strogatz small-world neuronal networks. Our focus is on the impact of two parameters, namely the time delay τ and the probability of partial time delay pdelay, where the latter determines the probability with which a connection between two neurons is delayed. Our research reveals that partial time delays significantly affect phase synchronization in this system. In particular, partial time delays can either enhance or decrease phase synchronization and induce synchronization transitions with changes in the mean firing rate of neurons, as well as induce switching from synchronized period-1 firing to synchronized period-2 firing. Moreover, in comparison to a neuronal network in which all connections are delayed, we show that small partial time delay probabilities have a distinctly different influence on the phase synchronization of neuronal networks.
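
    Phase synchronization of spiking neurons is commonly quantified with a Kuramoto-type order parameter after assigning each neuron a phase that grows by 2*pi between consecutive spikes. The Python sketch below shows only this measurement step on surrogate spike trains; the spike times are placeholders, and the delay-coupled Watts-Strogatz simulation of the study is not reproduced.

        import numpy as np

        def spike_phase(spike_times, t):
            """Linear phase interpolation: the phase grows by 2*pi between spikes."""
            k = np.searchsorted(spike_times, t, side="right") - 1
            k = np.clip(k, 0, len(spike_times) - 2)
            t0, t1 = spike_times[k], spike_times[k + 1]
            return 2.0 * np.pi * (k + (t - t0) / (t1 - t0))

        def phase_order_parameter(all_spike_times, t_grid):
            """Time average of R(t) = |<exp(i*phi_j(t))>_j|; 1 means full synchrony."""
            phases = np.array([spike_phase(st, t_grid) for st in all_spike_times])
            R_t = np.abs(np.mean(np.exp(1j * phases), axis=0))
            return R_t.mean()

        # Surrogate data: jittered periodic spike trains (placeholder only).
        rng = np.random.default_rng(3)
        period, n_neurons = 20.0, 50
        trains = [np.cumsum(period + rng.normal(0, 2.0, 60)) for _ in range(n_neurons)]
        t_grid = np.linspace(100, 1000, 500)
        print("phase order parameter R =", round(phase_order_parameter(trains, t_grid), 3))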

  2. Review of Biological Network Data and Its Applications

    Directory of Open Access Journals (Sweden)

    Donghyeon Yu

    2013-12-01

    Full Text Available Studying biological networks, such as protein-protein interactions, is key to understanding complex biological activities. Various types of large-scale biological datasets have been collected and analyzed with high-throughput technologies, including DNA microarray, next-generation sequencing, and the two-hybrid screening system, for this purpose. In this review, we focus on network-based approaches that help in understanding biological systems and identifying biological functions. Accordingly, this paper covers two major topics in network biology: reconstruction of gene regulatory networks and network-based applications, including protein function prediction, disease gene prioritization, and network-based genome-wide association study.

  3. Turbofan engine diagnostics neural network size optimization method which takes into account the overlearning effect

    Directory of Open Access Journals (Sweden)

    О.С. Якушенко

    2010-01-01

    Full Text Available The article is devoted to the problem of automatic recognition of gas turbine engine (GTE) technical state classes from operating parameters using neural networks. One of the main problems in creating such networks is determining their optimal structure size (the number of layers in the network and the number of neurons in each layer). The article considers a method of neural network size optimization intended for classifying the GTE technical state. The optimization is carried out taking into account the possibility of the overlearning effect, in which a learning network loses its ability to generalize and begins to strictly describe the training data set. To determine the moment when the overlearning effect appears in the learning network, the method of three data sets is used. The method is based on comparing the changes in recognition quality parameters calculated during recognition of the training and control data sets. The moment when recognition quality on the control data set begins to deteriorate while recognition quality on the training data set still continues to improve is taken as the onset of overlearning. To detect this moment, the learning process is periodically interrupted and the network is simulated on the training and control data sets. The optimization of two-, three- and four-layer networks is conducted and some results of the optimization are shown. An extended training set is also created and described. The set covers 16 GTE technical state classes, each represented by 200 points (200 possible realizations of the technical state class) instead of the 20 points used in earlier articles; this was done to increase the representativeness of the data set. The article presents the optimization algorithm and some results obtained with it. The experimental results were analyzed to determine the most suitable neural network structure. This structure provides most high-quality GTE
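
    The three-data-set criterion described above is essentially early stopping: training is interrupted periodically, and training is halted once recognition quality on the control (validation) set starts to deteriorate while it still improves on the training set. The Python sketch below illustrates this logic with a tiny one-hidden-layer network on synthetic data; the data, architecture, learning rate and patience are arbitrary assumptions unrelated to the GTE data of the article.

        import numpy as np

        rng = np.random.default_rng(4)

        # Synthetic 2-class problem split into training / control / test sets.
        def make_data(n):
            X = rng.normal(size=(n, 2))
            y = (X[:, 0] * X[:, 1] > 0).astype(float).reshape(-1, 1)  # XOR-like labels
            return X, y

        Xtr, ytr = make_data(400); Xva, yva = make_data(200); Xte, yte = make_data(200)

        # One-hidden-layer network, sigmoid units, mean squared error.
        n_hidden = 8
        W1 = rng.normal(0, 0.5, (2, n_hidden)); b1 = np.zeros(n_hidden)
        W2 = rng.normal(0, 0.5, (n_hidden, 1)); b2 = np.zeros(1)
        sig = lambda z: 1.0 / (1.0 + np.exp(-z))

        def forward(X):
            h = sig(X @ W1 + b1)
            return h, sig(h @ W2 + b2)

        def mse(X, y):
            return float(np.mean((forward(X)[1] - y) ** 2))

        lr, best_va, best_params, patience, bad = 0.5, np.inf, None, 20, 0
        for epoch in range(2000):
            h, out = forward(Xtr)
            # Backpropagation for MSE loss with sigmoid units.
            d_out = (out - ytr) * out * (1 - out)
            d_h = (d_out @ W2.T) * h * (1 - h)
            W2 -= lr * h.T @ d_out / len(Xtr); b2 -= lr * d_out.mean(axis=0)
            W1 -= lr * Xtr.T @ d_h / len(Xtr); b1 -= lr * d_h.mean(axis=0)

            if epoch % 10 == 0:                      # periodic interruption of learning
                va = mse(Xva, yva)
                if va < best_va:
                    best_va, bad = va, 0
                    best_params = (W1.copy(), b1.copy(), W2.copy(), b2.copy())
                else:
                    bad += 1
                    if bad >= patience:              # control error keeps deteriorating
                        break

        W1, b1, W2, b2 = best_params
        print("control MSE at stop:", round(best_va, 4), " test MSE:", round(mse(Xte, yte), 4))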

  4. Role of Noise in Complex Networks of FitzHugh-Nagumo Neurons

    International Nuclear Information System (INIS)

    Fortuna, Luigi; Frasca, Mattia; La Rosa, Manuela

    2005-01-01

    This paper deals with the open question of the role of noise in complex networks of interconnected FitzHugh-Nagumo neurons. The problem is addressed through extensive simulations of different network topologies. The results show that several topologies behave optimally with respect to the range of noise levels leading to an improvement in the stimulus-response coherence, while others behave optimally with respect to the maximum values of the performance index. The best results, in terms of both a suitable noise level and a high stimulus-response coherence, have been obtained when diversity in the neurons' characteristic parameters is introduced and the neurons are connected in a small-world topology.

  5. Analyzing topological characteristics of neuronal functional networks in the rat brain

    International Nuclear Information System (INIS)

    Lu, Hu; Yang, Shengtao; Song, Yuqing; Wei, Hui

    2014-01-01

    In this study, we recorded spike trains from brain cortical neurons of several behaving rats in vivo by using multi-electrode recordings. A neuronal functional network (NFN) was constructed in each trial, yielding a total of 150 NFNs in this study. The topological characteristics of the NFNs were analyzed by using the two most important characteristics of complex networks, namely, small-world structure and community structure. We found that small-world properties exist in the different NFNs constructed in this study. The modularity function Q was used to determine the existence of community structure in NFNs, through which we found that community-structure characteristics, which are related to the recorded spike train data sets, are more evident in the Y-maze task than in the DM-GM task. Our results can also be used to further analyze the relationship between small-world characteristics and the cognitive behavioral responses of rats. - Highlights: • We constructed neuronal functional networks based on the recorded neurons. • We analyzed the two main complex network characteristics, namely, small-world structure and community structure. • NFNs constructed based on the recorded neurons in this study exhibit small-world properties. • Some NFNs have community structure characteristics

  6. Analyzing topological characteristics of neuronal functional networks in the rat brain

    Energy Technology Data Exchange (ETDEWEB)

    Lu, Hu [School of Computer Science and Communication Engineering, Jiangsu University, Jiangsu 212003 (China); School of Computer Science, Fudan University, Shanghai 200433 (China); Yang, Shengtao [Institutes of Brain Science, Fudan University, Shanghai 200433 (China); Song, Yuqing [School of Computer Science and Communication Engineering, Jiangsu University, Jiangsu 212003 (China); Wei, Hui [School of Computer Science, Fudan University, Shanghai 200433 (China)

    2014-08-28

    In this study, we recorded spike trains from brain cortical neurons of several behaving rats in vivo by using multi-electrode recordings. A neuronal functional network (NFN) was constructed in each trial, yielding a total of 150 NFNs in this study. The topological characteristics of the NFNs were analyzed by using the two most important characteristics of complex networks, namely, small-world structure and community structure. We found that small-world properties exist in the different NFNs constructed in this study. The modularity function Q was used to determine the existence of community structure in NFNs, through which we found that community-structure characteristics, which are related to the recorded spike train data sets, are more evident in the Y-maze task than in the DM-GM task. Our results can also be used to further analyze the relationship between small-world characteristics and the cognitive behavioral responses of rats. - Highlights: • We constructed neuronal functional networks based on the recorded neurons. • We analyzed the two main complex network characteristics, namely, small-world structure and community structure. • NFNs constructed based on the recorded neurons in this study exhibit small-world properties. • Some NFNs have community structure characteristics.

  7. Dynamical patterns of calcium signaling in a functional model of neuron-astrocyte networks

    DEFF Research Database (Denmark)

    Postnov, D.E.; Koreshkov, R.N.; Brazhe, N.A.

    2009-01-01

    We propose a functional mathematical model for neuron-astrocyte networks. The model incorporates elements of the tripartite synapse and the spatial branching structure of coupled astrocytes. We consider glutamate-induced calcium signaling as a specific mode of excitability and transmission ... in astrocytic-neuronal networks. We reproduce local and global dynamical patterns observed experimentally...

  8. Neuronal network disintegration: common pathways linking neurodegenerative diseases.

    Science.gov (United States)

    Ahmed, Rebekah M; Devenney, Emma M; Irish, Muireann; Ittner, Arne; Naismith, Sharon; Ittner, Lars M; Rohrer, Jonathan D; Halliday, Glenda M; Eisen, Andrew; Hodges, John R; Kiernan, Matthew C

    2016-11-01

    Neurodegeneration refers to a heterogeneous group of brain disorders that progressively evolve. It has been increasingly appreciated that many neurodegenerative conditions overlap at multiple levels, and therefore traditional clinicopathological correlation approaches to better classifying a disease have met with limited success. Neuronal network disintegration is fundamental to neurodegeneration, and concepts based around network disintegration may better explain the overlap between clinical and pathological phenotypes. In this Review, promoters of overlap in neurodegeneration, incorporating behavioural, cognitive, metabolic, motor, and extrapyramidal presentations, will be critically appraised. In addition, evidence that may support the existence of large-scale networks contributing to phenotypic differentiation will be considered across the neurodegenerative spectrum. Disintegration of neuronal networks through different pathological processes, such as prion-like spread, may provide a better paradigm of disease and thereby facilitate the identification of novel therapies for neurodegeneration.

  9. Identifying Controlling Nodes in Neuronal Networks in Different Scales

    Science.gov (United States)

    Tang, Yang; Gao, Huijun; Zou, Wei; Kurths, Jürgen

    2012-01-01

    Recent studies have detected hubs in neuronal networks using degree, betweenness centrality, motifs and synchronization, and have revealed the importance of hubs in their structural and functional roles. In addition, the analysis of complex networks at different scales is widely used in the physics community. This can provide detailed insights into the intrinsic properties of networks. In this study, we focus on the identification of controlling regions in cortical networks of the cat brain at microscopic, mesoscopic and macroscopic scales, based on single-objective evolutionary computation methods. The problem is investigated by considering two measures of controllability separately. The impact of the number of driver nodes on controllability is revealed, and the properties of controlling nodes are shown in a statistical way. Our results show that the statistical properties of the controlling nodes display a concave or convex shape with an increase of the allowed number of controlling nodes, revealing a transition in choosing driver nodes from areas with a large degree to areas with a low degree. Interestingly, the community Auditory in the cat brain, which has sparse connections with other communities, plays an important role in controlling the neuronal networks. PMID:22848475
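
    A widely used alternative way to obtain a set of controlling (driver) nodes in a directed network is the structural-controllability approach of Liu, Slotine and Barabasi, which reduces the problem to a maximum matching. This is a different technique from the evolutionary computation used in the study above; the Python sketch below is only a generic illustration on a random toy graph.

        import networkx as nx
        from networkx.algorithms import bipartite

        def driver_nodes(G):
            """Driver nodes of a directed graph: nodes left unmatched by a
            maximum matching in the bipartite (out-copy, in-copy) representation."""
            B = nx.Graph()
            top = [("out", u) for u in G.nodes()]
            bottom = [("in", v) for v in G.nodes()]
            B.add_nodes_from(top, bipartite=0)
            B.add_nodes_from(bottom, bipartite=1)
            B.add_edges_from((("out", u), ("in", v)) for u, v in G.edges())
            matching = bipartite.maximum_matching(B, top_nodes=top)
            matched_targets = {v for side, v in matching if side == "in"}
            drivers = [v for v in G.nodes() if v not in matched_targets]
            return drivers if drivers else [next(iter(G.nodes()))]

        G = nx.gnp_random_graph(30, 0.07, seed=5, directed=True)
        d = driver_nodes(G)
        print(f"{len(d)} driver nodes out of {G.number_of_nodes()}:", sorted(d))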

  10. Single-cell Transcriptional Analysis Reveals Novel Neuronal Phenotypes and Interaction Networks involved In the Central Circadian Clock

    Directory of Open Access Journals (Sweden)

    James Park

    2016-10-01

    Full Text Available Single-cell heterogeneity confounds efforts to understand how a population of cells organizes into cellular networks that underlie tissue-level function. This complexity is prominent in the mammalian suprachiasmatic nucleus (SCN). Here, individual neurons exhibit a remarkable amount of asynchronous behavior and transcriptional heterogeneity. However, SCN neurons are able to generate precisely coordinated synaptic and molecular outputs that synchronize the body to a common circadian cycle by organizing into cellular networks. To understand this emergent cellular network property, it is important to reconcile single-neuron heterogeneity with network organization. In light of recent studies suggesting that transcriptionally heterogeneous cells organize into distinct cellular phenotypes, we characterized the transcriptional, spatial, and functional organization of 352 SCN neurons from mice experiencing phase-shifts in their circadian cycle. Using the community structure detection method and multivariate analytical techniques, we identified previously undescribed neuronal phenotypes that are likely to participate in regulatory networks with known SCN cell types. Based on the newly discovered neuronal phenotypes, we developed a data-driven neuronal network structure in which multiple cell types interact through known synaptic and paracrine signaling mechanisms. These results provide a basis from which to interpret the functional variability of SCN neurons and describe methodologies towards understanding how a population of heterogeneous single cells organizes into cellular networks that underlie tissue-level function.

  11. A reanalysis of "Two types of asynchronous activity in networks of excitatory and inhibitory spiking neurons".

    Science.gov (United States)

    Engelken, Rainer; Farkhooi, Farzad; Hansel, David; van Vreeswijk, Carl; Wolf, Fred

    2016-01-01

    Neuronal activity in the central nervous system varies strongly in time and across neuronal populations. It is a longstanding proposal that such fluctuations generically arise from chaotic network dynamics. Various theoretical studies predict that the rich dynamics of rate models operating in the chaotic regime can subserve circuit computation and learning. Neurons in the brain, however, communicate via spikes and it is a theoretical challenge to obtain similar rate fluctuations in networks of spiking neuron models. A recent study investigated spiking balanced networks of leaky integrate and fire (LIF) neurons and compared their dynamics to a matched rate network with identical topology, where single unit input-output functions were chosen from isolated LIF neurons receiving Gaussian white noise input. A mathematical analogy between the chaotic instability in networks of rate units and the spiking network dynamics was proposed. Here we revisit the behavior of the spiking LIF networks and these matched rate networks. We find expected hallmarks of a chaotic instability in the rate network: For supercritical coupling strength near the transition point, the autocorrelation time diverges. For subcritical coupling strengths, we observe critical slowing down in response to small external perturbations. In the spiking network, we found in contrast that the timescale of the autocorrelations is insensitive to the coupling strength and that rate deviations resulting from small input perturbations rapidly decay. The decay speed even accelerates for increasing coupling strength. In conclusion, our reanalysis demonstrates fundamental differences between the behavior of pulse-coupled spiking LIF networks and rate networks with matched topology and input-output function. In particular there is no indication of a corresponding chaotic instability in the spiking network.
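
    The central quantity in this reanalysis, the timescale of rate-fluctuation autocorrelations, can be estimated directly from a population-rate time series. The Python sketch below computes the normalized autocorrelation function and an integrated correlation time for a surrogate Ornstein-Uhlenbeck signal; the surrogate and its parameters are assumptions used only to exercise the code, not output of the spiking or rate networks discussed above.

        import numpy as np

        def autocorrelation(x):
            """Normalized autocorrelation of a 1D signal, lags >= 0."""
            x = np.asarray(x, dtype=float) - np.mean(x)
            acf = np.correlate(x, x, mode="full")[len(x) - 1:]
            return acf / acf[0]

        def correlation_time(x, dt):
            """Integrated autocorrelation time up to the first zero crossing."""
            acf = autocorrelation(x)
            crossing = np.argmax(acf < 0) if np.any(acf < 0) else len(acf)
            return dt * np.sum(acf[:crossing])

        # Surrogate rate signal: Ornstein-Uhlenbeck process with tau = 50 ms.
        rng = np.random.default_rng(6)
        dt, tau, n = 1.0, 50.0, 20000
        x = np.zeros(n)
        for i in range(1, n):
            x[i] = x[i - 1] - dt * x[i - 1] / tau + np.sqrt(2 * dt / tau) * rng.normal()
        print("estimated correlation time (ms):", round(correlation_time(x, dt), 1))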

  12. Response of Cultured Neuronal Network Activity After High-Intensity Power Frequency Magnetic Field Exposure

    Directory of Open Access Journals (Sweden)

    Atsushi Saito

    2018-03-01

    Full Text Available High-intensity and low-frequency (1–100 kHz) time-varying electromagnetic fields stimulate the human body through excitation of the nervous system. In the power frequency range (50/60 Hz), a frequency-dependent threshold of the external electric field-induced neuronal modulation in cultured neuronal networks was used as one of the biological indicators in international guidelines; however, the threshold of magnetic field-induced neuronal modulation has not been elucidated. In this study, we exposed rat brain-derived neuronal networks to a high-intensity power frequency magnetic field (hPF-MF) and evaluated the modulation of synchronized bursting activity using a multi-electrode array (MEA)-based extracellular recording technique. As a result of short-term hPF-MF exposure (50–400 mT root-mean-square (rms), 50 Hz, sinusoidal wave, 6 s), the synchronized bursting activity was increased in the 400 mT-exposed group. On the other hand, no change was observed in the 50–200 mT-exposed groups. In order to clarify the mechanisms of the 400 mT hPF-MF exposure-induced neuronal response, we evaluated it after blocking inhibitory synapses using bicuculline methiodide (BMI); an increase in bursting activity was observed with BMI application, and the response to 400 mT hPF-MF exposure disappeared. It was therefore suggested that the response to hPF-MF exposure involved inhibitory input. Next, we screened for inhibitory pacemaker-like neuronal activity, which showed autonomous 4–10 Hz firing, with CNQX and D-AP5 application, and it was confirmed that this activity was reduced after 400 mT hPF-MF exposure. Comparison of these experimental results with estimated values of the induced electric field (E-field) in the culture medium revealed that the change in synchronized bursting activity occurred above 0.3 V/m, which was equivalent to the findings of a previous study that used external electric fields. In addition, the results suggested that
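
    The induced electric field mentioned at the end of the abstract can be estimated from Faraday's law: for a uniform sinusoidal magnetic field of rms value B and frequency f perpendicular to a circular dish, the rms field at radius r is E = pi * f * B * r. The short Python sketch below evaluates this under the assumption of a 5 mm radius (not stated in the abstract); with B = 0.4 T rms at 50 Hz it gives roughly 0.3 V/m, consistent with the threshold quoted above.

        import math

        def induced_E_rms(B_rms, f_hz, radius_m):
            """Rms induced electric field at radius r in a uniform sinusoidal
            magnetic field perpendicular to the plane: E = pi * f * B * r."""
            return math.pi * f_hz * B_rms * radius_m

        # Assumed dish radius of 5 mm (illustrative, not from the study).
        for B in (0.05, 0.2, 0.4):
            print(f"B = {B:4.2f} T rms -> E = {induced_E_rms(B, 50.0, 0.005):.3f} V/m")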

  13. Spiking, Bursting, and Population Dynamics in a Network of Growth Transform Neurons.

    Science.gov (United States)

    Gangopadhyay, Ahana; Chakrabartty, Shantanu

    2017-04-27

    This paper investigates the dynamical properties of a network of neurons, each of which implements an asynchronous mapping based on polynomial growth transforms. In the first part of this paper, we present a geometric approach for visualizing the dynamics of the network where each of the neurons traverses a trajectory in a dual optimization space, whereas the network itself traverses a trajectory in an equivalent primal optimization space. We show that as the network learns to solve basic classification tasks, different choices of primal-dual mapping produce unique but interpretable neural dynamics like noise shaping, spiking, and bursting. While the proposed framework is general enough, in this paper, we demonstrate its use for designing support vector machines (SVMs) that exhibit noise-shaping properties similar to those of ΣΔ modulators, and for designing SVMs that learn to encode information using spikes and bursts. It is demonstrated that the emergent switching, spiking, and burst dynamics produced by each neuron encodes its respective margin of separation from a classification hyperplane whose parameters are encoded by the network population dynamics. We believe that the proposed growth transform neuron model and the underlying geometric framework could serve as an important tool to connect well-established machine learning algorithms like SVMs to neuromorphic principles like spiking, bursting, population encoding, and noise shaping.

  14. Stochastic Wilson–Cowan models of neuronal network dynamics with memory and delay

    International Nuclear Information System (INIS)

    Goychuk, Igor; Goychuk, Andriy

    2015-01-01

    We consider a simple Markovian class of the stochastic Wilson–Cowan type models of neuronal network dynamics, which incorporates stochastic delay caused by the existence of a refractory period of neurons. From the point of view of the dynamics of the individual elements, we are dealing with a network of non-Markovian stochastic two-state oscillators with memory, which are coupled globally in a mean-field fashion. This interrelation of a higher-dimensional Markovian and lower-dimensional non-Markovian dynamics is discussed in its relevance to the general problem of the network dynamics of complex elements possessing memory. The simplest model of this class is provided by a three-state Markovian neuron with one refractory state, which causes firing delay with an exponentially decaying memory within the two-state reduced model. This basic model is used to study critical avalanche dynamics (noise-sustained criticality) in a balanced feedforward network consisting of excitatory and inhibitory neurons. Such avalanches emerge due to the network-size-dependent noise (mesoscopic noise). Numerical simulations reveal an intermediate power law in the distribution of avalanche sizes with the critical exponent around −1.16. We show that this power law is robust upon a variation of the refractory time over several orders of magnitude. However, the avalanche time distribution is biexponential. It does not reflect any genuine power law dependence.
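
    A discrete-time caricature of the three-state neuron described above (quiescent to active to refractory and back) is easy to simulate. The Python sketch below runs a population of such units with mean-field excitatory coupling; the rates, coupling and population size are illustrative assumptions and the code is not the model of the paper.

        import numpy as np

        rng = np.random.default_rng(7)

        N, steps = 1000, 5000
        p_spont = 0.001      # spontaneous activation probability per step
        p_rec = 0.1          # refractory -> quiescent recovery probability
        coupling = 1.05      # mean-field excitatory gain (near critical)

        # States: 0 = quiescent, 1 = active, 2 = refractory
        state = np.zeros(N, dtype=int)
        activity = np.zeros(steps)

        for t in range(steps):
            active_frac = np.mean(state == 1)
            p_act = 1.0 - np.exp(-(p_spont + coupling * active_frac))
            r = rng.random(N)
            quiescent = state == 0
            refractory = state == 2
            new_state = state.copy()
            new_state[state == 1] = 2                       # active -> refractory
            new_state[refractory & (r < p_rec)] = 0         # recover
            new_state[quiescent & (r < p_act)] = 1          # fire
            state = new_state
            activity[t] = active_frac

        print("mean fraction of active units:", round(activity.mean(), 4))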

  15. Querying Large Biological Network Datasets

    Science.gov (United States)

    Gulsoy, Gunhan

    2013-01-01

    New experimental methods have resulted in an increasing amount of genetic interaction data being generated every day. Biological networks are used to store the genetic interaction data gathered. The increasing amount of available data requires fast, large-scale analysis methods. Therefore, we address the problem of querying large biological network datasets.…

  16. Causal biological network database: a comprehensive platform of causal biological network models focused on the pulmonary and vascular systems.

    Science.gov (United States)

    Boué, Stéphanie; Talikka, Marja; Westra, Jurjen Willem; Hayes, William; Di Fabio, Anselmo; Park, Jennifer; Schlage, Walter K; Sewer, Alain; Fields, Brett; Ansari, Sam; Martin, Florian; Veljkovic, Emilija; Kenney, Renee; Peitsch, Manuel C; Hoeng, Julia

    2015-01-01

    With the wealth of publications and data available, powerful and transparent computational approaches are required to represent measured data and scientific knowledge in a computable and searchable format. We developed a set of biological network models, scripted in the Biological Expression Language, that reflect causal signaling pathways across a wide range of biological processes, including cell fate, cell stress, cell proliferation, inflammation, tissue repair and angiogenesis in the pulmonary and cardiovascular context. This comprehensive collection of networks is now freely available to the scientific community in a centralized web-based repository, the Causal Biological Network database, which is composed of over 120 manually curated and well annotated biological network models and can be accessed at http://causalbionet.com. The website accesses a MongoDB, which stores all versions of the networks as JSON objects and allows users to search for genes, proteins, biological processes, small molecules and keywords in the network descriptions to retrieve biological networks of interest. The content of the networks can be visualized and browsed. Nodes and edges can be filtered and all supporting evidence for the edges can be browsed and is linked to the original articles in PubMed. Moreover, networks may be downloaded for further visualization and evaluation. Database URL: http://causalbionet.com

  17. Graph-based unsupervised segmentation algorithm for cultured neuronal networks' structure characterization and modeling.

    Science.gov (United States)

    de Santos-Sierra, Daniel; Sendiña-Nadal, Irene; Leyva, Inmaculada; Almendral, Juan A; Ayali, Amir; Anava, Sarit; Sánchez-Ávila, Carmen; Boccaletti, Stefano

    2015-06-01

    Large scale phase-contrast images taken at high resolution throughout the life of a cultured neuronal network are analyzed by a graph-based unsupervised segmentation algorithm with a very low computational cost, scaling linearly with the image size. The processing automatically retrieves the whole network structure, an object whose mathematical representation is a matrix in which nodes are identified neurons or clusters of neurons, and links are the reconstructed connections between them. The algorithm is also able to extract any other relevant morphological information characterizing neurons and neurites. More importantly, and at variance with other segmentation methods that require fluorescence imaging from immunocytochemistry techniques, our non-invasive measurements allow us to perform a longitudinal analysis during the maturation of a single culture. Such an analysis provides a way of identifying the main physical processes underlying the self-organization of the neuronal ensemble into a complex network, and drives the formulation of a phenomenological model that is able to describe qualitatively the overall scenario observed during culture growth.

  18. Dual-mode operation of neuronal networks involved in left-right alternation

    DEFF Research Database (Denmark)

    Talpalar, Adolfo E.; Bouvier, Julien; Borgius, Lotta

    2013-01-01

    All forms of locomotion are repetitive motor activities that require coordinated bilateral activation of muscles. The executive elements of locomotor control are networks of spinal neurons that determine gait pattern through the sequential activation of motor-neuron pools on either side of the body.

  19. Network Analyses in Systems Biology: New Strategies for Dealing with Biological Complexity

    DEFF Research Database (Denmark)

    Green, Sara; Serban, Maria; Scholl, Raphael

    2018-01-01

    When and how can network and mechanistic approaches interact in productive ways? In this paper we address these questions by focusing on how biological networks are represented and analyzed in a diverse class of case studies. Our examples span from the investigation of organizational properties of biological networks using tools from graph theory to the application of dynamical systems theory to understand the behavior of complex biological systems. We show how network approaches support and extend traditional mechanistic strategies but also offer novel strategies for dealing with biological complexity.

  20. Developmental changes of neuronal networks associated with strategic social decision-making.

    Science.gov (United States)

    Steinmann, Elisabeth; Schmalor, Antonia; Prehn-Kristensen, Alexander; Wolff, Stephan; Galka, Andreas; Möhring, Jan; Gerber, Wolf-Dieter; Petermann, Franz; Stephani, Ulrich; Siniatchkin, Michael

    2014-04-01

    One of the important prerequisites for successful social interaction is the willingness of each individual to cooperate socially. Using the ultimatum game, several studies have demonstrated that the process of deciding whether to cooperate or to defect in interaction with a partner is associated with activation of the dorsolateral prefrontal cortex (DLPFC), anterior cingulate cortex (ACC), anterior insula (AI), and inferior frontal cortex (IFC). This study investigates developmental changes in this neuronal network. Fifteen healthy children (8-12 years), 15 adolescents (13-18 years) and 15 young adults (19-28 years) were investigated using the ultimatum game. Neuronal networks representing decision-making based on strategic thinking were characterized using functional MRI. In all age groups, the process of decision-making in reaction to unfair offers was associated with hemodynamic changes in similar regions. Compared with children, however, healthy adults and adolescents revealed greater activation in the IFC and the fusiform gyrus, as well as the nucleus accumbens. In contrast, healthy children displayed more activation in the AI, the dorsal part of the ACC, and the DLPFC. There were no differences in brain activation between adults and adolescents. The neuronal mechanisms underlying strategic social decision-making are thus already developed by the age of eight. Decision-making based on strategic thinking is associated with age-dependent involvement of different brain regions. Neuronal networks underlying theory of mind and reward anticipation are more activated in adults and adolescents, consistent with the increase in perspective taking with age. Reflecting greater emotional reactivity and the corresponding compensatory coping at younger ages, children show higher activation in a neuronal network associated with emotional processing and executive control.

  1. Hopf bifurcation of an (n + 1) -neuron bidirectional associative memory neural network model with delays.

    Science.gov (United States)

    Xiao, Min; Zheng, Wei Xing; Cao, Jinde

    2013-01-01

    Recent studies on Hopf bifurcations of neural networks with delays are confined to simplified neural network models consisting of only two, three, four, five, or six neurons. It is well known that neural networks are complex and large-scale nonlinear dynamical systems, so the dynamics of the delayed neural networks are very rich and complicated. Although discussing the dynamics of networks with a few neurons may help us to understand large-scale networks, there are inevitably some complicated problems that may be overlooked if simplified networks are carried over to large-scale networks. In this paper, a general delayed bidirectional associative memory neural network model with n + 1 neurons is considered. By analyzing the associated characteristic equation, the local stability of the trivial steady state is examined, and then the existence of the Hopf bifurcation at the trivial steady state is established. By applying the normal form theory and the center manifold reduction, explicit formulae are derived to determine the direction and stability of the bifurcating periodic solution. Furthermore, the paper highlights situations where the Hopf bifurcations are particularly critical, in the sense that the amplitude and the period of oscillations are very sensitive to errors due to tolerances in the implementation of neuron interconnections. It is shown that the sensitivity is crucially dependent on the delay and also significantly influenced by the number of neurons. Numerical simulations are carried out to illustrate the main results.

  2. Stochastic resonance on Newman-Watts networks of Hodgkin-Huxley neurons with local periodic driving

    Energy Technology Data Exchange (ETDEWEB)

    Ozer, Mahmut [Zonguldak Karaelmas University, Engineering Faculty, Department of Electrical and Electronics Engineering, 67100 Zonguldak (Turkey)], E-mail: mahmutozer2002@yahoo.com; Perc, Matjaz [University of Maribor, Faculty of Natural Sciences and Mathematics, Department of Physics, Koroska cesta 160, SI-2000 Maribor (Slovenia); Uzuntarla, Muhammet [Zonguldak Karaelmas University, Engineering Faculty, Department of Electrical and Electronics Engineering, 67100 Zonguldak (Turkey)

    2009-03-02

    We study the phenomenon of stochastic resonance on Newman-Watts small-world networks consisting of biophysically realistic Hodgkin-Huxley neurons with a tunable intensity of intrinsic noise via voltage-gated ion channels embedded in neuronal membranes. Importantly, the subthreshold periodic driving is introduced to a single neuron of the network, which thus acts as a pacemaker trying to impose its rhythm on the whole ensemble. We show that there exists an optimal intensity of intrinsic ion channel noise by which the outreach of the pacemaker extends optimally across the whole network. This stochastic resonance phenomenon can be further amplified via fine-tuning of the small-world network structure, and also depends significantly on the coupling strength among neurons and the driving frequency of the pacemaker. In particular, we demonstrate that the noise-induced transmission of weak localized rhythmic activity peaks when the pacemaker frequency matches the intrinsic frequency of subthreshold oscillations. The implications of our findings for weak signal detection and information propagation across neural networks are discussed.
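
    Pacemaker-driven stochastic resonance is usually quantified by the Fourier coefficient of the response at the pacemaker frequency, Q = sqrt(Q_sin^2 + Q_cos^2), where Q_sin and Q_cos are the sine and cosine components of the signal averaged over the recording. The Python sketch below shows this measurement on a surrogate noisy signal; the signal is a placeholder, not a simulation of the Hodgkin-Huxley network.

        import numpy as np

        def fourier_Q(x, t, freq):
            """Response amplitude at a given driving frequency (SR measure)."""
            w = 2.0 * np.pi * freq
            T = t[-1] - t[0]
            dt = t[1] - t[0]
            q_sin = (2.0 / T) * np.sum(x * np.sin(w * t)) * dt
            q_cos = (2.0 / T) * np.sum(x * np.cos(w * t)) * dt
            return np.hypot(q_sin, q_cos)

        # Surrogate response: weak periodic component buried in noise.
        rng = np.random.default_rng(8)
        t = np.arange(0.0, 2000.0, 0.1)               # time in ms
        f_drive = 0.01                                # driving frequency in 1/ms
        for noise in (0.0, 0.5, 2.0):
            x = 0.2 * np.sin(2 * np.pi * f_drive * t) + noise * rng.normal(size=t.size)
            print(f"noise = {noise:3.1f}  Q = {fourier_Q(x, t, f_drive):.3f}")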

  3. Hidden neuronal correlations in cultured networks

    International Nuclear Information System (INIS)

    Segev, Ronen; Baruchi, Itay; Hulata, Eyal; Ben-Jacob, Eshel

    2004-01-01

    Utilization of a clustering algorithm on neuronal spatiotemporal correlation matrices recorded during spontaneous activity of in vitro networks revealed the existence of hidden correlations: the sequence of synchronized bursting events (SBEs) is composed of statistically distinguishable subgroups, each with its own distinct pattern of interneuron spatiotemporal correlations. These findings hint that each of the SBE subgroups can serve as a template for coding, storage, and retrieval of specific information

  4. Biological transportation networks: Modeling and simulation

    KAUST Repository

    Albi, Giacomo

    2015-09-15

    We present a model for biological network formation originally introduced by Cai and Hu [Adaptation and optimization of biological transport networks, Phys. Rev. Lett. 111 (2013) 138701]. The modeling of fluid transportation (e.g., leaf venation and angiogenesis) and ion transportation networks (e.g., neural networks) is explained in detail and basic analytical features like the gradient flow structure of the fluid transportation network model and the impact of the model parameters on the geometry and topology of network formation are analyzed. We also present a numerical finite-element based discretization scheme and discuss sample cases of network formation simulations.

  5. Synchronous bursts on scale-free neuronal networks with attractive and repulsive coupling.

    Directory of Open Access Journals (Sweden)

    Qingyun Wang

    Full Text Available This paper investigates the dependence of synchronization transitions of bursting oscillations on the information transmission delay over scale-free neuronal networks with attractive and repulsive coupling. It is shown that for both types of coupling, the delay always plays a subtle role in either promoting or impairing synchronization. In particular, depending on the inherent oscillation period of individual neurons, regions of irregular and regular propagating excitatory fronts appear intermittently as the delay increases. These delay-induced synchronization transitions are manifested as well-expressed minima in the measure for spatiotemporal synchrony. For attractive coupling, the minima appear at every integer multiple of the average oscillation period, while for the repulsive coupling, they appear at every odd multiple of the half of the average oscillation period. The obtained results are robust to the variations of the dynamics of individual neurons, the system size, and the neuronal firing type. Hence, they can be used to characterize attractively or repulsively coupled scale-free neuronal networks with delays.

  6. Spatio-temporal specialization of GABAergic septo-hippocampal neurons for rhythmic network activity.

    Science.gov (United States)

    Unal, Gunes; Crump, Michael G; Viney, Tim J; Éltes, Tímea; Katona, Linda; Klausberger, Thomas; Somogyi, Peter

    2018-03-03

    Medial septal GABAergic neurons of the basal forebrain innervate the hippocampus and related cortical areas, contributing to the coordination of network activity, such as theta oscillations and sharp wave-ripple events, via a preferential innervation of GABAergic interneurons. Individual medial septal neurons display diverse activity patterns, which may be related to their termination in different cortical areas and/or to the different types of innervated interneurons. To test these hypotheses, we extracellularly recorded and juxtacellularly labeled single medial septal neurons in anesthetized rats in vivo during hippocampal theta and ripple oscillations, traced their axons to distant cortical target areas, and analyzed their postsynaptic interneurons. Medial septal GABAergic neurons exhibiting different hippocampal theta phase preferences and/or sharp wave-ripple related activity terminated in restricted hippocampal regions, and selectively targeted a limited number of interneuron types, as established on the basis of molecular markers. We demonstrate the preferential innervation of bistratified cells in CA1 and of basket cells in CA3 by individual axons. One group of septal neurons was suppressed during sharp wave-ripples, maintained their firing rate across theta and non-theta network states and mainly fired along the descending phase of CA1 theta oscillations. In contrast, neurons that were active during sharp wave-ripples increased their firing significantly during "theta" compared to "non-theta" states, with most firing during the ascending phase of theta oscillations. These results demonstrate that specialized septal GABAergic neurons contribute to the coordination of network activity through parallel, target area- and cell type-selective projections to the hippocampus.

  7. BiologicalNetworks 2.0 - an integrative view of genome biology data

    Directory of Open Access Journals (Sweden)

    Ponomarenko Julia

    2010-12-01

    Full Text Available Abstract Background A significant problem in the study of mechanisms of an organism's development is the elucidation of interrelated factors which have an impact on different levels of the organism, such as genes, biological molecules, cells, and cell systems. Numerous sources of heterogeneous data which exist for these subsystems are still not integrated sufficiently to give researchers a straightforward opportunity to analyze them together in the same frame of study. Systematic application of data integration methods is also hampered by a multitude of factors such as the orthogonal nature of the integrated data and naming problems. Results Here we report on a new version of BiologicalNetworks, a research environment for the integral visualization and analysis of heterogeneous biological data. BiologicalNetworks can be queried for properties of thousands of different types of biological entities (genes/proteins, promoters, COGs, pathways, binding sites, and others) and their relations (interactions, co-expression, co-citations, and others). The system includes the build-pathways infrastructure for molecular interactions/relations and module discovery in high-throughput experiments. Also implemented in BiologicalNetworks are the Integrated Genome Viewer and Comparative Genomics Browser applications, which allow for the search and analysis of gene regulatory regions and their conservation in multiple species in conjunction with molecular pathways/networks, experimental data and functional annotations. Conclusions The new release of BiologicalNetworks together with its back-end database introduces extensive functionality for a more efficient integrated multi-level analysis of microarray, sequence, regulatory, and other data. BiologicalNetworks is freely available at http://www.biologicalnetworks.org.

  8. Biological transportation networks: Modeling and simulation

    KAUST Repository

    Albi, Giacomo; Artina, Marco; Foransier, Massimo; Markowich, Peter A.

    2015-01-01

    We present a model for biological network formation originally introduced by Cai and Hu [Adaptation and optimization of biological transport networks, Phys. Rev. Lett. 111 (2013) 138701]. The modeling of fluid transportation (e.g., leaf venation

  9. Communication on the structure of biological networks

    Indian Academy of Sciences (India)

    Introduction. Over the past few years, network science has drawn attention from a large number of ... The qualitative properties of biological networks cannot ... Here, we study the underlying undirected structure of empirical biological networks.

  10. The Latin American Biological Dosimetry Network (LBDNet)

    International Nuclear Information System (INIS)

    Garcia, O.; Lamadrid, A.I.; Gonzalez, J.E.; Romero, I.; Mandina, T.; Di Giorgio, M.; Radl, A.; Taja, M.R.; Sapienza, C.E.; Deminge, M.M.; Fernandez Rearte, J.; Stuck Oliveira, M.; Valdivia, P.; Guerrero-Carbajal, C.; Arceo Maldonado, C.; Cortina Ramirez, G.E.; Espinoza, M.; Martinez-Lopez, W.; Di Tomasso, M.

    2016-01-01

    Biological Dosimetry is a necessary support for national radiation protection programmes and emergency response schemes. The Latin American Biological Dosimetry Network (LBDNet) was formally founded in 2007 to provide early biological dosimetry assistance in case of radiation emergencies in the Latin American region. The main topics considered in the foundational document of the network are presented here, comprising its mission, partners, concept of operation (including the mechanism for requesting biological dosimetry assistance in the region), and the network's capabilities. The process for network activation and the role of the coordinating laboratory during a biological dosimetry emergency response are also presented. This information is preceded by historical remarks on biological dosimetry cooperation in Latin America. A summary of the main experimental and practical results already obtained by the LBDNet is also included.

  11. Measuring the evolutionary rewiring of biological networks.

    Directory of Open Access Journals (Sweden)

    Chong Shou

    Full Text Available We have accumulated a large amount of biological network data and expect even more to come. Soon, we anticipate being able to compare many different biological networks as we commonly do for molecular sequences. It has long been believed that many of these networks change, or "rewire", at different rates. It is therefore important to develop a framework to quantify the differences between networks in a unified fashion. We developed such a formalism based on analogy to simple models of sequence evolution, and used it to conduct a systematic study of network rewiring on all the currently available biological networks. We found that, similar to sequences, biological networks show a decreased rate of change at large time divergences, because of saturation in potential substitutions. However, different types of biological networks consistently rewire at different rates. Using comparative genomics and proteomics data, we found a consistent ordering of the rewiring rates: transcription regulatory, phosphorylation regulatory, genetic interaction, miRNA regulatory, protein interaction, and metabolic pathway network, from fast to slow. This ordering was found in all comparisons we did of matched networks between organisms. To gain further intuition on network rewiring, we compared our observed rewirings with those obtained from simulation. We also investigated how readily our formalism could be mapped to other network contexts; in particular, we showed how it could be applied to analyze changes in a range of "commonplace" networks such as family trees, co-authorships and linux-kernel function dependencies.
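
    A simple way to quantify rewiring between two matched networks defined on the same node set is the fraction of edges present in only one of the two networks, i.e. one minus the Jaccard similarity of the edge sets. The Python sketch below demonstrates this on two toy graphs; it is a generic illustration, not the calibrated evolutionary-rate formalism of the paper.

        import networkx as nx

        def rewiring_fraction(G1, G2):
            """Fraction of the union of edges that is NOT shared by both networks
            (1 - Jaccard similarity of the edge sets)."""
            e1, e2 = set(map(frozenset, G1.edges())), set(map(frozenset, G2.edges()))
            union = e1 | e2
            return 1.0 - len(e1 & e2) / len(union) if union else 0.0

        # Toy example: degree-preserving swaps perturb a fraction of the edges.
        G1 = nx.gnp_random_graph(100, 0.05, seed=9)
        G2 = G1.copy()
        G2 = nx.double_edge_swap(G2, nswap=int(0.1 * G2.number_of_edges()),
                                 max_tries=10000, seed=9)
        print("rewired fraction:", round(rewiring_fraction(G1, G2), 3))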

  12. Effects of network structure on the synchronizability of nonlinearly coupled Hindmarsh–Rose neurons

    International Nuclear Information System (INIS)

    Li, Chun-Hsien; Yang, Suh-Yuh

    2015-01-01

    This work is devoted to investigating the effects of network structure on the synchronizability of nonlinearly coupled dynamical networks of Hindmarsh–Rose neurons with a sigmoidal coupling function. We mainly focus on networks that exhibit the small-world character or the scale-free property. By checking the first nonzero eigenvalue of the outer-coupling matrix, which is closely related to the synchronization threshold, the synchronizabilities of three specific network ensembles with prescribed network structures are compared. Interestingly, we find that networks with more connections will not necessarily result in better synchronizability. - Highlights: • We investigate the effects of network structure on the synchronizability of nonlinearly coupled Hindmarsh–Rose neurons. • We mainly consider networks that exhibit the small-world character or the scale-free property. • The synchronizabilities of three specific network ensembles with prescribed network structures are compared. • Networks with more connections will not necessarily result in better synchronizability

  13. Effects of network structure on the synchronizability of nonlinearly coupled Hindmarsh–Rose neurons

    Energy Technology Data Exchange (ETDEWEB)

    Li, Chun-Hsien, E-mail: chli@nknucc.nknu.edu.tw [Department of Mathematics, National Kaohsiung Normal University, Yanchao District, Kaohsiung City 82444, Taiwan (China); Yang, Suh-Yuh, E-mail: syyang@math.ncu.edu.tw [Department of Mathematics, National Central University, Jhongli District, Taoyuan City 32001, Taiwan (China)

    2015-10-23

    This work is devoted to investigating the effects of network structure on the synchronizability of nonlinearly coupled dynamical networks of Hindmarsh–Rose neurons with a sigmoidal coupling function. We mainly focus on networks that exhibit the small-world character or the scale-free property. By checking the first nonzero eigenvalue of the outer-coupling matrix, which is closely related to the synchronization threshold, the synchronizabilities of three specific network ensembles with prescribed network structures are compared. Interestingly, we find that networks with more connections will not necessarily result in better synchronizability. - Highlights: • We investigate the effects of network structure on the synchronizability of nonlinearly coupled Hindmarsh–Rose neurons. • We mainly consider networks that exhibit the small-world character or the scale-free property. • The synchronizabilities of three specific network ensembles with prescribed network structures are compared. • Networks with more connections will not necessarily result in better synchronizability.
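
    The synchronization-threshold criterion used above can be explored numerically by computing the first nonzero eigenvalue of the coupling (Laplacian) matrix for different network ensembles. The Python sketch below compares small-world and scale-free ensembles of equal size and similar edge count; the network parameters are arbitrary assumptions, and the Hindmarsh-Rose dynamics with sigmoidal coupling are not simulated.

        import numpy as np
        import networkx as nx

        def lambda_2(G):
            """First nonzero (second smallest) eigenvalue of the graph Laplacian."""
            L = nx.laplacian_matrix(G).toarray().astype(float)
            return np.sort(np.linalg.eigvalsh(L))[1]

        n, trials = 200, 20
        sw, sf = [], []
        for seed in range(trials):
            G_sw = nx.watts_strogatz_graph(n, k=6, p=0.1, seed=seed)     # small-world
            G_sf = nx.barabasi_albert_graph(n, m=3, seed=seed)           # scale-free
            sw.append(lambda_2(G_sw))
            sf.append(lambda_2(G_sf))

        print("small-world: ~", n * 6 // 2, "edges, mean lambda_2 =", round(np.mean(sw), 3))
        print("scale-free : ~", 3 * (n - 3), "edges, mean lambda_2 =", round(np.mean(sf), 3))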

  14. On the Interplay between the Evolvability and Network Robustness in an Evolutionary Biological Network: A Systems Biology Approach

    Science.gov (United States)

    Chen, Bor-Sen; Lin, Ying-Po

    2011-01-01

    In the evolutionary process, the random transmission and mutation of genes provide biological diversity for natural selection. In order to preserve functional phenotypes between generations, gene networks need to evolve robustly under the influence of random perturbations. Therefore, the robustness of the phenotype, in the evolutionary process, exerts a selection force on gene networks to maintain network functions. However, gene networks need to adjust, by variations in genetic content, to generate phenotypes for new challenges in the network's evolution, i.e., the evolvability. Hence, there should be some interplay between the evolvability and network robustness in evolutionary gene networks. In this study, the interplay between the evolvability and network robustness of a gene network and a biochemical network is discussed from a nonlinear stochastic system point of view. It was found that if the genetic robustness plus environmental robustness is less than the network robustness, the phenotype of the biological network is robust in evolution. The tradeoff between the genetic robustness and environmental robustness in evolution is discussed from the stochastic stability robustness and sensitivity of the nonlinear stochastic biological network, which may be relevant to the statistical tradeoff between bias and variance, the so-called bias/variance dilemma. Further, the tradeoff could be considered as an antagonistic pleiotropic action of a gene network and discussed from the systems biology perspective. PMID:22084563

  15. Impact of Bounded Noise and Rewiring on the Formation and Instability of Spiral Waves in a Small-World Network of Hodgkin-Huxley Neurons.

    Science.gov (United States)

    Yao, Yuangen; Deng, Haiyou; Ma, Chengzhang; Yi, Ming; Ma, Jun

    2017-01-01

    Spiral waves are observed in chemical, physical, and biological systems, and the emergence of spiral waves in cardiac tissue is linked to diseases such as ventricular fibrillation and epilepsy; they are therefore of importance for theoretical studies and potential medical applications. Noise is inevitable in neuronal systems and can change the electrical activities of neurons in different ways. Many previous theoretical studies of the impact of noise on spiral waves focus on unbounded Gaussian noise or colored noise. In this paper, the impacts of bounded noise and network rewiring on the formation and instability of spiral waves are discussed in a small-world (SW) network of Hodgkin-Huxley (HH) neurons through numerical simulations, together with a statistical analysis. First, we present an SW network of HH neurons subjected to bounded noise. It is then numerically demonstrated that bounded noise with proper intensity σ, amplitude A, or frequency f can facilitate the formation of spiral waves when the rewiring probability p is below certain thresholds; in other words, bounded-noise-induced resonant behavior can occur in the SW network of neurons. In addition, increasing the rewiring probability p always impairs spiral waves, while spiral waves are confirmed to be robust for small p; thus a shortcut-induced phase transition of the spiral wave is induced as p increases. Furthermore, statistical factors of synchronization are calculated to discern the phase transition of the spatial pattern, and it is confirmed that a larger synchronization factor is reached as the rewiring probability p increases, while the stability of the spiral wave is destroyed.
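
    Bounded noise with intensity σ, amplitude A, and frequency f, as mentioned above, is commonly modelled as sine-Wiener noise; whether the paper uses exactly this form is an assumption here. A minimal generator of such a signal:

    # A hedged sketch of bounded (sine-Wiener-type) noise with amplitude A,
    # frequency f, and intensity sigma; the exact form used in the paper may differ.
    import numpy as np

    def bounded_noise(T=1000.0, dt=0.01, A=1.0, f=0.01, sigma=0.5, seed=0):
        rng = np.random.default_rng(seed)
        n_steps = int(T / dt)
        t = np.arange(n_steps) * dt
        # Wiener process: cumulative sum of Gaussian increments with variance dt
        W = np.cumsum(rng.normal(0.0, np.sqrt(dt), n_steps))
        return t, A * np.sin(2.0 * np.pi * f * t + sigma * W)

    t, xi = bounded_noise()
    print(f"min {xi.min():.2f}, max {xi.max():.2f}  (bounded within +/- A)")

    By construction the signal never leaves the interval [-A, A], which is what distinguishes it from the unbounded Gaussian or colored noise of earlier studies.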

  16. Connectivities and synchronous firing in cortical neuronal networks

    International Nuclear Information System (INIS)

    Jia, L.C.; Sano, M.; Lai, P.-Y.; Chan, C.K.

    2004-01-01

    Network connectivities (k-bar) of cortical neural cultures are studied by synchronized firing and determined from measured correlations between fluorescence intensities of firing neurons. The bursting frequency (f) during synchronized firing of the networks is found to be an increasing function of k-bar. With f taken to be proportional to k-bar, a simple random model with a k-bar-dependent connection probability p(k-bar) has been constructed that successfully explains our experimental findings.
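
    A rough sketch of the correlation-based estimate of k-bar described above, run here on synthetic traces since the fluorescence data and the exact thresholding procedure of the study are not available; the threshold value is an arbitrary assumption:

    # Estimate a mean connectivity k-bar from pairwise correlations of synthetic
    # "fluorescence" traces; preprocessing and thresholds in the real experiment differ.
    import numpy as np

    rng = np.random.default_rng(1)
    n_neurons, n_frames = 50, 2000
    traces = rng.normal(size=(n_neurons, n_frames))          # placeholder fluorescence
    traces[:10] += rng.normal(size=n_frames) * 0.8           # a correlated subgroup

    corr = np.corrcoef(traces)                                # pairwise correlations
    np.fill_diagonal(corr, 0.0)
    threshold = 0.3                                           # assumed cutoff
    adjacency = corr > threshold
    k_bar = adjacency.sum(axis=1).mean()
    print(f"estimated mean connectivity k-bar ~ {k_bar:.2f}")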

  17. Attractor switching in neuron networks and spatiotemporal filters for motion processing

    NARCIS (Netherlands)

    Subramanian, Easwara Naga

    2008-01-01

    From a broader perspective, we address two important questions, viz., (a) what kind of mechanism would enable a neuronal network to switch between various tasks or stored patterns? (b) what are the properties of neurons that are used by the visual system in early motion detection? To address (a) we

  18. Novel recurrent neural network for modelling biological networks: oscillatory p53 interaction dynamics.

    Science.gov (United States)

    Ling, Hong; Samarasinghe, Sandhya; Kulasiri, Don

    2013-12-01

    Understanding the control of cellular networks consisting of gene and protein interactions and their emergent properties is a central activity of Systems Biology research. For this, continuous, discrete, hybrid, and stochastic methods have been proposed. Currently, the most common approach to modelling accurate temporal dynamics of networks is ordinary differential equations (ODE). However, critical limitations of ODE models are difficulty in kinetic parameter estimation and numerical solution of a large number of equations, making them more suited to smaller systems. In this article, we introduce a novel recurrent artificial neural network (RNN) that addresses the above limitations and produces a continuous model that easily estimates parameters from data, can handle a large number of molecular interactions and quantifies temporal dynamics and emergent systems properties. This RNN is based on a system of ODEs representing molecular interactions in a signalling network. Each neuron represents the concentration change of one molecule, described by an ODE. Weights of the RNN correspond to kinetic parameters in the system and can be adjusted incrementally during network training. The method is applied to the p53-Mdm2 oscillation system - a crucial component of the DNA damage response pathways activated by a damage signal. Simulation results indicate that the proposed RNN can successfully represent the behaviour of the p53-Mdm2 oscillation system and solve the parameter estimation problem with high accuracy. Furthermore, we present a modified form of the RNN that estimates parameters and captures systems dynamics from sparse data collected over relatively large time steps. We also investigate the robustness of the p53-Mdm2 system using the trained RNN under various levels of parameter perturbation to gain a greater understanding of the control of the p53-Mdm2 system. Its outcomes on robustness are consistent with the current biological knowledge of this system. As more
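
    The core idea, one neuron per molecular concentration with kinetic parameters as adjustable weights, can be illustrated with a hand-written two-variable negative-feedback loop. The sketch below uses made-up rate constants and plain Euler integration rather than the authors' trained RNN; a two-variable loop without delay generally needs extra ingredients to sustain oscillations, so this only shows the structure being modelled:

    # Toy sketch of the "one neuron = one ODE" idea for a p53-Mdm2-style negative
    # feedback loop; parameters are illustrative, not fitted values from the paper.
    import numpy as np

    def simulate(T=200.0, dt=0.01, k_p=1.0, k_m=0.8, d_p=0.6, d_m=0.4, K=1.0, n=4):
        steps = int(T / dt)
        p53, mdm2 = np.zeros(steps), np.zeros(steps)
        p53[0], mdm2[0] = 1.0, 0.1
        for i in range(steps - 1):
            # p53 is produced at a constant rate and degraded by Mdm2;
            # Mdm2 is induced by p53 (Hill term) and decays linearly.
            dp = k_p - d_p * mdm2[i] * p53[i]
            dm = k_m * p53[i]**n / (K**n + p53[i]**n) - d_m * mdm2[i]
            p53[i + 1] = p53[i] + dt * dp
            mdm2[i + 1] = mdm2[i] + dt * dm
        return p53, mdm2

    p53, mdm2 = simulate()
    print("late-time p53 range:", np.round(p53[-5000:].min(), 2), "-", np.round(p53[-5000:].max(), 2))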

  19. Soft chitosan microbeads scaffold for 3D functional neuronal networks.

    Science.gov (United States)

    Tedesco, Maria Teresa; Di Lisa, Donatella; Massobrio, Paolo; Colistra, Nicolò; Pesce, Mattia; Catelani, Tiziano; Dellacasa, Elena; Raiteri, Roberto; Martinoia, Sergio; Pastorino, Laura

    2018-02-01

    The availability of 3D biomimetic in vitro neuronal networks of mammalian neurons represents a pivotal step for the development of brain-on-a-chip experimental models to study neuronal (dys)functions and particularly neuronal connectivity. The use of hydrogel-based scaffolds for 3D cell cultures has been extensively studied in recent years. However, limited work on biomimetic 3D neuronal cultures has been carried out to date. In this respect, here we investigated the use of a widely popular polysaccharide, chitosan (CHI), for the fabrication of a microbead-based 3D scaffold to be coupled to primary neuronal cells. CHI microbeads were characterized by optical and atomic force microscopies. The cell/scaffold interaction was characterized in depth by transmission electron microscopy and by immunocytochemistry using confocal microscopy. Finally, a preliminary electrophysiological characterization by micro-electrode arrays was carried out.

  20. An introduction to modeling neuronal dynamics

    CERN Document Server

    Börgers, Christoph

    2017-01-01

    This book is intended as a text for a one-semester course on Mathematical and Computational Neuroscience for upper-level undergraduate and beginning graduate students of mathematics, the natural sciences, engineering, or computer science. An undergraduate introduction to differential equations is more than enough mathematical background. Only a slim, high school-level background in physics is assumed, and none in biology. Topics include models of individual nerve cells and their dynamics, models of networks of neurons coupled by synapses and gap junctions, origins and functions of population rhythms in neuronal networks, and models of synaptic plasticity. An extensive online collection of Matlab programs generating the figures accompanies the book.

  1. Self-Organized Criticality in a Simple Neuron Model Based on Scale-Free Networks

    International Nuclear Information System (INIS)

    Lin Min; Wang Gang; Chen Tianlun

    2006-01-01

    A simple model for a set of interacting idealized neurons in scale-free networks is introduced. The basic elements of the model are endowed with the main features of a neuron function. We find that our model displays power-law behavior of avalanche sizes and generates long-range temporal correlation. More importantly, we find different dynamical behavior for nodes with different connectivity in the scale-free networks.
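
    A generic way to probe the avalanche statistics mentioned above is a sandpile-like threshold dynamics on a scale-free graph. The sketch below is such a toy model with a small dissipation term, not the specific neuron model of the paper:

    # A rough stand-in for avalanche-size measurements: threshold (sandpile-like)
    # dynamics on a Barabasi-Albert graph with weak dissipation.
    import numpy as np
    import networkx as nx

    rng = np.random.default_rng(2)
    G = nx.barabasi_albert_graph(500, 2, seed=2)
    neighbors = {i: list(G.neighbors(i)) for i in G}
    load = {i: 0 for i in G}
    sizes = []

    for _ in range(10000):
        i0 = int(rng.integers(len(G)))           # slow external drive: add one unit of load
        load[i0] += 1
        active = [i0]
        size = 0
        while active:
            i = active.pop()
            if load[i] < len(neighbors[i]):
                continue
            load[i] -= len(neighbors[i])         # topple: redistribute load to neighbors
            size += 1
            if load[i] >= len(neighbors[i]):
                active.append(i)                 # may need to topple again
            for j in neighbors[i]:
                if rng.random() < 0.99:          # small dissipation keeps avalanches finite
                    load[j] += 1
                    if load[j] >= len(neighbors[j]):
                        active.append(j)
        if size:
            sizes.append(size)

    sizes = np.array(sizes)
    print("avalanches:", len(sizes), " mean size:", round(sizes.mean(), 2), " max size:", int(sizes.max()))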

  2. Dynamics of Competition between Subnetworks of Spiking Neuronal Networks in the Balanced State

    Science.gov (United States)

    Lagzi, Fereshteh; Rotter, Stefan

    2015-01-01

    We explore and analyze the nonlinear switching dynamics of neuronal networks with non-homogeneous connectivity. The general significance of such transient dynamics for brain function is unclear; however, for instance, decision-making processes in perception and cognition have been implicated in it. The network under study here is composed of three subnetworks of either excitatory or inhibitory leaky integrate-and-fire neurons, of which two are of the same type. The synaptic weights are arranged to establish and maintain a balance between excitation and inhibition in the case of a constant external drive. Each subnetwork is randomly connected, where all neurons belonging to a particular population have the same in-degree and the same out-degree. Neurons in different subnetworks are also randomly connected with the same probability; however, depending on the type of the pre-synaptic neuron, the synaptic weight is scaled by a factor. We observed that for a certain range of the “within” versus “between” connection weights (bifurcation parameter), the network activation spontaneously switches between the two sub-networks of the same type. This kind of dynamics has been termed “winnerless competition”, which also has a random component here. In our model, this phenomenon is well described by a set of coupled stochastic differential equations of Lotka-Volterra type that imply a competition between the subnetworks. The associated mean-field model shows the same dynamical behavior as observed in simulations of large networks comprising thousands of spiking neurons. The deterministic phase portrait is characterized by two attractors and a saddle node; its stochastic component is essentially given by the multiplicative inherent noise of the system. We find that the dwell time distribution of the active states is exponential, indicating that the noise drives the system randomly from one attractor to the other. A similar model for a larger number of populations might
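
    The Lotka-Volterra mean-field picture with multiplicative noise can be sketched directly. The coefficients below are illustrative (cross-inhibition just above 1 gives two shallow attractors), and whether switching occurs in a given run depends on the noise strength and run length; this is only the general flavor of the winnerless-competition description above:

    # Euler-Maruyama integration of two competing populations with multiplicative
    # noise; parameter values are illustrative, not those fitted in the paper.
    import numpy as np

    rng = np.random.default_rng(3)
    dt, steps = 0.01, 200000
    a, sigma = 1.05, 0.5            # weak cross-inhibition, fairly strong noise
    x = np.empty(steps); y = np.empty(steps)
    x[0] = y[0] = 0.5
    for i in range(steps - 1):
        dWx, dWy = rng.normal(0.0, np.sqrt(dt), 2)
        # small floor keeps both populations revivable (activity never vanishes entirely)
        x[i + 1] = max(x[i] + x[i] * (1 - x[i] - a * y[i]) * dt + sigma * x[i] * dWx, 0.01)
        y[i + 1] = max(y[i] + y[i] * (1 - y[i] - a * x[i]) * dt + sigma * y[i] * dWy, 0.01)

    dominant = x > y
    switches = np.count_nonzero(np.diff(dominant.astype(int)))
    print("fraction of time x dominates:", round(dominant.mean(), 2), " switches:", switches)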

  3. Dynamics of Competition between Subnetworks of Spiking Neuronal Networks in the Balanced State.

    Science.gov (United States)

    Lagzi, Fereshteh; Rotter, Stefan

    2015-01-01

    We explore and analyze the nonlinear switching dynamics of neuronal networks with non-homogeneous connectivity. The general significance of such transient dynamics for brain function is unclear; however, for instance, decision-making processes in perception and cognition have been implicated in it. The network under study here is composed of three subnetworks of either excitatory or inhibitory leaky integrate-and-fire neurons, of which two are of the same type. The synaptic weights are arranged to establish and maintain a balance between excitation and inhibition in the case of a constant external drive. Each subnetwork is randomly connected, where all neurons belonging to a particular population have the same in-degree and the same out-degree. Neurons in different subnetworks are also randomly connected with the same probability; however, depending on the type of the pre-synaptic neuron, the synaptic weight is scaled by a factor. We observed that for a certain range of the "within" versus "between" connection weights (bifurcation parameter), the network activation spontaneously switches between the two sub-networks of the same type. This kind of dynamics has been termed "winnerless competition", which also has a random component here. In our model, this phenomenon is well described by a set of coupled stochastic differential equations of Lotka-Volterra type that imply a competition between the subnetworks. The associated mean-field model shows the same dynamical behavior as observed in simulations of large networks comprising thousands of spiking neurons. The deterministic phase portrait is characterized by two attractors and a saddle node; its stochastic component is essentially given by the multiplicative inherent noise of the system. We find that the dwell time distribution of the active states is exponential, indicating that the noise drives the system randomly from one attractor to the other. A similar model for a larger number of populations might suggest a

  4. Application of hierarchical dissociated neural network in closed-loop hybrid system integrating biological and mechanical intelligence.

    Directory of Open Access Journals (Sweden)

    Yongcheng Li

    Full Text Available Neural networks are considered the origin of intelligence in organisms. In this paper, a new design of an intelligent system merging biological intelligence with artificial intelligence was created. It was based on a neural controller bidirectionally connected to an actual mobile robot to implement a novel vehicle. Two types of experimental preparations were utilized as the neural controller including 'random' and '4Q' (cultured neurons artificially divided into four interconnected parts) neural network. Compared to the random cultures, the '4Q' cultures presented absolutely different activities, and the robot controlled by the '4Q' network presented better capabilities in search tasks. Our results showed that neural cultures could be successfully employed to control an artificial agent; the robot performed better and better with the stimulus because of the short-term plasticity. A new framework is provided to investigate the bidirectional biological-artificial interface and develop new strategies for a future intelligent system using these simplified model systems.

  5. Application of Hierarchical Dissociated Neural Network in Closed-Loop Hybrid System Integrating Biological and Mechanical Intelligence

    Science.gov (United States)

    Zhang, Bin; Wang, Yuechao; Li, Hongyi

    2015-01-01

    Neural networks are considered the origin of intelligence in organisms. In this paper, a new design of an intelligent system merging biological intelligence with artificial intelligence was created. It was based on a neural controller bidirectionally connected to an actual mobile robot to implement a novel vehicle. Two types of experimental preparations were utilized as the neural controller including ‘random’ and ‘4Q’ (cultured neurons artificially divided into four interconnected parts) neural network. Compared to the random cultures, the ‘4Q’ cultures presented absolutely different activities, and the robot controlled by the ‘4Q’ network presented better capabilities in search tasks. Our results showed that neural cultures could be successfully employed to control an artificial agent; the robot performed better and better with the stimulus because of the short-term plasticity. A new framework is provided to investigate the bidirectional biological-artificial interface and develop new strategies for a future intelligent system using these simplified model systems. PMID:25992579

  6. Application of hierarchical dissociated neural network in closed-loop hybrid system integrating biological and mechanical intelligence.

    Science.gov (United States)

    Li, Yongcheng; Sun, Rong; Zhang, Bin; Wang, Yuechao; Li, Hongyi

    2015-01-01

    Neural networks are considered the origin of intelligence in organisms. In this paper, a new design of an intelligent system merging biological intelligence with artificial intelligence was created. It was based on a neural controller bidirectionally connected to an actual mobile robot to implement a novel vehicle. Two types of experimental preparations were utilized as the neural controller including 'random' and '4Q' (cultured neurons artificially divided into four interconnected parts) neural network. Compared to the random cultures, the '4Q' cultures presented absolutely different activities, and the robot controlled by the '4Q' network presented better capabilities in search tasks. Our results showed that neural cultures could be successfully employed to control an artificial agent; the robot performed better and better with the stimulus because of the short-term plasticity. A new framework is provided to investigate the bidirectional biological-artificial interface and develop new strategies for a future intelligent system using these simplified model systems.

  7. Mining biological networks from full-text articles.

    Science.gov (United States)

    Czarnecki, Jan; Shepherd, Adrian J

    2014-01-01

    The study of biological networks is playing an increasingly important role in the life sciences. Many different kinds of biological system can be modelled as networks; perhaps the most important examples are protein-protein interaction (PPI) networks, metabolic pathways, gene regulatory networks, and signalling networks. Although much useful information is easily accessible in publicly available databases, a lot of extra relevant data lies scattered in numerous published papers. Hence there is a pressing need for automated text-mining methods capable of extracting such information from full-text articles. Here we present practical guidelines for constructing a text-mining pipeline from existing code and software components capable of extracting PPI networks from full-text articles. This approach can be adapted to tackle other types of biological network.

  8. Activity-dependent switch of GABAergic inhibition into glutamatergic excitation in astrocyte-neuron networks.

    Science.gov (United States)

    Perea, Gertrudis; Gómez, Ricardo; Mederos, Sara; Covelo, Ana; Ballesteros, Jesús J; Schlosser, Laura; Hernández-Vivanco, Alicia; Martín-Fernández, Mario; Quintana, Ruth; Rayan, Abdelrahman; Díez, Adolfo; Fuenzalida, Marco; Agarwal, Amit; Bergles, Dwight E; Bettler, Bernhard; Manahan-Vaughan, Denise; Martín, Eduardo D; Kirchhoff, Frank; Araque, Alfonso

    2016-12-24

    Interneurons are critical for proper neural network function and can activate Ca2+ signaling in astrocytes. However, the impact of interneuron-astrocyte signaling on neuronal network operation remains unknown. Using the simplest hippocampal Astrocyte-Neuron network, i.e., GABAergic interneuron, pyramidal neuron, single CA3-CA1 glutamatergic synapse, and astrocytes, we found that interneuron-astrocyte signaling dynamically affected excitatory neurotransmission in an activity- and time-dependent manner, and determined the sign (inhibition vs potentiation) of the GABA-mediated effects. While synaptic inhibition was mediated by GABAA receptors, potentiation involved astrocyte GABAB receptors, astrocytic glutamate release, and presynaptic metabotropic glutamate receptors. Using conditional astrocyte-specific GABAB receptor (Gabbr1) knockout mice, we confirmed the glial source of the interneuron-induced potentiation, and demonstrated the involvement of astrocytes in hippocampal theta and gamma oscillations in vivo. Therefore, astrocytes decode interneuron activity and transform inhibitory into excitatory signals, contributing to the emergence of novel network properties resulting from the interneuron-astrocyte interplay.

  9. Mechanism for propagation of rate signals through a 10-layer feedforward neuronal network

    International Nuclear Information System (INIS)

    Jie, Li; Wan-Qing, Yu; Ding, Xu; Feng, Liu; Wei, Wang

    2009-01-01

    Using numerical simulations, we explore the mechanism for propagation of rate signals through a 10-layer feedforward network composed of Hodgkin–Huxley (HH) neurons with sparse connectivity. When white noise is afferent to the input layer, neuronal firing becomes progressively more synchronous in successive layers and synchrony is well developed in deeper layers owing to the feedforward connections between neighboring layers. The synchrony ensures the successful propagation of rate signals through the network when the synaptic conductance is weak. As the synaptic time constant τ_syn varies, coherence resonance is observed in the network activity due to the intrinsic property of HH neurons. This makes the output firing rate single-peaked as a function of τ_syn, suggesting that the signal propagation can be modulated by the synaptic time constant. These results are consistent with experimental results and advance our understanding of how information is processed in feedforward networks. (cross-disciplinary physics and related areas of science and technology)

  10. OWL Reasoning Framework over Big Biological Knowledge Network

    Science.gov (United States)

    Chen, Huajun; Chen, Xi; Gu, Peiqin; Wu, Zhaohui; Yu, Tong

    2014-01-01

    Recently, huge amounts of data are generated in the domain of biology. Embedded with domain knowledge from different disciplines, the isolated biological resources are implicitly connected. Thus it has shaped a big network of versatile biological knowledge. Faced with such massive, disparate, and interlinked biological data, providing an efficient way to model, integrate, and analyze the big biological network becomes a challenge. In this paper, we present a general OWL (web ontology language) reasoning framework to study the implicit relationships among biological entities. A comprehensive biological ontology across traditional Chinese medicine (TCM) and western medicine (WM) is used to create a conceptual model for the biological network. Then the corresponding biological data are integrated into a biological knowledge network as the data model. Based on the conceptual model and data model, a scalable OWL reasoning method is utilized to infer the potential associations between biological entities from the biological network. In our experiment, we focus on the association discovery between TCM and WM. The derived associations are quite useful for biologists to promote the development of novel drugs and TCM modernization. The experimental results show that the system achieves high efficiency, accuracy, scalability, and effectiveness. PMID:24877076

  11. Learning by stimulation avoidance: A principle to control spiking neural networks dynamics.

    Science.gov (United States)

    Sinapayen, Lana; Masumori, Atsushi; Ikegami, Takashi

    2017-01-01

    Learning based on networks of real neurons, and learning based on biologically inspired models of neural networks, have yet to find general learning rules leading to widespread applications. In this paper, we argue for the existence of a principle that allows steering the dynamics of a biologically inspired neural network. Using carefully timed external stimulation, the network can be driven towards a desired dynamical state. We term this principle "Learning by Stimulation Avoidance" (LSA). We demonstrate through simulation that the minimal sufficient conditions leading to LSA in artificial networks are also sufficient to reproduce learning results similar to those obtained in biological neurons by Shahaf and Marom, and in addition explain synaptic pruning. We examined the underlying mechanism by simulating a small network of 3 neurons, then scaled it up to a hundred neurons. We show that LSA has a higher explanatory power than existing hypotheses about the response of biological neural networks to external stimulation, and can be used as a learning rule for an embodied application: learning of wall avoidance by a simulated robot. In other work, reinforcement learning with spiking networks can be obtained through global reward signals akin to simulating the dopamine system; we believe that this is the first project demonstrating sensory-motor learning with random spiking networks through Hebbian learning relying on environmental conditions without a separate reward system.

  12. Delay-enhanced coherence of spiral waves in noisy Hodgkin-Huxley neuronal networks

    International Nuclear Information System (INIS)

    Wang Qingyun; Perc, Matjaz; Duan Zhisheng; Chen Guanrong

    2008-01-01

    We study the spatial dynamics of spiral waves in noisy Hodgkin-Huxley neuronal ensembles evoked by different information transmission delays and network topologies. In classical settings of coherence resonance the intensity of noise is fine-tuned so as to optimize the system's response. Here, we keep the noise intensity constant, and instead vary the length of the information transmission delay amongst coupled neurons. We show that there exists an intermediate transmission delay by which the spiral waves are optimally ordered, hence indicating the existence of delay-enhanced coherence of spatial dynamics in the examined system. Additionally, we examine the robustness of this phenomenon as the diffusive interaction topology changes towards the small-world type, and discover that shortcut links amongst distant neurons hinder the emergence of coherent spiral waves irrespective of the transmission delay length. The presented results thus provide insights that could facilitate the understanding of the effects of information transmission delay in realistic neuronal networks.

  13. Effects of spike-time-dependent plasticity on the stochastic resonance of small-world neuronal networks

    Energy Technology Data Exchange (ETDEWEB)

    Yu, Haitao; Guo, Xinmeng; Wang, Jiang, E-mail: jiangwang@tju.edu.cn; Deng, Bin; Wei, Xile [School of Electrical Engineering and Automation, Tianjin University, Tianjin 300072 (China)

    2014-09-01

    The phenomenon of stochastic resonance in Newman-Watts small-world neuronal networks is investigated when the strength of synaptic connections between neurons is adaptively adjusted by spike-time-dependent plasticity (STDP). It is shown that, irrespective of whether the synaptic connectivity is fixed or adaptive, the phenomenon of stochastic resonance occurs. The efficiency of network stochastic resonance can be largely enhanced by STDP in the coupling process. In particular, the resonance for adaptive coupling can reach a much larger value than that for the fixed one when the noise intensity is small or intermediate. STDP with dominant depression and a small temporal window ratio is more efficient for the transmission of a weak external signal in small-world neuronal networks. In addition, we demonstrate that the effect of stochastic resonance can be further improved via fine-tuning of the average coupling strength of the adaptive network. Furthermore, the small-world topology can significantly affect the stochastic resonance of excitable neuronal networks. It is found that there exists an optimal probability of adding links at which the noise-induced transmission of a weak periodic signal peaks.
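
    For reference, a minimal pair-based STDP rule of the kind used above to adjust synaptic strengths; "dominant depression" is taken here to mean that the depression window integrates to more than the potentiation window, and all parameter values are illustrative rather than those of the paper:

    # Pair-based STDP: potentiation for pre-before-post spike pairs, depression
    # otherwise; here A_minus * tau_minus > A_plus * tau_plus (dominant depression).
    import numpy as np

    def stdp_dw(delta_t, A_plus=0.01, A_minus=0.012, tau_plus=15.0, tau_minus=30.0):
        """Weight change for a single spike pair, delta_t = t_post - t_pre (ms)."""
        if delta_t >= 0:
            return A_plus * np.exp(-delta_t / tau_plus)
        return -A_minus * np.exp(delta_t / tau_minus)

    for dt_ms in (-40, -10, -1, 1, 10, 40):
        print(f"delta_t = {dt_ms:+4d} ms  ->  dw = {stdp_dw(dt_ms):+.4f}")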

  14. Effects of spike-time-dependent plasticity on the stochastic resonance of small-world neuronal networks

    International Nuclear Information System (INIS)

    Yu, Haitao; Guo, Xinmeng; Wang, Jiang; Deng, Bin; Wei, Xile

    2014-01-01

    The phenomenon of stochastic resonance in Newman-Watts small-world neuronal networks is investigated when the strength of synaptic connections between neurons is adaptively adjusted by spike-time-dependent plasticity (STDP). It is shown that, irrespective of whether the synaptic connectivity is fixed or adaptive, the phenomenon of stochastic resonance occurs. The efficiency of network stochastic resonance can be largely enhanced by STDP in the coupling process. In particular, the resonance for adaptive coupling can reach a much larger value than that for the fixed one when the noise intensity is small or intermediate. STDP with dominant depression and a small temporal window ratio is more efficient for the transmission of a weak external signal in small-world neuronal networks. In addition, we demonstrate that the effect of stochastic resonance can be further improved via fine-tuning of the average coupling strength of the adaptive network. Furthermore, the small-world topology can significantly affect the stochastic resonance of excitable neuronal networks. It is found that there exists an optimal probability of adding links at which the noise-induced transmission of a weak periodic signal peaks.

  15. Exploring biological network structure with clustered random networks

    Directory of Open Access Journals (Sweden)

    Bansal Shweta

    2009-12-01

    Full Text Available Abstract Background Complex biological systems are often modeled as networks of interacting units. Networks of biochemical interactions among proteins, epidemiological contacts among hosts, and trophic interactions in ecosystems, to name a few, have provided useful insights into the dynamical processes that shape and traverse these systems. The degrees of nodes (numbers of interactions) and the extent of clustering (the tendency for a set of three nodes to be interconnected) are two of many well-studied network properties that can fundamentally shape a system. Disentangling the interdependent effects of the various network properties, however, can be difficult. Simple network models can help us quantify the structure of empirical networked systems and understand the impact of various topological properties on dynamics. Results Here we develop and implement a new Markov chain simulation algorithm to generate simple, connected random graphs that have a specified degree sequence and level of clustering, but are random in all other respects. The implementation of the algorithm (ClustRNet: Clustered Random Networks) provides the generation of random graphs optimized according to a local or global, and relative or absolute measure of clustering. We compare our algorithm to other similar methods and show that ours more successfully produces desired network characteristics. Finding appropriate null models is crucial in bioinformatics research, and is often difficult, particularly for biological networks. As we demonstrate, the networks generated by ClustRNet can serve as random controls when investigating the impacts of complex network features beyond the byproduct of degree and clustering in empirical networks. Conclusion ClustRNet generates ensembles of graphs of specified edge structure and clustering. These graphs allow for systematic study of the impacts of connectivity and redundancies on network function and dynamics. This process is a key step in
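
    The sketch below conveys the general idea of degree-preserving randomization toward a clustering target using greedy double-edge swaps; it is not the Markov chain algorithm implemented in ClustRNet (which also samples uniformly and can enforce connectedness), only an illustration of the constraint being targeted:

    # Greedy double-edge swaps that preserve the degree sequence and never let
    # global clustering (transitivity) decrease; a simplified stand-in, not ClustRNet.
    import networkx as nx
    import random

    def raise_clustering(G, target, n_trials=5000, seed=4):
        random.seed(seed)
        for _ in range(n_trials):
            (a, b), (c, d) = random.sample(list(G.edges()), 2)
            if len({a, b, c, d}) < 4 or G.has_edge(a, d) or G.has_edge(c, b):
                continue
            before = nx.transitivity(G)
            G.remove_edges_from([(a, b), (c, d)])
            G.add_edges_from([(a, d), (c, b)])
            after = nx.transitivity(G)
            if after < before:
                # revert swaps that lower clustering; degrees are preserved either way
                G.remove_edges_from([(a, d), (c, b)])
                G.add_edges_from([(a, b), (c, d)])
            elif after >= target:
                break
        return G

    G = nx.gnm_random_graph(100, 300, seed=4)
    print("clustering before:", round(nx.transitivity(G), 3))
    G = raise_clustering(G, target=0.25)
    print("clustering after: ", round(nx.transitivity(G), 3))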

  16. Spatial coherence resonance and spatial pattern transition induced by the decrease of inhibitory effect in a neuronal network

    Science.gov (United States)

    Tao, Ye; Gu, Huaguang; Ding, Xueli

    2017-10-01

    Spiral waves were observed in a biological experiment on rat brain cortex with the application of carbachol and bicuculline, which block the inhibitory coupling from interneurons to pyramidal neurons. To simulate the experimental spiral waves, a two-dimensional neuronal network composed of pyramidal neurons and inhibitory interneurons was built. As the percentage of active inhibitory interneurons decreases, the random-like spatial patterns change first to spiral waves and then to random-like spatial patterns or nearly synchronous behaviors. The spiral waves appear at a low percentage of inhibitory interneurons, which matches the experimental condition that inhibitory couplings of the interneurons were blocked. The spiral waves exhibit a higher order degree, or signal-to-noise ratio (SNR), characterized by the spatial structure function, than both the random-like spatial patterns and the nearly synchronous behaviors, which shows that changes in the percentage of active inhibitory interneurons can induce spatial coherence resonance-like behaviors. In addition, the relationship between the coherence degree and the spatial structures of the spiral waves is identified. The results not only present a possible and reasonable interpretation of the spiral waves observed in the biological experiment on the brain cortex with disinhibition, but also reveal that the spiral waves exhibit a more ordered spatial pattern.

  17. Hierarchical thinking in network biology: the unbiased modularization of biochemical networks.

    Science.gov (United States)

    Papin, Jason A; Reed, Jennifer L; Palsson, Bernhard O

    2004-12-01

    As reconstructed biochemical reaction networks continue to grow in size and scope, there is a growing need to describe the functional modules within them. Such modules facilitate the study of biological processes by deconstructing complex biological networks into conceptually simple entities. The definition of network modules is often based on intuitive reasoning. As an alternative, methods are being developed for defining biochemical network modules in an unbiased fashion. These unbiased network modules are mathematically derived from the structure of the whole network under consideration.

  18. Neural dynamics as sampling: a model for stochastic computation in recurrent networks of spiking neurons.

    Science.gov (United States)

    Buesing, Lars; Bill, Johannes; Nessler, Bernhard; Maass, Wolfgang

    2011-11-01

    The organization of computations in networks of spiking neurons in the brain is still largely unknown, in particular in view of the inherently stochastic features of their firing activity and the experimentally observed trial-to-trial variability of neural systems in the brain. In principle there exists a powerful computational framework for stochastic computations, probabilistic inference by sampling, which can explain a large number of macroscopic experimental data in neuroscience and cognitive science. But it has turned out to be surprisingly difficult to create a link between these abstract models for stochastic computations and more detailed models of the dynamics of networks of spiking neurons. Here we create such a link and show that under some conditions the stochastic firing activity of networks of spiking neurons can be interpreted as probabilistic inference via Markov chain Monte Carlo (MCMC) sampling. Since common methods for MCMC sampling in distributed systems, such as Gibbs sampling, are inconsistent with the dynamics of spiking neurons, we introduce a different approach based on non-reversible Markov chains that is able to reflect inherent temporal processes of spiking neuronal activity through a suitable choice of random variables. We propose a neural network model and show by a rigorous theoretical analysis that its neural activity implements MCMC sampling of a given distribution, both for the case of discrete and continuous time. This provides a step towards closing the gap between abstract functional models of cortical computation and more detailed models of networks of spiking neurons.
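
    As a baseline for the sampling viewpoint described above, the sketch below runs ordinary Gibbs sampling over binary units with symmetric weights. Note that the paper argues precisely that spiking dynamics call for a non-reversible chain instead, so this is only the simpler reference construction, with arbitrary weights:

    # Gibbs sampling of a Boltzmann distribution with binary "neuron" states;
    # a reversible-chain baseline, not the paper's non-reversible construction.
    import numpy as np

    rng = np.random.default_rng(5)
    n = 5
    W = rng.normal(0, 0.8, (n, n)); W = (W + W.T) / 2; np.fill_diagonal(W, 0.0)
    b = rng.normal(0, 0.5, n)
    state = rng.integers(0, 2, n)

    counts = np.zeros(n)
    n_sweeps = 10000
    for _ in range(n_sweeps):
        for i in range(n):
            # conditional P(s_i = 1 | all other units) for energy E = -0.5 s'Ws - b's
            p_on = 1.0 / (1.0 + np.exp(-(W[i] @ state + b[i])))
            state[i] = rng.random() < p_on
        counts += state

    print("empirical P(s_i = 1):", np.round(counts / n_sweeps, 3))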

  19. From Neurons to Newtons

    DEFF Research Database (Denmark)

    Nielsen, Bjørn Gilbert

    2001-01-01

    proteins generate forces, to the macroscopic levels where overt arm movements are voluntarily controlled within an unpredictable environment by legions of neurons firing in orderly fashion. An extensive computer simulation system has been developed for this thesis, which at present contains a neural...... network scripting language for specifying arbitrary neural architectures, definition files for detailed spinal networks, various biologically realistic models of neurons, and dynamic synapses. Also included are structurally accurate models of intrafusal and extra-fusal muscle fibers and a general body...... that an explicit function may be derived which expresses the force that the spindle contractile elements must produce to exactly counter spindle unloading during muscle shortening. This information was used to calculate the corresponding "optimal" γ-motoneuronal activity level. For some simple arm movement tasks...

  20. Network Reconstruction of Dynamic Biological Systems

    OpenAIRE

    Asadi, Behrang

    2013-01-01

    Inference of network topology from experimental data is a central endeavor in biology, since knowledge of the underlying signaling mechanisms is a requirement for understanding biological phenomena. As one of the most important tools in the bioinformatics area, the development of methods to reconstruct biological networks has attracted remarkable attention in the current decade. Integration of different data types can lead to remarkable improvements in our ability to identify the connectivity of differe...

  1. Transition Dynamics of a Dentate Gyrus-CA3 Neuronal Network during Temporal Lobe Epilepsy

    Directory of Open Access Journals (Sweden)

    Liyuan Zhang

    2017-07-01

    Full Text Available In temporal lobe epilepsy (TLE), the variation of chemical receptor expression underlies the basis of neural network activity shifts, resulting in neuronal hyperexcitability and epileptiform discharges. However, dynamical mechanisms involved in the transitions of TLE are not fully understood, because of the neuronal diversity and the indeterminacy of network connection. Hence, based on Hodgkin–Huxley (HH) type neurons and Pinsky–Rinzel (PR) type neurons coupling with glutamatergic and GABAergic synaptic connections respectively, we propose a computational framework which contains the dentate gyrus (DG) region and the CA3 region. By regulating the concentration range of N-methyl-D-aspartate-type glutamate receptor (NMDAR), we demonstrate that the pyramidal neuron can generate transitions from interictal to seizure discharges. This suggests that enhanced endogenous activity of NMDAR contributes to excitability in the pyramidal neuron. Moreover, we conclude that excitatory discharges in the CA3 region vary considerably on account of the excitatory currents produced by the excitatory pyramidal neuron. Interestingly, by changing the backprojection connection, we find that a glutamatergic type backprojection can promote the dominant frequency of firings and further motivate excitatory counterpropagation from the CA3 region to the DG region. However, a GABAergic type backprojection can reduce the firing rate and block morbid counterpropagation, which may be a factor in the termination of TLE. In addition, the neuronal-diversity-dominated network shows weak correlation with different backprojections. Our modeling and simulation studies provide new insights into the mechanisms of seizure generation and connectionism in the local hippocampus, along with the synaptic mechanisms of this disease.

  2. Transition Dynamics of a Dentate Gyrus-CA3 Neuronal Network during Temporal Lobe Epilepsy.

    Science.gov (United States)

    Zhang, Liyuan; Fan, Denggui; Wang, Qingyun

    2017-01-01

    In temporal lobe epilepsy (TLE), the variation of chemical receptor expression underlies the basis of neural network activity shifts, resulting in neuronal hyperexcitability and epileptiform discharges. However, dynamical mechanisms involved in the transitions of TLE are not fully understood, because of the neuronal diversity and the indeterminacy of network connection. Hence, based on Hodgkin-Huxley (HH) type neurons and Pinsky-Rinzel (PR) type neurons coupling with glutamatergic and GABAergic synaptic connections respectively, we propose a computational framework which contains the dentate gyrus (DG) region and the CA3 region. By regulating the concentration range of N-methyl-D-aspartate-type glutamate receptor (NMDAR), we demonstrate that the pyramidal neuron can generate transitions from interictal to seizure discharges. This suggests that enhanced endogenous activity of NMDAR contributes to excitability in the pyramidal neuron. Moreover, we conclude that excitatory discharges in the CA3 region vary considerably on account of the excitatory currents produced by the excitatory pyramidal neuron. Interestingly, by changing the backprojection connection, we find that a glutamatergic type backprojection can promote the dominant frequency of firings and further motivate excitatory counterpropagation from the CA3 region to the DG region. However, a GABAergic type backprojection can reduce the firing rate and block morbid counterpropagation, which may be a factor in the termination of TLE. In addition, the neuronal-diversity-dominated network shows weak correlation with different backprojections. Our modeling and simulation studies provide new insights into the mechanisms of seizure generation and connectionism in the local hippocampus, along with the synaptic mechanisms of this disease.

  3. BioNSi: A Discrete Biological Network Simulator Tool.

    Science.gov (United States)

    Rubinstein, Amir; Bracha, Noga; Rudner, Liat; Zucker, Noga; Sloin, Hadas E; Chor, Benny

    2016-08-05

    Modeling and simulation of biological networks is an effective and widely used research methodology. The Biological Network Simulator (BioNSi) is a tool for modeling biological networks and simulating their discrete-time dynamics, implemented as a Cytoscape App. BioNSi includes a visual representation of the network that enables researchers to construct, set the parameters, and observe network behavior under various conditions. To construct a network instance in BioNSi, only partial, qualitative biological data suffices. The tool is aimed for use by experimental biologists and requires no prior computational or mathematical expertise. BioNSi is freely available at http://bionsi.wix.com/bionsi , where a complete user guide and a step-by-step manual can also be found.

  4. Biological oscillations for learning walking coordination: dynamic recurrent neural network functionally models physiological central pattern generator.

    Science.gov (United States)

    Hoellinger, Thomas; Petieau, Mathieu; Duvinage, Matthieu; Castermans, Thierry; Seetharaman, Karthik; Cebolla, Ana-Maria; Bengoetxea, Ana; Ivanenko, Yuri; Dan, Bernard; Cheron, Guy

    2013-01-01

    The existence of dedicated neuronal modules such as those organized in the cerebral cortex, thalamus, basal ganglia, cerebellum, or spinal cord raises the question of how these functional modules are coordinated for appropriate motor behavior. Study of human locomotion offers an interesting field for addressing this central question. The coordination of the elevation of the 3 leg segments under a planar covariation rule (Borghese et al., 1996) was recently modeled (Barliya et al., 2009) by phase-adjusted simple oscillators shedding new light on the understanding of the central pattern generator (CPG) processing relevant oscillation signals. We describe the use of a dynamic recurrent neural network (DRNN) mimicking the natural oscillatory behavior of human locomotion for reproducing the planar covariation rule in both legs at different walking speeds. Neural network learning was based on sinusoid signals integrating frequency and amplitude features of the first three harmonics of the sagittal elevation angles of the thigh, shank, and foot of each lower limb. We verified the biological plausibility of the neural networks. Best results were obtained with oscillations extracted from the first three harmonics in comparison to oscillations outside the harmonic frequency peaks. Physiological replication steadily increased with the number of neuronal units from 1 to 80, where similarity index reached 0.99. Analysis of synaptic weighting showed that the proportion of inhibitory connections consistently increased with the number of neuronal units in the DRNN. This emerging property in the artificial neural networks resonates with recent advances in neurophysiology of inhibitory neurons that are involved in central nervous system oscillatory activities. The main message of this study is that this type of DRNN may offer a useful model of physiological central pattern generator for gaining insights in basic research and developing clinical applications.

  5. The Latin American Biological Dosimetry Network (LBDNet).

    Science.gov (United States)

    García, O; Di Giorgio, M; Radl, A; Taja, M R; Sapienza, C E; Deminge, M M; Fernández Rearte, J; Stuck Oliveira, M; Valdivia, P; Lamadrid, A I; González, J E; Romero, I; Mandina, T; Guerrero-Carbajal, C; ArceoMaldonado, C; Cortina Ramírez, G E; Espinoza, M; Martínez-López, W; Di Tomasso, M

    2016-09-01

    Biological Dosimetry is a necessary support for national radiation protection programmes and emergency response schemes. The Latin American Biological Dosimetry Network (LBDNet) was formally founded in 2007 to provide early biological dosimetry assistance in case of radiation emergencies in the Latin American Region. Here are presented the main topics considered in the foundational document of the network, which comprise: mission, partners, concept of operation, including the mechanism to request support for biological dosimetry assistance in the region, and the network capabilities. The process for network activation and the role of the coordinating laboratory during biological dosimetry emergency response is also presented. This information is preceded by historical remarks on biological dosimetry cooperation in Latin America. A summary of the main experimental and practical results already obtained by the LBDNet is also included.

  6. Clustering predicts memory performance in networks of spiking and non-spiking neurons

    Directory of Open Access Journals (Sweden)

    Weiliang Chen

    2011-03-01

    Full Text Available The problem we address in this paper is that of finding effective and parsimonious patterns of connectivity in sparse associative memories. This problem must be addressed in real neuronal systems, so that results in artificial systems could throw light on real systems. We show that there are efficient patterns of connectivity and that these patterns are effective in models with either spiking or non-spiking neurons. This suggests that there may be some underlying general principles governing good connectivity in such networks. We also show that the clustering of the network, measured by Clustering Coefficient, has a strong linear correlation to the performance of associative memory. This result is important since a purely static measure of network connectivity appears to determine an important dynamic property of the network.

  7. Chimera states in a multilayer network of coupled and uncoupled neurons

    Science.gov (United States)

    Majhi, Soumen; Perc, Matjaž; Ghosh, Dibakar

    2017-07-01

    We study the emergence of chimera states in a multilayer neuronal network, where one layer is composed of coupled and the other layer of uncoupled neurons. Through the multilayer structure, the layer with coupled neurons acts as the medium by means of which neurons in the uncoupled layer share information in spite of the absence of physical connections among them. Neurons in the coupled layer are connected with electrical synapses, while across the two layers, neurons are connected through chemical synapses. In both layers, the dynamics of each neuron is described by the Hindmarsh-Rose square wave bursting dynamics. We show that the presence of two different types of connecting synapses within and between the two layers, together with the multilayer network structure, plays a key role in the emergence of between-layer synchronous chimera states and patterns of synchronous clusters. In particular, we find that these chimera states can emerge in the coupled layer regardless of the range of electrical synapses. Even in all-to-all and nearest-neighbor coupling within the coupled layer, we observe qualitatively identical between-layer chimera states. Moreover, we show that the role of information transmission delay between the two layers must not be neglected, and we obtain precise parameter bounds at which chimera states can be observed. The expansion of the chimera region and annihilation of cluster and fully coherent states in the parameter plane for increasing values of inter-layer chemical synaptic time delay are illustrated using effective range measurements. These results are discussed in the light of neuronal evolution, where the coexistence of coherent and incoherent dynamics during the developmental stage is particularly likely.

  8. Nicotinic modulation of neuronal networks: from receptors to cognition

    NARCIS (Netherlands)

    Mansvelder, H.D.; van Aerde, K.I.; Couey, J.J.; Brussaard, A.B.

    2006-01-01

    Rationale: Nicotine affects many aspects of human cognition, including attention and memory. Activation of nicotinic acetylcholine receptors (nAChRs) in neuronal networks modulates activity and information processing during cognitive tasks, which can be observed in electroencephalograms (EEGs) and

  9. A generic algorithm for layout of biological networks.

    Science.gov (United States)

    Schreiber, Falk; Dwyer, Tim; Marriott, Kim; Wybrow, Michael

    2009-11-12

    Biological networks are widely used to represent processes in biological systems and to capture interactions and dependencies between biological entities. Their size and complexity is steadily increasing due to the ongoing growth of knowledge in the life sciences. To aid understanding of biological networks several algorithms for laying out and graphically representing networks and network analysis results have been developed. However, current algorithms are specialized to particular layout styles and therefore different algorithms are required for each kind of network and/or style of layout. This increases implementation effort and means that new algorithms must be developed for new layout styles. Furthermore, additional effort is necessary to compose different layout conventions in the same diagram. Also the user cannot usually customize the placement of nodes to tailor the layout to their particular need or task and there is little support for interactive network exploration. We present a novel algorithm to visualize different biological networks and network analysis results in meaningful ways depending on network types and analysis outcome. Our method is based on constrained graph layout and we demonstrate how it can handle the drawing conventions used in biological networks. The presented algorithm offers the ability to produce many of the fundamental popular drawing styles while allowing the flexibility of constraints to further tailor these layouts.

  10. Biophysical synaptic dynamics in an analog VLSI network of Hodgkin-Huxley neurons.

    Science.gov (United States)

    Yu, Theodore; Cauwenberghs, Gert

    2009-01-01

    We study synaptic dynamics in a biophysical network of four coupled spiking neurons implemented in an analog VLSI silicon microchip. The four neurons implement a generalized Hodgkin-Huxley model with individually configurable rate-based kinetics of opening and closing of Na+ and K+ ion channels. The twelve synapses implement a rate-based first-order kinetic model of neurotransmitter and receptor dynamics, accounting for NMDA and non-NMDA type chemical synapses. The implemented models on the chip are fully configurable by 384 parameters accounting for conductances, reversal potentials, and pre/post-synaptic voltage-dependence of the channel kinetics. We describe the models and present experimental results from the chip characterizing single neuron dynamics, single synapse dynamics, and multi-neuron network dynamics showing phase-locking behavior as a function of synaptic coupling strength. The 3mm x 3mm microchip consumes 1.29 mW power making it promising for applications including neuromorphic modeling and neural prostheses.

  11. Motif statistics and spike correlations in neuronal networks

    International Nuclear Information System (INIS)

    Hu, Yu; Shea-Brown, Eric; Trousdale, James; Josić, Krešimir

    2013-01-01

    Motifs are patterns of subgraphs of complex networks. We studied the impact of such patterns of connectivity on the level of correlated, or synchronized, spiking activity among pairs of cells in a recurrent network of integrate and fire neurons. For a range of network architectures, we find that the pairwise correlation coefficients, averaged across the network, can be closely approximated using only three statistics of network connectivity. These are the overall network connection probability and the frequencies of two second order motifs: diverging motifs, in which one cell provides input to two others, and chain motifs, in which two cells are connected via a third intermediary cell. Specifically, the prevalence of diverging and chain motifs tends to increase correlation. Our method is based on linear response theory, which enables us to express spiking statistics using linear algebra, and a resumming technique, which extrapolates from second order motifs to predict the overall effect of coupling on network correlation. Our motif-based results seek to isolate the effect of network architecture perturbatively from a known network state. (paper)
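
    The two second-order motifs highlighted above can be counted directly from a binary adjacency matrix and compared with the chance level implied by the overall connection probability; normalization conventions differ between papers, so the ratios below are only one reasonable choice:

    # Count diverging motifs (one source feeding two targets) and chain motifs
    # (two-step directed paths) in a directed random graph, relative to chance.
    import numpy as np

    rng = np.random.default_rng(6)
    N, p = 200, 0.1
    A = (rng.random((N, N)) < p).astype(float)    # A[i, j] = 1 means i -> j
    np.fill_diagonal(A, 0.0)

    p_hat = A.sum() / (N * (N - 1))               # empirical connection probability
    out_deg = A.sum(axis=1)
    n_div = np.sum(out_deg * (out_deg - 1)) / 2   # unordered target pairs per source
    M = A @ A
    n_chain = M.sum() - np.trace(M)               # i -> k -> j with i != j

    # chance expectations for this p_hat over distinct node triples
    exp_div = N * (N - 1) * (N - 2) * p_hat**2 / 2
    exp_chain = N * (N - 1) * (N - 2) * p_hat**2
    print(f"p = {p_hat:.3f}, diverging/chance = {n_div/exp_div:.2f}, chain/chance = {n_chain/exp_chain:.2f}")

    For an Erdős–Rényi graph both ratios are close to one; structured networks with an excess of diverging or chain motifs would push these ratios, and with them the average correlation, upward.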

  12. Efficient digital implementation of a conductance-based globus pallidus neuron and the dynamics analysis

    Science.gov (United States)

    Yang, Shuangming; Wei, Xile; Deng, Bin; Liu, Chen; Li, Huiyan; Wang, Jiang

    2018-03-01

    Balance between the biological plausibility of dynamical activities and computational efficiency is one of the challenging problems in computational neuroscience and neural system engineering. This paper proposes a set of efficient methods for the hardware realization of the conductance-based neuron model with relevant dynamics, targeting the reproduction of biological behaviors with low-cost implementation on a digital programmable platform, which can be applied to a wide range of conductance-based neuron models. Modified GP neuron models for efficient hardware implementation are presented to reproduce reliable pallidal dynamics, which decode the information of the basal ganglia and regulate movement-disorder-related voluntary activities. Implementation results on a field-programmable gate array (FPGA) demonstrate that the proposed techniques and models can reduce the resource cost significantly and reproduce the biological dynamics accurately. In addition, the biological behaviors with weak network coupling are explored on the proposed platform, and a theoretical analysis is also made to investigate the biological characteristics of the structured pallidal oscillator and network. The implementation techniques provide an essential step towards the large-scale neural networks needed to explore the dynamical mechanisms in real time. Furthermore, the proposed methodology makes the FPGA-based system a powerful platform for the investigation of neurodegenerative diseases and real-time control of bio-inspired neuro-robotics.

  13. Efficient network reconstruction from dynamical cascades identifies small-world topology of neuronal avalanches.

    Directory of Open Access Journals (Sweden)

    Sinisa Pajevic

    2009-01-01

    Full Text Available Cascading activity is commonly found in complex systems with directed interactions such as metabolic networks, neuronal networks, or disease spreading in social networks. Substantial insight into a system's organization can be obtained by reconstructing the underlying functional network architecture from the observed activity cascades. Here we focus on Bayesian approaches and reduce their computational demands by introducing the Iterative Bayesian (IB) and Posterior Weighted Averaging (PWA) methods. We introduce a special case of PWA, cast in nonparametric form, which we call the normalized count (NC) algorithm. NC efficiently reconstructs random and small-world functional network topologies and architectures from subcritical, critical, and supercritical cascading dynamics and yields significant improvements over commonly used correlation methods. With experimental data, NC identified a functional and structural small-world topology and its corresponding traffic in cortical networks with neuronal avalanche dynamics.
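
    A rough sketch of the normalized-count flavor of reconstruction: score a candidate edge by how often the target is active one step after the source, normalized by the source's overall activity, then keep the top-scoring pairs. The cascade model, scoring rule, and evaluation below are simplified stand-ins, not the NC algorithm as derived in the paper:

    # Generate cascades on a known small-world graph, then reconstruct edges from
    # normalized one-step co-activation counts and measure the overlap with truth.
    import numpy as np
    import networkx as nx

    rng = np.random.default_rng(7)
    G = nx.watts_strogatz_graph(60, 4, 0.1, seed=7)
    A_true = nx.to_numpy_array(G)
    N = len(G)

    def run_cascade(p_spread=0.3, p_seed=0.05, steps=15):
        state = (rng.random(N) < p_seed).astype(float)
        history = [state]
        for _ in range(steps):
            drive = A_true.T @ state                          # inputs from active neighbors
            state = (rng.random(N) < 1 - (1 - p_spread) ** drive).astype(float)
            history.append(state)
        return np.array(history)

    count = np.zeros((N, N)); active = np.zeros(N)
    for _ in range(400):
        h = run_cascade()
        for t in range(len(h) - 1):
            count += np.outer(h[t], h[t + 1])                 # i active at t, j active at t+1
            active += h[t]
    score = count / np.maximum(active[:, None], 1.0)
    np.fill_diagonal(score, 0.0)

    # keep as many directed edges as the true (symmetric) graph has, then compare
    k = int(A_true.sum())
    top = np.argsort(score, axis=None)[-k:]
    A_hat = np.zeros_like(A_true); A_hat.flat[top] = 1.0
    overlap = (A_hat * A_true).sum() / k
    print(f"fraction of true edges recovered: {overlap:.2f}")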

  14. Single or multiple synchronization transitions in scale-free neuronal networks with electrical or chemical coupling

    International Nuclear Information System (INIS)

    Hao Yinghang; Gong, Yubing; Wang Li; Ma Xiaoguang; Yang Chuanlu

    2011-01-01

    Research highlights: → Single synchronization transition for gap-junctional coupling. → Multiple synchronization transitions for chemical synaptic coupling. → Gap junctions and chemical synapses have different impacts on synchronization transition. → Chemical synapses may play a dominant role in neurons' information processing. - Abstract: In this paper, we have studied time delay- and coupling strength-induced synchronization transitions in scale-free modified Hodgkin-Huxley (MHH) neuron networks with gap-junctions and chemical synaptic coupling. It is shown that the synchronization transitions are much different for these two coupling types. For gap-junctions, the neurons exhibit a single synchronization transition with time delay and coupling strength, while for chemical synapses, there are multiple synchronization transitions with time delay, and the synchronization transition with coupling strength is dependent on the time delay lengths. For short delays we observe a single synchronization transition, whereas for long delays the neurons exhibit multiple synchronization transitions as the coupling strength is varied. These results show that gap junctions and chemical synapses have different impacts on the pattern formation and synchronization transitions of the scale-free MHH neuronal networks, and chemical synapses, compared to gap junctions, may play a dominant and more active function in the firing activity of the networks. These findings would be helpful for further understanding the roles of gap junctions and chemical synapses in the firing dynamics of neuronal networks.

  15. Single or multiple synchronization transitions in scale-free neuronal networks with electrical or chemical coupling

    Energy Technology Data Exchange (ETDEWEB)

    Hao Yinghang [School of Physics, Ludong University, Yantai 264025 (China); Gong, Yubing, E-mail: gongyubing09@hotmail.co [School of Physics, Ludong University, Yantai 264025 (China); Wang Li; Ma Xiaoguang; Yang Chuanlu [School of Physics, Ludong University, Yantai 264025 (China)

    2011-04-15

    Research highlights: Single synchronization transition for gap-junctional coupling. Multiple synchronization transitions for chemical synaptic coupling. Gap junctions and chemical synapses have different impacts on synchronization transition. Chemical synapses may play a dominant role in neurons' information processing. - Abstract: In this paper, we have studied time delay- and coupling strength-induced synchronization transitions in scale-free modified Hodgkin-Huxley (MHH) neuron networks with gap-junctions and chemical synaptic coupling. It is shown that the synchronization transitions differ markedly between these two coupling types. For gap-junctions, the neurons exhibit a single synchronization transition with time delay and coupling strength, while for chemical synapses, there are multiple synchronization transitions with time delay, and the synchronization transition with coupling strength is dependent on the time delay lengths. For short delays we observe a single synchronization transition, whereas for long delays the neurons exhibit multiple synchronization transitions as the coupling strength is varied. These results show that gap junctions and chemical synapses have different impacts on the pattern formation and synchronization transitions of the scale-free MHH neuronal networks, and chemical synapses, compared to gap junctions, may play a dominant and more active role in the firing activity of the networks. These findings would be helpful for further understanding the roles of gap junctions and chemical synapses in the firing dynamics of neuronal networks.

  16. SuperNeurons: Dynamic GPU Memory Management for Training Deep Neural Networks

    OpenAIRE

    Wang, Linnan; Ye, Jinmian; Zhao, Yiyang; Wu, Wei; Li, Ang; Song, Shuaiwen Leon; Xu, Zenglin; Kraska, Tim

    2018-01-01

    Going deeper and wider in neural architectures improves accuracy, while the limited GPU DRAM places an undesired restriction on the network design domain. Deep Learning (DL) practitioners either need to change to less desirable network architectures, or nontrivially dissect a network across multiple GPUs. These constraints distract DL practitioners from concentrating on their original machine learning tasks. We present SuperNeurons: a dynamic GPU memory scheduling runtime to enable the network training far be...

  17. A spatially resolved network spike in model neuronal cultures reveals nucleation centers, circular traveling waves and drifting spiral waves.

    Science.gov (United States)

    Paraskevov, A V; Zendrikov, D K

    2017-03-23

    We show that in model neuronal cultures, where the probability of interneuronal connection formation decreases exponentially with increasing distance between the neurons, there exists a small number of spatial nucleation centers of a network spike, from which the synchronous spiking activity starts propagating through the network, typically in the form of circular traveling waves. The number of nucleation centers and their spatial locations are unique and unchanged for a given realization of the neuronal network but differ between networks. In contrast, if the probability of interneuronal connection formation is independent of the distance between neurons, the nucleation centers do not arise and the synchronization of spiking activity during a network spike occurs spatially uniformly throughout the network. One can therefore conclude that the spatial proximity of connections between neurons is important for the formation of nucleation centers. It is also shown that fluctuations in the spatial density of neurons, under the random homogeneous distribution typical of in vitro experiments, do not determine the locations of the nucleation centers. The simulation results are qualitatively consistent with the experimental observations.

  18. Knowledge-fused differential dependency network models for detecting significant rewiring in biological networks.

    Science.gov (United States)

    Tian, Ye; Zhang, Bai; Hoffman, Eric P; Clarke, Robert; Zhang, Zhen; Shih, Ie-Ming; Xuan, Jianhua; Herrington, David M; Wang, Yue

    2014-07-24

    Modeling biological networks serves as both a major goal and an effective tool of systems biology in studying mechanisms that orchestrate the activities of gene products in cells. Biological networks are context-specific and dynamic in nature. To systematically characterize the selectively activated regulatory components and mechanisms, modeling tools must be able to effectively distinguish significant rewiring from random background fluctuations. While differential networks cannot be constructed by existing knowledge alone, novel incorporation of prior knowledge into data-driven approaches can improve the robustness and biological relevance of network inference. However, the major unresolved roadblocks include: big solution space but a small sample size; highly complex networks; imperfect prior knowledge; missing significance assessment; and heuristic structural parameter learning. To address these challenges, we formulated the inference of differential dependency networks that incorporate both conditional data and prior knowledge as a convex optimization problem, and developed an efficient learning algorithm to jointly infer the conserved biological network and the significant rewiring across different conditions. We used a novel sampling scheme to estimate the expected error rate due to "random" knowledge. Based on that scheme, we developed a strategy that fully exploits the benefit of this data-knowledge integrated approach. We demonstrated and validated the principle and performance of our method using synthetic datasets. We then applied our method to yeast cell line and breast cancer microarray data and obtained biologically plausible results. The open-source R software package and the experimental data are freely available at http://www.cbil.ece.vt.edu/software.htm. Experiments on both synthetic and real data demonstrate the effectiveness of the knowledge-fused differential dependency network in revealing the statistically significant rewiring in biological

  19. Biologically plausible learning in neural networks: a lesson from bacterial chemotaxis.

    Science.gov (United States)

    Shimansky, Yury P

    2009-12-01

    Learning processes in the brain are usually associated with plastic changes made to optimize the strength of connections between neurons. Although many details related to biophysical mechanisms of synaptic plasticity have been discovered, it is unclear how the concurrent performance of adaptive modifications in a huge number of spatial locations is organized to minimize a given objective function. Since direct experimental observation of even a relatively small subset of such changes is not feasible, computational modeling is an indispensable investigation tool for solving this problem. However, the conventional method of error back-propagation (EBP) employed for optimizing synaptic weights in artificial neural networks is not biologically plausible. This study based on computational experiments demonstrated that such optimization can be performed rather efficiently using the same general method that bacteria employ for moving closer to an attractant or away from a repellent. With regard to neural network optimization, this method consists of regulating the probability of an abrupt change in the direction of synaptic weight modification according to the temporal gradient of the objective function. Neural networks utilizing this method (regulation of modification probability, RMP) can be viewed as analogous to swimming in the multidimensional space of their parameters in the flow of biochemical agents carrying information about the optimality criterion. The efficiency of RMP is comparable to that of EBP, while RMP has several important advantages. Since the biological plausibility of RMP is beyond a reasonable doubt, the RMP concept provides a constructive framework for the experimental analysis of learning in natural neural networks.
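
    A minimal Python sketch may help make the RMP idea concrete (the function name, parameter values and the quadratic toy objective below are illustrative assumptions, not taken from the paper): the weight vector drifts along a fixed random direction, and the probability of a "tumble", i.e. of choosing a new random direction, rises when the objective worsens.

      import numpy as np

      rng = np.random.default_rng(0)

      def rmp_optimize(objective, w, steps=5000, eta=0.01,
                       p_min=0.05, p_max=0.9, gain=50.0):
          """Chemotaxis-like optimization sketch: weights drift along a random
          unit direction; the tumbling probability grows when the objective
          worsens (positive temporal gradient), mimicking the RMP scheme."""
          direction = rng.standard_normal(w.size)
          direction /= np.linalg.norm(direction)
          prev = objective(w)
          for _ in range(steps):
              w = w + eta * direction                      # "run" phase
              cur = objective(w)
              grad_t = cur - prev                          # temporal gradient of the objective
              # Map the temporal gradient to a tumbling probability in [p_min, p_max].
              z = np.clip(-gain * grad_t, -50.0, 50.0)
              p_tumble = p_min + (p_max - p_min) / (1.0 + np.exp(z))
              if rng.random() < p_tumble:                  # "tumble": pick a new direction
                  direction = rng.standard_normal(w.size)
                  direction /= np.linalg.norm(direction)
              prev = cur
          return w

      # Toy usage: minimize a quadratic "training error" over ten synaptic weights.
      target = rng.standard_normal(10)
      loss = lambda w: float(np.sum((w - target) ** 2))
      w_opt = rmp_optimize(loss, np.zeros(10))
      print(loss(np.zeros(10)), "->", loss(w_opt))

    In this toy setting the error typically shrinks without any per-weight gradient ever being computed, which is the property that makes the scheme biologically appealing.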

  20. Anti-correlated cortical networks arise from spontaneous neuronal dynamics at slow timescales.

    Science.gov (United States)

    Kodama, Nathan X; Feng, Tianyi; Ullett, James J; Chiel, Hillel J; Sivakumar, Siddharth S; Galán, Roberto F

    2018-01-12

    In the highly interconnected architectures of the cerebral cortex, recurrent intracortical loops disproportionately outnumber thalamo-cortical inputs. These networks are also capable of generating neuronal activity without feedforward sensory drive. It is unknown, however, what spatiotemporal patterns may be solely attributed to intrinsic connections of the local cortical network. Using high-density microelectrode arrays, here we show that in the isolated, primary somatosensory cortex of mice, neuronal firing fluctuates on timescales from milliseconds to tens of seconds. Slower firing fluctuations reveal two spatially distinct neuronal ensembles, which correspond to superficial and deeper layers. These ensembles are anti-correlated: when one fires more, the other fires less and vice versa. This interplay is clearest at timescales of several seconds and is therefore consistent with shifts between active sensing and anticipatory behavioral states in mice.

  1. Dynamical analysis of Parkinsonian state emulated by hybrid Izhikevich neuron models

    Science.gov (United States)

    Liu, Chen; Wang, Jiang; Yu, Haitao; Deng, Bin; Wei, Xile; Li, Huiyan; Loparo, Kenneth A.; Fietkiewicz, Chris

    2015-11-01

    Computational models play a significant role in exploring novel theories to complement the findings of physiological experiments. Various computational models have been developed to reveal the mechanisms underlying brain functions. In particular, in the development of therapies to modulate behavioral and pathological abnormalities, computational models provide the basic foundations to exhibit transitions between physiological and pathological conditions. Considering the significant roles of the intrinsic properties of the globus pallidus and the coupling connections between neurons in determining the firing patterns and the dynamical activities of the basal ganglia neuronal network, we hypothesize that pathological behaviors under the Parkinsonian state may originate from the combined effects of the intrinsic properties of globus pallidus neurons and the synaptic conductances in the whole neuronal network. To establish a computationally efficient network model, a hybrid Izhikevich neuron model is used, owing to its capacity to capture the dynamical characteristics of biological neuronal activities. Detailed analysis of the individual Izhikevich neuron model can assist in understanding the roles of model parameters, which then facilitates the establishment of the basal ganglia-thalamic network model, and contributes to a further exploration of the underlying mechanisms of the Parkinsonian state. Simulation results show that the hybrid Izhikevich neuron model is capable of capturing many of the dynamical properties of the basal ganglia-thalamic neuronal network, such as variations of the firing rates and emergence of synchronous oscillations under the Parkinsonian condition, despite the simplicity of the two-dimensional neuronal model. This suggests that the computationally efficient hybrid Izhikevich neuron model can be used to explore basal ganglia normal and abnormal functions. In particular, it provides an efficient way of emulating the large-scale neuron network
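
    The two-variable Izhikevich model referred to above is compact enough to state directly. The following Python sketch simulates a single neuron with the standard regular-spiking parameter set (a = 0.02, b = 0.2, c = -65, d = 8); the parameter values of the hybrid pallidal variants used in the study are not reproduced here.

      import numpy as np

      def izhikevich(I, T=1000.0, dt=0.5, a=0.02, b=0.2, c=-65.0, d=8.0):
          """Euler simulation of one Izhikevich neuron driven by a constant current I:
          dv/dt = 0.04 v^2 + 5 v + 140 - u + I,   du/dt = a (b v - u),
          with the reset v <- c, u <- u + d whenever v reaches 30 mV."""
          n = int(T / dt)
          v, u = c, b * c
          spikes = []
          for k in range(n):
              v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
              u += dt * a * (b * v - u)
              if v >= 30.0:                     # spike cutoff and reset
                  spikes.append(k * dt)
                  v, u = c, u + d
          return np.array(spikes)

      # Single-cell frequency-current sweep, the kind of analysis used to choose
      # parameters before the cells are wired into a basal ganglia-thalamic network.
      for I in (4.0, 10.0, 20.0):
          rate = len(izhikevich(I))             # spikes per second, since T = 1000 ms
          print(f"I = {I:5.1f} -> {rate} spikes/s")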

  2. Computational modeling of seizure dynamics using coupled neuronal networks: factors shaping epileptiform activity.

    Directory of Open Access Journals (Sweden)

    Sebastien Naze

    2015-05-01

    Full Text Available Epileptic seizure dynamics span multiple scales in space and time. Understanding seizure mechanisms requires identifying the relations between seizure components within and across these scales, together with the analysis of their dynamical repertoire. Mathematical models have been developed to reproduce seizure dynamics across scales ranging from the single neuron to the neural population. In this study, we develop a network model of spiking neurons and systematically investigate the conditions, under which the network displays the emergent dynamic behaviors known from the Epileptor, which is a well-investigated abstract model of epileptic neural activity. This approach allows us to study the biophysical parameters and variables leading to epileptiform discharges at cellular and network levels. Our network model is composed of two neuronal populations, characterized by fast excitatory bursting neurons and regular spiking inhibitory neurons, embedded in a common extracellular environment represented by a slow variable. By systematically analyzing the parameter landscape offered by the simulation framework, we reproduce typical sequences of neural activity observed during status epilepticus. We find that exogenous fluctuations from extracellular environment and electro-tonic couplings play a major role in the progression of the seizure, which supports previous studies and further validates our model. We also investigate the influence of chemical synaptic coupling in the generation of spontaneous seizure-like events. Our results argue towards a temporal shift of typical spike waves with fast discharges as synaptic strengths are varied. We demonstrate that spike waves, including interictal spikes, are generated primarily by inhibitory neurons, whereas fast discharges during the wave part are due to excitatory neurons. Simulated traces are compared with in vivo experimental data from rodents at different stages of the disorder. We draw the conclusion

  3. Reconstruction of biological networks based on life science data integration.

    Science.gov (United States)

    Kormeier, Benjamin; Hippe, Klaus; Arrigo, Patrizio; Töpel, Thoralf; Janowski, Sebastian; Hofestädt, Ralf

    2010-10-27

    For the implementation of the virtual cell, the fundamental question is how to model and simulate complex biological networks. Therefore, based on relevant molecular databases and information systems, biological data integration is an essential step in constructing biological networks. In this paper, we will motivate the applications BioDWH--an integration toolkit for building life science data warehouses, CardioVINEdb--an information system for biological data on cardiovascular disease, and VANESA--a network editor for modeling and simulation of biological networks. Based on this integration process, the system supports the generation of biological network models. A case study of a cardiovascular-disease related gene-regulated biological network is also presented.

  4. Balance of excitation and inhibition determines 1/f power spectrum in neuronal networks.

    Science.gov (United States)

    Lombardi, F; Herrmann, H J; de Arcangelis, L

    2017-04-01

    The 1/f-like decay observed in the power spectrum of electro-physiological signals, along with scale-free statistics of the so-called neuronal avalanches, constitutes evidence of criticality in neuronal systems. Recent in vitro studies have shown that avalanche dynamics at criticality corresponds to a specific balance of excitation and inhibition, thus suggesting that this is a basic feature of the critical state of neuronal networks. In particular, a lack of inhibition significantly alters the temporal structure of the spontaneous avalanche activity and leads to an anomalous abundance of large avalanches. Here, we study the relationship between network inhibition and the scaling exponent β of the power spectral density (PSD) of avalanche activity in a neuronal network model inspired by self-organized criticality. We find that this scaling exponent depends on the percentage of inhibitory synapses and tends to the value β = 1 for a percentage of about 30%. More specifically, β is close to 2, namely, Brownian noise, for purely excitatory networks and decreases towards values in the interval [1, 1.4] as the percentage of inhibitory synapses ranges between 20% and 30%, in agreement with experimental findings. These results indicate that the level of inhibition affects the frequency spectrum of resting brain activity and suggest the analysis of the PSD scaling behavior as a possible tool to study pathological conditions.
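
    For readers who want to check such exponents on their own data, β can be estimated from any activity time series with a Welch periodogram and a log-log fit. The Python sketch below assumes scipy is available; the Brownian-noise test signal is only a stand-in for the model's avalanche activity, chosen because it should return a value of β close to 2, the purely excitatory limit quoted above.

      import numpy as np
      from scipy.signal import welch

      def psd_exponent(x, fs, fmin=1.0, fmax=100.0):
          """Estimate beta in PSD(f) ~ 1/f**beta by a least-squares fit in
          log-log coordinates over the frequency band [fmin, fmax]."""
          f, pxx = welch(x, fs=fs, nperseg=4096)
          band = (f >= fmin) & (f <= fmax)
          slope, _ = np.polyfit(np.log10(f[band]), np.log10(pxx[band]), 1)
          return -slope

      rng = np.random.default_rng(1)
      fs = 1000.0
      x = np.cumsum(rng.standard_normal(int(600 * fs)))   # Brownian test signal
      print(f"estimated beta ~ {psd_exponent(x, fs):.2f}")

    Replacing the test signal with the summed activity of the model at different inhibitory percentages would reproduce the kind of β-versus-inhibition curve discussed above.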

  5. A network of spiking neurons that can represent interval timing: mean field analysis.

    Science.gov (United States)

    Gavornik, Jeffrey P; Shouval, Harel Z

    2011-04-01

    Despite the vital importance of our ability to accurately process and encode temporal information, the underlying neural mechanisms are largely unknown. We have previously described a theoretical framework that explains how temporal representations, similar to those reported in the visual cortex, can form in locally recurrent cortical networks as a function of reward modulated synaptic plasticity. This framework allows networks of both linear and spiking neurons to learn the temporal interval between a stimulus and paired reward signal presented during training. Here we use a mean field approach to analyze the dynamics of non-linear stochastic spiking neurons in a network trained to encode specific time intervals. This analysis explains how recurrent excitatory feedback allows a network structure to encode temporal representations.

  6. The influence of hubs in the structure of a neuronal network during an epileptic seizure

    Science.gov (United States)

    Rodrigues, Abner Cardoso; Cerdeira, Hilda A.; Machado, Birajara Soares

    2016-02-01

    In this work, we propose changes in the structure of a neuronal network with the intention of provoking strong synchronization, in order to simulate episodes of epileptic seizure. Starting with a network of Izhikevich neurons, we slowly increase the number of connections at selected nodes in a controlled way, so as to produce (or not) hubs. We study how these structures alter the synchronization of the spike firing intervals, for individual neurons as well as for mean values, as a function of the concentration of connections for random and non-random (hub) distributions. We also analyze how the post-ictal signal varies for the different distributions. We conclude that a network with hubs is more appropriate to represent an epileptic state.
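
    The structural manipulation can be sketched in a few lines of Python (illustrative only; the synaptic weights and the simulation of the Izhikevich dynamics are omitted): extra directed connections are added either onto a handful of pre-selected nodes, creating hubs, or onto randomly chosen targets, so that the two conditions can be compared.

      import numpy as np

      rng = np.random.default_rng(2)

      def add_connections(adj, n_extra, hub_nodes=None):
          """Add n_extra directed connections to a copy of adj.  If hub_nodes is
          given, every new connection terminates on one of those nodes (creating
          hubs); otherwise targets are drawn uniformly at random."""
          adj = adj.copy()
          n = adj.shape[0]
          added = 0
          while added < n_extra:
              j = rng.choice(hub_nodes) if hub_nodes is not None else rng.integers(n)
              i = rng.integers(n)
              if i != j and adj[i, j] == 0:
                  adj[i, j] = 1
                  added += 1
          return adj

      n = 200
      base = (rng.random((n, n)) < 0.05).astype(int)    # sparse random background
      np.fill_diagonal(base, 0)

      hubbed = add_connections(base, n_extra=500, hub_nodes=np.arange(5))
      random_ = add_connections(base, n_extra=500)

      print("max in-degree, hub condition   :", hubbed.sum(axis=0).max())
      print("max in-degree, random condition:", random_.sum(axis=0).max())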

  7. Patterning human neuronal networks on photolithographically engineered silicon dioxide substrates functionalized with glial analogues.

    Science.gov (United States)

    Hughes, Mark A; Brennan, Paul M; Bunting, Andrew S; Cameron, Katherine; Murray, Alan F; Shipston, Mike J

    2014-05-01

    Interfacing neurons with silicon semiconductors is a challenge being tackled through various bioengineering approaches. Such constructs inform our understanding of neuronal coding and learning and ultimately guide us toward creating intelligent neuroprostheses. A fundamental prerequisite is to dictate the spatial organization of neuronal cells. We sought to pattern neurons using photolithographically defined arrays of polymer parylene-C, activated with fetal calf serum. We used a purified human neuronal cell line [Lund human mesencephalic (LUHMES)] to establish whether neurons remain viable when isolated on-chip or whether they require a supporting cell substrate. When cultured in isolation, LUHMES neurons failed to pattern and did not show any morphological signs of differentiation. We therefore sought a cell type with which to prepattern parylene regions, hypothesizing that this cellular template would enable secondary neuronal adhesion and network formation. From a range of cell lines tested, human embryonal kidney (HEK) 293 cells patterned with highest accuracy. LUHMES neurons adhered to pre-established HEK 293 cell clusters and this coculture environment promoted morphological differentiation of neurons. Neurites extended between islands of adherent cell somata, creating an orthogonally arranged neuronal network. HEK 293 cells appear to fulfill a role analogous to glia, dictating cell adhesion, and generating an environment conducive to neuronal survival. We next replaced HEK 293 cells with slower growing glioma-derived precursors. These primary human cells patterned accurately on parylene and provided a similarly effective scaffold for neuronal adhesion. These findings advance the use of this microfabrication-compatible platform for neuronal patterning. Copyright © 2013 Wiley Periodicals, Inc.

  8. Error-backpropagation in temporally encoded networks of spiking neurons

    NARCIS (Netherlands)

    S.M. Bohte (Sander); J.A. La Poutré (Han); J.N. Kok (Joost)

    2000-01-01

    For a network of spiking neurons that encodes information in the timing of individual spike times, we derive a supervised learning rule, SpikeProp, akin to traditional error-backpropagation, and show how to overcome the discontinuities introduced by thresholding. With this algorithm,

  9. Controllability and observability of Boolean networks arising from biology

    Science.gov (United States)

    Li, Rui; Yang, Meng; Chu, Tianguang

    2015-02-01

    Boolean networks are currently receiving considerable attention as a computational scheme for system level analysis and modeling of biological systems. Studying control-related problems in Boolean networks may reveal new insights into the intrinsic control in complex biological systems and enable us to develop strategies for manipulating biological systems using exogenous inputs. This paper considers controllability and observability of Boolean biological networks. We propose a new approach, which draws from the rich theory of symbolic computation, to solve the problems. Consequently, simple necessary and sufficient conditions for reachability, controllability, and observability are obtained, and algorithmic tests for controllability and observability which are based on the Gröbner basis method are presented. As practical applications, we apply the proposed approach to several different biological systems, namely, the mammalian cell-cycle network, the T-cell activation network, the large granular lymphocyte survival signaling network, and the Drosophila segment polarity network, gaining novel insights into the control and/or monitoring of the specific biological systems.
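
    The tests in the paper are symbolic (built on the Gröbner basis method); purely as an illustration of what controllability means operationally, the Python sketch below brute-forces input sequences on a three-node toy Boolean network with one control input. The update rules are invented for the example and are not one of the biological networks analyzed in the paper.

      from itertools import product

      # Toy Boolean network with state x = (x1, x2, x3) and one control input u.
      def step(x, u):
          x1, x2, x3 = x
          return (x2 and not x3,      # x1' = x2 AND NOT x3
                  x1 or u,            # x2' = x1 OR u
                  x1 and x2)          # x3' = x1 AND x2

      def reachable(x0, target, horizon=8):
          """Brute-force controllability check: is there an input sequence of
          length <= horizon driving the network from x0 to target?"""
          for T in range(1, horizon + 1):
              for u_seq in product((False, True), repeat=T):
                  x = x0
                  for u in u_seq:
                      x = step(x, u)
                  if x == target:
                      return True, u_seq
          return False, None

      ok, controls = reachable((False, False, False), (True, True, True))
      print("reachable:", ok, "with inputs:", controls)

    Symbolic methods such as the one proposed above avoid exactly this exhaustive enumeration, which becomes infeasible as the number of nodes grows.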

  10. From in silico astrocyte cell models to neuron-astrocyte network models: A review.

    Science.gov (United States)

    Oschmann, Franziska; Berry, Hugues; Obermayer, Klaus; Lenk, Kerstin

    2018-01-01

    The idea that astrocytes may be active partners in synaptic information processing has recently emerged from abundant experimental reports. Because of their spatial proximity to neurons and their bidirectional communication with them, astrocytes are now considered as an important third element of the synapse. Astrocytes integrate and process synaptic information and by doing so generate cytosolic calcium signals that are believed to reflect neuronal transmitter release. Moreover, they regulate neuronal information transmission by releasing gliotransmitters into the synaptic cleft affecting both pre- and postsynaptic receptors. Concurrent with the first experimental reports of the astrocytic impact on neural network dynamics, computational models describing astrocytic functions have been developed. In this review, we give an overview over the published computational models of astrocytic functions, from single-cell dynamics to the tripartite synapse level and network models of astrocytes and neurons. Copyright © 2017 Elsevier Inc. All rights reserved.

  11. Networks in Cell Biology

    Science.gov (United States)

    Buchanan, Mark; Caldarelli, Guido; De Los Rios, Paolo; Rao, Francesco; Vendruscolo, Michele

    2010-05-01

    Introduction; 1. Network views of the cell Paolo De Los Rios and Michele Vendruscolo; 2. Transcriptional regulatory networks Sarath Chandra Janga and M. Madan Babu; 3. Transcription factors and gene regulatory networks Matteo Brilli, Elissa Calistri and Pietro Lió; 4. Experimental methods for protein interaction identification Peter Uetz, Björn Titz, Seesandra V. Rajagopala and Gerard Cagney; 5. Modeling protein interaction networks Francesco Rao; 6. Dynamics and evolution of metabolic networks Daniel Segré; 7. Hierarchical modularity in biological networks: the case of metabolic networks Erzsébet Ravasz Regan; 8. Signalling networks Gian Paolo Rossini; Appendix 1. Complex networks: from local to global properties D. Garlaschelli and G. Caldarelli; Appendix 2. Modelling the local structure of networks D. Garlaschelli and G. Caldarelli; Appendix 3. Higher-order topological properties S. Ahnert, T. Fink and G. Caldarelli; Appendix 4. Elementary mathematical concepts A. Gabrielli and G. Caldarelli; References.

  12. Reconstruction of biological networks based on life science data integration

    Directory of Open Access Journals (Sweden)

    Kormeier Benjamin

    2010-06-01

    Full Text Available For the implementation of the virtual cell, the fundamental question is how to model and simulate complex biological networks. Therefore, based on relevant molecular databases and information systems, biological data integration is an essential step in constructing biological networks. In this paper, we will motivate the applications BioDWH - an integration toolkit for building life science data warehouses, CardioVINEdb - an information system for biological data on cardiovascular disease, and VANESA - a network editor for modeling and simulation of biological networks. Based on this integration process, the system supports the generation of biological network models. A case study of a cardiovascular-disease related gene-regulated biological network is also presented.

  13. Bifurcation software in Matlab with applications in neuronal modeling.

    Science.gov (United States)

    Govaerts, Willy; Sautois, Bart

    2005-02-01

    Many biological phenomena, notably in neuroscience, can be modeled by dynamical systems. We describe a recent improvement of a Matlab software package for dynamical systems with applications to modeling single neurons and all-to-all connected networks of neurons. The new software features consist of an object-oriented approach to bifurcation computations and the partial inclusion of C-code to speed up the computation. As an application, we study the origin of the spiking behaviour of neurons when the equilibrium state is destabilized by an incoming current. We show that Class II behaviour, i.e. firing with a finite frequency, is possible even if the destabilization occurs through a saddle-node bifurcation. Furthermore, we show that synchronization of an all-to-all connected network of such neurons with only excitatory connections is also possible in this case.

  14. A model of biological neuron with terminal chaos and quantum-like features

    International Nuclear Information System (INIS)

    Conte, Elio; Pierri, GianPaolo; Federici, Antonio; Mendolicchio, Leonardo; Zbilut, Joseph P.

    2006-01-01

    A model of biological neuron is proposed combining terminal dynamics with quantum-like mechanical features, assuming the spin to be an important entity in neurodynamics, and, in particular, in synaptic transmission

  15. Node fingerprinting: an efficient heuristic for aligning biological networks.

    Science.gov (United States)

    Radu, Alex; Charleston, Michael

    2014-10-01

    With the continuing increase in availability of biological data and improvements to biological models, biological network analysis has become a promising area of research. An emerging technique for the analysis of biological networks is through network alignment. Network alignment has been used to calculate genetic distance, similarities between regulatory structures, and the effect of external forces on gene expression, and to depict conditional activity of expression modules in cancer. Network alignment is algorithmically complex, and therefore we must rely on heuristics, ideally as efficient and accurate as possible. The majority of current techniques for network alignment rely on precomputed information, such as with protein sequence alignment, or on tunable network alignment parameters, which may introduce an increased computational overhead. Our presented algorithm, which we call Node Fingerprinting (NF), is appropriate for performing global pairwise network alignment without precomputation or tuning, can be fully parallelized, and is able to quickly compute an accurate alignment between two biological networks. It has performed as well as or better than existing algorithms on biological and simulated data, and with fewer computational resources. The algorithmic validation performed demonstrates the low computational resource requirements of NF.
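
    The published NF algorithm is not reproduced here, but the general flavour of aligning networks by degree-based node signatures can be sketched in Python: each node receives an iteratively refined fingerprint built from its in- and out-degrees and those of its neighbours, and nodes of the two networks are then paired greedily by fingerprint distance. Everything below is an illustrative simplification.

      import numpy as np

      def fingerprints(adj, rounds=3):
          """Iteratively refined degree signatures: start from (in-degree, out-degree)
          and repeatedly append the mean signature of each node's neighbours."""
          sig = np.stack([adj.sum(axis=0), adj.sum(axis=1)], axis=1).astype(float)
          for _ in range(rounds):
              neigh = (adj + adj.T) > 0
              means = np.array([sig[neigh[i]].mean(axis=0) if neigh[i].any()
                                else np.zeros(sig.shape[1]) for i in range(len(adj))])
              sig = np.concatenate([sig, means], axis=1)
          return sig

      def greedy_align(adj_a, adj_b):
          """Pair the nodes of two networks by greedily matching the closest fingerprints."""
          fa, fb = fingerprints(adj_a), fingerprints(adj_b)
          dist = np.linalg.norm(fa[:, None, :] - fb[None, :, :], axis=2)
          mapping = {}
          for _ in range(min(len(adj_a), len(adj_b))):
              i, j = np.unravel_index(np.argmin(dist), dist.shape)
              mapping[int(i)] = int(j)
              dist[i, :] = np.inf
              dist[:, j] = np.inf
          return mapping

      # Sanity check: aligning a small random network with a shuffled copy of
      # itself should recover most of the permutation.
      rng = np.random.default_rng(3)
      a = (rng.random((30, 30)) < 0.15).astype(int)
      perm = rng.permutation(30)
      b = a[np.ix_(perm, perm)]
      mapping = greedy_align(a, b)
      correct = sum(mapping[i] == int(np.where(perm == i)[0][0]) for i in range(30))
      print(f"correctly matched {correct}/30 nodes")

    Real biological alignments are of course much harder than this self-alignment sanity check, which is where the efficiency arguments made above become relevant.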

  16. Barreloid Borders and Neuronal Activity Shape Panglial Gap Junction-Coupled Networks in the Mouse Thalamus.

    Science.gov (United States)

    Claus, Lena; Philippot, Camille; Griemsmann, Stephanie; Timmermann, Aline; Jabs, Ronald; Henneberger, Christian; Kettenmann, Helmut; Steinhäuser, Christian

    2018-01-01

    The ventral posterior nucleus of the thalamus plays an important role in somatosensory information processing. It contains elongated cellular domains called barreloids, which are the structural basis for the somatotopic organization of vibrissae representation. So far, the organization of glial networks in these barreloid structures and its modulation by neuronal activity has not been studied. We have developed a method to visualize thalamic barreloid fields in acute slices. Combining electrophysiology, immunohistochemistry, and electroporation in transgenic mice with cell type-specific fluorescence labeling, we provide the first structure-function analyses of barreloidal glial gap junction networks. We observed coupled networks, which comprised both astrocytes and oligodendrocytes. The spread of tracers or a fluorescent glucose derivative through these networks was dependent on neuronal activity and limited by the barreloid borders, which were formed by uncoupled or weakly coupled oligodendrocytes. Neuronal somata were distributed homogeneously across barreloid fields with their processes running in parallel to the barreloid borders. Many astrocytes and oligodendrocytes were not part of the panglial networks. Thus, oligodendrocytes are the cellular elements limiting the communicating panglial network to a single barreloid, which might be important to ensure proper metabolic support to active neurons located within a particular vibrissae signaling pathway. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  17. Bifurcation analysis on a generalized recurrent neural network with two interconnected three-neuron components

    International Nuclear Information System (INIS)

    Hajihosseini, Amirhossein; Maleki, Farzaneh; Rokni Lamooki, Gholam Reza

    2011-01-01

    Highlights: → We construct a recurrent neural network by generalizing a specific n-neuron network. → Several codimension 1 and 2 bifurcations take place in the newly constructed network. → The newly constructed network has higher capabilities to learn periodic signals. → The normal form theorem is applied to investigate dynamics of the network. → A series of bifurcation diagrams is given to support theoretical results. - Abstract: A class of recurrent neural networks is constructed by generalizing a specific class of n-neuron networks. It is shown that the newly constructed network experiences generic pitchfork and Hopf codimension one bifurcations. It is also proved that the emergence of generic Bogdanov-Takens, pitchfork-Hopf and Hopf-Hopf codimension two, and the degenerate Bogdanov-Takens bifurcation points in the parameter space is possible due to the intersections of codimension one bifurcation curves. The occurrence of bifurcations of higher codimensions significantly increases the capability of the newly constructed recurrent neural network to learn broader families of periodic signals.

  18. Optimal Detection of a Localized Perturbation in Random Networks of Integrate-and-Fire Neurons

    Science.gov (United States)

    Bernardi, Davide; Lindner, Benjamin

    2017-06-01

    Experimental and theoretical studies suggest that cortical networks are chaotic and coding relies on averages over large populations. However, there is evidence that rats can respond to the short stimulation of a single cortical cell, a theoretically unexplained fact. We study effects of single-cell stimulation on a large recurrent network of integrate-and-fire neurons and propose a simple way to detect the perturbation. Detection rates obtained from simulations and analytical estimates are similar to experimental response rates if the readout is slightly biased towards specific neurons. Near-optimal detection is attained for a broad range of intermediate values of the mean coupling between neurons.

  19. Power Laws, Scale-Free Networks and Genome Biology

    CERN Document Server

    Koonin, Eugene V; Karev, Georgy P

    2006-01-01

    Power Laws, Scale-free Networks and Genome Biology deals with crucial aspects of the theoretical foundations of systems biology, namely power law distributions and scale-free networks which have emerged as the hallmarks of biological organization in the post-genomic era. The chapters in the book not only describe the interesting mathematical properties of biological networks but also move beyond phenomenology, toward models of evolution capable of explaining the emergence of these features. The collection of chapters, contributed by both physicists and biologists, strives to address the problems in this field in a rigorous but not excessively mathematical manner and to represent different viewpoints, which is crucial in this emerging discipline. Each chapter includes, in addition to technical descriptions of properties of biological networks and evolutionary models, a more general and accessible introduction to the respective problems. Most chapters emphasize the potential of theoretical systems biology for disco...

  20. Bio-inspired spiking neural network for nonlinear systems control.

    Science.gov (United States)

    Pérez, Javier; Cabrera, Juan A; Castillo, Juan J; Velasco, Juan M

    2018-08-01

    Spiking neural networks (SNNs) are the third generation of artificial neural networks and the closest approximation to biological neural networks. SNNs make use of temporal spike trains to represent inputs and outputs, allowing faster and more complex computation. As demonstrated by biological organisms, they are a potentially good approach to designing controllers for highly nonlinear dynamic systems in which the performance of controllers developed by conventional techniques is unsatisfactory or difficult to implement. SNN-based controllers exploit their ability for online learning and self-adaptation to evolve when transferred from simulations to the real world. The inherently binary and temporal way in which SNNs encode information facilitates their hardware implementation compared to analog neurons. Biological neural networks often require fewer neurons than controllers based on other artificial neural networks. In this work, these neuronal systems are imitated to perform the control of nonlinear dynamic systems. For this purpose, a control structure based on spiking neural networks has been designed. Particular attention has been paid to optimizing the structure and size of the neural network. The proposed structure is able to control dynamic systems with a reduced number of neurons and connections. A supervised learning process using evolutionary algorithms has been carried out to perform controller training. The efficiency of the proposed network has been verified in two examples of dynamic systems control. Simulations show that the proposed control based on SNNs exhibits superior performance compared to other approaches based on neural networks and SNNs. Copyright © 2018 Elsevier Ltd. All rights reserved.

  1. Design principles in biological networks

    Science.gov (United States)

    Goyal, Sidhartha

    Much of biology emerges from networks of interactions. Even in a single bacterium such as Escherichia coli, there are hundreds of coexisting gene and protein networks. Although biological networks are the outcome of evolution, various physical and biological constraints limit their functional capacity. The focus of this thesis is to understand how functional constraints such as optimal growth in microorganisms and information flow in signaling pathways shape the metabolic network of the bacterium E. coli and the quorum sensing network of the marine bacterium Vibrio harveyi, respectively. Metabolic networks convert basic elemental sources into complex building blocks, eventually leading to cell growth. Metabolic pathways are therefore typically coupled both by the use of a common substrate and by stoichiometric utilization of their products for cell growth. We showed that such a coupled network with product-feedback inhibition may exhibit limit-cycle oscillations which arise via a Hopf bifurcation. Furthermore, we analyzed several representative metabolic modules and found that, in all cases, simple product-feedback inhibition allows nearly optimal growth, in agreement with the growth rate predicted by flux-balance analysis (FBA). Bacteria have fascinating and diverse social lives. They display coordinated group behaviors regulated by quorum sensing (QS) systems. The QS circuit of V. harveyi integrates and funnels different ecological information through a common phosphorelay cascade to a set of small regulatory RNAs (sRNAs) that enables collective behavior. We analyzed the signaling properties and information flow in the QS circuit, which provides a model for information flow in signaling networks more generally. A comparative study of post-transcriptional and conventional transcriptional regulation suggests a niche for sRNAs in allowing cells to transition quickly yet reliably between distinct states. Furthermore, we develop a new framework for analyzing signal

  2. Study Under AC Stimulation on Excitement Properties of Weighted Small-World Biological Neural Networks with Side-Restrain Mechanism

    International Nuclear Information System (INIS)

    Yuan Wujie; Luo Xiaoshu; Jiang Pinqun

    2007-01-01

    In this paper, we propose a new model of weighted small-world biological neural networks based on biophysical Hodgkin-Huxley neurons with a side-restrain mechanism. We then study the excitement properties of the model under alternating current (AC) stimulation. The study shows that the excitement properties of the networks are largely consistent with the behavioral properties of the brain nervous system under different AC stimuli, such as the refractory period and the neural excitement response induced by different intensities of noise and coupling. The results of the study provide a useful reference for brain electrophysiology and epistemological science.
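
    The Hodgkin-Huxley dynamics and the side-restrain mechanism are not reproduced here, but the weighted small-world coupling backbone itself is straightforward to generate, for instance with networkx; the parameter values below are illustrative assumptions.

      import networkx as nx
      import numpy as np

      rng = np.random.default_rng(4)

      def weighted_small_world(n=100, k=6, p=0.1, w_mean=0.5, w_std=0.1):
          """Connected Watts-Strogatz small-world graph with positive, Gaussian-
          distributed coupling weights, to serve as the backbone of an HH network."""
          g = nx.connected_watts_strogatz_graph(n, k, p, seed=42)
          for i, j in g.edges():
              g[i][j]["weight"] = max(0.0, rng.normal(w_mean, w_std))
          return g

      g = weighted_small_world()
      coupling = nx.to_numpy_array(g, weight="weight")   # symmetric coupling matrix
      print("mean degree :", 2 * g.number_of_edges() / g.number_of_nodes())
      print("clustering  :", round(nx.average_clustering(g), 3))
      print("path length :", round(nx.average_shortest_path_length(g), 3))

    The high clustering together with the short path length is what distinguishes the small-world backbone from a purely random coupling matrix.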

  3. Mapping cortical mesoscopic networks of single spiking cortical or sub-cortical neurons.

    Science.gov (United States)

    Xiao, Dongsheng; Vanni, Matthieu P; Mitelut, Catalin C; Chan, Allen W; LeDue, Jeffrey M; Xie, Yicheng; Chen, Andrew Cn; Swindale, Nicholas V; Murphy, Timothy H

    2017-02-04

    Understanding the basis of brain function requires knowledge of cortical operations over wide-spatial scales, but also within the context of single neurons. In vivo, wide-field GCaMP imaging and sub-cortical/cortical cellular electrophysiology were used in mice to investigate relationships between spontaneous single neuron spiking and mesoscopic cortical activity. We make use of a rich set of cortical activity motifs that are present in spontaneous activity in anesthetized and awake animals. A mesoscale spike-triggered averaging procedure allowed the identification of motifs that are preferentially linked to individual spiking neurons by employing genetically targeted indicators of neuronal activity. Thalamic neurons predicted and reported specific cycles of wide-scale cortical inhibition/excitation. In contrast, spike-triggered maps derived from single cortical neurons yielded spatio-temporal maps expected for regional cortical consensus function. This approach can define network relationships between any point source of neuronal spiking and mesoscale cortical maps.
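
    The core mesoscale spike-triggered averaging step can be written in a few lines of numpy; the synthetic movie and spike train below are placeholders for the wide-field GCaMP frames and the recorded unit, so the shapes and sampling rates are assumptions.

      import numpy as np

      def spike_triggered_map(frames, frame_times, spike_times, window=(-0.5, 0.5)):
          """Average imaging frames in a time window around each spike.
          frames: (T, H, W) movie; frame_times: (T,) seconds; spike_times: (S,) seconds.
          Returns the mean spike-triggered movie with shape (n_lags, H, W)."""
          dt = frame_times[1] - frame_times[0]
          lags = np.arange(int(round(window[0] / dt)), int(round(window[1] / dt)) + 1)
          maps = []
          for t in spike_times:
              k = int(round((t - frame_times[0]) / dt))      # frame index of the spike
              if k + lags[0] >= 0 and k + lags[-1] < len(frames):
                  maps.append(frames[k + lags])
          return np.mean(maps, axis=0)

      # Synthetic stand-in: a noisy movie plus a blob that brightens after each spike.
      rng = np.random.default_rng(5)
      T, H, W = 2000, 32, 32
      frames = rng.standard_normal((T, H, W)) * 0.1
      frame_times = np.arange(T) * 0.02                      # 50 Hz imaging
      spikes = rng.choice(frame_times[100:-100], size=80, replace=False)
      for t in spikes:
          k = int(round(t / 0.02))
          frames[k:k + 5, 10:16, 10:16] += 1.0
      sta = spike_triggered_map(frames, frame_times, spikes)
      print("STA shape:", sta.shape, "peak:", float(sta.max()))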

  4. Three-dimensional chimera patterns in networks of spiking neuron oscillators

    Science.gov (United States)

    Kasimatis, T.; Hizanidis, J.; Provata, A.

    2018-05-01

    We study the stable spatiotemporal patterns that arise in a three-dimensional (3D) network of neuron oscillators, whose dynamics is described by the leaky integrate-and-fire (LIF) model. More specifically, we investigate the form of the chimera states induced by a 3D coupling matrix with nonlocal topology. The observed patterns are in many cases direct generalizations of the corresponding two-dimensional (2D) patterns, e.g., spheres, layers, and cylinder grids. We also find cylindrical and "cross-layered" chimeras that do not have an equivalent in 2D systems. Quantitative measures are calculated, such as the ratio of synchronized and unsynchronized neurons as a function of the coupling range, the mean phase velocities, and the distribution of neurons in mean phase velocities. Based on these measures, the chimeras are categorized in two families. The first family of patterns is observed for weaker coupling and exhibits higher mean phase velocities for the unsynchronized areas of the network. The opposite holds for the second family, where the unsynchronized areas have lower mean phase velocities. The various measures demonstrate discontinuities, indicating criticality as the parameters cross from the first family of patterns to the second.

  5. Neural networks and its application in biomedical engineering

    International Nuclear Information System (INIS)

    Husnain, S.K.; Bhatti, M.I.

    2002-01-01

    An artificial neural network (ANN) is an information processing system that has certain performance characteristics in common with biological neural networks. A neural network is characterized by the connections between its neurons, the method of determining the weights on those connections, and its activation functions, while a biological neuron has three types of components that are of particular interest in understanding an artificial neuron: its dendrites, soma, and axon. The action of the chemical transmitter modifies the incoming signal. The study of neural networks is an extremely interdisciplinary field. Computer-based diagnosis is an increasingly used method that tries to improve the quality of health care. Systems based on neural networks have been developed extensively in the last ten years in the hope that medical diagnosis, and therefore medical care, would improve dramatically. The addition of a symbolic processing layer enhances ANNs in a number of ways. It is, for instance, possible to supplement a network that is purely diagnostic with a level that recommends or nodes in order to more closely simulate the nervous system. (author)

  6. A unified framework for spiking and gap-junction interactions in distributed neuronal network simulations

    Directory of Open Access Journals (Sweden)

    Jan Hahne

    2015-09-01

    Full Text Available Contemporary simulators for networks of point and few-compartment model neurons come with a plethora of ready-to-use neuron and synapse models and support complex network topologies. Recent technological advancements have broadened the spectrum of application further to the efficient simulation of brain-scale networks on supercomputers. In distributed network simulations the amount of spike data that accrues per millisecond and process is typically low, such that a common optimization strategy is to communicate spikes at relatively long intervals, where the upper limit is given by the shortest synaptic transmission delay in the network. This approach is well-suited for simulations that employ only chemical synapses but it has so far impeded the incorporation of gap-junction models, which require instantaneous neuronal interactions. Here, we present a numerical algorithm based on a waveform-relaxation technique which allows for network simulations with gap junctions in a way that is compatible with the delayed communication strategy. Using a reference implementation in the NEST simulator, we demonstrate that the algorithm and the required data structures can be smoothly integrated with existing code such that they complement the infrastructure for spiking connections. To show that the unified framework for gap-junction and spiking interactions achieves high performance and delivers high accuracy...
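
    The idea behind the waveform-relaxation approach can be illustrated on a toy system of two leaky "neurons" coupled by a gap junction: each cell integrates the whole communication interval using the other cell's waveform from the previous sweep, and the sweeps are repeated until the waveforms stop changing. The Python sketch below is only conceptual and does not reflect NEST's actual implementation.

      import numpy as np

      # dV_i/dt = -V_i / tau + g * (V_j - V_i) + I_i   for i, j in {0, 1}, i != j
      tau, g = 20.0, 0.2
      I = np.array([1.2, 0.3])
      dt, T = 0.1, 50.0
      n = int(T / dt)

      def integrate(other_trace, i):
          """Integrate neuron i over the whole interval, treating the other
          neuron's waveform from the previous sweep as known."""
          v = np.zeros(n + 1)
          for k in range(n):
              dv = -v[k] / tau + g * (other_trace[k] - v[k]) + I[i]
              v[k + 1] = v[k] + dt * dv
          return v

      v = np.zeros((2, n + 1))                    # initial guess: flat waveforms
      for sweep in range(80):
          new = np.stack([integrate(v[1], 0), integrate(v[0], 1)])
          change = np.abs(new - v).max()
          v = new
          if change < 1e-8:                       # waveforms have converged
              print(f"converged after {sweep + 1} sweeps")
              break

      print("final voltages:", v[:, -1].round(3))

    Because each sweep only needs the other cell's waveform from the previous iteration, data can still be exchanged at interval boundaries, which is what makes the scheme compatible with the delayed communication strategy described above.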

  7. Communication through resonance in spiking neuronal networks.

    Science.gov (United States)

    Hahn, Gerald; Bujan, Alejandro F; Frégnac, Yves; Aertsen, Ad; Kumar, Arvind

    2014-08-01

    The cortex processes stimuli through a distributed network of specialized brain areas. This processing requires mechanisms that can route neuronal activity across weakly connected cortical regions. Routing models proposed thus far are either limited to propagation of spiking activity across strongly connected networks or require distinct mechanisms that create local oscillations and establish their coherence between distant cortical areas. Here, we propose a novel mechanism which explains how synchronous spiking activity propagates across weakly connected brain areas supported by oscillations. In our model, oscillatory activity unleashes network resonance that amplifies feeble synchronous signals and promotes their propagation along weak connections ("communication through resonance"). The emergence of coherent oscillations is a natural consequence of synchronous activity propagation and therefore the assumption of different mechanisms that create oscillations and provide coherence is not necessary. Moreover, the phase-locking of oscillations is a side effect of communication rather than its requirement. Finally, we show how the state of ongoing activity could affect the communication through resonance and propose that modulations of the ongoing activity state could influence information processing in distributed cortical networks.

  8. Integration of genomic information with biological networks using Cytoscape.

    Science.gov (United States)

    Bauer-Mehren, Anna

    2013-01-01

    Cytoscape is an open-source software for visualizing, analyzing, and modeling biological networks. This chapter explains how to use Cytoscape to analyze the functional effect of sequence variations in the context of biological networks such as protein-protein interaction networks and signaling pathways. The chapter is divided into five parts: (1) obtaining information about the functional effect of sequence variation in a Cytoscape readable format, (2) loading and displaying different types of biological networks in Cytoscape, (3) integrating the genomic information (SNPs and mutations) with the biological networks, and (4) analyzing the effect of the genomic perturbation onto the network structure using Cytoscape built-in functions. Finally, we briefly outline how the integrated data can help in building mathematical network models for analyzing the effect of the sequence variation onto the dynamics of the biological system. Each part is illustrated by step-by-step instructions on an example use case and visualized by many screenshots and figures.

  9. Integrated workflows for spiking neuronal network simulations

    Directory of Open Access Journals (Sweden)

    Ján Antolík

    2013-12-01

    Full Text Available The increasing availability of computational resources is enabling more detailed, realistic modelling in computational neuroscience, resulting in a shift towards more heterogeneous models of neuronal circuits, and employment of complex experimental protocols. This poses a challenge for existing tool chains, as the set of tools involved in a typical modeller's workflow is expanding concomitantly, with growing complexity in the metadata flowing between them. For many parts of the workflow, a range of tools is available; however, numerous areas lack dedicated tools, while integration of existing tools is limited. This forces modellers to either handle the workflow manually, leading to errors, or to write substantial amounts of code to automate parts of the workflow, in both cases reducing their productivity. To address these issues, we have developed Mozaik: a workflow system for spiking neuronal network simulations written in Python. Mozaik integrates model, experiment and stimulation specification, simulation execution, data storage, data analysis and visualisation into a single automated workflow, ensuring that all relevant metadata are available to all workflow components. It is based on several existing tools, including PyNN, Neo and Matplotlib. It offers a declarative way to specify models and recording configurations using hierarchically organised configuration files. Mozaik automatically records all data together with all relevant metadata about the experimental context, allowing automation of the analysis and visualisation stages. Mozaik has a modular architecture, and the existing modules are designed to be extensible with minimal programming effort. Mozaik increases the productivity of running virtual experiments on highly structured neuronal networks by automating the entire experimental cycle, while increasing the reliability of modelling studies by relieving the user from manual handling of the flow of metadata between the individual

  10. The effect of network biology on drug toxicology

    DEFF Research Database (Denmark)

    Gautier, Laurent; Taboureau, Olivier; Audouze, Karine Marie Laure

    2013-01-01

    Introduction: The high failure rate of drug candidates due to toxicity, during clinical trials, is a critical issue in drug discovery. Network biology has become a promising approach, in this regard, using the increasingly large amount of biological and chemical data available and combining it with bioinformatics. With this approach, the assessment of chemical safety can be done across multiple scales of complexity from molecular to cellular and system levels in human health. Network biology can be used at several levels of complexity. Areas covered: This review describes the strengths and limitations of network biology. The authors specifically assess this approach across different biological scales when it is applied to toxicity. Expert opinion: There has been much progress made with the amount of data that is generated by various omics technologies. With this large amount of useful data, network...

  11. Models of neural networks temporal aspects of coding and information processing in biological systems

    CERN Document Server

    Hemmen, J; Schulten, Klaus

    1994-01-01

    Since the appearance of Vol. 1 of Models of Neural Networks in 1991, the theory of neural nets has focused on two paradigms: information coding through coherent firing of the neurons and functional feedback. Information coding through coherent neuronal firing exploits time as a cardinal degree of freedom. This capacity of a neural network rests on the fact that the neuronal action potential is a short, say 1 ms, spike, localized in space and time. Spatial as well as temporal correlations of activity may represent different states of a network. In particular, temporal correlations of activity may express that neurons process the same "object" of, for example, a visual scene by spiking at the very same time. The traditional description of a neural network through a firing rate, the famous S-shaped curve, presupposes a wide time window of, say, at least 100 ms. It thus fails to exploit the capacity to "bind" sets of coherently firing neurons for the purpose of both scene segmentation and figure-ground segregatio...

  12. Statistical identification of stimulus-activated network nodes in multi-neuron voltage-sensitive dye optical recordings.

    Science.gov (United States)

    Fathiazar, Elham; Anemuller, Jorn; Kretzberg, Jutta

    2016-08-01

    Voltage-Sensitive Dye (VSD) imaging is an optical imaging method that allows measuring the graded voltage changes of multiple neurons simultaneously. In neuroscience, this method is used to reveal networks of neurons involved in certain tasks. However, the recorded relative dye fluorescence changes are usually low and signals are superimposed by noise and artifacts. Therefore, establishing a reliable method to identify which cells are activated by specific stimulus conditions is the first step to identify functional networks. In this paper, we present a statistical method to identify stimulus-activated network nodes as cells, whose activities during sensory network stimulation differ significantly from the un-stimulated control condition. This method is demonstrated based on voltage-sensitive dye recordings from up to 100 neurons in a ganglion of the medicinal leech responding to tactile skin stimulation. Without relying on any prior physiological knowledge, the network nodes identified by our statistical analysis were found to match well with published cell types involved in tactile stimulus processing and to be consistent across stimulus conditions and preparations.
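
    A minimal version of such an analysis is a per-cell permutation test on trial-averaged responses with a correction for multiple comparisons. The Python sketch below uses synthetic data in place of the leech VSD recordings, and the exact statistic and correction used in the paper may differ.

      import numpy as np

      rng = np.random.default_rng(6)

      def activated_cells(stim, ctrl, n_perm=5000, alpha=0.05):
          """Per-cell permutation test on the difference of trial means.
          stim, ctrl: arrays of shape (n_trials, n_cells).  Returns a boolean mask
          of cells whose responses differ significantly (Bonferroni-corrected)."""
          n_cells = stim.shape[1]
          n_s = stim.shape[0]
          pooled = np.concatenate([stim, ctrl], axis=0)
          observed = np.abs(stim.mean(axis=0) - ctrl.mean(axis=0))
          null = np.zeros((n_perm, n_cells))
          for p in range(n_perm):
              idx = rng.permutation(pooled.shape[0])
              null[p] = np.abs(pooled[idx[:n_s]].mean(axis=0) -
                               pooled[idx[n_s:]].mean(axis=0))
          pvals = (null >= observed).mean(axis=0)
          return pvals < alpha / n_cells          # Bonferroni over cells

      # Synthetic example: 100 cells, 20 trials per condition; cells 0-9 respond.
      stim = rng.normal(0.0, 1.0, size=(20, 100))
      stim[:, :10] += 1.5
      ctrl = rng.normal(0.0, 1.0, size=(20, 100))
      print("cells flagged as stimulus-activated:", np.where(activated_cells(stim, ctrl))[0])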

  13. A Unifying Mathematical Framework for Genetic Robustness, Environmental Robustness, Network Robustness and their Trade-offs on Phenotype Robustness in Biological Networks. Part III: Synthetic Gene Networks in Synthetic Biology

    Science.gov (United States)

    Chen, Bor-Sen; Lin, Ying-Po

    2013-01-01

    Robust stabilization and environmental disturbance attenuation are ubiquitous systematic properties that are observed in biological systems at many different levels. The underlying principles for robust stabilization and environmental disturbance attenuation are universal to both complex biological systems and sophisticated engineering systems. In many biological networks, network robustness should be large enough to confer: intrinsic robustness for tolerating intrinsic parameter fluctuations; genetic robustness for buffering genetic variations; and environmental robustness for resisting environmental disturbances. Network robustness is needed so phenotype stability of biological network can be maintained, guaranteeing phenotype robustness. Synthetic biology is foreseen to have important applications in biotechnology and medicine; it is expected to contribute significantly to a better understanding of functioning of complex biological systems. This paper presents a unifying mathematical framework for investigating the principles of both robust stabilization and environmental disturbance attenuation for synthetic gene networks in synthetic biology. Further, from the unifying mathematical framework, we found that the phenotype robustness criterion for synthetic gene networks is the following: if intrinsic robustness + genetic robustness + environmental robustness ≦ network robustness, then the phenotype robustness can be maintained in spite of intrinsic parameter fluctuations, genetic variations, and environmental disturbances. Therefore, the trade-offs between intrinsic robustness, genetic robustness, environmental robustness, and network robustness in synthetic biology can also be investigated through corresponding phenotype robustness criteria from the systematic point of view. Finally, a robust synthetic design that involves network evolution algorithms with desired behavior under intrinsic parameter fluctuations, genetic variations, and environmental

  14. Neuron array with plastic synapses and programmable dendrites.

    Science.gov (United States)

    Ramakrishnan, Shubha; Wunderlich, Richard; Hasler, Jennifer; George, Suma

    2013-10-01

    We describe a novel neuromorphic chip architecture that models neurons for efficient computation. Traditional architectures of neuron array chips consist of large scale systems that are interfaced with AER for implementing intra- or inter-chip connectivity. We present a chip that uses AER for inter-chip communication but uses fast, reconfigurable FPGA-style routing with local memory for intra-chip connectivity. We model neurons with biologically realistic channel models, synapses and dendrites. This chip is suitable for small-scale network simulations and can also be used for sequence detection, utilizing directional selectivity properties of dendrites, ultimately for use in word recognition.

  15. From biological and social network metaphors to coupled bio-social wireless networks

    Science.gov (United States)

    Barrett, Christopher L.; Eubank, Stephen; Anil Kumar, V.S.; Marathe, Madhav V.

    2010-01-01

    Biological and social analogies have been long applied to complex systems. Inspiration has been drawn from biological solutions to solve problems in engineering products and systems, ranging from Velcro to camouflage to robotics to adaptive and learning computing methods. In this paper, we present an overview of recent advances in understanding biological systems as networks and use this understanding to design and analyse wireless communication networks. We expand on two applications, namely cognitive sensing and control and wireless epidemiology. We discuss how our work in these two applications is motivated by biological metaphors. We believe that recent advances in computing and communications coupled with advances in health and social sciences raise the possibility of studying coupled bio-social communication networks. We argue that we can better utilise the advances in our understanding of one class of networks to better our understanding of the other. PMID:21643462

  16. The role of degree distribution in shaping the dynamics in networks of sparsely connected spiking neurons

    Directory of Open Access Journals (Sweden)

    Alex eRoxin

    2011-03-01

    Full Text Available Neuronal network models often assume a fixed probability of connection between neurons. This assumption leads to random networks with binomial in-degree and out-degree distributions which are relatively narrow. Here I study the effect of broad degree distributions on network dynamics by interpolating between a binomial and a truncated power-law distribution for the in-degree and out-degree independently. This is done both for an inhibitory network (I network) as well as for the recurrent excitatory connections in a network of excitatory and inhibitory neurons (EI network). In both cases increasing the width of the in-degree distribution affects the global state of the network by driving transitions between asynchronous behavior and oscillations. This effect is reproduced in a simplified rate model which includes the heterogeneity in neuronal input due to the in-degree of cells. On the other hand, broadening the out-degree distribution is shown to increase the fraction of common inputs to pairs of neurons. This leads to increases in the amplitude of the cross-correlation (CC) of synaptic currents. In the case of the I network, despite strong oscillatory CCs in the currents, CCs of the membrane potential are low due to filtering and reset effects, leading to very weak CCs of the spike count. In the asynchronous regime of the EI network, broadening the out-degree increases the amplitude of CCs in the recurrent excitatory currents, while the CC of the total current is essentially unaffected, as are pairwise spiking correlations. This is due to a dynamic balance between excitatory and inhibitory synaptic currents. In the oscillatory regime, changes in the out-degree can have a large effect on spiking correlations and even on the qualitative dynamical state of the network.
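
    The interpolation between a narrow binomial and a broad truncated power-law in-degree distribution can be illustrated as follows; the mixture parameterization (a per-neuron choice between the two distributions with weight `mix`) is an assumption made for illustration and may differ from the paper's actual interpolation scheme.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_in_degrees(n, k_mean, mix, gamma=2.5, k_max=None):
    """Draw in-degrees for n neurons.

    mix = 0 gives the narrow binomial case, mix = 1 a truncated power law;
    intermediate values mix the two.  Illustrative assumption only."""
    k_max = k_max or n - 1
    binom = rng.binomial(n - 1, k_mean / (n - 1), size=n)
    # Truncated power law on the integers [1, k_max].
    ks = np.arange(1, k_max + 1)
    p = ks.astype(float) ** (-gamma)
    p /= p.sum()
    plaw = rng.choice(ks, size=n, p=p)
    use_plaw = rng.random(n) < mix
    return np.where(use_plaw, plaw, binom)

deg = sample_in_degrees(n=1000, k_mean=50, mix=0.5)
print(deg.mean(), deg.std())   # broader than the purely binomial case
```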

  17. Recurrent Convolutional Neural Networks: A Better Model of Biological Object Recognition.

    Science.gov (United States)

    Spoerer, Courtney J; McClure, Patrick; Kriegeskorte, Nikolaus

    2017-01-01

    Feedforward neural networks provide the dominant model of how the brain performs visual object recognition. However, these networks lack the lateral and feedback connections, and the resulting recurrent neuronal dynamics, of the ventral visual pathway in the human and non-human primate brain. Here we investigate recurrent convolutional neural networks with bottom-up (B), lateral (L), and top-down (T) connections. Combining these types of connections yields four architectures (B, BT, BL, and BLT), which we systematically test and compare. We hypothesized that recurrent dynamics might improve recognition performance in the challenging scenario of partial occlusion. We introduce two novel occluded object recognition tasks to test the efficacy of the models, digit clutter (where multiple target digits occlude one another) and digit debris (where target digits are occluded by digit fragments). We find that recurrent neural networks outperform feedforward control models (approximately matched in parametric complexity) at recognizing objects, both in the absence of occlusion and in all occlusion conditions. Recurrent networks were also found to be more robust to the inclusion of additive Gaussian noise. Recurrent neural networks are better in two respects: (1) they are more neurobiologically realistic than their feedforward counterparts; (2) they are better in terms of their ability to recognize objects, especially under challenging conditions. This work shows that computer vision can benefit from using recurrent convolutional architectures and suggests that the ubiquitous recurrent connections in biological brains are essential for task performance.
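
    A minimal sketch of the "BL" idea, i.e. a bottom-up convolution combined with a lateral (recurrent) convolution unrolled over a few time steps, is given below for a single channel and a single image. This is only an illustration of the recurrence, not the authors' architecture, parameterization, or training setup.

```python
import numpy as np
from scipy.signal import convolve2d

def relu(x):
    return np.maximum(x, 0.0)

def bl_layer(x, w_b, w_l, n_steps=4):
    """One 'BL'-style layer: bottom-up kernel w_b plus lateral kernel w_l,
    unrolled for n_steps time steps (single channel, illustrative only)."""
    bottom_up = convolve2d(x, w_b, mode="same")
    h = relu(bottom_up)
    for _ in range(n_steps - 1):
        lateral = convolve2d(h, w_l, mode="same")
        h = relu(bottom_up + lateral)   # the recurrence refines the response
    return h

rng = np.random.default_rng(0)
img = rng.random((28, 28))
w_b = rng.normal(scale=0.1, size=(3, 3))
w_l = rng.normal(scale=0.1, size=(3, 3))
print(bl_layer(img, w_b, w_l).shape)
```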

  18. Intrinsically active and pacemaker neurons in pluripotent stem cell-derived neuronal populations.

    Science.gov (United States)

    Illes, Sebastian; Jakab, Martin; Beyer, Felix; Gelfert, Renate; Couillard-Despres, Sébastien; Schnitzler, Alfons; Ritter, Markus; Aigner, Ludwig

    2014-03-11

    Neurons generated from pluripotent stem cells (PSCs) self-organize into functional neuronal assemblies in vitro, generating synchronous network activities. Intriguingly, PSC-derived neuronal assemblies develop spontaneous activities that are independent of external stimulation, suggesting the presence of thus far undetected intrinsically active neurons (IANs). Here, by using mouse embryonic stem cells, we provide evidence for the existence of IANs in PSC-neuronal networks based on extracellular multielectrode array and intracellular patch-clamp recordings. IANs remain active after pharmacological inhibition of fast synaptic communication and possess intrinsic mechanisms required for autonomous neuronal activity. PSC-derived IANs are functionally integrated in PSC-neuronal populations, contribute to synchronous network bursting, and exhibit pacemaker properties. The intrinsic activity and pacemaker properties of the neuronal subpopulation identified herein may be particularly relevant for interventions involving transplantation of neural tissues. IANs may be a key element in the regulation of the functional activity of grafted as well as preexisting host neuronal networks.

  19. Reconstructing Causal Biological Networks through Active Learning.

    Directory of Open Access Journals (Sweden)

    Hyunghoon Cho

    Full Text Available Reverse-engineering of biological networks is a central problem in systems biology. Intervention data, such as gene knockouts or knockdowns, are typically used for teasing apart causal relationships among genes. Under time or resource constraints, one needs to carefully choose which intervention experiments to carry out. Previous approaches for selecting the most informative interventions have largely focused on discrete Bayesian networks. However, continuous Bayesian networks are of great practical interest, especially in the study of complex biological systems and their quantitative properties. In this work, we present an efficient, information-theoretic active learning algorithm for Gaussian Bayesian networks (GBNs), which serve as important models for gene regulatory networks. In addition to providing linear-algebraic insights unique to GBNs, leading to significant runtime improvements, we demonstrate the effectiveness of our method on data simulated with GBNs and the DREAM4 network inference challenge data sets. Our method generally leads to faster recovery of the underlying network structure and faster convergence to the final distribution of confidence scores over candidate graph structures using the full data, in comparison to random selection of intervention experiments.

  20. Assessing neuronal networks: understanding Alzheimer's disease.

    LENUS (Irish Health Repository)

    Bokde, Arun L W

    2012-02-01

    Findings derived from neuroimaging of the structural and functional organization of the human brain have led to the widely supported hypothesis that neuronal networks of temporally coordinated brain activity across different regional brain structures underpin cognitive function. Failure of integration within a network leads to cognitive dysfunction. The current discussion on Alzheimer's disease (AD) argues that it presents in part a disconnection syndrome. Studies using functional magnetic resonance imaging, positron emission tomography and electroencephalography demonstrate that synchronicity of brain activity is altered in AD and correlates with cognitive deficits. Moreover, recent advances in diffusion tensor imaging have made it possible to track axonal projections across the brain, revealing substantial regional impairment in fiber-tract integrity in AD. Accumulating evidence points towards a network breakdown reflecting disconnection at both the structural and functional system level. The exact relationship among these multiple mechanistic variables and their contribution to cognitive alterations and ultimately decline is yet unknown. Focused research efforts aimed at the integration of both function and structure hold great promise not only in improving our understanding of cognition but also of its characteristic progressive metamorphosis in complex chronic neurodegenerative disorders such as AD.

  1. Large-scale modeling of epileptic seizures: scaling properties of two parallel neuronal network simulation algorithms.

    Science.gov (United States)

    Pesce, Lorenzo L; Lee, Hyong C; Hereld, Mark; Visser, Sid; Stevens, Rick L; Wildeman, Albert; van Drongelen, Wim

    2013-01-01

    Our limited understanding of the relationship between the behavior of individual neurons and large neuronal networks is an important limitation in current epilepsy research and may be one of the main causes of our inadequate ability to treat it. Addressing this problem directly via experiments is impossibly complex; thus, we have been developing and studying medium-large-scale simulations of detailed neuronal networks to guide us. Flexibility in the connection schemas and a complete description of the cortical tissue seem necessary for this purpose. In this paper we examine some of the basic issues encountered in these multiscale simulations. We have determined the detailed behavior of two such simulators on parallel computer systems. The observed memory and computation-time scaling behavior for a distributed memory implementation were very good over the range studied, both in terms of network sizes (2,000 to 400,000 neurons) and processor pool sizes (1 to 256 processors). Our simulations required between a few megabytes and about 150 gigabytes of RAM and lasted between a few minutes and about a week, well within the capability of most multinode clusters. Therefore, simulations of epileptic seizures on networks with millions of cells should be feasible on current supercomputers.

  2. Large-Scale Modeling of Epileptic Seizures: Scaling Properties of Two Parallel Neuronal Network Simulation Algorithms

    Directory of Open Access Journals (Sweden)

    Lorenzo L. Pesce

    2013-01-01

    Full Text Available Our limited understanding of the relationship between the behavior of individual neurons and large neuronal networks is an important limitation in current epilepsy research and may be one of the main causes of our inadequate ability to treat it. Addressing this problem directly via experiments is impossibly complex; thus, we have been developing and studying medium-large-scale simulations of detailed neuronal networks to guide us. Flexibility in the connection schemas and a complete description of the cortical tissue seem necessary for this purpose. In this paper we examine some of the basic issues encountered in these multiscale simulations. We have determined the detailed behavior of two such simulators on parallel computer systems. The observed memory and computation-time scaling behavior for a distributed memory implementation were very good over the range studied, both in terms of network sizes (2,000 to 400,000 neurons) and processor pool sizes (1 to 256 processors). Our simulations required between a few megabytes and about 150 gigabytes of RAM and lasted between a few minutes and about a week, well within the capability of most multinode clusters. Therefore, simulations of epileptic seizures on networks with millions of cells should be feasible on current supercomputers.

  3. Detection of M-Sequences from Spike Sequence in Neuronal Networks

    Directory of Open Access Journals (Sweden)

    Yoshi Nishitani

    2012-01-01

    Full Text Available In circuit theory, it is well known that a linear feedback shift register (LFSR) circuit generates pseudorandom bit sequences (PRBS), including M-sequences with the maximum period length. In this study, we tried to detect M-sequences, pseudorandom sequences generated by LFSR circuits, in time-series patterns of stimulated action potentials. Stimulated action potentials were recorded from dissociated cultures of hippocampal neurons grown on a multielectrode array. We could find several M-sequences from a 3-stage LFSR circuit (M3). These results show the possibility of assembling LFSR circuits, or their equivalents, in a neuronal network. However, since the M3 pattern was composed of only four spike intervals, the possibility of an accidental detection was not zero. We therefore detected M-sequences from random spike sequences that were not generated by an LFSR circuit and compared the result with the number of M-sequences in the originally observed raster data. As a result, a significant difference was confirmed: a greater number of “0–1”-reversed 3-stage M-sequences occurred than would have been detected by chance. This result suggests that some LFSR-equivalent circuits are assembled in neuronal networks.
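
    For concreteness, a 3-stage LFSR with feedback taps on stages 3 and 2 (polynomial x^3 + x^2 + 1) produces an M-sequence of period 2^3 - 1 = 7, and detection amounts to scanning a binarized spike sequence for that 7-bit pattern. The sketch below is illustrative only; the binarization of the recordings and the toy spike train are assumptions.

```python
def lfsr3(seed=(1, 0, 0), n=14):
    """3-stage LFSR with feedback taps on stages 3 and 2 (x^3 + x^2 + 1);
    produces an M-sequence of period 2**3 - 1 = 7."""
    s = list(seed)
    out = []
    for _ in range(n):
        out.append(s[-1])
        fb = s[-1] ^ s[-2]        # XOR of the tapped stages
        s = [fb] + s[:-1]         # shift right, feedback into stage 1
    return out

def count_occurrences(bits, pattern):
    """Count (possibly overlapping) occurrences of pattern in a bit list."""
    k = len(pattern)
    return sum(bits[i:i + k] == pattern for i in range(len(bits) - k + 1))

m_seq = lfsr3()[:7]               # one full period: [0, 0, 1, 0, 1, 1, 1]
spikes = [1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0]  # toy binarized spike train
print(m_seq, count_occurrences(spikes, m_seq))
```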

  4. Control of bursting synchronization in networks of Hodgkin-Huxley-type neurons with chemical synapses.

    Science.gov (United States)

    Batista, C A S; Viana, R L; Ferrari, F A S; Lopes, S R; Batista, A M; Coninck, J C P

    2013-04-01

    Thermally sensitive neurons present bursting activity for certain temperature ranges, characterized by fast repetitive spiking of action potentials followed by a short quiescent period. Synchronization of bursting activity is possible in networks of coupled neurons, and it is sometimes an undesirable feature. Control procedures can totally or partially suppress this collective behavior, with potential applications in deep-brain stimulation techniques. We investigate the control of bursting synchronization in small-world networks of Hodgkin-Huxley-type thermally sensitive neurons with chemical synapses through two different strategies: one is the application of an external time-periodic electrical signal, and the other consists of a time-delayed feedback signal. We consider the effectiveness of both strategies in terms of application protocols suitable for implementation by pacemakers.

  5. Plasticity-induced characteristic changes of pattern dynamics and the related phase transitions in small-world neuronal networks

    International Nuclear Information System (INIS)

    Huang Xu-Hui; Hu Gang

    2014-01-01

    Phase transitions widely exist in nature and occur when some control parameters are changed. In neural systems, macroscopic states are represented by the activity states of neuron populations, and phase transitions between different activity states are closely related to corresponding functions in the brain. In particular, phase transitions to some rhythmic synchronous firing states play significant roles in diverse brain functions and dysfunctions, such as encoding rhythmic external stimuli, epileptic seizure, etc. However, in previous studies, phase transitions in neuronal networks were almost always driven by network parameters (e.g., external stimuli), and there has been no investigation of transitions between typical activity states of neuronal networks arising in a self-organized way through plastic connection weights. In this paper, we discuss phase transitions in electrically coupled, lattice-based small-world neuronal networks (LBSW networks) under spike-timing-dependent plasticity (STDP). By applying STDP to all electrical synapses, various known and novel phase transitions emerge in LBSW networks, in particular the phenomenon of self-organized phase transitions (SOPTs): repeated transitions between synchronous and asynchronous firing states. We further explore the mechanism generating SOPTs on the basis of synaptic weight dynamics. (interdisciplinary physics and related areas of science and technology)
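
    The STDP rule at the heart of such studies is the standard pair-based exponential window: a presynaptic spike preceding a postsynaptic one potentiates the connection, while the reverse order depresses it. The sketch below uses illustrative parameter values, not those of the paper.

```python
import numpy as np

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Pair-based STDP weight change for a spike-time difference
    dt = t_post - t_pre (ms).  Potentiation when the presynaptic spike
    precedes the postsynaptic one, depression otherwise.  Parameter values
    are illustrative, not those used in the paper."""
    return np.where(dt >= 0,
                    a_plus * np.exp(-dt / tau_plus),
                    -a_minus * np.exp(dt / tau_minus))

dts = np.array([-40.0, -10.0, 5.0, 30.0])
print(stdp_dw(dts))   # negative changes for post-before-pre, positive otherwise
```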

  6. Model-based analysis and control of a network of basal ganglia spiking neurons in the normal and Parkinsonian states

    Science.gov (United States)

    Liu, Jianbo; Khalil, Hassan K.; Oweiss, Karim G.

    2011-08-01

    Controlling the spatiotemporal firing pattern of an intricately connected network of neurons through microstimulation is highly desirable in many applications. We investigated in this paper the feasibility of using a model-based approach to the analysis and control of a basal ganglia (BG) network model of Hodgkin-Huxley (HH) spiking neurons through microstimulation. Detailed analysis of this network model suggests that it can reproduce the experimentally observed characteristics of BG neurons under a normal and a pathological Parkinsonian state. A simplified neuronal firing rate model, identified from the detailed HH network model, is shown to capture the essential network dynamics. Mathematical analysis of the simplified model reveals the presence of a systematic relationship between the network's structure and its dynamic response to spatiotemporally patterned microstimulation. We show that both the network synaptic organization and the local mechanism of microstimulation can impose tight constraints on the possible spatiotemporal firing patterns that can be generated by the microstimulated network, which may hinder the effectiveness of microstimulation to achieve a desired objective under certain conditions. Finally, we demonstrate that the feedback control design aided by the mathematical analysis of the simplified model is indeed effective in driving the BG network in the normal and Parkinsonian states to follow a prescribed spatiotemporal firing pattern. We further show that the rhythmic/oscillatory patterns that characterize a dopamine-depleted BG network can be suppressed as a direct consequence of controlling the spatiotemporal pattern of a subpopulation of the output Globus Pallidus internalis (GPi) neurons in the network. This work may provide plausible explanations for the mechanisms underlying the therapeutic effects of deep brain stimulation (DBS) in Parkinson's disease and pave the way towards a model-based, network-level analysis and closed-loop…

  7. SYNAPTIC DEPRESSION IN DEEP NEURAL NETWORKS FOR SPEECH PROCESSING.

    Science.gov (United States)

    Zhang, Wenhao; Li, Hanyu; Yang, Minda; Mesgarani, Nima

    2016-03-01

    A characteristic property of biological neurons is their ability to dynamically change the synaptic efficacy in response to variable input conditions. This mechanism, known as synaptic depression, significantly contributes to the formation of normalized representation of speech features. Synaptic depression also contributes to the robust performance of biological systems. In this paper, we describe how synaptic depression can be modeled and incorporated into deep neural network architectures to improve their generalization ability. We observed that when synaptic depression is added to the hidden layers of a neural network, it reduces the effect of changing background activity in the node activations. In addition, we show that when synaptic depression is included in a deep neural network trained for phoneme classification, the performance of the network improves under noisy conditions not included in the training phase. Our results suggest that more complete neuron models may further reduce the gap between the biological performance and artificial computing, resulting in networks that better generalize to novel signal conditions.
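
    One simple way to add a depression-like mechanism to hidden-layer activations is to give each unit a resource variable that is consumed by activity and recovers slowly, following the usual short-term-depression update. How the paper integrates this into the network and its parameter values are not reproduced here, so the sketch below is illustrative only.

```python
import numpy as np

def depress(activations, tau=10.0, u=0.2):
    """Apply a synaptic-depression-like gain to a sequence of hidden-layer
    activations (shape: time x units).  Each unit has a resource r in [0, 1]
    that is consumed by activity and recovers with time constant tau
    (in frames).  Illustrative only; not the paper's exact formulation."""
    T, n = activations.shape
    r = np.ones(n)
    out = np.empty_like(activations)
    for t in range(T):
        out[t] = r * activations[t]               # depressed (normalized) output
        r += (1.0 - r) / tau - u * r * activations[t]
        r = np.clip(r, 0.0, 1.0)
    return out

x = np.abs(np.random.default_rng(0).normal(size=(50, 8)))
y = depress(x)
print(x.mean(), y.mean())   # sustained activity is attenuated
```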

  8. Spiral Waves and Multiple Spatial Coherence Resonances Induced by Colored Noise in Neuronal Network

    International Nuclear Information System (INIS)

    Tang Zhao; Li Yuye; Xi Lei; Jia Bing; Gu Huaguang

    2012-01-01

    Gaussian colored noise induced spatial patterns and spatial coherence resonances in a square-lattice neuronal network composed of Morris-Lecar neurons are studied. Each neuron is at a resting state near a saddle-node bifurcation on an invariant circle and is coupled to its nearest neighbors by electrical coupling. Spiral waves with different structures and disordered spatial structures can be alternately induced within a large range of noise intensity. By calculating the spatial structure function and the signal-to-noise ratio (SNR), it is found that SNR values are higher when the spiral structures are simple and lower when the spatial patterns are complex or disordered. The SNR manifests multiple local maxima, indicating that the colored noise can induce multiple spatial coherence resonances. The maximal SNR values decrease as the correlation time of the noise increases. These results not only provide an example of multiple resonances, but also show that Gaussian colored noise plays constructive roles in neuronal networks. (general)
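
    Gaussian colored noise of the kind referred to above is commonly generated as an Ornstein-Uhlenbeck process with correlation time tau, as in the sketch below; the parameter values are illustrative, not those of the paper.

```python
import numpy as np

def colored_noise(n_steps, dt=0.01, tau=1.0, sigma=0.1, seed=0):
    """Ornstein-Uhlenbeck process: zero-mean Gaussian noise with an
    exponential autocorrelation of time constant tau (a standard way to
    generate the colored noise described above)."""
    rng = np.random.default_rng(seed)
    eta = np.zeros(n_steps)
    coeff = np.sqrt(2.0 * sigma**2 * dt / tau)
    for i in range(1, n_steps):
        eta[i] = eta[i - 1] - (eta[i - 1] / tau) * dt + coeff * rng.normal()
    return eta

noise = colored_noise(10000, tau=0.5)
print(noise.std())   # approaches sigma in the stationary regime
```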

  9. Artificial Astrocytes Improve Neural Network Performance

    Science.gov (United States)

    Porto-Pazos, Ana B.; Veiguela, Noha; Mesejo, Pablo; Navarrete, Marta; Alvarellos, Alberto; Ibáñez, Oscar; Pazos, Alejandro; Araque, Alfonso

    2011-01-01

    Compelling evidence indicates the existence of bidirectional communication between astrocytes and neurons. Astrocytes, a type of glial cells classically considered to be passive supportive cells, have recently been demonstrated to be actively involved in the processing and regulation of synaptic information, suggesting that brain function arises from the activity of neuron-glia networks. However, the actual impact of astrocytes on neural network function is largely unknown and its application in artificial intelligence remains untested. We have investigated the consequences of including artificial astrocytes, which present the biologically defined properties involved in astrocyte-neuron communication, on artificial neural network performance. Using connectionist systems and evolutionary algorithms, we have compared the performance of artificial neural networks (NN) and artificial neuron-glia networks (NGN) in solving classification problems. We show that the degree of success of NGN is superior to that of NN. Analysis of the performance of NN with different numbers of neurons or different architectures indicates that the effects of NGN cannot be accounted for by an increased number of network elements; rather, they are specifically due to astrocytes. Furthermore, the relative efficacy of NGN vs. NN increases as the complexity of the network increases. These results indicate that artificial astrocytes improve neural network performance and establish the concept of Artificial Neuron-Glia Networks, a novel concept in Artificial Intelligence with implications in computational science as well as in the understanding of brain function. PMID:21526157

  10. Artificial astrocytes improve neural network performance.

    Directory of Open Access Journals (Sweden)

    Ana B Porto-Pazos

    Full Text Available Compelling evidence indicates the existence of bidirectional communication between astrocytes and neurons. Astrocytes, a type of glial cells classically considered to be passive supportive cells, have recently been demonstrated to be actively involved in the processing and regulation of synaptic information, suggesting that brain function arises from the activity of neuron-glia networks. However, the actual impact of astrocytes on neural network function is largely unknown and its application in artificial intelligence remains untested. We have investigated the consequences of including artificial astrocytes, which present the biologically defined properties involved in astrocyte-neuron communication, on artificial neural network performance. Using connectionist systems and evolutionary algorithms, we have compared the performance of artificial neural networks (NN) and artificial neuron-glia networks (NGN) in solving classification problems. We show that the degree of success of NGN is superior to that of NN. Analysis of the performance of NN with different numbers of neurons or different architectures indicates that the effects of NGN cannot be accounted for by an increased number of network elements; rather, they are specifically due to astrocytes. Furthermore, the relative efficacy of NGN vs. NN increases as the complexity of the network increases. These results indicate that artificial astrocytes improve neural network performance and establish the concept of Artificial Neuron-Glia Networks, a novel concept in Artificial Intelligence with implications in computational science as well as in the understanding of brain function.

  11. Artificial astrocytes improve neural network performance.

    Science.gov (United States)

    Porto-Pazos, Ana B; Veiguela, Noha; Mesejo, Pablo; Navarrete, Marta; Alvarellos, Alberto; Ibáñez, Oscar; Pazos, Alejandro; Araque, Alfonso

    2011-04-19

    Compelling evidence indicates the existence of bidirectional communication between astrocytes and neurons. Astrocytes, a type of glial cells classically considered to be passive supportive cells, have recently been demonstrated to be actively involved in the processing and regulation of synaptic information, suggesting that brain function arises from the activity of neuron-glia networks. However, the actual impact of astrocytes on neural network function is largely unknown and its application in artificial intelligence remains untested. We have investigated the consequences of including artificial astrocytes, which present the biologically defined properties involved in astrocyte-neuron communication, on artificial neural network performance. Using connectionist systems and evolutionary algorithms, we have compared the performance of artificial neural networks (NN) and artificial neuron-glia networks (NGN) in solving classification problems. We show that the degree of success of NGN is superior to that of NN. Analysis of the performance of NN with different numbers of neurons or different architectures indicates that the effects of NGN cannot be accounted for by an increased number of network elements; rather, they are specifically due to astrocytes. Furthermore, the relative efficacy of NGN vs. NN increases as the complexity of the network increases. These results indicate that artificial astrocytes improve neural network performance and establish the concept of Artificial Neuron-Glia Networks, a novel concept in Artificial Intelligence with implications in computational science as well as in the understanding of brain function.

  12. Synchronization stability and pattern selection in a memristive neuronal network

    Science.gov (United States)

    Wang, Chunni; Lv, Mi; Alsaedi, Ahmed; Ma, Jun

    2017-11-01

    Spatial pattern formation and selection depend on the intrinsic self-organization and cooperation between nodes in spatiotemporal systems. Based on a memory neuron model, a regular network with electromagnetic induction is proposed to investigate the synchronization and pattern selection. In our model, the memristor is used to bridge the coupling between the magnetic flux and the membrane potential, and the induction current results from the time-varying electromagnetic field contributed by the exchange of ion currents and the distribution of charged ions. The statistical factor of synchronization predicts the transition of synchronization and pattern stability. The bifurcation analysis of the sampled time series for the membrane potential reveals the mode transition in electrical activity and pattern selection. A formation mechanism is outlined to account for the emergence of target waves. Although an external stimulus is imposed on each neuron uniformly, the diversity in the magnetic flux and the induction current leads to emergence of target waves in the studied network.

  13. Synchronization stability and pattern selection in a memristive neuronal network.

    Science.gov (United States)

    Wang, Chunni; Lv, Mi; Alsaedi, Ahmed; Ma, Jun

    2017-11-01

    Spatial pattern formation and selection depend on the intrinsic self-organization and cooperation between nodes in spatiotemporal systems. Based on a memory neuron model, a regular network with electromagnetic induction is proposed to investigate the synchronization and pattern selection. In our model, the memristor is used to bridge the coupling between the magnetic flux and the membrane potential, and the induction current results from the time-varying electromagnetic field contributed by the exchange of ion currents and the distribution of charged ions. The statistical factor of synchronization predicts the transition of synchronization and pattern stability. The bifurcation analysis of the sampled time series for the membrane potential reveals the mode transition in electrical activity and pattern selection. A formation mechanism is outlined to account for the emergence of target waves. Although an external stimulus is imposed on each neuron uniformly, the diversity in the magnetic flux and the induction current leads to emergence of target waves in the studied network.

  14. Plasticity of Neuron-Glial Transmission: Equipping Glia for Long-Term Integration of Network Activity

    Directory of Open Access Journals (Sweden)

    Wayne Croft

    2015-01-01

    Full Text Available The capacity of synaptic networks to express activity-dependent changes in strength and connectivity is essential for learning and memory processes. In recent years, glial cells (most notably astrocytes) have been recognized as active participants in the modulation of synaptic transmission and synaptic plasticity, implicating these electrically nonexcitable cells in information processing in the brain. While the concept of bidirectional communication between neurons and glia and the mechanisms by which gliotransmission can modulate neuronal function are well established, less attention has been focussed on the computational potential of neuron-glial transmission itself. In particular, whether neuron-glial transmission is itself subject to activity-dependent plasticity and what the computational properties of such plasticity might be has not been explored in detail. In this review, we summarize current examples of plasticity in neuron-glial transmission, in many brain regions and neurotransmitter pathways. We argue that induction of glial plasticity typically requires repetitive neuronal firing over long time periods (minutes to hours) rather than the short-lived, stereotyped trigger typical of canonical long-term potentiation. We speculate that this equips glia with a mechanism for monitoring average firing rates in the synaptic network, which is suited to the longer term roles proposed for astrocytes in neurophysiology.

  15. Phase-flip bifurcation in a coupled Josephson junction neuron system

    Energy Technology Data Exchange (ETDEWEB)

    Segall, Kenneth, E-mail: ksegall@colgate.edu [Department of Physics and Astronomy, Colgate University, Hamilton, NY 13346 (United States); Guo, Siyang; Crotty, Patrick [Department of Physics and Astronomy, Colgate University, Hamilton, NY 13346 (United States); Schult, Dan [Department of Mathematics, Colgate University, Hamilton, NY 13346 (United States); Miller, Max [Department of Physics and Astronomy, Colgate University, Hamilton, NY 13346 (United States)

    2014-12-15

    Aiming to understand group behaviors and dynamics of neural networks, we have previously proposed the Josephson junction neuron (JJ neuron) as a fast analog model that mimics a biological neuron using superconducting Josephson junctions. In this study, we further analyze the dynamics of the JJ neuron numerically by coupling one JJ neuron to another. In this coupled system we observe a phase-flip bifurcation, where the neurons synchronize out-of-phase at weak coupling and in-phase at strong coupling. We verify this by simulation of the circuit equations and construct a bifurcation diagram for varying coupling strength using the phase response curve and spike phase difference map. The phase-flip bifurcation could be observed experimentally using standard digital superconducting circuitry.

  16. Phase-flip bifurcation in a coupled Josephson junction neuron system

    International Nuclear Information System (INIS)

    Segall, Kenneth; Guo, Siyang; Crotty, Patrick; Schult, Dan; Miller, Max

    2014-01-01

    Aiming to understand group behaviors and dynamics of neural networks, we have previously proposed the Josephson junction neuron (JJ neuron) as a fast analog model that mimics a biological neuron using superconducting Josephson junctions. In this study, we further analyze the dynamics of the JJ neuron numerically by coupling one JJ neuron to another. In this coupled system we observe a phase-flip bifurcation, where the neurons synchronize out-of-phase at weak coupling and in-phase at strong coupling. We verify this by simulation of the circuit equations and construct a bifurcation diagram for varying coupling strength using the phase response curve and spike phase difference map. The phase-flip bifurcation could be observed experimentally using standard digital superconducting circuitry.

  17. Functional characterization of GABAA receptor-mediated modulation of cortical neuron network activity in microelectrode array recordings

    DEFF Research Database (Denmark)

    Bader, Benjamin M; Steder, Anne; Klein, Anders Bue

    2017-01-01

    The numerous γ-aminobutyric acid type A receptor (GABAAR) subtypes are differentially expressed and mediate distinct functions at the neuronal level. In this study we have investigated GABAAR-mediated modulation of the spontaneous activity patterns of primary neuronal networks from murine frontal cortex by characterizing the effects induced by a wide selection of pharmacological tools at a plethora of activity parameters in microelectrode array (MEA) recordings. The basic characteristics of the primary cortical neurons used in the recordings were studied in some detail, and the expression levels... ...of the information extractable from the MEA recordings offers interesting insights into the contributions of various GABAAR subtypes/subgroups to cortical network activity and the putative functional interplay between these receptors in these neurons.

  18. Applying differential dynamic logic to reconfigurable biological networks.

    Science.gov (United States)

    Figueiredo, Daniel; Martins, Manuel A; Chaves, Madalena

    2017-09-01

    Qualitative and quantitative modeling frameworks are widely used for analysis of biological regulatory networks, the former giving a preliminary overview of the system's global dynamics and the latter providing more detailed solutions. Another approach is to model biological regulatory networks as hybrid systems, i.e., systems which can display both continuous and discrete dynamic behaviors. Actually, the development of synthetic biology has shown that this is a suitable way to think about biological systems, which can often be constructed as networks with discrete controllers, and present hybrid behaviors. In this paper we discuss this approach as a special case of the reconfigurability paradigm, well studied in Computer Science (CS). In CS there are well developed computational tools to reason about hybrid systems. We argue that it is worth applying such tools in a biological context. One interesting tool is differential dynamic logic (dL), which has recently been developed by Platzer and applied to many case-studies. In this paper we discuss some simple examples of biological regulatory networks to illustrate how dL can be used as an alternative, or also as a complement to methods already used. Copyright © 2017 Elsevier Inc. All rights reserved.

  19. Extrasynaptic neurotransmission in the modulation of brain function. Focus on the striatal neuronal-glial networks

    Directory of Open Access Journals (Sweden)

    Kjell eFuxe

    2012-06-01

    Full Text Available Extrasynaptic neurotransmission is an important short distance form of volume transmission (VT) and describes the extracellular diffusion of transmitters and modulators after synaptic spillover or extrasynaptic release in the local circuit regions binding to and activating mainly extrasynaptic neuronal and glial receptors in the neuroglial networks of the brain. Receptor-receptor interactions in G protein-coupled receptor (GPCR) heteromers play a major role, on dendritic spines and nerve terminals including glutamate synapses, in the integrative processes of the extrasynaptic signaling. Heteromeric complexes between GPCR and ion-channel receptors play a special role in the integration of the synaptic and extrasynaptic signals. Changes in extracellular concentrations of the classical synaptic neurotransmitters glutamate and GABA found with microdialysis is likely an expression of the activity of the neuron-astrocyte unit of the brain and can be used as an index of VT-mediated actions of these two neurotransmitters in the brain. Thus, the activity of neurons may be functionally linked to the activity of astrocytes, which may release glutamate and GABA to the extracellular space where extrasynaptic glutamate and GABA receptors do exist. Wiring transmission (WT) and VT are fundamental properties of all neurons of the CNS but the balance between WT and VT varies from one nerve cell population to the other. The focus is on the striatal cellular networks, and the WT and VT and their integration via receptor heteromers are described in the GABA projection neurons, the glutamate, dopamine, 5-hydroxytryptamine (5-HT) and histamine striatal afferents, the cholinergic interneurons and different types of GABA interneurons. In addition, the role in these networks of VT signaling of the energy-dependent modulator adenosine and of endocannabinoids mainly formed in the striatal projection neurons will be underlined to understand the communication in the striatal…

  20. Impacts of clustering on noise-induced spiking regularity in the excitatory neuronal networks of subnetworks.

    Science.gov (United States)

    Li, Huiyan; Sun, Xiaojuan; Xiao, Jinghua

    2015-01-01

    In this paper, we investigate how clustering factors influence the spiking regularity of a neuronal network of subnetworks. To do so, we fix the averaged coupling probability and the averaged coupling strength, and take the cluster number M, the ratio R of intra-connection probability to inter-connection probability, and the ratio S of intra-coupling strength to inter-coupling strength as control parameters. From the simulation results, we find that the spiking regularity of the neuronal networks varies little with R and S when M is fixed. However, the cluster number M can reduce the spiking regularity to a low level when the corresponding uniform neuronal network's spiking regularity is at a high level. Taken together, these results show that clustering factors have little influence on spiking regularity when the total energy is fixed, which is controlled by the averaged coupling strength and the averaged connection probability.
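
    A sketch of the network-of-subnetworks construction is given below: the intra- and inter-cluster connection probabilities are derived from the averaged connection probability and the ratio R so that the average stays fixed. This derivation of p_intra and p_inter is an assumption made for illustration and may differ in detail from the paper's exact setup.

```python
import numpy as np

def clustered_adjacency(n, m_clusters, p_bar, ratio_r, seed=0):
    """Random directed adjacency matrix with n neurons in m equal clusters.

    The intra-/inter-cluster connection probabilities are chosen so that
    their ratio is ratio_r while the averaged connection probability stays
    close to p_bar (an illustrative derivation, not necessarily the paper's):
        p_inter = p_bar * m / (ratio_r + m - 1),  p_intra = ratio_r * p_inter
    """
    rng = np.random.default_rng(seed)
    p_inter = p_bar * m_clusters / (ratio_r + m_clusters - 1)
    p_intra = ratio_r * p_inter
    labels = np.repeat(np.arange(m_clusters), n // m_clusters)
    same = labels[:, None] == labels[None, :]
    p = np.where(same, p_intra, p_inter)
    adj = rng.random((n, n)) < p
    np.fill_diagonal(adj, False)          # no self-connections
    return adj

a = clustered_adjacency(n=300, m_clusters=3, p_bar=0.1, ratio_r=4.0)
print(a.mean())   # close to p_bar
```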

  1. Opposite effects of low and high doses of Abeta42 on electrical network and neuronal excitability in the rat prefrontal cortex.

    Science.gov (United States)

    Wang, Yun; Zhang, Guangping; Zhou, Hongwei; Barakat, Amey; Querfurth, Henry

    2009-12-21

    Changes in neuronal synchronization have been found in patients and animal models of Alzheimer's disease (AD). Synchronized behaviors within neuronal networks are important to such complex cognitive processes as working memory. The mechanisms behind these changes are not understood but may involve the action of soluble beta-amyloid (Abeta) on electrical networks. In order to determine if Abeta can induce changes in neuronal synchronization, the activities of pyramidal neurons were recorded in rat prefrontal cortical (PFC) slices under calcium-free conditions using multi-neuron patch clamp technique. Electrical network activities and synchronization among neurons were significantly inhibited by low dose Abeta42 (1 nM) and initially by high dose Abeta42 (500 nM). However, prolonged application of high dose Abeta42 resulted in network activation and tonic firing. Underlying these observations, we discovered that prolonged application of low and high doses of Abeta42 induced opposite changes in action potential (AP)-threshold and after-hyperpolarization (AHP) of neurons. Accordingly, low dose Abeta42 significantly increased the AP-threshold and deepened the AHP, making neurons less excitable. In contrast, high dose Abeta42 significantly reduced the AP-threshold and shallowed the AHP, making neurons more excitable. These results support a model that low dose Abeta42 released into the interstitium has a physiologic feedback role to dampen electrical network activity by reducing neuronal excitability. Higher concentrations of Abeta42 over time promote supra-synchronization between individual neurons by increasing their excitability. The latter may disrupt frontal-based cognitive processing and in some cases lead to epileptiform discharges.

  2. Opposite effects of low and high doses of Abeta42 on electrical network and neuronal excitability in the rat prefrontal cortex.

    Directory of Open Access Journals (Sweden)

    Yun Wang

    Full Text Available Changes in neuronal synchronization have been found in patients and animal models of Alzheimer's disease (AD). Synchronized behaviors within neuronal networks are important to such complex cognitive processes as working memory. The mechanisms behind these changes are not understood but may involve the action of soluble beta-amyloid (Abeta) on electrical networks. In order to determine if Abeta can induce changes in neuronal synchronization, the activities of pyramidal neurons were recorded in rat prefrontal cortical (PFC) slices under calcium-free conditions using multi-neuron patch clamp technique. Electrical network activities and synchronization among neurons were significantly inhibited by low dose Abeta42 (1 nM) and initially by high dose Abeta42 (500 nM). However, prolonged application of high dose Abeta42 resulted in network activation and tonic firing. Underlying these observations, we discovered that prolonged application of low and high doses of Abeta42 induced opposite changes in action potential (AP)-threshold and after-hyperpolarization (AHP) of neurons. Accordingly, low dose Abeta42 significantly increased the AP-threshold and deepened the AHP, making neurons less excitable. In contrast, high dose Abeta42 significantly reduced the AP-threshold and shallowed the AHP, making neurons more excitable. These results support a model that low dose Abeta42 released into the interstitium has a physiologic feedback role to dampen electrical network activity by reducing neuronal excitability. Higher concentrations of Abeta42 over time promote supra-synchronization between individual neurons by increasing their excitability. The latter may disrupt frontal-based cognitive processing and in some cases lead to epileptiform discharges.

  3. Autaptic pacemaker mediated propagation of weak rhythmic activity across small-world neuronal networks

    Science.gov (United States)

    Yilmaz, Ergin; Baysal, Veli; Ozer, Mahmut; Perc, Matjaž

    2016-02-01

    We study the effects of an autapse, which is mathematically described as a self-feedback loop, on the propagation of weak, localized pacemaker activity across a Newman-Watts small-world network consisting of stochastic Hodgkin-Huxley neurons. We consider that only the pacemaker neuron, which is stimulated by a subthreshold periodic signal, has an electrical autapse that is characterized by a coupling strength and a delay time. We focus on the impact of the coupling strength, the network structure, the properties of the weak periodic stimulus, and the properties of the autapse on the transmission of localized pacemaker activity. The obtained results indicate the existence of an optimal channel noise intensity for the propagation of the localized rhythm. Under optimal conditions, the autapse can significantly improve the propagation of pacemaker activity, but only for a specific range of the autaptic coupling strength. Moreover, the autaptic delay time has to be equal to the intrinsic oscillation period of the Hodgkin-Huxley neuron or its integer multiples. We analyze the inter-spike interval histogram and show that the autapse enhances or suppresses the propagation of the localized rhythm by increasing or decreasing the phase locking between the spiking of the pacemaker neuron and the weak periodic signal. In particular, when the autaptic delay time is equal to the intrinsic period of oscillations an optimal phase locking takes place, resulting in a dominant time scale of the spiking activity. We also investigate the effects of the network structure and the coupling strength on the propagation of pacemaker activity. We find that there exist an optimal coupling strength and an optimal network structure that together warrant an optimal propagation of the localized rhythm.
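
    The electrical autapse described above amounts to a delayed self-feedback current I_aut(t) = κ[V(t − τ) − V(t)]. The sketch below attaches such a term to a plain leaky integrate-and-fire neuron rather than the stochastic Hodgkin-Huxley pacemaker used in the paper; all parameter values are illustrative.

```python
import numpy as np

def lif_with_autapse(T=500.0, dt=0.1, kappa=0.1, delay=20.0,
                     i_ext=1.2, tau_m=10.0, v_th=1.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron with an electrical autapse modeled as
    delayed self-feedback I_aut(t) = kappa * (V(t - delay) - V(t)).
    A simplified stand-in for the stochastic Hodgkin-Huxley pacemaker used
    in the paper; all parameter values are illustrative."""
    n = int(T / dt)
    d = int(delay / dt)
    v = np.zeros(n)
    spikes = []
    for t in range(1, n):
        v_delayed = v[t - d] if t >= d else v_reset
        i_aut = kappa * (v_delayed - v[t - 1])
        dv = (-v[t - 1] + i_ext + i_aut) / tau_m
        v[t] = v[t - 1] + dt * dv
        if v[t] >= v_th:
            spikes.append(t * dt)
            v[t] = v_reset
    return np.array(spikes)

# Compare spike counts without and with the autaptic self-feedback.
print(len(lif_with_autapse(kappa=0.0)), len(lif_with_autapse(kappa=0.3)))
```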

  4. Obtaining Arbitrary Prescribed Mean Field Dynamics for Recurrently Coupled Networks of Type-I Spiking Neurons with Analytically Determined Weights

    Directory of Open Access Journals (Sweden)

    Wilten eNicola

    2016-02-01

    Full Text Available A fundamental question in computational neuroscience is how to connect a network of spiking neurons to produce desired macroscopic or mean field dynamics. One possible approach is through the Neural Engineering Framework (NEF). The NEF approach requires quantities called decoders which are solved through an optimization problem requiring large matrix inversion. Here, we show how a decoder can be obtained analytically for type I and certain type II firing rates as a function of the heterogeneity of its associated neuron. These decoders generate approximants for functions that converge to the desired function in mean-squared error like 1/N, where N is the number of neurons in the network. We refer to these decoders as scale-invariant decoders due to their structure. These decoders generate weights for a network of neurons through the NEF formula for weights. These weights force the spiking network to have arbitrary and prescribed mean field dynamics. The weights generated with scale-invariant decoders all lie on low dimensional hypersurfaces asymptotically. We demonstrate the applicability of these scale-invariant decoders and weight surfaces by constructing networks of spiking theta neurons that replicate the dynamics of various well known dynamical systems such as the neural integrator, Van der Pol system and the Lorenz system. As these decoders are analytically determined and non-unique, the weights are also analytically determined and non-unique. We discuss the implications for measured weights of neuronal networks.

  5. Obtaining Arbitrary Prescribed Mean Field Dynamics for Recurrently Coupled Networks of Type-I Spiking Neurons with Analytically Determined Weights.

    Science.gov (United States)

    Nicola, Wilten; Tripp, Bryan; Scott, Matthew

    2016-01-01

    A fundamental question in computational neuroscience is how to connect a network of spiking neurons to produce desired macroscopic or mean field dynamics. One possible approach is through the Neural Engineering Framework (NEF). The NEF approach requires quantities called decoders which are solved through an optimization problem requiring large matrix inversion. Here, we show how a decoder can be obtained analytically for type I and certain type II firing rates as a function of the heterogeneity of its associated neuron. These decoders generate approximants for functions that converge to the desired function in mean-squared error like 1/N, where N is the number of neurons in the network. We refer to these decoders as scale-invariant decoders due to their structure. These decoders generate weights for a network of neurons through the NEF formula for weights. These weights force the spiking network to have arbitrary and prescribed mean field dynamics. The weights generated with scale-invariant decoders all lie on low dimensional hypersurfaces asymptotically. We demonstrate the applicability of these scale-invariant decoders and weight surfaces by constructing networks of spiking theta neurons that replicate the dynamics of various well known dynamical systems such as the neural integrator, Van der Pol system and the Lorenz system. As these decoders are analytically determined and non-unique, the weights are also analytically determined and non-unique. We discuss the implications for measured weights of neuronal networks.
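
    For context, the NEF optimization that the scale-invariant decoders are designed to avoid is a regularized least-squares fit of decoders to sampled tuning curves, followed by the outer-product weight formula. The sketch below shows that standard construction; the rectified-linear tuning curves and the regularization constant are illustrative assumptions, and this is not the paper's analytic decoder derivation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sample points of the represented variable and rectified-linear tuning
# curves for N neurons -- a stand-in for type-I firing-rate curves.
x = np.linspace(-1.0, 1.0, 200)
N = 50
gains = rng.uniform(0.5, 2.0, N)
biases = rng.uniform(-1.0, 1.0, N)
encoders = rng.choice([-1.0, 1.0], N)
A = np.maximum(gains[None, :] * (x[:, None] * encoders[None, :]) + biases[None, :], 0.0)

# Standard NEF decoders: regularized least squares (the large matrix
# inversion that the analytic, scale-invariant decoders avoid).
target = x**2                       # example function to decode
lam = 0.1 * A.max()                 # illustrative regularization constant
d = np.linalg.solve(A.T @ A + lam * np.eye(N), A.T @ target)

# NEF weight formula for a scalar representation: w_ij = e_i(post) * d_j(pre).
encoders_post = rng.choice([-1.0, 1.0], N)
W = np.outer(encoders_post, d)

print(np.max(np.abs(A @ d - target)))   # decoding error of the rate approximation
```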

  6. Application of random matrix theory to biological networks

    Energy Technology Data Exchange (ETDEWEB)

    Luo Feng [Department of Computer Science, Clemson University, 100 McAdams Hall, Clemson, SC 29634 (United States); Department of Pathology, U.T. Southwestern Medical Center, 5323 Harry Hines Blvd. Dallas, TX 75390-9072 (United States); Zhong Jianxin [Department of Physics, Xiangtan University, Hunan 411105 (China) and Oak Ridge National Laboratory, Oak Ridge, TN 37831 (United States)]. E-mail: zhongjn@ornl.gov; Yang Yunfeng [Oak Ridge National Laboratory, Oak Ridge, TN 37831 (United States); Scheuermann, Richard H. [Department of Pathology, U.T. Southwestern Medical Center, 5323 Harry Hines Blvd. Dallas, TX 75390-9072 (United States); Zhou Jizhong [Department of Botany and Microbiology, University of Oklahoma, Norman, OK 73019 (United States) and Oak Ridge National Laboratory, Oak Ridge, TN 37831 (United States)]. E-mail: zhouj@ornl.gov

    2006-09-25

    We show that spectral fluctuation of interaction matrices of a yeast protein-protein interaction network and a yeast metabolic network follows the description of the Gaussian orthogonal ensemble (GOE) of random matrix theory (RMT). Furthermore, we demonstrate that while the global biological networks evaluated belong to GOE, removal of interactions between constituents transitions the networks to systems of isolated modules described by the Poisson distribution. Our results indicate that although biological networks are very different from other complex systems at the molecular level, they display the same statistical properties at network scale. The transition point provides a new objective approach for the identification of functional modules.
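
    The GOE-versus-Poisson distinction is usually drawn from the nearest-neighbor spacing distribution of the unfolded eigenvalues: GOE spectra show level repulsion (few small spacings), while Poisson spectra do not. The sketch below uses a crude polynomial unfolding and a random symmetric matrix as a GOE reference; it is not the authors' exact unfolding procedure.

```python
import numpy as np

def nn_spacings(mat, poly_deg=9):
    """Nearest-neighbor spacing distribution of a symmetric matrix after a
    crude polynomial unfolding of the spectrum (a rough sketch of the RMT
    analysis).  Returns spacings normalized to mean ~1; GOE-like spectra
    follow the Wigner surmise, modular/uncoupled ones look Poissonian."""
    w = np.sort(np.linalg.eigvalsh(mat))
    cum = np.arange(1, len(w) + 1)
    fit = np.polynomial.Polynomial.fit(w, cum, poly_deg)  # smooth cumulative density
    s = np.diff(fit(w))
    return s / s.mean()

# GOE reference: dense random symmetric matrix.
rng = np.random.default_rng(0)
g = rng.normal(size=(400, 400))
goe = (g + g.T) / np.sqrt(2)
s = nn_spacings(goe)
print(s.mean(), (s < 0.25).mean())   # few small spacings (level repulsion),
                                     # versus ~22% expected for a Poisson spectrum
```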

  7. Synaptic Dynamics and Neuronal Network Connectivity are reflected in the Distribution of Times in Up states

    Directory of Open Access Journals (Sweden)

    Khanh eDao Duc

    2015-07-01

    Full Text Available The dynamics of neuronal networks coupled by synapses can sustain long periods of depolarization, lasting for hundreds of milliseconds, such as the Up states recorded during sleep or anesthesia. Yet the underlying mechanisms driving these periods remain unclear. We show here, within a mean-field model, that the residence times of the neuronal membrane potential in cortical Up states do not follow a Poissonian law but present several peaks. Furthermore, the present modeling approach allows extracting some information about the neuronal network connectivity from the time-distribution histogram. Based on a synaptic-depression model, we find that these peaks, which can be observed in histograms of patch-clamp recordings, are not artifacts of electrophysiological measurements, but rather are an inherent property of the network dynamics. Analysis of the equations reveals a stable focus located close to the unstable limit cycle, delimiting a region that defines the Up state. The model further shows that the peaks observed in the Up-state time distribution are due to winding around the focus before escaping from the basin of attraction. Finally, we use in vivo recordings of intracellular membrane potential and recover, from the peak distribution, some information about the network connectivity. We conclude that it is possible to recover the network connectivity from the distribution of times that the neuronal membrane voltage spends in Up states.

  8. The connection-set algebra--a novel formalism for the representation of connectivity structure in neuronal network models.

    Science.gov (United States)

    Djurfeldt, Mikael

    2012-07-01

    The connection-set algebra (CSA) is a novel and general formalism for the description of connectivity in neuronal network models, from small-scale to large-scale structure. The algebra provides operators to form more complex sets of connections from simpler ones and also provides parameterization of such sets. CSA is expressive enough to describe a wide range of connection patterns, including multiple types of random and/or geometrically dependent connectivity, and can serve as a concise notation for network structure in scientific writing. CSA implementations allow for scalable and efficient representation of connectivity in parallel neuronal network simulators and could even allow for avoiding explicit representation of connections in computer memory. The expressiveness of CSA makes prototyping of network structure easy. A C++ version of the algebra has been implemented and used in a large-scale neuronal network simulation (Djurfeldt et al., IBM J Res Dev 52(1/2):31-42, 2008b) and an implementation in Python has been publicly released.
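
    The compositional flavor of such a connection-set formalism can be conveyed with plain Python sets of (source, target) pairs combined by set operators. Note that this is only an illustration of the idea and deliberately does not use the API of the released CSA library; all names below are made up for the example.

```python
import random

def one_to_one(n):
    """Deterministic 'diagonal' connection set on n neurons."""
    return {(i, i) for i in range(n)}

def random_set(n_pre, n_post, p, seed=0):
    """Bernoulli random connection set with probability p per pair."""
    rng = random.Random(seed)
    return {(i, j) for i in range(n_pre) for j in range(n_post)
            if rng.random() < p}

# Compose simple connection sets into a more complex one with set algebra:
# random connectivity minus self-connections, plus a deterministic ring backbone.
n = 100
conns = (random_set(n, n, 0.1) - one_to_one(n)) | {(i, (i + 1) % n) for i in range(n)}
print(len(conns))
```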

  9. Probabilistic biological network alignment.

    Science.gov (United States)

    Todor, Andrei; Dobra, Alin; Kahveci, Tamer

    2013-01-01

    Interactions between molecules are probabilistic events. An interaction may or may not happen with some probability, depending on a variety of factors such as the size, abundance, or proximity of the interacting molecules. In this paper, we consider the problem of aligning two biological networks. Unlike existing methods, we allow one of the two networks to contain probabilistic interactions. Allowing interaction probabilities makes the alignment more biologically relevant at the expense of explosive growth in the number of alternative topologies that may arise from different subsets of interactions that take place. We develop a novel method that efficiently and precisely characterizes this massive search space. We represent the topological similarity between pairs of aligned molecules (i.e., proteins) with the help of random variables and compute their expected values. We validate our method by showing that, without sacrificing running-time performance, it can produce novel alignments. Our results also demonstrate that our method identifies biologically meaningful mappings under a comprehensive set of criteria used in the literature as well as the statistical coherence measure that we developed to analyze the statistical significance of the similarity of the functions of the aligned protein pairs.

  10. Role of Delays in Shaping Spatiotemporal Dynamics of Neuronal Activity in Large Networks

    International Nuclear Information System (INIS)

    Roxin, Alex; Brunel, Nicolas; Hansel, David

    2005-01-01

    We study the effect of delays on the dynamics of large networks of neurons. We show that delays give rise to a wealth of bifurcations and to a rich phase diagram, which includes oscillatory bumps, traveling waves, lurching waves, standing waves arising via a period-doubling bifurcation, aperiodic regimes, and regimes of multistability. We study the existence and the stability of the various dynamical patterns analytically and numerically in a simplified rate model as a function of the interaction parameters. The results derived in that framework allow us to understand the origin of the diversity of dynamical states observed in large networks of spiking neurons

  11. Alterations of cortical GABA neurons and network oscillations in schizophrenia.

    Science.gov (United States)

    Gonzalez-Burgos, Guillermo; Hashimoto, Takanori; Lewis, David A

    2010-08-01

    The hypothesis that alterations of cortical inhibitory gamma-aminobutyric acid (GABA) neurons are a central element in the pathology of schizophrenia has emerged from a series of postmortem studies. How such abnormalities may contribute to the clinical features of schizophrenia has been substantially informed by a convergence with basic neuroscience studies revealing complex details of GABA neuron function in the healthy brain. Importantly, activity of the parvalbumin-containing class of GABA neurons has been linked to the production of cortical network oscillations. Furthermore, growing knowledge supports the concept that gamma band oscillations (30-80 Hz) are an essential mechanism for cortical information transmission and processing. Herein we review recent studies further indicating that inhibition from parvalbumin-positive GABA neurons is necessary to produce gamma oscillations in cortical circuits; provide an update on postmortem studies documenting that deficits in the expression of glutamic acid decarboxylase67, which accounts for most GABA synthesis in the cortex, are widely observed in schizophrenia; and describe studies using novel, noninvasive approaches directly assessing potential relations between alterations in GABA, oscillations, and cognitive function in schizophrenia.

  12. Information in a Network of Neuronal Cells: Effect of Cell Density and Short-Term Depression

    KAUST Repository

    Onesto, Valentina; Cosentino, Carlo; Di Fabrizio, Enzo M.; Cesarelli, Mario; Amato, Francesco; Gentile, Francesco

    2016-01-01

    Neurons are specialized, electrically excitable cells which use electrical and chemical signals to transmit and elaborate information. Understanding how the cooperation of a great many neurons in a grid may modify and perhaps improve the information quality, in contrast to a few neurons in isolation, is critical for the rational design of cell-materials interfaces for applications in regenerative medicine, tissue engineering, and personalized lab-on-a-chip devices. In the present paper, we couple an integrate-and-fire model with information theory variables to analyse the extent of information in a network of nerve cells. We provide an estimate of the information in the network in bits as a function of cell density and short-term depression time. In the model, neurons are connected through a Delaunay triangulation of non-intersecting edges; in doing so, the number of connecting synapses per neuron is approximately constant to reproduce the early time of network development in planar neural cell cultures. In simulations where the number of nodes is varied, we observe an optimal value of cell density for which information in the grid is maximized. In simulations in which the posttransmission latency time is varied, we observe that information increases as the latency time decreases and, for specific configurations of the grid, it is largely enhanced in a resonance effect.

  13. Information in a Network of Neuronal Cells: Effect of Cell Density and Short-Term Depression

    KAUST Repository

    Onesto, Valentina

    2016-05-10

    Neurons are specialized, electrically excitable cells which use electrical and chemical signals to transmit and elaborate information. Understanding how the cooperation of a great many neurons in a grid may modify and perhaps improve the information quality, in contrast to a few neurons in isolation, is critical for the rational design of cell-materials interfaces for applications in regenerative medicine, tissue engineering, and personalized lab-on-a-chip devices. In the present paper, we couple an integrate-and-fire model with information theory variables to analyse the extent of information in a network of nerve cells. We provide an estimate of the information in the network in bits as a function of cell density and short-term depression time. In the model, neurons are connected through a Delaunay triangulation of non-intersecting edges; in doing so, the number of connecting synapses per neuron is approximately constant to reproduce the early time of network development in planar neural cell cultures. In simulations where the number of nodes is varied, we observe an optimal value of cell density for which information in the grid is maximized. In simulations in which the posttransmission latency time is varied, we observe that information increases as the latency time decreases and, for specific configurations of the grid, it is largely enhanced in a resonance effect.

  14. Information in a Network of Neuronal Cells: Effect of Cell Density and Short-Term Depression

    Directory of Open Access Journals (Sweden)

    Valentina Onesto

    2016-01-01

    Full Text Available Neurons are specialized, electrically excitable cells which use electrical and chemical signals to transmit and elaborate information. Understanding how the cooperation of a great many neurons in a grid may modify and perhaps improve the information quality, in contrast to a few neurons in isolation, is critical for the rational design of cell-materials interfaces for applications in regenerative medicine, tissue engineering, and personalized lab-on-a-chip devices. In the present paper, we couple an integrate-and-fire model with information theory variables to analyse the extent of information in a network of nerve cells. We provide an estimate of the information in the network in bits as a function of cell density and short-term depression time. In the model, neurons are connected through a Delaunay triangulation of non-intersecting edges; in doing so, the number of connecting synapses per neuron is approximately constant to reproduce the early time of network development in planar neural cell cultures. In simulations where the number of nodes is varied, we observe an optimal value of cell density for which information in the grid is maximized. In simulations in which the posttransmission latency time is varied, we observe that information increases as the latency time decreases and, for specific configurations of the grid, it is largely enhanced in a resonance effect.
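
    The network-construction step described in the three records above (synapses along a Delaunay triangulation of planar positions, which keeps the number of synapses per neuron roughly constant) can be sketched as follows, assuming NumPy and SciPy are available; the integrate-and-fire and information-theoretic parts are omitted.

      # Build a Delaunay-triangulated "culture": neurons at random planar positions,
      # synapses along triangulation edges (network-construction step only).
      import numpy as np
      from scipy.spatial import Delaunay

      rng = np.random.default_rng(0)
      n_neurons = 200
      positions = rng.uniform(0.0, 1.0, size=(n_neurons, 2))   # a unit square patch

      tri = Delaunay(positions)
      edges = set()
      for simplex in tri.simplices:            # each simplex is a triangle (i, j, k)
          for a in range(3):
              i, j = simplex[a], simplex[(a + 1) % 3]
              edges.add((min(i, j), max(i, j)))

      degree = np.zeros(n_neurons, dtype=int)
      for i, j in edges:
          degree[i] += 1
          degree[j] += 1

      # For planar Delaunay graphs the mean degree stays close to 6 regardless of density.
      print(f"{len(edges)} edges, mean synapses per neuron = {degree.mean():.2f}")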

  15. Self-organized criticality in developing neuronal networks.

    Directory of Open Access Journals (Sweden)

    Christian Tetzlaff

    Full Text Available Recently, evidence has accumulated that many neural networks exhibit self-organized criticality. In this state, activity is similar across temporal scales and this is beneficial with respect to information flow. If subcritical, activity can die out; if supercritical, epileptiform patterns may occur. Little is known about how developing networks will reach and stabilize criticality. Here we monitor the development between 13 and 95 days in vitro (DIV) of cortical cell cultures (n = 20) and find four different phases, related to their morphological maturation: An initial low-activity state (≈19 DIV) is followed by a supercritical (≈20 DIV) and then a subcritical one (≈36 DIV) until the network finally reaches stable criticality (≈58 DIV). Using network modeling and mathematical analysis we describe the dynamics of the emergent connectivity in such developing systems. Based on physiological observations, the synaptic development in the model is determined by the drive of the neurons to adjust their connectivity for reaching on average firing rate homeostasis. We predict a specific time course for the maturation of inhibition, with strong onset and delayed pruning, and that total synaptic connectivity should be strongly linked to the relative levels of excitation and inhibition. These results demonstrate that the interplay between activity and connectivity guides developing networks into criticality, suggesting that this may be a generic and stable state of many networks in vivo and in vitro.
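
    Criticality in such recordings is commonly assessed through neuronal avalanches and a branching ratio near one. The sketch below shows that standard style of analysis on binned spike counts; it is a generic illustration, not necessarily the exact pipeline used in the record, and the Poisson counts stand in for real data.

      # Crude avalanche analysis on binned network spike counts: avalanches are runs of
      # non-empty time bins; a branching ratio sigma near 1 signals criticality.
      # (Generic analysis sketch; Poisson counts stand in for recorded data.)
      import numpy as np

      rng = np.random.default_rng(1)
      counts = rng.poisson(0.9, size=10000)     # stand-in for binned population spike counts

      avalanche_sizes, branching = [], []
      size = 0
      for prev, cur in zip(counts[:-1], counts[1:]):
          if prev > 0:
              branching.append(cur / prev)      # descendants per ancestor, bin to bin
          if cur > 0:
              size += cur
          elif size > 0:                        # an empty bin ends the current avalanche
              avalanche_sizes.append(size)
              size = 0

      sigma = np.mean(branching)
      print(f"{len(avalanche_sizes)} avalanches, branching ratio sigma = {sigma:.2f}")
      # sigma < 1: subcritical (activity dies out); sigma > 1: supercritical;
      # sigma ~ 1 with power-law distributed sizes: self-organized criticality.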

  16. Effects of neuronal loss in the dynamic model of neural networks

    International Nuclear Information System (INIS)

    Yoon, B-G; Choi, J; Choi, M Y

    2008-01-01

    We study the phase transitions and dynamic behavior of the dynamic model of neural networks, with an emphasis on the effects of neuronal loss due to external stress. In the absence of loss, the overall results obtained numerically are found to agree excellently with the theoretical ones. When the external stress is turned on, some neurons may deteriorate and die; such loss of neurons, in general, weakens the memory in the system. As the loss increases beyond a critical value, the order parameter measuring the strength of memory decreases to zero either continuously or discontinuously, namely, the system loses its memory via a second- or a first-order transition, depending on the ratio of the refractory period to the duration of the action potential.
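
    A hedged stand-in for the memory-loss experiment: a generic Hopfield-style associative network (not the specific dynamic model of the record) in which a growing fraction of neurons is silenced and the retrieval overlap, playing the role of the memory order parameter, is measured.

      # Effect of neuronal loss on memory retrieval in a generic Hopfield-style network
      # (illustrative stand-in for the record's dynamic model).
      import numpy as np

      rng = np.random.default_rng(2)
      N, P = 400, 10
      patterns = rng.choice([-1, 1], size=(P, N))
      W = patterns.T @ patterns / N             # Hebbian weights
      np.fill_diagonal(W, 0.0)

      def retrieve(cue, dead, steps=20):
          s = cue.copy()
          s[dead] = 0                           # lost neurons are silenced
          for _ in range(steps):
              s = np.sign(W @ s)
              s[s == 0] = 1
              s[dead] = 0
          return s

      target = patterns[0]
      for loss in (0.0, 0.2, 0.4, 0.6, 0.8):
          dead = rng.random(N) < loss           # neurons destroyed by "stress"
          noisy_cue = target * np.where(rng.random(N) < 0.1, -1, 1)
          s = retrieve(noisy_cue, dead)
          alive = ~dead
          m = np.mean(target[alive] * s[alive]) # overlap = memory order parameter
          print(f"loss {loss:.1f}: overlap m = {m:.2f}")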

  17. Human Brain Networks: Spiking Neuron Models, Multistability, Synchronization, Thermodynamics, Maximum Entropy Production, and Anesthetic Cascade Mechanisms

    Directory of Open Access Journals (Sweden)

    Wassim M. Haddad

    2014-07-01

    Full Text Available Advances in neuroscience have been closely linked to mathematical modeling beginning with the integrate-and-fire model of Lapicque and proceeding through the modeling of the action potential by Hodgkin and Huxley to the current era. The fundamental building block of the central nervous system, the neuron, may be thought of as a dynamic element that is “excitable”, and can generate a pulse or spike whenever the electrochemical potential across the cell membrane of the neuron exceeds a threshold. A key application of nonlinear dynamical systems theory to the neurosciences is to study phenomena of the central nervous system that exhibit nearly discontinuous transitions between macroscopic states. A very challenging and clinically important problem exhibiting this phenomenon is the induction of general anesthesia. In any specific patient, the transition from consciousness to unconsciousness as the concentration of anesthetic drugs increases is very sharp, resembling a thermodynamic phase transition. This paper focuses on multistability theory for continuous and discontinuous dynamical systems having a set of multiple isolated equilibria and/or a continuum of equilibria. Multistability is the property whereby the solutions of a dynamical system can alternate between two or more mutually exclusive Lyapunov stable and convergent equilibrium states under asymptotically slowly changing inputs or system parameters. In this paper, we extend the theory of multistability to continuous, discontinuous, and stochastic nonlinear dynamical systems. In particular, Lyapunov-based tests for multistability and synchronization of dynamical systems with continuously differentiable and absolutely continuous flows are established. The results are then applied to excitatory and inhibitory biological neuronal networks to explain the underlying mechanism of action for anesthesia and consciousness from a multistable dynamical system perspective, thereby providing a

  18. A simplified protocol for differentiation of electrophysiologically mature neuronal networks from human induced pluripotent stem cells.

    Science.gov (United States)

    Gunhanlar, N; Shpak, G; van der Kroeg, M; Gouty-Colomer, L A; Munshi, S T; Lendemeijer, B; Ghazvini, M; Dupont, C; Hoogendijk, W J G; Gribnau, J; de Vrij, F M S; Kushner, S A

    2017-04-18

    Progress in elucidating the molecular and cellular pathophysiology of neuropsychiatric disorders has been hindered by the limited availability of living human brain tissue. The emergence of induced pluripotent stem cells (iPSCs) has offered a unique alternative strategy using patient-derived functional neuronal networks. However, methods for reliably generating iPSC-derived neurons with mature electrophysiological characteristics have been difficult to develop. Here, we report a simplified differentiation protocol that yields electrophysiologically mature iPSC-derived cortical lineage neuronal networks without the need for astrocyte co-culture or specialized media. This protocol generates a consistent 60:40 ratio of neurons and astrocytes that arise from a common forebrain neural progenitor. Whole-cell patch-clamp recordings of 114 neurons derived from three independent iPSC lines confirmed their electrophysiological maturity, including resting membrane potential (-58.2±1.0 mV), capacitance (49.1±2.9 pF), action potential (AP) threshold (-50.9±0.5 mV) and AP amplitude (66.5±1.3 mV). Nearly 100% of neurons were capable of firing APs, of which 79% had sustained trains of mature APs with minimal accommodation (peak AP frequency: 11.9±0.5 Hz) and 74% exhibited spontaneous synaptic activity (amplitude, 16.03±0.82 pA; frequency, 1.09±0.17 Hz). We expect this protocol to be of broad applicability for implementing iPSC-based neuronal network models of neuropsychiatric disorders. Molecular Psychiatry advance online publication, 18 April 2017; doi:10.1038/mp.2017.56.

  19. Characterizing the topology of probabilistic biological networks.

    Science.gov (United States)

    Todor, Andrei; Dobra, Alin; Kahveci, Tamer

    2013-01-01

    Biological interactions are often uncertain events that may or may not take place with some probability. This uncertainty leads to a massive number of alternative interaction topologies for each such network. The existing studies analyze the degree distribution of biological networks by assuming that all the given interactions take place under all circumstances. This strong and often incorrect assumption can lead to misleading results. In this paper, we address this problem and develop a sound mathematical basis to characterize networks in the presence of uncertain interactions. Using our mathematical representation, we develop a method that can accurately describe the degree distribution of such networks. We also take one more step and extend our method to accurately compute the joint-degree distributions of node pairs connected by edges. The number of possible network topologies grows exponentially with the number of uncertain interactions. However, the mathematical model we develop allows us to compute these degree distributions in polynomial time in the number of interactions. Our method works quickly even for entire protein-protein interaction (PPI) networks. It also helps us find an adequate mathematical model using MLE. We perform a comparative study of node-degree and joint-degree distributions in two types of biological networks: the classical deterministic networks and the more flexible probabilistic networks. Our results confirm that power-law and log-normal models best describe degree distributions for both probabilistic and deterministic networks. Moreover, the inverse correlation of degrees of neighboring nodes shows that, in probabilistic networks, nodes with a large number of interactions prefer to interact with those with a small number of interactions more frequently than expected. We also show that probabilistic networks are more robust for node-degree distribution computation than the deterministic ones. all the data sets used, the software

  20. Characterizing Topology of Probabilistic Biological Networks.

    Science.gov (United States)

    Todor, Andrei; Dobra, Alin; Kahveci, Tamer

    2013-09-06

    Biological interactions are often uncertain events that may or may not take place with some probability. Existing studies analyze the degree distribution of biological networks by assuming that all the given interactions take place under all circumstances. This strong and often incorrect assumption can lead to misleading results. Here, we address this problem and develop a sound mathematical basis to characterize networks in the presence of uncertain interactions. We develop a method that accurately describes the degree distribution of such networks. We also extend our method to accurately compute the joint degree distributions of node pairs connected by edges. The number of possible network topologies grows exponentially with the number of uncertain interactions. However, the mathematical model we develop allows us to compute these degree distributions in polynomial time in the number of interactions. It also helps us find an adequate mathematical model using maximum likelihood estimation. Our results demonstrate that power law and log-normal models best describe degree distributions for probabilistic networks. The inverse correlation of degrees of neighboring nodes shows that, in probabilistic networks, nodes with a large number of interactions prefer to interact with those with a small number of interactions more frequently than expected.
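
    The polynomial-time idea in the two records above can be illustrated for a single node: with independent incident-interaction probabilities, its degree follows a Poisson-binomial distribution, computable exactly by dynamic programming even though the number of possible topologies is exponential. This is an illustration of the idea, not the authors' exact method.

      # Exact degree distribution of one node in a probabilistic network: with
      # independent incident-edge probabilities p_1..p_n, the degree is
      # Poisson-binomial and can be computed by DP in O(n^2) -- polynomial, even
      # though there are 2^n possible topologies.
      def degree_distribution(edge_probs):
          dist = [1.0]                          # P(degree = 0) before any edge is considered
          for p in edge_probs:
              new = [0.0] * (len(dist) + 1)
              for k, q in enumerate(dist):
                  new[k]     += q * (1.0 - p)   # this interaction absent
                  new[k + 1] += q * p           # this interaction present
              dist = new
          return dist

      probs = [0.9, 0.7, 0.4, 0.2, 0.05]
      dist = degree_distribution(probs)
      for k, q in enumerate(dist):
          print(f"P(deg = {k}) = {q:.4f}")
      print("expected degree =", sum(k * q for k, q in enumerate(dist)))  # equals sum(probs)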

  1. A novel recurrent neural network with one neuron and finite-time convergence for k-winners-take-all operation.

    Science.gov (United States)

    Liu, Qingshan; Dang, Chuangyin; Cao, Jinde

    2010-07-01

    In this paper, based on a one-neuron recurrent neural network, a novel k-winners-take-all (k-WTA) network is proposed. Finite time convergence of the proposed neural network is proved using the Lyapunov method. The k-WTA operation is first converted equivalently into a linear programming problem. Then, a one-neuron recurrent neural network is proposed to get the kth or (k+1)th largest inputs of the k-WTA problem. Furthermore, a k-WTA network is designed based on the proposed neural network to perform the k-WTA operation. Compared with the existing k-WTA networks, the proposed network has simple structure and finite time convergence. In addition, simulation results on numerical examples show the effectiveness and performance of the proposed k-WTA network.
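
    The linear-programming formulation mentioned above can be written down directly: maximize v·x subject to sum(x) = k and 0 <= x_i <= 1, whose optimum selects the k largest inputs. The sketch below solves it with scipy.optimize.linprog as a stand-in solver, not with the proposed one-neuron recurrent network.

      # k-winners-take-all cast as a linear program (the reformulation mentioned in the
      # record); solved here with scipy's linprog as a stand-in for the proposed network.
      import numpy as np
      from scipy.optimize import linprog

      v = np.array([0.3, 1.7, -0.2, 0.9, 1.1, 0.4])   # inputs
      k = 3

      res = linprog(
          c=-v,                        # maximize v.x  <=>  minimize -v.x
          A_eq=np.ones((1, v.size)),   # sum_i x_i = k
          b_eq=[k],
          bounds=[(0, 1)] * v.size,    # 0 <= x_i <= 1
          method="highs",
      )
      winners = np.flatnonzero(res.x > 0.5)
      print("winner indices:", winners)        # indices of the k largest inputs
      print("x =", np.round(res.x, 2))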

  2. Multilayer network modeling of integrated biological systems. Comment on "Network science of biological systems at different scales: A review" by Gosak et al.

    Science.gov (United States)

    De Domenico, Manlio

    2018-03-01

    Biological systems, from a cell to the human brain, are inherently complex. A powerful representation of such systems, described by an intricate web of relationships across multiple scales, is provided by complex networks. Recently, several studies have highlighted how simple networks - obtained by aggregating or neglecting the temporal or categorical description of biological data - are not able to account for the richness of information characterizing biological systems. More complex models, namely multilayer networks, are needed to account for interdependencies, often varying across time, of biological interacting units within a cell, a tissue or parts of an organism.

  3. Dopamine Attenuates Ketamine-Induced Neuronal Apoptosis in the Developing Rat Retina Independent of Early Synchronized Spontaneous Network Activity.

    Science.gov (United States)

    Dong, Jing; Gao, Lingqi; Han, Junde; Zhang, Junjie; Zheng, Jijian

    2017-07-01

    Deprivation of spontaneous rhythmic electrical activity in early development by anesthesia administration, among other interventions, induces neuronal apoptosis. However, it is unclear whether enhancement of neuronal electrical activity attenuates neuronal apoptosis in either normal development or after anesthesia exposure. The present study investigated the effects of dopamine, an enhancer of spontaneous rhythmic electrical activity, on ketamine-induced neuronal apoptosis in the developing rat retina. TUNEL and immunohistochemical assays indicated that ketamine time- and dose-dependently aggravated physiological and ketamine-induced apoptosis and inhibited early-synchronized spontaneous network activity. Dopamine administration reversed ketamine-induced neuronal apoptosis, but did not reverse the inhibitory effects of ketamine on early synchronized spontaneous network activity despite enhancing it in controls. Blockade of D1, D2, and A2A receptors and inhibition of cAMP/PKA signaling partially antagonized the protective effect of dopamine against ketamine-induced apoptosis. Together, these data indicate that dopamine attenuates ketamine-induced neuronal apoptosis in the developing rat retina by activating the D1, D2, and A2A receptors, and upregulating cAMP/PKA signaling, rather than through modulation of early synchronized spontaneous network activity.

  4. Optimal autaptic and synaptic delays enhanced synchronization transitions induced by each other in Newman–Watts neuronal networks

    International Nuclear Information System (INIS)

    Wang, Baoying; Gong, Yubing; Xie, Huijuan; Wang, Qi

    2016-01-01

    Highlights: • Optimal autaptic delay enhanced synchronization transitions induced by synaptic delay in neuronal networks. • Optimal synaptic delay enhanced synchronization transitions induced by autaptic delay. • Optimal coupling strength enhanced synchronization transitions induced by autaptic or synaptic delay. - Abstract: In this paper, we numerically study the effect of electrical autaptic and synaptic delays on synchronization transitions induced by each other in Newman–Watts Hodgkin–Huxley neuronal networks. It is found that the synchronization transitions induced by synaptic delay vary with varying autaptic delay and become strongest when autaptic delay is optimal. Similarly, the synchronization transitions induced by autaptic delay vary with varying synaptic delay and become strongest at optimal synaptic delay. Also, there is optimal coupling strength by which the synchronization transitions induced by either synaptic or autaptic delay become strongest. These results show that electrical autaptic and synaptic delays can enhance synchronization transitions induced by each other in the neuronal networks. This implies that electrical autaptic and synaptic delays can cooperate with each other and more efficiently regulate the synchrony state of the neuronal networks. These findings could find potential implications for the information transmission in neural systems.

  5. Synaptic and intrinsic activation of GABAergic neurons in the cardiorespiratory brainstem network.

    Science.gov (United States)

    Frank, Julie G; Mendelowitz, David

    2012-01-01

    GABAergic pathways in the brainstem play an essential role in respiratory rhythmogenesis and interactions between the respiratory and cardiovascular neuronal control networks. However, little is known about the identity and function of these GABAergic inhibitory neurons and what determines their activity. In this study we have identified a population of GABAergic neurons in the ventrolateral medulla that receive increased excitatory post-synaptic potentials during inspiration, but also have spontaneous firing in the absence of synaptic input. Using transgenic mice that express GFP under the control of the Gad1 (GAD67) gene promoter, we determined that this population of GABAergic neurons is in close apposition to cardioinhibitory parasympathetic cardiac neurons in the nucleus ambiguus (NA). These neurons fire in synchronization with inspiratory activity. Although they receive excitatory glutamatergic synaptic inputs during inspiration, this excitatory neurotransmission was not altered by blocking nicotinic receptors, and many of these GABAergic neurons continue to fire after synaptic blockade. The spontaneous firing in these GABAergic neurons was not altered by the voltage-gated calcium channel blocker cadmium chloride that blocks both neurotransmission to these neurons and voltage-gated Ca(2+) currents, but spontaneous firing was diminished by riluzole, demonstrating a role of persistent sodium channels in the spontaneous firing in these cardiorespiratory GABAergic neurons that possess a pacemaker phenotype. The spontaneously firing GABAergic neurons identified in this study that increase their activity during inspiration would support respiratory rhythm generation if they acted primarily to inhibit post-inspiratory neurons and thereby release inspiration neurons to increase their activity. This population of inspiratory-modulated GABAergic neurons could also play a role in inhibiting neurons that are most active during expiration and provide a framework for

  6. Synaptic and intrinsic activation of GABAergic neurons in the cardiorespiratory brainstem network.

    Directory of Open Access Journals (Sweden)

    Julie G Frank

    Full Text Available GABAergic pathways in the brainstem play an essential role in respiratory rhythmogenesis and interactions between the respiratory and cardiovascular neuronal control networks. However, little is known about the identity and function of these GABAergic inhibitory neurons and what determines their activity. In this study we have identified a population of GABAergic neurons in the ventrolateral medulla that receive increased excitatory post-synaptic potentials during inspiration, but also have spontaneous firing in the absence of synaptic input. Using transgenic mice that express GFP under the control of the Gad1 (GAD67) gene promoter, we determined that this population of GABAergic neurons is in close apposition to cardioinhibitory parasympathetic cardiac neurons in the nucleus ambiguus (NA). These neurons fire in synchronization with inspiratory activity. Although they receive excitatory glutamatergic synaptic inputs during inspiration, this excitatory neurotransmission was not altered by blocking nicotinic receptors, and many of these GABAergic neurons continue to fire after synaptic blockade. The spontaneous firing in these GABAergic neurons was not altered by the voltage-gated calcium channel blocker cadmium chloride that blocks both neurotransmission to these neurons and voltage-gated Ca(2+) currents, but spontaneous firing was diminished by riluzole, demonstrating a role of persistent sodium channels in the spontaneous firing in these cardiorespiratory GABAergic neurons that possess a pacemaker phenotype. The spontaneously firing GABAergic neurons identified in this study that increase their activity during inspiration would support respiratory rhythm generation if they acted primarily to inhibit post-inspiratory neurons and thereby release inspiration neurons to increase their activity. This population of inspiratory-modulated GABAergic neurons could also play a role in inhibiting neurons that are most active during expiration and provide a

  7. Electrical responses and spontaneous activity of human iPS-derived neuronal networks characterized for three-month culture with 4096-electrode arrays

    Directory of Open Access Journals (Sweden)

    Hayder eAmin

    2016-03-01

    Full Text Available The recent availability of human induced pluripotent stem cells (hiPSCs) holds great promise as a novel source of human-derived neurons for cell and tissue therapies as well as for in vitro drug screenings that might replace the use of animal models. However, there is still a considerable lack of knowledge on the functional properties of hiPSC-derived neuronal networks, thus limiting their application. Here, upon optimization of cell culture protocols, we demonstrate that both spontaneous and evoked electrical spiking activities of these networks can be characterized on-chip by taking advantage of the resolution provided by CMOS multielectrode arrays (CMOS-MEAs). These devices feature a large and closely-spaced array of 4096 simultaneously recording electrodes and multi-site on-chip electrical stimulation. Our results show that networks of human-derived neurons can respond to electrical stimulation with a physiological repertoire of spike waveforms after three months of cell culture, a period of time during which the network undergoes the expression of developing patterns of spontaneous spiking activity. To achieve this, we have investigated the impact on the network formation and on the emerging network-wide functional properties induced by different biochemical substrates, i.e. poly-dl-ornithine (PDLO), poly-l-ornithine (PLO), and polyethylenimine (PEI), that were used as adhesion promoters for the cell culture. Interestingly, we found that neuronal networks grown on PDLO coated substrates show significantly higher spontaneous firing activity, reliable responses to low-frequency electrical stimuli, and an appropriate level of PSD-95 that may denote a physiological neuronal maturation profile and synapse stabilization. However, our results also suggest that even three-month culture might not be sufficient for human-derived neuronal network maturation. Taken together, our results highlight the tight relationship existing between substrate coatings

  8. Network analysis reveals stage-specific changes in zebrafish embryo development using time course whole transcriptome profiling and prior biological knowledge.

    Science.gov (United States)

    Zhang, Yuji

    2015-01-01

    Molecular networks act as the backbone of molecular activities within cells, offering a unique opportunity to better understand the mechanism of diseases. While network data usually constitute only static network maps, integrating them with time course gene expression information can provide clues to the dynamic features of these networks and unravel the mechanistic driver genes characterizing cellular responses. Time course gene expression data allow us to broadly "watch" the dynamics of the system. However, one challenge in the analysis of such data is to establish and characterize the interplay among genes that are altered at different time points in the context of a biological process or functional category. Integrative analysis of these data sources will lead us to a more complete understanding of how biological entities (e.g., genes and proteins) coordinately perform their biological functions in biological systems. In this paper, we introduced a novel network-based approach to extract functional knowledge from time-dependent biological processes at a system level using time course mRNA sequencing data in zebrafish embryo development. The proposed method was applied to investigate 1α, 25(OH)2D3-altered mechanisms in zebrafish embryo development. We applied the proposed method to a public zebrafish time course mRNA-Seq dataset, containing two different treatments along four time points. We constructed networks between gene ontology biological process categories, which were enriched in differentially expressed genes between consecutive time points and different conditions. The temporal propagation of 1α, 25-Dihydroxyvitamin D3-altered transcriptional changes started from a few genes that were altered initially at an earlier stage, to large groups of biologically coherent genes at later stages. The most notable biological processes included neuronal and retinal development and generalized stress response. In addition, we also investigated the relationship among

  9. Dynamic Control of Synchronous Activity in Networks of Spiking Neurons.

    Directory of Open Access Journals (Sweden)

    Axel Hutt

    Full Text Available Oscillatory brain activity is believed to play a central role in neural coding. Accumulating evidence shows that features of these oscillations are highly dynamic: power, frequency and phase fluctuate alongside changes in behavior and task demands. The role and mechanism supporting this variability is however poorly understood. We here analyze a network of recurrently connected spiking neurons with time delay displaying stable synchronous dynamics. Using mean-field and stability analyses, we investigate the influence of dynamic inputs on the frequency of firing rate oscillations. We show that afferent noise, mimicking inputs to the neurons, causes smoothing of the system's response function, displacing equilibria and altering the stability of oscillatory states. Our analysis further shows that these noise-induced changes cause a shift of the peak frequency of synchronous oscillations that scales with input intensity, leading the network towards critical states. We lastly discuss the extension of these principles to periodic stimulation, in which externally applied driving signals can trigger analogous phenomena. Our results reveal one possible mechanism involved in shaping oscillatory activity in the brain and associated control principles.

  10. Dynamic Control of Synchronous Activity in Networks of Spiking Neurons.

    Science.gov (United States)

    Hutt, Axel; Mierau, Andreas; Lefebvre, Jérémie

    Oscillatory brain activity is believed to play a central role in neural coding. Accumulating evidence shows that features of these oscillations are highly dynamic: power, frequency and phase fluctuate alongside changes in behavior and task demands. The role and mechanism supporting this variability is however poorly understood. We here analyze a network of recurrently connected spiking neurons with time delay displaying stable synchronous dynamics. Using mean-field and stability analyses, we investigate the influence of dynamic inputs on the frequency of firing rate oscillations. We show that afferent noise, mimicking inputs to the neurons, causes smoothing of the system's response function, displacing equilibria and altering the stability of oscillatory states. Our analysis further shows that these noise-induced changes cause a shift of the peak frequency of synchronous oscillations that scales with input intensity, leading the network towards critical states. We lastly discuss the extension of these principles to periodic stimulation, in which externally applied driving signals can trigger analogous phenomena. Our results reveal one possible mechanism involved in shaping oscillatory activity in the brain and associated control principles.
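
    The "smoothing of the response function" by afferent noise described in the two records above can be seen in a one-line calculation: a hard firing threshold averaged over Gaussian noise becomes an erf-shaped sigmoid whose gain falls as the noise grows. The sketch illustrates only this point, not the delayed-network analysis of the records.

      # Gaussian afferent noise turns a hard firing threshold into a graded response:
      # averaging a Heaviside nonlinearity over noise of std sigma gives an erf sigmoid
      # whose gain drops as sigma grows.
      import math

      def effective_response(x, theta=1.0, sigma=0.5):
          """P(x + noise > theta) for Gaussian noise ~ N(0, sigma^2)."""
          return 0.5 * (1.0 + math.erf((x - theta) / (math.sqrt(2.0) * sigma)))

      for sigma in (0.1, 0.5, 1.0):
          samples = [round(effective_response(x, sigma=sigma), 3)
                     for x in (0.0, 0.5, 1.0, 1.5, 2.0)]
          print(f"sigma = {sigma}: {samples}")   # larger sigma -> shallower transition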

  11. Ordering chaos and synchronization transitions by chemical delay and coupling on scale-free neuronal networks

    International Nuclear Information System (INIS)

    Gong Yubing; Xie Yanhang; Lin Xiu; Hao Yinghang; Ma Xiaoguang

    2010-01-01

    Research highlights: → Chemical delay and chemical coupling can tame chaotic bursting. → Chemical delay-induced transitions from bursting synchronization to intermittent multiple spiking synchronizations. → Chemical coupling-induced different types of delay-dependent firing transitions. - Abstract: Chemical synaptic connections are more common than electric ones in neurons, and information transmission delay is especially significant for the synapses of chemical type. In this paper, we report a phenomenon of ordering spatiotemporal chaos and synchronization transitions by the delays and coupling through chemical synapses of modified Hodgkin-Huxley (MHH) neurons on scale-free networks. As the delay τ is increased, the neurons exhibit transitions from bursting synchronization (BS) to intermittent multiple spiking synchronizations (SS). As the coupling g syn is increased, the neurons exhibit different types of firing transitions, depending on the values of τ. For a smaller τ, there are transitions from spatiotemporal chaotic bursting (SCB) to BS or SS; while for a larger τ, there are transitions from SCB to intermittent multiple SS. These findings show that the delays and coupling through chemical synapses can tame the chaotic firings and repeatedly enhance the firing synchronization of neurons, and hence could play important roles in the firing activity of the neurons on scale-free networks.

  12. Networks in biological systems: An investigation of the Gene Ontology as an evolving network

    International Nuclear Information System (INIS)

    Coronnello, C; Tumminello, M; Micciche, S; Mantegna, R.N.

    2009-01-01

    Many biological systems can be described as networks where different elements interact in order to perform biological processes. We introduce a network associated with the Gene Ontology. Specifically, we construct a correlation-based network where the vertices are the terms of the Gene Ontology and the link between any two terms is weighted on the basis of the number of genes that they have in common. We analyze a filtered network obtained from the correlation-based network and we characterize its evolution over different releases of the Gene Ontology.
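
    The construction can be sketched in a few lines: vertices are GO terms and the weight of a link is the number of annotated genes two terms share (a Jaccard index would be a natural normalization). The annotation table below is invented for illustration; the record additionally filters the network and follows it across GO releases.

      # Correlation-style network over Gene Ontology terms: link weight = number of
      # genes two terms share (toy annotation table, chosen only for illustration).
      from itertools import combinations

      annotations = {                      # GO term -> set of annotated genes (made up)
          "GO:0006915": {"BAX", "TP53", "CASP3", "BCL2"},
          "GO:0008219": {"TP53", "CASP3", "FAS"},
          "GO:0007049": {"TP53", "CDK1", "CCNB1"},
      }

      edges = {}
      for t1, t2 in combinations(annotations, 2):
          shared = annotations[t1] & annotations[t2]
          if shared:
              edges[(t1, t2)] = len(shared)      # raw weight; could be Jaccard instead

      for (t1, t2), w in sorted(edges.items(), key=lambda kv: -kv[1]):
          print(f"{t1} -- {t2}  weight = {w}")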

  13. Evolvable Neuronal Paths: A Novel Basis for Information and Search in the Brain

    Science.gov (United States)

    Fernando, Chrisantha; Vasas, Vera; Szathmáry, Eörs; Husbands, Phil

    2011-01-01

    We propose a previously unrecognized kind of informational entity in the brain that is capable of acting as the basis for unlimited hereditary variation in neuronal networks. This unit is a path of activity through a network of neurons, analogous to a path taken through a hidden Markov model. To prove in principle the capabilities of this new kind of informational substrate, we show how a population of paths can be used as the hereditary material for a neuronally implemented genetic algorithm (the Swiss-army knife of black-box optimization techniques), which we have proposed elsewhere could operate at somatic timescales in the brain. We compare this to the same genetic algorithm that uses a standard ‘genetic’ informational substrate, i.e. non-overlapping discrete genotypes, on a range of optimization problems. A path evolution algorithm (PEA) is defined as any algorithm that implements natural selection of paths in a network substrate. A PEA is a previously unrecognized type of natural selection that is well suited for implementation by biological neuronal networks with structural plasticity. The important similarities and differences between a standard genetic algorithm and a PEA are considered. Whilst most experiments are conducted on an abstract network model, at the conclusion of the paper a slightly more realistic neuronal implementation of a PEA is outlined based on Izhikevich spiking neurons. Finally, experimental predictions are made for the identification of such informational paths in the brain. PMID:21887266

  14. Exercise-induced neuronal plasticity in central autonomic networks: role in cardiovascular control.

    Science.gov (United States)

    Michelini, Lisete C; Stern, Javier E

    2009-09-01

    It is now well established that brain plasticity is an inherent property not only of the developing but also of the adult brain. Numerous beneficial effects of exercise, including improved memory, cognitive function and neuroprotection, have been shown to involve an important neuroplastic component. However, whether major adaptive cardiovascular adjustments during exercise, needed to ensure proper blood perfusion of peripheral tissues, also require brain neuroplasticity, is presently unknown. This review will critically evaluate current knowledge on proposed mechanisms that are likely to underlie the continuous resetting of baroreflex control of heart rate during/after exercise and following exercise training. Accumulating evidence indicates that not only somatosensory afferents (conveyed by skeletal muscle receptors, baroreceptors and/or cardiopulmonary receptors) but also projections arising from central command neurons (in particular, peptidergic hypothalamic pre-autonomic neurons) converge into the nucleus tractus solitarii (NTS) in the dorsal brainstem, to co-ordinate complex cardiovascular adaptations during dynamic exercise. This review focuses in particular on a reciprocally interconnected network between the NTS and the hypothalamic paraventricular nucleus (PVN), which is proposed to act as a pivotal anatomical and functional substrate underlying integrative feedforward and feedback cardiovascular adjustments during exercise. Recent findings supporting neuroplastic adaptive changes within the NTS-PVN reciprocal network (e.g. remodelling of afferent inputs, structural and functional neuronal plasticity and changes in neurotransmitter content) will be discussed within the context of their role as important underlying cellular mechanisms supporting the tonic activation and improved efficacy of these central pathways in response to circulatory demand at rest and during exercise, both in sedentary and in trained individuals. We hope this review will stimulate

  15. Multi-channels coupling-induced pattern transition in a tri-layer neuronal network

    Science.gov (United States)

    Wu, Fuqiang; Wang, Ya; Ma, Jun; Jin, Wuyin; Hobiny, Aatef

    2018-03-01

    Neurons in the nervous system show complex electrical behaviors due to complex connection types and diversity in excitability. A tri-layer network is constructed to investigate the signal propagation and pattern formation by selecting different coupling channels between layers. Each layer is set to a different state, and the local kinetics is described by the Hindmarsh-Rose neuron model. By changing the number of coupling channels between layers and the state of the first layer, the collective behaviors of each layer and the synchronization pattern of the network are investigated. A statistical factor of synchronization on each layer is calculated. It is found that the quiescent state in the second layer can be excited and the disordered state in the third layer is suppressed when the first layer is controlled by a pacemaker, and the developed state is dependent on the number of coupling channels. Furthermore, collapse in the first layer can cause breakdown of the other layers in the network, and the mechanism is that the disordered state in the third layer is enhanced when sampled signals from the collapsed layer impose continuous disturbance on the next layer.
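
    The per-layer statistical factor of synchronization is commonly defined as the variance of the population-averaged signal divided by the mean single-neuron variance, approaching 1 for perfect synchrony and 0 for asynchronous activity. The sketch below assumes that definition (the record may use a variant) and applies it to synthetic traces.

      # Statistical factor of synchronization R for a population of membrane-potential
      # traces: variance of the population mean over time, divided by the mean
      # single-neuron variance. (Assumed, commonly used definition.)
      import numpy as np

      def sync_factor(V):                       # V has shape (n_neurons, n_timesteps)
          mean_field = V.mean(axis=0)
          return mean_field.var() / V.var(axis=1).mean()

      rng = np.random.default_rng(3)
      t = np.linspace(0, 1, 1000)
      common = np.sin(2 * np.pi * 5 * t)        # shared oscillation

      V_sync  = common + 0.1 * rng.standard_normal((50, t.size))
      V_async = rng.standard_normal((50, t.size))
      print("synchronized layer: R =", round(sync_factor(V_sync), 3))
      print("asynchronous layer: R =", round(sync_factor(V_async), 3))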

  16. A balanced memory network.

    Directory of Open Access Journals (Sweden)

    Yasser Roudi

    2007-09-01

    Full Text Available A fundamental problem in neuroscience is understanding how working memory--the ability to store information at intermediate timescales, like tens of seconds--is implemented in realistic neuronal networks. The most likely candidate mechanism is the attractor network, and a great deal of effort has gone toward investigating it theoretically. Yet, despite almost a quarter century of intense work, attractor networks are not fully understood. In particular, there are still two unanswered questions. First, how is it that attractor networks exhibit irregular firing, as is observed experimentally during working memory tasks? And second, how many memories can be stored under biologically realistic conditions? Here we answer both questions by studying an attractor neural network in which inhibition and excitation balance each other. Using mean-field analysis, we derive a three-variable description of attractor networks. From this description it follows that irregular firing can exist only if the number of neurons involved in a memory is large. The same mean-field analysis also shows that the number of memories that can be stored in a network scales with the number of excitatory connections, a result that has been suggested for simple models but never shown for realistic ones. Both of these predictions are verified using simulations with large networks of spiking neurons.

  17. Wireless Sensor Network Congestion Control Based on Standard Particle Swarm Optimization and Single Neuron PID.

    Science.gov (United States)

    Yang, Xiaoping; Chen, Xueying; Xia, Riting; Qian, Zhihong

    2018-04-19

    Aiming at the problem of network congestion caused by the large number of data transmissions in wireless routing nodes of a wireless sensor network (WSN), this paper puts forward an algorithm based on standard particle swarm–neural PID congestion control (PNPID). Firstly, PID control theory was applied to the queue management of wireless sensor nodes. Then, the self-learning and self-organizing ability of neurons was used to achieve online adjustment of the weights that tune the proportional, integral and differential parameters of the PID controller. Finally, standard particle swarm optimization was used for online optimization of the initial values of the proportional, integral and differential parameters and of the neuron learning rates of the neural PID (NPID) algorithm. This paper describes experiments and simulations which show that the PNPID algorithm effectively stabilized queue length near the expected value. At the same time, network performance, such as throughput and packet loss rate, was greatly improved, which alleviated network congestion and improved network QoS.

  18. Wireless Sensor Network Congestion Control Based on Standard Particle Swarm Optimization and Single Neuron PID

    Science.gov (United States)

    Yang, Xiaoping; Chen, Xueying; Xia, Riting; Qian, Zhihong

    2018-01-01

    Aiming at the problem of network congestion caused by the large number of data transmissions in wireless routing nodes of a wireless sensor network (WSN), this paper puts forward an algorithm based on standard particle swarm–neural PID congestion control (PNPID). Firstly, PID control theory was applied to the queue management of wireless sensor nodes. Then, the self-learning and self-organizing ability of neurons was used to achieve online adjustment of the weights that tune the proportional, integral and differential parameters of the PID controller. Finally, standard particle swarm optimization was used for online optimization of the initial values of the proportional, integral and differential parameters and of the neuron learning rates of the neural PID (NPID) algorithm. This paper describes experiments and simulations which show that the PNPID algorithm effectively stabilized queue length near the expected value. At the same time, network performance, such as throughput and packet loss rate, was greatly improved, which alleviated network congestion and improved network QoS. PMID:29671822
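
    Only the single-neuron adaptive PID component of the two records above is sketched below, on an invented first-order stand-in for a node's queue dynamics; the particle-swarm initialization of the gains and learning rates, and the WSN model itself, are omitted and all constants are made up.

      # Single-neuron adaptive PID on a toy first-order plant (a stand-in for a sensor
      # node's queue dynamics; occupancy normalized to [0, 1]). All constants are invented.
      K = 0.4                                  # overall neuron gain
      eta = [0.05, 0.04, 0.03]                 # learning rates for w_P, w_I, w_D
      w = [0.3, 0.4, 0.3]                      # weights (the quantities PSO would initialize)
      setpoint, y, u = 0.6, 0.0, 0.0           # target occupancy, plant output, control signal
      e_prev, e_prev2 = 0.0, 0.0

      for step in range(40):
          e = setpoint - y
          # inputs of the single neuron = incremental-PID terms
          x = [e - e_prev, e, e - 2.0 * e_prev + e_prev2]
          wsum = sum(abs(wi) for wi in w) or 1.0
          du = K * sum((wi / wsum) * xi for wi, xi in zip(w, x))
          u += du
          # supervised Hebbian rule: online self-tuning of the PID weights
          w = [wi + ei * e * u * xi for wi, ei, xi in zip(w, eta, x)]
          e_prev2, e_prev = e_prev, e
          y = 0.8 * y + 0.2 * u                # toy first-order queue model
          if step % 8 == 0:
              print(f"step {step:2d}: occupancy = {y:.3f}  control = {u:.3f}")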

  19. Biological and Environmental Research Network Requirements

    Energy Technology Data Exchange (ETDEWEB)

    Balaji, V. [Princeton Univ., NJ (United States). Earth Science Grid Federation (ESGF); Boden, Tom [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Cowley, Dave [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Dart, Eli [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). ESNet; Dattoria, Vince [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). ESNet; Desai, Narayan [Argonne National Lab. (ANL), Argonne, IL (United States); Egan, Rob [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Foster, Ian [Argonne National Lab. (ANL), Argonne, IL (United States); Goldstone, Robin [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Gregurick, Susan [U.S. Dept. of Energy, Washington, DC (United States). Biological Systems Science Division; Houghton, John [U.S. Dept. of Energy, Washington, DC (United States). Biological and Environmental Research (BER) Program; Izaurralde, Cesar [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Johnston, Bill [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). ESNet; Joseph, Renu [U.S. Dept. of Energy, Washington, DC (United States). Climate and Environmental Sciences Division; Kleese-van Dam, Kerstin [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Lipton, Mary [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Monga, Inder [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). ESNet; Pritchard, Matt [British Atmospheric Data Centre (BADC), Oxon (United Kingdom); Rotman, Lauren [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). ESNet; Strand, Gary [National Center for Atmospheric Research (NCAR), Boulder, CO (United States); Stuart, Cory [Argonne National Lab. (ANL), Argonne, IL (United States); Tatusova, Tatiana [National Inst. of Health (NIH), Bethesda, MD (United States); Tierney, Brian [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). ESNet; Thomas, Brian [Univ. of California, Berkeley, CA (United States); Williams, Dean N. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Zurawski, Jason [Internet2, Washington, DC (United States)

    2013-09-01

    The Energy Sciences Network (ESnet) is the primary provider of network connectivity for the U.S. Department of Energy (DOE) Office of Science (SC), the single largest supporter of basic research in the physical sciences in the United States. In support of SC programs, ESnet regularly updates and refreshes its understanding of the networking requirements of the instruments, facilities, scientists, and science programs that it serves. This focus has helped ESnet be a highly successful enabler of scientific discovery for over 25 years. In November 2012, ESnet and the Office of Biological and Environmental Research (BER) of the DOE SC organized a review to characterize the networking requirements of the programs funded by the BER program office. Several key findings resulted from the review. Among them: 1) The scale of data sets available to science collaborations continues to increase exponentially. This has broad impact, both on the network and on the computational and storage systems connected to the network. 2) Many science collaborations require assistance to cope with the systems and network engineering challenges inherent in managing the rapid growth in data scale. 3) Several science domains operate distributed facilities that rely on high-performance networking for success. Key examples illustrated in this report include the Earth System Grid Federation (ESGF) and the Systems Biology Knowledgebase (KBase). This report expands on these points, and addresses others as well. The report contains a findings section as well as the text of the case studies discussed at the review.

  20. Leaky Integrate-and-Fire Neuron Circuit Based on Floating-Gate Integrator

    Science.gov (United States)

    Kornijcuk, Vladimir; Lim, Hyungkwang; Seok, Jun Yeong; Kim, Guhyun; Kim, Seong Keun; Kim, Inho; Choi, Byung Joon; Jeong, Doo Seok

    2016-01-01

    The artificial spiking neural network (SNN) is promising and has been brought to the notice of the theoretical neuroscience and neuromorphic engineering research communities. In this light, we propose a new type of artificial spiking neuron based on leaky integrate-and-fire (LIF) behavior. A distinctive feature of the proposed FG-LIF neuron is the use of a floating-gate (FG) integrator rather than a capacitor-based one. The relaxation time of the charge on the FG relies mainly on the tunnel barrier profile, e.g., barrier height and thickness (rather than the area). This opens up the possibility of large-scale integration of neurons. The circuit simulation results offered biologically plausible spiking activity; the circuit was subject to possible types of noise, e.g., thermal noise and burst noise. The simulation results indicated remarkable distributional features of interspike intervals that are fitted to Gamma distribution functions, similar to biological neurons in the neocortex. PMID:27242416
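
    A behavioral analogue of the reported interspike-interval statistics can be produced by an ordinary leaky integrate-and-fire neuron driven by noisy input; the sketch below collects interspike intervals and their coefficient of variation, making no attempt to model the floating-gate device physics.

      # Leaky integrate-and-fire neuron driven by a noisy current, collecting interspike
      # intervals (ISIs). Behavioral analogue only; parameters are generic textbook values.
      import numpy as np

      rng = np.random.default_rng(4)
      dt, tau = 0.1e-3, 20e-3                   # time step and membrane time constant (s)
      v_rest, v_reset, v_th = -70e-3, -65e-3, -50e-3   # potentials (V)
      mu, sigma = 21e-3, 4e-3                   # mean drive and noise amplitude (V)

      v, t_last, isis = v_rest, 0.0, []
      for step in range(int(5.0 / dt)):         # 5 s of simulated time
          noise = sigma * np.sqrt(2.0 * dt / tau) * rng.standard_normal()
          v += dt / tau * (-(v - v_rest) + mu) + noise
          if v >= v_th:                         # threshold crossing -> spike and reset
              t = step * dt
              isis.append(t - t_last)
              t_last, v = t, v_reset

      isis = np.array(isis)
      print(f"{isis.size} spikes, mean ISI = {1e3 * isis.mean():.1f} ms, "
            f"CV = {isis.std() / isis.mean():.2f}")
      # A right-skewed ISI histogram with CV between 0 and 1 is the regime that Gamma
      # distributions fit well, as reported in the record.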

  1. Uncovering Biological Network Function via Graphlet Degree Signatures

    Directory of Open Access Journals (Sweden)

    Nataša Pržulj

    2008-01-01

    Full Text Available Motivation: Proteins are essential macromolecules of life and thus understanding their function is of great importance. The number of functionally unclassified proteins is large even for simple and well studied organisms such as baker’s yeast. Methods for determining protein function have shifted their focus from targeting specific proteins based solely on sequence homology to analyses of the entire proteome based on protein-protein interaction (PPI) networks. Since proteins interact to perform a certain function, analyzing structural properties of PPI networks may provide useful clues about the biological function of individual proteins, protein complexes they participate in, and even larger subcellular machines. Results: We design a sensitive graph theoretic method for comparing local structures of node neighborhoods that demonstrates that in PPI networks, biological function of a node and its local network structure are closely related. The method summarizes a protein’s local topology in a PPI network into the vector of graphlet degrees called the signature of the protein and computes the signature similarities between all protein pairs. We group topologically similar proteins under this measure in a PPI network and show that these protein groups belong to the same protein complexes, perform the same biological functions, are localized in the same subcellular compartments, and have the same tissue expressions. Moreover, we apply our technique on proteome-scale network data and infer biological function of yet unclassified proteins demonstrating that our method can provide valuable guidelines for future experimental research such as disease protein prediction. Availability: Data is available upon request.
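
    A much-reduced version of the signature idea, using only three local statistics (degree, triangle count and open two-path count) instead of the 73 graphlet orbit counts, with cosine similarity between node signatures; it only conveys the flavor of the method.

      # Reduced "signature" of a node's local topology: degree, triangles the node is
      # part of, and open two-paths centred at the node, compared by cosine similarity.
      from itertools import combinations
      from math import sqrt

      edges = [("A", "B"), ("A", "C"), ("B", "C"), ("C", "D"),
               ("D", "E"), ("C", "E"), ("E", "F")]
      adj = {}
      for u, v in edges:
          adj.setdefault(u, set()).add(v)
          adj.setdefault(v, set()).add(u)

      def signature(u):
          deg = len(adj[u])
          triangles = sum(1 for a, b in combinations(adj[u], 2) if b in adj[a])
          wedges = deg * (deg - 1) // 2 - triangles     # open two-paths centred at u
          return [deg, triangles, wedges]

      def cosine(a, b):
          dot = sum(x * y for x, y in zip(a, b))
          return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(x * x for x in b)))

      for u, v in [("A", "D"), ("A", "F"), ("C", "E")]:
          print(u, signature(u), v, signature(v),
                "similarity:", round(cosine(signature(u), signature(v)), 3))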

  2. Local excitation-inhibition ratio for synfire chain propagation in feed-forward neuronal networks

    Science.gov (United States)

    Guo, Xinmeng; Yu, Haitao; Wang, Jiang; Liu, Jing; Cao, Yibin; Deng, Bin

    2017-09-01

    A leading hypothesis holds that spiking activity propagates along neuronal sub-populations which are connected in a feed-forward manner, and the propagation efficiency would be affected by the dynamics of sub-populations. In this paper, how the interaction between local excitation and inhibition affects synfire chain propagation in a feed-forward network (FFN) is investigated. The simulation results show that there is an appropriate excitation-inhibition (EI) ratio maximizing the performance of synfire chain propagation. The optimal EI ratio can significantly enhance the selectivity of the FFN to synchronous signals, which thereby increases the stability to background noise. Moreover, the effect of network topology on synfire chain propagation is also investigated. It is found that synfire chain propagation can be maximized by an optimal interlayer linking probability. We also find that external noise is detrimental to synchrony propagation by inducing spiking jitter. The results presented in this paper may provide insights into the effects of network dynamics on neuronal computations.

  3. SBEToolbox: A Matlab Toolbox for Biological Network Analysis.

    Science.gov (United States)

    Konganti, Kranti; Wang, Gang; Yang, Ence; Cai, James J

    2013-01-01

    We present SBEToolbox (Systems Biology and Evolution Toolbox), an open-source Matlab toolbox for biological network analysis. It takes a network file as input, calculates a variety of centralities and topological metrics, clusters nodes into modules, and displays the network using different graph layout algorithms. Straightforward implementation and the inclusion of high-level functions allow the functionality to be easily extended or tailored through developing custom plugins. SBEGUI, a menu-driven graphical user interface (GUI) of SBEToolbox, enables easy access to various network and graph algorithms for programmers and non-programmers alike. All source code and sample data are freely available at https://github.com/biocoder/SBEToolbox/releases.

  4. Mining Functional Modules in Heterogeneous Biological Networks Using Multiplex PageRank Approach.

    Science.gov (United States)

    Li, Jun; Zhao, Patrick X

    2016-01-01

    Identification of functional modules/sub-networks in large-scale biological networks is one of the important research challenges in current bioinformatics and systems biology. Approaches have been developed to identify functional modules in single-class biological networks; however, methods for systematically and interactively mining multiple classes of heterogeneous biological networks are lacking. In this paper, we present a novel algorithm (called mPageRank) that utilizes the Multiplex PageRank approach to mine functional modules from two classes of biological networks. We demonstrate the capabilities of our approach by successfully mining functional biological modules through integrating expression-based gene-gene association networks and protein-protein interaction networks. We first compared the performance of our method with that of other methods using simulated data. We then applied our method to identify the cell division cycle related functional module and plant signaling defense-related functional module in the model plant Arabidopsis thaliana. Our results demonstrated that the mPageRank method is effective for mining sub-networks in both expression-based gene-gene association networks and protein-protein interaction networks, and has the potential to be adapted for the discovery of functional modules/sub-networks in other heterogeneous biological networks. The mPageRank executable program, source code, the datasets and results of the presented two case studies are publicly and freely available at http://plantgrn.noble.org/MPageRank/.
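
    A strongly simplified two-layer sketch of the multiplex idea: PageRank scores computed on one layer bias the teleportation vector of the walk on a second layer. This is an illustration only, not the exact mPageRank formulation of the record, and both toy layers are random.

      # Simplified two-layer "multiplex" PageRank: scores from layer 1 (e.g. a gene-gene
      # association network) bias the teleportation vector of the walk on layer 2
      # (e.g. a protein-protein interaction network).
      import numpy as np

      def pagerank(A, d=0.85, personalize=None, iters=200):
          n = A.shape[0]
          rowsum = A.sum(axis=1, keepdims=True)
          # row-stochastic transition matrix; dangling nodes jump uniformly
          P = np.divide(A, rowsum, out=np.full_like(A, 1.0 / n), where=rowsum > 0)
          v = np.full(n, 1.0 / n) if personalize is None else personalize / personalize.sum()
          x = np.full(n, 1.0 / n)
          for _ in range(iters):
              x = (1 - d) * v + d * (P.T @ x)
          return x

      rng = np.random.default_rng(5)
      n = 30
      layer1 = (rng.random((n, n)) < 0.10).astype(float)   # association layer (toy)
      layer2 = (rng.random((n, n)) < 0.07).astype(float)   # interaction layer (toy)

      x1 = pagerank(layer1)                     # centrality in layer 1
      x2 = pagerank(layer2, personalize=x1)     # layer-2 walk biased by layer-1 scores
      top = np.argsort(-x2)[:5]
      print("top multiplex-ranked nodes:", top, np.round(x2[top], 4))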

  5. Parameter Diversity Induced Multiple Spatial Coherence Resonances and Spiral Waves in Neuronal Network with and Without Noise

    International Nuclear Information System (INIS)

    Li Yuye; Jia Bing; Gu Huaguang; An Shucheng

    2012-01-01

    Diversity among neurons and noise are inevitable in real neuronal networks. In this paper, parameter-diversity-induced spiral waves and multiple spatial coherence resonances in a two-dimensional neuronal network, without or with noise, are simulated. The relationship between the multiple resonances and the multiple transitions between patterns of spiral waves is identified. The coherence degrees induced by the diversity are suppressed when noise is introduced and the noise density is increased. The results suggest that the natural nervous system might profit from both parameter diversity and noise, and they provide a possible approach to controlling the formation and transition of spiral waves through the cooperation between diversity and noise. (general)

  6. Impact of sub and supra-threshold adaptation currents in networks of spiking neurons.

    Science.gov (United States)

    Colliaux, David; Yger, Pierre; Kaneko, Kunihiko

    2015-12-01

    Neuronal adaptation is the intrinsic capacity of the brain to change, through various mechanisms, its dynamical responses as a function of context. Such a phenomenon, widely observed in vivo and in vitro, is known to be crucial for homeostatic regulation of activity and for gain control. The effects of adaptation have already been studied at the single-cell level, where they result from voltage- or calcium-gated channels that are activated by spiking activity and modulate the dynamical responses of the neuron. In this study, by disentangling these effects into a linear (sub-threshold) and a non-linear (supra-threshold) part, we focus on the functional role of these two distinct components of adaptation on neuronal activity at various scales, from single-cell responses up to recurrent network dynamics, under stationary or non-stationary stimulation. The effects of slow currents on collective dynamics, such as the modulation of population oscillations and the reliability of spike patterns, are quantified for various types of adaptation in sparse recurrent networks.
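
    A minimal sketch of the two components in a single adaptive leaky integrate-and-fire neuron is given below: parameter `a` couples the adaptation variable linearly to the membrane potential (sub-threshold), while `b` increments it at every spike (supra-threshold). The model form and all parameter values are illustrative assumptions, not the exact equations of the paper.

```python
import numpy as np

def simulate_alif(a=2.0, b=0.1, I=1.6, t_max=500.0, dt=0.1,
                  tau_m=20.0, tau_w=200.0, v_th=1.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron with an adaptation current w.

    a : sub-threshold (voltage-coupled, linear) adaptation strength
    b : spike-triggered (supra-threshold) adaptation increment
    """
    n = int(t_max / dt)
    v, w = 0.0, 0.0
    spikes = []
    for step in range(n):
        dv = (-v - w + I) / tau_m
        dw = (a * v - w) / tau_w          # linear sub-threshold component
        v += dt * dv
        w += dt * dw
        if v >= v_th:
            v = v_reset
            w += b                        # non-linear spike-triggered component
            spikes.append(step * dt)
    return np.array(spikes)

for a, b in [(0.0, 0.0), (2.0, 0.0), (0.0, 0.2), (2.0, 0.2)]:
    s = simulate_alif(a=a, b=b)
    rate = 1000.0 * len(s) / 500.0        # default t_max is 500 ms
    print(f"a={a}, b={b}: firing rate = {rate:.1f} Hz")
```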

  7. Activating and inhibiting connections in biological network dynamics

    Directory of Open Access Journals (Sweden)

    Knight Rob

    2008-12-01

    Full Text Available Abstract Background Many studies of biochemical networks have analyzed network topology. Such work has suggested that specific types of network wiring may increase network robustness and therefore confer a selective advantage. However, knowledge of network topology does not allow one to predict network dynamical behavior – for example, whether deleting a protein from a signaling network would maintain the network's dynamical behavior, or induce oscillations or chaos. Results Here we report that the balance between activating and inhibiting connections is important in determining whether network dynamics reach steady state or oscillate. We use a simple dynamical model of a network of interacting genes or proteins. Using the model, we study random networks, networks selected for robust dynamics, and examples of biological network topologies. The fraction of activating connections influences whether the network dynamics reach steady state or oscillate. Conclusion The activating fraction may predispose a network to oscillate or reach steady state, and neutral evolution or selection of this parameter may affect the behavior of biological networks. This principle may unify the dynamics of a wide range of cellular networks. Reviewers Reviewed by Sergei Maslov, Eugene Koonin, and Yu (Brandon) Xia (nominated by Mark Gerstein). For the full reviews, please go to the Reviewers' comments section.
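
    A toy caricature of this kind of experiment is sketched below (it is not the authors' model): random signed interaction networks are generated with an activating fraction `f_act`, iterated with a simple saturating update rule, and classified by whether they settle to a steady state. The update rule, network size and all thresholds are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def dynamics_outcome(f_act, n=20, k=4, steps=2000, tol=1e-6):
    """Simulate a random signed interaction network and report whether the
    state reaches a steady state or keeps changing.

    f_act : fraction of activating (positive) connections
    n, k  : number of nodes and in-degree per node
    """
    # random sparse coupling matrix with signed entries
    W = np.zeros((n, n))
    for i in range(n):
        targets = rng.choice(n, size=k, replace=False)
        signs = np.where(rng.random(k) < f_act, 1.0, -1.0)
        W[i, targets] = signs
    x = rng.uniform(-1, 1, n)
    for _ in range(steps):
        x_new = np.tanh(W @ x)
        if np.max(np.abs(x_new - x)) < tol:
            return "steady state"
        x = x_new
    return "no steady state (oscillation or chaos)"

for f_act in [0.1, 0.3, 0.5, 0.7, 0.9]:
    outcomes = [dynamics_outcome(f_act) for _ in range(20)]
    frac_ss = outcomes.count("steady state") / len(outcomes)
    print(f"activating fraction {f_act:.1f}: {frac_ss:.0%} of networks reach steady state")
```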

  8. Emergence of communication in socio-biological networks

    CERN Document Server

    Berea, Anamaria

    2018-01-01

    This book integrates current advances in biology, economics of information and linguistics research through applications using agent-based modeling and social network analysis to develop scenarios of communication and language emergence in the social aspects of biological communications. The book presents a model of communication emergence that can be applied both to human and non-human living organism networks. The model is based on economic concepts and individual behavior fundamental for the study of trust and reputation networks in social science, particularly in economics; it is also based on the theory of the emergence of norms and historical path dependence that has been influential in institutional economics. Also included are mathematical models and code for agent-based models to explore various scenarios of language evolution, as well as a computer application that explores language and communication in biological versus social organisms, and the emergence of various meanings and grammars in human ...

  9. Multiple Spatial Coherence Resonances and Spatial Patterns in a Noise-Driven Heterogeneous Neuronal Network

    International Nuclear Information System (INIS)

    Li Yu-Ye; Ding Xue-Li

    2014-01-01

    Heterogeneity of neurons and noise are inevitable in real neuronal networks. In this paper, Gaussian-white-noise-induced spatial patterns, including spiral waves and multiple spatial coherence resonances, are studied in a network composed of Morris-Lecar neurons with heterogeneity characterized by parameter diversity. The relationship between the resonances and the transitions between ordered spiral waves and disordered spatial patterns is established. When parameter diversity is introduced, the maxima of the multiple resonances first increase and then decrease as the diversity strength increases, which implies that the coherence degrees induced by noise are enhanced at an intermediate diversity strength. The synchronization degree of the spatial patterns, including ordered spiral waves and disordered patterns, is found to remain at a very low level. The results suggest that the nervous system can profit from both heterogeneity and noise, and that the multiple spatial coherence resonances are achieved via the emergence of spiral waves rather than synchronization patterns. (interdisciplinary physics and related areas of science and technology)

  10. Multiple Spatial Coherence Resonances and Spatial Patterns in a Noise-Driven Heterogeneous Neuronal Network

    Science.gov (United States)

    Li, Yu-Ye; Ding, Xue-Li

    2014-12-01

    Heterogeneity of neurons and noise are inevitable in real neuronal networks. In this paper, Gaussian-white-noise-induced spatial patterns, including spiral waves and multiple spatial coherence resonances, are studied in a network composed of Morris-Lecar neurons with heterogeneity characterized by parameter diversity. The relationship between the resonances and the transitions between ordered spiral waves and disordered spatial patterns is established. When parameter diversity is introduced, the maxima of the multiple resonances first increase and then decrease as the diversity strength increases, which implies that the coherence degrees induced by noise are enhanced at an intermediate diversity strength. The synchronization degree of the spatial patterns, including ordered spiral waves and disordered patterns, is found to remain at a very low level. The results suggest that the nervous system can profit from both heterogeneity and noise, and that the multiple spatial coherence resonances are achieved via the emergence of spiral waves rather than synchronization patterns.

  11. Characterization of a patch-clamp microchannel array towards neuronal networks analysis

    DEFF Research Database (Denmark)

    Alberti, Massimo; Snakenborg, Detlef; Lopacinska, Joanna M.

    2010-01-01

    for simultaneous patch clamping of cultured cells or neurons in the same network. A disposable silicon/silicon dioxide (Si/SiO2) chip with a microhole array was integrated in a microfluidic system for cell handling, perfusion and electrical recording. Fluidic characterization showed that our PC mu CA can work...

  12. Network Biology (http://www.iaees.org/publications/journals/nb/online-version.asp)

    Directory of Open Access Journals (Sweden)

    networkbiology@iaees.org

    Full Text Available Network Biology ISSN 2220-8879 URL: http://www.iaees.org/publications/journals/nb/online-version.asp RSS: http://www.iaees.org/publications/journals/nb/rss.xml E-mail: networkbiology@iaees.org Editor-in-Chief: WenJun Zhang Aims and Scope NETWORK BIOLOGY (ISSN 2220-8879; CODEN NBEICS) is an open access, peer-reviewed international journal that considers scientific articles in all different areas of network biology. It is the transactions of the International Society of Network Biology. It is dedicated to the latest advances in network biology. The goal of this journal is to keep a record of the state-of-the-art research and promote the research work in these fast moving areas. The topics to be covered by Network Biology include, but are not limited to: •Theories, algorithms and programs of network analysis •Innovations and applications of biological networks •Ecological networks, food webs and natural equilibrium •Co-evolution, co-extinction, biodiversity conservation •Metabolic networks, protein-protein interaction networks, biochemical reaction networks, gene networks, transcriptional regulatory networks, cell cycle networks, phylogenetic networks, network motifs •Physiological networks •Network regulation of metabolic processes, human diseases and ecological systems •Social networks, epidemiological networks •System complexity, self-organized systems, emergence of biological systems, agent-based modeling, individual-based modeling, neural network modeling, and other network-based modeling, etc. We are also interested in short communications that clearly address a specific issue or completely present a new ecological network, food web, or metabolic or gene network, etc. Authors can submit their works to the email box of this journal, networkbiology@iaees.org. All manuscripts submitted to this journal must be previously unpublished and may not be considered for publication elsewhere at any time during the review period of this journal.

  13. Assessment of network perturbation amplitudes by applying high-throughput data to causal biological networks

    Directory of Open Access Journals (Sweden)

    Martin Florian

    2012-05-01

    Full Text Available Abstract Background High-throughput measurement technologies produce data sets that have the potential to elucidate the biological impact of disease, drug treatment, and environmental agents on humans. The scientific community faces an ongoing challenge in the analysis of these rich data sources to more accurately characterize biological processes that have been perturbed at the mechanistic level. Here, a new approach is built on previous methodologies in which high-throughput data are interpreted using prior biological knowledge of cause-and-effect relationships. These relationships are structured into network models that describe specific biological processes, such as inflammatory signaling or cell cycle progression. This enables quantitative assessment of network perturbation in response to a given stimulus. Results Four complementary methods were devised to quantify treatment-induced activity changes in processes described by network models. In addition, companion statistics were developed to qualify significance and specificity of the results. This approach is called Network Perturbation Amplitude (NPA) scoring because the amplitudes of treatment-induced perturbations are computed for biological network models. The NPA methods were tested on two transcriptomic data sets: normal human bronchial epithelial (NHBE) cells treated with the pro-inflammatory signaling mediator TNFα, and HCT116 colon cancer cells treated with the CDK cell cycle inhibitor R547. Each data set was scored against network models representing different aspects of inflammatory signaling and cell cycle progression, and these scores were compared with independent measures of pathway activity in NHBE cells to verify the approach. The NPA scoring method successfully quantified the amplitude of TNFα-induced perturbation for each network model when compared against NF-κB nuclear localization and cell number. In addition, the degree and specificity to which CDK

  14. morphforge: a toolbox for simulating small networks of biologically detailed neurons in Python

    Directory of Open Access Journals (Sweden)

    Michael James Hull

    2014-01-01

    Full Text Available The broad structure of a modelling study can often be explained over a cup of coffee, but converting this high-level conceptual idea into graphs of the final simulation results may require many weeks of sitting at a computer. Although models themselves can be complex, often many mental resources are wasted working around complexities of the software ecosystem, such as fighting to manage files, interfacing between tools and data formats, finding mistakes in code or working out the units of variables. morphforge is a high-level Python toolbox for building and managing simulations of small populations of multicompartmental biophysical model neurons. An entire in silico experiment, including the definition of neuronal morphologies, channel descriptions, stimuli, visualisation and analysis of results, can be written within a single short Python script using high-level objects. Multiple independent simulations can be created and run from a single script, allowing parameter spaces to be investigated. Consideration has been given to the reuse of both algorithmic and parameterisable components to allow both specific and stochastic parameter variations. Some other features of the toolbox include: the automatic generation of human-readable documentation (e.g. PDF files) about a simulation; the transparent handling of different biophysical units; a novel mechanism for plotting simulation results based on a system of tags; and an architecture that supports both the use of established formats for defining channels and synapses (e.g. MODL files), and the possibility to support other libraries and standards easily. We hope that this toolbox will allow scientists to quickly build simulations of multicompartmental model neurons for research and serve as a platform for further tool development.

  15. The pairwise phase consistency in cortical network and its relationship with neuronal activation

    Directory of Open Access Journals (Sweden)

    Wang Daming

    2017-01-01

    Full Text Available Gamma-band neuronal oscillations and synchronization in the range of 30-90 Hz are a ubiquitous phenomenon across numerous brain areas and species, and are correlated with many cognitive functions. The phase of the oscillation, as one aspect of the CTC (Communication through Coherence) hypothesis, underlies functions such as feature coding, memory processing and behaviour. The PPC (Pairwise Phase Consistency), an improved coherence measure, statistically quantifies the strength of phase synchronization. In order to evaluate the PPC and its relationship to the input stimulus, neuronal activation and firing rate, a simplified spiking neuronal network is constructed to simulate orientation columns in primary visual cortex. If the input orientation stimulus matches the preference of a certain orientation column, neurons within that column attain a higher firing rate and stronger activation, which consequently engenders higher PPC values, with higher PPC corresponding to higher firing rate. In addition, we investigate the PPC in a time-resolved analysis with a sliding window.
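
    The PPC itself is simple to compute from spike phases measured relative to the ongoing oscillation. The sketch below follows the standard definition (the average cosine of the phase difference over all distinct spike pairs, computed here via an equivalent closed form); the von Mises and uniform phase samples are synthetic stand-ins for phase-locked and unlocked spike trains.

```python
import numpy as np

def pairwise_phase_consistency(phases):
    """PPC: mean cosine of the phase difference over all distinct spike pairs.

    `phases` are spike phases (radians) relative to the ongoing gamma cycle.
    The closed form (|sum(exp(i*phase))|**2 - N) / (N*(N-1)) is equivalent to
    the explicit O(N^2) average over pairs.
    """
    phases = np.asarray(phases, dtype=float)
    n = phases.size
    if n < 2:
        raise ValueError("PPC needs at least two spikes")
    resultant_sq = np.abs(np.exp(1j * phases).sum()) ** 2
    return (resultant_sq - n) / (n * (n - 1))

rng = np.random.default_rng(0)

# Strongly phase-locked spikes (narrow phase distribution) vs. random phases
locked = rng.vonmises(mu=0.0, kappa=5.0, size=200)
random = rng.uniform(-np.pi, np.pi, size=200)
print("PPC, phase-locked spikes:", round(pairwise_phase_consistency(locked), 3))
print("PPC, random phases:      ", round(pairwise_phase_consistency(random), 3))
```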

  16. Effect of acute stretch injury on action potential and network activity of rat neocortical neurons in culture.

    Science.gov (United States)

    Magou, George C; Pfister, Bryan J; Berlin, Joshua R

    2015-10-22

    The basis for acute seizures following traumatic brain injury (TBI) remains unclear. Animal models of TBI have revealed acute hyperexcitability in cortical neurons that could underlie seizure activity, but studying initiating events causing hyperexcitability is difficult in these models. In vitro models of stretch injury with cultured cortical neurons, a surrogate for TBI, allow facile investigation of cellular changes after injury, but they have only demonstrated post-injury hypoexcitability. The goal of this study was to determine if neuronal hyperexcitability could be triggered by in vitro stretch injury. Controlled uniaxial stretch injury was delivered to a spatially delimited region of a spontaneously active network of cultured rat cortical neurons, yielding a region of stretch-injured neurons and adjacent regions of non-stretched neurons that did not directly experience stretch injury. Spontaneous electrical activity was measured in non-stretched and stretch-injured neurons, and in control neuronal networks not subjected to stretch injury. Non-stretched neurons in stretch-injured cultures displayed a three-fold increase in action potential firing rate and bursting activity 30-60 min post-injury. Stretch-injured neurons, however, displayed dramatically lower rates of action potential firing and bursting. These results demonstrate that acute hyperexcitability can be observed in non-stretched neurons located in regions adjacent to the site of stretch injury, consistent with reports that seizure activity can arise from regions surrounding the site of localized brain injury. Thus, this in vitro procedure for localized neuronal stretch injury may provide a model to study the earliest cellular changes in neuronal function associated with acute post-traumatic seizures.

  17. A single hidden layer feedforward network with only one neuron in the hidden layer can approximate any univariate function

    OpenAIRE

    Guliyev , Namig; Ismailov , Vugar

    2016-01-01

    The possibility of approximating a continuous function on a compact subset of the real line by a feedforward single hidden layer neural network with a sigmoidal activation function has been studied in many papers. Such networks can approximate an arbitrary continuous function provided that an unlimited number of neurons in a hidden layer is permitted. In this paper, we consider constructive approximation on any finite interval of $\mathbb{R}$ by neural networks with only one neuron in the hid...

  18. Differential Patterns of Dysconnectivity in Mirror Neuron and Mentalizing Networks in Schizophrenia

    NARCIS (Netherlands)

    Schilbach, Leonhard; Derntl, Birgit; Aleman, Andre; Caspers, Svenja; Clos, Mareike; Diederen, Kelly M J; Gruber, Oliver; Kogler, Lydia; Liemburg, Edith J; Sommer, Iris E; Müller, Veronika I; Cieslik, Edna C; Eickhoff, Simon B

    Impairments of social cognition are well documented in patients with schizophrenia (SCZ), but the neural basis remains poorly understood. In light of evidence that suggests that the "mirror neuron system" (MNS) and the "mentalizing network" (MENT) are key substrates of intersubjectivity and joint

  19. A decaying factor accounts for contained activity in neuronal networks with no need of hierarchical or modular organization

    International Nuclear Information System (INIS)

    Amancio, Diego R; Oliveira Jr, Osvaldo N; Costa, Luciano da F

    2012-01-01

    The mechanisms responsible for containing activity in systems represented by networks are crucial in various phenomena, for example, in diseases such as epilepsy that affect neuronal networks and in information dissemination in social networks. The first models to account for contained activity included triggering and inhibition processes, but they cannot be applied to social networks where inhibition is clearly absent. A recent model showed that contained activity can be achieved with no need of inhibition processes provided that the network is subdivided into modules (communities). In this paper, we introduce a new concept inspired by Hebbian theory, in which containment of activity is achieved by incorporating a decaying activity into a random-walk mechanism that is preferential to node activity. Upon selecting the decay coefficient within a proper range, we observed sustained activity in all the networks tested, namely random, Barabási–Albert and geographical networks. The generality of this finding was confirmed by showing that modularity is no longer needed if the integrate-and-fire dynamics incorporates the decay factor. Taken together, these results provide a proof of principle that persistent, restrained network activation might occur in the absence of any particular topological structure. This may be the reason why neuronal activity does not spread out to the entire neuronal network, even when no special topological organization exists. (paper)
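
    The sketch below is a schematic reading of that mechanism, not the authors' exact rules: a single activity-carrying walker moves preferentially towards already-active neighbours on a Barabási–Albert graph, and every node's activity decays by a constant factor at each step, which keeps the total activity contained. The decay value, graph size and bias constant are illustrative assumptions.

```python
import networkx as nx
import numpy as np

rng = np.random.default_rng(0)

G = nx.barabasi_albert_graph(200, 3, seed=0)
decay = 0.9            # decay coefficient, to be chosen "within a proper range"
n_steps = 500

activity = np.zeros(G.number_of_nodes())
activity[0] = 1.0      # trigger activity at a single node
current = 0            # position of the activity-carrying walker

history = []
for _ in range(n_steps):
    neighbours = list(G.neighbors(current))
    # walk preferentially towards nodes that are already active
    weights = activity[neighbours] + 0.01
    current = rng.choice(neighbours, p=weights / weights.sum())
    activity[current] += 1.0
    activity *= decay                    # decaying factor containing the activity
    history.append(activity.sum())

print(f"total activity after {n_steps} steps: {history[-1]:.2f}")
print(f"fraction of nodes ever activated: {(activity > 0).mean():.2f}")
```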

  20. Neural network models: from biology to many - body phenomenology

    International Nuclear Information System (INIS)

    Clark, J.W.

    1993-01-01

    This article surveys the current surge of research on the practical side of neural networks and their utility in memory storage/recall, pattern recognition and classification. The initial attraction of neural networks as dynamical and statistical systems is also examined. From the viewpoint of a many-body theorist, the neurons may be thought of as particles, and the weighted connections between the units as the interactions between these particles. Finally, it is argued that the impressive capabilities of artificial neural networks in pattern recognition and classification may be exploited to solve data-management problems in experimental physics, and that neural networks may contribute to radically new theoretical descriptions of physical problems. (A.B.)

  1. Implementing Signature Neural Networks with Spiking Neurons.

    Science.gov (United States)

    Carrillo-Medina, José Luis; Latorre, Roberto

    2016-01-01

    Spiking Neural Networks constitute the most promising approach to developing realistic Artificial Neural Networks (ANNs). Unlike traditional firing-rate-based paradigms, information coding in spiking models is based on the precise timing of individual spikes. It has been demonstrated that spiking ANNs can be successfully and efficiently applied to multiple realistic problems solvable with traditional strategies (e.g., data classification or pattern recognition). In recent years, major breakthroughs in neuroscience research have uncovered new, relevant computational principles in different living neural systems. Could ANNs benefit from some of these recent findings as novel elements of inspiration? This is an intriguing question for the research community, and the development of spiking ANNs that include novel bio-inspired information coding and processing strategies is gaining attention. From this perspective, in this work we adapt the core concepts of the recently proposed Signature Neural Network paradigm (neural signatures to identify each unit in the network, local contextualization of information during processing, and multicoding strategies for propagating information about the origin and content of the data) so that they can be employed in a spiking neural network. To the best of our knowledge, none of these mechanisms has been used yet in the context of ANNs of spiking neurons. This paper provides a proof of concept for their applicability in such networks. Computer simulations show that a simple network model like the one discussed here exhibits complex self-organizing properties. The combination of multiple simultaneous encoding schemes allows the network to generate coexisting spatio-temporal patterns of activity encoding information in different spatio-temporal spaces. As a function of the network and/or intra-unit parameters shaping the corresponding encoding modality, different forms of competition among the evoked patterns can emerge even in the absence

  2. A Neuronal Network Model for Pitch Selectivity and Representation.

    Science.gov (United States)

    Huang, Chengcheng; Rinzel, John

    2016-01-01

    Pitch is a perceptual correlate of periodicity. Sounds with distinct spectra can elicit the same pitch. Despite the importance of pitch perception, understanding its cellular mechanism is still a major challenge and a mechanistic model of pitch is lacking. A multi-stage neuronal network model is developed for pitch frequency estimation using biophysically-based, high-resolution coincidence detector neurons. The neuronal units respond only to highly coincident input among convergent auditory nerve fibers across frequency channels. Their selectivity for only very fast rising slopes of convergent input enables these slope-detectors to distinguish the most prominent coincidences in multi-peaked input time courses. Pitch can then be estimated from the first-order interspike intervals of the slope-detectors. The regular firing patterns of the slope-detector neurons are similar for sounds sharing the same pitch despite their distinct timbres. The decoded pitch strengths also correlate well with the salience of pitch perception as reported by human listeners. Therefore, our model can serve as a neural representation for pitch. Our model performs successfully in estimating the pitch of missing-fundamental complexes and reproducing the pitch variation with respect to the frequency shift of inharmonic complexes. It also accounts for the phase sensitivity of pitch perception in the cases of Schroeder-phase, alternating-phase and random-phase relationships. Moreover, our model can also be applied to stochastic sound stimuli, such as iterated ripple noise, and account for their multiple pitch percepts.

  3. Network Analysis Tools: from biological networks to clusters and pathways.

    Science.gov (United States)

    Brohée, Sylvain; Faust, Karoline; Lima-Mendez, Gipsi; Vanderstocken, Gilles; van Helden, Jacques

    2008-01-01

    Network Analysis Tools (NeAT) is a suite of computer tools that integrate various algorithms for the analysis of biological networks: comparison between graphs, between clusters, or between graphs and clusters; network randomization; analysis of degree distribution; network-based clustering and path finding. The tools are interconnected to enable a stepwise analysis of the network through a complete analytical workflow. In this protocol, we present a typical case of utilization, where the tasks above are combined to decipher a protein-protein interaction network retrieved from the STRING database. The results returned by NeAT are typically subnetworks, networks enriched with additional information (i.e., clusters or paths) or tables displaying statistics. Typical networks comprising several thousands of nodes and arcs can be analyzed within a few minutes. The complete protocol can be read and executed in approximately 1 h.

  4. On the sample complexity of learning for networks of spiking neurons with nonlinear synaptic interactions.

    Science.gov (United States)

    Schmitt, Michael

    2004-09-01

    We study networks of spiking neurons that use the timing of pulses to encode information. Nonlinear interactions model the spatial groupings of synapses on the neural dendrites and describe the computations performed at local branches. Within a theoretical framework of learning we analyze the question of how many training examples these networks must receive to be able to generalize well. Bounds for this sample complexity of learning can be obtained in terms of a combinatorial parameter known as the pseudodimension. This dimension characterizes the computational richness of a neural network and is given in terms of the number of network parameters. Two types of feedforward architectures are considered: constant-depth networks and networks of unconstrained depth. We derive asymptotically tight bounds for each of these network types. Constant depth networks are shown to have an almost linear pseudodimension, whereas the pseudodimension of general networks is quadratic. Networks of spiking neurons that use temporal coding are becoming increasingly more important in practical tasks such as computer vision, speech recognition, and motor control. The question of how well these networks generalize from a given set of training examples is a central issue for their successful application as adaptive systems. The results show that, although coding and computation in these networks is quite different and in many cases more powerful, their generalization capabilities are at least as good as those of traditional neural network models.

  5. Study on algorithm of process neural network for soft sensing in sewage disposal system

    Science.gov (United States)

    Liu, Zaiwen; Xue, Hong; Wang, Xiaoyi; Yang, Bin; Lu, Siying

    2006-11-01

    A new method of soft sensing based on a process neural network (PNN) for a sewage disposal system is presented in this paper. A PNN is an extension of the traditional neural network in which the inputs and outputs are time-varying. An aggregation operator is introduced into the process neuron, giving the network the ability to deal with information in the two dimensions of space and time simultaneously, so that the data-processing machinery of biological neurons is imitated better than in traditional neuron models. A process neural network with a three-layer structure, in which the hidden layer consists of process neurons and the input and output layers of common neurons, is discussed for soft sensing. The intelligent soft sensing based on the PNN may be used to measure the effluent BOD (biochemical oxygen demand) of the sewage disposal system, and a good training result for soft sensing was obtained by the method.
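
    A minimal sketch of a single process neuron is given below: a time-varying input is aggregated with a time-varying weight function by integrating over the time window before the activation function is applied. The soft-sensing network in the paper stacks such neurons in a hidden layer and learns the weight functions, which is not shown here; the signals and weights below are arbitrary illustrations.

```python
import numpy as np

def process_neuron(x_t, w_t, dt, theta=0.0, f=np.tanh):
    """Process neuron: aggregate time-varying inputs with time-varying weight
    functions by integrating over time, then apply the activation function.

    x_t, w_t : arrays of shape (n_inputs, n_timesteps)
    """
    aggregated = np.sum(w_t * x_t) * dt     # space-time aggregation operator
    return f(aggregated - theta)

# Toy example: two time-varying input signals over a 10 s window
dt = 0.01
t = np.arange(0.0, 10.0, dt)
x_t = np.vstack([np.sin(0.5 * t), np.exp(-0.2 * t)])       # inputs x_i(t)
w_t = np.vstack([0.3 * np.ones_like(t), 0.1 * t / t[-1]])  # weight functions w_i(t)

print("process neuron output:", process_neuron(x_t, w_t, dt))
```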

  6. Fast and Efficient Asynchronous Neural Computation with Adapting Spiking Neural Networks

    NARCIS (Netherlands)

    D. Zambrano (Davide); S.M. Bohte (Sander)

    2016-01-01

    textabstractBiological neurons communicate with a sparing exchange of pulses - spikes. It is an open question how real spiking neurons produce the kind of powerful neural computation that is possible with deep artificial neural networks, using only so very few spikes to communicate. Building on

  7. The formation of synchronization cliques during the development of modular neural networks

    International Nuclear Information System (INIS)

    Fuchs, Einat; Ayali, Amir; Ben-Jacob, Eshel; Boccaletti, Stefano

    2009-01-01

    Modular organization is a special feature shared by many biological and social networks alike. It is a hallmark of systems exhibiting multitasking, in which individual tasks are performed by separated and yet coordinated functional groups. Understanding how networks of segregated modules develop to support coordinated multitasking functionalities is the main topic of the current study. Using simulations of biologically inspired neuronal networks during development, we study the formation of functional groups (cliques) and inter-neuronal synchronization. The results indicate that synchronization cliques first develop locally, according to the explicit network topological organization. Later on, at intermediate connectivity levels, when networks have both local segregation and long-range integration, new synchronization cliques with distinctive properties are formed. In particular, by defining a new measure of synchronization centrality, we identify at these developmental stages dominant neurons whose functional centrality largely exceeds the topological one. These are generated mainly in a few dominant clusters that become the centers of the newly formed synchronization cliques. We show that, from the local synchronization properties at the very early developmental stages, it is possible to predict with high accuracy which clusters will become dominant in later stages of network development.

  8. Dynamical System Approach for Edge Detection Using Coupled FitzHugh-Nagumo Neurons.

    Science.gov (United States)

    Li, Shaobai; Dasmahapatra, Srinandan; Maharatna, Koushik

    2015-12-01

    The prospect of emulating the impressive computational capabilities of biological systems has led to considerable interest in the design of analog circuits that are potentially implementable in very-large-scale-integration CMOS technology and are guided by biologically motivated models. For example, simple image processing tasks, such as the detection of edges in binary and grayscale images, have been performed by networks of FitzHugh-Nagumo-type neurons using reaction-diffusion models. However, in these studies, the one-to-one mapping of image pixels to component neurons makes the size of the network a critical factor in any such implementation. In this paper, we develop a simplified version of the employed reaction-diffusion model in three steps. In the first step, we perform a detailed study to locate the relevant threshold using continuous Lyapunov exponents from dynamical systems theory. Furthermore, we render the diffusion in the system anisotropic, with the degree of anisotropy set by the gradients of grayscale values in each image. The final step involves a simplification of the model achieved by eliminating the terms that couple the membrane potentials of adjacent neurons. We apply our technique to detect edges in data sets of artificially generated and real images, and we demonstrate that the performance is as good as, if not better than, that of the previous methods without increasing the size of the network.
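
    For orientation, the sketch below sets up the basic one-neuron-per-pixel arrangement: a grid of FitzHugh-Nagumo units driven by pixel intensities and coupled by isotropic diffusion, with a crude gradient-based edge read-out. It deliberately omits the anisotropic coupling, the Lyapunov-exponent threshold analysis and the simplifications developed in the paper, and all parameters are generic textbook values rather than the authors'.

```python
import numpy as np

def laplacian(v):
    """Discrete 5-point Laplacian with edge padding (isotropic diffusion)."""
    vp = np.pad(v, 1, mode="edge")
    return vp[:-2, 1:-1] + vp[2:, 1:-1] + vp[1:-1, :-2] + vp[1:-1, 2:] - 4 * v

# Toy grayscale "image": a bright square on a dark background
img = np.zeros((40, 40))
img[10:30, 10:30] = 1.0

# FitzHugh-Nagumo lattice, one unit per pixel, pixel intensity as input current
a, b, eps, D = 0.7, 0.8, 0.08, 0.2
dt, n_steps = 0.05, 2000
v = np.zeros_like(img)
w = np.zeros_like(img)
for _ in range(n_steps):
    dv = v - v ** 3 / 3 - w + img + D * laplacian(v)
    dw = eps * (v + a - b * w)
    v += dt * dv
    w += dt * dw

# Crude edge read-out: large spatial differences in membrane potential
gy, gx = np.gradient(v)
edges = np.hypot(gx, gy) > 0.2
print("fraction of pixels flagged as edges:", edges.mean().round(3))
```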

  9. Hybrid discrete-time neural networks.

    Science.gov (United States)

    Cao, Hongjun; Ibarz, Borja

    2010-11-13

    Hybrid dynamical systems combine evolution equations with state transitions. When the evolution equations are discrete-time (also called map-based), the result is a hybrid discrete-time system. A class of biological neural network models that has recently received some attention falls within this category: map-based neuron models connected by means of fast threshold modulation (FTM). FTM is a connection scheme that aims to mimic the switching dynamics of a neuron subject to synaptic inputs. The dynamic equations of the neuron adopt different forms according to the state (either firing or not firing) and type (excitatory or inhibitory) of their presynaptic neighbours. Therefore, the mathematical model of one such network is a combination of discrete-time evolution equations with transitions between states, constituting a hybrid discrete-time (map-based) neural network. In this paper, we review previous work within the context of these models, exemplifying useful techniques to analyse them. Typical map-based neuron models are low-dimensional and amenable to phase-plane analysis. In bursting models, fast-slow decomposition can be used to reduce dimensionality further, so that the dynamics of a pair of connected neurons can be easily understood. We also discuss a model that includes electrical synapses in addition to chemical synapses with FTM. Furthermore, we describe how master stability functions can predict the stability of synchronized states in these networks. The main results are extended to larger map-based neural networks.
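
    As a concrete example of the ingredients reviewed above, the sketch below couples two Rulkov map neurons (one commonly used map-based model; the review covers several) through a fast-threshold-modulation-style current that is switched on only while the presynaptic fast variable exceeds a threshold. The specific map, coupling form and parameter values are assumptions for illustration, not taken from the paper.

```python
import numpy as np

# Rulkov map parameters (illustrative values; the map can produce spiking/bursting regimes)
alpha, mu, sigma = 4.3, 0.001, -1.2
# Fast-threshold-modulation-style coupling: active only while x_pre > theta
g_c, theta, v_rev = 0.1, -1.0, 0.0      # coupling strength, threshold, reversal level

n_steps = 5000
x = np.array([-1.0, -1.2])              # fast variables of the two neurons
y = np.array([-2.9, -2.9])              # slow variables
trace = np.zeros((n_steps, 2))

for n in range(n_steps):
    # each neuron receives a coupling current gated by the other neuron's state
    pre_active = (x[::-1] > theta).astype(float)
    i_syn = g_c * (v_rev - x) * pre_active
    x_new = alpha / (1.0 + x ** 2) + y + i_syn
    y_new = y - mu * (x - sigma)
    x, y = x_new, y_new
    trace[n] = x

# count upward threshold crossings of the fast variable as a crude activity read-out
crossings = ((trace[1:] > theta) & (trace[:-1] <= theta)).sum(axis=0)
print("threshold crossings per neuron:", crossings)
```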

  10. The NEST Dry-Run Mode: Efficient Dynamic Analysis of Neuronal Network Simulation Code

    Directory of Open Access Journals (Sweden)

    Susanne Kunkel

    2017-06-01

    Full Text Available NEST is a simulator for spiking neuronal networks that commits to a general purpose approach: It allows for high flexibility in the design of network models, and its applications range from small-scale simulations on laptops to brain-scale simulations on supercomputers. Hence, developers need to test their code for various use cases and ensure that changes to code do not impair scalability. However, running a full set of benchmarks on a supercomputer takes up precious compute-time resources and can entail long queuing times. Here, we present the NEST dry-run mode, which enables comprehensive dynamic code analysis without requiring access to high-performance computing facilities. A dry-run simulation is carried out by a single process, which performs all simulation steps except communication as if it was part of a parallel environment with many processes. We show that measurements of memory usage and runtime of neuronal network simulations closely match the corresponding dry-run data. Furthermore, we demonstrate the successful application of the dry-run mode in the areas of profiling and performance modeling.

  11. Searching for collective behavior in a large network of sensory neurons.

    Directory of Open Access Journals (Sweden)

    Gašper Tkačik

    2014-01-01

    Full Text Available Maximum entropy models are the least structured probability distributions that exactly reproduce a chosen set of statistics measured in an interacting network. Here we use this principle to construct probabilistic models which describe the correlated spiking activity of populations of up to 120 neurons in the salamander retina as it responds to natural movies. Already in groups as small as 10 neurons, interactions between spikes can no longer be regarded as small perturbations in an otherwise independent system; for 40 or more neurons, pairwise interactions need to be supplemented by a global interaction that controls the distribution of synchrony in the population. Here we show that such "K-pairwise" models, systematic extensions of the previously used pairwise Ising models, provide an excellent account of the data. We explore the properties of the neural vocabulary by: (1) estimating its entropy, which constrains the population's capacity to represent visual information; (2) classifying activity patterns into a small set of metastable collective modes; (3) showing that the neural codeword ensembles are extremely inhomogeneous; and (4) demonstrating that the state of individual neurons is highly predictable from the rest of the population, allowing the capacity for error correction.
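
    To make the model class concrete, the sketch below writes down a small pairwise maximum-entropy (Ising) model over binary spike words, enumerates all patterns exactly, and reports the vocabulary entropy and the distribution of synchrony. The couplings are random placeholders rather than parameters fitted to retinal data, and the global K-term of the K-pairwise extension is omitted.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

# Pairwise maximum-entropy (Ising) model for a small neural population:
# P(sigma) ~ exp( sum_i h_i sigma_i + sum_{i<j} J_ij sigma_i sigma_j ),
# with sigma_i in {0, 1} marking whether neuron i spikes in a time bin.
# (Couplings below are random placeholders, not parameters fitted to data.)
n = 8
h = rng.normal(-1.0, 0.5, n)                  # biases favouring sparse firing
J = np.triu(rng.normal(0.0, 0.3, (n, n)), 1)  # pairwise couplings (upper triangle)

patterns = np.array(list(itertools.product([0, 1], repeat=n)))
log_weights = patterns @ h + np.einsum("pi,ij,pj->p", patterns, J, patterns)
p = np.exp(log_weights)
p /= p.sum()

# Entropy of the model's "vocabulary" of activity patterns (in bits)
entropy = -(p * np.log2(p)).sum()
print(f"entropy: {entropy:.2f} bits over {2**n} patterns")

# Distribution of synchrony: probability that k neurons fire together
k = patterns.sum(axis=1)
for kk in range(n + 1):
    print(f"P({kk} neurons active) = {p[k == kk].sum():.3f}")
```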

  12. Novel topological descriptors for analyzing biological networks

    Directory of Open Access Journals (Sweden)

    Varmuza Kurt K

    2010-06-01

    Full Text Available Abstract Background Topological descriptors, other graph measures, and in a broader sense, graph-theoretical methods, have been proven to be powerful tools for performing biological network analysis. However, the majority of the developed descriptors and graph-theoretical methods do not have the ability to take vertex- and edge-labels into account, e.g., atom- and bond-types when considering molecular graphs. Indeed, this feature is important to characterize biological networks more meaningfully instead of only considering pure topological information. Results In this paper, we put the emphasis on analyzing a special type of biological networks, namely bio-chemical structures. First, we derive entropic measures to calculate the information content of vertex- and edge-labeled graphs and investigate some useful properties thereof. Second, we apply the mentioned measures combined with other well-known descriptors to supervised machine learning methods for predicting Ames mutagenicity. Moreover, we investigate the influence of our topological descriptors - measures for only unlabeled vs. measures for labeled graphs - on the prediction performance of the underlying graph classification problem. Conclusions Our study demonstrates that the application of entropic measures to molecules represented as graphs is useful to characterize such structures meaningfully. For instance, we have found that if one extends the measures for determining the structural information content of unlabeled graphs to labeled graphs, the uniqueness of the resulting indices is higher. Because measures to structurally characterize labeled graphs are clearly underrepresented so far, the further development of such methods might be valuable and fruitful for solving problems within biological network analysis.

  13. A fully connected network of Bernoulli units with correlation learning

    Science.gov (United States)

    Dente, J. A.; Vilela Mendes, R.

    1996-02-01

    Biological evidence suggests that pattern recognition and associative memory in the mammalian nervous system operate through the establishment of spatio-temporal patterns of activity, and not by evolution towards an equilibrium point as in attractor neural networks. Information is carried by the space-time correlation of the activity intensities rather than by the details of individual neuron signals. Furthermore, the fast recognition times that are achieved with relatively slow biological neurons seem to be associated with the chaotic nature of the basal nervous activity. Copying biological hardware may not be technologically sound, but looking for inspiration in efficient biological information-processing methods is an idea that deserves consideration. Inspired by the mechanisms at work in the mammalian olfactory system, we study a network where, in the absence of external inputs, the units have dynamics of the Bernoulli-shift type. When an external signal is presented, the pattern of excitation bursts depends on the learning history of the network. Association and pattern identification in the network operate by the selection, by the external stimulus, of distinct invariant measures of the chaotic system. The simplicity of the chosen node dynamics allows reasonable analytical control of the network behavior.
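
    A generic coupled-map sketch of such a network is shown below: each unit iterates the Bernoulli shift map and is coupled to every other unit through a weight matrix, and the space-time correlation of the resulting activity is measured. The correlation-learning rule studied in the paper is not implemented, and the coupling strength and weights are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def bernoulli_shift(x):
    """Bernoulli shift map: the chaotic intrinsic dynamics of each unit."""
    return (2.0 * x) % 1.0

# Fully connected network of Bernoulli units with diffusive-style coupling
n_units, eps, n_steps = 50, 0.15, 400
W = rng.random((n_units, n_units))
np.fill_diagonal(W, 0.0)
W /= W.sum(axis=1, keepdims=True)            # normalise incoming weights

x = rng.random(n_units)
states = np.zeros((n_steps, n_units))
for t in range(n_steps):
    fx = bernoulli_shift(x)
    x = (1.0 - eps) * fx + eps * (W @ fx)    # local chaos plus network coupling
    states[t] = x

# Space-time correlation of activity intensities (the quantity the abstract
# argues carries the information, rather than individual unit signals)
corr = np.corrcoef(states.T)
print("mean pairwise correlation:", corr[np.triu_indices(n_units, 1)].mean().round(3))
```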

  14. Oscillations, complex spatiotemporal behavior, and information transport in networks of excitatory and inhibitory neurons

    International Nuclear Information System (INIS)

    Destexhe, A.

    1994-01-01

    Various types of spatiotemporal behavior are described for two-dimensional networks of excitatory and inhibitory neurons with time delayed interactions. It is described how the network behaves as several structural parameters are varied, such as the number of neurons, the connectivity, and the values of synaptic weights. A transition from spatially uniform oscillations to spatiotemporal chaos via intermittentlike behavior is observed. The properties of spatiotemporally chaotic solutions are investigated by evaluating the largest positive Lyapunov exponent and the loss of correlation with distance. Finally, properties of information transport are evaluated during uniform oscillations and spatiotemporal chaos. It is shown that the diffusion coefficient increases significantly in the spatiotemporal phase similar to the increase of transport coefficients at the onset of fluid turbulence. It is proposed that such a property should be seen in other media, such as chemical turbulence or networks of oscillators. The possibility of measuring information transport from appropriate experiments is also discussed

  15. Intermittent synchronization in a network of bursting neurons

    Science.gov (United States)

    Park, Choongseok; Rubchinsky, Leonid L.

    2011-09-01

    Synchronized oscillations in networks of inhibitory and excitatory coupled bursting neurons are common in a variety of neural systems from central pattern generators to human brain circuits. One example of the latter is the subcortical network of the basal ganglia, formed by excitatory and inhibitory bursters of the subthalamic nucleus and globus pallidus, involved in motor control and affected in Parkinson's disease. Recent experiments have demonstrated the intermittent nature of the phase-locking of neural activity in this network. Here, we explore one potential mechanism to explain the intermittent phase-locking in a network. We simplify the network to obtain a model of two inhibitory coupled elements and explore its dynamics. We used geometric analysis and singular perturbation methods for dynamical systems to reduce the full model to a simpler set of equations. Mathematical analysis was completed using three slow variables with two different time scales. Intermittently, synchronous oscillations are generated by overlapped spiking which crucially depends on the geometry of the slow phase plane and the interplay between slow variables as well as the strength of synapses. Two slow variables are responsible for the generation of activity patterns with overlapped spiking, and the other slower variable enhances the robustness of an irregular and intermittent activity pattern. While the analyzed network and the explored mechanism of intermittent synchrony appear to be quite generic, the results of this analysis can be used to trace particular values of biophysical parameters (synaptic strength and parameters of calcium dynamics), which are known to be impacted in Parkinson's disease.

  16. Neuronal network disturbance after focal ischemia in rats

    International Nuclear Information System (INIS)

    Kataoka, K.; Hayakawa, T.; Yamada, K.; Mushiroi, T.; Kuroda, R.; Mogami, H.

    1989-01-01

    We studied functional disturbances following left middle cerebral artery occlusion in rats. Neuronal function was evaluated by [14C]2-deoxyglucose autoradiography 1 day after occlusion. We analyzed the mechanisms of change in glucose utilization outside the infarct using Fink-Heimer silver impregnation, axonal transport of wheat germ agglutinin-conjugated-horseradish peroxidase, and succinate dehydrogenase histochemistry. One day after occlusion, glucose utilization was remarkably reduced in the areas surrounding the infarct. There were many silver grains indicating degeneration of the synaptic terminals in the cortical areas surrounding the infarct and the ipsilateral cingulate cortex. Moreover, in the left thalamus where the left middle cerebral artery supplied no blood, glucose utilization significantly decreased compared with sham-operated rats. In the left thalamus, massive silver staining of degenerated synaptic terminals and decreases in succinate dehydrogenase activity were observed 4 and 5 days after occlusion. The absence of succinate dehydrogenase staining may reflect early changes in retrograde degeneration of thalamic neurons after ischemic injury of the thalamocortical pathway. Terminal degeneration even affected areas remote from the infarct: there were silver grains in the contralateral hemisphere transcallosally connected to the infarct and in the ipsilateral substantia nigra. Axonal transport study showed disruption of the corticospinal tract by subcortical ischemia; the transcallosal pathways in the cortex surrounding the infarct were preserved. The relation between neural function and the neuronal network in the area surrounding the focal cerebral infarct is discussed with regard to ischemic penumbra and diaschisis

  17. The CNP signal is able to silence a supra threshold neuronal model

    Directory of Open Access Journals (Sweden)

    Francesca Camera

    2015-04-01

    Full Text Available Several experimental results published in the literature showed that weak pulsed magnetic fields affected the response of the central nervous system. However, the specific biological mechanisms that regulate the observed behaviors are still unclear and further scientific investigation is required. In this work we performed simulations on a neuronal network model exposed to a specific pulsed magnetic field signal that seems to be very effective in modulating the brain activity: the Complex Neuroelectromagnetic Pulse (CNP. Results show that CNP can silence the neurons of a feed-forward network for signal intensities that depend on the strength of the bias current, the endogenous noise level and the specific waveforms of the pulses.

  18. Social network size relates to developmental neural sensitivity to biological motion

    Directory of Open Access Journals (Sweden)

    L.A. Kirby

    2018-04-01

    Full Text Available The ability to perceive others’ actions and goals from human motion (i.e., biological motion perception) is a critical component of social perception and may be linked to the development of real-world social relationships. Adult research demonstrates that two key nodes of the brain’s biological motion perception system, the amygdala and the posterior superior temporal sulcus (pSTS), are linked to variability in social network properties. The relation between social perception and social network properties, however, has not yet been investigated in middle childhood, a time when individual differences in social experiences and social perception are growing. The aims of this study were to (1) replicate past work showing amygdala and pSTS sensitivity to biological motion in middle childhood; (2) examine age-related changes in the neural sensitivity for biological motion; and (3) determine whether neural sensitivity for biological motion relates to social network characteristics in children. Consistent with past work, we demonstrate a significant relation between social network size and neural sensitivity for biological motion in left pSTS, but do not find age-related change in biological motion perception. This finding offers evidence for the interplay between real-world social experiences and functional brain development and has important implications for understanding disorders of atypical social experience. Keywords: Biological motion, Social networks, Middle childhood, Neural specialization, Brain-behavior relations, pSTS

  19. Neuron-Like Networks Between Ribosomal Proteins Within the Ribosome

    Science.gov (United States)

    Poirot, Olivier; Timsit, Youri

    2016-05-01

    From brain to the World Wide Web, information-processing networks share common scale-invariant properties. Here, we reveal the existence of neural-like networks at a molecular scale within the ribosome. We show that with their extensions, ribosomal proteins form complex assortative interaction networks through which they communicate through tiny interfaces. The analysis of the crystal structures of 50S eubacterial particles reveals that most of these interfaces involve key phylogenetically conserved residues. The systematic observation of interactions between basic and aromatic amino acids at the interfaces and along the extensions provides new structural insights that may contribute to deciphering the molecular mechanisms of signal transmission within or between the ribosomal proteins. Similar to neurons interacting through “molecular synapses”, ribosomal proteins form a network that suggests an analogy with a simple molecular brain in which the “sensory-proteins” innervate the functional ribosomal sites, while the “inter-proteins” interconnect them into circuits suitable to process the information flow that circulates during protein synthesis. It is likely that these circuits have evolved to coordinate both the complex macromolecular motions and the binding of the multiple factors during translation. This opens new perspectives on nanoscale information transfer and processing.

  20. A Note on Some Numerical Approaches to Solve a θ˙ Neuron Networks Model

    Directory of Open Access Journals (Sweden)

    Samir Kumar Bhowmik

    2014-01-01

    Full Text Available Space-time integration plays an important role in analyzing scientific and engineering models. In this paper, we consider an integro-differential equation that arises in modeling θ˙ neuron networks. We investigate various schemes for the time discretization of a theta-neuron model. We use collocation and the midpoint quadrature formula for the space integration and then apply various time integration schemes to obtain a fully discrete system. We present some computational results to demonstrate the schemes.
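
    As a small illustration of the time-stepping choices involved, the sketch below integrates a single theta neuron with forward Euler and with the explicit midpoint rule and counts the spikes each scheme produces; the integral (network coupling) term of the full model and the collocation treatment of space are omitted, and the drive current and step size are arbitrary.

```python
import numpy as np

def theta_rhs(theta, I):
    """Theta (Ermentrout-Kopell) neuron: d(theta)/dt = (1 - cos theta) + (1 + cos theta) I."""
    return (1.0 - np.cos(theta)) + (1.0 + np.cos(theta)) * I

def integrate(method, I=0.1, dt=0.01, t_max=50.0):
    n = int(t_max / dt)
    theta = 0.0
    spikes = 0
    for _ in range(n):
        if method == "euler":
            theta_new = theta + dt * theta_rhs(theta, I)
        else:  # explicit midpoint rule
            half = theta + 0.5 * dt * theta_rhs(theta, I)
            theta_new = theta + dt * theta_rhs(half, I)
        # count full rotations of the phase (each rotation corresponds to one spike)
        spikes += int((theta_new % (2 * np.pi)) < (theta % (2 * np.pi)))
        theta = theta_new
    return spikes

for method in ("euler", "midpoint"):
    print(method, "->", integrate(method), "spikes in 50 time units")
```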

  1. On the origin of distribution patterns of motifs in biological networks

    Directory of Open Access Journals (Sweden)

    Lesk Arthur M

    2008-08-01

    Full Text Available Abstract Background Inventories of small subgraphs in biological networks have identified commonly-recurring patterns, called motifs. The inference that these motifs have been selected for function rests on the idea that their occurrences are significantly more frequent than random. Results Our analysis of several large biological networks suggests, in contrast, that the frequencies of appearance of common subgraphs are similar in natural and corresponding random networks. Conclusion Indeed, certain topological features of biological networks give rise naturally to the common appearance of the motifs. We therefore question whether frequencies of occurrences are reasonable evidence that the structures of motifs have been selected for their functional contribution to the operation of networks.
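
    The basic comparison at stake can be sketched in a few lines: count a motif (here the feed-forward loop) in a directed network and compare the count with the average over an ensemble of random networks. In this toy version the "biological" network is itself random and the null model only preserves the numbers of nodes and edges, whereas motif studies normally preserve the degree sequence; it is meant only to show the shape of the test.

```python
import networkx as nx

def count_feed_forward_loops(G):
    """Count feed-forward loops (a->b, b->c, a->c), a classic directed motif."""
    count = 0
    for a, b in G.edges():
        for c in G.successors(b):
            if c != a and G.has_edge(a, c):
                count += 1
    return count

# Directed toy network standing in for a real biological network
G = nx.gnp_random_graph(100, 0.05, directed=True, seed=1)
observed = count_feed_forward_loops(G)

# Null model: random graphs with the same numbers of nodes and edges
n, m = G.number_of_nodes(), G.number_of_edges()
null_counts = [
    count_feed_forward_loops(nx.gnm_random_graph(n, m, directed=True, seed=s))
    for s in range(50)
]
mean_null = sum(null_counts) / len(null_counts)
print(f"feed-forward loops: observed={observed}, random mean={mean_null:.1f}")
```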

  2. Spiny Neurons of Amygdala, Striatum and Cortex Use Dendritic Plateau Potentials to Detect Network UP States

    Directory of Open Access Journals (Sweden)

    Katerina D Oikonomou

    2014-09-01

    Full Text Available Spiny neurons of the amygdala, striatum, and cerebral cortex share four interesting features: (1) they are the most abundant cell type within their respective brain area, (2) they are covered by thousands of thorny protrusions (dendritic spines), (3) they possess high levels of dendritic NMDA conductances, and (4) they experience sustained somatic depolarizations in vivo and in vitro (UP states). In all spiny neurons of the forebrain, adequate glutamatergic inputs generate dendritic plateau potentials (dendritic UP states) characterized by (i) a fast rise, (ii) a plateau phase lasting several hundred milliseconds, and (iii) an abrupt decline at the end of the plateau phase. The dendritic plateau potential propagates towards the cell body decrementally to induce a long-lasting (longer than 100 ms, most often 200-800 ms) steady depolarization (~20 mV amplitude), which resembles a neuronal UP state. Based on voltage-sensitive dye imaging, the plateau depolarization in the soma is precisely time-locked to the regenerative plateau potential taking place in the dendrite. The somatic plateau rises after the onset of the dendritic voltage transient and collapses with the breakdown of the dendritic plateau depolarization. We hypothesize that neuronal UP states in vivo reflect the occurrence of dendritic plateau potentials (dendritic UP states). We propose that the somatic voltage waveform during a neuronal UP state is determined by dendritic plateau potentials. A mammalian spiny neuron uses dendritic plateau potentials to detect and transform coherent network activity into a ubiquitous neuronal UP state. The biophysical properties of dendritic plateau potentials allow neurons to quickly attune to the ongoing network activity, as well as secure the stable amplitudes of successive UP states.

  3. A computational paradigm for dynamic logic-gates in neuronal activity

    Directory of Open Access Journals (Sweden)

    Amir Goldental

    2014-04-01

    Full Text Available In 1943 McCulloch and Pitts suggested that the brain is composed of reliable logic-gates similar to the logic at the core of today's computers. This framework had a limited impact on neuroscience, since neurons exhibit far richer dynamics. Here we propose a new experimentally corroborated paradigm in which the truth tables of the brain's logic-gates are time dependent, i.e. dynamic logic-gates (DLGs). The truth tables of the DLGs depend on the history of their activity and the stimulation frequencies of their input neurons. Our experimental results are based on a procedure where conditioned stimulations were enforced on circuits of neurons embedded within a large-scale network of cortical cells in vitro. We demonstrate that the underlying biological mechanism is the unavoidable increase of neuronal response latencies to ongoing stimulations, which imposes a non-uniform gradual stretching of network delays. The limited experimental results are confirmed and extended by simulations and theoretical arguments based on identical neurons with a fixed increase of the neuronal response latency per evoked spike. We anticipate our results to lead to a better understanding of the suitability of this computational paradigm to account for the brain's functionalities, which will require the development of new systematic mathematical methods beyond those developed for traditional Boolean algebra.

  4. Continuum Modeling of Biological Network Formation

    KAUST Repository

    Albi, Giacomo; Burger, Martin; Haskovec, Jan; Markowich, Peter A.; Schlottbom, Matthias

    2017-01-01

    We present an overview of recent analytical and numerical results for the elliptic–parabolic system of partial differential equations proposed by Hu and Cai, which models the formation of biological transportation networks. The model describes

  5. Flow-Based Network Analysis of the Caenorhabditis elegans Connectome.

    Science.gov (United States)

    Bacik, Karol A; Schaub, Michael T; Beguerisse-Díaz, Mariano; Billeh, Yazan N; Barahona, Mauricio

    2016-08-01

    We exploit flow propagation on the directed neuronal network of the nematode C. elegans to reveal dynamically relevant features of its connectome. We find flow-based groupings of neurons at different levels of granularity, which we relate to functional and anatomical constituents of its nervous system. A systematic in silico evaluation of the full set of single and double neuron ablations is used to identify deletions that induce the most severe disruptions of the multi-resolution flow structure. Such ablations are linked to functionally relevant neurons, and suggest potential candidates for further in vivo investigation. In addition, we use the directional patterns of incoming and outgoing network flows at all scales to identify flow profiles for the neurons in the connectome, without pre-imposing a priori categories. The four flow roles identified are linked to signal propagation motivated by biological input-response scenarios.
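
    A compact version of such an ablation screen can be sketched in Python. The snippet below is only illustrative: a random directed graph stands in for the connectome (which is not reproduced here), and disruption is scored by the drop in the number of reachable node pairs rather than by the multi-resolution flow-community criterion used in the study.

```python
import networkx as nx

def reachable_pairs(g):
    """Number of ordered node pairs (u, v) with a directed path from u to v."""
    return sum(len(nx.descendants(g, n)) for n in g)

def ablation_screen(g):
    """Score every single-node deletion by how many reachable pairs it removes."""
    base = reachable_pairs(g)
    scores = {}
    for n in list(g.nodes):
        h = g.copy()
        h.remove_node(n)
        scores[n] = base - reachable_pairs(h)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

g = nx.gnp_random_graph(50, 0.08, directed=True, seed=1)  # toy "connectome"
print(ablation_screen(g)[:5])    # the five most disruptive deletions
```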

  6. Network science of biological systems at different scales: A review

    Science.gov (United States)

    Gosak, Marko; Markovič, Rene; Dolenšek, Jurij; Slak Rupnik, Marjan; Marhl, Marko; Stožer, Andraž; Perc, Matjaž

    2018-03-01

    Network science is today established as a backbone for description of structure and function of various physical, chemical, biological, technological, and social systems. Here we review recent advances in the study of complex biological systems that were inspired and enabled by methods of network science. First, we present…

  7. A neuronal network model with simplified tonotopicity for tinnitus generation and its relief by sound therapy.

    Science.gov (United States)

    Nagashino, Hirofumi; Kinouchi, Yohsuke; Danesh, Ali A; Pandya, Abhijit S

    2013-01-01

    Tinnitus is the perception of sound in the ears or in the head where no external source is present. Sound therapy is one of the most effective treatments that have been proposed for tinnitus. In order to investigate the mechanisms of tinnitus generation and the clinical effects of sound therapy, we have previously proposed conceptual and computational models with plasticity using a neural oscillator or a neuronal network model. In the present paper, we propose a neuronal network model with simplified tonotopicity of the auditory system as a more detailed structure. In this model an integrate-and-fire neuron model is employed and homeostatic plasticity is incorporated. The computer simulation results show that the present model can reproduce the generation of oscillation and its cessation by external input. This suggests that the present framework is promising as a model of tinnitus generation and of the effects of sound therapy.
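
    The two ingredients named above, an integrate-and-fire unit and homeostatic plasticity, can be sketched generically in Python; the parameter values below are illustrative assumptions, and the tonotopic structure of the paper's model is omitted.

```python
import numpy as np

def lif_with_homeostasis(t_end=2000.0, dt=1.0, tau_m=20.0, v_th=1.0,
                         target_rate=0.02, eta=1e-3, seed=0):
    """Leaky integrate-and-fire unit driven by Poisson input through a single
    weight that is scaled homeostatically toward a target firing probability
    per time step (times in ms)."""
    rng = np.random.default_rng(seed)
    v, w, rate_est, spikes = 0.0, 0.5, 0.0, []
    for i in range(int(t_end / dt)):
        v += dt * (-v / tau_m) + w * rng.poisson(0.2)   # leak + synaptic drive
        fired = v >= v_th
        if fired:
            spikes.append(i * dt)
            v = 0.0
        rate_est += 0.01 * (float(fired) - rate_est)    # slow rate estimate
        w *= 1.0 + eta * (target_rate - rate_est)       # homeostatic scaling
    return spikes, w

spikes, w_final = lif_with_homeostasis()
print(len(spikes), round(w_final, 3))
```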

  8. The energy demand of fast neuronal network oscillations: insights from brain slice preparations

    Directory of Open Access Journals (Sweden)

    Oliver Kann

    2012-01-01

    Full Text Available Fast neuronal network oscillations in the gamma range (30-100 Hz) in the cerebral cortex have been implicated in higher cognitive functions such as sensory perception, working memory, and, perhaps, consciousness. However, little is known about the energy demand of gamma oscillations. This is mainly caused by technical limitations that are associated with simultaneous recordings of neuronal activity and energy metabolism in small neuronal networks and at the level of mitochondria in vivo. Thus, recent studies have focused on brain slice preparations to address the energy demand of gamma oscillations in vitro. Here, reports will be summarized and discussed that combined electrophysiological recordings, oxygen sensor microelectrodes, and live-cell fluorescence imaging in acutely prepared slices and organotypic slice cultures of the hippocampus from both mouse and rat. These reports consistently show that gamma oscillations can be reliably induced in hippocampal slice preparations by different pharmacological tools. They suggest that gamma oscillations are associated with high energy demand, requiring both rapid adaptation of oxidative energy metabolism and a sufficient supply of oxygen and nutrients. These findings might help to explain the exceptional vulnerability of higher cognitive functions during pathological processes of the brain, such as circulatory disturbances, genetic mitochondrial diseases, and neurodegeneration.

  9. Leader neurons in population bursts of 2D living neural networks

    International Nuclear Information System (INIS)

    Eckmann, J-P; Zbinden, Cyrille; Jacobi, Shimshon; Moses, Elisha; Marom, Shimon

    2008-01-01

    Eytan and Marom (2006 J. Neurosci. 26 8465-76) recently showed that the spontaneous bursting activity of rat neuron cultures includes 'first-to-fire' cells that consistently fire earlier than others. Here, we analyze the behavior of these neurons in long-term recordings of spontaneous activity of rat hippocampal and rat cortical neuron cultures from three different laboratories. We identify precursor events that may either subside ('aborted bursts') or lead to a full-blown burst ('pre-bursts'). We find that the activation in the pre-burst typically has a first neuron ('leader'), followed by a localized response in its neighborhood. Locality is diminished in the bursts themselves. The long-term dynamics of the leaders is relatively robust, evolving with a half-life of 23-34 h. Stimulation of the culture alters the leader distribution, but the distribution stabilizes within about 1 h. We show that the leaders carry information about the identity of the burst, as measured by the signature of the number of spikes per neuron in a burst. The number of spikes from leaders in the first few spikes of a precursor event is furthermore shown to be predictive with regard to the transition into a burst (pre-burst versus aborted burst). We conclude that the leaders play a role in the development of the bursts and conjecture that they are part of an underlying sub-network that is excited first and then acts as a nucleation center for the burst.

  10. Healthy human CSF promotes glial differentiation of hESC-derived neural cells while retaining spontaneous activity in existing neuronal networks

    Directory of Open Access Journals (Sweden)

    Heikki Kiiski

    2013-05-01

    The possibilities of human pluripotent stem cell-derived neural cells, ranging from a basic research tool to a treatment option in regenerative medicine, have been well recognized. These cells also offer an interesting tool for in vitro models of neuronal networks to be used for drug screening and neurotoxicological studies and for patient/disease-specific in vitro models. Here, aiming to develop a reductionistic in vitro human neuronal network model, we tested whether human embryonic stem cell (hESC)-derived neural cells could be cultured in human cerebrospinal fluid (CSF) in order to better mimic in vivo conditions. Our results showed that CSF altered the differentiation of hESC-derived neural cells towards glial cells at the expense of neuronal differentiation. The proliferation rate was reduced in CSF cultures. However, even though the use of CSF as the culture medium altered the glial vs. neuronal differentiation rate, the pre-existing spontaneous activity of the neuronal networks persisted throughout the study. These results suggest that it is possible to develop fully human cell and culture-based environments that can further be modified for various in vitro modeling purposes.

  11. Repeated Stimulation of Cultured Networks of Rat Cortical Neurons Induces Parallel Memory Traces

    Science.gov (United States)

    le Feber, Joost; Witteveen, Tim; van Veenendaal, Tamar M.; Dijkstra, Jelle

    2015-01-01

    During systems consolidation, memories are spontaneously replayed favoring information transfer from hippocampus to neocortex. However, at present no empirically supported mechanism to accomplish a transfer of memory from hippocampal to extra-hippocampal sites has been offered. We used cultured neuronal networks on multielectrode arrays and…

  12. Synchronizations in small-world networks of spiking neurons: Diffusive versus sigmoid couplings

    International Nuclear Information System (INIS)

    Hasegawa, Hideo

    2005-01-01

    By using a semianalytical dynamical mean-field approximation previously proposed by the author [H. Hasegawa, Phys. Rev. E 70, 066107 (2004)], we have studied the synchronization of stochastic, small-world (SW) networks of FitzHugh-Nagumo neurons with diffusive couplings. The differences and similarities between results for diffusive and sigmoid couplings are discussed. It is shown that by introducing weak heterogeneity into regular networks, the synchronization may be slightly increased for diffusive couplings, while it is decreased for sigmoid couplings. This increase in synchronization for diffusive couplings is shown to be due to their local, negative feedback contributions, and not to the short average distance in SW networks. Synchronization of SW networks depends not only on their structure but also on the type of couplings.
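
    For comparison with the mean-field treatment above, a direct numerical simulation of noisy FitzHugh-Nagumo units with diffusive coupling on a Watts-Strogatz graph is easy to set up; the sketch below uses illustrative parameters and a crude synchrony index (variance of the mean field over the mean single-unit variance), not the author's semianalytical method.

```python
import numpy as np
import networkx as nx

def fhn_small_world(n=100, k=4, p=0.1, g=0.05, t_end=200.0, dt=0.05,
                    a=0.7, b=0.8, tau=12.5, drive=0.5, noise=0.1, seed=0):
    """Noisy FitzHugh-Nagumo units, diffusively coupled on a small-world graph;
    returns a synchrony index (larger means more synchronized)."""
    rng = np.random.default_rng(seed)
    A = nx.to_numpy_array(nx.watts_strogatz_graph(n, k, p, seed=seed))
    deg = A.sum(axis=1)
    v, w = rng.normal(0.0, 0.1, n), np.zeros(n)
    traces = []
    for _ in range(int(t_end / dt)):
        coupling = g * (A @ v - deg * v)                 # diffusive coupling
        dv = v - v**3 / 3 - w + drive + coupling
        dw = (v + a - b * w) / tau
        v = v + dt * dv + np.sqrt(dt) * noise * rng.normal(size=n)
        w = w + dt * dw
        traces.append(v.copy())
    traces = np.array(traces)
    return traces.mean(axis=1).var() / traces.var(axis=0).mean()

print(fhn_small_world(p=0.0), fhn_small_world(p=0.2))   # regular vs rewired
```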

  13. Dense module enumeration in biological networks

    Science.gov (United States)

    Tsuda, Koji; Georgii, Elisabeth

    2009-12-01

    Analysis of large networks is a central topic in various research fields, including biology, sociology, and web mining. Detection of dense modules (a.k.a. clusters) is an important step in analyzing these networks. Though numerous methods have been proposed to this end, they often lack mathematical rigor; namely, there is no guarantee that all dense modules are detected. Here, we present a novel reverse-search-based method for enumerating all dense modules. Furthermore, constraints from additional data sources such as gene expression profiles or customer profiles can be integrated, so that we can systematically detect dense modules with interesting profiles. We report successful applications in human protein interaction network analyses.

  14. Dense module enumeration in biological networks

    International Nuclear Information System (INIS)

    Tsuda, Koji; Georgii, Elisabeth

    2009-01-01

    Analysis of large networks is a central topic in various research fields, including biology, sociology, and web mining. Detection of dense modules (a.k.a. clusters) is an important step in analyzing these networks. Though numerous methods have been proposed to this end, they often lack mathematical rigor; namely, there is no guarantee that all dense modules are detected. Here, we present a novel reverse-search-based method for enumerating all dense modules. Furthermore, constraints from additional data sources such as gene expression profiles or customer profiles can be integrated, so that we can systematically detect dense modules with interesting profiles. We report successful applications in human protein interaction network analyses.

  15. Delay selection by spike-timing-dependent plasticity in recurrent networks of spiking neurons receiving oscillatory inputs.

    Directory of Open Access Journals (Sweden)

    Robert R Kerr

    Full Text Available Learning rules, such as spike-timing-dependent plasticity (STDP), change the structure of networks of neurons based on their firing activity. A network-level understanding of these mechanisms can help infer how the brain learns patterns and processes information. Previous studies have shown that STDP selectively potentiates feed-forward connections that have specific axonal delays, and that this underlies behavioral functions such as sound localization in the auditory brainstem of the barn owl. In this study, we investigate how STDP leads to the selective potentiation of recurrent connections with different axonal and dendritic delays during oscillatory activity. We develop analytical models of learning with additive STDP in recurrent networks driven by oscillatory inputs, and support the results using simulations with leaky integrate-and-fire neurons. Our results show selective potentiation of connections with specific axonal delays, which depended on the input frequency. In addition, we demonstrate how this can lead to a network becoming selective in the amplitude of its oscillatory response to this frequency. We extend this model of axonal delay selection within a single recurrent network in two ways. First, we show the selective potentiation of connections with a range of both axonal and dendritic delays. Second, we show axonal delay selection between multiple groups receiving out-of-phase, oscillatory inputs. We discuss the application of these models to the formation and activation of neuronal ensembles or cell assemblies in the cortex, and also to missing fundamental pitch perception in the auditory brainstem.
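
    The core selection effect can be illustrated numerically without the full recurrent-network analysis: for an additive STDP window and sinusoidally modulated pre- and post-synaptic rates, the expected weight drift of a connection depends on its axonal delay. The sketch below uses illustrative parameters and is not the paper's derivation.

```python
import numpy as np

def stdp_drift_vs_delay(freq=10.0, delays=np.linspace(0.0, 100.0, 201),
                        a_plus=1.0, a_minus=1.05, tau_plus=20.0, tau_minus=20.0):
    """Mean additive-STDP weight drift as a function of axonal delay when pre-
    and post-synaptic rates oscillate at `freq` Hz (times in ms). Peaks of the
    returned curve mark the delays favored at this input frequency."""
    s = np.linspace(-200.0, 200.0, 4001)        # STDP lag grid (post - arrival)
    ds = s[1] - s[0]
    W = np.where(s > 0, a_plus * np.exp(-s / tau_plus),     # potentiation
                        -a_minus * np.exp(s / tau_minus))   # depression
    drifts = []
    for d in delays:
        # The lag s equals the post-pre time difference minus the axonal delay,
        # so the rate covariance seen through the synapse is a shifted cosine.
        C = np.cos(2.0 * np.pi * freq * (s + d) / 1000.0)
        drifts.append(np.sum(W * C) * ds)
    return np.array(drifts)

drift = stdp_drift_vs_delay()
print(int(drift.argmax()))   # index of the most strongly potentiated delay
```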

  16. Stability of discrete memory states to stochastic fluctuations in neuronal systems

    Science.gov (United States)

    Miller, Paul; Wang, Xiao-Jing

    2014-01-01

    Noise can degrade memories by causing transitions from one memory state to another. For any biological memory system to be useful, the time scale of such noise-induced transitions must be much longer than the required duration for memory retention. Using biophysically realistic modeling, we consider two types of memory in the brain: short-term memories maintained by reverberating neuronal activity for a few seconds, and long-term memories maintained by a molecular switch for years. Both systems require persistence of (neuronal or molecular) activity self-sustained by an autocatalytic process and, we argue, both have limited memory lifetimes because of significant fluctuations. We will first discuss a strongly recurrent cortical network model endowed with feedback loops, for short-term memory. Fluctuations are due to highly irregular spike firing, a salient characteristic of cortical neurons. Then, we will analyze a model for long-term memory, based on an autophosphorylation mechanism of calcium/calmodulin-dependent protein kinase II (CaMKII) molecules. There, fluctuations arise from the fact that there are only a small number of CaMKII molecules at each postsynaptic density (putative synaptic memory unit). Our results are twofold. First, we demonstrate analytically and computationally the exponential dependence of stability on the number of neurons in a self-excitatory network, and on the number of CaMKII proteins in a molecular switch. Second, for each of the two systems, we implement graded memory consisting of a group of bistable switches. For the neuronal network we report interesting ramping temporal dynamics as a result of sequentially switching an increasing number of discrete, bistable, units. The general observation of an exponential increase in memory stability with the system size leads to a trade-off between the robustness of memories (which increases with the size of each bistable unit) and the total amount of information storage (which decreases…

  17. Using biological networks to improve our understanding of infectious diseases

    Directory of Open Access Journals (Sweden)

    Nicola J. Mulder

    2014-08-01

    Full Text Available Infectious diseases are the leading cause of death, particularly in developing countries. Although many drugs are available for treating the most common infectious diseases, in many cases the mechanism of action of these drugs or even their targets in the pathogen remain unknown. In addition, the key factors or processes in pathogens that facilitate infection and disease progression are often not well understood. Since proteins do not work in isolation, understanding biological systems requires a better understanding of the interconnectivity between proteins in different pathways and processes, which includes both physical and other functional interactions. Such biological networks can be generated within organisms or between organisms sharing a common environment using experimental data and computational predictions. Though different data sources provide different levels of accuracy, confidence in interactions can be measured using interaction scores. Interacting proteins and the connections between them can be represented as the nodes and edges of a graph, and thus studied using existing algorithms and tools from graph theory. There are many different applications of biological networks, and here we discuss three such applications, specifically applied to the infectious disease tuberculosis, with its causative agent, Mycobacterium tuberculosis, and its host, Homo sapiens. The applications include the use of the networks for function prediction, comparison of networks for evolutionary studies, and the generation and use of host–pathogen interaction networks.

  18. Revisiting the variation of clustering coefficient of biological networks suggests new modular structure.

    Science.gov (United States)

    Hao, Dapeng; Ren, Cong; Li, Chuanxing

    2012-05-01

    A central idea in biology is the hierarchical organization of cellular processes. A commonly used method to identify the hierarchical modular organization of a network relies on detecting a global signature known as variation of clustering coefficient (so-called modularity scaling). Although several studies have suggested other possible origins of this signature, it is still widely used nowadays to identify hierarchical modularity, especially in the analysis of biological networks. Therefore, a further and systematic investigation of this signature for different types of biological networks is necessary. We analyzed a variety of biological networks and found that the commonly used signature of hierarchical modularity is actually the reflection of spoke-like topology, suggesting a different view of network architecture. We proved that the existence of super-hubs is the reason that the clustering coefficient of a node follows a particular scaling law with degree k in metabolic networks. To study the modularity of biological networks, we systematically investigated the relationship between repulsion of hubs and variation of clustering coefficient. We provided direct evidence that repulsion between hubs is the underlying origin of the variation of clustering coefficient, and found that for biological networks having no anti-correlation between hubs, such as gene co-expression networks, the clustering coefficient does not depend on degree. Here we have shown that the variation of clustering coefficient is neither sufficient nor exclusive for a network to be hierarchical. Our results suggest the existence of spoke-like modules as opposed to the "deterministic model" of hierarchical modularity, and suggest the need to reconsider the organizational principle of biological hierarchy.
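
    The C(k) signature itself is straightforward to compute. The snippet below (a generic illustration, not the paper's pipeline) bins the clustering coefficient by degree for any networkx graph, so that the presence or absence of the scaling can be checked directly.

```python
import numpy as np
import networkx as nx

def clustering_vs_degree(g):
    """Average clustering coefficient as a function of node degree."""
    clu, by_k = nx.clustering(g), {}
    for node, k in g.degree():
        by_k.setdefault(k, []).append(clu[node])
    return {k: float(np.mean(v)) for k, v in sorted(by_k.items())}

g = nx.barabasi_albert_graph(2000, 3, seed=1)   # toy graph with strong hubs
ck = clustering_vs_degree(g)
print(list(ck.items())[:5], list(ck.items())[-5:])
```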

  19. Linking Neurons to Network Function and Behavior by Two-Photon Holographic Optogenetics and Volumetric Imaging.

    Science.gov (United States)

    Dal Maschio, Marco; Donovan, Joseph C; Helmbrecht, Thomas O; Baier, Herwig

    2017-05-17

    We introduce a flexible method for high-resolution interrogation of circuit function, which combines simultaneous 3D two-photon stimulation of multiple targeted neurons, volumetric functional imaging, and quantitative behavioral tracking. This integrated approach was applied to dissect how an ensemble of premotor neurons in the larval zebrafish brain drives a basic motor program, the bending of the tail. We developed an iterative photostimulation strategy to identify minimal subsets of channelrhodopsin (ChR2)-expressing neurons that are sufficient to initiate tail movements. At the same time, the induced network activity was recorded by multiplane GCaMP6 imaging across the brain. From this dataset, we computationally identified activity patterns associated with distinct components of the elicited behavior and characterized the contributions of individual neurons. Using photoactivatable GFP (paGFP), we extended our protocol to visualize single functionally identified neurons and reconstruct their morphologies. Together, this toolkit enables linking behavior to circuit activity with unprecedented resolution.

  20. Exploitation of complex network topology for link prediction in biological interactomes

    KAUST Repository

    Alanis Lobato, Gregorio

    2014-06-01

    The network representation of the interactions between proteins and genes allows for a holistic perspective of the complex machinery underlying the living cell. However, the large number of interacting entities within the cell makes network construction a daunting and arduous task, prone to errors and missing information. Fortunately, the structure of biological networks is not different from that of other complex systems, such as social networks, the world-wide web or power grids, for which growth models have been proposed to better understand their structure and function. This means that we can design tools based on these models in order to exploit the topology of biological interactomes with the aim of constructing more complete and reliable maps of the cell. In this work, we propose three novel and powerful approaches for the prediction of interactions in biological networks and conclude that it is possible to mine the topology of these complex system representations and produce reliable and biologically meaningful information that enriches the datasets to which we have access today.
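
    The thesis's own predictors are not reproduced here, but the general idea of ranking candidate interactions by local topology can be sketched with standard scores shipped in networkx; the toy scale-free graph below merely stands in for an interactome.

```python
import networkx as nx

def rank_candidate_links(g, top=10):
    """Rank non-adjacent node pairs by the resource-allocation index, a common
    topology-based link-prediction score."""
    scored = nx.resource_allocation_index(g, nx.non_edges(g))
    return sorted(scored, key=lambda t: t[2], reverse=True)[:top]

ppi_like = nx.barabasi_albert_graph(300, 2, seed=7)   # stand-in interactome
for u, v, score in rank_candidate_links(ppi_like):
    print(u, v, round(score, 3))
```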

  1. Social traits, social networks and evolutionary biology.

    Science.gov (United States)

    Fisher, D N; McAdam, A G

    2017-12-01

    effects) provides the potential to understand how entire networks of social interactions in populations influence phenotypes and predict how these traits may evolve. By theoretical integration of social network analysis and quantitative genetics, we hope to identify areas of compatibility and incompatibility and to direct research efforts towards the most promising areas. Continuing this synthesis could provide important insights into the evolution of traits expressed in a social context and the evolutionary consequences of complex and nuanced social phenotypes.

  2. Spatiotemporal network motif reveals the biological traits of developmental gene regulatory networks in Drosophila melanogaster

    Directory of Open Access Journals (Sweden)

    Kim Man-Sun

    2012-05-01

    Full Text Available Abstract Background Network motifs provided a “conceptual tool” for understanding the functional principles of biological networks, but such motifs have primarily been used to consider static network structures. Static networks, however, cannot be used to reveal time- and region-specific traits of biological systems. To overcome this limitation, we proposed the concept of a “spatiotemporal network motif,” a spatiotemporal sequence of network motifs of sub-networks which are active only at specific time points and body parts. Results On the basis of this concept, we analyzed the developmental gene regulatory network of the Drosophila melanogaster embryo. We identified spatiotemporal network motifs and investigated their distribution pattern in time and space. As a result, we found how key developmental processes are temporally and spatially regulated by the gene network. In particular, we found that nested feedback loops appeared frequently throughout the entire developmental process. From mathematical simulations, we found that mutual inhibition in the nested feedback loops contributes to the formation of spatial expression patterns. Conclusions Taken together, the proposed concept and the simulations can be used to unravel the design principle of developmental gene regulatory networks.

  3. PyPathway: Python Package for Biological Network Analysis and Visualization.

    Science.gov (United States)

    Xu, Yang; Luo, Xiao-Chun

    2018-05-01

    Life science studies represent one of the biggest generators of large data sets, mainly because of rapid advances in sequencing technology. Biological networks, including interaction networks and human-curated pathways, are essential for understanding these high-throughput data sets. Biological network analysis offers a method to explore systematically not only the molecular complexity of a particular disease but also the molecular relationships among apparently distinct phenotypes. Several packages have been developed for the Python community, such as BioPython and Goatools; however, tools to perform comprehensive network analysis and visualization are still needed. Here, we have developed PyPathway, an extensible, free and open source Python package for functional enrichment analysis, network modeling, and network visualization. The network process module supports various interaction network and pathway databases such as Reactome, WikiPathway, STRING, and BioGRID. The network analysis module implements overrepresentation analysis, gene set enrichment analysis, network-based enrichment, and de novo network modeling. Finally, the visualization and data publishing modules enable users to share their analyses by using an easy web application. For package availability, see the first Reference.
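
    PyPathway's own API is not reproduced here; as a hedged, generic illustration of the overrepresentation analysis that such packages implement, the hypergeometric test can be written directly with scipy.

```python
from scipy.stats import hypergeom

def overrepresentation_p(query_genes, pathway_genes, background_genes):
    """P-value that the overlap between a query gene set and a pathway is at
    least as large as observed, under hypergeometric sampling from the
    background."""
    query, pathway, background = map(set, (query_genes, pathway_genes,
                                           background_genes))
    k = len(query & pathway & background)   # observed overlap
    M = len(background)                     # background size
    n = len(pathway & background)           # pathway genes in the background
    N = len(query & background)             # query genes in the background
    return hypergeom.sf(k - 1, M, n, N)

print(overrepresentation_p({"A", "B", "C"}, {"B", "C", "D", "E"},
                           {chr(c) for c in range(65, 91)}))   # toy gene sets
```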

  4. Hardware implementation of stochastic spiking neural networks.

    Science.gov (United States)

    Rosselló, Josep L; Canals, Vincent; Morro, Antoni; Oliver, Antoni

    2012-08-01

    Spiking Neural Networks, the latest generation of Artificial Neural Networks, are characterized by their bio-inspired nature and by a higher computational capacity with respect to other neural models. In real biological neurons, stochastic processes represent an important mechanism of neural behavior and are responsible for their special arithmetic capabilities. In this work we present a simple hardware implementation of spiking neurons that takes this probabilistic nature into account. The advantage of the proposed implementation is that it is fully digital and therefore can be massively implemented in Field Programmable Gate Arrays. The high computational capabilities of the proposed model are demonstrated by the study of both feed-forward and recurrent networks that are able to implement high-speed signal filtering and to solve complex systems of linear equations.

  5. Revisiting the variation of clustering coefficient of biological networks suggests new modular structure

    Directory of Open Access Journals (Sweden)

    Hao Dapeng

    2012-05-01

    Full Text Available Abstract Background A central idea in biology is the hierarchical organization of cellular processes. A commonly used method to identify the hierarchical modular organization of a network relies on detecting a global signature known as variation of clustering coefficient (so-called modularity scaling. Although several studies have suggested other possible origins of this signature, it is still widely used nowadays to identify hierarchical modularity, especially in the analysis of biological networks. Therefore, a further and systematic investigation of this signature for different types of biological networks is necessary. Results We analyzed a variety of biological networks and found that the commonly used signature of hierarchical modularity is actually the reflection of spoke-like topology, suggesting a different view of network architecture. We proved that the existence of super-hubs is the reason that the clustering coefficient of a node follows a particular scaling law with degree k in metabolic networks. To study the modularity of biological networks, we systematically investigated the relationship between repulsion of hubs and variation of clustering coefficient. We provided direct evidence that repulsion between hubs is the underlying origin of the variation of clustering coefficient, and found that for biological networks having no anti-correlation between hubs, such as gene co-expression networks, the clustering coefficient does not depend on degree. Conclusions Here we have shown that the variation of clustering coefficient is neither sufficient nor exclusive for a network to be hierarchical. Our results suggest the existence of spoke-like modules as opposed to the “deterministic model” of hierarchical modularity, and suggest the need to reconsider the organizational principle of biological hierarchy.

  6. Spiking computation and stochastic amplification in a neuron-like semiconductor microstructure

    International Nuclear Information System (INIS)

    Samardak, A. S.; Nogaret, A.; Janson, N. B.; Balanov, A.; Farrer, I.; Ritchie, D. A.

    2011-01-01

    We have demonstrated the proof of principle of a semiconductor neuron, which has dendrites, an axon, and a soma and computes information encoded in electrical pulses in the same way as biological neurons. Electrical impulses applied to the dendrites diffuse along microwires to the soma. The soma is the active part of the neuron, which regenerates input pulses above a voltage threshold and transmits them into the axon. Our concept of the neuron is a major step forward because its spatial structure controls the timing of the pulses that arrive at the soma. The dendrites and axon act as transmission delay lines, which modify the information coded in the timing of pulses. Finally, we have shown that noise enhances the detection sensitivity of the neuron by helping the transmission of weak periodic signals. A maximum enhancement of signal transmission was observed at an optimum noise level, known as stochastic resonance. The experimental results are in excellent agreement with simulations of the FitzHugh-Nagumo model. Our neuron is therefore extremely well suited to providing feedback on the various mathematical approximations of neurons and to building functional networks.

  7. Advanced models of neural networks nonlinear dynamics and stochasticity in biological neurons

    CERN Document Server

    Rigatos, Gerasimos G

    2015-01-01

    This book provides a complete study on neural structures exhibiting nonlinear and stochastic dynamics, elaborating on neural dynamics by introducing advanced models of neural networks. It overviews the main findings in the modelling of neural dynamics in terms of electrical circuits and examines their stability properties with the use of dynamical systems theory. It is suitable for researchers and postgraduate students engaged with neural networks and dynamical systems theory.

  8. Spatiotemporal dynamics on small-world neuronal networks: The roles of two types of time-delayed coupling

    Energy Technology Data Exchange (ETDEWEB)

    Wu Hao; Jiang Huijun [Hefei National Laboratory for Physical Sciences at the Microscale and Department of Chemical Physics, University of Science and Technology of China, Hefei, Anhui 230026 (China); Hou Zhonghuai, E-mail: hzhlj@ustc.edu.cn [Hefei National Laboratory for Physical Sciences at the Microscale and Department of Chemical Physics, University of Science and Technology of China, Hefei, Anhui 230026 (China)

    2011-10-15

    Highlights: → We compare neuronal dynamics in dependence on two types of delayed coupling. → Distinct results induced by different delayed coupling can be achieved. → Time delays in type 1 coupling can induce a most spatiotemporal ordered state. → For type 2 coupling, the systems exhibit synchronization transitions with delay. - Abstract: We investigate temporal coherence and spatial synchronization on small-world networks consisting of noisy Terman-Wang (TW) excitable neurons in dependence on two types of time-delayed coupling: {x_j(t - τ) - x_i(t)} and {x_j(t - τ) - x_i(t - τ)}. For the former case, we show that time delay in the coupling can dramatically enhance temporal coherence and spatial synchrony of the noise-induced spike trains. In addition, if the delay time τ is tuned to nearly match the intrinsic spike period of the neuronal network, the system dynamics reaches a most ordered state, which is both periodic in time and nearly synchronized in space, demonstrating an interesting resonance phenomenon with delay. For the latter case, however, we cannot achieve a similar spatiotemporal ordered state, but the neuronal dynamics exhibits interesting synchronization transitions with time delay from zigzag fronts of excitations to dynamic clustering anti-phase synchronization (APS), and further to clustered chimera states which have spatially distributed anti-phase coherence separated by incoherence. Furthermore, we also show how these findings are influenced by the change of the noise intensity and the rewiring probability of the small-world networks. Finally, qualitative analysis is given to illustrate the numerical results.

  9. Spatiotemporal dynamics on small-world neuronal networks: The roles of two types of time-delayed coupling

    International Nuclear Information System (INIS)

    Wu Hao; Jiang Huijun; Hou Zhonghuai

    2011-01-01

    Highlights: → We compare neuronal dynamics in dependence on two types of delayed coupling. → Distinct results induced by different delayed coupling can be achieved. → Time delays in type 1 coupling can induce a most spatiotemporal ordered state. → For type 2 coupling, the systems exhibit synchronization transitions with delay. - Abstract: We investigate temporal coherence and spatial synchronization on small-world networks consisting of noisy Terman-Wang (TW) excitable neurons in dependence on two types of time-delayed coupling: {x_j(t - τ) - x_i(t)} and {x_j(t - τ) - x_i(t - τ)}. For the former case, we show that time delay in the coupling can dramatically enhance temporal coherence and spatial synchrony of the noise-induced spike trains. In addition, if the delay time τ is tuned to nearly match the intrinsic spike period of the neuronal network, the system dynamics reaches a most ordered state, which is both periodic in time and nearly synchronized in space, demonstrating an interesting resonance phenomenon with delay. For the latter case, however, we cannot achieve a similar spatiotemporal ordered state, but the neuronal dynamics exhibits interesting synchronization transitions with time delay from zigzag fronts of excitations to dynamic clustering anti-phase synchronization (APS), and further to clustered chimera states which have spatially distributed anti-phase coherence separated by incoherence. Furthermore, we also show how these findings are influenced by the change of the noise intensity and the rewiring probability of the small-world networks. Finally, qualitative analysis is given to illustrate the numerical results.
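
    The two coupling schemes compared in these records differ only in which state of the receiving unit is subtracted. The fragment below, a sketch that assumes a fixed-step integrator storing the full state history (and that step >= delay_steps), isolates exactly that difference; the Terman-Wang neuron dynamics themselves are not implemented here.

```python
import numpy as np

def delayed_coupling(x_hist, i, step, delay_steps, A, g, coupling_type=1):
    """Coupling input to unit i at time index `step`, given the state history
    x_hist[time, unit] and adjacency matrix A.
    Type 1: g * sum_j A[i, j] * (x_j(t - tau) - x_i(t))
    Type 2: g * sum_j A[i, j] * (x_j(t - tau) - x_i(t - tau))"""
    past = x_hist[step - delay_steps]                    # x(t - tau), all units
    ref = x_hist[step, i] if coupling_type == 1 else past[i]
    return g * np.dot(A[i], past - ref)
```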

  10. Dynamic behaviors in directed networks

    International Nuclear Information System (INIS)

    Park, Sung Min; Kim, Beom Jun

    2006-01-01

    Motivated by the abundance of directed synaptic couplings in a real biological neuronal network, we investigate the synchronization behavior of the Hodgkin-Huxley model in a directed network. We start from the standard model of the Watts-Strogatz undirected network and then change undirected edges to directed arcs with a given probability, still preserving the connectivity of the network. A generalized clustering coefficient for directed networks is defined and used to investigate the interplay between the synchronization behavior and underlying structural properties of directed networks. We observe that the directedness of complex networks plays an important role in emerging dynamical behaviors, which is also confirmed by a numerical study of the sociological game-theoretic voter model on directed networks.

  11. Theoretical analysis of transcranial magneto-acoustical stimulation with Hodgkin–Huxley neuron model

    Directory of Open Access Journals (Sweden)

    Yi Yuan

    2016-04-01

    Full Text Available Transcranial magneto-acoustical stimulation (TMAS) is a novel stimulation technology in which an ultrasonic wave within a magnetostatic field generates an electric current in an area of interest in the brain to modulate neuronal activities. As key components of neural networks, neurons transmit information throughout the nervous system. However, the effect of TMAS on the neuronal firing rhythm remains unknown. To address this problem, we investigated the stimulatory mechanism of TMAS on neurons with a Hodgkin-Huxley neuron model. The simulation results indicate that the magnetostatic field intensity and ultrasonic power can affect the amplitude and interspike interval of the neuronal action potential under continuous-wave ultrasound. The simulation results also show that the ultrasonic power, duty cycle, and repetition frequency can alter the firing rhythm of the neural action potential under pulsed ultrasound. This study can help to reveal and explain the biological mechanism of TMAS and to provide a theoretical basis for TMAS in the treatment or rehabilitation of neuropsychiatric disorders.
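
    The single-neuron substrate of such an analysis is the standard Hodgkin-Huxley model, which can be simulated in a few lines; the sinusoidally modulated drive in the example below is only a crude, assumed stand-in for the ultrasound-plus-static-field current of the TMAS setting.

```python
import numpy as np

def hodgkin_huxley(i_ext, t_end=50.0, dt=0.01):
    """Standard Hodgkin-Huxley point neuron (squid-axon parameters), driven by
    an external current i_ext(t) in uA/cm^2; returns time (ms) and voltage (mV)."""
    g_na, g_k, g_l = 120.0, 36.0, 0.3
    e_na, e_k, e_l = 50.0, -77.0, -54.387
    a_m = lambda v: 0.1 * (v + 40) / (1 - np.exp(-(v + 40) / 10))
    b_m = lambda v: 4.0 * np.exp(-(v + 65) / 18)
    a_h = lambda v: 0.07 * np.exp(-(v + 65) / 20)
    b_h = lambda v: 1.0 / (1 + np.exp(-(v + 35) / 10))
    a_n = lambda v: 0.01 * (v + 55) / (1 - np.exp(-(v + 55) / 10))
    b_n = lambda v: 0.125 * np.exp(-(v + 65) / 80)
    t = np.arange(0.0, t_end, dt)
    v, m, h, n = -65.0, 0.05, 0.6, 0.32
    vs = np.empty_like(t)
    for k, tk in enumerate(t):
        i_ion = (g_na * m**3 * h * (v - e_na) + g_k * n**4 * (v - e_k)
                 + g_l * (v - e_l))
        v += dt * (i_ext(tk) - i_ion)            # membrane capacitance 1 uF/cm^2
        m += dt * (a_m(v) * (1 - m) - b_m(v) * m)
        h += dt * (a_h(v) * (1 - h) - b_h(v) * h)
        n += dt * (a_n(v) * (1 - n) - b_n(v) * n)
        vs[k] = v
    return t, vs

t, v = hodgkin_huxley(lambda tk: 10.0 * (1.0 + np.sin(2 * np.pi * 0.2 * tk)))
print(int(np.sum((v[1:] >= 0) & (v[:-1] < 0))), "spikes")
```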

  12. Self-Organization of Microcircuits in Networks of Spiking Neurons with Plastic Synapses.

    Directory of Open Access Journals (Sweden)

    Gabriel Koch Ocker

    2015-08-01

    Full Text Available The synaptic connectivity of cortical networks features an overrepresentation of certain wiring motifs compared to simple random-network models. This structure is shaped, in part, by synaptic plasticity that promotes or suppresses connections between neurons depending on their joint spiking activity. Frequently, theoretical studies focus on how feedforward inputs drive plasticity to create this network structure. We study the complementary scenario of self-organized structure in a recurrent network, with spike timing-dependent plasticity driven by spontaneous dynamics. We develop a self-consistent theory for the evolution of network structure by combining fast spiking covariance with a slow evolution of synaptic weights. Through a finite-size expansion of network dynamics we obtain a low-dimensional set of nonlinear differential equations for the evolution of two-synapse connectivity motifs. With this theory in hand, we explore how the form of the plasticity rule drives the evolution of microcircuits in cortical networks. When potentiation and depression are in approximate balance, synaptic dynamics depend on weighted divergent, convergent, and chain motifs. For additive, Hebbian STDP these motif interactions create instabilities in synaptic dynamics that either promote or suppress the initial network structure. Our work provides a consistent theoretical framework for studying how spiking activity in recurrent networks interacts with synaptic plasticity to determine network structure.

  13. Self-Organization of Microcircuits in Networks of Spiking Neurons with Plastic Synapses.

    Science.gov (United States)

    Ocker, Gabriel Koch; Litwin-Kumar, Ashok; Doiron, Brent

    2015-08-01

    The synaptic connectivity of cortical networks features an overrepresentation of certain wiring motifs compared to simple random-network models. This structure is shaped, in part, by synaptic plasticity that promotes or suppresses connections between neurons depending on their joint spiking activity. Frequently, theoretical studies focus on how feedforward inputs drive plasticity to create this network structure. We study the complementary scenario of self-organized structure in a recurrent network, with spike timing-dependent plasticity driven by spontaneous dynamics. We develop a self-consistent theory for the evolution of network structure by combining fast spiking covariance with a slow evolution of synaptic weights. Through a finite-size expansion of network dynamics we obtain a low-dimensional set of nonlinear differential equations for the evolution of two-synapse connectivity motifs. With this theory in hand, we explore how the form of the plasticity rule drives the evolution of microcircuits in cortical networks. When potentiation and depression are in approximate balance, synaptic dynamics depend on weighted divergent, convergent, and chain motifs. For additive, Hebbian STDP these motif interactions create instabilities in synaptic dynamics that either promote or suppress the initial network structure. Our work provides a consistent theoretical framework for studying how spiking activity in recurrent networks interacts with synaptic plasticity to determine network structure.

  14. Epigenetics and Why Biological Networks are More Controllable than Expected

    Science.gov (United States)

    Motter, Adilson

    2013-03-01

    A fundamental property of networks is that perturbations to one node can affect other nodes, potentially causing the entire system to change behavior or fail. In this talk, I will show that it is possible to exploit this same principle to control network behavior. This approach takes advantage of the nonlinear dynamics inherent to real networks, and allows bringing the system to a desired target state even when this state is not directly accessible or the linear counterpart is not controllable. Applications show that this framework permits both reprogramming a network to perform a desired task and rescuing networks from the brink of failure, which I will illustrate through various biological problems. I will also briefly review the progress our group has made over the past 5 years on related control of complex networks in non-biological domains.

  15. Sequentially switching cell assemblies in random inhibitory networks of spiking neurons in the striatum.

    Science.gov (United States)

    Ponzi, Adam; Wickens, Jeff

    2010-04-28

    The striatum is composed of GABAergic medium spiny neurons with inhibitory collaterals forming a sparse random asymmetric network and receiving an excitatory glutamatergic cortical projection. Because the inhibitory collaterals are sparse and weak, their role in striatal network dynamics is puzzling. However, here we show by simulation of a striatal inhibitory network model composed of spiking neurons that cells form assemblies that fire in sequential coherent episodes and display complex identity-temporal spiking patterns even when cortical excitation is simply constant or fluctuating noisily. Strongly correlated large-scale firing rate fluctuations on slow behaviorally relevant timescales of hundreds of milliseconds are shown by members of the same assembly whereas members of different assemblies show strong negative correlation, and we show how randomly connected spiking networks can generate this activity. Cells display highly irregular spiking with high coefficients of variation, broadly distributed low firing rates, and interspike interval distributions that are consistent with exponentially tailed power laws. Although firing rates vary coherently on slow timescales, precise spiking synchronization is absent in general. Our model only requires the minimal but striatally realistic assumptions of sparse to intermediate random connectivity, weak inhibitory synapses, and sufficient cortical excitation so that some cells are depolarized above the firing threshold during up states. Our results are in good qualitative agreement with experimental studies, consistent with recently determined striatal anatomy and physiology, and support a new view of endogenously generated metastable state switching dynamics of the striatal network underlying its information processing operations.

  16. Analysis and logical modeling of biological signaling transduction networks

    Science.gov (United States)

    Sun, Zhongyao

    The study of network theory and its applications spans a multitude of seemingly disparate fields of science and technology: computer science, biology, social science, linguistics, etc. It is the intrinsic similarities embedded in the entities of these systems, and the way they interact with one another, that link them together. In this dissertation, I present, from both the aspect of theoretical analysis and the aspect of application, three projects that primarily focus on signal transduction networks in biology. In these projects, I assembled a network model by extensively perusing the literature, performed model-based simulations and validation, analyzed network topology, and proposed a novel network measure. The application of network modeling to the system of stomatal opening in plants revealed a fundamental question about the process that has been left unanswered for decades. The novel measure of the redundancy of signal transduction networks with Boolean dynamics, obtained by calculating the maximum node-independent elementary signaling mode set, accurately predicts the effect of single-node knockout in such signaling processes. The three projects as an organic whole advance the understanding of real systems as well as the behavior of such network models, giving me an opportunity to take a glimpse at the dazzling facets of the immense world of network science.

  17. Real-time computing platform for spiking neurons (RT-spike).

    Science.gov (United States)

    Ros, Eduardo; Ortigosa, Eva M; Agís, Rodrigo; Carrillo, Richard; Arnold, Michael

    2006-07-01

    A computing platform is described for simulating arbitrary networks of spiking neurons in real time. A hybrid computing scheme is adopted that uses both software and hardware components to manage the tradeoff between flexibility and computational power; the neuron model is implemented in hardware and the network model and the learning are implemented in software. The incremental transition of the software components into hardware is supported. We focus on a spike response model (SRM) for a neuron where the synapses are modeled as input-driven conductances. The temporal dynamics of the synaptic integration process are modeled with a synaptic time constant that results in a gradual injection of charge. This type of model is computationally expensive and is not easily amenable to existing software-based event-driven approaches. As an alternative we have designed an efficient time-based computing architecture in hardware, where the different stages of the neuron model are processed in parallel. Further improvements occur by computing multiple neurons in parallel using multiple processing units. This design is tested using reconfigurable hardware and its scalability and performance evaluated. Our overall goal is to investigate biologically realistic models for the real-time control of robots operating within closed action-perception loops, and so we evaluate the performance of the system on simulating a model of the cerebellum where the emulation of the temporal dynamics of the synaptic integration process is important.
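
    The computationally expensive part singled out above is the gradual, conductance-mediated injection of charge. A minimal software sketch of that synaptic integration scheme, with illustrative parameter values rather than those of the platform, looks like this:

```python
import numpy as np

def conductance_lif(input_spike_times, t_end=100.0, dt=0.1, tau_m=20.0,
                    tau_syn=5.0, e_syn=0.0, v_rest=-70.0, v_th=-54.0, w=0.1):
    """Leaky unit whose input spikes increment an exponentially decaying
    synaptic conductance, so charge is injected gradually rather than as an
    instantaneous jump (times in ms, voltages in mV)."""
    input_steps = {int(t / dt) for t in input_spike_times}
    v, g_syn, spikes = v_rest, 0.0, []
    for i in range(int(t_end / dt)):
        if i in input_steps:
            g_syn += w                               # input-driven conductance
        g_syn -= dt * g_syn / tau_syn                # exponential decay
        v += dt * (-(v - v_rest) + g_syn * (e_syn - v)) / tau_m
        if v >= v_th:
            spikes.append(i * dt)
            v = v_rest
    return spikes

print(conductance_lif(np.arange(0.0, 100.0, 1.0)))   # a dense input train
```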

  18. Relationship between neuronal network architecture and naming performance in temporal lobe epilepsy: A connectome based approach using machine learning.

    Science.gov (United States)

    Munsell, B C; Wu, G; Fridriksson, J; Thayer, K; Mofrad, N; Desisto, N; Shen, D; Bonilha, L

    2017-09-09

    Impaired confrontation naming is a common symptom of temporal lobe epilepsy (TLE). The neurobiological mechanisms underlying this impairment are poorly understood but may indicate a structural disorganization of broadly distributed neuronal networks that support naming ability. Importantly, naming is frequently impaired in other neurological disorders, and by contrasting the neuronal structures supporting naming in TLE with other diseases, it will become possible to elucidate the common systems supporting naming. We aimed to evaluate the neuronal networks that support naming in TLE by using a machine learning algorithm intended to predict naming performance in subjects with medication-refractory TLE using only the structural brain connectome reconstructed from diffusion tensor imaging. A connectome-based prediction framework was developed using network properties from anatomically defined brain regions across the entire brain, which were used in a multi-task machine learning algorithm followed by support vector regression. Nodal eigenvector centrality, a measure of regional network integration, predicted approximately 60% of the variance in naming. The nodes with the highest regression weight were bilaterally distributed among perilimbic sub-networks involving mainly the medial and lateral temporal lobe regions. In the context of emerging evidence regarding the role of large structural networks that support language processing, our results suggest intact naming relies on the integration of sub-networks, as opposed to being dependent on isolated brain areas. In the case of TLE, these sub-networks may be disproportionately indicative of naming processes that depend on semantic integration from memory and lexical retrieval, as opposed to multi-modal perception or motor speech production.
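
    The pipeline lends itself to a compact sketch. In the hedged example below, random symmetric matrices stand in for the diffusion-MRI connectomes and random numbers for the naming scores, eigenvector centrality is computed per region, and a linear support vector regressor is cross-validated; the multi-task feature-selection stage of the original framework is omitted, and near-zero or negative scores are expected on this placeholder data.

```python
import numpy as np
import networkx as nx
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

def eigenvector_centrality_features(adj):
    """Nodal eigenvector centrality of a weighted, symmetric connectome."""
    g = nx.from_numpy_array(adj)
    c = nx.eigenvector_centrality_numpy(g, weight="weight")
    return np.array([c[i] for i in range(adj.shape[0])])

rng = np.random.default_rng(0)
n_subjects, n_regions = 40, 90
connectomes = [(m + m.T) / 2 for m in rng.random((n_subjects, n_regions, n_regions))]
X = np.stack([eigenvector_centrality_features(a) for a in connectomes])
y = rng.normal(size=n_subjects)                      # placeholder naming scores
print(cross_val_score(SVR(kernel="linear"), X, y, cv=5, scoring="r2"))
```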

  19. Two classes of bipartite networks: nested biological and social systems.

    Science.gov (United States)

    Burgos, Enrique; Ceva, Horacio; Hernández, Laura; Perazzo, R P J; Devoto, Mariano; Medan, Diego

    2008-10-01

    Bipartite graphs have received some attention in the study of social networks and of biological mutualistic systems. A generalization of a previous model is presented that evolves the topology of the graph in order to optimally account for a given contact preference rule between the two guilds of the network. As a result, social and biological graphs are classified as belonging to two clearly different classes. Projected graphs, linking the agents of only one guild, are obtained from the original bipartite graph. The corresponding evolution of their statistical properties is also studied. An example of a biological mutualistic network is analyzed in detail, and it is found that the model provides a very good fit to all the main statistical features. The model also provides a proper qualitative description of the same features observed in social webs, suggesting the possible reasons underlying the difference in the organization of these two kinds of bipartite networks.

  20. Networks of VTA Neurons Encode Real-Time Information about Uncertain Numbers of Actions Executed to Earn a Reward

    Directory of Open Access Journals (Sweden)

    Jesse Wood

    2017-08-01

    Full Text Available Multiple and unpredictable numbers of actions are often required to achieve a goal. In order to organize behavior and allocate effort so that optimal behavioral policies can be selected, it is necessary to continually monitor ongoing actions. Real-time processing of information related to actions and outcomes is typically assigned to the prefrontal cortex and basal ganglia, but also depends on midbrain regions, especially the ventral tegmental area (VTA). We were interested in how individual VTA neurons, as well as networks within the VTA, encode salient events when an unpredictable number of serial actions are required to obtain a reward. We recorded from ensembles of putative dopamine and non-dopamine neurons in the VTA as animals performed multiple cued trials in a recording session where, in each trial, serial actions were randomly rewarded. While averaging population activity did not reveal a response pattern, we observed that different neurons were selectively tuned to low, medium, or high numbered actions in a trial. This preferential tuning of putative dopamine and non-dopamine VTA neurons to different subsets of actions in a trial allowed information about binned action number to be decoded from the ensemble activity. At the network level, tuning curve similarity was positively associated with action-evoked noise correlations, suggesting that action number selectivity reflects functional connectivity within these networks. Analysis of phasic responses to cue and reward revealed that the requirement to execute multiple and uncertain numbers of actions weakens both cue-evoked responses and cue-reward response correlation. The functional connectivity and ensemble coding scheme that we observe here may allow VTA neurons to cooperatively provide a real-time account of ongoing behavior. These computations may be critical to cognitive and motivational functions that have long been associated with VTA dopamine neurons.

  1. Bayesian Network Webserver: a comprehensive tool for biological network modeling.

    Science.gov (United States)

    Ziebarth, Jesse D; Bhattacharya, Anindya; Cui, Yan

    2013-11-01

    The Bayesian Network Webserver (BNW) is a platform for comprehensive network modeling of systems genetics and other biological datasets. It allows users to quickly and seamlessly upload a dataset, learn the structure of the network model that best explains the data, and use the model to understand relationships between network variables. Many datasets, including those used to create genetic network models, contain both discrete (e.g. genotype) and continuous (e.g. gene expression traits) variables, and BNW allows for modeling hybrid datasets. Users of BNW can incorporate prior knowledge during structure learning through an easy-to-use structural constraint interface. After structure learning, users are immediately presented with an interactive network model, which can be used to make testable hypotheses about network relationships. BNW, including a downloadable structure learning package, is available at http://compbio.uthsc.edu/BNW. (The BNW interface for adding structural constraints uses HTML5 features that are not supported by current versions of Internet Explorer. We suggest using other browsers (e.g. Google Chrome or Mozilla Firefox) when accessing BNW). ycui2@uthsc.edu. Supplementary data are available at Bioinformatics online.

  2. Nonlinear signaling on biological networks: The role of stochasticity and spectral clustering

    Science.gov (United States)

    Hernandez-Hernandez, Gonzalo; Myers, Jesse; Alvarez-Lacalle, Enrique; Shiferaw, Yohannes

    2017-03-01

    Signal transduction within biological cells is governed by networks of interacting proteins. Communication between these proteins is mediated by signaling molecules which bind to receptors and induce stochastic transitions between different conformational states. Signaling is typically a cooperative process which requires the occurrence of multiple binding events, so that reaction rates have a nonlinear dependence on the amount of signaling molecule. It is this nonlinearity that endows biological signaling networks with robust switchlike properties which are critical to their biological function. In this study we investigate how the properties of these signaling systems depend on the network architecture. Our main result is that these nonlinear networks exhibit bistability, where the network activity can switch between states that correspond to low and high activity levels. We show that this bistable regime emerges at a critical coupling strength that is determined by the spectral structure of the network. In particular, the set of nodes that correspond to large components of the leading eigenvector of the adjacency matrix determines the onset of bistability. Above this transition the eigenvectors of the adjacency matrix determine a hierarchy of clusters, defined by its spectral properties, which are activated sequentially with increasing network activity. We argue further that the onset of bistability occurs either continuously or discontinuously depending upon whether the leading eigenvector is localized or delocalized. Finally, we show that at low network coupling stochastic transitions to the active branch are also driven by the set of nodes that contribute more strongly to the leading eigenvector. However, at high coupling, transitions are insensitive to network structure since the network can be activated by stochastic transitions of a few nodes. Thus this work identifies important features of biological signaling networks that may underlie their biological function.
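
    The spectral quantities referenced above are easy to compute directly; the sketch below (with a random symmetric matrix as a stand-in network) returns the leading eigenvalue, an inverse-participation-ratio measure of how localized the leading eigenvector is, and the nodes with its largest components.

```python
import numpy as np

def leading_eigvec_profile(adj, top=5):
    """Leading eigenvalue/eigenvector summary of a symmetric adjacency matrix:
    (largest eigenvalue, inverse participation ratio, indices of the `top`
    largest eigenvector components)."""
    vals, vecs = np.linalg.eigh(adj)
    v = np.abs(vecs[:, -1])
    v /= np.linalg.norm(v)
    ipr = float(np.sum(v**4))        # ~1/N if delocalized, ~1 if localized
    return vals[-1], ipr, np.argsort(v)[::-1][:top]

rng = np.random.default_rng(1)
adj = rng.random((50, 50))
adj = (adj + adj.T) / 2.0
np.fill_diagonal(adj, 0.0)
print(leading_eigvec_profile(adj))
```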

  3. Universal Connection through Art: Role of Mirror Neurons in Art Production and Reception.

    Science.gov (United States)

    Piechowski-Jozwiak, Bartlomiej; Boller, François; Bogousslavsky, Julien

    2017-05-05

    Art is defined as the expression or application of human creative skill and imagination, producing works to be appreciated primarily for their aesthetic value or emotional power. This definition encompasses two very important elements, the creation and the reception of art, and by doing so it establishes a link, a dialogue, between the artist and the spectator. From the evolutionary biological perspective, activities need to have an immediate or remote effect on the population through improving survival, gene selection, and environmental adjustment, and this includes art. It may serve as a universal means of communication bypassing temporal, cultural, ethnic, and social differences. The neurological mechanisms of both art production and appreciation are researched by neuroscientists and discussed in terms of both healthy brain biology and complex neuronal networking perspectives. In this paper, we describe folk art and the issue of symbolic archetypes in psychoanalytic thought, as well as offer neuronal mechanisms for art by emphasizing mirror neurons and the role they play in it.

  4. Stochastic multiresonance in coupled excitable FHN neurons

    Science.gov (United States)

    Li, Huiyan; Sun, Xiaojuan; Xiao, Jinghua

    2018-04-01

    In this paper, the effects of noise on Watts-Strogatz small-world neuronal networks stimulated by a subthreshold signal have been investigated. With numerical simulations, it is, surprisingly, found that there exist several optimal noise intensities at which the subthreshold signal can be detected efficiently. This indicates the occurrence of stochastic multiresonance in the studied neuronal networks. Moreover, it is revealed that the occurrence of stochastic multiresonance is closely related to the period Te of the subthreshold signal and the noise-induced mean period T0 of the neuronal networks. In detail, we find that noise can induce the neuronal networks to generate stochastic resonance M times if Te is not very large and falls into the interval (M × T0, (M + 1) × T0), with M being a positive integer. In real neuronal systems, subthreshold signal detection is very meaningful. Thus, the results obtained in this paper could have important implications for detecting subthreshold signals and propagating neuronal information in neuronal systems.
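
    A minimal simulation of the setup described above might look as follows: FitzHugh-Nagumo (FHN) neurons diffusively coupled on a Watts-Strogatz graph, driven by a subthreshold sinusoid plus noise and integrated with the Euler-Maruyama scheme. All parameter values are illustrative assumptions, not the ones used in the study.

```python
import numpy as np
import networkx as nx

# Illustrative parameter choices, not those of the paper.
N, k, p = 100, 4, 0.1            # Watts-Strogatz network size, degree, rewiring probability
eps, a, b = 0.08, 0.7, 0.8       # FHN time-scale and recovery parameters (excitable regime)
g, D = 0.05, 0.02                # coupling strength and noise intensity
A_sig, Te = 0.05, 60.0           # subthreshold signal amplitude and period
dt, T = 0.05, 2000.0

G = nx.watts_strogatz_graph(N, k, p, seed=0)
L = nx.laplacian_matrix(G).toarray()          # diffusive coupling enters as -g * L @ v

rng = np.random.default_rng(0)
v = rng.uniform(-1.5, -0.5, N)
w = np.zeros(N)
crossings = 0

for step in range(int(T / dt)):
    t = step * dt
    I = A_sig * np.sin(2.0 * np.pi * t / Te)  # subthreshold periodic drive
    dv = v - v**3 / 3.0 - w + I - g * (L @ v)
    dw = eps * (v + a - b * w)
    v_new = v + dt * dv + np.sqrt(dt) * D * rng.standard_normal(N)
    w += dt * dw
    crossings += np.count_nonzero((v < 1.0) & (v_new >= 1.0))  # crude spike count
    v = v_new

print("upward threshold crossings per neuron:", crossings / N)
```

    Sweeping the noise intensity D and measuring how well the firing locks to the drive period Te would, under these assumptions, trace out the multiple resonance peaks the abstract describes.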

  5. Overexpression of cypin alters dendrite morphology, single neuron activity, and network properties via distinct mechanisms

    Science.gov (United States)

    Rodríguez, Ana R.; O'Neill, Kate M.; Swiatkowski, Przemyslaw; Patel, Mihir V.; Firestein, Bonnie L.

    2018-02-01

    Objective. This study investigates the effect that overexpression of cytosolic PSD-95 interactor (cypin), a regulator of synaptic PSD-95 protein localization and a core regulator of dendrite branching, exerts on the electrical activity of rat hippocampal neurons and networks. Approach. We cultured rat hippocampal neurons and used lipid-mediated transfection and lentiviral gene transfer to achieve high levels of cypin or cypin mutant (cypinΔPDZ, which cannot bind PSD-95) expression cellularly and network-wide, respectively. Main results. Our analysis revealed that although overexpression of cypin and cypinΔPDZ increases dendrite numbers and decreases spine density, cypin and cypinΔPDZ distinctly regulate neuronal activity. At the single-cell level, cypin promotes decreases in bursting activity while cypinΔPDZ reduces sEPSC frequency and further decreases bursting compared to cypin. At the network level, using the Fano factor as a measure of spike count variability, cypin overexpression results in an increase in the variability of spike counts, and this effect is abolished when cypin cannot bind PSD-95. This variability is also dependent on baseline activity levels and on the mean spike rate over time. Finally, our spike sorting data show that overexpression of cypin results in a more complex distribution of spike waveforms and that binding to PSD-95 is essential for this complexity. Significance. Our data suggest that dendrite morphology does not play a major role in cypin action on electrical activity.

  6. Stability switches, Hopf bifurcation and chaos of a neuron model with delay-dependent parameters

    International Nuclear Information System (INIS)

    Xu, X.; Hu, H.Y.; Wang, H.L.

    2006-01-01

    It is very common for neural network systems to involve time delays, since the transmission of information between neurons is not instantaneous. Because the memory intensity of a biological neuron usually depends on time history, some of the parameters may be delay dependent. Yet, little attention has been paid to the dynamics of such systems. In this Letter, a detailed analysis of the stability switches, Hopf bifurcation and chaos of a neuron model with delay-dependent parameters is given. Moreover, the direction and the stability of the bifurcating periodic solutions are obtained by the normal form theory and the center manifold theorem. It is shown that the dynamics of the neuron model with delay-dependent parameters is quite different from that of systems with delay-independent parameters only

  7. Inference of topology and the nature of synapses, and the flow of information in neuronal networks

    Science.gov (United States)

    Borges, F. S.; Lameu, E. L.; Iarosz, K. C.; Protachevicz, P. R.; Caldas, I. L.; Viana, R. L.; Macau, E. E. N.; Batista, A. M.; Baptista, M. S.

    2018-02-01

    The characterization of neuronal connectivity is one of the most important matters in neuroscience. In this work, we show that a recently proposed informational quantity, the causal mutual information, employed with an appropriate methodology, can be used not only to correctly infer the direction of the underlying physical synapses, but also to identify their excitatory or inhibitory nature, considering bivariate time series that are easy to handle and measure. The success of our approach relies on a surprising property found in neuronal networks by which nonadjacent neurons do "understand" each other (positive mutual information), even though this exchange of information is not capable of causing an effect (zero transfer entropy). Remarkably, inhibitory connections, responsible for enhancing synchronization, transfer more information than excitatory connections, known to enhance entropy in the network. We also demonstrate that our methodology can be used to correctly infer the directionality of synapses even in the presence of dynamic and observational Gaussian noise, and is also successful in providing the effective directionality of intermodular connectivity, when only mean fields can be measured.
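
    For readers unfamiliar with the informational quantities involved, the sketch below implements plug-in estimators of mutual information and one-step transfer entropy for binned binary spike trains. This is a generic textbook construction, not the causal-mutual-information methodology of the paper, and the toy driven pair is hypothetical.

```python
import numpy as np

def entropy(counts):
    """Shannon entropy (bits) from a table of counts."""
    p = counts[counts > 0] / counts.sum()
    return -np.sum(p * np.log2(p))

def mutual_information(x, y):
    """Plug-in mutual information between two binary (0/1) time series at zero lag."""
    joint = np.zeros((2, 2))
    for xi, yi in zip(x, y):
        joint[xi, yi] += 1
    return entropy(joint.sum(1)) + entropy(joint.sum(0)) - entropy(joint.ravel())

def transfer_entropy(x, y):
    """Plug-in transfer entropy x -> y with one-step histories: I(y_{t+1}; x_t | y_t)."""
    joint = np.zeros((2, 2, 2))                       # axes: y_next, y_prev, x_prev
    for yn, yp, xp in zip(y[1:], y[:-1], x[:-1]):
        joint[yn, yp, xp] += 1
    # TE = H(y_next, y_prev) + H(y_prev, x_prev) - H(y_prev) - H(y_next, y_prev, x_prev)
    return (entropy(joint.sum(2).ravel()) + entropy(joint.sum(0).ravel())
            - entropy(joint.sum((0, 2))) - entropy(joint.ravel()))

# Toy pair: y is partly driven by x with a one-bin delay (a hypothetical "synapse").
rng = np.random.default_rng(0)
x = (rng.random(20000) < 0.2).astype(int)
y = np.roll(x, 1) & (rng.random(20000) < 0.8).astype(int)

print("TE(x -> y) =", transfer_entropy(x, y))         # clearly positive: x drives y
print("TE(y -> x) =", transfer_entropy(y, x))         # near zero: no influence in this direction
print("MI(x, y)   =", mutual_information(x, y))       # zero-lag MI of the two trains
```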

  8. Mechanisms and neuronal networks involved in reactive and proactive cognitive control of interference in working memory.

    Science.gov (United States)

    Irlbacher, Kerstin; Kraft, Antje; Kehrer, Stefanie; Brandt, Stephan A

    2014-10-01

    Cognitive control can be reactive or proactive in nature. Reactive control mechanisms, which support the resolution of interference, start after its onset. Conversely, proactive control involves the anticipation and prevention of interference prior to its occurrence. The interrelation of both types of cognitive control is currently under debate: Are they mediated by different neuronal networks? Or are there neuronal structures that have the potential to act in a proactive as well as in a reactive manner? This review illustrates the way in which integrating knowledge gathered from behavioral studies, functional imaging, and human electroencephalography proves useful in answering these questions. We focus on studies that investigate interference resolution at the level of working memory representations. In summary, different mechanisms are instrumental in supporting reactive and proactive control. Distinct neuronal networks are involved, though some brain regions, especially pre-SMA, possess functions that are relevant to both control modes. Therefore, activation of these brain areas could be observed in reactive, as well as proactive control, but at different times during information processing. Copyright © 2014 Elsevier Ltd. All rights reserved.

  9. Sequentially firing neurons confer flexible timing in neural pattern generators

    International Nuclear Information System (INIS)

    Urban, Alexander; Ermentrout, Bard

    2011-01-01

    Neuronal networks exhibit a variety of complex spatiotemporal patterns that include sequential activity, synchrony, and wavelike dynamics. Inhibition is the primary means through which such patterns are implemented. This behavior is dependent on both the intrinsic dynamics of the individual neurons as well as the connectivity patterns. Many neural circuits consist of networks of smaller subcircuits (motifs) that are coupled together to form the larger system. In this paper, we consider a particularly simple motif, comprising purely inhibitory interactions, which generates sequential periodic dynamics. We first describe the dynamics of the single motif both for general balanced coupling (all cells receive the same number and strength of inputs) and then for a specific class of balanced networks: circulant systems. We couple these motifs together to form larger networks. We use the theory of weak coupling to derive phase models which, themselves, have a certain structure and symmetry. We show that this structure endows the coupled system with the ability to produce arbitrary timing relationships between symmetrically coupled motifs and that the phase relationships are robust over a wide range of frequencies. The theory is applicable to many other systems in biology and physics.

  10. Spatially structured oscillations in a two-dimensional excitatory neuronal network with synaptic depression

    KAUST Repository

    Kilpatrick, Zachary P.

    2009-10-29

    We study the spatiotemporal dynamics of a two-dimensional excitatory neuronal network with synaptic depression. Coupling between populations of neurons is taken to be nonlocal, while depression is taken to be local and presynaptic. We show that the network supports a wide range of spatially structured oscillations, which are suggestive of phenomena seen in cortical slice experiments and in vivo. The particular form of the oscillations depends on initial conditions and the level of background noise. Given an initial, spatially localized stimulus, activity evolves to a spatially localized oscillating core that periodically emits target waves. Low levels of noise can spontaneously generate several pockets of oscillatory activity that interact via their target patterns. Periodic activity in space can also organize into spiral waves, provided that there is some source of rotational symmetry breaking due to external stimuli or noise. In the high gain limit, no oscillatory behavior exists, but a transient stimulus can lead to a single, outward propagating target wave. © Springer Science + Business Media, LLC 2009.

  12. Fast grid layout algorithm for biological networks with sweep calculation.

    Science.gov (United States)

    Kojima, Kaname; Nagasaki, Masao; Miyano, Satoru

    2008-06-15

    Properly drawn biological networks are of great help in the comprehension of their characteristics. The quality of the layouts of retrieved biological networks is critical for pathway databases. However, since it is unrealistic to manually draw biological networks for every retrieval, automatic drawing algorithms are essential. Grid layout algorithms handle various biological properties, such as aligning vertices having the same attributes and complicated positional constraints according to their subcellular localizations; thus, they succeed in providing biologically comprehensible layouts. However, existing grid layout algorithms are not suitable for real-time drawing, which is one of the requisites for applications to pathway databases, due to their high computational cost. In addition, they do not consider edge directions, and their resulting layouts lack traceability for biochemical reactions and gene regulations, which are the most important features in biological networks. We devise a new calculation method termed sweep calculation and reduce the time complexity of the current grid layout algorithms through its encoding and decoding processes. We conduct practical experiments by using 95 pathway models of various sizes from TRANSPATH and show that our new grid layout algorithm is much faster than existing grid layout algorithms. For the cost function, we introduce a new component that penalizes undesirable edge directions to avoid the lack of traceability in pathways due to the differences in direction between in-edges and out-edges of each vertex. Java implementations of our layout algorithms are available in Cell Illustrator. masao@ims.u-tokyo.ac.jp Supplementary data are available at Bioinformatics online.

  13. GraphAlignment: Bayesian pairwise alignment of biological networks

    Czech Academy of Sciences Publication Activity Database

    Kolář, Michal; Meier, J.; Mustonen, V.; Lässig, M.; Berg, J.

    2012-01-01

    Vol. 6, November 21 (2012). ISSN 1752-0509. Grants: Deutsche Forschungsgemeinschaft (DE) SFB 680; Deutsche Forschungsgemeinschaft (DE) SFB-TR12; Deutsche Forschungsgemeinschaft (DE) BE 2478/2-1. Institutional research plan: CEZ:AV0Z50520514. Keywords: Graph alignment; Biological networks; Parameter estimation; Bioconductor. Subject RIV: EB - Genetics; Molecular Biology. Impact factor: 2.982, year: 2012

  14. Stochastic Variational Learning in Recurrent Spiking Networks

    Directory of Open Access Journals (Sweden)

    Danilo eJimenez Rezende

    2014-04-01

    Full Text Available The ability to learn and perform statistical inference with biologically plausible recurrent networks of spiking neurons is an important step towards understanding perception and reasoning. Here we derive and investigate a new learning rule for recurrent spiking networks with hidden neurons, combining principles from variational learning and reinforcement learning. Our network defines a generative model over spike train histories and the derived learning rule has the form of a local Spike Timing Dependent Plasticity rule modulated by global factors (neuromodulators) conveying information about "novelty" on a statistically rigorous ground. Simulations show that our model is able to learn both stationary and non-stationary patterns of spike trains. We also propose one experiment that could potentially be performed with animals in order to test the dynamics of the predicted novelty signal.

  15. Stochastic variational learning in recurrent spiking networks.

    Science.gov (United States)

    Jimenez Rezende, Danilo; Gerstner, Wulfram

    2014-01-01

    The ability to learn and perform statistical inference with biologically plausible recurrent networks of spiking neurons is an important step toward understanding perception and reasoning. Here we derive and investigate a new learning rule for recurrent spiking networks with hidden neurons, combining principles from variational learning and reinforcement learning. Our network defines a generative model over spike train histories and the derived learning rule has the form of a local Spike Timing Dependent Plasticity rule modulated by global factors (neuromodulators) conveying information about "novelty" on a statistically rigorous ground. Simulations show that our model is able to learn both stationary and non-stationary patterns of spike trains. We also propose one experiment that could potentially be performed with animals in order to test the dynamics of the predicted novelty signal.

  16. Gradient Learning in Spiking Neural Networks by Dynamic Perturbation of Conductances

    International Nuclear Information System (INIS)

    Fiete, Ila R.; Seung, H. Sebastian

    2006-01-01

    We present a method of estimating the gradient of an objective function with respect to the synaptic weights of a spiking neural network. The method works by measuring the fluctuations in the objective function in response to dynamic perturbation of the membrane conductances of the neurons. It is compatible with recurrent networks of conductance-based model neurons with dynamic synapses. The method can be interpreted as a biologically plausible synaptic learning rule, if the dynamic perturbations are generated by a special class of 'empiric' synapses driven by random spike trains from an external source
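
    The underlying idea, correlating random perturbations with the resulting fluctuation of the objective, can be sketched on a static toy problem as follows. Perturbing plain weights of a linear readout (rather than membrane conductances of spiking neurons) and the chosen constants are simplifications assumed here, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy objective: squared error of a linear readout, standing in for the network's objective.
w_true = np.array([1.0, -2.0, 0.5])
X = rng.standard_normal((200, 3))
y = X @ w_true

def objective(w):
    return np.mean((X @ w - y) ** 2)

w = np.zeros(3)
sigma, lr = 1e-3, 0.05
for _ in range(2000):
    xi = sigma * rng.standard_normal(3)             # small random perturbation
    delta = objective(w + xi) - objective(w)        # fluctuation of the objective
    grad_est = delta * xi / sigma**2                # correlating the two estimates the gradient
    w -= lr * grad_est

print("learned weights:", w, " (target:", w_true, ")")
```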

  17. Criticality in Neuronal Networks

    Science.gov (United States)

    Friedman, Nir; Ito, Shinya; Brinkman, Braden A. W.; Shimono, Masanori; Deville, R. E. Lee; Beggs, John M.; Dahmen, Karin A.; Butler, Tom C.

    2012-02-01

    In recent years, experiments detecting the electrical firing patterns in slices of in vitro brain tissue have been analyzed to suggest the presence of scale invariance and possibly criticality in the brain. Much of the work done however has been limited in two ways: 1) the data collected is from local field potentials that do not represent the firing of individual neurons; 2) the analysis has been primarily limited to histograms. In our work we examine data based on the firing of individual neurons (spike data), and greatly extend the analysis by considering shape collapse and exponents. Our results strongly suggest that the brain operates near a tuned critical point of a highly distinctive universality class.

  18. A Model to Explain the Emergence of Reward Expectancy neurons using Reinforcement Learning and Neural Network

    OpenAIRE

    Shinya, Ishii; Munetaka, Shidara; Katsunari, Shibata

    2006-01-01

    In an experiment with a multi-trial task to obtain a reward, reward expectancy neurons, which responded only in the non-reward trials that are necessary to advance toward the reward, have been observed in the anterior cingulate cortex of monkeys. In this paper, to explain the emergence of the reward expectancy neuron in terms of reinforcement learning theory, a model that consists of a recurrent neural network trained based on reinforcement learning is proposed. The analysis of the hi...

  19. A spiking neuron circuit based on a carbon nanotube transistor

    International Nuclear Information System (INIS)

    Chen, C-L; Kim, K; Truong, Q; Shen, A; Li, Z; Chen, Y

    2012-01-01

    A spiking neuron circuit based on a carbon nanotube (CNT) transistor is presented in this paper. The spiking neuron circuit has a crossbar architecture in which the transistor gates are connected to its row electrodes and the transistor sources are connected to its column electrodes. An electrochemical cell is incorporated in the gate of the transistor by sandwiching a hydrogen-doped poly(ethylene glycol)methyl ether (PEG) electrolyte between the CNT channel and the top gate electrode. An input spike applied to the gate triggers a dynamic drift of the hydrogen ions in the PEG electrolyte, resulting in a post-synaptic current (PSC) through the CNT channel. Spikes input into the rows trigger PSCs through multiple CNT transistors, and PSCs cumulate in the columns and integrate into a ‘soma’ circuit to trigger output spikes based on an integrate-and-fire mechanism. The spiking neuron circuit can potentially emulate biological neuron networks and their intelligent functions. (paper)

  20. Neuronal survival in the brain: neuron type-specific mechanisms

    DEFF Research Database (Denmark)

    Pfisterer, Ulrich Gottfried; Khodosevich, Konstantin

    2017-01-01

    Neurogenic regions of the mammalian brain produce many more neurons than will eventually survive and reach a mature stage. Developmental cell death affects both embryonically produced immature neurons and those immature neurons that are generated in regions of adult neurogenesis. Removal of substantial... numbers of neurons that are not yet completely integrated into the local circuits helps to ensure that maturation and homeostatic function of neuronal networks in the brain proceed correctly. External signals from the brain microenvironment together with intrinsic signaling pathways determine whether... for survival in a certain brain region. This review focuses on how immature neurons survive during normal and impaired brain development, both in the embryonic/neonatal brain and in brain regions associated with adult neurogenesis, and emphasizes neuron type-specific mechanisms that help them survive for various...

  1. Atypical cross talk between mentalizing and mirror neuron networks in autism spectrum disorder.

    Science.gov (United States)

    Fishman, Inna; Keown, Christopher L; Lincoln, Alan J; Pineda, Jaime A; Müller, Ralph-Axel

    2014-07-01

    Converging evidence indicates that brain abnormalities in autism spectrum disorder (ASD) involve atypical network connectivity, but it is unclear whether altered connectivity is especially prominent in brain networks that participate in social cognition. To investigate whether adolescents with ASD show altered functional connectivity in 2 brain networks putatively impaired in ASD and involved in social processing, theory of mind (ToM) and mirror neuron system (MNS). Cross-sectional study using resting-state functional magnetic resonance imaging involving 25 adolescents with ASD between the ages of 11 and 18 years and 25 typically developing adolescents matched for age, handedness, and nonverbal IQ. Statistical parametric maps testing the degree of whole-brain functional connectivity and social functioning measures. Relative to typically developing controls, participants with ASD showed a mixed pattern of both over- and underconnectivity in the ToM network, which was associated with greater social impairment. Increased connectivity in the ASD group was detected primarily between the regions of the MNS and ToM, and was correlated with sociocommunicative measures, suggesting that excessive ToM-MNS cross talk might be associated with social impairment. In a secondary analysis comparing a subset of the 15 participants with ASD with the most severe symptomology and a tightly matched subset of 15 typically developing controls, participants with ASD showed exclusive overconnectivity effects in both ToM and MNS networks, which were also associated with greater social dysfunction. Adolescents with ASD showed atypically increased functional connectivity involving the mentalizing and mirror neuron systems, largely reflecting greater cross talk between the 2. This finding is consistent with emerging evidence of reduced network segregation in ASD and challenges the prevailing theory of general long-distance underconnectivity in ASD. This excess ToM-MNS connectivity may reflect

  2. Innate Synchronous Oscillations in Freely-Organized Small Neuronal Circuits

    Science.gov (United States)

    Shein Idelson, Mark; Ben-Jacob, Eshel; Hanein, Yael

    2010-01-01

    Background Information processing in neuronal networks relies on the network's ability to generate temporal patterns of action potentials. Although the nature of neuronal network activity has been intensively investigated in the past several decades at the individual neuron level, the underlying principles of the collective network activity, such as the synchronization and coordination between neurons, are largely unknown. Here we focus on isolated neuronal clusters in culture and address the following simple, yet fundamental questions: What is the minimal number of cells needed to exhibit collective dynamics? What are the internal temporal characteristics of such dynamics and how do the temporal features of network activity alternate upon crossover from minimal networks to large networks? Methodology/Principal Findings We used network engineering techniques to induce self-organization of cultured networks into neuronal clusters of different sizes. We found that small clusters made of as few as 40 cells already exhibit spontaneous collective events characterized by innate synchronous network oscillations in the range of 25 to 100 Hz. The oscillation frequency of each network appeared to be independent of cluster size. The duration and rate of the network events scale with cluster size but converge to that of large uniform networks. Finally, the investigation of two coupled clusters revealed clear activity propagation with master/slave asymmetry. Conclusions/Significance The nature of the activity patterns observed in small networks, namely the consistent emergence of similar activity across networks of different size and morphology, suggests that neuronal clusters self-regulate their activity to sustain network bursts with internal oscillatory features. We therefore suggest that clusters of as few as tens of cells can serve as a minimal but sufficient functional network, capable of sustaining oscillatory activity. Interestingly, the frequencies of these

  3. Engineering connectivity by multiscale micropatterning of individual populations of neurons.

    Science.gov (United States)

    Albers, Jonas; Toma, Koji; Offenhäusser, Andreas

    2015-02-01

    Functional networks are the basis of information processing in the central nervous system. Essential for their formation are guided neuronal growth as well as controlled connectivity and information flow. The basis of neuronal development is generated by guiding cues and geometric constraints. To investigate the neuronal growth and connectivity of adjacent neuronal networks, two-dimensional protein patterns were created. A mixture of poly-L-lysine and laminin was transferred onto a silanized glass surface by microcontact printing. The structures were populated with dissociated primary cortical embryonic rat neurons. Triangular structures with diverse opening angles, height, and design were chosen as two-dimensional structures to allow network formation with constricted gateways. Neuronal development was observed by immunohistochemistry to pursue the influence of the chosen structures on the neuronal outgrowth. Neurons were stained for MAP2, while poly-L-lysine was FITC labeled. With this study we present an easy-to-use technique to engineer two-dimensional networks in vitro with defined gateways. The presented micropatterning method is used to generate daisy-chained neuronal networks with predefined connectivity. Signal propagation among geometrically constrained networks can easily be monitored by calcium-sensitive dyes, providing insights into network communication in vitro. Copyright © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  4. CytoCluster: A Cytoscape Plugin for Cluster Analysis and Visualization of Biological Networks.

    Science.gov (United States)

    Li, Min; Li, Dongyan; Tang, Yu; Wu, Fangxiang; Wang, Jianxin

    2017-08-31

    Nowadays, cluster analysis of biological networks has become one of the most important approaches to identifying functional modules as well as predicting protein complexes and network biomarkers. Furthermore, the visualization of clustering results is crucial to display the structure of biological networks. Here we present CytoCluster, a Cytoscape plugin integrating six clustering algorithms, namely HC-PIN (Hierarchical Clustering algorithm in Protein Interaction Networks), OH-PIN (identifying Overlapping and Hierarchical modules in Protein Interaction Networks), IPCA (Identifying Protein Complex Algorithm), ClusterONE (Clustering with Overlapping Neighborhood Expansion), DCU (Detecting Complexes based on Uncertain graph model) and IPC-MCE (Identifying Protein Complexes based on Maximal Complex Extension), together with the BinGO (Biological networks Gene Ontology) function. Users can select different clustering algorithms according to their requirements. The main function of these six clustering algorithms is to detect protein complexes or functional modules. In addition, BinGO is used to determine which Gene Ontology (GO) categories are statistically overrepresented in a set of genes or a subgraph of a biological network. CytoCluster can be easily expanded, so that more clustering algorithms and functions can be added to this plugin. Since it was created in July 2013, CytoCluster has been downloaded more than 9700 times in the Cytoscape App store and has already been applied to the analysis of different biological networks. CytoCluster is available from http://apps.cytoscape.org/apps/cytocluster.

  5. Supramolecular assembly of biological molecules purified from bovine nerve cells: from microtubule bundles and necklaces to neurofilament networks

    International Nuclear Information System (INIS)

    Needleman, Daniel J; Jones, Jayna B; Raviv, Uri; Ojeda-Lopez, Miguel A; Miller, H P; Li, Y; Wilson, L; Safinya, C R

    2005-01-01

    With the completion of the human genome project, the biosciences community is beginning the daunting task of understanding the structures and functions of a large number of interacting biological macromolecules. Examples include the interacting molecules involved in the process of DNA condensation during the cell cycle, and in the formation of bundles and networks of filamentous actin proteins in cell attachment, motility and cytokinesis. In this proceedings paper we present examples of supramolecular assembly based on proteins derived from the vertebrate nerve cell cytoskeleton. The axonal cytoskeleton in vertebrate neurons provides a rich example of bundles and networks of neurofilaments, microtubules (MTs) and filamentous actin, where the nature of the interactions, structures, and structure-function correlations remains poorly understood. We describe synchrotron x-ray diffraction, electron microscopy, and optical imaging data, in reconstituted protein systems purified from bovine central nervous system, which reveal unexpected structures not predicted by current electrostatic theories of polyelectrolyte bundling, including three-dimensional MT bundles and two-dimensional MT necklaces

  6. Local and global control of ecological and biological networks

    OpenAIRE

    Alessandro Ferrarini

    2014-01-01

    Recently, I introduced a methodological framework so that ecological and biological networks can be controlled both from inside and outside by coupling network dynamics and evolutionary modelling. The endogenous control requires the network to be optimized at the beginning of its dynamics (by acting upon nodes, edges or both) so that it will then go inertially to the desired state. Instead, the exogenous control requires that exogenous controllers act upon the network at each time step. By th...

  7. Beyond Critical Exponents in Neuronal Avalanches

    Science.gov (United States)

    Friedman, Nir; Butler, Tom; Deville, Robert; Beggs, John; Dahmen, Karin

    2011-03-01

    Neurons form a complex network in the brain, where they interact with one another by firing electrical signals. Neurons firing can trigger other neurons to fire, potentially causing avalanches of activity in the network. In many cases these avalanches have been found to be scale independent, similar to critical phenomena in diverse systems such as magnets and earthquakes. We discuss models for neuronal activity that allow for the extraction of testable, statistical predictions. We compare these models to experimental results, and go beyond critical exponents.

  8. Neuronal synchrony: peculiarity and generality.

    Science.gov (United States)

    Nowotny, Thomas; Huerta, Ramon; Rabinovich, Mikhail I

    2008-09-01

    Synchronization in neuronal systems is a new and intriguing application of dynamical systems theory. Why are neuronal systems different as a subject for synchronization? (1) Neurons in themselves are multidimensional nonlinear systems that are able to exhibit a wide variety of different activity patterns. Their "dynamical repertoire" includes regular or chaotic spiking, regular or chaotic bursting, multistability, and complex transient regimes. (2) Usually, neuronal oscillations are the result of the cooperative activity of many synaptically connected neurons (a neuronal circuit). Thus, it is necessary to consider synchronization between different neuronal circuits as well. (3) The synapses that implement the coupling between neurons are also dynamical elements and their intrinsic dynamics influences the process of synchronization or entrainment significantly. In this review we will focus on four new problems: (i) the synchronization in minimal neuronal networks with plastic synapses (synchronization with activity dependent coupling), (ii) synchronization of bursts that are generated by a group of nonsymmetrically coupled inhibitory neurons (heteroclinic synchronization), (iii) the coordination of activities of two coupled neuronal networks (partial synchronization of small composite structures), and (iv) coarse grained synchronization in larger systems (synchronization on a mesoscopic scale). (c) 2008 American Institute of Physics.

  9. Biologically Inspired Target Recognition in Radar Sensor Networks

    Directory of Open Access Journals (Sweden)

    Liang Qilian

    2010-01-01

    Full Text Available One of the great mysteries of the brain is cognitive control. How can the interactions between millions of neurons result in behavior that is coordinated and appears willful and voluntary? There is consensus that it depends on the prefrontal cortex (PFC). Many PFC areas receive converging inputs from at least two sensory modalities. Inspired by humans' innate ability to process and integrate information from disparate, network-based sources, we apply human-inspired information integration mechanisms to target detection in cognitive radar sensor networks. Humans' information integration mechanisms have been modelled using maximum-likelihood estimation (MLE) or soft-max approaches. In this paper, we apply these two algorithms to target detection in cognitive radar sensor networks. The discrete cosine transform (DCT) is used to process the integrated data from MLE or soft-max. We apply a fuzzy logic system (FLS) to automatic target detection based on the AC power values from the DCT. Simulation results show that our MLE-DCT-FLS and soft-max-DCT-FLS approaches perform very well in radar sensor network target detection, whereas the existing 2D construction algorithm does not work in this study.

  10. Nonlinear Bayesian filtering and learning: a neuronal dynamics for perception.

    Science.gov (United States)

    Kutschireiter, Anna; Surace, Simone Carlo; Sprekeler, Henning; Pfister, Jean-Pascal

    2017-08-18

    The robust estimation of dynamical hidden features, such as the position of prey, based on sensory inputs is one of the hallmarks of perception. This dynamical estimation can be rigorously formulated by nonlinear Bayesian filtering theory. Recent experimental and behavioral studies have shown that animals' performance in many tasks is consistent with such a Bayesian statistical interpretation. However, it is presently unclear how a nonlinear Bayesian filter can be efficiently implemented in a network of neurons that satisfies some minimum constraints of biological plausibility. Here, we propose the Neural Particle Filter (NPF), a sampling-based nonlinear Bayesian filter, which does not rely on importance weights. We show that this filter can be interpreted as the neuronal dynamics of a recurrently connected rate-based neural network receiving feed-forward input from sensory neurons. Further, it captures properties of temporal and multi-sensory integration that are crucial for perception, and it allows for online parameter learning with a maximum likelihood approach. The NPF holds the promise to avoid the 'curse of dimensionality', and we demonstrate numerically its capability to outperform weighted particle filters in higher dimensions and when the number of particles is limited.
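
    To convey the flavor of a weightless, sampling-based filter, the sketch below propagates equally weighted particles whose drift is corrected by the innovation through a fixed gain K. The real NPF adapts this gain and embeds the dynamics in a rate-based network, so the constant-gain scalar example here is only a caricature under assumed model parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
dt, T, n_particles = 0.01, 2000, 200
K = 1.5                                   # assumed constant gain (the actual NPF adapts it online)

def f(x):                                 # drift of a bistable hidden process (illustrative)
    return -x + 2.0 * np.tanh(x)

# Simulate a hidden trajectory and noisy observations y = x + noise.
x_true, y_obs = np.zeros(T), np.zeros(T)
for t in range(1, T):
    x_true[t] = x_true[t-1] + dt * f(x_true[t-1]) + np.sqrt(dt) * 0.5 * rng.standard_normal()
    y_obs[t] = x_true[t] + 0.3 * rng.standard_normal()

# Weightless particle dynamics: prior drift + diffusion + gain times innovation.
particles = rng.standard_normal(n_particles)
estimate = np.zeros(T)
for t in range(T):
    innovation = y_obs[t] - particles
    particles += dt * (f(particles) + K * innovation) \
                 + np.sqrt(dt) * 0.5 * rng.standard_normal(n_particles)
    estimate[t] = particles.mean()        # posterior mean: average of equally weighted samples

print("RMSE of the filter estimate:", np.sqrt(np.mean((estimate - x_true) ** 2)))
```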

  11. Mean-field analysis of orientation selectivity in inhibition-dominated networks of spiking neurons.

    Science.gov (United States)

    Sadeh, Sadra; Cardanobile, Stefano; Rotter, Stefan

    2014-01-01

    Mechanisms underlying the emergence of orientation selectivity in the primary visual cortex are highly debated. Here we study the contribution of inhibition-dominated random recurrent networks to orientation selectivity, and more generally to sensory processing. By simulating and analyzing large-scale networks of spiking neurons, we investigate tuning amplification and contrast invariance of orientation selectivity in these networks. In particular, we show how selective attenuation of the common mode and amplification of the modulation component take place in these networks. Selective attenuation of the baseline, which is governed by the exceptional eigenvalue of the connectivity matrix, removes the unspecific, redundant signal component and ensures the invariance of selectivity across different contrasts. Selective amplification of modulation, which is governed by the operating regime of the network and depends on the strength of coupling, amplifies the informative signal component and thus increases the signal-to-noise ratio. Here, we perform a mean-field analysis which accounts for this process.

  12. Large-scale modelling of neuronal systems

    International Nuclear Information System (INIS)

    Castellani, G.; Verondini, E.; Giampieri, E.; Bersani, F.; Remondini, D.; Milanesi, L.; Zironi, I.

    2009-01-01

    The brain is, without any doubt, the most complex system of the human body. Its complexity is also due to the extremely high number of neurons, as well as the huge number of synapses connecting them. Each neuron is capable of performing complex tasks, like learning and memorizing a large class of patterns. The simulation of large neuronal systems is challenging for both technological and computational reasons, and can open new perspectives for the comprehension of brain functioning. A well-known and widely accepted model of bidirectional synaptic plasticity, the BCM model, is stated as a differential equation approach based on bistability and selectivity properties. We have modified the BCM model, extending it from a single-neuron to a whole-network model. This new model is capable of generating interesting network topologies starting from a small number of local parameters describing the interaction between incoming and outgoing links from each neuron. We have characterized this model in terms of complex network theory, showing how this learning rule can support network generation.
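
    For reference, a minimal single-neuron version of the BCM rule with a sliding modification threshold is sketched below; the input patterns, constants and clipping are illustrative assumptions, and the whole-network extension described in the abstract is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two noisy input patterns; BCM selectivity means the neuron ends up preferring one of them.
patterns = np.array([[1.0, 0.1], [0.1, 1.0]])

w = np.full(2, 0.5)                  # synaptic weights
theta = 0.1                          # sliding modification threshold
eta, tau_theta, dt = 0.01, 50.0, 1.0

for step in range(20000):
    x = patterns[rng.integers(2)] + 0.05 * rng.standard_normal(2)
    y = float(w @ x)                                  # postsynaptic activity (linear unit)
    w += dt * eta * y * (y - theta) * x               # BCM weight update
    theta += dt * (y**2 - theta) / tau_theta          # threshold tracks a running average of y^2
    w = np.clip(w, 0.0, None)                         # keep weights non-negative

print("final weights:", w, " threshold:", theta)
```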

  13. Network-state modulation of power-law frequency-scaling in visual cortical neurons.

    Science.gov (United States)

    El Boustani, Sami; Marre, Olivier; Béhuret, Sébastien; Baudot, Pierre; Yger, Pierre; Bal, Thierry; Destexhe, Alain; Frégnac, Yves

    2009-09-01

    Various types of neural-based signals, such as EEG, local field potentials and intracellular synaptic potentials, integrate multiple sources of activity distributed across large assemblies. They have in common a power-law frequency-scaling structure at high frequencies, but it is still unclear whether this scaling property is dominated by intrinsic neuronal properties or by network activity. The latter case is particularly interesting because if frequency-scaling reflects the network state it could be used to characterize the functional impact of the connectivity. In intracellularly recorded neurons of cat primary visual cortex in vivo, the power spectral density of V(m) activity displays a power-law structure at high frequencies with a fractional scaling exponent. We show that this exponent is not constant, but depends on the visual statistics used to drive the network. To investigate the determinants of this frequency-scaling, we considered a generic recurrent model of cortex receiving a retinotopically organized external input. Similarly to the in vivo case, our in computo simulations show that the scaling exponent reflects the correlation level imposed in the input. This systematic dependence was also replicated at the single cell level, by controlling independently, in a parametric way, the strength and the temporal decay of the pairwise correlation between presynaptic inputs. This last model was implemented in vitro by imposing the correlation control in artificial presynaptic spike trains through dynamic-clamp techniques. These in vitro manipulations induced a modulation of the scaling exponent, similar to that observed in vivo and predicted in computo. We conclude that the frequency-scaling exponent of the V(m) reflects stimulus-driven correlations in the cortical network activity. Therefore, we propose that the scaling exponent could be used to read-out the "effective" connectivity responsible for the dynamical signature of the population signals measured
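
    In practice, estimating such a frequency-scaling exponent amounts to fitting a line to the high-frequency part of the log-log power spectrum. The sketch below does this for a synthetic Brownian-noise trace standing in for a Vm recording; the frequency band and signal are arbitrary assumptions.

```python
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(0)
fs, n = 1000.0, 2**16

# Synthetic stand-in for a Vm trace: Brownian noise, whose PSD scales as 1/f^2.
x = np.cumsum(rng.standard_normal(n))
x -= x.mean()

f, pxx = welch(x, fs=fs, nperseg=4096)

# Fit the scaling exponent over an (arbitrarily chosen) high-frequency band.
band = (f >= 75.0) & (f <= 200.0)
slope, _ = np.polyfit(np.log10(f[band]), np.log10(pxx[band]), 1)
print("estimated frequency-scaling exponent:", slope)    # should come out near -2 here
```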

  14. Growth dynamics explain the development of spatiotemporal burst activity of young cultured neuronal networks in detail.

    Directory of Open Access Journals (Sweden)

    Taras A Gritsun

    Full Text Available A typical property of isolated cultured neuronal networks of dissociated rat cortical cells is synchronized spiking, called bursting, starting about one week after plating, when the dissociated cells have sufficiently sent out their neurites and formed enough synaptic connections. This paper is the third in a series of three on simulation models of cultured networks. Our two previous studies [26], [27] have shown that random recurrent network activity models generate intra- and inter-bursting patterns similar to experimental data. The networks were noise- or pacemaker-driven and had Izhikevich neuronal elements with only short-term plastic (STP) synapses (so no long-term potentiation, LTP, or depression, LTD, was included). However, elevated pre-phases (burst leaders) and after-phases of burst main shapes, which usually arise during the development of the network, were not yet simulated in sufficient detail. This lack of detail may be due to the fact that the random models completely missed network topology and a growth model. Therefore, the present paper adds, for the first time, a growth model to the activity model, to give the network a time-dependent topology and to explain burst shapes in more detail, again without LTP or LTD mechanisms. The integrated growth-activity model yielded realistic bursting patterns. The automatic adjustment of various mutually interdependent network parameters is one of the major advantages of our current approach. Spatio-temporal bursting activity was validated against experiment. Depending on network size, wave reverberation mechanisms were seen along the network boundaries, which may explain the generation of phases of elevated firing before and after the main phase of the burst shape. In summary, the results show that adding topology and growth explains burst shapes in great detail and suggests that young networks still lack, or do not need, LTP or LTD mechanisms.
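
    The Izhikevich neuronal element mentioned above can be written in a few lines; this is the standard single-neuron form with regular-spiking parameters from Izhikevich (2003), without the STP synapses, noise or pacemaker drive, or growth model of the paper.

```python
# Standard Izhikevich neuron, regular-spiking parameters (Izhikevich, 2003).
a, b, c, d = 0.02, 0.2, -65.0, 8.0
T, I = 1000, 10.0                       # simulation length in ms, constant input (illustrative)

v, u = -65.0, b * -65.0
spike_times = []
for t in range(T):                      # 1 ms steps, with two half-ms substeps for v
    v += 0.5 * (0.04 * v**2 + 5.0 * v + 140.0 - u + I)
    v += 0.5 * (0.04 * v**2 + 5.0 * v + 140.0 - u + I)
    u += a * (b * v - u)
    if v >= 30.0:                       # spike: reset membrane and bump the recovery variable
        spike_times.append(t)
        v, u = c, u + d

print("firing rate (Hz):", 1000.0 * len(spike_times) / T)
```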

  15. A hierarchical model for structure learning based on the physiological characteristics of neurons

    Institute of Scientific and Technical Information of China (English)

    WEI Hui

    2007-01-01

    Almost all applications of Artificial Neural Networks (ANNs) depend mainly on their memory ability. The characteristics of typical ANN models are fixed connections, with evolved weights, globalized representations, and globalized optimizations, all based on a mathematical approach. This makes those models deficient in robustness, efficiency of learning, capacity, anti-jamming between training sets, and correlativity of samples, etc. In this paper, we attempt to address these problems by adopting the characteristics of biological neurons in morphology and signal processing. A hierarchical neural network was designed and realized to implement structure learning and representations based on connected structures. The basic characteristics of this model are localized and random connections, field limitations of neuron fan-in and fan-out, dynamic behavior of neurons, and samples represented through different sub-circuits of neurons specialized into different response patterns. At the end of this paper, some important aspects of error correction, capacity, learning efficiency, and soundness of structural representation are analyzed theoretically. This paper has demonstrated the feasibility and advantages of structure learning and representation. This model can serve as a fundamental element of cognitive systems such as perception and associative memory. Keywords: structure learning, representation, associative memory, computational neuroscience

  16. Quantifying the connectivity of scale-free and biological networks

    Energy Technology Data Exchange (ETDEWEB)

    Shiner, J.S. E-mail: shiner@alumni.duke.edu; Davison, Matt E-mail: mdavison@uwo.ca

    2004-07-01

    Scale-free and biological networks follow a power law distribution p_k ∝ k^(-α) for the probability that a node is connected to k other nodes; the corresponding ranges for α (biological: 1 < α < 2; scale-free: 2 < α ≤ 3) yield a diverging variance for the connectivity k and lack of predictability for the average connectivity. Predictability can be achieved with the Renyi, Tsallis and Landsberg-Vedral extended entropies and corresponding 'disorders' for correctly chosen values of the entropy index q. Escort distributions p_k ∝ k^(-αq) with q > 3/α also yield a nondiverging variance and predictability. It is argued that the Tsallis entropies may be the appropriate quantities for the study of scale-free and biological networks.
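
    The divergence of the connectivity variance and its taming by an escort distribution can be checked numerically on a truncated power law, as in the sketch below; the exponent, the cutoffs and the choice q = 1.5 are arbitrary illustrative values.

```python
import numpy as np

def variance(p, k):
    mean = np.sum(k * p)
    return np.sum(k**2 * p) - mean**2

alpha = 2.5                                   # scale-free regime: 2 < alpha <= 3
q = 1.5                                       # q > 3/alpha = 1.2, so the escort variance converges
for kmax in (10**3, 10**4, 10**5, 10**6):
    k = np.arange(1, kmax + 1, dtype=float)
    p = k**-alpha
    p /= p.sum()                              # power-law degree distribution p_k ~ k^-alpha
    esc = k**(-alpha * q)
    esc /= esc.sum()                          # escort distribution p_k ~ k^(-alpha q)
    print(f"kmax={kmax:>8}: plain variance {variance(p, k):12.1f}, "
          f"escort variance {variance(esc, k):8.3f}")
```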

  17. Task-dependent changes in cross-level coupling between single neurons and oscillatory activity in multiscale networks.

    Directory of Open Access Journals (Sweden)

    Ryan T Canolty

    Full Text Available Understanding the principles governing the dynamic coordination of functional brain networks remains an important unmet goal within neuroscience. How do distributed ensembles of neurons transiently coordinate their activity across a variety of spatial and temporal scales? While a complete mechanistic account of this process remains elusive, evidence suggests that neuronal oscillations may play a key role, with different rhythms influencing both local computation and long-range communication. To investigate this question, we recorded multiple single-unit and local field potential (LFP) activity from microelectrode arrays implanted bilaterally in macaque motor areas. Monkeys performed a delayed center-out reach task either manually using their natural arm (Manual Control, MC) or under direct neural control through a brain-machine interface (Brain Control, BC). In accord with prior work, we found that the spiking activity of individual neurons is coupled to multiple aspects of the ongoing motor beta rhythm (10-45 Hz) during both MC and BC, with neurons exhibiting a diversity of coupling preferences. However, here we show that for identified single neurons, this beta-to-rate mapping can change in a reversible and task-dependent way. For example, as beta power increases, a given neuron may increase spiking during MC but decrease spiking during BC, or exhibit a reversible shift in the preferred phase of firing. The within-task stability of coupling, combined with the reversible cross-task changes in coupling, suggests that task-dependent changes in the beta-to-rate mapping play a role in the transient functional reorganization of neural ensembles. We characterize the range of task-dependent changes in the mapping from beta amplitude, phase, and inter-hemispheric phase differences to the spike rates of an ensemble of simultaneously recorded neurons, and discuss the potential implications that dynamic remapping from oscillatory activity to

  18. Statistics of Visual Responses to Image Object Stimuli from Primate AIT Neurons to DNN Neurons.

    Science.gov (United States)

    Dong, Qiulei; Wang, Hong; Hu, Zhanyi

    2018-02-01

    Under the goal-driven paradigm, Yamins et al. (2014; Yamins & DiCarlo, 2016) have shown that by optimizing only the final eight-way categorization performance of a four-layer hierarchical network, not only can its top output layer quantitatively predict IT neuron responses but its penultimate layer can also automatically predict V4 neuron responses. Currently, deep neural networks (DNNs) in the field of computer vision have reached image object categorization performance comparable to that of human beings on ImageNet, a data set that contains 1.3 million training images of 1000 categories. We explore whether DNN neurons (units in DNNs) possess image object representational statistics similar to monkey IT neurons, particularly when the network becomes deeper and the number of image categories becomes larger, using VGG19, a typical and widely used 19-layer deep network in the computer vision field. Following Lehky, Kiani, Esteky, and Tanaka (2011, 2014), where the response statistics of 674 IT neurons to 806 image stimuli are analyzed using three measures (kurtosis, Pareto tail index, and intrinsic dimensionality), we investigate the following three issues in this letter using the same three measures: (1) the similarities and differences of the neural response statistics between VGG19 and primate IT cortex, (2) the variation trends of the response statistics of VGG19 neurons at different layers from low to high, and (3) the variation trends of the response statistics of VGG19 neurons when the numbers of stimuli and neurons increase. We find that the response statistics on both single-neuron selectivity and population sparseness of VGG19 neurons are fundamentally different from those of IT neurons in most cases; by increasing the number of neurons in different layers and the number of stimuli, the response statistics of neurons at different layers from low to high do not substantially change; and the estimated intrinsic dimensionality values at the low
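
    Two of the three response statistics mentioned, kurtosis-based single-neuron selectivity and population sparseness, can be computed from a neurons-by-stimuli response matrix as sketched below. The synthetic lognormal responses are a placeholder, and the Pareto tail index and intrinsic-dimensionality estimators used in the cited work are not reproduced.

```python
import numpy as np
from scipy.stats import kurtosis

rng = np.random.default_rng(0)
n_neurons, n_stimuli = 674, 806               # sizes matching the cited IT data set
responses = rng.lognormal(mean=0.0, sigma=1.0, size=(n_neurons, n_stimuli))

# Single-neuron selectivity: kurtosis of each neuron's responses across stimuli.
selectivity = kurtosis(responses, axis=1, fisher=False)

# Population sparseness per stimulus (Treves-Rolls style measure across neurons).
def treves_rolls(r):
    return (r.mean() ** 2) / (r ** 2).mean()

sparseness = np.apply_along_axis(treves_rolls, 0, responses)

print("median selectivity kurtosis:", np.median(selectivity))
print("median population sparseness:", np.median(sparseness))
```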

  19. Neuronal spike sorting based on radial basis function neural networks

    Directory of Open Access Journals (Sweden)

    Taghavi Kani M

    2011-02-01

    Full Text Available "nBackground: Studying the behavior of a society of neurons, extracting the communication mechanisms of brain with other tissues, finding treatment for some nervous system diseases and designing neuroprosthetic devices, require an algorithm to sort neuralspikes automatically. However, sorting neural spikes is a challenging task because of the low signal to noise ratio (SNR of the spikes. The main purpose of this study was to design an automatic algorithm for classifying neuronal spikes that are emitted from a specific region of the nervous system."n "nMethods: The spike sorting process usually consists of three stages: detection, feature extraction and sorting. We initially used signal statistics to detect neural spikes. Then, we chose a limited number of typical spikes as features and finally used them to train a radial basis function (RBF neural network to sort the spikes. In most spike sorting devices, these signals are not linearly discriminative. In order to solve this problem, the aforesaid RBF neural network was used."n "nResults: After the learning process, our proposed algorithm classified any arbitrary spike. The obtained results showed that even though the proposed Radial Basis Spike Sorter (RBSS reached to the same error as the previous methods, however, the computational costs were much lower compared to other algorithms. Moreover, the competitive points of the proposed algorithm were its good speed and low computational complexity."n "nConclusion: Regarding the results of this study, the proposed algorithm seems to serve the purpose of procedures that require real-time processing and spike sorting.

  20. Analysis and modeling of ensemble recordings from respiratory pre-motor neurons indicate changes in functional network architecture after acute hypoxia

    Directory of Open Access Journals (Sweden)

    Roberto F Galán

    2010-09-01

    Full Text Available We have combined neurophysiologic recording, statistical analysis, and computational modeling to investigate the dynamics of the respiratory network in the brainstem. Using a multielectrode array, we recorded ensembles of respiratory neurons in perfused in situ rat preparations that produce spontaneous breathing patterns, focusing on inspiratory pre-motor neurons. We compared firing rates and neuronal synchronization among these neurons before and after a brief hypoxic stimulus. We observed a significant decrease in the number of spikes after stimulation, in part due to a transient slowing of the respiratory pattern. However, the median interspike interval did not change, suggesting that the firing threshold of the neurons was not affected but rather the synaptic input was. A bootstrap analysis of synchrony between spike trains revealed that, both before and after brief hypoxia, up to 45% (but typically less than 5%) of coincident spikes across neuronal pairs was not explained by chance. Most likely, this synchrony resulted from common synaptic input to the pre-motor population, an example of stochastic synchronization. After brief hypoxia most pairs were less synchronized, although some were more, suggesting that the respiratory network was “rewired” transiently after the stimulus. To investigate this hypothesis, we created a simple computational model with feed-forward divergent connections along the inspiratory pathway. Assuming that (1) the number of divergent projections was not the same for all presynaptic cells, but rather spanned a wide range, and (2) the stimulus increased inhibition at the top of the network, this model reproduced the reduction in firing rate and bootstrap-corrected synchrony subsequent to hypoxic stimulation observed in our experimental data.
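
    The chance level of coincident spiking can be estimated with a surrogate (jitter) procedure of the kind such bootstrap analyses rely on; the sketch below illustrates the logic on a toy pair of binary spike trains. Bin size, jitter window and firing rates are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_bins = 20000                              # e.g. 1 ms bins over ~20 s of recording

# Toy pair of binary spike trains sharing a common drive, so some synchrony is genuine.
drive = rng.random(n_bins) < 0.01
a = drive | (rng.random(n_bins) < 0.02)
b = drive | (rng.random(n_bins) < 0.02)
observed = np.count_nonzero(a & b)

# Surrogates: jitter the spikes of train b within +/- 25 bins, which destroys fine-timescale
# synchrony while approximately preserving the firing rate.
n_surr, jitter = 1000, 25
spike_idx = np.flatnonzero(b)
coinc_surr = np.empty(n_surr)
for s in range(n_surr):
    shifted = np.clip(spike_idx + rng.integers(-jitter, jitter + 1, spike_idx.size), 0, n_bins - 1)
    b_surr = np.zeros(n_bins, dtype=bool)
    b_surr[shifted] = True
    coinc_surr[s] = np.count_nonzero(a & b_surr)

p_value = (np.count_nonzero(coinc_surr >= observed) + 1) / (n_surr + 1)
print(f"observed coincidences: {observed}, chance level: {coinc_surr.mean():.1f}, p = {p_value:.3f}")
```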